Service | Microsoft Docs article | Related commit history on GitHub | Change details |
---|---|---|---|
active-directory-b2c | Claim Resolver Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/claim-resolver-overview.md | The following table lists the claim resolvers with information about the OpenID Connect relying party application: | {OIDC:Resource} | The `resource` query string parameter. | N/A | | {OIDC:Scope} | The `scope` query string parameter. | openid | | {OIDC:Username}| The [resource owner password credentials flow](add-ropc-policy.md) user's username.| emily@contoso.com|+| {OIDC:IdToken} | The `id token` query string parameter. | N/A | Check out the [Live demo](https://github.com/azure-ad-b2c/unit-tests/tree/main/claims-resolver#openid-connect-relying-party-application) of the OpenID Connect claim resolvers. |
active-directory-b2c | Custom Policies Series Validate User Input | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/custom-policies-series-validate-user-input.md | Follow the steps in [Upload custom policy file](custom-policies-series-hello-wor ## Step 7 - Validate user input by using validation technical profiles -The validation techniques we've used in step 1, step 2 and step 3 aren't applicable for all scenarios. If your business rules are complex to be defined at claim declaration level, you can configure a [Validation Technical](validation-technical-profile.md), and then call it from a [Self-Asserted Technical Profile](self-asserted-technical-profile.md). +The validation techniques we've used in step 1, step 2 and step 3 aren't applicable for all scenarios. If your business rules are too complex to be defined at the claim declaration level, you can configure a [Validation Technical Profile](validation-technical-profile.md), and then call it from a [Self-Asserted Technical Profile](self-asserted-technical-profile.md). > [!NOTE] > Only self-asserted technical profiles can use validation technical profiles. Learn more about [validation technical profiles](validation-technical-profile.md). |
ai-services | Configuration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/containers/configuration.md | -Support for containers is currently available with Document Intelligence version `2022-08-31 (GA)` only: +Support for containers is currently available with Document Intelligence version `2022-08-31 (GA)` for all models and `2023-07-31 (GA)` for Read and Layout only: * [REST API `2022-08-31 (GA)`](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument)+* [REST API `2023-07-31 (GA)`](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2023-07-31/operations/AnalyzeDocument) * [SDKs targeting `REST API 2022-08-31 (GA)`](../sdk-overview-v3-0.md)+* [SDKs targeting `REST API 2023-07-31 (GA)`](../sdk-overview-v3-1.md) ✔️ See [**Configure Document Intelligence v3.0 containers**](?view=doc-intel-3.0.0&preserve-view=true) for supported container documentation. :::moniker-end -**This content applies to:** ![checkmark](../media/yes-icon.png) **v3.0 (GA)** +**This content applies to:** ![checkmark](../media/yes-icon.png) **v3.0 (GA)** ![checkmark](../media/yes-icon.png) **v3.1 (GA)** -With Document Intelligence containers, you can build an application architecture optimized to take advantage of both robust cloud capabilities and edge locality. Containers provide a minimalist, isolated environment that can be easily deployed on-premises and in the cloud. In this article, we show you how to configure the Document Intelligence container run-time environment by using the `docker compose` command arguments. Document Intelligence features are supported by six Document Intelligence feature containers—**Layout**, **Business Card**,**ID Document**, **Receipt**, **Invoice**, **Custom**. These containers have both required and optional settings. For a few examples, see the [Example docker-compose.yml file](#example-docker-composeyml-file) section. +With Document Intelligence containers, you can build an application architecture optimized to take advantage of both robust cloud capabilities and edge locality. Containers provide a minimalist, isolated environment that can be easily deployed on-premises and in the cloud. In this article, we show you how to configure the Document Intelligence container run-time environment by using the `docker compose` command arguments. Document Intelligence features are supported by seven Document Intelligence feature containers—**Read**, **Layout**, **Business Card**, **ID Document**, **Receipt**, **Invoice**, and **Custom**. These containers have both required and optional settings. For a few examples, see the [Example docker-compose.yml file](#example-docker-composeyml-file) section. ## Configuration settings |
ai-services | Disconnected | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/containers/disconnected.md | -Support for containers is currently available with Document Intelligence version `2022-08-31 (GA)`: +Support for containers is currently available with Document Intelligence version `2022-08-31 (GA)` for all models and `2023-07-31 (GA)` for Read and Layout only: * [REST API `2022-08-31 (GA)`](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument)+* [REST API `2023-07-31 (GA)`](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2023-07-31/operations/AnalyzeDocument) * [SDKs targeting `REST API 2022-08-31 (GA)`](../sdk-overview-v3-0.md)+* [SDKs targeting `REST API 2023-07-31 (GA)`](../sdk-overview-v3-1.md) ✔️ See [**Document Intelligence v3.0 containers in disconnected environments**](?view=doc-intel-3.0.0&preserve-view=true) for supported container documentation. :::moniker-end -**This content applies to:** ![checkmark](../media/yes-icon.png) **v3.0 (GA)** +**This content applies to:** ![checkmark](../media/yes-icon.png) **v3.0 (GA)** ![checkmark](../media/yes-icon.png) **v3.1 (GA)** ## What are disconnected containers? |
ai-services | Image Tags | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/containers/image-tags.md | -Support for containers is currently available with Document Intelligence version `2022-08-31 (GA)` only: +Support for containers is currently available with Document Intelligence version `2022-08-31 (GA)` for all models and `2023-07-31 (GA)` for Read and Layout only: * [REST API `2022-08-31 (GA)`](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument)+* [REST API `2023-07-31 (GA)`](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2023-07-31/operations/AnalyzeDocument) * [SDKs targeting `REST API 2022-08-31 (GA)`](../sdk-overview-v3-0.md)+* [SDKs targeting `REST API 2023-07-31 (GA)`](../sdk-overview-v3-1.md) ✔️ See [**Document Intelligence container image tags**](?view=doc-intel-3.0.0&preserve-view=true) for supported container documentation. The following containers support Document Intelligence v3.0 models and features: ::: moniker-end ++**This content applies to:** ![checkmark](../media/yes-icon.png) **v3.1 (GA)** ++## Microsoft Container Registry (MCR) ++Document Intelligence container images can be found within the [**Microsoft Artifact Registry** (also known as Microsoft Container Registry (MCR))](https://mcr.microsoft.com/catalog?search=document%20intelligence), the primary registry for all Microsoft published container images. ++The following containers support Document Intelligence v3.1 models and features: ++| Container name | Image | +||| +|[**Document Intelligence Studio**](https://mcr.microsoft.com/product/azure-cognitive-services/form-recognizer/studio/tags)| `mcr.microsoft.com/azure-cognitive-services/form-recognizer/studio:latest`| +| [**Read 3.1**](https://mcr.microsoft.com/product/azure-cognitive-services/form-recognizer/read-3.1/tags) |`mcr.microsoft.com/azure-cognitive-services/form-recognizer/read-3.1:latest`| +| [**Layout 3.1**](https://mcr.microsoft.com/en-us/product/azure-cognitive-services/form-recognizer/layout-3.1/tags) |`mcr.microsoft.com/azure-cognitive-services/form-recognizer/layout-3.1:latest`| ++ :::moniker range="doc-intel-2.1.0" > [!IMPORTANT] |
ai-services | Api Version Deprecation | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/api-version-deprecation.md | Azure OpenAI API version 2023-12-01-preview is currently the latest preview release. This version contains support for all the latest Azure OpenAI features including: +- [Text to speech](./text-to-speech-quickstart.md). [**Added in 2024-02-15-preview**] - [Fine-tuning](./how-to/fine-tuning.md) `gpt-35-turbo`, `babbage-002`, and `davinci-002` models. [**Added in 2023-10-01-preview**] - [Whisper](./whisper-quickstart.md). [**Added in 2023-09-01-preview**] - [Function calling](./how-to/function-calling.md). [**Added in 2023-07-01-preview**] |
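Each request selects its feature surface through the `api-version` query parameter; with the OpenAI Python client, that's the `api_version` argument. A minimal sketch of pinning a preview version (the endpoint and key environment variable names follow the snippets used elsewhere in these articles):

```python
import os
from openai import AzureOpenAI

# The api_version string determines which feature set (for example,
# Assistants or text to speech) the service exposes to this client.
client = AzureOpenAI(
    api_key=os.getenv("AZURE_OPENAI_KEY"),
    api_version="2024-02-15-preview",  # a preview release noted in the list above
    azure_endpoint=os.getenv("AZURE_OPENAI_ENDPOINT"),
)
```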
ai-services | Assistants Quickstart | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/assistants-quickstart.md | + + Title: Quickstart - Getting started with Azure OpenAI Assistants (Preview) ++description: Walkthrough on how to get started with Azure OpenAI assistants with new features like code interpreter and retrieval. +++++ Last updated : 02/01/2024+zone_pivot_groups: openai-quickstart +recommendations: false ++++# Quickstart: Get started using Azure OpenAI Assistants (Preview) ++Azure OpenAI Assistants (Preview) allows you to create AI assistants tailored to your needs through custom instructions and augmented by advanced tools like code interpreter and custom functions. +++++++++ |
ai-services | Assistants | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/concepts/assistants.md | + + Title: Azure OpenAI Service Assistant API concepts ++description: Learn about the concepts behind the Azure OpenAI Assistants API. + Last updated : 02/05/2023++++recommendations: false +++# Azure OpenAI Assistants API (Preview) ++Assistants, a new feature of Azure OpenAI Service, is now available in public preview. Assistants API makes it easier for developers to create applications with sophisticated copilot-like experiences that can sift through data, suggest solutions, and automate tasks. ++## Overview ++Previously, building custom AI assistants needed heavy lifting even for experienced developers. While the chat completions API is lightweight and powerful, it's inherently stateless, which means that developers had to manage conversation state and chat threads, tool integrations, retrieval documents and indexes, and execute code manually. ++The Assistants API, as the stateful evolution of the chat completion API, provides a solution for these challenges. +Assistants API supports persistent, automatically managed threads. This means that as a developer you no longer need to develop conversation state management systems and work around a model's context window constraints. The Assistants API will automatically handle the optimizations to keep the thread below the max context window of your chosen model. Once you create a Thread, you can simply append new messages to it as users respond. Assistants can also access multiple tools in parallel, if needed. These tools include: ++- [Code Interpreter](../how-to/code-interpreter.md) +- [Function calling](../how-to/assistant-functions.md) ++The Assistants API is built on the same capabilities that power OpenAI's GPT product. Possible use cases include an AI-powered product recommender, a sales analyst app, a coding assistant, an employee Q&A chatbot, and more. Start building with the no-code Assistants playground in Azure OpenAI Studio, or start building with the API. ++> [!IMPORTANT] +> Retrieving untrusted data using Function calling, Code Interpreter with file input, and Assistant Threads functionalities could compromise the security of your Assistant, or the application that uses the Assistant. Learn about mitigation approaches [here](https://aka.ms/oai/assistant-rai). ++## Assistants playground ++We provide a walkthrough of the Assistants playground in our [quickstart guide](../assistants-quickstart.md). This provides a no-code environment to test out the capabilities of assistants. ++## Assistants components ++| **Component** | **Description** | +||| +| **Assistant** | Custom AI that uses Azure OpenAI models in conjunction with tools. | +|**Thread** | A conversation session between an Assistant and a user. Threads store Messages and automatically handle truncation to fit content into a model's context.| +| **Message** | A message created by an Assistant or a user. Messages can include text, images, and other files. Messages are stored as a list on the Thread. | +|**Run** | Activation of an Assistant to begin running based on the contents of the Thread. The Assistant uses its configuration and the Thread's Messages to perform tasks by calling models and tools. As part of a Run, the Assistant appends Messages to the Thread.| +|**Run Step** | A detailed list of steps the Assistant took as part of a Run. An Assistant can call tools or create Messages during its run.
Examining Run Steps allows you to understand how the Assistant is getting to its final results. | ++## See also ++* Learn more about Assistants and [Code Interpreter](../how-to/code-interpreter.md) +* Learn more about Assistants and [function calling](../how-to/assistant-functions.md) +* [Azure OpenAI Assistants API samples](https://github.com/Azure-Samples/azureai-samples/tree/main/scenarios/Assistants) +++ |
ai-services | Models | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/concepts/models.md | Azure OpenAI Service is powered by a diverse set of models with different capabilities and price points. | [Embeddings](#embeddings-models) | A set of models that can convert text into numerical vector form to facilitate text similarity. | | [DALL-E](#dall-e-models-preview) (Preview) | A series of models in preview that can generate original images from natural language. | | [Whisper](#whisper-models-preview) (Preview) | A series of models in preview that can transcribe and translate speech to text. |+| [Text to speech](#text-to-speech-models-preview) (Preview) | A series of models in preview that can synthesize text to speech. | ## GPT-4 and GPT-4 Turbo Preview The Whisper models, currently in preview, can be used for speech to text. You can also use the Whisper model via the Azure AI Speech [batch transcription](../../speech-service/batch-transcription-create.md) API. Check out [What is the Whisper model?](../../speech-service/whisper-overview.md) to learn more about when to use Azure AI Speech vs. Azure OpenAI Service. -## Model summary table and region availability +## Text to speech (Preview) -> [!IMPORTANT] -> Due to high demand: -> -> - South Central US is temporarily unavailable for creating new resources and deployments. +The OpenAI text to speech models, currently in preview, can be used to synthesize text to speech. -### GPT-4 and GPT-4 Turbo Preview models +You can also use the OpenAI text to speech voices via Azure AI Speech. To learn more, see the [OpenAI text to speech voices via Azure OpenAI Service or via Azure AI Speech](../../speech-service/openai-voices.md#openai-text-to-speech-voices-via-azure-openai-service-or-via-azure-ai-speech) guide. ++## Model summary table and region availability +### GPT-4 and GPT-4 Turbo Preview models GPT-4, GPT-4-32k, and GPT-4 Turbo with Vision are now available to all Azure OpenAI Service customers. Availability varies by region. If you don't see GPT-4 in your region, please check back later. These models can only be used with the Chat Completion API. -GPT-4 version 0314 is the first version of the model released. Version 0613 is the second version of the model and adds function calling support. +GPT-4 version 0314 is the first version of the model released. Version 0613 is the second version of the model and adds function calling support. See [model versions](../concepts/model-versions.md) to learn about how Azure OpenAI Service handles model version upgrades, and [working with models](../how-to/working-with-models.md) to learn how to view and configure the model version settings of your GPT-4 deployments. > [!NOTE]-> Version `0613` of `gpt-4` and `gpt-4-32k` will be retired on June 13, 2024. Version `0314` of `gpt-4` and `gpt-4-32k` will be retired on July 5, 2024. See [model updates](../how-to/working-with-models.md#model-updates) for model upgrade behavior. +> Version `0314` of `gpt-4` and `gpt-4-32k` will be retired no earlier than July 5, 2024. Version `0613` of `gpt-4` and `gpt-4-32k` will be retired no earlier than September 30, 2024. See [model updates](../how-to/working-with-models.md#model-updates) for model upgrade behavior. +++GPT-4 version 0125-preview is an updated version of the GPT-4 Turbo preview previously released as version 1106-preview. GPT-4 version 0125-preview completes tasks such as code generation more thoroughly than gpt-4-1106-preview.
Because of this, depending on the task, customers may find that GPT-4-0125-preview generates more output compared to gpt-4-1106-preview. We recommend customers compare the outputs of the new model. GPT-4-0125-preview also addresses bugs in gpt-4-1106-preview with UTF-8 handling for non-English languages. ++> [!IMPORTANT] +> +> - `gpt-4` version 0125-preview replaces version 1106-preview. Deployments of `gpt-4` version 1106-preview set to "Auto-update to default" and "Upgrade when expired" will start to be upgraded on February 20, 2024, and will complete upgrades within 2 weeks. Deployments of `gpt-4` version 1106-preview set to "No autoupgrade" will stop working starting February 20, 2024. If you have a deployment of `gpt-4` version 1106-preview, you can test version `0125-preview` in the available regions below. | Model ID | Max Request (tokens) | Training Data (up to) | | | : | :: | See [model versions](../concepts/model-versions.md) to learn about how Azure OpenAI Service handles model version upgrades. | `gpt-4` (0613) | 8,192 | Sep 2021 | | `gpt-4-32k` (0613) | 32,768 | Sep 2021 | | `gpt-4` (1106-preview)**<sup>1</sup>**<br>**GPT-4 Turbo Preview** | Input: 128,000 <br> Output: 4,096 | Apr 2023 |+| `gpt-4` (0125-preview)**<sup>1</sup>**<br>**GPT-4 Turbo Preview** | Input: 128,000 <br> Output: 4,096 | Apr 2023 | | `gpt-4` (vision-preview)**<sup>2</sup>**<br>**GPT-4 Turbo with Vision Preview** | Input: 128,000 <br> Output: 4,096 | Apr 2023 | -**<sup>1</sup>** GPT-4 Turbo Preview = `gpt-4` (1106-preview). To deploy this model, under **Deployments** select model **gpt-4**. For **Model version** select **1106-preview**. +**<sup>1</sup>** GPT-4 Turbo Preview = `gpt-4` (0125-preview). To deploy this model, under **Deployments** select model **gpt-4**. For **Model version** select **0125-preview**. **<sup>2</sup>** GPT-4 Turbo with Vision Preview = `gpt-4` (vision-preview). To deploy this model, under **Deployments** select model **gpt-4**. For **Model version** select **vision-preview**. > [!CAUTION]-> We don't recommend using preview models in production. We will upgrade all deployments of preview models to a future stable version. Models designated preview do not follow the standard Azure OpenAI model lifecycle. +> We don't recommend using preview models in production. We will upgrade all deployments of preview models to future preview versions and a stable version. Models designated preview do not follow the standard Azure OpenAI model lifecycle.
> [!NOTE] > Regions where GPT-4 (0314) & (0613) are listed as available have access to both the 8K and 32K versions of the model. See [model versions](../concepts/model-versions.md) to learn about how Azure OpenAI Service handles model version upgrades. | gpt-4 (0314) | | East US <br> France Central <br> South Central US <br> UK South | | gpt-4 (0613) | Australia East <br> Canada East <br> France Central <br> Sweden Central <br> Switzerland North | East US <br> East US 2 <br> Japan East <br> UK South | | gpt-4 (1106-preview) | Australia East <br> Canada East <br> East US 2 <br> France Central <br> Norway East <br> South India <br> Sweden Central <br> UK South <br> West US | | +| gpt-4 (0125-preview) | East US <br> North Central US <br> South Central US <br> | | gpt-4 (vision-preview) | Sweden Central <br> West US <br> Japan East| Switzerland North <br> Australia East | #### Azure Government regions The following Embeddings models are available with [Azure Government](/azure/azu | `babbage-002` | North Central US <br> Sweden Central | 16,384 | Sep 2021 | | `davinci-002` | North Central US <br> Sweden Central | 16,384 | Sep 2021 | | `gpt-35-turbo` (0613) | North Central US <br> Sweden Central | 4,096 | Sep 2021 |+| `gpt-35-turbo` (1106) | North Central US <br> Sweden Central | Input: 16,385<br> Output: 4,096 | Sep 2021| + ### Whisper models (Preview) The following Embeddings models are available with [Azure Government](/azure/azu | | | :: | | `whisper` | North Central US <br> West Europe | 25 MB | +### Text to speech models (Preview) ++| Model ID | Model Availability | +| | :: | +| `tts-1` | North Central US <br> Sweden Central | +| `tts-1-hd` | North Central US <br> Sweden Central | ++### Assistants (Preview) ++For Assistants, you need a combination of a supported model and a supported region. Certain tools and capabilities require the latest models. For example, [parallel function calling](../how-to/assistant-functions.md) requires the latest 1106 models. ++| Region | `gpt-35-turbo (1106)` | `gpt-4 (1106-preview)` | `gpt-4 (0613)` | `gpt-4 (0314)` | `gpt-35-turbo (0301)` | `gpt-35-turbo (0613)` | `gpt-35-turbo-16k (0613)` | `gpt-4-32k (0314)` | `gpt-4-32k (0613)` | +||||||||||| +| Sweden Central | ✅|✅|✅|✅|✅|✅|✅||✅| +| East US 2 ||✅|✅|||✅|||✅| +| Australia East |✅|✅|✅|||✅|||✅| ++ ## Next steps - [Learn more about working with Azure OpenAI models](../how-to/working-with-models.md) |
ai-services | Provisioned Throughput | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/concepts/provisioned-throughput.md | We use a variation of the leaky bucket algorithm to maintain utilization below 100%. a. When the current utilization is above 100%, the service returns a 429 code with the `retry-after-ms` header set to the time until utilization is below 100% - b. Otherwise, the service estimates the incremental change to utilization required to serve the request by combining prompt tokens and the specified max_tokens in the call. + b. Otherwise, the service estimates the incremental change to utilization required to serve the request by combining prompt tokens and the specified `max_tokens` in the call. If the `max_tokens` parameter is not specified, the service will estimate a value. This estimation can lead to lower concurrency than expected when the number of actual generated tokens is small. For highest concurrency, ensure that the `max_tokens` value is as close as possible to the true generation size. 3. When a request finishes, we now know the actual compute cost for the call. To ensure an accurate accounting, we correct the utilization using the following logic: We use a variation of the leaky bucket algorithm to maintain utilization below 100%. :::image type="content" source="../media/provisioned/utilization.jpg" alt-text="Diagram showing how subsequent calls are added to the utilization." lightbox="../media/provisioned/utilization.jpg"::: +#### How many concurrent calls can I have on my deployment? +The number of concurrent calls you can have at one time depends on each call's shape. The service continues to accept calls until utilization exceeds 100%. To determine the approximate number of concurrent calls, you can model the maximum requests per minute for a particular call shape in the [capacity calculator](https://oai.azure.com/portal/calculator). If `max_tokens` is empty, you can assume a value of 1,000. ## Next steps |
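To make the accounting above concrete, here is a small illustrative sketch. It is not the service's actual implementation; the 1,000-token default for an omitted `max_tokens` follows the guidance in this section:

```python
# Hypothetical sketch of the utilization accounting described above.
# Not the service's actual code; the default of 1000 tokens when
# max_tokens is omitted is an assumption taken from this section.

def estimated_cost(prompt_tokens: int, max_tokens: int | None) -> int:
    """Estimate a call's utilization cost before any output is generated."""
    if max_tokens is None:
        max_tokens = 1000  # assumed default when the caller omits max_tokens
    return prompt_tokens + max_tokens

def corrected_cost(prompt_tokens: int, generated_tokens: int) -> int:
    """Once the request finishes, the actual compute cost is known."""
    return prompt_tokens + generated_tokens

# Example: a short completion reserves far more capacity than it uses,
# which is why tight max_tokens values allow higher concurrency.
print(estimated_cost(prompt_tokens=500, max_tokens=None))      # 1500 reserved
print(corrected_cost(prompt_tokens=500, generated_tokens=50))  # 550 actual
```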
ai-services | Use Your Data | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/concepts/use-your-data.md | When you want to reuse the same URL/web address, you can select [Azure AI Search -## Custom parameters +## Ingestion parameters -You can modify the following additional settings in the **Data parameters** section in Azure OpenAI Studio and [the API](../reference.md#completions-extensions). +You can use the following parameter to change how your data is ingested in Azure OpenAI Studio, Azure AI Studio, and the ingestion API. Changing the parameter requires re-ingesting your data into Azure Search. ++|Parameter name | Description | +||| +| **Chunk size** | Azure OpenAI on your data processes your documents by splitting them into chunks before indexing them in Azure Search. The chunk size is the maximum number of tokens for any chunk in the search index. The default chunk size is 1024 tokens. However, given the uniqueness of your data, you may find a different chunk size (such as 256, 512, or 1536 tokens) more effective. Adjusting the chunk size can enhance the performance of the chat bot. While finding the optimal chunk size requires some trial and error, start by considering the nature of your dataset. A smaller chunk size is generally better for datasets with direct facts and less context, while a larger chunk size might be beneficial for more contextual information, though it can affect retrieval performance. This is the `chunkSize` parameter in the API.| +++## Runtime parameters ++You can modify the following additional settings in the **Data parameters** section in Azure OpenAI Studio and [the API](../reference.md#completions-extensions). You do not need to re-ingest your data when you update these parameters. |Parameter name | Description | |||-|**Retrieved documents** | Specifies the number of top-scoring documents from your data index used to generate responses. You might want to increase the value when you have short documents or want to provide more context. The default value is 5. This is the `topNDocuments` parameter in the API. | -| **Strictness** | Sets the threshold to categorize documents as relevant to your queries. Raising the value means a higher threshold for relevance and filters out more less-relevant documents for responses. Setting this value too high might cause the model to fail to generate responses due to limited available documents. The default value is 3. | +| **Limit responses to your data** | This flag configures the chatbot's approach to handling queries unrelated to the data source or when search documents are insufficient for a complete answer. When this setting is disabled, the model supplements its responses with its own knowledge in addition to your documents. When this setting is enabled, the model attempts to rely only on your documents for responses. This is the `inScope` parameter in the API. | +|**Top K Documents** | This parameter is an integer that can be set to 3, 5, 10, or 20, and controls the number of document chunks provided to the large language model for formulating the final response. By default, this is set to 5. The search process can be noisy, and sometimes, due to chunking, relevant information may be spread across multiple chunks in the search index. Selecting a top-K number, like 5, ensures that the model can extract relevant information, despite the inherent limitations of search and chunking. However, increasing the number too high can potentially distract the model. Additionally, the maximum number of documents that can be effectively used depends on the version of the model, as each has a different context size and capacity for handling documents. If you find that responses are missing important context, try increasing this parameter. Conversely, if you think the model is providing irrelevant information alongside useful data, consider decreasing it. When experimenting with the [chunk size](#ingestion-parameters), we recommend adjusting the top-K parameter to achieve the best performance. Usually, it is beneficial to change the top-K value in the opposite direction of your chunk size adjustment. For example, if you decrease the chunk size from the default of 1024, you might want to increase the top-K value to 10 or 20. This ensures a similar amount of information is provided to the model, as reducing the chunk size decreases the amount of information in the 5 documents given to the model. This is the `topNDocuments` parameter in the API. | +| **Strictness** | Determines the system's aggressiveness in filtering search documents based on their similarity scores. The system queries Azure Search or other document stores, then decides which documents to provide to large language models like ChatGPT. Filtering out irrelevant documents can significantly enhance the performance of the end-to-end chatbot. Some documents are excluded from the top-K results if they have low similarity scores before forwarding them to the model. This is controlled by an integer value ranging from 1 to 5. Setting this value to 1 means that the system will minimally filter documents based on search similarity to the user query. Conversely, a setting of 5 indicates that the system will aggressively filter out documents, applying a very high similarity threshold. If you find that the chatbot omits relevant information, lower the filter's strictness (set the value closer to 1) to include more documents. Conversely, if irrelevant documents distract the responses, increase the threshold (set the value closer to 5). This is the `strictness` parameter in the API. | + ## Document-level access control |
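As a concrete illustration of the runtime parameters above, the sketch below sets `inScope`, `topNDocuments`, and `strictness` on an Azure OpenAI on your data chat request. The parameter names come from this section; the surrounding payload shape, endpoint, and environment variable names are illustrative, so check the API reference for the authoritative schema:

```python
import os
import requests

# Hypothetical sketch of a chat request using "on your data" runtime parameters.
endpoint = os.environ["AZURE_OPENAI_ENDPOINT"]
deployment = "gpt-4"  # your deployment name

payload = {
    "messages": [{"role": "user", "content": "What does my benefits plan cover?"}],
    "dataSources": [{
        "type": "AzureCognitiveSearch",
        "parameters": {
            "endpoint": os.environ["SEARCH_ENDPOINT"],
            "key": os.environ["SEARCH_KEY"],
            "indexName": "my-index",
            "inScope": True,        # limit responses to your data
            "topNDocuments": 5,     # Top K documents passed to the model
            "strictness": 3,        # similarity-filtering aggressiveness (1-5)
        },
    }],
}

response = requests.post(
    f"{endpoint}/openai/deployments/{deployment}/extensions/chat/completions"
    "?api-version=2023-12-01-preview",
    headers={"api-key": os.environ["AZURE_OPENAI_KEY"]},
    json=payload,
)
print(response.json())
```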
ai-services | Assistant Functions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/how-to/assistant-functions.md | + + Title: 'How to use Azure OpenAI Assistants function calling' ++description: Learn how to use Assistants function calling ++++ Last updated : 02/01/2024+++recommendations: false ++++# Azure OpenAI Assistants function calling ++The Assistants API supports function calling, which allows you to describe the structure of functions to an Assistant and have it return the functions that need to be called along with their arguments. ++## Function calling support ++### Supported models ++The [models page](../concepts/models.md#assistants-preview) contains the most up-to-date information on regions/models where Assistants are supported. ++To use all features of function calling, including parallel functions, you need to use the latest models. ++### API Version ++- `2024-02-15-preview` ++## Example function definition ++# [Python 1.x](#tab/python) ++```python +import os +from openai import AzureOpenAI + +client = AzureOpenAI( + api_key=os.getenv("AZURE_OPENAI_KEY"), + api_version="2024-02-15-preview", + azure_endpoint = os.getenv("AZURE_OPENAI_ENDPOINT") + ) ++assistant = client.beta.assistants.create( + instructions="You are a weather bot. Use the provided functions to answer questions.", + model="gpt-4-1106-preview", # Replace with your model deployment name + tools=[{ + "type": "function", + "function": { + "name": "getCurrentWeather", + "description": "Get the weather in location", + "parameters": { + "type": "object", + "properties": { + "location": {"type": "string", "description": "The city and state e.g. San Francisco, CA"}, + "unit": {"type": "string", "enum": ["c", "f"]} + }, + "required": ["location"] + } + } + }, { + "type": "function", + "function": { + "name": "getNickname", + "description": "Get the nickname of a city", + "parameters": { + "type": "object", + "properties": { + "location": {"type": "string", "description": "The city and state e.g. San Francisco, CA"}, + }, + "required": ["location"] + } + } + }] +) +``` ++# [REST](#tab/rest) ++> [!NOTE] +> With Azure OpenAI, the `model` parameter requires the model deployment name. If your model deployment name is different from the underlying model name, adjust your code to ` "model": "{your-custom-model-deployment-name}"`. ++```console +curl https://YOUR_RESOURCE_NAME.openai.azure.com/openai/assistants?api-version=2024-02-15-preview \ + -H "api-key: $AZURE_OPENAI_KEY" \ + -H "Content-Type: application/json" \ + -d '{ + "instructions": "You are a weather bot. Use the provided functions to answer questions.", + "tools": [{ + "type": "function", + "function": { + "name": "getCurrentWeather", + "description": "Get the weather in location", + "parameters": { + "type": "object", + "properties": { + "location": {"type": "string", "description": "The city and state e.g. San Francisco, CA"}, + "unit": {"type": "string", "enum": ["c", "f"]} + }, + "required": ["location"] + } + } + }, + { + "type": "function", + "function": { + "name": "getNickname", + "description": "Get the nickname of a city", + "parameters": { + "type": "object", + "properties": { + "location": {"type": "string", "description": "The city and state e.g. San Francisco, CA"} + }, + "required": ["location"] + } + } + }], + "model": "gpt-4-1106-preview" + }' +``` ++++## Reading the functions ++When you initiate a **Run** with a user Message that triggers the function, the **Run** will enter a pending status.
After it processes, the run will enter a `requires_action` state that you can verify by retrieving the **Run**. ++```json +{ + "id": "run_abc123", + "object": "thread.run", + "assistant_id": "asst_abc123", + "thread_id": "thread_abc123", + "status": "requires_action", + "required_action": { + "type": "submit_tool_outputs", + "submit_tool_outputs": { + "tool_calls": [ + { + "id": "call_abc123", + "type": "function", + "function": { + "name": "getCurrentWeather", + "arguments": "{\"location\":\"San Francisco\"}" + } + }, + { + "id": "call_abc456", + "type": "function", + "function": { + "name": "getNickname", + "arguments": "{\"location\":\"Los Angeles\"}" + } + } + ] + } + }, +... +``` ++## Submitting function outputs ++You can then complete the **Run** by submitting the tool output from the function(s) you call. Pass the `tool_call_id` referenced in the `required_action` object above to match output to each function call. +++# [Python 1.x](#tab/python) ++```python +import os +from openai import AzureOpenAI + +client = AzureOpenAI( + api_key=os.getenv("AZURE_OPENAI_KEY"), + api_version="2024-02-15-preview", + azure_endpoint = os.getenv("AZURE_OPENAI_ENDPOINT") + ) ++# Collect the tool call IDs from the run's required_action so each output can be matched to its call +tool_calls = run.required_action.submit_tool_outputs.tool_calls +call_ids = [tool_call.id for tool_call in tool_calls] ++run = client.beta.threads.runs.submit_tool_outputs( + thread_id=thread.id, + run_id=run.id, + tool_outputs=[ + { + "tool_call_id": call_ids[0], + "output": "22C", + }, + { + "tool_call_id": call_ids[1], + "output": "LA", + }, + ] +) +``` ++# [REST](#tab/rest) ++```console +curl https://YOUR_RESOURCE_NAME.openai.azure.com/openai/threads/thread_abc123/runs/run_123/submit_tool_outputs?api-version=2024-02-15-preview \ + -H "Content-Type: application/json" \ + -H "api-key: $AZURE_OPENAI_KEY" \ + -d '{ + "tool_outputs": [{ + "tool_call_id": "call_abc123", + "output": "{\"temperature\": \"22\", \"unit\": \"celsius\"}" + }, { + "tool_call_id": "call_abc456", + "output": "{\"nickname\": \"LA\"}" + }] + }' +``` ++++After you submit tool outputs, the **Run** will enter the `queued` state before it continues execution. ++## See also ++* Learn more about how to use Assistants with our [How-to guide on Assistants](../how-to/assistant.md). +* [Azure OpenAI Assistants API samples](https://github.com/Azure-Samples/azureai-samples/tree/main/scenarios/Assistants) |
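Putting the two halves together, here's a sketch of a polling-and-dispatch loop. It assumes the `client`, `thread`, and `run` objects from the snippets above; the local `get_current_weather`/`get_nickname` implementations are hypothetical stand-ins for real data sources:

```python
import json
import time

# Hypothetical local implementations standing in for real data sources.
def get_current_weather(location, unit="c"):
    return json.dumps({"temperature": "22", "unit": "celsius"})

def get_nickname(location):
    return json.dumps({"nickname": "LA"})

local_functions = {
    "getCurrentWeather": get_current_weather,
    "getNickname": get_nickname,
}

# Poll until the run reaches a state that needs our input (or finishes).
run = client.beta.threads.runs.retrieve(thread_id=thread.id, run_id=run.id)
while run.status in ("queued", "in_progress"):
    time.sleep(2)
    run = client.beta.threads.runs.retrieve(thread_id=thread.id, run_id=run.id)

if run.status == "requires_action":
    tool_outputs = []
    for call in run.required_action.submit_tool_outputs.tool_calls:
        function = local_functions[call.function.name]
        arguments = json.loads(call.function.arguments)
        # Match each output to its originating call via tool_call_id.
        tool_outputs.append({"tool_call_id": call.id, "output": function(**arguments)})

    run = client.beta.threads.runs.submit_tool_outputs(
        thread_id=thread.id, run_id=run.id, tool_outputs=tool_outputs
    )
```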
ai-services | Assistant | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/how-to/assistant.md | + + Title: 'How to create Assistants with Azure OpenAI Service' ++description: Learn how to create helpful AI Assistants with tools like Code Interpreter +++++ Last updated : 02/01/2024+++recommendations: false ++++# Getting started with Azure OpenAI Assistants (Preview) ++Azure OpenAI Assistants (Preview) allows you to create AI assistants tailored to your needs through custom instructions and augmented by advanced tools like code interpreter and custom functions. In this article, we'll provide an in-depth walkthrough of getting started with the Assistants API. ++## Assistants support ++### Region and model support ++The [models page](../concepts/models.md#assistants-preview) contains the most up-to-date information on regions/models where Assistants are currently supported. ++### API Version ++- `2024-02-15-preview` ++### Supported file types ++|File format|MIME Type|Code Interpreter | +|||| +|.c| text/x-c |✅| +|.cpp|text/x-c++ |✅| +|.csv|application/csv|✅| +|.docx|application/vnd.openxmlformats-officedocument.wordprocessingml.document|✅| +|.html|text/html|✅| +|.java|text/x-java|✅| +|.json|application/json|✅| +|.md|text/markdown| ✅ | +|.pdf|application/pdf|✅| +|.php|text/x-php|✅| +|.pptx|application/vnd.openxmlformats-officedocument.presentationml.presentation|✅| +|.py|text/x-python|✅| +|.py|text/x-script.python|✅| +|.rb|text/x-ruby|✅| +|.tex|text/x-tex|✅| +|.txt|text/plain|✅| +|.css|text/css|✅| +|.jpeg|image/jpeg|✅| +|.jpg|image/jpeg|✅| +|.js|text/javascript|✅| +|.gif|image/gif|✅| +|.png|image/png|✅| +|.tar|application/x-tar|✅| +|.ts|application/typescript|✅| +|.xlsx|application/vnd.openxmlformats-officedocument.spreadsheetml.sheet|✅| +|.xml|application/xml or "text/xml"|✅| +|.zip|application/zip|✅| ++### Tools ++An individual assistant can access up to 128 tools, including `code interpreter`, and you can also define your own custom tools via [functions](./assistant-functions.md). ++### Files ++Files can be uploaded via Studio or programmatically. The `file_ids` parameter is required to give tools like `code_interpreter` access to files. When using the File upload endpoint, you must set the `purpose` to `assistants` for files to be used with the Assistants API. ++## Assistants playground ++We provide a walkthrough of the Assistants playground in our [quickstart guide](../assistants-quickstart.md). This provides a no-code environment to test out the capabilities of assistants. ++## Assistants components ++| **Component** | **Description** | +||| +| **Assistant** | Custom AI that uses Azure OpenAI models in conjunction with tools. | +|**Thread** | A conversation session between an Assistant and a user. Threads store Messages and automatically handle truncation to fit content into a model’s context.| +| **Message** | A message created by an Assistant or a user. Messages can include text, images, and other files. Messages are stored as a list on the Thread. | +|**Run** | Activation of an Assistant to begin running based on the contents of the Thread. The Assistant uses its configuration and the Thread’s Messages to perform tasks by calling models and tools. As part of a Run, the Assistant appends Messages to the Thread.| +|**Run Step** | A detailed list of steps the Assistant took as part of a Run. An Assistant can call tools or create Messages during its run. Examining Run Steps allows you to understand how the Assistant is getting to its final results.
| ++## Setting up your first Assistant ++### Create an assistant ++For this example, we'll create an assistant that writes code to generate visualizations using the capabilities of the `code_interpreter` tool. The examples below are intended to be run sequentially in an environment like [Jupyter Notebooks](https://jupyter.org/). ++```Python +import os +import json +from openai import AzureOpenAI + +client = AzureOpenAI( + api_key=os.getenv("AZURE_OPENAI_KEY"), + api_version="2024-02-15-preview", + azure_endpoint = os.getenv("AZURE_OPENAI_ENDPOINT") + ) ++# Create an assistant +assistant = client.beta.assistants.create( + name="Data Visualization", + instructions=f"You are a helpful AI assistant who makes interesting visualizations based on data." + f"You have access to a sandboxed environment for writing and testing code." + f"When you are asked to create a visualization you should follow these steps:" + f"1. Write the code." + f"2. Anytime you write new code display a preview of the code to show your work." + f"3. Run the code to confirm that it runs." + f"4. If the code is successful display the visualization." + f"5. If the code is unsuccessful display the error message and try to revise the code and rerun going through the steps from above again.", + tools=[{"type": "code_interpreter"}], + model="gpt-4-1106-preview" # You must replace this value with the deployment name for your model. +) ++``` ++There are a few details you should note from the configuration above: ++- We enable this assistant to access code interpreter with the line ` tools=[{"type": "code_interpreter"}],`. This gives the model access to a sandboxed Python environment to run and execute code to help formulate responses to a user's question. +- In the instructions we remind the model that it can execute code. Sometimes the model needs help guiding it towards the right tool to solve a given query. If you know you want to use a particular library that's available to code interpreter to generate a certain response, it can help to provide guidance by saying something like "Use Matplotlib to do x." +- Since this is Azure OpenAI, the value you enter for `model=` **must match the deployment name**. By convention our docs will often use a deployment name that happens to match the model name to indicate which model was used when testing a given example, but in your environment the deployment names can be different, and that is the name that you should enter in the code. ++Next we're going to print the contents of the assistant that we just created to confirm that creation was successful: ++```python +print(assistant.model_dump_json(indent=2)) +``` ++```json +{ + "id": "asst_7AZSrv5I3XzjUqWS40X5UgRr", + "created_at": 1705972454, + "description": null, + "file_ids": [], + "instructions": "You are a helpful AI assistant who makes interesting visualizations based on data.You have access to a sandboxed environment for writing and testing code.When you are asked to create a visualization you should follow these steps:1. Write the code.2. Anytime you write new code display a preview of the code to show your work.3. Run the code to confirm that it runs.4. If the code is successful display the visualization.5.
If the code is unsuccessful display the error message and try to revise the code and rerun going through the steps from above again.", + "metadata": {}, + "model": "gpt-4-1106-preview", + "name": "Data Visualization", + "object": "assistant", + "tools": [ + { + "type": "code_interpreter" + } + ] +} +``` ++### Create a thread ++Now let's create a thread: ++```python +# Create a thread +thread = client.beta.threads.create() +print(thread) +``` ++```output +Thread(id='thread_6bunpoBRZwNhovwzYo7fhNVd', created_at=1705972465, metadata={}, object='thread') +``` ++A thread is essentially the record of the conversation session between the assistant and the user. It's similar to the messages array/list in a typical chat completions API call. One of the key differences is that, unlike a chat completions messages array, you don't need to track tokens with each call to make sure that you're remaining below the context length of the model. Threads abstract away this management detail and will compress the thread history as needed in order to allow the conversation to continue. The ability for threads to accomplish this with larger conversations is enhanced when using the latest models, which have larger context lengths as well as support for the latest features. ++Next, create the first user question to add to the thread: ++```python +# Add a user question to the thread +message = client.beta.threads.messages.create( + thread_id=thread.id, + role="user", + content="Create a visualization of a sinewave" +) +``` ++### List thread messages ++```python +thread_messages = client.beta.threads.messages.list(thread.id) +print(thread_messages.model_dump_json(indent=2)) +``` ++```json +{ + "data": [ + { + "id": "msg_JnkmWPo805Ft8NQ0gZF6vA2W", + "assistant_id": null, + "content": [ + { + "text": { + "annotations": [], + "value": "Create a visualization of a sinewave" + }, + "type": "text" + } + ], + "created_at": 1705972476, + "file_ids": [], + "metadata": {}, + "object": "thread.message", + "role": "user", + "run_id": null, + "thread_id": "thread_6bunpoBRZwNhovwzYo7fhNVd" + } + ], + "object": "list", + "first_id": "msg_JnkmWPo805Ft8NQ0gZF6vA2W", + "last_id": "msg_JnkmWPo805Ft8NQ0gZF6vA2W", + "has_more": false +} +``` ++### Run thread ++```python +run = client.beta.threads.runs.create( + thread_id=thread.id, + assistant_id=assistant.id, + #instructions="New instructions" #You can optionally provide new instructions, but these will override the default instructions +) +``` ++We could also pass an `instructions` parameter here, but this would override the existing instructions that we have already provided for the assistant. ++### Retrieve thread status ++```python +# Retrieve the status of the run +run = client.beta.threads.runs.retrieve( + thread_id=thread.id, + run_id=run.id +) ++status = run.status +print(status) +``` ++```output +completed +``` ++Depending on the complexity of the query you run, the thread could take longer to execute.
In that case, you can create a loop to monitor the [run status](#run-status-definitions) of the thread with code like the example below: ++```python +import time +from IPython.display import clear_output ++start_time = time.time() ++status = run.status ++while status not in ["completed", "cancelled", "expired", "failed"]: + time.sleep(5) + run = client.beta.threads.runs.retrieve(thread_id=thread.id, run_id=run.id) + print("Elapsed time: {} minutes {} seconds".format(int((time.time() - start_time) // 60), int((time.time() - start_time) % 60))) + status = run.status + print(f'Status: {status}') + clear_output(wait=True) ++messages = client.beta.threads.messages.list( + thread_id=thread.id +) ++print(f'Status: {status}') +print("Elapsed time: {} minutes {} seconds".format(int((time.time() - start_time) // 60), int((time.time() - start_time) % 60))) +print(messages.model_dump_json(indent=2)) +``` ++When a Run is `in_progress` or in other nonterminal states, the thread is locked. When a thread is locked, new messages can't be added, and new runs can't be created. ++### List thread messages post run ++Once the run status indicates successful completion, you can list the contents of the thread again to retrieve the model's response and any tool responses: ++```python +messages = client.beta.threads.messages.list( + thread_id=thread.id +) ++print(messages.model_dump_json(indent=2)) +``` ++```json +{ + "data": [ + { + "id": "msg_M5pz73YFsJPNBbWvtVs5ZY3U", + "assistant_id": "asst_eHwhP4Xnad0bZdJrjHO2hfB4", + "content": [ + { + "text": { + "annotations": [], + "value": "Is there anything else you would like to visualize or any additional features you'd like to add to the sine wave plot?" + }, + "type": "text" + } + ], + "created_at": 1705967782, + "file_ids": [], + "metadata": {}, + "object": "thread.message", + "role": "assistant", + "run_id": "run_AGQHJrrfV3eM0eI9T3arKgYY", + "thread_id": "thread_ow1Yv29ptyVtv7ixbiKZRrHd" + }, + { + "id": "msg_oJbUanImBRpRran5HSa4Duy4", + "assistant_id": "asst_eHwhP4Xnad0bZdJrjHO2hfB4", + "content": [ + { + "image_file": { + "file_id": "assistant-1YGVTvNzc2JXajI5JU9F0HMD" + }, + "type": "image_file" + }, + { + "text": { + "annotations": [], + "value": "Here is the visualization of a sine wave: \n\nThe wave is plotted using values from 0 to \\( 4\\pi \\) on the x-axis, and the corresponding sine values on the y-axis. I've also added grid lines for easier reading of the plot." + }, + "type": "text" + } + ], + "created_at": 1705967044, + "file_ids": [], + "metadata": {}, + "object": "thread.message", + "role": "assistant", + "run_id": "run_8PsweDFn6gftUd91H87K0Yts", + "thread_id": "thread_ow1Yv29ptyVtv7ixbiKZRrHd" + }, + { + "id": "msg_Pu3eHjM10XIBkwqh7IhnKKdG", + "assistant_id": null, + "content": [ + { + "text": { + "annotations": [], + "value": "Create a visualization of a sinewave" + }, + "type": "text" + } + ], + "created_at": 1705966634, + "file_ids": [], + "metadata": {}, + "object": "thread.message", + "role": "user", + "run_id": null, + "thread_id": "thread_ow1Yv29ptyVtv7ixbiKZRrHd" + } + ], + "object": "list", + "first_id": "msg_M5pz73YFsJPNBbWvtVs5ZY3U", + "last_id": "msg_Pu3eHjM10XIBkwqh7IhnKKdG", + "has_more": false +} +``` ++### Retrieve file ID ++We had requested that the model generate an image of a sine wave. In order to download the image, we first need to retrieve the image's file ID.
++```python +data = json.loads(messages.model_dump_json(indent=2)) # Load JSON data into a Python object +image_file_id = data['data'][1]['content'][0]['image_file']['file_id'] ++print(image_file_id) # Outputs: assistant-1YGVTvNzc2JXajI5JU9F0HMD +``` ++### Download image ++```python +from openai import AzureOpenAI + +client = AzureOpenAI( + api_key=os.getenv("AZURE_OPENAI_KEY"), + api_version="2024-02-15-preview", + azure_endpoint = os.getenv("AZURE_OPENAI_ENDPOINT") + ) ++content = client.files.content(image_file_id) ++content.write_to_file("sinewave.png") +``` ++Open the image locally once it's downloaded: ++```python +from PIL import Image ++# Display the image in the default image viewer +image = Image.open("sinewave.png") +image.show() +``` +++### Ask a follow-up question on the thread ++Since the assistant didn't quite follow our instructions and include the code that was run in the text portion of its response, let's explicitly ask for that information. ++```python +# Add a new user question to the thread +message = client.beta.threads.messages.create( + thread_id=thread.id, + role="user", + content="Show me the code you used to generate the sinewave" +) +``` ++Again we'll need to run and retrieve the status of the thread: ++```python +run = client.beta.threads.runs.create( + thread_id=thread.id, + assistant_id=assistant.id, + #instructions="New instructions" #You can optionally provide new instructions, but these will override the default instructions +) ++# Retrieve the status of the run +run = client.beta.threads.runs.retrieve( + thread_id=thread.id, + run_id=run.id +) ++status = run.status +print(status) ++``` ++```output +completed +``` ++Once the run status reaches completed, we'll list the messages in the thread again, which should now include the response to our latest question. ++```python +messages = client.beta.threads.messages.list( + thread_id=thread.id +) ++print(messages.model_dump_json(indent=2)) +``` ++```json +{ + "data": [ + { + "id": "msg_oaF1PUeozAvj3KrNnbKSy4LQ", + "assistant_id": "asst_eHwhP4Xnad0bZdJrjHO2hfB4", + "content": [ + { + "text": { + "annotations": [], + "value": "Certainly, here is the code I used to generate the sine wave visualization:\n\n```python\nimport numpy as np\nimport matplotlib.pyplot as plt\n\n# Generating data for the sinewave\nx = np.linspace(0, 4 * np.pi, 1000) # Generate values from 0 to 4*pi\ny = np.sin(x) # Compute the sine of these values\n\n# Plotting the sine wave\nplt.plot(x, y)\nplt.title('Sine Wave')\nplt.xlabel('x')\nplt.ylabel('sin(x)')\nplt.grid(True)\nplt.show()\n```\n\nThis code snippet uses `numpy` to generate an array of x values and then computes the sine for each x value. It then uses `matplotlib` to plot these values and display the resulting graph."
+ }, + "type": "text" + } + ], + "created_at": 1705969710, + "file_ids": [], + "metadata": {}, + "object": "thread.message", + "role": "assistant", + "run_id": "run_oDS3fH7NorCUVwROTZejKcZN", + "thread_id": "thread_ow1Yv29ptyVtv7ixbiKZRrHd" + }, + { + "id": "msg_moYE3aNwFYuRq2aXpxpt2Wb0", + "assistant_id": null, + "content": [ + { + "text": { + "annotations": [], + "value": "Show me the code you used to generate the sinewave" + }, + "type": "text" + } + ], + "created_at": 1705969678, + "file_ids": [], + "metadata": {}, + "object": "thread.message", + "role": "user", + "run_id": null, + "thread_id": "thread_ow1Yv29ptyVtv7ixbiKZRrHd" + }, + { + "id": "msg_M5pz73YFsJPNBbWvtVs5ZY3U", + "assistant_id": "asst_eHwhP4Xnad0bZdJrjHO2hfB4", + "content": [ + { + "text": { + "annotations": [], + "value": "Is there anything else you would like to visualize or any additional features you'd like to add to the sine wave plot?" + }, + "type": "text" + } + ], + "created_at": 1705967782, + "file_ids": [], + "metadata": {}, + "object": "thread.message", + "role": "assistant", + "run_id": "run_AGQHJrrfV3eM0eI9T3arKgYY", + "thread_id": "thread_ow1Yv29ptyVtv7ixbiKZRrHd" + }, + { + "id": "msg_oJbUanImBRpRran5HSa4Duy4", + "assistant_id": "asst_eHwhP4Xnad0bZdJrjHO2hfB4", + "content": [ + { + "image_file": { + "file_id": "assistant-1YGVTvNzc2JXajI5JU9F0HMD" + }, + "type": "image_file" + }, + { + "text": { + "annotations": [], + "value": "Here is the visualization of a sine wave: \n\nThe wave is plotted using values from 0 to \\( 4\\pi \\) on the x-axis, and the corresponding sine values on the y-axis. I've also added grid lines for easier reading of the plot." + }, + "type": "text" + } + ], + "created_at": 1705967044, + "file_ids": [], + "metadata": {}, + "object": "thread.message", + "role": "assistant", + "run_id": "run_8PsweDFn6gftUd91H87K0Yts", + "thread_id": "thread_ow1Yv29ptyVtv7ixbiKZRrHd" + }, + { + "id": "msg_Pu3eHjM10XIBkwqh7IhnKKdG", + "assistant_id": null, + "content": [ + { + "text": { + "annotations": [], + "value": "Create a visualization of a sinewave" + }, + "type": "text" + } + ], + "created_at": 1705966634, + "file_ids": [], + "metadata": {}, + "object": "thread.message", + "role": "user", + "run_id": null, + "thread_id": "thread_ow1Yv29ptyVtv7ixbiKZRrHd" + } + ], + "object": "list", + "first_id": "msg_oaF1PUeozAvj3KrNnbKSy4LQ", + "last_id": "msg_Pu3eHjM10XIBkwqh7IhnKKdG", + "has_more": false +} +``` ++To extract only the response to our latest question: ++```python +data = json.loads(messages.model_dump_json(indent=2)) +code = data['data'][0]['content'][0]['text']['value'] +print(code) +``` ++*Certainly, here is the code I used to generate the sine wave visualization:* ++```python +import numpy as np +import matplotlib.pyplot as plt ++# Generating data for the sinewave +x = np.linspace(0, 4 * np.pi, 1000) # Generate values from 0 to 4*pi +y = np.sin(x) # Compute the sine of these values ++# Plotting the sine wave +plt.plot(x, y) +plt.title('Sine Wave') +plt.xlabel('x') +plt.ylabel('sin(x)') +plt.grid(True) +plt.show() +``` ++### Dark mode ++Let's add one last question to the thread to see if code interpreter can swap the chart to dark mode for us. ++```python +# Add a user question to the thread +message = client.beta.threads.messages.create( + thread_id=thread.id, + role="user", + content="I prefer visualizations in darkmode can you change the colors to make a darkmode version of this visualization." 
+) ++# Run the thread +run = client.beta.threads.runs.create( + thread_id=thread.id, + assistant_id=assistant.id, +) ++# Retrieve the status of the run +run = client.beta.threads.runs.retrieve( + thread_id=thread.id, + run_id=run.id +) ++status = run.status +print(status) +``` ++```output +completed +``` ++```python +messages = client.beta.threads.messages.list( + thread_id=thread.id +) ++print(messages.model_dump_json(indent=2)) +``` ++```json +{ + "data": [ + { + "id": "msg_KKzOHCArWGvGpuPo0pVZTHgV", + "assistant_id": "asst_eHwhP4Xnad0bZdJrjHO2hfB4", + "content": [ + { + "text": { + "annotations": [], + "value": "You're viewing the dark mode version of the sine wave visualization in the image above. The plot is set against a dark background with a cyan colored sine wave for better contrast and visibility. If there's anything else you'd like to adjust or any other assistance you need, feel free to let me know!" + }, + "type": "text" + } + ], + "created_at": 1705971199, + "file_ids": [], + "metadata": {}, + "object": "thread.message", + "role": "assistant", + "run_id": "run_izZFyTVB1AlFM1VVMItggRn4", + "thread_id": "thread_ow1Yv29ptyVtv7ixbiKZRrHd" + }, + { + "id": "msg_30pXFVYNgP38qNEMS4Zbozfk", + "assistant_id": null, + "content": [ + { + "text": { + "annotations": [], + "value": "I prefer visualizations in darkmode can you change the colors to make a darkmode version of this visualization." + }, + "type": "text" + } + ], + "created_at": 1705971194, + "file_ids": [], + "metadata": {}, + "object": "thread.message", + "role": "user", + "run_id": null, + "thread_id": "thread_ow1Yv29ptyVtv7ixbiKZRrHd" + }, + { + "id": "msg_3j31M0PaJLqO612HLKVsRhlw", + "assistant_id": "asst_eHwhP4Xnad0bZdJrjHO2hfB4", + "content": [ + { + "image_file": { + "file_id": "assistant-kfqzMAKN1KivQXaEJuU0u9YS" + }, + "type": "image_file" + }, + { + "text": { + "annotations": [], + "value": "Here is the dark mode version of the sine wave visualization. I've used the 'dark_background' style in Matplotlib and chosen a cyan color for the plot line to ensure it stands out against the dark background." + }, + "type": "text" + } + ], + "created_at": 1705971123, + "file_ids": [], + "metadata": {}, + "object": "thread.message", + "role": "assistant", + "run_id": "run_B91erEPWro4bZIfryQeIDDlx", + "thread_id": "thread_ow1Yv29ptyVtv7ixbiKZRrHd" + }, + { + "id": "msg_FgDZhBvvM1CLTTFXwgeJLdua", + "assistant_id": null, + "content": [ + { + "text": { + "annotations": [], + "value": "I prefer visualizations in darkmode can you change the colors to make a darkmode version of this visualization." + }, + "type": "text" + } + ], + "created_at": 1705971052, + "file_ids": [], + "metadata": {}, + "object": "thread.message", + "role": "user", + "run_id": null, + "thread_id": "thread_ow1Yv29ptyVtv7ixbiKZRrHd" + }, + { + "id": "msg_oaF1PUeozAvj3KrNnbKSy4LQ", + "assistant_id": "asst_eHwhP4Xnad0bZdJrjHO2hfB4", + "content": [ + { + "text": { + "annotations": [], + "value": "Certainly, here is the code I used to generate the sine wave visualization:\n\n```python\nimport numpy as np\nimport matplotlib.pyplot as plt\n\n# Generating data for the sinewave\nx = np.linspace(0, 4 * np.pi, 1000) # Generate values from 0 to 4*pi\ny = np.sin(x) # Compute the sine of these values\n\n# Plotting the sine wave\nplt.plot(x, y)\nplt.title('Sine Wave')\nplt.xlabel('x')\nplt.ylabel('sin(x)')\nplt.grid(True)\nplt.show()\n```\n\nThis code snippet uses `numpy` to generate an array of x values and then computes the sine for each x value. 
It then uses `matplotlib` to plot these values and display the resulting graph." + }, + "type": "text" + } + ], + "created_at": 1705969710, + "file_ids": [], + "metadata": {}, + "object": "thread.message", + "role": "assistant", + "run_id": "run_oDS3fH7NorCUVwROTZejKcZN", + "thread_id": "thread_ow1Yv29ptyVtv7ixbiKZRrHd" + }, + { + "id": "msg_moYE3aNwFYuRq2aXpxpt2Wb0", + "assistant_id": null, + "content": [ + { + "text": { + "annotations": [], + "value": "Show me the code you used to generate the sinewave" + }, + "type": "text" + } + ], + "created_at": 1705969678, + "file_ids": [], + "metadata": {}, + "object": "thread.message", + "role": "user", + "run_id": null, + "thread_id": "thread_ow1Yv29ptyVtv7ixbiKZRrHd" + }, + { + "id": "msg_M5pz73YFsJPNBbWvtVs5ZY3U", + "assistant_id": "asst_eHwhP4Xnad0bZdJrjHO2hfB4", + "content": [ + { + "text": { + "annotations": [], + "value": "Is there anything else you would like to visualize or any additional features you'd like to add to the sine wave plot?" + }, + "type": "text" + } + ], + "created_at": 1705967782, + "file_ids": [], + "metadata": {}, + "object": "thread.message", + "role": "assistant", + "run_id": "run_AGQHJrrfV3eM0eI9T3arKgYY", + "thread_id": "thread_ow1Yv29ptyVtv7ixbiKZRrHd" + }, + { + "id": "msg_oJbUanImBRpRran5HSa4Duy4", + "assistant_id": "asst_eHwhP4Xnad0bZdJrjHO2hfB4", + "content": [ + { + "image_file": { + "file_id": "assistant-1YGVTvNzc2JXajI5JU9F0HMD" + }, + "type": "image_file" + }, + { + "text": { + "annotations": [], + "value": "Here is the visualization of a sine wave: \n\nThe wave is plotted using values from 0 to \\( 4\\pi \\) on the x-axis, and the corresponding sine values on the y-axis. I've also added grid lines for easier reading of the plot." + }, + "type": "text" + } + ], + "created_at": 1705967044, + "file_ids": [], + "metadata": {}, + "object": "thread.message", + "role": "assistant", + "run_id": "run_8PsweDFn6gftUd91H87K0Yts", + "thread_id": "thread_ow1Yv29ptyVtv7ixbiKZRrHd" + }, + { + "id": "msg_Pu3eHjM10XIBkwqh7IhnKKdG", + "assistant_id": null, + "content": [ + { + "text": { + "annotations": [], + "value": "Create a visualization of a sinewave" + }, + "type": "text" + } + ], + "created_at": 1705966634, + "file_ids": [], + "metadata": {}, + "object": "thread.message", + "role": "user", + "run_id": null, + "thread_id": "thread_ow1Yv29ptyVtv7ixbiKZRrHd" + } + ], + "object": "list", + "first_id": "msg_KKzOHCArWGvGpuPo0pVZTHgV", + "last_id": "msg_Pu3eHjM10XIBkwqh7IhnKKdG", + "has_more": false +} +``` ++Extract the new image file ID and download and display the image: ++```python +data = json.loads(messages.model_dump_json(indent=2)) # Load JSON data into a Python object +image_file_id = data['data'][0]['content'][0]['image_file']['file_id'] # index numbers can vary if you have had a different conversation over the course of the thread. ++print(image_file_id) ++content = client.files.content(image_file_id) +image= content.write_to_file("dark_sine.png") ++# Display the image in the default image viewer +image = Image.open("dark_sine.png") +image.show() +``` +++## Additional reference ++### Run status definitions ++|**Status**| **Definition**| +||--| +|`queued`| When Runs are first created or when you complete the required_action, they are moved to a queued status. They should almost immediately move to in_progress.| +|`in_progress` | While in_progress, the Assistant uses the model and tools to perform steps. 
You can view progress being made by the Run by examining the Run Steps.| +|`completed` | The Run successfully completed! You can now view all Messages the Assistant added to the Thread, and all the steps the Run took. You can also continue the conversation by adding more user Messages to the Thread and creating another Run.| +|`requires_action` | When using the Function calling tool, the Run will move to a `requires_action` state once the model determines the names and arguments of the functions to be called. You must then run those functions and submit the outputs before the run proceeds. If the outputs are not provided before the expires_at timestamp passes (roughly 10 minutes past creation), the run will move to an expired status.| +|`expired` | This happens when the function calling outputs weren't submitted before expires_at and the run expires. Additionally, if the runs take too long to execute and go beyond the time stated in expires_at, our systems will expire the run.| +|`cancelling`| You can attempt to cancel an in_progress run using the Cancel Run endpoint. Once the attempt to cancel succeeds, the status of the Run moves to `cancelled`. Cancellation is attempted but not guaranteed.| +|`cancelled` |Run was successfully canceled.| +|`failed` |You can view the reason for the failure by looking at the `last_error` object in the Run. The timestamp for the failure will be recorded under failed_at.| ++## Message annotations ++Assistant message annotations are different from the [content filtering annotations](../concepts/content-filter.md) that are present in completion and chat completion API responses. Assistant annotations can occur within the content array of the object. Annotations provide information about how you should annotate the text in the responses to the user. ++When annotations are present in the Message content array, you'll see illegible model-generated substrings in the text that you need to replace with the correct annotations. These strings might look something like `【13†source】` or `sandbox:/mnt/data/file.csv`. Here’s a Python code snippet from OpenAI that replaces these strings with the information present in the annotations. ++```Python ++import os +from openai import AzureOpenAI + +client = AzureOpenAI( + api_key=os.getenv("AZURE_OPENAI_KEY"), + api_version="2024-02-15-preview", + azure_endpoint = os.getenv("AZURE_OPENAI_ENDPOINT") + ) ++# Retrieve the message object +message = client.beta.threads.messages.retrieve( + thread_id="...", + message_id="..." 
+) ++# Extract the message content +message_content = message.content[0].text +annotations = message_content.annotations +citations = [] ++# Iterate over the annotations and add footnotes +for index, annotation in enumerate(annotations): + # Replace the text with a footnote + message_content.value = message_content.value.replace(annotation.text, f' [{index}]') ++ # Gather citations based on annotation attributes + if (file_citation := getattr(annotation, 'file_citation', None)): + cited_file = client.files.retrieve(file_citation.file_id) + citations.append(f'[{index}] {file_citation.quote} from {cited_file.filename}') + elif (file_path := getattr(annotation, 'file_path', None)): + cited_file = client.files.retrieve(file_path.file_id) + citations.append(f'[{index}] Click <here> to download {cited_file.filename}') + # Note: File download functionality not implemented above for brevity ++# Add footnotes to the end of the message before displaying to user +message_content.value += '\n' + '\n'.join(citations) ++``` ++|Message annotation | Description | +||| +| `file_citation` | File citations are created by the retrieval tool and define references to a specific quote in a specific file that was uploaded and used by the Assistant to generate the response. | +|`file_path` | File path annotations are created by the code_interpreter tool and contain references to the files generated by the tool. | ++## See also ++* Learn more about Assistants and [Code Interpreter](./code-interpreter.md) +* Learn more about Assistants and [function calling](./assistant-functions.md) +* [Azure OpenAI Assistants API samples](https://github.com/Azure-Samples/azureai-samples/tree/main/scenarios/Assistants) |
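The how-to above retrieves the run status a single time, so depending on timing you may see `queued` or `in_progress` rather than `completed`. A minimal polling sketch, assuming the `client`, `thread`, and `run` objects created earlier in that walkthrough:

```python
import time

# Poll until the run reaches a terminal status; see the run status
# definitions table above for what each status means.
while run.status in ("queued", "in_progress", "cancelling"):
    time.sleep(1)
    run = client.beta.threads.runs.retrieve(
        thread_id=thread.id,
        run_id=run.id,
    )

print(run.status)  # completed, failed, expired, cancelled, or requires_action
```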
ai-services | Code Interpreter | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/how-to/code-interpreter.md | + + Title: 'How to use Azure OpenAI Assistants Code Interpreter' ++description: Learn how to use Assistants Code Interpreter ++++ Last updated : 02/01/2024+++recommendations: false ++++# Azure OpenAI Assistants Code Interpreter (Preview) ++Code Interpreter allows the Assistants API to write and run Python code in a sandboxed execution environment. With Code Interpreter enabled, your Assistant can run code iteratively to solve more challenging code, math, and data analysis problems. When your Assistant writes code that fails to run, it can iterate on this code by modifying and running different code until the code execution succeeds. ++> [!IMPORTANT] +> Code Interpreter has [additional charges](https://azure.microsoft.com/pricing/details/cognitive-services/openai-service/) beyond the token based fees for Azure OpenAI usage. If your Assistant calls Code Interpreter simultaneously in two different threads, two code interpreter sessions are created. Each session is active by default for one hour. ++## Code interpreter support ++### Supported models ++The [models page](../concepts/models.md#assistants-preview) contains the most up-to-date information on regions/models where Assistants and code interpreter are supported. ++We recommend using assistants with the latest models to take advantage of the new features, as well as the larger context windows, and more up-to-date training data. ++### API Version ++- `2024-02-15-preview` ++### Supported file types ++|File format|MIME Type| +||| +|.c| text/x-c | +|.cpp|text/x-c++ | +|.csv|application/csv| +|.docx|application/vnd.openxmlformats-officedocument.wordprocessingml.document| +|.html|text/html| +|.java|text/x-java| +|.json|application/json| +|.md|text/markdown| +|.pdf|application/pdf| +|.php|text/x-php| +|.pptx|application/vnd.openxmlformats-officedocument.presentationml.presentation| +|.py|text/x-python| +|.py|text/x-script.python| +|.rb|text/x-ruby| +|.tex|text/x-tex| +|.txt|text/plain| +|.css|text/css| +|.jpeg|image/jpeg| +|.jpg|image/jpeg| +|.js|text/javascript| +|.gif|image/gif| +|.png|image/png| +|.tar|application/x-tar| +|.ts|application/typescript| +|.xlsx|application/vnd.openxmlformats-officedocument.spreadsheetml.sheet| +|.xml|application/xml or "text/xml"| +|.zip|application/zip| ++## Enable Code Interpreter ++# [Python 1.x](#tab/python) ++```python +from openai import AzureOpenAI + +client = AzureOpenAI( + api_key=os.getenv("AZURE_OPENAI_KEY"), + api_version="2024-02-15-preview", + azure_endpoint = os.getenv("AZURE_OPENAI_ENDPOINT") + ) ++assistant = client.beta.assistants.create( + instructions="You are an AI assistant that can write code to help answer math questions", + model="<REPLACE WITH MODEL DEPLOYMENT NAME>", # replace with model deployment name. + tools=[{"type": "code_interpreter"}] +) +``` ++# [REST](#tab/rest) ++> [!NOTE] +> With Azure OpenAI the `model` parameter requires model deployment name. If your model deployment name is different than the underlying model name then you would adjust your code to ` "model": "{your-custom-model-deployment-name}"`. 
++```console +curl https://YOUR_RESOURCE_NAME.openai.azure.com/openai/assistants?api-version=2024-02-15-preview \ + -H "api-key: $AZURE_OPENAI_KEY" \ + -H 'Content-Type: application/json' \ + -d '{ + "instructions": "You are an AI assistant that can write code to help answer math questions.", + "tools": [ + { "type": "code_interpreter" } + ], + "model": "gpt-4-1106-preview" + }' +``` ++++## Upload file for Code Interpreter +++# [Python 1.x](#tab/python) ++```python +from openai import AzureOpenAI + +client = AzureOpenAI( + api_key=os.getenv("AZURE_OPENAI_KEY"), + api_version="2024-02-15-preview", + azure_endpoint = os.getenv("AZURE_OPENAI_ENDPOINT") + ) ++# Upload a file with an "assistants" purpose +file = client.files.create( + file=open("speech.py", "rb"), + purpose='assistants' +) ++# Create an assistant using the file ID +assistant = client.beta.assistants.create( + instructions="You are an AI assistant that can write code to help answer math questions.", + model="gpt-4-1106-preview", + tools=[{"type": "code_interpreter"}], + file_ids=[file.id] +) +``` ++# [REST](#tab/rest) ++```console +# Upload a file with an "assistants" purpose ++curl https://YOUR_RESOURCE_NAME.openai.azure.com/openai/files?api-version=2024-02-15-preview \ + -H "api-key: $AZURE_OPENAI_KEY" \ + -F purpose="assistants" \ + -F file="@c:\\path_to_file\\file.csv" ++# Create an assistant using the file ID ++curl https://YOUR_RESOURCE_NAME.openai.azure.com/openai/assistants?api-version=2024-02-15-preview \ + -H "api-key: $AZURE_OPENAI_KEY" \ + -H 'Content-Type: application/json' \ + -d '{ + "instructions": "You are an AI assistant that can write code to help answer math questions.", + "tools": [ + { "type": "code_interpreter" } + ], + "model": "gpt-4-1106-preview", + "file_ids": ["file_123abc456"] + }' +``` ++++### Pass file to an individual thread ++In addition to making files accessible at the Assistants level you can pass files so they're only accessible to a particular thread. ++# [Python 1.x](#tab/python) ++```python +from openai import AzureOpenAI + +client = AzureOpenAI( + api_key=os.getenv("AZURE_OPENAI_KEY"), + api_version="2024-02-15-preview", + azure_endpoint = os.getenv("AZURE_OPENAI_ENDPOINT") + ) ++thread = client.beta.threads.create( + messages=[ + { + "role": "user", + "content": "I need to solve the equation `3x + 11 = 14`. Can you help me?", + "file_ids": ["file.id"] # file id will look like: "assistant-R9uhPxvRKGH3m0x5zBOhMjd2" + } + ] +) +``` ++# [REST](#tab/rest) ++```console +curl https://YOUR_RESOURCE_NAME.openai.azure.com/openai/threads/<YOUR-THREAD-ID>/messages?api-version=2024-02-15-preview \ + -H "api-key: $AZURE_OPENAI_KEY" \ + -H 'Content-Type: application/json' \ + -d '{ + "role": "user", + "content": "I need to solve the equation `3x + 11 = 14`. Can you help me?", + "file_ids": ["file_123abc456"] + }' +``` ++++## Download files generated by Code Interpreter ++Files generated by Code Interpreter can be found in the Assistant message responses ++```json + { + "id": "msg_oJbUanImBRpRran5HSa4Duy4", + "assistant_id": "asst_eHwhP4Xnad0bZdJrjHO2hfB4", + "content": [ + { + "image_file": { + "file_id": "file-1YGVTvNzc2JXajI5JU9F0HMD" + }, + "type": "image_file" + }, + # ... 
+ } +``` ++You can download these generated files by passing the file ID to the Files API: ++# [Python 1.x](#tab/python) ++```python +import os +from openai import AzureOpenAI + +client = AzureOpenAI( + api_key=os.getenv("AZURE_OPENAI_KEY"), + api_version="2024-02-15-preview", + azure_endpoint = os.getenv("AZURE_OPENAI_ENDPOINT") + ) ++image_data = client.files.content("file-abc123") +image_data_bytes = image_data.read() ++with open("./my-image.png", "wb") as file: + file.write(image_data_bytes) +``` ++# [REST](#tab/rest) ++```console +curl https://YOUR_RESOURCE_NAME.openai.azure.com/openai/files/<YOUR-FILE-ID>/content?api-version=2024-02-15-preview \ + -H "api-key: $AZURE_OPENAI_KEY" \ + --output image.png +``` ++++## See also ++* Learn more about how to use Assistants with our [How-to guide on Assistants](../how-to/assistant.md). +* [Azure OpenAI Assistants API samples](https://github.com/Azure-Samples/azureai-samples/tree/main/scenarios/Assistants) |
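Because generated files surface as `image_file` content blocks (or `file_path` annotations), a small sketch like the following can save everything Code Interpreter produced in a thread. This helper isn't part of the article; it assumes the `client` and `thread` objects from the steps above:

```python
# List the messages in the thread and save every generated image locally.
messages = client.beta.threads.messages.list(thread_id=thread.id)

for message in messages.data:
    for block in message.content:
        if block.type == "image_file":
            file_id = block.image_file.file_id
            image_data = client.files.content(file_id)  # download via the Files API
            with open(f"./{file_id}.png", "wb") as f:
                f.write(image_data.read())
```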
ai-services | Fine Tuning Functions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/how-to/fine-tuning-functions.md | + + Title: Fine-tuning function calls with Azure OpenAI Service +description: Learn how to improve function calling performance with Azure OpenAI fine-tuning +# +++ Last updated : 02/05/2024++++++# Fine-tuning and function calling ++Models that use the chat completions API support [function calling](../how-to/function-calling.md). Unfortunately, functions defined in your chat completion calls don't always perform as expected. Fine-tuning your model with function calling examples can improve model output by enabling you to: ++* Get similarly formatted responses even when the full function definition isn't present. (Allowing you to potentially save money on prompt tokens.) +* Get more accurate and consistent outputs. ++> [!IMPORTANT] +> The `functions` and `function_call` parameters have been deprecated with the release of the [`2023-12-01-preview`](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-12-01-preview/inference.json) version of the API. However, the fine-tuning API currently requires use of the legacy parameters. ++## Constructing a training file ++When constructing a training file of function calling examples, you would take a function definition like this: ++```json +{ + "messages": [ + {"role": "user", "content": "What is the weather in San Francisco?"}, + {"role": "assistant", "function_call": {"name": "get_current_weather", "arguments": "{\"location\": \"San Francisco, USA\", \"format\": \"celsius\"}"}} + ], + "functions": [{ + "name": "get_current_weather", + "description": "Get the current weather", + "parameters": { + "type": "object", + "properties": { + "location": {"type": "string", "description": "The city and country, e.g. San Francisco, USA"}, + "format": {"type": "string", "enum": ["celsius", "fahrenheit"]} + }, + "required": ["location", "format"] + } + }] +} +``` ++And express the information as a single line within your `.jsonl` training file as below: ++```jsonl +{"messages": [{"role": "user", "content": "What is the weather in San Francisco?"}, {"role": "assistant", "function_call": {"name": "get_current_weather", "arguments": "{\"location\": \"San Francisco, USA\", \"format\": \"celsius\"}"}}], "functions": [{"name": "get_current_weather", "description": "Get the current weather", "parameters": {"type": "object", "properties": {"location": {"type": "string", "description": "The city and country, e.g. San Francisco, USA"}, "format": {"type": "string", "enum": ["celsius", "fahrenheit"]}}, "required": ["location", "format"]}}]} +``` ++As with all fine-tuning training, your example file requires at least 10 examples. ++## Optimize for cost ++If you're trying to use fewer prompt tokens after fine-tuning your model on the full function definitions, OpenAI recommends experimenting with the following (a sketch of the first option appears below): ++* Omit function and parameter descriptions: remove the description field from function and parameters. +* Omit parameters: remove the entire properties field from the parameters object. +* Omit function entirely: remove the entire function object from the functions array. ++## Optimize for quality ++Alternatively, if you're trying to improve the quality of the function calling output, it's recommended that the function definitions present in the fine-tuning training dataset and subsequent chat completion calls remain identical. 
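As a sketch of the first cost option above (this helper is an illustration, not part of the fine-tuning API), you can strip `description` fields from a full function definition before building training examples:

```python
import copy

def strip_descriptions(function_def: dict) -> dict:
    """Return a copy of a function definition without description fields,
    reducing the prompt tokens spent on each training example."""
    slim = copy.deepcopy(function_def)
    slim.pop("description", None)
    # Also drop the per-parameter descriptions nested under properties.
    for prop in slim.get("parameters", {}).get("properties", {}).values():
        prop.pop("description", None)
    return slim
```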
++## Customize model responses to function outputs ++Fine-tuning based on function calling examples can also be used to improve the model's response to function outputs. To accomplish this, you include examples consisting of function response messages and assistant response messages where the function response is interpreted and put into context by the assistant. ++```json +{ + "messages": [ + {"role": "user", "content": "What is the weather in San Francisco?"}, + {"role": "assistant", "function_call": {"name": "get_current_weather", "arguments": "{\"location\": \"San Francisco, USA\", \"format\": \"celsius\"}"}}, + {"role": "function", "name": "get_current_weather", "content": "21.0"}, + {"role": "assistant", "content": "It is 21 degrees celsius in San Francisco, CA"} + ], + "functions": [...] // same as before +} +``` ++As with the example before, this example is artificially expanded for readability. The actual entry in the `.jsonl` training file would be a single line: ++```jsonl +{"messages": [{"role": "user", "content": "What is the weather in San Francisco?"}, {"role": "assistant", "function_call": {"name": "get_current_weather", "arguments": "{\"location\": \"San Francisco, USA\", \"format\": \"celsius\"}"}}, {"role": "function", "name": "get_current_weather", "content": "21.0"}, {"role": "assistant", "content": "It is 21 degrees celsius in San Francisco, CA"}], "functions": []} +``` ++## Next steps ++- Explore the fine-tuning capabilities in the [Azure OpenAI fine-tuning tutorial](../tutorials/fine-tune.md). +- Review fine-tuning [model regional availability](../concepts/models.md#fine-tuning-models) |
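As a companion to the single-line `.jsonl` entries shown above, it's usually easier to build each training example as a Python dictionary and let `json.dumps` handle the escaping. The example list below is a sketch with placeholder content, not prescribed training data:

```python
import json

examples = [
    {
        "messages": [
            {"role": "user", "content": "What is the weather in San Francisco?"},
            # ... the assistant function_call, function result, and final answer
        ],
        "functions": [],  # include the same definitions you plan to use at inference time
    },
]

# Write one example per line, as the fine-tuning API expects.
with open("training.jsonl", "w", encoding="utf-8") as f:
    for example in examples:
        f.write(json.dumps(example) + "\n")
```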
ai-services | Fine Tuning | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/how-to/fine-tuning.md | A fine-tuned model improves on the few-shot learning approach by training the mo [!INCLUDE [REST API fine-tuning](../includes/fine-tuning-rest.md)] ::: zone-end++## Troubleshooting ++### How do I enable fine-tuning? **Create a custom model** is greyed out in Azure OpenAI Studio ++In order to successfully access fine-tuning, you need the **Cognitive Services OpenAI Contributor** role assigned. Even someone with high-level Service Administrator permissions would still need this role explicitly assigned in order to access fine-tuning. For more information, review the [role-based access control guidance](/azure/ai-services/openai/how-to/role-based-access-control#cognitive-services-openai-contributor). + +## Next steps ++- Explore the fine-tuning capabilities in the [Azure OpenAI fine-tuning tutorial](../tutorials/fine-tune.md). +- Review fine-tuning [model regional availability](../concepts/models.md#fine-tuning-models) |
ai-services | Working With Models | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/how-to/working-with-models.md | You can learn more about Azure OpenAI model versions and how they work in the [A ### Auto update to default -When **Auto-update to default** is selected your model deployment will be automatically updated within two weeks of a change in the default version. +When **Auto-update to default** is selected, your model deployment will be automatically updated within two weeks of a change in the default version. For a preview version, the deployment updates automatically when a new preview version is available, starting two weeks after the new preview version is released. If you're still in the early testing phases for inference models, we recommend deploying models with **auto-update to default** set whenever it's available. |
ai-services | Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/overview.md | Apply here for access: ## Comparing Azure OpenAI and OpenAI -Azure OpenAI Service gives customers advanced language AI with OpenAI GPT-4, GPT-3, Codex, DALL-E, and Whisper models with the security and enterprise promise of Azure. Azure OpenAI co-develops the APIs with OpenAI, ensuring compatibility and a smooth transition from one to the other. +Azure OpenAI Service gives customers advanced language AI with OpenAI GPT-4, GPT-3, Codex, DALL-E, Whisper, and text to speech models with the security and enterprise promise of Azure. Azure OpenAI co-develops the APIs with OpenAI, ensuring compatibility and a smooth transition from one to the other. With Azure OpenAI, customers get the security capabilities of Microsoft Azure while running the same models as OpenAI. Azure OpenAI offers private networking, regional availability, and responsible AI content filtering. The DALL-E models, currently in preview, generate images from text prompts that The Whisper models, currently in preview, can be used to transcribe and translate speech to text. +The text to speech models, currently in preview, can be used to synthesize text to speech. + Learn more about each model on our [models concept page](./concepts/models.md). ## Next steps |
ai-services | Quotas Limits | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/quotas-limits.md | The following sections provide you with a quick guide to the default quotas and | Max fine-tuned model deployments | 5 | | Total number of training jobs per resource | 100 | | Max simultaneous running training jobs per resource | 1 |-| Max training jobs queued | 20 | -| Max Files per resource | 30 | -| Total size of all files per resource | 1 GB | +| Max training jobs queued | 20 | +| Max Files per resource (fine-tuning) | 30 | +| Total size of all files per resource (fine-tuning) | 1 GB | | Max training job time (job will fail if exceeded) | 720 hours | | Max training job size (tokens in training file) x (# of epochs) | 2 Billion | | Max size of all files per upload (Azure OpenAI on your data) | 16 MB | The following sections provide you with a quick guide to the default quotas and | Max number of `/chat/completions` functions | 128 | | Max number of `/chat completions` tools | 128 | | Maximum number of Provisioned throughput units per deployment | 100,000 |--+| Max files per Assistant/thread | 20 | +| Max file size for Assistants | 512 MB | +| Assistants token limit | 2,000,000 tokens | ## Regional quota limits |
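When a deployment exceeds its per-minute quota, the service returns HTTP 429, which the `openai` 1.x Python client raises as `RateLimitError`. A minimal backoff sketch (an illustration, not part of the quotas article; the deployment name is a placeholder):

```python
import os
import time

from openai import AzureOpenAI, RateLimitError

client = AzureOpenAI(
    api_key=os.getenv("AZURE_OPENAI_KEY"),
    api_version="2024-02-15-preview",
    azure_endpoint=os.getenv("AZURE_OPENAI_ENDPOINT"),
)

def complete_with_backoff(messages, retries=5):
    # Retry with exponential backoff when the per-minute quota is exhausted.
    delay = 2
    for attempt in range(retries):
        try:
            return client.chat.completions.create(
                model="<REPLACE WITH MODEL DEPLOYMENT NAME>",  # placeholder
                messages=messages,
            )
        except RateLimitError:
            if attempt == retries - 1:
                raise
            time.sleep(delay)
            delay *= 2
```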
ai-services | Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/reference.md | Title: Azure OpenAI Service REST API reference -description: Learn how to use Azure OpenAI's REST API. In this article, you'll learn about authorization options, how to structure a request and receive a response. +description: Learn how to use Azure OpenAI's REST API. In this article, you learn about authorization options, how to structure a request and receive a response. # POST https://YOUR_RESOURCE_NAME.openai.azure.com/openai/deployments/YOUR_DEPLOYM ## Completions -With the Completions operation, the model will generate one or more predicted completions based on a provided prompt. The service can also return the probabilities of alternative tokens at each position. +With the Completions operation, the model generates one or more predicted completions based on a provided prompt. The service can also return the probabilities of alternative tokens at each position. **Create a completion** POST https://{your-resource-name}.openai.azure.com/openai/deployments/{deploymen | Parameter | Type | Required? | Default | Description | |--|--|--|--|--|-| ```prompt``` | string or array | Optional | ```<\|endoftext\|>``` | The prompt(s) to generate completions for, encoded as a string, or array of strings. Note that ```<\|endoftext\|>``` is the document separator that the model sees during training, so if a prompt isn't specified the model will generate as if from the beginning of a new document. | +| ```prompt``` | string or array | Optional | ```<\|endoftext\|>``` | The prompt(s) to generate completions for, encoded as a string, or array of strings. Note that ```<\|endoftext\|>``` is the document separator that the model sees during training, so if a prompt isn't specified the model generates as if from the beginning of a new document. | | ```max_tokens``` | integer | Optional | 16 | The maximum number of tokens to generate in the completion. The token count of your prompt plus max_tokens can't exceed the model's context length. Most models have a context length of 2048 tokens (except for the newest models, which support 4096). |-| ```temperature``` | number | Optional | 1 | What sampling temperature to use, between 0 and 2. Higher values means the model will take more risks. Try 0.9 for more creative applications, and 0 (`argmax sampling`) for ones with a well-defined answer. We generally recommend altering this or top_p but not both. | +| ```temperature``` | number | Optional | 1 | What sampling temperature to use, between 0 and 2. Higher values means the model takes more risks. Try 0.9 for more creative applications, and 0 (`argmax sampling`) for ones with a well-defined answer. We generally recommend altering this or top_p but not both. | | ```top_p``` | number | Optional | 1 | An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. We generally recommend altering this or temperature but not both. |-| ```logit_bias``` | map | Optional | null | Modify the likelihood of specified tokens appearing in the completion. Accepts a json object that maps tokens (specified by their token ID in the GPT tokenizer) to an associated bias value from -100 to 100. You can use this tokenizer tool (which works for both GPT-2 and GPT-3) to convert text to token IDs. 
Mathematically, the bias is added to the logits generated by the model prior to sampling. The exact effect will vary per model, but values between -1 and 1 should decrease or increase likelihood of selection; values like -100 or 100 should result in a ban or exclusive selection of the relevant token. As an example, you can pass {"50256": -100} to prevent the <\|endoftext\|> token from being generated. | +| ```logit_bias``` | map | Optional | null | Modify the likelihood of specified tokens appearing in the completion. Accepts a json object that maps tokens (specified by their token ID in the GPT tokenizer) to an associated bias value from -100 to 100. You can use this tokenizer tool (which works for both GPT-2 and GPT-3) to convert text to token IDs. Mathematically, the bias is added to the logits generated by the model prior to sampling. The exact effect varies per model, but values between -1 and 1 should decrease or increase likelihood of selection; values like -100 or 100 should result in a ban or exclusive selection of the relevant token. As an example, you can pass {"50256": -100} to prevent the <\|endoftext\|> token from being generated. | | ```user``` | string | Optional | | A unique identifier representing your end-user, which can help monitoring and detecting abuse | | ```n``` | integer | Optional | 1 | How many completions to generate for each prompt. Note: Because this parameter generates many completions, it can quickly consume your token quota. Use carefully and ensure that you have reasonable settings for max_tokens and stop. |-| ```stream``` | boolean | Optional | False | Whether to stream back partial progress. If set, tokens will be sent as data-only server-sent events as they become available, with the stream terminated by a data: [DONE] message.| +| ```stream``` | boolean | Optional | False | Whether to stream back partial progress. If set, tokens are sent as data-only server-sent events as they become available, with the stream terminated by a data: [DONE] message.| | ```logprobs``` | integer | Optional | null | Include the log probabilities on the logprobs most likely tokens, as well the chosen tokens. For example, if logprobs is 10, the API will return a list of the 10 most likely tokens. the API will always return the logprob of the sampled token, so there might be up to logprobs+1 elements in the response. This parameter cannot be used with `gpt-35-turbo`. | | ```suffix```| string | Optional | null | The suffix that comes after a completion of inserted text. | | ```echo``` | boolean | Optional | False | Echo back the prompt in addition to the completion. This parameter cannot be used with `gpt-35-turbo`. | In the example response, `finish_reason` equals `stop`. If `finish_reason` equal | ```max_tokens``` | integer | Optional | inf | The maximum number of tokens allowed for the generated answer. By default, the number of tokens the model can return will be (4096 - prompt tokens).| | ```presence_penalty``` | number | Optional | 0 | Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics.| | ```frequency_penalty``` | number | Optional | 0 | Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim.|-| ```logit_bias``` | object | Optional | null | Modify the likelihood of specified tokens appearing in the completion. 
Accepts a json object that maps tokens (specified by their token ID in the tokenizer) to an associated bias value from -100 to 100. Mathematically, the bias is added to the logits generated by the model prior to sampling. The exact effect will vary per model, but values between -1 and 1 should decrease or increase likelihood of selection; values like -100 or 100 should result in a ban or exclusive selection of the relevant token.| +| ```logit_bias``` | object | Optional | null | Modify the likelihood of specified tokens appearing in the completion. Accepts a json object that maps tokens (specified by their token ID in the tokenizer) to an associated bias value from -100 to 100. Mathematically, the bias is added to the logits generated by the model prior to sampling. The exact effect varies per model, but values between -1 and 1 should decrease or increase likelihood of selection; values like -100 or 100 should result in a ban or exclusive selection of the relevant token.| | ```user``` | string | Optional | | A unique identifier representing your end-user, which can help Azure OpenAI to monitor and detect abuse.|-|```function_call```| | Optional | | `[Deprecated in 2023-12-01-preview replacement paremeter is tools_choice]`Controls how the model responds to function calls. "none" means the model does not call a function, and responds to the end-user. "auto" means the model can pick between an end-user or calling a function. Specifying a particular function via {"name": "my_function"} forces the model to call that function. "none" is the default when no functions are present. "auto" is the default if functions are present. This parameter requires API version [`2023-07-01-preview`](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-07-01-preview/generated.json) | +|```function_call```| | Optional | | `[Deprecated in 2023-12-01-preview; replacement parameter is tool_choice]` Controls how the model responds to function calls. "none" means the model doesn't call a function, and responds to the end-user. "auto" means the model can pick between an end-user or calling a function. Specifying a particular function via {"name": "my_function"} forces the model to call that function. "none" is the default when no functions are present. "auto" is the default if functions are present. This parameter requires API version [`2023-07-01-preview`](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-07-01-preview/generated.json) | |```functions``` | [`FunctionDefinition[]`](#functiondefinition-deprecated) | Optional | | `[Deprecated in 2023-12-01-preview; replacement parameter is tools]` A list of functions the model can generate JSON inputs for. This parameter requires API version [`2023-07-01-preview`](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-07-01-preview/generated.json)|-|```tools```| string (The type of the tool. Only [`function`](#function) is supported.) | Optional | |A list of tools the model may call. Currently, only functions are supported as a tool. Use this to provide a list of functions the model may generate JSON inputs for. 
This parameter requires API version [`2023-12-01-preview`](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-12-01-preview/generated.json) | -|```tool_choice```| string or object | Optional | none is the default when no functions are present. auto is the default if functions are present. | Controls which (if any) function is called by the model. none means the model will not call a function and instead generates a message. auto means the model can pick between generating a message or calling a function. Specifying a particular function via {"type: "function", "function": {"name": "my_function"}} forces the model to call that function. This parameter requires API version [`2023-12-01-preview`](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-12-01-preview/inference.json)| +|```tools```| string (The type of the tool. Only [`function`](#function) is supported.) | Optional | |A list of tools the model can call. Currently, only functions are supported as a tool. Use this to provide a list of functions the model can generate JSON inputs for. This parameter requires API version [`2023-12-01-preview`](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-12-01-preview/generated.json) | +|```tool_choice```| string or object | Optional | none is the default when no functions are present. auto is the default if functions are present. | Controls which (if any) function is called by the model. none means the model won't call a function and instead generates a message. auto means the model can pick between generating a message or calling a function. Specifying a particular function via {"type: "function", "function": {"name": "my_function"}} forces the model to call that function. This parameter requires API version [`2023-12-01-preview`](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-12-01-preview/inference.json)| ### ChatMessage The name and arguments of a function that should be called, as generated by the | Name | Type | Description| ||||-| arguments | string | The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and might fabricate parameters not defined by your function schema. Validate the arguments in your code before calling your function. | +| arguments | string | The arguments to call the function with, as generated by the model in JSON format. The model doesn't always generate valid JSON, and might fabricate parameters not defined by your function schema. Validate the arguments in your code before calling your function. | | name | string | The name of the function to call.| ### FunctionDefinition-Deprecated The definition of a caller-specified function that chat completions can invoke i |Name | Type| Description| ||||-| description | string | A description of what the function does. The model will use this description when selecting the function and interpreting its parameters. | +| description | string | A description of what the function does. The model uses this description when selecting the function and interpreting its parameters. | | name | string | The name of the function to be called. 
| | parameters | | The parameters the functions accepts, described as a [JSON Schema](https://json-schema.org/understanding-json-schema/) object.| curl -i -X POST YOUR_RESOURCE_NAME/openai/deployments/YOUR_DEPLOYMENT_NAME/exten | `dataSources` | array | Required | | The data sources to be used for the Azure OpenAI on your data feature. | | `temperature` | number | Optional | 0 | What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or `top_p` but not both. | | `top_p` | number | Optional | 1 |An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with `top_p` probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. We generally recommend altering this or temperature but not both.|-| `stream` | boolean | Optional | false | If set, partial message deltas will be sent, like in ChatGPT. Tokens will be sent as data-only server-sent events as they become available, with the stream terminated by a message `"messages": [{"delta": {"content": "[DONE]"}, "index": 2, "end_turn": true}]` | -| `stop` | string or array | Optional | null | Up to 2 sequences where the API will stop generating further tokens. | +| `stream` | boolean | Optional | false | If set, partial message deltas are sent, like in ChatGPT. Tokens are sent as data-only server-sent events as they become available, with the stream terminated by a message `"messages": [{"delta": {"content": "[DONE]"}, "index": 2, "end_turn": true}]` | +| `stop` | string or array | Optional | null | Up to two sequences where the API will stop generating further tokens. | | `max_tokens` | integer | Optional | 1000 | The maximum number of tokens allowed for the generated answer. By default, the number of tokens the model can return is `4096 - prompt_tokens`. | The following parameters can be used inside of the `parameters` field inside of `dataSources`. The following parameters can be used inside of the `parameters` field inside of |--|--|--|--|--| | `type` | string | Required | null | The data source to be used for the Azure OpenAI on your data feature. For Azure AI Search the value is `AzureCognitiveSearch`. For Azure Cosmos DB for MongoDB vCore, the value is `AzureCosmosDB`. For Elasticsearch the value is `Elasticsearch`. For Azure Machine Learning, the value is `AzureMLIndex`. For Pinecone, the value is `Pinecone`. | | `indexName` | string | Required | null | The search index to be used. |-| `inScope` | boolean | Optional | true | If set, this value will limit responses specific to the grounding data content. | +| `inScope` | boolean | Optional | true | If set, this value limits responses specific to the grounding data content. | | `topNDocuments` | number | Optional | 5 | Specifies the number of top-scoring documents from your data index used to generate responses. You might want to increase the value when you have short documents or want to provide more context. This is the *retrieved documents* parameter in Azure OpenAI studio. | | `semanticConfiguration` | string | Optional | null | The semantic search configuration. Only required when `queryType` is set to `semantic` or `vectorSemanticHybrid`. | | `roleInformation` | string | Optional | null | Gives the model instructions about how it should behave and the context it should reference when generating a response. 
Corresponds to the "System Message" in Azure OpenAI Studio. See [Using your data](./concepts/use-your-data.md#system-message) for more information. There's a 100 token limit, which counts towards the overall token limit.| | `filter` | string | Optional | null | The filter pattern used for [restricting access to sensitive documents](./concepts/use-your-data.md#document-level-access-control) | `embeddingEndpoint` | string | Optional | null | The endpoint URL for an Ada embedding model deployment, generally of the format `https://YOUR_RESOURCE_NAME.openai.azure.com/openai/deployments/YOUR_DEPLOYMENT_NAME/embeddings?api-version=2023-05-15`. Use with the `embeddingKey` parameter for [vector search](./concepts/use-your-data.md#search-options) outside of private networks and private endpoints. | | `embeddingKey` | string | Optional | null | The API key for an Ada embedding model deployment. Use with `embeddingEndpoint` for [vector search](./concepts/use-your-data.md#search-options) outside of private networks and private endpoints. | -| `embeddingDeploymentName` | string | Optional | null | The Ada embedding model deployment name within the same Azure OpenAI resource. Used instead of `embeddingEndpoint` and `embeddingKey` for [vector search](./concepts/use-your-data.md#search-options). Should only be used when both the `embeddingEndpoint` and `embeddingKey` parameters are defined. When this parameter is provided, Azure OpenAI on your data will use an internal call to evaluate the Ada embedding model, rather than calling the Azure OpenAI endpoint. This enables you to use vector search in private networks and private endpoints. Billing remains the same whether this parameter is defined or not. Available in regions where embedding models are [available](./concepts/models.md#embeddings-models) starting in API versions `2023-06-01-preview` and later.| +| `embeddingDeploymentName` | string | Optional | null | The Ada embedding model deployment name within the same Azure OpenAI resource. Used instead of `embeddingEndpoint` and `embeddingKey` for [vector search](./concepts/use-your-data.md#search-options). Should only be used when both the `embeddingEndpoint` and `embeddingKey` parameters are defined. When this parameter is provided, Azure OpenAI on your data uses an internal call to evaluate the Ada embedding model, rather than calling the Azure OpenAI endpoint. This enables you to use vector search in private networks and private endpoints. Billing remains the same whether this parameter is defined or not. Available in regions where embedding models are [available](./concepts/models.md#embeddings-models) starting in API versions `2023-06-01-preview` and later.| | `strictness` | number | Optional | 3 | Sets the threshold to categorize documents as relevant to your queries. Raising the value means a higher threshold for relevance and filters out more less-relevant documents for responses. Setting this value too high might cause the model to fail to generate responses due to limited available documents. | The following parameters are used for Azure AI Search. |--|--|--|--|--| | `endpoint` | string | Required | null | Azure AI Search only. The data source endpoint. | | `key` | string | Required | null | Azure AI Search only. One of the Azure AI Search admin keys for your service. |-| `queryType` | string | Optional | simple | Indicates which query option will be used for Azure AI Search. Available types: `simple`, `semantic`, `vector`, `vectorSimpleHybrid`, `vectorSemanticHybrid`. 
| +| `queryType` | string | Optional | simple | Indicates which query option is used for Azure AI Search. Available types: `simple`, `semantic`, `vector`, `vectorSimpleHybrid`, `vectorSemanticHybrid`. | | `fieldsMapping` | dictionary | Optional for Azure AI Search. | null | defines which [fields](./concepts/use-your-data.md?tabs=ai-search#index-field-mapping) you want to map when you add your data source. | The following parameters are used inside of the `authentication` field, which enables you to use Azure OpenAI [without public network access](./how-to/use-your-data-securely.md). curl -i -X PUT https://YOUR_RESOURCE_NAME.openai.azure.com/openai/extensions/on- | Parameters | Type | Required? | Default | Description | |||||| | `searchServiceEndpoint` | string | Required |null | The endpoint of the search resource in which the data will be ingested.|-| `searchServiceAdminKey` | string | Optional | null | If provided, the key will be used to authenticate with the `searchServiceEndpoint`. If not provided, the system-assigned identity of the Azure OpenAI resource will be used. In this case, the system-assigned identity must have "Search Service Contributor" role assignment on the search resource. | +| `searchServiceAdminKey` | string | Optional | null | If provided, the key is used to authenticate with the `searchServiceEndpoint`. If not provided, the system-assigned identity of the Azure OpenAI resource will be used. In this case, the system-assigned identity must have "Search Service Contributor" role assignment on the search resource. | | `storageConnectionString` | string | Required | null | The connection string for the storage account where the input data is located. An account key has to be provided in the connection string. It should look something like `DefaultEndpointsProtocol=https;AccountName=<your storage account>;AccountKey=<your account key>` | | `storageContainer` | string | Required | null | The name of the container where the input data is located. | -| `embeddingEndpoint` | string | Optional | null | Not required if you use semantic or only keyword search. It is required if you use vector, hybrid, or hybrid + semantic search | -| `embeddingKey` | string | Optional | null | The key of the embedding endpoint. This is required if the embedding endpoint is not empty. | -| `url` | string | Optional | null | If URL is not null, the provided url will be crawled into the provided storage container and then ingested accordingly.| +| `embeddingEndpoint` | string | Optional | null | Not required if you use semantic or only keyword search. It's required if you use vector, hybrid, or hybrid + semantic search | +| `embeddingKey` | string | Optional | null | The key of the embedding endpoint. This is required if the embedding endpoint isn't empty. | +| `url` | string | Optional | null | If URL isn't null, the provided url is crawled into the provided storage container and then ingested accordingly.| **Body Parameters** POST https://{your-resource-name}.openai.azure.com/openai/deployments/{deploymen | Parameter | Type | Required? | Description | |--|--|--|--|-| ```your-resource-name``` | string | Required | The name of your Azure OpenAI Resource. | +| ```your-resource-name``` | string | Required | The name of your Azure OpenAI resource. | | ```deployment-id``` | string | Required | The name of your Whisper model deployment such as *MyWhisperDeployment*. You're required to first deploy a Whisper model before you can make calls. 
| | ```api-version``` | string | Required |The API version to use for this operation. This value follows the YYYY-MM-DD format. | POST https://{your-resource-name}.openai.azure.com/openai/deployments/{deploymen | Parameter | Type | Required? | Default | Description | |--|--|--|--|--|-| ```file```| file | Yes | N/A | The audio file object (not file name) to transcribe, in one of these formats: flac, mp3, mp4, mpeg, mpga, m4a, ogg, wav, or webm.<br/><br/>The file size limit for the Azure OpenAI Whisper model is 25 MB. If you need to transcribe a file larger than 25 MB, break it into chunks. Alternatively you can use the Azure AI Speech [batch transcription](../speech-service/batch-transcription-create.md#use-a-whisper-model) API.<br/><br/>You can get sample audio files from the [Azure AI Speech SDK repository at GitHub](https://github.com/Azure-Samples/cognitive-services-speech-sdk/tree/master/sampledata/audiofiles). | +| ```file```| file | Yes | N/A | The audio file object (not file name) to transcribe, in one of these formats: `flac`, `mp3`, `mp4`, `mpeg`, `mpga`, `m4a`, `ogg`, `wav`, or `webm`.<br/><br/>The file size limit for the Azure OpenAI Whisper model is 25 MB. If you need to transcribe a file larger than 25 MB, break it into chunks. Alternatively you can use the Azure AI Speech [batch transcription](../speech-service/batch-transcription-create.md#use-a-whisper-model) API.<br/><br/>You can get sample audio files from the [Azure AI Speech SDK repository at GitHub](https://github.com/Azure-Samples/cognitive-services-speech-sdk/tree/master/sampledata/audiofiles). | | ```language``` | string | No | Null | The language of the input audio such as `fr`. Supplying the input language in [ISO-639-1](https://en.wikipedia.org/wiki/List_of_ISO_639-1_codes) format improves accuracy and latency.<br/><br/>For the list of supported languages, see the [OpenAI documentation](https://platform.openai.com/docs/guides/speech-to-text/supported-languages). | | ```prompt``` | string | No | Null | An optional text to guide the model's style or continue a previous audio segment. The prompt should match the audio language.<br/><br/>For more information about prompts including example use cases, see the [OpenAI documentation](https://platform.openai.com/docs/guides/speech-to-text/supported-languages). | | ```response_format``` | string | No | json | The format of the transcript output, in one of these options: json, text, srt, verbose_json, or vtt.<br/><br/>The default value is *json*. | POST https://{your-resource-name}.openai.azure.com/openai/deployments/{deploymen | Parameter | Type | Required? | Description | |--|--|--|--|-| ```your-resource-name``` | string | Required | The name of your Azure OpenAI Resource. | +| ```your-resource-name``` | string | Required | The name of your Azure OpenAI resource. | | ```deployment-id``` | string | Required | The name of your Whisper model deployment such as *MyWhisperDeployment*. You're required to first deploy a Whisper model before you can make calls. | | ```api-version``` | string | Required |The API version to use for this operation. This value follows the YYYY-MM-DD format. | curl https://YOUR_RESOURCE_NAME.openai.azure.com/openai/deployments/YOUR_DEPLOYM } ``` +## Text to speech ++Synthesize text to speech. ++```http +POST https://{your-resource-name}.openai.azure.com/openai/deployments/{deployment-id}/audio/speech?api-version={api-version} +``` ++**Path parameters** ++| Parameter | Type | Required? 
| Description | +|--|--|--|--| +| ```your-resource-name``` | string | Required | The name of your Azure OpenAI resource. | +| ```deployment-id``` | string | Required | The name of your text to speech model deployment such as *MyTextToSpeechDeployment*. You're required to first deploy a text to speech model (such as `tts-1` or `tts-1-hd`) before you can make calls. | +| ```api-version``` | string | Required |The API version to use for this operation. This value follows the YYYY-MM-DD format. | ++**Supported versions** ++- `2024-02-15-preview` ++**Request body** ++| Parameter | Type | Required? | Default | Description | +|--|--|--|--|--| +| ```model```| string | Yes | N/A | One of the available TTS models: `tts-1` or `tts-1-hd` | +| ```input``` | string | Yes | N/A | The text to generate audio for. The maximum length is 4096 characters. Specify input text in the language of your choice.<sup>1</sup> | +| ```voice``` | string | Yes | N/A | The voice to use when generating the audio. Supported voices are `alloy`, `echo`, `fable`, `onyx`, `nova`, and `shimmer`. Previews of the voices are available in the [OpenAI text to speech guide](https://platform.openai.com/docs/guides/text-to-speech/voice-options). | ++<sup>1</sup> The text to speech models generally support the same languages as the Whisper model. For the list of supported languages, see the [OpenAI documentation](https://platform.openai.com/docs/guides/speech-to-text/supported-languages). ++### Example request ++```console +curl https://YOUR_RESOURCE_NAME.openai.azure.com/openai/deployments/YOUR_DEPLOYMENT_NAME/audio/speech?api-version=2024-02-15-preview \ + -H "api-key: $YOUR_API_KEY" \ + -H "Content-Type: application/json" \ + -d '{ + "model": "tts-1-hd", + "input": "I am excited to try text to speech.", + "voice": "alloy" +}' --output speech.mp3 +``` ++### Example response ++The speech is returned as an audio file from the previous request. + ## Management APIs Azure OpenAI is deployed as a part of the Azure AI services. All Azure AI services rely on the same set of management APIs for creation, update and delete operations. The management APIs are also used for deploying models within an OpenAI resource. Azure OpenAI is deployed as a part of the Azure AI services. All Azure AI servic ## Next steps -Learn about [ Models, and fine-tuning with the REST API](/rest/api/azureopenai/fine-tuning?view=rest-azureopenai-2023-10-01-preview). +Learn about [Models, and fine-tuning with the REST API](/rest/api/azureopenai/fine-tuning?view=rest-azureopenai-2023-10-01-preview). Learn more about the [underlying models that power Azure OpenAI](./concepts/models.md). |
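The reference rows above describe the `tools` and `tool_choice` parameters in the abstract; here's a short sketch with the Python client (the deployment name is a placeholder, and the function definition mirrors the weather example used elsewhere in these articles):

```python
import os

from openai import AzureOpenAI

client = AzureOpenAI(
    api_key=os.getenv("AZURE_OPENAI_KEY"),
    api_version="2023-12-01-preview",  # tools/tool_choice require this version or later
    azure_endpoint=os.getenv("AZURE_OPENAI_ENDPOINT"),
)

response = client.chat.completions.create(
    model="<REPLACE WITH MODEL DEPLOYMENT NAME>",  # placeholder
    messages=[{"role": "user", "content": "What is the weather in San Francisco?"}],
    tools=[
        {
            "type": "function",
            "function": {
                "name": "get_current_weather",
                "description": "Get the current weather",
                "parameters": {
                    "type": "object",
                    "properties": {
                        "location": {"type": "string"},
                        "format": {"type": "string", "enum": ["celsius", "fahrenheit"]},
                    },
                    "required": ["location", "format"],
                },
            },
        }
    ],
    tool_choice="auto",  # let the model decide whether to call the function
)

print(response.choices[0].message.tool_calls)
```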
ai-services | Text To Speech Quickstart | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/text-to-speech-quickstart.md | + + Title: 'Text to speech with Azure OpenAI Service' ++description: Use the Azure OpenAI Service for text to speech with OpenAI voices. +++ Last updated : 2/1/2024++++recommendations: false +++# Quickstart: Text to speech with the Azure OpenAI Service ++In this quickstart, you use the Azure OpenAI Service for text to speech with OpenAI voices. ++The available voices are: `alloy`, `echo`, `fable`, `onyx`, `nova`, and `shimmer`. For more information, see [Azure OpenAI Service reference documentation for text to speech](./reference.md#text-to-speech). ++## Prerequisites ++- An Azure subscription - [Create one for free](https://azure.microsoft.com/free/cognitive-services?azure-portal=true). +- Access granted to Azure OpenAI Service in the desired Azure subscription. +- An Azure OpenAI resource created in the North Central US or Sweden Central regions with the `tts-1` or `tts-1-hd` model deployed. For more information, see [Create a resource and deploy a model with Azure OpenAI](how-to/create-resource.md). ++> [!NOTE] +> Currently, you must submit an application to access Azure OpenAI Service. To apply for access, complete [this form](https://aka.ms/oai/access). ++## Set up ++### Retrieve key and endpoint ++To successfully make a call against Azure OpenAI, you need an **endpoint** and a **key**. ++|Variable name | Value | +|--|-| +| `AZURE_OPENAI_ENDPOINT` | This value can be found in the **Keys & Endpoint** section when examining your resource from the Azure portal. Alternatively, you can find the value in the **Azure OpenAI Studio** > **Playground** > **Code View**. An example endpoint is: `https://aoai-docs.openai.azure.com/`.| +| `AZURE_OPENAI_KEY` | This value can be found in the **Keys & Endpoint** section when examining your resource from the Azure portal. You can use either `KEY1` or `KEY2`.| ++Go to your resource in the Azure portal. The **Endpoint and Keys** can be found in the **Resource Management** section. Copy your endpoint and access key as you need both for authenticating your API calls. You can use either `KEY1` or `KEY2`. Always having two keys allows you to securely rotate and regenerate keys without causing a service disruption. +++Create and assign persistent environment variables for your key and endpoint. ++### Environment variables ++# [Command Line](#tab/command-line) ++```CMD +setx AZURE_OPENAI_KEY "REPLACE_WITH_YOUR_KEY_VALUE_HERE" +``` ++```CMD +setx AZURE_OPENAI_ENDPOINT "REPLACE_WITH_YOUR_ENDPOINT_HERE" +``` ++# [PowerShell](#tab/powershell) ++```powershell +[System.Environment]::SetEnvironmentVariable('AZURE_OPENAI_KEY', 'REPLACE_WITH_YOUR_KEY_VALUE_HERE', 'User') +``` ++```powershell +[System.Environment]::SetEnvironmentVariable('AZURE_OPENAI_ENDPOINT', 'REPLACE_WITH_YOUR_ENDPOINT_HERE', 'User') +``` ++# [Bash](#tab/bash) ++```Bash +echo export AZURE_OPENAI_KEY="REPLACE_WITH_YOUR_KEY_VALUE_HERE" >> /etc/environment && source /etc/environment +``` ++```Bash +echo export AZURE_OPENAI_ENDPOINT="REPLACE_WITH_YOUR_ENDPOINT_HERE" >> /etc/environment && source /etc/environment +``` ++++## Clean up resources ++If you want to clean up and remove an OpenAI resource, you can delete the resource. Before deleting the resource, you must first delete any deployed models. 
++- [Portal](../multi-service-resource.md?pivots=azportal#clean-up-resources) +- [Azure CLI](../multi-service-resource.md?pivots=azcli#clean-up-resources) ++## Next steps ++* Learn more about how to use text to speech with Azure OpenAI Service in the [Azure OpenAI Service reference documentation](./reference.md#text-to-speech). +* For more examples, check out the [Azure OpenAI Samples GitHub repository](https://aka.ms/AOAICodeSamples). |
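The excerpt above jumps from environment setup to cleanup; as a minimal sketch of the synthesis request itself, reusing the environment variables set earlier and the `2024-02-15-preview` API version from the reference documentation, the call could look like the following. The deployment name `MyTextToSpeechDeployment` is a placeholder.

```console
# Minimal sketch: synthesize speech with the key and endpoint stored above.
# MyTextToSpeechDeployment is a placeholder; use your own deployment name.
curl "$AZURE_OPENAI_ENDPOINT/openai/deployments/MyTextToSpeechDeployment/audio/speech?api-version=2024-02-15-preview" \
  -H "api-key: $AZURE_OPENAI_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "tts-1",
    "input": "Today is a wonderful day to build something people love.",
    "voice": "alloy"
  }' --output speech.mp3
```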
ai-services | Whats New | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/whats-new.md | +## February 2024 ++### Assistants API public preview ++Azure OpenAI now supports the API that powers OpenAI's GPTs. Azure OpenAI Assistants (Preview) allows you to create AI assistants tailored to your needs through custom instructions and advanced tools like code interpreter and custom functions. To learn more, see: ++- [Quickstart](./assistants-quickstart.md) +- [Concepts](./concepts/assistants.md) +- [In-depth Python how-to](./how-to/assistant.md) +- [Code Interpreter](./how-to/code-interpreter.md) +- [Function calling](./how-to/assistant-functions.md) +- [Assistants model & region availability](./concepts/models.md#assistants-preview) +- [Assistants Samples](https://github.com/Azure-Samples/azureai-samples/tree/main/scenarios/Assistants) ++### OpenAI text to speech voices public preview ++Azure OpenAI Service now supports text to speech APIs with OpenAI's voices. Get AI-generated speech from the text you provide. To learn more, see the [overview guide](../speech-service/openai-voices.md) and try the [quickstart](./text-to-speech-quickstart.md). ++> [!NOTE] +> Azure AI Speech also supports OpenAI text to speech voices. To learn more, see the [OpenAI text to speech voices via Azure OpenAI Service or via Azure AI Speech](../speech-service/openai-voices.md#openai-text-to-speech-voices-via-azure-openai-service-or-via-azure-ai-speech) guide. ++### New fine-tuning capabilities and model support ++- [Continuous fine-tuning](https://aka.ms/oai/fine-tuning-continuous) +- [Fine-tuning & function calling](./how-to/fine-tuning-functions.md) +- [`gpt-35-turbo 1106` support](./concepts/models.md#fine-tuning-models) ++### Chunk size parameter for Azure OpenAI on your data ++- You can now set the [chunk size](./concepts/use-your-data.md#ingestion-parameters) parameter when your data is ingested. Adjusting the chunk size can enhance the model's responses by setting the maximum number of tokens for any given chunk of your data in the search index. + ## December 2023 ### Azure OpenAI on your data Try out DALL-E 3 by following a [quickstart](./dall-e-quickstart.md). ### Azure OpenAI on your data -- New [custom parameters](./concepts/use-your-data.md#custom-parameters) for determining the number of retrieved documents and strictness.+- New [custom parameters](./concepts/use-your-data.md#runtime-parameters) for determining the number of retrieved documents and strictness. - The strictness setting sets the threshold to categorize documents as relevant to your queries. - The retrieved documents setting specifies the number of top-scoring documents from your data index used to generate responses. - You can see data ingestion/upload status in the Azure OpenAI Studio. |
ai-services | Whisper Quickstart | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/whisper-quickstart.md | Title: 'Speech to text with Azure OpenAI Service' description: Use the Azure OpenAI Whisper model for speech to text. -# - Last updated : 2/1/2024+ Previously updated : 09/15/2023+ recommendations: false zone_pivot_groups: openai-whisper The file size limit for the Azure OpenAI Whisper model is 25 MB. If you need to - An Azure subscription - [Create one for free](https://azure.microsoft.com/free/cognitive-services?azure-portal=true). - Access granted to Azure OpenAI Service in the desired Azure subscription.- Currently, access to this service is granted only by application. You can apply for access to Azure OpenAI Service by completing the form at [https://aka.ms/oai/access](https://aka.ms/oai/access?azure-portal=true). - An Azure OpenAI resource created in the North Central US or West Europe regions with the `whisper` model deployed. For more information, see [Create a resource and deploy a model with Azure OpenAI](how-to/create-resource.md). +> [!NOTE] +> Currently, you must submit an application to access Azure OpenAI Service. To apply for access, complete [this form](https://aka.ms/oai/access). + ## Set up ### Retrieve key and endpoint |
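For context on where this quickstart is headed, a hedged sketch of the transcription request could look like the following; the deployment name `whisper` and the `2023-09-01-preview` API version are assumptions, so substitute the values for your own resource.

```console
# Sketch: transcribe an audio file (25 MB limit) with an Azure OpenAI Whisper deployment.
# The deployment name and api-version are assumed placeholders.
curl "$AZURE_OPENAI_ENDPOINT/openai/deployments/whisper/audio/transcriptions?api-version=2023-09-01-preview" \
  -H "api-key: $AZURE_OPENAI_KEY" \
  -F file="@./my-audio.wav"
```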
ai-services | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/policy-reference.md | Title: Built-in policy definitions for Azure AI services description: Lists Azure Policy built-in policy definitions for Azure AI services. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 01/30/2024 Last updated : 02/06/2024 |
ai-services | Embedded Speech | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/embedded-speech.md | Follow these steps to install the Speech SDK for Java using Apache Maven: <dependency> <groupId>com.microsoft.cognitiveservices.speech</groupId> <artifactId>client-sdk-embedded</artifactId>- <version>1.34.1</version> + <version>1.35.0</version> </dependency> </dependencies> </project> Be sure to use the `@aar` suffix when the dependency is specified in `build.grad ``` dependencies {- implementation 'com.microsoft.cognitiveservices.speech:client-sdk-embedded:1.34.1@aar' + implementation 'com.microsoft.cognitiveservices.speech:client-sdk-embedded:1.35.0@aar' } ``` ::: zone-end |
ai-services | Language Support | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/language-support.md | More remarks for text to speech locales are included in the [voice styles and ro ### Multilingual voices -Multilingual voices can support more languages. This expansion enhances your ability to express content in various languages, to overcome language barriers and foster a more inclusive global communication environment. Use this table to understand all supported speaking languages for each multilingual neural voice. If the voice doesn't speak the language of the input text, the Speech service doesn't output synthesized audio. The table is sorted by the number of supported languages in descending order. The primary locale for each voice is the prefix in its name, such as the voice `en-US-AndrewMultilingualNeural`, its primary locale is `en-US`. +Multilingual voices can support more languages. This expansion enhances your ability to express content in various languages, to overcome language barriers and foster a more inclusive global communication environment. ++Use this table to understand all supported speaking languages for each multilingual neural voice. If the voice doesn't speak the language of the input text, the Speech service doesn't output synthesized audio. The table is sorted by the number of supported languages in descending order. The primary locale for each voice is indicated by the prefix in its name; for example, the primary locale of the voice `en-US-AndrewMultilingualNeural` is `en-US`. [!INCLUDE [Language support include](includes/language-support/multilingual-voices.md)] Use the following table to determine supported styles and roles for each neural [!INCLUDE [Language support include](includes/language-support/voice-styles-and-roles.md)] + ### Viseme This table lists all the locales supported for [Viseme](speech-synthesis-markup-structure.md#viseme-element). For more information about Viseme, see [Get facial position with viseme](how-to-speech-synthesis-viseme.md) and [Viseme element](speech-synthesis-markup-structure.md#viseme-element). With the cross-lingual feature, you can transfer your custom neural voice model # [Pronunciation assessment](#tab/pronunciation-assessment) -The table in this section summarizes the 25 locales supported for pronunciation assessment, and each language is available on all [Speech to text regions](regions.md#speech-service). Latest update extends support from English to 24 more languages and quality enhancements to existing features, including accuracy, fluency and miscue assessment. You should specify the language that you're learning or practicing improving pronunciation. The default language is set as `en-US`. If you know your target learning language, [set the locale](how-to-pronunciation-assessment.md#get-pronunciation-assessment-results) accordingly. For example, if you're learning British English, you should specify the language as `en-GB`. If you're teaching a broader language, such as Spanish, and are uncertain about which locale to select, you can run various accent models (`es-ES`, `es-MX`) to determine the one that achieves the highest score to suit your specific scenario. +The table in this section summarizes the 26 locales supported for pronunciation assessment, and each language is available on all [Speech to text regions](regions.md#speech-service). 
The latest update extends support from English to 25 more languages and brings quality enhancements to existing features, including accuracy, fluency, and miscue assessment. You should specify the language that you're learning or practicing to improve your pronunciation. The default language is set as `en-US`. If you know your target learning language, [set the locale](how-to-pronunciation-assessment.md#get-pronunciation-assessment-results) accordingly. For example, if you're learning British English, you should specify the language as `en-GB`. If you're teaching a broader language, such as Spanish, and are uncertain about which locale to select, you can run various accent models (`es-ES`, `es-MX`) to determine the one that achieves the highest score to suit your specific scenario. [!INCLUDE [Language support include](includes/language-support/pronunciation-assessment.md)] |
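To make the locale guidance concrete, here's a hedged sketch of a pronunciation assessment call against the speech to text REST API. The `Pronunciation-Assessment` header format (base64-encoded JSON parameters), the region, and the parameter names are assumptions to verify against the pronunciation assessment how-to guide.

```console
# Sketch: assess pronunciation against the en-GB locale via the speech to text REST API.
# The Pronunciation-Assessment header carries base64-encoded JSON parameters (assumed format).
PRON_PARAMS=$(echo -n '{"ReferenceText":"Good morning.","GradingSystem":"HundredMark","Granularity":"Phoneme","EnableMiscue":true}' | base64 -w0)
curl "https://eastus.stt.speech.microsoft.com/speech/recognition/conversation/cognitiveservices/v1?language=en-GB" \
  -H "Ocp-Apim-Subscription-Key: $SPEECH_KEY" \
  -H "Pronunciation-Assessment: $PRON_PARAMS" \
  -H "Content-Type: audio/wav" \
  --data-binary "@./good-morning.wav"
```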
ai-services | Openai Voices | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/openai-voices.md | + + Title: What are OpenAI text to speech voices? ++description: Learn about OpenAI text to speech voices that you can use with speech synthesis. +++++ Last updated : 2/1/2024+++#customer intent: As a user who implements text to speech, I want to understand the options and differences between available OpenAI text to speech voices in Azure AI services. +++# What are OpenAI text to speech voices? ++Like Azure AI Speech voices, OpenAI text to speech voices deliver high-quality speech synthesis to convert written text into natural-sounding spoken audio. This unlocks a wide range of possibilities for immersive and interactive user experiences. ++OpenAI text to speech voices are available via two model variants: `Neural` and `NeuralHD`. ++- `Neural`: Optimized for real-time use cases with the lowest latency, but lower quality than `NeuralHD`. +- `NeuralHD`: Optimized for quality. ++## Available text to speech voices in Azure AI services ++You might ask: If I want to use an OpenAI text to speech voice, should I use it via the Azure OpenAI Service or via Azure AI Speech? What are the scenarios that guide me to use one or the other? ++Each voice model offers distinct features and capabilities, allowing you to choose the one that best suits your specific needs. This article helps you understand the options and differences between the available text to speech voices in Azure AI services. ++You can choose from the following text to speech voices in Azure AI services: ++- OpenAI text to speech voices in [Azure OpenAI Service](../openai/reference.md#text-to-speech). Available in the following regions: North Central US and Sweden Central. +- OpenAI text to speech voices in [Azure AI Speech](./language-support.md?tabs=tts#multilingual-voices). Available in the following regions: North Central US and Sweden Central. +- Azure AI Speech service [text to speech voices](./language-support.md?tabs=tts#prebuilt-neural-voices). Available in dozens of regions. See the [region list](regions.md#speech-service). ++## OpenAI text to speech voices via Azure OpenAI Service or via Azure AI Speech? ++If you want to use OpenAI text to speech voices, you can choose whether to use them via [Azure OpenAI](../openai/text-to-speech-quickstart.md) or via [Azure AI Speech](./get-started-text-to-speech.md#openai-text-to-speech-voices-in-azure-ai-speech). In either case, the speech synthesis result is the same. ++Here's a comparison of features between OpenAI text to speech voices in Azure OpenAI Service and OpenAI text to speech voices in Azure AI Speech. ++| Feature | Azure OpenAI Service (OpenAI voices) | Azure AI Speech (OpenAI voices) | Azure AI Speech voices | +|--|--|--|--| +| **Region** | North Central US, Sweden Central | North Central US, Sweden Central | Available in dozens of regions. See the [region list](regions.md#speech-service).| +| **Voice variety** | 6 | 6 | More than 400 | +| **Multilingual voice number** | 6 | 6 | 14 | +| **Max multilingual language coverage** | 57 | 57 | 77 | +| **Speech Synthesis Markup Language (SSML) support** | Not supported | Support for [a subset of SSML elements](#ssml-elements-supported-by-openai-text-to-speech-voices-in-azure-ai-speech). | Support for the [full set of SSML](speech-synthesis-markup-structure.md) in Azure AI Speech. 
| +| **Development options** | REST API | Speech SDK, Speech CLI, REST API | Speech SDK, Speech CLI, REST API | +| **Deployment option** | Cloud only | Cloud only | Cloud, embedded, hybrid, and containers. | +| **Real-time or batch synthesis** | Real-time | Real-time and batch synthesis | Real-time and batch synthesis | +| **Latency** | greater than 500 ms | greater than 500 ms | less than 300 ms | +| **Sample rate of synthesized audio** | 24 kHz | 8, 16, 24, and 48 kHz | 8, 16, 24, and 48 kHz | +| **Speech output audio format** | opus, mp3, aac, flac | opus, mp3, pcm, truesilk | opus, mp3, pcm, truesilk | ++## SSML elements supported by OpenAI text to speech voices in Azure AI Speech ++The [Speech Synthesis Markup Language (SSML)](./speech-synthesis-markup.md) with input text determines the structure, content, and other characteristics of the text to speech output. For example, you can use SSML to define a paragraph, a sentence, a break or a pause, or silence. You can wrap text with event tags such as bookmark or viseme that can be processed later by your application. ++The following table outlines the SSML elements supported by OpenAI text to speech voices in Azure AI Speech. Only a subset of SSML tags is supported for OpenAI voices. See [SSML document structure and events](speech-synthesis-markup-structure.md) for more information. ++| SSML element name | Description | +| | | +| `<speak>` | Encloses the entire content to be spoken. It's the root element of an SSML document. | +| `<voice>` | Specifies a voice used for text to speech output. | +| `<sub>` | Indicates that the alias attribute's text value should be pronounced instead of the element's enclosed text. | +| `<say-as>` | Indicates the content type, such as number or date, of the element's text.<br/><br/>All of the `interpret-as` property values are supported for this element except `interpret-as="name"`. For example, `<say-as interpret-as="date" format="dmy">10-12-2016</say-as>` is supported, but `<say-as interpret-as="name">ED</say-as>` isn't supported. For more information, see [pronunciation with SSML](./speech-synthesis-markup-pronunciation.md#say-as-element). | +| `<s>` | Denotes sentences. | +| `<lang>` | Indicates the default locale for the language that you want the neural voice to speak. | +| `<break>` | Use to override the default behavior of breaks or pauses between words. | ++## Next steps ++- [Try the text to speech quickstart in Azure AI Speech](get-started-text-to-speech.md#openai-text-to-speech-voices-in-azure-ai-speech) +- [Try text to speech via Azure OpenAI Service](../openai/text-to-speech-quickstart.md) |
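As a hedged sketch of that SSML subset in practice, the following request calls the Azure AI Speech synthesis REST endpoint with `<s>`, `<say-as>`, and `<break>`. The North Central US region comes from the table above, but the exact output format string and the voice name `en-US-AlloyMultilingualNeural` are assumptions; check the voice gallery for the exact identifiers.

```console
# Sketch: synthesize with an OpenAI voice in Azure AI Speech using the supported SSML subset.
# The voice name and output format value are assumptions; verify them before use.
curl "https://northcentralus.tts.speech.microsoft.com/cognitiveservices/v1" \
  -H "Ocp-Apim-Subscription-Key: $SPEECH_KEY" \
  -H "Content-Type: application/ssml+xml" \
  -H "X-Microsoft-OutputFormat: audio-24khz-96kbitrate-mono-mp3" \
  -d '<speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis" xml:lang="en-US">
        <voice name="en-US-AlloyMultilingualNeural">
          <s>Your package ships on <say-as interpret-as="date" format="dmy">10-12-2016</say-as>.</s>
          <break time="500ms" />
          <s>Thanks for your order.</s>
        </voice>
      </speak>' \
  --output openai-voice.mp3
```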
ai-services | Releasenotes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/releasenotes.md | -> [!IMPORTANT] -> You'll be charged for custom speech model training if the base model was created on October 1, 2023 and later. You are not charged for training if the base model was created prior to October 2023. For more information, see [Azure AI Speech pricing](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/) and the [Charge for adaptation section in the speech to text 3.2 migration guide](./migrate-v3-1-to-v3-2.md#charge-for-adaptation). - ## Recent highlights * Azure AI Speech now supports OpenAI's Whisper model via the batch transcription API. To learn more, check out the [Create a batch transcription](./batch-transcription-create.md#use-a-whisper-model) guide. |
ai-services | Speech Synthesis Markup Voice | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/speech-synthesis-markup-voice.md | The following SSML example uses the `<mstts:ttsembedding>` element with a voice ## Adjust speaking languages -By default, multilingual voices can autodetect the language of the input text and speak in the language in primary locale of the input text without using SSML. However, you can still use the `<lang xml:lang>` element to adjust the speaking language for these voices to set preferred accent with non-primary locales such as British accent (`en-GB`) for English. You can adjust the speaking language at both the sentence level and word level. For information about the supported languages for multilingual voice, see [Multilingual voices with the lang element](#multilingual-voices-with-the-lang-element) for a table showing the `<lang>` syntax and attribute definitions. +By default, multilingual voices can autodetect the language of the input text and speak in the language of the default locale of the input text without using SSML. Optionally, you can use the `<lang xml:lang>` element to adjust the speaking language for these voices to set the preferred accent, such as `en-GB` for British English. You can adjust the speaking language at both the sentence level and word level. For information about the supported languages for multilingual voices, see [Multilingual voices with the lang element](#multilingual-voices-with-the-lang-element) for a table showing the `<lang>` syntax and attribute definitions. The following table describes the usage of the `<lang xml:lang>` element's attributes: The following table describes the usage of the `<lang xml:lang>` element's attri Use the [multilingual voices section](language-support.md?tabs=tts#multilingual-voices) to determine which speaking languages the Speech service supports for each neural voice, as demonstrated in the following example table. 
-| Voice | Supported language number | Supported languages | Supported locales | +| Voice | Supported language number | Supported languages | Auto-detected default locale for each language | |-||--|-|-|`en-US-AndrewMultilingualNeural`<sup>1,2</sup> (Male)<br/>`en-US-AvaMultilingualNeural`<sup>1,2</sup> (Female)<br/>`en-US-BrianMultilingualNeural`<sup>1,2</sup> (Male)<br/>`en-US-EmmaMultilingualNeural`<sup>1,2</sup> (Female)| 76 | Afrikaans, Albanian, Amharic, Arabic, Armenian, Azerbaijani, Bahasa Indonesian, Bangla, Basque, Bengali, Bosnian, Bulgarian, Burmese, Catalan, Chinese Cantonese, Chinese Mandarin, Croatian, Czech, Danish, Dutch, English, Estonian, Filipino, Finnish, French, Galician, Georgian, German, Greek, Hebrew, Hindi, Hungarian, Icelandic, Irish, Italian, Japanese, Javanese, Kannada, Kazakh, Khmer, Korean, Lao, Latvian, Lithuanian, Macedonian, Malay, Malayalam, Maltese, Mongolian, Nepali, Norwegian Bokmål, Pashto, Persian, Polish, Portuguese, Romanian, Russian, Serbian, Sinhala, Slovak, Slovene, Somali, Spanish, Sundanese, Swahili, Swedish,Tamil, Telugu, Thai, Turkish, Ukrainian, Urdu, Uzbek, Vietnamese, Welsh, Zulu |`af-ZA`, `am-ET`, `ar-SA`, `az-AZ`, `bg-BG`, `bn-BD`, `bn-IN`, `bs-BA`, `ca-ES`, `cs-CZ`, `cy-GB`, `da-DK`, `de-DE`, `el-GR`, `en-US`, `es-ES`, `et-EE`, `eu-ES`, `fa-IR`, `fi-FI`, `fil-PH`, `fr-FR`, `ga-IE`, `gl-ES`, `he-IL`, `hi-IN`, `hr-HR`, `hu-HU`, `hy-AM`, `id-ID`, `is-IS`, `it-IT`, `ja-JP`, `jv-ID`, `ka-GE`, `kk-KZ`, `km-KH`, `kn-IN`, `ko-KR`, `lo-LA`, `lt-LT`, `lv-LV`, `mk-MK`, `ml-IN`, `mn-MN`, `ms-MY`, `mt-MT`, `my-MM`, `nb-NO`, `ne-NP`, `nl-NL`, `pl-PL`, `ps-AF`, `pt-BR`, `ro-RO`, `ru-RU`, `si-LK`, `sk-SK`, `sl-SI`, `so-SO`, `sq-AL`, `sr-RS`, `su-ID`, `sv-SE`, `sw-KE`, `ta-IN`, `te-IN`, `th-TH`, `tr-TR`, `uk-UA`, `ur-PK`, `uz-UZ`, `vi-VN`, `zh-CN`, `zh-HK`, `zu-ZA`.| +|`en-US-AndrewMultilingualNeural`<sup>1,2</sup> (Male)<br/>`en-US-AvaMultilingualNeural`<sup>1,2</sup> (Female)<br/>`en-US-BrianMultilingualNeural`<sup>1,2</sup> (Male)<br/>`en-US-EmmaMultilingualNeural`<sup>1,2</sup> (Female)| 77 | Afrikaans, Albanian, Amharic, Arabic, Armenian, Azerbaijani, Bahasa Indonesian, Bangla, Basque, Bengali, Bosnian, Bulgarian, Burmese, Catalan, Chinese Cantonese, Chinese Mandarin, Chinese Taiwanese, Croatian, Czech, Danish, Dutch, English, Estonian, Filipino, Finnish, French, Galician, Georgian, German, Greek, Hebrew, Hindi, Hungarian, Icelandic, Irish, Italian, Japanese, Javanese, Kannada, Kazakh, Khmer, Korean, Lao, Latvian, Lithuanian, Macedonian, Malay, Malayalam, Maltese, Mongolian, Nepali, Norwegian Bokmål, Pashto, Persian, Polish, Portuguese, Romanian, Russian, Serbian, Sinhala, Slovak, Slovene, Somali, Spanish, Sundanese, Swahili, Swedish,Tamil, Telugu, Thai, Turkish, Ukrainian, Urdu, Uzbek, Vietnamese, Welsh, Zulu |`af-ZA`, `am-ET`, `ar-EG`, `az-AZ`, `bg-BG`, `bn-BD`, `bn-IN`, `bs-BA`, `ca-ES`, `cs-CZ`, `cy-GB`, `da-DK`, `de-DE`, `el-GR`, `en-US`, `es-ES`, `et-EE`, `eu-ES`, `fa-IR`, `fi-FI`, `fil-PH`, `fr-FR`, `ga-IE`, `gl-ES`, `he-IL`, `hi-IN`, `hr-HR`, `hu-HU`, `hy-AM`, `id-ID`, `is-IS`, `it-IT`, `ja-JP`, `jv-ID`, `ka-GE`, `kk-KZ`, `km-KH`, `kn-IN`, `ko-KR`, `lo-LA`, `lt-LT`, `lv-LV`, `mk-MK`, `ml-IN`, `mn-MN`, `ms-MY`, `mt-MT`, `my-MM`, `nb-NO`, `ne-NP`, `nl-NL`, `pl-PL`, `ps-AF`, `pt-BR`, `ro-RO`, `ru-RU`, `si-LK`, `sk-SK`, `sl-SI`, `so-SO`, `sq-AL`, `sr-RS`, `su-ID`, `sv-SE`, `sw-KE`, `ta-IN`, `te-IN`, `th-TH`, `tr-TR`, `uk-UA`, `ur-PK`, `uz-UZ`, `vi-VN`, `zh-CN`, `zh-HK`, `zh-TW`, `zu-ZA`.| <sup>1</sup> The neural voice is available in public 
preview. Voices and styles in public preview are only available in three service [regions](regions.md): East US, West Europe, and Southeast Asia. -<sup>2</sup> Those are TTS multilingual voices in Azure AI Speech. By default, all multilingual voices (except `en-US-JennyMultilingualNeural`) can speak in the language in primary locale of the input text without [using SSML](speech-synthesis-markup-voice.md#adjust-speaking-languages). However, you can still use the `<lang xml:lang>` element to adjust the speaking language for these voices to set preferred accent with non-primary locales such as British accent (`en-GB`) for English. The primary locale for each voice is indicated by the prefix in its name, such as the voice `en-US-AndrewMultilingualNeural`, its primary locale is `en-US`. +<sup>2</sup> Those are neural multilingual voices in Azure AI Speech. All multilingual voices (except `en-US-JennyMultilingualNeural`) can speak in the language of the default locale of the input text without [using SSML](#adjust-speaking-languages). However, you can still use the `<lang xml:lang>` element to adjust the speaking accent of each language, such as a British accent (`en-GB`) for English. The primary locale for each voice is indicated by the prefix in its name; for example, the primary locale of the voice `en-US-AndrewMultilingualNeural` is `en-US`. Check the [full list](https://speech.microsoft.com/portal/voicegallery) of supported locales through SSML. > [!NOTE] > Multilingual voices don't fully support certain SSML elements, such as `break`, `emphasis`, `silence`, and `sub`. |
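To ground the `<lang xml:lang>` adjustment described above, here's a minimal hedged sketch against the Speech synthesis REST endpoint; the East US region matches the preview note above, while the output format header value is an assumption.

```console
# Sketch: ask a multilingual voice for a British English accent with <lang xml:lang>.
# Multilingual voices don't fully support elements like <break> or <sub>, so only <lang> is used here.
curl "https://eastus.tts.speech.microsoft.com/cognitiveservices/v1" \
  -H "Ocp-Apim-Subscription-Key: $SPEECH_KEY" \
  -H "Content-Type: application/ssml+xml" \
  -H "X-Microsoft-OutputFormat: audio-24khz-96kbitrate-mono-mp3" \
  -d '<speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis" xml:lang="en-US">
        <voice name="en-US-AvaMultilingualNeural">
          <lang xml:lang="en-GB">I fancy a cup of tea and a proper biscuit.</lang>
        </voice>
      </speak>' \
  --output accent.mp3
```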
ai-studio | Ai Resources | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/concepts/ai-resources.md | Title: Azure AI resource concepts + Title: Azure AI hub resource concepts -description: This article introduces concepts about Azure AI resources. +description: This article introduces concepts about Azure AI hub resources. - ignite-2023 Previously updated : 12/14/2023 Last updated : 2/5/2024 -# Azure AI resources +# Azure AI hub resources [!INCLUDE [Azure AI Studio preview](../includes/preview-ai-studio.md)] -The 'Azure AI' resource is the top-level Azure resource for AI Studio and provides the working environment for a team to build and manage AI applications. In Azure, resources enable access to Azure services for individuals and teams. Resources also provide a container for billing, security configuration and monitoring. +The Azure AI hub resource is the top-level Azure resource for AI Studio and provides the working environment for a team to build and manage AI applications. In Azure, resources enable access to Azure services for individuals and teams. Resources also provide a container for billing, security configuration, and monitoring. -The Azure AI resource is used to access multiple Azure AI services with a single setup. Previously, different Azure AI services including [Azure OpenAI](../../ai-services/openai/overview.md), [Azure Machine Learning](../../machine-learning/overview-what-is-azure-machine-learning.md), [Azure Speech](../../ai-services/speech-service/overview.md), required their individual setup. +The Azure AI hub resource can be used to access [multiple Azure AI services](#azure-ai-services-api-access-keys) with a single setup. Previously, different Azure AI services, including [Azure OpenAI](../../ai-services/openai/overview.md), [Azure Machine Learning](../../machine-learning/overview-what-is-azure-machine-learning.md), and [Azure AI Speech](../../ai-services/speech-service/overview.md), each required their own setup. -In this article, you learn more about Azure AI resource's capabilities, and how to set up Azure AI for your organization. You can see the resources created in the [Azure portal](https://portal.azure.com/) and in [Azure AI Studio](https://ai.azure.com). +In this article, you learn more about the Azure AI hub resource's capabilities and how to set up Azure AI for your organization. You can see the resources created in the [Azure portal](https://portal.azure.com/) and in [Azure AI Studio](https://ai.azure.com). ## Collaboration environment for a team -The AI resource provides the collaboration environment for a team to build and manage AI applications, catering to two personas: +The Azure AI hub resource provides the collaboration environment for a team to build and manage AI applications, catering to two personas: -* To AI developers, the Azure AI resource provides the working environment for building AI applications granting access to various tools for AI model building. Tools can be used together, and lets you use and produce shareable components including datasets, indexes, models. An AI resource allows you to configure connections to external resources, provide compute resources used by tools and [endpoints and access keys to prebuilt AI models](#azure-ai-services-api-access-keys). When you use a project to customize AI capabilities, it's hosted by an AI resource and can access the same shared resources. 
-* To IT administrators, team leads and risk officers, the Azure AI resource provides a single pane of glass on projects created by a team, audit connections that are in use to external resources, and other governance controls to help meet cost and compliance requirements. Security settings are configured on the Azure AI resource, and once set up apply to all projects created under it, allowing administrators to enable developers to self-serve create projects to organize work. +* To AI developers, the Azure AI hub resource provides the working environment for building AI applications, granting access to various tools for AI model building. Tools can be used together and let you use and produce shareable components, including datasets, indexes, and models. An Azure AI hub resource allows you to configure connections to external resources, provide compute resources used by tools, and [endpoints and access keys to prebuilt AI models](#azure-ai-services-api-access-keys). When you use a project to customize AI capabilities, it's hosted by an Azure AI hub resource and can access the same shared resources. +* To IT administrators, team leads, and risk officers, the Azure AI hub resource provides a single pane of glass on projects created by a team, the ability to audit connections that are in use to external resources, and other governance controls to help meet cost and compliance requirements. Security settings are configured on the Azure AI hub resource and, once set up, apply to all projects created under it, allowing administrators to let developers create projects on a self-service basis to organize work. ## Central setup and management concepts -Various management concepts are available on AI resource to support team leads and admins to centrally manage a team's environment. In [Azure AI studio](https://ai.azure.com/), you find these on the **Manage** page. +Various management concepts are available on Azure AI hub resources to help team leads and admins centrally manage a team's environment. -* **Security configuration** including public network access, [virtual networking](#virtual-networking), customer-managed key encryption, and privileged access to whom can create projects for customization. Security settings configured on the AI resource automatically pass down to each project. A managed virtual network is shared between all projects that share the same AI resource +* **Security configuration** including public network access, [virtual networking](#virtual-networking), customer-managed key encryption, and privileged access controlling who can create projects for customization. Security settings configured on the Azure AI hub resource automatically pass down to each project. A managed virtual network is shared between all projects that share the same Azure AI hub resource. * **Connections** are named and authenticated references to Azure and non-Azure resources like data storage providers. Use a connection as a means for making an external resource available to a group of developers without having to expose its stored credential to an individual.-* **Compute and quota allocation** is managed as shared capacity for all projects in AI studio that share the same Azure AI resource. This includes compute instance as managed cloud-based workstation for an individual. Compute instance can be used across projects by the same user. -* **AI services access keys** to endpoints for prebuilt AI models are managed on the AI resource scope. 
Use these endpoints to access foundation models from Azure OpenAI, Speech, Vision, and Content Safety with one [API key](#azure-ai-services-api-access-keys) -* **Policy** enforced in Azure on the Azure AI resource scope applies to all projects managed under it. -* **Dependent Azure resources** are set up once per AI resource and associated projects and used to store artifacts you generate while working in AI studio such as logs or when uploading data. See [Azure AI dependencies](#azure-ai-dependencies) for more details. +* **Compute and quota allocation** is managed as shared capacity for all projects in AI Studio that share the same Azure AI hub resource. This includes a compute instance as a managed, cloud-based workstation for an individual. A compute instance can be used across projects by the same user. +* **AI services access keys** to endpoints for prebuilt AI models are managed on the Azure AI hub resource scope. Use these endpoints to access foundation models from Azure OpenAI, Speech, Vision, and Content Safety with one [API key](#azure-ai-services-api-access-keys). +* **Policy** enforced in Azure on the Azure AI hub resource scope applies to all projects managed under it. +* **Dependent Azure resources** are set up once per Azure AI hub resource and associated projects and are used to store artifacts that you generate while working in AI Studio, such as logs or uploaded data. See [Azure AI dependencies](#azure-ai-dependencies) for more details. ## Organize work in projects for customization -An Azure AI resource provides the hosting environment for **projects** in AI studio. A project is an organizational container that has tools for AI customization and orchestration, lets you organize your work, save state across different tools like prompt flow, and collaborate with others. For example, you can share uploaded files and connections to data sources. +An Azure AI hub resource provides the hosting environment for [Azure AI projects](../how-to/create-projects.md) in AI Studio. A project is an organizational container that has tools for AI customization and orchestration, lets you organize your work, save state across different tools like prompt flow, and collaborate with others. For example, you can share uploaded files and connections to data sources. -Multiple projects can use an Azure AI resource, and a project can be used by multiple users. A project also helps you keep track of billing, and manage access and provides data isolation. Every project has dedicated storage containers to let you upload files and share it with only other project members when using the 'data' experiences. +Multiple projects can use an Azure AI hub resource, and a project can be used by multiple users. A project also helps you keep track of billing and manage access, and it provides data isolation. Every project has dedicated storage containers that let you upload files and share them with only other project members when using the 'data' experiences. -Projects let you create and group reusable components that can be used across tools in AI studio: +Projects let you create and group reusable components that can be used across tools in AI Studio: | Asset | Description | | | | Projects also have specific settings that only hold for that project: | Asset | Description | | | |-| Project connections | Connections to external resources like data storage providers that only you and other project members can use. 
They complement shared connections on the AI resource accessible to all projects.| +| Project connections | Connections to external resources like data storage providers that only you and other project members can use. They complement shared connections on the Azure AI hub resource accessible to all projects.| | Prompt flow runtime | Prompt flow is a feature that can be used to generate, customize, or run a flow. To use prompt flow, you need to create a runtime on top of a compute instance. | > [!NOTE]-> In AI Studio you can also manage language and notification settings that apply to all Azure AI Studio projects that you can access regardless of the Azure AI resource or project. +> In AI Studio you can also manage language and notification settings that apply to all Azure AI Studio projects that you can access, regardless of the Azure AI hub resource or project. ## Azure AI services API access keys -The Azure AI Resource exposes API endpoints and keys for prebuilt AI services that are created by Microsoft such as Speech services and Language service. Which precise services are available to you is subject to your Azure region and your chosen Azure AI services provider at the time of setup ('advanced' option): +The Azure AI hub resource exposes API endpoints and keys for prebuilt AI services that are created by Microsoft, such as Azure OpenAI Service. The precise services available to you depend on your Azure region and your chosen Azure AI services provider at the time of setup ('advanced' option): -* If you create an Azure AI resource using the default configuration, you'll have by default capabilities enabled for Azure OpenAI service, Speech, Vision, Content Safety. -* If you create an Azure AI resource and choose an existing Azure OpenAI resource as service provider, you'll only have capabilities for Azure OpenAI service. Use this option if you'd like to reuse existing Azure OpenAI quota and models deployments. Currently, there's no upgrade path to get Speech and Vision capabilities after deployment. +* If you create an Azure AI hub resource together with an existing Azure OpenAI Service resource, you only have capabilities for Azure OpenAI Service. Use this option if you'd like to reuse existing Azure OpenAI quota and model deployments. Currently, there's no upgrade path to get Speech and Vision capabilities after the AI hub is created. +* If you create an Azure AI hub resource together with an Azure AI services provider, you can use Azure OpenAI Service and other AI services such as Speech and Vision. Currently, this option is only available via the Azure AI CLI and SDK. -To understand the full layering of Azure AI resources and its Azure dependencies including the Azure AI services provider, and how these is represented in Azure AI Studio and in the Azure portal, see [Find Azure AI Studio resources in the Azure portal](#find-azure-ai-studio-resources-in-the-azure-portal). --> [!NOTE] -> This Azure AI services resource is similar but not to be confused with the standalone "Azure AI services multi-service account" resource. Their capabilities vary, and the standalone resource is not supported in Azure AI Studio. Going forward, we recommend using the Azure AI services resource that's provided with your Azure AI resource. 
+To understand the full layering of Azure AI hub resources and their Azure dependencies, including the Azure AI services provider, and how these are represented in Azure AI Studio and in the Azure portal, see [Find Azure AI Studio resources in the Azure portal](#find-azure-ai-studio-resources-in-the-azure-portal). With the same API key, you can access all of the following Azure AI With the same API key, you can access all of the following Azure AI | ![Speech icon](../../ai-services/media/service-icons/speech.svg) [Speech](../../ai-services/speech-service/index.yml) | Speech to text, text to speech, translation and speaker recognition | | ![Vision icon](../../ai-services/media/service-icons/vision.svg) [Vision](../../ai-services/computer-vision/index.yml) | Analyze content in images and videos | -Large language models that can be used to generate text, speech, images, and more, are hosted by the AI resource. Fine-tuned models and open models deployed from the [model catalog](../how-to/model-catalog.md) are always created in the project context for isolation. +Large language models that can be used to generate text, speech, images, and more, are hosted by the Azure AI hub resource. Fine-tuned models and open models deployed from the [model catalog](../how-to/model-catalog.md) are always created in the project context for isolation. ### Virtual networking -Azure AI resources, compute resources, and projects share the same Microsoft-managed Azure virtual network. After you configure the managed networking settings during the Azure AI resource creation process, all new projects created using that Azure AI resource will inherit the same virtual network settings. Therefore, any changes to the networking settings are applied to all current and new project in that Azure AI resource. By default, Azure AI resources provide public network access. +Azure AI hub resources, compute resources, and projects share the same Microsoft-managed Azure virtual network. After you configure the managed networking settings during the Azure AI hub resource creation process, all new projects created using that Azure AI hub resource will inherit the same virtual network settings. Therefore, any changes to the networking settings are applied to all current and new projects in that Azure AI hub resource. By default, Azure AI hub resources provide public network access. -To establish a private inbound connection to your Azure AI resource environment, create an Azure Private Link endpoint on the following scopes: -* The Azure AI resource +To establish a private inbound connection to your Azure AI hub resource environment, create an Azure Private Link endpoint on the following scopes: +* The Azure AI hub resource * The dependent `Azure AI services` providing resource * Any other [Azure AI dependency](#azure-ai-dependencies) such as Azure storage -While projects show up as their own tracking resources in the Azure portal, they don't require their own private link endpoints to be accessed. New projects that are created post AI resource setup, do automatically get added to the network-isolated environment. +While projects show up as their own tracking resources in the Azure portal, they don't require their own private link endpoints to be accessed. New projects created after the Azure AI hub resource setup are automatically added to the network-isolated environment. 
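As a hedged sketch of the private endpoint step for the hub scope, the Azure CLI command below uses the Microsoft.MachineLearningServices workspace resource type from the resource-provider table later in this article; the names, network details, and the `amlworkspace` group ID are assumptions to verify for your environment.

```console
# Sketch: create a private endpoint targeting an Azure AI hub resource (a workspace of kind "hub").
# All names and the group-id value are illustrative assumptions.
az network private-endpoint create \
  --name my-hub-pe \
  --resource-group my-resource-group \
  --vnet-name my-vnet \
  --subnet my-subnet \
  --private-connection-resource-id "/subscriptions/<sub-id>/resourceGroups/my-resource-group/providers/Microsoft.MachineLearningServices/workspaces/my-ai-hub" \
  --group-id amlworkspace \
  --connection-name my-hub-pe-connection
```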
## Connections to Azure and third-party resources Azure AI offers a set of connectors that allows you to connect to different types of data sources and other Azure tools. You can take advantage of connectors to connect with data such as indices in Azure AI Search to augment your flows. -Connections can be set up as shared with all projects in the same Azure AI resource, or created exclusively for one project. To manage project connections via Azure AI Studio, navigate to a project page, then navigate to **Settings** > **Connections**. To manage shared connections, navigate to the **Manage** page. As an administrator, you can audit both shared and project-scoped connections on an Azure AI resource level to have a single pane of glass of connectivity across projects. +Connections can be set up as shared with all projects in the same Azure AI hub resource, or created exclusively for one project. To manage project connections via Azure AI Studio, navigate to a project page, then navigate to **Settings** > **Connections**. To manage shared connections, navigate to the **Manage** page. As an administrator, you can audit both shared and project-scoped connections on an Azure AI hub resource level to have a single pane of glass of connectivity across projects. ## Azure AI dependencies -Azure AI studio layers on top of existing Azure services including Azure AI and Azure Machine Learning services. While this might not be visible on the display names in Azure portal, AI studio, or when using the SDK or CLI, some of these architectural details become apparent when you work with the Azure REST APIs, use Azure cost reporting, or use infrastructure-as-code templates such as Azure Bicep or Azure Resource Manager. From an Azure Resource Provider perspective, Azure AI studio resource types map to the following resource provider kinds: +Azure AI Studio layers on top of existing Azure services, including Azure AI and Azure Machine Learning services. While this might not be visible in the display names in the Azure portal, AI Studio, or when using the SDK or CLI, some of these architectural details become apparent when you work with the Azure REST APIs, use Azure cost reporting, or use infrastructure-as-code templates such as Azure Bicep or Azure Resource Manager. From an Azure Resource Provider perspective, Azure AI Studio resource types map to the following resource provider kinds: |Resource type|Resource provider|Kind| ||||-|Azure AI resources|Microsoft.MachineLearningServices/workspace|hub| +|Azure AI hub resources|Microsoft.MachineLearningServices/workspace|hub| |Azure AI project|Microsoft.MachineLearningServices/workspace|project| |Azure AI services|Microsoft.CognitiveServices/account|AIServices| |Azure OpenAI Service|Microsoft.CognitiveServices/account|OpenAI| -When you create a new Azure AI resource, a set of dependent Azure resources are required to store data that you upload or get generated when working in AI studio. If not provided by you, these resources are automatically created. +When you create a new Azure AI hub resource, a set of dependent Azure resources is required to store data that you upload or that is generated while working in AI Studio. If not provided by you, these resources are automatically created. |Dependent Azure resource|Note| |||-|Azure AI services|Either Azure AI services multi-service provider, or Azure OpenAI service. Provides API endpoints and keys for prebuilt AI services.| +|Azure AI services|Either Azure AI services multi-service provider, or Azure OpenAI Service. 
Provides API endpoints and keys for prebuilt AI services.| |Azure Storage account|Stores artifacts for your projects like flows and evaluations. For data isolation, storage containers are prefixed using the project GUID, and conditionally secured using Azure ABAC for the project identity.| |Azure Key Vault| Stores secrets like connection strings for your resource connections. For data isolation, secrets can't be retrieved across projects via APIs.| |Azure Container Registry| Stores docker images created when using custom runtime for prompt flow. For data isolation, docker images are prefixed using the project GUID.| When you create a new Azure AI resource, a set of dependent Azure resources are Azure AI costs accrue across [various Azure resources](#central-setup-and-management-concepts). -In general, an Azure AI resource and project don't have a fixed monthly cost, and you're only charged for usage in terms of compute hours and tokens used. Azure Key Vault, Storage, and Application Insights charge transaction and volume-based, dependent on the amount of data stored with your Azure AI projects. +In general, an Azure AI hub resource and project don't have a fixed monthly cost, and you're only charged for usage in terms of compute hours and tokens used. Azure Key Vault, Storage, and Application Insights charge based on transactions and volume, depending on the amount of data stored with your Azure AI projects. -If you require to group costs of these different services together, we recommend creating Azure AI resources in one or more dedicated resource groups and subscriptions in your Azure environment. +If you need to group the costs of these different services together, we recommend creating Azure AI hub resources in one or more dedicated resource groups and subscriptions in your Azure environment. You can use [cost management](/azure/cost-management-billing/costs/quick-acm-cost-analysis) and [Azure resource tags](/azure/azure-resource-manager/management/tag-resources) to help with a detailed resource-level cost breakdown, or run the [Azure pricing calculator](https://azure.microsoft.com/pricing/calculator/) on the resources listed above to obtain a pricing estimate. For more information, see [Plan and manage costs for Azure AI services](../how-to/costs-plan-manage.md). You can use [cost management](/azure/cost-management-billing/costs/quick-acm-cos In the Azure portal, you can find resources that correspond to your Azure AI project in Azure AI Studio. > [!NOTE]-> This section assumes that the Azure AI resource and Azure AI project are in the same resource group. --In Azure AI Studio, go to **Build** > **Settings** to view your Azure AI project resources such as connections and API keys. There's a link to view the corresponding resources in the Azure portal and a link to your Azure AI resource in Azure AI Studio. ---In Azure AI Studio, go to **Manage** (or select the Azure AI resource link from the project settings page) to view your Azure AI resource, including projects and shared connections. There's also a link to view the corresponding resources in the Azure portal. ---After you select **View in the Azure Portal**, you see your Azure AI resource in the Azure portal. -+> This section assumes that the Azure AI hub resource and Azure AI project are in the same resource group. -Select the resource group name to see all associated resources, including the Azure AI project, storage account, and key vault. +1. 
In [Azure AI Studio](https://ai.azure.com), go to **Build** > **Settings** to view your Azure AI project resources such as connections and API keys. There's a link to your Azure AI hub resource in Azure AI Studio and links to view the corresponding project resources in the [Azure portal](https://portal.azure.com). + :::image type="content" source="../media/concepts/azureai-project-view-ai-studio.png" alt-text="Screenshot of the Azure AI project and related resources in the Azure AI Studio." lightbox="../media/concepts/azureai-project-view-ai-studio.png"::: -From the resource group, you can select the Azure AI project for more details. +1. Select the AI hub name to view your Azure AI hub's projects and shared connections. There's also a link to view the corresponding resources in the [Azure portal](https://portal.azure.com). + :::image type="content" source="../media/concepts/azureai-resource-view-ai-studio.png" alt-text="Screenshot of the Azure AI hub resource and related resources in the Azure AI Studio." lightbox="../media/concepts/azureai-resource-view-ai-studio.png"::: -Also from the resource group, you can select the **Azure AI service** resource to see the keys and endpoints needed to authenticate your requests to Azure AI services. +1. Select **View in the Azure Portal** to see your Azure AI hub resource in the Azure portal. + :::image type="content" source="../media/concepts/ai-hub-azure-portal.png" alt-text="Screenshot of the Azure AI hub resource in the Azure portal." lightbox="../media/concepts/ai-hub-azure-portal.png"::: -You can use the same API key to access all of the supported service endpoints that are listed. + - Select the **AI Services provider** to see the keys and endpoints needed to authenticate your requests to Azure AI services such as Azure OpenAI. For more information, see [Azure AI services API access keys](#azure-ai-services-api-access-keys). + - Also from the Azure AI hub page, you can select the **Project resource group** to find your Azure AI project. ## Next steps -- [Quickstart: Generate product name ideas in the Azure AI Studio playground](../quickstarts/playground-completions.md)+- [Quickstart: Analyze images and video with GPT-4 for Vision in the playground](../quickstarts/multimodal-vision.md) - [Learn more about Azure AI Studio](../what-is-ai-studio.md) - [Learn more about Azure AI Studio projects](../how-to/create-projects.md) |
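To complement the portal walkthrough above, a hedged CLI sketch using the resource-provider kinds from the earlier table can list hub and project resources; note that the ARM resource type uses the plural `workspaces`, and the JMESPath filter below is illustrative.

```console
# Sketch: list Azure AI hub and project resources by their workspace kind.
az resource list \
  --resource-type "Microsoft.MachineLearningServices/workspaces" \
  --query "[?kind=='hub' || kind=='project'].{name:name, kind:kind, resourceGroup:resourceGroup}" \
  --output table
```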
ai-studio | Connections | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/concepts/connections.md | -Connections in Azure AI Studio are a way to authenticate and consume both Microsoft and third-party resources within your Azure AI projects. For example, connections can be used for prompt flow, training data, and deployments. [Connections can be created](../how-to/connections-add.md) exclusively for one project or shared with all projects in the same Azure AI resource. +Connections in Azure AI Studio are a way to authenticate and consume both Microsoft and third-party resources within your Azure AI projects. For example, connections can be used for prompt flow, training data, and deployments. [Connections can be created](../how-to/connections-add.md) exclusively for one project or shared with all projects in the same Azure AI hub resource. ## Connections to Azure AI services -You can create connections to Azure AI services such as Azure AI Content Safety and Azure OpenAI. You can then use the connection in a prompt flow tool such as the LLM tool. +You can create connections to Azure AI services such as Azure OpenAI and Azure AI Content Safety. You can then use the connection in a prompt flow tool such as the LLM tool. :::image type="content" source="../media/prompt-flow/llm-tool-connection.png" alt-text="Screenshot of a connection used by the LLM tool in prompt flow." lightbox="../media/prompt-flow/llm-tool-connection.png"::: A Uniform Resource Identifier (URI) represents a storage location on your local ## Key vaults and secrets -Connections allow you to securely store credentials, authenticate access, and consume data and information. Secrets associated with connections are securely persisted in the corresponding Azure Key Vault, adhering to robust security and compliance standards. As an administrator, you can audit both shared and project-scoped connections on an Azure AI resource level (link to connection rbac). +Connections allow you to securely store credentials, authenticate access, and consume data and information. Secrets associated with connections are securely persisted in the corresponding Azure Key Vault, adhering to robust security and compliance standards. As an administrator, you can audit both shared and project-scoped connections on an Azure AI hub resource level. -Azure connections serve as key vault proxies, and interactions with connections are direct interactions with an Azure key vault. Azure AI Studio connections store API keys securely, as secrets, in a key vault. The key vault [Azure role-based access control (Azure RBAC)](./rbac-ai-studio.md) controls access to these connection resources. 
A connection references the credentials from the key vault storage location for further use. You won't need to directly deal with the credentials after they are stored in the Azure AI hub resource's key vault. You have the option to store the credentials in the YAML file. A CLI command or SDK can override them. We recommend that you avoid credential storage in a YAML file, because a security breach could lead to a credential leak. ## Next steps |
ai-studio | Deployments Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/concepts/deployments-overview.md | Azure AI Studio simplifies deployments. A simple select or a line of code deploy ### Azure OpenAI models -Azure OpenAI allows you to get access to the latest OpenAI models with the enterprise features from Azure. Learn more about [how to deploy OpenAI models in AI studio](../how-to/deploy-models-openai.md). +Azure OpenAI allows you to get access to the latest OpenAI models with the enterprise features from Azure. Learn more about [how to deploy OpenAI models in AI Studio](../how-to/deploy-models-openai.md). ### Open models |
ai-studio | Evaluation Improvement Strategies | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/concepts/evaluation-improvement-strategies.md | Mitigating harms presented by large language models (LLMs) such as the Azure Ope :::image type="content" source="../media/evaluations/mitigation-layers.png" alt-text="Diagram of strategy to mitigate potential harms of generative AI applications." lightbox="../media/evaluations/mitigation-layers.png"::: ## Model layer-At the model level, it's important to understand the models you use and what fine-tuning steps might have been taken by the model developers to align the model towards its intended uses and to reduce the risk of potentially harmful uses and outcomes. Azure AI studio's model catalog enables you to explore models from Azure OpenAI Service, Meta, etc., organized by collection and task. In the [model catalog](../how-to/model-catalog.md), you can explore model cards to understand model capabilities and limitations, experiment with sample inferences, and assess model performance. You can further compare multiple models side-by-side through benchmarks to select the best one for your use case. Then, you can enhance model performance by fine-tuning with your training data. +At the model level, it's important to understand the models you use and what fine-tuning steps might have been taken by the model developers to align the model towards its intended uses and to reduce the risk of potentially harmful uses and outcomes. Azure AI Studio's model catalog enables you to explore models from Azure OpenAI Service, Meta, etc., organized by collection and task. In the [model catalog](../how-to/model-catalog.md), you can explore model cards to understand model capabilities and limitations, experiment with sample inferences, and assess model performance. You can further compare multiple models side-by-side through benchmarks to select the best one for your use case. Then, you can enhance model performance by fine-tuning with your training data. ## Safety systems layer For most applications, it's not enough to rely on the safety fine-tuning built into the model itself. LLMs can make mistakes and are susceptible to attacks like jailbreaks. In many applications at Microsoft, we use another AI-based safety system, [Azure AI Content Safety](https://azure.microsoft.com/products/ai-services/ai-content-safety/), to provide an independent layer of protection, helping you to block the output of harmful content. |
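As a hedged sketch of that safety-system layer in practice, the Azure AI Content Safety text analysis REST API can screen model output before it reaches users; the `2023-10-01` API version and the resource name below are assumptions to verify against the Content Safety reference.

```console
# Sketch: screen generated text with Azure AI Content Safety before returning it to users.
# The resource name and api-version are assumed placeholders.
curl "https://my-content-safety-resource.cognitiveservices.azure.com/contentsafety/text:analyze?api-version=2023-10-01" \
  -H "Ocp-Apim-Subscription-Key: $CONTENT_SAFETY_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "text": "Model output to screen goes here.",
    "categories": ["Hate", "SelfHarm", "Sexual", "Violence"]
  }'
```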
ai-studio | Rbac Ai Studio | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/concepts/rbac-ai-studio.md | -In this article, you learn how to manage access (authorization) to an Azure AI resource. Azure Role-based access control is used to manage access to Azure resources, such as the ability to create new resources or use existing ones. Users in your Microsoft Entra ID are assigned specific roles, which grant access to resources. Azure provides both built-in roles and the ability to create custom roles. +In this article, you learn how to manage access (authorization) to an Azure AI hub resource. Azure Role-based access control is used to manage access to Azure resources, such as the ability to create new resources or use existing ones. Users in your Microsoft Entra ID are assigned specific roles, which grant access to resources. Azure provides both built-in roles and the ability to create custom roles. > [!WARNING] > Applying some roles might limit UI functionality in Azure AI Studio for other users. For example, if a user's role does not have the ability to create a compute instance, the option to create a compute instance will not be available in studio. This behavior is expected, and prevents the user from attempting operations that would return an access denied error. -## Azure AI resource vs Azure AI project -In the Azure AI Studio, there are two levels of access: the Azure AI resource and the Azure AI project. The resource is home to the infrastructure (including virtual network setup, customer-managed keys, managed identities, and policies) as well as where you configure your Azure AI services. Azure AI resource access can allow you to modify the infrastructure, create new Azure AI resources, and create projects. Azure AI projects are a subset of the Azure AI resource that act as workspaces that allow you to build and deploy AI systems. Within a project you can develop flows, deploy models, and manage project assets. Project access lets you develop AI end-to-end while taking advantage of the infrastructure setup on the Azure AI resource. +## Azure AI hub resource vs Azure AI project +In the Azure AI Studio, there are two levels of access: the Azure AI hub resource and the Azure AI project. The resource is home to the infrastructure (including virtual network setup, customer-managed keys, managed identities, and policies) as well as where you configure your Azure AI services. Azure AI hub resource access can allow you to modify the infrastructure, create new Azure AI hub resources, and create projects. Azure AI projects are a subset of the Azure AI hub resource that act as workspaces that allow you to build and deploy AI systems. Within a project you can develop flows, deploy models, and manage project assets. Project access lets you develop AI end-to-end while taking advantage of the infrastructure setup on the Azure AI hub resource. -## Default roles for the Azure AI resource +## Default roles for the Azure AI hub resource -The Azure AI Studio has built-in roles that are available by default. In addition to the Reader, Contributor, and Owner roles, the Azure AI Studio has a new role called Azure AI Developer. This role can be assigned to enable users to create connections, compute, and projects, but not let them create new Azure AI resources or change permissions of the existing Azure AI resource. +The Azure AI Studio has built-in roles that are available by default. 
In addition to the Reader, Contributor, and Owner roles, the Azure AI Studio has a new role called Azure AI Developer. This role can be assigned to enable users to create connections, compute, and projects, but not let them create new Azure AI hub resources or change permissions of the existing Azure AI hub resource. -Here's a table of the built-in roles and their permissions for the Azure AI resource: +Here's a table of the built-in roles and their permissions for the Azure AI hub resource: | Role | Description | | | |-| Owner | Full access to the Azure AI resource, including the ability to manage and create new Azure AI resources and assign permissions. This role is automatically assigned to the Azure AI resource creator| -| Contributor | User has full access to the Azure AI resource, including the ability to create new Azure AI resources, but isn't able to manage Azure AI resource permissions on the existing resource. | -| Azure AI Developer | Perform all actions except create new Azure AI resources and manage the Azure AI resource permissions. For example, users can create projects, compute, and connections. Users can assign permissions within their project. Users can interact with existing AI resources such as Azure OpenAI, Azure AI Search, and Azure AI services. | -| Reader | Read only access to the Azure AI resource. This role is automatically assigned to all project members within the Azure AI resource. | +| Owner | Full access to the Azure AI hub resource, including the ability to manage and create new Azure AI hub resources and assign permissions. This role is automatically assigned to the Azure AI hub resource creator| +| Contributor | User has full access to the Azure AI hub resource, including the ability to create new Azure AI hub resources, but isn't able to manage Azure AI hub resource permissions on the existing resource. | +| Azure AI Developer | Perform all actions except create new Azure AI hub resources and manage the Azure AI hub resource permissions. For example, users can create projects, compute, and connections. Users can assign permissions within their project. Users can interact with existing Azure AI resources such as Azure OpenAI, Azure AI Search, and Azure AI services. | +| Reader | Read only access to the Azure AI hub resource. This role is automatically assigned to all project members within the Azure AI hub resource. | -The key difference between Contributor and Azure AI Developer is the ability to make new Azure AI resources. If you don't want users to make new Azure AI resources (due to quota, cost, or just managing how many Azure AI resources you have), assign the AI Developer role. +The key difference between Contributor and Azure AI Developer is the ability to make new Azure AI hub resources. If you don't want users to make new Azure AI hub resources (due to quota, cost, or just managing how many Azure AI hub resources you have), assign the AI Developer role. -Only the Owner and Contributor roles allow you to make an Azure AI resource. At this time, custom roles won't grant you permission to make Azure AI resources. +Only the Owner and Contributor roles allow you to make an Azure AI hub resource. At this time, custom roles won't grant you permission to make Azure AI hub resources. 
The full set of permissions for the new "Azure AI Developer" role are as follows: Here's a table of the built-in roles and their permissions for the Azure AI proj | Azure AI Developer | User can perform most actions, including create deployments, but can't assign permissions to project users. | | Reader | Read only access to the Azure AI project. | -When a user gets access to a project, two more roles are automatically assigned to the project user. The first role is Reader on the Azure AI resource. The second role is the Inference Deployment Operator role, which allows the user to create deployments on the resource group that the project is in. This role is composed of these two permissions: ```"Microsoft.Authorization/*/read"``` and ```"Microsoft.Resources/deployments/*"```. +When a user gets access to a project, two more roles are automatically assigned to the project user. The first role is Reader on the Azure AI hub resource. The second role is the Inference Deployment Operator role, which allows the user to create deployments on the resource group that the project is in. This role is composed of these two permissions: ```"Microsoft.Authorization/*/read"``` and ```"Microsoft.Resources/deployments/*"```. In order to complete end-to-end AI development and deployment, users only need these two autoassigned roles and either the Contributor or Azure AI Developer role on a *project*. Below is an example of how to set up role-based access control for your Azure AI | Persona | Role | Purpose | | | | |-| IT admin | Owner of the Azure AI resource | The IT admin can ensure the Azure AI resource is set up to their enterprise standards and assign managers the Contributor role on the resource if they want to enable managers to make new Azure AI resources or they can assign managers the Azure AI Developer role on the resource to not allow for new Azure AI resource creation. | -| Managers | Contributor or Azure AI Developer on the Azure AI resource | Managers can create projects for their team and create shared resources (ex: compute and connections) for their group at the Azure AI resource level. | +| IT admin | Owner of the Azure AI hub resource | The IT admin can ensure the Azure AI hub resource is set up to their enterprise standards and assign managers the Contributor role on the resource if they want to enable managers to make new Azure AI hub resources or they can assign managers the Azure AI Developer role on the resource to not allow for new Azure AI hub resource creation. | +| Managers | Contributor or Azure AI Developer on the Azure AI hub resource | Managers can create projects for their team and create shared resources (ex: compute and connections) for their group at the Azure AI hub resource level. | | Managers | Owner of the Azure AI Project | When managers create a project, they become the project owner. This allows them to add their team/developers to the project. Their team/developers can be added as Contributors or Azure AI Developers to allow them to develop in the project. | | Team members/developers | Contributor or Azure AI Developer on the Azure AI Project | Developers can build and deploy AI models within a project and create assets that enable development such as computes and connections. | -## Access to resources created outside of the Azure AI resource +## Access to resources created outside of the Azure AI hub resource -When you create an Azure AI resource, the built-in role-based access control permissions grant you access to use the resource. 
However, if you wish to use resources outside of what was created on your behalf, you need to ensure both: +When you create an Azure AI hub resource, the built-in role-based access control permissions grant you access to use the resource. However, if you wish to use resources outside of what was created on your behalf, you need to ensure both: - The resource you're trying to use has permissions set up to allow you to access it.-- Your Azure AI resource is allowed to access it. +- Your Azure AI hub resource is allowed to access it. -For example, if you're trying to consume a new Blob storage, you need to ensure that Azure AI resource's managed identity is added to the Blob Storage Reader role for the Blob. If you're trying to use a new Azure AI Search source, you might need to add the Azure AI resource to the Azure AI Search's role assignments. +For example, if you're trying to consume a new Blob storage, you need to ensure that Azure AI hub resource's managed identity is added to the Blob Storage Reader role for the Blob. If you're trying to use a new Azure AI Search source, you might need to add the Azure AI hub resource to the Azure AI Search's role assignments. ## Manage access with roles -If you're an owner of an Azure AI resource, you can add and remove roles for the Studio. Within the Azure AI Studio, go to **Manage** and select your Azure AI resource. Then select **Permissions** to add and remove users for the Azure AI resource. You can also manage permissions from the Azure portal under **Access Control (IAM)** or through the Azure CLI. For example, use the [Azure CLI](/cli/azure/) to assign the Azure AI Developer role to "joe@contoso.com" for resource group "this-rg" with the following command: +If you're an owner of an Azure AI hub resource, you can add and remove roles for the Studio. Within the Azure AI Studio, go to **Manage** and select your Azure AI hub resource. Then select **Permissions** to add and remove users for the Azure AI hub resource. You can also manage permissions from the Azure portal under **Access Control (IAM)** or through the Azure CLI. For example, use the [Azure CLI](/cli/azure/) to assign the Azure AI Developer role to "joe@contoso.com" for resource group "this-rg" with the following command: ```azurecli-interactive az role assignment create --role "Azure AI Developer" --assignee "joe@contoso.com" --resource-group this-rg az role assignment create --role "Azure AI Developer" --assignee "joe@contoso.co ## Create custom roles > [!NOTE]-> In order to make a new Azure AI resource, you need the Owner or Contributor role. At this time, a custom role, even with all actions allowed, will not enable you to make an Azure AI resource. +> In order to make a new Azure AI hub resource, you need the Owner or Contributor role. At this time, a custom role, even with all actions allowed, will not enable you to make an Azure AI hub resource. If the built-in roles are insufficient, you can create custom roles. Custom roles might have read, write, delete, and compute resource permissions in that AI Studio. You can make the role available at a specific project level, a specific resource group level, or a specific subscription level. If the built-in roles are insufficient, you can create custom roles. 
Custom role ## Next steps -- [How to create an Azure AI resource](../how-to/create-azure-ai-resource.md)+- [How to create an Azure AI hub resource](../how-to/create-azure-ai-resource.md) - [How to create an Azure AI project](../how-to/create-projects.md) - [How to create a connection in Azure AI Studio](../how-to/connections-add.md) |
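To make the custom-roles discussion in this entry concrete, here's a hedged sketch that defines, as a custom role, exactly the two permissions the article quotes for the Inference Deployment Operator role. The role name, description, and subscription scope are placeholders; per the note above, even an all-actions custom role can't create an Azure AI hub resource:

```azurecli-interactive
# Author the role definition, then register it with Azure RBAC.
cat > inference-deployment-operator.json <<'EOF'
{
  "Name": "Custom Inference Deployment Operator",
  "Description": "Create deployments in the resource group that hosts a project.",
  "Actions": [
    "Microsoft.Authorization/*/read",
    "Microsoft.Resources/deployments/*"
  ],
  "NotActions": [],
  "AssignableScopes": [ "/subscriptions/<subscription-id>" ]
}
EOF
az role definition create --role-definition @inference-deployment-operator.json
```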
ai-studio | Cli Install | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/cli-install.md | You can run the Azure AI CLI in a Docker container using VS Code Dev Containers: ## Try the Azure AI CLI The AI CLI offers many capabilities, including an interactive chat experience, tools to work with prompt flows and search and speech services, and tools to manage AI services. -If you plan to use the AI CLI as part of your development, we recommend you start by running `ai init`, which guides you through setting up your AI resources and connections in your development environment. +If you plan to use the AI CLI as part of your development, we recommend you start by running `ai init`, which guides you through setting up your Azure resources and connections in your development environment. Try `ai help` to learn more about these capabilities. ### ai init -The `ai init` command allows interactive and non-interactive selection or creation of Azure AI resources. When an AI resource is selected or created, the associated resource keys and region are retrieved and automatically stored in the local AI configuration datastore. +The `ai init` command allows interactive and non-interactive selection or creation of Azure AI hub resources. When an Azure AI hub resource is selected or created, the associated resource keys and region are retrieved and automatically stored in the local AI configuration datastore. You can initialize the Azure AI CLI by running the following command: The following table describes the scenarios for each flow. | Scenario | Description | | | | | Initialize a new AI project | Choose if you don't have an existing AI project that you have been working with in the Azure AI Studio. The `ai init` command walks you through creating or attaching resources. |-| Initialize an existing AI project | Choose if you have an existing AI project you want to work with. The `ai init` command checks your existing linked resources, and ask you to set anything that hasn't been set before. | +| Initialize an existing AI project | Choose if you have an existing AI project you want to work with. The `ai init` command checks your existing linked resources, and asks you to set anything that hasn't been set before. | | Initialize standalone resources| Choose if you're building a simple solution connected to a single AI service, or if you want to attach more resources to your development environment | -Working with an AI project is recommended when using the Azure AI Studio and/or connecting to multiple AI services. Projects come with an AI Resource that houses related projects and shareable resources like compute and connections to services. Projects also allow you to connect code to cloud resources (storage and model deployments), save evaluation results, and host code behind online endpoints. You're prompted to create and/or attach Azure AI Services to your project. +Working with an AI project is recommended when using the Azure AI Studio and/or connecting to multiple AI services. Projects come with an Azure AI hub resource that houses related projects and shareable resources like compute and connections to services. Projects also allow you to connect code to cloud resources (storage and model deployments), save evaluation results, and host code behind online endpoints. You're prompted to create and/or attach Azure AI Services to your project. Initializing standalone resources is recommended when building simple solutions connected to a single AI service.
You can also choose to initialize more standalone resources after initializing a project. The following resources can be initialized standalone, or attached to projects: -- Azure AI - Azure AI - Azure OpenAI: Provides access to OpenAI's powerful language models. - Azure AI Search: Provides keyword, vector, and hybrid search capabilities. - Azure AI Speech: Provides speech recognition, synthesis, and translation. The following resources can be initialized standalone, or attached to projects: 1. Run `ai init` and choose **Initialize new AI project**. 1. Select your subscription. You might be prompted to sign in through an interactive flow.-1. Select your Azure AI Resource, or create a new one. An AI Resource can have multiple projects that can share resources. +1. Select your Azure AI hub resource, or create a new one. An Azure AI hub resource can have multiple projects that can share resources. 1. Select the name of your new project. There are some suggested names, or you can enter a custom one. Once you submit, the project might take a minute to create. 1. Select the resources you want to attach to the project. You can skip resource types you don't want to attach. 1. `ai init` checks you have the connections you need for the attached resources, and your development environment is configured with your new project. The following resources can be initialized standalone, or attached to projects: ## Project connections -When working the Azure AI CLI, you want to use your project's connections. Connections are established to attached resources and allow you to integrate services with your project. You can have project-specific connections, or connections shared at the Azure AI resource level. For more information, see [Azure AI resources](../concepts/ai-resources.md) and [connections](../concepts/connections.md). +When working with the Azure AI CLI, you want to use your project's connections. Connections are established to attached resources and allow you to integrate services with your project. You can have project-specific connections, or connections shared at the Azure AI hub resource level. For more information, see [Azure AI hub resources](../concepts/ai-resources.md) and [connections](../concepts/connections.md). When you run `ai init`, your project connections get set in your development environment, allowing seamless integration with AI services. You can view these connections by running `ai service connection list`, and further manage these connections with `ai service connection` subcommands. ai dev new .env `ai service` helps you manage your connections to resources and services. -- `ai service resource` lets you list, create or delete AI Resources.-- `ai service project` lets you list, create, or delete AI Projects.+- `ai service resource` lets you list, create, or delete Azure AI hub resources. +- `ai service project` lets you list, create, or delete Azure AI projects. - `ai service connection` lets you list, create, or delete connections. These are the connections to your attached services. ## ai flow |
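Putting the commands from this entry together, a minimal first-run session might look like the following; every command shown is named above, and the interactive prompts of `ai init` drive the actual selections:

```azurecli-interactive
ai init                      # select or create an Azure AI hub resource and project
ai service connection list   # confirm the project connections that were set
ai dev new .env              # write connection settings to a local .env file
ai help                      # explore the CLI's other capabilities
```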
ai-studio | Commitment Tier | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/commitment-tier.md | -Azure AI offers commitment tier pricing, each offering a discounted rate compared to the pay-as-you-go pricing model. With commitment tier pricing, you can commit to using the Azure AI resources and features for a fixed fee, enabling you to have a predictable total cost based on the needs of your workload. +Azure AI offers commitment tier pricing, with each tier offering a discounted rate compared to the pay-as-you-go pricing model. With commitment tier pricing, you can commit to using the Azure AI hub resources and features for a fixed fee, enabling you to have a predictable total cost based on the needs of your workload. ## Purchase a commitment plan by updating your Azure resource |
ai-studio | Configure Private Link | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/configure-private-link.md | You get several Azure AI default resources in your resource group. You need to c - Disable the public network access flag of Azure AI default resources such as Storage, Key Vault, and Container Registry. Azure AI services and Azure AI Search should be public. - Establish a private endpoint connection to each Azure AI default resource. Note that you need both blob and file private endpoints (PE) for the default storage account.-- [Managed identity configurations](#managed-identity-configuration) to allow Azure AI resources access your storage account if it's private.+- [Managed identity configurations](#managed-identity-configuration) to allow Azure AI hub resources to access your storage account if it's private. ## Prerequisites You get several Azure AI default resources in your resource group. You need to c ## Create an Azure AI that uses a private endpoint -Use one of the following methods to create an Azure AI resource with a private endpoint. Each of these methods __requires an existing virtual network__: +Use one of the following methods to create an Azure AI hub resource with a private endpoint. Each of these methods __requires an existing virtual network__: # [Azure CLI](#tab/cli) -Create your Azure AI resource with the Azure AI CLI. Run the following command and follow the prompts. For more information, see [Get started with Azure AI CLI](cli-install.md). +Create your Azure AI hub resource with the Azure AI CLI. Run the following command and follow the prompts. For more information, see [Get started with Azure AI CLI](cli-install.md). ```azurecli-interactive ai init See [this documentation](../../machine-learning/how-to-custom-dns.md#find-the-ip - [Create a project](create-projects.md) - [Learn more about Azure AI Studio](../what-is-ai-studio.md)-- [Learn more about Azure AI resources](../concepts/ai-resources.md)+- [Learn more about Azure AI hub resources](../concepts/ai-resources.md) - [Troubleshoot secure connectivity to a project](troubleshoot-secure-connection-project.md) |
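The first bullet in this entry says to disable the public network access flag on the default Storage, Key Vault, and Container Registry resources. A hedged sketch with placeholder resource names; the flag spelling differs slightly per service, so verify against your installed CLI version:

```azurecli-interactive
# Turn off public network access on the hub's default resources.
az storage account update -g my-rg -n mydefaultstorage --public-network-access Disabled
az keyvault update -g my-rg -n my-default-kv --public-network-access Disabled
az acr update -g my-rg -n mydefaultregistry --public-network-enabled false
```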
ai-studio | Connections Add | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/connections-add.md | -Connections are a way to authenticate and consume both Microsoft and third-party resources within your Azure AI projects. For example, connections can be used for prompt flow, training data, and deployments. [Connections can be created](../how-to/connections-add.md) exclusively for one project or shared with all projects in the same Azure AI resource. +Connections are a way to authenticate and consume both Microsoft and third-party resources within your Azure AI projects. For example, connections can be used for prompt flow, training data, and deployments. [Connections can be created](../how-to/connections-add.md) exclusively for one project or shared with all projects in the same Azure AI hub resource. ## Connection types Here's a table of the available connection types in Azure AI Studio with descrip ### Connection details -When you [create a new connection](#create-a-new-connection), you enter the following information for the service connection type you selected. You can create a connection that's only available for the current project or available for all projects associated with the Azure AI resource. +When you [create a new connection](#create-a-new-connection), you enter the following information for the service connection type you selected. You can create a connection that's only available for the current project or available for all projects associated with the Azure AI hub resource. > [!NOTE]-> When you create a connection from the **Manage** page, the connection is always created at the Azure AI resource level and shared accross all associated projects. +> When you create a connection from the **Manage** page, the connection is always created at the Azure AI hub resource level and shared across all associated projects. # [Azure AI Search](#tab/azure-ai-search) |
ai-studio | Costs Plan Manage | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/costs-plan-manage.md | As you add new resources to your project, return to this calculator and add the ### Costs that typically accrue with Azure AI and Azure AI Studio -When you create resources for an Azure AI resource, resources for other Azure services are also created. They are: +When you create resources for an Azure AI hub resource, resources for other Azure services are also created. They are: | Service pricing page | Description with example use cases | | | | -| [Azure AI services](https://azure.microsoft.com/pricing/details/cognitive-services/) | You pay to use services such as Azure OpenAI, Speech, Content Safety, Vision, Document Intelligence, and Language. Costs vary for each service and for some features within each service. | -| [Azure AI Search](https://azure.microsoft.com/pricing/details/search/) | An example use case is to store data in a vector search index. | -| [Azure Machine Learning](https://azure.microsoft.com/pricing/details/machine-learning/) | Compute instances are needed to run Visual Studio Code (Web) and prompt flow via Azure AI Studio.<br/><br/>When you create a compute instance, the virtual machine (VM) stays on so it's available for your work.<br/><br/>Enable idle shutdown to save on cost when the VM is idle for a specified time period.<br/><br/>Or set up a schedule to automatically start and stop the compute instance to save cost when you aren't planning to use it. | +| [Azure AI services](https://azure.microsoft.com/pricing/details/cognitive-services/) | You pay to use services such as Azure OpenAI, Speech, Content Safety, Vision, Document Intelligence, and Language. Costs vary for each service and for some features within each service. For more information about provisioning of Azure AI services, see [Azure AI hub resources](../concepts/ai-resources.md#azure-ai-services-api-access-keys).| +| [Azure AI Search](https://azure.microsoft.com/pricing/details/search/) | An example use case is to store data in a [vector search index](./index-add.md). | +| [Azure Machine Learning](https://azure.microsoft.com/pricing/details/machine-learning/) | Compute instances are needed to run [Visual Studio Code (Web or Desktop)](./develop-in-vscode.md) and [prompt flow](./prompt-flow.md) via Azure AI Studio.<br/><br/>When you create a compute instance, the virtual machine (VM) stays on so it's available for your work.<br/><br/>Enable idle shutdown to save on cost when the VM is idle for a specified time period.<br/><br/>Or set up a schedule to automatically start and stop the compute instance to save cost when you aren't planning to use it. | | [Azure Virtual Machine](https://azure.microsoft.com/pricing/details/virtual-machines/) | Azure Virtual Machines gives you the flexibility of virtualization for a wide range of computing solutions with support for Linux, Windows Server, SQL Server, Oracle, IBM, SAP, and more. | | [Azure Container Registry Basic account](https://azure.microsoft.com/pricing/details/container-registry) | Provides storage of private Docker container images, enabling fast, scalable retrieval, and network-close deployment of container workloads on Azure. |-| [Azure Blob Storage](https://azure.microsoft.com/pricing/details/storage/blobs/) | Can be used to store Azure AI project files. | +| [Azure Blob Storage](https://azure.microsoft.com/pricing/details/storage/blobs/) | Can be used to store [Azure AI project](./create-projects.md) files. 
| | [Key Vault](https://azure.microsoft.com/pricing/details/key-vault/) | A key vault for storing secrets. | | [Azure Private Link](https://azure.microsoft.com/pricing/details/private-link/) | Azure Private Link enables you to access Azure PaaS Services (for example, Azure Storage and SQL Database) over a private endpoint in your virtual network. | ### Costs might accrue before resource deletion -Before you delete an Azure AI resource in the Azure portal or with Azure CLI, the following sub resources are common costs that accumulate even when you aren't actively working in the workspace. If you're planning on returning to your Azure AI resource at a later time, these resources might continue to accrue costs: +Before you delete an Azure AI hub resource in the Azure portal or with Azure CLI, the following sub resources are common costs that accumulate even when you aren't actively working in the workspace. If you're planning on returning to your Azure AI hub resource at a later time, these resources might continue to accrue costs: - Azure AI Search (for the data) - Virtual machines - Load Balancer Compute instances also incur P10 disk costs even in stopped state. This is becau ### Costs might accrue after resource deletion -After you delete an Azure AI resource in the Azure portal or with Azure CLI, the following resources continue to exist. They continue to accrue costs until you delete them. +After you delete an Azure AI hub resource in the Azure portal or with Azure CLI, the following resources continue to exist. They continue to accrue costs until you delete them. - Azure Container Registry - Azure Blob Storage - Key Vault-- Application Insights (if you enabled it for your Azure AI resource)+- Application Insights (if you enabled it for your Azure AI hub resource) ## Monitor costs -As you use Azure AI Studio with Azure AI resources, you incur costs. Azure resource usage unit costs vary by time intervals (seconds, minutes, hours, and days) or by unit usage (bytes, megabytes, and so on). You can see the incurred costs in [cost analysis](../../cost-management-billing/costs/quick-acm-cost-analysis.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn). +As you use Azure AI Studio with Azure AI hub resources, you incur costs. Azure resource usage unit costs vary by time intervals (seconds, minutes, hours, and days) or by unit usage (bytes, megabytes, and so on). You can see the incurred costs in [cost analysis](../../cost-management-billing/costs/quick-acm-cost-analysis.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn). -When you use cost analysis, you view Azure AI resource costs in graphs and tables for different time intervals. Some examples are by day, current and prior month, and year. You also view costs against budgets and forecasted costs. Switching to longer views over time can help you identify spending trends. And you see where overspending might have occurred. If you've created budgets, you can also easily see where they're exceeded. +When you use cost analysis, you view Azure AI hub resource costs in graphs and tables for different time intervals. Some examples are by day, current and prior month, and year. You also view costs against budgets and forecasted costs. Switching to longer views over time can help you identify spending trends. And you see where overspending might have occurred. If you've created budgets, you can also easily see where they're exceeded. 
### Monitor Azure AI Studio project costs You can get to cost analysis from the [Azure portal](https://portal.azure.com). You can also get to cost analysis from the [Azure AI Studio portal](https://ai.azure.com). > [!IMPORTANT]-> Your Azure AI project costs are only a subset of your overall application or solution costs. You need to monitor costs for all Azure resources used in your application or solution. See [Azure AI resources](../concepts/ai-resources.md) for more information. +> Your Azure AI project costs are only a subset of your overall application or solution costs. You need to monitor costs for all Azure resources used in your application or solution. See [Azure AI hub resources](../concepts/ai-resources.md) for more information. For the examples in this section, assume that all Azure AI Studio resources are in the same resource group. But you can have resources in different resource groups. For example, your Azure AI Search resource might be in a different resource group than your Azure AI Studio project. Here's an example of how to monitor costs for an Azure AI Studio project. The co :::image type="content" source="../media/cost-management/project-costs/costs-per-project-resource-details.png" alt-text="Screenshot of the Azure portal cost analysis with Azure AI project expanded." lightbox="../media/cost-management/project-costs/costs-per-project-resource-details.png"::: -1. Expand **contoso_ai_resource** to see the costs for services underlying the [Azure AI](../concepts/ai-resources.md#azure-ai-resources) resource. You can also apply a filter to focus on other costs in your resource group. +1. Expand **contoso_ai_resource** to see the costs for services underlying the [Azure AI](../concepts/ai-resources.md#azure-ai-hub-resources) resource. You can also apply a filter to focus on other costs in your resource group. You can also view resource group costs directly from the Azure portal. To do so: 1. Sign in to [Azure portal](https://portal.azure.com). |
ai-studio | Create Azure Ai Resource | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/create-azure-ai-resource.md | Title: How to create and manage an Azure AI resource + Title: How to create and manage an Azure AI hub resource -description: This article describes how to create and manage an Azure AI resource +description: This article describes how to create and manage an Azure AI hub resource - ignite-2023 Previously updated : 11/15/2023 Last updated : 2/5/2024 -# How to create and manage an Azure AI resource +# How to create and manage an Azure AI hub resource [!INCLUDE [Azure AI Studio preview](../includes/preview-ai-studio.md)] -As an administrator, you can create and manage Azure AI resources. Azure AI resources provide a hosting environment for the projects of a team, and help you as an IT admin centrally set up security settings and govern usage and spend. You can create and manage an Azure AI resource from the Azure portal or from the Azure AI Studio. +As an administrator, you can create and manage Azure AI hub resources. Azure AI hub resources provide a hosting environment for the projects of a team, and help you as an IT admin centrally set up security settings and govern usage and spend. You can create and manage an Azure AI hub resource from the Azure portal or from the Azure AI Studio. -In this article, you learn how to create and manage an Azure AI resource in Azure AI Studio (for getting started) and from the Azure portal (for advanced security setup). +In this article, you learn how to create and manage an Azure AI hub resource in Azure AI Studio (for getting started) and from the Azure portal (for advanced security setup). -## Create an Azure AI resource in AI Studio for getting started -To create a new Azure AI resource, you need either the Owner or Contributor role on the resource group or on an existing Azure AI resource. If you are unable to create an Azure AI resource due to permissions, reach out to your administrator. If your organization is using [Azure Policy](../../governance/policy/overview.md), don't create the resource in AI Studio. Create the Azure AI resource [in the Azure Portal](#create-a-secure-azure-ai-resource-in-the-azure-portal) instead. +## Create an Azure AI hub resource in AI Studio -Follow these steps to create a new Azure AI resource in AI Studio. +To create a new Azure AI hub resource, you need either the Owner or Contributor role on the resource group or on an existing Azure AI hub resource. If you are unable to create an Azure AI hub resource due to permissions, reach out to your administrator. If your organization is using [Azure Policy](../../governance/policy/overview.md), don't create the resource in AI Studio. Create the Azure AI hub resource [in the Azure portal](#create-a-secure-azure-ai-hub-resource-in-the-azure-portal) instead. -1. From Azure AI Studio, navigate to `manage` and select `New Azure AI resource`. +Follow these steps to create a new Azure AI hub resource in AI Studio. -1. Fill in **Subscription**, **Resource group**, and **Location** for your new Azure AI resource. +1. Go to the **Manage** page in [Azure AI Studio](https://ai.azure.com). +1. Select **+ New AI hub**. - :::image type="content" source="../media/how-to/resource-create-advanced.png" alt-text="Screenshot of the Create an Azure AI resource wizard with the option to set basic information." lightbox="../media/how-to/resource-create-advanced.png"::: +1. Enter your AI hub name, subscription, resource group, and location details. 
-1. Optionally, choose an existing Azure AI services provider. By default a new provider is created. New Azure AI services include multiple API endpoints for Speech, Content Safety and Azure OpenAI. You can also bring an existing Azure OpenAI resource. +1. In the **Azure OpenAI** dropdown, you can select an existing Azure OpenAI resource to bring all your deployments into AI Studio. If you do not bring one, we will create one for you. -1. Optionally, connect an existing Azure AI Search instance to share search indices with all projects in this Azure AI resource. An Azure AI Search instance isn't created for you if you don't provide one. + :::image type="content" source="../media/how-to/resource-create-advanced.png" alt-text="Screenshot of the Create an Azure AI hub resource wizard with the option to set basic information." lightbox="../media/how-to/resource-create-advanced.png"::: -## Create a secure Azure AI resource in the Azure portal +1. Optionally, connect an existing Azure AI Search instance to share search indices with all projects in this Azure AI hub resource. An Azure AI Search instance isn't created for you if you don't provide one. +1. Select **Next**. +1. On the **Review and finish** page, you see the **AI Services** provider for you to access the Azure AI services such as Azure OpenAI. -If your organization is using [Azure Policy](../../governance/policy/overview.md), setup a resource that meets your organization's requirements instead of using AI Studio for resource creation. + :::image type="content" source="../media/how-to/resource-create-studio-review.png" alt-text="Screenshot of the review and finish page for creating an AI hub." lightbox="../media/how-to/resource-create-studio-review.png"::: ++1. Select **Create**. ++When the AI hub is created, you can see it on the **Manage** page in AI Studio. You can see that initially no projects are created in the AI hub. You can [create a new project](./create-projects.md). +++## Create a secure Azure AI hub resource in the Azure portal ++If your organization is using [Azure Policy](../../governance/policy/overview.md), set up an Azure AI hub resource that meets your organization's requirements instead of using AI Studio for resource creation. 1. From the Azure portal, search for `Azure AI Studio` and create a new resource by selecting **+ New Azure AI**-1. Fill in **Subscription**, **Resource group**, and **Region**. **Name** your new Azure AI resource. - - For advanced settings, select **Next: Resources** to specify resources, networking, encryption, identity, and tags. - - Your subscription must have access to Azure AI to create this resource. +1. Enter your AI hub name, subscription, resource group, and location details. +1. For advanced settings, select **Next: Resources** to specify resources, networking, encryption, identity, and tags. - :::image type="content" source="../media/how-to/resource-create-basics.png" alt-text="Screenshot of the option to set Azure AI resource basic information." lightbox="../media/how-to/resource-create-basics.png"::: + :::image type="content" source="../media/how-to/resource-create-basics.png" alt-text="Screenshot of the option to set Azure AI hub resource basic information." lightbox="../media/how-to/resource-create-basics.png"::: -1. Select an existing **Azure AI services** or create a new one. New Azure AI services include multiple API endpoints for Speech, Content Safety and Azure OpenAI. You can also bring an existing Azure OpenAI resource. 
Optionally, choose an existing **Storage account**, **Key vault**, **Container Registry**, and **Application insights** to host artifacts generated when you use AI Studio. +1. Select an existing **Azure AI services** resource or create a new one. New Azure AI services include multiple API endpoints for Speech, Content Safety and Azure OpenAI. You can also bring an existing Azure OpenAI resource. Optionally, choose an existing **Storage account**, **Key vault**, **Container Registry**, and **Application insights** to host artifacts generated when you use AI Studio. - :::image type="content" source="../media/how-to/resource-create-resources.png" alt-text="Screenshot of the Create an Azure AI resource with the option to set resource information." lightbox="../media/how-to/resource-create-resources.png"::: + :::image type="content" source="../media/how-to/resource-create-resources.png" alt-text="Screenshot of the Create an Azure AI hub resource with the option to set resource information." lightbox="../media/how-to/resource-create-resources.png"::: 1. Set up Network isolation. Read more on [network isolation](configure-managed-network.md). - :::image type="content" source="../media/how-to/resource-create-networking.png" alt-text="Screenshot of the Create an Azure AI resource with the option to set network isolation information." lightbox="../media/how-to/resource-create-networking.png"::: + :::image type="content" source="../media/how-to/resource-create-networking.png" alt-text="Screenshot of the Create an Azure AI hub resource with the option to set network isolation information." lightbox="../media/how-to/resource-create-networking.png"::: 1. Set up data encryption. You can either use **Microsoft-managed keys** or enable **Customer-managed keys**. - :::image type="content" source="../media/how-to/resource-create-encryption.png" alt-text="Screenshot of the Create an Azure AI resource with the option to select your encryption type." lightbox="../media/how-to/resource-create-encryption.png"::: + :::image type="content" source="../media/how-to/resource-create-encryption.png" alt-text="Screenshot of the Create an Azure AI hub resource with the option to select your encryption type." lightbox="../media/how-to/resource-create-encryption.png"::: 1. By default, **System assigned identity** is enabled, but you can switch to **User assigned identity** if existing storage, key vault, and container registry are selected in Resources. - :::image type="content" source="../media/how-to/resource-create-identity.png" alt-text="Screenshot of the Create an Azure AI resource with the option to select a managed identity." lightbox="../media/how-to/resource-create-identity.png"::: + :::image type="content" source="../media/how-to/resource-create-identity.png" alt-text="Screenshot of the Create an Azure AI hub resource with the option to select a managed identity." lightbox="../media/how-to/resource-create-identity.png"::: >[!Note]- >If you select **User assigned identity**, your identity needs to have the `Cognitive Services Contributor` role in order to successfully create a new Azure AI resource. + >If you select **User assigned identity**, your identity needs to have the `Cognitive Services Contributor` role in order to successfully create a new Azure AI hub resource. 1. Add tags. - :::image type="content" source="../media/how-to/resource-create-tags.png" alt-text="Screenshot of the Create an Azure AI resource with the option to add tags." 
lightbox="../media/how-to/resource-create-tags.png"::: + :::image type="content" source="../media/how-to/resource-create-tags.png" alt-text="Screenshot of the Create an Azure AI hub resource with the option to add tags." lightbox="../media/how-to/resource-create-tags.png"::: 1. Select **Review + create** -## Manage your Azure AI resource from the Azure portal +## Manage your Azure AI hub resource from the Azure portal -### Azure AI resource keys -View your keys and endpoints for your Azure AI resource from the overview page within the Azure portal. +### Azure AI hub resource keys +View your keys and endpoints for your Azure AI hub resource from the overview page within the Azure portal. ### Manage access control -Manage role assignments from **Access control (IAM)** within the Azure portal. Learn more about Azure AI resource [role-based access control](../concepts/rbac-ai-studio.md). +Manage role assignments from **Access control (IAM)** within the Azure portal. Learn more about Azure AI hub resource [role-based access control](../concepts/rbac-ai-studio.md). To grant users permissions: -1. Select **+ Add** to add users to your Azure AI resource +1. Select **+ Add** to add users to your Azure AI hub resource 1. Select the **Role** you want to assign. - :::image type="content" source="../media/how-to/resource-rbac-role.png" alt-text="Screenshot of the page to add a role within the Azure AI resource Azure portal view." lightbox="../media/how-to/resource-rbac-role.png"::: + :::image type="content" source="../media/how-to/resource-rbac-role.png" alt-text="Screenshot of the page to add a role within the Azure AI hub resource Azure portal view." lightbox="../media/how-to/resource-rbac-role.png"::: 1. Select the **Members** you want to give the role to. - :::image type="content" source="../media/how-to/resource-rbac-members.png" alt-text="Screenshot of the add members page within the Azure AI resource Azure portal view." lightbox="../media/how-to/resource-rbac-members.png"::: + :::image type="content" source="../media/how-to/resource-rbac-members.png" alt-text="Screenshot of the add members page within the Azure AI hub resource Azure portal view." lightbox="../media/how-to/resource-rbac-members.png"::: 1. **Review + assign**. It can take up to an hour for permissions to be applied to users. ### Networking-Azure AI resource networking settings can be set during resource creation or changed in the Networking tab in the Azure portal view. Creating a new Azure AI resource invokes a Managed Virtual Network. This streamlines and automates your network isolation configuration with a built-in Managed Virtual Network. The Managed Virtual Network settings are applied to all projects created within an Azure AI resource. +Azure AI hub resource networking settings can be set during resource creation or changed in the **Networking** tab in the Azure portal view. Creating a new Azure AI hub resource invokes a Managed Virtual Network. This streamlines and automates your network isolation configuration with a built-in Managed Virtual Network. The Managed Virtual Network settings are applied to all projects created within an Azure AI hub resource. -At Azure AI resource creation, select between the networking isolation modes: Public, Private with Internet Outbound, and Private with Approved Outbound. To secure your resource, select either Private with Internet Outbound or Private with Approved Outbound for your networking needs.
For the private isolation modes, a private endpoint should be created for inbound access. Read more information on Network Isolation and Managed Virtual Network Isolation [here](../../machine-learning/how-to-managed-network.md). To create a secure Azure AI resource, follow the tutorial [here](../../machine-learning/tutorial-create-secure-workspace.md). +At Azure AI hub resource creation, select between the networking isolation modes: **Public**, **Private with Internet Outbound**, and **Private with Approved Outbound**. To secure your resource, select either **Private with Internet Outbound** or Private with Approved Outbound for your networking needs. For the private isolation modes, a private endpoint should be created for inbound access. Read more information on Network Isolation and Managed Virtual Network Isolation [here](../../machine-learning/how-to-managed-network.md). To create a secure Azure AI hub resource, follow the tutorial [here](../../machine-learning/tutorial-create-secure-workspace.md). -At Azure AI resource creation in the Azure portal, creation of associated Azure AI services, Storage account, Key vault, Application insights, and Container registry is given. These resources are found on the Resources tab during creation. +At Azure AI hub resource creation in the Azure portal, creation of associated Azure AI services, Storage account, Key vault, Application insights, and Container registry is given. These resources are found on the Resources tab during creation. -To connect to Azure AI services (Azure OpenAI, Azure AI Search, and Azure AI Content Safety) or storage accounts in Azure AI Studio, create a private endpoint in your virtual network. Ensure the PNA flag is disabled when creating the private endpoint connection. For more about Azure AI service connections, follow documentation [here](../../ai-services/cognitive-services-virtual-networks.md). You can optionally bring your own (BYO) search, but this requires a private endpoint connection from your virtual network. +To connect to Azure AI services (Azure OpenAI, Azure AI Search, and Azure AI Content Safety) or storage accounts in Azure AI Studio, create a private endpoint in your virtual network. Ensure the PNA flag is disabled when creating the private endpoint connection. For more about Azure AI services connections, follow documentation [here](../../ai-services/cognitive-services-virtual-networks.md). You can optionally bring your own (BYO) search, but this requires a private endpoint connection from your virtual network. ### Encryption-Projects that use the same Azure AI resource, share their encryption configuration. Encryption mode can be set only at the time of Azure AI resource creation between Microsoft-managed keys and Customer-managed keys. +Projects that use the same Azure AI hub resource, share their encryption configuration. Encryption mode can be set only at the time of Azure AI hub resource creation between Microsoft-managed keys and Customer-managed keys. -From the Azure portal view, navigate to the encryption tab, to find the encryption settings for your AI resource. -For Azure AI resources that use CMK encryption mode, you can update the encryption key to a new key version. This update operation is constrained to keys and key versions within the same Key Vault instance as the original key. +From the Azure portal view, navigate to the encryption tab, to find the encryption settings for your Azure AI hub resource. 
+For Azure AI hub resources that use CMK encryption mode, you can update the encryption key to a new key version. This update operation is constrained to keys and key versions within the same Key Vault instance as the original key. -## Manage your Azure AI resource from the Manage tab within the AI Studio +## Manage your Azure AI hub resource from the Manage tab within the AI Studio ### Getting started with the AI Studio -When you enter your AI Studio, under **Manage**, you have the options to create a new Azure AI resource, manage an existing Azure AI resource, or view your Quota. +On the **Manage** page in [Azure AI Studio](https://ai.azure.com), you have the options to create a new Azure AI hub resource, manage an existing Azure AI hub resource, or view your quota. :::image type="content" source="../media/how-to/resource-manage-studio.png" alt-text="Screenshot of the Manage page of the Azure AI Studio." lightbox="../media/how-to/resource-manage-studio.png"::: -### Managing an Azure AI resource +### Managing an Azure AI hub resource When you manage a resource, you see an Overview page that lists **Projects**, **Description**, **Resource Configuration**, **Connections**, and **Permissions**. You also see pages to further manage **Permissions**, **Compute instances**, **Connections**, **Policies**, and **Billing**. -You can view all Projects that use this Azure AI resource. Be linked to the Azure portal to manage the Resource Configuration. Manage who has access to this Azure AI resource. View all of the connections within the resource. Manage who has access to this Azure AI resource. +You can view all Projects that use this Azure AI hub resource. Follow the link to the Azure portal to manage the Resource Configuration. View all of the connections within the resource. Manage who has access to this Azure AI hub resource. :::image type="content" source="../media/how-to/resource-manage-details.png" alt-text="Screenshot of the Details page of the Azure AI Studio showing an overview of the resource." lightbox="../media/how-to/resource-manage-details.png"::: ### Permissions-Within Permissions you can view who has access to the Azure AI resource and also manage permissions. Learn more about [permissions](../concepts/rbac-ai-studio.md). +Within Permissions you can view who has access to the Azure AI hub resource and also manage permissions. Learn more about [permissions](../concepts/rbac-ai-studio.md). To add members: 1. Select **+ Add member**-1. Enter the member's name in **Add member** and assign a **Role**. For most users, we recommend the AI Developer role. This permission applies to the entire Azure AI resource. If you wish to only grant access to a specific Project, manage permissions in the [Project](create-projects.md) +1. Enter the member's name in **Add member** and assign a **Role**. For most users, we recommend the AI Developer role. This permission applies to the entire Azure AI hub resource. If you wish to only grant access to a specific Project, manage permissions in the [Project](create-projects.md) ### Compute instances-View and manage computes for your Azure AI hub resource. Create computes, delete computes, and review all compute resources you have in one place.
### Connections-From the Connections page, you can view all Connections in your Azure AI resource by their Name, Authentication method, Category type, if the connection is shared to all projects in the resource or specifically to a Project, Target, Owner, and Provisioning state. +From the Connections page, you can view all Connections in your Azure AI hub resource by their Name, Authentication method, Category type, if the connection is shared to all projects in the resource or specifically to a Project, Target, Owner, and Provisioning state. You can also add a connection through **+ Connection** -Learn more on how to [create and manage Connections](connections-add.md). Connections created in the Azure AI resource Manage page are automatically shared across all projects. If you want to make a project specific connection, make that within the Project. +Learn more about how to [create and manage Connections](connections-add.md). Connections created in the Azure AI hub resource Manage page are automatically shared across all projects. If you want to make a project-specific connection, create it within the project. ### Policies-View and configure policies for your Azure AI hub resource. See all the policies you have in one place. Policies are applied across all Projects. ### Billing-Here you're linked to the Azure portal to review the cost analysis information for your Azure AI hub resource. ## Next steps - [Create a project](create-projects.md) - [Learn more about Azure AI Studio](../what-is-ai-studio.md)-- [Learn more about Azure AI resources](../concepts/ai-resources.md)+- [Learn more about Azure AI hub resources](../concepts/ai-resources.md) |
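Alongside the portal and studio flows in this entry, a hedged command-line sketch: this assumes the Azure ML CLI extension's hub kind (`--kind hub`) is available in your extension version, and the resource group and hub names are placeholders:

```azurecli-interactive
# Create an Azure AI hub resource from the command line.
az ml workspace create --kind hub --resource-group my-rg --name my-ai-hub
```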
ai-studio | Create Projects | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/create-projects.md | -Projects are hosted by an Azure AI resource that provides enterprise-grade security and a collaborative environment. For more information about the Azure AI projects and resources model, see [Azure AI resources](../concepts/ai-resources.md). +Projects are hosted by an Azure AI hub resource that provides enterprise-grade security and a collaborative environment. For more information about the Azure AI projects and resources model, see [Azure AI hub resources](../concepts/ai-resources.md). ++## Create a project You can create a project in Azure AI Studio in more than one way. The most direct way is from the **Build** tab. 1. Select the **Build** tab at the top of the page. You can create a project in Azure AI Studio in more than one way. The most direc :::image type="content" source="../media/how-to/projects-create-new.png" alt-text="Screenshot of the Build tab of the Azure AI Studio with the option to create a new project visible." lightbox="../media/how-to/projects-create-new.png"::: 1. Enter a name for the project.-1. Select an Azure AI resource from the dropdown to host your project. If you don't have access to an Azure AI resource yet, select **Create a new resource**. +1. Select an Azure AI hub resource from the dropdown to host your project. If you don't have access to an Azure AI hub resource yet, select **Create a new resource**. :::image type="content" source="../media/how-to/projects-create-details.png" alt-text="Screenshot of the project details page within the create project dialog." lightbox="../media/how-to/projects-create-details.png"::: > [!NOTE]- > To create an Azure AI resource, you must have **Owner** or **Contributor** permissions on the selected resource group. It's recommended to share an Azure AI resource with your team. This lets you share configurations like data connections with all projects, and centrally manage security settings and spend. + > To create an Azure AI hub resource, you must have **Owner** or **Contributor** permissions on the selected resource group. It's recommended to share an Azure AI hub resource with your team. This lets you share configurations like data connections with all projects, and centrally manage security settings and spend. -1. If you're creating a new Azure AI resource, enter a name. +1. If you're creating a new Azure AI hub resource, enter a name. :::image type="content" source="../media/how-to/projects-create-resource.png" alt-text="Screenshot of the create resource page within the create project dialog." lightbox="../media/how-to/projects-create-resource.png"::: You can create a project in Azure AI Studio in more than one way. The most direc 1. Leave the **Resource group** as the default to create a new resource group. Alternatively, you can select an existing resource group from the dropdown. > [!TIP]- > Especially for getting started it's recommended to create a new resource group for your project. This allows you to easily manage the project and all of its resources together. When you create a project, several resources are created in the resource group, including an Azure AI resource, a container registry, and a storage account. + > Especially for getting started it's recommended to create a new resource group for your project. This allows you to easily manage the project and all of its resources together. 
When you create a project, several resources are created in the resource group, including an Azure AI hub resource, a container registry, and a storage account. -1. Enter the **Location** for the Azure AI resource and then select **Next**. The location is the region where the Azure AI resource is hosted. The location of the Azure AI resource is also the location of the project. Azure AI services availability differs per region. For example, certain models might not be available in certain regions. -1. Review the project details and then select **Create a project**. +1. Enter the **Location** for the Azure AI hub resource and then select **Next**. The location is the region where the Azure AI hub resource is hosted. The location of the Azure AI hub resource is also the location of the project. Azure AI services availability differs per region. For example, certain models might not be available in certain regions. +1. On the **Review and finish** page, you see the **AI Services** provider that gives you access to Azure AI services such as Azure OpenAI. :::image type="content" source="../media/how-to/projects-create-review-finish.png" alt-text="Screenshot of the review and finish page within the create project dialog." lightbox="../media/how-to/projects-create-review-finish.png"::: -Once a project is created, you can access the **Tools**, **Components**, and **Settings** assets in the left navigation panel. Tools and assets listed under each of those subheadings can vary depending on the type of project you've selected. For example, if you've selected a project that uses Azure OpenAI, you see the **Playground** navigation option under **Tools**. +1. Review the project details and then select **Create a project**. ++Once a project is created, you can access the **Tools**, **Components**, and **Settings** assets in the left navigation panel. For a project that uses an Azure AI hub with support for Azure OpenAI, you see the **Playground** navigation option under **Tools**. ## Project details -In the project details page (select **Build** > **Settings**), you can find information about the project, such as the project name, description, and the Azure AI resource that hosts the project. You can also find the project ID, which is used to identify the project in the Azure AI Studio API. +On the project details page (select **Build** > **Settings**), you can find information about the project, such as the project name, description, and the Azure AI hub resource that hosts the project. You can also find the project ID, which is used to identify the project in the Azure AI Studio API. -- Project name: The name of the project corresponds to the selected project in the left panel. -- Azure AI resource: The Azure AI resource that hosts the project. -- Location: The location of the Azure AI resource that hosts the project. For supported locations, see [Azure AI Studio regions](../reference/region-support.md).-- Subscription: The subscription that hosts the Azure AI resource that hosts the project.-- Resource group: The resource group that hosts the Azure AI resource that hosts the project.+- Name: The name of the project corresponds to the selected project in the left panel. +- AI hub: The Azure AI hub resource that hosts the project. +- Location: The location of the Azure AI hub resource that hosts the project. For supported locations, see [Azure AI Studio regions](../reference/region-support.md). +- Subscription: The subscription that hosts the Azure AI hub resource that hosts the project. 
+- Resource group: The resource group that hosts the Azure AI hub resource that hosts the project. - Permissions: The users that have access to the project. For more information, see [Role-based access control in Azure AI Studio](../concepts/rbac-ai-studio.md). -Select the Azure AI resource, subscription, or resource group to navigate to the corresponding resource in the Azure portal. +Select **View in the Azure portal** to navigate to the project resources in the Azure portal. ## Next steps -- [QuickStart: Moderate text and images with content safety in Azure AI Studio](../quickstarts/content-safety.md)+- [Deploy a web app for chat on your data](../tutorials/deploy-chat-web-app.md) - [Learn more about Azure AI Studio](../what-is-ai-studio.md)-- [Learn more about Azure AI resources](../concepts/ai-resources.md)+- [Learn more about Azure AI hub resources](../concepts/ai-resources.md) |
ai-studio | Data Image Add | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/data-image-add.md | This guide is scoped to the Azure AI Studio playground, but you can also add ima From the Azure AI Studio playground, you can choose how to add your image data for GPT-4 Turbo with Vision: * [Upload image files and metadata](?tabs=upload-image-files-and-metadata): You can upload image files and metadata in the playground. This option is useful if you have a small number of image files.-* [Azure AI Search](?tabs=azure-ai-search): If you have an existing [Azure AI search](/azure/search/search-what-is-azure-search) index, you can use it as a data source. +* [Azure AI Search](?tabs=azure-ai-search): If you have an existing [Azure AI Search](/azure/search/search-what-is-azure-search) index, you can use it as a data source. * [Azure Blob Storage](?tabs=azure-blob-storage): The Azure Blob storage option is especially useful if you have a large number of image files and don't want to manually upload each one. Each option uses an Azure AI Search index to do image-to-image search and retrieve the top search results for your input prompt image. Each option uses an Azure AI Search index to do image-to-image search and retrie # [Azure AI Search](#tab/azure-ai-search) -If you have an existing [Azure AI search](/azure/search/search-what-is-azure-search) index, you can use it as a data source. +If you have an existing [Azure AI Search](/azure/search/search-what-is-azure-search) index, you can use it as a data source. If you don't already have a search index created for your images: - You can create one using the [AI Search vector search repository on GitHub](https://github.com/Azure/cognitive-search-vector-pr), which provides you with scripts to create an index with your image files. If you don't already have a search index created for your images: 1. Enter your data source details: - :::image type="content" source="../media/data-add/use-your-image-data/add-image-data-ai-search.png" alt-text="A screenshot showing the Azure AI search index selection." lightbox="../media/data-add/use-your-image-data/add-image-data-ai-search.png"::: + :::image type="content" source="../media/data-add/use-your-image-data/add-image-data-ai-search.png" alt-text="A screenshot showing the Azure AI Search index selection." lightbox="../media/data-add/use-your-image-data/add-image-data-ai-search.png"::: - **Subscription**: Select the Azure subscription that contains the Azure OpenAI resource you want to use. - **Azure AI Search service**: Select your Azure AI Search service resource that has an image search index. If you don't already have a search index created for your images: 1. Review the details you entered. - :::image type="content" source="../media/data-add/use-your-image-data/add-your-data-ai-search-review-finish.png" alt-text="Screenshot of the review and finish page for adding data via Azure AI search." lightbox="../media/data-add/use-your-image-data/add-your-data-ai-search-review-finish.png"::: + :::image type="content" source="../media/data-add/use-your-image-data/add-your-data-ai-search-review-finish.png" alt-text="Screenshot of the review and finish page for adding data via Azure AI Search." lightbox="../media/data-add/use-your-image-data/add-your-data-ai-search-review-finish.png"::: 1. Select **Save and close**. 
After you have a blob storage populated with image files and at least one metada > [!NOTE] > > When adding data to the selected storage account for the first time in Azure AI Studio, you might be prompted to turn on [cross-origin resource sharing (CORS)](/rest/api/storageservices/cross-origin-resource-sharing--cors--support-for-the-azure-storage-services). Azure AI Studio and Azure OpenAI need access to your Azure Blob storage account. - :::image type="content" source="../media/data-add/use-your-image-data/add-image-data-blob.png" alt-text="A screenshot showing the Azure storage account and Azure AI search index selection." lightbox="../media/data-add/use-your-image-data/add-image-data-blob.png"::: + :::image type="content" source="../media/data-add/use-your-image-data/add-image-data-blob.png" alt-text="A screenshot showing the Azure storage account and Azure AI Search index selection." lightbox="../media/data-add/use-your-image-data/add-image-data-blob.png"::: - **Subscription**: Select the Azure subscription that contains the Azure OpenAI resource you want to use. - **Storage resource** and **Storage container**: Select the Azure Blob storage resource where the image files and metadata are already stored. |
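Rather than waiting for the CORS prompt described above, you can configure the storage account ahead of time. A sketch with the Azure CLI; the account name is a placeholder, and the origin shown assumes the AI Studio portal URL, so adjust both for your environment:

```azurecli
# Allow the AI Studio portal origin to call the Blob service (--services b).
# Tighten the methods and origins to match your own security requirements.
az storage cors add \
  --account-name mystorageaccount \
  --services b \
  --methods GET POST PUT OPTIONS \
  --origins "https://ai.azure.com" \
  --allowed-headers "*" \
  --max-age 200
```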
ai-studio | Deploy Models Llama | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/deploy-models-llama.md | The following is an example response: ## Deploy Llama 2 models to real-time endpoints -Llama 2 models can be deployed to real-time endpoints in AI studio. When deployed to real-time endpoints, you can select all the details about on the infrastructure running the model including the virtual machines used to run it and the number of instances to handle the load you're expecting. Models deployed in this modality consume quota from your subscription. All the models in the Llama family can be deployed to real-time endpoints. +Llama 2 models can be deployed to real-time endpoints in AI Studio. When deployed to real-time endpoints, you can select all the details about the infrastructure running the model, including the virtual machines used to run it and the number of instances to handle the load you're expecting. Models deployed in this modality consume quota from your subscription. All the models in the Llama family can be deployed to real-time endpoints. ### Create a new deployment |
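Real-time deployments are backed by Azure Machine Learning online endpoints, so once a deployment exists you can send it a test request from the command line. A sketch, assuming the Azure CLI `ml` extension and placeholder names; `request.json` stands in for the prompt payload your Llama 2 deployment expects:

```azurecli
# Invoke the real-time endpoint with a sample payload to verify the deployment.
az ml online-endpoint invoke \
  --name my-llama-endpoint \
  --request-file request.json \
  --resource-group my-resource-group \
  --workspace-name my-ai-project
```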
ai-studio | Develop In Vscode | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/develop-in-vscode.md | This table summarizes the folder structure: | `shared` | Use for working with a project's shared files and assets such as prompt flows.<br/><br/>For example, `shared\Users\{user-name}\promptflow` is where you find the project's prompt flows. | > [!IMPORTANT]-> It's recommended that you work within this project directory. Files, folders, and repos you include in your project directory persist on your host machine (your compute instance). Files stored in the code and data folders will persist even when the compute instance is stopped or restarted, but will be lost if the compute is deleted. However, the shared files are saved in your Azure AI resource's storage account, and therefore aren't lost if the compute instance is deleted. +> It's recommended that you work within this project directory. Files, folders, and repos you include in your project directory persist on your host machine (your compute instance). Files stored in the code and data folders will persist even when the compute instance is stopped or restarted, but will be lost if the compute is deleted. However, the shared files are saved in your Azure AI hub resource's storage account, and therefore aren't lost if the compute instance is deleted. ### The Azure AI SDK For cross-language compatibility and seamless integration of Azure AI capabiliti ## Next steps - [Get started with the Azure AI CLI](cli-install.md)-- [Quickstart: Generate product name ideas in the Azure AI Studio playground](../quickstarts/playground-completions.md)+- [Quickstart: Analyze images and video with GPT-4 for Vision in the playground](../quickstarts/multimodal-vision.md) |
ai-studio | Flow Deploy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/flow-deploy.md | This step allows you to configure the basic settings of the deployment. |Virtual machine| The VM size to use for the deployment.| |Instance count| The number of instances to use for the deployment. Specify the value based on the workload you expect. For high availability, we recommend that you set the value to at least `3`. We reserve an extra 20% for performing upgrades.| |Inference data collection| If you enable this, the flow inputs and outputs are auto collected in an Azure Machine Learning data asset, and can be used for later monitoring.|-|Application Insights diagnostics| If you enable this, system metrics during inference time (such as token count, flow latency, flow request, and etc.) will be collected into Azure AI resource default Application Insights.| +|Application Insights diagnostics| If you enable this, system metrics during inference time (such as token count, flow latency, and flow request) will be collected into the Azure AI hub resource's default Application Insights.| After you finish the basic settings, you can directly **Review + Create** to finish the creation, or you can select **Next** to configure advanced settings. The authentication method for the endpoint. Key-based authentication provides a #### Identity type -The endpoint needs to access Azure resources such as the Azure Container Registry or your Azure AI resource connections for inferencing. You can allow the endpoint permission to access Azure resources via giving permission to its managed identity. +The endpoint needs to access Azure resources such as the Azure Container Registry or your Azure AI hub resource connections for inferencing. You can allow the endpoint to access Azure resources by granting permission to its managed identity. A system-assigned identity is autocreated after your endpoint is created, while a user-assigned identity is created by the user. [Learn more about managed identities.](../../active-directory/managed-identities-azure-resources/overview.md) You notice there's an option whether *Enforce access to connection secrets (prev ##### User-assigned -When you create the deployment, Azure tries to pull the user container image from the Azure AI resource Azure Container Registry (ACR) and mounts the user model and code artifacts into the user container from the Azure AI resource storage account. +When you create the deployment, Azure tries to pull the user container image from the Azure AI hub resource Azure Container Registry (ACR) and mounts the user model and code artifacts into the user container from the Azure AI hub resource storage account. If you created the associated endpoint with **User Assigned Identity**, the user-assigned identity must be granted the following roles before the deployment creation; otherwise, the deployment creation fails. You can grant all permissions in the Azure portal UI by following these steps. 1. Select **Azure Machine Learning Workspace Connection Secrets Reader**, go to **Next**. > [!NOTE]- > The **Azure Machine Learning Workspace Connection Secrets Reader** role is a built-in role which has permission to get Azure AI resource connections. + > The **Azure Machine Learning Workspace Connection Secrets Reader** role is a built-in role which has permission to get Azure AI hub resource connections. 
> > If you want to use a customized role, make sure the customized role has the permission of `Microsoft.MachineLearningServices/workspaces/connections/listsecrets/action`. Learn more about [how to create custom roles](../../role-based-access-control/custom-roles-portal.md#step-3-basics). You can grant all permissions in the Azure portal UI by following these steps. For **user-assigned identity**, select **User-assigned managed identity**, and search by identity name. -1. For **user-assigned** identity, you need to grant permissions to the Azure AI resource container registry and storage account as well. You can find the container registry and storage account in the Azure AI resource overview page in Azure portal. +1. For **user-assigned** identity, you need to grant permissions to the Azure AI hub resource container registry and storage account as well. You can find the container registry and storage account on the Azure AI hub resource overview page in the Azure portal. :::image type="content" source="../media/prompt-flow/how-to-deploy-for-real-time-inference/storage-container-registry.png" alt-text="Screenshot of the overview page with storage and container registry highlighted." lightbox = "../media/prompt-flow/how-to-deploy-for-real-time-inference/storage-container-registry.png"::: - Go to the Azure AI resource container registry overview page, select **Access control**, and select **Add role assignment**, and assign **ACR pull |Pull container image** to the endpoint identity. + Go to the Azure AI hub resource container registry overview page, select **Access control**, select **Add role assignment**, and assign **ACR pull |Pull container image** to the endpoint identity. - Go to the Azure AI resource default storage overview page, select **Access control**, and select **Add role assignment**, and assign **Storage Blob Data Reader** to the endpoint identity. + Go to the Azure AI hub resource default storage overview page, select **Access control**, select **Add role assignment**, and assign **Storage Blob Data Reader** to the endpoint identity. -1. (optional) For **user-assigned** identity, if you want to monitor the endpoint related metrics like CPU/GPU/Disk/Memory utilization, you need to grant **Workspace metrics writer** role of Azure AI resource to the identity as well. +1. (optional) For **user-assigned** identity, if you want to monitor endpoint-related metrics like CPU/GPU/Disk/Memory utilization, you need to grant the **Workspace metrics writer** role of the Azure AI hub resource to the identity as well. ## Check the status of the endpoint |
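The container registry and storage grants described in the steps above can also be scripted instead of clicked through in the portal. A sketch with the Azure CLI, using placeholder values for the identity's principal ID and the resource IDs; `AcrPull` is the programmatic name of the **ACR pull** role shown in the portal:

```azurecli
# Let the endpoint's user-assigned identity pull the container image.
az role assignment create \
  --assignee "<identity-principal-id>" \
  --role "AcrPull" \
  --scope "<container-registry-resource-id>"

# Let the identity read model and code artifacts from the default storage.
az role assignment create \
  --assignee "<identity-principal-id>" \
  --role "Storage Blob Data Reader" \
  --scope "<storage-account-resource-id>"
```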
ai-studio | Index Add | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/index-add.md | This can happen if you are trying to create an index using an **Owner**, **Contr > [!NOTE] > You need to be assigned the **Owner** role of the resource group or higher scope (like Subscription) to perform the operation in the next steps. This is because only the Owner role can assign roles to others. See details [here](/azure/role-based-access-control/built-in-roles). -#### Method 1: Assign more permissions to the user on the Azure AI resource +#### Method 1: Assign more permissions to the user on the Azure AI hub resource -If the Azure AI resource the project uses was created through Azure AI Studio: +If the Azure AI hub resource the project uses was created through Azure AI Studio: 1. Sign in to [Azure AI Studio](https://aka.ms/azureaistudio) and select your project via **Build** > **Projects**. 1. Select **Settings** from the collapsible left menu. 1. From the **Resource Configuration** section, select the link for your resource group name that takes you to the Azure portal. If the Azure AI resource the project uses was created through Azure AI Studio: #### Method 2: Assign more permissions on the resource group -If the Azure AI resource the project uses was created through Azure portal: +If the Azure AI hub resource the project uses was created through the Azure portal: 1. Sign in to [Azure AI Studio](https://aka.ms/azureaistudio) and select your project via **Build** > **Projects**. 1. Select **Settings** from the collapsible left menu. 1. From the **Resource Configuration** section, select the link for your resource group name that takes you to the Azure portal. |
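Either method boils down to a role assignment, which you can also make from the command line if you hold the Owner role mentioned in the note. A sketch with the Azure CLI, using a placeholder user and resource group; the built-in **Azure AI Developer** role is one reasonable choice here:

```azurecli
# Grant a user the Azure AI Developer role at resource group scope.
# Requires Owner (or higher) on the resource group to assign roles.
az role assignment create \
  --assignee "user@contoso.com" \
  --role "Azure AI Developer" \
  --resource-group my-resource-group
```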
ai-studio | Models Foundation Azure Ai | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/models-foundation-azure-ai.md | Explore more Speech capabilities in the [Speech Studio](https://aka.ms/speechstu :::image type="content" source="../media/explore/explore-vision.png" alt-text="Screenshot of vision capability cards in the Azure AI Studio explore tab." lightbox="../media/explore/explore-vision.png"::: +> [!TIP] +> You can also try GPT-4 Turbo with Vision capabilities in the Azure AI Studio playground. For more information, see [GPT-4 Turbo with Vision on your images and videos in Azure AI Studio playground](../quickstarts/multimodal-vision.md). + Explore more vision capabilities in the [Vision Studio](https://portal.vision.cognitive.azure.com/) and the [Azure AI Vision documentation](/azure/ai-services/computer-vision/). To try more Azure AI services, go to the following studio links: - [Content Safety](https://contentsafety.cognitive.azure.com/) - [Custom Translator](https://portal.customtranslator.azure.ai/) -You can conveniently access these links from a menu at the top-right corner of AI Studio. +You can conveniently access these links from the **All Azure AI** menu at the top-right corner of AI Studio. ## Prompt samples Prompt engineering is an important aspect of working with generative AI models as it allows users to have greater control, customization, and influence over the outputs. By skillfully designing prompts, users can harness the capabilities of generative AI models to generate desired content, address specific requirements, and cater to various application domains. -The prompt samples are designed to assist AI studio users in finding and utilizing prompts for common use-cases and quickly get started. Users can explore the catalog, view available prompts, and easily open them in a playground for further customization and fine-tuning. +The prompt samples are designed to assist AI Studio users in finding and utilizing prompts for common use cases and getting started quickly. Users can explore the catalog, view available prompts, and easily open them in a playground for further customization and fine-tuning. > [!NOTE] > These prompts serve as starting points to help users get started, and we recommend that you tune and evaluate them before using them in production. -On the **Explore** page, select **Samples** > **Prompts** from the left menu to learn more and try it out. +On the **Explore** page, select **Models** > **Prompt catalog** from the left menu to learn more and try it out. ### Filter by Modalities, Industries or Tasks |
ai-studio | Azure Open Ai Gpt 4V Tool | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/prompt-flow-tools/azure-open-ai-gpt-4v-tool.md | The prompt flow *Azure OpenAI GPT-4 Turbo with Vision* tool enables you to use y Currently, access to this service is granted only by application. You can apply for access to Azure OpenAI by completing the form at <a href="https://aka.ms/oai/access" target="_blank">https://aka.ms/oai/access</a>. Open an issue on this repo to contact us if you have an issue. -- An [Azure AI resource](../../how-to/create-azure-ai-resource.md) with a GPT-4 Turbo with Vision model deployed in one of the regions that support GPT-4 Turbo with Vision: Australia East, Switzerland North, Sweden Central, and West US. When you deploy from your project's **Deployments** page, select: `gpt-4` as the model name and `vision-preview` as the model version.+- An [Azure AI hub resource](../../how-to/create-azure-ai-resource.md) with a GPT-4 Turbo with Vision model deployed in one of the regions that support GPT-4 Turbo with Vision: Australia East, Switzerland North, Sweden Central, and West US. When you deploy from your project's **Deployments** page, select: `gpt-4` as the model name and `vision-preview` as the model version. ## Connection |
ai-studio | Content Safety Tool | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/prompt-flow-tools/content-safety-tool.md | Create an Azure Content Safety connection: 1. Sign in to [Azure AI Studio](https://studio.azureml.net/). 1. Go to **Settings** > **Connections**. 1. Select **+ New connection**.-1. Complete all steps in the **Create a new connection** dialog box. You can use an Azure AI resource or Azure AI Content Safety resource. An Azure AI resource that supports multiple Azure AI services is recommended. +1. Complete all steps in the **Create a new connection** dialog box. You can use an Azure AI hub resource or Azure AI Content Safety resource. An Azure AI hub resource that supports multiple Azure AI services is recommended. ## Build with the Content Safety tool |
ai-studio | Python Tool | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/prompt-flow-tools/python-tool.md | Create a custom connection that stores all your LLM API KEY or other required cr - azureml.flow.connection_type: Custom - azureml.flow.module: promptflow.connections - :::image type="content" source="./media/python-tool/custom-connection-meta.png" alt-text="Screenshot that shows add extra meta to custom connection in AI studio." lightbox = "./media/python-tool/custom-connection-meta.png"::: + :::image type="content" source="./media/python-tool/custom-connection-meta.png" alt-text="Screenshot that shows add extra meta to custom connection in AI Studio." lightbox = "./media/python-tool/custom-connection-meta.png"::: > [!NOTE] |
ai-studio | Serp Api Tool | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/prompt-flow-tools/serp-api-tool.md | Create a Serp connection: - azureml.flow.module: promptflow.connections - api_key: Your_Serp_API_key, please mark it as a secret. - :::image type="content" source="./media/serp-api-tool/serp-connection-meta.png" alt-text="Screenshot that shows add extra meta to custom connection in AI studio." lightbox = "./media/serp-api-tool/serp-connection-meta.png"::: + :::image type="content" source="./media/serp-api-tool/serp-connection-meta.png" alt-text="Screenshot that shows add extra meta to custom connection in AI Studio." lightbox = "./media/serp-api-tool/serp-connection-meta.png"::: The connection is the model used to establish connections with Serp API. Get your API key from the SerpAPI account dashboard. |
ai-studio | Quota | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/quota.md | Azure uses limits and quotas to prevent budget overruns due to fraud, and to hon In this article, you learn about: - Default limits on Azure resources -- Creating Azure AI resource-level quotas. +- Creating Azure AI hub resource-level quotas. - Viewing your quotas and limits - Requesting quota and limit increases Azure Storage has a limit of 250 storage accounts per region, per subscription. ## View and request quotas in the studio -Use quotas to manage compute target allocation between multiple Azure AI resources in the same subscription. +Use quotas to manage compute target allocation between multiple Azure AI hub resources in the same subscription. -By default, all Azure AI resources share the same quota as the subscription-level quota for VM families. However, you can set a maximum quota for individual VM families for more granular cost control and governance on Azure AI resources in a subscription. Quotas for individual VM families let you share capacity and avoid resource contention issues. +By default, all Azure AI hub resources share the same quota as the subscription-level quota for VM families. However, you can set a maximum quota for individual VM families for more granular cost control and governance on Azure AI hub resources in a subscription. Quotas for individual VM families let you share capacity and avoid resource contention issues. In Azure AI Studio, select **Manage** from the top menu. Select **Quota** to view your quota at the subscription level in a region for both Azure Machine Learning virtual machine families and for your Azure OpenAI resources. |
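The subscription-level VM-family quota that these hub-level limits draw from can also be checked per region from the command line. A sketch with the Azure CLI, using a placeholder region:

```azurecli
# Show current usage against the limit for each VM family in a region.
az vm list-usage --location eastus --output table
```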
ai-studio | Sdk Install | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/sdk-install.md | -The Azure AI SDK is a family of packages that provide access to Azure AI services such as Azure OpenAI and Speech. +The Azure AI SDK is a family of packages that provide access to Azure AI services such as Azure OpenAI. In this article, you'll learn how to get started with the Azure AI SDK for generative AI applications. You can either: - [Install the SDK into an existing development environment](#install-the-sdk-into-an-existing-development-environment) or |
ai-studio | Troubleshoot Deploy And Monitor | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/troubleshoot-deploy-and-monitor.md | This article provides instructions on how to troubleshoot your deployments and m For the general deployment error code reference, you can go to the [Azure Machine Learning documentation](/azure/machine-learning/how-to-troubleshoot-online-endpoints). Much of the information there also applies to Azure AI Studio deployments. **Question:** I got the following error message. What should I do?-"Use of Azure OpenAI models in Azure Machine Learning requires Azure OpenAI services resources. This subscription or region doesn't have access to this model." +"Use of Azure OpenAI models in Azure Machine Learning requires Azure OpenAI Services resources. This subscription or region doesn't have access to this model." **Answer:** You might not have access to this particular Azure OpenAI model. For example, your subscription might not have access to the latest GPT model yet or this model isn't offered in the region you want to deploy to. You can learn more about it on [Azure OpenAI Service models](../../ai-services/openai/concepts/models.md). You might have come across an ImageBuildFailure error: This happens when the env Option 1: Find the build log for the Azure default blob storage. -1. Go to your project and select the settings icon on the lower left corner. -2. Select YourAIResourceName under AI Resource on the Settings page. -3. On the AI resource page, select YourStorageName under Storage Account. This should be the name of storage account listed in the error message you received. -4. On the storage account page, select Container under Data Storage on the left navigation UI -5. Select the ContainerName listed in the error message you received. 6. Select through folders to find the build logs. Option 2: Find the build log within Azure Machine Learning studio, which is a separate portal from Azure AI Studio. +1. Go to your project in [Azure AI Studio](https://ai.azure.com) and select the settings icon in the lower left corner. +2. Select your Azure AI hub resource name under **Resource configurations** on the **Settings** page. +3. On the Azure AI hub overview page, select your storage account name. This should be the name of the storage account listed in the error message you received. You'll be taken to the storage account page in the [Azure portal](https://portal.azure.com). +4. On the storage account page, select **Containers** under **Data Storage** on the left menu. +5. Select the container name listed in the error message you received. 6. Select through folders to find the build logs. |
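As an alternative to clicking through the portal in option 1, you can list the container's contents from the command line to locate the build log. A sketch with the Azure CLI, assuming your account holds a data-plane role such as Storage Blob Data Reader; substitute the storage account and container names from the error message:

```azurecli
# List blobs in the container named by the ImageBuildFailure error.
# --auth-mode login authenticates with your Microsoft Entra identity.
az storage blob list \
  --account-name <storage-account-name> \
  --container-name <container-name> \
  --auth-mode login \
  --output table
```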
ai-studio | Content Safety | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/quickstarts/content-safety.md | In this quickstart, get started with the [Azure AI Content Safety](/azure/ai-ser ## Prerequisites + * An active Azure account. If you don't have one, you can [create one for free](https://azure.microsoft.com/free/cognitive-services/).-* An [Azure AI resource](../how-to/create-azure-ai-resource.md) and [project](../how-to/create-projects.md) in Azure AI Studio. +* An [Azure AI hub resource](../how-to/create-azure-ai-resource.md) and [project](../how-to/create-projects.md) in Azure AI Studio. ## Moderate text or images |
ai-studio | Hear Speak Playground | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/quickstarts/hear-speak-playground.md | The speech to text and text to speech features can be used together or separatel ## Prerequisites + - An Azure subscription - <a href="https://azure.microsoft.com/free/cognitive-services" target="_blank">Create one for free</a>. - Access granted to Azure OpenAI in the desired Azure subscription. Currently, access to this service is granted only by application. You can apply for access to Azure OpenAI by completing the form at <a href="https://aka.ms/oai/access" target="_blank">https://aka.ms/oai/access</a>. Open an issue on this repo to contact us if you have an issue. -- An [Azure AI resource](../how-to/create-azure-ai-resource.md) with a chat model deployed. For more information about model deployment, see the [resource deployment guide](../../ai-services/openai/how-to/create-resource.md).+- An [Azure AI hub resource](../how-to/create-azure-ai-resource.md) with a chat model deployed. For more information about model deployment, see the [resource deployment guide](../../ai-services/openai/how-to/create-resource.md). - An [Azure AI project](../how-to/create-projects.md) in Azure AI Studio. |
ai-studio | Multimodal Vision | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/quickstarts/multimodal-vision.md | Extra usage fees might apply for using GPT-4 Turbo with Vision and Azure AI Visi ## Prerequisites + - An Azure subscription - <a href="https://azure.microsoft.com/free/cognitive-services" target="_blank">Create one for free</a>. - Access granted to Azure OpenAI in the desired Azure subscription. Currently, access to this service is granted only by application. You can apply for access to Azure OpenAI by completing the form at <a href="https://aka.ms/oai/access" target="_blank">https://aka.ms/oai/access</a>. Open an issue on this repo to contact us if you have an issue. -- An [Azure AI resource](../how-to/create-azure-ai-resource.md) with a GPT-4 Turbo with Vision model deployed in one of the regions that support GPT-4 Turbo with Vision: Australia East, Switzerland North, Sweden Central, and West US. When you deploy from your project's **Deployments** page, select: `gpt-4` as the model name and `vision-preview` as the model version.+- An [Azure AI hub resource](../how-to/create-azure-ai-resource.md) with a GPT-4 Turbo with Vision model deployed in one of the [regions that support GPT-4 Turbo with Vision](../../ai-services/openai/concepts/models.md#gpt-4-and-gpt-4-turbo-preview-model-availability): Australia East, Switzerland North, Sweden Central, and West US. When you deploy from your Azure AI project's **Deployments** page, select: `gpt-4` as the model name and `vision-preview` as the model version. - An [Azure AI project](../how-to/create-projects.md) in Azure AI Studio. ## Start a chat session to analyze images or video |
ai-studio | Playground Completions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/quickstarts/playground-completions.md | Use this article to get started making your first calls to Azure OpenAI. - Access granted to Azure OpenAI in the desired Azure subscription. Currently, access to this service is granted only by application. You can apply for access to Azure OpenAI by completing the form at <a href="https://aka.ms/oai/access" target="_blank">https://aka.ms/oai/access</a>. Open an issue on this repo to contact us if you have an issue.-- An [Azure AI resource](../how-to/create-azure-ai-resource.md) with a model deployed. For more information about model deployment, see the [resource deployment guide](../../ai-services/openai/how-to/create-resource.md).+- An [Azure AI hub resource](../how-to/create-azure-ai-resource.md) with a model deployed. For more information about model deployment, see the [resource deployment guide](../../ai-services/openai/how-to/create-resource.md). - An [Azure AI project](../how-to/create-projects.md) in Azure AI Studio. ### Try text completions |
ai-studio | Region Support | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/reference/region-support.md | Title: Azure AI Studio feature availability across clouds and regions-+ description: This article lists Azure AI Studio feature availability across clouds and regions. Azure AI Studio brings together various Azure AI capabilities that previously we ## Azure Public regions -Azure AI Studio is currently available in preview in the following Azure regions. You can create [Azure AI resources](../how-to/create-azure-ai-resource.md) and projects in these regions. +Azure AI Studio is currently available in preview in the following Azure regions. You can create [Azure AI hub resources](../how-to/create-azure-ai-resource.md) and projects in these regions. - Australia East - Brazil South Azure AI Studio preview is currently not available in Azure Government regions o ## Speech capabilities -Speech capabilities including custom neural voice vary in regional availability due to underlying hardware availability. See [Speech service supported regions](../../ai-services/speech-service/regions.md) for an overview. ++Azure AI Speech capabilities, including custom neural voice, vary in regional availability due to underlying hardware availability. See [Speech service supported regions](../../ai-services/speech-service/regions.md) for an overview. ## Next steps |
ai-studio | Deploy Chat Web App | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/tutorials/deploy-chat-web-app.md | The steps in this tutorial are: Currently, access to this service is granted only by application. You can apply for access to Azure OpenAI by completing the form at <a href="https://aka.ms/oai/access" target="_blank">https://aka.ms/oai/access</a>. Open an issue on this repo to contact us if you have an issue. -- An [Azure AI resource](../how-to/create-azure-ai-resource.md) and [project](../how-to/create-projects.md) in Azure AI Studio.+- An [Azure AI hub resource](../how-to/create-azure-ai-resource.md) and [project](../how-to/create-projects.md) in Azure AI Studio. - You need at least one file to upload that contains example data. To complete this tutorial, use the product information samples from the [Azure/aistudio-copilot-sample repository on GitHub](https://github.com/Azure/aistudio-copilot-sample/tree/main/data). Specifically, the [product_info_11.md](https://github.com/Azure/aistudio-copilot-sample/blob/main/dat` on your local computer. Once you're satisfied with the experience in Azure AI Studio, you can deploy the ### Find your resource group in the Azure portal -In this tutorial, your web app is deployed to the same resource group as your Azure AI resource. Later you configure authentication for the web app in the Azure portal. +In this tutorial, your web app is deployed to the same resource group as your Azure AI hub resource. Later you configure authentication for the web app in the Azure portal. Follow these steps to navigate from Azure AI Studio to your resource group in the Azure portal: -1. In Azure AI Studio, select **Manage** from the top menu and then select **Details**. If you have multiple Azure AI resources, select the one you want to use in order to see its details. +1. In Azure AI Studio, select **Manage** from the top menu and then select **Details**. If you have multiple Azure AI hub resources, select the one you want to use in order to see its details. 1. In the **Resource configuration** pane, select the resource group name to open the resource group in the Azure portal. In this example, the resource group is named `rg-docsazureairesource`. :::image type="content" source="../media/tutorials/chat-web-app/resource-group-manage-page.png" alt-text="Screenshot of the resource group in the Azure AI Studio." lightbox="../media/tutorials/chat-web-app/resource-group-manage-page.png"::: -1. You should now be in the Azure portal, viewing the contents of the resource group where you deployed the Azure AI resource. +1. You should now be in the Azure portal, viewing the contents of the resource group where you deployed the Azure AI hub resource. :::image type="content" source="../media/tutorials/chat-web-app/resource-group-azure-portal.png" alt-text="Screenshot of the resource group in the Azure portal." lightbox="../media/tutorials/chat-web-app/resource-group-azure-portal.png"::: To deploy the web app: 1. On the **Deploy to a web app** page, enter the following details: - **Name**: A unique name for your web app. - **Subscription**: Your Azure subscription.- - **Resource group**: Select a resource group in which to deploy the web app. You can use the same resource group as the Azure AI resource. - - **Location**: Select a location in which to deploy the web app. You can use the same location as the Azure AI resource. + - **Resource group**: Select a resource group in which to deploy the web app. 
You can use the same resource group as the Azure AI hub resource. + - **Location**: Select a location in which to deploy the web app. You can use the same location as the Azure AI hub resource. - **Pricing plan**: Choose a pricing plan for the web app. - **Enable chat history in the web app**: For the tutorial, make sure this box isn't selected. - **I acknowledge that web apps will incur usage to my account**: Selected To deploy the web app: By default, the web app will only be accessible to you. In this tutorial, you add authentication to restrict access to the app to members of your Azure tenant. Users are asked to sign in with their Microsoft Entra account to be able to access your app. You can follow a similar process to add another identity provider if you prefer. The app doesn't use the user's sign in information in any way other than verifying they're a member of your tenant. -1. Return to the browser tab containing the Azure portal (or re-open the [Azure portal](https://portal.azure.com?azure-portal=true) in a new browser tab) and view the contents of the resource group where you deployed the Azure AI resource and web app (you might need to refresh the view the see the web app). +1. Return to the browser tab containing the Azure portal (or re-open the [Azure portal](https://portal.azure.com?azure-portal=true) in a new browser tab) and view the contents of the resource group where you deployed the Azure AI hub resource and web app (you might need to refresh the view to see the web app). 1. Select the **App Service** resource from the list of resources in the resource group. |
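If the web app is hard to spot among the other resources, you can also enumerate the resource group from the command line instead of refreshing the portal view. A sketch with the Azure CLI, reusing the example resource group name from this tutorial:

```azurecli
# The web app appears with type Microsoft.Web/sites once deployment finishes.
az resource list \
  --resource-group rg-docsazureairesource \
  --output table
```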
ai-studio | Deploy Copilot Ai Studio | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/tutorials/deploy-copilot-ai-studio.md | The steps in this tutorial are: Currently, access to this service is granted only by application. You can apply for access to Azure OpenAI by completing the form at <a href="https://aka.ms/oai/access" target="_blank">https://aka.ms/oai/access</a>. Open an issue on this repo to contact us if you have an issue. -- You need an Azure AI resource and your user role must be **Azure AI Developer**, **Contributor**, or **Owner** on the Azure AI resource. For more information, see [Azure AI resources](../concepts/ai-resources.md) and [Azure AI roles](../concepts/rbac-ai-studio.md).- - If your role is **Contributor** or **Owner**, you can [create an Azure AI resource in this tutorial](#create-an-azure-ai-project-in-azure-ai-studio). - - If your role is **Azure AI Developer**, the Azure AI resource must already be created. +- You need an Azure AI hub resource and your user role must be **Azure AI Developer**, **Contributor**, or **Owner** on the Azure AI hub resource. For more information, see [Azure AI hub resources](../concepts/ai-resources.md) and [Azure AI roles](../concepts/rbac-ai-studio.md). + - If your role is **Contributor** or **Owner**, you can [create an Azure AI hub resource in this tutorial](#create-an-azure-ai-project-in-azure-ai-studio). + - If your role is **Azure AI Developer**, the Azure AI hub resource must already be created. - Your subscription needs to be below your [quota limit](../how-to/quota.md) to [deploy a new model in this tutorial](#deploy-a-chat-model). Otherwise you already need to have a [deployed chat model](../how-to/deploy-models-openai.md). The steps in this tutorial are: ## Create an Azure AI project in Azure AI Studio -Your Azure AI project is used to organize your work and save state while building your copilot. During this tutorial, your project contains your data, prompt flow runtime, evaluations, and other resources. For more information about the Azure AI projects and resources model, see [Azure AI resources](../concepts/ai-resources.md). +Your Azure AI project is used to organize your work and save state while building your copilot. During this tutorial, your project contains your data, prompt flow runtime, evaluations, and other resources. For more information about the Azure AI projects and resources model, see [Azure AI hub resources](../concepts/ai-resources.md). To create an Azure AI project in Azure AI Studio, follow these steps: 1. Sign in to [Azure AI Studio](https://ai.azure.com) and go to the **Build** page from the top menu. 1. Select **+ New project**. 1. Enter a name for the project.-1. Select an Azure AI resource from the dropdown to host your project. If you don't have access to an Azure AI resource yet, select **Create a new resource**. +1. Select an Azure AI hub resource from the dropdown to host your project. If you don't have access to an Azure AI hub resource yet, select **Create a new resource**. :::image type="content" source="../media/tutorials/copilot-deploy-flow/create-project-details.png" alt-text="Screenshot of the project details page within the create project dialog." lightbox="../media/tutorials/copilot-deploy-flow/create-project-details.png"::: > [!NOTE]- > To create an Azure AI resource, you must have **Owner** or **Contributor** permissions on the selected resource group. It's recommended to share an Azure AI resource with your team. 
This lets you share configurations like data connections with all projects, and centrally manage security settings and spend. + > To create an Azure AI hub resource, you must have **Owner** or **Contributor** permissions on the selected resource group. It's recommended to share an Azure AI hub resource with your team. This lets you share configurations like data connections with all projects, and centrally manage security settings and spend. -1. If you're creating a new Azure AI resource, enter a name. +1. If you're creating a new Azure AI hub resource, enter a name. :::image type="content" source="../media/tutorials/copilot-deploy-flow/create-project-resource.png" alt-text="Screenshot of the create resource page within the create project dialog." lightbox="../media/tutorials/copilot-deploy-flow/create-project-resource.png"::: To create an Azure AI project in Azure AI Studio, follow these steps: 1. Leave the **Resource group** as the default to create a new resource group. Alternatively, you can select an existing resource group from the dropdown. > [!TIP]- > Especially for getting started it's recommended to create a new resource group for your project. This allows you to easily manage the project and all of its resources together. When you create a project, several resources are created in the resource group, including an Azure AI resource, a container registry, and a storage account. + > Especially for getting started it's recommended to create a new resource group for your project. This allows you to easily manage the project and all of its resources together. When you create a project, several resources are created in the resource group, including an Azure AI hub resource, a container registry, and a storage account. -1. Enter the **Location** for the Azure AI resource and then select **Next**. The location is the region where the Azure AI resource is hosted. The location of the Azure AI resource is also the location of the project. +1. Enter the **Location** for the Azure AI hub resource and then select **Next**. The location is the region where the Azure AI hub resource is hosted. The location of the Azure AI hub resource is also the location of the project. > [!NOTE]- > Azure AI resources and services availability differ per region. For example, certain models might not be available in certain regions. The resources in this tutorial are created in the **East US 2** region. + > Azure AI hub resource and service availability differs per region. For example, certain models might not be available in certain regions. The resources in this tutorial are created in the **East US 2** region. 1. Review the project details and then select **Create a project**. |
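If project creation fails with a permissions error, you can confirm which roles you hold on the target resource group from the command line. A sketch with the Azure CLI, using placeholder values:

```azurecli
# List your role assignments on the resource group; Owner or Contributor
# is required to create a new Azure AI hub resource.
az role assignment list \
  --assignee "user@contoso.com" \
  --resource-group my-resource-group \
  --output table
```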
ai-studio | Screen Reader | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/tutorials/screen-reader.md | -This article is for people who use screen readers such as Microsoft's Narrator, JAWS, NVDA or Apple's Voiceover, and provides guidance on how to use the Azure AI Studio with a screen reader. +This article is for people who use screen readers such as Microsoft's Narrator, JAWS, NVDA, or Apple's VoiceOver. You learn how to use the Azure AI Studio with a screen reader. ## Getting started in the Azure AI Studio For efficient navigation, it might be helpful to navigate by landmarks to move b ## Explore -In **Explore** you can explore the different capabilities of Azure AI before creating a project. You can find this in the primary navigation landmark. +In **Explore** you can explore the different capabilities of Azure AI before creating a project. You can find this page in the primary navigation landmark. -Within **Explore**, you can explore many capabilities found within the secondary navigation. These include model catalog, model leaderboard, and pages for Azure AI services such as Speech, Vision, and Content Safety. -- Model catalog contains three main areas: Announcements, Models and Filters. You can use Search and Filters to narrow down model selection +Within **Explore**, you can [explore many capabilities](../how-to/models-foundation-azure-ai.md) found within the secondary navigation. These include [model catalog](../how-to/model-catalog.md), model leaderboard, and pages for Azure AI services such as Speech, Vision, and Content Safety. +- [Model catalog](../how-to/model-catalog.md) contains three main areas: Announcements, Models, and Filters. You can use Search and Filters to narrow down model selection. - Azure AI service pages such as Speech consist of many cards containing links. These cards lead you to demo experiences where you can sample our AI capabilities and might link out to another webpage. ## Projects To work within the Azure AI Studio, you must first [create a project](../how-to/create-projects.md): -1. Navigate to the Build tab in the primary navigation. -1. Press the Tab key until you hear *New project* and select this button. +1. In [Azure AI Studio](https://ai.azure.com), navigate to the **Build** tab in the primary navigation. +1. Press the **Tab** key until you hear *New project* and select this button. 1. Enter the information requested in the **Create a new project** dialog. You're then taken to the project details page. From the **Build** tab, navigate to the secondary navigation landmark and press ### Playground structure -When you first arrive the playground mode dropdown is set to **Chat** by default. In this mode the playground is composed of the command toolbar and three main panes: **Assistant setup**, **Chat session**, and **Configuration**. If you have added your own data in the playground, the **Citations** pane will also appear when selecting a citation as part of the model response. +When you first arrive, the playground mode dropdown is set to **Chat** by default. In this mode, the playground is composed of the command toolbar and three main panes: **Assistant setup**, **Chat session**, and **Configuration**. If you added your own data in the playground, the **Citations** pane also appears when selecting a citation as part of the model response. You can navigate by heading to move between these panes, as each pane has its own H2 heading. 
### Assistant setup pane -This is where you can set up the chat assistant according to your organization's needs. +The assistant setup pane is where you can set up the chat assistant according to your organization's needs. Once you edit the system message or examples, your changes don't save automatically. Press the **Save changes** button to ensure your changes are saved. ### Chat session pane -This is where you can chat to the model and test out your assistant -- After you send a message, the model might take some time to respond, especially if the response is long. You hear a screen reader announcement "Message received from the chatbot" when the model has finished composing a response. +The chat session pane is where you can chat with the model and test out your assistant: +- After you send a message, the model might take some time to respond, especially if the response is long. You hear a screen reader announcement "Message received from the chatbot" when the model finishes composing a response. - Content in the chatbot follows this format: ``` There's also a dashboard view provided to allow you to compare evaluation runs. ## Technical support for customers with disabilities -Microsoft wants to provide the best possible experience for all our customers. If you have a disability or questions related to accessibility, please contact the Microsoft Disability Answer Desk for technical assistance. The Disability Answer Desk support team is trained in using many popular assistive technologies and can offer assistance in English, Spanish, French, and American Sign Language. Go to the Microsoft Disability Answer Desk site to find out the contact details for your region. +Microsoft wants to provide the best possible experience for all our customers. If you have a disability or questions related to accessibility, contact the Microsoft Disability Answer Desk for technical assistance. The Disability Answer Desk support team is trained in using many popular assistive technologies. They can offer assistance in English, Spanish, French, and American Sign Language. Go to the Microsoft Disability Answer Desk site to find out the contact details for your region. -If you're a government, commercial, or enterprise customer, please contact the enterprise Disability Answer Desk. +If you're a government, commercial, or enterprise customer, contact the enterprise Disability Answer Desk. ## Next steps * Learn how you can build generative AI applications in the [Azure AI Studio](../what-is-ai-studio.md). |
ai-studio | What Is Ai Studio | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/what-is-ai-studio.md | Azure AI Studio brings together capabilities from across multiple Azure AI servi [Azure AI Studio](https://ai.azure.com) is designed for developers to: - Build generative AI applications on an enterprise-grade platform. -- Directly from the studio you can interact with a project code-first via the Azure AI SDK and Azure AI CLI. +- Directly from the studio you can interact with a project code-first via the [Azure AI SDK](how-to/sdk-install.md) and [Azure AI CLI](how-to/cli-install.md). - Azure AI Studio is a trusted and inclusive platform that empowers developers of all abilities and preferences to innovate with AI and shape the future. - Seamlessly explore, build, test, and deploy using cutting-edge AI tools and ML models, grounded in responsible AI practices. -- Build together as one team. Your Azure AI resource provides enterprise-grade security, and a collaborative environment with shared files and connections to pretrained models, data and compute.-- Organize your way. Your project helps you save state, allowing you iterate from first idea, to first prototype, and then first production deployment. Also easily invite others to collaborate along this journey.+- Build together as one team. Your [Azure AI hub resource](./concepts/ai-resources.md) provides enterprise-grade security, and a collaborative environment with shared files and connections to pretrained models, data and compute. +- Organize your way. Your [Azure AI project](./how-to/create-projects.md) helps you save state, allowing you to iterate from first idea, to first prototype, and then first production deployment. Also easily invite others to collaborate along this journey. With Azure AI Studio, you can evaluate large language model (LLM) responses and orchestrate prompt application components with prompt flow for better performance. The platform facilitates scalability for transforming proof of concepts into full-fledged production with ease. Continuous monitoring and refinement support long-term success. ## Getting around in Azure AI Studio -Wherever you're at or going in Azure AI Studio, use the Home, Explore, Build, and Manage tabs to find your way around. -+Wherever you're at or going in Azure AI Studio, use the **Home**, **Explore**, **Build**, and **Manage** tabs to find your way around. # [Home](#tab/home) Build is an experience where AI Devs and ML Pros can build or customize AI solut - Simplified development of large language model (LLM) solutions and copilots with end-to-end app templates and prompt samples for common use cases. - Orchestration framework to handle the complex mapping of functions and code between LLMs, tools, custom code, prompts, data, search indexes, and more.-- Evaluate, deploy, and continuously monitor your AI application and app performance +- Evaluate, deploy, and continuously monitor your AI application and app performance. :::image type="content" source="./media/explore/ai-studio-tab-build.png" alt-text="Screenshot of the signed-out Azure AI Studio Build page." lightbox="./media/explore/ai-studio-tab-build.png"::: Build is an experience where AI Devs and ML Pros can build or customize AI solut As a developer, you can manage settings such as connections and compute. Your admin will mainly use this section to look at access control, usage, and billing. 
-- Centralized backend infrastructure to reduce complexity for developers-- A single Azure AI resource for enterprise configuration, unified data story, and built-in governance+- Centralized backend infrastructure to reduce complexity for developers. +- A single Azure AI hub resource for enterprise configuration, unified data story, and built-in governance. :::image type="content" source="./media/explore/ai-studio-tab-manage.png" alt-text="Screenshot of the signed-out Azure AI Studio manage page." lightbox="./media/explore/ai-studio-tab-manage.png"::: -## Azure AI studio enterprise chat solution demo +## Azure AI Studio enterprise chat solution demo Learn how to create a retail copilot using your data with Azure AI Studio in this [end-to-end walkthrough video](https://youtu.be/Qes7p5w8Tz8). > [!VIDEO https://www.youtube.com/embed/Qes7p5w8Tz8] Using Azure AI Studio also incurs cost associated with the underlying services, ## Region availability -Azure AI Studio is currently available in the following regions: Australia East, Brazil South, Canada Central, East US, East US 2, France Central, Germany West Central, India South, Japan East, North Central US, Norway East, Poland Central, South Africa North, South Central US, Sweden Central, Switzerland North, UK South, West Europe, and West US. --To learn more, see [Azure AI Studio regions](./reference/region-support.md). +Azure AI Studio is available in most regions where Azure AI services are available. For more information, see [region support for Azure AI Studio](reference/region-support.md). ## How to get access |
aks | Egress Outboundtype | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/egress-outboundtype.md | You can customize egress for an AKS cluster to fit specific scenarios. By defaul This article covers the various types of outbound connectivity that are available in AKS clusters. > [!NOTE]-> You can now update the `outboundType` after cluster creation. This feature is in preview. See [Updating `outboundType after cluster creation (preview)](#updating-outboundtype-after-cluster-creation-preview). +> You can now update the `outboundType` after cluster creation. ## Limitations You must deploy the AKS cluster into an existing virtual network with a subnet t For more information, see [configuring cluster egress via user-defined routing](egress-udr.md). -## Updating `outboundType` after cluster creation (preview) +## Updating `outboundType` after cluster creation Changing the outbound type after cluster creation will deploy or remove resources as required to put the cluster into the new egress configuration. Migration is only supported between `loadBalancer`, `managedNATGateway` (if usin > [!WARNING] > Changing the outbound type on a cluster is disruptive to network connectivity and will result in a change of the cluster's egress IP address. If any firewall rules have been configured to restrict traffic from the cluster, you need to update them to match the new egress IP address. - ### Update cluster to use a new outbound type > [!NOTE] |
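Since the `outboundType` update is no longer gated behind preview, a minimal Azure CLI sketch of the switch might look like the following. The resource group, cluster name, and target outbound type are placeholders to adapt to your environment.

```azurecli
# Sketch: move an existing cluster from loadBalancer egress to a managed NAT gateway.
# <resource-group> and <cluster-name> are placeholders.
az aks update \
    --resource-group <resource-group> \
    --name <cluster-name> \
    --outbound-type managedNATGateway
```

Because the switch changes the cluster's egress IP address, update any dependent firewall rules before sending production traffic through the new path.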
aks | Monitor Control Plane Metrics | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/monitor-control-plane-metrics.md | This article helps you understand this new feature, how to implement it, and how - [Private link](../azure-monitor/logs/private-link-security.md) isn't supported. - Only the default [ama-metrics-settings-config-map](../azure-monitor/containers/prometheus-metrics-scrape-configuration.md#configmaps) can be customized. All other customizations are not supported. - The cluster must use [managed identity authentication](use-managed-identity.md).-- This feature is currently available in the following regions: West Central US, East Asia, UK South, East US, Australia Central, Australia East, Brazil South, Canada Central, Central India, East US 2, France Central, and Germany West Central, Israel Central, Italy North, Japan East, JioIndia West, Korea Central, Malaysia South, Mexico Central, North Central US, North Europe, Norway East, Qatar Central, South Africa North, Sweden Central, Switzerland North, Taiwan North, UAE North, UK West, West US 2.+- This feature is currently available in the following regions: West Central US, East Asia, UK South, East US, Australia Central, Australia East, Brazil South, Canada Central, Central India, East US 2, France Central, and Germany West Central, Israel Central, Italy North, Japan East, JioIndia West, Korea Central, Malaysia South, Mexico Central, North Central US, North Europe, Norway East, Qatar Central, South Africa North, Sweden Central, Switzerland North, Taiwan North, UAE North, UK West, West US 2, Australia Central 2, Austrial South East, Austria East, Belgium Central, Brazil South East, Canada East, Central US, Chile Central, France South, Germany North, Israel North West, Japan West, Jio India Central. ### Install or update the `aks-preview` Azure CLI extension |
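For the extension step named above, a minimal sketch follows. The preview feature flag name in the second part is an assumption based on the feature's naming convention; confirm it in the linked article before registering.

```azurecli
# Install the aks-preview extension, or update it if it's already present.
az extension add --name aks-preview
az extension update --name aks-preview

# Assumed preview feature flag for control plane metrics; verify the exact
# name in the article before running.
az feature register --namespace Microsoft.ContainerService --name AzureMonitorMetricsControlPlanePreview
```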
aks | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/policy-reference.md | Title: Built-in policy definitions for Azure Kubernetes Service description: Lists Azure Policy built-in policy definitions for Azure Kubernetes Service. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 01/30/2024 Last updated : 02/06/2024 |
aks | Static Ip | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/static-ip.md | This article shows you how to create a static public IP address and assign it to 2. Get the static public IP address using the [`az network public-ip list`][az-network-public-ip-list] command. Specify the name of the node resource group and public IP address you created, and query for the `ipAddress`. ```azurecli-interactive- az network public-ip show --resource-group myNetworkResourceGroup --name myAKSPublicIP --query ipAddress --output tsv + az network public-ip show --resource-group <node resource group name> --name myAKSPublicIP --query ipAddress --output tsv ``` ## Create a service using the static IP address This article shows you how to create a static public IP address and assign it to kind: Service metadata: annotations:- service.beta.kubernetes.io/azure-load-balancer-resource-group: myNetworkResourceGroup + service.beta.kubernetes.io/azure-load-balancer-resource-group: <node resource group name> service.beta.kubernetes.io/azure-pip-name: myAKSPublicIP name: azure-load-balancer spec: This article shows you how to create a static public IP address and assign it to kind: Service metadata: annotations:- service.beta.kubernetes.io/azure-load-balancer-resource-group: myNetworkResourceGroup + service.beta.kubernetes.io/azure-load-balancer-resource-group: <node resource group name> service.beta.kubernetes.io/azure-pip-name: myAKSPublicIP service.beta.kubernetes.io/azure-dns-label-name: <unique-service-label> name: azure-load-balancer |
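For context on the snippet above, the static public IP it queries would typically be created in the cluster's node resource group first. A minimal sketch, with the resource group name as a placeholder:

```azurecli
# Sketch: create a standard-SKU static public IP in the cluster's node resource group.
az network public-ip create \
    --resource-group <node resource group name> \
    --name myAKSPublicIP \
    --sku Standard \
    --allocation-method static
```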
api-management | Api Management Howto Log Event Hubs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-log-event-hubs.md | Use the API Management [REST API](/rest/api/apimanagement/current-preview/logger { "properties": { "loggerType": "azureEventHub",- "description": "adding a new logger with system assigned managed identity", + "description": "adding a new logger with user-assigned managed identity", "credentials": { "endpointAddress":"<EventHubsNamespace>.servicebus.windows.net", "identityClientId":"<ClientID>", |
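One hedged way to submit the logger payload above is `az rest` against the management API. The api-version shown is an assumption, and the placeholder IDs and the local `logger.json` file (containing the `properties` body above) must be replaced with your own values.

```azurecli
# Sketch: create the Event Hubs logger from a local logger.json payload.
# The api-version value and all <placeholders> are assumptions to adapt.
az rest --method put \
    --url "https://management.azure.com/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.ApiManagement/service/<apim-name>/loggers/<logger-id>?api-version=2022-08-01" \
    --body @logger.json
```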
api-management | Migrate Stv1 To Stv2 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/migrate-stv1-to-stv2.md | For more information about the `stv1` and `stv2` platforms and the benefits of u API Management platform migration from `stv1` to `stv2` involves updating the underlying compute alone and has no impact on the service/API configuration persisted in the storage layer. -* The upgrade process involves creating a new compute in parallel to the old compute. The old compute takes 15-45 mins to be deleted with an option to delay it for up to 48 hours. +* The upgrade process involves creating a new compute in parallel to the old compute. The old compute takes 15-45 mins to be deleted with an option to delay it for up to 48 hours. The 48-hour delay option is only available for VNet-injected services. * The API Management status in the Portal will be "Updating". * Azure manages the management endpoint DNS, and updates to the new compute immediately on successful migration. * The Gateway DNS still points to the old compute if custom domain is in use. On successful migration, update any network dependencies including DNS, firewall - **Can I roll back the migration if required?** - Yes, you can. If there's a failure during the migration process, the instance will automatically roll back to the stv1 platform. However, if you encounter any other issues post migration, you can roll back only if you have requested an extension to the old gateway purge. By default, the old gateway is purged in 15 mins that can be extended up to 48 hours by contacting support in advance. You should make sure to contact support before the old gateway is purged, if a rollback is required. + **VNet-injected instances:** Yes, you can. If there's a failure during the migration process, the instance will automatically roll back to the stv1 platform. However, if you encounter any other issues post migration, you can roll back only if you have requested an extension to the old gateway purge. By default, the old gateway is purged in 15 minutes, which can be extended up to 48 hours by contacting support in advance. You should make sure to contact support before the old gateway is purged, if a rollback is required. ++ **Non-VNet injected instances:** If there is a failure during the migration process, the instance will automatically roll back to the stv1 platform. If the migration completes successfully, a rollback is not possible. - **Is there any change required in custom domain/private DNS zones?** |
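As a rough sketch of triggering the migration from the CLI: both the `migrateToStv2` endpoint path and the api-version below are assumptions based on the public migrate-to-stv2 guidance, so validate them against the linked article before running against a production instance.

```azurecli
# Sketch only: endpoint path and api-version are assumptions; verify first.
az rest --method post \
    --url "https://management.azure.com/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.ApiManagement/service/<apim-name>/migrateToStv2?api-version=2023-03-01-preview"
```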
api-management | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/policy-reference.md | Title: Built-in policy definitions for Azure API Management description: Lists Azure Policy built-in policy definitions for Azure API Management. These built-in policy definitions provide approaches to managing your Azure resources. Previously updated : 01/30/2024 Last updated : 02/06/2024 |
app-service | Configure Ssl App Service Certificate | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-ssl-app-service-certificate.md | The following domain verification methods are supported: | **App Service Verification** | The most convenient option when the domain is already mapped to an App Service app in the same subscription because the App Service app has already verified the domain ownership. Review the last step in [Confirm domain ownership](#confirm-domain-ownership). | | **Domain Verification** | Confirm an [App Service domain that you purchased from Azure](manage-custom-dns-buy-domain.md). Azure automatically adds the verification TXT record for you and completes the process. | | **Mail Verification** | Confirm the domain by sending an email to the domain administrator. Instructions are provided when you select the option. |-| **Manual Verification** | Confirm the domain by using either a DNS TXT record or an HTML page, which applies only to **Standard** certificates per the following note. The steps are provided after you select the option. The HTML page option doesn't work for web apps with "HTTPS Only" enabled. | +| **Manual Verification** | Confirm the domain by using either a DNS TXT record or an HTML page, which applies only to **Standard** certificates per the following note. The steps are provided after you select the option. The HTML page option doesn't work for web apps with "HTTPS Only" enabled. For subdomain verification, the domain verification token needs to be added to the root domain. | > [!IMPORTANT] > With the **Standard** certificate, you get a certificate for the requested top-level domain *and* the `www` subdomain, for example, `contoso.com` and `www.contoso.com`. However, **App Service Verification** and **Manual Verification** both use HTML page verification, which doesn't support the `www` subdomain when issuing, rekeying, or renewing a certificate. For the **Standard** certificate, use **Domain Verification** and **Mail Verification** to include the `www` subdomain with the requested top-level domain in the certificate. |
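For the new note about subdomain verification, if the zone is hosted in Azure DNS, a minimal sketch of placing the verification token on the root domain looks like this; the zone name, resource group, and token value are placeholders.

```azurecli
# Sketch: add the domain verification token as a TXT record at the zone root (@).
az network dns record-set txt add-record \
    --resource-group <dns-resource-group> \
    --zone-name contoso.com \
    --record-set-name @ \
    --value "<domain-verification-token>"
```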
app-service | Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/overview.md | Title: App Service Environment overview description: This article discusses the Azure App Service Environment feature of Azure App Service. Previously updated : 11/08/2023 Last updated : 02/06/2024 An App Service Environment is a single-tenant deployment of Azure App Service th Applications are hosted in App Service plans, which are created in an App Service Environment. An App Service plan is essentially a provisioning profile for an application host. As you scale out your App Service plan, you create more application hosts with all the apps in that App Service plan on each host. A single App Service Environment v3 can have up to 200 total App Service plan instances across all the App Service plans combined. A single App Service Isolated v2 (Iv2) plan can have up to 100 instances by itself. -When you're deploying onto dedicated hardware (hosts), you're limited in scaling across all App Service plans to the number of cores in this type of environment. An App Service Environment that's deployed on dedicated hosts has 132 vCores available. I1v2 uses two vCores, I2v2 uses four vCores, and I3v2 uses eight vCores per instance. +When you're deploying onto dedicated hardware (hosts), you're limited in scaling across all App Service plans to the number of cores in this type of environment. An App Service Environment that's deployed on dedicated hosts has 132 vCores available. I1v2 uses two vCores, I2v2 uses four vCores, and I3v2 uses eight vCores per instance. Only I1v2, I2v2 and I3v2 SKU sizes are available on App Service Environment deployed on dedicated hosts. ## Virtual network support |
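As a rough worked example of the dedicated-host limits described above: with 132 vCores available, the scaling ceiling across all plans combined is about 66 I1v2 instances (132 ÷ 2), 33 I2v2 instances (132 ÷ 4), or 16 I3v2 instances (132 ÷ 8), or any mix of those SKUs that stays within 132 vCores.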
app-service | Language Support Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/language-support-policy.md | Title: Language runtime support policy -description: Learn about the language runtime support policy for Azure App Service. +description: Learn about the language runtime support policy for Azure App Service. Last updated 12/23/2023 -+ # Language runtime support policy for App Service |
app-service | Manage Custom Dns Buy Domain | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/manage-custom-dns-buy-domain.md | Title: Buy a custom domain -description: Learn how to buy an App Service domain and use it as a custom domain for your app Azure App Service. +description: Learn how to buy an App Service domain and use it as a custom domain for your app Azure App Service. ms.assetid: 70fb0e6e-8727-4cca-ba82-98a4d21586ff Last updated 01/31/2023- - # Buy an App Service domain and configure an app with it |
app-service | Manage Custom Dns Migrate Domain | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/manage-custom-dns-migrate-domain.md | |
app-service | Manage Scale Per App | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/manage-scale-per-app.md | ms.assetid: a903cb78-4927-47b0-8427-56412c4e3e64 Last updated 06/29/2023 -+ # High-density hosting on Azure App Service using per-app scaling |
app-service | Manage Scale Up | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/manage-scale-up.md | description: Learn how to scale up an app in Azure App Service. Get more CPU, me ms.assetid: f7091b25-b2b6-48da-8d4a-dcf9b7baccab Last updated 05/08/2023- - # Scale up an app in Azure App Service |
app-service | Migrate Wordpress | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/migrate-wordpress.md | When you migrate a live site and its DNS domain name to App Service, that DNS na If your site is configured with SSL certs, then follow [Add and manage TLS/SSL certificates](configure-ssl-certificate.md?tabs=apex%2Cportal) to configure SSL. Next steps:-[At-scale assessment of .NET web apps](/training/modules/migrate-app-service-migration-assistant/) +[At-scale assessment of .NET web apps](/training/modules/migrate-app-service-migration-assistant/) |
app-service | Monitor Instances Health Check | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/monitor-instances-health-check.md | |
app-service | Overview App Gateway Integration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/overview-app-gateway-integration.md | |
app-service | Overview Authentication Authorization | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/overview-authentication-authorization.md | ms.assetid: b7151b57-09e5-4c77-a10c-375a262f17e5 Last updated 02/03/2023 -+ |
app-service | Overview Nat Gateway Integration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/overview-nat-gateway-integration.md | |
app-service | Overview Vnet Integration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/overview-vnet-integration.md | Title: Integrate your app with an Azure virtual network description: Integrate your app in Azure App Service with Azure virtual networks. Previously updated : 07/21/2023 Last updated : 02/06/2024 App Service has two variations: * The dedicated compute pricing tiers, which include the Basic, Standard, Premium, Premium v2, and Premium v3. * The App Service Environment, which deploys directly into your virtual network with dedicated supporting infrastructure and is using the Isolated and Isolated v2 pricing tiers. -The virtual network integration feature is used in Azure App Service dedicated compute pricing tiers. If your app is in an [App Service Environment](./environment/overview.md), it's already integrated with a virtual network and doesn't require you to configure virtual network integration feature to reach resources in the same virtual network. For more information on all the networking features, see [App Service networking features](./networking-features.md). +The virtual network integration feature is used in Azure App Service dedicated compute pricing tiers. If your app is in an [App Service Environment](./environment/overview.md), it already integrates with a virtual network and doesn't require you to configure virtual network integration feature to reach resources in the same virtual network. For more information on all the networking features, see [App Service networking features](./networking-features.md). Virtual network integration gives your app access to resources in your virtual network, but it doesn't grant inbound private access to your app from the virtual network. Private site access refers to making an app accessible only from a private network, such as from within an Azure virtual network. Virtual network integration is used only to make outbound calls from your app into your virtual network. Refer to [private endpoint](./networking/private-endpoint.md) for inbound private access. Virtual network integration supports connecting to a virtual network in the same When you use virtual network integration, you can use the following Azure networking features: -* **Network security groups (NSGs)**: You can block outbound traffic with an NSG that's placed on your integration subnet. The inbound rules don't apply because you can't use virtual network integration to provide inbound access to your app. +* **Network security groups (NSGs)**: You can block outbound traffic with an NSG that you use on your integration subnet. The inbound rules don't apply because you can't use virtual network integration to provide inbound access to your app. * **Route tables (UDRs)**: You can place a route table on the integration subnet to send outbound traffic where you want. * **NAT gateway**: You can use [NAT gateway](./networking/nat-gateway-integration.md) to get a dedicated outbound IP and mitigate SNAT port exhaustion. Because subnet size can't be changed after assignment, use a subnet that's large > > Since you have 1 App Service plan, 1 x 50 = 50 IP addresses. -When you want your apps in your plan to reach a virtual network that's already connected to by apps in another plan, select a different subnet than the one being used by the pre-existing virtual network integration. 
+When you want your apps in your plan to reach a virtual network that apps in another plan already connect to, select a different subnet than the one being used by the pre-existing virtual network integration. ## Permissions You must have at least the following Role-based access control permissions on th | Microsoft.Network/virtualNetworks/subnets/read | Read a virtual network subnet definition | | Microsoft.Network/virtualNetworks/subnets/join/action | Joins a virtual network | -If the virtual network is in a different subscription than the app, you must ensure that the subscription with the virtual network is registered for the `Microsoft.Web` resource provider. You can explicitly register the provider [by following this documentation](../azure-resource-manager/management/resource-providers-and-types.md#register-resource-provider), but it's automatically registered when creating the first web app in a subscription. +If the virtual network is in a different subscription than the app, you must ensure that the subscription with the virtual network is registered for the `Microsoft.Web` resource provider. You can explicitly register the provider [by following this documentation](../azure-resource-manager/management/resource-providers-and-types.md#register-resource-provider), but it also automatically registers when creating the first web app in a subscription. ## Routes You can control what traffic goes through the virtual network integration. There are three types of routing to consider when you configure virtual network integration. [Application routing](#application-routing) defines what traffic is routed from your app and into the virtual network. [Configuration routing](#configuration-routing) affects operations that happen before or during startup of your app. Examples are container image pull and [app settings with Key Vault reference](./app-service-key-vault-references.md). [Network routing](#network-routing) is the ability to handle how both app and configuration traffic are routed from your virtual network and out. -Through application routing or configuration routing options, you can configure what traffic is sent through the virtual network integration. Traffic is only subject to [network routing](#network-routing) if it's sent through the virtual network integration. +Through application routing or configuration routing options, you can configure what traffic is sent through the virtual network integration. Traffic is only subject to [network routing](#network-routing) if sent through the virtual network integration. ### Application routing -Application routing applies to traffic that is sent from your app after it has been started. See [configuration routing](#configuration-routing) for traffic during startup. When you configure application routing, you can either route all traffic or only private traffic (also known as [RFC1918](https://datatracker.ietf.org/doc/html/rfc1918#section-3) traffic) into your virtual network. You configure this behavior through the outbound internet traffic setting. If outbound internet traffic routing is disabled, your app only routes private traffic into your virtual network. If you want to route all your outbound app traffic into your virtual network, make sure that outbound internet traffic is enabled. +Application routing applies to traffic that is sent from your app after it starts. See [configuration routing](#configuration-routing) for traffic during startup. 
When you configure application routing, you can either route all traffic or only private traffic (also known as [RFC1918](https://datatracker.ietf.org/doc/html/rfc1918#section-3) traffic) into your virtual network. You configure this behavior through the outbound internet traffic setting. If outbound internet traffic routing is disabled, your app only routes private traffic into your virtual network. If you want to route all your outbound app traffic into your virtual network, make sure that outbound internet traffic is enabled. * Only traffic configured in application or configuration routing is subject to the NSGs and UDRs that are applied to your integration subnet. * When outbound internet traffic routing is enabled, the source address for your outbound traffic from your app is still one of the IP addresses that are listed in your app properties. If you route your traffic through a firewall or a NAT gateway, the source IP address originates from this service. App settings using Key Vault references attempt to get secrets over the public r > * Configuring SSL/TLS certificates from private Key Vaults is currently not supported. > * App Service Logs to private storage accounts is currently not supported. We recommend using Diagnostics Logging and allowing Trusted Services for the storage account. +### Routing app settings ++App Service has existing app settings to configure application and configuration routing. Site properties override the app settings if both exist. Site properties have the advantage of being auditable with Azure Policy and validated at the time of configuration. We recommend using site properties. ++You can still use the existing `WEBSITE_VNET_ROUTE_ALL` app setting to configure application routing. ++App settings also exist for some configuration routing options. These app settings are named `WEBSITE_CONTENTOVERVNET` and `WEBSITE_PULL_IMAGE_OVER_VNET`. + ### Network routing -You can use route tables to route outbound traffic from your app without restriction. Common destinations can include firewall devices or gateways. You can also use a [network security group](../virtual-network/network-security-groups-overview.md) (NSG) to block outbound traffic to resources in your virtual network or the internet. An NSG that's applied to your integration subnet is in effect regardless of any route tables applied to your integration subnet. +You can use route tables to route outbound traffic from your app without restriction. Common destinations can include firewall devices or gateways. You can also use a [network security group](../virtual-network/network-security-groups-overview.md) (NSG) to block outbound traffic to resources in your virtual network or the internet. An NSG that you apply to your integration subnet is in effect regardless of any route tables applied to your integration subnet. Route tables and network security groups only apply to traffic routed through the virtual network integration. See [application routing](#application-routing) and [configuration routing](#configuration-routing) for details. Routes don't apply to replies from inbound app requests and inbound rules in an NSG don't apply to your app. Virtual network integration affects only outbound traffic from your app. To control inbound traffic to your app, use the [access restrictions](./overview-access-restrictions.md) feature or [private endpoints](./networking/private-endpoint.md). 
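A minimal sketch of the route-all configuration described above, using the preferred site property and the legacy app setting; all resource names are placeholders.

```azurecli
# Preferred: set the vnetRouteAllEnabled site property.
az resource update \
    --resource-group <resource-group> \
    --name <app-name> \
    --resource-type "Microsoft.Web/sites" \
    --set properties.vnetRouteAllEnabled=true

# Legacy equivalent using the WEBSITE_VNET_ROUTE_ALL app setting.
az webapp config appsettings set \
    --resource-group <resource-group> \
    --name <app-name> \
    --settings WEBSITE_VNET_ROUTE_ALL=1
```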
There are some limitations with using virtual network integration: * The app and the virtual network must be in the same region. * The integration virtual network can't have IPv6 address spaces defined. * The integration subnet can't have [service endpoint policies](../virtual-network/virtual-network-service-endpoint-policies-overview.md) enabled.-* The integration subnet can be used by only one App Service plan. +* Only one App Service plan virtual network integration connection per integration subnet is supported. * You can't delete a virtual network with an integrated app. Remove the integration before you delete the virtual network. * You can't have more than two virtual network integrations per Windows App Service plan. You can't have more than one virtual network integration per Linux App Service plan. Multiple apps in the same App Service plan can use the same virtual network integration. * You can't change the subscription of an app or a plan while there's an app that's using virtual network integration. The feature is easy to set up, but that doesn't mean your experience is problem ### Deleting the App Service plan or app before disconnecting the network integration -If you deleted the app or the App Service plan without disconnecting the virtual network integration first, you aren't able to do any update/delete operations on the virtual network or subnet that was used for the integration with the deleted resource. A subnet delegation 'Microsoft.Web/serverFarms' remains assigned to your subnet and prevents the update/delete operations. +If you deleted the app or the App Service plan without disconnecting the virtual network integration first, you aren't able to do any update/delete operations on the virtual network or subnet that was used for the integration with the deleted resource. A subnet delegation 'Microsoft.Web/serverFarms' remains assigned to your subnet and prevents the update and delete operations. To update or delete the subnet or virtual network again, you need to re-create the virtual network integration, and then disconnect it: 1. Re-create the App Service plan and app (it's mandatory to use the exact same web app name as before). |
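For the re-create-then-disconnect workaround above, a hedged CLI sketch with placeholder names:

```azurecli
# Sketch: re-create the integration on the stuck subnet, then disconnect it
# to release the Microsoft.Web/serverFarms delegation.
az webapp vnet-integration add \
    --resource-group <resource-group> \
    --name <app-name> \
    --vnet <vnet-name> \
    --subnet <subnet-name>

az webapp vnet-integration remove \
    --resource-group <resource-group> \
    --name <app-name>
```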
app-service | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/policy-reference.md | Title: Built-in policy definitions for Azure App Service description: Lists Azure Policy built-in policy definitions for Azure App Service. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 01/30/2024 Last updated : 02/06/2024 |
app-service | Quickstart Dotnetcore Uiex | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/quickstart-dotnetcore-uiex.md | ms.assetid: b1e6bd58-48d1-4007-9d6c-53fd6db061e3 Last updated 11/23/2020 ms.devlang: csharp-+ zone_pivot_groups: app-service-platform-windows-linux |
app-service | Quickstart Dotnetcore | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/quickstart-dotnetcore.md | adobe-target-experience: Experience B adobe-target-content: ./quickstart-dotnetcore-uiex -+ ai-usage: ai-assisted |
app-service | Quickstart Java Uiex | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/quickstart-java-uiex.md | ms.assetid: 582bb3c2-164b-42f5-b081-95bfcb7a502a ms.devlang: java Last updated 08/01/2020-+ zone_pivot_groups: app-service-platform-windows-linux |
app-service | Quickstart Java | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/quickstart-java.md | ms.assetid: 582bb3c2-164b-42f5-b081-95bfcb7a502a ms.devlang: java Last updated 08/31/2023-+ zone_pivot_groups: app-service-java-hosting adobe-target: true adobe-target-activity: DocsExp–386541–A/B–Enhanced-Readability-Quickstarts–2.19.2021 |
app-service | Quickstart Multi Container | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/quickstart-multi-container.md | |
app-service | Quickstart Nodejs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/quickstart-nodejs.md | |
app-service | Quickstart Python 1 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/quickstart-python-1.md | |
app-service | Samples Cli | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/samples-cli.md | tags: azure-service-management ms.assetid: 53e6a15a-370a-48df-8618-c6737e26acec Last updated 04/21/2022-+ keywords: azure cli samples, azure cli examples, azure cli code samples |
app-service | Cli Continuous Deployment Vsts | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/scripts/cli-continuous-deployment-vsts.md | ms.devlang: azurecli Last updated 04/15/2022 -+ # Create an App Service app with continuous deployment from an Azure DevOps repository using Azure CLI |
app-service | Cli Linux Acr Aspnetcore | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/scripts/cli-linux-acr-aspnetcore.md | ms.devlang: azurecli Last updated 04/25/2022 -+ # Create an ASP.NET Core app in a Docker container in App Service from Azure Container Registry |
app-service | Powershell Backup Delete | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/scripts/powershell-backup-delete.md | ms.assetid: ebcadb49-755d-4202-a5eb-f211827a9168 Last updated 10/30/2017 -+ # Delete a backup for a web using Azure PowerShell |
app-service | Powershell Backup Onetime | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/scripts/powershell-backup-onetime.md | ms.assetid: fc755f82-ca3e-4532-b251-690b699324d6 Last updated 10/30/2017 -+ # Back up a web app using PowerShell |
app-service | Powershell Backup Restore Diff Sub | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/scripts/powershell-backup-restore-diff-sub.md | ms.assetid: a2a27d94-d378-4c17-a6a9-ae1e69dc4a72 Last updated 12/06/2022 -+ # Restore a web app from a backup in another subscription using PowerShell |
app-service | Powershell Backup Restore | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/scripts/powershell-backup-restore.md | ms.assetid: a2a27d94-d378-4c17-a6a9-ae1e69dc4a72 Last updated 12/06/2022 -+ # Restore a web app from a backup using Azure PowerShell |
app-service | Powershell Backup Scheduled | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/scripts/powershell-backup-scheduled.md | ms.assetid: a2a27d94-d378-4c17-a6a9-ae1e69dc4a72 Last updated 10/30/2017 -+ # Create a scheduled backup for a web app using PowerShell |
app-service | Powershell Configure Custom Domain | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/scripts/powershell-configure-custom-domain.md | ms.assetid: 356f5af9-f62e-411c-8b24-deba05214103 Last updated 12/06/2022 -+ # Assign a custom domain to a web app using PowerShell |
app-service | Powershell Configure Ssl Certificate | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/scripts/powershell-configure-ssl-certificate.md | tags: azure-service-management ms.assetid: 23e83b74-614a-49a0-bc08-7542120eeec5 Last updated 12/06/2022-+ # Bind a custom TLS/SSL certificate to a web app using PowerShell |
app-service | Powershell Scale Manual | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/scripts/powershell-scale-manual.md | ms.assetid: de5d4285-9c7d-4735-a695-288264047375 Last updated 12/06/2022 -+ # Scale a web app manually using PowerShell |
app-service | Troubleshoot Diagnostic Logs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/troubleshoot-diagnostic-logs.md | |
app-service | Troubleshoot Domain Ssl Certificates | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/troubleshoot-domain-ssl-certificates.md | tags: top-support-issue Last updated 03/01/2019 - # Troubleshoot domain and TLS/SSL certificate problems in Azure App Service |
app-service | Troubleshoot Dotnet Visual Studio | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/troubleshoot-dotnet-visual-studio.md | ms.assetid: def8e481-7803-4371-aa55-64025d116c97 ms.devlang: csharp Last updated 08/29/2016-+ |
app-service | Troubleshoot Http 502 Http 503 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/troubleshoot-http-502-http-503.md | keywords: 502 bad gateway, 503 service unavailable, error 503, error 502 ms.assetid: 51cd331a-a3fa-438f-90ef-385e755e50d5 Last updated 07/06/2016- - # Troubleshoot HTTP errors of "502 bad gateway" and "503 service unavailable" in Azure App Service "502 bad gateway" and "503 service unavailable" are common errors in your app hosted in [Azure App Service](./overview.md). This article helps you troubleshoot these errors. This is often the simplest way to recover from one-time issues. On the [Azure Po ![restart app to solve HTTP errors of 502 bad gateway and 503 service unavailable](./media/app-service-web-troubleshoot-HTTP-502-503/2-restart.png) You can also manage your app using Azure PowerShell. For more information, see-[Using Azure PowerShell with Azure Resource Manager](../azure-resource-manager/management/manage-resources-powershell.md). +[Using Azure PowerShell with Azure Resource Manager](../azure-resource-manager/management/manage-resources-powershell.md). |
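Alongside the portal and Azure PowerShell options described above, a one-line Azure CLI sketch achieves the same restart; the names are placeholders.

```azurecli
# Sketch: restart the app to recover from one-time 502/503 issues.
az webapp restart --resource-group <resource-group> --name <app-name>
```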
app-service | Troubleshoot Performance Degradation | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/troubleshoot-performance-degradation.md | keywords: web app performance, slow app, app slow ms.assetid: b8783c10-3a4a-4dd6-af8c-856baafbdde5 Last updated 08/03/2016- - # Troubleshoot slow app performance issues in Azure App Service This article helps you troubleshoot slow app performance issues in [Azure App Service](./overview.md). |
app-service | Tutorial Auth Aad | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-auth-aad.md | Title: 'Tutorial: Authenticate users E2E' + Title: 'Tutorial: Authenticate users E2E' description: Learn how to use App Service authentication and authorization to secure your App Service apps end-to-end, including access to remote APIs. keywords: app service, azure app service, authN, authZ, secure, security, multi-tiered, azure active directory, azure ad |
app-service | Tutorial Connect App App Graph Javascript | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-connect-app-app-graph-javascript.md | Title: 'Tutorial: Authenticate users E2E to Azure' + Title: 'Tutorial: Authenticate users E2E to Azure' description: Learn how to use App Service authentication and authorization to secure your App Service apps end-to-end to a downstream Azure service. keywords: app service, azure app service, authN, authZ, secure, security, multi-tiered, azure active directory, azure ad |
app-service | Tutorial Connect Msi Azure Database | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-connect-msi-azure-database.md | ms.devlang: csharp # ms.devlang: csharp,java,javascript,python Last updated 04/12/2022-+ # Tutorial: Connect to Azure databases from App Service without secrets using a managed identity |
app-service | Tutorial Custom Container | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-custom-container.md | Last updated 11/29/2022 keywords: azure app service, web app, linux, windows, docker, container-+ zone_pivot_groups: app-service-containers-windows-linux |
app-service | Tutorial Dotnetcore Sqldb App | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-dotnetcore-sqldb-app.md | |
app-service | Tutorial Java Spring Cosmosdb | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-java-spring-cosmosdb.md | |
app-service | Tutorial Nodejs Mongodb App | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-nodejs-mongodb-app.md | ms.role: developer ms.devlang: javascript -+ # Deploy a Node.js + MongoDB web app to Azure |
app-service | Tutorial Php Mysql App | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-php-mysql-app.md | Title: 'Tutorial: PHP app with MySQL and Redis' + Title: 'Tutorial: PHP app with MySQL and Redis' description: Learn how to get a PHP app working in Azure, with connection to a MySQL database and a Redis cache in Azure. Laravel is used in the tutorial. ms.assetid: 14feb4f3-5095-496e-9a40-690e1414bd73 ms.devlang: php Last updated 06/30/2023-+ # Tutorial: Deploy a PHP, MySQL, and Redis app to Azure App Service |
app-service | Tutorial Python Postgresql App | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-python-postgresql-app.md | |
app-service | Web Sites Monitor | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/web-sites-monitor.md | |
app-service | Web Sites Traffic Manager | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/web-sites-traffic-manager.md | description: Find best practices for configuring Azure Traffic Manager when you ms.assetid: dabda633-e72f-4dd4-bf1c-6e945da456fd Last updated 02/25/2016- - # Controlling Azure App Service traffic with Azure Traffic Manager > [!NOTE] When using Azure Traffic Manager with Azure, keep in mind the following points: ## Next Steps For a conceptual and technical overview of Azure Traffic Manager, see [Traffic Manager Overview](../traffic-manager/traffic-manager-overview.md).-- |
app-service | Webjobs Create | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/webjobs-create.md | Last updated 7/30/2023 --#Customer intent: As a web developer, I want to leverage background tasks to keep my application running smoothly. adobe-target: true adobe-target-activity: DocsExp–386541–A/B–Enhanced-Readability-Quickstarts–2.19.2021 adobe-target-experience: Experience B adobe-target-content: ./webjobs-create-ieux+#Customer intent: As a web developer, I want to leverage background tasks to keep my application running smoothly. # Run background tasks with WebJobs in Azure App Service |
app-service | Webjobs Sdk How To | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/webjobs-sdk-how-to.md | description: Learn more about how to write code for the WebJobs SDK. Create even ms.devlang: csharp-+ Last updated 06/24/2021 |
attestation | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/attestation/policy-reference.md | Title: Built-in policy definitions for Azure Attestation description: Lists Azure Policy built-in policy definitions for Azure Attestation. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 01/30/2024 Last updated : 02/06/2024 |
automation | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/policy-reference.md | Title: Built-in policy definitions for Azure Automation description: Lists Azure Policy built-in policy definitions for Azure Automation. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 01/30/2024 Last updated : 02/06/2024 |
azure-app-configuration | Howto Integrate Azure Managed Service Identity | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/howto-integrate-azure-managed-service-identity.md | To set up a managed identity in the portal, you first create an application and 1. When prompted, answer **Yes** to turn on the system-assigned managed identity. - :::image type="content" source="./media/add-managed-identity-app-service.png" alt-text="Screenshot of how to add a managed identity in App Service."::: + :::image type="content" source="./media/managed-identity/add-managed-identity-app-service.png" alt-text="Screenshot of how to add a managed identity in App Service."::: ## Grant access to App Configuration The following steps describe how to assign the App Configuration Data Reader role to App Service. For detailed steps, see [Assign Azure roles using the Azure portal](../role-based-access-control/role-assignments-portal.md). -1. In the [Azure portal](https://portal.azure.com), select **All resources** and select the App Configuration store that you created in the [quickstart](../azure-app-configuration/quickstart-azure-functions-csharp.md). +1. In the [Azure portal](https://portal.azure.com), select the App Configuration store that you created in the [quickstart](../azure-app-configuration/quickstart-azure-functions-csharp.md). 1. Select **Access control (IAM)**. |
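A hedged CLI sketch of the role assignment described above; the resource names are placeholders, and the commands assume a system-assigned managed identity on an App Service app.

```azurecli
# Look up the web app's system-assigned identity, then grant it
# App Configuration Data Reader on the store.
principalId=$(az webapp identity show \
    --resource-group <resource-group> \
    --name <app-name> \
    --query principalId --output tsv)

az role assignment create \
    --role "App Configuration Data Reader" \
    --assignee-object-id $principalId \
    --assignee-principal-type ServicePrincipal \
    --scope <app-configuration-store-resource-id>
```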
azure-app-configuration | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/policy-reference.md | Title: Built-in policy definitions for Azure App Configuration description: Lists Azure Policy built-in policy definitions for Azure App Configuration. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 01/30/2024 Last updated : 02/06/2024 |
azure-app-configuration | Quickstart Aspnet Core App | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/quickstart-aspnet-core-app.md | |
azure-arc | Configure Transparent Data Encryption Manually | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/configure-transparent-data-encryption-manually.md | |
azure-arc | Configure Transparent Data Encryption Sql Managed Instance | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/configure-transparent-data-encryption-sql-managed-instance.md | Similar to above, to restore the credentials, copy them into the container and r ## Related content [Transparent data encryption](/sql/relational-databases/security/encryption/transparent-data-encryption)- |
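As a sketch of the copy step mentioned above, assuming a Kubernetes deployment where the instance pod's SQL container is named `arc-sqlmi` (the container name, namespace, pod name, and file path are assumptions; adjust them to your deployment):

```bash
# Sketch: copy backed-up credential files into the managed instance container.
kubectl cp ./certificate.pfx <namespace>/<pod-name>:/var/opt/mssql/data/ -c arc-sqlmi
```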
azure-arc | Limitations Managed Instance | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/limitations-managed-instance.md | description: Limitations of SQL Managed Instance enabled by Azure Arc - |
azure-arc | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/policy-reference.md | Title: Built-in policy definitions for Azure Arc-enabled Kubernetes description: Lists Azure Policy built-in policy definitions for Azure Arc-enabled Kubernetes. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 01/30/2024 Last updated : 02/06/2024 # |
azure-arc | Deliver Extended Security Updates | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/deliver-extended-security-updates.md | Azure policies can be specified to a targeted subscription or resource group for There are some scenarios in which you may be eligible to receive Extended Security Updates patches at no additional cost. Two of these scenarios supported by Azure Arc are (1) [Dev/Test (Visual Studio)](/azure/devtest/offer/overview-what-is-devtest-offer-visual-studio) and (2) [Disaster Recovery (Entitled benefit DR instances from Software Assurance](https://www.microsoft.com/en-us/licensing/licensing-programs/software-assurance-by-benefits) or subscription only). Both of these scenarios require that the customer is already using Windows Server 2012/R2 ESUs enabled by Azure Arc for billable, production machines. > [!WARNING]-> Don't create a Windows Server 2012/R2 ESU License for only Dev/Test or Disaster Recovery workloads. You can't provision an ESU License only for non-billable workloads. Moreover, you'll be billed fully for all of the cores provisioned with an ESU license. +> Don't create a Windows Server 2012/R2 ESU License for only Dev/Test or Disaster Recovery workloads. You shouldn't provision an ESU License only for non-billable workloads. Moreover, you'll be billed fully for all of the cores provisioned with an ESU license, and any dev/test cores on the license won't be billed as long as they're tagged accordingly based on the following qualifications. > To qualify for these scenarios, you must already have: -- **Billable ESU License.** You must already have provisioned and activated a WS2012 Arc ESU License intended to be linked to regular Azure Arc-enabled servers running in production environments (i.e., normally billed ESU scenarios). This license should be provisioned only for billable cores, not cores that are eligible for free Extended Security Updates.+- **Billable ESU License.** You must already have provisioned and activated a WS2012 Arc ESU License intended to be linked to regular Azure Arc-enabled servers running in production environments (i.e., normally billed ESU scenarios). This license should be provisioned only for billable cores, not cores that are eligible for free Extended Security Updates, for example, dev/test cores. - **Arc-enabled servers.** Onboarded your Windows Server 2012 and Windows Server 2012 R2 machines to Azure Arc-enabled servers for the purpose of Dev/Test with Visual Studio subscriptions or Disaster Recovery. This linking will not trigger a compliance violation or enforcement block, allow > Adding these tags to your license will NOT make the license free or reduce the number of license cores that are chargeable. These tags allow you to link your Azure machines to existing licenses that are already configured with payable cores without needing to create any new licenses or add additional cores to your free machines. **Example:**--You have 8 Windows Server 2012 R2 Standard instances, each with 8 physical cores. 6 of these Windows Server 2012 R2 Standard machines are for production, and 2 of these Windows Server 2012 R2 Standard machines are eligible for free ESUs through the Visual Studio Dev Test subscription. You should first provision and activate a regular ESU License for Windows Server 2012/R2 that's Standard edition and has 48 physical cores. You should link this regular, production ESU license to your 6 production servers. 
Next, you should use this existing license, not add any more cores or provision a separate license, and link this license to your 2 non-production Windows Server 2012 R2 standard machines. You should tag the license and the 2 non-production Windows Server 2012 R2 Standard machines with Name: "ESU Usage" and Value: "WS2012 VISUAL STUDIO DEV TEST". +- You have 8 Windows Server 2012 R2 Standard instances, each with 8 physical cores. Six of these Windows Server 2012 R2 Standard machines are for production, and 2 of these Windows Server 2012 R2 Standard machines are eligible for free ESUs through the Visual Studio Dev Test subscription. + - You should first provision and activate a regular ESU License for Windows Server 2012/R2 that's Standard edition and has 48 physical cores to cover the 6 production machines. You should link this regular, production ESU license to your 6 production servers. + - Next, you should reuse this existing license, don't add any more cores or provision a separate license, and link this license to your 2 non-production Windows Server 2012 R2 standard machines. You should tag the ESU license and the 2 non-production Windows Server 2012 R2 Standard machines with Name: "ESU Usage" and Value: "WS2012 VISUAL STUDIO DEV TEST". + - This will result in an ESU license for 48 cores, and you'll be billed for those 48 cores. You won't be charged for the additional 16 cores of the dev test servers that you added to this license, as long as the ESU license and the dev test server resources are tagged appropriately. > [!NOTE]-> You needed a regular production license to start with, and you'll be billed only for the production cores. You did not and should not provision non-production cores in your license. +> You needed a regular production license to start with, and you'll be billed only for the production cores. > ## Upgrading from Windows Server 2012/2012 R2 |
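A hedged sketch of applying the tag described in the example above to both the ESU license and the dev/test machines; the resource IDs are placeholders.

```azurecli
# Sketch: add the "ESU Usage" tag without disturbing existing tags.
az resource tag --is-incremental \
    --ids <esu-license-resource-id> <arc-enabled-server-resource-id> \
    --tags "ESU Usage=WS2012 VISUAL STUDIO DEV TEST"
```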
azure-arc | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/policy-reference.md | Title: Built-in policy definitions for Azure Arc-enabled servers description: Lists Azure Policy built-in policy definitions for Azure Arc-enabled servers (preview). These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 01/30/2024 Last updated : 02/06/2024 |
azure-cache-for-redis | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/policy-reference.md | Title: Built-in policy definitions for Azure Cache for Redis description: Lists Azure Policy built-in policy definitions for Azure Cache for Redis. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 01/30/2024 Last updated : 02/06/2024 |
azure-government | Azure Services In Fedramp Auditscope | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/compliance/azure-services-in-fedramp-auditscope.md | For current Azure Government regions and available services, see [Products avail This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and Power Platform cloud services in scope for FedRAMP High, DoD IL2, DoD IL4, DoD IL5, and DoD IL6 authorizations across Azure, Azure Government, and Azure Government Secret cloud environments. For other authorization details in Azure Government Secret and Azure Government Top Secret, contact your Microsoft account representative. ## Azure public services by audit scope-*Last updated: November 2023* +*Last updated: January 2024* ### Terminology used This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and | [Azure for Education](https://azureforeducation.microsoft.com/) | ✅ | ✅ | | [Azure Information Protection](/azure/information-protection/) | ✅ | ✅ | | [Azure Kubernetes Service (AKS)](../../aks/index.yml) | ✅ | ✅ |+| [Azure Managed Grafana](../../managed-grafana/index.yml) | ✅ | ✅ | | [Azure Marketplace portal](https://azuremarketplace.microsoft.com/) | ✅ | ✅ | | [Azure Maps](../../azure-maps/index.yml) | ✅ | ✅ | | [Azure Monitor](../../azure-monitor/index.yml) (incl. [Application Insights](../../azure-monitor/app/app-insights-overview.md), [Log Analytics](../../azure-monitor/logs/data-platform-logs.md), and [Application Change Analysis](../../azure-monitor/app/change-analysis.md)) | ✅ | ✅ | This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and | [Bot Service](/azure/bot-service/) | ✅ | ✅ | | [Cloud Services](../../cloud-services/index.yml) | ✅ | ✅ | | [Cloud Shell](../../cloud-shell/overview.md) | ✅ | ✅ |-| [Cognitive Search](../../search/index.yml) (formerly Azure Search) | ✅ | ✅ | +| [Azure AI Health Bot](/healthbot/) | ✅ | ✅ | +| [Azure AI Search](../../search/index.yml) (formerly Azure Cognitive Search) | ✅ | ✅ | | [Azure AI | [Azure AI | [Azure AI This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and | [Dedicated HSM](../../dedicated-hsm/index.yml) | ✅ | ✅ | | [DevTest Labs](../../devtest-labs/index.yml) | ✅ | ✅ | | [DNS](../../dns/index.yml) | ✅ | ✅ |-| [Dynamics 365 Chat (Omnichannel Engagement Hub)](/dynamics365/omnichannel/introduction-omnichannel) | ✅ | ✅ | +| [Omnichannel for Customer Service (Formerly Dynamics 365 Chat and Omnichannel Engagement Hub)](/dynamics365/omnichannel/introduction-omnichannel) | ✅ | ✅ | | [Dynamics 365 Commerce](/dynamics365/commerce/)| ✅ | ✅ | | [Dynamics 365 Customer Service](/dynamics365/customer-service/overview)| ✅ | ✅ | | [Dynamics 365 Field Service](/dynamics365/field-service/overview)| ✅ | ✅ | This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and | [Azure AI Document Intelligence](../../ai-services/document-intelligence/index.yml) | ✅ | ✅ | | [Front Door](../../frontdoor/index.yml) | ✅ | ✅ | | [Functions](../../azure-functions/index.yml) | ✅ | ✅ |-| [GitHub AE](https://docs.github.com/github-ae@latest/admin/overview/about-github-ae) | ✅ | ✅ | -| [Health Bot](/healthbot/) | ✅ | ✅ | | [HDInsight](../../hdinsight/index.yml) | ✅ | ✅ | | [HPC Cache](../../hpc-cache/index.yml) | ✅ | ✅ | | [Immersive Reader](../../ai-services/immersive-reader/index.yml) | ✅ | ✅ | This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and | [Managed 
Applications](../../azure-resource-manager/managed-applications/index.yml) | ✅ | ✅ | | [Media Services](/azure/media-services/) | ✅ | ✅ | | [Metrics Advisor](../../ai-services/metrics-advisor/index.yml) | ✅ | ✅ |-| [Microsoft Defender XDR](/microsoft-365/security/defender/) (formerly Microsoft Threat Protection) | ✅ | ✅ | | [Microsoft Azure Attestation](../../attestation/index.yml)| ✅ | ✅ | | [Microsoft Azure portal](https://azure.microsoft.com/features/azure-portal/)| ✅ | ✅ | | [Microsoft Defender for Cloud](../../defender-for-cloud/index.yml) (formerly Azure Security Center) | ✅ | ✅ | This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and | [Microsoft Defender for Identity](/defender-for-identity/) (formerly Azure Advanced Threat Protection) | ✅ | ✅ | | **Service** | **FedRAMP High** | **DoD IL2** | | [Microsoft Defender for IoT](../../defender-for-iot/index.yml) (formerly Azure Security for IoT) | ✅ | ✅ |-| [Microsoft Defender Vulnerability Management](../../defender-for-iot/index.yml) | ✅ | ✅ | +| [Microsoft Defender Vulnerability Management](/microsoft-365/security/defender-vulnerability-management/) | ✅ | ✅ | | [Microsoft Graph](/graph/) | ✅ | ✅ | | [Microsoft Intune](/mem/intune/) | ✅ | ✅ | | [Microsoft Purview](../../purview/index.yml) (incl. Data Map, Data Estate Insights, and governance portal) | ✅ | ✅ | This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and | [Site Recovery](../../site-recovery/index.yml) | ✅ | ✅ | | [SQL Database](/azure/azure-sql/database/sql-database-paas-overview) | ✅ | ✅ | | [SQL Managed Instance](/azure/azure-sql/managed-instance/sql-managed-instance-paas-overview) | ✅ | ✅ |-| [SQL Server Registry](/sql/sql-server/end-of-support/sql-server-extended-security-updates) | ✅ | ✅ | | [SQL Server Stretch Database](../../sql-server-stretch-database/index.yml) | ✅ | ✅ | | [Storage: Archive](../../storage/blobs/access-tiers-overview.md) | ✅ | ✅ | | [Storage: Blobs](../../storage/blobs/index.yml) (incl. 
[Azure Data Lake Storage Gen2](../../storage/blobs/data-lake-storage-introduction.md)) | ✅ | ✅ | This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and | [Azure Resource Manager](../../azure-resource-manager/management/index.yml) | ✅ | ✅ | ✅ | ✅ | ✅ | | [Azure Service Manager (RDFE)](/previous-versions/azure/ee460799(v=azure.100)) | ✅ | ✅ | ✅ | ✅ | ✅ | | [Azure Sign-up portal](https://signup.azure.com/) | ✅ | ✅ | ✅ | ✅ | |-| [Azure Stack Bridge](/azure-stack/operator/azure-stack-usage-reporting) | ✅ | ✅ | ✅ | ✅ | ✅ | +| [Azure Stack](/azure-stack/operator/azure-stack-usage-reporting) | ✅ | ✅ | ✅ | ✅ | ✅ | | [Azure Stack Edge](../../databox-online/index.yml) (formerly Data Box Edge) ***** | ✅ | ✅ | ✅ | ✅ | ✅ | | [Azure Stack HCI](/azure-stack/hci/) | ✅ | ✅ | ✅ | | | | [Azure Video Indexer](/azure/azure-video-indexer/) | ✅ | ✅ | ✅ | | | This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and | [Front Door](../../frontdoor/index.yml) | ✅ | ✅ | ✅ | ✅ | ✅ | | [Functions](../../azure-functions/index.yml) | ✅ | ✅ | ✅ | ✅ | ✅ | | **Service** | **FedRAMP High** | **DoD IL2** | **DoD IL4** | **DoD IL5** | **DoD IL6** |-| [GitHub AE](https://docs.github.com/en/github-ae@latest/admin/overview/about-github-ae) | ✅ | ✅ | ✅ | | | | [HDInsight](../../hdinsight/index.yml) | ✅ | ✅ | ✅ | ✅ | ✅ | | [HPC Cache](../../hpc-cache/index.yml) | ✅ | ✅ | ✅ | ✅ | | | [Import/Export](../../import-export/index.yml) | ✅ | ✅ | ✅ | ✅ | | This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and | [Machine Learning](../../machine-learning/index.yml) | ✅ | ✅ | ✅ | ✅ | | | [Managed Applications](../../azure-resource-manager/managed-applications/index.yml) | ✅ | ✅ | ✅ | ✅ | | | [Media Services](/azure/media-services/) | ✅ | ✅ | ✅ | ✅ | ✅ |-| [Microsoft Defender XDR](/microsoft-365/security/defender/) (formerly Microsoft Threat Protection) | ✅ | ✅ | ✅ | ✅ | | | [Microsoft Azure portal](../../azure-portal/index.yml) | ✅ | ✅ | ✅| ✅ | ✅ | | **Service** | **FedRAMP High** | **DoD IL2** | **DoD IL4** | **DoD IL5** | **DoD IL6** | | [Microsoft Azure Government portal](../documentation-government-get-started-connect-with-portal.md) | ✅ | ✅ | ✅| ✅ | | This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and | [Power BI](/power-bi/fundamentals/) | ✅ | ✅ | ✅ | ✅ | ✅ | | [Power BI Embedded](/power-bi/developer/embedded/) | ✅ | ✅ | ✅ | ✅ | | | [Power Data Integrator for Dataverse](/power-platform/admin/data-integrator) (formerly Dynamics 365 Integrator App) | ✅ | ✅ | ✅ | ✅ | |-| [Power Query Online](/power-query/) | ✅ | ✅ | ✅ | ✅ | ✅ | | [Power Virtual Agents](/power-virtual-agents/) | ✅ | ✅ | ✅ | | | | [Private Link](../../private-link/index.yml) | ✅ | ✅ | ✅ | ✅ | | | [Public IP](../../virtual-network/ip-services/public-ip-addresses.md) | ✅ | ✅ | ✅ | ✅ | | |
azure-monitor | Azure Monitor Agent Extension Versions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-extension-versions.md | We strongly recommended to always update to the latest version, or opt in to the ## Version details | Release Date | Release notes | Windows | Linux | |:|:|:|:|-| January 2024 |**Known Issues**<ul><li>The agent extension code size is beyond the deployment limit set by Arc, thus 1.29.5 will not install on Arc enabled servers. Fix is coming in 1.29.6.</li></ul>**Windows**<ul><li>Added support for Transport Layer Security 1.3</li><li>Reverted a change to enable multiple IIS subscriptions to use same filter. Feature will be redeployed once memory leak is fixed.</li><li>Improved ETW event throughput rate</li></ul>**Linux**<ul><li>Fix Error messages logged intended for mdsd.err went to mdsd.warn instead in 1.29.4 only. Likely error messages: "Exception while uploading to Gig-LA : ...", "Exception while uploading to ODS: ...", "Failed to upload to ODS: ..."</li><li>Fixed syslog timestamp parsing where an incorrect timezone offset might be applied</li></ul> | 1.23.0 | 1.29.5 | +| January 2024 |**Known Issues**<ul><li>The agent extension code size is beyond the deployment limit set by Arc, thus 1.29.5 will not install on Arc enabled servers. Fix is coming in 1.29.6.</li></ul>**Windows**<ul><li>Added support for Transport Layer Security 1.3</li><li>Reverted a change to enable multiple IIS subscriptions to use same filter. Feature will be redeployed once memory leak is fixed.</li><li>Improved ETW event throughput rate</li></ul>**Linux**<ul><li>Fix Error messages logged intended for mdsd.err went to mdsd.warn instead in 1.29.4 only. Likely error messages: "Exception while uploading to Gig-LA : ...", "Exception while uploading to ODS: ...", "Failed to upload to ODS: ..."</li><li>Syslog time zones incorrect: AMA now uses machine current time when AMA receives an event to populate the TimeGenerated field. The previous behavior parsed the time zone from the Syslog event which caused incorrect times if a device sent an event from a time zone different than the AMA collector machine.</li></ul> | 1.23.0 | 1.29.5 | | December 2023 |**Known Issues**<ul><li>The agent extension code size is beyond the deployment limit set by Arc, thus 1.29.4 will not install on Arc enabled servers. Fix is coming in 1.29.6.</li><li>Multiple IIS subscriptions causes a memory leak. 
feature reverted in 1.23.0.</ul>**Windows** <ul><li>Prevent CPU spikes by not using bookmark when resetting an Event Log subscription</li><li>Added missing fluentbit exe to AMA client setup for Custom Log support</li><li>Updated to latest AzureCredentialsManagementService and DsmsCredentialsManagement package</li><li>Update ME to v2.2023.1027.1417</li></ul>**Linux**<ul><li>Support for TLS V1.3</li><li>Support for nopri in Syslog</li><li>Ability to set disk quota from DCR Agent Settings</li><li>Add ARM64 Ubuntu 22 support</li><li>**Fixes**<ul><li>SysLog</li><ul><li>Parse syslog Palo Alto CEF with multiple space characters following the hostname</li><li>Fix an issue with incorrectly parsing messages containing two '\n' chars in a row</li><li>Improved support for non-RFC compliant devices</li><li>Support infoblox device messages containing both hostname and IP headers</li></ul><li>Fix AMA crash in RHEL 7.2</li><li>Remove dependency on "which" command</li><li>Fix port conflicts due to AMA using 13000 </li><li>Reliability and Performance improvements</li></ul></li></ul>| 1.22.0 | 1.29.4| | October 2023| **Windows** <ul><li>Minimize CPU spikes when resetting an Event Log subscription</li><li>Enable multiple IIS subscriptions to use same filter</li><li>Cleanup files and folders for inactive tenants in multi-tenant mode</li><li>AMA installer will not install unnecessary certs</li><li>AMA emits Telemetry table locally</li><li>Update Metric Extension to v2.2023.721.1630</li><li>Update AzureSecurityPack to v4.29.0.4</li><li>Update AzureWatson to v1.0.99</li></ul>**Linux**<ul><li> Add support for Process metrics counters for Log Analytics upload and Azure Monitor Metrics</li><li>Use rsyslog omfwd TCP for improved syslog reliability</li><li>Support Palo Alto CEF logs where hostname is followed by 2 spaces</li><li>Bug and reliability improvements</li></ul> |1.21.0|1.28.11| | September 2023| **Windows** <ul><li>Fix issue with high CPU usage due to excessive Windows Event Logs subscription reset</li><li>Reduce fluentbit resource usage by limiting tracked files older than 3 days and limiting logging to errors only</li><li>Fix race-condition where resource_id is unavailable when agent is restarted</li><li>Fix race-condition when vm-extension provision agent (aka GuestAgent) is issuing a disable-vm-extension command to AMA.</li><li>Update MetricExtension version to 2.2023.721.1630</li><li>Update Troubleshooter to v1.5.14 </li></ul>|1.20.0| None | |
azure-monitor | Azure Monitor Agent Manage | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-manage.md | There are built-in policy initiatives for Windows and Linux virtual machines, sc These initiatives above comprise individual policies that: - (Optional) Create and assign built-in user-assigned managed identity, per subscription, per region. [Learn more](../../active-directory/managed-identities-azure-resources/how-to-assign-managed-identity-via-azure-policy.md#policy-definition-and-details).- - `Bring Your Own User-Assigned Identity`: If set to `true`, it creates the built-in user-assigned managed identity in the predefined resource group and assigns it to all machines that the policy is applied to. If set to `false`, you can instead use existing user-assigned identity that *you must assign* to the machines beforehand. + - `Bring Your Own User-Assigned Identity`: If set to `false`, it creates the built-in user-assigned managed identity in the predefined resource group and assigns it to all the machines that the policy is applied to. Location of the resource group can be configured in the `Built-In-Identity-RG Location` parameter. + If set to `true`, you can instead use an existing user-assigned identity that is automatically assigned to all the machines that the policy is applied to. - Install Azure Monitor Agent extension on the machine, and configure it to use user-assigned identity as specified by the following parameters.- - `Bring Your Own User-Assigned Managed Identity`: If set to `false`, it configures the agent to use the built-in user-assigned managed identity created by the preceding policy. If set to `true`, it configures the agent to use an existing user-assigned identity that *you must assign* to the machines in scope beforehand. + - `Bring Your Own User-Assigned Managed Identity`: If set to `false`, it configures the agent to use the built-in user-assigned managed identity created by the preceding policy. If set to `true`, it configures the agent to use an existing user-assigned identity. - `User-Assigned Managed Identity Name`: If you use your own identity (selected `true`), specify the name of the identity that's assigned to the machines. - `User-Assigned Managed Identity Resource Group`: If you use your own identity (selected `true`), specify the resource group where the identity exists. - `Additional Virtual Machine Images`: Pass additional VM image names that you want to apply the policy to, if not already included.+ - `Built-In-Identity-RG Location`: If you use built-in user-assigned managed identity, specify the location where the identity and the resource group should be created. This parameter is only used when 'Bring Your Own User-Assigned Managed Identity' parameter is false. - Create and deploy the association to link the machine to specified data collection rule. - `Data Collection Rule Resource Id`: The Azure Resource Manager resourceId of the rule you want to associate via this policy to all machines the policy is applied to. |
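To make these parameters concrete, here's a minimal sketch of creating such an assignment with the `@azure/arm-policy` SDK. This is an illustration only: the initiative ID, scope, and parameter key names below are hypothetical placeholders, so look up the exact keys in the initiative definition before assigning.

```typescript
import { DefaultAzureCredential } from "@azure/identity";
import { PolicyClient } from "@azure/arm-policy";

const subscriptionId = "<subscription-id>"; // placeholder
const scope = `/subscriptions/${subscriptionId}`;

async function assignAmaInitiative(): Promise<void> {
  const client = new PolicyClient(new DefaultAzureCredential(), subscriptionId);
  await client.policyAssignments.create(scope, "deploy-ama-with-dcr", {
    // Hypothetical initiative ID; use the real built-in initiative's ID.
    policyDefinitionId: "/providers/Microsoft.Authorization/policySetDefinitions/<initiative-id>",
    // DeployIfNotExists policies need an identity and location on the assignment.
    location: "eastus2",
    identity: { type: "SystemAssigned" },
    parameters: {
      // false => the policy creates and assigns the built-in identity.
      bringYourOwnUserAssignedManagedIdentity: { value: false },
      // Only consulted when the parameter above is false.
      builtInIdentityRGLocation: { value: "eastus2" },
      // The DCR that every in-scope machine gets associated with.
      dcrResourceId: { value: "/subscriptions/<subscription-id>/resourceGroups/<rg>/providers/Microsoft.Insights/dataCollectionRules/<dcr-name>" },
    },
  });
}
```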
azure-monitor | Basic Logs Configure | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/basic-logs-configure.md | All custom tables created with or migrated to the [data collection rule (DCR)-ba | Managed Lustre | [AFSAuditLogs](/azure/azure-monitor/reference/tables/AFSAuditLogs) | | Managed NGINX | [NGXOperationLogs](/azure/azure-monitor/reference/tables/ngxoperationlogs) | | Media Services | [AMSLiveEventOperations](/azure/azure-monitor/reference/tables/AMSLiveEventOperations)<br>[AMSKeyDeliveryRequests](/azure/azure-monitor/reference/tables/AMSKeyDeliveryRequests)<br>[AMSMediaAccountHealth](/azure/azure-monitor/reference/tables/AMSMediaAccountHealth)<br>[AMSStreamingEndpointRequests](/azure/azure-monitor/reference/tables/AMSStreamingEndpointRequests) |+| Microsoft Graph | [MicrosoftGraphActivityLogs](/azure/azure-monitor/reference/tables/microsoftgraphactivitylogs) | | Monitor | [AzureMetricsV2](/azure/azure-monitor/reference/tables/AzureMetricsV2) | | Network Devices (Operator Nexus) | [MNFDeviceUpdates](/azure/azure-monitor/reference/tables/MNFDeviceUpdates)<br>[MNFSystemStateMessageUpdates](/azure/azure-monitor/reference/tables/MNFSystemStateMessageUpdates) | | Network Managers | [AVNMConnectivityConfigurationChange](/azure/azure-monitor/reference/tables/AVNMConnectivityConfigurationChange)<br>[AVNMIPAMPoolAllocationChange](/azure/azure-monitor/reference/tables/AVNMIPAMPoolAllocationChange) | |
azure-monitor | Customer Managed Keys | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/customer-managed-keys.md | Title: Azure Monitor customer-managed key description: Information and steps to configure Customer-managed key to encrypt data in your Log Analytics workspaces using an Azure Key Vault key. Previously updated : 01/04/2024 Last updated : 01/06/2024 Review [limitations and constraints](#limitationsandconstraints) before configur Azure Monitor ensures that all data and saved queries are encrypted at rest using Microsoft-managed keys (MMK). You can encrypt data using your own key in [Azure Key Vault](../../key-vault/general/overview.md), for control over the key lifecycle, and ability to revoke access to your data. Azure Monitor use of encryption is identical to the way [Azure Storage encryption](../../storage/common/storage-service-encryption.md#about-azure-storage-service-side-encryption) operates. -Customer-managed key is delivered on [dedicated clusters](./logs-dedicated-clusters.md) providing higher protection level and control. Data is encrypted twice, once at the service level using Microsoft-managed keys or Customer-managed keys, and once at the infrastructure level, using two different encryption algorithms and two different keys. [double encryption](../../storage/common/storage-service-encryption.md#doubly-encrypt-data-with-infrastructure-encryption) protects against a scenario where one of the encryption algorithms or keys may be compromised. Dedicated cluster also lets you protect data with [Lockbox](#customer-lockbox). +Customer-managed key is delivered on [dedicated clusters](./logs-dedicated-clusters.md), providing a higher protection level and control. Data is encrypted in storage twice, once at the service level using Microsoft-managed keys or Customer-managed keys, and once at the infrastructure level, using two different [encryption algorithms](../../storage/common/storage-service-encryption.md#about-azure-storage-service-side-encryption) and two different keys. [Double encryption](../../storage/common/storage-service-encryption.md#doubly-encrypt-data-with-infrastructure-encryption) protects against a scenario where one of the encryption algorithms or keys may be compromised. A dedicated cluster also lets you protect data with [Lockbox](#customer-lockbox). Data ingested in the last 14 days or recently used in queries is kept in hot-cache (SSD-backed) for query efficiency. SSD data is encrypted with Microsoft keys regardless of customer-managed key configuration, but your control over SSD access adheres to [key revocation](#key-revocation) |
azure-monitor | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/policy-reference.md | Title: Built-in policy definitions for Azure Monitor description: Lists Azure Policy built-in policy definitions for Azure Monitor. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 01/30/2024 Last updated : 02/06/2024 |
azure-portal | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-portal/policy-reference.md | Title: Built-in policy definitions for Azure portal description: Lists Azure Policy built-in policy definitions for Azure portal. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 01/30/2024 Last updated : 02/06/2024 |
azure-resource-manager | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/custom-providers/policy-reference.md | Title: Built-in policy definitions for Azure Custom Resource Providers description: Lists Azure Policy built-in policy definitions for Azure Custom Resource Providers. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 01/30/2024 Last updated : 02/06/2024 |
azure-resource-manager | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/managed-applications/policy-reference.md | Title: Built-in policy definitions for Azure Managed Applications description: Lists Azure Policy built-in policy definitions for Azure Managed Applications. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 01/30/2024 Last updated : 02/06/2024 |
azure-resource-manager | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/policy-reference.md | Title: Built-in policy definitions for Azure Resource Manager description: Lists Azure Policy built-in policy definitions for Azure Resource Manager. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 01/30/2024 Last updated : 02/06/2024 |
azure-resource-manager | Tag Support | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/tag-support.md | Title: Tag support for resources description: Shows which Azure resource types support tags. Provides details for all Azure services. Previously updated : 01/03/2024 Last updated : 02/05/2024 # Tag support for Azure resources To get the same data as a file of comma-separated values, download [tag-support. > | flexibleServers | Yes | Yes | > | getPrivateDnsZoneSuffix | No | No | > | serverGroups | Yes | Yes |-> | serverGroupsv2 | Yes | Yes | +> | serverGroupsv2 | Yes | No | > | servers | Yes | Yes | > | servers / advisors | No | No | > | servers / keys | No | No | To get the same data as a file of comma-separated values, download [tag-support. > | servers / topQueryStatistics | No | No | > | servers / virtualNetworkRules | No | No | > | servers / waitStatistics | No | No |-> | serversv2 | Yes | Yes | +> | serversv2 | Yes | No | ## Microsoft.DelegatedNetwork |
azure-signalr | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/policy-reference.md | Title: Built-in policy definitions for Azure SignalR description: Lists Azure Policy built-in policy definitions for Azure SignalR. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 01/30/2024 Last updated : 02/06/2024 |
azure-signalr | Signalr Concept Authenticate Oauth | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/signalr-concept-authenticate-oauth.md | In this section, you implement a `Login` API that authenticates clients using th ### Update the Hub class -By default, web client connects to SignalR Service using an internal access token. This access token isn't associated with an authenticated identity. +By default, the web client connects to SignalR Service using an internal access token. This access token isn't associated with an authenticated identity. Basically, it's anonymous access. In this section, you turn on real authentication by adding the `Authorize` attribute to the hub class, and updating the hub methods to read the username from the authenticated user's claim. In this section, you turn on real authentication by adding the `Authorize` attri ![OAuth Complete hosted in Azure](media/signalr-concept-authenticate-oauth/signalr-oauth-complete-azure.png) - You prompt to authorize the chat app's access to your GitHub account. Select the **Authorize** button. + You're prompted to authorize the chat app's access to your GitHub account. Select the **Authorize** button. ![Authorize OAuth App](media/signalr-concept-authenticate-oauth/signalr-authorize-oauth-app.png) |
azure-signalr | Signalr Concept Disaster Recovery | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/signalr-concept-disaster-recovery.md | -Resiliency and disaster recovery is a common need for online systems. Azure SignalR Service already guarantees 99.9% availability, but it's still a regional service. -Your service instance is always running in one region and doesn't fail over to another region when there's a region-wide outage. +Resiliency and disaster recovery are common needs for online systems. Azure SignalR Service already provides 99.9% availability; however, it's still a regional service. +When there's a region-wide outage, your service instance doesn't fail over to another region because it's always running in a single region. For regional disaster recovery, we recommend the following two approaches: -- **Enable Geo-Replication** (Easy way). This feature will handle regional failover for you automatically. When enabled, there remains just one Azure SignalR instance and no code changes are introduced. Check [geo-replication](howto-enable-geo-replication.md) for details.-- **Utilize Multiple Endpoints in Service SDK**. Our service SDK provides a functionality to support multiple SignalR service instances and automatically switch to other instances when some of them aren't available. With this feature, you're able to recover when a disaster takes place, but you need to set up the right system topology by yourself. You learn how to do so **in this document**.+- **Enable Geo-Replication** (Easy way). This feature handles regional failover for you automatically. When enabled, there's only one Azure SignalR instance and no code changes are introduced. Check [geo-replication](howto-enable-geo-replication.md) for details. +- **Utilize Multiple Endpoints in Service SDK**. Our service SDK supports multiple SignalR service instances and automatically switches to other instances when some of them are unavailable. With this feature, you're able to recover when a disaster takes place, but you need to set up the right system topology by yourself. You learn how to do so **in this document**. ## Highly available architecture for SignalR service -In order to have cross region resiliency for SignalR service, you need to set up multiple service instances in different regions. So when one region is down, the others can be used as backup. +To ensure cross region resiliency for SignalR service, you need to set up multiple service instances in different regions. So when one region is down, the others can be used as backup. When app servers connect to multiple service instances, there are two roles, primary and secondary.-Primary is an instance who is taking online traffic and secondary is a fully functional but backup instance for primary. -In our SDK implementation, negotiate only returns primary endpoints so in normal case clients only connect to primary endpoints. +Primary is an instance responsible for receiving online traffic, while secondary serves as a fallback instance that is fully functional. +In our SDK implementation, negotiate only returns primary endpoints, so clients only connect to primary endpoints in normal cases. But when the primary instance is down, negotiate returns secondary endpoints so the client can still make connections. 
Primary instance and app server are connected through normal server connections but secondary instance and app server are connected through a special type of connection called weak connection.-The main difference of a weak connection is that it doesn't accept client connection routing, because secondary instance is located in another region. Routing a client to another region isn't an optimal choice (increases latency). +One distinguishing characteristic of a weak connection is that it's unable to accept client connection routing due to the location of the secondary instance in another region. Routing a client to another region isn't an optimal choice (increases latency). One service instance can have different roles when connecting to multiple app servers.-One typical setup for cross region scenario is to have two (or more) pairs of SignalR service instances and app servers. +One typical setup for cross region scenario is to have two or more pairs of SignalR service instances and app servers. Inside each pair, the app server and SignalR service are located in the same region, and SignalR service is connected to the app server as a primary role. Between pairs, app servers and SignalR services are also connected, but SignalR becomes a secondary when connecting to a server in another region. With this topology, a message from one server can still be delivered to all clients as all app servers and SignalR service instances are interconnected.-But when a client is connected, it's always routed to the app server in the same region to achieve optimal network latency. +But when a client is connected, it routes to the app server in the same region to achieve optimal network latency. -Below is a diagram that illustrates such topology: +The following diagram illustrates such a topology: ![Diagram shows two regions each with an app server and a SignalR service, where each server is associated with the SignalR service in its region as primary and with the service in the other region as secondary.](media/signalr-concept-disaster-recovery/topology.png) If you have multiple endpoints, you can set them in multiple config entries, eac Azure:SignalR:ConnectionString:<name>:<role> ``` -Here `<name>` is the name of the endpoint and `<role>` is its role (primary or secondary). +In the ConnectionString, `<name>` is the name of the endpoint and `<role>` is its role (primary or secondary). Name is optional but it's useful if you want to further customize the routing behavior among multiple endpoints. #### Through code Follow the steps to trigger the failover: ## Next steps -In this article, you have learned how to configure your application to achieve resiliency for SignalR service. To understand more details about server/client connection and connection routing in SignalR service, you can read [this article](signalr-concept-internals.md) for SignalR service internals. +In this article, you learned how to configure your application to achieve resiliency for SignalR service. To understand more details about server/client connection and connection routing in SignalR service, you can read [this article](signalr-concept-internals.md) for SignalR service internals. For scaling scenarios such as sharding that uses multiple instances together to handle a large number of connections, read [how to scale multiple instances](signalr-howto-scale-multi-instances.md). |
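For example, a two-region topology like the one above might be wired up with one primary and one secondary entry; the endpoint names here (`east-instance`, `west-instance`) are made up for illustration:

```
Azure:SignalR:ConnectionString:east-instance:primary = <connection-string-of-east-instance>
Azure:SignalR:ConnectionString:west-instance:secondary = <connection-string-of-west-instance>
```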
azure-signalr | Signalr Concept Event Grid Integration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/signalr-concept-event-grid-integration.md | Title: React to Azure SignalR Service events -description: Use Azure Event Grid to subscribe to Azure SignalR Service events. Other downstream services can be triggered by these events. +description: Use Azure Event Grid to subscribe to Azure SignalR Service events. These events can also trigger other downstream services. -Azure SignalR Service events are reliably sent to the Event Grid service which provides reliable delivery services to your applications through rich retry policies and dead-letter delivery. To learn more, see [Event Grid message delivery and retry](../event-grid/delivery-and-retry.md). +Azure SignalR Service events are reliably sent to the Event Grid service, which provides reliable delivery to your applications through rich retry policies and dead-letter delivery. To learn more, see [Event Grid message delivery and retry](../event-grid/delivery-and-retry.md). ![Event Grid Model](/azure/event-grid/media/overview/functional-model.png) ## Serverless state-Azure SignalR Service events are only active when client connections are in a serverless state. If a client does not route to a hub server, it goes into the serverless state. Classic mode only works when the hub that client connections connect to doesn't have a hub server. Serverless mode is recommended as a best practice. To learn more details about service mode, see [How to choose Service Mode](https://github.com/Azure/azure-signalr/blob/dev/docs/faq.md#what-is-the-meaning-of-service-mode-defaultserverlessclassic-how-can-i-choose). +Azure SignalR Service events are only active when client connections are in a serverless state. If a client doesn't route to a hub server, it goes into the serverless state. Classic mode only works when the hub that client connections connect to doesn't have a hub server. Serverless mode is recommended as a best practice. To learn more details about service mode, see [How to choose Service Mode](https://github.com/Azure/azure-signalr/blob/dev/docs/faq.md#what-is-the-meaning-of-service-mode-defaultserverlessclassic-how-can-i-choose). ## Available Azure SignalR Service events-Event grid uses [event subscriptions](../event-grid/concepts.md#event-subscriptions) to route event messages to subscribers. Azure SignalR Service event subscriptions support two types of events: +Event Grid uses [event subscriptions](../event-grid/concepts.md#event-subscriptions) to route event messages to subscribers. Azure SignalR Service event subscriptions support two types of events: |Event Name|Description| |-|--| Event grid uses [event subscriptions](../event-grid/concepti |`Microsoft.SignalRService.ClientConnectionDisconnected`|Raised when a client connection is disconnected.| ## Event schema-Azure SignalR Service events contain all the information you need to respond to the changes in your data. You can identify an Azure SignalR Service event with the eventType property starts with "Microsoft.SignalRService". Additional information about the usage of Event Grid event properties is documented at [Event Grid event schema](../event-grid/event-schema.md). +Azure SignalR Service events contain all the information you need to respond to the changes in your data. You can identify an Azure SignalR Service event by checking whether the eventType property starts with **Microsoft.SignalRService**. 
Additional information about the usage of Event Grid event properties is documented at [Event Grid event schema](../event-grid/event-schema.md). -Here is an example of a client connection connected event: +Here's an example of a client connection connected event: ```json [{ "topic": "/subscriptions/{subscription-id}/resourceGroups/signalr-rg/providers/Microsoft.SignalRService/SignalR/signalr-resource", |
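To sketch what consuming these events can look like, here's a hypothetical Event Grid-triggered Azure Function in TypeScript (Node.js programming model v3). The handler name and the shape of `data` are assumptions based on the example payload, so verify field names against the published event schema.

```typescript
import { AzureFunction, Context } from "@azure/functions";

// Event Grid trigger; the binding itself is declared in function.json.
const handleSignalREvent: AzureFunction = async (
  context: Context,
  eventGridEvent: { eventType: string; data: Record<string, unknown> }
): Promise<void> => {
  // All SignalR Service events share the eventType prefix described above.
  if (!eventGridEvent.eventType.startsWith("Microsoft.SignalRService")) {
    return;
  }

  if (eventGridEvent.eventType === "Microsoft.SignalRService.ClientConnectionConnected") {
    context.log("Client connected:", eventGridEvent.data);
  } else if (eventGridEvent.eventType === "Microsoft.SignalRService.ClientConnectionDisconnected") {
    context.log("Client disconnected:", eventGridEvent.data);
  }
};

export default handleSignalREvent;
```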
azure-signalr | Signalr Concept Scale Aspnet Core | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/signalr-concept-scale-aspnet-core.md | -Currently, there are [two versions](/aspnet/core/signalr/version-differences) of SignalR you can use with your web applications: ASP.NET SignalR, and the new ASP.NET Core SignalR. ASP.NET Core SignalR is a rewrite of the previous version. As a result, ASP.NET Core SignalR isn't backward compatible with the earlier SignalR version. The APIs and behaviors are different. The Azure SignalR Service supports both versions. +SignalR is currently available in [two versions](/aspnet/core/signalr/version-differences) for use with web applications: +- ASP.NET SignalR +- the new ASP.NET Core SignalR -With Azure SignalR Service, you have the ability to run your actual web application on multiple platforms (Windows, Linux, and macOS) while hosting with [Azure App Service](../app-service/overview.md), [IIS](/aspnet/core/host-and-deploy/iis/index), [Nginx](/aspnet/core/host-and-deploy/linux-nginx), [Apache](/aspnet/core/host-and-deploy/linux-apache), [Docker](/aspnet/core/host-and-deploy/docker/index). You can also use self-hosting in your own process. +ASP.NET Core SignalR is a rewrite of the previous version. As a result, ASP.NET Core SignalR isn't backward compatible with the earlier SignalR version. The APIs and behaviors are different. The Azure SignalR Service supports both versions. -If the goals for your application include: supporting the latest functionality for updating web clients with real-time content updates, running across multiple platforms (Azure, Windows, Linux, and macOS), and hosting in different environments, then the best choice could be using the Azure SignalR Service. +Azure SignalR Service lets you run your actual web application on multiple platforms (Windows, Linux, and macOS) while hosting with [Azure App Service](../app-service/overview.md), [IIS](/aspnet/core/host-and-deploy/iis/index), [Nginx](/aspnet/core/host-and-deploy/linux-nginx), [Apache](/aspnet/core/host-and-deploy/linux-apache), or [Docker](/aspnet/core/host-and-deploy/docker/index). You can also use self-hosting in your own process. ++Azure SignalR Service is the best choice if the goals for your application include: +- supporting the latest functionality for updating web clients with real-time content updates +- running across multiple platforms (Azure, Windows, Linux, and macOS) +- hosting in different environments ## Why not deploy SignalR myself? One of the key reasons to use the Azure SignalR Service is simplicity. With Azur Also, WebSockets are typically the preferred technique to support real-time content updates. However, load balancing a large number of persistent WebSocket connections becomes a complicated problem to solve as you scale. Common solutions use: DNS load balancing, hardware load balancers, and software load balancing. Azure SignalR Service handles this problem for you. -For ASP.NET Core SignalR, another reason might be you have no requirements to actually host a web application at all. The logic of your web application might use [Serverless computing](https://azure.microsoft.com/overview/serverless-computing/). For example, maybe your code is only hosted and executed on demand with [Azure Functions](../azure-functions/index.yml) triggers. This scenario can be tricky because your code only runs on-demand and doesn't maintain long connections with clients. 
Azure SignalR Service can handle this situation since the service already manages connections for you. For more information, see [overview on how to use SignalR Service with Azure Functions](signalr-concept-azure-functions.md). Since ASP.NET SignalR uses a different protocol, such Serverless mode isn't supported for ASP.NET SignalR. +For ASP.NET Core SignalR, another reason might be that you have no requirement to host a web application at all. The logic of your web application might use [Serverless computing](https://azure.microsoft.com/overview/serverless-computing/). For example, maybe your code is only hosted and executed on demand with [Azure Functions](../azure-functions/index.yml) triggers. This scenario can be challenging because your code only runs on-demand and doesn't maintain long connections with clients. Azure SignalR Service can handle this situation since the service already manages connections for you. For more information, see [overview on how to use SignalR Service with Azure Functions](signalr-concept-azure-functions.md). Since ASP.NET SignalR uses a different protocol, this Serverless mode isn't supported for ASP.NET SignalR. ## How does it scale? |
azure-signalr | Signalr Quickstart Azure Signalr Service Arm Template | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/signalr-quickstart-azure-signalr-service-arm-template.md | -This quickstart describes how to use an Azure Resource Manager template (ARM template) to create an Azure SignalR Service. You can deploy the Azure SignalR Service through the Azure portal, PowerShell, or CLI. +This quickstart walks you through the process of creating an Azure SignalR Service using an Azure Resource Manager (ARM) template. You can deploy the Azure SignalR Service through the Azure portal, PowerShell, or CLI. [!INCLUDE [About Azure Resource Manager](../../includes/resource-manager-quickstart-introduction.md)] -If your environment meets the prerequisites and you're familiar with using ARM templates, select the **Deploy to Azure** button. The template will open in the Azure portal once you sign in. +If your environment meets the prerequisites and you're familiar with using ARM templates, select the **Deploy to Azure** button. The template opens in the Azure portal once you sign in. [:::image type="content" source="../media/template-deployments/deploy-to-azure.svg" alt-text="Button to deploy Azure SignalR Service to Azure using an ARM template in the Azure portal.":::](https://portal.azure.com/#create/Microsoft.Template/uri/https%3a%2f%2fraw.githubusercontent.com%2fAzure%2fazure-quickstart-templates%2fmaster%2fquickstarts%2fmicrosoft.signalrservice%2fsignalr%2fazuredeploy.json) The template defines one Azure resource: # [Portal](#tab/azure-portal) -Select the following link to deploy the Azure SignalR Service using the ARM template in the Azure portal: +To deploy the Azure SignalR Service using the ARM template, select the following link in the Azure portal: [:::image type="content" source="../media/template-deployments/deploy-to-azure.svg" alt-text="Button to deploy Azure SignalR Service to Azure using the ARM template in the Azure portal.":::](https://portal.azure.com/#create/Microsoft.Template/uri/https%3a%2f%2fraw.githubusercontent.com%2fAzure%2fazure-quickstart-templates%2fmaster%2fquickstarts%2fmicrosoft.signalrservice%2fsignalr%2fazuredeploy.json) On the **Deploy an Azure SignalR Service** page: 3. If you created a new resource group, select a **Region** for the resource group. -4. If you want, enter a new **Name** and the **Location** (such as **eastus2**) of the Azure SignalR Service. If you don't specify a name, it's automatically generated. The location for the Azure SignalR Service can be the same as or different from the region of the resource group. If you don't specify a location, it's set to the same region as the resource group. +4. If you want, enter a new **Name** and the **Location** (for example, **eastus2**) of the Azure SignalR Service. If you don't specify a name, it's generated automatically. The Azure SignalR Service's location can be the same as or different from the region of the resource group. If you don't specify a location, it defaults to the same region as your resource group. -5. Choose the **Pricing Tier** (**Free_F1** or **Standard_S1**), enter the **Capacity** (number of SignalR units), and choose a **Service Mode** of **Default** (requires hub server), **Serverless** (doesn't allow any server connection), or **Classic** (routed to hub server only if hub has server connection). Then choose whether to **Enable Connectivity Logs** or **Enable Messaging Logs**. +5. 
Choose the **Pricing Tier** (**Free_F1** or **Standard_S1**), enter the **Capacity** (number of SignalR units), and choose a **Service Mode** of **Default** (requires hub server), **Serverless** (doesn't allow any server connection), or **Classic** (routed to hub server only if hub has server connection). Now, choose whether to **Enable Connectivity Logs** or **Enable Messaging Logs**. > [!NOTE] > For the **Free_F1** pricing tier, the capacity is limited to 1 unit. Follow these steps to see an overview of your new Azure SignalR Service: # [PowerShell](#tab/PowerShell) -Run the following interactive code to view details about your Azure SignalR Service. You'll have to enter the name of the new service and the resource group. +Run the following interactive code to view details about your Azure SignalR Service. You have to enter the name of the new service and the resource group. ```azurepowershell-interactive $serviceName = Read-Host -Prompt "Enter the name of your Azure SignalR Service" Read-Host "Press [ENTER] to continue" # [CLI](#tab/CLI) -Run the following interactive code to view details about your Azure SignalR Service. You'll have to enter the name of the new service and the resource group. +Run the following interactive code to view details about your Azure SignalR Service. You have to enter the name of the new service and the resource group. ```azurecli-interactive read -p "Enter the name of your Azure SignalR Service: " serviceName && |
azure-signalr | Signalr Quickstart Rest Api | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/signalr-quickstart-rest-api.md | -Azure SignalR Service provides [REST API](https://github.com/Azure/azure-signalr/blob/dev/docs/rest-api.md) to support server to client communication scenarios, such as broadcasting. You can choose any programming language that can make REST API call. You can post messages to all connected clients, a specific client by name, or a group of clients. +Azure SignalR Service provides a [REST API](https://github.com/Azure/azure-signalr/blob/dev/docs/rest-api.md) to support server-to-client communication scenarios such as broadcasting. You can choose any programming language that can make REST API calls. You can post messages to all connected clients, a specific client by name, or a group of clients. -In this quickstart, you will learn how to send messages from a command-line app to connected client apps in C#. +In this quickstart, you learn how to send messages from a command-line app to connected client apps in C#. ## Prerequisites Having issues? Try the [troubleshooting guide](signalr-howto-troubleshoot-guide. ## Clone the sample application -While the service is deploying, let's switch to prepare the code. Clone the [sample app from GitHub](https://github.com/aspnet/AzureSignalR-samples.git), set the SignalR Service connection string, and run the application locally. +While the service is being deployed, let's prepare the code. Clone the [sample app from GitHub](https://github.com/aspnet/AzureSignalR-samples.git), set the SignalR Service connection string, and run the application locally. 1. Open a git terminal window. Change to a folder where you want to clone the sample project. This sample is a console app showing the use of Azure SignalR Service. It provid - Server Mode: use simple commands to call Azure SignalR Service REST API. - Client Mode: connect to Azure SignalR Service and receive messages from server. -Also you can find how to generate an access token to authenticate with Azure SignalR Service. +You also learn how to generate an access token to authenticate with Azure SignalR Service. ### Build the executable file Having issues? Try the [troubleshooting guide](signalr-howto-troubleshoot-guide. ## Run the sample without publishing -You can also run the command below to start a server or client +You can also run the following command to start a server or client: ```bash # Start a server You can start multiple clients with different client names. Having issues? Try the [troubleshooting guide](signalr-howto-troubleshoot-guide.md) or [let us know](https://aka.ms/asrs/qsapi). -## <a name="usage"> </a> Integration with third-party services +## <a name="usage"> </a> Integration with non-Microsoft services -The Azure SignalR service allows third-party services to integrate with the system. +The Azure SignalR service allows non-Microsoft services to integrate with the system. 
### Definition of technical specifications Send to some users | **✓** (Deprecated) | `N / A` Version | API HTTP Method | Request URL | Request body | | | `1.0-preview` | `POST` | `https://<instance-name>.service.signalr.net:5002/api/v1-preview/hub/<hub-name>` | `{"target": "<method-name>", "arguments": [...]}`-`1.0` | `POST` | `https://<instance-name>.service.signalr.net/api/v1/hubs/<hub-name>` | Same as above +`1.0` | `POST` | `https://<instance-name>.service.signalr.net/api/v1/hubs/<hub-name>` | `{"target": "<method-name>", "arguments": [...]}` <a name="broadcast-group"> </a> ### Broadcast to a group Version | API HTTP Method | Request URL | Request body Version | API HTTP Method | Request URL | Request body | | | `1.0-preview` | `POST` | `https://<instance-name>.service.signalr.net:5002/api/v1-preview/hub/<hub-name>/group/<group-name>` | `{"target": "<method-name>", "arguments": [...]}`-`1.0` | `POST` | `https://<instance-name>.service.signalr.net/api/v1/hubs/<hub-name>/groups/<group-name>` | Same as above +`1.0` | `POST` | `https://<instance-name>.service.signalr.net/api/v1/hubs/<hub-name>/groups/<group-name>` | `{"target": "<method-name>", "arguments": [...]}` <a name="send-user"> </a> ### Sending to a user Version | API HTTP Method | Request URL | Request body Version | API HTTP Method | Request URL | Request body | | | `1.0-preview` | `POST` | `https://<instance-name>.service.signalr.net:5002/api/v1-preview/hub/<hub-name>/user/<user-id>` | `{"target": "<method-name>", "arguments": [...]}`-`1.0` | `POST` | `https://<instance-name>.service.signalr.net/api/v1/hubs/<hub-name>/users/<user-id>` | Same as above +`1.0` | `POST` | `https://<instance-name>.service.signalr.net/api/v1/hubs/<hub-name>/users/<user-id>` | `{"target": "<method-name>", "arguments": [...]}` <a name="add-user-to-group"> </a> ### Adding a user to a group |
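As a quick illustration of the 1.0 broadcast endpoint in the table above, here's a minimal TypeScript sketch (Node.js 18+, which provides a global `fetch`). It assumes you've already generated a JWT access token signed with the service's access key, as the sample app shows; the instance, hub, and method names are placeholders.

```typescript
// Broadcast a message to every client connected to a hub via the REST API (v1).
async function broadcast(
  instanceName: string,
  hubName: string,
  accessToken: string, // JWT generated from the access key (see the sample app)
  target: string,
  args: unknown[]
): Promise<void> {
  const url = `https://${instanceName}.service.signalr.net/api/v1/hubs/${hubName}`;
  const response = await fetch(url, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${accessToken}`,
    },
    // Matches the request body shown in the table: {"target": ..., "arguments": [...]}
    body: JSON.stringify({ target, arguments: args }),
  });
  if (!response.ok) {
    throw new Error(`Broadcast failed with status ${response.status}`);
  }
}

// Example (hypothetical client-side method name):
// await broadcast("my-instance", "chat", token, "ReceiveMessage", ["server", "hello"]);
```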
azure-web-pubsub | Tutorial Build Chat | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/tutorial-build-chat.md | |
azure-web-pubsub | Tutorial Permission | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/tutorial-permission.md | |
azure-web-pubsub | Tutorial Subprotocol | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/tutorial-subprotocol.md | |
backup | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/policy-reference.md | Title: Built-in policy definitions for Azure Backup description: Lists Azure Policy built-in policy definitions for Azure Backup. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 01/30/2024 Last updated : 02/06/2024 |
batch | Batch Account Create Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/batch-account-create-portal.md | For detailed steps, see [Assign Azure roles by using the Azure portal](../role-b ### Create a key vault -User subscription mode requires [Azure Key Vault](/azure/key-vault/general/overview). The key vault must be in the same subscription and region as the Batch account and use a [Vault Access Policy](/azure/key-vault/general/assign-access-policy). +User subscription mode requires [Azure Key Vault](/azure/key-vault/general/overview). The key vault must be in the same subscription and region as the Batch account. To create a new key vault: 1. Search for and select **key vaults** from the Azure Search box, and then select **Create** on the **Key vaults** page. 1. On the **Create a key vault** page, enter a name for the key vault, and choose an existing resource group or create a new one in the same region as your Batch account.-1. On the **Access configuration** tab, select **Vault access policy** under **Permission model**. +1. On the **Access configuration** tab, select either **Azure role-based access control** or **Vault access policy** under **Permission model**, and under **Resource access**, select all three checkboxes: **Azure Virtual Machine for deployment**, **Azure Resource Manager for template deployment**, and **Azure Disk Encryption for volume encryption**. 1. Leave the remaining settings at default values, select **Review + create**, and then select **Create**. ### Create a Batch account in user subscription mode To create a Batch account in user subscription mode: ### Grant access to the key vault manually -You can also grant access to the key vault manually. +You can also grant access to the key vault manually in the [Azure portal](https://portal.azure.com). +#### If the Key Vault permission model is **Azure role-based access control**: +1. Select **Access control (IAM)** from the left navigation of the key vault page. +1. At the top of the **Access control (IAM)** page, select **Add** > **Add role assignment**. +1. On the **Add role assignment** screen, on the **Role** tab, under the **Job function roles** subtab, select either the **Key Vault Secrets Officer** or **Key Vault Administrator** role for the Batch account, and then select **Next**. +1. On the **Members** tab, select **Select members**. On the **Select members** screen, search for and select **Microsoft Azure Batch**, and then select **Select**. +1. Select **Review + assign** at the bottom to go to the **Review + assign** tab, and then select **Review + assign** again. ++For detailed steps, see [Assign Azure roles by using the Azure portal](../role-based-access-control/role-assignments-portal.md). ++#### If the Key Vault permission model is **Vault access policy**: 1. Select **Access policies** from the left navigation of the key vault page. 1. On the **Access policies** page, select **Create**. 1. On the **Create an access policy** screen, select a minimum of **Get**, **List**, **Set**, and **Delete** permissions under **Secret permissions**. For [key vaults with soft-delete enabled](/azure/key-vault/general/soft-delete-overview), also select **Recover**. |
batch | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/policy-reference.md | Title: Built-in policy definitions for Azure Batch description: Lists Azure Policy built-in policy definitions for Azure Batch. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 01/30/2024 Last updated : 02/06/2024 |
certification | How To Test Pnp | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/certification/how-to-test-pnp.md | This article shows you how to: The application code that runs on your IoT Plug and Play device must: - Connect to Azure IoT Hub using the [Device Provisioning Service (DPS)](../iot-dps/about-iot-dps.md).-- Follow the [IoT Plug an Play conventions](../iot-develop/concepts-developer-guide-device.md) to implement of telemetry, properties, and commands.+- Follow the [IoT Plug and Play conventions](../iot/concepts-developer-guide-device.md) to implement telemetry, properties, and commands. The application is software that's installed separately from the operating system or is bundled with the operating system in a firmware image that's flashed to the device. -Prior to certifying your device through the certification process for IoT Plug and Play, you will want to validate that the device implementation matches the telemetry, properties and commands defined in the [Digital Twins Definition Language (DTDL)](https://github.com/Azure/opendigitaltwins-dtdl) device model locally prior to submitting to the [Azure IoT Public Model Repository](../iot-develop/concepts-model-repository.md). +Prior to certifying your device through the certification process for IoT Plug and Play, you want to validate that the device implementation matches the telemetry, properties, and commands defined in the [Digital Twins Definition Language (DTDL)](https://github.com/Azure/opendigitaltwins-dtdl) device model locally, prior to submitting to the [Azure IoT Public Model Repository](../iot/concepts-model-repository.md). To meet the certification requirements, your device must: - Connect to Azure IoT Hub using the [DPS](../iot-dps/about-iot-dps.md). - Implement telemetry, properties, or commands following the IoT Plug and Play convention. - Describe the device interactions with a [DTDL v2](https://aka.ms/dtdl) model.-- Send the model ID during [DPS registration](../iot-develop/concepts-developer-guide-device.md#dps-payload) in the DPS provisioning payload.-- Announce the model ID during the [MQTT connection](../iot-develop/concepts-developer-guide-device.md#model-id-announcement).+- Send the model ID during [DPS registration](../iot/concepts-developer-guide-device.md#dps-payload) in the DPS provisioning payload. +- Announce the model ID during the [MQTT connection](../iot/concepts-developer-guide-device.md#model-id-announcement). ## Test with the Azure IoT Extension CLI |
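As a rough sketch of the model ID requirements above (not the certification tooling itself), the Node.js device SDK lets a device announce its model ID when it opens the MQTT connection. The model ID below is a hypothetical DTDL ID; substitute the ID of your published model.

```typescript
import { Client } from "azure-iot-device";
import { Mqtt } from "azure-iot-device-mqtt";

// Hypothetical DTDL v2 model ID for illustration.
const modelId = "dtmi:com:example:Thermostat;1";

async function connectPlugAndPlayDevice(connectionString: string): Promise<Client> {
  const client = Client.fromConnectionString(connectionString, Mqtt);
  // Announce the model ID as part of the MQTT connection, per the PnP convention.
  await client.setOptions({ modelId });
  await client.open();
  return client;
}
```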
certification | How To Troubleshoot Pnp | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/certification/how-to-troubleshoot-pnp.md | While running the tests, if you receive a result of `Passed with warnings`, this ## When you need help with the model repository -For IoT Plug and Play issues related to the model repository, refer to [our Docs guidance about the device model repository](../iot-develop/concepts-model-repository.md). +For IoT Plug and Play issues related to the model repository, refer to [our Docs guidance about the device model repository](../iot/concepts-model-repository.md). ## Next steps |
certification | Program Requirements Pnp | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/certification/program-requirements-pnp.md | IoT Plug and Play enables solution builders to integrate smart devices with thei The promises of IoT Plug and Play certification are: 1. Defined device models and interfaces are compliant with the [Digital Twin Definition Language](https://github.com/Azure/opendigitaltwins-dtdl)-1. Easy integration with Azure IoT based solutions using the [Digital Twin APIs](../iot-develop/concepts-digital-twin.md) : Azure IoT Hub and Azure IoT Central +1. Easy integration with Azure IoT based solutions using the [Digital Twin APIs](../iot/concepts-digital-twin.md): Azure IoT Hub and Azure IoT Central 1. Product truth validated through testing telemetry from end point to cloud using DTDL > [!Note] The promises of IoT Plug and Play certification are: | **OS** | Agnostic | | **Validation Type** | Automated | | **Validation** | The [portal workflow](https://certify.azure.com) validates: **1.** Model ID announcement and ensures the device is connected using either the MQTT or MQTT over WebSockets protocol **2.** Models are compliant with the DTDL v2 **3.** Telemetry, properties, and commands are properly implemented and interact between IoT Hub Digital Twin and Device Twin on the device |-| **Resources** | [Public Preview Refresh updates](../iot-develop/overview-iot-plug-and-play.md) | +| **Resources** | [Public Preview Refresh updates](../iot/overview-iot-plug-and-play.md) | **[Required] Device models are published in public model repository** The promises of IoT Plug and Play certification are: | **OS** | Agnostic | | **Validation Type** | Automated | | **Validation** | All device models are required to be published in the public repository. Device models are resolved via models available in the public repository **1.** User must manually publish the models to the public repository before submitting for certification. **2.** Note that once the models are published, they are immutable. 
We strongly recommend publishing only when the models and embedded device code are finalized.*1 *1 User must contact Microsoft support to revoke the models once published to the model repository **3.** [Portal workflow](https://certify.azure.com) checks the existence of the models in the public repository when the device is connected to the certification service |-| **Resources** | [Model repository](../iot-develop/overview-iot-plug-and-play.md) | +| **Resources** | [Model repository](../iot/overview-iot-plug-and-play.md) | **[If implemented] Device info Interface: The purpose of this test is to validate that the device info interface is implemented properly in the device code** The promises of IoT Plug and Play certification are: | **OS** | Agnostic | | **Validation Type** | Automated | | **Validation** | [Portal workflow](https://certify.azure.com) validates that the device code implements the device info interface **1.** Checks the values are emitted by the device code to IoT Hub **2.** Checks the interface is implemented in the DCM (this implementation will change in DTDL v2) **3.** Checks properties are not write-able (read only) **4.** Checks the schema type is string and/or long and not null |-| **Resources** | [Microsoft defined interface](../iot-develop/overview-iot-plug-and-play.md) | +| **Resources** | [Microsoft defined interface](../iot/overview-iot-plug-and-play.md) | | **Azure Recommended** | N/A | **[If implemented] Cloud to device: The purpose of this test is to make sure messages can be sent from the cloud to devices** |
communication-services | Migrating To Azure Communication Services Calling | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/tutorials/migrating-to-azure-communication-services-calling.md | +zone_pivot_groups: acs-plat-web-ios-android # Migration Guide from Twilio Video to Azure Communication Services -This article describes how to migrate an existing Twilio Video implementation to the [Azure Communication Services' Calling SDK](../concepts/voice-video-calling/calling-sdk-features.md) for WebJS. Both Twilio Video and Azure Communication Services' Calling SDK for WebJS are also cloud-based platforms that enable developers to add voice and video calling features to their web applications. +This article describes how to migrate an existing Twilio Video implementation to the [Azure Communication Services' Calling SDK](../concepts/voice-video-calling/calling-sdk-features.md). Both Twilio Video and Azure Communication Services' Calling SDK for WebJS are cloud-based platforms that enable developers to add voice and video calling features to their web applications. However, there are some key differences between them that may affect your choice of platform or require some changes to your existing code if you decide to migrate. In this article, we compare the main features and functions of both platforms and provide some guidance on how to migrate your existing Twilio Video implementation to Azure Communication Services' Calling SDK for WebJS. However, there are some key differences between them that may affect your choice If you're embarking on a new project from the ground up, see the [Quickstarts of the Calling SDK](../quickstarts/voice-video-calling/get-started-with-video-calling.md?pivots=platform-web). -**Prerequisites:** --1. **Azure Account:** Make sure that your Azure account is active. New users can create a free account at [Microsoft Azure](https://azure.microsoft.com/free/). -2. **Node.js 18:** Ensure Node.js 18 is installed on your system. Download from [Node.js](https://nodejs.org/en). -3. **Communication Services Resource:** Set up a [Communication Services Resource](../quickstarts/create-communication-resource.md?tabs=windows&pivots=platform-azp) via your Azure portal and note your connection string. -4. **Azure CLI:** Follow the instructions at [Install Azure CLI on Windows](/cli/azure/install-azure-cli-windows?tabs=azure-cli). -5. **User Access Token:** Generate a user access token to instantiate the call client. You can create one using the Azure CLI as follows: -```console -az communication identity token issue --scope voip --connection-string "yourConnectionString" -``` --For more information, see [Use Azure CLI to Create and Manage Access Tokens](../quickstarts/identity/access-tokens.md?pivots=platform-azcli). --For Video Calling as a Teams user: --- You can also use Teams identity. To generate an access token for a Teams User, see [Manage teams identity](../quickstarts/manage-teams-identity.md?pivots=programming-language-javascript).-- Obtain the Teams thread ID for call operations using the [Graph Explorer](https://developer.microsoft.com/graph/graph-explorer). For information about creating a thread ID, see [Create chat - Microsoft Graph v1.0 > Example2: Create a group chat](/graph/api/chat-post?preserve-view=true&tabs=javascript&view=graph-rest-1.0#example-2-create-a-group-chat).--### UI library --The UI library simplifies the process of creating modern communication user interfaces using Azure Communication Services. 
It offers a collection of ready-to-use UI components that you can easily integrate into your application. --This open source prebuilt set of controls enables you to create aesthetically pleasing designs using [Fluent UI SDK](https://developer.microsoft.com/en-us/fluentui#/) components and develop high quality audio/video communication experiences. For more information, check out the [Azure Communication Services UI Library overview](../concepts/ui-library/ui-library-overview.md). The overview includes comprehensive information about both web and mobile platforms. ### Calling support Azure Communication Services offers various call types. The type of call you cho - **Group Calls** - Happens when three or more participants connect in a single call. Any combination of VoIP and PSTN-connected users can be on a group call. A one-to-one call can evolve into a group call by adding more participants to the call, and one of these participants can be a bot. - **Rooms Call** - A Room acts as a container that manages activity between end-users of Azure Communication Services. It provides application developers with enhanced control over who can join a call, when they can meet, and how they collaborate. For a more comprehensive understanding of Rooms, see the [Rooms overview](../concepts/rooms/room-concept.md). -## Installation --### Install the Azure Communication Services Calling SDK --Use the `npm install` command to install the Azure Communication Services Calling SDK for JavaScript. -```console -npm install @azure/communication-common -npm install @azure/communication-calling -``` --### Remove the Twilio SDK from the project --You can remove the Twilio SDK from your project by uninstalling the package. -```console -npm uninstall twilio-video -``` --## Object Model --The following classes and interfaces handle some of the main features of the Azure Communication Services Calling SDK: --| **Name** | **Description** | -|--|-| -| CallClient | The main entry point to the Calling SDK. | -| AzureCommunicationTokenCredential | Implements the `CommunicationTokenCredential` interface, which is used to instantiate the CallAgent. | -| CallAgent | Start and manage calls. | -| Device Manager | Manage media devices. | -| Call | Represents a Call. | -| LocalVideoStream | Create a local video stream for a camera device on the local system. | -| RemoteParticipant | Represents a remote participant in the Call. | -| RemoteVideoStream | Represents a remote video stream from a Remote Participant. | -| LocalAudioStream | Represents a local audio stream for a local microphone device. | -| AudioOptions | Audio options, provided to a participant when making an outgoing call or joining a group call. | -| AudioIssue | Represents the end of call survey audio issues. Example responses might be `NoLocalAudio` - the other participants were unable to hear me, or `LowVolume` - the call audio volume was too low. | --When using ACS calling in a Teams call, there are a few differences: --- Instead of `CallAgent` - use `TeamsCallAgent` for starting and managing Teams calls.-- Instead of `Call` - use `TeamsCall` for representing a Teams Call.--## Initialize the Calling SDK (CallClient/CallAgent) --Using the `CallClient`, initialize a `CallAgent` instance. The `createCallAgent` method takes a `CommunicationTokenCredential` as an argument. It accepts a [user access token](../quickstarts/identity/access-tokens.md?tabs=windows&pivots=programming-language-javascript). 
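For the Teams flavor of this initialization, a minimal sketch follows. This is an illustration only, not the article's own sample: it assumes the `createTeamsCallAgent` method on `CallClient` and a Teams user access token obtained as described in the prerequisites.

```javascript
const { CallClient } = require('@azure/communication-calling');
const { AzureCommunicationTokenCredential } = require('@azure/communication-common');

// Assumption: a Teams user access token issued as described in the prerequisites.
const teamsToken = '<TEAMS_USER_TOKEN>';
const teamsTokenCredential = new AzureCommunicationTokenCredential(teamsToken);

// createTeamsCallAgent returns a TeamsCallAgent, used in place of CallAgent for Teams calls.
const callClient = new CallClient();
const teamsCallAgent = await callClient.createTeamsCallAgent(teamsTokenCredential);
```

From here, `teamsCallAgent.startCall(...)` and `teamsCallAgent.join(...)` mirror the `callAgent` calls shown in the sections that follow.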
--### Device manager --#### Twilio --Twilio doesn't have a Device Manager analog. Tracks are created using the system's default device. To customize a device, obtain the desired source track via: -```javascript -navigator.mediaDevices.getUserMedia() -``` --And pass it to the track creation method. --#### Azure Communication Services -```javascript -const { CallClient } = require('@azure/communication-calling'); -const { AzureCommunicationTokenCredential } = require('@azure/communication-common'); --const userToken = '<USER_TOKEN>'; -const tokenCredential = new AzureCommunicationTokenCredential(userToken); --const callClient = new CallClient(); -const callAgent = await callClient.createCallAgent(tokenCredential, {displayName: 'optional user name'}); -``` --You can use the `getDeviceManager` method on the `CallClient` instance to access `deviceManager`. -```javascript -const deviceManager = await callClient.getDeviceManager(); --// Get a list of available video devices for use. -const localCameras = await deviceManager.getCameras(); --// Get a list of available microphone devices for use. -const localMicrophones = await deviceManager.getMicrophones(); --// Get a list of available speaker devices for use. -const localSpeakers = await deviceManager.getSpeakers(); -``` --### Get device permissions --#### Twilio --Twilio Video asks for device permissions on track creation. --#### Azure Communication Services --Prompt a user to grant camera and/or microphone permissions: -```javascript -const result = await deviceManager.askDevicePermission({audio: true, video: true}); -``` --The output returns an object that indicates whether audio and video permissions were granted: -```javascript -console.log(result.audio); -console.log(result.video); -``` --## Starting a call --### Twilio --```javascript -import * as TwilioVideo from 'twilio-video'; --const twilioVideo = TwilioVideo; -let twilioRoom; --twilioRoom = await twilioVideo.connect('token', { name: 'roomName', audio: false, video: false }); -``` --### Azure Communication Services --To create and start a call, use one of the `callAgent` APIs and provide a user that you created through the Communication Services identity SDK. --Call creation and start are synchronous. The `call` instance enables you to subscribe to call events. Subscribe to the `stateChanged` event for value changes. -```javascript -call.on('stateChanged', async () => { console.log(`Call state changed: ${call.state}`) }); -``` --#### 1:1 Call --To call another Azure Communication Services user, use the `startCall` method on `callAgent` and pass the recipient's `CommunicationUserIdentifier` that you [created with the Communication Services administration library](../quickstarts/identity/access-tokens.md). -```javascript -const userCallee = { communicationUserId: '<Azure_Communication_Services_USER_ID>' }; -const oneToOneCall = callAgent.startCall([userCallee]); -``` --#### Rooms Call --To join a `Room` call, you can instantiate a context object with the `roomId` property as the room identifier. To join the call, use the `join` method and pass the context instance. -```javascript -const context = { roomId: '<RoomId>' }; -const call = callAgent.join(context); -``` -A **Room** offers application developers better control over who can join a call, when they meet, and how they collaborate. To learn more about **Rooms**, see the [Rooms overview](../concepts/rooms/room-concept.md), or see [Quickstart: Join a room call](../quickstarts/rooms/join-rooms-call.md). 
--#### Group Call --To start a new group call or join an ongoing group call, use the `join` method and pass an object with a `groupId` property. The `groupId` value must be a GUID. -```javascript -const context = { groupId: '<GUID>'}; -const call = callAgent.join(context); -``` --#### Teams call --Start a synchronous one-to-one or group call using the `startCall` API on `teamsCallAgent`. You can provide `MicrosoftTeamsUserIdentifier` or `PhoneNumberIdentifier` as a parameter to define the target of the call. The method returns the `TeamsCall` instance that allows you to subscribe to call events. -```javascript -const userCallee = { microsoftTeamsUserId: '<MICROSOFT_TEAMS_USER_ID>' }; -const oneToOneCall = teamsCallAgent.startCall(userCallee); -``` --## Accepting and joining a call --### Twilio --When using the Twilio Video SDK, the Participant is created after joining the room, and it doesn't have any information about other rooms. --### Azure Communication Services --Azure Communication Services has the `CallAgent` instance, which emits an `incomingCall` event when the logged-in identity receives an incoming call. -```javascript -callAgent.on('incomingCall', async (call) => { - // Incoming call - }); -``` --The `incomingCall` event includes an `incomingCall` instance that you can accept or reject. --When starting, joining, or accepting a call with *video on*, if the specified video camera device is being used by another process or if it's disabled in the system, the call starts with *video off*, and returns a `cameraStartFailed: true` call diagnostic. --```javascript -const incomingCallHandler = async (args: { incomingCall: IncomingCall }) => { - const incomingCall = args.incomingCall; -- // Get incoming call ID - var incomingCallId = incomingCall.id; -- // Get information about this Call. - var callInfo = incomingCall.info; -- // Get information about caller - var callerInfo = incomingCall.callerInfo; - - // Accept the call - var call = await incomingCall.accept(); -- // Reject the call - incomingCall.reject(); -- // Subscribe to callEnded event and get the call end reason - incomingCall.on('callEnded', args => { - console.log(args.callEndReason); - }); -- // callEndReason is also a property of IncomingCall - var callEndReason = incomingCall.callEndReason; -}; --callAgentInstance.on('incomingCall', incomingCallHandler); --``` --After starting a call, joining a call, or accepting a call, you can also use the `callAgent` `callsUpdated` event to be notified of the new `Call` object and start subscribing to it. -```javascript -callAgent.on('callsUpdated', (event) => { - event.added.forEach((call) => { - // User joined call - }); - - event.removed.forEach((call) => { - // User left call - }); -}); -``` --For the Azure Communication Services Teams implementation, see how to [Receive a Teams Incoming Call](../how-tos/cte-calling-sdk/manage-calls.md#receive-a-teams-incoming-call). --## Adding and removing participants in a call --### Twilio --Participants can't be added or removed from a Twilio Room; they need to join the Room or disconnect from it themselves. 
--The Local Participant in a Twilio Room can be accessed this way: -```javascript -let localParticipant = twilioRoom.localParticipant; -``` --Remote Participants in a Twilio Room are represented by a map that has a unique Participant SID as a key: -```javascript -twilioRoom.participants; -``` --### Azure Communication Services --All remote participants are represented by the `RemoteParticipant` type and available through the `remoteParticipants` collection on a call instance. --The `remoteParticipants` collection returns a list of remote participants in a call: -```javascript -call.remoteParticipants; // [remoteParticipant, remoteParticipant....] -``` --**Add participant:** --To add a participant to a call, you can use `addParticipant`. Provide one of the Identifier types. It synchronously returns the `remoteParticipant` instance. --The `remoteParticipantsUpdated` event from Call is raised when a participant is successfully added to the call. -```javascript -const userIdentifier = { communicationUserId: '<Azure_Communication_Services_USER_ID>' }; -const remoteParticipant = call.addParticipant(userIdentifier); -``` --**Remove participant:** --To remove a participant from a call, use `removeParticipant`. You need to pass one of the Identifier types. This method resolves asynchronously after the participant is removed from the call. The participant is also removed from the `remoteParticipants` collection. -```javascript -const userIdentifier = { communicationUserId: '<Azure_Communication_Services_USER_ID>' }; -await call.removeParticipant(userIdentifier); --``` --Subscribe to the call's `remoteParticipantsUpdated` event to be notified when new participants are added to the call or removed from the call. --```javascript -call.on('remoteParticipantsUpdated', e => { - e.added.forEach(remoteParticipant => { - // Subscribe to new remote participants that are added to the call - }); - - e.removed.forEach(remoteParticipant => { - // Unsubscribe from participants that are removed from the call - }); --}); -``` --Subscribe to the remote participant's `stateChanged` event for value changes. -```javascript -remoteParticipant.on('stateChanged', () => { - console.log(`Remote participant's state changed: ${remoteParticipant.state}`); -}); -``` --## Video calling --### Starting and stopping video --#### Twilio --```javascript -const videoTrack = await twilioVideo.createLocalVideoTrack({ constraints }); -const videoTrackPublication = await localParticipant.publishTrack(videoTrack, { options }); -``` --The camera is enabled by default. It can be disabled and re-enabled if necessary: -```javascript -videoTrack.disable(); -``` -Or: -```javascript -videoTrack.enable(); -``` --If a video track is created later, attach it locally: --```javascript -const videoElement = videoTrack.attach(); -const localVideoContainer = document.getElementById(localVideoContainerId); -localVideoContainer.appendChild(videoElement); -``` --Twilio Tracks rely on default input devices and reflect the changes in defaults. To change an input device, you need to unpublish the previous Video Track: --```javascript -localParticipant.unpublishTrack(videoTrack); -``` --Then create a new Video Track with the correct constraints. --#### Azure Communication Services -To start video while on a call, you need to enumerate cameras using the `getCameras` method on the `deviceManager` object. 
Then create a new instance of `LocalVideoStream` with the desired camera and pass the `LocalVideoStream` object into the `startVideo` method of an existing call object: --```javascript -const deviceManager = await callClient.getDeviceManager(); -const cameras = await deviceManager.getCameras(); -const camera = cameras[0]; -const localVideoStream = new LocalVideoStream(camera); -await call.startVideo(localVideoStream); -``` --After you successfully start sending video, a `LocalVideoStream` instance of type Video is added to the `localVideoStreams` collection on a call instance. -```javascript -const localVideoStream = call.localVideoStreams.find( (stream) => { return stream.mediaStreamType === 'Video'} ); -``` --To stop local video while on a call, pass the `localVideoStream` instance that's being used for video: -```javascript -await call.stopVideo(localVideoStream); -``` --You can switch to a different camera device while video is being sent by calling `switchSource` on a `localVideoStream` instance: --```javascript -const deviceManager = await callClient.getDeviceManager(); -const cameras = await deviceManager.getCameras(); -const camera = cameras[1]; -localVideoStream.switchSource(camera); -``` --If the specified video device is being used by another process, or if it's disabled in the system: --- While in a call, if your video is off and you start video using `call.startVideo()`, this method returns a `SourceUnavailableError` and `cameraStartFailed` will be set to true.-- A call to the `localVideoStream.switchSource()` method causes `cameraStartFailed` to be set to true. See the [Call Diagnostics guide](../concepts/voice-video-calling/call-diagnostics.md) for more information about how to diagnose call-related issues.--To verify whether the local video is *on* or *off*, you can use the `isLocalVideoStarted` API, which returns true or false: -```javascript -call.isLocalVideoStarted; -``` --To listen for changes to the local video, you can subscribe and unsubscribe to the `isLocalVideoStartedChanged` event: --```javascript -// Subscribe to local video event -call.on('isLocalVideoStartedChanged', () => { - // Callback(); -}); -// Unsubscribe from local video event -call.off('isLocalVideoStartedChanged', () => { - // Callback(); -}); --``` --### Rendering a remote user's video --#### Twilio --As soon as a Remote Participant publishes a Video Track, it needs to be attached. 
The `trackSubscribed` event on the Room or Remote Participant enables you to detect when the track can be attached: --```javascript -twilioRoom.on('participantConnected', (participant) => { - participant.on('trackSubscribed', (track) => { - const remoteVideoElement = track.attach(); - const remoteVideoContainer = document.getElementById(remoteVideoContainerId + participant.identity); - remoteVideoContainer.appendChild(remoteVideoElement); - }); -}); -``` --Or: --```javascript -twilioRoom.on('trackSubscribed', (track, publication, participant) => { - const remoteVideoElement = track.attach(); - const remoteVideoContainer = document.getElementById(remoteVideoContainerId + participant.identity); - remoteVideoContainer.appendChild(remoteVideoElement); -}); -``` --#### Azure Communication Services --To list the video streams and screen sharing streams of remote participants, inspect the `videoStreams` collection: -```javascript -const remoteVideoStream: RemoteVideoStream = call.remoteParticipants[0].videoStreams[0]; -const streamType: MediaStreamType = remoteVideoStream.mediaStreamType; -``` --To render `RemoteVideoStream`, you need to subscribe to its `isAvailableChanged` event. If the `isAvailable` property changes to true, a remote participant is sending a stream. After that happens, create a new instance of `VideoStreamRenderer`, and then create a new `VideoStreamRendererView` instance by using the asynchronous `createView` method. You can then attach `view.target` to any UI element. --Whenever the availability of a remote stream changes, you can destroy the whole `VideoStreamRenderer` or a specific `VideoStreamRendererView`. If you decide to keep them, a blank video frame is displayed. --```javascript -// Reference to the HTML div where we display a grid of all remote video streams from all participants. -let remoteVideosGallery = document.getElementById('remoteVideosGallery'); --const subscribeToRemoteVideoStream = async (remoteVideoStream) => { - let renderer = new VideoStreamRenderer(remoteVideoStream); - let view; - let remoteVideoContainer = document.createElement('div'); - remoteVideoContainer.className = 'remote-video-container'; -- let loadingSpinner = document.createElement('div'); - // See the css example below for styling the loading spinner. - loadingSpinner.className = 'loading-spinner'; - remoteVideoStream.on('isReceivingChanged', () => { - try { - if (remoteVideoStream.isAvailable) { - const isReceiving = remoteVideoStream.isReceiving; - const isLoadingSpinnerActive = remoteVideoContainer.contains(loadingSpinner); - if (!isReceiving && !isLoadingSpinnerActive) { - remoteVideoContainer.appendChild(loadingSpinner); - } else if (isReceiving && isLoadingSpinnerActive) { - remoteVideoContainer.removeChild(loadingSpinner); - } - } - } catch (e) { - console.error(e); - } - }); -- const createView = async () => { - // Create a renderer view for the remote video stream. - view = await renderer.createView(); - // Attach the renderer view to the UI. - remoteVideoContainer.appendChild(view.target); - remoteVideosGallery.appendChild(remoteVideoContainer); - } -- // Remote participant has switched video on/off - remoteVideoStream.on('isAvailableChanged', async () => { - try { - if (remoteVideoStream.isAvailable) { - await createView(); - } else { - view.dispose(); - remoteVideosGallery.removeChild(remoteVideoContainer); - } - } catch (e) { - console.error(e); - } - }); -- // Remote participant has video on initially. 
- if (remoteVideoStream.isAvailable) { - try { - await createView(); - } catch (e) { - console.error(e); - } - } - - console.log(`Initial stream size: height: ${remoteVideoStream.size.height}, width: ${remoteVideoStream.size.width}`); - remoteVideoStream.on('sizeChanged', () => { - console.log(`Remote video stream size changed: new height: ${remoteVideoStream.size.height}, new width: ${remoteVideoStream.size.width}`); - }); -} -``` --Subscribe to the remote participant's `videoStreamsUpdated` event to be notified when the remote participant adds new video streams and removes video streams. --```javascript -remoteParticipant.on('videoStreamsUpdated', e => { - e.added.forEach(remoteVideoStream => { - // Subscribe to new remote participant's video streams - }); -- e.removed.forEach(remoteVideoStream => { - // Unsubscribe from remote participant's video streams - }); -}); -``` --### Virtual background --#### Twilio --To use Virtual Background, install the Twilio helper library: -```console -npm install @twilio/video-processors -``` --Create and load a new `Processor` instance: --```javascript -import { GaussianBlurBackgroundProcessor } from '@twilio/video-processors'; --const blurProcessor = new GaussianBlurBackgroundProcessor({ assetsPath: virtualBackgroundAssets }); --await blurProcessor.loadModel(); -``` --As soon as the model is loaded, you can add the background to the video track using the `addProcessor` method: -```javascript -videoTrack.addProcessor(processor, { inputFrameBufferType: 'video', outputFrameBufferContextType: 'webgl2' }); -``` --#### Azure Communication Services --Use the `npm install` command to install the [Azure Communication Services Effects SDK](../quickstarts/voice-video-calling/get-started-video-effects.md?pivots=platform-web) for JavaScript. -```console -npm install @azure/communication-calling-effects --save -``` --> [!NOTE] -> To use video effects with the Azure Communication Calling SDK, once you've created a `LocalVideoStream`, you need to get the `VideoEffects` feature API of the `LocalVideoStream` to start/stop video effects: --```javascript -import * as AzureCommunicationCallingSDK from '@azure/communication-calling'; --import { BackgroundBlurEffect, BackgroundReplacementEffect } from '@azure/communication-calling-effects'; --// Get the video effects feature API on the LocalVideoStream -// (here, localVideoStream is the LocalVideoStream object you created while setting up video calling) -const videoEffectsFeatureApi = localVideoStream.feature(AzureCommunicationCallingSDK.Features.VideoEffects); --// Subscribe to useful events -videoEffectsFeatureApi.on('effectsStarted', () => { - // Effects started -}); --videoEffectsFeatureApi.on('effectsStopped', () => { - // Effects stopped -}); --videoEffectsFeatureApi.on('effectsError', (error) => { - // Effects error -}); -``` --To blur the background: --```javascript -// Create the effect instance -const backgroundBlurEffect = new BackgroundBlurEffect(); --// Recommended: Check support -const backgroundBlurSupported = await backgroundBlurEffect.isSupported(); --if (backgroundBlurSupported) { - // Use the video effects feature API we created to start effects - await videoEffectsFeatureApi.startEffects(backgroundBlurEffect); -} -``` --For background replacement with an image, you need to provide the URL of the image you want as the background to this effect. Supported image formats are: PNG, JPG, JPEG, TIFF, and BMP. The supported aspect ratio is 16:9. 
--```javascript -const backgroundImage = 'https://linkToImageFile'; --// Create the effect instance -const backgroundReplacementEffect = new BackgroundReplacementEffect({ - backgroundImageUrl: backgroundImage -}); --// Recommended: Check support -const backgroundReplacementSupported = await backgroundReplacementEffect.isSupported(); --if (backgroundReplacementSupported) { - // Use the video effects feature API as before to start/stop effects - await videoEffectsFeatureApi.startEffects(backgroundReplacementEffect); -} -``` --Change the image for this effect by passing it via the `configure` method: -```javascript -const newBackgroundImage = 'https://linkToNewImageFile'; --await backgroundReplacementEffect.configure({ - backgroundImageUrl: newBackgroundImage -}); -``` --To switch effects, use the same method on the video effects feature API: --```javascript -// Switch to background blur -await videoEffectsFeatureApi.startEffects(backgroundBlurEffect); --// Switch to background replacement -await videoEffectsFeatureApi.startEffects(backgroundReplacementEffect); -``` --At any time, if you want to check which effects are active, use the `activeEffects` property. The `activeEffects` property returns an array with the names of the currently active effects, or an empty array if no effects are active. -```javascript -// Using the video effects feature API -const currentActiveEffects = videoEffectsFeatureApi.activeEffects; -``` --To stop effects: -```javascript -await videoEffectsFeatureApi.stopEffects(); -``` ---## Audio --### Starting and stopping audio --#### Twilio --```javascript -const audioTrack = await twilioVideo.createLocalAudioTrack({ constraints }); -const audioTrackPublication = await localParticipant.publishTrack(audioTrack, { options }); -``` --The microphone is enabled by default. You can disable and re-enable it as needed: -```javascript -audioTrack.disable(); -``` --Or: -```javascript -audioTrack.enable(); -``` --Any created Audio Track should be attached by the Local Participant the same way as a Video Track: --```javascript -const audioElement = audioTrack.attach(); -const localAudioContainer = document.getElementById(localAudioContainerId); -localAudioContainer.appendChild(audioElement); -``` --And by the Remote Participant: --```javascript -twilioRoom.on('participantConnected', (participant) => { - participant.on('trackSubscribed', (track) => { - const remoteAudioElement = track.attach(); - const remoteAudioContainer = document.getElementById(remoteAudioContainerId + participant.identity); - remoteAudioContainer.appendChild(remoteAudioElement); - }); -}); -``` --Or: --```javascript -twilioRoom.on('trackSubscribed', (track, publication, participant) => { - const remoteAudioElement = track.attach(); - const remoteAudioContainer = document.getElementById(remoteAudioContainerId + participant.identity); - remoteAudioContainer.appendChild(remoteAudioElement); -}); --``` --It isn't possible to mute incoming audio in the Twilio Video SDK. --#### Azure Communication Services --```javascript -await call.startAudio(); -``` --To mute or unmute the local endpoint, you can use the `mute` and `unmute` asynchronous APIs: --```javascript -// Mute local device (microphone / sent audio) -await call.mute(); --// Unmute local device (microphone / sent audio) -await call.unmute(); -``` --Muting incoming audio sets the call volume to 0. 
To mute or unmute the incoming audio, use the `muteIncomingAudio` and `unmuteIncomingAudio` asynchronous APIs: --```javascript -// Mute local device (speaker) -await call.muteIncomingAudio(); --// Unmute local device (speaker) -await call.unmuteIncomingAudio(); --``` --### Detecting dominant speaker --#### Twilio --To detect the loudest Participant in the Room, use the Dominant Speaker API. You can enable it in the connection options when joining the Group Room with at least two participants: -```javascript -twilioRoom = await twilioVideo.connect('token', { - name: 'roomName', - audio: false, - video: false, - dominantSpeaker: true -}); -``` --When the loudest speaker in the Room changes, the `dominantSpeakerChanged` event is emitted: --```javascript -twilioRoom.on('dominantSpeakerChanged', (participant) => { - // Highlight the loudest speaker -}); -``` --#### Azure Communication Services --Dominant speakers for a call is an extended feature of the core Call API that enables you to obtain a list of the active speakers in the call. This is a ranked list, where the first element in the list represents the last active speaker on the call, and so on. --In order to obtain the dominant speakers in a call, you first need to obtain the call dominant speakers feature API object: -```javascript -const callDominantSpeakersApi = call.feature(Features.CallDominantSpeakers); -``` --Next, you can obtain the list of the dominant speakers by calling `dominantSpeakers`. This has a type of `DominantSpeakersInfo`, which has the following members: --- `speakersList` contains the list of the ranked dominant speakers in the call. These are represented by their participant ID.-- `timestamp` is the latest update time for the dominant speakers in the call.-```javascript -let dominantSpeakers: DominantSpeakersInfo = callDominantSpeakersApi.dominantSpeakers; -``` --You can also subscribe to the `dominantSpeakersChanged` event to know when the dominant speakers list changes. ---```javascript -const dominantSpeakersChangedHandler = () => { - // Get the most up-to-date list of dominant speakers - let dominantSpeakers = callDominantSpeakersApi.dominantSpeakers; -}; -callDominantSpeakersApi.on('dominantSpeakersChanged', dominantSpeakersChangedHandler); --``` --## Enabling screen sharing -### Twilio --To share the screen in Twilio Video, obtain the source track via `navigator.mediaDevices`: --Chromium-based browsers: -```javascript -const stream = await navigator.mediaDevices.getDisplayMedia({ - audio: false, - video: true -}); -const track = stream.getTracks()[0]; -``` --Firefox and Safari: -```javascript -const stream = await navigator.mediaDevices.getUserMedia({ mediaSource: 'screen' }); -const track = stream.getTracks()[0]; -``` --After you obtain the screen share track, you can publish and manage it the same way as a regular Video Track (see the "Video" section). --### Azure Communication Services --To start screen sharing while on a call, you can use the asynchronous API `startScreenSharing`: -```javascript -await call.startScreenSharing(); -``` --After you successfully start sending screen sharing, a `LocalVideoStream` instance of type `ScreenSharing` is created and added to the `localVideoStreams` collection on the call instance. 
--```javascript -const localVideoStream = call.localVideoStreams.find( (stream) => { return stream.mediaStreamType === 'ScreenSharing'} ); -``` --To stop screen sharing while on a call, you can use the asynchronous API `stopScreenSharing`: -```javascript -await call.stopScreenSharing(); -``` --To verify whether screen sharing is on or off, you can use the `isScreenSharingOn` API, which returns true or false: -```javascript -call.isScreenSharingOn; -``` --To listen for changes to the screen share, subscribe and unsubscribe to the `isScreenSharingOnChanged` event: --```javascript -// Subscribe to screen share event -call.on('isScreenSharingOnChanged', () => { - // Callback(); -}); -// Unsubscribe from screen share event -call.off('isScreenSharingOnChanged', () => { - // Callback(); -}); --``` --## Media quality statistics --### Twilio --To collect real-time media stats, use the `getStats` method. -```javascript -const stats = twilioRoom.getStats(); -``` --### Azure Communication Services --Media quality statistics is an extended feature of the core Call API. You first need to obtain the `mediaStatsFeature` API object: --```javascript -const mediaStatsFeature = call.feature(Features.MediaStats); -``` ---To receive the media statistics data, you can subscribe to the `sampleReported` event or the `summaryReported` event: --- The `sampleReported` event triggers every second. Suitable as a data source for UI display or your own data pipeline.-- The `summaryReported` event contains the aggregated values of the data over intervals. Useful when you just need a summary.--If you want control over the interval of the `summaryReported` event, you need to define `mediaStatsCollectorOptions` of type `MediaStatsCollectorOptions`. Otherwise, the SDK uses default values. -```javascript -const mediaStatsCollectorOptions: SDK.MediaStatsCollectorOptions = { - aggregationInterval: 10, - dataPointsPerAggregation: 6 -}; --const mediaStatsCollector = mediaStatsFeature.createCollector(mediaStatsCollectorOptions); --mediaStatsCollector.on('sampleReported', (sample) => { - console.log('media stats sample', sample); -}); --mediaStatsCollector.on('summaryReported', (summary) => { - console.log('media stats summary', summary); -}); -``` --If you don't need to use the media statistics collector, you can call the dispose method of `mediaStatsCollector`. --```javascript -mediaStatsCollector.dispose(); -``` ---You don't need to call the dispose method of `mediaStatsCollector` every time a call ends. The collectors are reclaimed internally when the call ends. --For more information, see [Media quality statistics](../concepts/voice-video-calling/media-quality-sdk.md?pivots=platform-web). --## Diagnostics --### Twilio --To test connectivity, Twilio offers the Preflight API. This is a test call performed to identify signaling and media connectivity issues. 
--An access token is required to launch the test: --```javascript -const preflightTest = twilioVideo.runPreflight(token); --// Emits when a particular call step completes -preflightTest.on('progress', (progress) => { - console.log(`Preflight progress: ${progress}`); -}); --// Emits if the test has failed and returns error and partial test results -preflightTest.on('failed', (error, report) => { - console.error(`Preflight error: ${error}`); - console.log(`Partial preflight test report: ${report}`); -}); --// Emits when the test has been completed successfully and returns the report -preflightTest.on('completed', (report) => { - console.log(`Preflight test report: ${report}`); -}); -``` --Another way to identify network issues during the call is by using the Network Quality API, which monitors a Participant's network and provides quality metrics. You can enable it in the connection options when a participant joins the Group Room: --```javascript -twilioRoom = await twilioVideo.connect('token', { - name: 'roomName', - audio: false, - video: false, - networkQuality: { - local: 3, // Local Participant's Network Quality verbosity - remote: 1 // Remote Participants' Network Quality verbosity - } -}); -``` --When the network quality for a Participant changes, it generates a `networkQualityLevelChanged` event: -```javascript -participant.on('networkQualityLevelChanged', (networkQualityLevel, networkQualityStats) => { - // Process Network Quality stats -}); -``` --### Azure Communication Services -Azure Communication Services provides a feature called User Facing Diagnostics (UFD) that you can use to examine various properties of a call to identify the issue. User Facing Diagnostics events could be caused by some underlying issue (for example, a poor network or a muted microphone) that could cause a user to have a poor call experience. --User-facing diagnostics is an extended feature of the core Call API and enables you to diagnose an active call. -```javascript -const userFacingDiagnostics = call.feature(Features.UserFacingDiagnostics); -``` --Subscribe to the `diagnosticChanged` event to monitor when any user-facing diagnostic changes: -```javascript -/** - * Each diagnostic has the following data: - * - diagnostic is the type of diagnostic, e.g. NetworkSendQuality, DeviceSpeakWhileMuted - * - value is DiagnosticQuality or DiagnosticFlag: - * - DiagnosticQuality = enum { Good = 1, Poor = 2, Bad = 3 }. - * - DiagnosticFlag = true | false. 
- * - valueType = 'DiagnosticQuality' | 'DiagnosticFlag' - */ -const diagnosticChangedListener = (diagnosticInfo: NetworkDiagnosticChangedEventArgs | MediaDiagnosticChangedEventArgs) => { - console.log(`Diagnostic changed: ` + - `Diagnostic: ${diagnosticInfo.diagnostic}` + - `Value: ${diagnosticInfo.value}` + - `Value type: ${diagnosticInfo.valueType}`); -- if (diagnosticInfo.valueType === 'DiagnosticQuality') { - if (diagnosticInfo.value === DiagnosticQuality.Bad) { - console.error(`${diagnosticInfo.diagnostic} is bad quality`); -- } else if (diagnosticInfo.value === DiagnosticQuality.Poor) { - console.error(`${diagnosticInfo.diagnostic} is poor quality`); - } -- } else if (diagnosticInfo.valueType === 'DiagnosticFlag') { - if (diagnosticInfo.value === true) { - console.error(`${diagnosticInfo.diagnostic}`); - } - } -}; --userFacingDiagnostics.network.on('diagnosticChanged', diagnosticChangedListener); -userFacingDiagnostics.media.on('diagnosticChanged', diagnosticChangedListener); -``` --To learn more about User Facing Diagnostics and the different diagnostic values available, see [User Facing Diagnostics](../concepts/voice-video-calling/user-facing-diagnostics.md?pivots=platform-web). --Azure Communication Services also provides a precall diagnostics API. To access the Pre-Call API, you need to initialize a `callClient` and provision an Azure Communication Services access token. Then you can access the `PreCallDiagnostics` feature and the `startTest` method. --```javascript -import { CallClient, Features } from "@azure/communication-calling"; -import { AzureCommunicationTokenCredential } from '@azure/communication-common'; --const callClient = new CallClient(); -const tokenCredential = new AzureCommunicationTokenCredential("INSERT ACCESS TOKEN"); -const preCallDiagnosticsResult = await callClient.feature(Features.PreCallDiagnostics).startTest(tokenCredential); -``` --The Pre-Call API returns a full diagnostic of the device, including details like device permissions, availability and compatibility, call quality stats, and in-call diagnostics. The results are returned as a `PreCallDiagnosticsResult` object. --```javascript -export declare type PreCallDiagnosticsResult = { - deviceAccess: Promise<DeviceAccess>; - deviceEnumeration: Promise<DeviceEnumeration>; - inCallDiagnostics: Promise<InCallDiagnostics>; - browserSupport?: Promise<DeviceCompatibility>; - id: string; - callMediaStatistics?: Promise<MediaStatsCallFeature>; -}; -``` --You can learn more about ensuring precall readiness in [Pre-Call diagnostics](../concepts/voice-video-calling/pre-call-diagnostics.md). --## Event listeners --### Twilio --```javascript -twilioRoom.on('participantConnected', (participant) => { - // Participant connected -}); --twilioRoom.on('participantDisconnected', (participant) => { - // Participant disconnected -}); --``` --### Azure Communication Services --Each object in the JavaScript Calling SDK has properties and collections. Their values change throughout the lifetime of the object. Use the `on()` method to subscribe to objects' events, and use the `off()` method to unsubscribe from objects' events. 
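As a minimal sketch of this subscription pattern, reusing the `stateChanged` event shown earlier in this article (the handler name is illustrative):

```javascript
// Keep a named reference to the handler so the same function can be passed to off().
const callStateChangedHandler = () => {
    console.log(`Call state changed: ${call.state}`);
};

// Subscribe to a property's change event...
call.on('stateChanged', callStateChangedHandler);

// ...and later unsubscribe with the same handler reference.
call.off('stateChanged', callStateChangedHandler);
```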
--**Properties** --- You must inspect their initial values, and subscribe to the `'<property>Changed'` event for future value updates.--**Collections** --- You must inspect their initial values, and subscribe to the `'<collection>Updated'` event for future value updates.-- The `'<collection>Updated'` event's payload has an `added` array that contains values that were added to the collection.-- The `'<collection>Updated'` event's payload also has a `removed` array that contains values that were removed from the collection.--## Leaving and ending sessions --### Twilio -```javascript -twilioVideo.disconnect(); -``` ---### Azure Communication Services -```javascript -call.hangUp(); --// Set the 'forEveryone' property to true to end the call for all participants -call.hangUp({ forEveryone: true }); --``` -## Cleaning Up -If you want to [clean up and remove a Communication Services subscription](../quickstarts/create-communication-resource.md?tabs=windows&pivots=platform-azp#clean-up-resources), you can delete the resource or resource group. |
confidential-computing | Choose Confidential Containers Offerings | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-computing/choose-confidential-containers-offerings.md | Your current setup and operational needs dictate the most relevant path through - **Memory Isolation**: VM level isolation with a unique memory encryption key per VM. - **Programming model**: Zero to minimal changes for containerized applications. Support is limited to containers that are Linux based (containers using a Linux base image for the container). -You can find more information on [Getting started with CVM worker nodes with a lift and shift workload to CVM node pool.](../aks/use-cvm.md) +You can find more information on [Getting started with CVM worker nodes with a lift and shift workload to CVM node pool](../aks/use-cvm.md). ### Confidential Containers on AKS You can find more information on [Getting started with CVM worker nodes with a l - **Programming model**: Zero to minimal changes for containerized applications (containers using a Linux base image for the container). - **Ideal Workloads**: Applications with sensitive data processing, multi-party computations, and regulatory compliance requirements. -You can find more information on [Getting started with CVM worker nodes with a lift and shift workload to CVM node pool.](../aks/use-cvm.md) +You can find more information at [Confidential Containers with Azure Kubernetes Service](../aks/confidential-containers-overview.md). ### Confidential Computing Nodes with Intel SGX |
confidential-computing | Confidential Containers On Aks Preview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-computing/confidential-containers-on-aks-preview.md | In alignment with the guidelines set by the [Confidential Computing Consortium]( * Code integrity: Runtime enforcement is always available through customer defined policies for containers and container configuration, such as immutable policies and container signing. * Isolation from operator: Security designs that assume least privilege and highest isolation shielding from all untrusted parties including customer/tenant admins. It includes hardening existing Kubernetes control plane access (kubelet) to confidential pods. -But with these features of confidentiality, the product maintains its ease of use: it supports all unmodified Linux containers with high Kubernetes feature conformance. Additionally, it supports heterogenous node pools (GPU, general-purpose nodes) in a single cluster to optimize for cost. +But with these features of confidentiality, the product should additionally maintain its ease of use: it supports all unmodified Linux containers with high Kubernetes feature conformance. Additionally, it supports heterogeneous node pools (GPU, general-purpose nodes) in a single cluster to optimize for cost. ## What forms Confidential Containers on AKS? |
connectors | Enable Stateful Affinity Built In Connectors | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/enable-stateful-affinity-built-in-connectors.md | To run these connector operations in stateful mode, you must enable this capabil After you enable virtual network integration for your logic app, you must update your logic app's underlying website configuration (**<*logic-app-name*>.azurewebsites.net**) by using one of the following methods: +- [Azure portal](#azure-portal) (bearer token *not* required) - [Azure Resource Management API](#azure-resource-management-api) (bearer token required) - [Azure PowerShell](#azure-powershell) (bearer token *not* required) +### Azure portal ++To configure virtual network private ports using the Azure portal, follow these steps: ++1. In the [Azure portal](https://portal.azure.com), find and open your Standard logic app resource. +1. On the logic app menu, under **Settings**, select **Configuration**. +1. On the **Configuration** page, select **General settings**. +1. Under **Platform settings**, in the **VNet Private Ports** box, enter the ports that you want to use. + ### Azure Resource Management API To complete this task with the [Azure Resource Management API - Update By Id](/rest/api/resources/resources/update-by-id), review the following requirements, syntax, and parameter values. |
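For illustration only, not the article's own sample: with the generic Azure CLI resource commands, the same update could look like the following sketch. It assumes the underlying site config property is `vnetPrivatePortsCount`; confirm the property name and required API version against the article's requirements before relying on it.

```azurecli
az resource update \
    --resource-group <resource-group-name> \
    --namespace Microsoft.Web \
    --parent sites/<logic-app-name> \
    --resource-type config \
    --name web \
    --set properties.vnetPrivatePortsCount=2
```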
container-apps | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/policy-reference.md | Title: Built-in policy definitions for Azure Container Apps description: Lists Azure Policy built-in policy definitions for Azure Container Apps. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 01/30/2024 Last updated : 02/06/2024 |
container-instances | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-instances/policy-reference.md | |
container-registry | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/policy-reference.md | Title: Built-in policy definitions for Azure Container Registry description: Lists Azure Policy built-in policy definitions for Azure Container Registry. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 01/30/2024 Last updated : 02/06/2024 |
cosmos-db | Custom Partitioning Analytical Store | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/custom-partitioning-analytical-store.md | You could use one or more partition keys for your analytical data. If you are us * Currently, the partitioned store can only point to the primary storage account associated with the Synapse workspace. Selecting custom storage accounts isn't supported at this point. -* Custom partitioning is only available for API for NoSQL in Azure Cosmos DB. API for MongoDB, Gremlin and Cassandra aren't supported at this time. +* Custom partitioning is only available for API for NoSQL in Azure Cosmos DB. The APIs for MongoDB, Gremlin, and Cassandra are in preview at this time. ## Pricing |
cosmos-db | Merge | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/merge.md | $parameters = @{ Invoke-AzCosmosDBSqlContainerMerge @parameters ``` -For **shared-throughput databases**, use `Invoke-AzCosmosDBSqlDatabaseMerge` with the `-WhatIf` parameter to preview the merge without actually performing the operation. ----```azurepowershell-interactive -$parameters = @{ - ResourceGroupName = "<resource-group-name>" - AccountName = "<cosmos-account-name>" - Name = "<cosmos-database-name>" - WhatIf = $true -} -Invoke-AzCosmosDBSqlDatabaseMerge @parameters -``` --Start the merge by running the same command without the `-WhatIf` parameter. ----```azurepowershell-interactive -$parameters = @{ - ResourceGroupName = "<resource-group-name>" - AccountName = "<cosmos-account-name>" - Name = "<cosmos-database-name>" -} -Invoke-AzCosmosDBSqlDatabaseMerge @parameters --``` - #### [API for NoSQL](#tab/nosql/azure-cli) For **provisioned throughput** containers, start the merge by using [`az cosmosdb sql container merge`](/cli/azure/cosmosdb/sql/container#az-cosmosdb-sql-container-merge). $parameters = @{ Invoke-AzCosmosDBMongoDBCollectionMerge @parameters ``` -For **shared-throughput** databases, use `Invoke-AzCosmosDBMongoDBDatabaseMerge` with the `-WhatIf` parameter to preview the merge without actually performing the operation. ----```azurepowershell-interactive -$parameters = @{ - ResourceGroupName = "<resource-group-name>" - AccountName = "<cosmos-account-name>" - Name = "<cosmos-database-name>" - WhatIf = $true -} -Invoke-AzCosmosDBMongoDBDatabaseMerge @parameters -``` --Start the merge by running the same command without the `-WhatIf` parameter. ----```azurepowershell-interactive -$parameters = @{ - ResourceGroupName = "<resource-group-name>" - AccountName = "<cosmos-account-name>" - Name = "<cosmos-database-name>" -} -Invoke-AzCosmosDBMongoDBDatabaseMerge @parameters -``` - #### [API for MongoDB](#tab/mongodb/azure-cli) For **provisioned containers**, start the merge by using [`az cosmosdb mongodb collection merge`](/cli/azure/cosmosdb/mongodb/collection#az-cosmosdb-mongodb-collection-merge). |
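As an illustrative sketch of the container-level merge command named above (parameter names assumed from the linked CLI reference; confirm with `az cosmosdb sql container merge --help` before running):

```azurecli
az cosmosdb sql container merge \
    --resource-group <resource-group-name> \
    --account-name <cosmos-account-name> \
    --database-name <cosmos-database-name> \
    --name <cosmos-container-name>
```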
cosmos-db | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/policy-reference.md | Title: Built-in policy definitions for Azure Cosmos DB description: Lists Azure Policy built-in policy definitions for Azure Cosmos DB. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 01/30/2024 Last updated : 02/06/2024 |
data-factory | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/policy-reference.md | |
data-factory | Whats New Archive | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/whats-new-archive.md | This archive page retains updates from older months. Check out our [What's New video archive](https://www.youtube.com/playlist?list=PLt4mCx89QIGS1rQlNt2-7iuHHAKSomVLv) for all of our monthly updates. +## May 2023 ++### Data Factory in Microsoft Fabric ++[Data factory in Microsoft Fabric](/fabric/data-factory/) provides cloud-scale data movement and data transformation services that allow you to solve the most complex data factory and ETL scenarios. It's intended to make your data factory experience easy to use, powerful, and truly enterprise-grade. + ## April 2023 ### Data flow |
data-factory | Whats New | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/whats-new.md | This page is updated monthly, so revisit it regularly. For older months' update Check out our [What's New video archive](https://www.youtube.com/playlist?list=PLt4mCx89QIGS1rQlNt2-7iuHHAKSomVLv) for all of our monthly update videos. +## January 2024 ++### Data movement ++- The new Salesforce connector now supports OAuth authentication on Bulk API 2.0 for both source and sink. [Learn more](connector-salesforce.md) +- The new Salesforce Service Cloud connector now supports OAuth authentication on Bulk API 2.0 for both source and sink. [Learn more](connector-salesforce-service-cloud.md) +- The Google Ads connector now supports upgrading to the newer driver version with the native Google Ads Query Language (GAQL). [Learn more](connector-google-adwords.md#upgrade-the-google-ads-driver-version) ++### Region expansion ++Azure Data Factory is now available in Israel Central and Italy North. You can co-locate your ETL workflow in these new regions if you are utilizing them for storing and managing your modern data warehouse. [Learn more](https://techcommunity.microsoft.com/t5/azure-data-factory-blog/continued-region-expansion-azure-data-factory-is-generally/ba-p/4029391) + ## November 2023 ### Continuous integration and continuous deployment General Availability of Time to Live (TTL) for Managed Virtual Network [Learn mo Azure Data Factory is generally available in Poland Central [Learn more](https://techcommunity.microsoft.com/t5/azure-data-factory-blog/continued-region-expansion-azure-data-factory-is-generally/ba-p/3965769) - ## September 2023 ### Pipelines The Amazon S3 connector is now supported as a sink destination using Mapping Dat We introduced optional Source settings for DelimitedText and JSON sources in the top-level CDC resource. The top-level CDC resource in data factory now supports optional source configurations for Delimited and JSON sources. You can now select the column/row delimiters for delimited sources and set the document type for JSON sources. [Learn more](https://techcommunity.microsoft.com/t5/azure-data-factory-blog/introducing-optional-source-settings-for-delimitedtext-and-json/ba-p/3824274) -## May 2023 --### Data Factory in Microsoft Fabric --[Data factory in Microsoft Fabric](/fabric/data-factory/) provides cloud-scale data movement and data transformation services that allow you to solve the most complex data factory and ETL scenarios. It's intended to make your data factory experience easy to use, powerful, and truly enterprise-grade. - ## Related content - [What's new archive](whats-new-archive.md) |
data-lake-analytics | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-analytics/policy-reference.md | Title: Built-in policy definitions for Azure Data Lake Analytics description: Lists Azure Policy built-in policy definitions for Azure Data Lake Analytics. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 01/30/2024 Last updated : 02/06/2024 |
data-lake-store | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-store/policy-reference.md | Title: Built-in policy definitions for Azure Data Lake Storage Gen1 description: Lists Azure Policy built-in policy definitions for Azure Data Lake Storage Gen1. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 01/30/2024 Last updated : 02/06/2024 |
databox-online | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/policy-reference.md | Title: Built-in policy definitions for Azure Stack Edge description: Lists Azure Policy built-in policy definitions for Azure Stack Edge. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 01/30/2024 Last updated : 02/06/2024 |
databox | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox/policy-reference.md | Title: Built-in policy definitions for Azure Data Box description: Lists Azure Policy built-in policy definitions for Azure Data Box. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 01/30/2024 Last updated : 02/06/2024 |
ddos-protection | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ddos-protection/policy-reference.md | |
defender-for-cloud | Agentless Vulnerability Assessment Aws | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/agentless-vulnerability-assessment-aws.md | Vulnerability assessment for AWS, powered by Microsoft Defender Vulnerability Ma > [!NOTE] > This feature supports scanning of images in the ECR only. Images that are stored in other container registries should be imported into ECR for coverage. Learn how to [import container images to a container registry](/azure/container-registry/container-registry-import-images). -In every account where enablement of this capability is completed, all images stored in ECR that meet the following criteria for scan triggers are scanned for vulnerabilities without any extra configuration of users or registries. Recommendations with vulnerability reports are provided for all images in ECR as well as images that are currently running in EKS that were pulled from an ECR registry. Images are scanned shortly after being added to a registry, and rescanned for new vulnerabilities once every 24 hours. +In every account where this capability is enabled, all images stored in ECR that meet the criteria for scan triggers are scanned for vulnerabilities without any extra configuration of users or registries. Recommendations with vulnerability reports are provided for all images in ECR, as well as images that are currently running in EKS that were pulled from an ECR registry or any other Defender for Cloud-supported registry (ACR, GCR, or GAR). Images are scanned shortly after being added to a registry, and rescanned for new vulnerabilities once every 24 hours. Container vulnerability assessment powered by Microsoft Defender Vulnerability Management has the following capabilities: |
defender-for-cloud | Azure Devops Extension | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/azure-devops-extension.md | The Microsoft Security DevOps uses the following Open Source tools: | [Trivy](https://github.com/aquasecurity/trivy) | container images, Infrastructure as Code (IaC) | [Apache License 2.0](https://github.com/aquasecurity/trivy/blob/main/LICENSE) | > [!NOTE]-> Effective September 20, 2023, the secrets scanning (CredScan) tool within the Microsoft Security DevOps (MSDO) Extension for Azure DevOps has been deprecated. MSDO secrets scanning will be replaced with [GitHub Advanced Security for Azure DevOps](https://azure.microsoft.com/products/devops/github-advanced-security). +> Effective September 20, 2023, the secrets scanning (CredScan) tool within the Microsoft Security DevOps (MSDO) Extension for Azure DevOps has been deprecated. MSDO secrets scanning will be replaced with [GitHub Advanced Security for Azure DevOps](https://azure.microsoft.com/products/devops/github-advanced-security). ## Prerequisites |
defender-for-cloud | Concept Agentless Data Collection | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/concept-agentless-data-collection.md | -Microsoft Defender for Cloud improves compute posture for Azure, AWS and GCP environments with machine scanning. For requirements and support, see the [compute support matrix in Defender for Cloud](support-matrix-defender-for-servers.md). +Microsoft Defender for Cloud improves compute posture for Azure, AWS and GCP environments with machine scanning. For requirements and support, see the [compute support matrix in Defender for Cloud](support-matrix-defender-for-servers.md). Agentless scanning for virtual machines (VM) provides: Agentless scanning for virtual machines (VM) provides: - Deep analysis of operating system configuration and other machine metadata. - [Vulnerability assessment](enable-agentless-scanning-vms.md) using Defender Vulnerability Management. - [Secret scanning](secret-scanning.md) to locate plain text secrets in your compute environment.-- Threat detection with [agentless malware scanning](agentless-malware-scanning.md), using [Microsoft Defender Antivirus](/microsoft-365/security/defender-endpoint/microsoft-defender-antivirus-windows?view=o365-worldwide).+- Threat detection with [agentless malware scanning](agentless-malware-scanning.md), using [Microsoft Defender Antivirus](/microsoft-365/security/defender-endpoint/microsoft-defender-antivirus-windows). Agentless scanning helps you identify actionable posture issues without the need for installed agents, network connectivity, or any effect on machine performance. Agentless scanning is available through both the [Defender Cloud Security Posture Management (CSPM)](concept-cloud-security-posture-management.md) plan and [Defender for Servers P2](plan-defender-for-servers-select-plan.md#plan-features) plan. |
defender-for-cloud | Concept Attack Path | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/concept-attack-path.md | Last updated 05/07/2023 > [!VIDEO https://aka.ms/docs/player?id=36a5c440-00e6-4bd8-be1f-a27fbd007119] -One of the biggest challenges that security teams face today is the number of security issues they face on a daily basis. There are numerous security issues that need to be resolved and never enough resources to address them all. +One of the biggest challenges that security teams face today is the number of security issues they face on a daily basis. There are numerous security issues that need to be resolved and never enough resources to address them all. Defender for Cloud's contextual security capabilities assist security teams in assessing the risk behind each security issue and identifying the highest-risk issues that need to be resolved soonest. Defender for Cloud helps security teams reduce the risk of an impactful breach to their environment in the most effective way. All of these capabilities are available as part of the [Defender Cloud Security ## What is cloud security graph? -The cloud security graph is a graph-based context engine that exists within Defender for Cloud. The cloud security graph collects data from your multicloud environment and other data sources. For example, the cloud assets inventory, connections and lateral movement possibilities between resources, exposure to internet, permissions, network connections, vulnerabilities and more. The data collected is then used to build a graph representing your multicloud environment. +The cloud security graph is a graph-based context engine that exists within Defender for Cloud. The cloud security graph collects data from your multicloud environment and other data sources. For example, the cloud assets inventory, connections and lateral movement possibilities between resources, exposure to internet, permissions, network connections, vulnerabilities and more. The data collected is then used to build a graph representing your multicloud environment. Defender for Cloud then uses the generated graph to perform an attack path analysis and find the issues with the highest risk that exist within your environment. You can also query the graph using the cloud security explorer. Defender for Cloud then uses the generated graph to perform an attack path analy ## What is attack path analysis? -Attack path analysis is a graph-based algorithm that scans the cloud security graph. The scans expose exploitable paths that attackers might use to breach your environment to reach your high-impact assets. Attack path analysis exposes attack paths and suggests recommendations on how best to remediate issues that will break the attack path and prevent a successful breach. +Attack path analysis is a graph-based algorithm that scans the cloud security graph. The scans expose exploitable paths that attackers might use to breach your environment to reach your high-impact assets. Attack path analysis exposes attack paths and suggests recommendations on how best to remediate issues that will break the attack path and prevent a successful breach. When you take your environment's contextual information into account, attack path analysis identifies issues that might lead to a breach on your environment, and helps you remediate the highest-risk ones first. Examples of such context include exposure to the internet, permissions, lateral movement, and more.
Learn how to use [attack path analysis](how-to-manage-attack-path.md). ## What is cloud security explorer? -By running graph-based queries on the cloud security graph with the cloud security explorer, you can proactively identify security risks in your multicloud environments. Your security team can use the query builder to search for and locate risks, while taking your organization's specific contextual and conventional information into account. +By running graph-based queries on the cloud security graph with the cloud security explorer, you can proactively identify security risks in your multicloud environments. Your security team can use the query builder to search for and locate risks, while taking your organization's specific contextual and conventional information into account. Cloud security explorer lets you proactively explore your environment. You can search for security risks within your organization by running graph-based path-finding queries on top of the contextual security data that is already provided by Defender for Cloud, such as cloud misconfigurations, vulnerabilities, resource context, lateral movement possibilities between resources and more. |
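Beyond the portal experience, the graph's findings can also be pulled programmatically. The following is a hedged Python sketch, not part of the article above: it assumes the `azure-identity` and `azure-mgmt-resourcegraph` packages, and it assumes Defender CSPM exposes attack paths through a `microsoft.security/attackpaths` Azure Resource Graph table with the projected property names shown; verify both against your tenant before relying on the output.

```python
# Hedged sketch: list Defender CSPM attack paths via Azure Resource Graph.
# Assumptions: azure-identity + azure-mgmt-resourcegraph are installed, and
# the "microsoft.security/attackpaths" table is available in your tenant.
from azure.identity import DefaultAzureCredential
from azure.mgmt.resourcegraph import ResourceGraphClient
from azure.mgmt.resourcegraph.models import QueryRequest

client = ResourceGraphClient(DefaultAzureCredential())
request = QueryRequest(
    subscriptions=["<subscription-id>"],  # placeholder
    query=(
        'securityresources '
        '| where type == "microsoft.security/attackpaths" '
        '| project name, displayName = properties.displayName, '
        'riskLevel = properties.riskLevel'  # property names are assumptions
    ),
)
# The response's .data is a list of row dictionaries by default.
for row in client.resources(request).data:
    print(row)
```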
defender-for-cloud | Concept Cloud Security Posture Management | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/concept-cloud-security-posture-management.md | The following table summarizes each plan and their cloud availability. | EASM insights in network exposure | - | :::image type="icon" source="./media/icons/yes-icon.png"::: | Azure, AWS, GCP | | [Permissions management (Preview)](enable-permissions-management.md) | - | :::image type="icon" source="./media/icons/yes-icon.png"::: | Azure, AWS, GCP | - > [!NOTE] > Starting March 7, 2024, Defender CSPM must be enabled to have premium DevOps security capabilities that include code-to-cloud contextualization powering security explorer and attack paths and pull request annotations for Infrastructure-as-Code security findings. See DevOps security [support and prerequisites](devops-support.md) to learn more. - ## Integrations (preview) Microsoft Defender for Cloud now has built-in integrations to help you use third-party systems to seamlessly manage and track tickets, events, and customer interactions. You can push recommendations to a third-party ticketing tool, and assign responsibility to a team for remediation. |
defender-for-cloud | Concept Data Security Posture Prepare | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/concept-data-security-posture-prepare.md | In order to protect GCP resources in Defender for Cloud, you can set up a Google Defender CSPM attack paths and cloud security graph insights include information about storage resources that are exposed to the internet and allow public access. The following table provides more details. -**State** | **Azure storage accounts** | **AWS S3 Buckets** | **GCP Storage Buckets** | - | | | -**Exposed to the internet** | An Azure storage account is considered exposed to the internet if either of these settings is enabled:<br/><br/> Storage_account_name > **Networking** > **Public network access** > **Enabled from all networks**<br/><br/> or<br/><br/> Storage_account_name > **Networking** > **Public network access** > **Enable from selected virtual networks and IP addresses**. | An AWS S3 bucket is considered exposed to the internet if the AWS account/AWS S3 bucket policies don't have a condition set for IP addresses. | All GCP storage buckets are exposed to the internet by default. | -**Allows public access** | An Azure storage account container is considered as allowing public access if these settings are enabled on the storage account:<br/><br/> Storage_account_name > **Configuration** > **Allow blob public access** > **Enabled**.<br/><br/>and **either** of these settings:<br/><br/> Storage_account_name > **Containers** > container_name > **Public access level** set to **Blob (anonymous read access for blobs only)**<br/><br/> Or, storage_account_name > **Containers** > container_name > **Public access level** set to **Container (anonymous read access for containers and blobs)**. | An AWS S3 bucket is considered to allow public access if both the AWS account and the AWS S3 bucket have **Block all public access** set to **Off**, and **either** of these settings is set:<br/><br/> In the policy, **RestrictPublicBuckets** isn't enabled, and the **Principal** setting is set to * and **Effect** is set to **Allow**.<br/><br/> Or, in the access control list, **IgnorePublicAcl** isn't enabled, and permission is allowed for **Everyone**, or for **Authenticated users**. | A GCP storage bucket is considered to allow public access if it has an IAM (Identity and Access Management) role that meets these criteria: <br/><br/> The role is granted to the principal **allUsers** or **allAuthenticatedUsers**. <br/><br/>The role has at least one storage permission that *isn't* **storage.buckets.create** or **storage.buckets.list**. Public access in GCP is called "Public to internet". +| **State** | **Azure storage accounts** | **AWS S3 Buckets** | **GCP Storage Buckets** | +| | | | | +|**Exposed to the internet** | An Azure storage account is considered exposed to the internet if either of these settings is enabled:<br/><br/> Storage_account_name > **Networking** > **Public network access** > **Enabled from all networks**<br/><br/> or<br/><br/> Storage_account_name > **Networking** > **Public network access** > **Enable from selected virtual networks and IP addresses**. | An AWS S3 bucket is considered exposed to the internet if the AWS account/AWS S3 bucket policies don't have a condition set for IP addresses. | All GCP storage buckets are exposed to the internet by default.
| +|**Allows public access** | An Azure storage account container is considered as allowing public access if these settings are enabled on the storage account:<br/><br/> Storage_account_name > **Configuration** > **Allow blob public access** > **Enabled**.<br/><br/>and **either** of these settings:<br/><br/> Storage_account_name > **Containers** > container_name > **Public access level** set to **Blob (anonymous read access for blobs only)**<br/><br/> Or, storage_account_name > **Containers** > container_name > **Public access level** set to **Container (anonymous read access for containers and blobs)**. | An AWS S3 bucket is considered to allow public access if both the AWS account and the AWS S3 bucket have **Block all public access** set to **Off**, and **either** of these settings is set:<br/><br/> In the policy, **RestrictPublicBuckets** isn't enabled, and the **Principal** setting is set to * and **Effect** is set to **Allow**.<br/><br/> Or, in the access control list, **IgnorePublicAcl** isn't enabled, and permission is allowed for **Everyone**, or for **Authenticated users**. | A GCP storage bucket is considered to allow public access if it has an IAM (Identity and Access Management) role that meets these criteria: <br/><br/> The role is granted to the principal **allUsers** or **allAuthenticatedUsers**. <br/><br/>The role has at least one storage permission that *isn't* **storage.buckets.create** or **storage.buckets.list**. Public access in GCP is called "Public to internet".| Database resources don't allow public access but can still be exposed to the internet. |
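To see how the Azure criteria in this table translate into code, here's a hedged sketch (not from the article) that flags storage accounts in a subscription. It assumes the `azure-identity` and `azure-mgmt-storage` packages; the `public_network_access` and `allow_blob_public_access` properties require a recent SDK/API version, and container-level access levels would still need a per-container check.

```python
# Hedged sketch: flag storage accounts that match the table's Azure criteria.
from azure.identity import DefaultAzureCredential
from azure.mgmt.storage import StorageManagementClient

SUBSCRIPTION_ID = "<subscription-id>"  # placeholder

client = StorageManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)
for account in client.storage_accounts.list():
    # "Exposed to the internet": public network access isn't disabled.
    exposed = (account.public_network_access or "Enabled") != "Disabled"
    # "Allows public access" (account level): anonymous blob access is allowed.
    allows_public = bool(account.allow_blob_public_access)
    if exposed or allows_public:
        print(f"{account.name}: exposed={exposed}, allows_public={allows_public}")
```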
defender-for-cloud | Concept Defender For Cosmos | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/concept-defender-for-cosmos.md | Last updated 11/27/2022 Microsoft Defender for Azure Cosmos DB detects potential SQL injections, known bad actors based on Microsoft Threat Intelligence, suspicious access patterns, and potential exploitation of your database through compromised identities or malicious insiders. -Defender for Azure Cosmos DB uses advanced threat detection capabilities, and [Microsoft Threat Intelligence](https://www.microsoft.com/insidetrack/microsoft-uses-threat-intelligence-to-protect-detect-and-respond-to-threats) data to provide contextual security alerts. Those alerts also include steps to mitigate the detected threats and prevent future attacks. +Defender for Azure Cosmos DB uses advanced threat detection capabilities, and [Microsoft Threat Intelligence](https://www.microsoft.com/insidetrack/microsoft-uses-threat-intelligence-to-protect-detect-and-respond-to-threats) data to provide contextual security alerts. Those alerts also include steps to mitigate the detected threats and prevent future attacks. -You can [enable protection for all your databases](quickstart-enable-database-protections.md) (recommended), or [enable Microsoft Defender for Azure Cosmos DB](quickstart-enable-database-protections.md) at either the subscription level, or the resource level. +You can [enable protection for all your databases](quickstart-enable-database-protections.md) (recommended), or [enable Microsoft Defender for Azure Cosmos DB](quickstart-enable-database-protections.md) at either the subscription level, or the resource level. -Defender for Azure Cosmos DB continually analyzes the telemetry stream generated by the Azure Cosmos DB service. When potentially malicious activities are detected, security alerts are generated. These alerts are displayed in Defender for Cloud together with the details of the suspicious activity along with the relevant investigation steps, remediation actions, and security recommendations. +Defender for Azure Cosmos DB continually analyzes the telemetry stream generated by the Azure Cosmos DB service. When potentially malicious activities are detected, security alerts are generated. These alerts are displayed in Defender for Cloud together with the details of the suspicious activity along with the relevant investigation steps, remediation actions, and security recommendations. -Defender for Azure Cosmos DB doesn't access the Azure Cosmos DB account data, and doesn't have any effect on its performance. +Defender for Azure Cosmos DB doesn't access the Azure Cosmos DB account data, and doesn't have any effect on its performance.
## Availability |Aspect|Details| |-|:-|-|Release state:| General Availability (GA) | +|Release state:| General Availability (GA) | | Protected Azure Cosmos DB API | :::image type="icon" source="./media/icons/yes-icon.png"::: Azure Cosmos DB for NoSQL <br> :::image type="icon" source="./media/icons/no-icon.png"::: Azure Cosmos DB for Apache Cassandra <br> :::image type="icon" source="./media/icons/no-icon.png"::: Azure Cosmos DB for MongoDB <br> :::image type="icon" source="./media/icons/no-icon.png"::: Azure Cosmos DB for Table <br> :::image type="icon" source="./media/icons/no-icon.png"::: Azure Cosmos DB for Apache Gremlin | |Clouds:|:::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds<br>:::image type="icon" source="./media/icons/no-icon.png"::: Azure Government <br>:::image type="icon" source="./media/icons/no-icon.png"::: Microsoft Azure operated by 21Vianet | ## What are the benefits of Microsoft Defender for Azure Cosmos DB -Microsoft Defender for Azure Cosmos DB uses advanced threat detection capabilities and Microsoft Threat Intelligence data. Defender for Azure Cosmos DB continuously monitors your Azure Cosmos DB accounts for threats such as SQL injection, compromised identities and data exfiltration. +Microsoft Defender for Azure Cosmos DB uses advanced threat detection capabilities and Microsoft Threat Intelligence data. Defender for Azure Cosmos DB continuously monitors your Azure Cosmos DB accounts for threats such as SQL injection, compromised identities and data exfiltration. -This service provides action-oriented security alerts in Microsoft Defender for Cloud with details of the suspicious activity and guidance on how to mitigate the threats. -You can use this information to quickly remediate security issues and improve the security of your Azure Cosmos DB accounts. +This service provides action-oriented security alerts in Microsoft Defender for Cloud with details of the suspicious activity and guidance on how to mitigate the threats. +You can use this information to quickly remediate security issues and improve the security of your Azure Cosmos DB accounts. -Alerts include details of the incident that triggered them, and recommendations on how to investigate and remediate threats. Alerts can be exported to Microsoft Sentinel or any other third-party SIEM or any other external tool. To learn how to stream alerts, see [Stream alerts to a SIEM, SOAR, or IT classic deployment model solution](export-to-siem.md). +Alerts include details of the incident that triggered them, and recommendations on how to investigate and remediate threats. Alerts can be exported to Microsoft Sentinel or any other third-party SIEM or any other external tool. To learn how to stream alerts, see [Stream alerts to a SIEM, SOAR, or IT classic deployment model solution](export-to-siem.md). > [!TIP] > For a comprehensive list of all Defender for Azure Cosmos DB alerts, see the [alerts reference page](alerts-reference.md#alerts-azurecosmos). This is useful for workload owners who want to know what threats can be detected and help SOC teams gain familiarity with detections before investigating them. Learn more about what's in a Defender for Cloud security alert, and how to manage your alerts in [Manage and respond to security alerts in Microsoft Defender for Cloud](managing-and-responding-alerts.md).
## Alert types -Threat intelligence security alerts are triggered for: +Threat intelligence security alerts are triggered for: - **Potential SQL injection attacks**: <br>- Due to the structure and capabilities of Azure Cosmos DB queries, many known SQL injection attacks can't work in Azure Cosmos DB. However, there are some variations of SQL injections that can succeed and might result in exfiltrating data from your Azure Cosmos DB accounts. Defender for Azure Cosmos DB detects both successful and failed attempts, and helps you harden your environment to prevent these threats. - + Due to the structure and capabilities of Azure Cosmos DB queries, many known SQL injection attacks can't work in Azure Cosmos DB. However, there are some variations of SQL injections that can succeed and might result in exfiltrating data from your Azure Cosmos DB accounts. Defender for Azure Cosmos DB detects both successful and failed attempts, and helps you harden your environment to prevent these threats. + - **Anomalous database access patterns**: <br>- For example, access from a TOR exit node, known suspicious IP addresses, unusual applications, and unusual locations. - + For example, access from a TOR exit node, known suspicious IP addresses, unusual applications, and unusual locations. + - **Suspicious database activity**: <br>- For example, suspicious key-listing patterns that resemble known malicious lateral movement techniques and suspicious data extraction patterns. + For example, suspicious key-listing patterns that resemble known malicious lateral movement techniques and suspicious data extraction patterns. ## Next steps -In this article, you learned about Microsoft Defender for Azure Cosmos DB. +In this article, you learned about Microsoft Defender for Azure Cosmos DB. > [!div class="nextstepaction"] > [Enable Microsoft Defender for Azure Cosmos DB](quickstart-enable-database-protections.md) |
defender-for-cloud | Concept Integration 365 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/concept-integration-365.md | Last updated 01/03/2024 # Alerts and incidents in Microsoft Defender XDR -Microsoft Defender for Cloud is now integrated with Microsoft Defender XDR. This integration allows security teams to access Defender for Cloud alerts and incidents within the Microsoft Defender Portal. This integration provides richer context to investigations that span cloud resources, devices, and identities. +Microsoft Defender for Cloud is now integrated with Microsoft Defender XDR. This integration allows security teams to access Defender for Cloud alerts and incidents within the Microsoft Defender Portal. This integration provides richer context to investigations that span cloud resources, devices, and identities. -The partnership with Microsoft Defender XDR allows security teams to get the complete picture of an attack, including suspicious and malicious events that happen in their cloud environment. Security teams can accomplish this goal through immediate correlations of alerts and incidents. +The partnership with Microsoft Defender XDR allows security teams to get the complete picture of an attack, including suspicious and malicious events that happen in their cloud environment. Security teams can accomplish this goal through immediate correlations of alerts and incidents. Microsoft Defender XDR offers a comprehensive solution that combines protection, detection, investigation, and response capabilities. The solution protects against attacks on devices, email, collaboration, identity, and cloud apps. Our detection and investigation capabilities are now extended to cloud entities, offering security operations teams a single pane of glass to significantly improve their operational efficiency. -Incidents and alerts are now part of [Microsoft Defender XDR's public API](/microsoft-365/security/defender/api-overview?view=o365-worldwide). This integration allows exporting of security alerts data to any system using a single API. As Microsoft Defender for Cloud, we're committed to providing our users with the best possible security solutions, and this integration is a significant step towards achieving that goal. +Incidents and alerts are now part of [Microsoft Defender XDR's public API](/microsoft-365/security/defender/api-overview). This integration allows exporting of security alerts data to any system using a single API. As Microsoft Defender for Cloud, we're committed to providing our users with the best possible security solutions, and this integration is a significant step towards achieving that goal. -## Investigation experience in Microsoft Defender XDR +## Investigation experience in Microsoft Defender XDR The following table describes the detection and investigation experience in Microsoft Defender XDR with Defender for Cloud alerts. | Area | Description | |--|--|-| Incidents | All Defender for Cloud incidents are integrated to Microsoft Defender XDR. <br> - Searching for cloud resource assets in the [incident queue](/microsoft-365/security/defender/incident-queue?view=o365-worldwide) is supported. <br> - The [attack story](/microsoft-365/security/defender/investigate-incidents?view=o365-worldwide#attack-story) graph shows cloud resource. <br> - The [assets tab](/microsoft-365/security/defender/investigate-incidents?view=o365-worldwide#assets) in an incident page shows the cloud resource.
<br> - Each virtual machine has its own entity page containing all related alerts and activity. <br> <br> There are no duplications of incidents from other Defender workloads. | -| Alerts | All Defender for Cloud alerts, including multicloud, internal and external providers' alerts, are integrated to Microsoft Defender XDR. Defenders for Cloud alerts show on the Microsoft Defender XDR [alert queue](/microsoft-365/security/defender-endpoint/alerts-queue-endpoint-detection-response?view=o365-worldwide). <br>Microsoft Defender XDR<br> The `cloud resource` asset shows up in the Asset tab of an alert. Resources are clearly identified as an Azure, Amazon, or a Google Cloud resource. <br> <br> Defenders for Cloud alerts are automatically be associated with a tenant. <br> <br> There are no duplications of alerts from other Defender workloads.| +| Incidents | All Defender for Cloud incidents are integrated to Microsoft Defender XDR. <br> - Searching for cloud resource assets in the [incident queue](/microsoft-365/security/defender/incident-queue) is supported. <br> - The [attack story](/microsoft-365/security/defender/investigate-incidents#attack-story) graph shows the cloud resource. <br> - The [assets tab](/microsoft-365/security/defender/investigate-incidents#assets) in an incident page shows the cloud resource. <br> - Each virtual machine has its own entity page containing all related alerts and activity. <br> <br> There are no duplications of incidents from other Defender workloads. | +| Alerts | All Defender for Cloud alerts, including multicloud, internal and external providers' alerts, are integrated to Microsoft Defender XDR. Defender for Cloud alerts show on the Microsoft Defender XDR [alert queue](/microsoft-365/security/defender-endpoint/alerts-queue-endpoint-detection-response). <br>Microsoft Defender XDR<br> The `cloud resource` asset shows up in the Asset tab of an alert. Resources are clearly identified as an Azure, Amazon, or a Google Cloud resource. <br> <br> Defender for Cloud alerts are automatically associated with a tenant. <br> <br> There are no duplications of alerts from other Defender workloads.| | Alert and incident correlation | Alerts and incidents are automatically correlated, providing robust context to security operations teams to understand the complete attack story in their cloud environment. | | Threat detection | Accurate matching of virtual entities to device entities to ensure precision and effective threat detection. |-| Unified API | Defender for Cloud alerts and incidents are now included in [Microsoft Defender XDR's public API](/microsoft-365/security/defender/api-overview?view=o365-worldwide), allowing customers to export their security alerts data into other systems using one API. | +| Unified API | Defender for Cloud alerts and incidents are now included in [Microsoft Defender XDR's public API](/microsoft-365/security/defender/api-overview), allowing customers to export their security alerts data into other systems using one API. | -Learn more about [handling alerts in Microsoft Defender XDR](/microsoft-365/security/defender/microsoft-365-security-center-defender-cloud?view=o365-worldwide). +Learn more about [handling alerts in Microsoft Defender XDR](/microsoft-365/security/defender/microsoft-365-security-center-defender-cloud).
## Sentinel customers Microsoft Sentinel customers can [benefit from the Defender for Cloud integratio First you need to [enable incident integration in your Microsoft 365 Defender connector](../sentinel/connect-microsoft-365-defender.md). -Then, enable the `Tenant-based Microsoft Defender for Cloud (Preview)` connector to synchronize your subscriptions with your tenant-based Defender for Cloud incidents to stream through the Microsoft 365 Defender incidents connector. +Then, enable the `Tenant-based Microsoft Defender for Cloud (Preview)` connector to synchronize your subscriptions with your tenant-based Defender for Cloud incidents to stream through the Microsoft 365 Defender incidents connector. -The connector is available through the Microsoft Defender for Cloud solution, version 3.0.0, in the Content Hub. If you have an earlier version of this solution, you can upgrade it in the Content Hub. +The connector is available through the Microsoft Defender for Cloud solution, version 3.0.0, in the Content Hub. If you have an earlier version of this solution, you can upgrade it in the Content Hub. If you have the legacy subscription-based Microsoft Defender for Cloud alerts connector enabled (which is displayed as `Subscription-based Microsoft Defender for Cloud (Legacy)`), we recommend you disconnect the connector in order to prevent duplicating alerts in your logs. We recommend you disable analytic rules that are enabled (either scheduled or through Microsoft creation rules) from creating incidents from your Defender for Cloud alerts. -You can use automation rules to close incidents immediately and prevent specific types of Defender for Cloud alerts from becoming incidents. You can also use the built-in tuning capabilities in the Microsoft 365 Defender portal to prevent alerts from becoming incidents. +You can use automation rules to close incidents immediately and prevent specific types of Defender for Cloud alerts from becoming incidents. You can also use the built-in tuning capabilities in the Microsoft 365 Defender portal to prevent alerts from becoming incidents. -Customers who integrated their Microsoft 365 Defender incidents into Sentinel and want to keep their subscription-based settings and avoid tenant-based syncing can [opt out of syncing incidents and alerts](/microsoft-365/security/defender/microsoft-365-security-center-defender-cloud?view=o365-worldwide) through the Microsoft 365 Defender connector. +Customers who integrated their Microsoft 365 Defender incidents into Sentinel and want to keep their subscription-based settings and avoid tenant-based syncing can [opt out of syncing incidents and alerts](/microsoft-365/security/defender/microsoft-365-security-center-defender-cloud) through the Microsoft 365 Defender connector. Learn how [Defender for Cloud and Microsoft 365 Defender handle your data's privacy](data-security.md#defender-for-cloud-and-microsoft-defender-365-defender-integration). |
defender-for-cloud | Configure Servers Coverage | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/configure-servers-coverage.md | Last updated 02/05/2024 # Configure Defender for Servers features -Microsoft Defender for Cloud's Defender for Servers plans contain components that monitor your environments to provide extended coverage on your servers. Each of these components can be enabled, disabled or configured to meet your specific requirements. +Microsoft Defender for Cloud's Defender for Servers plans contain components that monitor your environments to provide extended coverage on your servers. Each of these components can be enabled, disabled or configured to meet your specific requirements. | Component | Availability | Description | Learn more | |--|--|--|--| Vulnerability assessment for machines allows you to select between two vulnerabi ## Configure endpoint protection -With Microsoft Defender for Servers, you enable the protections provided by [Microsoft Defender for Endpoint](/microsoft-365/security/defender-endpoint/microsoft-defender-endpoint?view=o365-worldwide) to your server resources. Defender for Endpoint includes automatic agent deployment to your servers, and security data integration with Defender for Cloud. +With Microsoft Defender for Servers, you enable the protections provided by [Microsoft Defender for Endpoint](/microsoft-365/security/defender-endpoint/microsoft-defender-endpoint) to your server resources. Defender for Endpoint includes automatic agent deployment to your servers, and security data integration with Defender for Cloud. To configure endpoint protection: You can also check the coverage for all of your subscriptions and resources ## Disable Defender for Servers plan or features -To disable the Defender for Servers plan or any of the features of the plan, navigate to the Environment settings page of the relevant subscription or workspace and toggle the relevant switch to **Off**. +To disable the Defender for Servers plan or any of the features of the plan, navigate to the Environment settings page of the relevant subscription or workspace and toggle the relevant switch to **Off**. > [!NOTE] > When you disable the Defender for Servers plan on a subscription, it doesn't disable it on a workspace. To disable the plan on a workspace, you must navigate to the plans page for the workspace and toggle the switch to **Off**. |
defender-for-cloud | Connect Azure Subscription | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/connect-azure-subscription.md | Microsoft Defender for Cloud is a cloud-native application protection platform ( - A cloud security posture management (CSPM) solution that surfaces actions that you can take to prevent breaches - A cloud workload protection platform (CWPP) with specific protections for servers, containers, storage, databases, and other workloads -Defender for Cloud includes Foundational CSPM capabilities and access to [Microsoft Defender XDR](/microsoft-365/security/defender/microsoft-365-defender?view=o365-worldwide) for free. You can add additional paid plans to secure all aspects of your cloud resources. To learn more about these plans and their costs, see the Defender for Cloud [pricing page](https://azure.microsoft.com/pricing/details/defender-for-cloud/). +Defender for Cloud includes Foundational CSPM capabilities and access to [Microsoft Defender XDR](/microsoft-365/security/defender/microsoft-365-defender) for free. You can add additional paid plans to secure all aspects of your cloud resources. To learn more about these plans and their costs, see the Defender for Cloud [pricing page](https://azure.microsoft.com/pricing/details/defender-for-cloud/). Defender for Cloud helps you find and fix security vulnerabilities. Defender for Cloud also applies access and application controls to block malicious activity, detect threats using analytics and intelligence, and respond quickly when under attack. If you want to disable any of the plans, toggle the individual plan to **off**. When you enable Defender for Cloud, Defender for Cloud's alerts are automatically integrated into the Microsoft Defender Portal. No further steps are needed. -The integration between Microsoft Defender for Cloud and Microsoft Defender XDR brings your cloud environments into Microsoft Defender XDR. With Defender for Cloud's alerts and cloud correlations integrated into Microsoft Defender XDR, SOC teams can now access all security information from a single interface. +The integration between Microsoft Defender for Cloud and Microsoft Defender XDR brings your cloud environments into Microsoft Defender XDR. With Defender for Cloud's alerts and cloud correlations integrated into Microsoft Defender XDR, SOC teams can now access all security information from a single interface. Learn more about Defender for Cloud's [alerts in Microsoft Defender XDR](concept-integration-365.md). |
defender-for-cloud | Defender For Containers Enable | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-containers-enable.md | A full list of supported alerts is available in the [reference table of all Defe The expected response is `No resource found`. Within 30 minutes, Defender for Cloud detects this activity and triggers a security alert.+ > [!NOTE] + > To simulate agentless alerts for Defender for Containers, Azure Arc isn't a prerequisite. 1. In the Azure portal, open Microsoft Defender for Cloud's security alerts page and look for the alert on the relevant resource: |
defender-for-cloud | Export To Siem | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/export-to-siem.md | There are built-in Azure tools that are available that ensure you can view your ## Stream alerts to Defender XDR with the Defender XDR API -Defender for Cloud natively integrates with [Microsoft Defender XDR](/microsoft-365/security/defender/microsoft-365-defender?view=o365-worldwide) allows you to use Defender XDR's incidents and alerts API to stream alerts and incidents into non-Microsoft solutions. Defender for Cloud customers can access one API for all Microsoft security products and can use this integration as an easier way to export alerts and incidents. +Defender for Cloud natively integrates with [Microsoft Defender XDR](/microsoft-365/security/defender/microsoft-365-defender), which allows you to use Defender XDR's incidents and alerts API to stream alerts and incidents into non-Microsoft solutions. Defender for Cloud customers can access one API for all Microsoft security products and can use this integration as an easier way to export alerts and incidents. -Learn how to [integrate SIEM tools with Defender XDR](/microsoft-365/security/defender/configure-siem-defender?view=o365-worldwide). +Learn how to [integrate SIEM tools with Defender XDR](/microsoft-365/security/defender/configure-siem-defender). ## Stream alerts to Microsoft Sentinel Defender for Cloud natively integrates with [Microsoft Sentinel](../sentinel/ove ### Microsoft Sentinel's connectors for Defender for Cloud -Microsoft Sentinel includes built-in connectors for Microsoft Defender for Cloud at the subscription and tenant levels. +Microsoft Sentinel includes built-in connectors for Microsoft Defender for Cloud at the subscription and tenant levels. You can: Before you set up the Azure services for exporting alerts, make sure you have: - if it **has the SecurityCenterFree solution**, you'll need a minimum of read permissions for the workspace solution: `Microsoft.OperationsManagement/solutions/read` - if it **doesn't have the SecurityCenterFree solution**, you'll need write permissions for the workspace solution: `Microsoft.OperationsManagement/solutions/action` --> -### Set up the Azure services +### Set up the Azure services You can set up your Azure environment to support continuous export using either: You can set up your Azure environment to support continuous export using either: 1. Download and run [the PowerShell script](https://github.com/Azure/Microsoft-Defender-for-Cloud/tree/main/Powershell%20scripts/3rd%20party%20SIEM%20integration). -1. Enter the required parameters. - +1. Enter the required parameters. + 1. Execute the script. The script performs all of the steps for you. When the script finishes, use the output to install the solution in the SIEM platform. The script performs all of the steps for you. When the script finishes, use the 1. Define a policy for the event hub with `Send` permissions. -**If you're streaming alerts to QRadar** +**If you're streaming alerts to QRadar**: 1. Create an event hub `Listen` policy. 1. Copy and save the connection string of the policy to use in QRadar. -1. Create a consumer group. +1. Create a consumer group. 1. Copy and save the name to use in the SIEM platform.
To stream alerts into **ArcSight**, **SumoLogic**, **Syslog servers**, **LogRhyt |:|:| :| | SumoLogic | No | Instructions for setting up SumoLogic to consume data from an event hub are available at [Collect Logs for the Azure Audit App from Event Hubs](https://help.sumologic.com/docs/send-data/collect-from-other-data-sources/azure-monitoring/collect-logs-azure-monitor/). | | ArcSight | No | The ArcSight Azure Event Hubs smart connector is available as part of [the ArcSight smart connector collection](https://community.microfocus.com/cyberres/arcsight/f/arcsight-product-announcements/163662/announcing-general-availability-of-arcsight-smart-connectors-7-10-0-8114-0). |- | Syslog server | No | If you want to stream Azure Monitor data directly to a syslog server, you can use a [solution based on an Azure function](https://github.com/miguelangelopereira/azuremonitor2syslog/). - | LogRhythm | No| Instructions to set up LogRhythm to collect logs from an event hub are available [here](https://logrhythm.com/six-tips-for-securing-your-azure-cloud-environment/). - |Logz.io | Yes | For more information, see [Getting started with monitoring and logging using Logz.io for Java apps running on Azure](/azure/developer/java/fundamentals/java-get-started-with-logzio) + | Syslog server | No | If you want to stream Azure Monitor data directly to a syslog server, you can use a [solution based on an Azure function](https://github.com/miguelangelopereira/azuremonitor2syslog/).| + | LogRhythm | No| Instructions to set up LogRhythm to collect logs from an event hub are available [here](https://logrhythm.com/six-tips-for-securing-your-azure-cloud-environment/).| + |Logz.io | Yes | For more information, see [Getting started with monitoring and logging using Logz.io for Java apps running on Azure](/azure/developer/java/fundamentals/java-get-started-with-logzio)| 1. (Optional) Stream the raw logs to the event hub and connect to your preferred solution. Learn more in [Monitoring data available](../azure-monitor/essentials/stream-monitoring-data-event-hubs.md#monitoring-data-available). |
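Once alerts are flowing to the event hub, any consumer that can read from Event Hubs can forward them to a SIEM. The following is a hedged Python sketch using the `azure-eventhub` package; the connection string, consumer group, and event hub name are placeholders for the values saved in the steps above.

```python
# Hedged sketch: read exported Defender for Cloud alerts from an event hub.
from azure.eventhub import EventHubConsumerClient

CONNECTION_STR = "<listen-policy-connection-string>"  # placeholder
CONSUMER_GROUP = "<consumer-group-name>"              # placeholder
EVENTHUB_NAME = "<event-hub-name>"                    # placeholder

def on_event(partition_context, event):
    # Each event body carries exported alert JSON; forward it to your SIEM here.
    print(event.body_as_str())
    partition_context.update_checkpoint(event)

client = EventHubConsumerClient.from_connection_string(
    CONNECTION_STR, consumer_group=CONSUMER_GROUP, eventhub_name=EVENTHUB_NAME
)
with client:
    # "-1" starts from the earliest available event on each partition.
    client.receive(on_event=on_event, starting_position="-1")
```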
defender-for-cloud | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/policy-reference.md | Title: Built-in policy definitions description: Lists Azure Policy built-in policy definitions for Microsoft Defender for Cloud. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 01/30/2024 Last updated : 02/06/2024 |
defender-for-cloud | Tutorial Enable Servers Plan | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/tutorial-enable-servers-plan.md | By enabling Defender for Servers on a Log Analytics workspace, you aren't enabli ## Enable Defender for Servers at the resource level -To protect all of your existing and future resources, we recommend you enable Defender for Servers on your entire Azure subscription. +To protect all of your existing and future resources, we recommend you [enable Defender for Servers on your entire Azure subscription](#enable-on-an-azure-subscription-aws-account-or-gcp-project). -You can exclude specific resources or manage security configurations at a lower hierarchy level by enabling the Defender for Servers plan at the resource level with REST API or at scale. +You can exclude specific resources or manage security configurations at a lower hierarchy level by enabling the Defender for Servers plan at the resource level. You can enable the plan on the resource level with REST API or at scale. The supported resource types include: After enabling the plan, you have the ability to [configure the features of the ## Next steps [Configure Defender for Servers features](configure-servers-coverage.md).-[Overview of Microsoft Defender for Servers](defender-for-servers-introduction.md) ++[Overview of Microsoft Defender for Servers](defender-for-servers-introduction.md). |
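For the resource-level enablement mentioned above, the call shape is roughly the following. This is a hedged sketch, not the article's procedure: the resource-scope `Microsoft.Security/pricings` endpoint, the `2024-01-01` api-version, and the `P1` sub-plan value are assumptions to verify against the current REST reference, and the resource ID is a placeholder.

```python
# Hedged sketch: enable Defender for Servers Plan 1 on a single VM via REST.
import requests
from azure.identity import DefaultAzureCredential

resource_id = (
    "/subscriptions/<sub-id>/resourceGroups/<rg>"
    "/providers/Microsoft.Compute/virtualMachines/<vm-name>"
)  # placeholder
token = DefaultAzureCredential().get_token(
    "https://management.azure.com/.default"
).token

url = (
    f"https://management.azure.com{resource_id}"
    "/providers/Microsoft.Security/pricings/VirtualMachines"
    "?api-version=2024-01-01"  # assumed api-version
)
body = {"properties": {"pricingTier": "Standard", "subPlan": "P1"}}

resp = requests.put(url, json=body, headers={"Authorization": f"Bearer {token}"})
resp.raise_for_status()
print(resp.json())
```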
deployment-environments | Overview What Is Azure Deployment Environments | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/deployment-environments/overview-what-is-azure-deployment-environments.md | Title: What is Azure Deployment Environments? -description: Enable developer teams to spin up app infrastructure with project-based templates, minimize setup time & maximize security, compliance, and cost efficiency. +description: Enable developer teams to spin up infrastructure for deploying apps with project-based templates, while adding governance for Azure resource types, security, and cost. |
deployment-environments | Quickstart Create Access Environments | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/deployment-environments/quickstart-create-access-environments.md | Title: Create and access an environment in the developer portal + Title: Create a deployment environment -description: Learn how to create and access an environment in an Azure Deployment Environments project through the developer portal. +description: Learn how to create and access an environment in Azure Deployment Environments through the developer portal. An environment has all Azure resources preconfigured for deploying your application. Last updated 12/01/2023 # Quickstart: Create and access an environment in Azure Deployment Environments -This quickstart shows you how to create and access an [environment](concept-environments-key-concepts.md#environments) in an existing Azure Deployment Environments project. +This quickstart shows you how to create and access an [environment](concept-environments-key-concepts.md#environments) in Azure Deployment Environments by using the developer portal. ++As a developer, you can create environments associated with a [project](concept-environments-key-concepts.md#projects) in Azure Deployment Environments. An environment has all Azure resources preconfigured for deploying your application. ## Prerequisites |
deployment-environments | Quickstart Create And Configure Devcenter | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/deployment-environments/quickstart-create-and-configure-devcenter.md | Title: Create and configure a dev center for Azure Deployment Environments + Title: Set up a dev center for Azure Deployment Environments -description: Learn how to configure a dev center, attach an identity, and attach a catalog in Azure Deployment Environments. +description: Learn how to set up the resources to get started with Azure Deployment Environments. Configure a dev center, attach an identity, and attach a catalog for using IaC templates. Last updated 12/01/2023 In this quickstart, you set up all the resources in Azure Deployment Environments to enable self-service deployment environments for development teams. Learn how to create and configure a dev center, add a catalog to the dev center, and define an environment type. -A platform engineering team typically sets up a dev center, attaches external catalogs to the dev center, creates projects, and provides access to development teams. Development teams then create [environments](concept-environments-key-concepts.md#environments) by using [environment definitions](concept-environments-key-concepts.md#environment-definitions), connect to individual resources, and deploy applications. To learn more about the components of Azure Deployment Environments, see [Key concepts for Azure Deployment Environments](concept-environments-key-concepts.md). +A dev center is the top-level resource for getting started with Azure Deployment Environments; it contains the collection of development projects. In the dev center, you specify the common configuration for your projects, such as catalogs with application templates, and the types of environments development teams can deploy to. ++A platform engineering team typically sets up the dev center, attaches external catalogs to the dev center, creates projects, and provides access to development teams. Development teams then create [environments](concept-environments-key-concepts.md#environments) by using [environment definitions](concept-environments-key-concepts.md#environment-definitions), connect to individual resources, and deploy applications. To learn more about the components of Azure Deployment Environments, see [Key concepts for Azure Deployment Environments](concept-environments-key-concepts.md). The following diagram shows the steps to configure a dev center for Azure Deployment Environments in the Azure portal. |
deployment-environments | Quickstart Create And Configure Projects | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/deployment-environments/quickstart-create-and-configure-projects.md | Title: Create and configure an Azure Deployment Environments project + Title: Create a project in Azure Deployment Environments -description: Learn how to create a project in Azure Deployment Environments and associate the project with a dev center. +description: Learn how to create a project for a dev center in Azure Deployment Environments. In a project, you can define environment types and environments that are specific to a software development project. -# Quickstart: Create and configure an Azure Deployment Environments project +# Quickstart: Create and configure a project in Azure Deployment Environments -This quickstart shows you how to create a project in Azure Deployment Environments, then associate the project with the dev center you created in [Quickstart: Create and configure a dev center](./quickstart-create-and-configure-devcenter.md). After you complete this quickstart, developers can use the developer portal to create environments to deploy their applications. +This quickstart shows you how to create a project in Azure Deployment Environments, then associate the project with the dev center you created in [Quickstart: Create and configure a dev center](./quickstart-create-and-configure-devcenter.md). After you complete this quickstart, developers can use the developer portal to create environments in the project to deploy their applications. ++A project contains the specific configuration for environment types and environment definitions related to a development project. For example, you might create a project for the implementation of an ecommerce application, which has a development, staging, and production environment. For another project, you might define a different configuration. The following diagram shows the steps to configure a project associated with a dev center for Deployment Environments in the Azure portal. |
dev-box | Tutorial Connect To Dev Box With Remote Desktop App | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/tutorial-connect-to-dev-box-with-remote-desktop-app.md | Title: 'Tutorial: Use a Remote Desktop client to connect to a dev box' + Title: 'Tutorial: Access a dev box with a remote desktop client' -description: In this tutorial, you download and use a remote desktop client to connect to a dev box in Microsoft Dev Box. +description: In this tutorial, you learn how to connect to and access your dev box in Microsoft Dev Box by using a remote desktop (RDP) client app. -In this tutorial, you download and use a remote desktop client application to connect to a dev box. +In this tutorial, you download and use a remote desktop (RDP) client application to connect to and access a dev box. Remote desktop apps let you use and control a dev box from almost any device. For your desktop or laptop, you can choose to download the Remote Desktop client for Windows Desktop or Microsoft Remote Desktop for Mac. You can also download a remote desktop app for your mobile device: Microsoft Remote Desktop for iOS or Microsoft Remote Desktop for Android. > [!TIP] > Many remote desktop apps allow you to [use multiple monitors](tutorial-configure-multiple-monitors.md) when you connect to your dev box. -Alternately, you can connect to your dev box through the browser from the Microsoft Dev Box developer portal. +Alternatively, you can access your dev box through the browser from the Microsoft Dev Box developer portal. In this tutorial, you learn how to: To complete this tutorial, you must have access to a dev box through the develop ## Download the remote desktop client and connect to your dev box -You can use a remote desktop client application to connect to your dev box in Microsoft Dev Box. Remote desktop clients are available for many operating systems and devices. +You can use a remote desktop client application to access your dev box in Microsoft Dev Box. Remote desktop clients are available for many operating systems and devices. Select the relevant tab to view the steps to download and use the Remote Desktop client application from Windows or non-Windows operating systems. |
digital-twins | Concepts Models | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/concepts-models.md | Models for Azure Digital Twins are defined using the Digital Twins Definition La You can view the full language description for DTDL v3 in GitHub: [DTDL Version 3 Language Description](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v3/DTDL.v3.md). This page includes DTDL reference details and examples to help you get started writing your own DTDL models. -DTDL is based on JSON-LD and is programming-language independent. DTDL isn't exclusive to Azure Digital Twins. It is also used to represent device data in other IoT services such as [IoT Plug and Play](../iot-develop/overview-iot-plug-and-play.md). +DTDL is based on JSON-LD and is programming-language independent. DTDL isn't exclusive to Azure Digital Twins. It is also used to represent device data in other IoT services such as [IoT Plug and Play](../iot/overview-iot-plug-and-play.md). The rest of this article summarizes how the language is used in Azure Digital Twins. |
digital-twins | How To Parse Models | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/how-to-parse-models.md | The capabilities of the parser include: * Determine whether a model is assignable from another model. > [!NOTE]-> [IoT Plug and Play](../iot-develop/overview-iot-plug-and-play.md) devices use a small syntax variant to describe their functionality. This syntax variant is a semantically compatible subset of the DTDL that is used in Azure Digital Twins. When using the parser library, you do not need to know which syntax variant was used to create the DTDL for your digital twin. The parser will always, by default, return the same model for both IoT Plug and Play and Azure Digital Twins syntax. +> [IoT Plug and Play](../iot/overview-iot-plug-and-play.md) devices use a small syntax variant to describe their functionality. This syntax variant is a semantically compatible subset of the DTDL that is used in Azure Digital Twins. When using the parser library, you do not need to know which syntax variant was used to create the DTDL for your digital twin. The parser will always, by default, return the same model for both IoT Plug and Play and Azure Digital Twins syntax. ## Code with the parser library |
digital-twins | Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/overview.md | In Azure Digital Twins, you define the digital entities that represent the peopl You can think of these model definitions as a specialized vocabulary to describe your business. For a building management solution, for example, you might define a model that defines a Building type, a Floor type, and an Elevator type. Models are defined in a JSON-like language called [Digital Twins Definition Language (DTDL)](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v3/DTDL.v3.md). In ADT, DTDL models describe types of entities according to their state properties, commands, and relationships. You can design your own model sets from scratch, or get started with a pre-existing set of [DTDL industry ontologies](concepts-ontologies.md) based on common vocabulary for your industry. >[!TIP]->Version 2 of DTDL is also used for data models throughout other Azure IoT services, including [IoT Plug and Play](../iot-develop/overview-iot-plug-and-play.md) and [Time Series Insights](../time-series-insights/overview-what-is-tsi.md). This compatibility helps you connect your Azure Digital Twins solution with other parts of the Azure ecosystem. +>Version 2 of DTDL is also used for data models throughout other Azure IoT services, including [IoT Plug and Play](../iot/overview-iot-plug-and-play.md) and [Time Series Insights](../time-series-insights/overview-what-is-tsi.md). This compatibility helps you connect your Azure Digital Twins solution with other parts of the Azure ecosystem. Once you've defined your data models, use them to create [digital twins](concepts-twins-graph.md) that represent each specific entity in your environment. For example, you might use the Building model definition to create several Building-type twins (Building 1, Building 2, and so on). You can also use the relationships in the model definitions to connect twins to each other, forming a conceptual graph. |
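To make the Building example above concrete, here's a minimal sketch (not from the article) of a DTDL v3 interface uploaded with the `azure-digitaltwins-core` Python SDK. The model IDs and instance URL are placeholders, and the Floor model is uploaded alongside Building so the relationship target resolves.

```python
# Hedged sketch: define and upload minimal DTDL v3 models for Building/Floor.
from azure.identity import DefaultAzureCredential
from azure.digitaltwins.core import DigitalTwinsClient

floor_model = {
    "@context": "dtmi:dtdl:context;3",
    "@id": "dtmi:example:Floor;1",  # placeholder model ID
    "@type": "Interface",
    "displayName": "Floor",
}

building_model = {
    "@context": "dtmi:dtdl:context;3",
    "@id": "dtmi:example:Building;1",  # placeholder model ID
    "@type": "Interface",
    "displayName": "Building",
    "contents": [
        {"@type": "Property", "name": "floorCount", "schema": "integer"},
        {"@type": "Relationship", "name": "contains",
         "target": "dtmi:example:Floor;1"},
    ],
}

client = DigitalTwinsClient(
    "https://<instance>.api.<region>.digitaltwins.azure.net",  # placeholder
    DefaultAzureCredential(),
)
# Upload both models in one call so the "contains" target is known.
client.create_models([floor_model, building_model])
```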
dms | Tutorial Mysql Azure External To Flex Online Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/tutorial-mysql-azure-external-to-flex-online-portal.md | To complete this tutorial, you need to: * Ensure that the user has "REPLICATION CLIENT" and "REPLICATION SLAVE" permissions on the source server for reading and applying the bin log. * If you're targeting an online migration, you will need to configure the binlog expiration on the source server to ensure that binlog files aren't purged before the replica commits the changes. We recommend at least two days to start. The parameter will depend on the version of your MySQL server. For MySQL 5.7 the parameter is expire_logs_days (by default it is set to 0, which means no automatic purge). For MySQL 8.0 it is binlog_expire_logs_seconds (by default it is set to 30 days). After a successful cutover, you can reset the value. * To complete a schema migration successfully, on the source server, the user performing the migration requires the following privileges (a combined GRANT sketch follows this row):- * "READ" privilege on the source database. - * "SELECT" privilege for the ability to select objects from the database - * If migrating views, the user must have the "SHOW VIEW" privilege. - * If migrating triggers, the user must have the "TRIGGER" privilege. - * If migrating routines (procedures and/or functions), the user must be named in the definer clause of the routine. Alternatively, based on version, the user must have the following privilege: - * For 5.7, have "SELECT" access to the "mysql.proc" table. - * For 8.0, have "SHOW_ROUTINE" privilege or have the "CREATE ROUTINE," "ALTER ROUTINE," or "EXECUTE" privilege granted at a scope that includes the routine. - * If migrating events, the user must have the "EVENT" privilege for the database from which the events are to be shown. + * ["SELECT"](https://dev.mysql.com/doc/refman/8.0/en/privileges-provided.html#priv_select) privilege at the server level on the source. + * If migrating views, user must have the ["SHOW VIEW"](https://dev.mysql.com/doc/refman/8.0/en/privileges-provided.html#priv_show-view) privilege on the source server and the ["CREATE VIEW"](https://dev.mysql.com/doc/refman/8.0/en/privileges-provided.html#priv_create-view) privilege on the target server. + * If migrating triggers, user must have the ["TRIGGER"](https://dev.mysql.com/doc/refman/8.0/en/privileges-provided.html#priv_trigger) privilege on the source and target server. + * If migrating routines (procedures and/or functions), the user must have the ["CREATE ROUTINE"](https://dev.mysql.com/doc/refman/8.0/en/privileges-provided.html#priv_create-routine) and ["ALTER ROUTINE"](https://dev.mysql.com/doc/refman/8.0/en/privileges-provided.html#priv_alter-routine) privileges granted at the server level on the target. + * If migrating events, the user must have the ["EVENT"](https://dev.mysql.com/doc/refman/8.0/en/privileges-provided.html#priv_event) privilege on the source and target server. + * If migrating users/logins, the user must have the ["CREATE USER"](https://dev.mysql.com/doc/refman/8.0/en/privileges-provided.html#priv_create-user) privilege on the target server. + * ["DROP"](https://dev.mysql.com/doc/refman/8.0/en/privileges-provided.html#priv_drop) privilege at the server level on the target, in order to drop tables that might already exist. For example, when retrying a migration.
+ * ["REFERENCES"](https://dev.mysql.com/doc/refman/8.0/en/privileges-provided.html#priv_references) privilege at the server level on the target, in order to create tables with foreign keys. + * If migrating to MySQL 8.0, the user must have the ["SESSION_VARIABLES_ADMIN"](https://dev.mysql.com/doc/refman/8.0/en/privileges-provided.html#priv_session-variables-admin) privilege on the target server. + * ["CREATE"](https://dev.mysql.com/doc/refman/8.0/en/privileges-provided.html#priv_create) privilege at the server level on the target. + * ["INSERT"](https://dev.mysql.com/doc/refman/8.0/en/privileges-provided.html#priv_insert) privilege at the server level on the target. + * ["UPDATE"](https://dev.mysql.com/doc/refman/8.0/en/privileges-provided.html#priv_update) privilege at the server level on the target. + * ["DELETE"](https://dev.mysql.com/doc/refman/8.0/en/privileges-provided.html#priv_delete) privilege at the server level on the target. ## Limitations |
dms | Tutorial Mysql Azure Mysql Offline Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/tutorial-mysql-azure-mysql-offline-portal.md | To complete this tutorial, you need to: * Azure Database for MySQL supports only InnoDB tables. To convert MyISAM tables to InnoDB, see the article [Converting Tables from MyISAM to InnoDB](https://dev.mysql.com/doc/refman/5.7/en/converting-tables-to-innodb.html) * The user must have the privileges to read data on the source database. * To complete a schema migration successfully, on the source server, the user performing the migration requires the following privileges:- * "READ" privilege on the source database. - * "SELECT" privilege for the ability to select objects from the database - * If migrating views, the user must have the "SHOW VIEW" privilege. - * If migrating triggers, the user must have the "TRIGGER" privilege. - * If migrating routines (procedures and/or functions), the user must be named in the definer clause of the routine. Alternatively, based on version, the user must have the following privilege: - * For 5.7, have "SELECT" access to the "mysql.proc" table. - * For 8.0, have "SHOW_ROUTINE" privilege or have the "CREATE ROUTINE," "ALTER ROUTINE," or "EXECUTE" privilege granted at a scope that includes the routine. - * If migrating events, the user must have the "EVENT" privilege for the database from which the events are to be shown. + * ["SELECT"](https://dev.mysql.com/doc/refman/8.0/en/privileges-provided.html#priv_select) privilege at the server level on the source. + * If migrating views, user must have the ["SHOW VIEW"](https://dev.mysql.com/doc/refman/8.0/en/privileges-provided.html#priv_show-view) privilege on the source server and the ["CREATE VIEW"](https://dev.mysql.com/doc/refman/8.0/en/privileges-provided.html#priv_create-view) privilege on the target server. + * If migrating triggers, user must have the ["TRIGGER"](https://dev.mysql.com/doc/refman/8.0/en/privileges-provided.html#priv_trigger) privilege on the source and target server. + * If migrating routines (procedures and/or functions), the user must have the ["CREATE ROUTINE"](https://dev.mysql.com/doc/refman/8.0/en/privileges-provided.html#priv_create-routine) and ["ALTER ROUTINE"](https://dev.mysql.com/doc/refman/8.0/en/privileges-provided.html#priv_alter-routine) privileges granted at the server level on the target. + * If migrating events, the user must have the ["EVENT"](https://dev.mysql.com/doc/refman/8.0/en/privileges-provided.html#priv_event) privilege on the source and target server. + * If migrating users/logins, the user must have the ["CREATE USER"](https://dev.mysql.com/doc/refman/8.0/en/privileges-provided.html#priv_create-user) privilege on the target server. + * ["DROP"](https://dev.mysql.com/doc/refman/8.0/en/privileges-provided.html#priv_drop) privilege at the server level on the target, in order to drop tables that might already exist. For example, when retrying a migration. + * ["REFERENCES"](https://dev.mysql.com/doc/refman/8.0/en/privileges-provided.html#priv_references) privilege at the server level on the target, in order to create tables with foreign keys. + * If migrating to MySQL 8.0, the user must have the ["SESSION_VARIABLES_ADMIN"](https://dev.mysql.com/doc/refman/8.0/en/privileges-provided.html#priv_session-variables-admin) privilege on the target server.
+ * ["CREATE"](https://dev.mysql.com/doc/refman/8.0/en/privileges-provided.html#priv_create) privilege at the server level on the target. + * ["INSERT"](https://dev.mysql.com/doc/refman/8.0/en/privileges-provided.html#priv_insert) privilege at the server level on the target. + * ["UPDATE"](https://dev.mysql.com/doc/refman/8.0/en/privileges-provided.html#priv_update) privilege at the server level on the target. + * ["DELETE"](https://dev.mysql.com/doc/refman/8.0/en/privileges-provided.html#priv_delete) privilege at the server level on the target. ## Sizing the target Azure Database for MySQL instance |
dms | Tutorial Mysql Azure Single To Flex Offline Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/tutorial-mysql-azure-single-to-flex-offline-portal.md | To complete this tutorial, you need to: * Create or use an existing instance of Azure Database for MySQL – Single Server (the source server). * To complete a schema migration successfully, on the source server, the user performing the migration requires the following privileges:- * "READ" privilege on the source database. - * "SELECT" privilege for the ability to select objects from the database - * If migrating views, user must have the "SHOW VIEW" privilege. - * If migrating triggers, user must have the "TRIGGER" privilege. - * If migrating routines (procedures and/or functions), the user must be named in the definer clause of the routine. Alternatively, based on version, the user must have the following privilege: - * For 5.7, have "SELECT" access to the "mysql.proc" table. - * For 8.0, have "SHOW_ROUTINE" privilege or have the "CREATE ROUTINE," "ALTER ROUTINE," or "EXECUTE" privilege granted at a scope that includes the routine. - * If migrating events, the user must have the "EVENT" privilege for the database from which the event is to be shown. + * ["SELECT"](https://dev.mysql.com/doc/refman/8.0/en/privileges-provided.html#priv_select) privilege at the server level on the source. + * If migrating views, user must have the ["SHOW VIEW"](https://dev.mysql.com/doc/refman/8.0/en/privileges-provided.html#priv_show-view) privilege on the source server and the ["CREATE VIEW"](https://dev.mysql.com/doc/refman/8.0/en/privileges-provided.html#priv_create-view) privilege on the target server. + * If migrating triggers, user must have the ["TRIGGER"](https://dev.mysql.com/doc/refman/8.0/en/privileges-provided.html#priv_trigger) privilege on the source and target server. + * If migrating routines (procedures and/or functions), the user must have the ["CREATE ROUTINE"](https://dev.mysql.com/doc/refman/8.0/en/privileges-provided.html#priv_create-routine) and ["ALTER ROUTINE"](https://dev.mysql.com/doc/refman/8.0/en/privileges-provided.html#priv_alter-routine) privileges granted at the server level on the target. + * If migrating events, the user must have the ["EVENT"](https://dev.mysql.com/doc/refman/8.0/en/privileges-provided.html#priv_event) privilege on the source and target server. + * If migrating users/logins, the user must have the ["CREATE USER"](https://dev.mysql.com/doc/refman/8.0/en/privileges-provided.html#priv_create-user) privilege on the target server. + * ["DROP"](https://dev.mysql.com/doc/refman/8.0/en/privileges-provided.html#priv_drop) privilege at the server level on the target, in order to drop tables that might already exist. For example, when retrying a migration. + * ["REFERENCES"](https://dev.mysql.com/doc/refman/8.0/en/privileges-provided.html#priv_references) privilege at the server level on the target, in order to create tables with foreign keys. + * If migrating to MySQL 8.0, the user must have the ["SESSION_VARIABLES_ADMIN"](https://dev.mysql.com/doc/refman/8.0/en/privileges-provided.html#priv_session-variables-admin) privilege on the target server. + * ["CREATE"](https://dev.mysql.com/doc/refman/8.0/en/privileges-provided.html#priv_create) privilege at the server level on the target.
+ * ["INSERT"](https://dev.mysql.com/doc/refman/8.0/en/privileges-provided.html#priv_insert) privilege at the server level on the target. + * ["UPDATE"](https://dev.mysql.com/doc/refman/8.0/en/privileges-provided.html#priv_update) privilege at the server level on the target. + * ["DELETE"](https://dev.mysql.com/doc/refman/8.0/en/privileges-provided.html#priv_delete) privilege at the server level on the target. ## Limitations |
dms | Tutorial Mysql Azure Single To Flex Online Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/tutorial-mysql-azure-single-to-flex-online-portal.md | To complete this tutorial, you need to: * Ensure that the user has "REPLICATION CLIENT" and "REPLICATION SLAVE" permissions on the source server for reading and applying the bin log. * If you're targeting an online migration, configure the binlog_expire_logs_seconds parameter on the source server to ensure that binlog files aren't purged before the replica commits the changes. We recommend at least two days to start. After a successful cutover, you can reset the value. * To complete a schema migration successfully, on the source server, the user performing the migration requires the following privileges (a verification sketch follows this row):- * "READ" privilege on the source database. - * "SELECT" privilege for the ability to select objects from the database - * If migrating views, the user must have the "SHOW VIEW" privilege. - * If migrating triggers, the user must have the "TRIGGER" privilege. - * If migrating routines (procedures and/or functions), the user must be named in the definer clause of the routine. Alternatively, based on version, the user must have the following privilege: - * For 5.7, have "SELECT" access to the "mysql.proc" table. - * For 8.0, have "SHOW_ROUTINE" privilege or have the "CREATE ROUTINE," "ALTER ROUTINE," or "EXECUTE" privilege granted at a scope that includes the routine. - * If migrating events, the user must have the "EVENT" privilege for the database from which the events are to be shown. + * ["SELECT"](https://dev.mysql.com/doc/refman/8.0/en/privileges-provided.html#priv_select) privilege at the server level on the source. + * If migrating views, user must have the ["SHOW VIEW"](https://dev.mysql.com/doc/refman/8.0/en/privileges-provided.html#priv_show-view) privilege on the source server and the ["CREATE VIEW"](https://dev.mysql.com/doc/refman/8.0/en/privileges-provided.html#priv_create-view) privilege on the target server. + * If migrating triggers, user must have the ["TRIGGER"](https://dev.mysql.com/doc/refman/8.0/en/privileges-provided.html#priv_trigger) privilege on the source and target server. + * If migrating routines (procedures and/or functions), the user must have the ["CREATE ROUTINE"](https://dev.mysql.com/doc/refman/8.0/en/privileges-provided.html#priv_create-routine) and ["ALTER ROUTINE"](https://dev.mysql.com/doc/refman/8.0/en/privileges-provided.html#priv_alter-routine) privileges granted at the server level on the target. + * If migrating events, the user must have the ["EVENT"](https://dev.mysql.com/doc/refman/8.0/en/privileges-provided.html#priv_event) privilege on the source and target server. + * If migrating users/logins, the user must have the ["CREATE USER"](https://dev.mysql.com/doc/refman/8.0/en/privileges-provided.html#priv_create-user) privilege on the target server. + * ["DROP"](https://dev.mysql.com/doc/refman/8.0/en/privileges-provided.html#priv_drop) privilege at the server level on the target, in order to drop tables that might already exist. For example, when retrying a migration. + * ["REFERENCES"](https://dev.mysql.com/doc/refman/8.0/en/privileges-provided.html#priv_references) privilege at the server level on the target, in order to create tables with foreign keys.
+ * If migrating to MySQL 8.0, the user must have the ["SESSION_VARIABLES_ADMIN"](https://dev.mysql.com/doc/refman/8.0/en/privileges-provided.html#priv_session-variables-admin) privilege on the target server. + * ["CREATE"](https://dev.mysql.com/doc/refman/8.0/en/privileges-provided.html#priv_create) privilege at the server level on the target. + * ["INSERT"](https://dev.mysql.com/doc/refman/8.0/en/privileges-provided.html#priv_insert) privilege at the server level on the target. + * ["UPDATE"](https://dev.mysql.com/doc/refman/8.0/en/privileges-provided.html#priv_update) privilege at the server level on the target. + * ["DELETE"](https://dev.mysql.com/doc/refman/8.0/en/privileges-provided.html#priv_delete) privilege at the server level on the target. ## Limitations |
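Before starting an online migration with these prerequisites, it can save a retry to verify them from a client connected to the source server. A small sketch, again assuming the hypothetical `'dms_user'@'%'` account from the earlier example:

```sql
-- Confirm the migration account's privileges took effect.
SHOW GRANTS FOR 'dms_user'@'%';

-- Confirm binlog retention on the source (expect at least 172800 seconds, i.e. two days).
SHOW VARIABLES LIKE 'binlog_expire_logs_seconds';

-- Confirm binary logging is enabled at all.
SHOW VARIABLES LIKE 'log_bin';
```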
event-grid | Partner Events Overview For Partners | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/partner-events-overview-for-partners.md | Last updated 04/10/2023 # Partner Events overview for partners - Azure Event Grid-Event Grid's **Partner Events** allows customers to **subscribe to events** that originate in a registered system using the same mechanism they would use for any other event source on Azure, such as an Azure service. Those registered systems that integrate with Event Grid are known as "partners". This feature also enables customers to **send events** to partner systems that support receiving and routing events to customer's solutions/endpoints in their platform. Typically, partners are software-as-a-service (SaaS) or [ERP](https://en.wikipedia.org/wiki/Enterprise_resource_planning) providers, but they might be corporate platforms wishing to make their events available to internal teams. They purposely integrate with Event Grid to realize end-to-end customer use cases that end on Azure (customers subscribe to events sent by partner) or end on a partner system (customers subscribe to Microsoft events sent by Azure Event Grid). Customers bank on Azure Event Grid to send events published by a partner to supported destinations such as webhooks, Azure Functions, Azure Event Hubs, or Azure Service Bus, to name a few. Customers also rely on Azure Event Grid to route events that originate in Microsoft services, such as Azure Storage, Outlook, Teams, or Microsoft Entra ID, to partner systems where customer's solutions can react to them. With Partner Events, customers can build event-driven solutions across platforms and network boundaries to receive or send events reliably, securely and at scale. +Event Grid's **Partner Events** allows customers to **subscribe to events** that originate in a registered system using the same mechanism they would use for any other event source on Azure, such as an Azure service. Those registered systems that integrate with Event Grid are known as "partners". This feature also enables customers to **send events** to partner systems that support receiving and routing events to customer's solutions/endpoints in their platform. Typically, partners are software-as-a-service (SaaS) or [ERP](https://en.wikipedia.org/wiki/Enterprise_resource_planning) providers, but they might be corporate platforms wishing to make their events available to internal teams. They purposely integrate with Event Grid to realize end-to-end customer use cases that end on Azure (customers subscribe to events sent by partner) or end on a partner system (customers subscribe to Microsoft events sent by Azure Event Grid). Customers bank on Azure Event Grid to send events published by a partner to supported destinations such as webhooks, Azure Functions, Azure Event Hubs, or Azure Service Bus, to name a few. Customers also rely on Azure Event Grid to route events that originate in Microsoft services, such as Outlook, Teams, or Microsoft Entra ID, so that customer's solutions can react to them. With Partner Events, customers can build event-driven solutions across platforms and network boundaries to receive or send events reliably, securely and at scale. > [!NOTE] > This is a conceptual article that's required reading before you decide to onboard as a partner to Azure Event Grid. For step-by-step instructions on how to onboard as an Event Grid partner using the Azure portal, see [How to onboard as an Event Grid partner (Azure portal)](onboard-partner.md). |
event-grid | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/policy-reference.md | Title: Built-in policy definitions for Azure Event Grid description: Lists Azure Policy built-in policy definitions for Azure Event Grid. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 01/30/2024 Last updated : 02/06/2024 |
event-grid | Powershell Webhook Secure Delivery Microsoft Entra App | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/scripts/powershell-webhook-secure-delivery-microsoft-entra-app.md | Title: Azure PowerShell - Secure WebHook delivery with Microsoft Entra Application in Azure Event Grid description: Describes how to deliver events to HTTPS endpoints protected by Microsoft Entra Application using Azure Event Grid ms.devlang: powershell-+ Previously updated : 10/14/2021 Last updated : 02/02/2024 # Secure WebHook delivery with Microsoft Entra Application in Azure Event Grid try { Function CreateAppRole([string] $Name, [string] $Description) {- $appRole = New-Object Microsoft.Open.AzureAD.Model.AppRole + $appRole = New-Object Microsoft.Graph.PowerShell.Models.MicrosoftGraphAppRole $appRole.AllowedMemberTypes = New-Object System.Collections.Generic.List[string]- $appRole.AllowedMemberTypes.Add("Application"); - $appRole.AllowedMemberTypes.Add("User"); + $appRole.AllowedMemberTypes += "Application"; + $appRole.AllowedMemberTypes += "User"; $appRole.DisplayName = $Name $appRole.Id = New-Guid $appRole.IsEnabled = $true try { return $appRole } - # Creates Azure Event Grid Azure AD Application if not exists + # Creates Azure Event Grid Microsoft Entra Application if not exists # You don't need to modify this id- # But Azure Event Grid Azure AD Application Id is different for different clouds + # But Azure Event Grid Entra Application Id is different for different clouds $eventGridAppId = "4962773b-9cdb-44cf-a8bf-237846a00ab7" # Azure Public Cloud # $eventGridAppId = "54316b56-3481-47f9-8f30-0300f5542a7b" # Azure Government Cloud- $eventGridRoleName = "AzureEventGridSecureWebhookSubscriber" # You don't need to modify this role name - $eventGridSP = Get-AzureADServicePrincipal -Filter ("appId eq '" + $eventGridAppId + "'") - if ($eventGridSP -match "Microsoft.EventGrid") + $eventGridSP = Get-MgServicePrincipal -Filter ("appId eq '" + $eventGridAppId + "'") + if ($eventGridSP.DisplayName -match "Microsoft.EventGrid") {- Write-Host "The Azure AD Application is already defined.`n" + Write-Host "The Event Grid Microsoft Entra Application is already defined.`n" } else {- Write-Host "Creating the Azure Event Grid Azure AD Application" - $eventGridSP = New-AzureADServicePrincipal -AppId $eventGridAppId + Write-Host "Creating the Azure Event Grid Microsoft Entra Application" + $eventGridSP = New-MgServicePrincipal -AppId $eventGridAppId } - # Creates the Azure app role for the webhook Azure AD application -- $app = Get-AzureADApplication -ObjectId $webhookAppObjectId + # Creates the Azure app role for the webhook Microsoft Entra application + $eventGridRoleName = "AzureEventGridSecureWebhookSubscriber" # You don't need to modify this role name + $app = Get-MgApplication -ObjectId $webhookAppObjectId $appRoles = $app.AppRoles - Write-Host "Azure AD App roles before addition of the new role..." - Write-Host $appRoles + Write-Host "Microsoft Entra App roles before addition of the new role..."
+ Write-Host $appRoles.DisplayName - if ($appRoles -match $eventGridRoleName) + if ($appRoles.DisplayName -match $eventGridRoleName) { Write-Host "The Azure Event Grid role is already defined.`n" } else { - Write-Host "Creating the Azure Event Grid role in Azure AD Application: " $webhookAppObjectId + Write-Host "Creating the Azure Event Grid role in Microsoft Entra Application: " $webhookAppObjectId $newRole = CreateAppRole -Name $eventGridRoleName -Description "Azure Event Grid Role"- $appRoles.Add($newRole) - Set-AzureADApplication -ObjectId $app.ObjectId -AppRoles $appRoles + $appRoles += $newRole + Update-MgApplication -ApplicationId $webhookAppObjectId -AppRoles $appRoles } - Write-Host "Azure AD App roles after addition of the new role..." - Write-Host $appRoles + Write-Host "Microsoft Entra App roles after addition of the new role..." + Write-Host $appRoles.DisplayName # Creates the user role assignment for the app that will create event subscription - $servicePrincipal = Get-AzureADServicePrincipal -Filter ("appId eq '" + $app.AppId + "'") - $eventSubscriptionWriterSP = Get-AzureADServicePrincipal -Filter ("appId eq '" + $eventSubscriptionWriterAppId + "'") + $servicePrincipal = Get-MgServicePrincipal -Filter ("appId eq '" + $app.AppId + "'") + $eventSubscriptionWriterSP = Get-MgServicePrincipal -Filter ("appId eq '" + $eventSubscriptionWriterAppId + "'") if ($null -eq $eventSubscriptionWriterSP) {- Write-Host "Create new Azure AD Application" - $eventSubscriptionWriterSP = New-AzureADServicePrincipal -AppId $eventSubscriptionWriterAppId + Write-Host "Create new Microsoft Entra Application" + $eventSubscriptionWriterSP = New-MgServicePrincipal -AppId $eventSubscriptionWriterAppId } try {- Write-Host "Creating the Azure AD Application role assignment: " $eventSubscriptionWriterAppId + Write-Host "Creating the Microsoft Entra Application role assignment: " $eventSubscriptionWriterAppId $eventGridAppRole = $app.AppRoles | Where-Object -Property "DisplayName" -eq -Value $eventGridRoleName- New-AzureADServiceAppRoleAssignment -Id $eventGridAppRole.Id -ResourceId $servicePrincipal.ObjectId -ObjectId $eventSubscriptionWriterSP.ObjectId -PrincipalId $eventSubscriptionWriterSP.ObjectId + New-MgServicePrincipalAppRoleAssignment -ServicePrincipalId $eventSubscriptionWriterSP.Id -PrincipalId $eventSubscriptionWriterSP.Id -ResourceId $servicePrincipal.Id -AppRoleId $eventGridAppRole.Id } catch { if( $_.Exception.Message -like '*Permission being assigned already exists on the object*') {- Write-Host "The Azure AD Application role is already defined.`n" + Write-Host "The Microsoft Entra Application role is already defined.`n" } else { try { Break } - # Creates the service app role assignment for Event Grid Azure AD Application + # Creates the service app role assignment for Event Grid Microsoft Entra Application $eventGridAppRole = $app.AppRoles | Where-Object -Property "DisplayName" -eq -Value $eventGridRoleName- New-AzureADServiceAppRoleAssignment -Id $eventGridAppRole.Id -ResourceId $servicePrincipal.ObjectId -ObjectId $eventGridSP.ObjectId -PrincipalId $eventGridSP.ObjectId + New-MgServicePrincipalAppRoleAssignment -ServicePrincipalId $eventGridSP.Id -PrincipalId $eventGridSP.Id -ResourceId $servicePrincipal.Id -AppRoleId $eventGridAppRole.Id # Print output references for backup - Write-Host ">> Webhook's Azure AD Application Id: $($app.AppId)" - Write-Host ">> Webhook's Azure AD Application ObjectId Id: $($app.ObjectId)" + Write-Host ">> Webhook's Microsoft Entra Application Id:
$($app.AppId)" + Write-Host ">> Webhook's Microsoft Entra Application ObjectId Id: $($app.ObjectId)" } catch { Write-Host ">> Exception:" catch { ## Script explanation -For more details refer to [Secure WebHook delivery with Microsoft Entra ID in Azure Event Grid](../secure-webhook-delivery.md) +For more information, see [Secure WebHook delivery with Microsoft Entra ID in Azure Event Grid](../secure-webhook-delivery.md). |
event-grid | Powershell Webhook Secure Delivery Microsoft Entra User | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/scripts/powershell-webhook-secure-delivery-microsoft-entra-user.md | Title: Azure PowerShell - Secure WebHook delivery with Microsoft Entra user in Azure Event Grid description: Describes how to deliver events to HTTPS endpoints protected by Microsoft Entra user using Azure Event Grid ms.devlang: powershell-+ Previously updated : 09/29/2021 Last updated : 02/02/2024 # Secure WebHook delivery with Microsoft Entra user in Azure Event Grid Here are the high level steps from the script: 1. Create a service principal for **Microsoft.EventGrid** if it doesn't already exist. 1. Create a role named **AzureEventGridSecureWebhookSubscriber** in the **Microsoft Entra app for your Webhook**.-1. Add service principal of user who will be creating the subscription to the AzureEventGridSecureWebhookSubscriber role. +1. Add service principal of user who is creating the subscription to the AzureEventGridSecureWebhookSubscriber role. 1. Add service principal of Microsoft.EventGrid to the AzureEventGridSecureWebhookSubscriber. -## Sample script - stable +## Sample script ```azurepowershell # NOTE: Before run this script ensure you are logged in Azure by using "az login" command. -$webhookAppObjectId = "[REPLACE_WITH_YOUR_ID]" +$webhookAppId = "[REPLACE_WITH_YOUR_ID]" $eventSubscriptionWriterUserPrincipalName = "[REPLACE_WITH_USER_PRINCIPAL_NAME_OF_THE_USER_WHO_WILL_CREATE_THE_SUBSCRIPTION]" # Start execution try { Function CreateAppRole([string] $Name, [string] $Description) {- $appRole = New-Object Microsoft.Open.AzureAD.Model.AppRole + $appRole = New-Object Microsoft.Graph.PowerShell.Models.MicrosoftGraphAppRole $appRole.AllowedMemberTypes = New-Object System.Collections.Generic.List[string]- $appRole.AllowedMemberTypes.Add("Application"); - $appRole.AllowedMemberTypes.Add("User"); + $appRole.AllowedMemberTypes += "Application"; + $appRole.AllowedMemberTypes += "User"; $appRole.DisplayName = $Name $appRole.Id = New-Guid $appRole.IsEnabled = $true try { return $appRole } - # Creates Azure Event Grid Azure AD Application if not exists + # Creates Azure Event Grid Microsoft Entra Application if not exists # You don't need to modify this id- # But Azure Event Grid Azure AD Application Id is different for different clouds + # But Azure Event Grid Microsoft Entra Application Id is different for different clouds $eventGridAppId = "4962773b-9cdb-44cf-a8bf-237846a00ab7" # Azure Public Cloud # $eventGridAppId = "54316b56-3481-47f9-8f30-0300f5542a7b" # Azure Government Cloud- $eventGridRoleName = "AzureEventGridSecureWebhookSubscriber" # You don't need to modify this role name - $eventGridSP = Get-AzureADServicePrincipal -Filter ("appId eq '" + $eventGridAppId + "'") - if ($eventGridSP -match "Microsoft.EventGrid") + $eventGridSP = Get-MgServicePrincipal -Filter ("appId eq '" + $eventGridAppId + "'") + if ($eventGridSP.DisplayName -match "Microsoft.EventGrid") {- Write-Host "The Azure AD Application is already defined.`n" + Write-Host "The Event Grid Microsoft Entra Application is already defined.`n" } else {- Write-Host "Creating the Azure Event Grid Azure AD Application" - $eventGridSP = New-AzureADServicePrincipal -AppId $eventGridAppId + Write-Host "Creating the Azure Event Grid Microsoft Entra Application" + $eventGridSP = New-MgServicePrincipal -AppId $eventGridAppId } - # Creates the Azure app role for the webhook Azure AD application + # Creates the Azure app role
for the webhook Microsoft Entra application + $eventGridRoleName = "AzureEventGridSecureWebhookSubscriber" # You don't need to modify this role name - $app = Get-AzureADApplication -ObjectId $webhookAppObjectId + $app = Get-MgApplication -ApplicationId $webhookAppObjectId $appRoles = $app.AppRoles - Write-Host "Azure AD App roles before addition of the new role..." - Write-Host $appRoles + Write-Host "Microsoft Entra App roles before addition of the new role..." + Write-Host $appRoles.DisplayName - if ($appRoles -match $eventGridRoleName) + if ($appRoles.DisplayName -match $eventGridRoleName) { Write-Host "The Azure Event Grid role is already defined.`n" } else { - Write-Host "Creating the Azure Event Grid role in Azure AD Application: " $webhookAppObjectId + Write-Host "Creating the Azure Event Grid role in Microsoft Entra Application: " $webhookAppObjectId $newRole = CreateAppRole -Name $eventGridRoleName -Description "Azure Event Grid Role"- $appRoles.Add($newRole) - Set-AzureADApplication -ObjectId $app.ObjectId -AppRoles $appRoles + $appRoles += $newRole + Update-MgApplication -ApplicationId $webhookAppObjectId -AppRoles $appRoles } - Write-Host "Azure AD App roles after addition of the new role..." - Write-Host $appRoles + Write-Host "Microsoft Entra App roles after addition of the new role..." + Write-Host $appRoles.DisplayName # Creates the user role assignment for the user who will create event subscription - $servicePrincipal = Get-AzureADServicePrincipal -Filter ("appId eq '" + $app.AppId + "'") + $servicePrincipal = Get-MgServicePrincipal -Filter ("appId eq '" + $app.AppId + "'") try {- Write-Host "Creating the Azure Ad App Role assignment for user: " $eventSubscriptionWriterUserPrincipalName - $eventSubscriptionWriterUser = Get-AzureAdUser -ObjectId $eventSubscriptionWriterUserPrincipalName + Write-Host "Creating the Microsoft Entra App Role assignment for user: " $eventSubscriptionWriterUserPrincipalName + $eventSubscriptionWriterUser = Get-MgUser -UserId $eventSubscriptionWriterUserPrincipalName $eventGridAppRole = $app.AppRoles | Where-Object -Property "DisplayName" -eq -Value $eventGridRoleName- New-AzureADUserAppRoleAssignment -Id $eventGridAppRole.Id -ResourceId $servicePrincipal.ObjectId -ObjectId $eventSubscriptionWriterUser.ObjectId -PrincipalId $eventSubscriptionWriterUser.ObjectId + New-MgUserAppRoleAssignment -UserId $eventSubscriptionWriterUser.Id -PrincipalId $eventSubscriptionWriterUser.Id -ResourceId $servicePrincipal.Id -AppRoleId $eventGridAppRole.Id } catch { if( $_.Exception.Message -like '*Permission being assigned already exists on the object*') {- Write-Host "The Azure AD User Application role is already defined.`n" + Write-Host "The Microsoft Entra User Application role is already defined.`n" } else { try { Break } - # Creates the service app role assignment for Event Grid Azure AD Application + # Creates the service app role assignment for Event Grid Microsoft Entra Application $eventGridAppRole = $app.AppRoles | Where-Object -Property "DisplayName" -eq -Value $eventGridRoleName- New-AzureADServiceAppRoleAssignment -Id $eventGridAppRole.Id -ResourceId $servicePrincipal.ObjectId -ObjectId $eventGridSP.ObjectId -PrincipalId $eventGridSP.ObjectId + New-MgServicePrincipalAppRoleAssignment -ServicePrincipalId $eventGridSP.Id -PrincipalId $eventGridSP.Id -ResourceId $servicePrincipal.Id -AppRoleId $eventGridAppRole.Id # Print output references for backup - Write-Host ">> Webhook's Azure AD Application Id: $($app.AppId)" - Write-Host ">> Webhook's Azure AD
Application ObjectId Id: $($app.ObjectId)" + Write-Host ">> Webhook's Microsoft Entra Application Id: $($app.AppId)" + Write-Host ">> Webhook's Microsoft Entra Application Object Id: $($app.Id)" } catch { Write-Host ">> Exception:" catch { ## Script explanation -For more details refer to [Secure WebHook delivery with Microsoft Entra ID in Azure Event Grid](../secure-webhook-delivery.md) +For more information, see [Secure WebHook delivery with Microsoft Entra ID in Azure Event Grid](../secure-webhook-delivery.md). |
event-grid | Secure Webhook Delivery | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/secure-webhook-delivery.md | Title: Secure WebHook delivery with Microsoft Entra ID in Azure Event Grid description: Describes how to deliver events to HTTPS endpoints protected by Microsoft Entra ID using Azure Event Grid - Previously updated : 10/12/2022+ Last updated : 02/02/2024 # Deliver events to Microsoft Entra protected endpoints There are two subsections in this section. Read through both the scenarios or the This section shows how to configure the event subscription by using a Microsoft Entra user. -1. Create a Microsoft Entra application for the webhook configured to work with the Microsoft directory (single tenant). +1. Create a Microsoft Entra application for the webhook configured to work with the Microsoft Entra (single tenant). 2. Open the [Azure Shell](https://portal.azure.com/#cloudshell/) in the tenant and select the PowerShell environment. This section shows how to configure the event subscription by using a Microsoft - **$webhookAadTenantId**: Azure tenant ID ```Shell- PS /home/user>$webhookAadTenantId = "[REPLACE_WITH_YOUR_TENANT_ID]" - PS /home/user>Connect-AzureAD -TenantId $webhookAadTenantId + $webhookAadTenantId = "[REPLACE_WITH_YOUR_TENANT_ID]" + Connect-MgGraph -TenantId $webhookAadTenantId -Scopes "Application.ReadWrite.All, AppRoleAssignment.ReadWrite.All" ``` 4. Open the [following script](scripts/powershell-webhook-secure-delivery-microsoft-entra-user.md) and update the values of **$webhookAppObjectId** and **$eventSubscriptionWriterUserPrincipalName** with your identifiers, then continue to run the script. This section shows how to configure the event subscription by using a Microsoft If you see the following error message, you need to elevate to the service principal. An extra access check has been introduced as part of create or update of event subscription on March 30, 2021 to address a security vulnerability. The subscriber client's service principal needs to be either an owner or have a role assigned on the destination application service principal. ```- New-AzureADServiceAppRoleAssignment: Error occurred while executing NewServicePrincipalAppRoleAssignment + New-MgServicePrincipalAppRoleAssignment: Error occurred while executing NewServicePrincipalAppRoleAssignment Code: Authorization_RequestDenied Message: Insufficient privileges to complete the operation. ``` This section shows how to configure the event subscription by using a Microsoft This section shows how to configure the event subscription by using a Microsoft Entra application. -1. Create a Microsoft Entra application for the Event Grid subscription writer configured to work with the Microsoft directory (Single tenant). +1. Create a Microsoft Entra application for the Event Grid subscription writer configured to work with the Microsoft Entra (Single tenant). 2. Create a secret for the Microsoft Entra application and save the value (you need this value later). 3. Go to the **Access control (IAM)** page for the Event Grid topic and assign **Event Grid Contributor** role to the Event Grid subscription writer app. This step allows you to have access to the Event Grid resource when you logged-in into Azure with the Microsoft Entra application by using Azure CLI. -4. Create a Microsoft Entra application for the webhook configured to work with the Microsoft directory (Single tenant). +4.
Create a Microsoft Entra application for the webhook configured to work with the Microsoft Entra (Single tenant). 5. Open the [Azure Shell](https://portal.azure.com/#cloudshell/) in the tenant and select the PowerShell environment. This section shows how to configure the event subscription by using a Microsoft - **$webhookAadTenantId**: Azure tenant ID ```Shell- PS /home/user>$webhookAadTenantId = "[REPLACE_WITH_YOUR_TENANT_ID]" - PS /home/user>Connect-AzureAD -TenantId $webhookAadTenantId + $webhookAadTenantId = "[REPLACE_WITH_YOUR_TENANT_ID]" + Connect-MgGraph -TenantId $webhookAadTenantId -Scopes "Application.ReadWrite.All, AppRoleAssignment.ReadWrite.All" ``` 7. Open the [following script](scripts/powershell-webhook-secure-delivery-microsoft-entra-app.md) and update the values of **$webhookAppObjectId** and **$eventSubscriptionWriterAppId** with your identifiers, then continue to run the script. This section shows how to configure the event subscription by using a Microsoft 8. Sign-in as the Event Grid subscription writer Microsoft Entra Application by running the command. ```azurecli- PS /home/user>az login --service-principal -u [REPLACE_WITH_EVENT_GRID_SUBSCRIPTION_WRITER_APP_ID] -p [REPLACE_WITH_EVENT_GRID_SUBSCRIPTION_WRITER_APP_SECRET_VALUE] --tenant [REPLACE_WITH_TENANT_ID] + az login --service-principal -u [REPLACE_WITH_EVENT_GRID_SUBSCRIPTION_WRITER_APP_ID] -p [REPLACE_WITH_EVENT_GRID_SUBSCRIPTION_WRITER_APP_SECRET_VALUE] --tenant [REPLACE_WITH_TENANT_ID] ``` 9. Create your subscription by running the command. ```azurecli- PS /home/user>az eventgrid system-topic event-subscription create --name [REPLACE_WITH_SUBSCRIPTION_NAME] -g [REPLACE_WITH_RESOURCE_GROUP] --system-topic-name [REPLACE_WITH_SYSTEM_TOPIC] --endpoint [REPLACE_WITH_WEBHOOK_ENDPOINT] --event-delivery-schema [REPLACE_WITH_WEBHOOK_EVENT_SCHEMA] --azure-active-directory-tenant-id [REPLACE_WITH_TENANT_ID] --azure-active-directory-application-id-or-uri [REPLACE_WITH_APPLICATION_ID_FROM_SCRIPT] --endpoint-type webhook + az eventgrid system-topic event-subscription create --name [REPLACE_WITH_SUBSCRIPTION_NAME] -g [REPLACE_WITH_RESOURCE_GROUP] --system-topic-name [REPLACE_WITH_SYSTEM_TOPIC] --endpoint [REPLACE_WITH_WEBHOOK_ENDPOINT] --event-delivery-schema [REPLACE_WITH_WEBHOOK_EVENT_SCHEMA] --azure-active-directory-tenant-id [REPLACE_WITH_TENANT_ID] --azure-active-directory-application-id-or-uri [REPLACE_WITH_APPLICATION_ID_FROM_SCRIPT] --endpoint-type webhook ``` > [!NOTE] Based on the diagram, follow next steps to configure both tenants. Do the following steps in **Tenant A**: -1. Create a Microsoft Entra application for the Event Grid subscription writer configured to work with any Microsoft Entra directory (multitenant). +1. Create a Microsoft Entra application for the Event Grid subscription writer configured to work with any Microsoft Entra (multitenant). 2. Create a secret for the Microsoft Entra application, and save the value (you need this value later). Do the following steps in **Tenant A**: Do the following steps in **Tenant B**: -1. Create a Microsoft Entra Application for the webhook configured to work with the Microsoft directory (single tenant). +1. Create a Microsoft Entra Application for the webhook configured to work with the Microsoft Entra (single tenant). 5. Open the [Azure Shell](https://portal.azure.com/#cloudshell/), and select the PowerShell environment. 6. Modify the **$webhookAadTenantId** value to connect to the **Tenant B**.
- Variables: - **$webhookAadTenantId**: Azure Tenant ID for the **Tenant B** ```Shell- PS /home/user>$webhookAadTenantId = "[REPLACE_WITH_YOUR_TENANT_ID]" - PS /home/user>Connect-AzureAD -TenantId $webhookAadTenantId + $webhookAadTenantId = "[REPLACE_WITH_YOUR_TENANT_ID]" + Connect-MgGraph -TenantId $webhookAadTenantId -Scopes "Application.ReadWrite.All, AppRoleAssignment.ReadWrite.All" ``` 7. Open the [following script](scripts/powershell-webhook-secure-delivery-microsoft-entra-app.md), and update values of **$webhookAppObjectId** and **$eventSubscriptionWriterAppId** with your identifiers, then continue to run the script. Do the following steps in **Tenant B**: If you see the following error message, you need to elevate to the service principal. An extra access check has been introduced as part of create or update of event subscription on March 30, 2021 to address a security vulnerability. The subscriber client's service principal needs to be either an owner or have a role assigned on the destination application service principal. ```- New-AzureADServiceAppRoleAssignment: Error occurred while executing NewServicePrincipalAppRoleAssignment + New-MgServicePrincipalAppRoleAssignment: Error occurred while executing NewServicePrincipalAppRoleAssignment Code: Authorization_RequestDenied Message: Insufficient privileges to complete the operation. ``` Back in **Tenant A**, do the following steps: 1. Open the [Azure Shell](https://portal.azure.com/#cloudshell/), and sign in as the Event Grid subscription writer Microsoft Entra Application by running the command. ```azurecli- PS /home/user>az login --service-principal -u [REPLACE_WITH_APP_ID] -p [REPLACE_WITH_SECRET_VALUE] --tenant [REPLACE_WITH_TENANT_ID] + az login --service-principal -u [REPLACE_WITH_APP_ID] -p [REPLACE_WITH_SECRET_VALUE] --tenant [REPLACE_WITH_TENANT_ID] ``` 2. Create your subscription by running the command. ```azurecli- PS /home/user>az eventgrid system-topic event-subscription create --name [REPLACE_WITH_SUBSCRIPTION_NAME] -g [REPLACE_WITH_RESOURCE_GROUP] --system-topic-name [REPLACE_WITH_SYSTEM_TOPIC] --endpoint [REPLACE_WITH_WEBHOOK_ENDPOINT] --event-delivery-schema [REPLACE_WITH_WEBHOOK_EVENT_SCHEMA] --azure-active-directory-tenant-id [REPLACE_WITH_TENANT_B_ID] --azure-active-directory-application-id-or-uri [REPLACE_WITH_APPLICATION_ID_FROM_SCRIPT] --endpoint-type webhook + az eventgrid system-topic event-subscription create --name [REPLACE_WITH_SUBSCRIPTION_NAME] -g [REPLACE_WITH_RESOURCE_GROUP] --system-topic-name [REPLACE_WITH_SYSTEM_TOPIC] --endpoint [REPLACE_WITH_WEBHOOK_ENDPOINT] --event-delivery-schema [REPLACE_WITH_WEBHOOK_EVENT_SCHEMA] --azure-active-directory-tenant-id [REPLACE_WITH_TENANT_B_ID] --azure-active-directory-application-id-or-uri [REPLACE_WITH_APPLICATION_ID_FROM_SCRIPT] --endpoint-type webhook ``` > [!NOTE] |
event-hubs | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/policy-reference.md | Title: Built-in policy definitions for Azure Event Hubs description: Lists Azure Policy built-in policy definitions for Azure Event Hubs. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 01/30/2024 Last updated : 02/06/2024 |
governance | Built In Initiatives | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/built-in-initiatives.md | Title: List of built-in policy initiatives description: List built-in policy initiatives for Azure Policy. Categories include Regulatory Compliance, Guest Configuration, and more. Previously updated : 01/30/2024 Last updated : 02/06/2024 The name on each built-in links to the initiative definition source on the [!INCLUDE [azure-policy-reference-policysets-security-center](../../../../includes/policy/reference/bycat/policysets-security-center.md)] +## SQL +++## Synapse ++ ## Tags [!INCLUDE [azure-policy-reference-policysets-tags](../../../../includes/policy/reference/bycat/policysets-tags.md)] |
governance | Built In Policies | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/built-in-policies.md | Title: List of built-in policy definitions description: List built-in policy definitions for Azure Policy. Categories include Tags, Regulatory Compliance, Key Vault, Kubernetes, Guest Configuration, and more. Previously updated : 01/30/2024 Last updated : 02/06/2024 The name of each built-in links to the policy definition in the Azure portal. Us [!INCLUDE [azure-policy-reference-policies-sql-server](../../../../includes/policy/reference/bycat/policies-sql-server.md)] +## Stack HCI ++ ## Storage [!INCLUDE [azure-policy-reference-policies-storage](../../../../includes/policy/reference/bycat/policies-storage.md)] |
hdinsight-aks | Hdinsight On Aks Autoscale Clusters | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/hdinsight-on-aks-autoscale-clusters.md | The following table describes the cluster types that are compatible with the Aut |Workload |Load Based |Schedule Based| |-|-|-| |Flink |Planned |Yes|-|Trino |Planned |Yes**| +|Trino |Yes** |Yes**| |Spark |Yes** |Yes**| **Graceful decommissioning is configurable. |
hdinsight-aks | Trino Connect To Metastore | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/trino/trino-connect-to-metastore.md | The following example covers the addition of Hive catalog and metastore database **There are a few important sections you need to add to your cluster ARM template to configure the Hive catalog and Hive metastore database:** -- `secretsProfile` – It specifies Azure Key Vault and list of secrets to be used in Trino cluster, required to connect to external Hive metastore.-- `serviceConfigsProfiles` - It includes configuration for Trino catalogs. For more information, see [Add catalogs to existing cluster](trino-add-catalogs.md).-- `trinoProfile.catalogOptions.hive` – List of Hive or iceberg or delta catalogs with parameters of external Hive metastore database for each catalog. To use external metastore database, catalog must be present in this list.-+### Metastore configuration +Configure external Hive metastore database in `config.properties` file: +```json +{ + "fileName": "config.properties", + "values": { + "hive.metastore.hdi.metastoreDbConnectionURL": "jdbc:sqlserver://mysqlserver1.database.windows.net;database=myhmsdb1;encrypt=true;trustServerCertificate=true;create=false;loginTimeout=30", + "hive.metastore.hdi.metastoreDbConnectionUserName": "trinoadmin", + "hive.metastore.hdi.metastoreDbConnectionPasswordSecret": "hms-db-pwd", + "hive.metastore.hdi.metastoreWarehouseDir": "abfs://container1@myadlsgen2account1.dfs.core.windows.net/hive/warehouse" + } +} +``` +| Property| Description| Example| +|||| +|hive.metastore.hdi.metastoreDbConnectionURL|JDBC connection string to database.|jdbc:sqlserver://mysqlserver1.database.windows.net;database=myhmsdb1;encrypt=true;trustServerCertificate=true;create=false;loginTimeout=30| +|hive.metastore.hdi.metastoreDbConnectionUserName|SQL user name to connect to database.|trinoadmin| +|hive.metastore.hdi.metastoreDbConnectionPasswordSecret|Secret referenceName configured in secretsProfile with password.|hms-db-pwd| +|hive.metastore.hdi.metastoreWarehouseDir|ABFS URI to location in storage where data is stored.|`abfs://container1@myadlsgen2account1.dfs.core.windows.net/hive/warehouse`| +### Metastore authentication +Configure authentication to external Hive metastore database specifying Azure Key Vault secrets.
+> [!NOTE] +> `referenceName` should match value provided in `hive.metastore.hdi.metastoreDbConnectionPasswordSecret` +```json +"secretsProfile": { + "keyVaultResourceId": "/subscriptions/{USER_SUBSCRIPTION_ID}/resourceGroups/{USER_RESOURCE_GROUP}/providers/Microsoft.KeyVault/vaults/{USER_KEYVAULT_NAME}", + "secrets": [ + { + "referenceName": "hms-db-pwd", + "type": "secret", + "keyVaultObjectName": "hms-db-pwd" + } ] +}, +``` | Property| Description| Example| |||| |secretsProfile.keyVaultResourceId|Azure resource ID string to Azure Key Vault where secrets for Hive metastore are stored.|/subscriptions/0000000-0000-0000-0000-000000000000/resourceGroups/trino-rg/providers/Microsoft.KeyVault/vaults/trinoakv| |secretsProfile.secrets[*].referenceName|Unique reference name of the secret to use later in clusterProfile.|Secret1_ref| |secretsProfile.secrets[*].type|Type of object in Azure Key Vault, only "secret" is supported.|secret| |secretsProfile.secrets[*].keyVaultObjectName|Name of secret object in Azure Key Vault containing actual secret value.|secret1|-|trinoProfile.catalogOptions.hive|List of Hive or iceberg or delta catalogs with parameters of external Hive metastore database, require parameters for each. To use external metastore database, catalog must be present in this list. -|trinoProfile.catalogOptions.hive[*].catalogName|Name of Trino catalog configured in `serviceConfigsProfiles`, which configured to use external Hive metastore database.|hive1| -|trinoProfile.catalogOptions.hive[*].metastoreDbConnectionURL|JDBC connection string to database.|jdbc:sqlserver://mysqlserver1.database.windows.net;database=myhmsdb1;encrypt=true;trustServerCertificate=true;create=false;loginTimeout=30| -|trinoProfile.catalogOptions.hive[*].metastoreDbConnectionUserName|SQL user name to connect to database.|trinoadmin| -|trinoProfile.catalogOptions.hive[*].metastoreDbConnectionPasswordSecret|Secret referenceName configured in secretsProfile with password.|hms-db-pwd| -|trinoProfile.catalogOptions.hive[*].metastoreWarehouseDir|ABFS URI to location in storage where data is stored.|`abfs://container1@myadlsgen2account1.dfs.core.windows.net/hive/warehouse`| -To configure external Hive metastore to an existing Trino cluster, add the required sections in your cluster ARM template by referring to the following example: +### Catalog configuration +In order for a Trino catalog to use an external Hive metastore, it should specify the `hive.metastore=hdi` property.
For more information, see [Add catalogs to existing cluster](trino-add-catalogs.md): +``` +{ + "fileName": "hive1.properties", + "values": { + "connector.name": "hive", + "hive.metastore": "hdi" + } +} +``` +### Complete example +To configure external Hive metastore to an existing Trino cluster, add the required sections in your cluster ARM template by referring to the following example: ```json { To configure external Hive metastore to an existing Trino cluster, add the requi { "serviceName": "trino", "configs": [+ { + "component": "common", + "files": [ + { + "fileName": "config.properties", + "values": { + "hive.metastore.hdi.metastoreDbConnectionURL": "jdbc:sqlserver://mysqlserver1.database.windows.net;database=myhmsdb1;encrypt=true;trustServerCertificate=true;create=false;loginTimeout=30", + "hive.metastore.hdi.metastoreDbConnectionUserName": "trinoadmin", + "hive.metastore.hdi.metastoreDbConnectionPasswordSecret": "hms-db-pwd", + "hive.metastore.hdi.metastoreWarehouseDir": "abfs://container1@myadlsgen2account1.dfs.core.windows.net/hive/warehouse" + } + } + ] + }, { "component": "catalogs", "files": [ { "fileName": "hive1.properties", "values": {- "connector.name": "hive" + "connector.name": "hive", + "hive.metastore": "hdi" } } ] } ] }- ], - "trinoProfile": { - "catalogOptions": { - "hive": [ - { - "catalogName": "hive1", - "metastoreDbConnectionURL": "jdbc:sqlserver://mysqlserver1.database.windows.net;database=myhmsdb1;encrypt=true;trustServerCertificate=true;create=false;loginTimeout=30", - "metastoreDbConnectionUserName": "trinoadmin", - "metastoreDbConnectionPasswordSecret": "hms-db-pwd", - "metastoreWarehouseDir": "abfs://container1@myadlsgen2account1.dfs.core.windows.net/hive/warehouse" - } - ] - } - } + ] } } } create schema hive1.schema1; create table hive1.schema1.tpchorders as select * from tpch.tiny.orders; select * from hive1.schema1.tpchorders limit 100; ```++## Alternative configuration +Alternatively, external Hive metastore database parameters can be specified in `trinoProfile.catalogOptions.hive` together with `hive.metastore=hdi` catalog property: ++| Property| Description| Example| +|||| +|trinoProfile.catalogOptions.hive|List of Hive or iceberg or delta catalogs with parameters of external Hive metastore database, require parameters for each. To use external metastore database, catalog must be present in this list.
+|trinoProfile.catalogOptions.hive[*].catalogName|Name of Trino catalog configured in `serviceConfigsProfiles`, which configured to use external Hive metastore database.|hive1| +|trinoProfile.catalogOptions.hive[*].metastoreDbConnectionURL|JDBC connection string to database.|jdbc:sqlserver://mysqlserver1.database.windows.net;database=myhmsdb1;encrypt=true;trustServerCertificate=true;create=false;loginTimeout=30| +|trinoProfile.catalogOptions.hive[*].metastoreDbConnectionUserName|SQL user name to connect to database.|trinoadmin| +|trinoProfile.catalogOptions.hive[*].metastoreDbConnectionPasswordSecret|Secret referenceName configured in secretsProfile with password.|hms-db-pwd| +|trinoProfile.catalogOptions.hive[*].metastoreWarehouseDir|ABFS URI to location in storage where data is stored.|`abfs://container1@myadlsgen2account1.dfs.core.windows.net/hive/warehouse`| ++### Complete example +```json +{ + "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#", + "contentVersion": "1.0.0.0", + "parameters": {}, + "resources": [ + { + "type": "microsoft.hdinsight/clusterpools/clusters", + "apiVersion": "<api-version>", + "name": "<cluster-pool-name>/<cluster-name>", + "location": "<region, e.g. westeurope>", + "tags": {}, + "properties": { + "clusterType": "Trino", ++ "clusterProfile": { + "secretsProfile": { + "keyVaultResourceId": "/subscriptions/{USER_SUBSCRIPTION_ID}/resourceGroups/{USER_RESOURCE_GROUP}/providers/Microsoft.KeyVault/vaults/{USER_KEYVAULT_NAME}", + "secrets": [ + { + "referenceName": "hms-db-pwd", + "type": "secret", + "keyVaultObjectName": "hms-db-pwd" + } ] + }, + "serviceConfigsProfiles": [ + { + "serviceName": "trino", + "configs": [ + { + "component": "catalogs", + "files": [ + { + "fileName": "hive1.properties", + "values": { + "connector.name": "hive", + "hive.metastore": "hdi" + } + } + ] + } + ] + } + ], + "trinoProfile": { + "catalogOptions": { + "hive": [ + { + "catalogName": "hive1", + "metastoreDbConnectionURL": "jdbc:sqlserver://mysqlserver1.database.windows.net;database=myhmsdb1;encrypt=true;trustServerCertificate=true;create=false;loginTimeout=30", + "metastoreDbConnectionUserName": "trinoadmin", + "metastoreDbConnectionPasswordSecret": "hms-db-pwd", + "metastoreWarehouseDir": "abfs://container1@myadlsgen2account1.dfs.core.windows.net/hive/warehouse" + } + ] + } + } + } + } + } + ] +} +``` |
hdinsight-aks | Trino Connectors | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/trino/trino-connectors.md | Trino in HDInsight on AKS enables seamless integration with data sources. You ca * [MongoDB](https://trino.io/docs/410/connector/mongodb.html) * [MySQL](https://trino.io/docs/410/connector/mysql.html) * [Oracle](https://trino.io/docs/410/connector/oracle.html)-* [Phoenix](https://trino.io/docs/410/connector/phoenix.html) * [PostgreSQL](https://trino.io/docs/410/connector/postgresql.html) * [Prometheus](https://trino.io/docs/410/connector/prometheus.html) * [Redis](https://trino.io/docs/410/connector/redis.html) |
hdinsight | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/policy-reference.md | Title: Built-in policy definitions for Azure HDInsight description: Lists Azure Policy built-in policy definitions for Azure HDInsight. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 01/30/2024 Last updated : 02/06/2024 |
healthcare-apis | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/policy-reference.md | Title: Built-in policy definitions for Azure API for FHIR description: Lists Azure Policy built-in policy definitions for Azure API for FHIR. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 01/30/2024 Last updated : 02/06/2024 |
healthcare-apis | Events Faqs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/events/events-faqs.md | -**Applies to:** [!INCLUDE [Yes icon](../includes/applies-to.md)][!INCLUDE [FHIR service](../includes/fhir-service.md)], [!INCLUDE [DICOM service](../includes/DICOM-service.md)] - Events let you subscribe to data changes in the FHIR® or DICOM® service and get notified through Azure Event Grid. You can use events to trigger workflows, automate tasks, send alerts, and more. In this FAQ, you'll find answers to some common questions about events. **Can I use events with a non-Microsoft FHIR or DICOM service?** |
healthcare-apis | Events Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/events/events-overview.md | -**Applies to:** [!INCLUDE [Yes icon](../includes/applies-to.md)][!INCLUDE [FHIR service](../includes/fhir-service.md)], [!INCLUDE [DICOM service](../includes/DICOM-service.md)] - Events in Azure Health Data Services allow you to subscribe to and receive notifications of changes to health data in the FHIR® service or the DICOM® service. Events also enable you to trigger other actions or services based on changes to health data, such as starting workflows, sending email, text messages, or alerts. Events are: |
iot-central | Concepts Architecture | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/concepts-architecture.md | IoT Central can also control devices by calling commands on the device. For exam The telemetry, properties, and commands that a device implements are collectively known as the device capabilities. You define these capabilities in a model that's shared between the device and the IoT Central application. In IoT Central, this model is part of the device template that defines a specific type of device. To learn more, see [Assign a device to a device template](concepts-device-templates.md#assign-a-device-to-a-device-template). -The [device implementation](tutorial-connect-device.md) should follow the [IoT Plug and Play conventions](../../iot-develop/concepts-convention.md) to ensure that it can communicate with IoT Central. For more information, see the various language [SDKs and samples](../../iot-develop/about-iot-sdks.md). +The [device implementation](tutorial-connect-device.md) should follow the [IoT Plug and Play conventions](../../iot/concepts-convention.md) to ensure that it can communicate with IoT Central. For more information, see the various language [SDKs and samples](../../iot-develop/about-iot-sdks.md). Devices connect to IoT Central using one of the supported protocols: [MQTT, AMQP, or HTTP](../../iot-hub/iot-hub-devguide-protocols.md). |
iot-central | Concepts Device Implementation | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/concepts-device-implementation.md | An IoT Central device template includes a _model_ that specifies the behaviors a Each model has a unique _digital twin model identifier_ (DTMI), such as `dtmi:com:example:Thermostat;1`. When a device connects to IoT Central, it sends the DTMI of the model it implements. IoT Central can then assign the correct device template to the device. -[IoT Plug and Play](../../iot-develop/overview-iot-plug-and-play.md) defines a set of [conventions](../../iot-develop/concepts-convention.md) that a device should follow when it implements a Digital Twin Definition Language (DTDL) model. +[IoT Plug and Play](../../iot/overview-iot-plug-and-play.md) defines a set of [conventions](../../iot/concepts-convention.md) that a device should follow when it implements a Digital Twin Definition Language (DTDL) model. The [Azure IoT device SDKs](#device-sdks) include support for the IoT Plug and Play conventions. A DTDL model can be a _no-component_ or a _multi-component_ model: > [!TIP] > You can [import and export a complete device model or individual interface](howto-set-up-template.md#interfaces-and-components) from an IoT Central device template as a DTDL v2 file. -To learn more about device models, see the [IoT Plug and Play modeling guide](../../iot-develop/concepts-modeling-guide.md) +To learn more about device models, see the [IoT Plug and Play modeling guide](../../iot/concepts-modeling-guide.md) ### Conventions A device should follow the IoT Plug and Play conventions when it exchanges data > [!NOTE] > Currently, IoT Central does not fully support the DTDL **Array** and **Geospatial** data types. -To learn more about the IoT Plug and Play conventions, see [IoT Plug and Play conventions](../../iot-develop/concepts-convention.md). +To learn more about the IoT Plug and Play conventions, see [IoT Plug and Play conventions](../../iot/concepts-convention.md). -To learn more about the format of the JSON messages that a device exchanges with IoT Central, see [Telemetry, property, and command payloads](../../iot-develop/concepts-message-payloads.md). +To learn more about the format of the JSON messages that a device exchanges with IoT Central, see [Telemetry, property, and command payloads](../../iot/concepts-message-payloads.md). ### Device SDKs If the device gets any of the following errors when it connects, it should use a - Operator blocked device. - Internal error 500 from the service. -To learn more about device error codes, see [Troubleshooting device connections](troubleshoot-connection.md). +To learn more about device error codes, see [Troubleshooting device connections](troubleshooting.md). To learn more about implementing automatic reconnections, see [Manage device reconnections to create resilient applications](../../iot-develop/concepts-manage-device-reconnections.md). |
iot-central | Concepts Device Templates | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/concepts-device-templates.md | -A solution builder adds device templates to an IoT Central application. A device developer writes the device code that implements the behaviors defined in the device template. To learn more about the data that a device exchanges with IoT Central, see [Telemetry, property, and command payloads](../../iot-develop/concepts-message-payloads.md). +A solution builder adds device templates to an IoT Central application. A device developer writes the device code that implements the behaviors defined in the device template. To learn more about the data that a device exchanges with IoT Central, see [Telemetry, property, and command payloads](../../iot/concepts-message-payloads.md). A device template includes the following sections: The JSON file that defines the device model uses the [Digital Twin Definition La ] ``` -To learn more about DTDL models, see the [IoT Plug and Play modeling guide](../../iot-develop/concepts-modeling-guide.md). +To learn more about DTDL models, see the [IoT Plug and Play modeling guide](../../iot/concepts-modeling-guide.md). > [!NOTE] > IoT Central defines some extensions to the DTDL v2 language. To learn more, see [IoT Central extension](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/DTDL.iotcentral.v2.md). A solution developer creates views that let operators monitor and manage connect ## Next steps -Now that you've learned about device templates, a suggested next step is to read [Telemetry, property, and command payloads](../../iot-develop/concepts-message-payloads.md) to learn more about the data a device exchanges with IoT Central. +Now that you've learned about device templates, a suggested next step is to read [Telemetry, property, and command payloads](../../iot/concepts-message-payloads.md) to learn more about the data a device exchanges with IoT Central. |
iot-central | Concepts Faq Apaas Paas | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/concepts-faq-apaas-paas.md | So that you can seamlessly migrate devices from your IoT Central applications to - The device must be an IoT Plug and Play device that uses a [Digital Twins Definition Language (DTDL) V2](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/DTDL.v2.md) model. IoT Central requires all devices to have a DTDL model. These models simplify the interoperability between an IoT PaaS solution and IoT Central. -- The device must follow the [IoT Plug and Play conventions](../../iot-develop/concepts-convention.md).+- The device must follow the [IoT Plug and Play conventions](../../iot/concepts-convention.md). - IoT Central uses the DPS to provision the devices. The PaaS solution must also use DPS to provision the devices. - The updatable DPS pattern ensures that the device can move seamlessly between IoT Central applications and the PaaS solution without any downtime. |
iot-central | Howto Connect Eflow | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-connect-eflow.md | - Title: Connect Azure IoT Edge for Linux on Windows (EFLOW) -description: Learn how to connect an Azure IoT Edge for Linux on Windows (EFLOW) device to an IoT Central application -- Previously updated : 11/27/2023-----# Connect an IoT Edge for Linux on Windows device to IoT Central --[Azure IoT Edge for Linux on Windows (EFLOW)](/windows/iot/iot-enterprise/azure-iot-edge-for-linux-on-windows) lets you run Azure IoT Edge in a Linux container on your Windows device. In this article, you learn how to provision an EFLOW device and manage it from your IoT Central application. --In this how-to article, you learn how to: --* Import a device manifest for an IoT Edge device. -* Create a device template for an IoT Edge device. -* Create an IoT Edge device in IoT Central. -* Connect and provision an EFLOW device. --## Prerequisites --To complete the steps in this article, you need: --* An active Azure subscription. If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin. --* An [IoT Central application created](howto-create-iot-central-application.md) from the **Custom application** template. To learn more, see [Create an IoT Central application](howto-create-iot-central-application.md). --* A Windows device that meets the following minimum requirements: -- * Windows 10<sup>1</sup>/11 (Pro, Enterprise, IoT Enterprise) or Windows Server 2019<sup>1</sup>/2022 - * Minimum free memory: 1 GB - * Minimum free disk space: 10 GB -- <sup>1</sup> Windows 10 and Windows Server 2019 minimum build 17763 with all current cumulative updates installed. --To follow the steps in this article, download the [EnvironmentalSensorManifest-1-4.json](https://raw.githubusercontent.com/Azure-Samples/iot-central-docs-samples/main/iotedge/EnvironmentalSensorManifest-1-4.json) file to your computer. --## Import a deployment manifest --You use a deployment manifest to specify the modules to run on an IoT Edge device. IoT Central manages the deployment manifests for the IoT Edge devices in your solution. To import the deployment manifest for this example: --1. In your IoT Central application, navigate to **Edge manifests**. --1. Select **+ New**. Enter a name such as *Environmental Sensor* for your deployment manifest, and then upload the *EnvironmentalSensorManifest-1-4.json* file you downloaded previously. --1. Select **Next** and then **Create**. --The example deployment manifest includes a custom module called *SimulatedTemperatureSensor*. --## Add device template --In this section, you create an IoT Central device template for an IoT Edge device. You import an IoT Edge manifest to get started, and then modify the template to add telemetry definitions and views: --### Create the device template and import the manifest --1. Create a device template and choose **Azure IoT Edge** as the template type. --1. On the **Customize** page of the wizard, enter a name such as *Environmental Sensor Edge Device* for the device template. --1. On the **Review** page, select **Create**. --1. On the **Create a model** page, select **Custom model**. --1. In the model, select **Modules** and then **Import modules from manifest**. Select the **Environmental Sensor** deployment manifest and then select **Import**. --1. 
Select the **management** interface in the **SimulatedTemperatureSensor** module to view the two properties defined in the manifest: ---### Add telemetry to the device template --An IoT Edge manifest doesn't define the telemetry a module sends. You add the telemetry definitions to the device template in IoT Central. The **SimulatedTemperatureSensor** module sends telemetry messages that look like the following JSON: --```json -{ - "machine": { - "temperature": 75.0, - "pressure": 40.2 - }, - "ambient": { - "temperature": 23.0, - "humidity": 30.0 - }, - "timeCreated": "" -} -``` --To add the telemetry definitions to the device template: --1. Select the **management** interface in the **Environmental Sensor Edge Device** template. --1. Select **+ Add capability**. Enter *machine* as the **Display name** and select the **Capability type** as **Telemetry**. --1. Select **Object** as the schema type, and then select **Define**. On the object definition page, add *temperature* and *pressure* as attributes of type **Double** and then select **Apply**. --1. Select **+ Add capability**. Enter *ambient* as the **Display name** and select the **Capability type** as **Telemetry**. --1. Select **Object** as the schema type, and then select **Define**. On the object definition page, add *temperature* and *humidity* as attributes of type **Double** and then select **Apply**. --1. Select **+ Add capability**. Enter *timeCreated* as the **Display name** and make sure that the **Capability type** is **Telemetry**. --1. Select **DateTime** as the schema type. --1. Select **Save** to update the template. --The **management** interface now includes the **machine**, **ambient**, and **timeCreated** telemetry types: ---### Add views to template --To enable an operator to view the telemetry from the device, define a view in the device template. --1. Select **Views** in the **Environmental Sensor Edge Device** template. --1. On the **Select to add a new view** page, select the **Visualizing the device** tile. --1. Change the view name to *View IoT Edge device telemetry*. --1. Under **Start with devices**, select the **ambient/temperature**, **ambient/humidity**, **machine/humidity**, and **machine/temperature** telemetry types. Then select **Add tile**. --1. Select **Save** to save the **View IoT Edge device telemetry** view. --### Publish the template --Before you can add a device that uses the **Environmental Sensor Edge Device** template, you must publish the template. --Navigate to the **Environmental Sensor Edge Device** template and select **Publish**. On the **Publish this device template to the application** panel, select **Publish** to publish the template --## Add an IoT Edge device --Before you can connect a device to IoT Central, you must register the device in your application: --1. In your IoT Central application, navigate to the **Devices** page and select **Environmental Sensor Edge Device** in the list of available templates. --1. Select **+ New** to add a new device from the template. --1. On the **Create new device** page, select the **Environmental Sensor** deployment manifest, and then select **Create**. --You now have a new device with the status **Registered**: ---### Get the device credentials --When you deploy the IoT Edge device later in this how-to article, you need the credentials that allow the device to connect to your IoT Central application. To get the device credentials: --1. On the **Device** page, select the device you created. --1. Select **Connect**. --1. 
On the **Device connection** page, make a note of the **ID Scope**, the **Device ID**, and the **Primary Key**. You use these values later. --1. Select **Close**. --You've now finished configuring your IoT Central application to enable an IoT Edge device to connect. --## Install and provision an EFLOW device --To install and provision your EFLOW device: --1. In an elevated PowerShell session, run the following commands to download IoT Edge for Linux on Windows. -- ```powershell - $msiPath = $([io.Path]::Combine($env:TEMP, 'AzureIoTEdge.msi')) - $ProgressPreference = 'SilentlyContinue' - Invoke-WebRequest "https://aka.ms/AzEFLOWMSI_1_4_LTS_X64" -OutFile $msiPath - ``` -- > [!TIP] - > The previous commands download an X64 image, for ARM64 use `https://aka.ms/AzEFLOWMSI_1_4_LTS_ARM64`. --1. Install IoT Edge for Linux on Windows on your device. -- ```powershell - Start-Process -Wait msiexec -ArgumentList "/i","$([io.Path]::Combine($env:TEMP, 'AzureIoTEdge.msi'))","/qn" - ``` -- > [!TIP] - > You can specify custom IoT Edge for Linux on Windows installation and VHDX directories by adding `INSTALLDIR="<FULLY_QUALIFIED_PATH>"` and `VHDXDIR="<FULLY_QUALIFIED_PATH>"` parameters to the install command. --1. Create the IoT Edge for Linux on Windows deployment. The deployment creates your Linux VM and installs the IoT Edge runtime for you. -- ```powershell - Deploy-Eflow - ``` --1. Use the **ID scope**, **Device ID** and the **Primary Key** you made a note of previously. -- ```powershell - Provision-EflowVm -provisioningType DpsSymmetricKey -scopeId <ID_SCOPE_HERE> -registrationId <DEVCIE_ID_HERE> -symmKey <PRIMARY_KEY_HERE> - ``` --To learn about other ways you can deploy and provision an EFLOW device, see [Install and provision Azure IoT Edge for Linux on a Windows device](../../iot-edge/how-to-install-iot-edge-on-windows.md). --Go to the **Device Details** page in your IoT Central application and you can see telemetry flowing from your EFLOW device: ---> [!TIP] -> You may need to wait several minutes for the IoT Edge device to start sending telemetry. --## Clean up resources --If you want to uninstall EFLOW from your device, use the following commands. --1. Open **Settings** on Windows -1. Select **Add or Remove Programs** -1. Select **Azure IoT Edge LTS** app -1. Select **Uninstall** --## Next steps --Now that you've learned how to connect an (EFLOW) device to IoT Central, the suggested next step is to learn how to [Connect devices through an IoT Edge transparent gateway](how-to-connect-iot-edge-transparent-gateway.md). |
iot-central | Howto Control Devices With Rest Api | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-control-devices-with-rest-api.md | To learn how to control devices by using the IoT Central UI, see ## Components and modules -Components let you group and reuse device capabilities. To learn more about components and device models, see the [IoT Plug and Play modeling guide](../../iot-develop/concepts-modeling-guide.md). +Components let you group and reuse device capabilities. To learn more about components and device models, see the [IoT Plug and Play modeling guide](../../iot/concepts-modeling-guide.md). Not all device templates use components. The following screenshot shows the device template for a simple [thermostat](https://github.com/Azure/iot-plugandplay-models/blob/main/dtmi/com/example/thermostat-2.json) where all the capabilities are defined in a single interface called the **Root component**: |
iot-central | Howto Create Custom Rules | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-create-custom-rules.md | - Title: Extend Azure IoT Central by using custom rules -description: Configure an IoT Central application to send notifications when a device stops sending telemetry by using Azure Stream Analytics, Azure Functions, and SendGrid. -- Previously updated : 11/27/2023-----# Solution developer ---# Extend Azure IoT Central with custom rules using Stream Analytics, Azure Functions, and SendGrid --This how-to guide shows you how to extend your IoT Central application with custom rules and notifications. The example shows sending a notification to an operator when a device stops sending telemetry. The solution uses an [Azure Stream Analytics](../../stream-analytics/index.yml) query to detect when a device stops sending telemetry. The Stream Analytics job uses [Azure Functions](../../azure-functions/index.yml) to send notification emails using [SendGrid](https://sendgrid.com/docs/for-developers/partners/microsoft-azure/). --This how-to guide shows you how to extend IoT Central beyond what it can already do with the built-in rules and actions. --In this how-to guide, you learn how to: --* Stream telemetry from an IoT Central application using *continuous data export*. -* Create a Stream Analytics query that detects when a device stops sending data. -* Send an email notification using the Azure Functions and SendGrid services. --## Prerequisites --To complete the steps in this how-to guide, you need an active Azure subscription. --If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin. --## Create the Azure resources --Run the following script to create the Azure resources you need to configure this scenario. Run this script in a bash environment such as the Azure Cloud Shell: --> [!NOTE] -> The `az login` command is necessary even in the Cloud Shell. 
--```azurecli -SUFFIX=$RANDOM --# Event Hubs namespace name -EHNS=detect-stopped-devices-ehns-$SUFFIX --# IoT Central app name -CA=detect-stopped-devices-app-$SUFFIX --# Storage account -STOR=dtsstorage$SUFFIX --# Function App -FUNC=detect-stopped-devices-function-$SUFFIX --# ASA -ASA=detect-stopped-devices-asa-$SUFFIX --# Other variables -RG=DetectStoppedDevices -EH=centralexport -LOCATION=eastus -DESTID=ehdest01 -EXPID=telexp01 --# Sign in -az login --# Create the Azure resources -az group create -n $RG --location $LOCATION --# Create IoT Central app -az iot central app create --name $CA --resource-group $RG \ - --template "iotc-condition" \ - --subdomain $CA \ - --display-name "In-store analytics - Condition Monitoring (custom rules scenario)" --# Configure managed identity for IoT Central app -az iot central app identity assign --name $CA --resource-group $RG --system-assigned -PI=$(az iot central app identity show --name $CA --resource-group $RG --query "principalId" --output tsv) --# Create Event Hubs -az eventhubs namespace create --name $EHNS --resource-group $RG --location $LOCATION -az eventhubs eventhub create --name $EH --resource-group $RG --namespace-name $EHNS --# Create Function App -az storage account create --name $STOR --location $LOCATION --resource-group $RG --sku Standard_LRS -az functionapp create --name $FUNC --storage-account $STOR --consumption-plan-location $LOCATION \ - --functions-version 4 --resource-group $RG --# Create Azure Stream Analytics -az stream-analytics job create --job-name $ASA --resource-group $RG --location $LOCATION --# Create the IoT Central data export -az role assignment create --assignee $PI --role "Azure Event Hubs Data Sender" --resource-group $RG -az iot central export destination create --app-id $CA --dest-id $DESTID \ - --type eventhubs@v1 --name "Event Hubs destination" --authorization "{ - \"eventHubName\": \"$EH\", - \"hostName\": \"$EHNS.servicebus.windows.net\", - \"type\": \"systemAssignedManagedIdentity\" - }" --az iot central export create --app-id $CA --export-id $EXPID --enabled false \ - --display-name "All telemetry" --source telemetry --destinations "[ - { - \"id\": \"$DESTID\" - } - ]" --echo "Event Hubs hostname: $EHNS.servicebus.windows.net" -echo "Event hub: $EH" -echo "IoT Central app: $CA.azureiotcentral.com" -echo "Function App hostname: $FUNC.azurewebsites.net" -echo "Stream Analytics job: $ASA" -echo "Resource group: $RG" -``` --Make a note of the values output at the end of the script, you use them later in the set-up process. --The script creates: --* A resource group called `DetectStoppedDevices` that contains all the resources. -* An Event Hubs namespace with an event hub called `centralexport`. -* An IoT Central application with two simulated thermostat devices. Telemetry from the two devices is exported to the `centralexport` event hub. This IoT Central data export definition is currently disabled. -* An Azure Stream Analytics job. -* An Azure Function App. --### SendGrid account and API Keys --If you don't have a SendGrid account, create a [free account](https://app.sendgrid.com/) before you begin. --1. From the SendGrid Dashboard, select **Settings** on the left menu, select **Settings > API Keys**. -1. Select **Create API Key**. -1. Name the new API key **AzureFunctionAccess**. -1. Select **Create & View**. ---Make a note of the generated API key, you use it later. --Create a **Single Sender Verification** in your SendGrid account for the email address you'll use as the **From** address. 
--## Define the function --This solution uses an Azure Functions app to send an email notification when the Stream Analytics job detects a stopped device. To create your function app: --1. In the Azure portal, navigate to the **Function App** instance in the **DetectStoppedDevices** resource group. -1. Select **Functions**, then **+ Create** to create a new function. -1. Select **HTTP Trigger** as the function template. -1. Select **Create**. ---### Edit code for HTTP Trigger --The portal creates a default function called **HttpTrigger1**. Select **Code + Test**: ---1. Replace the C# code with the following code: -- ```csharp - #r "Newtonsoft.Json" - #r "SendGrid" - using System; - using SendGrid.Helpers.Mail; - using Microsoft.Azure.WebJobs.Host; - using Microsoft.AspNetCore.Mvc; - using Microsoft.Extensions.Primitives; - using Newtonsoft.Json; -- public static SendGridMessage Run(HttpRequest req, ILogger log) - { - string requestBody = new StreamReader(req.Body).ReadToEnd(); - log.LogInformation(requestBody); - var notifications = JsonConvert.DeserializeObject<IList<Notification>>(requestBody); -- SendGridMessage message = new SendGridMessage(); - message.Subject = "Contoso device notification"; -- var content = "The following device(s) have stopped sending telemetry:<br/><br/><table><tr><th>Device ID</th><th>Time</th></tr>"; - foreach(var notification in notifications) { - log.LogInformation($"No message received - Device: {notification.deviceid}, Time: {notification.time}"); - content += $"<tr><td>{notification.deviceid}</td><td>{notification.time}</td></tr>"; - } - content += "</table>"; - message.AddContent("text/html", content); -- return message; - } -- public class Notification - { - public string deviceid { get; set; } - public string time { get; set; } - } - ``` --1. Select **Save** to save the function. --### Configure function to use SendGrid --To send emails with SendGrid, you need to configure the bindings for your function as follows: --1. Select **Integration**. -1. Select **HTTP ($return)**. -1. Select **Delete.** -1. Select **+ Add output**. -1. Select **SendGrid** as the binding type. -1. For the **SendGrid API Key App Setting**, select **New**. -1. Enter the *Name* and *Value* of your SendGrid API key. If you followed the previous instructions, the name of your SendGrid API key is **AzureFunctionAccess**. -1. Add the following information: -- | Setting | Value | - | - | -- | - | Message parameter name | $return | - | To address | Enter your To Address | - | From address | Enter your SendGrid verified single sender email address | - | Message subject | Device stopped | - | Message text | The device connected to IoT Central has stopped sending telemetry. | --1. Select **Save**. ---### Test the function works --To test the function in the portal, first make the **Logs** panel is visible on the **Code + Test** page. Then select **Test/Run**. 
Use the following JSON as the **Request body**: --```json -[{"deviceid":"test-device-1","time":"2019-05-02T14:23:39.527Z"},{"deviceid":"test-device-2","time":"2019-05-02T14:23:50.717Z"},{"deviceid":"test-device-3","time":"2019-05-02T14:24:28.919Z"}] -``` --The function log messages appear in the **Logs** panel: ---After a few minutes, the **To** email address receives an email with the following content: --```txt -The following device(s) have stopped sending telemetry: --Device ID Time -test-device-1 2019-05-02T14:23:39.527Z -test-device-2 2019-05-02T14:23:50.717Z -test-device-3 2019-05-02T14:24:28.919Z -``` --## Add Stream Analytics query --This solution uses a Stream Analytics query to detect when a device stops sending telemetry for more than 180 seconds. The query uses the telemetry from the event hub as its input. The job sends the query results to the function app. In this section, you configure the Stream Analytics job: --1. In the Azure portal, navigate to your Stream Analytics job in the **DetectStoppedDevices** resource group. Under **Jobs topology**, select **Inputs**, select **+ Add stream input**, and then select **Event Hub**. -1. Use the information in the following table to configure the input using the event hub you created previously, then select **Save**: -- | Setting | Value | - | - | -- | - | Input alias | *centraltelemetry* | - | Subscription | Your subscription | - | Event Hubs namespace | Your Event Hubs namespace. The name starts with **detect-stopped-devices-ehns-**. | - | Event hub name | Use existing - **centralexport** | - | Event hub consumer group | Use existing - **$default** | --1. Under **Jobs topology**, select **Outputs**, select **+ Add**, and then select **Azure Function**. -1. Use the information in the following table to configure the output, then select **Save**: -- | Setting | Value | - | - | -- | - | Output alias | *emailnotification* | - | Subscription | Your subscription | - | Function app | Your Function app. The name starts with **detect-stopped-devices-function-**. | - | Function | HttpTrigger1 | --1. Under **Jobs topology**, select **Query** and replace the existing query with the following SQL: -- ```sql - with - LeftSide as - ( - SELECT - -- Get the device ID - deviceId as deviceid1, - EventEnqueuedUtcTime AS time1 - FROM - -- Use the event enqueued time for time-based operations - [centraltelemetry] TIMESTAMP BY EventEnqueuedUtcTime - ), - RightSide as - ( - SELECT - -- Get the device ID - deviceId as deviceid2, - EventEnqueuedUtcTime AS time2 - FROM - -- Use the event enqueued time for time-based operations - [centraltelemetry] TIMESTAMP BY EventEnqueuedUtcTime - ) -- SELECT - LeftSide.deviceid1 as deviceid, - LeftSide.time1 as time - INTO - [emailnotification] - FROM - LeftSide - LEFT OUTER JOIN - RightSide - ON - LeftSide.deviceid1=RightSide.deviceid2 AND DATEDIFF(second,LeftSide,RightSide) BETWEEN 1 AND 180 - where - -- Find records where a device didn't send a message for 180 seconds - RightSide.deviceid2 is NULL - ``` --1. Select **Save**. -1. To start the Stream Analytics job, select **Overview**, then **Start**, then **Now**, and then **Start**: ---## Configure export in IoT Central --On the [Azure IoT Central My apps](https://apps.azureiotcentral.com/myapps) page, locate the IoT Central application the script created. The name of the app is **In-store analytics - Condition Monitoring (custom rules scenario)**. --To enable the data export to Event Hubs, navigate to the **Data Export** page and enable the **All telemetry** export. 
-Wait until the export status is **Running** before you continue. --## Test --To test the solution, you can block one of the devices to simulate a stopped device: --1. In your IoT Central application, navigate to the **Devices** page and select one of the two thermostat devices. -1. Select **Block** to stop the device sending telemetry. -1. After about two minutes, the **To** email address receives one or more emails that look like the following example: -- ```txt - The following device(s) have stopped sending telemetry: -- Device ID Time - Thermostat-Zone1 2022-11-01T12:45:14.686Z - ``` --## Tidy up --To tidy up after this how-to and avoid unnecessary costs, delete the **DetectStoppedDevices** resource group in the Azure portal. --## Next steps --In this how-to guide, you learned how to: --* Stream telemetry from an IoT Central application using the data export feature. -* Create a Stream Analytics query that detects when a device stops sending data. -* Send an email notification using the Azure Functions and SendGrid services. --Now that you know how to create custom rules and notifications, the suggested next step is to learn how to [Extend Azure IoT Central with custom analytics](howto-create-custom-analytics.md). |
iot-central | Howto Export To Azure Data Explorer | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-export-to-azure-data-explorer.md | To create the Azure Data Explorer destination in IoT Central on the **Data expor :::image type="content" source="media/howto-export-data/export-destination-managed.png" alt-text="Screenshot of Azure Data Explorer export destination that uses a managed identity."::: -If you don't see data arriving in your destination service, see [Troubleshoot issues with data exports from your Azure IoT Central application](troubleshoot-data-export.md). +If you don't see data arriving in your destination service, see [Troubleshoot issues with data exports from your Azure IoT Central application](troubleshooting.md). |
iot-central | Howto Export To Blob Storage | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-export-to-blob-storage.md | To create the Blob Storage destination in IoT Central on the **Data export** pag 1. Select **Save**. -If you don't see data arriving in your destination service, see [Troubleshoot issues with data exports from your Azure IoT Central application](troubleshoot-data-export.md). +If you don't see data arriving in your destination service, see [Troubleshoot issues with data exports from your Azure IoT Central application](troubleshooting.md). |
iot-central | Howto Export To Event Hubs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-export-to-event-hubs.md | To create the Event Hubs destination in IoT Central on the **Data export** page: 1. Select **Save**. -If you don't see data arriving in your destination service, see [Troubleshoot issues with data exports from your Azure IoT Central application](troubleshoot-data-export.md). +If you don't see data arriving in your destination service, see [Troubleshoot issues with data exports from your Azure IoT Central application](troubleshooting.md). |
iot-central | Howto Export To Service Bus | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-export-to-service-bus.md | To create the Service Bus destination in IoT Central on the **Data export** page 1. Select **Save**. -If you don't see data arriving in your destination service, see [Troubleshoot issues with data exports from your Azure IoT Central application](troubleshoot-data-export.md). +If you don't see data arriving in your destination service, see [Troubleshoot issues with data exports from your Azure IoT Central application](troubleshooting.md). |
iot-central | Howto Monitor Devices Azure Cli | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-monitor-devices-azure-cli.md | az iot central device twin show --app-id <app-id> --device-id <device-id> ## Next steps -A suggested next step is to learn [how to connect Azure IoT Edge for Linux on Windows (EFLOW)](./howto-connect-eflow.md). +A suggested next step is to learn [Troubleshoot why data from your devices isn't showing up in Azure IoT Central](troubleshooting.md). |
iot-central | Howto Set Up Template | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-set-up-template.md | To view and manage the interfaces in your device model: :::image type="content" source="media/howto-set-up-template/device-template.png" alt-text="Screenshot that shows root interface for a model"::: -1. Select the ellipsis to add an inherited interface or component to the root interface. To learn more about interfaces and components, see [multiple components](../../iot-pnp/concepts-modeling-guide.md#multiple-components) in the modeling guide. +1. Select the ellipsis to add an inherited interface or component to the root interface. To learn more about interfaces and components, see [multiple components](../../iot/concepts-modeling-guide.md) in the modeling guide. :::image type="content" source="media/howto-set-up-template/add-interface.png" alt-text="Screenshot that shows how to add interface or component." lightbox="media/howto-set-up-template/add-interface.png"::: The following table shows the configuration settings for a command capability: | Response | If enabled, a definition of the command response, including: name, display name, schema, unit, and display unit. | |Initial value | The default parameter value. This is an IoT Central extension to DTDL. | -To learn more about how devices implement commands, see [Telemetry, property, and command payloads > Commands and long running commands](../../iot-develop/concepts-message-payloads.md#commands). +To learn more about how devices implement commands, see [Telemetry, property, and command payloads > Commands and long running commands](../../iot/concepts-message-payloads.md#commands). #### Offline commands |
iot-central | Howto Transform Data | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-transform-data.md | The following table shows three example transformation types: | Transformation | Description | Example | Notes | ||-|-|-|-| Message Format | Convert to or manipulate JSON messages. | CSV to JSON | At ingress. IoT Central only accepts valid JSON messages. To learn more, see [Telemetry, property, and command payloads](../../iot-develop/concepts-message-payloads.md). | +| Message Format | Convert to or manipulate JSON messages. | CSV to JSON | At ingress. IoT Central only accepts valid JSON messages. To learn more, see [Telemetry, property, and command payloads](../../iot/concepts-message-payloads.md). | | Computations | Math functions that [Azure Functions](../../azure-functions/index.yml) can execute. | Unit conversion from Fahrenheit to Celsius. | Transform using the egress pattern to take advantage of scalable device ingress through direct connection to IoT Central. Transforming the data lets you use IoT Central features such as visualizations and jobs. | | Message Enrichment | Enrichments from external data sources not found in device properties or telemetry. To learn more about internal enrichments, see [Export IoT data to cloud destinations using Blob Storage](howto-export-to-blob-storage.md). | Add weather information to messages using [location data](howto-use-location-data.md) from devices. | Transform using the egress pattern to take advantage of scalable device ingress through direct connection to IoT Central. | |
iot-central | Howto Use Commands | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-use-commands.md | A device can: By default, commands expect a device to be connected and fail if the device can't be reached. If you select the **Queue if offline** option in the device template UI, a command can be queued until a device comes online. These *offline commands* are described in a separate section later in this article. -To learn about the IoT Plug and Play command conventions, see [IoT Plug and Play conventions](../../iot-develop/concepts-convention.md). +To learn about the IoT Plug and Play command conventions, see [IoT Plug and Play conventions](../../iot/concepts-convention.md). -To learn more about the command data that a device exchanges with IoT Central, see [Telemetry, property, and command payloads](../../iot-develop/concepts-message-payloads.md). +To learn more about the command data that a device exchanges with IoT Central, see [Telemetry, property, and command payloads](../../iot/concepts-message-payloads.md). To learn how to manage commands by using the IoT Central REST API, see [How to use the IoT Central REST API to control devices.](../core/howto-control-devices-with-rest-api.md) The following table shows the configuration settings for a command capability: | Request | The payload for the device command.| | Response | The payload of the device command response.| -To learn about the Digital Twin Definition Language (DTDL) that Azure IoT Central uses to define commands in a device template, see [IoT Plug and Play conventions > Commands](../../iot-develop/concepts-convention.md#commands). +To learn about the Digital Twin Definition Language (DTDL) that Azure IoT Central uses to define commands in a device template, see [IoT Plug and Play conventions > Commands](../../iot/concepts-convention.md#commands). Optional fields, such as display name and description, let you add more details to the interface and capabilities. You can call commands on a device that isn't assigned to a device template. To c ## Next steps -Now that you've learned how to use commands in your Azure IoT Central application, see [Telemetry, property, and command payloads](../../iot-develop/concepts-message-payloads.md) to learn more about command parameters and [Create and connect a client application to your Azure IoT Central application](tutorial-connect-device.md) to see complete code samples in different languages. +Now that you've learned how to use commands in your Azure IoT Central application, see [Telemetry, property, and command payloads](../../iot/concepts-message-payloads.md) to learn more about command parameters and [Create and connect a client application to your Azure IoT Central application](tutorial-connect-device.md) to see complete code samples in different languages. |
iot-central | Howto Use Location Data | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-use-location-data.md | You can use location telemetry to create a geofencing rule that generates an ale Now that you've learned how to use properties in your Azure IoT Central application, see: -* [Telemetry, property, and command payloads](../../iot-develop/concepts-message-payloads.md) +* [Telemetry, property, and command payloads](../../iot/concepts-message-payloads.md) * [Create and connect a client application to your Azure IoT Central application](tutorial-connect-device.md) |
iot-central | Howto Use Properties | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-use-properties.md | Properties represent point-in-time values. For example, a device can use a prope You can also define cloud properties in an Azure IoT Central application. Cloud property values are never exchanged with a device and are out of scope for this article. -To learn about the IoT Plug and Play property conventions, see [IoT Plug and Play conventions](../../iot-develop/concepts-convention.md). +To learn about the IoT Plug and Play property conventions, see [IoT Plug and Play conventions](../../iot/concepts-convention.md). -To learn more about the property data that a device exchanges with IoT Central, see [Telemetry, property, and command payloads](../../iot-develop/concepts-message-payloads.md). +To learn more about the property data that a device exchanges with IoT Central, see [Telemetry, property, and command payloads](../../iot/concepts-message-payloads.md). To learn how to manage properties by using the IoT Central REST API, see [How to use the IoT Central REST API to control devices.](../core/howto-control-devices-with-rest-api.md). The following table shows the configuration settings for a property capability. | Comment | Any comments about the property capability. | | Description | A description of the property capability. | -To learn about the Digital Twin Definition Language (DTDL) that Azure IoT Central uses to define properties in a device template, see [IoT Plug and Play conventions > Read-only properties](../../iot-develop/concepts-convention.md#read-only-properties). +To learn about the Digital Twin Definition Language (DTDL) that Azure IoT Central uses to define properties in a device template, see [IoT Plug and Play conventions > Read-only properties](../../iot/concepts-convention.md#read-only-properties). Optional fields, such as display name and description, let you add more details to the interface and capabilities. By default, properties are read-only. Read-only properties let a device report p Azure IoT Central uses device twins to synchronize property values between the device and the Azure IoT Central application. Device property values use device twin reported properties. For more information, see [device twins](../../iot-hub/tutorial-device-twins.md). -A device sends property updates as a JSON payload. For more information, see [Telemetry, property, and command payloads](../../iot-develop/concepts-message-payloads.md). +A device sends property updates as a JSON payload. For more information, see [Telemetry, property, and command payloads](../../iot/concepts-message-payloads.md). You can use the Azure IoT device SDK to send a property update to your Azure IoT Central application. An IoT Central operator sets writable properties on a form. Azure IoT Central se For example implementations in multiple languages, see [Create and connect a client application to your Azure IoT Central application](tutorial-connect-device.md). -The response message should include the `ac` and `av` fields. The `ad` field is optional. To learn more, see [IoT Plug and Play conventions > Writable properties](../../iot-develop/concepts-convention.md#writable-properties). +The response message should include the `ac` and `av` fields. The `ad` field is optional. To learn more, see [IoT Plug and Play conventions > Writable properties](../../iot/concepts-convention.md#writable-properties). 
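As an illustrative sketch of this convention (the `targetTemperature` property name and the values shown are hypothetical, not taken from the linked article), a device might acknowledge a writable property with a reported-property payload shaped like this:

```json
{
  "targetTemperature": {
    "value": 21.5,
    "ac": 200,
    "av": 3,
    "ad": "completed"
  }
}
```

Here `value` echoes the applied value, `ac` is an HTTP-style status code, `av` is the desired-property version being acknowledged, and `ad` is an optional human-readable description.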
When the operator sets a writable property in the Azure IoT Central UI, the application uses a device twin desired property to send the value to the device. The device then responds by using a device twin reported property. When Azure IoT Central receives the reported property value, it updates the property view with a status of **Accepted**. You can update the writable properties in this view: Now that you've learned how to use properties in your Azure IoT Central application, see: -* [IoT Plug and Play conventions](../../iot-develop/concepts-convention.md) -* [Telemetry, property, and command payloads](../../iot-develop/concepts-message-payloads.md) +* [IoT Plug and Play conventions](../../iot/concepts-convention.md) +* [Telemetry, property, and command payloads](../../iot/concepts-message-payloads.md) * [Create and connect a client application to your Azure IoT Central application](tutorial-connect-device.md) |
iot-central | Overview Iot Central Operator | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/overview-iot-central-operator.md | To automate device management tasks, you can use: ## Troubleshoot and remediate device issues -The [troubleshooting guide](troubleshoot-connection.md) helps you to diagnose and remediate common issues. You can use the **Devices** page to block devices that appear to be malfunctioning until the problem is resolved. +The [troubleshooting guide](troubleshooting.md) helps you to diagnose and remediate common issues. You can use the **Devices** page to block devices that appear to be malfunctioning until the problem is resolved. ## Next steps |
iot-central | Overview Iot Central Solution Builder | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/overview-iot-central-solution-builder.md | You can use the data export and rules capabilities in IoT Central to integrate w - [Export IoT data to cloud destinations using Blob Storage](howto-export-to-blob-storage.md). - [Transform data for IoT Central](howto-transform-data.md) - [Use workflows to integrate your Azure IoT Central application with other cloud services](howto-configure-rules-advanced.md)-- [Extend Azure IoT Central with custom rules using Stream Analytics, Azure Functions, and SendGrid](howto-create-custom-rules.md) - [Extend Azure IoT Central with custom analytics using Azure Databricks](howto-create-custom-analytics.md) ## Integrate with companion applications |
iot-central | Troubleshoot Data Export | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/troubleshoot-data-export.md | - Title: Troubleshoot data exports from Azure IoT Central -description: Troubleshoot data exports in IoT Central for issues such as managed identity permissions and virtual network configuration --- Previously updated : 06/12/2023------# Troubleshoot issues with data exports from your Azure IoT Central application --This document helps you find out why the data your IoT Central application isn't reaching it's intended destination or isn't arriving in the correct format. --## Managed identity issues --You're using a managed identity to authorize the connection to an export destination. Data isn't arriving at the export destination. --Before you configure or enable the export destination, make sure that you complete the following steps: --- Enable the managed identity for the IoT Central application. To verify that the managed identity is enabled, go to the **Identity** page for your application in the Azure portal or use the following CLI command:-- ```azurecli - az iot central app identity show --name {your app name} --resource-group {your resource group name} - ``` --- Configure the permissions for the managed identity. To view the assigned permissions, select **Azure role assignments** on the **Identity** page for your app in the Azure portal or use the `az role assignment list` CLI command. The required permissions are:-- | Destination | Permission | - |-|| - | Azure Blob storage | Storage Blob Data Contributor | - | Azure Service Bus | Azure Service Bus Data Sender | - | Azure Event Hubs | Azure Event Hubs Data Sender | - | Azure Data Explorer | Admin | -- If the permissions were not set correctly before you created the destination in your IoT Central application, try removing the destination and then adding it again. --- Configure any virtual networks, private endpoints, and firewall policies.--> [!NOTE] -> If you're using a managed identity to authorize the connection to an export destination, IoT Central doesn't export data from simulated devices. --To learn more, see [Export data](howto-export-data.md?tabs=managed-identity). --## Destination connection issues --The export definition page shows information about failed connections to the export destination: ---## Next steps --If you need more help, you can contact the Azure experts on the [Microsoft Q&A and Stack Overflow forums](https://azure.microsoft.com/support/community/). Alternatively, you can file an [Azure support ticket](https://portal.azure.com/#create/Microsoft.Support). --For more information, see [Azure IoT support and help options](../../iot/iot-support-help.md). |
iot-central | Troubleshooting | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/troubleshooting.md | + + Title: Troubleshooting in Azure IoT Central +description: Troubleshoot and resolve issues with device connections and data export configurations in your IoT Central application +++ Last updated : 02/06/2024+++++#Customer intent: As a device developer, I want to understand why data from my devices isn't showing up in IoT Central, and the steps I can take to rectify the issue. +++# Troubleshooting in Azure IoT Central ++This article includes troubleshooting guidance for device connectivity issues and data export configuration issues in your IoT Central applications. ++## Device connectivity issues ++This section helps you determine if your data is reaching IoT Central. ++If you haven't already done so, install the `az cli` tool and `azure-iot` extension. ++To learn how to install the `az cli`, see [Install the Azure CLI](/cli/azure/install-azure-cli). ++To [install](/cli/azure/azure-cli-reference-for-IoT#extension-reference-installation) the `azure-iot` extension, run the following command: ++```azurecli +az extension add --name azure-iot +``` ++> [!NOTE] +> You may be prompted to install the `uamqp` library the first time you run an extension command. ++When you've installed the `azure-iot` extension, start your device to see if the messages it's sending are making their way to IoT Central. ++Use the following commands to sign in to the subscription that contains your IoT Central application: ++```azurecli +az login +az account set --subscription <your-subscription-id> +``` ++To monitor the telemetry your device is sending, use the following command: ++```azurecli +az iot central diagnostics monitor-events --app-id <iot-central-app-id> --device-id <device-name> +``` ++If the device has connected successfully to IoT Central, you see output similar to the following example: ++```output +Monitoring telemetry. +Filtering on device: device-001 +{ + "event": { + "origin": "device-001", + "module": "", + "interface": "", + "component": "", + "payload": { + "temp": 65.57910343679293, + "humid": 36.16224660107426 + } + } +} +``` ++To monitor the property updates your device is exchanging with IoT Central, use the following preview command: ++```azurecli +az iot central diagnostics monitor-properties --app-id <iot-central-app-id> --device-id <device-name> +``` ++If the device successfully sends property updates, you see output similar to the following example: ++```output +Changes in reported properties: +version : 32 +{'state': 'true', 'name': {'value': {'value': 'Contoso'}, 'status': 'completed', 'desiredVersion': 7, 'ad': 'completed', 'av': 7, 'ac +': 200}, 'brightness': {'value': {'value': 2}, 'status': 'completed', 'desiredVersion': 7, 'ad': 'completed', 'av': 7, 'ac': 200}, 'p +rocessorArchitecture': 'ARM', 'swVersion': '1.0.0'} +``` ++If you see data appear in your terminal, then the data is making it as far as your IoT Central application. ++If you don't see any data appear after a few minutes, try pressing the `Enter` or `return` key on your keyboard, in case the output is stuck. ++If you're still not seeing any data appear on your terminal, it's likely that your device is having network connectivity issues, or isn't sending data correctly to IoT Central. 
++### Check the provisioning status of your device ++If your data isn't appearing in the CLI monitor, check the provisioning status of your device by running the following command: ++```azurecli +az iot central device registration-info --app-id <iot-central-app-id> --device-id <device-name> +``` ++The following output shows an example of a device that's blocked from connecting: ++```json +{ + "@device_id": "v22upeoqx6", + "device_registration_info": { + "device_status": "blocked", + "display_name": "Environmental Sensor - v22upeoqx6", + "id": "v22upeoqx6", + "instance_of": "urn:krhsi_k0u:modelDefinition:w53jukkazs", + "simulated": false + }, + "dps_state": { + "error": "Device is blocked from connecting to IoT Central application. Unblock the device in IoT Central and retry. Learn more: +https://aka.ms/iotcentral-docs-dps-SAS", + "status": null + } +} +``` ++| Device provisioning status | Description | Possible mitigation | +| - | - | - | +| Provisioned | No immediately recognizable issue. | N/A | +| Registered | The device hasn't yet connected to IoT Central. | Check your device logs for connectivity issues. | +| Blocked | The device is blocked from connecting to IoT Central. | Device is blocked from connecting to the IoT Central application. Unblock the device in IoT Central and retry. To learn more, see [Device status values](howto-manage-devices-individually.md#device-status-values). | +| Unapproved | The device isn't approved. | Device isn't approved to connect to the IoT Central application. Approve the device in IoT Central and retry. To learn more, see [Device status values](howto-manage-devices-individually.md#device-status-values) | +| Unassigned | The device isn't assigned to a device template. | Assign the device to a device template so that IoT Central knows how to parse the data. | ++Learn more about [Device status values in the UI](howto-manage-devices-individually.md#device-status-values) and [Device status values in the REST API](howto-manage-devices-with-rest-api.md#get-a-device). ++### Error codes ++If you're still unable to diagnose why your data isn't showing up in `monitor-events`, the next step is to look for error codes reported by your device. ++Start a debugging session on your device, or collect logs from your device. Check for any error codes that the device reports. ++The following tables show the common error codes and possible actions to mitigate. ++If you're seeing issues related to your authentication flow: ++| Error code | Description | Possible Mitigation | +| - | - | - | +| 400 | The body of the request isn't valid. For example, it can't be parsed, or the object can't be validated. | Ensure that you're sending the correct request body as part of the attestation flow, or use a device SDK. | +| 401 | The authorization token can't be validated. For example, it has expired or doesn't apply to the request's URI. This error code is also returned to devices as part of the TPM attestation flow. | Ensure that your device has the correct credentials. | +| 404 | The Device Provisioning Service instance, or a resource such as an enrollment doesn't exist. | [File a ticket with customer support](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/overview). | +| 412 | The `ETag` in the request doesn't match the `ETag` of the existing resource, as per RFC7232. | [File a ticket with customer support](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/overview). | +| 429 | The service is throttling operations. 
For specific service limits, see [IoT Hub Device Provisioning Service limits](../../azure-resource-manager/management/azure-subscription-service-limits.md#iot-hub-device-provisioning-service-limits). | Reduce message frequency, or split responsibilities among more devices. | +| 500 | An internal error occurred. | [File a ticket with customer support](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/overview) to see if they can help you further. | ++### Detailed authorization error codes ++| Error | Sub error code | Notes | +| - | - | - | +| 401 Unauthorized | 401002 | The device is using invalid or expired credentials. DPS reports this error. | +| 401 Unauthorized | 400209 | The device is either waiting for approval by an operator or an operator has blocked it. | +| 401 IoTHubUnauthorized | | The device is using an expired security token. IoT Hub reports this error. | +| 401 IoTHubUnauthorized | DEVICE_DISABLED | The device is disabled in this IoT hub and has moved to another IoT hub. Reprovision the device. | +| 401 IoTHubUnauthorized | DEVICE_BLOCKED | An operator has blocked this device. | ++### File upload error codes ++Here's a list of common error codes you might see when a device tries to upload a file to the cloud. Remember that before your device can upload a file, you must configure [device file uploads](howto-configure-file-uploads.md) in your application. ++| Error code | Description | Possible Mitigation | +| - | - | - | +| 403006 | You've exceeded the number of concurrent file upload operations. Each device client is limited to 10 concurrent file uploads. | Ensure the device promptly notifies IoT Central that the file upload operation has completed. If that doesn't work, try reducing the request timeout. | ++## Unmodeled data issues ++When you've established that your device is sending data to IoT Central, the next step is to ensure that your device is sending data in a valid format. ++To detect which category your issue falls into, run the most appropriate Azure CLI command for your scenario: ++- To validate telemetry, use the preview command: ++ ```azurecli + az iot central diagnostics validate-messages --app-id <iot-central-app-id> --device-id <device-name> + ``` ++- To validate property updates, use the preview command: ++ ```azurecli + az iot central diagnostics validate-properties --app-id <iot-central-app-id> --device-id <device-name> + ``` ++You may be prompted to install the `uamqp` library the first time you run a `validate` command. ++The three common types of issue that prevent device data from appearing in IoT Central are: ++- Device template to device data mismatch. +- Data is invalid JSON. +- Old versions of IoT Edge cause telemetry from components to display incorrectly as unmodeled data. ++### Device template to device data mismatch ++A device must use the same name and casing as used in the device template for any telemetry field names in the payload it sends. The following output shows an example warning message where the device is sending a telemetry value called `Temperature`, when it should be `temperature`: ++```output +Validating telemetry. +Filtering on device: sample-device-01. +Exiting after 300 second(s), or 10 message(s) have been parsed (whichever happens first). +[WARNING] [DeviceId: sample-device-01] [TemplateId: urn:modelDefinition:ofhmazgddj:vmjwwjuvdzg] Device is sending data that has not been defined in the device template. Following capabilities have NOT been defined in the device template '['Temperature']'.
Following capabilities have been defined in the device template (grouped by components) '{'thermostat1': ['temperature', 'targetTemperature', 'maxTempSinceLastReboot', 'getMaxMinReport'], 'thermostat2': ['temperature', 'targetTemperature', 'maxTempSinceLastReboot', 'getMaxMinReport'], 'deviceInformation': ['manufacturer', 'model', 'swVersion', 'osName', 'processorArchitecture', 'processorManufacturer', 'totalStorage', 'totalMemory']}'. +``` ++A device must use the same name and casing as used in the device template for any property names in the payload it sends. The following output shows an example warning message where the property `osVersion` isn't defined in the device template: ++```output +Command group 'iot central diagnostics' is in preview and under development. Reference and support levels: https://aka.ms/CLI_refstatus +[WARNING] [DeviceId: sample-device-01] [TemplateId: urn:modelDefinition:ofhmazgddj:vmjwwjuvdzg] Device is sending data that has not been defined in the device template. Following capabilities have NOT been defined in the device template '['osVersion']'. Following capabilities have been defined in the device template (grouped by components) '{'thermostat1': ['temperature', 'targetTemperature', 'maxTempSinceLastReboot', 'getMaxMinReport', 'rundiagnostics'], 'thermostat2': ['temperature', 'targetTemperature', 'maxTempSinceLastReboot', 'getMaxMinReport', 'rundiagnostics'], 'deviceInformation': ['manufacturer', 'model', 'swVersion', 'osName', 'processorArchitecture', 'processorManufacturer', 'totalStorage', 'totalMemory']}'. +``` ++A device must use the data types defined in the device template for any telemetry or property values. For example, you see a schema mismatch if the type defined in the device template is boolean, but the device sends a string. The following output shows an example error message where the device is using a string value for a property that's defined as a double: ++```output +Command group 'iot central diagnostics' is in preview and under development. Reference and support levels: https://aka.ms/CLI_refstatus +Validating telemetry. +Filtering on device: sample-device-01. +Exiting after 300 second(s), or 10 message(s) have been parsed (whichever happens first). +[ERROR] [DeviceId: sample-device-01] [TemplateId: urn:modelDefinition:ofhmazgddj:vmjwwjuvdzg] Datatype of telemetry field 'temperature' does not match the datatype double. Data sent by the device : curr_temp. For more information, see: https://aka.ms/iotcentral-payloads +``` ++The validation commands also report an error if the same telemetry name is defined in multiple interfaces, but the device isn't IoT Plug and Play compliant. ++If you prefer to use a GUI, use the IoT Central **Raw data** view to see if something isn't being modeled. +++When you've detected the issue, you may need to update device firmware or create a new device template that models previously unmodeled data. ++If you chose to create a new template that models the data correctly, migrate devices from your old template to the new template. To learn more, see [Manage devices in your Azure IoT Central application](howto-manage-devices-individually.md). ++### Invalid JSON ++If there are no errors reported, but a value isn't appearing, then it's probably malformed JSON in the payload the device sends. To learn more, see [Telemetry, property, and command payloads](../../iot/concepts-message-payloads.md).
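As a hypothetical illustration, a device that assembles its payload by hand might drop a closing brace and send `{"temperature": 21.3`. Because that payload can't be parsed, nothing appears in IoT Central even though the message was delivered. The well-formed equivalent looks like:

```json
{
  "temperature": 21.3
}
```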
++You can't use the validate commands or the **Raw data** view in the UI to detect if the device is sending malformed JSON. ++### IoT Edge version ++To display telemetry from components hosted in IoT Edge modules correctly, use [IoT Edge version 1.2.4](https://github.com/Azure/azure-iotedge/releases/tag/1.2.4) or later. If you use an earlier version, telemetry from components in IoT Edge modules displays as *_unmodeleddata*. ++## Data export managed identity issues ++This section applies if you're using a managed identity to authorize the connection to an export destination and data isn't arriving at the destination. ++Before you configure or enable the export destination, make sure that you complete the following steps: ++- Enable the managed identity for the IoT Central application. To verify that the managed identity is enabled, go to the **Identity** page for your application in the Azure portal or use the following CLI command: ++ ```azurecli + az iot central app identity show --name {your app name} --resource-group {your resource group name} + ``` ++- Configure the permissions for the managed identity. To view the assigned permissions, select **Azure role assignments** on the **Identity** page for your app in the Azure portal or use the `az role assignment list` CLI command. The required permissions are: ++ | Destination | Permission | + |-|-| + | Azure Blob storage | Storage Blob Data Contributor | + | Azure Service Bus | Azure Service Bus Data Sender | + | Azure Event Hubs | Azure Event Hubs Data Sender | + | Azure Data Explorer | Admin | ++ If the permissions weren't set correctly before you created the destination in your IoT Central application, try removing the destination and then adding it again. ++- Configure any virtual networks, private endpoints, and firewall policies. ++> [!NOTE] +> If you're using a managed identity to authorize the connection to an export destination, IoT Central doesn't export data from simulated devices. ++To learn more, see [Export data](howto-export-data.md?tabs=managed-identity). ++## Data export destination connection issues ++The export definition page shows information about failed connections to the export destination. +++## Next steps ++If you need more help, you can contact the Azure experts on the [Microsoft Q&A and Stack Overflow forums](https://azure.microsoft.com/support/community/). Alternatively, you can file an [Azure support ticket](https://portal.azure.com/#create/Microsoft.Support). ++For more information, see [Azure IoT support and help options](../../iot/iot-support-help.md). |
iot-dps | How To Send Additional Data | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/how-to-send-additional-data.md | Common scenarios for sending optional payloads are: * [Custom allocation policies](concepts-custom-allocation.md) can use the device payload to help select an IoT hub for a device or set its initial twin. For example, you may want to allocate your devices based on the device model. In this case, you can configure the device to report its model information when it registers. DPS will pass the device's payload to the custom allocation webhook. Then your webhook can decide which IoT hub the device will be provisioned to based on the device model information. If needed, the webhook can also return data back to the device as a JSON object in the webhook response. To learn more, see [Use device payloads in custom allocation](concepts-custom-allocation.md#use-device-payloads-in-custom-allocation). -* [IoT Plug and Play (PnP)](../iot-develop/overview-iot-plug-and-play.md) devices *may* use the payload to send their model ID when they register with DPS. You can find examples of this usage in the PnP samples in the SDK or sample repositories. For example, [C# PnP thermostat](https://github.com/Azure/azure-iot-sdk-csharp/blob/main/iothub/device/samples/solutions/PnpDeviceSamples/Thermostat/Program.cs) or [Node.js PnP temperature controller](https://github.com/Azure/azure-iot-sdk-node/blob/main/device/samples/javascript/pnp_temperature_controller.js). +* [IoT Plug and Play (PnP)](../iot/overview-iot-plug-and-play.md) devices *may* use the payload to send their model ID when they register with DPS. You can find examples of this usage in the PnP samples in the SDK or sample repositories. For example, [C# PnP thermostat](https://github.com/Azure/azure-iot-sdk-csharp/blob/main/iothub/device/samples/solutions/PnpDeviceSamples/Thermostat/Program.cs) or [Node.js PnP temperature controller](https://github.com/Azure/azure-iot-sdk-node/blob/main/device/samples/javascript/pnp_temperature_controller.js). -* [IoT Central](../iot-central/core/overview-iot-central.md) devices that connect through DPS *should* follow [IoT Plug and Play conventions](..//iot-develop/concepts-convention.md) and send their model ID when they register. IoT Central uses the model ID to assign the device to the correct device template. To learn more, see [Device implementation and best practices for IoT Central](../iot-central/core/concepts-device-implementation.md). +* [IoT Central](../iot-central/core/overview-iot-central.md) devices that connect through DPS *should* follow [IoT Plug and Play conventions](..//iot/concepts-convention.md) and send their model ID when they register. IoT Central uses the model ID to assign the device to the correct device template. To learn more, see [Device implementation and best practices for IoT Central](../iot-central/core/concepts-device-implementation.md). ## Device sends data payload to DPS |
iot-hub-device-update | Device Update Agent Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub-device-update/device-update-agent-overview.md | -* The *interface layer* builds on top of [Azure IoT Plug and Play](../iot-develop/overview-iot-plug-and-play.md), allowing for messaging to flow between the Device Update agent and Device Update service. +* The *interface layer* builds on top of [Azure IoT Plug and Play](../iot/overview-iot-plug-and-play.md), allowing for messaging to flow between the Device Update agent and Device Update service. * The *platform layer* is responsible for the high-level update actions of download, install, and apply that may be platform- or device-specific. :::image type="content" source="media/understand-device-update/client-agent-reference-implementations.png" alt-text="Agent Implementations." lightbox="media/understand-device-update/client-agent-reference-implementations.png"::: |
iot-hub-device-update | Device Update Plug And Play | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub-device-update/device-update-plug-and-play.md | -Device Update for IoT Hub uses [IoT Plug and Play](../iot-develop/overview-iot-plug-and-play.md) to discover and manage devices that are over-the-air update capable. The Device Update service sends and receives properties and messages to and from devices using IoT Plug and Play interfaces. +Device Update for IoT Hub uses [IoT Plug and Play](../iot/overview-iot-plug-and-play.md) to discover and manage devices that are over-the-air update capable. The Device Update service sends and receives properties and messages to and from devices using IoT Plug and Play interfaces. For more information: -* Understand the [IoT Plug and Play device client](../iot-develop/concepts-developer-guide-device.md). +* Understand the [IoT Plug and Play device client](../iot/concepts-developer-guide-device.md). * See how the [Device Update agent is implemented](https://github.com/Azure/iot-hub-device-update/blob/main/docs/agent-reference/how-to-build-agent-code.md). ## Device Update Models -Model ID is how smart devices advertise their capabilities to Azure IoT applications with IoT Plug and Play. To learn more about how to build smart devices that advertise their capabilities to Azure IoT applications, see the [IoT Plug and Play device developer guide](../iot-develop/concepts-developer-guide-device.md). +Model ID is how smart devices advertise their capabilities to Azure IoT applications with IoT Plug and Play. To learn more about how to build smart devices that advertise their capabilities to Azure IoT applications, see the [IoT Plug and Play device developer guide](../iot/concepts-developer-guide-device.md). -Device Update for IoT Hub requires the IoT Plug and Play smart device to announce a model ID as part of the device connection. [Learn how to announce a model ID](../iot-develop/concepts-developer-guide-device.md#model-id-announcement). +Device Update for IoT Hub requires the IoT Plug and Play smart device to announce a model ID as part of the device connection. [Learn how to announce a model ID](../iot/concepts-developer-guide-device.md#model-id-announcement). Device Update has two PnP models defined that support DU features. The Device Update model, '**dtmi:azure:iot:deviceUpdateContractModel;2**', supports the core functionality and uses the device update core interface to send update actions and metadata to devices and receive update status from devices. IoT Hub device twin example: ``` >[!NOTE]->The device or module must add the `{"__t": "c"}` marker to indicate that the element refers to a component. For more information, see [IoT Plug and Play conventions](../iot-develop/concepts-convention.md#sample-multiple-components-writable-property). +>The device or module must add the `{"__t": "c"}` marker to indicate that the element refers to a component. For more information, see [IoT Plug and Play conventions](../iot/concepts-convention.md#sample-multiple-components-writable-property). #### State The **action** field represents the actions taken by the Device Update agent as ## Device information interface -The device information interface is a concept used within [IoT Plug and Play architecture](../iot-develop/overview-iot-plug-and-play.md). It contains device-to-cloud properties that provide information about the hardware and operating system of the device.
Device Update for IoT Hub uses the `DeviceInformation.manufacturer` and `DeviceInformation.model` properties for telemetry and diagnostics. To learn more, see this [example of the device information interface](https://devicemodels.azure.com/dtmi/azure/devicemanagement/deviceinformation-1.json). +The device information interface is a concept used within [IoT Plug and Play architecture](../iot/overview-iot-plug-and-play.md). It contains device-to-cloud properties that provide information about the hardware and operating system of the device. Device Update for IoT Hub uses the `DeviceInformation.manufacturer` and `DeviceInformation.model` properties for telemetry and diagnostics. To learn more, see this [example of the device information interface](https://devicemodels.azure.com/dtmi/azure/devicemanagement/deviceinformation-1.json). -The expected component name in your model is **deviceInformation** when this interface is implemented. [Learn about Azure IoT Plug and Play Components](../iot-develop/concepts-modeling-guide.md) +The expected component name in your model is **deviceInformation** when this interface is implemented. [Learn about Azure IoT Plug and Play Components](../iot/concepts-modeling-guide.md). |Name|Type|Schema|Direction|Description|Example| |-|-|-|-|-|-| |
iot-hub | Iot Concepts And Iot Hub | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-concepts-and-iot-hub.md | Examples of telemetry received from a device can include sensor data such as spe Properties can be read or set from the IoT hub and can be used to send notifications when an action has completed. An example of a specific property on a device is temperature. Temperature can be a writable property that can be updated on the device or read from a temperature sensor attached to the device. -You can enable properties in IoT Hub using [Device twins](iot-hub-devguide-device-twins.md) or [Plug and Play](../iot-develop/overview-iot-plug-and-play.md). +You can enable properties in IoT Hub using [Device twins](iot-hub-devguide-device-twins.md) or [Plug and Play](../iot/overview-iot-plug-and-play.md). -To learn more about the differences between device twins and Plug and Play, see [Plug and Play](../iot-develop/concepts-digital-twin.md#device-twins-and-digital-twins). +To learn more about the differences between device twins and Plug and Play, see [Plug and Play](../iot/concepts-digital-twin.md#device-twins-and-digital-twins). ## Device commands |
iot-hub | Iot Hub Devguide C2d Guidance | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-devguide-c2d-guidance.md | IoT Hub provides three options for device apps to expose functionality to a back * [Cloud-to-device messages](iot-hub-devguide-messages-c2d.md) for one-way notifications to the device app. -To learn how [Azure IoT Plug and Play](../iot-develop/overview-iot-plug-and-play.md) uses these options to control IoT Plug and Play devices, see [IoT Plug and Play service developer guide](../iot-develop/concepts-developer-guide-service.md). +To learn how [Azure IoT Plug and Play](../iot/overview-iot-plug-and-play.md) uses these options to control IoT Plug and Play devices, see [IoT Plug and Play service developer guide](../iot/concepts-developer-guide-service.md). [!INCLUDE [iot-hub-basic](../../includes/iot-hub-basic-whole.md)] |
iot-hub | Iot Hub Devguide Device Twins | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-devguide-device-twins.md | Refer to [Device-to-cloud communication guidance](iot-hub-devguide-d2c-guidance. Refer to [Cloud-to-device communication guidance](iot-hub-devguide-c2d-guidance.md) for guidance on using desired properties, direct methods, or cloud-to-device messages. -To learn how device twins relate to the device model used by an Azure IoT Plug and Play device, see [Understand IoT Plug and Play digital twins](../iot-develop/concepts-digital-twin.md). +To learn how device twins relate to the device model used by an Azure IoT Plug and Play device, see [Understand IoT Plug and Play digital twins](../iot/concepts-digital-twin.md). ## Device twins In the previous example, the `telemetryConfig` device twin desired and reported > The preceding snippets are examples, optimized for readability, of one way to encode a device configuration and its status. IoT Hub does not impose a specific schema for the device twin desired and reported properties in the device twins. > [!IMPORTANT]-> IoT Plug and Play defines a schema that uses several additional properties to synchronize changes to desired and reported properties. If your solution uses IoT Plug and Play, you must follow the Plug and Play conventions when updating twin properties. For more information and an example, see [Writable properties in IoT Plug and Play](../iot-develop/concepts-convention.md#writable-properties). +> IoT Plug and Play defines a schema that uses several additional properties to synchronize changes to desired and reported properties. If your solution uses IoT Plug and Play, you must follow the Plug and Play conventions when updating twin properties. For more information and an example, see [Writable properties in IoT Plug and Play](../iot/concepts-convention.md#writable-properties). You can use twins to synchronize long-running operations such as firmware updates. For more information on how to use properties to synchronize and track a long running operation across devices, see [Use desired properties to configure devices](tutorial-device-twins.md). |
iot-hub | Iot Hub Devguide Messages D2c | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-devguide-messages-d2c.md | In addition to device telemetry, message routing also enables sending non-teleme * Digital twin change events * Device connection state events -For example, if a route is created with the data source set to **Device Twin Change Events**, IoT Hub sends messages to the endpoint that contain the change in the device twin. Similarly, if a route is created with the data source set to **Device Lifecycle Events**, IoT Hub sends a message indicating whether the device or module was deleted or created. For more information about device lifecycle events, see [Device and module lifecycle notifications](./iot-hub-devguide-identity-registry.md#device-and-module-lifecycle-notifications). When using [Azure IoT Plug and Play](../iot-develop/overview-iot-plug-and-play.md), a developer can create routes with the data source set to **Digital Twin Change Events** and IoT Hub sends messages whenever a digital twin property is set or changed, a digital twin is replaced, or when a change event happens for the underlying device twin. Finally, if a route is created with the data source set to **Device Connection State Events**, IoT Hub sends a message indicating whether the device was connected or disconnected. +For example, if a route is created with the data source set to **Device Twin Change Events**, IoT Hub sends messages to the endpoint that contain the change in the device twin. Similarly, if a route is created with the data source set to **Device Lifecycle Events**, IoT Hub sends a message indicating whether the device or module was deleted or created. For more information about device lifecycle events, see [Device and module lifecycle notifications](./iot-hub-devguide-identity-registry.md#device-and-module-lifecycle-notifications). When using [Azure IoT Plug and Play](../iot/overview-iot-plug-and-play.md), a developer can create routes with the data source set to **Digital Twin Change Events** and IoT Hub sends messages whenever a digital twin property is set or changed, a digital twin is replaced, or when a change event happens for the underlying device twin. Finally, if a route is created with the data source set to **Device Connection State Events**, IoT Hub sends a message indicating whether the device was connected or disconnected. ++[IoT Hub also integrates with Azure Event Grid](iot-hub-event-grid.md) to publish device events to support real-time integrations and automation of workflows based on these events. See key [differences between message routing and Event Grid](iot-hub-event-grid-routing-comparison.md) to learn which works best for your scenario. |
iot-hub | Iot Hub Devguide Module Twins | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-devguide-module-twins.md | In the previous example, the `telemetryConfig` module twin desired and reported > The preceding snippets are examples, optimized for readability, of one way to encode a module configuration and its status. IoT Hub does not impose a specific schema for the module twin desired and reported properties in the module twins. > [!IMPORTANT]-> IoT Plug and Play defines a schema that uses several additional properties to synchronize changes to desired and reported properties. If your solution uses IoT Plug and Play, you must follow the Plug and Play conventions when updating twin properties. For more information and an example, see [Writable properties in IoT Plug and Play](../iot-develop/concepts-convention.md#writable-properties). +> IoT Plug and Play defines a schema that uses several additional properties to synchronize changes to desired and reported properties. If your solution uses IoT Plug and Play, you must follow the Plug and Play conventions when updating twin properties. For more information and an example, see [Writable properties in IoT Plug and Play](../iot/concepts-convention.md#writable-properties). ## Back-end operations |
iot-hub | Iot Hub Devguide Sdks | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-devguide-sdks.md | -* [**IoT Hub device SDKs**](#azure-iot-hub-device-sdks) enable you to build apps that run on your IoT devices using the device client or module client. These apps send telemetry to your IoT hub, and optionally receive messages, jobs, methods, or twin updates from your IoT hub. You can use these SDKs to build device apps that use [Azure IoT Plug and Play](../iot-develop/overview-iot-plug-and-play.md) conventions and models to advertise their capabilities to IoT Plug and Play-enabled applications. You can also use the module client to author [modules](../iot-edge/iot-edge-modules.md) for [Azure IoT Edge runtime](../iot-edge/about-iot-edge.md). +* [**IoT Hub device SDKs**](#azure-iot-hub-device-sdks) enable you to build apps that run on your IoT devices using the device client or module client. These apps send telemetry to your IoT hub, and optionally receive messages, jobs, methods, or twin updates from your IoT hub. You can use these SDKs to build device apps that use [Azure IoT Plug and Play](../iot/overview-iot-plug-and-play.md) conventions and models to advertise their capabilities to IoT Plug and Play-enabled applications. You can also use the module client to author [modules](../iot-edge/iot-edge-modules.md) for [Azure IoT Edge runtime](../iot-edge/about-iot-edge.md). * [**IoT Hub service SDKs**](#azure-iot-hub-service-sdks) enable you to build backend applications to manage your IoT hub, and optionally send messages, schedule jobs, invoke direct methods, or send desired property updates to your IoT devices or modules. |
iot-hub | Iot Hub Device Streams Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-device-streams-overview.md | To learn more about using Azure Monitor with IoT Hub, see [Monitor IoT Hub](moni ## Regional availability -During public preview, IoT Hub device streams are available in the Central US, Central US EUAP, North Europe, and Southeast Asia regions. Please make sure you create your hub in one of these regions. +During public preview, IoT Hub device streams are available in the Central US, East US EUAP, North Europe, and Southeast Asia regions. Please make sure you create your hub in one of these regions. ## SDK availability |
iot-hub | Iot Hub Scaling | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-scaling.md | The standard tier of IoT Hub enables all features, and is required for any IoT s | [Device twins](iot-hub-devguide-device-twins.md), [module twins](iot-hub-devguide-module-twins.md), and [device management](iot-hub-device-management-overview.md) | | Yes | | [Device streams (preview)](iot-hub-device-streams-overview.md) | | Yes | | [Azure IoT Edge](../iot-edge/about-iot-edge.md) | | Yes |-| [IoT Plug and Play](../iot-develop/overview-iot-plug-and-play.md) | | Yes | +| [IoT Plug and Play](../iot/overview-iot-plug-and-play.md) | | Yes | IoT Hub also offers a free tier that is meant for testing and evaluation. It has all the capabilities of the standard tier, but includes limited messaging allowances. You can't upgrade from the free tier to either the basic or standard tier. |
iot-hub | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/policy-reference.md | Title: Built-in policy definitions for Azure IoT Hub description: Lists Azure Policy built-in policy definitions for Azure IoT Hub. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 01/30/2024 Last updated : 02/06/2024 |
iot | Concepts Architecture | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot/concepts-architecture.md | + + Title: IoT Plug and Play architecture | Microsoft Docs +description: Understand the key architectural elements of an IoT Plug and Play solution. ++ Last updated : 1/23/2024++++++# IoT Plug and Play architecture ++IoT Plug and Play enables solution builders to integrate IoT devices with their solutions without any manual configuration. At the core of IoT Plug and Play is a device _model_ that describes a device's capabilities to an IoT Plug and Play-enabled application. This model is structured as a set of interfaces that define: ++- _Properties_ that represent the read-only or writable state of a device or other entity. For example, a device serial number may be a read-only property and a target temperature on a thermostat may be a writable property. +- _Telemetry_ that's the data emitted by a device, whether the data is a regular stream of sensor readings, an occasional error, or an information message. +- _Commands_ that describe a function or operation that can be done on a device. For example, a command could reboot a gateway or take a picture using a remote camera. ++Every model and interface has a unique ID. ++The following diagram shows the key elements of an IoT Plug and Play solution: +++## Model repository ++The [model repository](./concepts-model-repository.md) is a store for model and interface definitions. You define models and interfaces using the [Digital Twins Definition Language (DTDL)](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/README.md). ++The web UI lets you manage the models and interfaces. ++The model repository has built-in role-based access controls that let you manage access to interface definitions. ++## Devices ++A device builder implements the code to run on an IoT device using one of the [Azure IoT device SDKs](../iot-develop/about-iot-sdks.md). The device SDKs help the device builder to: ++- Connect securely to an IoT hub. +- Register the device with your IoT hub and announce the model ID that identifies the collection of DTDL interfaces the device implements. +- Synchronize the properties defined in the DTDL interfaces between the device and your IoT hub. +- Add command handlers for the commands defined in the DTDL interfaces. +- Send telemetry to the IoT hub. ++## IoT Edge gateway ++An IoT Edge gateway acts as an intermediary to connect IoT Plug and Play devices that can't connect directly to an IoT hub. To learn more, see [How an IoT Edge device can be used as a gateway](../iot-edge/iot-edge-as-gateway.md). ++## IoT Edge modules ++An _IoT Edge module_ lets you deploy and manage business logic on the edge. Azure IoT Edge modules are the smallest unit of computation managed by IoT Edge, and can contain Azure services (such as Azure Stream Analytics) or your own solution-specific code. ++The _IoT Edge hub_ is one of the modules that make up the Azure IoT Edge runtime. It acts as a local proxy for IoT Hub by exposing the same protocol endpoints as IoT Hub. This consistency means that clients (whether devices or modules) can connect to the IoT Edge runtime just as they would to IoT Hub. ++The device SDKs help a module builder to: ++- Use the IoT Edge hub to connect securely to your IoT hub. +- Register the module with your IoT hub and announce the model ID that identifies the collection of DTDL interfaces the module implements.
+- Synchronize the properties defined in the DTDL interfaces between the device and your IoT hub. +- Add command handlers for the commands defined in the DTDL interfaces. +- Send telemetry to the IoT hub. ++## IoT Hub ++[IoT Hub](../iot-hub/about-iot-hub.md) is a cloud-hosted service that acts as a central message hub for bi-directional communication between your IoT solution and the devices it manages. ++An IoT hub: ++- Makes the model ID implemented by a device available to a backend solution. +- Maintains the digital twin associated with each IoT Plug and Play device connected to the hub. +- Forwards telemetry streams to other services for processing or storage. +- Routes digital twin change events to other services to enable device monitoring. ++## Backend solution ++A backend solution monitors and controls connected devices by interacting with digital twins in the IoT hub. Use one of the Azure IoT service SDKs to implement your backend solution. To understand the capabilities of a connected device, the solution backend: ++1. Retrieves the model ID the device registered with the IoT hub. +1. Uses the model ID to retrieve the interface definitions from any model repository. +1. Uses the model parser to extract information from the interface definitions. ++The backend solution can use the information from the interface definitions to: ++- Read property values reported by devices. +- Update writable properties on a device. +- Call commands implemented by a device. +- Understand the format of telemetry sent by a device. ++## Next steps ++Now that you have an overview of the architecture of an IoT Plug and Play solution, the next steps are to learn more about: ++- [The model repository](./concepts-model-repository.md) +- [Digital twin model integration](./concepts-model-discovery.md) +- [Developing for IoT Plug and Play](./concepts-developer-guide-device.md) |
iot | Concepts Convention | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot/concepts-convention.md | + + Title: IoT Plug and Play conventions | Microsoft Docs +description: Description of the conventions IoT Plug and Play expects devices to use when they send telemetry and properties, and handle commands and property updates. ++ Last updated : 1/23/2024+++++# IoT Plug and Play conventions ++IoT Plug and Play devices should follow a set of conventions when they exchange messages with an IoT hub. IoT Plug and Play devices use the MQTT protocol to communicate with IoT Hub. IoT Hub also supports the AMQP protocol, which is available in some IoT device SDKs. ++A device can include [modules](../iot-hub/iot-hub-devguide-module-twins.md), or be implemented in an [IoT Edge module](../iot-edge/about-iot-edge.md) hosted by the IoT Edge runtime. ++You describe the telemetry, properties, and commands that an IoT Plug and Play device implements with a [Digital Twins Definition Language (DTDL)](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/README.md) _model_. There are two types of model referred to in this article: ++- **No component** - A model with no components. The model declares telemetry, properties, and commands as top-level elements in the contents section of the main interface. In the Azure IoT explorer tool, this model appears as a single _default component_. +- **Multiple components** - A model composed of two or more interfaces. A main interface, which appears as the _default component_, with telemetry, properties, and commands. One or more interfaces declared as components with more telemetry, properties, and commands. ++For more information, see [IoT Plug and Play modeling guide](concepts-modeling-guide.md). ++## Identify the model ++To announce the model it implements, an IoT Plug and Play device or module includes the model ID in the MQTT connection packet by adding `model-id` to the `USERNAME` field. ++To identify the model that a device or module implements, a service can get the model ID from: ++- The device twin `modelId` field. +- The digital twin `$metadata.$model` field. +- A digital twin change notification. ++## Telemetry ++- Telemetry sent from a no component device doesn't require any extra metadata. The system adds the `dt-dataschema` property. +- Telemetry sent from a device using components must add the component name to the telemetry message. +- When using MQTT, add the `$.sub` property with the component name to the telemetry topic; the system adds the `dt-subject` property. +- When using AMQP, add the `dt-subject` property with the component name as a message annotation. ++> [!NOTE] +> Telemetry from components requires one message per component. ++For more telemetry examples, see [Payloads > Telemetry](concepts-message-payloads.md#telemetry). ++## Read-only properties ++A device sets a read-only property, which it then reports to the back-end application. ++### Sample no component read-only property ++A device or module can send any valid JSON that follows the DTDL rules.
++DTDL that defines a property on an interface: ++```json +{ + "@context": "dtmi:dtdl:context;2", + "@id": "dtmi:example:Thermostat;1", + "@type": "Interface", + "contents": [ + { + "@type": "Property", + "name": "temperature", + "schema": "double" + } + ] +} +``` ++Sample reported property payload: ++```json +"reported" : +{ + "temperature" : 21.3 +} +``` ++### Sample multiple components read-only property ++The device or module must add the `{"__t": "c"}` marker to indicate that the element refers to a component. ++DTDL that references a component: ++```json +{ + "@context": "dtmi:dtdl:context;2", + "@id": "dtmi:com:example:TemperatureController;1", + "@type": "Interface", + "displayName": "Temperature Controller", + "contents": [ + { + "@type" : "Component", + "schema": "dtmi:com:example:Thermostat;1", + "name": "thermostat1" + } + ] +} +``` ++DTDL that defines the component: ++```json +{ + "@context": "dtmi:dtdl:context;2", + "@id": "dtmi:com:example:Thermostat;1", + "@type": "Interface", + "contents": [ + { + "@type": "Property", + "name": "temperature", + "schema": "double" + } + ] +} +``` ++Sample reported property payload: ++```json +"reported": { + "thermostat1": { + "__t": "c", + "temperature": 21.3 + } +} +``` ++For more read-only property examples, see [Payloads > Properties](concepts-message-payloads.md#properties). ++## Writable properties ++A back-end application sets a writable property that IoT Hub then sends to the device. ++The device or module should confirm that it received the property by sending a reported property. The reported property should include: ++- `value` - the actual value of the property (typically the received value, but the device may decide to report a different value). +- `ac` - an acknowledgment code that uses an HTTP status code. +- `av` - an acknowledgment version that refers to the `$version` of the desired property. You can find this value in the desired property JSON payload. +- `ad` - an optional acknowledgment description. ++### Acknowledgment responses ++When reporting writable properties, the device should compose the acknowledgment message by using the four fields in the previous list to indicate the actual device state, as described in the following table: ++|Status(ac)|Version(av)|Value(value)|Description(ad)| +|:|:|:|:| +|200|Desired version|Desired value|Desired property value accepted| +|202|Desired version|Value accepted by the device|Desired property value accepted, update in progress (should finish with 200)| +|203|0|Value set by the device|Property set by the device, not reflecting any desired property| +|400|Desired version|Actual value used by the device|Desired property value not accepted| +|500|Desired version|Actual value used by the device|Exception when applying the property| ++When a device starts up, it should request the device twin and check for any writable property updates. If the version of a writable property increased while the device was offline, the device should send a reported property response to confirm that it received the update. ++When a device starts up for the first time, it can send an initial value for a reported property if it doesn't receive an initial desired property from the IoT hub. In this case, the device can send the default value with `av` set to `0` and `ac` set to `203`. For example: ++```json +"reported": { + "targetTemperature": { + "value": 20.0, + "ac": 203, + "av": 0, + "ad": "initialize" + } +} +``` ++A device can use the reported property to provide other information to the hub.
For example, the device could respond with a series of in-progress messages such as: ++```json +"reported": { + "targetTemperature": { + "value": 35.0, + "ac": 202, + "av": 3, + "ad": "In-progress - reporting current temperature" + } +} +``` ++When the device reaches the target temperature, it sends the following message: ++```json +"reported": { + "targetTemperature": { + "value": 20.0, + "ac": 200, + "av": 4, + "ad": "Reached target temperature" + } +} +``` ++A device could report an error such as: ++```json +"reported": { + "targetTemperature": { + "value": 120.0, + "ac": 500, + "av": 3, + "ad": "Target temperature out of range. Valid range is 10 to 99." + } +} +``` ++### Object type ++If a writable property is defined as an object, the service must send a complete object to the device. The device should acknowledge the update by sending sufficient information back to the service for the service to understand how the device has acted on the update. This response could include: ++- The entire object. +- Just the fields that the device updated. +- A subset of the fields. ++For large objects, consider minimizing the size of the object you include in the acknowledgment. ++The following example shows a writable property defined as an `Object` with four fields: ++DTDL: ++```json +{ + "@type": "Property", + "name": "samplingRange", + "schema": { + "@type": "Object", + "fields": [ + { + "name": "startTime", + "schema": "dateTime" + }, + { + "name": "lastTime", + "schema": "dateTime" + }, + { + "name": "count", + "schema": "integer" + }, + { + "name": "errorCount", + "schema": "integer" + } + ] + }, + "displayName": "Sampling range", + "writable": true +} +``` ++To update this writable property, send a complete object from the service that looks like the following example: ++```json +{ + "samplingRange": { + "startTime": "2021-08-17T12:53:00.000Z", + "lastTime": "2021-08-17T14:54:00.000Z", + "count": 100, + "errorCount": 5 + } +} +``` ++The device responds with an acknowledgment that looks like the following example: ++```json +{ + "samplingRange": { + "ac": 200, + "av": 5, + "ad": "Weighing status updated", + "value": { + "startTime": "2021-08-17T12:53:00.000Z", + "lastTime": "2021-08-17T14:54:00.000Z", + "count": 100, + "errorCount": 5 + } + } +} +``` ++### Sample no component writable property ++When a device receives multiple desired properties in a single payload, it can send the reported property responses across multiple payloads or combine the responses into a single payload. ++A device or module can send any valid JSON that follows the DTDL rules. ++DTDL: ++```json +{ + "@context": "dtmi:dtdl:context;2", + "@id": "dtmi:example:Thermostat;1", + "@type": "Interface", + "contents": [ + { + "@type": "Property", + "name": "targetTemperature", + "schema": "double", + "writable": true + }, + { + "@type": "Property", + "name": "targetHumidity", + "schema": "double", + "writable": true + } + ] +} +``` ++Sample desired property payload: ++```json +"desired" : +{ + "targetTemperature" : 21.3, + "targetHumidity" : 80, + "$version" : 3 +} +``` ++Sample reported property first payload: ++```json +"reported": { + "targetTemperature": { + "value": 21.3, + "ac": 200, + "av": 3, + "ad": "complete" + } +} +``` ++Sample reported property second payload: ++```json +"reported": { + "targetHumidity": { + "value": 80, + "ac": 200, + "av": 3, + "ad": "complete" + } +} +``` ++> [!NOTE] +> You could choose to combine these two reported property payloads into a single payload.
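For instance, a combined acknowledgment (a sketch that simply merges the two reported payloads above) would look like:

```json
"reported": {
    "targetTemperature": {
        "value": 21.3,
        "ac": 200,
        "av": 3,
        "ad": "complete"
    },
    "targetHumidity": {
        "value": 80,
        "ac": 200,
        "av": 3,
        "ad": "complete"
    }
}
```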
++### Sample multiple components writable property ++The device or module must add the `{"__t": "c"}` marker to indicate that the element refers to a component. + |
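As a sketch that combines the `{"__t": "c"}` component marker with the acknowledgment fields described earlier (the `thermostat1` component and `targetTemperature` property names are illustrative), the desired payload and the device's reported acknowledgment would look something like:

```json
"desired": {
    "thermostat1": {
        "__t": "c",
        "targetTemperature": 21.3
    },
    "$version": 3
}
```

```json
"reported": {
    "thermostat1": {
        "__t": "c",
        "targetTemperature": {
            "value": 21.3,
            "ac": 200,
            "av": 3,
            "ad": "complete"
        }
    }
}
```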