Service | Microsoft Docs article | Related commit history on GitHub | Change details |
---|---|---|---|
active-directory-b2c | Claim Resolver Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/claim-resolver-overview.md | The following table lists the claim resolvers with information about the OpenID | {OIDC:Resource} |The `resource` query string parameter. | N/A | | {OIDC:Scope} |The `scope` query string parameter. | openid | | {OIDC:Username}| The [resource owner password credentials flow](add-ropc-policy.md) user's username.| emily@contoso.com|+| {OIDC:IdToken} | The `id token` query string parameter. | N/A | Check out the [Live demo](https://github.com/azure-ad-b2c/unit-tests/tree/main/claims-resolver#openid-connect-relying-party-application) of the OpenID Connect claim resolvers. |
active-directory-b2c | Custom Policies Series Validate User Input | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/custom-policies-series-validate-user-input.md | Follow the steps in [Upload custom policy file](custom-policies-series-hello-wor ## Step 7 - Validate user input by using validation technical profiles -The validation techniques we've used in step 1, step 2 and step 3 aren't applicable for all scenarios. If your business rules are complex to be defined at claim declaration level, you can configure a [Validation Technical](validation-technical-profile.md), and then call it from a [Self-Asserted Technical Profile](self-asserted-technical-profile.md). +The validation techniques we've used in step 1, step 2 and step 3 aren't applicable for all scenarios. If your business rules are too complex to be defined at claim declaration level, you can configure a [Validation Technical](validation-technical-profile.md), and then call it from a [Self-Asserted Technical Profile](self-asserted-technical-profile.md). > [!NOTE] > Only self-asserted technical profiles can use validation technical profiles. Learn more about [validation technical profile](validation-technical-profile.md) |
ai-services | Configuration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/containers/configuration.md | -Support for containers is currently available with Document Intelligence version `2022-08-31 (GA)` only: +Support for containers is currently available with Document Intelligence version `2022-08-31 (GA)` for all models and `2023-07-31 (GA)` for Read and Layout only: * [REST API `2022-08-31 (GA)`](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument)+* [REST API `2023-07-31 (GA)`](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2023-07-31/operations/AnalyzeDocument) * [SDKs targeting `REST API 2022-08-31 (GA)`](../sdk-overview-v3-0.md)+* [SDKs targeting `REST API 2023-07-31 (GA)`](../sdk-overview-v3-1.md) ✔️ See [**Configure Document Intelligence v3.0 containers**](?view=doc-intel-3.0.0&preserve-view=true) for supported container documentation. :::moniker-end -**This content applies to:** ![checkmark](../media/yes-icon.png) **v3.0 (GA)** +**This content applies to:** ![checkmark](../media/yes-icon.png) **v3.0 (GA)** ![checkmark](../media/yes-icon.png) **v3.1 (GA)** -With Document Intelligence containers, you can build an application architecture optimized to take advantage of both robust cloud capabilities and edge locality. Containers provide a minimalist, isolated environment that can be easily deployed on-premises and in the cloud. In this article, we show you how to configure the Document Intelligence container run-time environment by using the `docker compose` command arguments. Document Intelligence features are supported by six Document Intelligence feature containers—**Layout**, **Business Card**,**ID Document**, **Receipt**, **Invoice**, **Custom**. These containers have both required and optional settings. For a few examples, see the [Example docker-compose.yml file](#example-docker-composeyml-file) section. 
+With Document Intelligence containers, you can build an application architecture optimized to take advantage of both robust cloud capabilities and edge locality. Containers provide a minimalist, isolated environment that can be easily deployed on-premises and in the cloud. In this article, we show you how to configure the Document Intelligence container run-time environment by using the `docker compose` command arguments. Document Intelligence features are supported by seven Document Intelligence feature containers—**Read**, **Layout**, **Business Card**,**ID Document**, **Receipt**, **Invoice**, **Custom**. These containers have both required and optional settings. For a few examples, see the [Example docker-compose.yml file](#example-docker-composeyml-file) section. ## Configuration settings |
ai-services | Disconnected | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/containers/disconnected.md | -Support for containers is currently available with Document Intelligence version `2022-08-31 (GA)`: +Support for containers is currently available with Document Intelligence version `2022-08-31 (GA)` for all models and `2023-07-31 (GA)` for Read and Layout only: * [REST API `2022-08-31 (GA)`](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument)+* [REST API `2023-07-31 (GA)`](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2023-07-31/operations/AnalyzeDocument) * [SDKs targeting `REST API 2022-08-31 (GA)`](../sdk-overview-v3-0.md)+* [SDKs targeting `REST API 2023-07-31 (GA)`](../sdk-overview-v3-1.md) ✔️ See [**Document Intelligence v3.0 containers in disconnected environments**](?view=doc-intel-3.0.0&preserve-view=true) for supported container documentation. :::moniker-end -**This content applies to:** ![checkmark](../media/yes-icon.png) **v3.0 (GA)** +**This content applies to:** ![checkmark](../media/yes-icon.png) **v3.0 (GA)** ![checkmark](../media/yes-icon.png) **v3.1 (GA)** ## What are disconnected containers? |
ai-services | Image Tags | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/containers/image-tags.md | -Support for containers is currently available with Document Intelligence version `2022-08-31 (GA)` only: +Support for containers is currently available with Document Intelligence version `2022-08-31 (GA)` for all models and `2023-07-31 (GA)` for Read and Layout only: * [REST API `2022-08-31 (GA)`](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument)+* [REST API `2023-07-31 (GA)`](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2023-07-31/operations/AnalyzeDocument) * [SDKs targeting `REST API 2022-08-31 (GA)`](../sdk-overview-v3-0.md)+* [SDKs targeting `REST API 2023-07-31 (GA)`](../sdk-overview-v3-1.md) ✔️ See [**Document Intelligence container image tags**](?view=doc-intel-3.0.0&preserve-view=true) for supported container documentation. The following containers support Document Intelligence v3.0 models and features: ::: moniker-end ++**This content applies to:** ![checkmark](../media/yes-icon.png) **v3.1 (GA)** ++## Microsoft Container Registry (MCR) ++Document Intelligence container images can be found within the [**Microsoft Artifact Registry** (also known as the Microsoft Container Registry (MCR))](https://mcr.microsoft.com/catalog?search=document%20intelligence), the primary registry for all Microsoft-published container images. 
++The following containers support Document Intelligence v3.0 models and features: ++| Container name | Image | +||| +|[**Document Intelligence Studio**](https://mcr.microsoft.com/product/azure-cognitive-services/form-recognizer/studio/tags)| `mcr.microsoft.com/azure-cognitive-services/form-recognizer/studio:latest`| +| [**Read 3.1**](https://mcr.microsoft.com/product/azure-cognitive-services/form-recognizer/read-3.1/tags) |`mcr.microsoft.com/azure-cognitive-services/form-recognizer/read-3.1:latest`| +| [**Layout 3.1**](https://mcr.microsoft.com/en-us/product/azure-cognitive-services/form-recognizer/layout-3.1/tags) |`mcr.microsoft.com/azure-cognitive-services/form-recognizer/layout-3.1:latest`| ++ :::moniker range="doc-intel-2.1.0" > [!IMPORTANT] |
ai-services | Api Version Deprecation | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/api-version-deprecation.md | Azure OpenAI API version 2023-12-01-preview is currently the latest preview rele This version contains support for all the latest Azure OpenAI features including: +- [Text to speech](./text-to-speech-quickstart.md). [**Added in 2024-02-15-preview**] - [Fine-tuning](./how-to/fine-tuning.md) `gpt-35-turbo`, `babbage-002`, and `davinci-002` models.[**Added in 2023-10-01-preview**] - [Whisper](./whisper-quickstart.md). [**Added in 2023-09-01-preview**] - [Function calling](./how-to/function-calling.md) [**Added in 2023-07-01-preview**] |
ai-services | Assistants Quickstart | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/assistants-quickstart.md | + + Title: Quickstart - Getting started with Azure OpenAI Assistants (Preview) ++description: Walkthrough on how to get started with Azure OpenAI assistants with new features like code interpreter and retrieval. +++++ Last updated : 02/01/2024+zone_pivot_groups: openai-quickstart +recommendations: false ++++# Quickstart: Get started using Azure OpenAI Assistants (Preview) ++Azure OpenAI Assistants (Preview) allows you to create AI assistants tailored to your needs through custom instructions and augmented by advanced tools like code interpreter and custom functions. +++++++++ |
ai-services | Assistants | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/concepts/assistants.md | + + Title: Azure OpenAI Service Assistant API concepts ++description: Learn about the concepts behind the Azure OpenAI Assistants API. + Last updated : 02/05/2023++++recommendations: false +++# Azure OpenAI Assistants API (Preview) ++Assistants, a new feature of Azure OpenAI Service, is now available in public preview. Assistants API makes it easier for developers to create applications with sophisticated copilot-like experiences that can sift through data, suggest solutions, and automate tasks. ++## Overview ++Previously, building custom AI assistants needed heavy lifting even for experienced developers. While the chat completions API is lightweight and powerful, it's inherently stateless, which means that developers had to manage conversation state and chat threads, tool integrations, retrieval documents and indexes, and execute code manually. ++The Assistants API, as the stateful evolution of the chat completion API, provides a solution for these challenges. +Assistants API supports persistent automatically managed threads. This means that as a developer you no longer need to develop conversation state management systems and work around a model's context window constraints. The Assistants API will automatically handle the optimizations to keep the thread below the max context window of your chosen model. Once you create a Thread, you can simply append new messages to it as users respond. Assistants can also access multiple tools in parallel, if needed. These tools include: ++- [Code Interpreter](../how-to/code-interpreter.md) +- [Function calling](../how-to/assistant-functions.md) ++Assistant API is built on the same capabilities that power OpenAI's GPT product. Some possible use cases range from AI-powered product recommender, sales analyst app, coding assistant, employee Q&A chatbot, and more. 
Start building on the no-code Assistants playground on the Azure OpenAI Studio or start building with the API. ++> [!IMPORTANT] +> Retrieving untrusted data using Function calling, Code Interpreter with file input, and Assistant Threads functionalities could compromise the security of your Assistant, or the application that uses the Assistant. Learn about mitigation approaches [here](https://aka.ms/oai/assistant-rai). ++## Assistants playground ++We provide a walkthrough of the Assistants playground in our [quickstart guide](../assistants-quickstart.md). This provides a no-code environment to test out the capabilities of assistants. ++## Assistants components ++| **Component** | **Description** | +||| +| **Assistant** | Custom AI that uses Azure OpenAI models in conjunction with tools. | +|**Thread** | A conversation session between an Assistant and a user. Threads store Messages and automatically handle truncation to fit content into a model's context.| +| **Message** | A message created by an Assistant or a user. Messages can include text, images, and other files. Messages are stored as a list on the Thread. | +|**Run** | Activation of an Assistant to begin running based on the contents of the Thread. The Assistant uses its configuration and the Thread's Messages to perform tasks by calling models and tools. As part of a Run, the Assistant appends Messages to the Thread.| +|**Run Step** | A detailed list of steps the Assistant took as part of a Run. An Assistant can call tools or create Messages during its run. Examining Run Steps allows you to understand how the Assistant is getting to its final results. | ++## See also ++* Learn more about Assistants and [Code Interpreter](../how-to/code-interpreter.md) +* Learn more about Assistants and [function calling](../how-to/assistant-functions.md) +* [Azure OpenAI Assistants API samples](https://github.com/Azure-Samples/azureai-samples/tree/main/scenarios/Assistants) +++ |
ai-services | Models | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/concepts/models.md | Azure OpenAI Service is powered by a diverse set of models with different capabi | [Embeddings](#embeddings-models) | A set of models that can convert text into numerical vector form to facilitate text similarity. | | [DALL-E](#dall-e-models-preview) (Preview) | A series of models in preview that can generate original images from natural language. | | [Whisper](#whisper-models-preview) (Preview) | A series of models in preview that can transcribe and translate speech to text. |+| [Text to speech](#text-to-speech-models-preview) (Preview) | A series of models in preview that can synthesize text to speech. | ## GPT-4 and GPT-4 Turbo Preview The Whisper models, currently in preview, can be used for speech to text. You can also use the Whisper model via Azure AI Speech [batch transcription](../../speech-service/batch-transcription-create.md) API. Check out [What is the Whisper model?](../../speech-service/whisper-overview.md) to learn more about when to use Azure AI Speech vs. Azure OpenAI Service. -## Model summary table and region availability +## Text to speech (Preview) -> [!IMPORTANT] -> Due to high demand: -> -> - South Central US is temporarily unavailable for creating new resources and deployments. +The OpenAI text to speech models, currently in preview, can be used to synthesize text to speech. -### GPT-4 and GPT-4 Turbo Preview models +You can also use the OpenAI text to speech voices via Azure AI Speech. To learn more, see [OpenAI text to speech voices via Azure OpenAI Service or via Azure AI Speech](../../speech-service/openai-voices.md#openai-text-to-speech-voices-via-azure-openai-service-or-via-azure-ai-speech) guide. ++## Model summary table and region availability +### GPT-4 and GPT-4 Turbo Preview models GPT-4, GPT-4-32k, and GPT-4 Turbo with Vision are now available to all Azure OpenAI Service customers. 
Availability varies by region. If you don't see GPT-4 in your region, please check back later. These models can only be used with the Chat Completion API. -GPT-4 version 0314 is the first version of the model released. Version 0613 is the second version of the model and adds function calling support. +GPT-4 version 0314 is the first version of the model released. Version 0613 is the second version of the model and adds function calling support. See [model versions](../concepts/model-versions.md) to learn about how Azure OpenAI Service handles model version upgrades, and [working with models](../how-to/working-with-models.md) to learn how to view and configure the model version settings of your GPT-4 deployments. > [!NOTE]-> Version `0613` of `gpt-4` and `gpt-4-32k` will be retired on June 13, 2024. Version `0314` of `gpt-4` and `gpt-4-32k` will be retired on July 5, 2024. See [model updates](../how-to/working-with-models.md#model-updates) for model upgrade behavior. +> Version `0314` of `gpt-4` and `gpt-4-32k` will be retired no earlier than July 5, 2024. Version `0613` of `gpt-4` and `gpt-4-32k` will be retired no earlier than September 30, 2024. See [model updates](../how-to/working-with-models.md#model-updates) for model upgrade behavior. +++GPT-4 version 0125-preview is an updated version of the GPT-4 Turbo preview previously released as version 1106-preview. GPT-4 version 0125-preview completes tasks such as code generation more completely compared to gpt-4-1106-preview. Because of this, depending on the task, customers may find that GPT-4-0125-preview generates more output compared to the gpt-4-1106-preview. We recommend customers compare the outputs of the new model. GPT-4-0125-preview also addresses bugs in gpt-4-1106-preview with UTF-8 handling for non-English languages. ++> [!IMPORTANT] +> +> - `gpt-4` version 0125-preview replaces version 1106-preview. 
Deployments of `gpt-4` version 1106-preview set to "Auto-update to default" and "Upgrade when expired" will start to be upgraded on February 20, 2024 and will complete upgrades within 2 weeks. Deployments of `gpt-4` version 1106-preview set to "No autoupgrade" will stop working starting February 20, 2024. If you have a deployment of `gpt-4` version 1106-preview, you can test version `0125-preview` in the available regions below. | Model ID | Max Request (tokens) | Training Data (up to) | | | : | :: | See [model versions](../concepts/model-versions.md) to learn about how Azure Ope | `gpt-4` (0613) | 8,192 | Sep 2021 | | `gpt-4-32k` (0613) | 32,768 | Sep 2021 | | `gpt-4` (1106-preview)**<sup>1</sup>**<br>**GPT-4 Turbo Preview** | Input: 128,000 <br> Output: 4,096 | Apr 2023 |+| `gpt-4` (0125-preview)**<sup>1</sup>**<br>**GPT-4 Turbo Preview** | Input: 128,000 <br> Output: 4,096 | Apr 2023 | | `gpt-4` (vision-preview)**<sup>2</sup>**<br>**GPT-4 Turbo with Vision Preview** | Input: 128,000 <br> Output: 4,096 | Apr 2023 | -**<sup>1</sup>** GPT-4 Turbo Preview = `gpt-4` (1106-preview). To deploy this model, under **Deployments** select model **gpt-4**. For **Model version** select **1106-preview**. +**<sup>1</sup>** GPT-4 Turbo Preview = `gpt-4` (0125-preview). To deploy this model, under **Deployments** select model **gpt-4**. For **Model version** select **0125-preview**. **<sup>2</sup>** GPT-4 Turbo with Vision Preview = `gpt-4` (vision-preview). To deploy this model, under **Deployments** select model **gpt-4**. For **Model version** select **vision-preview**. > [!CAUTION]-> We don't recommend using preview models in production. We will upgrade all deployments of preview models to a future stable version. Models designated preview do not follow the standard Azure OpenAI model lifecycle. +> We don't recommend using preview models in production. We will upgrade all deployments of preview models to future preview versions and a stable version. 
Models designated preview do not follow the standard Azure OpenAI model lifecycle. > [!NOTE] > Regions where GPT-4 (0314) & (0613) are listed as available have access to both the 8K and 32K versions of the model See [model versions](../concepts/model-versions.md) to learn about how Azure Ope | gpt-4 (0314) | | East US <br> France Central <br> South Central US <br> UK South | | gpt-4 (0613) | Australia East <br> Canada East <br> France Central <br> Sweden Central <br> Switzerland North | East US <br> East US 2 <br> Japan East <br> UK South | | gpt-4 (1106-preview) | Australia East <br> Canada East <br> East US 2 <br> France Central <br> Norway East <br> South India <br> Sweden Central <br> UK South <br> West US | | +| gpt-4 (0125-preview) | East US <br> North Central US <br> South Central US <br> | | gpt-4 (vision-preview) | Sweden Central <br> West US <br> Japan East| Switzerland North <br> Australia East | #### Azure Government regions The following Embeddings models are available with [Azure Government](/azure/azu | `babbage-002` | North Central US <br> Sweden Central | 16,384 | Sep 2021 | | `davinci-002` | North Central US <br> Sweden Central | 16,384 | Sep 2021 | | `gpt-35-turbo` (0613) | North Central US <br> Sweden Central | 4,096 | Sep 2021 |+| `gpt-35-turbo` (1106) | North Central US <br> Sweden Central | Input: 16,385<br> Output: 4,096 | Sep 2021| + ### Whisper models (Preview) The following Embeddings models are available with [Azure Government](/azure/azu | | | :: | | `whisper` | North Central US <br> West Europe | 25 MB | +### Text to speech models (Preview) ++| Model ID | Model Availability | +| | | :: | +| `tts-1` | North Central US <br> Sweden Central | +| `tts-1-hd` | North Central US <br> Sweden Central | ++### Assistants (Preview) ++For Assistants you need a combination of a supported model, and a supported region. Certain tools and capabilities require the latest models. 
For example [parallel function](../how-to/assistant-functions.md) calling requires the latest 1106 models. ++| Region | `gpt-35-turbo (1106)` | `gpt-4 (1106-preview)` | `gpt-4 (0613)` | `gpt-4 (0314)` | `gpt-35-turbo (0301)` | `gpt-35-turbo (0613)` | `gpt-35-turbo-16k (0613)` | `gpt-4-32k (0314)` | `gpt-4-32k (0613)` | +||||||||||| +| Sweden Central | ✅|✅|✅|✅|✅|✅|✅||✅| +| East US 2 ||✅|✅|||✅|||✅| +| Australia East |✅|✅|✅|||✅|||✅| ++ ## Next steps - [Learn more about working with Azure OpenAI models](../how-to/working-with-models.md) |
ai-services | Provisioned Throughput | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/concepts/provisioned-throughput.md | We use a variation of the leaky bucket algorithm to maintain utilization below 1 a. When the current utilization is above 100%, the service returns a 429 code with the `retry-after-ms` header set to the time until utilization is below 100% - b. Otherwise, the service estimates the incremental change to utilization required to serve the request by combining prompt tokens and the specified max_tokens in the call. + b. Otherwise, the service estimates the incremental change to utilization required to serve the request by combining prompt tokens and the specified `max_tokens` in the call. If the `max_tokens` parameter is not specified, the service will estimate a value. This estimation can lead to lower concurrency than expected when the number of actual generated tokens is small. For highest concurrency, ensure that the `max_tokens` value is as close as possible to the true generation size. 3. When a request finishes, we now know the actual compute cost for the call. To ensure an accurate accounting, we correct the utilization using the following logic: We use a variation of the leaky bucket algorithm to maintain utilization below 1 :::image type="content" source="../media/provisioned/utilization.jpg" alt-text="Diagram showing how subsequent calls are added to the utilization." lightbox="../media/provisioned/utilization.jpg"::: +#### How many concurrent calls can I have on my deployment? +The number of concurrent calls you can have at one time is dependent on each call's shape. The service will continue to accept calls until the utilization is above 100%. To determine the approximate number of concurrent calls you can model out the maximum requests per minute for a particular call shape in the [capacity calculator](https://oai.azure.com/portal/calculator). 
If `max_tokens` is empty, you can assume a value of 1000 ## Next steps |
ai-services | Use Your Data | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/concepts/use-your-data.md | When you want to reuse the same URL/web address, you can select [Azure AI Search -## Custom parameters +## Ingestion parameters -You can modify the following additional settings in the **Data parameters** section in Azure OpenAI Studio and [the API](../reference.md#completions-extensions). +You can use the following parameter to change how your data is ingested in Azure OpenAI Studio, Azure AI Studio, and the ingestion API. Changing the parameter requires re-ingesting your data into Azure Search. ++|Parameter name | Description | +||| +| **Chunk size** | Azure OpenAI on your data processes your documents by splitting them into chunks before indexing them in Azure Search. The chunk size is the maximum number of tokens for any chunk in the search index. The default chunk size is 1024 tokens. However, given the uniqueness of your data, you may find a different chunk size (such as 256, 512, or 1536 tokens for example) more effective. Adjusting the chunk size can enhance the performance of the chat bot. While finding the optimal chunk size requires some trial and error, start by considering the nature of your dataset. A smaller chunk size is generally better for datasets with direct facts and less context, while a larger chunk size might be beneficial for more contextual information, though it can affect retrieval performance. This is the `chunkSize` parameter in the API.| +++## Runtime parameters ++You can modify the following additional settings in the **Data parameters** section in Azure OpenAI Studio and [the API](../reference.md#completions-extensions). You do not need to re-ingest your data when you update these parameters. |Parameter name | Description | |||-|**Retrieved documents** | Specifies the number of top-scoring documents from your data index used to generate responses. 
You might want to increase the value when you have short documents or want to provide more context. The default value is 5. This is the `topNDocuments` parameter in the API. | -| **Strictness** | Sets the threshold to categorize documents as relevant to your queries. Raising the value means a higher threshold for relevance and filters out more less-relevant documents for responses. Setting this value too high might cause the model to fail to generate responses due to limited available documents. The default value is 3. | +| **Limit responses to your data** | This flag configures the chatbot's approach to handling queries unrelated to the data source or when search documents are insufficient for a complete answer. When this setting is disabled, the model supplements its responses with its own knowledge in addition to your documents. When this setting is enabled, the model attempts to only rely on your documents for responses. This is the `inScope` parameter in the API. | +|**Top K Documents** | This parameter is an integer that can be set to 3, 5, 10, or 20, and controls the number of document chunks provided to the large language model for formulating the final response. By default, this is set to 5. The search process can be noisy and sometimes, due to chunking, relevant information may be spread across multiple chunks in the search index. Selecting a top-K number, like 5, ensures that the model can extract relevant information, despite the inherent limitations of search and chunking. However, increasing the number too high can potentially distract the model. Additionally, the maximum number of documents that can be effectively used depends on the version of the model, as each has a different context size and capacity for handling documents. If you find that responses are missing important context, try increasing this parameter. Conversely, if you think the model is providing irrelevant information alongside useful data, consider decreasing it. 
When experimenting with the [chunk size](#ingestion-parameters), we recommend adjusting the top-K parameter to achieve the best performance. Usually, it is beneficial to change the top-K value in the opposite direction of your chunk size adjustment. For example, if you decrease the chunk size from the default of 1024, you might want to increase the top-K value to 10 or 20. This ensures a similar amount of information is provided to the model, as reducing the chunk size decreases the amount of information in the 5 documents given to the model. This is the `topNDocuments` parameter in the API. | +| **Strictness** | Determines the system's aggressiveness in filtering search documents based on their similarity scores. The system queries Azure Search or other document stores, then decides which documents to provide to large language models like ChatGPT. Filtering out irrelevant documents can significantly enhance the performance of the end-to-end chatbot. Some documents are excluded from the top-K results if they have low similarity scores before forwarding them to the model. This is controlled by an integer value ranging from 1 to 5. Setting this value to 1 means that the system will minimally filter documents based on search similarity to the user query. Conversely, a setting of 5 indicates that the system will aggressively filter out documents, applying a very high similarity threshold. If you find that the chatbot omits relevant information, lower the filter's strictness (set the value closer to 1) to include more documents. Conversely, if irrelevant documents distract the responses, increase the threshold (set the value closer to 5). This is the `strictness` parameter in the API. | + ## Document-level access control |
ai-services | Assistant Functions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/how-to/assistant-functions.md | + + Title: 'How to use Azure OpenAI Assistants function calling' ++description: Learn how to use Assistants function calling ++++ Last updated : 02/01/2024+++recommendations: false ++++# Azure OpenAI Assistants function calling ++The Assistants API supports function calling, which allows you to describe the structure of functions to an Assistant and then return the functions that need to be called along with their arguments. ++## Function calling support ++### Supported models ++The [models page](../concepts/models.md#assistants-preview) contains the most up-to-date information on regions/models where Assistants are supported. ++To use all features of function calling including parallel functions, you need to use the latest models. ++### API Version ++- `2024-02-15-preview` ++## Example function definition ++# [Python 1.x](#tab/python) ++```python +import os +from openai import AzureOpenAI + +client = AzureOpenAI( + api_key=os.getenv("AZURE_OPENAI_KEY"), + api_version="2024-02-15-preview", + azure_endpoint = os.getenv("AZURE_OPENAI_ENDPOINT") + ) ++assistant = client.beta.assistants.create( + instructions="You are a weather bot. Use the provided functions to answer questions.", + model="gpt-4-1106-preview", #Replace with model deployment name + tools=[{ + "type": "function", + "function": { + "name": "getCurrentWeather", + "description": "Get the weather in location", + "parameters": { + "type": "object", + "properties": { + "location": {"type": "string", "description": "The city and state e.g. 
San Francisco, CA"}, + "unit": {"type": "string", "enum": ["c", "f"]} + }, + "required": ["location"] + } + } + }, { + "type": "function", + "function": { + "name": "getNickname", + "description": "Get the nickname of a city", + "parameters": { + "type": "object", + "properties": { + "location": {"type": "string", "description": "The city and state e.g. San Francisco, CA"}, + }, + "required": ["location"] + } + } + }] +) +``` ++# [REST](#tab/rest) ++> [!NOTE] +> With Azure OpenAI, the `model` parameter requires the model deployment name. If your model deployment name is different from the underlying model name, then you would adjust your code to ` "model": "{your-custom-model-deployment-name}"`. ++```console +curl https://YOUR_RESOURCE_NAME.openai.azure.com/openai/assistants?api-version=2024-02-15-preview \ + -H "api-key: $AZURE_OPENAI_KEY" \ + -H "Content-Type: application/json" \ + -d '{ + "instructions": "You are a weather bot. Use the provided functions to answer questions.", + "tools": [{ + "type": "function", + "function": { + "name": "getCurrentWeather", + "description": "Get the weather in location", + "parameters": { + "type": "object", + "properties": { + "location": {"type": "string", "description": "The city and state e.g. San Francisco, CA"}, + "unit": {"type": "string", "enum": ["c", "f"]} + }, + "required": ["location"] + } + } + }, + { + "type": "function", + "function": { + "name": "getNickname", + "description": "Get the nickname of a city", + "parameters": { + "type": "object", + "properties": { + "location": {"type": "string", "description": "The city and state e.g. San Francisco, CA"} + }, + "required": ["location"] + } + } + }], + "model": "gpt-4-1106-preview" + }' +``` ++++## Reading the functions ++When you initiate a **Run** with a user Message that triggers the function, the **Run** will enter a pending status. After it processes, the run will enter a `requires_action` state that you can verify by retrieving the **Run**. 
++```json +{ + "id": "run_abc123", + "object": "thread.run", + "assistant_id": "asst_abc123", + "thread_id": "thread_abc123", + "status": "requires_action", + "required_action": { + "type": "submit_tool_outputs", + "submit_tool_outputs": { + "tool_calls": [ + { + "id": "call_abc123", + "type": "function", + "function": { + "name": "getCurrentWeather", + "arguments": "{\"location\":\"San Francisco\"}" + } + }, + { + "id": "call_abc456", + "type": "function", + "function": { + "name": "getNickname", + "arguments": "{\"location\":\"Los Angeles\"}" + } + } + ] + } + }, +... +``` ++## Submitting function outputs ++You can then complete the **Run** by submitting the tool output from the function(s) you call. Pass the `tool_call_id` referenced in the `required_action` object above to match output to each function call. +++# [Python 1.x](#tab/python) ++```python +from openai import AzureOpenAI + +client = AzureOpenAI( + api_key=os.getenv("AZURE_OPENAI_KEY"), + api_version="2024-02-15-preview", + azure_endpoint = os.getenv("AZURE_OPENAI_ENDPOINT") + ) +++run = client.beta.threads.runs.submit_tool_outputs( + thread_id=thread.id, + run_id=run.id, + tool_outputs=[ + { + "tool_call_id": call_ids[0], + "output": "22C", + }, + { + "tool_call_id": call_ids[1], + "output": "LA", + }, + ] +) +``` ++# [REST](#tab/rest) ++```console +curl https://YOUR_RESOURCE_NAME.openai.azure.com/openai/threads/thread_abc123/runs/run_123/submit_tool_outputs?api-version=2024-02-15-preview \ + -H "Content-Type: application/json" \ + -H "api-key: $AZURE_OPENAI_KEY" \ + -d '{ + "tool_outputs": [{ + "tool_call_id": "call_abc123", + "output": "{"temperature": "22", "unit": "celsius"}" + }, { + "tool_call_id": "call_abc456", + "output": "{"nickname": "LA"}" + }] + }' +``` ++++After you submit tool outputs, the **Run** will enter the `queued` state before it continues execution. ++## See also ++* Learn more about how to use Assistants with our [How-to guide on Assistants](../how-to/assistant.md). 
+* [Azure OpenAI Assistants API samples](https://github.com/Azure-Samples/azureai-samples/tree/main/scenarios/Assistants) |
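The retrieve-and-submit flow above can be sketched end to end. In this hedged example, `get_current_weather`, `get_nickname`, `FUNCTION_MAP`, and `build_tool_outputs` are all illustrative names — your real implementations would call an actual weather API or lookup service — but the `tool_calls` shape and the `tool_call_id`/`output` payload match the `required_action` object and `submit_tool_outputs` call shown earlier:

```python
import json

# Hypothetical local implementations of the two functions the assistant was
# configured with earlier. Real code would call a weather API, a database, etc.
def get_current_weather(location, unit="c"):
    return json.dumps({"temperature": "22", "unit": "celsius"})

def get_nickname(location):
    return json.dumps({"nickname": "LA"})

# Map the function names declared in the assistant's tool definitions to the
# local callables that implement them.
FUNCTION_MAP = {
    "getCurrentWeather": get_current_weather,
    "getNickname": get_nickname,
}

def build_tool_outputs(tool_calls):
    """Turn the run's required tool calls into a tool_outputs payload.

    Each entry pairs the tool_call_id from required_action with the string
    output of the matching local function.
    """
    outputs = []
    for call in tool_calls:
        fn = FUNCTION_MAP[call["function"]["name"]]
        args = json.loads(call["function"]["arguments"])
        outputs.append({"tool_call_id": call["id"], "output": fn(**args)})
    return outputs
```

The resulting list is what you would pass as `tool_outputs=` to `client.beta.threads.runs.submit_tool_outputs(...)` to move the run out of the `requires_action` state.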
ai-services | Assistant | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/how-to/assistant.md | + + Title: 'How to create Assistants with Azure OpenAI Service' ++description: Learn how to create helpful AI Assistants with tools like Code Interpreter +++++ Last updated : 02/01/2024+++recommendations: false ++++# Getting started with Azure OpenAI Assistants (Preview) ++Azure OpenAI Assistants (Preview) allows you to create AI assistants tailored to your needs through custom instructions and augmented by advanced tools like code interpreter, and custom functions. In this article we'll provide an in-depth walkthrough of getting started with the Assistants API. ++## Assistants support ++### Region and model support ++The [models page](../concepts/models.md#assistants-preview) contains the most up-to-date information on regions/models where Assistants are currently supported. ++### API Version ++- `2024-02-15-preview` ++### Supported file types ++|File format|MIME Type|Code Interpreter | +|||| +|.c| text/x-c |✅| +|.cpp|text/x-c++ |✅| +|.csv|application/csv|✅| +|.docx|application/vnd.openxmlformats-officedocument.wordprocessingml.document|✅| +|.html|text/html|✅| +|.java|text/x-java|✅| +|.json|application/json|✅| +|.md|text/markdown| ✅ | +|.pdf|application/pdf|✅| +|.php|text/x-php|✅| +|.pptx|application/vnd.openxmlformats-officedocument.presentationml.presentation|✅| +|.py|text/x-python|✅| +|.py|text/x-script.python|✅| +|.rb|text/x-ruby|✅| +|.tex|text/x-tex|✅| +|.txt|text/plain|✅| +|.css|text/css|✅| +|.jpeg|image/jpeg|✅| +|.jpg|image/jpeg|✅| +|.js|text/javascript|✅| +|.gif|image/gif|✅| +|.png|image/png|✅| +|.tar|application/x-tar|✅| +|.ts|application/typescript|✅| +|.xlsx|application/vnd.openxmlformats-officedocument.spreadsheetml.sheet|✅| +|.xml|application/xml or "text/xml"|✅| +|.zip|application/zip|✅| ++### Tools ++An individual assistant can access up to 128 tools including `code interpreter`, but you can also define your own custom 
tools via [functions](./assistant-functions.md). ++### Files ++Files can be uploaded via Studio, or programmatically. The `file_ids` parameter is required to give tools like `code_interpreter` access to files. When using the File upload endpoint, you must have the `purpose` set to assistants to be used with the Assistants API. ++## Assistants playground ++We provide a walkthrough of the Assistants playground in our [quickstart guide](../assistants-quickstart.md). This provides a no-code environment to test out the capabilities of assistants. ++## Assistants components ++| **Component** | **Description** | +||| +| **Assistant** | Custom AI that uses Azure OpenAI models in conjunction with tools. | +|**Thread** | A conversation session between an Assistant and a user. Threads store Messages and automatically handle truncation to fit content into a model’s context.| +| **Message** | A message created by an Assistant or a user. Messages can include text, images, and other files. Messages are stored as a list on the Thread. | +|**Run** | Activation of an Assistant to begin running based on the contents of the Thread. The Assistant uses its configuration and the Thread’s Messages to perform tasks by calling models and tools. As part of a Run, the Assistant appends Messages to the Thread.| +|**Run Step** | A detailed list of steps the Assistant took as part of a Run. An Assistant can call tools or create Messages during it’s run. Examining Run Steps allows you to understand how the Assistant is getting to its final results. | ++## Setting up your first Assistant ++### Create an assistant ++For this example we'll create an assistant that writes code to generate visualizations using the capabilities of the `code_interpreter` tool. The examples below are intended to be run sequentially in an environment like [Jupyter Notebooks](https://jupyter.org/). 
++```Python +import os +import json +from openai import AzureOpenAI + +client = AzureOpenAI( + api_key=os.getenv("AZURE_OPENAI_KEY"), + api_version="2024-02-15-preview", + azure_endpoint = os.getenv("AZURE_OPENAI_ENDPOINT") + ) ++# Create an assistant +assistant = client.beta.assistants.create( + name="Data Visualization", + instructions=f"You are a helpful AI assistant who makes interesting visualizations based on data." + f"You have access to a sandboxed environment for writing and testing code." + f"When you are asked to create a visualization you should follow these steps:" + f"1. Write the code." + f"2. Anytime you write new code display a preview of the code to show your work." + f"3. Run the code to confirm that it runs." + f"4. If the code is successful display the visualization." + f"5. If the code is unsuccessful display the error message and try to revise the code and rerun going through the steps from above again.", + tools=[{"type": "code_interpreter"}], + model="gpt-4-1106-preview" #You must replace this value with the deployment name for your model. +) ++``` ++There are a few details you should note from the configuration above: ++- We enable this assistant to access code interpreter with the line ` tools=[{"type": "code_interpreter"}],`. This gives the model access to a sand-boxed python environment to run and execute code to help formulating responses to a user's question. +- In the instructions we remind the model that it can execute code. Sometimes the model needs help guiding it towards the right tool to solve a given query. If you know, you want to use a particular library to generate a certain response that you know is part of code interpreter it can help to provide guidance by saying something like "Use Matplotlib to do x." +- Since this is Azure OpenAI the value you enter for `model=` **must match the deployment name**. 
By convention our docs will often use a deployment name that happens to match the model name to indicate which model was used when testing a given example, but in your environment the deployment names can be different and that is the name that you should enter in the code. ++Next we're going to print the contents of assistant that we just created to confirm that creation was successful: ++```python +print(assistant.model_dump_json(indent=2)) +``` ++```json +{ + "id": "asst_7AZSrv5I3XzjUqWS40X5UgRr", + "created_at": 1705972454, + "description": null, + "file_ids": [], + "instructions": "You are a helpful AI assistant who makes interesting visualizations based on data.You have access to a sandboxed environment for writing and testing code.When you are asked to create a visualization you should follow these steps:1. Write the code.2. Anytime you write new code display a preview of the code to show your work.3. Run the code to confirm that it runs.4. If the code is successful display the visualization.5. If the code is unsuccessful display the error message and try to revise the code and rerun going through the steps from above again.", + "metadata": {}, + "model": "gpt-4-1106-preview", + "name": "Data Visualization", + "object": "assistant", + "tools": [ + { + "type": "code_interpreter" + } + ] +} +``` ++### Create a thread ++Now let's create a thread ++```python +# Create a thread +thread = client.beta.threads.create() +print(thread) +``` ++```output +Thread(id='thread_6bunpoBRZwNhovwzYo7fhNVd', created_at=1705972465, metadata={}, object='thread') +``` ++A thread is essentially the record of the conversation session between the assistant and the user. It's similar to the messages array/list in a typical chat completions API call. One of the key differences, is unlike a chat completions messages array, you don't need to track tokens with each call to make sure that you're remaining below the context length of the model. 
Threads abstract away this management detail and will compress the thread history as needed in order to allow the conversation to continue. The ability for threads to accomplish this with larger conversations is enhanced when using the latest models, which have larger context lengths as well as support for the latest features. ++Next create the first user question to add to the thread ++```python +# Add a user question to the thread +message = client.beta.threads.messages.create( + thread_id=thread.id, + role="user", + content="Create a visualization of a sinewave" +) +``` ++### List thread messages ++```python +thread_messages = client.beta.threads.messages.list(thread.id) +print(thread_messages.model_dump_json(indent=2)) +``` ++```json +{ + "data": [ + { + "id": "msg_JnkmWPo805Ft8NQ0gZF6vA2W", + "assistant_id": null, + "content": [ + { + "text": { + "annotations": [], + "value": "Create a visualization of a sinewave" + }, + "type": "text" + } + ], + "created_at": 1705972476, + "file_ids": [], + "metadata": {}, + "object": "thread.message", + "role": "user", + "run_id": null, + "thread_id": "thread_6bunpoBRZwNhovwzYo7fhNVd" + } + ], + "object": "list", + "first_id": "msg_JnkmWPo805Ft8NQ0gZF6vA2W", + "last_id": "msg_JnkmWPo805Ft8NQ0gZF6vA2W", + "has_more": false +} +``` ++### Run thread ++```python +run = client.beta.threads.runs.create( + thread_id=thread.id, + assistant_id=assistant.id, + #instructions="New instructions" #You can optionally provide new instructions but these will override the default instructions +) +``` ++We could also pass an `instructions` parameter here, but this would override the existing instructions that we have already provided for the assistant. 
++### Retrieve thread status ++```python +# Retrieve the status of the run +run = client.beta.threads.runs.retrieve( + thread_id=thread.id, + run_id=run.id +) ++status = run.status +print(status) +``` ++```output +completed +``` ++Depending on the complexity of the query you run, the thread could take longer to execute. In that case, you can create a loop to monitor the [run status](#run-status-definitions) of the thread with code like the example below: ++```python +import time +from IPython.display import clear_output ++start_time = time.time() ++status = run.status ++while status not in ["completed", "cancelled", "expired", "failed"]: + time.sleep(5) + run = client.beta.threads.runs.retrieve(thread_id=thread.id, run_id=run.id) + print("Elapsed time: {} minutes {} seconds".format(int((time.time() - start_time) // 60), int((time.time() - start_time) % 60))) + status = run.status + print(f'Status: {status}') + clear_output(wait=True) ++messages = client.beta.threads.messages.list( + thread_id=thread.id +) ++print(f'Status: {status}') +print("Elapsed time: {} minutes {} seconds".format(int((time.time() - start_time) // 60), int((time.time() - start_time) % 60))) +print(messages.model_dump_json(indent=2)) +``` ++When a Run is `in_progress` or in other nonterminal states, the thread is locked. When a thread is locked, new messages can't be added and new runs can't be created. 
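Such a polling loop can also be wrapped in a small reusable helper. This is a sketch, not part of the Assistants API: the name `wait_for_run`, the `TERMINAL_STATES` set, and the default poll interval are all illustrative, but the statuses and the `client.beta.threads.runs.retrieve` call match those used above:

```python
import time

# States in which the run (and therefore its thread) will no longer change
# on its own. requires_action is handled separately because the caller must
# submit tool outputs for the run to continue.
TERMINAL_STATES = {"completed", "cancelled", "expired", "failed"}

def wait_for_run(client, thread_id, run_id, poll_interval=5):
    """Poll until the run reaches a terminal state or needs tool outputs.

    While the run is queued/in_progress the thread is locked, so callers
    should not add messages or create new runs until this returns.
    """
    while True:
        run = client.beta.threads.runs.retrieve(thread_id=thread_id, run_id=run_id)
        if run.status in TERMINAL_STATES or run.status == "requires_action":
            return run
        time.sleep(poll_interval)
```

With this helper, `run = wait_for_run(client, thread.id, run.id)` replaces the inline loop, and you can check `run.status` once afterwards to decide whether to list messages or submit tool outputs.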
++### List thread messages post run ++Once the run status indicates successful completion, you can list the contents of the thread again to retrieve the model's and any tools response: ++```python +messages = client.beta.threads.messages.list( + thread_id=thread.id +) ++print(messages.model_dump_json(indent=2)) +``` ++```json +{ + "data": [ + { + "id": "msg_M5pz73YFsJPNBbWvtVs5ZY3U", + "assistant_id": "asst_eHwhP4Xnad0bZdJrjHO2hfB4", + "content": [ + { + "text": { + "annotations": [], + "value": "Is there anything else you would like to visualize or any additional features you'd like to add to the sine wave plot?" + }, + "type": "text" + } + ], + "created_at": 1705967782, + "file_ids": [], + "metadata": {}, + "object": "thread.message", + "role": "assistant", + "run_id": "run_AGQHJrrfV3eM0eI9T3arKgYY", + "thread_id": "thread_ow1Yv29ptyVtv7ixbiKZRrHd" + }, + { + "id": "msg_oJbUanImBRpRran5HSa4Duy4", + "assistant_id": "asst_eHwhP4Xnad0bZdJrjHO2hfB4", + "content": [ + { + "image_file": { + "file_id": "assistant-1YGVTvNzc2JXajI5JU9F0HMD" + }, + "type": "image_file" + }, + { + "text": { + "annotations": [], + "value": "Here is the visualization of a sine wave: \n\nThe wave is plotted using values from 0 to \\( 4\\pi \\) on the x-axis, and the corresponding sine values on the y-axis. I've also added grid lines for easier reading of the plot." 
+ }, + "type": "text" + } + ], + "created_at": 1705967044, + "file_ids": [], + "metadata": {}, + "object": "thread.message", + "role": "assistant", + "run_id": "run_8PsweDFn6gftUd91H87K0Yts", + "thread_id": "thread_ow1Yv29ptyVtv7ixbiKZRrHd" + }, + { + "id": "msg_Pu3eHjM10XIBkwqh7IhnKKdG", + "assistant_id": null, + "content": [ + { + "text": { + "annotations": [], + "value": "Create a visualization of a sinewave" + }, + "type": "text" + } + ], + "created_at": 1705966634, + "file_ids": [], + "metadata": {}, + "object": "thread.message", + "role": "user", + "run_id": null, + "thread_id": "thread_ow1Yv29ptyVtv7ixbiKZRrHd" + } + ], + "object": "list", + "first_id": "msg_M5pz73YFsJPNBbWvtVs5ZY3U", + "last_id": "msg_Pu3eHjM10XIBkwqh7IhnKKdG", + "has_more": false +} +``` ++### Retrieve file ID ++We had requested that the model generate an image of a sine wave. In order to download the image, we first need to retrieve the images file ID. ++```python +data = json.loads(messages.model_dump_json(indent=2)) # Load JSON data into a Python object +image_file_id = data['data'][1]['content'][0]['image_file']['file_id'] ++print(image_file_id) # Outputs: assistant-1YGVTvNzc2JXajI5JU9F0HMD +``` ++### Download image ++```python +from openai import AzureOpenAI + +client = AzureOpenAI( + api_key=os.getenv("AZURE_OPENAI_KEY"), + api_version="2024-02-15-preview", + azure_endpoint = os.getenv("AZURE_OPENAI_ENDPOINT") + ) ++content = client.files.content(image_file_id) ++image= content.write_to_file("sinewave.png") +``` ++Open the image locally once it's downloaded: ++```python +from PIL import Image ++# Display the image in the default image viewer +image = Image.open("sinewave.png") +image.show() +``` +++### Ask a follow-up question on the thread ++Since the assistant didn't quite follow our instructions and include the code that was run in the text portion of its response lets explicitly ask for that information. 
++```python +# Add a new user question to the thread +message = client.beta.threads.messages.create( + thread_id=thread.id, + role="user", + content="Show me the code you used to generate the sinewave" +) +``` ++Again we'll need to run and retrieve the status of the thread: ++```python +run = client.beta.threads.runs.create( + thread_id=thread.id, + assistant_id=assistant.id, + #instructions="New instructions" #You can optionally provide new instructions but these will override the default instructions +) ++# Retrieve the status of the run +run = client.beta.threads.runs.retrieve( + thread_id=thread.id, + run_id=run.id +) ++status = run.status +print(status) ++``` ++```output +completed +``` ++Once the run status reaches completed, we'll list the messages in the thread again which should now include the response to our latest question. ++```python +messages = client.beta.threads.messages.list( + thread_id=thread.id +) ++print(messages.model_dump_json(indent=2)) +``` ++```json +{ + "data": [ + { + "id": "msg_oaF1PUeozAvj3KrNnbKSy4LQ", + "assistant_id": "asst_eHwhP4Xnad0bZdJrjHO2hfB4", + "content": [ + { + "text": { + "annotations": [], + "value": "Certainly, here is the code I used to generate the sine wave visualization:\n\n```python\nimport numpy as np\nimport matplotlib.pyplot as plt\n\n# Generating data for the sinewave\nx = np.linspace(0, 4 * np.pi, 1000) # Generate values from 0 to 4*pi\ny = np.sin(x) # Compute the sine of these values\n\n# Plotting the sine wave\nplt.plot(x, y)\nplt.title('Sine Wave')\nplt.xlabel('x')\nplt.ylabel('sin(x)')\nplt.grid(True)\nplt.show()\n```\n\nThis code snippet uses `numpy` to generate an array of x values and then computes the sine for each x value. It then uses `matplotlib` to plot these values and display the resulting graph." 
+ }, + "type": "text" + } + ], + "created_at": 1705969710, + "file_ids": [], + "metadata": {}, + "object": "thread.message", + "role": "assistant", + "run_id": "run_oDS3fH7NorCUVwROTZejKcZN", + "thread_id": "thread_ow1Yv29ptyVtv7ixbiKZRrHd" + }, + { + "id": "msg_moYE3aNwFYuRq2aXpxpt2Wb0", + "assistant_id": null, + "content": [ + { + "text": { + "annotations": [], + "value": "Show me the code you used to generate the sinewave" + }, + "type": "text" + } + ], + "created_at": 1705969678, + "file_ids": [], + "metadata": {}, + "object": "thread.message", + "role": "user", + "run_id": null, + "thread_id": "thread_ow1Yv29ptyVtv7ixbiKZRrHd" + }, + { + "id": "msg_M5pz73YFsJPNBbWvtVs5ZY3U", + "assistant_id": "asst_eHwhP4Xnad0bZdJrjHO2hfB4", + "content": [ + { + "text": { + "annotations": [], + "value": "Is there anything else you would like to visualize or any additional features you'd like to add to the sine wave plot?" + }, + "type": "text" + } + ], + "created_at": 1705967782, + "file_ids": [], + "metadata": {}, + "object": "thread.message", + "role": "assistant", + "run_id": "run_AGQHJrrfV3eM0eI9T3arKgYY", + "thread_id": "thread_ow1Yv29ptyVtv7ixbiKZRrHd" + }, + { + "id": "msg_oJbUanImBRpRran5HSa4Duy4", + "assistant_id": "asst_eHwhP4Xnad0bZdJrjHO2hfB4", + "content": [ + { + "image_file": { + "file_id": "assistant-1YGVTvNzc2JXajI5JU9F0HMD" + }, + "type": "image_file" + }, + { + "text": { + "annotations": [], + "value": "Here is the visualization of a sine wave: \n\nThe wave is plotted using values from 0 to \\( 4\\pi \\) on the x-axis, and the corresponding sine values on the y-axis. I've also added grid lines for easier reading of the plot." 
+ }, + "type": "text" + } + ], + "created_at": 1705967044, + "file_ids": [], + "metadata": {}, + "object": "thread.message", + "role": "assistant", + "run_id": "run_8PsweDFn6gftUd91H87K0Yts", + "thread_id": "thread_ow1Yv29ptyVtv7ixbiKZRrHd" + }, + { + "id": "msg_Pu3eHjM10XIBkwqh7IhnKKdG", + "assistant_id": null, + "content": [ + { + "text": { + "annotations": [], + "value": "Create a visualization of a sinewave" + }, + "type": "text" + } + ], + "created_at": 1705966634, + "file_ids": [], + "metadata": {}, + "object": "thread.message", + "role": "user", + "run_id": null, + "thread_id": "thread_ow1Yv29ptyVtv7ixbiKZRrHd" + } + ], + "object": "list", + "first_id": "msg_oaF1PUeozAvj3KrNnbKSy4LQ", + "last_id": "msg_Pu3eHjM10XIBkwqh7IhnKKdG", + "has_more": false +} +``` ++To extract only the response to our latest question: ++```python +data = json.loads(messages.model_dump_json(indent=2)) +code = data['data'][0]['content'][0]['text']['value'] +print(code) +``` ++*Certainly, here is the code I used to generate the sine wave visualization:* ++```python +import numpy as np +import matplotlib.pyplot as plt ++# Generating data for the sinewave +x = np.linspace(0, 4 * np.pi, 1000) # Generate values from 0 to 4*pi +y = np.sin(x) # Compute the sine of these values ++# Plotting the sine wave +plt.plot(x, y) +plt.title('Sine Wave') +plt.xlabel('x') +plt.ylabel('sin(x)') +plt.grid(True) +plt.show() +``` ++### Dark mode ++Let's add one last question to the thread to see if code interpreter can swap the chart to dark mode for us. ++```python +# Add a user question to the thread +message = client.beta.threads.messages.create( + thread_id=thread.id, + role="user", + content="I prefer visualizations in darkmode can you change the colors to make a darkmode version of this visualization." 
+) ++# Run the thread +run = client.beta.threads.runs.create( + thread_id=thread.id, + assistant_id=assistant.id, +) ++# Retrieve the status of the run +run = client.beta.threads.runs.retrieve( + thread_id=thread.id, + run_id=run.id +) ++status = run.status +print(status) +``` ++```output +completed +``` ++```python +messages = client.beta.threads.messages.list( + thread_id=thread.id +) ++print(messages.model_dump_json(indent=2)) +``` ++```json +{ + "data": [ + { + "id": "msg_KKzOHCArWGvGpuPo0pVZTHgV", + "assistant_id": "asst_eHwhP4Xnad0bZdJrjHO2hfB4", + "content": [ + { + "text": { + "annotations": [], + "value": "You're viewing the dark mode version of the sine wave visualization in the image above. The plot is set against a dark background with a cyan colored sine wave for better contrast and visibility. If there's anything else you'd like to adjust or any other assistance you need, feel free to let me know!" + }, + "type": "text" + } + ], + "created_at": 1705971199, + "file_ids": [], + "metadata": {}, + "object": "thread.message", + "role": "assistant", + "run_id": "run_izZFyTVB1AlFM1VVMItggRn4", + "thread_id": "thread_ow1Yv29ptyVtv7ixbiKZRrHd" + }, + { + "id": "msg_30pXFVYNgP38qNEMS4Zbozfk", + "assistant_id": null, + "content": [ + { + "text": { + "annotations": [], + "value": "I prefer visualizations in darkmode can you change the colors to make a darkmode version of this visualization." + }, + "type": "text" + } + ], + "created_at": 1705971194, + "file_ids": [], + "metadata": {}, + "object": "thread.message", + "role": "user", + "run_id": null, + "thread_id": "thread_ow1Yv29ptyVtv7ixbiKZRrHd" + }, + { + "id": "msg_3j31M0PaJLqO612HLKVsRhlw", + "assistant_id": "asst_eHwhP4Xnad0bZdJrjHO2hfB4", + "content": [ + { + "image_file": { + "file_id": "assistant-kfqzMAKN1KivQXaEJuU0u9YS" + }, + "type": "image_file" + }, + { + "text": { + "annotations": [], + "value": "Here is the dark mode version of the sine wave visualization. 
I've used the 'dark_background' style in Matplotlib and chosen a cyan color for the plot line to ensure it stands out against the dark background." + }, + "type": "text" + } + ], + "created_at": 1705971123, + "file_ids": [], + "metadata": {}, + "object": "thread.message", + "role": "assistant", + "run_id": "run_B91erEPWro4bZIfryQeIDDlx", + "thread_id": "thread_ow1Yv29ptyVtv7ixbiKZRrHd" + }, + { + "id": "msg_FgDZhBvvM1CLTTFXwgeJLdua", + "assistant_id": null, + "content": [ + { + "text": { + "annotations": [], + "value": "I prefer visualizations in darkmode can you change the colors to make a darkmode version of this visualization." + }, + "type": "text" + } + ], + "created_at": 1705971052, + "file_ids": [], + "metadata": {}, + "object": "thread.message", + "role": "user", + "run_id": null, + "thread_id": "thread_ow1Yv29ptyVtv7ixbiKZRrHd" + }, + { + "id": "msg_oaF1PUeozAvj3KrNnbKSy4LQ", + "assistant_id": "asst_eHwhP4Xnad0bZdJrjHO2hfB4", + "content": [ + { + "text": { + "annotations": [], + "value": "Certainly, here is the code I used to generate the sine wave visualization:\n\n```python\nimport numpy as np\nimport matplotlib.pyplot as plt\n\n# Generating data for the sinewave\nx = np.linspace(0, 4 * np.pi, 1000) # Generate values from 0 to 4*pi\ny = np.sin(x) # Compute the sine of these values\n\n# Plotting the sine wave\nplt.plot(x, y)\nplt.title('Sine Wave')\nplt.xlabel('x')\nplt.ylabel('sin(x)')\nplt.grid(True)\nplt.show()\n```\n\nThis code snippet uses `numpy` to generate an array of x values and then computes the sine for each x value. It then uses `matplotlib` to plot these values and display the resulting graph." 
+ }, + "type": "text" + } + ], + "created_at": 1705969710, + "file_ids": [], + "metadata": {}, + "object": "thread.message", + "role": "assistant", + "run_id": "run_oDS3fH7NorCUVwROTZejKcZN", + "thread_id": "thread_ow1Yv29ptyVtv7ixbiKZRrHd" + }, + { + "id": "msg_moYE3aNwFYuRq2aXpxpt2Wb0", + "assistant_id": null, + "content": [ + { + "text": { + "annotations": [], + "value": "Show me the code you used to generate the sinewave" + }, + "type": "text" + } + ], + "created_at": 1705969678, + "file_ids": [], + "metadata": {}, + "object": "thread.message", + "role": "user", + "run_id": null, + "thread_id": "thread_ow1Yv29ptyVtv7ixbiKZRrHd" + }, + { + "id": "msg_M5pz73YFsJPNBbWvtVs5ZY3U", + "assistant_id": "asst_eHwhP4Xnad0bZdJrjHO2hfB4", + "content": [ + { + "text": { + "annotations": [], + "value": "Is there anything else you would like to visualize or any additional features you'd like to add to the sine wave plot?" + }, + "type": "text" + } + ], + "created_at": 1705967782, + "file_ids": [], + "metadata": {}, + "object": "thread.message", + "role": "assistant", + "run_id": "run_AGQHJrrfV3eM0eI9T3arKgYY", + "thread_id": "thread_ow1Yv29ptyVtv7ixbiKZRrHd" + }, + { + "id": "msg_oJbUanImBRpRran5HSa4Duy4", + "assistant_id": "asst_eHwhP4Xnad0bZdJrjHO2hfB4", + "content": [ + { + "image_file": { + "file_id": "assistant-1YGVTvNzc2JXajI5JU9F0HMD" + }, + "type": "image_file" + }, + { + "text": { + "annotations": [], + "value": "Here is the visualization of a sine wave: \n\nThe wave is plotted using values from 0 to \\( 4\\pi \\) on the x-axis, and the corresponding sine values on the y-axis. I've also added grid lines for easier reading of the plot." 
+ }, + "type": "text" + } + ], + "created_at": 1705967044, + "file_ids": [], + "metadata": {}, + "object": "thread.message", + "role": "assistant", + "run_id": "run_8PsweDFn6gftUd91H87K0Yts", + "thread_id": "thread_ow1Yv29ptyVtv7ixbiKZRrHd" + }, + { + "id": "msg_Pu3eHjM10XIBkwqh7IhnKKdG", + "assistant_id": null, + "content": [ + { + "text": { + "annotations": [], + "value": "Create a visualization of a sinewave" + }, + "type": "text" + } + ], + "created_at": 1705966634, + "file_ids": [], + "metadata": {}, + "object": "thread.message", + "role": "user", + "run_id": null, + "thread_id": "thread_ow1Yv29ptyVtv7ixbiKZRrHd" + } + ], + "object": "list", + "first_id": "msg_KKzOHCArWGvGpuPo0pVZTHgV", + "last_id": "msg_Pu3eHjM10XIBkwqh7IhnKKdG", + "has_more": false +} +``` ++Extract the new image file ID and download and display the image: ++```python +data = json.loads(messages.model_dump_json(indent=2)) # Load JSON data into a Python object +image_file_id = data['data'][0]['content'][0]['image_file']['file_id'] # index numbers can vary if you have had a different conversation over the course of the thread. ++print(image_file_id) ++content = client.files.content(image_file_id) +image= content.write_to_file("dark_sine.png") ++# Display the image in the default image viewer +image = Image.open("dark_sine.png") +image.show() +``` +++## Additional reference ++### Run status definitions ++|**Status**| **Definition**| +||--| +|`queued`| When Runs are first created or when you complete the required_action, they are moved to a queued status. They should almost immediately move to in_progress.| +|`in_progress` | While in_progress, the Assistant uses the model and tools to perform steps. You can view progress being made by the Run by examining the Run Steps.| +|`completed` | The Run successfully completed! You can now view all Messages the Assistant added to the Thread, and all the steps the Run took. 
You can also continue the conversation by adding more user Messages to the Thread and creating another Run.| +|`requires_action` | When using the Function calling tool, the Run will move to a requires_action state once the model determines the names and arguments of the functions to be called. You must then run those functions and submit the outputs before the run proceeds. If the outputs are not provided before the expires_at timestamp passes (roughly 10 minutes after creation), the run will move to an expired status.| +|`expired` | This happens when the function calling outputs weren't submitted before expires_at and the run expires. Additionally, if the run takes too long to execute and goes beyond the time stated in expires_at, our systems will expire the run.| +|`cancelling`| You can attempt to cancel an in_progress run using the Cancel Run endpoint. Once the attempt to cancel succeeds, the status of the Run moves to cancelled. Cancellation is attempted but not guaranteed.| +|`cancelled` |The Run was successfully cancelled.| +|`failed` |You can view the reason for the failure by looking at the `last_error` object in the Run. The timestamp for the failure is recorded under failed_at.| ++## Message annotations ++Assistant message annotations are different from the [content filtering annotations](../concepts/content-filter.md) that are present in completion and chat completion API responses. Assistant annotations can occur within the content array of the object. Annotations provide information about how you should annotate the text in the responses to the user. ++When annotations are present in the Message content array, you'll see illegible model-generated substrings in the text that you need to replace with the correct annotations. These strings might look something like `【13†source】` or `sandbox:/mnt/data/file.csv`. Here's a Python code snippet from OpenAI that replaces these strings with the information present in the annotations. 
++```Python ++from openai import AzureOpenAI + +client = AzureOpenAI( + api_key=os.getenv("AZURE_OPENAI_KEY"), + api_version="2024-02-15-preview", + azure_endpoint = os.getenv("AZURE_OPENAI_ENDPOINT") + ) ++# Retrieve the message object +message = client.beta.threads.messages.retrieve( + thread_id="...", + message_id="..." +) ++# Extract the message content +message_content = message.content[0].text +annotations = message_content.annotations +citations = [] ++# Iterate over the annotations and add footnotes +for index, annotation in enumerate(annotations): + # Replace the text with a footnote + message_content.value = message_content.value.replace(annotation.text, f' [{index}]') ++ # Gather citations based on annotation attributes + if (file_citation := getattr(annotation, 'file_citation', None)): + cited_file = client.files.retrieve(file_citation.file_id) + citations.append(f'[{index}] {file_citation.quote} from {cited_file.filename}') + elif (file_path := getattr(annotation, 'file_path', None)): + cited_file = client.files.retrieve(file_path.file_id) + citations.append(f'[{index}] Click <here> to download {cited_file.filename}') + # Note: File download functionality not implemented above for brevity ++# Add footnotes to the end of the message before displaying to user +message_content.value += '\n' + '\n'.join(citations) ++``` ++|Message annotation | Description | +||| +| `file_citation` | File citations are created by the retrieval tool and define references to a specific quote in a specific file that was uploaded and used by the Assistant to generate the response. | +|`file_path` | File path annotations are created by the code_interpreter tool and contain references to the files generated by the tool. 
| ++## See also ++* Learn more about Assistants and [Code Interpreter](./code-interpreter.md) +* Learn more about Assistants and [function calling](./assistant-functions.md) +* [Azure OpenAI Assistants API samples](https://github.com/Azure-Samples/azureai-samples/tree/main/scenarios/Assistants) |
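The run statuses in the table earlier in this article are typically consumed with a simple polling loop. The sketch below is illustrative rather than part of the API surface: `TERMINAL_STATUSES`, `is_terminal`, and `poll_run` are hypothetical helper names, and `client` is assumed to be an `AzureOpenAI` client configured as in the snippets above.

```python
import time

# Statuses after which a run will no longer change (see the status table above)
TERMINAL_STATUSES = {"completed", "expired", "cancelled", "failed"}

def is_terminal(status: str) -> bool:
    """Return True when a run has reached a state that will not change further."""
    return status in TERMINAL_STATUSES

def poll_run(client, thread_id: str, run_id: str, interval: float = 1.0):
    """Poll a run until it finishes or needs tool output (illustrative sketch)."""
    while True:
        run = client.beta.threads.runs.retrieve(thread_id=thread_id, run_id=run_id)
        if run.status == "requires_action":
            # Submit tool outputs here before the run expires (~10 minutes).
            return run
        if is_terminal(run.status):
            return run
        time.sleep(interval)
```

Returning early on `requires_action` lets the caller submit function outputs and then resume polling.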
ai-services | Code Interpreter | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/how-to/code-interpreter.md | + + Title: 'How to use Azure OpenAI Assistants Code Interpreter' ++description: Learn how to use Assistants Code Interpreter ++++ Last updated : 02/01/2024+++recommendations: false ++++# Azure OpenAI Assistants Code Interpreter (Preview) ++Code Interpreter allows the Assistants API to write and run Python code in a sandboxed execution environment. With Code Interpreter enabled, your Assistant can run code iteratively to solve more challenging code, math, and data analysis problems. When your Assistant writes code that fails to run, it can iterate on this code by modifying and running different code until the code execution succeeds. ++> [!IMPORTANT] +> Code Interpreter has [additional charges](https://azure.microsoft.com/pricing/details/cognitive-services/openai-service/) beyond the token-based fees for Azure OpenAI usage. If your Assistant calls Code Interpreter simultaneously in two different threads, two code interpreter sessions are created. Each session is active by default for one hour. ++## Code interpreter support ++### Supported models ++The [models page](../concepts/models.md#assistants-preview) contains the most up-to-date information on regions/models where Assistants and code interpreter are supported. ++We recommend using assistants with the latest models to take advantage of the new features, larger context windows, and more up-to-date training data. 
++### API Version ++- `2024-02-15-preview` ++### Supported file types ++|File format|MIME Type| +||| +|.c| text/x-c | +|.cpp|text/x-c++ | +|.csv|application/csv| +|.docx|application/vnd.openxmlformats-officedocument.wordprocessingml.document| +|.html|text/html| +|.java|text/x-java| +|.json|application/json| +|.md|text/markdown| +|.pdf|application/pdf| +|.php|text/x-php| +|.pptx|application/vnd.openxmlformats-officedocument.presentationml.presentation| +|.py|text/x-python| +|.py|text/x-script.python| +|.rb|text/x-ruby| +|.tex|text/x-tex| +|.txt|text/plain| +|.css|text/css| +|.jpeg|image/jpeg| +|.jpg|image/jpeg| +|.js|text/javascript| +|.gif|image/gif| +|.png|image/png| +|.tar|application/x-tar| +|.ts|application/typescript| +|.xlsx|application/vnd.openxmlformats-officedocument.spreadsheetml.sheet| +|.xml|application/xml or text/xml| +|.zip|application/zip| ++## Enable Code Interpreter ++# [Python 1.x](#tab/python) ++```python +import os +from openai import AzureOpenAI + +client = AzureOpenAI( + api_key=os.getenv("AZURE_OPENAI_KEY"), + api_version="2024-02-15-preview", + azure_endpoint = os.getenv("AZURE_OPENAI_ENDPOINT") + ) ++assistant = client.beta.assistants.create( + instructions="You are an AI assistant that can write code to help answer math questions", + model="<REPLACE WITH MODEL DEPLOYMENT NAME>", # replace with model deployment name. + tools=[{"type": "code_interpreter"}] +) +``` ++# [REST](#tab/rest) ++> [!NOTE] +> With Azure OpenAI, the `model` parameter requires the model deployment name. If your model deployment name is different from the underlying model name, adjust your code to ` "model": "{your-custom-model-deployment-name}"`. 
++```console +curl https://YOUR_RESOURCE_NAME.openai.azure.com/openai/assistants?api-version=2024-02-15-preview \ + -H "api-key: $AZURE_OPENAI_KEY" \ + -H 'Content-Type: application/json' \ + -d '{ + "instructions": "You are an AI assistant that can write code to help answer math questions.", + "tools": [ + { "type": "code_interpreter" } + ], + "model": "gpt-4-1106-preview" + }' +``` ++++## Upload file for Code Interpreter +++# [Python 1.x](#tab/python) ++```python +import os +from openai import AzureOpenAI + +client = AzureOpenAI( + api_key=os.getenv("AZURE_OPENAI_KEY"), + api_version="2024-02-15-preview", + azure_endpoint = os.getenv("AZURE_OPENAI_ENDPOINT") + ) ++# Upload a file with an "assistants" purpose +file = client.files.create( + file=open("speech.py", "rb"), + purpose='assistants' +) ++# Create an assistant using the file ID +assistant = client.beta.assistants.create( + instructions="You are an AI assistant that can write code to help answer math questions.", + model="gpt-4-1106-preview", + tools=[{"type": "code_interpreter"}], + file_ids=[file.id] +) +``` ++# [REST](#tab/rest) ++```console +# Upload a file with an "assistants" purpose ++curl https://YOUR_RESOURCE_NAME.openai.azure.com/openai/files?api-version=2024-02-15-preview \ + -H "api-key: $AZURE_OPENAI_KEY" \ + -F purpose="assistants" \ + -F file="@c:\\path_to_file\\file.csv" ++# Create an assistant using the file ID ++curl https://YOUR_RESOURCE_NAME.openai.azure.com/openai/assistants?api-version=2024-02-15-preview \ + -H "api-key: $AZURE_OPENAI_KEY" \ + -H 'Content-Type: application/json' \ + -d '{ + "instructions": "You are an AI assistant that can write code to help answer math questions.", + "tools": [ + { "type": "code_interpreter" } + ], + "model": "gpt-4-1106-preview", + "file_ids": ["file_123abc456"] + }' +``` ++++### Pass file to an individual thread ++In addition to making files accessible at the Assistants level, you can pass files so they're only accessible to a particular thread. 
++# [Python 1.x](#tab/python) ++```python +import os +from openai import AzureOpenAI + +client = AzureOpenAI( + api_key=os.getenv("AZURE_OPENAI_KEY"), + api_version="2024-02-15-preview", + azure_endpoint = os.getenv("AZURE_OPENAI_ENDPOINT") + ) ++thread = client.beta.threads.create( + messages=[ + { + "role": "user", + "content": "I need to solve the equation `3x + 11 = 14`. Can you help me?", + "file_ids": [file.id] # ID of a previously uploaded file; it will look like: "assistant-R9uhPxvRKGH3m0x5zBOhMjd2" + } + ] +) +``` ++# [REST](#tab/rest) ++```console +curl https://YOUR_RESOURCE_NAME.openai.azure.com/openai/threads/<YOUR-THREAD-ID>/messages?api-version=2024-02-15-preview \ + -H "api-key: $AZURE_OPENAI_KEY" \ + -H 'Content-Type: application/json' \ + -d '{ + "role": "user", + "content": "I need to solve the equation `3x + 11 = 14`. Can you help me?", + "file_ids": ["file_123abc456"] + }' +``` ++++## Download files generated by Code Interpreter ++Files generated by Code Interpreter can be found in the Assistant message responses: ++```json + { + "id": "msg_oJbUanImBRpRran5HSa4Duy4", + "assistant_id": "asst_eHwhP4Xnad0bZdJrjHO2hfB4", + "content": [ + { + "image_file": { + "file_id": "file-1YGVTvNzc2JXajI5JU9F0HMD" + }, + "type": "image_file" + }, + # ... 
+ } +``` ++You can download these generated files by passing the file ID to the files API: ++# [Python 1.x](#tab/python) ++```python +import os +from openai import AzureOpenAI + +client = AzureOpenAI( + api_key=os.getenv("AZURE_OPENAI_KEY"), + api_version="2024-02-15-preview", + azure_endpoint = os.getenv("AZURE_OPENAI_ENDPOINT") + ) ++image_data = client.files.content("file-abc123") +image_data_bytes = image_data.read() ++with open("./my-image.png", "wb") as file: + file.write(image_data_bytes) +``` ++# [REST](#tab/rest) ++```console +curl https://YOUR_RESOURCE_NAME.openai.azure.com/openai/files/<YOUR-FILE-ID>/content?api-version=2024-02-15-preview \ + -H "api-key: $AZURE_OPENAI_KEY" \ + --output image.png +``` ++++## See also ++* Learn more about how to use Assistants with our [How-to guide on Assistants](../how-to/assistant.md). +* [Azure OpenAI Assistants API samples](https://github.com/Azure-Samples/azureai-samples/tree/main/scenarios/Assistants) |
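Building on the single-file download shown above, a message's content array can also be walked programmatically to save every generated image at once. This sketch assumes the plain JSON shape shown in the message example (a list of dicts); `image_file_ids` and `download_images` are illustrative helper names, and `client` would be an `AzureOpenAI` client as in the earlier snippets.

```python
def image_file_ids(message_content):
    """Collect the file IDs of every image_file item in a message content array."""
    return [item["image_file"]["file_id"]
            for item in message_content
            if item.get("type") == "image_file"]

def download_images(client, message_content, out_dir="."):
    """Download each Code Interpreter image via the files API (sketch)."""
    for file_id in image_file_ids(message_content):
        image_data_bytes = client.files.content(file_id).read()
        with open(f"{out_dir}/{file_id}.png", "wb") as f:
            f.write(image_data_bytes)
```

Filtering on the `type` field keeps text items out of the download loop.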
ai-services | Fine Tuning Functions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/how-to/fine-tuning-functions.md | + + Title: Fine-tuning function calls with Azure OpenAI Service +description: Learn how to improve function calling performance with Azure OpenAI fine-tuning +# +++ Last updated : 02/05/2024++++++# Fine-tuning and function calling ++Models that use the chat completions API support [function calling](../how-to/function-calling.md). Unfortunately, functions defined in your chat completion calls don't always perform as expected. Fine-tuning your model with function calling examples can improve model output by enabling you to: ++* Get similarly formatted responses even when the full function definition isn't present. (Allowing you to potentially save money on prompt tokens.) +* Get more accurate and consistent outputs. ++> [!IMPORTANT] +> The `functions` and `function_call` parameters have been deprecated with the release of the [`2023-12-01-preview`](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-12-01-preview/inference.json) version of the API. However, the fine-tuning API currently requires use of the legacy parameters. ++## Constructing a training file ++When constructing a training file of function calling examples, you would take a function definition like this: ++```json +{ + "messages": [ + {"role": "user", "content": "What is the weather in San Francisco?"}, + {"role": "assistant", "function_call": {"name": "get_current_weather", "arguments": "{\"location\": \"San Francisco, USA\", \"format\": \"celsius\"}"}} + ], + "functions": [{ + "name": "get_current_weather", + "description": "Get the current weather", + "parameters": { + "type": "object", + "properties": { + "location": {"type": "string", "description": "The city and country, e.g. 
San Francisco, USA"}, + "format": {"type": "string", "enum": ["celsius", "fahrenheit"]} + }, + "required": ["location", "format"] + } + }] +} +``` ++And express the information as a single line within your `.jsonl` training file as below: ++```jsonl +{"messages": [{"role": "user", "content": "What is the weather in San Francisco?"}, {"role": "assistant", "function_call": {"name": "get_current_weather", "arguments": "{\"location\": \"San Francisco, USA\", \"format\": \"celsius\"}"}}], "functions": [{"name": "get_current_weather", "description": "Get the current weather", "parameters": {"type": "object", "properties": {"location": {"type": "string", "description": "The city and country, e.g. San Francisco, USA"}, "format": {"type": "string", "enum": ["celsius", "fahrenheit"]}}, "required": ["location", "format"]}}]} +``` ++As with all fine-tuning training, your example file requires at least 10 examples. ++## Optimize for cost ++If you're trying to use fewer prompt tokens after fine-tuning your model on the full function definitions, OpenAI recommends experimenting with the following: ++* Omit function and parameter descriptions: remove the description field from function and parameters. +* Omit parameters: remove the entire properties field from the parameters object. +* Omit function entirely: remove the entire function object from the functions array. ++## Optimize for quality ++Alternatively, if you're trying to improve the quality of the function calling output, it's recommended that the function definitions present in the fine-tuning training dataset and subsequent chat completion calls remain identical. ++## Customize model responses to function outputs ++Fine-tuning based on function calling examples can also be used to improve the model's response to function outputs. 
To accomplish this, you include examples consisting of function response messages and assistant response messages where the function response is interpreted and put into context by the assistant. ++```json +{ + "messages": [ + {"role": "user", "content": "What is the weather in San Francisco?"}, + {"role": "assistant", "function_call": {"name": "get_current_weather", "arguments": "{\"location\": \"San Francisco, USA\", \"format\": \"celsius\"}"}}, + {"role": "function", "name": "get_current_weather", "content": "21.0"}, + {"role": "assistant", "content": "It is 21 degrees celsius in San Francisco, CA"} + ], + "functions": [...] // same as before +} +``` ++As with the example before, this example is artificially expanded for readability. The actual entry in the `.jsonl` training file would be a single line: ++```jsonl +{"messages": [{"role": "user", "content": "What is the weather in San Francisco?"}, {"role": "assistant", "function_call": {"name": "get_current_weather", "arguments": "{\"location\": \"San Francisco, USA\", \"format\": \"celsius\"}"}}, {"role": "function", "name": "get_current_weather", "content": "21.0"}, {"role": "assistant", "content": "It is 21 degrees celsius in San Francisco, CA"}], "functions": []} +``` ++## Next steps ++- Explore the fine-tuning capabilities in the [Azure OpenAI fine-tuning tutorial](../tutorials/fine-tune.md). +- Review fine-tuning [model regional availability](../concepts/models.md#fine-tuning-models) |
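Because each `.jsonl` entry must be valid single-line JSON, it can help to build the lines with `json.dumps` rather than by hand. The sketch below is illustrative, not part of any SDK: `build_function_call_example` and the output filename are hypothetical names.

```python
import json

def build_function_call_example(user_question, function_name, arguments, functions):
    """Serialize one function-calling training example as a single JSONL line."""
    record = {
        "messages": [
            {"role": "user", "content": user_question},
            {
                "role": "assistant",
                "function_call": {
                    "name": function_name,
                    # arguments must itself be a JSON-encoded string
                    "arguments": json.dumps(arguments),
                },
            },
        ],
        "functions": functions,
    }
    return json.dumps(record)

weather_function = {
    "name": "get_current_weather",
    "description": "Get the current weather",
    "parameters": {
        "type": "object",
        "properties": {
            "location": {
                "type": "string",
                "description": "The city and country, e.g. San Francisco, USA",
            },
            "format": {"type": "string", "enum": ["celsius", "fahrenheit"]},
        },
        "required": ["location", "format"],
    },
}

with open("training.jsonl", "w") as f:
    f.write(build_function_call_example(
        "What is the weather in San Francisco?",
        "get_current_weather",
        {"location": "San Francisco, USA", "format": "celsius"},
        [weather_function],
    ) + "\n")
```

Serializing with `json.dumps` guarantees correct escaping of the nested `arguments` string, which is the most error-prone part of writing these lines manually.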
ai-services | Fine Tuning | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/how-to/fine-tuning.md | A fine-tuned model improves on the few-shot learning approach by training the mo [!INCLUDE [REST API fine-tuning](../includes/fine-tuning-rest.md)] ::: zone-end++## Troubleshooting ++### How do I enable fine-tuning? **Create a custom model** is greyed out in Azure OpenAI Studio ++In order to successfully access fine-tuning, you need the **Cognitive Services OpenAI Contributor** role assigned. Even someone with high-level Service Administrator permissions would still need this role explicitly assigned in order to access fine-tuning. For more information, review the [role-based access control guidance](/azure/ai-services/openai/how-to/role-based-access-control#cognitive-services-openai-contributor). + +## Next steps ++- Explore the fine-tuning capabilities in the [Azure OpenAI fine-tuning tutorial](../tutorials/fine-tune.md). +- Review fine-tuning [model regional availability](../concepts/models.md#fine-tuning-models) |
ai-services | Working With Models | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/how-to/working-with-models.md | You can learn more about Azure OpenAI model versions and how they work in the [A ### Auto update to default -When **Auto-update to default** is selected your model deployment will be automatically updated within two weeks of a change in the default version. +When **Auto-update to default** is selected, your model deployment is automatically updated within two weeks of a change in the default version. For a preview version, the deployment updates automatically to a new preview version, starting two weeks after the new preview version is released. If you're still in the early testing phases for inference models, we recommend deploying models with **auto-update to default** set whenever it's available. |
ai-services | Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/overview.md | Apply here for access: ## Comparing Azure OpenAI and OpenAI -Azure OpenAI Service gives customers advanced language AI with OpenAI GPT-4, GPT-3, Codex, DALL-E, and Whisper models with the security and enterprise promise of Azure. Azure OpenAI co-develops the APIs with OpenAI, ensuring compatibility and a smooth transition from one to the other. +Azure OpenAI Service gives customers advanced language AI with OpenAI GPT-4, GPT-3, Codex, DALL-E, Whisper, and text to speech models with the security and enterprise promise of Azure. Azure OpenAI co-develops the APIs with OpenAI, ensuring compatibility and a smooth transition from one to the other. With Azure OpenAI, customers get the security capabilities of Microsoft Azure while running the same models as OpenAI. Azure OpenAI offers private networking, regional availability, and responsible AI content filtering. The DALL-E models, currently in preview, generate images from text prompts that The Whisper models, currently in preview, can be used to transcribe and translate speech to text. +The text to speech models, currently in preview, can be used to synthesize spoken audio from text. + Learn more about each model on our [models concept page](./concepts/models.md). ## Next steps |
ai-services | Quotas Limits | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/quotas-limits.md | The following sections provide you with a quick guide to the default quotas and | Max fine-tuned model deployments | 5 | | Total number of training jobs per resource | 100 | | Max simultaneous running training jobs per resource | 1 |-| Max training jobs queued | 20 | -| Max Files per resource | 30 | -| Total size of all files per resource | 1 GB | +| Max training jobs queued | 20 | +| Max Files per resource (fine-tuning) | 30 | +| Total size of all files per resource (fine-tuning) | 1 GB | | Max training job time (job will fail if exceeded) | 720 hours | | Max training job size (tokens in training file) x (# of epochs) | 2 Billion | | Max size of all files per upload (Azure OpenAI on your data) | 16 MB | The following sections provide you with a quick guide to the default quotas and | Max number of `/chat/completions` functions | 128 | | Max number of `/chat completions` tools | 128 | | Maximum number of Provisioned throughput units per deployment | 100,000 |--+| Max files per Assistant/thread | 20 | +| Max file size for Assistants | 512 MB | +| Assistants token limit | 2,000,000 tokens | ## Regional quota limits |
ai-services | Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/reference.md | Title: Azure OpenAI Service REST API reference -description: Learn how to use Azure OpenAI's REST API. In this article, you'll learn about authorization options, how to structure a request and receive a response. +description: Learn how to use Azure OpenAI's REST API. In this article, you learn about authorization options, how to structure a request and receive a response. # POST https://YOUR_RESOURCE_NAME.openai.azure.com/openai/deployments/YOUR_DEPLOYM ## Completions -With the Completions operation, the model will generate one or more predicted completions based on a provided prompt. The service can also return the probabilities of alternative tokens at each position. +With the Completions operation, the model generates one or more predicted completions based on a provided prompt. The service can also return the probabilities of alternative tokens at each position. **Create a completion** POST https://{your-resource-name}.openai.azure.com/openai/deployments/{deploymen | Parameter | Type | Required? | Default | Description | |--|--|--|--|--|-| ```prompt``` | string or array | Optional | ```<\|endoftext\|>``` | The prompt(s) to generate completions for, encoded as a string, or array of strings. Note that ```<\|endoftext\|>``` is the document separator that the model sees during training, so if a prompt isn't specified the model will generate as if from the beginning of a new document. | +| ```prompt``` | string or array | Optional | ```<\|endoftext\|>``` | The prompt(s) to generate completions for, encoded as a string, or array of strings. Note that ```<\|endoftext\|>``` is the document separator that the model sees during training, so if a prompt isn't specified the model generates as if from the beginning of a new document. | | ```max_tokens``` | integer | Optional | 16 | The maximum number of tokens to generate in the completion. 
The token count of your prompt plus max_tokens can't exceed the model's context length. Most models have a context length of 2048 tokens (except for the newest models, which support 4096). |-| ```temperature``` | number | Optional | 1 | What sampling temperature to use, between 0 and 2. Higher values means the model will take more risks. Try 0.9 for more creative applications, and 0 (`argmax sampling`) for ones with a well-defined answer. We generally recommend altering this or top_p but not both. | +| ```temperature``` | number | Optional | 1 | What sampling temperature to use, between 0 and 2. Higher values mean the model takes more risks. Try 0.9 for more creative applications, and 0 (`argmax sampling`) for ones with a well-defined answer. We generally recommend altering this or top_p but not both. | | ```top_p``` | number | Optional | 1 | An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. We generally recommend altering this or temperature but not both. |-| ```logit_bias``` | map | Optional | null | Modify the likelihood of specified tokens appearing in the completion. Accepts a json object that maps tokens (specified by their token ID in the GPT tokenizer) to an associated bias value from -100 to 100. You can use this tokenizer tool (which works for both GPT-2 and GPT-3) to convert text to token IDs. Mathematically, the bias is added to the logits generated by the model prior to sampling. The exact effect will vary per model, but values between -1 and 1 should decrease or increase likelihood of selection; values like -100 or 100 should result in a ban or exclusive selection of the relevant token. As an example, you can pass {"50256": -100} to prevent the <\|endoftext\|> token from being generated. 
| +| ```logit_bias``` | map | Optional | null | Modify the likelihood of specified tokens appearing in the completion. Accepts a json object that maps tokens (specified by their token ID in the GPT tokenizer) to an associated bias value from -100 to 100. You can use this tokenizer tool (which works for both GPT-2 and GPT-3) to convert text to token IDs. Mathematically, the bias is added to the logits generated by the model prior to sampling. The exact effect varies per model, but values between -1 and 1 should decrease or increase likelihood of selection; values like -100 or 100 should result in a ban or exclusive selection of the relevant token. As an example, you can pass {"50256": -100} to prevent the <\|endoftext\|> token from being generated. | | ```user``` | string | Optional | | A unique identifier representing your end-user, which can help monitor and detect abuse | | ```n``` | integer | Optional | 1 | How many completions to generate for each prompt. Note: Because this parameter generates many completions, it can quickly consume your token quota. Use carefully and ensure that you have reasonable settings for max_tokens and stop. |-| ```stream``` | boolean | Optional | False | Whether to stream back partial progress. If set, tokens will be sent as data-only server-sent events as they become available, with the stream terminated by a data: [DONE] message.| +| ```stream``` | boolean | Optional | False | Whether to stream back partial progress. If set, tokens are sent as data-only server-sent events as they become available, with the stream terminated by a data: [DONE] message.| | ```logprobs``` | integer | Optional | null | Include the log probabilities on the logprobs most likely tokens, as well as the chosen tokens. For example, if logprobs is 10, the API will return a list of the 10 most likely tokens. The API will always return the logprob of the sampled token, so there might be up to logprobs+1 elements in the response. 
This parameter cannot be used with `gpt-35-turbo`. | | ```suffix```| string | Optional | null | The suffix that comes after a completion of inserted text. | | ```echo``` | boolean | Optional | False | Echo back the prompt in addition to the completion. This parameter cannot be used with `gpt-35-turbo`. | In the example response, `finish_reason` equals `stop`. If `finish_reason` equal | ```max_tokens``` | integer | Optional | inf | The maximum number of tokens allowed for the generated answer. By default, the number of tokens the model can return will be (4096 - prompt tokens).| | ```presence_penalty``` | number | Optional | 0 | Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics.| | ```frequency_penalty``` | number | Optional | 0 | Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim.|-| ```logit_bias``` | object | Optional | null | Modify the likelihood of specified tokens appearing in the completion. Accepts a json object that maps tokens (specified by their token ID in the tokenizer) to an associated bias value from -100 to 100. Mathematically, the bias is added to the logits generated by the model prior to sampling. The exact effect will vary per model, but values between -1 and 1 should decrease or increase likelihood of selection; values like -100 or 100 should result in a ban or exclusive selection of the relevant token.| +| ```logit_bias``` | object | Optional | null | Modify the likelihood of specified tokens appearing in the completion. Accepts a json object that maps tokens (specified by their token ID in the tokenizer) to an associated bias value from -100 to 100. Mathematically, the bias is added to the logits generated by the model prior to sampling. 
The exact effect varies per model, but values between -1 and 1 should decrease or increase likelihood of selection; values like -100 or 100 should result in a ban or exclusive selection of the relevant token.| +| ```user``` | string | Optional | | A unique identifier representing your end-user, which can help Azure OpenAI to monitor and detect abuse.|-|```function_call```| | Optional | | `[Deprecated in 2023-12-01-preview replacement paremeter is tools_choice]`Controls how the model responds to function calls. "none" means the model does not call a function, and responds to the end-user. "auto" means the model can pick between an end-user or calling a function. Specifying a particular function via {"name": "my_function"} forces the model to call that function. "none" is the default when no functions are present. "auto" is the default if functions are present. This parameter requires API version [`2023-07-01-preview`](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-07-01-preview/generated.json) | +|```function_call```| | Optional | | `[Deprecated in 2023-12-01-preview; replacement parameter is tool_choice]` Controls how the model responds to function calls. "none" means the model doesn't call a function, and responds to the end-user. "auto" means the model can pick between an end-user or calling a function. Specifying a particular function via {"name": "my_function"} forces the model to call that function. "none" is the default when no functions are present. "auto" is the default if functions are present. 
This parameter requires API version [`2023-07-01-preview`](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-07-01-preview/generated.json) | |```functions``` | [`FunctionDefinition[]`](#functiondefinition-deprecated) | Optional | | `[Deprecated in 2023-12-01-preview; replacement parameter is tools]` A list of functions the model can generate JSON inputs for. This parameter requires API version [`2023-07-01-preview`](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-07-01-preview/generated.json)|-|```tools```| string (The type of the tool. Only [`function`](#function) is supported.) | Optional | |A list of tools the model may call. Currently, only functions are supported as a tool. Use this to provide a list of functions the model may generate JSON inputs for. This parameter requires API version [`2023-12-01-preview`](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-12-01-preview/generated.json) | -|```tool_choice```| string or object | Optional | none is the default when no functions are present. auto is the default if functions are present. | Controls which (if any) function is called by the model. none means the model will not call a function and instead generates a message. auto means the model can pick between generating a message or calling a function. Specifying a particular function via {"type: "function", "function": {"name": "my_function"}} forces the model to call that function. This parameter requires API version [`2023-12-01-preview`](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-12-01-preview/inference.json)| +|```tools```| string (The type of the tool. Only [`function`](#function) is supported.) 
| Optional | |A list of tools the model can call. Currently, only functions are supported as a tool. Use this to provide a list of functions the model can generate JSON inputs for. This parameter requires API version [`2023-12-01-preview`](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-12-01-preview/generated.json) | +|```tool_choice```| string or object | Optional | none is the default when no functions are present. auto is the default if functions are present. | Controls which (if any) function is called by the model. none means the model won't call a function and instead generates a message. auto means the model can pick between generating a message or calling a function. Specifying a particular function via {"type": "function", "function": {"name": "my_function"}} forces the model to call that function. This parameter requires API version [`2023-12-01-preview`](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-12-01-preview/inference.json)| ### ChatMessage The name and arguments of a function that should be called, as generated by the | Name | Type | Description| ||||-| arguments | string | The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and might fabricate parameters not defined by your function schema. Validate the arguments in your code before calling your function. | +| arguments | string | The arguments to call the function with, as generated by the model in JSON format. The model doesn't always generate valid JSON, and might fabricate parameters not defined by your function schema. Validate the arguments in your code before calling your function. 
| | name | string | The name of the function to call.| ### FunctionDefinition-Deprecated The definition of a caller-specified function that chat completions can invoke i |Name | Type| Description| ||||-| description | string | A description of what the function does. The model will use this description when selecting the function and interpreting its parameters. | +| description | string | A description of what the function does. The model uses this description when selecting the function and interpreting its parameters. | | name | string | The name of the function to be called. | | parameters | | The parameters the function accepts, described as a [JSON Schema](https://json-schema.org/understanding-json-schema/) object.| curl -i -X POST YOUR_RESOURCE_NAME/openai/deployments/YOUR_DEPLOYMENT_NAME/exten | `dataSources` | array | Required | | The data sources to be used for the Azure OpenAI on your data feature. | | `temperature` | number | Optional | 0 | What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or `top_p` but not both. | | `top_p` | number | Optional | 1 |An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with `top_p` probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. We generally recommend altering this or temperature but not both.|-| `stream` | boolean | Optional | false | If set, partial message deltas will be sent, like in ChatGPT. Tokens will be sent as data-only server-sent events as they become available, with the stream terminated by a message `"messages": [{"delta": {"content": "[DONE]"}, "index": 2, "end_turn": true}]` | -| `stop` | string or array | Optional | null | Up to 2 sequences where the API will stop generating further tokens. 
| +| `stream` | boolean | Optional | false | If set, partial message deltas are sent, like in ChatGPT. Tokens are sent as data-only server-sent events as they become available, with the stream terminated by a message `"messages": [{"delta": {"content": "[DONE]"}, "index": 2, "end_turn": true}]` | +| `stop` | string or array | Optional | null | Up to two sequences where the API will stop generating further tokens. | | `max_tokens` | integer | Optional | 1000 | The maximum number of tokens allowed for the generated answer. By default, the number of tokens the model can return is `4096 - prompt_tokens`. | The following parameters can be used inside of the `parameters` field inside of `dataSources`. The following parameters can be used inside of the `parameters` field inside of |--|--|--|--|--| | `type` | string | Required | null | The data source to be used for the Azure OpenAI on your data feature. For Azure AI Search the value is `AzureCognitiveSearch`. For Azure Cosmos DB for MongoDB vCore, the value is `AzureCosmosDB`. For Elasticsearch the value is `Elasticsearch`. For Azure Machine Learning, the value is `AzureMLIndex`. For Pinecone, the value is `Pinecone`. | | `indexName` | string | Required | null | The search index to be used. |-| `inScope` | boolean | Optional | true | If set, this value will limit responses specific to the grounding data content. | +| `inScope` | boolean | Optional | true | If set, this value limits responses specific to the grounding data content. | | `topNDocuments` | number | Optional | 5 | Specifies the number of top-scoring documents from your data index used to generate responses. You might want to increase the value when you have short documents or want to provide more context. This is the *retrieved documents* parameter in Azure OpenAI studio. | | `semanticConfiguration` | string | Optional | null | The semantic search configuration. Only required when `queryType` is set to `semantic` or `vectorSemanticHybrid`. 
| `roleInformation` | string | Optional | null | Gives the model instructions about how it should behave and the context it should reference when generating a response. Corresponds to the "System Message" in Azure OpenAI Studio. See [Using your data](./concepts/use-your-data.md#system-message) for more information. There's a 100 token limit, which counts towards the overall token limit.| | `filter` | string | Optional | null | The filter pattern used for [restricting access to sensitive documents](./concepts/use-your-data.md#document-level-access-control) | `embeddingEndpoint` | string | Optional | null | The endpoint URL for an Ada embedding model deployment, generally of the format `https://YOUR_RESOURCE_NAME.openai.azure.com/openai/deployments/YOUR_DEPLOYMENT_NAME/embeddings?api-version=2023-05-15`. Use with the `embeddingKey` parameter for [vector search](./concepts/use-your-data.md#search-options) outside of private networks and private endpoints. | | `embeddingKey` | string | Optional | null | The API key for an Ada embedding model deployment. Use with `embeddingEndpoint` for [vector search](./concepts/use-your-data.md#search-options) outside of private networks and private endpoints. | -| `embeddingDeploymentName` | string | Optional | null | The Ada embedding model deployment name within the same Azure OpenAI resource. Used instead of `embeddingEndpoint` and `embeddingKey` for [vector search](./concepts/use-your-data.md#search-options). Should only be used when both the `embeddingEndpoint` and `embeddingKey` parameters are defined. When this parameter is provided, Azure OpenAI on your data will use an internal call to evaluate the Ada embedding model, rather than calling the Azure OpenAI endpoint. This enables you to use vector search in private networks and private endpoints. Billing remains the same whether this parameter is defined or not. 
Available in regions where embedding models are [available](./concepts/models.md#embeddings-models) starting in API versions `2023-06-01-preview` and later.| +| `embeddingDeploymentName` | string | Optional | null | The Ada embedding model deployment name within the same Azure OpenAI resource. Used instead of `embeddingEndpoint` and `embeddingKey` for [vector search](./concepts/use-your-data.md#search-options). Should only be used when both the `embeddingEndpoint` and `embeddingKey` parameters are defined. When this parameter is provided, Azure OpenAI on your data uses an internal call to evaluate the Ada embedding model, rather than calling the Azure OpenAI endpoint. This enables you to use vector search in private networks and private endpoints. Billing remains the same whether this parameter is defined or not. Available in regions where embedding models are [available](./concepts/models.md#embeddings-models) starting in API versions `2023-06-01-preview` and later.| | `strictness` | number | Optional | 3 | Sets the threshold to categorize documents as relevant to your queries. Raising the value means a higher threshold for relevance and filters out more less-relevant documents for responses. Setting this value too high might cause the model to fail to generate responses due to limited available documents. | The following parameters are used for Azure AI Search. |--|--|--|--|--| | `endpoint` | string | Required | null | Azure AI Search only. The data source endpoint. | | `key` | string | Required | null | Azure AI Search only. One of the Azure AI Search admin keys for your service. |-| `queryType` | string | Optional | simple | Indicates which query option will be used for Azure AI Search. Available types: `simple`, `semantic`, `vector`, `vectorSimpleHybrid`, `vectorSemanticHybrid`. | +| `queryType` | string | Optional | simple | Indicates which query option is used for Azure AI Search. 
Available types: `simple`, `semantic`, `vector`, `vectorSimpleHybrid`, `vectorSemanticHybrid`. | | `fieldsMapping` | dictionary | Optional for Azure AI Search. | null | defines which [fields](./concepts/use-your-data.md?tabs=ai-search#index-field-mapping) you want to map when you add your data source. | The following parameters are used inside of the `authentication` field, which enables you to use Azure OpenAI [without public network access](./how-to/use-your-data-securely.md). curl -i -X PUT https://YOUR_RESOURCE_NAME.openai.azure.com/openai/extensions/on- | Parameters | Type | Required? | Default | Description | |||||| | `searchServiceEndpoint` | string | Required |null | The endpoint of the search resource in which the data will be ingested.|-| `searchServiceAdminKey` | string | Optional | null | If provided, the key will be used to authenticate with the `searchServiceEndpoint`. If not provided, the system-assigned identity of the Azure OpenAI resource will be used. In this case, the system-assigned identity must have "Search Service Contributor" role assignment on the search resource. | +| `searchServiceAdminKey` | string | Optional | null | If provided, the key is used to authenticate with the `searchServiceEndpoint`. If not provided, the system-assigned identity of the Azure OpenAI resource will be used. In this case, the system-assigned identity must have "Search Service Contributor" role assignment on the search resource. | | `storageConnectionString` | string | Required | null | The connection string for the storage account where the input data is located. An account key has to be provided in the connection string. It should look something like `DefaultEndpointsProtocol=https;AccountName=<your storage account>;AccountKey=<your account key>` | | `storageContainer` | string | Required | null | The name of the container where the input data is located. 
| -| `embeddingEndpoint` | string | Optional | null | Not required if you use semantic or only keyword search. It is required if you use vector, hybrid, or hybrid + semantic search | -| `embeddingKey` | string | Optional | null | The key of the embedding endpoint. This is required if the embedding endpoint is not empty. | -| `url` | string | Optional | null | If URL is not null, the provided url will be crawled into the provided storage container and then ingested accordingly.| +| `embeddingEndpoint` | string | Optional | null | Not required if you use semantic or only keyword search. It's required if you use vector, hybrid, or hybrid + semantic search | +| `embeddingKey` | string | Optional | null | The key of the embedding endpoint. This is required if the embedding endpoint isn't empty. | +| `url` | string | Optional | null | If URL isn't null, the provided url is crawled into the provided storage container and then ingested accordingly.| **Body Parameters** POST https://{your-resource-name}.openai.azure.com/openai/deployments/{deploymen | Parameter | Type | Required? | Description | |--|--|--|--|-| ```your-resource-name``` | string | Required | The name of your Azure OpenAI Resource. | +| ```your-resource-name``` | string | Required | The name of your Azure OpenAI resource. | | ```deployment-id``` | string | Required | The name of your Whisper model deployment such as *MyWhisperDeployment*. You're required to first deploy a Whisper model before you can make calls. | | ```api-version``` | string | Required |The API version to use for this operation. This value follows the YYYY-MM-DD format. | POST https://{your-resource-name}.openai.azure.com/openai/deployments/{deploymen | Parameter | Type | Required? 
| Default | Description | |--|--|--|--|--|-| ```file```| file | Yes | N/A | The audio file object (not file name) to transcribe, in one of these formats: flac, mp3, mp4, mpeg, mpga, m4a, ogg, wav, or webm.<br/><br/>The file size limit for the Azure OpenAI Whisper model is 25 MB. If you need to transcribe a file larger than 25 MB, break it into chunks. Alternatively you can use the Azure AI Speech [batch transcription](../speech-service/batch-transcription-create.md#use-a-whisper-model) API.<br/><br/>You can get sample audio files from the [Azure AI Speech SDK repository at GitHub](https://github.com/Azure-Samples/cognitive-services-speech-sdk/tree/master/sampledata/audiofiles). | +| ```file```| file | Yes | N/A | The audio file object (not file name) to transcribe, in one of these formats: `flac`, `mp3`, `mp4`, `mpeg`, `mpga`, `m4a`, `ogg`, `wav`, or `webm`.<br/><br/>The file size limit for the Azure OpenAI Whisper model is 25 MB. If you need to transcribe a file larger than 25 MB, break it into chunks. Alternatively you can use the Azure AI Speech [batch transcription](../speech-service/batch-transcription-create.md#use-a-whisper-model) API.<br/><br/>You can get sample audio files from the [Azure AI Speech SDK repository at GitHub](https://github.com/Azure-Samples/cognitive-services-speech-sdk/tree/master/sampledata/audiofiles). | | ```language``` | string | No | Null | The language of the input audio such as `fr`. Supplying the input language in [ISO-639-1](https://en.wikipedia.org/wiki/List_of_ISO_639-1_codes) format improves accuracy and latency.<br/><br/>For the list of supported languages, see the [OpenAI documentation](https://platform.openai.com/docs/guides/speech-to-text/supported-languages). | | ```prompt``` | string | No | Null | An optional text to guide the model's style or continue a previous audio segment. 
The prompt should match the audio language.<br/><br/>For more information about prompts including example use cases, see the [OpenAI documentation](https://platform.openai.com/docs/guides/speech-to-text/supported-languages). | | ```response_format``` | string | No | json | The format of the transcript output, in one of these options: json, text, srt, verbose_json, or vtt.<br/><br/>The default value is *json*. | POST https://{your-resource-name}.openai.azure.com/openai/deployments/{deploymen | Parameter | Type | Required? | Description | |--|--|--|--|-| ```your-resource-name``` | string | Required | The name of your Azure OpenAI Resource. | +| ```your-resource-name``` | string | Required | The name of your Azure OpenAI resource. | | ```deployment-id``` | string | Required | The name of your Whisper model deployment such as *MyWhisperDeployment*. You're required to first deploy a Whisper model before you can make calls. | | ```api-version``` | string | Required |The API version to use for this operation. This value follows the YYYY-MM-DD format. | curl https://YOUR_RESOURCE_NAME.openai.azure.com/openai/deployments/YOUR_DEPLOYM } ``` +## Text to speech ++Synthesize text to speech. ++```http +POST https://{your-resource-name}.openai.azure.com/openai/deployments/{deployment-id}/audio/speech?api-version={api-version} +``` ++**Path parameters** ++| Parameter | Type | Required? | Description | +|--|--|--|--| +| ```your-resource-name``` | string | Required | The name of your Azure OpenAI resource. | +| ```deployment-id``` | string | Required | The name of your text to speech model deployment such as *MyTextToSpeechDeployment*. You're required to first deploy a text to speech model (such as `tts-1` or `tts-1-hd`) before you can make calls. | +| ```api-version``` | string | Required |The API version to use for this operation. This value follows the YYYY-MM-DD format. | ++**Supported versions** ++- `2024-02-15-preview` ++**Request body** ++| Parameter | Type | Required? 
| Default | Description | +|--|--|--|--|--| +| ```model```| string | Yes | N/A | One of the available TTS models: `tts-1` or `tts-1-hd` | +| ```input``` | string | Yes | N/A | The text to generate audio for. The maximum length is 4096 characters. Specify input text in the language of your choice.<sup>1</sup> | +| ```voice``` | string | Yes | N/A | The voice to use when generating the audio. Supported voices are `alloy`, `echo`, `fable`, `onyx`, `nova`, and `shimmer`. Previews of the voices are available in the [OpenAI text to speech guide](https://platform.openai.com/docs/guides/text-to-speech/voice-options). | ++<sup>1</sup> The text to speech models generally support the same languages as the Whisper model. For the list of supported languages, see the [OpenAI documentation](https://platform.openai.com/docs/guides/speech-to-text/supported-languages). ++### Example request ++```console +curl https://YOUR_RESOURCE_NAME.openai.azure.com/openai/deployments/YOUR_DEPLOYMENT_NAME/audio/speech?api-version=2024-02-15-preview \ + -H "api-key: $YOUR_API_KEY" \ + -H "Content-Type: application/json" \ + -d '{ + "model": "tts-1-hd", + "input": "I'\''m excited to try text to speech.", + "voice": "alloy" +}' --output speech.mp3 +``` ++### Example response ++The speech is returned as an audio file from the previous request. + ## Management APIs Azure OpenAI is deployed as a part of the Azure AI services. All Azure AI services rely on the same set of management APIs for creation, update and delete operations. The management APIs are also used for deploying models within an OpenAI resource. Azure OpenAI is deployed as a part of the Azure AI services. All Azure AI servic ## Next steps -Learn about [ Models, and fine-tuning with the REST API](/rest/api/azureopenai/fine-tuning?view=rest-azureopenai-2023-10-01-preview). +Learn about [Models, and fine-tuning with the REST API](/rest/api/azureopenai/fine-tuning?view=rest-azureopenai-2023-10-01-preview). 
Learn more about the [underlying models that power Azure OpenAI](./concepts/models.md). |
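The `tool_choice` behavior described in the reference above can be made concrete with a short sketch that only assembles the chat completions request body, without calling the service. The `get_weather` function and its schema are illustrative examples, not part of the API; the defaults mirror the documented behavior (`none` when no tools are present, `auto` when they are).

```python
import json

def build_chat_body(messages, tools=None, force_function=None):
    """Assemble a chat completions request body.

    Mirrors the documented defaults: tool_choice is "none" when no tools
    are supplied and "auto" when they are; naming a function forces the
    model to call that function via the object form.
    """
    body = {"messages": messages}
    if tools:
        body["tools"] = tools
    if force_function:
        body["tool_choice"] = {"type": "function",
                               "function": {"name": force_function}}
    else:
        body["tool_choice"] = "auto" if tools else "none"
    return body

# Hypothetical function definition, for illustration only.
weather_tool = {
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}

body = build_chat_body(
    [{"role": "user", "content": "What's the weather in Oslo?"}],
    tools=[weather_tool],
    force_function="get_weather",
)
print(json.dumps(body["tool_choice"]))
```

Remember that the `arguments` string returned in the response is model-generated: validate it in your code before invoking your function, since the model can emit invalid JSON or fabricated parameters.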
ai-services | Text To Speech Quickstart | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/text-to-speech-quickstart.md | + + Title: 'Text to speech with Azure OpenAI Service' ++description: Use the Azure OpenAI Service for text to speech with OpenAI voices. +++ Last updated : 2/1/2024++++recommendations: false +++# Quickstart: Text to speech with the Azure OpenAI Service ++In this quickstart, you use the Azure OpenAI Service for text to speech with OpenAI voices. ++The available voices are: `alloy`, `echo`, `fable`, `onyx`, `nova`, and `shimmer`. For more information, see [Azure OpenAI Service reference documentation for text to speech](./reference.md#text-to-speech). ++## Prerequisites ++- An Azure subscription - [Create one for free](https://azure.microsoft.com/free/cognitive-services?azure-portal=true). +- Access granted to Azure OpenAI Service in the desired Azure subscription. +- An Azure OpenAI resource created in the North Central US or Sweden Central regions with the `tts-1` or `tts-1-hd` model deployed. For more information, see [Create a resource and deploy a model with Azure OpenAI](how-to/create-resource.md). ++> [!NOTE] +> Currently, you must submit an application to access Azure OpenAI Service. To apply for access, complete [this form](https://aka.ms/oai/access). ++## Set up ++### Retrieve key and endpoint ++To successfully make a call against Azure OpenAI, you need an **endpoint** and a **key**. ++|Variable name | Value | +|--|-| +| `AZURE_OPENAI_ENDPOINT` | This value can be found in the **Keys & Endpoint** section when examining your resource from the Azure portal. Alternatively, you can find the value in the **Azure OpenAI Studio** > **Playground** > **Code View**. An example endpoint is: `https://aoai-docs.openai.azure.com/`.| +| `AZURE_OPENAI_KEY` | This value can be found in the **Keys & Endpoint** section when examining your resource from the Azure portal. 
You can use either `KEY1` or `KEY2`.| ++Go to your resource in the Azure portal. The **Endpoint and Keys** can be found in the **Resource Management** section. Copy your endpoint and access key as you need both for authenticating your API calls. You can use either `KEY1` or `KEY2`. Always having two keys allows you to securely rotate and regenerate keys without causing a service disruption. +++Create and assign persistent environment variables for your key and endpoint. ++### Environment variables ++# [Command Line](#tab/command-line) ++```CMD +setx AZURE_OPENAI_KEY "REPLACE_WITH_YOUR_KEY_VALUE_HERE" +``` ++```CMD +setx AZURE_OPENAI_ENDPOINT "REPLACE_WITH_YOUR_ENDPOINT_HERE" +``` ++# [PowerShell](#tab/powershell) ++```powershell +[System.Environment]::SetEnvironmentVariable('AZURE_OPENAI_KEY', 'REPLACE_WITH_YOUR_KEY_VALUE_HERE', 'User') +``` ++```powershell +[System.Environment]::SetEnvironmentVariable('AZURE_OPENAI_ENDPOINT', 'REPLACE_WITH_YOUR_ENDPOINT_HERE', 'User') +``` ++# [Bash](#tab/bash) ++```Bash +echo export AZURE_OPENAI_KEY="REPLACE_WITH_YOUR_KEY_VALUE_HERE" >> /etc/environment && source /etc/environment +``` ++```Bash +echo export AZURE_OPENAI_ENDPOINT="REPLACE_WITH_YOUR_ENDPOINT_HERE" >> /etc/environment && source /etc/environment +``` ++++## Clean up resources ++If you want to clean up and remove an OpenAI resource, you can delete the resource. Before deleting the resource, you must first delete any deployed models. ++- [Portal](../multi-service-resource.md?pivots=azportal#clean-up-resources) +- [Azure CLI](../multi-service-resource.md?pivots=azcli#clean-up-resources) ++## Next steps ++* Learn more about how to work with text to speech with Azure OpenAI Service in the [Azure OpenAI Service reference documentation](./reference.md#text-to-speech). +* For more examples, check out the [Azure OpenAI Samples GitHub repository](https://aka.ms/AOAICodeSamples) |
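Once the environment variables are set, the quickstart's synthesis call can be sketched as below. This is a minimal sketch that only constructs the URL and JSON body for the `audio/speech` request; the deployment name is an example, and the actual POST (described in the comment) runs only against your own resource.

```python
import json
import os

API_VERSION = "2024-02-15-preview"

def speech_request(endpoint, deployment, text, voice="alloy", model="tts-1"):
    """Return the URL and JSON body for a text to speech call."""
    url = (f"{endpoint.rstrip('/')}/openai/deployments/{deployment}"
           f"/audio/speech?api-version={API_VERSION}")
    body = json.dumps({"model": model, "input": text, "voice": voice})
    return url, body.encode("utf-8")

url, body = speech_request(
    os.environ.get("AZURE_OPENAI_ENDPOINT", "https://aoai-docs.openai.azure.com/"),
    "MyTextToSpeechDeployment",  # example deployment name
    "I'm excited to try text to speech.",
)
# To synthesize: POST `body` to `url` with the headers
#   api-key: <AZURE_OPENAI_KEY> and Content-Type: application/json,
# then write the binary response to a file such as speech.mp3.
print(url)
```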
ai-services | Whats New | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/whats-new.md | +## February 2024 ++### Assistants API public preview ++Azure OpenAI now supports the API that powers OpenAI's GPTs. Azure OpenAI Assistants (Preview) allows you to create AI assistants tailored to your needs through custom instructions and advanced tools like code interpreter and custom functions. To learn more, see: ++- [Quickstart](./assistants-quickstart.md) +- [Concepts](./concepts/assistants.md) +- [In-depth Python how-to](./how-to/assistant.md) +- [Code Interpreter](./how-to/code-interpreter.md) +- [Function calling](./how-to/assistant-functions.md) +- [Assistants model & region availability](./concepts/models.md#assistants-preview) +- [Assistants Samples](https://github.com/Azure-Samples/azureai-samples/tree/main/scenarios/Assistants) ++### OpenAI text to speech voices public preview ++Azure OpenAI Service now supports text to speech APIs with OpenAI's voices. Get AI-generated speech from the text you provide. To learn more, see the [overview guide](../speech-service/openai-voices.md) and try the [quickstart](./text-to-speech-quickstart.md). ++> [!NOTE] +> Azure AI Speech also supports OpenAI text to speech voices. To learn more, see the [OpenAI text to speech voices via Azure OpenAI Service or via Azure AI Speech](../speech-service/openai-voices.md#openai-text-to-speech-voices-via-azure-openai-service-or-via-azure-ai-speech) guide. ++### New fine-tuning capabilities and model support ++- [Continuous fine-tuning](https://aka.ms/oai/fine-tuning-continuous) +- [Fine-tuning & function calling](./how-to/fine-tuning-functions.md) +- [`gpt-35-turbo 1106` support](./concepts/models.md#fine-tuning-models) ++### Chunk size parameter for Azure OpenAI on your data ++- You can now set the [chunk size](./concepts/use-your-data.md#ingestion-parameters) parameter when your data is ingested. 
Adjusting the chunk size can enhance the model's responses by setting the maximum number of tokens for any given chunk of your data in the search index. + ## December 2023 ### Azure OpenAI on your data Try out DALL-E 3 by following a [quickstart](./dall-e-quickstart.md). ### Azure OpenAI on your data -- New [custom parameters](./concepts/use-your-data.md#custom-parameters) for determining the number of retrieved documents and strictness.+- New [custom parameters](./concepts/use-your-data.md#runtime-parameters) for determining the number of retrieved documents and strictness. - The strictness setting sets the threshold to categorize documents as relevant to your queries. - The retrieved documents setting specifies the number of top-scoring documents from your data index used to generate responses. - You can see data ingestion/upload status in the Azure OpenAI Studio. |
ai-services | Whisper Quickstart | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/whisper-quickstart.md | Title: 'Speech to text with Azure OpenAI Service' description: Use the Azure OpenAI Whisper model for speech to text. -# - Last updated : 2/1/2024+ Previously updated : 09/15/2023+ recommendations: false zone_pivot_groups: openai-whisper The file size limit for the Azure OpenAI Whisper model is 25 MB. If you need to - An Azure subscription - [Create one for free](https://azure.microsoft.com/free/cognitive-services?azure-portal=true). - Access granted to Azure OpenAI Service in the desired Azure subscription.- Currently, access to this service is granted only by application. You can apply for access to Azure OpenAI Service by completing the form at [https://aka.ms/oai/access](https://aka.ms/oai/access?azure-portal=true). - An Azure OpenAI resource created in the North Central US or West Europe regions with the `whisper` model deployed. For more information, see [Create a resource and deploy a model with Azure OpenAI](how-to/create-resource.md). +> [!NOTE] +> Currently, you must submit an application to access Azure OpenAI Service. To apply for access, complete [this form](https://aka.ms/oai/access). + ## Set up ### Retrieve key and endpoint |
ai-services | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/policy-reference.md | Title: Built-in policy definitions for Azure AI services description: Lists Azure Policy built-in policy definitions for Azure AI services. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 01/30/2024 Last updated : 02/06/2024 |
ai-services | Embedded Speech | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/embedded-speech.md | Follow these steps to install the Speech SDK for Java using Apache Maven: <dependency> <groupId>com.microsoft.cognitiveservices.speech</groupId> <artifactId>client-sdk-embedded</artifactId>- <version>1.34.1</version> + <version>1.35.0</version> </dependency> </dependencies> </project> Be sure to use the `@aar` suffix when the dependency is specified in `build.grad ``` dependencies {- implementation 'com.microsoft.cognitiveservices.speech:client-sdk-embedded:1.34.1@aar' + implementation 'com.microsoft.cognitiveservices.speech:client-sdk-embedded:1.35.0@aar' } ``` ::: zone-end |
ai-services | Language Support | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/language-support.md | More remarks for text to speech locales are included in the [voice styles and ro ### Multilingual voices -Multilingual voices can support more languages. This expansion enhances your ability to express content in various languages, to overcome language barriers and foster a more inclusive global communication environment. Use this table to understand all supported speaking languages for each multilingual neural voice. If the voice doesn't speak the language of the input text, the Speech service doesn't output synthesized audio. The table is sorted by the number of supported languages in descending order. The primary locale for each voice is the prefix in its name, such as the voice `en-US-AndrewMultilingualNeural`, its primary locale is `en-US`. +Multilingual voices can support more languages. This expansion enhances your ability to express content in various languages, to overcome language barriers and foster a more inclusive global communication environment. ++Use this table to understand all supported speaking languages for each multilingual neural voice. If the voice doesn't speak the language of the input text, the Speech service doesn't output synthesized audio. The table is sorted by the number of supported languages in descending order. The primary locale for each voice is indicated by the prefix in its name; for example, the voice `en-US-AndrewMultilingualNeural` has the primary locale `en-US`. [!INCLUDE [Language support include](includes/language-support/multilingual-voices.md)] Use the following table to determine supported styles and roles for each neural [!INCLUDE [Language support include](includes/language-support/voice-styles-and-roles.md)] + ### Viseme This table lists all the locales supported for [Viseme](speech-synthesis-markup-structure.md#viseme-element). 
For more information about Viseme, see [Get facial position with viseme](how-to-speech-synthesis-viseme.md) and [Viseme element](speech-synthesis-markup-structure.md#viseme-element). With the cross-lingual feature, you can transfer your custom neural voice model # [Pronunciation assessment](#tab/pronunciation-assessment) -The table in this section summarizes the 25 locales supported for pronunciation assessment, and each language is available on all [Speech to text regions](regions.md#speech-service). Latest update extends support from English to 24 more languages and quality enhancements to existing features, including accuracy, fluency and miscue assessment. You should specify the language that you're learning or practicing improving pronunciation. The default language is set as `en-US`. If you know your target learning language, [set the locale](how-to-pronunciation-assessment.md#get-pronunciation-assessment-results) accordingly. For example, if you're learning British English, you should specify the language as `en-GB`. If you're teaching a broader language, such as Spanish, and are uncertain about which locale to select, you can run various accent models (`es-ES`, `es-MX`) to determine the one that achieves the highest score to suit your specific scenario. +The table in this section summarizes the 26 locales supported for pronunciation assessment, and each language is available on all [Speech to text regions](regions.md#speech-service). Latest update extends support from English to 25 more languages and quality enhancements to existing features, including accuracy, fluency and miscue assessment. You should specify the language that you're learning or practicing improving pronunciation. The default language is set as `en-US`. If you know your target learning language, [set the locale](how-to-pronunciation-assessment.md#get-pronunciation-assessment-results) accordingly. For example, if you're learning British English, you should specify the language as `en-GB`. 
If you're teaching a broader language, such as Spanish, and are uncertain about which locale to select, you can run various accent models (`es-ES`, `es-MX`) to determine the one that achieves the highest score to suit your specific scenario. [!INCLUDE [Language support include](includes/language-support/pronunciation-assessment.md)] |
ai-services | Openai Voices | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/openai-voices.md | + + Title: What are OpenAI text to speech voices? ++description: Learn about OpenAI text to speech voices that you can use with speech synthesis. +++++ Last updated : 2/1/2024+++#customer intent: As a user who implements text to speech, I want to understand the options and differences between available OpenAI text to speech voices in Azure AI services. +++# What are OpenAI text to speech voices? ++Like Azure AI Speech voices, OpenAI text to speech voices deliver high-quality speech synthesis to convert written text into natural sounding spoken audio. This unlocks a wide range of possibilities for immersive and interactive user experiences. ++OpenAI text to speech voices are available via two model variants: `Neural` and `NeuralHD`. ++- `Neural`: Optimized for real-time use cases with the lowest latency, but lower quality than `NeuralHD`. +- `NeuralHD`: Optimized for quality. ++## Available text to speech voices in Azure AI services ++You might ask: If I want to use an OpenAI text to speech voice, should I use it via the Azure OpenAI Service or via Azure AI Speech? What are the scenarios that guide me to use one or the other? ++Each voice model offers distinct features and capabilities, allowing you to choose the one that best suits your specific needs. You want to understand the options and differences between available text to speech voices in Azure AI services. ++You can choose from the following text to speech voices in Azure AI ++- OpenAI text to speech voices in [Azure OpenAI Service](../openai/reference.md#text-to-speech). Available in the following regions: North Central US and Sweden Central. +- OpenAI text to speech voices in [Azure AI Speech](./language-support.md?tabs=tts#multilingual-voices). Available in the following regions: North Central US and Sweden Central. 
+- Azure AI Speech service [text to speech voices](./language-support.md?tabs=tts#prebuilt-neural-voices). Available in dozens of regions. See the [region list](regions.md#speech-service). ++## OpenAI text to speech voices via Azure OpenAI Service or via Azure AI Speech? ++If you want to use OpenAI text to speech voices, you can choose whether to use them via [Azure OpenAI](../openai/text-to-speech-quickstart.md) or via [Azure AI Speech](./get-started-text-to-speech.md#openai-text-to-speech-voices-in-azure-ai-speech). In either case, the speech synthesis result is the same. ++Here's a comparison of features between OpenAI text to speech voices in Azure OpenAI Service and OpenAI text to speech voices in Azure AI Speech. ++| Feature | Azure OpenAI Service (OpenAI voices) | Azure AI Speech (OpenAI voices) | Azure AI Speech voices | +|---|---|---|---| +| **Region** | North Central US, Sweden Central | North Central US, Sweden Central | Available in dozens of regions. See the [region list](regions.md#speech-service).| +| **Voice variety** | 6 | 6 | More than 400 | +| **Multilingual voice number** | 6 | 6 | 14 | +| **Max multilingual language coverage** | 57 | 57 | 77 | +| **Speech Synthesis Markup Language (SSML) support** | Not supported | Support for [a subset of SSML elements](#ssml-elements-supported-by-openai-text-to-speech-voices-in-azure-ai-speech). | Support for the [full set of SSML](speech-synthesis-markup-structure.md) in Azure AI Speech. | +| **Development options** | REST API | Speech SDK, Speech CLI, REST API | Speech SDK, Speech CLI, REST API | +| **Deployment option** | Cloud only | Cloud only | Cloud, embedded, hybrid, and containers. 
| +| **Real-time or batch synthesis** | Real-time | Real-time and batch synthesis | Real-time and batch synthesis | +| **Latency** | greater than 500 ms | greater than 500 ms | less than 300 ms | +| **Sample rate of synthesized audio** | 24 kHz | 8, 16, 24, and 48 kHz | 8, 16, 24, and 48 kHz | +| **Speech output audio format** | opus, mp3, aac, flac | opus, mp3, pcm, truesilk | opus, mp3, pcm, truesilk | ++## SSML elements supported by OpenAI text to speech voices in Azure AI Speech ++The [Speech Synthesis Markup Language (SSML)](./speech-synthesis-markup.md) used with input text determines the structure, content, and other characteristics of the text to speech output. For example, you can use SSML to define a paragraph, a sentence, a break or a pause, or silence. You can wrap text with event tags such as bookmark or viseme that can be processed later by your application. ++The following table outlines the Speech Synthesis Markup Language (SSML) elements supported by OpenAI text to speech voices in Azure AI Speech. Only a subset of SSML tags is supported for OpenAI voices. See [SSML document structure and events](speech-synthesis-markup-structure.md) for more information. ++| SSML element name | Description | +| --- | --- | +| `<speak>` | Encloses the entire content to be spoken. It's the root element of an SSML document. | +| `<voice>` | Specifies a voice used for text to speech output. | +| `<sub>` | Indicates that the alias attribute's text value should be pronounced instead of the element's enclosed text. | +| `<say-as>` | Indicates the content type, such as number or date, of the element's text.<br/><br/>All of the `interpret-as` property values are supported for this element except `interpret-as="name"`. For example, `<say-as interpret-as="date" format="dmy">10-12-2016</say-as>` is supported, but `<say-as interpret-as="name">ED</say-as>` isn't supported. For more information, see [pronunciation with SSML](./speech-synthesis-markup-pronunciation.md#say-as-element). 
| +| `<s>` | Denotes sentences. | +| `<lang>` | Indicates the default locale for the language that you want the neural voice to speak. | +| `<break>` | Use to override the default behavior of breaks or pauses between words. | ++## Next steps ++- [Try the text to speech quickstart in Azure AI Speech](get-started-text-to-speech.md#openai-text-to-speech-voices-in-azure-ai-speech) +- [Try the text to speech via Azure OpenAI Service](../openai/text-to-speech-quickstart.md) |
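For reference, the supported subset of elements can be combined in a single request. The following is a minimal, illustrative SSML sketch; the voice name is a placeholder — substitute an OpenAI voice available in your region:

```xml
<!-- Illustrative sketch only: uses just the SSML subset supported by OpenAI voices
     in Azure AI Speech. The voice name below is a placeholder. -->
<speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis" xml:lang="en-US">
  <voice name="YourOpenAIVoiceName">
    <s>The meeting is on <say-as interpret-as="date" format="dmy">10-12-2024</say-as>.</s>
    <break time="500ms"/>
    <s>Contact the <sub alias="World Wide Web Consortium">W3C</sub> for details.</s>
  </voice>
</speak>
```

Elements outside the subset, such as `<prosody>` or `<emphasis>`, aren't supported for these voices.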
ai-services | Releasenotes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/releasenotes.md | -> [!IMPORTANT] -> You'll be charged for custom speech model training if the base model was created on October 1, 2023 and later. You are not charged for training if the base model was created prior to October 2023. For more information, see [Azure AI Speech pricing](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/) and the [Charge for adaptation section in the speech to text 3.2 migration guide](./migrate-v3-1-to-v3-2.md#charge-for-adaptation). - ## Recent highlights * Azure AI Speech now supports OpenAI's Whisper model via the batch transcription API. To learn more, check out the [Create a batch transcription](./batch-transcription-create.md#use-a-whisper-model) guide. |
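As a sketch of how the Whisper support surfaces in the batch transcription API, the snippet below builds a request body for creating a transcription job; the endpoint, model ID, and audio URL are placeholders, and the exact field set should be checked against the batch transcription reference linked above.

```python
import json

def build_whisper_transcription_request(content_urls, whisper_model_url, locale="en-US"):
    """Build a JSON body for creating a batch transcription job.

    The 'model' field points the job at a Whisper base model instead of the
    default model. All URLs here are placeholders for illustration.
    """
    return {
        "displayName": "Whisper batch transcription",
        "locale": locale,
        "contentUrls": list(content_urls),
        "model": {"self": whisper_model_url},
    }

body = build_whisper_transcription_request(
    ["https://example.com/audio/sample.wav"],
    "https://example.region.api.cognitive.microsoft.com/speechtotext/models/base/your-whisper-model-id",
)
print(json.dumps(body, indent=2))
```

In a real client, this body would be POSTed to the transcriptions endpoint of your Speech resource with your subscription key in the request headers.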
ai-services | Speech Synthesis Markup Voice | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/speech-synthesis-markup-voice.md | The following SSML example uses the `<mstts:ttsembedding>` element with a voice ## Adjust speaking languages -By default, multilingual voices can autodetect the language of the input text and speak in the language in primary locale of the input text without using SSML. However, you can still use the `<lang xml:lang>` element to adjust the speaking language for these voices to set preferred accent with non-primary locales such as British accent (`en-GB`) for English. You can adjust the speaking language at both the sentence level and word level. For information about the supported languages for multilingual voice, see [Multilingual voices with the lang element](#multilingual-voices-with-the-lang-element) for a table showing the `<lang>` syntax and attribute definitions. +By default, multilingual voices can autodetect the language of the input text and speak in the language of the default locale of the input text without using SSML. Optionally, you can use the `<lang xml:lang>` element to adjust the speaking language for these voices to set the preferred accent such as `en-GB` for British English. You can adjust the speaking language at both the sentence level and word level. For information about the supported languages for multilingual voice, see [Multilingual voices with the lang element](#multilingual-voices-with-the-lang-element) for a table showing the `<lang>` syntax and attribute definitions. The following table describes the usage of the `<lang xml:lang>` element's attributes: The following table describes the usage of the `<lang xml:lang>` element's attri Use the [multilingual voices section](language-support.md?tabs=tts#multilingual-voices) to determine which speaking languages the Speech service supports for each neural voice, as demonstrated in the following example table. 
If the voice doesn't speak the language of the input text, the Speech service doesn't output synthesized audio. -| Voice | Supported language number | Supported languages | Supported locales | +| Voice | Supported language number | Supported languages | Auto-detected default locale for each language | |-||--|-|-|`en-US-AndrewMultilingualNeural`<sup>1,2</sup> (Male)<br/>`en-US-AvaMultilingualNeural`<sup>1,2</sup> (Female)<br/>`en-US-BrianMultilingualNeural`<sup>1,2</sup> (Male)<br/>`en-US-EmmaMultilingualNeural`<sup>1,2</sup> (Female)| 76 | Afrikaans, Albanian, Amharic, Arabic, Armenian, Azerbaijani, Bahasa Indonesian, Bangla, Basque, Bengali, Bosnian, Bulgarian, Burmese, Catalan, Chinese Cantonese, Chinese Mandarin, Croatian, Czech, Danish, Dutch, English, Estonian, Filipino, Finnish, French, Galician, Georgian, German, Greek, Hebrew, Hindi, Hungarian, Icelandic, Irish, Italian, Japanese, Javanese, Kannada, Kazakh, Khmer, Korean, Lao, Latvian, Lithuanian, Macedonian, Malay, Malayalam, Maltese, Mongolian, Nepali, Norwegian Bokmål, Pashto, Persian, Polish, Portuguese, Romanian, Russian, Serbian, Sinhala, Slovak, Slovene, Somali, Spanish, Sundanese, Swahili, Swedish,Tamil, Telugu, Thai, Turkish, Ukrainian, Urdu, Uzbek, Vietnamese, Welsh, Zulu |`af-ZA`, `am-ET`, `ar-SA`, `az-AZ`, `bg-BG`, `bn-BD`, `bn-IN`, `bs-BA`, `ca-ES`, `cs-CZ`, `cy-GB`, `da-DK`, `de-DE`, `el-GR`, `en-US`, `es-ES`, `et-EE`, `eu-ES`, `fa-IR`, `fi-FI`, `fil-PH`, `fr-FR`, `ga-IE`, `gl-ES`, `he-IL`, `hi-IN`, `hr-HR`, `hu-HU`, `hy-AM`, `id-ID`, `is-IS`, `it-IT`, `ja-JP`, `jv-ID`, `ka-GE`, `kk-KZ`, `km-KH`, `kn-IN`, `ko-KR`, `lo-LA`, `lt-LT`, `lv-LV`, `mk-MK`, `ml-IN`, `mn-MN`, `ms-MY`, `mt-MT`, `my-MM`, `nb-NO`, `ne-NP`, `nl-NL`, `pl-PL`, `ps-AF`, `pt-BR`, `ro-RO`, `ru-RU`, `si-LK`, `sk-SK`, `sl-SI`, `so-SO`, `sq-AL`, `sr-RS`, `su-ID`, `sv-SE`, `sw-KE`, `ta-IN`, `te-IN`, `th-TH`, `tr-TR`, `uk-UA`, `ur-PK`, `uz-UZ`, `vi-VN`, `zh-CN`, `zh-HK`, `zu-ZA`.| +|`en-US-AndrewMultilingualNeural`<sup>1,2</sup> 
(Male)<br/>`en-US-AvaMultilingualNeural`<sup>1,2</sup> (Female)<br/>`en-US-BrianMultilingualNeural`<sup>1,2</sup> (Male)<br/>`en-US-EmmaMultilingualNeural`<sup>1,2</sup> (Female)| 77 | Afrikaans, Albanian, Amharic, Arabic, Armenian, Azerbaijani, Bahasa Indonesian, Bangla, Basque, Bengali, Bosnian, Bulgarian, Burmese, Catalan, Chinese Cantonese, Chinese Mandarin, Chinese Taiwanese, Croatian, Czech, Danish, Dutch, English, Estonian, Filipino, Finnish, French, Galician, Georgian, German, Greek, Hebrew, Hindi, Hungarian, Icelandic, Irish, Italian, Japanese, Javanese, Kannada, Kazakh, Khmer, Korean, Lao, Latvian, Lithuanian, Macedonian, Malay, Malayalam, Maltese, Mongolian, Nepali, Norwegian Bokmål, Pashto, Persian, Polish, Portuguese, Romanian, Russian, Serbian, Sinhala, Slovak, Slovene, Somali, Spanish, Sundanese, Swahili, Swedish,Tamil, Telugu, Thai, Turkish, Ukrainian, Urdu, Uzbek, Vietnamese, Welsh, Zulu |`af-ZA`, `am-ET`, `ar-EG`, `az-AZ`, `bg-BG`, `bn-BD`, `bn-IN`, `bs-BA`, `ca-ES`, `cs-CZ`, `cy-GB`, `da-DK`, `de-DE`, `el-GR`, `en-US`, `es-ES`, `et-EE`, `eu-ES`, `fa-IR`, `fi-FI`, `fil-PH`, `fr-FR`, `ga-IE`, `gl-ES`, `he-IL`, `hi-IN`, `hr-HR`, `hu-HU`, `hy-AM`, `id-ID`, `is-IS`, `it-IT`, `ja-JP`, `jv-ID`, `ka-GE`, `kk-KZ`, `km-KH`, `kn-IN`, `ko-KR`, `lo-LA`, `lt-LT`, `lv-LV`, `mk-MK`, `ml-IN`, `mn-MN`, `ms-MY`, `mt-MT`, `my-MM`, `nb-NO`, `ne-NP`, `nl-NL`, `pl-PL`, `ps-AF`, `pt-BR`, `ro-RO`, `ru-RU`, `si-LK`, `sk-SK`, `sl-SI`, `so-SO`, `sq-AL`, `sr-RS`, `su-ID`, `sv-SE`, `sw-KE`, `ta-IN`, `te-IN`, `th-TH`, `tr-TR`, `uk-UA`, `ur-PK`, `uz-UZ`, `vi-VN`, `zh-CN`, `zh-HK`, `zh-TW`, `zu-ZA`.| <sup>1</sup> The neural voice is available in public preview. Voices and styles in public preview are only available in three service [regions](regions.md): East US, West Europe, and Southeast Asia. -<sup>2</sup> Those are TTS multilingual voices in Azure AI Speech. 
By default, all multilingual voices (except `en-US-JennyMultilingualNeural`) can speak in the language in primary locale of the input text without [using SSML](speech-synthesis-markup-voice.md#adjust-speaking-languages). However, you can still use the `<lang xml:lang>` element to adjust the speaking language for these voices to set preferred accent with non-primary locales such as British accent (`en-GB`) for English. The primary locale for each voice is indicated by the prefix in its name, such as the voice `en-US-AndrewMultilingualNeural`, its primary locale is `en-US`. +<sup>2</sup> These are neural multilingual voices in Azure AI Speech. All multilingual voices (except `en-US-JennyMultilingualNeural`) can speak in the language of the default locale of the input text without [using SSML](#adjust-speaking-languages). However, you can still use the `<lang xml:lang>` element to adjust the speaking accent of each language to set a preferred accent, such as a British accent (`en-GB`) for English. The primary locale for each voice is indicated by the prefix in its name; for example, the primary locale of the voice `en-US-AndrewMultilingualNeural` is `en-US`. Check the [full list](https://speech.microsoft.com/portal/voicegallery) of supported locales through SSML. > [!NOTE] > Multilingual voices don't fully support certain SSML elements, such as `break`, `emphasis`, `silence`, and `sub`. |
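For illustration, the accent and language adjustment described in this section looks like the following SSML sketch (the voice and locales are drawn from the supported-locale table; the spoken text is illustrative):

```xml
<!-- Illustrative sketch: adjusting accent and language of a multilingual voice. -->
<speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis" xml:lang="en-US">
  <voice name="en-US-AvaMultilingualNeural">
    By default, this sentence is spoken with the en-US accent.
    <lang xml:lang="en-GB">This sentence is spoken with a British accent instead.</lang>
    <lang xml:lang="de-DE">Dieser Satz wird auf Deutsch gesprochen.</lang>
  </voice>
</speak>
```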
ai-studio | Ai Resources | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/concepts/ai-resources.md | Title: Azure AI resource concepts + Title: Azure AI hub resource concepts -description: This article introduces concepts about Azure AI resources. +description: This article introduces concepts about Azure AI hub resources. - ignite-2023 Previously updated : 12/14/2023 Last updated : 2/5/2024 -# Azure AI resources +# Azure AI hub resources [!INCLUDE [Azure AI Studio preview](../includes/preview-ai-studio.md)] -The 'Azure AI' resource is the top-level Azure resource for AI Studio and provides the working environment for a team to build and manage AI applications. In Azure, resources enable access to Azure services for individuals and teams. Resources also provide a container for billing, security configuration and monitoring. +The Azure AI hub resource is the top-level Azure resource for AI Studio and provides the working environment for a team to build and manage AI applications. In Azure, resources enable access to Azure services for individuals and teams. Resources also provide a container for billing, security configuration, and monitoring. -The Azure AI resource is used to access multiple Azure AI services with a single setup. Previously, different Azure AI services including [Azure OpenAI](../../ai-services/openai/overview.md), [Azure Machine Learning](../../machine-learning/overview-what-is-azure-machine-learning.md), [Azure Speech](../../ai-services/speech-service/overview.md), required their individual setup. +The Azure AI hub resource can be used to access [multiple Azure AI services](#azure-ai-services-api-access-keys) with a single setup. Previously, different Azure AI services, including [Azure OpenAI](../../ai-services/openai/overview.md), [Azure Machine Learning](../../machine-learning/overview-what-is-azure-machine-learning.md), and [Azure AI Speech](../../ai-services/speech-service/overview.md), each required its own setup. 
-In this article, you learn more about Azure AI resource's capabilities, and how to set up Azure AI for your organization. You can see the resources created in the [Azure portal](https://portal.azure.com/) and in [Azure AI Studio](https://ai.azure.com). +In this article, you learn more about Azure AI hub resource's capabilities, and how to set up Azure AI for your organization. You can see the resources created in the [Azure portal](https://portal.azure.com/) and in [Azure AI Studio](https://ai.azure.com). ## Collaboration environment for a team -The AI resource provides the collaboration environment for a team to build and manage AI applications, catering to two personas: +The Azure AI hub resource provides the collaboration environment for a team to build and manage AI applications, catering to two personas: -* To AI developers, the Azure AI resource provides the working environment for building AI applications granting access to various tools for AI model building. Tools can be used together, and lets you use and produce shareable components including datasets, indexes, models. An AI resource allows you to configure connections to external resources, provide compute resources used by tools and [endpoints and access keys to prebuilt AI models](#azure-ai-services-api-access-keys). When you use a project to customize AI capabilities, it's hosted by an AI resource and can access the same shared resources. -* To IT administrators, team leads and risk officers, the Azure AI resource provides a single pane of glass on projects created by a team, audit connections that are in use to external resources, and other governance controls to help meet cost and compliance requirements. Security settings are configured on the Azure AI resource, and once set up apply to all projects created under it, allowing administrators to enable developers to self-serve create projects to organize work. 
+* To AI developers, the Azure AI hub resource provides the working environment for building AI applications, granting access to various tools for AI model building. Tools can be used together and let you use and produce shareable components, including datasets, indexes, and models. An Azure AI hub resource allows you to configure connections to external resources, provide compute resources used by tools, and [endpoints and access keys to prebuilt AI models](#azure-ai-services-api-access-keys). When you use a project to customize AI capabilities, it's hosted by an Azure AI hub resource and can access the same shared resources. +* To IT administrators, team leads, and risk officers, the Azure AI hub resource provides a single pane of glass over projects created by a team, the ability to audit connections in use to external resources, and other governance controls to help meet cost and compliance requirements. Security settings are configured on the Azure AI hub resource and, once set up, apply to all projects created under it, allowing administrators to let developers self-serve project creation to organize work. ## Central setup and management concepts -Various management concepts are available on AI resource to support team leads and admins to centrally manage a team's environment. In [Azure AI studio](https://ai.azure.com/), you find these on the **Manage** page. +Various management concepts are available on Azure AI hub resources to help team leads and admins centrally manage a team's environment. -* **Security configuration** including public network access, [virtual networking](#virtual-networking), customer-managed key encryption, and privileged access to whom can create projects for customization. Security settings configured on the AI resource automatically pass down to each project. 
A managed virtual network is shared between all projects that share the same AI resource +* **Security configuration** including public network access, [virtual networking](#virtual-networking), customer-managed key encryption, and privileged access over who can create projects for customization. Security settings configured on the Azure AI hub resource automatically pass down to each project. A managed virtual network is shared between all projects that share the same Azure AI hub resource. * **Connections** are named and authenticated references to Azure and non-Azure resources like data storage providers. Use a connection as a means for making an external resource available to a group of developers without having to expose its stored credential to an individual.-* **Compute and quota allocation** is managed as shared capacity for all projects in AI studio that share the same Azure AI resource. This includes compute instance as managed cloud-based workstation for an individual. Compute instance can be used across projects by the same user. -* **AI services access keys** to endpoints for prebuilt AI models are managed on the AI resource scope. Use these endpoints to access foundation models from Azure OpenAI, Speech, Vision, and Content Safety with one [API key](#azure-ai-services-api-access-keys) -* **Policy** enforced in Azure on the Azure AI resource scope applies to all projects managed under it. -* **Dependent Azure resources** are set up once per AI resource and associated projects and used to store artifacts you generate while working in AI studio such as logs or when uploading data. See [Azure AI dependencies](#azure-ai-dependencies) for more details. +* **Compute and quota allocation** is managed as shared capacity for all projects in AI Studio that share the same Azure AI hub resource. This includes a compute instance as a managed cloud-based workstation for an individual. A compute instance can be used across projects by the same user. 
+* **AI services access keys** to endpoints for prebuilt AI models are managed on the Azure AI hub resource scope. Use these endpoints to access foundation models from Azure OpenAI, Speech, Vision, and Content Safety with one [API key](#azure-ai-services-api-access-keys) +* **Policy** enforced in Azure on the Azure AI hub resource scope applies to all projects managed under it. +* **Dependent Azure resources** are set up once per Azure AI hub resource and associated projects and used to store artifacts you generate while working in AI Studio such as logs or when uploading data. See [Azure AI dependencies](#azure-ai-dependencies) for more details. ## Organize work in projects for customization -An Azure AI resource provides the hosting environment for **projects** in AI studio. A project is an organizational container that has tools for AI customization and orchestration, lets you organize your work, save state across different tools like prompt flow, and collaborate with others. For example, you can share uploaded files and connections to data sources. +An Azure AI hub resource provides the hosting environment for [Azure AI projects](../how-to/create-projects.md) in AI Studio. A project is an organizational container that has tools for AI customization and orchestration, lets you organize your work, save state across different tools like prompt flow, and collaborate with others. For example, you can share uploaded files and connections to data sources. -Multiple projects can use an Azure AI resource, and a project can be used by multiple users. A project also helps you keep track of billing, and manage access and provides data isolation. Every project has dedicated storage containers to let you upload files and share it with only other project members when using the 'data' experiences. +Multiple projects can use an Azure AI hub resource, and a project can be used by multiple users. 
A project also helps you keep track of billing, and manage access and provides data isolation. Every project has dedicated storage containers to let you upload files and share it with only other project members when using the 'data' experiences. -Projects let you create and group reusable components that can be used across tools in AI studio: +Projects let you create and group reusable components that can be used across tools in AI Studio: | Asset | Description | | | | Projects also have specific settings that only hold for that project: | Asset | Description | | | |-| Project connections | Connections to external resources like data storage providers that only you and other project members can use. They complement shared connections on the AI resource accessible to all projects.| +| Project connections | Connections to external resources like data storage providers that only you and other project members can use. They complement shared connections on the Azure AI hub resource accessible to all projects.| | Prompt flow runtime | Prompt flow is a feature that can be used to generate, customize, or run a flow. To use prompt flow, you need to create a runtime on top of a compute instance. | > [!NOTE]-> In AI Studio you can also manage language and notification settings that apply to all Azure AI Studio projects that you can access regardless of the Azure AI resource or project. +> In AI Studio you can also manage language and notification settings that apply to all Azure AI Studio projects that you can access regardless of the Azure AI hub resource or project. ## Azure AI services API access keys -The Azure AI Resource exposes API endpoints and keys for prebuilt AI services that are created by Microsoft such as Speech services and Language service. 
Which precise services are available to you is subject to your Azure region and your chosen Azure AI services provider at the time of setup ('advanced' option): +The Azure AI hub resource exposes API endpoints and keys for prebuilt AI services that are created by Microsoft such as Azure OpenAI Service. Which precise services are available to you is subject to your Azure region and your chosen Azure AI services provider at the time of setup ('advanced' option): -* If you create an Azure AI resource using the default configuration, you'll have by default capabilities enabled for Azure OpenAI service, Speech, Vision, Content Safety. -* If you create an Azure AI resource and choose an existing Azure OpenAI resource as service provider, you'll only have capabilities for Azure OpenAI service. Use this option if you'd like to reuse existing Azure OpenAI quota and models deployments. Currently, there's no upgrade path to get Speech and Vision capabilities after deployment. +* If you create an Azure AI hub resource together with an existing Azure OpenAI Service resource, you only have capabilities for Azure OpenAI Service. Use this option if you'd like to reuse existing Azure OpenAI quota and models deployments. Currently, there's no upgrade path to get Speech and Vision capabilities after the AI hub is created. +* If you create an Azure AI hub resource together with an Azure AI services provider, you can use Azure OpenAI Service and other AI services such as Speech and Vision. Currently, this option is only available via the Azure AI CLI and SDK. -To understand the full layering of Azure AI resources and its Azure dependencies including the Azure AI services provider, and how these is represented in Azure AI Studio and in the Azure portal, see [Find Azure AI Studio resources in the Azure portal](#find-azure-ai-studio-resources-in-the-azure-portal). 
--> [!NOTE] -> This Azure AI services resource is similar but not to be confused with the standalone "Azure AI services multi-service account" resource. Their capabilities vary, and the standalone resource is not supported in Azure AI Studio. Going forward, we recommend using the Azure AI services resource that's provided with your Azure AI resource. +To understand the full layering of Azure AI hub resources and their Azure dependencies, including the Azure AI services provider, and how these are represented in Azure AI Studio and in the Azure portal, see [Find Azure AI Studio resources in the Azure portal](#find-azure-ai-studio-resources-in-the-azure-portal). With the same API key, you can access all of the following Azure AI With the same API key, you can access all of the following Azure AI | ![Speech icon](../../ai-services/media/service-icons/speech.svg) [Speech](../../ai-services/speech-service/index.yml) | Speech to text, text to speech, translation and speaker recognition | | ![Vision icon](../../ai-services/media/service-icons/vision.svg) [Vision](../../ai-services/computer-vision/index.yml) | Analyze content in images and videos | -Large language models that can be used to generate text, speech, images, and more, are hosted by the AI resource. Fine-tuned models and open models deployed from the [model catalog](../how-to/model-catalog.md) are always created in the project context for isolation. +Large language models that can be used to generate text, speech, images, and more, are hosted by the Azure AI hub resource. Fine-tuned models and open models deployed from the [model catalog](../how-to/model-catalog.md) are always created in the project context for isolation. ### Virtual networking -Azure AI resources, compute resources, and projects share the same Microsoft-managed Azure virtual network. 
After you configure the managed networking settings during the Azure AI resource creation process, all new projects created using that Azure AI resource will inherit the same virtual network settings. Therefore, any changes to the networking settings are applied to all current and new project in that Azure AI resource. By default, Azure AI resources provide public network access. +Azure AI hub resources, compute resources, and projects share the same Microsoft-managed Azure virtual network. After you configure the managed networking settings during the Azure AI hub resource creation process, all new projects created using that Azure AI hub resource will inherit the same virtual network settings. Therefore, any changes to the networking settings are applied to all current and new projects in that Azure AI hub resource. By default, Azure AI hub resources provide public network access. -To establish a private inbound connection to your Azure AI resource environment, create an Azure Private Link endpoint on the following scopes: -* The Azure AI resource +To establish a private inbound connection to your Azure AI hub resource environment, create an Azure Private Link endpoint on the following scopes: +* The Azure AI hub resource * The dependent `Azure AI services` providing resource * Any other [Azure AI dependency](#azure-ai-dependencies) such as Azure storage -While projects show up as their own tracking resources in the Azure portal, they don't require their own private link endpoints to be accessed. New projects that are created post AI resource setup, do automatically get added to the network-isolated environment. +While projects show up as their own tracking resources in the Azure portal, they don't require their own private link endpoints to be accessed. New projects created after Azure AI hub resource setup are automatically added to the network-isolated environment. 
## Connections to Azure and third-party resources Azure AI offers a set of connectors that allows you to connect to different types of data sources and other Azure tools. You can take advantage of connectors to connect with data such as indices in Azure AI Search to augment your flows. -Connections can be set up as shared with all projects in the same Azure AI resource, or created exclusively for one project. To manage project connections via Azure AI Studio, navigate to a project page, then navigate to **Settings** > **Connections**. To manage shared connections, navigate to the **Manage** page. As an administrator, you can audit both shared and project-scoped connections on an Azure AI resource level to have a single pane of glass of connectivity across projects. +Connections can be set up as shared with all projects in the same Azure AI hub resource, or created exclusively for one project. To manage project connections via Azure AI Studio, navigate to a project page, then navigate to **Settings** > **Connections**. To manage shared connections, navigate to the **Manage** page. As an administrator, you can audit both shared and project-scoped connections on an Azure AI hub resource level to have a single pane of glass of connectivity across projects. ## Azure AI dependencies -Azure AI studio layers on top of existing Azure services including Azure AI and Azure Machine Learning services. While this might not be visible on the display names in Azure portal, AI studio, or when using the SDK or CLI, some of these architectural details become apparent when you work with the Azure REST APIs, use Azure cost reporting, or use infrastructure-as-code templates such as Azure Bicep or Azure Resource Manager. From an Azure Resource Provider perspective, Azure AI studio resource types map to the following resource provider kinds: +Azure AI Studio layers on top of existing Azure services including Azure AI and Azure Machine Learning services. 
While this might not be visible on the display names in Azure portal, AI Studio, or when using the SDK or CLI, some of these architectural details become apparent when you work with the Azure REST APIs, use Azure cost reporting, or use infrastructure-as-code templates such as Azure Bicep or Azure Resource Manager. From an Azure Resource Provider perspective, Azure AI Studio resource types map to the following resource provider kinds: |Resource type|Resource provider|Kind| ||||-|Azure AI resources|Microsoft.MachineLearningServices/workspace|hub| +|Azure AI hub resources|Microsoft.MachineLearningServices/workspace|hub| |Azure AI project|Microsoft.MachineLearningServices/workspace|project| |Azure AI services|Microsoft.CognitiveServices/account|AIServices| |Azure AI OpenAI Service|Microsoft.CognitiveServices/account|OpenAI| -When you create a new Azure AI resource, a set of dependent Azure resources are required to store data that you upload or get generated when working in AI studio. If not provided by you, these resources are automatically created. +When you create a new Azure AI hub resource, a set of dependent Azure resources are required to store data that you upload or get generated when working in AI Studio. If not provided by you, these resources are automatically created. |Dependent Azure resource|Note| |||-|Azure AI services|Either Azure AI services multi-service provider, or Azure OpenAI service. Provides API endpoints and keys for prebuilt AI services.| +|Azure AI services|Either Azure AI services multi-service provider, or Azure OpenAI Service. Provides API endpoints and keys for prebuilt AI services.| |Azure Storage account|Stores artifacts for your projects like flows and evaluations. For data isolation, storage containers are prefixed using the project GUID, and conditionally secured using Azure ABAC for the project identity.| |Azure Key Vault| Stores secrets like connection strings for your resource connections. 
For data isolation, secrets can't be retrieved across projects via APIs.| |Azure Container Registry| Stores docker images created when using custom runtime for prompt flow. For data isolation, docker images are prefixed using the project GUID.| When you create a new Azure AI resource, a set of dependent Azure resources are Azure AI costs accrue by [various Azure resources](#central-setup-and-management-concepts). -In general, an Azure AI resource and project don't have a fixed monthly cost, and you're only charged for usage in terms of compute hours and tokens used. Azure Key Vault, Storage, and Application Insights charge transaction and volume-based, dependent on the amount of data stored with your Azure AI projects. +In general, an Azure AI hub resource and project don't have a fixed monthly cost, and you're only charged for usage in terms of compute hours and tokens used. Azure Key Vault, Storage, and Application Insights charge based on transactions and volume, depending on the amount of data stored with your Azure AI projects. -If you require to group costs of these different services together, we recommend creating Azure AI resources in one or more dedicated resource groups and subscriptions in your Azure environment. +If you need to group costs of these different services together, we recommend creating Azure AI hub resources in one or more dedicated resource groups and subscriptions in your Azure environment. You can use [cost management](/azure/cost-management-billing/costs/quick-acm-cost-analysis) and [Azure resource tags](/azure/azure-resource-manager/management/tag-resources) to help with a detailed resource-level cost breakdown, or run [Azure pricing calculator](https://azure.microsoft.com/pricing/calculator/) on the above listed resources to obtain a pricing estimate. For more information, see [Plan and manage costs for Azure AI services](../how-to/costs-plan-manage.md). 
You can use [cost management](/azure/cost-management-billing/costs/quick-acm-cos In the Azure portal, you can find resources that correspond to your Azure AI project in Azure AI Studio. > [!NOTE]-> This section assumes that the Azure AI resource and Azure AI project are in the same resource group. --In Azure AI Studio, go to **Build** > **Settings** to view your Azure AI project resources such as connections and API keys. There's a link to view the corresponding resources in the Azure portal and a link to your Azure AI resource in Azure AI Studio. ---In Azure AI Studio, go to **Manage** (or select the Azure AI resource link from the project settings page) to view your Azure AI resource, including projects and shared connections. There's also a link to view the corresponding resources in the Azure portal. ---After you select **View in the Azure Portal**, you see your Azure AI resource in the Azure portal. -+> This section assumes that the Azure AI hub resource and Azure AI project are in the same resource group. -Select the resource group name to see all associated resources, including the Azure AI project, storage account, and key vault. +1. In [Azure AI Studio](https://ai.azure.com), go to **Build** > **Settings** to view your Azure AI project resources such as connections and API keys. There's a link to your Azure AI hub resource in Azure AI Studio and links to view the corresponding project resources in the [Azure portal](https://portal.azure.com). + :::image type="content" source="../media/concepts/azureai-project-view-ai-studio.png" alt-text="Screenshot of the Azure AI project and related resources in the Azure AI Studio." lightbox="../media/concepts/azureai-project-view-ai-studio.png"::: -From the resource group, you can select the Azure AI project for more details. +1. Select the AI hub name to view your Azure AI hub's projects and shared connections. There's also a link to view the corresponding resources in the [Azure portal](https://portal.azure.com). 
+ :::image type="content" source="../media/concepts/azureai-resource-view-ai-studio.png" alt-text="Screenshot of the Azure AI hub resource and related resources in the Azure AI Studio." lightbox="../media/concepts/azureai-resource-view-ai-studio.png"::: -Also from the resource group, you can select the **Azure AI service** resource to see the keys and endpoints needed to authenticate your requests to Azure AI services. +1. Select **View in the Azure Portal** to see your Azure AI hub resource in the Azure portal. + :::image type="content" source="../media/concepts/ai-hub-azure-portal.png" alt-text="Screenshot of the Azure AI hub resource in the Azure portal." lightbox="../media/concepts/ai-hub-azure-portal.png"::: -You can use the same API key to access all of the supported service endpoints that are listed. + - Select the **AI Services provider** to see the keys and endpoints needed to authenticate your requests to Azure AI services such as Azure OpenAI. For more information, see [Azure AI services API access keys](#azure-ai-services-api-access-keys). + - Also from the Azure AI hub page, you can select the **Project resource group** to find your Azure AI project. ## Next steps -- [Quickstart: Generate product name ideas in the Azure AI Studio playground](../quickstarts/playground-completions.md)+- [Quickstart: Analyze images and video with GPT-4 for Vision in the playground](../quickstarts/multimodal-vision.md) - [Learn more about Azure AI Studio](../what-is-ai-studio.md) - [Learn more about Azure AI Studio projects](../how-to/create-projects.md) |
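The portal navigation above can also be done from the command line. As a hedged sketch (the resource group name is an assumption), the hub and project workspaces can be told apart by the resource provider kinds listed earlier in this article:

```azurecli-interactive
# List everything in the hub's resource group, including the dependent resources
az resource list --resource-group rg-ai-hub-prod --output table

# Show only the hub and project workspaces, distinguished by their kind
az resource list --resource-group rg-ai-hub-prod \
    --query "[?type=='Microsoft.MachineLearningServices/workspaces'].{name:name, kind:kind}" \
    --output table
```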
ai-studio | Connections | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/concepts/connections.md | -Connections in Azure AI Studio are a way to authenticate and consume both Microsoft and third-party resources within your Azure AI projects. For example, connections can be used for prompt flow, training data, and deployments. [Connections can be created](../how-to/connections-add.md) exclusively for one project or shared with all projects in the same Azure AI resource. +Connections in Azure AI Studio are a way to authenticate and consume both Microsoft and third-party resources within your Azure AI projects. For example, connections can be used for prompt flow, training data, and deployments. [Connections can be created](../how-to/connections-add.md) exclusively for one project or shared with all projects in the same Azure AI hub resource. ## Connections to Azure AI services -You can create connections to Azure AI services such as Azure AI Content Safety and Azure OpenAI. You can then use the connection in a prompt flow tool such as the LLM tool. +You can create connections to Azure AI services such as Azure OpenAI and Azure AI Content Safety. You can then use the connection in a prompt flow tool such as the LLM tool. :::image type="content" source="../media/prompt-flow/llm-tool-connection.png" alt-text="Screenshot of a connection used by the LLM tool in prompt flow." lightbox="../media/prompt-flow/llm-tool-connection.png"::: A Uniform Resource Identifier (URI) represents a storage location on your local ## Key vaults and secrets -Connections allow you to securely store credentials, authenticate access, and consume data and information. Secrets associated with connections are securely persisted in the corresponding Azure Key Vault, adhering to robust security and compliance standards. As an administrator, you can audit both shared and project-scoped connections on an Azure AI resource level (link to connection rbac). 
+Connections allow you to securely store credentials, authenticate access, and consume data and information. Secrets associated with connections are securely persisted in the corresponding Azure Key Vault, adhering to robust security and compliance standards. As an administrator, you can audit both shared and project-scoped connections on an Azure AI hub resource level (link to connection rbac). -Azure connections serve as key vault proxies, and interactions with connections are direct interactions with an Azure key vault. Azure AI Studio connections store API keys securely, as secrets, in a key vault. The key vault [Azure role-based access control (Azure RBAC)](./rbac-ai-studio.md) controls access to these connection resources. A connection references the credentials from the key vault storage location for further use. You won't need to directly deal with the credentials after they are stored in the Azure AI resource's key vault. You have the option to store the credentials in the YAML file. A CLI command or SDK can override them. We recommend that you avoid credential storage in a YAML file, because a security breach could lead to a credential leak. +Azure connections serve as key vault proxies, and interactions with connections are direct interactions with an Azure key vault. Azure AI Studio connections store API keys securely, as secrets, in a key vault. The key vault [Azure role-based access control (Azure RBAC)](./rbac-ai-studio.md) controls access to these connection resources. A connection references the credentials from the key vault storage location for further use. You won't need to directly deal with the credentials after they are stored in the Azure AI hub resource's key vault. You have the option to store the credentials in the YAML file. A CLI command or SDK can override them. We recommend that you avoid credential storage in a YAML file, because a security breach could lead to a credential leak. ## Next steps |
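The key-vault-proxy behavior described above can be audited directly against the hub's key vault, provided you have the appropriate key vault permissions. A hedged sketch with an illustrative vault and secret name:

```azurecli-interactive
# List the names of secrets backing Azure AI Studio connections (no secret values are printed)
az keyvault secret list --vault-name kv-ai-hub-prod --query "[].name" --output table

# Inspect one secret's metadata (enabled state, created/updated timestamps) without its value
az keyvault secret show --vault-name kv-ai-hub-prod --name my-connection-key --query "attributes"
```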
ai-studio | Deployments Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/concepts/deployments-overview.md | Azure AI Studio simplifies deployments. A simple select or a line of code deploy ### Azure OpenAI models -Azure OpenAI allows you to get access to the latest OpenAI models with the enterprise features from Azure. Learn more about [how to deploy OpenAI models in AI studio](../how-to/deploy-models-openai.md). +Azure OpenAI allows you to get access to the latest OpenAI models with the enterprise features from Azure. Learn more about [how to deploy OpenAI models in AI Studio](../how-to/deploy-models-openai.md). ### Open models |
ai-studio | Evaluation Improvement Strategies | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/concepts/evaluation-improvement-strategies.md | Mitigating harms presented by large language models (LLMs) such as the Azure Ope :::image type="content" source="../media/evaluations/mitigation-layers.png" alt-text="Diagram of strategy to mitigate potential harms of generative AI applications." lightbox="../media/evaluations/mitigation-layers.png"::: ## Model layer-At the model level, it's important to understand the models you use and what fine-tuning steps might have been taken by the model developers to align the model towards its intended uses and to reduce the risk of potentially harmful uses and outcomes. Azure AI studio's model catalog enables you to explore models from Azure OpenAI Service, Meta, etc., organized by collection and task. In the [model catalog](../how-to/model-catalog.md), you can explore model cards to understand model capabilities and limitations, experiment with sample inferences, and assess model performance. You can further compare multiple models side-by-side through benchmarks to select the best one for your use case. Then, you can enhance model performance by fine-tuning with your training data. +At the model level, it's important to understand the models you use and what fine-tuning steps might have been taken by the model developers to align the model towards its intended uses and to reduce the risk of potentially harmful uses and outcomes. Azure AI Studio's model catalog enables you to explore models from Azure OpenAI Service, Meta, etc., organized by collection and task. In the [model catalog](../how-to/model-catalog.md), you can explore model cards to understand model capabilities and limitations, experiment with sample inferences, and assess model performance. You can further compare multiple models side-by-side through benchmarks to select the best one for your use case. 
Then, you can enhance model performance by fine-tuning with your training data. ## Safety systems layer For most applications, it's not enough to rely on the safety fine-tuning built into the model itself. LLMs can make mistakes and are susceptible to attacks like jailbreaks. In many applications at Microsoft, we use another AI-based safety system, [Azure AI Content Safety](https://azure.microsoft.com/products/ai-services/ai-content-safety/), to provide an independent layer of protection, helping you to block the output of harmful content. |
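The independent safety layer described above can be called over REST. This sketch assumes an existing Azure AI Content Safety resource; the endpoint and key placeholders, and the API version shown, are assumptions to verify against the current Content Safety reference before use:

```bash
# Screen a piece of model output before returning it to the user (illustrative request)
curl -s -X POST \
  "https://<your-content-safety-resource>.cognitiveservices.azure.com/contentsafety/text:analyze?api-version=2023-10-01" \
  -H "Ocp-Apim-Subscription-Key: <your-key>" \
  -H "Content-Type: application/json" \
  -d '{"text": "Candidate model output to screen for harmful content"}'
```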
ai-studio | Rbac Ai Studio | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/concepts/rbac-ai-studio.md | -In this article, you learn how to manage access (authorization) to an Azure AI resource. Azure Role-based access control is used to manage access to Azure resources, such as the ability to create new resources or use existing ones. Users in your Microsoft Entra ID are assigned specific roles, which grant access to resources. Azure provides both built-in roles and the ability to create custom roles. +In this article, you learn how to manage access (authorization) to an Azure AI hub resource. Azure Role-based access control is used to manage access to Azure resources, such as the ability to create new resources or use existing ones. Users in your Microsoft Entra ID are assigned specific roles, which grant access to resources. Azure provides both built-in roles and the ability to create custom roles. > [!WARNING] > Applying some roles might limit UI functionality in Azure AI Studio for other users. For example, if a user's role does not have the ability to create a compute instance, the option to create a compute instance will not be available in studio. This behavior is expected, and prevents the user from attempting operations that would return an access denied error. -## Azure AI resource vs Azure AI project -In the Azure AI Studio, there are two levels of access: the Azure AI resource and the Azure AI project. The resource is home to the infrastructure (including virtual network setup, customer-managed keys, managed identities, and policies) as well as where you configure your Azure AI services. Azure AI resource access can allow you to modify the infrastructure, create new Azure AI resources, and create projects. Azure AI projects are a subset of the Azure AI resource that act as workspaces that allow you to build and deploy AI systems. Within a project you can develop flows, deploy models, and manage project assets. 
Project access lets you develop AI end-to-end while taking advantage of the infrastructure setup on the Azure AI resource. +## Azure AI hub resource vs Azure AI project +In the Azure AI Studio, there are two levels of access: the Azure AI hub resource and the Azure AI project. The resource is home to the infrastructure (including virtual network setup, customer-managed keys, managed identities, and policies) as well as where you configure your Azure AI services. Azure AI hub resource access can allow you to modify the infrastructure, create new Azure AI hub resources, and create projects. Azure AI projects are a subset of the Azure AI hub resource that act as workspaces that allow you to build and deploy AI systems. Within a project you can develop flows, deploy models, and manage project assets. Project access lets you develop AI end-to-end while taking advantage of the infrastructure setup on the Azure AI hub resource. -## Default roles for the Azure AI resource +## Default roles for the Azure AI hub resource -The Azure AI Studio has built-in roles that are available by default. In addition to the Reader, Contributor, and Owner roles, the Azure AI Studio has a new role called Azure AI Developer. This role can be assigned to enable users to create connections, compute, and projects, but not let them create new Azure AI resources or change permissions of the existing Azure AI resource. +The Azure AI Studio has built-in roles that are available by default. In addition to the Reader, Contributor, and Owner roles, the Azure AI Studio has a new role called Azure AI Developer. This role can be assigned to enable users to create connections, compute, and projects, but not let them create new Azure AI hub resources or change permissions of the existing Azure AI hub resource. 
-Here's a table of the built-in roles and their permissions for the Azure AI resource: +Here's a table of the built-in roles and their permissions for the Azure AI hub resource: | Role | Description | | | |-| Owner | Full access to the Azure AI resource, including the ability to manage and create new Azure AI resources and assign permissions. This role is automatically assigned to the Azure AI resource creator| -| Contributor | User has full access to the Azure AI resource, including the ability to create new Azure AI resources, but isn't able to manage Azure AI resource permissions on the existing resource. | -| Azure AI Developer | Perform all actions except create new Azure AI resources and manage the Azure AI resource permissions. For example, users can create projects, compute, and connections. Users can assign permissions within their project. Users can interact with existing AI resources such as Azure OpenAI, Azure AI Search, and Azure AI services. | -| Reader | Read only access to the Azure AI resource. This role is automatically assigned to all project members within the Azure AI resource. | +| Owner | Full access to the Azure AI hub resource, including the ability to manage and create new Azure AI hub resources and assign permissions. This role is automatically assigned to the Azure AI hub resource creator| +| Contributor | User has full access to the Azure AI hub resource, including the ability to create new Azure AI hub resources, but isn't able to manage Azure AI hub resource permissions on the existing resource. | +| Azure AI Developer | Perform all actions except create new Azure AI hub resources and manage the Azure AI hub resource permissions. For example, users can create projects, compute, and connections. Users can assign permissions within their project. Users can interact with existing Azure AI resources such as Azure OpenAI, Azure AI Search, and Azure AI services. | +| Reader | Read only access to the Azure AI hub resource. 
This role is automatically assigned to all project members within the Azure AI hub resource. | -The key difference between Contributor and Azure AI Developer is the ability to make new Azure AI resources. If you don't want users to make new Azure AI resources (due to quota, cost, or just managing how many Azure AI resources you have), assign the AI Developer role. +The key difference between Contributor and Azure AI Developer is the ability to make new Azure AI hub resources. If you don't want users to make new Azure AI hub resources (due to quota, cost, or just managing how many Azure AI hub resources you have), assign the AI Developer role. -Only the Owner and Contributor roles allow you to make an Azure AI resource. At this time, custom roles won't grant you permission to make Azure AI resources. +Only the Owner and Contributor roles allow you to make an Azure AI hub resource. At this time, custom roles won't grant you permission to make Azure AI hub resources. The full set of permissions for the new "Azure AI Developer" role are as follows: Here's a table of the built-in roles and their permissions for the Azure AI proj | Azure AI Developer | User can perform most actions, including create deployments, but can't assign permissions to project users. | | Reader | Read only access to the Azure AI project. | -When a user gets access to a project, two more roles are automatically assigned to the project user. The first role is Reader on the Azure AI resource. The second role is the Inference Deployment Operator role, which allows the user to create deployments on the resource group that the project is in. This role is composed of these two permissions: ```"Microsoft.Authorization/*/read"``` and ```"Microsoft.Resources/deployments/*"```. +When a user gets access to a project, two more roles are automatically assigned to the project user. The first role is Reader on the Azure AI hub resource. 
The second role is the Inference Deployment Operator role, which allows the user to create deployments on the resource group that the project is in. This role is composed of these two permissions: ```"Microsoft.Authorization/*/read"``` and ```"Microsoft.Resources/deployments/*"```. In order to complete end-to-end AI development and deployment, users only need these two autoassigned roles and either the Contributor or Azure AI Developer role on a *project*. Below is an example of how to set up role-based access control for your Azure AI | Persona | Role | Purpose | | | | |-| IT admin | Owner of the Azure AI resource | The IT admin can ensure the Azure AI resource is set up to their enterprise standards and assign managers the Contributor role on the resource if they want to enable managers to make new Azure AI resources or they can assign managers the Azure AI Developer role on the resource to not allow for new Azure AI resource creation. | -| Managers | Contributor or Azure AI Developer on the Azure AI resource | Managers can create projects for their team and create shared resources (ex: compute and connections) for their group at the Azure AI resource level. | +| IT admin | Owner of the Azure AI hub resource | The IT admin can ensure the Azure AI hub resource is set up to their enterprise standards and assign managers the Contributor role on the resource if they want to enable managers to make new Azure AI hub resources or they can assign managers the Azure AI Developer role on the resource to not allow for new Azure AI hub resource creation. | +| Managers | Contributor or Azure AI Developer on the Azure AI hub resource | Managers can create projects for their team and create shared resources (ex: compute and connections) for their group at the Azure AI hub resource level. | | Managers | Owner of the Azure AI Project | When managers create a project, they become the project owner. This allows them to add their team/developers to the project. 
Their team/developers can be added as Contributors or Azure AI Developers to allow them to develop in the project. | | Team members/developers | Contributor or Azure AI Developer on the Azure AI Project | Developers can build and deploy AI models within a project and create assets that enable development such as computes and connections. | -## Access to resources created outside of the Azure AI resource +## Access to resources created outside of the Azure AI hub resource -When you create an Azure AI resource, the built-in role-based access control permissions grant you access to use the resource. However, if you wish to use resources outside of what was created on your behalf, you need to ensure both: +When you create an Azure AI hub resource, the built-in role-based access control permissions grant you access to use the resource. However, if you wish to use resources outside of what was created on your behalf, you need to ensure both: - The resource you're trying to use has permissions set up to allow you to access it.-- Your Azure AI resource is allowed to access it. +- Your Azure AI hub resource is allowed to access it. -For example, if you're trying to consume a new Blob storage, you need to ensure that Azure AI resource's managed identity is added to the Blob Storage Reader role for the Blob. If you're trying to use a new Azure AI Search source, you might need to add the Azure AI resource to the Azure AI Search's role assignments. +For example, if you're trying to consume a new Blob storage, you need to ensure that Azure AI hub resource's managed identity is added to the Blob Storage Reader role for the Blob. If you're trying to use a new Azure AI Search source, you might need to add the Azure AI hub resource to the Azure AI Search's role assignments. ## Manage access with roles -If you're an owner of an Azure AI resource, you can add and remove roles for the Studio. Within the Azure AI Studio, go to **Manage** and select your Azure AI resource. 
Then select **Permissions** to add and remove users for the Azure AI resource. You can also manage permissions from the Azure portal under **Access Control (IAM)** or through the Azure CLI. For example, use the [Azure CLI](/cli/azure/) to assign the Azure AI Developer role to "joe@contoso.com" for resource group "this-rg" with the following command: +If you're an owner of an Azure AI hub resource, you can add and remove roles for the Studio. Within the Azure AI Studio, go to **Manage** and select your Azure AI hub resource. Then select **Permissions** to add and remove users for the Azure AI hub resource. You can also manage permissions from the Azure portal under **Access Control (IAM)** or through the Azure CLI. For example, use the [Azure CLI](/cli/azure/) to assign the Azure AI Developer role to "joe@contoso.com" for resource group "this-rg" with the following command: ```azurecli-interactive az role assignment create --role "Azure AI Developer" --assignee "joe@contoso.com" --resource-group this-rg az role assignment create --role "Azure AI Developer" --assignee "joe@contoso.co ## Create custom roles > [!NOTE]-> In order to make a new Azure AI resource, you need the Owner or Contributor role. At this time, a custom role, even with all actions allowed, will not enable you to make an Azure AI resource. +> In order to make a new Azure AI hub resource, you need the Owner or Contributor role. At this time, a custom role, even with all actions allowed, will not enable you to make an Azure AI hub resource. If the built-in roles are insufficient, you can create custom roles. Custom roles might have read, write, delete, and compute resource permissions in that AI Studio. You can make the role available at a specific project level, a specific resource group level, or a specific subscription level. If the built-in roles are insufficient, you can create custom roles. 
Custom role ## Next steps -- [How to create an Azure AI resource](../how-to/create-azure-ai-resource.md)+- [How to create an Azure AI hub resource](../how-to/create-azure-ai-resource.md) - [How to create an Azure AI project](../how-to/create-projects.md) - [How to create a connection in Azure AI Studio](../how-to/connections-add.md) |
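A custom role like those described above can be created with the Azure CLI. The role name, action list, and scope below are illustrative, and (per the note in the article) even a fully permissive custom role won't let you create an Azure AI hub resource:

```azurecli-interactive
# role.json -- an illustrative custom role definition:
# {
#   "Name": "AI Project Operator",
#   "Description": "Read workspaces and manage deployments in a project's resource group",
#   "Actions": [
#     "Microsoft.MachineLearningServices/workspaces/read",
#     "Microsoft.Resources/deployments/*"
#   ],
#   "AssignableScopes": ["/subscriptions/<subscription-id>"]
# }
az role definition create --role-definition role.json
```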
ai-studio | Cli Install | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/cli-install.md | You can run the Azure AI CLI in a Docker container using VS Code Dev Containers: ## Try the Azure AI CLI The AI CLI offers many capabilities, including an interactive chat experience, tools to work with prompt flows and search and speech services, and tools to manage AI services. -If you plan to use the AI CLI as part of your development, we recommend you start by running `ai init`, which guides you through setting up your AI resources and connections in your development environment. +If you plan to use the AI CLI as part of your development, we recommend you start by running `ai init`, which guides you through setting up your Azure resources and connections in your development environment. Try `ai help` to learn more about these capabilities. ### ai init -The `ai init` command allows interactive and non-interactive selection or creation of Azure AI resources. When an AI resource is selected or created, the associated resource keys and region are retrieved and automatically stored in the local AI configuration datastore. +The `ai init` command allows interactive and non-interactive selection or creation of Azure AI hub resources. When an Azure AI hub resource is selected or created, the associated resource keys and region are retrieved and automatically stored in the local AI configuration datastore. You can initialize the Azure AI CLI by running the following command: The following table describes the scenarios for each flow. | Scenario | Description | | | | | Initialize a new AI project | Choose if you don't have an existing AI project that you have been working with in the Azure AI Studio. The `ai init` command walks you through creating or attaching resources. |-| Initialize an existing AI project | Choose if you have an existing AI project you want to work with. 
The `ai init` command checks your existing linked resources, and ask you to set anything that hasn't been set before. | +| Initialize an existing AI project | Choose if you have an existing AI project you want to work with. The `ai init` command checks your existing linked resources, and asks you to set anything that hasn't been set before. | | Initialize standalone resources| Choose if you're building a simple solution connected to a single AI service, or if you want to attach more resources to your development environment | -Working with an AI project is recommended when using the Azure AI Studio and/or connecting to multiple AI services. Projects come with an AI Resource that houses related projects and shareable resources like compute and connections to services. Projects also allow you to connect code to cloud resources (storage and model deployments), save evaluation results, and host code behind online endpoints. You're prompted to create and/or attach Azure AI Services to your project. +Working with an AI project is recommended when using the Azure AI Studio and/or connecting to multiple AI services. Projects come with an Azure AI hub resource that houses related projects and shareable resources like compute and connections to services. Projects also allow you to connect code to cloud resources (storage and model deployments), save evaluation results, and host code behind online endpoints. You're prompted to create and/or attach Azure AI Services to your project. Initializing standalone resources is recommended when building simple solutions connected to a single AI service. You can also choose to initialize more standalone resources after initializing a project. The following resources can be initialized standalone, or attached to projects: -- Azure AI +- Azure AI - Azure OpenAI: Provides access to OpenAI's powerful language models. - Azure AI Search: Provides keyword, vector, and hybrid search capabilities. 
- Azure AI Speech: Provides speech recognition, synthesis, and translation. The following resources can be initialized standalone, or attached to projects: 1. Run `ai init` and choose **Initialize new AI project**. 1. Select your subscription. You might be prompted to sign in through an interactive flow.-1. Select your Azure AI Resource, or create a new one. An AI Resource can have multiple projects that can share resources. +1. Select your Azure AI hub resource, or create a new one. An Azure AI hub resource can have multiple projects that can share resources. 1. Select the name of your new project. There are some suggested names, or you can enter a custom one. Once you submit, the project might take a minute to create. 1. Select the resources you want to attach to the project. You can skip resource types you don't want to attach. 1. `ai init` checks you have the connections you need for the attached resources, and your development environment is configured with your new project. The following resources can be initialized standalone, or attached to projects: ## Project connections -When working the Azure AI CLI, you want to use your project's connections. Connections are established to attached resources and allow you to integrate services with your project. You can have project-specific connections, or connections shared at the Azure AI resource level. For more information, see [Azure AI resources](../concepts/ai-resources.md) and [connections](../concepts/connections.md). +When working with the Azure AI CLI, you want to use your project's connections. Connections are established to attached resources and allow you to integrate services with your project. You can have project-specific connections, or connections shared at the Azure AI hub resource level. For more information, see [Azure AI hub resources](../concepts/ai-resources.md) and [connections](../concepts/connections.md). 
When you run `ai init` your project connections get set in your development environment, allowing seamless integration with AI services. You can view these connections by running `ai service connection list`, and further manage these connections with `ai service connection` subcommands. ai dev new .env `ai service` helps you manage your connections to resources and services. -- `ai service resource` lets you list, create or delete AI Resources.-- `ai service project` lets you list, create, or delete AI Projects.+- `ai service resource` lets you list, create or delete Azure AI hub resources. +- `ai service project` lets you list, create, or delete Azure AI projects. - `ai service connection` lets you list, create, or delete connections. These are the connections to your attached services. ## ai flow |
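Putting the `ai` CLI commands from this article together, a typical first session might look like the following sketch (all commands appear in the article; the interactive prompts and output shapes will vary):

```azurecli-interactive
# Create or attach an Azure AI hub resource, project, and connections interactively
ai init

# Confirm which project-scoped and hub-scoped connections were configured
ai service connection list

# Export connection settings into a local .env file for your application code
ai dev new .env
```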
ai-studio | Commitment Tier | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/commitment-tier.md | -Azure AI offers commitment tier pricing, each offering a discounted rate compared to the pay-as-you-go pricing model. With commitment tier pricing, you can commit to using the Azure AI resources and features for a fixed fee, enabling you to have a predictable total cost based on the needs of your workload. +Azure AI offers commitment tier pricing, each offering a discounted rate compared to the pay-as-you-go pricing model. With commitment tier pricing, you can commit to using the Azure AI hub resources and features for a fixed fee, enabling you to have a predictable total cost based on the needs of your workload. ## Purchase a commitment plan by updating your Azure resource |
ai-studio | Configure Private Link | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/configure-private-link.md | You get several Azure AI default resources in your resource group. You need to c - Disable public network access flag of Azure AI default resources such as Storage, Key Vault, Container Registry. Azure AI services and Azure AI Search should be public. - Establish private endpoint connection to Azure AI default resource. Note that you need to have blob and file private endpoints (PE) for the default storage account.-- [Managed identity configurations](#managed-identity-configuration) to allow Azure AI resources access your storage account if it's private.+- [Managed identity configurations](#managed-identity-configuration) to allow Azure AI hub resources to access your storage account if it's private. ## Prerequisites You get several Azure AI default resources in your resource group. You need to c ## Create an Azure AI that uses a private endpoint -Use one of the following methods to create an Azure AI resource with a private endpoint. Each of these methods __requires an existing virtual network__: +Use one of the following methods to create an Azure AI hub resource with a private endpoint. Each of these methods __requires an existing virtual network__: # [Azure CLI](#tab/cli) -Create your Azure AI resource with the Azure AI CLI. Run the following command and follow the prompts. For more information, see [Get started with Azure AI CLI](cli-install.md). +Create your Azure AI hub resource with the Azure AI CLI. Run the following command and follow the prompts. For more information, see [Get started with Azure AI CLI](cli-install.md). 
```azurecli-interactive ai init See [this documentation](../../machine-learning/how-to-custom-dns.md#find-the-ip - [Create a project](create-projects.md) - [Learn more about Azure AI Studio](../what-is-ai-studio.md)-- [Learn more about Azure AI resources](../concepts/ai-resources.md)+- [Learn more about Azure AI hub resources](../concepts/ai-resources.md) - [Troubleshoot secure connectivity to a project](troubleshoot-secure-connection-project.md) |
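As a hedged sketch of the private-endpoint step above (all resource names are hypothetical placeholders; verify the flags against your Azure CLI version), a blob private endpoint for the default storage account might look like:

```azurecli-interactive
# Hypothetical names: replace my-rg, mystorageacct, my-vnet, and my-subnet with your own
storage_id=$(az storage account show --name mystorageacct --resource-group my-rg --query id -o tsv)

# Create a private endpoint for the blob sub-resource; repeat with --group-id file for file shares
az network private-endpoint create \
    --name mystorageacct-blob-pe \
    --resource-group my-rg \
    --vnet-name my-vnet \
    --subnet my-subnet \
    --private-connection-resource-id "$storage_id" \
    --group-id blob \
    --connection-name mystorageacct-blob-connection
```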
ai-studio | Connections Add | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/connections-add.md | -Connections are a way to authenticate and consume both Microsoft and third-party resources within your Azure AI projects. For example, connections can be used for prompt flow, training data, and deployments. [Connections can be created](../how-to/connections-add.md) exclusively for one project or shared with all projects in the same Azure AI resource. +Connections are a way to authenticate and consume both Microsoft and third-party resources within your Azure AI projects. For example, connections can be used for prompt flow, training data, and deployments. [Connections can be created](../how-to/connections-add.md) exclusively for one project or shared with all projects in the same Azure AI hub resource. ## Connection types Here's a table of the available connection types in Azure AI Studio with descrip ### Connection details -When you [create a new connection](#create-a-new-connection), you enter the following information for the service connection type you selected. You can create a connection that's only available for the current project or available for all projects associated with the Azure AI resource. +When you [create a new connection](#create-a-new-connection), you enter the following information for the service connection type you selected. You can create a connection that's only available for the current project or available for all projects associated with the Azure AI hub resource. > [!NOTE]-> When you create a connection from the **Manage** page, the connection is always created at the Azure AI resource level and shared accross all associated projects. +> When you create a connection from the **Manage** page, the connection is always created at the Azure AI hub resource level and shared across all associated projects. # [Azure AI Search](#tab/azure-ai-search) |
ai-studio | Costs Plan Manage | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/costs-plan-manage.md | As you add new resources to your project, return to this calculator and add the ### Costs that typically accrue with Azure AI and Azure AI Studio -When you create resources for an Azure AI resource, resources for other Azure services are also created. They are: +When you create resources for an Azure AI hub resource, resources for other Azure services are also created. They are: | Service pricing page | Description with example use cases | | | | -| [Azure AI services](https://azure.microsoft.com/pricing/details/cognitive-services/) | You pay to use services such as Azure OpenAI, Speech, Content Safety, Vision, Document Intelligence, and Language. Costs vary for each service and for some features within each service. | -| [Azure AI Search](https://azure.microsoft.com/pricing/details/search/) | An example use case is to store data in a vector search index. | -| [Azure Machine Learning](https://azure.microsoft.com/pricing/details/machine-learning/) | Compute instances are needed to run Visual Studio Code (Web) and prompt flow via Azure AI Studio.<br/><br/>When you create a compute instance, the virtual machine (VM) stays on so it's available for your work.<br/><br/>Enable idle shutdown to save on cost when the VM is idle for a specified time period.<br/><br/>Or set up a schedule to automatically start and stop the compute instance to save cost when you aren't planning to use it. | +| [Azure AI services](https://azure.microsoft.com/pricing/details/cognitive-services/) | You pay to use services such as Azure OpenAI, Speech, Content Safety, Vision, Document Intelligence, and Language. Costs vary for each service and for some features within each service. 
For more information about provisioning of Azure AI services, see [Azure AI hub resources](../concepts/ai-resources.md#azure-ai-services-api-access-keys).| +| [Azure AI Search](https://azure.microsoft.com/pricing/details/search/) | An example use case is to store data in a [vector search index](./index-add.md). | +| [Azure Machine Learning](https://azure.microsoft.com/pricing/details/machine-learning/) | Compute instances are needed to run [Visual Studio Code (Web or Desktop)](./develop-in-vscode.md) and [prompt flow](./prompt-flow.md) via Azure AI Studio.<br/><br/>When you create a compute instance, the virtual machine (VM) stays on so it's available for your work.<br/><br/>Enable idle shutdown to save on cost when the VM is idle for a specified time period.<br/><br/>Or set up a schedule to automatically start and stop the compute instance to save cost when you aren't planning to use it. | | [Azure Virtual Machine](https://azure.microsoft.com/pricing/details/virtual-machines/) | Azure Virtual Machines gives you the flexibility of virtualization for a wide range of computing solutions with support for Linux, Windows Server, SQL Server, Oracle, IBM, SAP, and more. | | [Azure Container Registry Basic account](https://azure.microsoft.com/pricing/details/container-registry) | Provides storage of private Docker container images, enabling fast, scalable retrieval, and network-close deployment of container workloads on Azure. |-| [Azure Blob Storage](https://azure.microsoft.com/pricing/details/storage/blobs/) | Can be used to store Azure AI project files. | +| [Azure Blob Storage](https://azure.microsoft.com/pricing/details/storage/blobs/) | Can be used to store [Azure AI project](./create-projects.md) files. | | [Key Vault](https://azure.microsoft.com/pricing/details/key-vault/) | A key vault for storing secrets. 
| | [Azure Private Link](https://azure.microsoft.com/pricing/details/private-link/) | Azure Private Link enables you to access Azure PaaS Services (for example, Azure Storage and SQL Database) over a private endpoint in your virtual network. | ### Costs might accrue before resource deletion -Before you delete an Azure AI resource in the Azure portal or with Azure CLI, the following sub resources are common costs that accumulate even when you aren't actively working in the workspace. If you're planning on returning to your Azure AI resource at a later time, these resources might continue to accrue costs: +Before you delete an Azure AI hub resource in the Azure portal or with Azure CLI, the following subresources commonly accrue costs even when you aren't actively working in the workspace. If you're planning on returning to your Azure AI hub resource at a later time, these resources might continue to accrue costs: - Azure AI Search (for the data) - Virtual machines - Load Balancer Compute instances also incur P10 disk costs even in stopped state. This is becau ### Costs might accrue after resource deletion -After you delete an Azure AI resource in the Azure portal or with Azure CLI, the following resources continue to exist. They continue to accrue costs until you delete them. +After you delete an Azure AI hub resource in the Azure portal or with Azure CLI, the following resources continue to exist. They continue to accrue costs until you delete them. - Azure Container Registry - Azure Blob Storage - Key Vault-- Application Insights (if you enabled it for your Azure AI resource)+- Application Insights (if you enabled it for your Azure AI hub resource) ## Monitor costs -As you use Azure AI Studio with Azure AI resources, you incur costs. Azure resource usage unit costs vary by time intervals (seconds, minutes, hours, and days) or by unit usage (bytes, megabytes, and so on). 
You can see the incurred costs in [cost analysis](../../cost-management-billing/costs/quick-acm-cost-analysis.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn). +As you use Azure AI Studio with Azure AI hub resources, you incur costs. Azure resource usage unit costs vary by time intervals (seconds, minutes, hours, and days) or by unit usage (bytes, megabytes, and so on). You can see the incurred costs in [cost analysis](../../cost-management-billing/costs/quick-acm-cost-analysis.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn). -When you use cost analysis, you view Azure AI resource costs in graphs and tables for different time intervals. Some examples are by day, current and prior month, and year. You also view costs against budgets and forecasted costs. Switching to longer views over time can help you identify spending trends. And you see where overspending might have occurred. If you've created budgets, you can also easily see where they're exceeded. +When you use cost analysis, you view Azure AI hub resource costs in graphs and tables for different time intervals. Some examples are by day, current and prior month, and year. You also view costs against budgets and forecasted costs. Switching to longer views over time can help you identify spending trends. And you see where overspending might have occurred. If you've created budgets, you can also easily see where they're exceeded. ### Monitor Azure AI Studio project costs You can get to cost analysis from the [Azure portal](https://portal.azure.com). You can also get to cost analysis from the [Azure AI Studio portal](https://ai.azure.com). > [!IMPORTANT]-> Your Azure AI project costs are only a subset of your overall application or solution costs. You need to monitor costs for all Azure resources used in your application or solution. See [Azure AI resources](../concepts/ai-resources.md) for more information. 
+> Your Azure AI project costs are only a subset of your overall application or solution costs. You need to monitor costs for all Azure resources used in your application or solution. See [Azure AI hub resources](../concepts/ai-resources.md) for more information. For the examples in this section, assume that all Azure AI Studio resources are in the same resource group. But you can have resources in different resource groups. For example, your Azure AI Search resource might be in a different resource group than your Azure AI Studio project. Here's an example of how to monitor costs for an Azure AI Studio project. The co :::image type="content" source="../media/cost-management/project-costs/costs-per-project-resource-details.png" alt-text="Screenshot of the Azure portal cost analysis with Azure AI project expanded." lightbox="../media/cost-management/project-costs/costs-per-project-resource-details.png"::: -1. Expand **contoso_ai_resource** to see the costs for services underlying the [Azure AI](../concepts/ai-resources.md#azure-ai-resources) resource. You can also apply a filter to focus on other costs in your resource group. +1. Expand **contoso_ai_resource** to see the costs for services underlying the [Azure AI](../concepts/ai-resources.md#azure-ai-hub-resources) resource. You can also apply a filter to focus on other costs in your resource group. You can also view resource group costs directly from the Azure portal. To do so: 1. Sign in to [Azure portal](https://portal.azure.com). |
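In addition to cost analysis in the portal, accrued usage can be inspected from the Azure CLI. This is a sketch only: the `az consumption` command group is in preview, and its availability and output shape vary by subscription type:

```azurecli-interactive
# List the first few usage records for the current billing period
az consumption usage list --top 10 --output table
```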
ai-studio | Create Azure Ai Resource | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/create-azure-ai-resource.md | Title: How to create and manage an Azure AI resource + Title: How to create and manage an Azure AI hub resource -description: This article describes how to create and manage an Azure AI resource +description: This article describes how to create and manage an Azure AI hub resource - ignite-2023 Previously updated : 11/15/2023 Last updated : 2/5/2024 -# How to create and manage an Azure AI resource +# How to create and manage an Azure AI hub resource [!INCLUDE [Azure AI Studio preview](../includes/preview-ai-studio.md)] -As an administrator, you can create and manage Azure AI resources. Azure AI resources provide a hosting environment for the projects of a team, and help you as an IT admin centrally set up security settings and govern usage and spend. You can create and manage an Azure AI resource from the Azure portal or from the Azure AI Studio. +As an administrator, you can create and manage Azure AI hub resources. Azure AI hub resources provide a hosting environment for the projects of a team, and help you as an IT admin centrally set up security settings and govern usage and spend. You can create and manage an Azure AI hub resource from the Azure portal or from the Azure AI Studio. -In this article, you learn how to create and manage an Azure AI resource in Azure AI Studio (for getting started) and from the Azure portal (for advanced security setup). +In this article, you learn how to create and manage an Azure AI hub resource in Azure AI Studio (for getting started) and from the Azure portal (for advanced security setup). -## Create an Azure AI resource in AI Studio for getting started -To create a new Azure AI resource, you need either the Owner or Contributor role on the resource group or on an existing Azure AI resource. 
If you are unable to create an Azure AI resource due to permissions, reach out to your administrator. If your organization is using [Azure Policy](../../governance/policy/overview.md), don't create the resource in AI Studio. Create the Azure AI resource [in the Azure Portal](#create-a-secure-azure-ai-resource-in-the-azure-portal) instead. +## Create an Azure AI hub resource in AI Studio -Follow these steps to create a new Azure AI resource in AI Studio. +To create a new Azure AI hub resource, you need either the Owner or Contributor role on the resource group or on an existing Azure AI hub resource. If you are unable to create an Azure AI hub resource due to permissions, reach out to your administrator. If your organization is using [Azure Policy](../../governance/policy/overview.md), don't create the resource in AI Studio. Create the Azure AI hub resource [in the Azure portal](#create-a-secure-azure-ai-hub-resource-in-the-azure-portal) instead. -1. From Azure AI Studio, navigate to `manage` and select `New Azure AI resource`. +Follow these steps to create a new Azure AI hub resource in AI Studio. -1. Fill in **Subscription**, **Resource group**, and **Location** for your new Azure AI resource. +1. Go to the **Manage** page in [Azure AI Studio](https://ai.azure.com). +1. Select **+ New AI hub**. - :::image type="content" source="../media/how-to/resource-create-advanced.png" alt-text="Screenshot of the Create an Azure AI resource wizard with the option to set basic information." lightbox="../media/how-to/resource-create-advanced.png"::: +1. Enter your AI hub name, subscription, resource group, and location details. -1. Optionally, choose an existing Azure AI services provider. By default a new provider is created. New Azure AI services include multiple API endpoints for Speech, Content Safety and Azure OpenAI. You can also bring an existing Azure OpenAI resource. +1. 
In the **Azure OpenAI** dropdown, you can select an existing Azure OpenAI resource to bring all your deployments into AI Studio. If you do not bring one, we will create one for you. -1. Optionally, connect an existing Azure AI Search instance to share search indices with all projects in this Azure AI resource. An Azure AI Search instance isn't created for you if you don't provide one. + :::image type="content" source="../media/how-to/resource-create-advanced.png" alt-text="Screenshot of the Create an Azure AI hub resource wizard with the option to set basic information." lightbox="../media/how-to/resource-create-advanced.png"::: -## Create a secure Azure AI resource in the Azure portal +1. Optionally, connect an existing Azure AI Search instance to share search indices with all projects in this Azure AI hub resource. An Azure AI Search instance isn't created for you if you don't provide one. +1. Select **Next**. +1. On the **Review and finish** page, you see the **AI Services** provider for you to access the Azure AI services such as Azure OpenAI. -If your organization is using [Azure Policy](../../governance/policy/overview.md), setup a resource that meets your organization's requirements instead of using AI Studio for resource creation. + :::image type="content" source="../media/how-to/resource-create-studio-review.png" alt-text="Screenshot of the review and finish page for creating an AI hub." lightbox="../media/how-to/resource-create-studio-review.png"::: ++1. Select **Create**. ++When the AI hub is created, you can see it on the **Manage** page in AI Studio. You can see that initially no projects are created in the AI hub. You can [create a new project](./create-projects.md). +++## Create a secure Azure AI hub resource in the Azure portal ++If your organization is using [Azure Policy](../../governance/policy/overview.md), set up an Azure AI hub resource that meets your organization's requirements instead of using AI Studio for resource creation. 1. 
From the Azure portal, search for `Azure AI Studio` and create a new resource by selecting **+ New Azure AI**-1. Fill in **Subscription**, **Resource group**, and **Region**. **Name** your new Azure AI resource. - - For advanced settings, select **Next: Resources** to specify resources, networking, encryption, identity, and tags. - - Your subscription must have access to Azure AI to create this resource. +1. Enter your AI hub name, subscription, resource group, and location details. +1. For advanced settings, select **Next: Resources** to specify resources, networking, encryption, identity, and tags. - :::image type="content" source="../media/how-to/resource-create-basics.png" alt-text="Screenshot of the option to set Azure AI resource basic information." lightbox="../media/how-to/resource-create-basics.png"::: + :::image type="content" source="../media/how-to/resource-create-basics.png" alt-text="Screenshot of the option to set Azure AI hub resource basic information." lightbox="../media/how-to/resource-create-basics.png"::: -1. Select an existing **Azure AI services** or create a new one. New Azure AI services include multiple API endpoints for Speech, Content Safety and Azure OpenAI. You can also bring an existing Azure OpenAI resource. Optionally, choose an existing **Storage account**, **Key vault**, **Container Registry**, and **Application insights** to host artifacts generated when you use AI Studio. +1. Select an existing **Azure AI services** resource or create a new one. New Azure AI services include multiple API endpoints for Speech, Content Safety and Azure OpenAI. You can also bring an existing Azure OpenAI resource. Optionally, choose an existing **Storage account**, **Key vault**, **Container Registry**, and **Application insights** to host artifacts generated when you use AI Studio. 
- :::image type="content" source="../media/how-to/resource-create-resources.png" alt-text="Screenshot of the Create an Azure AI resource with the option to set resource information." lightbox="../media/how-to/resource-create-resources.png"::: + :::image type="content" source="../media/how-to/resource-create-resources.png" alt-text="Screenshot of the Create an Azure AI hub resource with the option to set resource information." lightbox="../media/how-to/resource-create-resources.png"::: 1. Set up Network isolation. Read more on [network isolation](configure-managed-network.md). - :::image type="content" source="../media/how-to/resource-create-networking.png" alt-text="Screenshot of the Create an Azure AI resource with the option to set network isolation information." lightbox="../media/how-to/resource-create-networking.png"::: + :::image type="content" source="../media/how-to/resource-create-networking.png" alt-text="Screenshot of the Create an Azure AI hub resource with the option to set network isolation information." lightbox="../media/how-to/resource-create-networking.png"::: 1. Set up data encryption. You can either use **Microsoft-managed keys** or enable **Customer-managed keys**. - :::image type="content" source="../media/how-to/resource-create-encryption.png" alt-text="Screenshot of the Create an Azure AI resource with the option to select your encryption type." lightbox="../media/how-to/resource-create-encryption.png"::: + :::image type="content" source="../media/how-to/resource-create-encryption.png" alt-text="Screenshot of the Create an Azure AI hub resource with the option to select your encryption type." lightbox="../media/how-to/resource-create-encryption.png"::: 1. By default, **System assigned identity** is enabled, but you can switch to **User assigned identity** if existing storage, key vault, and container registry are selected in Resources. 
- :::image type="content" source="../media/how-to/resource-create-identity.png" alt-text="Screenshot of the Create an Azure AI resource with the option to select a managed identity." lightbox="../media/how-to/resource-create-identity.png"::: + :::image type="content" source="../media/how-to/resource-create-identity.png" alt-text="Screenshot of the Create an Azure AI hub resource with the option to select a managed identity." lightbox="../media/how-to/resource-create-identity.png"::: >[!Note]- >If you select **User assigned identity**, your identity needs to have the `Cognitive Services Contributor` role in order to successfully create a new Azure AI resource. + >If you select **User assigned identity**, your identity needs to have the `Cognitive Services Contributor` role in order to successfully create a new Azure AI hub resource. 1. Add tags. - :::image type="content" source="../media/how-to/resource-create-tags.png" alt-text="Screenshot of the Create an Azure AI resource with the option to add tags." lightbox="../media/how-to/resource-create-tags.png"::: + :::image type="content" source="../media/how-to/resource-create-tags.png" alt-text="Screenshot of the Create an Azure AI hub resource with the option to add tags." lightbox="../media/how-to/resource-create-tags.png"::: 1. Select **Review + create** -## Manage your Azure AI resource from the Azure portal +## Manage your Azure AI hub resource from the Azure portal -### Azure AI resource keys -View your keys and endpoints for your Azure AI resource from the overview page within the Azure portal. +### Azure AI hub resource keys +View your keys and endpoints for your Azure AI hub resource from the overview page within the Azure portal. ### Manage access control -Manage role assignments from **Access control (IAM)** within the Azure portal. Learn more about Azure AI resource [role-based access control](../concepts/rbac-ai-studio.md). +Manage role assignments from **Access control (IAM)** within the Azure portal. 
Learn more about Azure AI hub resource [role-based access control](../concepts/rbac-ai-studio.md). To grant users permissions: -1. Select **+ Add** to add users to your Azure AI resource +1. Select **+ Add** to add users to your Azure AI hub resource. 1. Select the **Role** you want to assign. - :::image type="content" source="../media/how-to/resource-rbac-role.png" alt-text="Screenshot of the page to add a role within the Azure AI resource Azure portal view." lightbox="../media/how-to/resource-rbac-role.png"::: + :::image type="content" source="../media/how-to/resource-rbac-role.png" alt-text="Screenshot of the page to add a role within the Azure AI hub resource Azure portal view." lightbox="../media/how-to/resource-rbac-role.png"::: 1. Select the **Members** you want to give the role to. - :::image type="content" source="../media/how-to/resource-rbac-members.png" alt-text="Screenshot of the add members page within the Azure AI resource Azure portal view." lightbox="../media/how-to/resource-rbac-members.png"::: + :::image type="content" source="../media/how-to/resource-rbac-members.png" alt-text="Screenshot of the add members page within the Azure AI hub resource Azure portal view." lightbox="../media/how-to/resource-rbac-members.png"::: 1. **Review + assign**. It can take up to an hour for permissions to be applied to users. ### Networking-Azure AI resource networking settings can be set during resource creation or changed in the Networking tab in the Azure portal view. Creating a new Azure AI resource invokes a Managed Virtual Network. 
This streamlines and automates your network isolation configuration with a built-in Managed Virtual Network. The Managed Virtual Network settings are applied to all projects created within an Azure AI hub resource. -At Azure AI resource creation, select between the networking isolation modes: Public, Private with Internet Outbound, and Private with Approved Outbound. To secure your resource, select either Private with Internet Outbound or Private with Approved Outbound for your networking needs. For the private isolation modes, a private endpoint should be created for inbound access. Read more information on Network Isolation and Managed Virtual Network Isolation [here](../../machine-learning/how-to-managed-network.md). To create a secure Azure AI resource, follow the tutorial [here](../../machine-learning/tutorial-create-secure-workspace.md). +At Azure AI hub resource creation, select between the networking isolation modes: **Public**, **Private with Internet Outbound**, and **Private with Approved Outbound**. To secure your resource, select either **Private with Internet Outbound** or **Private with Approved Outbound** for your networking needs. For the private isolation modes, a private endpoint should be created for inbound access. Read more information on Network Isolation and Managed Virtual Network Isolation [here](../../machine-learning/how-to-managed-network.md). To create a secure Azure AI hub resource, follow the tutorial [here](../../machine-learning/tutorial-create-secure-workspace.md). -At Azure AI resource creation in the Azure portal, creation of associated Azure AI services, Storage account, Key vault, Application insights, and Container registry is given. +When you create an Azure AI hub resource in the Azure portal, associated Azure AI services, Storage account, Key vault, Application insights, and Container registry resources are also created. 
These resources are found on the Resources tab during creation. -To connect to Azure AI services (Azure OpenAI, Azure AI Search, and Azure AI Content Safety) or storage accounts in Azure AI Studio, create a private endpoint in your virtual network. Ensure the PNA flag is disabled when creating the private endpoint connection. For more about Azure AI service connections, follow documentation [here](../../ai-services/cognitive-services-virtual-networks.md). You can optionally bring your own (BYO) search, but this requires a private endpoint connection from your virtual network. +To connect to Azure AI services (Azure OpenAI, Azure AI Search, and Azure AI Content Safety) or storage accounts in Azure AI Studio, create a private endpoint in your virtual network. Ensure the public network access (PNA) flag is disabled when creating the private endpoint connection. For more about Azure AI services connections, follow documentation [here](../../ai-services/cognitive-services-virtual-networks.md). You can optionally bring your own (BYO) search, but this requires a private endpoint connection from your virtual network. ### Encryption-Projects that use the same Azure AI resource, share their encryption configuration. Encryption mode can be set only at the time of Azure AI resource creation between Microsoft-managed keys and Customer-managed keys. +Projects that use the same Azure AI hub resource share their encryption configuration. Encryption mode can be set only at the time of Azure AI hub resource creation between Microsoft-managed keys and Customer-managed keys. -From the Azure portal view, navigate to the encryption tab, to find the encryption settings for your AI resource. -For Azure AI resources that use CMK encryption mode, you can update the encryption key to a new key version. This update operation is constrained to keys and key versions within the same Key Vault instance as the original key. 
+From the Azure portal view, navigate to the **Encryption** tab to find the encryption settings for your Azure AI hub resource. +For Azure AI hub resources that use CMK encryption mode, you can update the encryption key to a new key version. This update operation is constrained to keys and key versions within the same Key Vault instance as the original key. -## Manage your Azure AI resource from the Manage tab within the AI Studio +## Manage your Azure AI hub resource from the Manage tab within the AI Studio ### Getting started with the AI Studio -When you enter your AI Studio, under **Manage**, you have the options to create a new Azure AI resource, manage an existing Azure AI resource, or view your Quota. +On the **Manage** page in [Azure AI Studio](https://ai.azure.com), you have the options to create a new Azure AI hub resource, manage an existing Azure AI hub resource, or view your quota. :::image type="content" source="../media/how-to/resource-manage-studio.png" alt-text="Screenshot of the Manage page of the Azure AI Studio." lightbox="../media/how-to/resource-manage-studio.png"::: -### Managing an Azure AI resource +### Managing an Azure AI hub resource When you manage a resource, you see an Overview page that lists **Projects**, **Description**, **Resource Configuration**, **Connections**, and **Permissions**. You also see pages to further manage **Permissions**, **Compute instances**, **Connections**, **Policies**, and **Billing**. -You can view all Projects that use this Azure AI resource. Be linked to the Azure portal to manage the Resource Configuration. Manage who has access to this Azure AI resource. View all of the connections within the resource. Manage who has access to this Azure AI resource. +You can view all Projects that use this Azure AI hub resource. Be linked to the Azure portal to manage the Resource Configuration. Manage who has access to this Azure AI hub resource. View all of the connections within the resource. 
Manage who has access to this Azure AI hub resource. :::image type="content" source="../media/how-to/resource-manage-details.png" alt-text="Screenshot of the Details page of the Azure AI Studio showing an overview of the resource." lightbox="../media/how-to/resource-manage-details.png"::: ### Permissions-Within Permissions you can view who has access to the Azure AI resource and also manage permissions. Learn more about [permissions](../concepts/rbac-ai-studio.md). +Within Permissions, you can view who has access to the Azure AI hub resource and also manage permissions. Learn more about [permissions](../concepts/rbac-ai-studio.md). To add members: 1. Select **+ Add member**-1. Enter the member's name in **Add member** and assign a **Role**. For most users, we recommend the AI Developer role. This permission applies to the entire Azure AI resource. If you wish to only grant access to a specific Project, manage permissions in the [Project](create-projects.md) +1. Enter the member's name in **Add member** and assign a **Role**. For most users, we recommend the AI Developer role. This permission applies to the entire Azure AI hub resource. If you wish to grant access only to a specific Project, manage permissions in the [Project](create-projects.md). ### Compute instances-View and manage computes for your Azure AI resource. Create computes, delete computes, and review all compute resources you have in one place. +View and manage computes for your Azure AI hub resource. Create computes, delete computes, and review all compute resources you have in one place. ### Connections-From the Connections page, you can view all Connections in your Azure AI resource by their Name, Authentication method, Category type, if the connection is shared to all projects in the resource or specifically to a Project, Target, Owner, and Provisioning state. 
+From the Connections page, you can view all Connections in your Azure AI hub resource by their Name, Authentication method, Category type, whether the connection is shared to all projects in the resource or specifically to a Project, Target, Owner, and Provisioning state. You can also add a connection through **+ Connection**. -Learn more on how to [create and manage Connections](connections-add.md). Connections created in the Azure AI resource Manage page are automatically shared across all projects. If you want to make a project specific connection, make that within the Project. +Learn more about how to [create and manage Connections](connections-add.md). Connections created in the Azure AI hub resource Manage page are automatically shared across all projects. If you want to make a project-specific connection, create it within the Project. ### Policies-View and configure policies for your Azure AI resource. See all the policies you have in one place. Policies are applied across all Projects. +View and configure policies for your Azure AI hub resource. See all the policies you have in one place. Policies are applied across all Projects. ### Billing-Here you're linked to the Azure portal to review the cost analysis information for your Azure AI resource. +Here you're linked to the Azure portal to review the cost analysis information for your Azure AI hub resource. ## Next steps - [Create a project](create-projects.md) - [Learn more about Azure AI Studio](../what-is-ai-studio.md)-- [Learn more about Azure AI resources](../concepts/ai-resources.md)+- [Learn more about Azure AI hub resources](../concepts/ai-resources.md) |
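The encryption change described above (CMK chosen only at hub creation, with public network access disabled for private endpoints) can be sketched as the resource body such a creation request would carry. This is a minimal illustration, assuming the `Microsoft.MachineLearningServices/workspaces` ARM schema; the `kind` value and exact property names should be verified against the ARM reference before use.

```python
# Sketch: ARM-style request body for creating an Azure AI hub resource with
# customer-managed key (CMK) encryption. Property names follow the
# Microsoft.MachineLearningServices/workspaces schema; treat the exact names
# and the "Hub" kind as assumptions to verify against the ARM reference.

def cmk_hub_body(location, key_vault_arm_id, key_identifier):
    """Build the resource body; encryption mode can only be set at creation time."""
    return {
        "location": location,
        "kind": "Hub",  # assumed kind value for an AI hub resource
        "properties": {
            "encryption": {
                "status": "Enabled",
                "keyVaultProperties": {
                    "keyVaultArmId": key_vault_arm_id,
                    # Key updates are constrained to versions in this same vault
                    "keyIdentifier": key_identifier,
                },
            },
            # PNA flag disabled, as required for private endpoint connections
            "publicNetworkAccess": "Disabled",
        },
    }

body = cmk_hub_body(
    "eastus",
    "/subscriptions/xxx/resourceGroups/rg/providers/Microsoft.KeyVault/vaults/kv",
    "https://kv.vault.azure.net/keys/my-key/0000000000000000",
)
```

Because the encryption block is immutable after creation except for the key version, rotating to a new version means replacing only `keyIdentifier` with a new version URI from the same vault.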
ai-studio | Create Projects | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/create-projects.md | -Projects are hosted by an Azure AI resource that provides enterprise-grade security and a collaborative environment. For more information about the Azure AI projects and resources model, see [Azure AI resources](../concepts/ai-resources.md). +Projects are hosted by an Azure AI hub resource that provides enterprise-grade security and a collaborative environment. For more information about the Azure AI projects and resources model, see [Azure AI hub resources](../concepts/ai-resources.md). ++## Create a project You can create a project in Azure AI Studio in more than one way. The most direct way is from the **Build** tab. 1. Select the **Build** tab at the top of the page. You can create a project in Azure AI Studio in more than one way. The most direc :::image type="content" source="../media/how-to/projects-create-new.png" alt-text="Screenshot of the Build tab of the Azure AI Studio with the option to create a new project visible." lightbox="../media/how-to/projects-create-new.png"::: 1. Enter a name for the project.-1. Select an Azure AI resource from the dropdown to host your project. If you don't have access to an Azure AI resource yet, select **Create a new resource**. +1. Select an Azure AI hub resource from the dropdown to host your project. If you don't have access to an Azure AI hub resource yet, select **Create a new resource**. :::image type="content" source="../media/how-to/projects-create-details.png" alt-text="Screenshot of the project details page within the create project dialog." lightbox="../media/how-to/projects-create-details.png"::: > [!NOTE]- > To create an Azure AI resource, you must have **Owner** or **Contributor** permissions on the selected resource group. It's recommended to share an Azure AI resource with your team. 
This lets you share configurations like data connections with all projects, and centrally manage security settings and spend. + > To create an Azure AI hub resource, you must have **Owner** or **Contributor** permissions on the selected resource group. It's recommended to share an Azure AI hub resource with your team. This lets you share configurations like data connections with all projects, and centrally manage security settings and spend. -1. If you're creating a new Azure AI resource, enter a name. +1. If you're creating a new Azure AI hub resource, enter a name. :::image type="content" source="../media/how-to/projects-create-resource.png" alt-text="Screenshot of the create resource page within the create project dialog." lightbox="../media/how-to/projects-create-resource.png"::: You can create a project in Azure AI Studio in more than one way. The most direc 1. Leave the **Resource group** as the default to create a new resource group. Alternatively, you can select an existing resource group from the dropdown. > [!TIP]- > Especially for getting started it's recommended to create a new resource group for your project. This allows you to easily manage the project and all of its resources together. When you create a project, several resources are created in the resource group, including an Azure AI resource, a container registry, and a storage account. + > Especially for getting started it's recommended to create a new resource group for your project. This allows you to easily manage the project and all of its resources together. When you create a project, several resources are created in the resource group, including an Azure AI hub resource, a container registry, and a storage account. -1. Enter the **Location** for the Azure AI resource and then select **Next**. The location is the region where the Azure AI resource is hosted. The location of the Azure AI resource is also the location of the project. Azure AI services availability differs per region. 
For example, certain models might not be available in certain regions. -1. Review the project details and then select **Create a project**. +1. Enter the **Location** for the Azure AI hub resource and then select **Next**. The location is the region where the Azure AI hub resource is hosted. The location of the Azure AI hub resource is also the location of the project. Azure AI services availability differs per region. For example, certain models might not be available in certain regions. +1. On the **Review and finish** page, you see the **AI Services** provider for you to access the Azure AI services such as Azure OpenAI. :::image type="content" source="../media/how-to/projects-create-review-finish.png" alt-text="Screenshot of the review and finish page within the create project dialog." lightbox="../media/how-to/projects-create-review-finish.png"::: -Once a project is created, you can access the **Tools**, **Components**, and **Settings** assets in the left navigation panel. Tools and assets listed under each of those subheadings can vary depending on the type of project you've selected. For example, if you've selected a project that uses Azure OpenAI, you see the **Playground** navigation option under **Tools**. +1. Review the project details and then select **Create a project**. ++Once a project is created, you can access the **Tools**, **Components**, and **Settings** assets in the left navigation panel. For a project that uses an Azure AI hub with support for Azure OpenAI, you see the **Playground** navigation option under **Tools**. ## Project details -In the project details page (select **Build** > **Settings**), you can find information about the project, such as the project name, description, and the Azure AI resource that hosts the project. You can also find the project ID, which is used to identify the project in the Azure AI Studio API. 
+In the project details page (select **Build** > **Settings**), you can find information about the project, such as the project name, description, and the Azure AI hub resource that hosts the project. You can also find the project ID, which is used to identify the project in the Azure AI Studio API. -- Project name: The name of the project corresponds to the selected project in the left panel. -- Azure AI resource: The Azure AI resource that hosts the project. -- Location: The location of the Azure AI resource that hosts the project. For supported locations, see [Azure AI Studio regions](../reference/region-support.md).-- Subscription: The subscription that hosts the Azure AI resource that hosts the project.-- Resource group: The resource group that hosts the Azure AI resource that hosts the project.+- Name: The name of the project corresponds to the selected project in the left panel. +- AI hub: The Azure AI hub resource that hosts the project. +- Location: The location of the Azure AI hub resource that hosts the project. For supported locations, see [Azure AI Studio regions](../reference/region-support.md). +- Subscription: The subscription that hosts the Azure AI hub resource that hosts the project. +- Resource group: The resource group that hosts the Azure AI hub resource that hosts the project. - Permissions: The users that have access to the project. For more information, see [Role-based access control in Azure AI Studio](../concepts/rbac-ai-studio.md). -Select the Azure AI resource, subscription, or resource group to navigate to the corresponding resource in the Azure portal. +Select **View in the Azure portal** to navigate to the project resources in the Azure portal. 
## Next steps -- [QuickStart: Moderate text and images with content safety in Azure AI Studio](../quickstarts/content-safety.md)+- [Deploy a web app for chat on your data](../tutorials/deploy-chat-web-app.md) - [Learn more about Azure AI Studio](../what-is-ai-studio.md)-- [Learn more about Azure AI resources](../concepts/ai-resources.md)+- [Learn more about Azure AI hub resources](../concepts/ai-resources.md) |
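The create-project flow above (name, hosting hub, shared location) can be illustrated as the minimal body a programmatic project creation would need. This is a sketch under assumptions: projects are modeled here as workspaces of kind `Project` linked to a hub via a `hubResourceId` property, which should be confirmed against the ARM reference; the names and IDs are placeholders.

```python
# Sketch: minimal ARM-style body for an AI Studio project hosted by an
# existing Azure AI hub resource. The "Project" kind and "hubResourceId"
# property are assumptions based on projects being sub-resources of a hub.

def project_body(name, location, hub_resource_id):
    # The project inherits security settings and shared connections from the
    # hub, so only a small set of properties is needed here. The location
    # must match the hub's region, since the hub's location is the project's.
    return {
        "location": location,
        "kind": "Project",
        "properties": {
            "friendlyName": name,
            "hubResourceId": hub_resource_id,
        },
    }

proj = project_body(
    "my-first-project",
    "eastus",
    "/subscriptions/xxx/resourceGroups/rg/providers/"
    "Microsoft.MachineLearningServices/workspaces/my-hub",
)
```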
ai-studio | Data Image Add | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/data-image-add.md | This guide is scoped to the Azure AI Studio playground, but you can also add ima From the Azure AI Studio playground, you can choose how to add your image data for GPT-4 Turbo with Vision: * [Upload image files and metadata](?tabs=upload-image-files-and-metadata): You can upload image files and metadata in the playground. This option is useful if you have a small number of image files.-* [Azure AI Search](?tabs=azure-ai-search): If you have an existing [Azure AI search](/azure/search/search-what-is-azure-search) index, you can use it as a data source. +* [Azure AI Search](?tabs=azure-ai-search): If you have an existing [Azure AI Search](/azure/search/search-what-is-azure-search) index, you can use it as a data source. * [Azure Blob Storage](?tabs=azure-blob-storage): The Azure Blob storage option is especially useful if you have a large number of image files and don't want to manually upload each one. Each option uses an Azure AI Search index to do image-to-image search and retrieve the top search results for your input prompt image. Each option uses an Azure AI Search index to do image-to-image search and retrie # [Azure AI Search](#tab/azure-ai-search) -If you have an existing [Azure AI search](/azure/search/search-what-is-azure-search) index, you can use it as a data source. +If you have an existing [Azure AI Search](/azure/search/search-what-is-azure-search) index, you can use it as a data source. If you don't already have a search index created for your images: - You can create one using the [AI Search vector search repository on GitHub](https://github.com/Azure/cognitive-search-vector-pr), which provides you with scripts to create an index with your image files. If you don't already have a search index created for your images: 1. 
Enter your data source details: - :::image type="content" source="../media/data-add/use-your-image-data/add-image-data-ai-search.png" alt-text="A screenshot showing the Azure AI search index selection." lightbox="../media/data-add/use-your-image-data/add-image-data-ai-search.png"::: + :::image type="content" source="../media/data-add/use-your-image-data/add-image-data-ai-search.png" alt-text="A screenshot showing the Azure AI Search index selection." lightbox="../media/data-add/use-your-image-data/add-image-data-ai-search.png"::: - **Subscription**: Select the Azure subscription that contains the Azure OpenAI resource you want to use. - **Azure AI Search service**: Select your Azure AI Search service resource that has an image search index. If you don't already have a search index created for your images: 1. Review the details you entered. - :::image type="content" source="../media/data-add/use-your-image-data/add-your-data-ai-search-review-finish.png" alt-text="Screenshot of the review and finish page for adding data via Azure AI search." lightbox="../media/data-add/use-your-image-data/add-your-data-ai-search-review-finish.png"::: + :::image type="content" source="../media/data-add/use-your-image-data/add-your-data-ai-search-review-finish.png" alt-text="Screenshot of the review and finish page for adding data via Azure AI Search." lightbox="../media/data-add/use-your-image-data/add-your-data-ai-search-review-finish.png"::: 1. Select **Save and close**. After you have a blob storage populated with image files and at least one metada > > When adding data to the selected storage account for the first time in Azure AI Studio, you might be prompted to turn on [cross-origin resource sharing (CORS)](/rest/api/storageservices/cross-origin-resource-sharing--cors--support-for-the-azure-storage-services). Azure AI Studio and Azure OpenAI need to access your Azure Blob storage account. 
- :::image type="content" source="../media/data-add/use-your-image-data/add-image-data-blob.png" alt-text="A screenshot showing the Azure storage account and Azure AI search index selection." lightbox="../media/data-add/use-your-image-data/add-image-data-blob.png"::: + :::image type="content" source="../media/data-add/use-your-image-data/add-image-data-blob.png" alt-text="A screenshot showing the Azure storage account and Azure AI Search index selection." lightbox="../media/data-add/use-your-image-data/add-image-data-blob.png"::: - **Subscription**: Select the Azure subscription that contains the Azure OpenAI resource you want to use. - **Storage resource** and **Storage container**: Select the Azure Blob storage resource where the image files and metadata are already stored. |
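Each of the data options above ultimately serves image-to-image search through an Azure AI Search index. As a sketch of what such a retrieval call looks like, the following builds (without sending) a vector search request body. The `vectorQueries` shape follows recent stable versions of the Azure AI Search REST API, but the index name and the `imageVector`/`title`/`imageUrl` fields are hypothetical placeholders for your own index schema.

```python
import json

# Sketch: the kind of request an image-to-image search against an Azure AI
# Search index might use to retrieve the top results for an input prompt
# image. Field names are hypothetical; verify the API version and your
# index's vector field against the Azure AI Search REST reference.

def image_search_request(endpoint, index, embedding, k=5):
    url = (f"{endpoint}/indexes/{index}/docs/search"
           "?api-version=2023-11-01")  # assumed API version
    body = {
        "count": True,
        "select": "title,imageUrl",    # hypothetical metadata fields
        "vectorQueries": [{
            "kind": "vector",
            "vector": embedding,       # embedding of the input prompt image
            "fields": "imageVector",   # hypothetical vector field name
            "k": k,                    # number of top results to retrieve
        }],
    }
    return url, json.dumps(body)

url, payload = image_search_request("https://my-search.search.windows.net",
                                    "my-image-index", [0.1, 0.2, 0.3])
```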
ai-studio | Deploy Models Llama | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/deploy-models-llama.md | The following is an example response: ## Deploy Llama 2 models to real-time endpoints -Llama 2 models can be deployed to real-time endpoints in AI studio. When deployed to real-time endpoints, you can select all the details about on the infrastructure running the model including the virtual machines used to run it and the number of instances to handle the load you're expecting. Models deployed in this modality consume quota from your subscription. All the models in the Llama family can be deployed to real-time endpoints. +Llama 2 models can be deployed to real-time endpoints in AI Studio. When deployed to real-time endpoints, you can select all the details about the infrastructure running the model, including the virtual machines used to run it and the number of instances to handle the load you're expecting. Models deployed in this modality consume quota from your subscription. All the models in the Llama family can be deployed to real-time endpoints. ### Create a new deployment |
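Once a Llama 2 real-time deployment like the one described above exists, clients score it over HTTPS. As a sketch, the following builds (without sending) such a scoring request: the `azureml-model-deployment` routing header is standard for Azure Machine Learning online endpoints, but the scoring URL, deployment name, and exact input schema here are placeholders to verify against your deployment's consume page.

```python
import json
import urllib.request

# Sketch: building (not sending) a scoring request for a Llama 2 chat model
# deployed to a real-time endpoint. URL, key, and deployment name are
# placeholders; check your deployment's sample code for the exact schema.

def build_scoring_request(scoring_uri, api_key, deployment, messages):
    body = json.dumps({"input_data": {
        "input_string": messages,  # chat turns for the model (assumed schema)
        "parameters": {"max_new_tokens": 256, "temperature": 0.7},
    }}).encode("utf-8")
    return urllib.request.Request(
        scoring_uri,
        data=body,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
            # Routes the call to one specific deployment behind the endpoint
            "azureml-model-deployment": deployment,
        },
    )

req = build_scoring_request(
    "https://my-endpoint.eastus2.inference.ml.azure.com/score",
    "<key>", "llama-2-7b-chat-1",
    [{"role": "user", "content": "What is a real-time endpoint?"}],
)
```

Sending it would be `urllib.request.urlopen(req)`; because the deployment consumes quota from your subscription, instance size and count (not tokens) drive the cost.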
ai-studio | Develop In Vscode | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/develop-in-vscode.md | This table summarizes the folder structure: | `shared` | Use for working with a project's shared files and assets such as prompt flows.<br/><br/>For example, `shared\Users\{user-name}\promptflow` is where you find the project's prompt flows. | > [!IMPORTANT]-> It's recommended that you work within this project directory. Files, folders, and repos you include in your project directory persist on your host machine (your compute instance). Files stored in the code and data folders will persist even when the compute instance is stopped or restarted, but will be lost if the compute is deleted. However, the shared files are saved in your Azure AI resource's storage account, and therefore aren't lost if the compute instance is deleted. +> It's recommended that you work within this project directory. Files, folders, and repos you include in your project directory persist on your host machine (your compute instance). Files stored in the code and data folders will persist even when the compute instance is stopped or restarted, but will be lost if the compute is deleted. However, the shared files are saved in your Azure AI hub resource's storage account, and therefore aren't lost if the compute instance is deleted. ### The Azure AI SDK For cross-language compatibility and seamless integration of Azure AI capabiliti ## Next steps - [Get started with the Azure AI CLI](cli-install.md)-- [Quickstart: Generate product name ideas in the Azure AI Studio playground](../quickstarts/playground-completions.md)+- [Quickstart: Analyze images and video with GPT-4 for Vision in the playground](../quickstarts/multimodal-vision.md) |
ai-studio | Flow Deploy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/flow-deploy.md | This step allows you to configure the basic settings of the deployment. |Virtual machine| The VM size to use for the deployment.| |Instance count| The number of instances to use for the deployment. Specify the value based on the workload you expect. For high availability, we recommend that you set the value to at least `3`. We reserve an extra 20% for performing upgrades.| |Inference data collection| If you enable this, the flow inputs and outputs are auto collected in an Azure Machine Learning data asset, and can be used for later monitoring.|-|Application Insights diagnostics| If you enable this, system metrics during inference time (such as token count, flow latency, flow request, and etc.) will be collected into Azure AI resource default Application Insights.| +|Application Insights diagnostics| If you enable this, system metrics during inference time (such as token count, flow latency, and flow request) will be collected into the Azure AI hub resource's default Application Insights.| After you finish the basic settings, you can directly **Review + Create** to finish the creation, or you can select **Next** to configure advanced settings. The authentication method for the endpoint. Key-based authentication provides a #### Identity type -The endpoint needs to access Azure resources such as the Azure Container Registry or your Azure AI resource connections for inferencing. You can allow the endpoint permission to access Azure resources via giving permission to its managed identity. +The endpoint needs to access Azure resources such as the Azure Container Registry or your Azure AI hub resource connections for inferencing. You can grant the endpoint access to Azure resources by giving permissions to its managed identity. 
System-assigned identity will be autocreated after your endpoint is created, while user-assigned identity is created by the user. [Learn more about managed identities.](../../active-directory/managed-identities-azure-resources/overview.md) You notice there's an option whether *Enforce access to connection secrets (prev ##### User-assigned -When you create the deployment, Azure tries to pull the user container image from the Azure AI resource Azure Container Registry (ACR) and mounts the user model and code artifacts into the user container from the Azure AI resource storage account. +When you create the deployment, Azure tries to pull the user container image from the Azure AI hub resource Azure Container Registry (ACR) and mounts the user model and code artifacts into the user container from the Azure AI hub resource storage account. If you created the associated endpoint with **User Assigned Identity**, the user-assigned identity must be granted the following roles before the deployment creation; otherwise, the deployment creation fails. You can grant all permissions in the Azure portal UI by following these steps. 1. Select **Azure Machine Learning Workspace Connection Secrets Reader**, go to **Next**. > [!NOTE]- > The **Azure Machine Learning Workspace Connection Secrets Reader** role is a built-in role which has permission to get Azure AI resource connections. + > The **Azure Machine Learning Workspace Connection Secrets Reader** role is a built-in role that has permission to get Azure AI hub resource connections. > > If you want to use a customized role, make sure the customized role has the permission of `Microsoft.MachineLearningServices/workspaces/connections/listsecrets/action`. Learn more about [how to create custom roles](../../role-based-access-control/custom-roles-portal.md#step-3-basics). You can grant all permissions in the Azure portal UI by following these steps. For **user-assigned identity**, select **User-assigned managed identity**, and search by identity name. -1. 
For **user-assigned** identity, you need to grant permissions to the Azure AI resource container registry and storage account as well. You can find the container registry and storage account in the Azure AI resource overview page in Azure portal. +1. For **user-assigned** identity, you need to grant permissions to the Azure AI hub resource container registry and storage account as well. You can find the container registry and storage account in the Azure AI hub resource overview page in Azure portal. :::image type="content" source="../media/prompt-flow/how-to-deploy-for-real-time-inference/storage-container-registry.png" alt-text="Screenshot of the overview page with storage and container registry highlighted." lightbox = "../media/prompt-flow/how-to-deploy-for-real-time-inference/storage-container-registry.png"::: - Go to the Azure AI resource container registry overview page, select **Access control**, and select **Add role assignment**, and assign **ACR pull |Pull container image** to the endpoint identity. + Go to the Azure AI hub resource container registry overview page, select **Access control**, and select **Add role assignment**, and assign **ACR pull |Pull container image** to the endpoint identity. - Go to the Azure AI resource default storage overview page, select **Access control**, and select **Add role assignment**, and assign **Storage Blob Data Reader** to the endpoint identity. + Go to the Azure AI hub resource default storage overview page, select **Access control**, and select **Add role assignment**, and assign **Storage Blob Data Reader** to the endpoint identity. -1. (optional) For **user-assigned** identity, if you want to monitor the endpoint related metrics like CPU/GPU/Disk/Memory utilization, you need to grant **Workspace metrics writer** role of Azure AI resource to the identity as well. +1. 
(optional) For **user-assigned** identity, if you want to monitor the endpoint related metrics like CPU/GPU/Disk/Memory utilization, you need to grant **Workspace metrics writer** role of Azure AI hub resource to the identity as well. ## Check the status of the endpoint |
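The portal steps above for a user-assigned identity can also be scripted. As a sketch, the following enumerates the role/scope pairs the article describes and emits the corresponding `az role assignment create` commands; the scopes are placeholders for your workspace, its container registry, and its default storage account, and `AcrPull` is assumed to be the CLI name for the portal's "ACR pull | Pull container image" entry.

```python
# Sketch: scripting the role assignments required before deployment creation
# for a user-assigned identity, as described in the steps above. Scopes are
# placeholders; role names follow the article's wording.

REQUIRED_ROLES = [
    # Built-in role that can read Azure AI hub resource connections
    ("Azure Machine Learning Workspace Connection Secrets Reader", "<workspace-scope>"),
    ("AcrPull", "<container-registry-scope>"),        # pull the container image
    ("Storage Blob Data Reader", "<storage-scope>"),  # mount model/code artifacts
    # Optional: only needed to monitor CPU/GPU/Disk/Memory utilization metrics
    ("Workspace metrics writer", "<workspace-scope>"),
]

def role_assignment_commands(principal_id, roles=REQUIRED_ROLES):
    return [
        f'az role assignment create --assignee "{principal_id}" '
        f'--role "{role}" --scope "{scope}"'
        for role, scope in roles
    ]

cmds = role_assignment_commands("00000000-0000-0000-0000-000000000000")
```

If any of the first three assignments is missing, the deployment creation fails, so granting them up front avoids a failed-then-retry cycle.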
ai-studio | Index Add | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/index-add.md | This can happen if you are trying to create an index using an **Owner**, **Contr > [!NOTE] > You need to be assigned the **Owner** role of the resource group or higher scope (like Subscription) to perform the operation in the next steps. This is because only the Owner role can assign roles to others. See details [here](/azure/role-based-access-control/built-in-roles). -#### Method 1: Assign more permissions to the user on the Azure AI resource +#### Method 1: Assign more permissions to the user on the Azure AI hub resource -If the Azure AI resource the project uses was created through Azure AI Studio: +If the Azure AI hub resource the project uses was created through Azure AI Studio: 1. Sign in to [Azure AI Studio](https://aka.ms/azureaistudio) and select your project via **Build** > **Projects**. 1. Select **Settings** from the collapsible left menu. 1. From the **Resource Configuration** section, select the link for your resource group name that takes you to the Azure portal. If the Azure AI resource the project uses was created through Azure AI Studio: #### Method 2: Assign more permissions on the resource group -If the Azure AI resource the project uses was created through Azure portal: +If the Azure AI hub resource the project uses was created through Azure portal: 1. Sign in to [Azure AI Studio](https://aka.ms/azureaistudio) and select your project via **Build** > **Projects**. 1. Select **Settings** from the collapsible left menu. 1. From the **Resource Configuration** section, select the link for your resource group name that takes you to the Azure portal. |
ai-studio | Models Foundation Azure Ai | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/models-foundation-azure-ai.md | Explore more Speech capabilities in the [Speech Studio](https://aka.ms/speechstu :::image type="content" source="../media/explore/explore-vision.png" alt-text="Screenshot of vision capability cards in the Azure AI Studio explore tab." lightbox="../media/explore/explore-vision.png"::: +> [!TIP] +> You can also try GPT-4 Turbo with Vision capabilities in the Azure AI Studio playground. For more information, see [GPT-4 Turbo with Vision on your images and videos in Azure AI Studio playground](../quickstarts/multimodal-vision.md). + Explore more vision capabilities in the [Vision Studio](https://portal.vision.cognitive.azure.com/) and the [Azure AI Vision documentation](/azure/ai-services/computer-vision/). To try more Azure AI services, go to the following studio links: - [Content Safety](https://contentsafety.cognitive.azure.com/) - [Custom Translator](https://portal.customtranslator.azure.ai/) -You can conveniently access these links from a menu at the top-right corner of AI Studio. +You can conveniently access these links from the **All Azure AI** menu at the top-right corner of AI Studio. ## Prompt samples Prompt engineering is an important aspect of working with generative AI models as it allows users to have greater control, customization, and influence over the outputs. By skillfully designing prompts, users can harness the capabilities of generative AI models to generate desired content, address specific requirements, and cater to various application domains. -The prompt samples are designed to assist AI studio users in finding and utilizing prompts for common use-cases and quickly get started. Users can explore the catalog, view available prompts, and easily open them in a playground for further customization and fine-tuning. 
+The prompt samples are designed to assist AI Studio users in finding and utilizing prompts for common use cases and getting started quickly. Users can explore the catalog, view available prompts, and easily open them in a playground for further customization and fine-tuning. > [!NOTE] > These prompts serve as starting points to help users get started, and we recommend that users tune and evaluate them before using them in production. -On the **Explore** page, select **Samples** > **Prompts** from the left menu to learn more and try it out. +On the **Explore** page, select **Models** > **Prompt catalog** from the left menu to learn more and try it out. ### Filter by Modalities, Industries or Tasks |
ai-studio | Azure Open Ai Gpt 4V Tool | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/prompt-flow-tools/azure-open-ai-gpt-4v-tool.md | The prompt flow *Azure OpenAI GPT-4 Turbo with Vision* tool enables you to use y Currently, access to this service is granted only by application. You can apply for access to Azure OpenAI by completing the form at <a href="https://aka.ms/oai/access" target="_blank">https://aka.ms/oai/access</a>. Open an issue on this repo to contact us if you run into a problem. -- An [Azure AI resource](../../how-to/create-azure-ai-resource.md) with a GPT-4 Turbo with Vision model deployed in one of the regions that support GPT-4 Turbo with Vision: Australia East, Switzerland North, Sweden Central, and West US. When you deploy from your project's **Deployments** page, select: `gpt-4` as the model name and `vision-preview` as the model version.+- An [Azure AI hub resource](../../how-to/create-azure-ai-resource.md) with a GPT-4 Turbo with Vision model deployed in one of the regions that support GPT-4 Turbo with Vision: Australia East, Switzerland North, Sweden Central, and West US. When you deploy from your project's **Deployments** page, select `gpt-4` as the model name and `vision-preview` as the model version. ## Connection |
ai-studio | Content Safety Tool | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/prompt-flow-tools/content-safety-tool.md | Create an Azure Content Safety connection: 1. Sign in to [Azure AI Studio](https://studio.azureml.net/). 1. Go to **Settings** > **Connections**. 1. Select **+ New connection**.-1. Complete all steps in the **Create a new connection** dialog box. You can use an Azure AI resource or Azure AI Content Safety resource. An Azure AI resource that supports multiple Azure AI services is recommended. +1. Complete all steps in the **Create a new connection** dialog box. You can use an Azure AI hub resource or Azure AI Content Safety resource. An Azure AI hub resource that supports multiple Azure AI services is recommended. ## Build with the Content Safety tool |
ai-studio | Python Tool | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/prompt-flow-tools/python-tool.md | Create a custom connection that stores all your LLM API KEY or other required cr - azureml.flow.connection_type: Custom - azureml.flow.module: promptflow.connections - :::image type="content" source="./media/python-tool/custom-connection-meta.png" alt-text="Screenshot that shows add extra meta to custom connection in AI studio." lightbox = "./media/python-tool/custom-connection-meta.png"::: + :::image type="content" source="./media/python-tool/custom-connection-meta.png" alt-text="Screenshot that shows add extra meta to custom connection in AI Studio." lightbox = "./media/python-tool/custom-connection-meta.png"::: > [!NOTE] |
ai-studio | Serp Api Tool | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/prompt-flow-tools/serp-api-tool.md | Create a Serp connection: - azureml.flow.module: promptflow.connections - api_key: Your_Serp_API_key, please mark it as a secret. - :::image type="content" source="./media/serp-api-tool/serp-connection-meta.png" alt-text="Screenshot that shows add extra meta to custom connection in AI studio." lightbox = "./media/serp-api-tool/serp-connection-meta.png"::: + :::image type="content" source="./media/serp-api-tool/serp-connection-meta.png" alt-text="Screenshot that shows add extra meta to custom connection in AI Studio." lightbox = "./media/serp-api-tool/serp-connection-meta.png"::: The connection is the model used to establish connections with Serp API. Get your API key from the SerpAPI account dashboard. |
ai-studio | Quota | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/quota.md | Azure uses limits and quotas to prevent budget overruns due to fraud, and to hon In this article, you learn about: - Default limits on Azure resources -- Creating Azure AI resource-level quotas. +- Creating Azure AI hub resource-level quotas. - Viewing your quotas and limits - Requesting quota and limit increases Azure Storage has a limit of 250 storage accounts per region, per subscription. ## View and request quotas in the studio -Use quotas to manage compute target allocation between multiple Azure AI resources in the same subscription. +Use quotas to manage compute target allocation between multiple Azure AI hub resources in the same subscription. -By default, all Azure AI resources share the same quota as the subscription-level quota for VM families. However, you can set a maximum quota for individual VM families for more granular cost control and governance on Azure AI resources in a subscription. Quotas for individual VM families let you share capacity and avoid resource contention issues. +By default, all Azure AI hub resources share the same quota as the subscription-level quota for VM families. However, you can set a maximum quota for individual VM families for more granular cost control and governance on Azure AI hub resources in a subscription. Quotas for individual VM families let you share capacity and avoid resource contention issues. In Azure AI Studio, select **Manage** from the top menu. Select **Quota** to view your quota at the subscription level in a region for both Azure Machine Learning virtual machine families and for your Azure OpenAI resources. |
ai-studio | Sdk Install | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/sdk-install.md | -The Azure AI SDK is a family of packages that provide access to Azure AI services such as Azure OpenAI and Speech. +The Azure AI SDK is a family of packages that provide access to Azure AI services such as Azure OpenAI. In this article, you'll learn how to get started with the Azure AI SDK for generative AI applications. You can either: - [Install the SDK into an existing development environment](#install-the-sdk-into-an-existing-development-environment) or |
ai-studio | Troubleshoot Deploy And Monitor | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/troubleshoot-deploy-and-monitor.md | This article provides instructions on how to troubleshoot your deployments and m For the general deployment error code reference, you can go to the [Azure Machine Learning documentation](/azure/machine-learning/how-to-troubleshoot-online-endpoints). Much of the information there also applies to Azure AI Studio deployments. **Question:** I got the following error message. What should I do?-"Use of Azure OpenAI models in Azure Machine Learning requires Azure OpenAI services resources. This subscription or region doesn't have access to this model." +"Use of Azure OpenAI models in Azure Machine Learning requires Azure OpenAI Services resources. This subscription or region doesn't have access to this model." **Answer:** You might not have access to this particular Azure OpenAI model. For example, your subscription might not have access to the latest GPT model yet or this model isn't offered in the region you want to deploy to. You can learn more about it on [Azure OpenAI Service models](../../ai-services/openai/concepts/models.md). You might have come across an ImageBuildFailure error: This happens when the env Option 1: Find the build log for the Azure default blob storage. -1. Go to your project and select the settings icon on the lower left corner. -2. Select YourAIResourceName under AI Resource on the Settings page. -3. On the AI resource page, select YourStorageName under Storage Account. This should be the name of storage account listed in the error message you received. -4. On the storage account page, select Container under Data Storage on the left navigation UI -5. Select the ContainerName listed in the error message you received. +1. Go to your project in [Azure AI Studio](https://ai.azure.com) and select the settings icon on the lower left corner. +2. 
Select your Azure AI hub resource name under **Resource configurations** on the **Settings** page. +3. On the Azure AI hub overview page, select your storage account name. This should be the name of the storage account listed in the error message you received. You'll be taken to the storage account page in the [Azure portal](https://portal.azure.com). +4. On the storage account page, select **Containers** under **Data Storage** on the left menu. +5. Select the container name listed in the error message you received. 6. Select through folders to find the build logs. Option 2: Find the build log within Azure Machine Learning studio, which is a separate portal from Azure AI Studio. |
ai-studio | Content Safety | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/quickstarts/content-safety.md | In this quickstart, get started with the [Azure AI Content Safety](/azure/ai-ser ## Prerequisites + * An active Azure account. If you don't have one, you can [create one for free](https://azure.microsoft.com/free/cognitive-services/).-* An [Azure AI resource](../how-to/create-azure-ai-resource.md) and [project](../how-to/create-projects.md) in Azure AI Studio. +* An [Azure AI hub resource](../how-to/create-azure-ai-resource.md) and [project](../how-to/create-projects.md) in Azure AI Studio. ## Moderate text or images |
ai-studio | Hear Speak Playground | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/quickstarts/hear-speak-playground.md | The speech to text and text to speech features can be used together or separatel ## Prerequisites + - An Azure subscription - <a href="https://azure.microsoft.com/free/cognitive-services" target="_blank">Create one for free</a>. - Access granted to Azure OpenAI in the desired Azure subscription. Currently, access to this service is granted only by application. You can apply for access to Azure OpenAI by completing the form at <a href="https://aka.ms/oai/access" target="_blank">https://aka.ms/oai/access</a>. Open an issue on this repo to contact us if you have an issue. -- An [Azure AI resource](../how-to/create-azure-ai-resource.md) with a chat model deployed. For more information about model deployment, see the [resource deployment guide](../../ai-services/openai/how-to/create-resource.md).+- An [Azure AI hub resource](../how-to/create-azure-ai-resource.md) with a chat model deployed. For more information about model deployment, see the [resource deployment guide](../../ai-services/openai/how-to/create-resource.md). - An [Azure AI project](../how-to/create-projects.md) in Azure AI Studio. |
ai-studio | Multimodal Vision | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/quickstarts/multimodal-vision.md | Extra usage fees might apply for using GPT-4 Turbo with Vision and Azure AI Visi ## Prerequisites + - An Azure subscription - <a href="https://azure.microsoft.com/free/cognitive-services" target="_blank">Create one for free</a>. - Access granted to Azure OpenAI in the desired Azure subscription. Currently, access to this service is granted only by application. You can apply for access to Azure OpenAI by completing the form at <a href="https://aka.ms/oai/access" target="_blank">https://aka.ms/oai/access</a>. Open an issue on this repo to contact us if you have an issue. -- An [Azure AI resource](../how-to/create-azure-ai-resource.md) with a GPT-4 Turbo with Vision model deployed in one of the regions that support GPT-4 Turbo with Vision: Australia East, Switzerland North, Sweden Central, and West US. When you deploy from your project's **Deployments** page, select: `gpt-4` as the model name and `vision-preview` as the model version.+- An [Azure AI hub resource](../how-to/create-azure-ai-resource.md) with a GPT-4 Turbo with Vision model deployed in one of the [regions that support GPT-4 Turbo with Vision](../../ai-services/openai/concepts/models.md#gpt-4-and-gpt-4-turbo-preview-model-availability): Australia East, Switzerland North, Sweden Central, and West US. When you deploy from your Azure AI project's **Deployments** page, select: `gpt-4` as the model name and `vision-preview` as the model version. - An [Azure AI project](../how-to/create-projects.md) in Azure AI Studio. ## Start a chat session to analyze images or video |
ai-studio | Playground Completions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/quickstarts/playground-completions.md | Use this article to get started making your first calls to Azure OpenAI. - Access granted to Azure OpenAI in the desired Azure subscription. Currently, access to this service is granted only by application. You can apply for access to Azure OpenAI by completing the form at <a href="https://aka.ms/oai/access" target="_blank">https://aka.ms/oai/access</a>. Open an issue on this repo to contact us if you have an issue.-- An [Azure AI resource](../how-to/create-azure-ai-resource.md) with a model deployed. For more information about model deployment, see the [resource deployment guide](../../ai-services/openai/how-to/create-resource.md).+- An [Azure AI hub resource](../how-to/create-azure-ai-resource.md) with a model deployed. For more information about model deployment, see the [resource deployment guide](../../ai-services/openai/how-to/create-resource.md). - An [Azure AI project](../how-to/create-projects.md) in Azure AI Studio. ### Try text completions |
ai-studio | Region Support | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/reference/region-support.md | Title: Azure AI Studio feature availability across clouds regions-+ description: This article lists Azure AI Studio feature availability across clouds regions. Azure AI Studio brings together various Azure AI capabilities that previously we ## Azure Public regions -Azure AI Studio is currently available in preview in the following Azure regions. You can create [Azure AI resources](../how-to/create-azure-ai-resource.md) and projects in these regions. +Azure AI Studio is currently available in preview in the following Azure regions. You can create [Azure AI hub resources](../how-to/create-azure-ai-resource.md) and projects in these regions. - Australia East - Brazil South Azure AI Studio preview is currently not available in Azure Government regions o ## Speech capabilities -Speech capabilities including custom neural voice vary in regional availability due to underlying hardware availability. See [Speech service supported regions](../../ai-services/speech-service/regions.md) for an overview. ++Azure AI Speech capabilities including custom neural voice vary in regional availability due to underlying hardware availability. See [Speech service supported regions](../../ai-services/speech-service/regions.md) for an overview. ## Next steps |
ai-studio | Deploy Chat Web App | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/tutorials/deploy-chat-web-app.md | The steps in this tutorial are: Currently, access to this service is granted only by application. You can apply for access to Azure OpenAI by completing the form at <a href="https://aka.ms/oai/access" target="_blank">https://aka.ms/oai/access</a>. Open an issue on this repo to contact us if you have an issue. -- An [Azure AI resource](../how-to/create-azure-ai-resource.md) and [project](../how-to/create-projects.md) in Azure AI Studio.+- An [Azure AI hub resource](../how-to/create-azure-ai-resource.md) and [project](../how-to/create-projects.md) in Azure AI Studio. - You need at least one file to upload that contains example data. To complete this tutorial, use the product information samples from the [Azure/aistudio-copilot-sample repository on GitHub](https://github.com/Azure/aistudio-copilot-sample/tree/main/data). Specifically, the [product_info_11.md](https://github.com/Azure/aistudio-copilot-sample/blob/main/dat` on your local computer. Once you're satisfied with the experience in Azure AI Studio, you can deploy the ### Find your resource group in the Azure portal -In this tutorial, your web app is deployed to the same resource group as your Azure AI resource. Later you configure authentication for the web app in the Azure portal. +In this tutorial, your web app is deployed to the same resource group as your Azure AI hub resource. Later you configure authentication for the web app in the Azure portal. Follow these steps to navigate from Azure AI Studio to your resource group in the Azure portal: -1. In Azure AI Studio, select **Manage** from the top menu and then select **Details**. If you have multiple Azure AI resources, select the one you want to use in order to see its details. +1. In Azure AI Studio, select **Manage** from the top menu and then select **Details**. 
If you have multiple Azure AI hub resources, select the one you want to use in order to see its details. 1. In the **Resource configuration** pane, select the resource group name to open the resource group in the Azure portal. In this example, the resource group is named `rg-docsazureairesource`. :::image type="content" source="../media/tutorials/chat-web-app/resource-group-manage-page.png" alt-text="Screenshot of the resource group in the Azure AI Studio." lightbox="../media/tutorials/chat-web-app/resource-group-manage-page.png"::: -1. You should now be in the Azure portal, viewing the contents of the resource group where you deployed the Azure AI resource. +1. You should now be in the Azure portal, viewing the contents of the resource group where you deployed the Azure AI hub resource. :::image type="content" source="../media/tutorials/chat-web-app/resource-group-azure-portal.png" alt-text="Screenshot of the resource group in the Azure portal." lightbox="../media/tutorials/chat-web-app/resource-group-azure-portal.png"::: To deploy the web app: 1. On the **Deploy to a web app** page, enter the following details: - **Name**: A unique name for your web app. - **Subscription**: Your Azure subscription.- - **Resource group**: Select a resource group in which to deploy the web app. You can use the same resource group as the Azure AI resource. - - **Location**: Select a location in which to deploy the web app. You can use the same location as the Azure AI resource. + - **Resource group**: Select a resource group in which to deploy the web app. You can use the same resource group as the Azure AI hub resource. + - **Location**: Select a location in which to deploy the web app. You can use the same location as the Azure AI hub resource. - **Pricing plan**: Choose a pricing plan for the web app. - **Enable chat history in the web app**: For the tutorial, make sure this box isn't selected. 
 - **I acknowledge that web apps will incur usage to my account**: Selected To deploy the web app: By default, the web app will only be accessible to you. In this tutorial, you add authentication to restrict access to the app to members of your Azure tenant. Users are asked to sign in with their Microsoft Entra account to be able to access your app. You can follow a similar process to add another identity provider if you prefer. The app doesn't use the user's sign-in information in any way other than verifying they're a member of your tenant. -1. Return to the browser tab containing the Azure portal (or re-open the [Azure portal](https://portal.azure.com?azure-portal=true) in a new browser tab) and view the contents of the resource group where you deployed the Azure AI resource and web app (you might need to refresh the view to see the web app). +1. Return to the browser tab containing the Azure portal (or re-open the [Azure portal](https://portal.azure.com?azure-portal=true) in a new browser tab) and view the contents of the resource group where you deployed the Azure AI hub resource and web app (you might need to refresh the view to see the web app). 1. Select the **App Service** resource from the list of resources in the resource group. |
ai-studio | Deploy Copilot Ai Studio | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/tutorials/deploy-copilot-ai-studio.md | The steps in this tutorial are: Currently, access to this service is granted only by application. You can apply for access to Azure OpenAI by completing the form at <a href="https://aka.ms/oai/access" target="_blank">https://aka.ms/oai/access</a>. Open an issue on this repo to contact us if you have an issue. -- You need an Azure AI resource and your user role must be **Azure AI Developer**, **Contributor**, or **Owner** on the Azure AI resource. For more information, see [Azure AI resources](../concepts/ai-resources.md) and [Azure AI roles](../concepts/rbac-ai-studio.md).- - If your role is **Contributor** or **Owner**, you can [create an Azure AI resource in this tutorial](#create-an-azure-ai-project-in-azure-ai-studio). - - If your role is **Azure AI Developer**, the Azure AI resource must already be created. +- You need an Azure AI hub resource and your user role must be **Azure AI Developer**, **Contributor**, or **Owner** on the Azure AI hub resource. For more information, see [Azure AI hub resources](../concepts/ai-resources.md) and [Azure AI roles](../concepts/rbac-ai-studio.md). + - If your role is **Contributor** or **Owner**, you can [create an Azure AI hub resource in this tutorial](#create-an-azure-ai-project-in-azure-ai-studio). + - If your role is **Azure AI Developer**, the Azure AI hub resource must already be created. - Your subscription needs to be below your [quota limit](../how-to/quota.md) to [deploy a new model in this tutorial](#deploy-a-chat-model). Otherwise you already need to have a [deployed chat model](../how-to/deploy-models-openai.md). The steps in this tutorial are: ## Create an Azure AI project in Azure AI Studio -Your Azure AI project is used to organize your work and save state while building your copilot. 
During this tutorial, your project contains your data, prompt flow runtime, evaluations, and other resources. For more information about the Azure AI projects and resources model, see [Azure AI resources](../concepts/ai-resources.md). +Your Azure AI project is used to organize your work and save state while building your copilot. During this tutorial, your project contains your data, prompt flow runtime, evaluations, and other resources. For more information about the Azure AI projects and resources model, see [Azure AI hub resources](../concepts/ai-resources.md). To create an Azure AI project in Azure AI Studio, follow these steps: 1. Sign in to [Azure AI Studio](https://ai.azure.com) and go to the **Build** page from the top menu. 1. Select **+ New project**. 1. Enter a name for the project.-1. Select an Azure AI resource from the dropdown to host your project. If you don't have access to an Azure AI resource yet, select **Create a new resource**. +1. Select an Azure AI hub resource from the dropdown to host your project. If you don't have access to an Azure AI hub resource yet, select **Create a new resource**. :::image type="content" source="../media/tutorials/copilot-deploy-flow/create-project-details.png" alt-text="Screenshot of the project details page within the create project dialog." lightbox="../media/tutorials/copilot-deploy-flow/create-project-details.png"::: > [!NOTE]- > To create an Azure AI resource, you must have **Owner** or **Contributor** permissions on the selected resource group. It's recommended to share an Azure AI resource with your team. This lets you share configurations like data connections with all projects, and centrally manage security settings and spend. + > To create an Azure AI hub resource, you must have **Owner** or **Contributor** permissions on the selected resource group. It's recommended to share an Azure AI hub resource with your team. 
This lets you share configurations like data connections with all projects, and centrally manage security settings and spend. -1. If you're creating a new Azure AI resource, enter a name. +1. If you're creating a new Azure AI hub resource, enter a name. :::image type="content" source="../media/tutorials/copilot-deploy-flow/create-project-resource.png" alt-text="Screenshot of the create resource page within the create project dialog." lightbox="../media/tutorials/copilot-deploy-flow/create-project-resource.png"::: To create an Azure AI project in Azure AI Studio, follow these steps: 1. Leave the **Resource group** as the default to create a new resource group. Alternatively, you can select an existing resource group from the dropdown. > [!TIP]- > Especially for getting started it's recommended to create a new resource group for your project. This allows you to easily manage the project and all of its resources together. When you create a project, several resources are created in the resource group, including an Azure AI resource, a container registry, and a storage account. + > Especially for getting started it's recommended to create a new resource group for your project. This allows you to easily manage the project and all of its resources together. When you create a project, several resources are created in the resource group, including an Azure AI hub resource, a container registry, and a storage account. -1. Enter the **Location** for the Azure AI resource and then select **Next**. The location is the region where the Azure AI resource is hosted. The location of the Azure AI resource is also the location of the project. +1. Enter the **Location** for the Azure AI hub resource and then select **Next**. The location is the region where the Azure AI hub resource is hosted. The location of the Azure AI hub resource is also the location of the project. > [!NOTE]- > Azure AI resources and services availability differ per region. 
For example, certain models might not be available in certain regions. The resources in this tutorial are created in the **East US 2** region. + > Azure AI hub resources and services availability differ per region. For example, certain models might not be available in certain regions. The resources in this tutorial are created in the **East US 2** region. 1. Review the project details and then select **Create a project**. |
ai-studio | Screen Reader | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/tutorials/screen-reader.md | -This article is for people who use screen readers such as Microsoft's Narrator, JAWS, NVDA or Apple's Voiceover, and provides guidance on how to use the Azure AI Studio with a screen reader. +This article is for people who use screen readers such as Microsoft's Narrator, JAWS, NVDA or Apple's Voiceover. You learn how to use the Azure AI Studio with a screen reader. ## Getting started in the Azure AI Studio For efficient navigation, it might be helpful to navigate by landmarks to move b ## Explore -In **Explore** you can explore the different capabilities of Azure AI before creating a project. You can find this in the primary navigation landmark. +In **Explore** you can explore the different capabilities of Azure AI before creating a project. You can find this page in the primary navigation landmark. -Within **Explore**, you can explore many capabilities found within the secondary navigation. These include model catalog, model leaderboard, and pages for Azure AI services such as Speech, Vision, and Content Safety. -- Model catalog contains three main areas: Announcements, Models and Filters. You can use Search and Filters to narrow down model selection +Within **Explore**, you can [explore many capabilities](../how-to/models-foundation-azure-ai.md) found within the secondary navigation. These include [model catalog](../how-to/model-catalog.md), model leaderboard, and pages for Azure AI services such as Speech, Vision, and Content Safety. +- [Model catalog](../how-to/model-catalog.md) contains three main areas: Announcements, Models and Filters. You can use Search and Filters to narrow down model selection - Azure AI service pages such as Speech consist of many cards containing links. These cards lead you to demo experiences where you can sample our AI capabilities and might link out to another webpage. 
## Projects To work within the Azure AI Studio, you must first [create a project](../how-to/create-projects.md): -1. Navigate to the Build tab in the primary navigation. -1. Press the Tab key until you hear *New project* and select this button. +1. In [Azure AI Studio](https://ai.azure.com), navigate to the **Build** tab in the primary navigation. +1. Press the **Tab** key until you hear *New project* and select this button. 1. Enter the information requested in the **Create a new project** dialog. You then get taken to the project details page. From the **Build** tab, navigate to the secondary navigation landmark and press ### Playground structure -When you first arrive the playground mode dropdown is set to **Chat** by default. In this mode the playground is composed of the command toolbar and three main panes: **Assistant setup**, **Chat session**, and **Configuration**. If you have added your own data in the playground, the **Citations** pane will also appear when selecting a citation as part of the model response. +When you first arrive, the playground mode dropdown is set to **Chat** by default. In this mode, the playground is composed of the command toolbar and three main panes: **Assistant setup**, **Chat session**, and **Configuration**. If you added your own data in the playground, the **Citations** pane also appears when selecting a citation as part of the model response. You can navigate by heading to move between these panes, as each pane has its own H2 heading. ### Assistant setup pane -This is where you can set up the chat assistant according to your organization's needs. +The assistant setup pane is where you can set up the chat assistant according to your organization's needs. Once you edit the system message or examples, your changes don't save automatically. Press the **Save changes** button to ensure your changes are saved. 
### Chat session pane -This is where you can chat to the model and test out your assistant -- After you send a message, the model might take some time to respond, especially if the response is long. You hear a screen reader announcement "Message received from the chatbot" when the model has finished composing a response. +The chat session pane is where you can chat to the model and test out your assistant. +- After you send a message, the model might take some time to respond, especially if the response is long. You hear a screen reader announcement "Message received from the chatbot" when the model finishes composing a response. - Content in the chatbot follows this format: ``` There's also a dashboard view provided to allow you to compare evaluation runs. ## Technical support for customers with disabilities -Microsoft wants to provide the best possible experience for all our customers. If you have a disability or questions related to accessibility, please contact the Microsoft Disability Answer Desk for technical assistance. The Disability Answer Desk support team is trained in using many popular assistive technologies and can offer assistance in English, Spanish, French, and American Sign Language. Go to the Microsoft Disability Answer Desk site to find out the contact details for your region. +Microsoft wants to provide the best possible experience for all our customers. If you have a disability or questions related to accessibility, contact the Microsoft Disability Answer Desk for technical assistance. The Disability Answer Desk support team is trained in using many popular assistive technologies. They can offer assistance in English, Spanish, French, and American Sign Language. Go to the Microsoft Disability Answer Desk site to find out the contact details for your region. -If you're a government, commercial, or enterprise customer, please contact the enterprise Disability Answer Desk. 
+If you're a government, commercial, or enterprise customer, contact the enterprise Disability Answer Desk. ## Next steps * Learn how you can build generative AI applications in the [Azure AI Studio](../what-is-ai-studio.md). |
ai-studio | What Is Ai Studio | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/what-is-ai-studio.md | Azure AI Studio brings together capabilities from across multiple Azure AI servi [Azure AI Studio](https://ai.azure.com) is designed for developers to: - Build generative AI applications on an enterprise-grade platform. -- Directly from the studio you can interact with a project code-first via the Azure AI SDK and Azure AI CLI. +- Directly from the studio you can interact with a project code-first via the [Azure AI SDK](how-to/sdk-install.md) and [Azure AI CLI](how-to/cli-install.md). - Azure AI Studio is a trusted and inclusive platform that empowers developers of all abilities and preferences to innovate with AI and shape the future. - Seamlessly explore, build, test, and deploy using cutting-edge AI tools and ML models, grounded in responsible AI practices. -- Build together as one team. Your Azure AI resource provides enterprise-grade security, and a collaborative environment with shared files and connections to pretrained models, data and compute.-- Organize your way. Your project helps you save state, allowing you to iterate from first idea, to first prototype, and then first production deployment. Also easily invite others to collaborate along this journey.+- Build together as one team. Your [Azure AI hub resource](./concepts/ai-resources.md) provides enterprise-grade security, and a collaborative environment with shared files and connections to pretrained models, data and compute. +- Organize your way. Your [Azure AI project](./how-to/create-projects.md) helps you save state, allowing you to iterate from first idea, to first prototype, and then first production deployment. Also easily invite others to collaborate along this journey. With Azure AI Studio, you can evaluate large language model (LLM) responses and orchestrate prompt application components with prompt flow for better performance. 
The platform facilitates scalability for transforming proof of concepts into full-fledged production with ease. Continuous monitoring and refinement support long-term success. ## Getting around in Azure AI Studio -Wherever you're at or going in Azure AI Studio, use the Home, Explore, Build, and Manage tabs to find your way around. -+Wherever you're at or going in Azure AI Studio, use the **Home**, **Explore**, **Build**, and **Manage** tabs to find your way around. # [Home](#tab/home) Build is an experience where AI Devs and ML Pros can build or customize AI solut - Simplified development of large language model (LLM) solutions and copilots with end-to-end app templates and prompt samples for common use cases. - Orchestration framework to handle the complex mapping of functions and code between LLMs, tools, custom code, prompts, data, search indexes, and more.-- Evaluate, deploy, and continuously monitor your AI application and app performance +- Evaluate, deploy, and continuously monitor your AI application and app performance. :::image type="content" source="./media/explore/ai-studio-tab-build.png" alt-text="Screenshot of the signed-out Azure AI Studio Build page." lightbox="./media/explore/ai-studio-tab-build.png"::: Build is an experience where AI Devs and ML Pros can build or customize AI solut As a developer, you can manage settings such as connections and compute. Your admin will mainly use this section to look at access control, usage, and billing. -- Centralized backend infrastructure to reduce complexity for developers-- A single Azure AI resource for enterprise configuration, unified data story, and built-in governance+- Centralized backend infrastructure to reduce complexity for developers. +- A single Azure AI hub resource for enterprise configuration, unified data story, and built-in governance. :::image type="content" source="./media/explore/ai-studio-tab-manage.png" alt-text="Screenshot of the signed-out Azure AI Studio manage page." 
lightbox="./media/explore/ai-studio-tab-manage.png"::: -## Azure AI studio enterprise chat solution demo +## Azure AI Studio enterprise chat solution demo Learn how to create a retail copilot using your data with Azure AI Studio in this [end-to-end walkthrough video](https://youtu.be/Qes7p5w8Tz8). > [!VIDEO https://www.youtube.com/embed/Qes7p5w8Tz8] Using Azure AI Studio also incurs cost associated with the underlying services, ## Region availability -Azure AI Studio is currently available in the following regions: Australia East, Brazil South, Canada Central, East US, East US 2, France Central, Germany West Central, India South, Japan East, North Central US, Norway East, Poland Central, South Africa North, South Central US, Sweden Central, Switzerland North, UK South, West Europe, and West US. --To learn more, see [Azure AI Studio regions](./reference/region-support.md). +Azure AI Studio is available in most regions where Azure AI services are available. For more information, see [region support for Azure AI Studio](reference/region-support.md). ## How to get access |
aks | Egress Outboundtype | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/egress-outboundtype.md | You can customize egress for an AKS cluster to fit specific scenarios. By defaul This article covers the various types of outbound connectivity that are available in AKS clusters. > [!NOTE]-> You can now update the `outboundType` after cluster creation. This feature is in preview. See [Updating `outboundType after cluster creation (preview)](#updating-outboundtype-after-cluster-creation-preview). +> You can now update the `outboundType` after cluster creation. ## Limitations You must deploy the AKS cluster into an existing virtual network with a subnet t For more information, see [configuring cluster egress via user-defined routing](egress-udr.md). -## Updating `outboundType` after cluster creation (preview) +## Updating `outboundType` after cluster creation Changing the outbound type after cluster creation will deploy or remove resources as required to put the cluster into the new egress configuration. Migration is only supported between `loadBalancer`, `managedNATGateway` (if usin > [!WARNING] > Changing the outbound type on a cluster is disruptive to network connectivity and will result in a change of the cluster's egress IP address. If any firewall rules have been configured to restrict traffic from the cluster, you need to update them to match the new egress IP address. - ### Update cluster to use a new outbound type > [!NOTE] |
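aks | (editor's note) | The `outboundType` update described in the row above is performed with `az aks update`. A minimal sketch, assuming placeholder resource group and cluster names (this requires an Azure subscription and cannot run locally): ```azurecli-interactive
# Placeholder names; substitute your own resource group and cluster.
# Switching egress to a managed NAT gateway. As the row above warns, this
# change is disruptive and changes the cluster's egress IP address.
az aks update \
    --resource-group <resource-group> \
    --name <cluster-name> \
    --outbound-type managedNATGateway
``` If firewall rules restrict traffic from the cluster, update them to the new egress IP after the change completes. |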
aks | Monitor Control Plane Metrics | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/monitor-control-plane-metrics.md | This article helps you understand this new feature, how to implement it, and how - [Private link](../azure-monitor/logs/private-link-security.md) isn't supported. - Only the default [ama-metrics-settings-config-map](../azure-monitor/containers/prometheus-metrics-scrape-configuration.md#configmaps) can be customized. All other customizations are not supported. - The cluster must use [managed identity authentication](use-managed-identity.md).-- This feature is currently available in the following regions: West Central US, East Asia, UK South, East US, Australia Central, Australia East, Brazil South, Canada Central, Central India, East US 2, France Central, and Germany West Central, Israel Central, Italy North, Japan East, JioIndia West, Korea Central, Malaysia South, Mexico Central, North Central US, North Europe, Norway East, Qatar Central, South Africa North, Sweden Central, Switzerland North, Taiwan North, UAE North, UK West, West US 2.+- This feature is currently available in the following regions: West Central US, East Asia, UK South, East US, Australia Central, Australia East, Brazil South, Canada Central, Central India, East US 2, France Central, Germany West Central, Israel Central, Italy North, Japan East, JioIndia West, Korea Central, Malaysia South, Mexico Central, North Central US, North Europe, Norway East, Qatar Central, South Africa North, Sweden Central, Switzerland North, Taiwan North, UAE North, UK West, West US 2, Australia Central 2, Australia Southeast, Austria East, Belgium Central, Brazil South East, Canada East, Central US, Chile Central, France South, Germany North, Israel North West, Japan West, Jio India Central. ### Install or update the `aks-preview` Azure CLI extension |
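aks | (editor's note) | The row above ends at the step for installing the `aks-preview` Azure CLI extension. That step is typically the following (a sketch; it assumes the Azure CLI is already installed and signed in): ```azurecli-interactive
# Install the aks-preview extension, or update it if it's already present.
az extension add --name aks-preview
az extension update --name aks-preview
``` |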
aks | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/policy-reference.md | Title: Built-in policy definitions for Azure Kubernetes Service description: Lists Azure Policy built-in policy definitions for Azure Kubernetes Service. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 01/30/2024 Last updated : 02/06/2024 |
aks | Static Ip | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/static-ip.md | This article shows you how to create a static public IP address and assign it to 2. Get the static public IP address using the [`az network public-ip list`][az-network-public-ip-list] command. Specify the name of the node resource group and public IP address you created, and query for the `ipAddress`. ```azurecli-interactive- az network public-ip show --resource-group myNetworkResourceGroup --name myAKSPublicIP --query ipAddress --output tsv + az network public-ip show --resource-group <node resource group name> --name myAKSPublicIP --query ipAddress --output tsv ``` ## Create a service using the static IP address This article shows you how to create a static public IP address and assign it to kind: Service metadata: annotations:- service.beta.kubernetes.io/azure-load-balancer-resource-group: myNetworkResourceGroup + service.beta.kubernetes.io/azure-load-balancer-resource-group: <node resource group name> service.beta.kubernetes.io/azure-pip-name: myAKSPublicIP name: azure-load-balancer spec: This article shows you how to create a static public IP address and assign it to kind: Service metadata: annotations:- service.beta.kubernetes.io/azure-load-balancer-resource-group: myNetworkResourceGroup + service.beta.kubernetes.io/azure-load-balancer-resource-group: <node resource group name> service.beta.kubernetes.io/azure-pip-name: myAKSPublicIP service.beta.kubernetes.io/azure-dns-label-name: <unique-service-label> name: azure-load-balancer |
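aks | (editor's note) | The annotation fragments in the static IP row above belong to a Kubernetes `LoadBalancer` service manifest. Assembled into one complete, hypothetical example (the selector, port, and node resource group placeholder are assumptions, not from the article): ```yaml
apiVersion: v1
kind: Service
metadata:
  annotations:
    # Resource group that contains the pre-created public IP
    service.beta.kubernetes.io/azure-load-balancer-resource-group: <node resource group name>
    service.beta.kubernetes.io/azure-pip-name: myAKSPublicIP
  name: azure-load-balancer
spec:
  type: LoadBalancer
  ports:
    - port: 80
  selector:
    app: azure-load-balancer
``` |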
api-management | Api Management Howto Log Event Hubs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-log-event-hubs.md | Use the API Management [REST API](/rest/api/apimanagement/current-preview/logger { "properties": { "loggerType": "azureEventHub",- "description": "adding a new logger with system assigned managed identity", + "description": "adding a new logger with user-assigned managed identity", "credentials": { "endpointAddress":"<EventHubsNamespace>.servicebus.windows.net", "identityClientId":"<ClientID>", |
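api-management | (editor's note) | The logger payload in the row above is sent as a PUT against the API Management REST API. A hedged sketch using `az rest`, where all identifiers are placeholders and the API version is an assumption (check the current REST reference): ```azurecli-interactive
# logger.json holds the JSON body shown in the row above.
az rest --method put \
    --url "https://management.azure.com/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.ApiManagement/service/<apim-service>/loggers/<logger-id>?api-version=2022-08-01" \
    --body @logger.json
``` |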
api-management | Migrate Stv1 To Stv2 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/migrate-stv1-to-stv2.md | For more information about the `stv1` and `stv2` platforms and the benefits of u API Management platform migration from `stv1` to `stv2` involves updating the underlying compute alone and has no impact on the service/API configuration persisted in the storage layer. -* The upgrade process involves creating a new compute in parallel to the old compute. The old compute takes 15-45 mins to be deleted with an option to delay it for up to 48 hours. +* The upgrade process involves creating a new compute in parallel to the old compute. The old compute takes 15-45 mins to be deleted with an option to delay it for up to 48 hours. The 48 hour delay option is only available for VNet injected services. * The API Management status in the Portal will be "Updating". * Azure manages the management endpoint DNS, and updates to the new compute immediately on successful migration. * The Gateway DNS still points to the old compute if custom domain is in use. On successful migration, update any network dependencies including DNS, firewall - **Can I roll back the migration if required?** - Yes, you can. If there's a failure during the migration process, the instance will automatically roll back to the stv1 platform. However, if you encounter any other issues post migration, you can roll back only if you have requested an extension to the old gateway purge. By default, the old gateway is purged in 15 mins that can be extended up to 48 hours by contacting support in advance. You should make sure to contact support before the old gateway is purged, if a rollback is required. + **VNet-injected instances:** Yes, you can. If there's a failure during the migration process, the instance will automatically roll back to the stv1 platform. 
However, if you encounter any other issues post-migration, you can roll back only if you have requested an extension to the old gateway purge. By default, the old gateway is purged in 15 minutes, which can be extended up to 48 hours by contacting support in advance. If a rollback is required, make sure to contact support before the old gateway is purged. ++ **Non-VNet injected instances:** If there is a failure during the migration process, the instance will automatically roll back to the stv1 platform. If the migration completes successfully, a rollback is not possible. - **Is there any change required in custom domain/private DNS zones?** |
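api-management | (editor's note) | The stv1-to-stv2 migration described above can be triggered through the `migrateToStv2` REST action. A sketch with `az rest`, where the resource IDs are placeholders and the API version is an assumption (the migration article's current command is authoritative): ```azurecli-interactive
APIM_RESOURCE_ID="/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.ApiManagement/service/<apim-service>"
az rest --method post \
    --uri "https://management.azure.com${APIM_RESOURCE_ID}/migrateToStv2?api-version=2023-03-01-preview"
``` |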
api-management | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/policy-reference.md | Title: Built-in policy definitions for Azure API Management description: Lists Azure Policy built-in policy definitions for Azure API Management. These built-in policy definitions provide approaches to managing your Azure resources. Previously updated : 01/30/2024 Last updated : 02/06/2024 |
app-service | Configure Ssl App Service Certificate | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-ssl-app-service-certificate.md | The following domain verification methods are supported: | **App Service Verification** | The most convenient option when the domain is already mapped to an App Service app in the same subscription because the App Service app has already verified the domain ownership. Review the last step in [Confirm domain ownership](#confirm-domain-ownership). | | **Domain Verification** | Confirm an [App Service domain that you purchased from Azure](manage-custom-dns-buy-domain.md). Azure automatically adds the verification TXT record for you and completes the process. | | **Mail Verification** | Confirm the domain by sending an email to the domain administrator. Instructions are provided when you select the option. |-| **Manual Verification** | Confirm the domain by using either a DNS TXT record or an HTML page, which applies only to **Standard** certificates per the following note. The steps are provided after you select the option. The HTML page option doesn't work for web apps with "HTTPS Only' enabled. | +| **Manual Verification** | Confirm the domain by using either a DNS TXT record or an HTML page, which applies only to **Standard** certificates per the following note. The steps are provided after you select the option. The HTML page option doesn't work for web apps with "HTTPS Only" enabled. For subdomain verification, the domain verification token needs to be added to the root domain. | > [!IMPORTANT] > With the **Standard** certificate, you get a certificate for the requested top-level domain *and* the `www` subdomain, for example, `contoso.com` and `www.contoso.com`. However, **App Service Verification** and **Manual Verification** both use HTML page verification, which doesn't support the `www` subdomain when issuing, rekeying, or renewing a certificate. 
For the **Standard** certificate, use **Domain Verification** and **Mail Verification** to include the `www` subdomain with the requested top-level domain in the certificate. |
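app-service | (editor's note) | For the manual DNS TXT verification described above, including adding a subdomain's verification token to the root domain, the record can be created like this when the zone is hosted in Azure DNS (a sketch; zone name, resource group, and token are placeholders): ```azurecli-interactive
# "@" targets the root of the zone; the token value comes from the
# certificate's verification details in the portal (placeholder here).
az network dns record-set txt add-record \
    --resource-group <resource-group> \
    --zone-name contoso.com \
    --record-set-name "@" \
    --value "<domain-verification-token>"
``` |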
app-service | Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/overview.md | Title: App Service Environment overview description: This article discusses the Azure App Service Environment feature of Azure App Service. Previously updated : 11/08/2023 Last updated : 02/06/2024 An App Service Environment is a single-tenant deployment of Azure App Service th Applications are hosted in App Service plans, which are created in an App Service Environment. An App Service plan is essentially a provisioning profile for an application host. As you scale out your App Service plan, you create more application hosts with all the apps in that App Service plan on each host. A single App Service Environment v3 can have up to 200 total App Service plan instances across all the App Service plans combined. A single App Service Isolated v2 (Iv2) plan can have up to 100 instances by itself. -When you're deploying onto dedicated hardware (hosts), you're limited in scaling across all App Service plans to the number of cores in this type of environment. An App Service Environment that's deployed on dedicated hosts has 132 vCores available. I1v2 uses two vCores, I2v2 uses four vCores, and I3v2 uses eight vCores per instance. +When you're deploying onto dedicated hardware (hosts), you're limited in scaling across all App Service plans to the number of cores in this type of environment. An App Service Environment that's deployed on dedicated hosts has 132 vCores available. I1v2 uses two vCores, I2v2 uses four vCores, and I3v2 uses eight vCores per instance. Only I1v2, I2v2 and I3v2 SKU sizes are available on App Service Environment deployed on dedicated hosts. ## Virtual network support |
app-service | Language Support Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/language-support-policy.md | Title: Language runtime support policy -description: Learn about the language runtime support policy for Azure App Service. +description: Learn about the language runtime support policy for Azure App Service. Last updated 12/23/2023 -+ # Language runtime support policy for App Service |
app-service | Manage Custom Dns Buy Domain | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/manage-custom-dns-buy-domain.md | Title: Buy a custom domain -description: Learn how to buy an App Service domain and use it as a custom domain for your app Azure App Service. +description: Learn how to buy an App Service domain and use it as a custom domain for your app Azure App Service. ms.assetid: 70fb0e6e-8727-4cca-ba82-98a4d21586ff Last updated 01/31/2023- - # Buy an App Service domain and configure an app with it |
app-service | Manage Custom Dns Migrate Domain | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/manage-custom-dns-migrate-domain.md | |
app-service | Manage Scale Per App | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/manage-scale-per-app.md | ms.assetid: a903cb78-4927-47b0-8427-56412c4e3e64 Last updated 06/29/2023 -+ # High-density hosting on Azure App Service using per-app scaling |
app-service | Manage Scale Up | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/manage-scale-up.md | description: Learn how to scale up an app in Azure App Service. Get more CPU, me ms.assetid: f7091b25-b2b6-48da-8d4a-dcf9b7baccab Last updated 05/08/2023- - # Scale up an app in Azure App Service |
app-service | Migrate Wordpress | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/migrate-wordpress.md | When you migrate a live site and its DNS domain name to App Service, that DNS na If your site is configured with SSL certs, then follow [Add and manage TLS/SSL certificates](configure-ssl-certificate.md?tabs=apex%2Cportal) to configure SSL. Next steps:-[At-scale assessment of .NET web apps](/training/modules/migrate-app-service-migration-assistant/) +[At-scale assessment of .NET web apps](/training/modules/migrate-app-service-migration-assistant/) |
app-service | Monitor Instances Health Check | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/monitor-instances-health-check.md | |
app-service | Overview App Gateway Integration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/overview-app-gateway-integration.md | |
app-service | Overview Authentication Authorization | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/overview-authentication-authorization.md | ms.assetid: b7151b57-09e5-4c77-a10c-375a262f17e5 Last updated 02/03/2023 -+ |
app-service | Overview Nat Gateway Integration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/overview-nat-gateway-integration.md | |
app-service | Overview Vnet Integration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/overview-vnet-integration.md | Title: Integrate your app with an Azure virtual network description: Integrate your app in Azure App Service with Azure virtual networks. Previously updated : 07/21/2023 Last updated : 02/06/2024 App Service has two variations: * The dedicated compute pricing tiers, which include the Basic, Standard, Premium, Premium v2, and Premium v3. * The App Service Environment, which deploys directly into your virtual network with dedicated supporting infrastructure and uses the Isolated and Isolated v2 pricing tiers. -The virtual network integration feature is used in Azure App Service dedicated compute pricing tiers. If your app is in an [App Service Environment](./environment/overview.md), it's already integrated with a virtual network and doesn't require you to configure virtual network integration feature to reach resources in the same virtual network. For more information on all the networking features, see [App Service networking features](./networking-features.md). +The virtual network integration feature is used in Azure App Service dedicated compute pricing tiers. If your app is in an [App Service Environment](./environment/overview.md), it already integrates with a virtual network and doesn't require you to configure the virtual network integration feature to reach resources in the same virtual network. For more information on all the networking features, see [App Service networking features](./networking-features.md). Virtual network integration gives your app access to resources in your virtual network, but it doesn't grant inbound private access to your app from the virtual network. Private site access refers to making an app accessible only from a private network, such as from within an Azure virtual network. Virtual network integration is used only to make outbound calls from your app into your virtual network. 
Refer to [private endpoint](./networking/private-endpoint.md) for inbound private access. Virtual network integration supports connecting to a virtual network in the same When you use virtual network integration, you can use the following Azure networking features: -* **Network security groups (NSGs)**: You can block outbound traffic with an NSG that's placed on your integration subnet. The inbound rules don't apply because you can't use virtual network integration to provide inbound access to your app. +* **Network security groups (NSGs)**: You can block outbound traffic with an NSG that you use on your integration subnet. The inbound rules don't apply because you can't use virtual network integration to provide inbound access to your app. * **Route tables (UDRs)**: You can place a route table on the integration subnet to send outbound traffic where you want. * **NAT gateway**: You can use [NAT gateway](./networking/nat-gateway-integration.md) to get a dedicated outbound IP and mitigate SNAT port exhaustion. Because subnet size can't be changed after assignment, use a subnet that's large > > Since you have 1 App Service plan, 1 x 50 = 50 IP addresses. -When you want your apps in your plan to reach a virtual network that's already connected to by apps in another plan, select a different subnet than the one being used by the pre-existing virtual network integration. +When you want your apps in your plan to reach a virtual network that apps in another plan already connect to, select a different subnet than the one being used by the pre-existing virtual network integration. 
## Permissions You must have at least the following Role-based access control permissions on th | Microsoft.Network/virtualNetworks/subnets/read | Read a virtual network subnet definition | | Microsoft.Network/virtualNetworks/subnets/join/action | Joins a virtual network | -If the virtual network is in a different subscription than the app, you must ensure that the subscription with the virtual network is registered for the `Microsoft.Web` resource provider. You can explicitly register the provider [by following this documentation](../azure-resource-manager/management/resource-providers-and-types.md#register-resource-provider), but it's automatically registered when creating the first web app in a subscription. +If the virtual network is in a different subscription than the app, you must ensure that the subscription with the virtual network is registered for the `Microsoft.Web` resource provider. You can explicitly register the provider [by following this documentation](../azure-resource-manager/management/resource-providers-and-types.md#register-resource-provider), but it also automatically registers when creating the first web app in a subscription. ## Routes You can control what traffic goes through the virtual network integration. There are three types of routing to consider when you configure virtual network integration. [Application routing](#application-routing) defines what traffic is routed from your app and into the virtual network. [Configuration routing](#configuration-routing) affects operations that happen before or during startup of your app. Examples are container image pull and [app settings with Key Vault reference](./app-service-key-vault-references.md). [Network routing](#network-routing) is the ability to handle how both app and configuration traffic are routed from your virtual network and out. -Through application routing or configuration routing options, you can configure what traffic is sent through the virtual network integration. 
Traffic is only subject to [network routing](#network-routing) if it's sent through the virtual network integration. +Through application routing or configuration routing options, you can configure what traffic is sent through the virtual network integration. Traffic is only subject to [network routing](#network-routing) if sent through the virtual network integration. ### Application routing -Application routing applies to traffic that is sent from your app after it has been started. See [configuration routing](#configuration-routing) for traffic during startup. When you configure application routing, you can either route all traffic or only private traffic (also known as [RFC1918](https://datatracker.ietf.org/doc/html/rfc1918#section-3) traffic) into your virtual network. You configure this behavior through the outbound internet traffic setting. If outbound internet traffic routing is disabled, your app only routes private traffic into your virtual network. If you want to route all your outbound app traffic into your virtual network, make sure that outbound internet traffic is enabled. +Application routing applies to traffic that is sent from your app after it starts. See [configuration routing](#configuration-routing) for traffic during startup. When you configure application routing, you can either route all traffic or only private traffic (also known as [RFC1918](https://datatracker.ietf.org/doc/html/rfc1918#section-3) traffic) into your virtual network. You configure this behavior through the outbound internet traffic setting. If outbound internet traffic routing is disabled, your app only routes private traffic into your virtual network. If you want to route all your outbound app traffic into your virtual network, make sure that outbound internet traffic is enabled. * Only traffic configured in application or configuration routing is subject to the NSGs and UDRs that are applied to your integration subnet. 
* When outbound internet traffic routing is enabled, the source address for your outbound traffic from your app is still one of the IP addresses that are listed in your app properties. If you route your traffic through a firewall or a NAT gateway, the source IP address originates from this service. App settings using Key Vault references attempt to get secrets over the public r > * Configuring SSL/TLS certificates from private Key Vaults is currently not supported. > * App Service Logs to private storage accounts is currently not supported. We recommend using Diagnostics Logging and allowing Trusted Services for the storage account. +### Routing app settings ++App Service has existing app settings to configure application and configuration routing. Site properties override the app settings if both exist. Site properties have the advantage of being auditable with Azure Policy and validated at the time of configuration. We recommend that you use site properties. ++You can still use the existing `WEBSITE_VNET_ROUTE_ALL` app setting to configure application routing. ++App settings also exist for some configuration routing options. These app settings are named `WEBSITE_CONTENTOVERVNET` and `WEBSITE_PULL_IMAGE_OVER_VNET`. + ### Network routing -You can use route tables to route outbound traffic from your app without restriction. Common destinations can include firewall devices or gateways. 
You can also use a [network security group](../virtual-network/network-security-groups-overview.md) (NSG) to block outbound traffic to resources in your virtual network or the internet. An NSG that you apply to your integration subnet is in effect regardless of any route tables applied to your integration subnet. Route tables and network security groups only apply to traffic routed through the virtual network integration. See [application routing](#application-routing) and [configuration routing](#configuration-routing) for details. Routes don't apply to replies from inbound app requests and inbound rules in an NSG don't apply to your app. Virtual network integration affects only outbound traffic from your app. To control inbound traffic to your app, use the [access restrictions](./overview-access-restrictions.md) feature or [private endpoints](./networking/private-endpoint.md). There are some limitations with using virtual network integration: * The app and the virtual network must be in the same region. * The integration virtual network can't have IPv6 address spaces defined. * The integration subnet can't have [service endpoint policies](../virtual-network/virtual-network-service-endpoint-policies-overview.md) enabled.-* The integration subnet can be used by only one App Service plan. +* Only one App Service plan virtual network integration connection per integration subnet is supported. * You can't delete a virtual network with an integrated app. Remove the integration before you delete the virtual network. * You can't have more than two virtual network integrations per Windows App Service plan. You can't have more than one virtual network integration per Linux App Service plan. Multiple apps in the same App Service plan can use the same virtual network integration. * You can't change the subscription of an app or a plan while there's an app that's using virtual network integration. 
The feature is easy to set up, but that doesn't mean your experience is problem ### Deleting the App Service plan or app before disconnecting the network integration -If you deleted the app or the App Service plan without disconnecting the virtual network integration first, you aren't able to do any update/delete operations on the virtual network or subnet that was used for the integration with the deleted resource. A subnet delegation 'Microsoft.Web/serverFarms' remains assigned to your subnet and prevents the update/delete operations. +If you deleted the app or the App Service plan without disconnecting the virtual network integration first, you aren't able to do any update/delete operations on the virtual network or subnet that was used for the integration with the deleted resource. A subnet delegation 'Microsoft.Web/serverFarms' remains assigned to your subnet and prevents the update and delete operations. To update or delete the subnet or virtual network again, you need to re-create the virtual network integration, and then disconnect it: 1. Re-create the App Service plan and app (it's mandatory to use the exact same web app name as before). |
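app-service | (editor's note) | The routing app settings diff above notes that site properties take precedence over the legacy app settings. Both approaches can be sketched with the Azure CLI; names are placeholders, and `vnetRouteAllEnabled` is the site property behind the outbound internet traffic setting: ```azurecli-interactive
# Preferred: set the site property (auditable with Azure Policy).
az resource update \
    --resource-group <resource-group> \
    --name <app-name> \
    --resource-type "Microsoft.Web/sites" \
    --set properties.vnetRouteAllEnabled=true

# Legacy alternative: the WEBSITE_VNET_ROUTE_ALL app setting.
az webapp config appsettings set \
    --resource-group <resource-group> \
    --name <app-name> \
    --settings WEBSITE_VNET_ROUTE_ALL=1
``` |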
app-service | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/policy-reference.md | Title: Built-in policy definitions for Azure App Service description: Lists Azure Policy built-in policy definitions for Azure App Service. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 01/30/2024 Last updated : 02/06/2024 |
app-service | Quickstart Dotnetcore Uiex | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/quickstart-dotnetcore-uiex.md | ms.assetid: b1e6bd58-48d1-4007-9d6c-53fd6db061e3 Last updated 11/23/2020 ms.devlang: csharp-+ zone_pivot_groups: app-service-platform-windows-linux |
app-service | Quickstart Dotnetcore | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/quickstart-dotnetcore.md | adobe-target-experience: Experience B adobe-target-content: ./quickstart-dotnetcore-uiex -+ ai-usage: ai-assisted |
app-service | Quickstart Java Uiex | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/quickstart-java-uiex.md | ms.assetid: 582bb3c2-164b-42f5-b081-95bfcb7a502a ms.devlang: java Last updated 08/01/2020-+ zone_pivot_groups: app-service-platform-windows-linux |
app-service | Quickstart Java | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/quickstart-java.md | ms.assetid: 582bb3c2-164b-42f5-b081-95bfcb7a502a ms.devlang: java Last updated 08/31/2023-+ zone_pivot_groups: app-service-java-hosting adobe-target: true adobe-target-activity: DocsExpΓÇô386541ΓÇôA/BΓÇôEnhanced-Readability-QuickstartsΓÇô2.19.2021 |
app-service | Quickstart Multi Container | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/quickstart-multi-container.md | |
app-service | Quickstart Nodejs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/quickstart-nodejs.md | |
app-service | Quickstart Python 1 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/quickstart-python-1.md | |
app-service | Samples Cli | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/samples-cli.md | tags: azure-service-management ms.assetid: 53e6a15a-370a-48df-8618-c6737e26acec Last updated 04/21/2022-+ keywords: azure cli samples, azure cli examples, azure cli code samples |
app-service | Cli Continuous Deployment Vsts | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/scripts/cli-continuous-deployment-vsts.md | ms.devlang: azurecli Last updated 04/15/2022 -+ # Create an App Service app with continuous deployment from an Azure DevOps repository using Azure CLI |
app-service | Cli Linux Acr Aspnetcore | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/scripts/cli-linux-acr-aspnetcore.md | ms.devlang: azurecli Last updated 04/25/2022 -+ # Create an ASP.NET Core app in a Docker container in App Service from Azure Container Registry |
app-service | Powershell Backup Delete | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/scripts/powershell-backup-delete.md | ms.assetid: ebcadb49-755d-4202-a5eb-f211827a9168 Last updated 10/30/2017 -+ # Delete a backup for a web app using Azure PowerShell |
app-service | Powershell Backup Onetime | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/scripts/powershell-backup-onetime.md | ms.assetid: fc755f82-ca3e-4532-b251-690b699324d6 Last updated 10/30/2017 -+ # Back up a web app using PowerShell |
app-service | Powershell Backup Restore Diff Sub | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/scripts/powershell-backup-restore-diff-sub.md | ms.assetid: a2a27d94-d378-4c17-a6a9-ae1e69dc4a72 Last updated 12/06/2022 -+ # Restore a web app from a backup in another subscription using PowerShell |
app-service | Powershell Backup Restore | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/scripts/powershell-backup-restore.md | ms.assetid: a2a27d94-d378-4c17-a6a9-ae1e69dc4a72 Last updated 12/06/2022 -+ # Restore a web app from a backup using Azure PowerShell |
app-service | Powershell Backup Scheduled | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/scripts/powershell-backup-scheduled.md | ms.assetid: a2a27d94-d378-4c17-a6a9-ae1e69dc4a72 Last updated 10/30/2017 -+ # Create a scheduled backup for a web app using PowerShell |
app-service | Powershell Configure Custom Domain | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/scripts/powershell-configure-custom-domain.md | ms.assetid: 356f5af9-f62e-411c-8b24-deba05214103 Last updated 12/06/2022 -+ # Assign a custom domain to a web app using PowerShell |
app-service | Powershell Configure Ssl Certificate | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/scripts/powershell-configure-ssl-certificate.md | tags: azure-service-management ms.assetid: 23e83b74-614a-49a0-bc08-7542120eeec5 Last updated 12/06/2022-+ # Bind a custom TLS/SSL certificate to a web app using PowerShell |
app-service | Powershell Scale Manual | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/scripts/powershell-scale-manual.md | ms.assetid: de5d4285-9c7d-4735-a695-288264047375 Last updated 12/06/2022 -+ # Scale a web app manually using PowerShell |
app-service | Troubleshoot Diagnostic Logs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/troubleshoot-diagnostic-logs.md | |
app-service | Troubleshoot Domain Ssl Certificates | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/troubleshoot-domain-ssl-certificates.md | tags: top-support-issue Last updated 03/01/2019 - # Troubleshoot domain and TLS/SSL certificate problems in Azure App Service |
app-service | Troubleshoot Dotnet Visual Studio | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/troubleshoot-dotnet-visual-studio.md | ms.assetid: def8e481-7803-4371-aa55-64025d116c97 ms.devlang: csharp Last updated 08/29/2016-+ |
app-service | Troubleshoot Http 502 Http 503 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/troubleshoot-http-502-http-503.md | keywords: 502 bad gateway, 503 service unavailable, error 503, error 502 ms.assetid: 51cd331a-a3fa-438f-90ef-385e755e50d5 Last updated 07/06/2016- - # Troubleshoot HTTP errors of "502 bad gateway" and "503 service unavailable" in Azure App Service "502 bad gateway" and "503 service unavailable" are common errors in your app hosted in [Azure App Service](./overview.md). This article helps you troubleshoot these errors. This is often the simplest way to recover from one-time issues. On the [Azure Po ![restart app to solve HTTP errors of 502 bad gateway and 503 service unavailable](./media/app-service-web-troubleshoot-HTTP-502-503/2-restart.png) You can also manage your app using Azure PowerShell. For more information, see-[Using Azure PowerShell with Azure Resource Manager](../azure-resource-manager/management/manage-resources-powershell.md). +[Using Azure PowerShell with Azure Resource Manager](../azure-resource-manager/management/manage-resources-powershell.md). |
app-service | Troubleshoot Performance Degradation | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/troubleshoot-performance-degradation.md | keywords: web app performance, slow app, app slow ms.assetid: b8783c10-3a4a-4dd6-af8c-856baafbdde5 Last updated 08/03/2016- - # Troubleshoot slow app performance issues in Azure App Service This article helps you troubleshoot slow app performance issues in [Azure App Service](./overview.md). |
app-service | Tutorial Auth Aad | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-auth-aad.md | Title: 'Tutorial: Authenticate users E2E' + Title: 'Tutorial: Authenticate users E2E' description: Learn how to use App Service authentication and authorization to secure your App Service apps end-to-end, including access to remote APIs. keywords: app service, azure app service, authN, authZ, secure, security, multi-tiered, azure active directory, azure ad |
app-service | Tutorial Connect App App Graph Javascript | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-connect-app-app-graph-javascript.md | Title: 'Tutorial: Authenticate users E2E to Azure' + Title: 'Tutorial: Authenticate users E2E to Azure' description: Learn how to use App Service authentication and authorization to secure your App Service apps end-to-end to a downstream Azure service. keywords: app service, azure app service, authN, authZ, secure, security, multi-tiered, azure active directory, azure ad |
app-service | Tutorial Connect Msi Azure Database | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-connect-msi-azure-database.md | ms.devlang: csharp # ms.devlang: csharp,java,javascript,python Last updated 04/12/2022-+ # Tutorial: Connect to Azure databases from App Service without secrets using a managed identity |
app-service | Tutorial Custom Container | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-custom-container.md | Last updated 11/29/2022 keywords: azure app service, web app, linux, windows, docker, container-+ zone_pivot_groups: app-service-containers-windows-linux |
app-service | Tutorial Dotnetcore Sqldb App | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-dotnetcore-sqldb-app.md | |
app-service | Tutorial Java Spring Cosmosdb | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-java-spring-cosmosdb.md | |
app-service | Tutorial Nodejs Mongodb App | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-nodejs-mongodb-app.md | ms.role: developer ms.devlang: javascript -+ # Deploy a Node.js + MongoDB web app to Azure |
app-service | Tutorial Php Mysql App | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-php-mysql-app.md | Title: 'Tutorial: PHP app with MySQL and Redis' + Title: 'Tutorial: PHP app with MySQL and Redis' description: Learn how to get a PHP app working in Azure, with connection to a MySQL database and a Redis cache in Azure. Laravel is used in the tutorial. ms.assetid: 14feb4f3-5095-496e-9a40-690e1414bd73 ms.devlang: php Last updated 06/30/2023-+ # Tutorial: Deploy a PHP, MySQL, and Redis app to Azure App Service |
app-service | Tutorial Python Postgresql App | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-python-postgresql-app.md | |
app-service | Web Sites Monitor | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/web-sites-monitor.md | |
app-service | Web Sites Traffic Manager | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/web-sites-traffic-manager.md | description: Find best practices for configuring Azure Traffic Manager when you ms.assetid: dabda633-e72f-4dd4-bf1c-6e945da456fd Last updated 02/25/2016- - # Controlling Azure App Service traffic with Azure Traffic Manager > [!NOTE] When using Azure Traffic Manager with Azure, keep in mind the following points: ## Next Steps For a conceptual and technical overview of Azure Traffic Manager, see [Traffic Manager Overview](../traffic-manager/traffic-manager-overview.md).-- |
app-service | Webjobs Create | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/webjobs-create.md | Last updated 7/30/2023 --#Customer intent: As a web developer, I want to leverage background tasks to keep my application running smoothly. adobe-target: true adobe-target-activity: DocsExp–386541–A/B–Enhanced-Readability-Quickstarts–2.19.2021 adobe-target-experience: Experience B adobe-target-content: ./webjobs-create-ieux+#Customer intent: As a web developer, I want to leverage background tasks to keep my application running smoothly. # Run background tasks with WebJobs in Azure App Service |
app-service | Webjobs Sdk How To | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/webjobs-sdk-how-to.md | description: Learn more about how to write code for the WebJobs SDK. Create even ms.devlang: csharp-+ Last updated 06/24/2021 |
attestation | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/attestation/policy-reference.md | Title: Built-in policy definitions for Azure Attestation description: Lists Azure Policy built-in policy definitions for Azure Attestation. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 01/30/2024 Last updated : 02/06/2024 |
automation | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/policy-reference.md | Title: Built-in policy definitions for Azure Automation description: Lists Azure Policy built-in policy definitions for Azure Automation. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 01/30/2024 Last updated : 02/06/2024 |
azure-app-configuration | Howto Integrate Azure Managed Service Identity | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/howto-integrate-azure-managed-service-identity.md | To set up a managed identity in the portal, you first create an application and 1. When prompted, answer **Yes** to turn on the system-assigned managed identity. - :::image type="content" source="./media/add-managed-identity-app-service.png" alt-text="Screenshot of how to add a managed identity in App Service."::: + :::image type="content" source="./media/managed-identity/add-managed-identity-app-service.png" alt-text="Screenshot of how to add a managed identity in App Service."::: ## Grant access to App Configuration The following steps describe how to assign the App Configuration Data Reader role to App Service. For detailed steps, see [Assign Azure roles using the Azure portal](../role-based-access-control/role-assignments-portal.md). -1. In the [Azure portal](https://portal.azure.com), select **All resources** and select the App Configuration store that you created in the [quickstart](../azure-app-configuration/quickstart-azure-functions-csharp.md). +1. In the [Azure portal](https://portal.azure.com), select the App Configuration store that you created in the [quickstart](../azure-app-configuration/quickstart-azure-functions-csharp.md). 1. Select **Access control (IAM)**. |
azure-app-configuration | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/policy-reference.md | Title: Built-in policy definitions for Azure App Configuration description: Lists Azure Policy built-in policy definitions for Azure App Configuration. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 01/30/2024 Last updated : 02/06/2024 |
azure-app-configuration | Quickstart Aspnet Core App | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/quickstart-aspnet-core-app.md | |
azure-arc | Configure Transparent Data Encryption Manually | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/configure-transparent-data-encryption-manually.md | |
azure-arc | Configure Transparent Data Encryption Sql Managed Instance | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/configure-transparent-data-encryption-sql-managed-instance.md | Similar to above, to restore the credentials, copy them into the container and r ## Related content [Transparent data encryption](/sql/relational-databases/security/encryption/transparent-data-encryption)- |
azure-arc | Limitations Managed Instance | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/limitations-managed-instance.md | description: Limitations of SQL Managed Instance enabled by Azure Arc - |
azure-arc | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/policy-reference.md | Title: Built-in policy definitions for Azure Arc-enabled Kubernetes description: Lists Azure Policy built-in policy definitions for Azure Arc-enabled Kubernetes. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 01/30/2024 Last updated : 02/06/2024 # |
azure-arc | Deliver Extended Security Updates | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/deliver-extended-security-updates.md | Azure policies can be specified to a targeted subscription or resource group for There are some scenarios in which you may be eligible to receive Extended Security Updates patches at no additional cost. Two of these scenarios supported by Azure Arc are (1) [Dev/Test (Visual Studio)](/azure/devtest/offer/overview-what-is-devtest-offer-visual-studio) and (2) [Disaster Recovery (Entitled benefit DR instances from Software Assurance](https://www.microsoft.com/en-us/licensing/licensing-programs/software-assurance-by-benefits) or subscription only). Both of these scenarios require that the customer is already using Windows Server 2012/R2 ESUs enabled by Azure Arc for billable, production machines. > [!WARNING]-> Don't create a Windows Server 2012/R2 ESU License for only Dev/Test or Disaster Recovery workloads. You can't provision an ESU License only for non-billable workloads. Moreover, you'll be billed fully for all of the cores provisioned with an ESU license. +> Don't create a Windows Server 2012/R2 ESU License for only Dev/Test or Disaster Recovery workloads. You shouldn't provision an ESU License only for non-billable workloads. Moreover, you'll be billed fully for all of the cores provisioned with an ESU license, and any dev/test cores on the license won't be billed as long as they're tagged accordingly based on the following qualifications. > To qualify for these scenarios, you must already have: -- **Billable ESU License.** You must already have provisioned and activated a WS2012 Arc ESU License intended to be linked to regular Azure Arc-enabled servers running in production environments (i.e., normally billed ESU scenarios). 
This license should be provisioned only for billable cores, not cores that are eligible for free Extended Security Updates.+- **Billable ESU License.** You must already have provisioned and activated a WS2012 Arc ESU License intended to be linked to regular Azure Arc-enabled servers running in production environments (i.e., normally billed ESU scenarios). This license should be provisioned only for billable cores, not cores that are eligible for free Extended Security Updates, for example, dev/test cores. - **Arc-enabled servers.** Onboarded your Windows Server 2012 and Windows Server 2012 R2 machines to Azure Arc-enabled servers for the purpose of Dev/Test with Visual Studio subscriptions or Disaster Recovery. This linking will not trigger a compliance violation or enforcement block, allow > Adding these tags to your license will NOT make the license free or reduce the number of license cores that are chargeable. These tags allow you to link your Azure machines to existing licenses that are already configured with payable cores without needing to create any new licenses or add additional cores to your free machines. **Example:**--You have 8 Windows Server 2012 R2 Standard instances, each with 8 physical cores. 6 of these Windows Server 2012 R2 Standard machines are for production, and 2 of these Windows Server 2012 R2 Standard machines are eligible for free ESUs through the Visual Studio Dev Test subscription. You should first provision and activate a regular ESU License for Windows Server 2012/R2 that's Standard edition and has 48 physical cores. You should link this regular, production ESU license to your 6 production servers. Next, you should use this existing license, not add any more cores or provision a separate license, and link this license to your 2 non-production Windows Server 2012 R2 standard machines. 
You should tag the license and the 2 non-production Windows Server 2012 R2 Standard machines with Name: "ESU Usage" and Value: "WS2012 VISUAL STUDIO DEV TEST". +- You have 8 Windows Server 2012 R2 Standard instances, each with 8 physical cores. Six of these Windows Server 2012 R2 Standard machines are for production, and 2 of these Windows Server 2012 R2 Standard machines are eligible for free ESUs through the Visual Studio Dev Test subscription. + - You should first provision and activate a regular ESU License for Windows Server 2012/R2 that's Standard edition and has 48 physical cores to cover the 6 production machines. You should link this regular, production ESU license to your 6 production servers. + - Next, you should reuse this existing license, don't add any more cores or provision a separate license, and link this license to your 2 non-production Windows Server 2012 R2 standard machines. You should tag the ESU license and the 2 non-production Windows Server 2012 R2 Standard machines with Name: "ESU Usage" and Value: "WS2012 VISUAL STUDIO DEV TEST". + - This will result in an ESU license for 48 cores, and you'll be billed for those 48 cores. You won't be charged for the additional 16 cores of the dev test servers that you added to this license, as long as the ESU license and the dev test server resources are tagged appropriately. > [!NOTE]-> You needed a regular production license to start with, and you'll be billed only for the production cores. You did not and should not provision non-production cores in your license. +> You needed a regular production license to start with, and you'll be billed only for the production cores. > ## Upgrading from Windows Server 2012/2012 R2 |
azure-arc | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/policy-reference.md | Title: Built-in policy definitions for Azure Arc-enabled servers description: Lists Azure Policy built-in policy definitions for Azure Arc-enabled servers (preview). These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 01/30/2024 Last updated : 02/06/2024 |
azure-cache-for-redis | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/policy-reference.md | Title: Built-in policy definitions for Azure Cache for Redis description: Lists Azure Policy built-in policy definitions for Azure Cache for Redis. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 01/30/2024 Last updated : 02/06/2024 |
azure-government | Azure Services In Fedramp Auditscope | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/compliance/azure-services-in-fedramp-auditscope.md | For current Azure Government regions and available services, see [Products avail This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and Power Platform cloud services in scope for FedRAMP High, DoD IL2, DoD IL4, DoD IL5, and DoD IL6 authorizations across Azure, Azure Government, and Azure Government Secret cloud environments. For other authorization details in Azure Government Secret and Azure Government Top Secret, contact your Microsoft account representative. ## Azure public services by audit scope-*Last updated: November 2023* +*Last updated: January 2024* ### Terminology used This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and | [Azure for Education](https://azureforeducation.microsoft.com/) | ✅ | ✅ | | [Azure Information Protection](/azure/information-protection/) | ✅ | ✅ | | [Azure Kubernetes Service (AKS)](../../aks/index.yml) | ✅ | ✅ |+| [Azure Managed Grafana](../../managed-grafana/index.yml) | ✅ | ✅ | | [Azure Marketplace portal](https://azuremarketplace.microsoft.com/) | ✅ | ✅ | | [Azure Maps](../../azure-maps/index.yml) | ✅ | ✅ | | [Azure Monitor](../../azure-monitor/index.yml) (incl. 
[Application Insights](../../azure-monitor/app/app-insights-overview.md), [Log Analytics](../../azure-monitor/logs/data-platform-logs.md), and [Application Change Analysis](../../azure-monitor/app/change-analysis.md)) | ✅ | ✅ | This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and | [Bot Service](/azure/bot-service/) | ✅ | ✅ | | [Cloud Services](../../cloud-services/index.yml) | ✅ | ✅ | | [Cloud Shell](../../cloud-shell/overview.md) | ✅ | ✅ |-| [Cognitive Search](../../search/index.yml) (formerly Azure Search) | ✅ | ✅ | +| [Azure AI Health Bot](/healthbot/) | ✅ | ✅ | +| [Azure AI Search](../../search/index.yml) (formerly Azure Cognitive Search) | ✅ | ✅ | | [Azure AI | [Azure AI | [Azure AI This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and | [Dedicated HSM](../../dedicated-hsm/index.yml) | ✅ | ✅ | | [DevTest Labs](../../devtest-labs/index.yml) | ✅ | ✅ | | [DNS](../../dns/index.yml) | ✅ | ✅ |-| [Dynamics 365 Chat (Omnichannel Engagement Hub)](/dynamics365/omnichannel/introduction-omnichannel) | ✅ | ✅ | +| [Omnichannel for Customer Service (Formerly Dynamics 365 Chat and Omnichannel Engagement Hub)](/dynamics365/omnichannel/introduction-omnichannel) | ✅ | ✅ | | [Dynamics 365 Commerce](/dynamics365/commerce/)| ✅ | ✅ | | [Dynamics 365 Customer Service](/dynamics365/customer-service/overview)| ✅ | ✅ | | [Dynamics 365 Field Service](/dynamics365/field-service/overview)| ✅ | ✅ | This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and | [Azure AI Document Intelligence](../../ai-services/document-intelligence/index.yml) | ✅ | ✅ | | [Front Door](../../frontdoor/index.yml) | ✅ | ✅ | | [Functions](../../azure-functions/index.yml) | ✅ | ✅ |-| [GitHub AE](https://docs.github.com/github-ae@latest/admin/overview/about-github-ae) | ✅ | ✅ | -| [Health Bot](/healthbot/) | ✅ | ✅ | | [HDInsight](../../hdinsight/index.yml) | ✅ | ✅ | | [HPC Cache](../../hpc-cache/index.yml) | ✅ | ✅ | | [Immersive 
Reader](../../ai-services/immersive-reader/index.yml) | ✅ | ✅ | This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and | [Managed Applications](../../azure-resource-manager/managed-applications/index.yml) | ✅ | ✅ | | [Media Services](/azure/media-services/) | ✅ | ✅ | | [Metrics Advisor](../../ai-services/metrics-advisor/index.yml) | ✅ | ✅ |-| [Microsoft Defender XDR](/microsoft-365/security/defender/) (formerly Microsoft Threat Protection) | ✅ | ✅ | | [Microsoft Azure Attestation](../../attestation/index.yml)| ✅ | ✅ | | [Microsoft Azure portal](https://azure.microsoft.com/features/azure-portal/)| ✅ | ✅ | | [Microsoft Defender for Cloud](../../defender-for-cloud/index.yml) (formerly Azure Security Center) | ✅ | ✅ | This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and | [Microsoft Defender for Identity](/defender-for-identity/) (formerly Azure Advanced Threat Protection) | ✅ | ✅ | | **Service** | **FedRAMP High** | **DoD IL2** | | [Microsoft Defender for IoT](../../defender-for-iot/index.yml) (formerly Azure Security for IoT) | ✅ | ✅ |-| [Microsoft Defender Vulnerability Management](../../defender-for-iot/index.yml) | ✅ | ✅ | +| [Microsoft Defender Vulnerability Management](/microsoft-365/security/defender-vulnerability-management/) | ✅ | ✅ | | [Microsoft Graph](/graph/) | ✅ | ✅ | | [Microsoft Intune](/mem/intune/) | ✅ | ✅ | | [Microsoft Purview](../../purview/index.yml) (incl. 
Data Map, Data Estate Insights, and governance portal) | ✅ | ✅ | This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and | [Site Recovery](../../site-recovery/index.yml) | ✅ | ✅ | | [SQL Database](/azure/azure-sql/database/sql-database-paas-overview) | ✅ | ✅ | | [SQL Managed Instance](/azure/azure-sql/managed-instance/sql-managed-instance-paas-overview) | ✅ | ✅ |-| [SQL Server Registry](/sql/sql-server/end-of-support/sql-server-extended-security-updates) | ✅ | ✅ | | [SQL Server Stretch Database](../../sql-server-stretch-database/index.yml) | ✅ | ✅ | | [Storage: Archive](../../storage/blobs/access-tiers-overview.md) | ✅ | ✅ | | [Storage: Blobs](../../storage/blobs/index.yml) (incl. [Azure Data Lake Storage Gen2](../../storage/blobs/data-lake-storage-introduction.md)) | ✅ | ✅ | This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and | [Azure Resource Manager](../../azure-resource-manager/management/index.yml) | ✅ | ✅ | ✅ | ✅ | ✅ | | [Azure Service Manager (RDFE)](/previous-versions/azure/ee460799(v=azure.100)) | ✅ | ✅ | ✅ | ✅ | ✅ | | [Azure Sign-up portal](https://signup.azure.com/) | ✅ | ✅ | ✅ | ✅ | |-| [Azure Stack Bridge](/azure-stack/operator/azure-stack-usage-reporting) | ✅ | ✅ | ✅ | ✅ | ✅ | +| [Azure Stack](/azure-stack/operator/azure-stack-usage-reporting) | ✅ | ✅ | ✅ | ✅ | ✅ | | [Azure Stack Edge](../../databox-online/index.yml) (formerly Data Box Edge) ***** | ✅ | ✅ | ✅ | ✅ | ✅ | | [Azure Stack HCI](/azure-stack/hci/) | ✅ | ✅ | ✅ | | | | [Azure Video Indexer](/azure/azure-video-indexer/) | ✅ | ✅ | ✅ | | | This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and | [Front Door](../../frontdoor/index.yml) | ✅ | ✅ | ✅ | ✅ | ✅ | | [Functions](../../azure-functions/index.yml) | ✅ | ✅ | ✅ | ✅ | ✅ | | **Service** | **FedRAMP High** | **DoD IL2** | **DoD IL4** | **DoD IL5** | **DoD IL6** |-| [GitHub AE](https://docs.github.com/en/github-ae@latest/admin/overview/about-github-ae) | ✅ | ✅ | ✅ | | 
| | [HDInsight](../../hdinsight/index.yml) | ✅ | ✅ | ✅ | ✅ | ✅ | | [HPC Cache](../../hpc-cache/index.yml) | ✅ | ✅ | ✅ | ✅ | | | [Import/Export](../../import-export/index.yml) | ✅ | ✅ | ✅ | ✅ | | This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and | [Machine Learning](../../machine-learning/index.yml) | ✅ | ✅ | ✅ | ✅ | | | [Managed Applications](../../azure-resource-manager/managed-applications/index.yml) | ✅ | ✅ | ✅ | ✅ | | | [Media Services](/azure/media-services/) | ✅ | ✅ | ✅ | ✅ | ✅ |-| [Microsoft Defender XDR](/microsoft-365/security/defender/) (formerly Microsoft Threat Protection) | ✅ | ✅ | ✅ | ✅ | | | [Microsoft Azure portal](../../azure-portal/index.yml) | ✅ | ✅ | ✅| ✅ | ✅ | | **Service** | **FedRAMP High** | **DoD IL2** | **DoD IL4** | **DoD IL5** | **DoD IL6** | | [Microsoft Azure Government portal](../documentation-government-get-started-connect-with-portal.md) | ✅ | ✅ | ✅| ✅ | | This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and | [Power BI](/power-bi/fundamentals/) | ✅ | ✅ | ✅ | ✅ | ✅ | | [Power BI Embedded](/power-bi/developer/embedded/) | ✅ | ✅ | ✅ | ✅ | | | [Power Data Integrator for Dataverse](/power-platform/admin/data-integrator) (formerly Dynamics 365 Integrator App) | ✅ | ✅ | ✅ | ✅ | |-| [Power Query Online](/power-query/) | ✅ | ✅ | ✅ | ✅ | ✅ | | [Power Virtual Agents](/power-virtual-agents/) | ✅ | ✅ | ✅ | | | | [Private Link](../../private-link/index.yml) | ✅ | ✅ | ✅ | ✅ | | | [Public IP](../../virtual-network/ip-services/public-ip-addresses.md) | ✅ | ✅ | ✅ | ✅ | | |
azure-monitor | Azure Monitor Agent Extension Versions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-extension-versions.md | We strongly recommend always updating to the latest version, or opting in to the ## Version details | Release Date | Release notes | Windows | Linux | |:|:|:|:|-| January 2024 |**Known Issues**<ul><li>The agent extension code size is beyond the deployment limit set by Arc, thus 1.29.5 will not install on Arc enabled servers. Fix is coming in 1.29.6.</li></ul>**Windows**<ul><li>Added support for Transport Layer Security 1.3</li><li>Reverted a change to enable multiple IIS subscriptions to use same filter. Feature will be redeployed once memory leak is fixed.</li><li>Improved ETW event throughput rate</li></ul>**Linux**<ul><li>Fix Error messages logged intended for mdsd.err went to mdsd.warn instead in 1.29.4 only. Likely error messages: "Exception while uploading to Gig-LA : ...", "Exception while uploading to ODS: ...", "Failed to upload to ODS: ..."</li><li>Fixed syslog timestamp parsing where an incorrect timezone offset might be applied</li></ul> | 1.23.0 | 1.29.5 | +| January 2024 |**Known Issues**<ul><li>The agent extension code size is beyond the deployment limit set by Arc, thus 1.29.5 will not install on Arc enabled servers. Fix is coming in 1.29.6.</li></ul>**Windows**<ul><li>Added support for Transport Layer Security 1.3</li><li>Reverted a change to enable multiple IIS subscriptions to use same filter. Feature will be redeployed once memory leak is fixed.</li><li>Improved ETW event throughput rate</li></ul>**Linux**<ul><li>Fix Error messages logged intended for mdsd.err went to mdsd.warn instead in 1.29.4 only. 
Likely error messages: "Exception while uploading to Gig-LA : ...", "Exception while uploading to ODS: ...", "Failed to upload to ODS: ..."</li><li>Syslog time zones incorrect: AMA now uses machine current time when AMA receives an event to populate the TimeGenerated field. The previous behavior parsed the time zone from the Syslog event which caused incorrect times if a device sent an event from a time zone different than the AMA collector machine.</li></ul> | 1.23.0 | 1.29.5 | | December 2023 |**Known Issues**<ul><li>The agent extension code size is beyond the deployment limit set by Arc, thus 1.29.4 will not install on Arc enabled servers. Fix is coming in 1.29.6.</li><li>Multiple IIS subscriptions causes a memory leak. feature reverted in 1.23.0.</ul>**Windows** <ul><li>Prevent CPU spikes by not using bookmark when resetting an Event Log subscription</li><li>Added missing fluentbit exe to AMA client setup for Custom Log support</li><li>Updated to latest AzureCredentialsManagementService and DsmsCredentialsManagement package</li><li>Update ME to v2.2023.1027.1417</li></ul>**Linux**<ul><li>Support for TLS V1.3</li><li>Support for nopri in Syslog</li><li>Ability to set disk quota from DCR Agent Settings</li><li>Add ARM64 Ubuntu 22 support</li><li>**Fixes**<ul><li>SysLog</li><ul><li>Parse syslog Palo Alto CEF with multiple space characters following the hostname</li><li>Fix an issue with incorrectly parsing messages containing two '\n' chars in a row</li><li>Improved support for non-RFC compliant devices</li><li>Support infoblox device messages containing both hostname and IP headers</li></ul><li>Fix AMA crash in RHEL 7.2</li><li>Remove dependency on "which" command</li><li>Fix port conflicts due to AMA using 13000 </li><li>Reliability and Performance improvements</li></ul></li></ul>| 1.22.0 | 1.29.4| | October 2023| **Windows** <ul><li>Minimize CPU spikes when resetting an Event Log subscription</li><li>Enable multiple IIS subscriptions to use same 
filter</li><li>Cleanup files and folders for inactive tenants in multi-tenant mode</li><li>AMA installer will not install unnecessary certs</li><li>AMA emits Telemetry table locally</li><li>Update Metric Extension to v2.2023.721.1630</li><li>Update AzureSecurityPack to v4.29.0.4</li><li>Update AzureWatson to v1.0.99</li></ul>**Linux**<ul><li> Add support for Process metrics counters for Log Analytics upload and Azure Monitor Metrics</li><li>Use rsyslog omfwd TCP for improved syslog reliability</li><li>Support Palo Alto CEF logs where hostname is followed by 2 spaces</li><li>Bug and reliability improvements</li></ul> |1.21.0|1.28.11| | September 2023| **Windows** <ul><li>Fix issue with high CPU usage due to excessive Windows Event Logs subscription reset</li><li>Reduce fluentbit resource usage by limiting tracked files older than 3 days and limiting logging to errors only</li><li>Fix race-condition where resource_id is unavailable when agent is restarted</li><li>Fix race-condition when vm-extension provision agent (aka GuestAgent) is issuing a disable-vm-extension command to AMA.</li><li>Update MetricExtension version to 2.2023.721.1630</li><li>Update Troubleshooter to v1.5.14 </li></ul>|1.20.0| None | |
azure-monitor | Azure Monitor Agent Manage | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-manage.md | There are built-in policy initiatives for Windows and Linux virtual machines, sc These initiatives comprise individual policies that: - (Optional) Create and assign built-in user-assigned managed identity, per subscription, per region. [Learn more](../../active-directory/managed-identities-azure-resources/how-to-assign-managed-identity-via-azure-policy.md#policy-definition-and-details).- - `Bring Your Own User-Assigned Identity`: If set to `true`, it creates the built-in user-assigned managed identity in the predefined resource group and assigns it to all machines that the policy is applied to. If set to `false`, you can instead use existing user-assigned identity that *you must assign* to the machines beforehand. + - `Bring Your Own User-Assigned Identity`: If set to `false`, it creates the built-in user-assigned managed identity in the predefined resource group and assigns it to all the machines that the policy is applied to. The location of the resource group can be configured in the `Built-In-Identity-RG Location` parameter. + If set to `true`, you can instead use an existing user-assigned identity that is automatically assigned to all the machines that the policy is applied to. - Install the Azure Monitor Agent extension on the machine, and configure it to use user-assigned identity as specified by the following parameters.- - `Bring Your Own User-Assigned Managed Identity`: If set to `false`, it configures the agent to use the built-in user-assigned managed identity created by the preceding policy. If set to `true`, it configures the agent to use an existing user-assigned identity that *you must assign* to the machines in scope beforehand. 
+ - `Bring Your Own User-Assigned Managed Identity`: If set to `false`, it configures the agent to use the built-in user-assigned managed identity created by the preceding policy. If set to `true`, it configures the agent to use an existing user-assigned identity. - `User-Assigned Managed Identity Name`: If you use your own identity (selected `true`), specify the name of the identity that's assigned to the machines. - `User-Assigned Managed Identity Resource Group`: If you use your own identity (selected `true`), specify the resource group where the identity exists. - `Additional Virtual Machine Images`: Pass additional VM image names that you want to apply the policy to, if not already included.+ - `Built-In-Identity-RG Location`: If you use the built-in user-assigned managed identity, specify the location where the identity and the resource group should be created. This parameter is only used when the `Bring Your Own User-Assigned Managed Identity` parameter is `false`. - Create and deploy the association to link the machine to the specified data collection rule. - `Data Collection Rule Resource Id`: The Azure Resource Manager resourceId of the rule you want to associate via this policy to all machines the policy is applied to. |
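As an illustration of how these parameters fit together, a parameter file for such a policy assignment might look like the following sketch. The parameter names here are assumptions derived from the display names above; check the actual policy definition for the exact internal parameter names before using them.

```json
{
  "bringYourOwnUserAssignedManagedIdentity": { "value": false },
  "builtInIdentityRGLocation": { "value": "eastus" },
  "dataCollectionRuleResourceId": {
    "value": "/subscriptions/<subscription-id>/resourceGroups/<rg-name>/providers/Microsoft.Insights/dataCollectionRules/<dcr-name>"
  }
}
```

With the bring-your-own parameter set to `false`, the initiative creates the built-in identity in the specified location and configures the agent to use it; the identity name and resource group parameters only matter when it's set to `true`.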
azure-monitor | Basic Logs Configure | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/basic-logs-configure.md | All custom tables created with or migrated to the [data collection rule (DCR)-ba | Managed Lustre | [AFSAuditLogs](/azure/azure-monitor/reference/tables/AFSAuditLogs) | | Managed NGINX | [NGXOperationLogs](/azure/azure-monitor/reference/tables/ngxoperationlogs) | | Media Services | [AMSLiveEventOperations](/azure/azure-monitor/reference/tables/AMSLiveEventOperations)<br>[AMSKeyDeliveryRequests](/azure/azure-monitor/reference/tables/AMSKeyDeliveryRequests)<br>[AMSMediaAccountHealth](/azure/azure-monitor/reference/tables/AMSMediaAccountHealth)<br>[AMSStreamingEndpointRequests](/azure/azure-monitor/reference/tables/AMSStreamingEndpointRequests) |+| Microsoft Graph | [MicrosoftGraphActivityLogs](/azure/azure-monitor/reference/tables/microsoftgraphactivitylogs) | | Monitor | [AzureMetricsV2](/azure/azure-monitor/reference/tables/AzureMetricsV2) | | Network Devices (Operator Nexus) | [MNFDeviceUpdates](/azure/azure-monitor/reference/tables/MNFDeviceUpdates)<br>[MNFSystemStateMessageUpdates](/azure/azure-monitor/reference/tables/MNFSystemStateMessageUpdates) | | Network Managers | [AVNMConnectivityConfigurationChange](/azure/azure-monitor/reference/tables/AVNMConnectivityConfigurationChange)<br>[AVNMIPAMPoolAllocationChange](/azure/azure-monitor/reference/tables/AVNMIPAMPoolAllocationChange) | |
azure-monitor | Customer Managed Keys | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/customer-managed-keys.md | Title: Azure Monitor customer-managed key description: Information and steps to configure Customer-managed key to encrypt data in your Log Analytics workspaces using an Azure Key Vault key. Previously updated : 01/04/2024 Last updated : 01/06/2024 Review [limitations and constraints](#limitationsandconstraints) before configur Azure Monitor ensures that all data and saved queries are encrypted at rest using Microsoft-managed keys (MMK). You can encrypt data using your own key in [Azure Key Vault](../../key-vault/general/overview.md), for control over the key lifecycle, and ability to revoke access to your data. Azure Monitor use of encryption is identical to the way [Azure Storage encryption](../../storage/common/storage-service-encryption.md#about-azure-storage-service-side-encryption) operates. -Customer-managed key is delivered on [dedicated clusters](./logs-dedicated-clusters.md) providing higher protection level and control. Data is encrypted twice, once at the service level using Microsoft-managed keys or Customer-managed keys, and once at the infrastructure level, using two different encryption algorithms and two different keys. [double encryption](../../storage/common/storage-service-encryption.md#doubly-encrypt-data-with-infrastructure-encryption) protects against a scenario where one of the encryption algorithms or keys may be compromised. Dedicated cluster also lets you protect data with [Lockbox](#customer-lockbox). +Customer-managed key is delivered on [dedicated clusters](./logs-dedicated-clusters.md) providing higher protection level and control. 
Data is encrypted in storage twice, once at the service level using Microsoft-managed keys or Customer-managed keys, and once at the infrastructure level, using two different [encryption algorithms](../../storage/common/storage-service-encryption.md#about-azure-storage-service-side-encryption) and two different keys. [Double encryption](../../storage/common/storage-service-encryption.md#doubly-encrypt-data-with-infrastructure-encryption) protects against a scenario where one of the encryption algorithms or keys may be compromised. A dedicated cluster also lets you protect data with [Lockbox](#customer-lockbox). Data ingested in the last 14 days or recently used in queries is kept in hot-cache (SSD-backed) for query efficiency. SSD data is encrypted with Microsoft keys regardless of customer-managed key configuration, but your control over SSD access adheres to [key revocation](#key-revocation) |
azure-monitor | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/policy-reference.md | Title: Built-in policy definitions for Azure Monitor description: Lists Azure Policy built-in policy definitions for Azure Monitor. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 01/30/2024 Last updated : 02/06/2024 |
azure-portal | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-portal/policy-reference.md | Title: Built-in policy definitions for Azure portal description: Lists Azure Policy built-in policy definitions for Azure portal. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 01/30/2024 Last updated : 02/06/2024 |
azure-resource-manager | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/custom-providers/policy-reference.md | Title: Built-in policy definitions for Azure Custom Resource Providers description: Lists Azure Policy built-in policy definitions for Azure Custom Resource Providers. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 01/30/2024 Last updated : 02/06/2024 |
azure-resource-manager | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/managed-applications/policy-reference.md | Title: Built-in policy definitions for Azure Managed Applications description: Lists Azure Policy built-in policy definitions for Azure Managed Applications. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 01/30/2024 Last updated : 02/06/2024 |
azure-resource-manager | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/policy-reference.md | Title: Built-in policy definitions for Azure Resource Manager description: Lists Azure Policy built-in policy definitions for Azure Resource Manager. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 01/30/2024 Last updated : 02/06/2024 |
azure-resource-manager | Tag Support | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/tag-support.md | Title: Tag support for resources description: Shows which Azure resource types support tags. Provides details for all Azure services. Previously updated : 01/03/2024 Last updated : 02/05/2024 # Tag support for Azure resources To get the same data as a file of comma-separated values, download [tag-support. > | flexibleServers | Yes | Yes | > | getPrivateDnsZoneSuffix | No | No | > | serverGroups | Yes | Yes |-> | serverGroupsv2 | Yes | Yes | +> | serverGroupsv2 | Yes | No | > | servers | Yes | Yes | > | servers / advisors | No | No | > | servers / keys | No | No | To get the same data as a file of comma-separated values, download [tag-support. > | servers / topQueryStatistics | No | No | > | servers / virtualNetworkRules | No | No | > | servers / waitStatistics | No | No |-> | serversv2 | Yes | Yes | +> | serversv2 | Yes | No | ## Microsoft.DelegatedNetwork |
azure-signalr | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/policy-reference.md | Title: Built-in policy definitions for Azure SignalR description: Lists Azure Policy built-in policy definitions for Azure SignalR. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 01/30/2024 Last updated : 02/06/2024 |
azure-signalr | Signalr Concept Authenticate Oauth | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/signalr-concept-authenticate-oauth.md | In this section, you implement a `Login` API that authenticates clients using th ### Update the Hub class -By default, web client connects to SignalR Service using an internal access token. This access token isn't associated with an authenticated identity. +By default, the web client connects to SignalR Service using an internal access token. This access token isn't associated with an authenticated identity. Basically, it's anonymous access. In this section, you turn on real authentication by adding the `Authorize` attribute to the hub class, and updating the hub methods to read the username from the authenticated user's claim. In this section, you turn on real authentication by adding the `Authorize` attri ![OAuth Complete hosted in Azure](media/signalr-concept-authenticate-oauth/signalr-oauth-complete-azure.png) - You prompt to authorize the chat app's access to your GitHub account. Select the **Authorize** button. + You're prompted to authorize the chat app's access to your GitHub account. Select the **Authorize** button. ![Authorize OAuth App](media/signalr-concept-authenticate-oauth/signalr-authorize-oauth-app.png) |
azure-signalr | Signalr Concept Disaster Recovery | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/signalr-concept-disaster-recovery.md | -Resiliency and disaster recovery is a common need for online systems. Azure SignalR Service already guarantees 99.9% availability, but it's still a regional service. -Your service instance is always running in one region and doesn't fail over to another region when there's a region-wide outage. +Resiliency and disaster recovery are a common need for online systems. Azure SignalR Service already provides 99.9% availability; however, it's still a regional service. +When there's a region-wide outage, your service instance doesn't fail over to another region because it always runs in a single region. For regional disaster recovery, we recommend the following two approaches: -- **Enable Geo-Replication** (Easy way). This feature will handle regional failover for you automatically. When enabled, there remains just one Azure SignalR instance and no code changes are introduced. Check [geo-replication](howto-enable-geo-replication.md) for details.-- **Utilize Multiple Endpoints in Service SDK**. Our service SDK provides a functionality to support multiple SignalR service instances and automatically switch to other instances when some of them aren't available. With this feature, you're able to recover when a disaster takes place, but you need to set up the right system topology by yourself. You learn how to do so **in this document**.+- **Enable Geo-Replication** (Easy way). This feature handles regional failover for you automatically. When enabled, there's only one Azure SignalR instance and no code changes are introduced. Check [geo-replication](howto-enable-geo-replication.md) for details. +- **Utilize Multiple Endpoints in Service SDK**. Our service SDK supports multiple SignalR service instances and automatically switches to other instances when some of them are unavailable. 
With this feature, you're able to recover when a disaster takes place, but you need to set up the right system topology by yourself. You learn how to do so **in this document**. ## High available architecture for SignalR service -In order to have cross region resiliency for SignalR service, you need to set up multiple service instances in different regions. So when one region is down, the others can be used as backup. +To ensure cross region resiliency for SignalR service, you need to set up multiple service instances in different regions. So when one region is down, the others can be used as backup. When app servers connect to multiple service instances, there are two roles, primary and secondary.-Primary is an instance who is taking online traffic and secondary is a fully functional but backup instance for primary. -In our SDK implementation, negotiate only returns primary endpoints so in normal case clients only connect to primary endpoints. +Primary is an instance responsible for receiving online traffic, while secondary serves as a fully functional fallback instance. +In our SDK implementation, negotiate only returns primary endpoints, so clients only connect to primary endpoints in normal cases. But when the primary instance is down, negotiate returns secondary endpoints so clients can still make connections. The primary instance and app server are connected through normal server connections, but the secondary instance and app server are connected through a special type of connection called a weak connection.-The main difference of a weak connection is that it doesn't accept client connection routing, because secondary instance is located in another region. Routing a client to another region isn't an optimal choice (increases latency). +One distinguishing characteristic of a weak connection is that it can't accept client connection routing, because the secondary instance is located in another region. 
Routing a client to another region isn't an optimal choice (increases latency). One service instance can have different roles when connecting to multiple app servers.-One typical setup for cross region scenario is to have two (or more) pairs of SignalR service instances and app servers. +One typical setup for the cross region scenario is to have two or more pairs of SignalR service instances and app servers. Inside each pair, the app server and SignalR service are located in the same region, and the SignalR service is connected to the app server in a primary role. Across pairs, the app servers and SignalR services are also connected, but SignalR becomes a secondary when connecting to a server in another region. With this topology, messages from one server can still be delivered to all clients as all app servers and SignalR service instances are interconnected.-But when a client is connected, it's always routed to the app server in the same region to achieve optimal network latency. +But when a client is connected, it's routed to the app server in the same region to achieve optimal network latency. -Below is a diagram that illustrates such topology: +The following diagram illustrates such a topology: ![Diagram shows two regions each with an app server and a SignalR service, where each server is associated with the SignalR service in its region as primary and with the service in the other region as secondary.](media/signalr-concept-disaster-recovery/topology.png) If you have multiple endpoints, you can set them in multiple config entries, eac Azure:SignalR:ConnectionString:<name>:<role> ``` -Here `<name>` is the name of the endpoint and `<role>` is its role (primary or secondary). +In the ConnectionString, `<name>` is the name of the endpoint and `<role>` is its role (primary or secondary). The name is optional, but it's useful if you want to further customize the routing behavior among multiple endpoints. 
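To make the config-entry format concrete, a two-pair topology like the one described above might be configured on the east app server with entries such as the following. The endpoint names and connection-string values here are placeholders for illustration only:

```
Azure:SignalR:ConnectionString:eastInstance:primary = "<connection-string-of-east-SignalR-instance>"
Azure:SignalR:ConnectionString:westInstance:secondary = "<connection-string-of-west-SignalR-instance>"
```

The west app server would mirror this, registering its local instance as primary and the east instance as secondary.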
#### Through code Follow the steps to trigger the failover: ## Next steps -In this article, you have learned how to configure your application to achieve resiliency for SignalR service. To understand more details about server/client connection and connection routing in SignalR service, you can read [this article](signalr-concept-internals.md) for SignalR service internals. +In this article, you learned how to configure your application to achieve resiliency for SignalR service. To understand more details about server/client connection and connection routing in SignalR service, you can read [this article](signalr-concept-internals.md) for SignalR service internals. For scaling scenarios such as sharding, which uses multiple instances together to handle a large number of connections, read [how to scale multiple instances](signalr-howto-scale-multi-instances.md). |
azure-signalr | Signalr Concept Event Grid Integration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/signalr-concept-event-grid-integration.md | Title: React to Azure SignalR Service events -description: Use Azure Event Grid to subscribe to Azure SignalR Service events. Other downstream services can be triggered by these events. +description: Use Azure Event Grid to subscribe to Azure SignalR Service events. These events can also trigger other downstream services. -Azure SignalR Service events are reliably sent to the Event Grid service which provides reliable delivery services to your applications through rich retry policies and dead-letter delivery. To learn more, see [Event Grid message delivery and retry](../event-grid/delivery-and-retry.md). +Azure SignalR Service events are reliably sent to the Event Grid service, which provides reliable delivery to your applications through rich retry policies and dead-letter delivery. To learn more, see [Event Grid message delivery and retry](../event-grid/delivery-and-retry.md). ![Event Grid Model](/azure/event-grid/media/overview/functional-model.png) ## Serverless state-Azure SignalR Service events are only active when client connections are in a serverless state. If a client does not route to a hub server, it goes into the serverless state. Classic mode only works when the hub that client connections connect to doesn't have a hub server. Serverless mode is recommended as a best practice. To learn more details about service mode, see [How to choose Service Mode](https://github.com/Azure/azure-signalr/blob/dev/docs/faq.md#what-is-the-meaning-of-service-mode-defaultserverlessclassic-how-can-i-choose). +Azure SignalR Service events are only active when client connections are in a serverless state. If a client doesn't route to a hub server, it goes into the serverless state. Classic mode only works when the hub that client connections connect to doesn't have a hub server. 
Serverless mode is recommended as a best practice. To learn more details about service mode, see [How to choose Service Mode](https://github.com/Azure/azure-signalr/blob/dev/docs/faq.md#what-is-the-meaning-of-service-mode-defaultserverlessclassic-how-can-i-choose). ## Available Azure SignalR Service events-Event grid uses [event subscriptions](../event-grid/concepts.md#event-subscriptions) to route event messages to subscribers. Azure SignalR Service event subscriptions support two types of events: +Event Grid uses [event subscriptions](../event-grid/concepts.md#event-subscriptions) to route event messages to subscribers. Azure SignalR Service event subscriptions support two types of events: |Event Name|Description| |-|--| Event grid uses [event subscriptions](../event-grid/concepts.md#event-subscripti |`Microsoft.SignalRService.ClientConnectionDisconnected`|Raised when a client connection is disconnected.| ## Event schema-Azure SignalR Service events contain all the information you need to respond to the changes in your data. You can identify an Azure SignalR Service event with the eventType property starts with "Microsoft.SignalRService". Additional information about the usage of Event Grid event properties is documented at [Event Grid event schema](../event-grid/event-schema.md). +Azure SignalR Service events contain all the information you need to respond to the changes in your data. You can identify an Azure SignalR Service event by checking that its eventType property starts with **Microsoft.SignalRService**. Additional information about the usage of Event Grid event properties is documented at [Event Grid event schema](../event-grid/event-schema.md). -Here is an example of a client connection connected event: +Here's an example of a client connection connected event: ```json [{ "topic": "/subscriptions/{subscription-id}/resourceGroups/signalr-rg/providers/Microsoft.SignalRService/SignalR/signalr-resource", |
azure-signalr | Signalr Concept Scale Aspnet Core | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/signalr-concept-scale-aspnet-core.md | -Currently, there are [two versions](/aspnet/core/signalr/version-differences) of SignalR you can use with your web applications: ASP.NET SignalR, and the new ASP.NET Core SignalR. ASP.NET Core SignalR is a rewrite of the previous version. As a result, ASP.NET Core SignalR isn't backward compatible with the earlier SignalR version. The APIs and behaviors are different. The Azure SignalR Service supports both versions. +SignalR is currently available in [two versions](/aspnet/core/signalr/version-differences) for use with web applications: +- ASP.NET SignalR +- new ASP.NET Core SignalR -With Azure SignalR Service, you have the ability to run your actual web application on multiple platforms (Windows, Linux, and macOS) while hosting with [Azure App Service](../app-service/overview.md), [IIS](/aspnet/core/host-and-deploy/iis/index), [Nginx](/aspnet/core/host-and-deploy/linux-nginx), [Apache](/aspnet/core/host-and-deploy/linux-apache), [Docker](/aspnet/core/host-and-deploy/docker/index). You can also use self-hosting in your own process. +ASP.NET Core SignalR is a rewrite of the previous version. As a result, ASP.NET Core SignalR isn't backward compatible with the earlier SignalR version. The APIs and behaviors are different. The Azure SignalR Service supports both versions. -If the goals for your application include: supporting the latest functionality for updating web clients with real-time content updates, running across multiple platforms (Azure, Windows, Linux, and macOS), and hosting in different environments, then the best choice could be using the Azure SignalR Service. 
+Azure SignalR Service lets you host your actual web application on multiple platforms (Windows, Linux, and macOS) while hosting with [Azure App Service](../app-service/overview.md), [IIS](/aspnet/core/host-and-deploy/iis/index), [Nginx](/aspnet/core/host-and-deploy/linux-nginx), [Apache](/aspnet/core/host-and-deploy/linux-apache), [Docker](/aspnet/core/host-and-deploy/docker/index). You can also use self-hosting in your own process. ++Azure SignalR Service is the best choice if the goals for your application include: +- supporting the latest functionality for updating web clients with real-time content updates +- running across multiple platforms (Azure, Windows, Linux, and macOS) +- hosting in different environments ## Why not deploy SignalR myself? One of the key reasons to use the Azure SignalR Service is simplicity. With Azur Also, WebSockets are typically the preferred technique to support real-time content updates. However, load balancing a large number of persistent WebSocket connections becomes a complicated problem to solve as you scale. Common solutions use: DNS load balancing, hardware load balancers, and software load balancing. Azure SignalR Service handles this problem for you. -For ASP.NET Core SignalR, another reason might be you have no requirements to actually host a web application at all. The logic of your web application might use [Serverless computing](https://azure.microsoft.com/overview/serverless-computing/). For example, maybe your code is only hosted and executed on demand with [Azure Functions](../azure-functions/index.yml) triggers. This scenario can be tricky because your code only runs on-demand and doesn't maintain long connections with clients. Azure SignalR Service can handle this situation since the service already manages connections for you. For more information, see [overview on how to use SignalR Service with Azure Functions](signalr-concept-azure-functions.md). 
Since ASP.NET SignalR uses a different protocol, such Serverless mode isn't supported for ASP.NET SignalR. +For ASP.NET Core SignalR, another reason might be you have no requirements to actually host a web application at all. The logic of your web application might use [Serverless computing](https://azure.microsoft.com/overview/serverless-computing/). For example, maybe your code is only hosted and executed on demand with [Azure Functions](../azure-functions/index.yml) triggers. This scenario can be challenging because your code only runs on-demand and doesn't maintain long connections with clients. Azure SignalR Service can handle this situation since the service already manages connections for you. For more information, see [overview on how to use SignalR Service with Azure Functions](signalr-concept-azure-functions.md). Since ASP.NET SignalR uses a different protocol, such Serverless mode isn't supported for ASP.NET SignalR. ## How does it scale? |
azure-signalr | Signalr Quickstart Azure Signalr Service Arm Template | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/signalr-quickstart-azure-signalr-service-arm-template.md | -This quickstart describes how to use an Azure Resource Manager template (ARM template) to create an Azure SignalR Service. You can deploy the Azure SignalR Service through the Azure portal, PowerShell, or CLI. +This quickstart walks you through the process of creating an Azure SignalR Service using an Azure Resource Manager (ARM) template. You can deploy the Azure SignalR Service through the Azure portal, PowerShell, or CLI. [!INCLUDE [About Azure Resource Manager](../../includes/resource-manager-quickstart-introduction.md)] -If your environment meets the prerequisites and you're familiar with using ARM templates, select the **Deploy to Azure** button. The template will open in the Azure portal once you sign in. +If your environment meets the prerequisites and you're familiar with using ARM templates, select the **Deploy to Azure** button. The template opens in the Azure portal once you sign in. 
[:::image type="content" source="../media/template-deployments/deploy-to-azure.svg" alt-text="Button to deploy Azure SignalR Service to Azure using an ARM template in the Azure portal.":::](https://portal.azure.com/#create/Microsoft.Template/uri/https%3a%2f%2fraw.githubusercontent.com%2fAzure%2fazure-quickstart-templates%2fmaster%2fquickstarts%2fmicrosoft.signalrservice%2fsignalr%2fazuredeploy.json) The template defines one Azure resource: # [Portal](#tab/azure-portal) -Select the following link to deploy the Azure SignalR Service using the ARM template in the Azure portal: +To deploy the Azure SignalR Service using the ARM template, select the following link in the Azure portal: [:::image type="content" source="../media/template-deployments/deploy-to-azure.svg" alt-text="Button to deploy Azure SignalR Service to Azure using the ARM template in the Azure portal.":::](https://portal.azure.com/#create/Microsoft.Template/uri/https%3a%2f%2fraw.githubusercontent.com%2fAzure%2fazure-quickstart-templates%2fmaster%2fquickstarts%2fmicrosoft.signalrservice%2fsignalr%2fazuredeploy.json) On the **Deploy an Azure SignalR Service** page: 3. If you created a new resource group, select a **Region** for the resource group. -4. If you want, enter a new **Name** and the **Location** (such as **eastus2**) of the Azure SignalR Service. If you don't specify a name, it's automatically generated. The location for the Azure SignalR Service can be the same as or different from the region of the resource group. If you don't specify a location, it's set to the same region as the resource group. +4. If you want, enter a new **Name** and the **Location** (for example, **eastus2**) of the Azure SignalR Service. If you don't specify a name, it's generated automatically. The Azure SignalR Service's location can be the same as or different from the region of the resource group. If you don't specify a location, it defaults to the same region as your resource group. -5. 
Choose the **Pricing Tier** (**Free_F1** or **Standard_S1**), enter the **Capacity** (number of SignalR units), and choose a **Service Mode** of **Default** (requires hub server), **Serverless** (doesn't allow any server connection), or **Classic** (routed to hub server only if hub has server connection). Then choose whether to **Enable Connectivity Logs** or **Enable Messaging Logs**. +5. Choose the **Pricing Tier** (**Free_F1** or **Standard_S1**), enter the **Capacity** (number of SignalR units), and choose a **Service Mode** of **Default** (requires hub server), **Serverless** (doesn't allow any server connection), or **Classic** (routed to hub server only if hub has server connection). Now, choose whether to **Enable Connectivity Logs** or **Enable Messaging Logs**. > [!NOTE] > For the **Free_F1** pricing tier, the capacity is limited to 1 unit. Follow these steps to see an overview of your new Azure SignalR Service: # [PowerShell](#tab/PowerShell) -Run the following interactive code to view details about your Azure SignalR Service. You'll have to enter the name of the new service and the resource group. +Run the following interactive code to view details about your Azure SignalR Service. You have to enter the name of the new service and the resource group. ```azurepowershell-interactive $serviceName = Read-Host -Prompt "Enter the name of your Azure SignalR Service" Read-Host "Press [ENTER] to continue" # [CLI](#tab/CLI) -Run the following interactive code to view details about your Azure SignalR Service. You'll have to enter the name of the new service and the resource group. +Run the following interactive code to view details about your Azure SignalR Service. You have to enter the name of the new service and the resource group. ```azurecli-interactive read -p "Enter the name of your Azure SignalR Service: " serviceName && |
azure-signalr | Signalr Quickstart Rest Api | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/signalr-quickstart-rest-api.md | -Azure SignalR Service provides [REST API](https://github.com/Azure/azure-signalr/blob/dev/docs/rest-api.md) to support server to client communication scenarios, such as broadcasting. You can choose any programming language that can make REST API call. You can post messages to all connected clients, a specific client by name, or a group of clients. +Azure SignalR Service provides a [REST API](https://github.com/Azure/azure-signalr/blob/dev/docs/rest-api.md) to support server-to-client communication scenarios such as broadcasting. You can choose any programming language that can make REST API calls. You can post messages to all connected clients, a specific client by name, or a group of clients. -In this quickstart, you will learn how to send messages from a command-line app to connected client apps in C#. +In this quickstart, you learn how to send messages from a command-line app to connected client apps in C#. ## Prerequisites Having issues? Try the [troubleshooting guide](signalr-howto-troubleshoot-guide. ## Clone the sample application -While the service is deploying, let's switch to prepare the code. Clone the [sample app from GitHub](https://github.com/aspnet/AzureSignalR-samples.git), set the SignalR Service connection string, and run the application locally. +While the service is being deployed, let's prepare the code. Clone the [sample app from GitHub](https://github.com/aspnet/AzureSignalR-samples.git), set the SignalR Service connection string, and run the application locally. 1. Open a git terminal window. Change to a folder where you want to clone the sample project. This sample is a console app showing the use of Azure SignalR Service. It provid
- Client Mode: connect to Azure SignalR Service and receive messages from server. -Also you can find how to generate an access token to authenticate with Azure SignalR Service. +You also learn how to generate an access token to authenticate with Azure SignalR Service. ### Build the executable file Having issues? Try the [troubleshooting guide](signalr-howto-troubleshoot-guide. ## Run the sample without publishing -You can also run the command below to start a server or client +You can also run the following command to start a server or client ```bash # Start a server You can start multiple clients with different client names. Having issues? Try the [troubleshooting guide](signalr-howto-troubleshoot-guide.md) or [let us know](https://aka.ms/asrs/qsapi). -## <a name="usage"> </a> Integration with third-party services +## <a name="usage"> </a> Integration with non-Microsoft services -The Azure SignalR service allows third-party services to integrate with the system. +The Azure SignalR service allows non-Microsoft services to integrate with the system. 
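Every call to these REST endpoints is authenticated with a JSON Web Token whose `aud` claim is the request URL, signed (HS256) with the access key from the service's connection string. As a minimal sketch of that token generation using only the Python standard library — the instance name, hub name, and access key below are placeholders, not values from this article:

```python
import base64
import hashlib
import hmac
import json
import time

def b64url(data: bytes) -> str:
    """Base64url-encode without padding, as JWT requires."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def signalr_rest_token(audience: str, access_key: str, ttl_seconds: int = 3600) -> str:
    """Build the HS256 JWT the SignalR REST API expects: `aud` is the request URL."""
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    payload = b64url(json.dumps({"aud": audience,
                                 "exp": int(time.time()) + ttl_seconds}).encode())
    signing_input = f"{header}.{payload}".encode()
    signature = b64url(hmac.new(access_key.encode(), signing_input,
                                hashlib.sha256).digest())
    return f"{header}.{payload}.{signature}"

# Placeholder instance name, hub name, and key -- substitute your own.
url = "https://<instance-name>.service.signalr.net/api/v1/hubs/chat"
token = signalr_rest_token(url, "<access-key>")
# POST to `url` with headers {"Authorization": f"Bearer {token}",
# "Content-Type": "application/json"} and a body such as
# {"target": "<method-name>", "arguments": [...]}
```

Any HTTP client can then send the request; only the token generation is SignalR-specific.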
### Definition of technical specifications Send to some users | **✓** (Deprecated) | `N / A` Version | API HTTP Method | Request URL | Request body | | | `1.0-preview` | `POST` | `https://<instance-name>.service.signalr.net:5002/api/v1-preview/hub/<hub-name>` | `{"target": "<method-name>", "arguments": [...]}`-`1.0` | `POST` | `https://<instance-name>.service.signalr.net/api/v1/hubs/<hub-name>` | Same as above +`1.0` | `POST` | `https://<instance-name>.service.signalr.net/api/v1/hubs/<hub-name>` | `{"target": "<method-name>", "arguments": [...]}` <a name="broadcast-group"> </a> ### Broadcast to a group Version | API HTTP Method | Request URL | Request body Version | API HTTP Method | Request URL | Request body | | | `1.0-preview` | `POST` | `https://<instance-name>.service.signalr.net:5002/api/v1-preview/hub/<hub-name>/group/<group-name>` | `{"target": "<method-name>", "arguments": [...]}`-`1.0` | `POST` | `https://<instance-name>.service.signalr.net/api/v1/hubs/<hub-name>/groups/<group-name>` | Same as above +`1.0` | `POST` | `https://<instance-name>.service.signalr.net/api/v1/hubs/<hub-name>/groups/<group-name>` | `{"target": "<method-name>", "arguments": [...]}` <a name="send-user"> </a> ### Sending to a user Version | API HTTP Method | Request URL | Request body Version | API HTTP Method | Request URL | Request body | | | `1.0-preview` | `POST` | `https://<instance-name>.service.signalr.net:5002/api/v1-preview/hub/<hub-name>/user/<user-id>` | `{"target": "<method-name>", "arguments": [...]}`-`1.0` | `POST` | `https://<instance-name>.service.signalr.net/api/v1/hubs/<hub-name>/users/<user-id>` | Same as above +`1.0` | `POST` | `https://<instance-name>.service.signalr.net/api/v1/hubs/<hub-name>/users/<user-id>` | `{"target": "<method-name>", "arguments": [...]}` <a name="add-user-to-group"> </a> ### Adding a user to a group |
azure-web-pubsub | Tutorial Build Chat | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/tutorial-build-chat.md | |
azure-web-pubsub | Tutorial Permission | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/tutorial-permission.md | |
azure-web-pubsub | Tutorial Subprotocol | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/tutorial-subprotocol.md | |
backup | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/policy-reference.md | Title: Built-in policy definitions for Azure Backup description: Lists Azure Policy built-in policy definitions for Azure Backup. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 01/30/2024 Last updated : 02/06/2024 |
batch | Batch Account Create Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/batch-account-create-portal.md | For detailed steps, see [Assign Azure roles by using the Azure portal](../role-b ### Create a key vault -User subscription mode requires [Azure Key Vault](/azure/key-vault/general/overview). The key vault must be in the same subscription and region as the Batch account and use a [Vault Access Policy](/azure/key-vault/general/assign-access-policy). +User subscription mode requires [Azure Key Vault](/azure/key-vault/general/overview). The key vault must be in the same subscription and region as the Batch account. To create a new key vault: 1. Search for and select **key vaults** from the Azure Search box, and then select **Create** on the **Key vaults** page. 1. On the **Create a key vault** page, enter a name for the key vault, and choose an existing resource group or create a new one in the same region as your Batch account.-1. On the **Access configuration** tab, select **Vault access policy** under **Permission model**. +1. On the **Access configuration** tab, select either **Azure role-based access control** or **Vault access policy** under **Permission model**. Under **Resource access**, select all three checkboxes: **Azure Virtual Machine for deployment**, **Azure Resource Manager for template deployment**, and **Azure Disk Encryption for volume encryption**. 1. Leave the remaining settings at default values, select **Review + create**, and then select **Create**. ### Create a Batch account in user subscription mode To create a Batch account in user subscription mode: ### Grant access to the key vault manually -You can also grant access to the key vault manually. +You can also grant access to the key vault manually in the [Azure portal](https://portal.azure.com). +#### If the Key Vault permission model is **Azure role-based access control**: +1. 
Select **Access control (IAM)** from the left navigation of the key vault page. +1. At the top of the **Access control (IAM)** page, select **Add** > **Add role assignment**. +1. On the **Add role assignment** screen, on the **Role** tab, under **Job function roles**, select either the **Key Vault Secrets Officer** or **Key Vault Administrator** role for the Batch account, and then select **Next**. +1. On the **Members** tab, select **Select members**. On the **Select members** screen, search for and select **Microsoft Azure Batch**, and then select **Select**. +1. Select **Review + assign** at the bottom to go to the **Review + assign** tab, and then select **Review + assign** again. ++For detailed steps, see [Assign Azure roles by using the Azure portal](../role-based-access-control/role-assignments-portal.md). ++#### If the Key Vault permission model is **Vault access policy**: 1. Select **Access policies** from the left navigation of the key vault page. 1. On the **Access policies** page, select **Create**. 1. On the **Create an access policy** screen, select a minimum of **Get**, **List**, **Set**, and **Delete** permissions under **Secret permissions**. For [key vaults with soft-delete enabled](/azure/key-vault/general/soft-delete-overview), also select **Recover**. |
batch | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/policy-reference.md | Title: Built-in policy definitions for Azure Batch description: Lists Azure Policy built-in policy definitions for Azure Batch. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 01/30/2024 Last updated : 02/06/2024 |
certification | How To Test Pnp | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/certification/how-to-test-pnp.md | This article shows you how to: The application code that runs on your IoT Plug and Play device must: - Connect to Azure IoT Hub using the [Device Provisioning Service (DPS)](../iot-dps/about-iot-dps.md).-- Follow the [IoT Plug an Play conventions](../iot-develop/concepts-developer-guide-device.md) to implement of telemetry, properties, and commands.+- Follow the [IoT Plug and Play conventions](../iot/concepts-developer-guide-device.md) to implement telemetry, properties, and commands. The application is software that's installed separately from the operating system or is bundled with the operating system in a firmware image that's flashed to the device. -Prior to certifying your device through the certification process for IoT Plug and Play, you will want to validate that the device implementation matches the telemetry, properties and commands defined in the [Digital Twins Definition Language (DTDL)](https://github.com/Azure/opendigitaltwins-dtdl) device model locally prior to submitting to the [Azure IoT Public Model Repository](../iot-develop/concepts-model-repository.md). +Prior to certifying your device through the certification process for IoT Plug and Play, you will want to validate locally that the device implementation matches the telemetry, properties, and commands defined in the [Digital Twins Definition Language (DTDL)](https://github.com/Azure/opendigitaltwins-dtdl) device model before submitting to the [Azure IoT Public Model Repository](../iot/concepts-model-repository.md). To meet the certification requirements, your device must: - Connect to Azure IoT Hub using the [DPS](../iot-dps/about-iot-dps.md). - Implement telemetry, properties, or commands following the IoT Plug and Play convention. 
- Describe the device interactions with a [DTDL v2](https://aka.ms/dtdl) model.-- Send the model ID during [DPS registration](../iot-develop/concepts-developer-guide-device.md#dps-payload) in the DPS provisioning payload.-- Announce the model ID during the [MQTT connection](../iot-develop/concepts-developer-guide-device.md#model-id-announcement).+- Send the model ID during [DPS registration](../iot/concepts-developer-guide-device.md#dps-payload) in the DPS provisioning payload. +- Announce the model ID during the [MQTT connection](../iot/concepts-developer-guide-device.md#model-id-announcement). ## Test with the Azure IoT Extension CLI |
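The model ID named in the last two requirements above shows up in two places: in the DPS registration payload, and URL-encoded in the MQTT CONNECT username. A small stdlib Python sketch of both strings — the hub hostname, device ID, model ID, and API version here are illustrative assumptions, not values from this article:

```python
import json
from urllib.parse import quote

# Assumed (illustrative) IoT Hub MQTT API version -- check the current docs.
HUB_API_VERSION = "2021-04-12"

def dps_registration_body(registration_id: str, model_id: str) -> str:
    """DPS registration request body carrying the DTDL model ID."""
    return json.dumps({"registrationId": registration_id,
                       "payload": {"modelId": model_id}})

def mqtt_username(hub_hostname: str, device_id: str, model_id: str) -> str:
    """IoT Hub MQTT CONNECT username that announces the model ID."""
    return (f"{hub_hostname}/{device_id}/?api-version={HUB_API_VERSION}"
            f"&model-id={quote(model_id, safe='')}")

print(mqtt_username("contoso.azure-devices.net", "thermostat-01",
                    "dtmi:com:example:Thermostat;1"))
```

The device SDKs build both of these for you when you pass a model ID to the client options; the sketch only makes visible what the certification checks look for on the wire.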
certification | How To Troubleshoot Pnp | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/certification/how-to-troubleshoot-pnp.md | While running the tests, if you receive a result of `Passed with warnings`, this ## When you need help with the model repository -For IoT Plug and Play issues related to the model repository, refer to [our Docs guidance about the device model repository](../iot-develop/concepts-model-repository.md). +For IoT Plug and Play issues related to the model repository, refer to [our Docs guidance about the device model repository](../iot/concepts-model-repository.md). ## Next steps |
certification | Program Requirements Pnp | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/certification/program-requirements-pnp.md | IoT Plug and Play enables solution builders to integrate smart devices with thei Promise of IoT Plug and Play certification are: 1. Defined device models and interfaces are compliant with the [Digital Twin Definition Language](https://github.com/Azure/opendigitaltwins-dtdl)-1. Easy integration with Azure IoT based solutions using the [Digital Twin APIs](../iot-develop/concepts-digital-twin.md) : Azure IoT Hub and Azure IoT Central +1. Easy integration with Azure IoT based solutions using the [Digital Twin APIs](../iot/concepts-digital-twin.md) : Azure IoT Hub and Azure IoT Central 1. Product truth validated through testing telemetry from end point to cloud using DTDL > [!Note] Promise of IoT Plug and Play certification are: | **OS** | Agnostic | | **Validation Type** | Automated | | **Validation** | The [portal workflow](https://certify.azure.com) validates: **1.** Model ID announcement and ensure the device is connected using either the MQTT or MQTT over WebSockets protocol **2.** Models are compliant with the DTDL v2 **3.** Telemetry, properties, and commands are properly implemented and interact between IoT Hub Digital Twin and Device Twin on the device |-| **Resources** | [Public Preview Refresh updates](../iot-develop/overview-iot-plug-and-play.md) | +| **Resources** | [Public Preview Refresh updates](../iot/overview-iot-plug-and-play.md) | **[Required] Device models are published in public model repository** Promise of IoT Plug and Play certification are: | **OS** | Agnostic | | **Validation Type** | Automated | | **Validation** | All device models are required to be published in public repository. Device models are resolved via models available in public repository **1.** User must manually publish the models to the public repository before submitting for the certification. 
**2.** Note that once the models are published, they are immutable. We strongly recommend publishing only when the models and embedded device code are finalized.*1 *1 User must contact Microsoft support to revoke the models once published to the model repository **3.** [Portal workflow](https://certify.azure.com) checks the existence of the models in the public repository when the device is connected to the certification service |-| **Resources** | [Model repository](../iot-develop/overview-iot-plug-and-play.md) | +| **Resources** | [Model repository](../iot/overview-iot-plug-and-play.md) | **[If implemented] Device info Interface: The purpose of test is to validate the device info interface is implemented properly in the device code** The promises of IoT Plug and Play certification are: | **OS** | Agnostic | | **Validation Type** | Automated | | **Validation** | [Portal workflow](https://certify.azure.com) validates the device code implements the device info interface **1.** Checks the values are emitted by the device code to IoT Hub **2.** Checks the interface is implemented in the DCM (this implementation will change in DTDL v2) **3.** Checks properties are not writable (read only) **4.** Checks the schema type is string and/or long and not null |-| **Resources** | [Microsoft defined interface](../iot-develop/overview-iot-plug-and-play.md) | +| **Resources** | [Microsoft defined interface](../iot/overview-iot-plug-and-play.md) | | **Azure Recommended** | N/A | **[If implemented] Cloud to device: The purpose of test is to make sure messages can be sent from cloud to devices** |
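For orientation on the DTDL v2 compliance these requirements refer to, a minimal interface looks like the following sketch (the `@id`, display name, and telemetry name are hypothetical, not from any published model):

```json
{
  "@context": "dtmi:dtdl:context;2",
  "@id": "dtmi:com:example:Thermostat;1",
  "@type": "Interface",
  "displayName": "Thermostat",
  "contents": [
    {
      "@type": ["Telemetry", "Temperature"],
      "name": "temperature",
      "schema": "double",
      "unit": "degreeCelsius"
    }
  ]
}
```

The `@id` is the model ID the device announces at connection time; the certification service resolves it against the public model repository and checks the emitted telemetry against the declared `contents`.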
communication-services | Migrating To Azure Communication Services Calling | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/tutorials/migrating-to-azure-communication-services-calling.md | +zone_pivot_groups: acs-plat-web-ios-android # Migration Guide from Twilio Video to Azure Communication Services -This article describes how to migrate an existing Twilio Video implementation to the [Azure Communication Services' Calling SDK](../concepts/voice-video-calling/calling-sdk-features.md) for WebJS. Both Twilio Video and Azure Communication Services' Calling SDK for WebJS are also cloud-based platforms that enable developers to add voice and video calling features to their web applications. +This article describes how to migrate an existing Twilio Video implementation to the [Azure Communication Services' Calling SDK](../concepts/voice-video-calling/calling-sdk-features.md). Both Twilio Video and Azure Communication Services' Calling SDK for WebJS are cloud-based platforms that enable developers to add voice and video calling features to their web applications. However, there are some key differences between them that may affect your choice of platform or require some changes to your existing code if you decide to migrate. In this article, we compare the main features and functions of both platforms and provide some guidance on how to migrate your existing Twilio Video implementation to Azure Communication Services' Calling SDK for WebJS. However, there are some key differences between them that may affect your choice If you're embarking on a new project from the ground up, see the [Quickstarts of the Calling SDK](../quickstarts/voice-video-calling/get-started-with-video-calling.md?pivots=platform-web). -**Prerequisites:** --1. **Azure Account:** Make sure that your Azure account is active. New users can create a free account at [Microsoft Azure](https://azure.microsoft.com/free/). -2. 
**Node.js 18:** Ensure Node.js 18 is installed on your system. Download from [Node.js](https://nodejs.org/en). -3. **Communication Services Resource:** Set up a [Communication Services Resource](../quickstarts/create-communication-resource.md?tabs=windows&pivots=platform-azp) via your Azure portal and note your connection string. -4. **Azure CLI:** Follow the instructions at [Install Azure CLI on Windows](/cli/azure/install-azure-cli-windows?tabs=azure-cli).. -5. **User Access Token:** Generate a user access token to instantiate the call client. You can create one using the Azure CLI as follows: -```console -az communication identity token issue --scope voip --connection-string "yourConnectionString" -``` --For more information, see [Use Azure CLI to Create and Manage Access Tokens](../quickstarts/identity/access-tokens.md?pivots=platform-azcli). --For Video Calling as a Teams user: --- You can also use Teams identity. To generate an access token for a Teams User, see [Manage teams identity](../quickstarts/manage-teams-identity.md?pivots=programming-language-javascript).-- Obtain the Teams thread ID for call operations using the [Graph Explorer](https://developer.microsoft.com/graph/graph-explorer). For information about creating a thread ID, see [Create chat - Microsoft Graph v1.0 > Example2: Create a group chat](/graph/api/chat-post?preserve-view=true&tabs=javascript&view=graph-rest-1.0#example-2-create-a-group-chat).--### UI library --The UI library simplifies the process of creating modern communication user interfaces using Azure Communication Services. It offers a collection of ready-to-use UI components that you can easily integrate into your application. --This open source prebuilt set of controls enables you to create aesthetically pleasing designs using [Fluent UI SDK](https://developer.microsoft.com/en-us/fluentui#/) components and develop high quality audio/video communication experiences. 
For more information, check out the [Azure Communications Services UI Library overview](../concepts/ui-library/ui-library-overview.md). The overview includes comprehensive information about both web and mobile platforms. ### Calling support Azure Communication Services offers various call types. The type of call you cho - **Group Calls** - Happens when three or more participants connect in a single call. Any combination of VoIP and PSTN-connected users can be on a group call. A one-to-one call can evolve into a group call by adding more participants to the call, and one of these participants can be a bot. - **Rooms Call** - A Room acts as a container that manages activity between end-users of Azure Communication Services. It provides application developers with enhanced control over who can join a call, when they can meet, and how they collaborate. For a more comprehensive understanding of Rooms, see the [Rooms overview](../concepts/rooms/room-concept.md). -## Installation --### Install the Azure Communication Services Calling SDK --Use the `npm install` command to install the Azure Communication Services Calling SDK for JavaScript. -```console -npm install @azure/communication-common npm install @azure/communication-calling -``` --### Remove the Twilio SDK from the project --You can remove the Twilio SDK from your project by uninstalling the package. -```console -npm uninstall twilio-video -``` --## Object Model --The following classes and interfaces handle some of the main features of the Azure Communication Services Calling SDK: --| **Name** | **Description** | -|--|-| -| CallClient | The main entry point to the Calling SDK. | -| AzureCommunicationTokenCredential | Implements the `CommunicationTokenCredential` interface, which is used to instantiate the CallAgent. | -| CallAgent | Start and manage calls. | -| Device Manager | Manage media devices. | -| Call | Represents a Call. 
| -| LocalVideoStream | Create a local video stream for a camera device on the local system. | -| RemoteParticipant | Represents a remote participant in the Call. | -| RemoteVideoStream | Represents a remote video stream from a Remote Participant. | -| LocalAudioStream | Represents a local audio stream for a local microphone device. | -| AudioOptions | Audio options, provided to a participant when making an outgoing call or joining a group call. | -| AudioIssue | Represents the end of call survey audio issues. Example responses might be `NoLocalAudio` - the other participants were unable to hear me, or `LowVolume` - the call audio volume was too low. | --When using ACS calling in a Teams call, there are a few differences: --- Instead of `CallAgent` - use `TeamsCallAgent` for starting and managing Teams calls.-- Instead of `Call` - use `TeamsCall` for representing a Teams Call.--## Initialize the Calling SDK (CallClient/CallAgent) --Using the `CallClient`, initialize a `CallAgent` instance. The `createCallAgent` method uses CommunicationTokenCredential as an argument. It accepts a [user access token](../quickstarts/identity/access-tokens.md?tabs=windows&pivots=programming-language-javascript). --### Device manager --#### Twilio --Twilio doesn't have a Device Manager analog. Tracks are created using the system's default device. To customize a device, obtain the desired source track via: -```javascript -navigator.mediaDevices.getUserMedia() -``` --And pass it to the track creation method. 
--#### Azure Communication Services -```javascript -const { CallClient } = require('@azure/communication-calling'); -const { AzureCommunicationTokenCredential } = require('@azure/communication-common'); --const userToken = '<USER_TOKEN>'; -const tokenCredential = new AzureCommunicationTokenCredential(userToken); --const callClient = new CallClient(); -const callAgent = await callClient.createCallAgent(tokenCredential, {displayName: 'optional user name'}); -``` --You can use the `getDeviceManager` method on the `CallClient` instance to access `deviceManager`. -```javascript -const deviceManager = await callClient.getDeviceManager(); --// Get a list of available video devices for use. -const localCameras = await deviceManager.getCameras(); --// Get a list of available microphone devices for use. -const localMicrophones = await deviceManager.getMicrophones(); --// Get a list of available speaker devices for use. -const localSpeakers = await deviceManager.getSpeakers(); -``` --### Get device permissions --#### Twilio --Twilio Video asks for device permissions on track creation. --#### Azure Communication Services --Prompt a user to grant camera and/or microphone permissions: -```javascript -const result = await deviceManager.askDevicePermission({audio: true, video: true}); -``` --The output returns with an object that indicates whether audio and video permissions were granted: -```javascript -console.log(result.audio); console.log(result.video); -``` --## Starting a call --### Twilio --```javascript -import * as TwilioVideo from 'twilio-video'; --const twilioVideo = TwilioVideo; -let twilioRoom; --twilioRoom = await twilioVideo.connect('token', { name: 'roomName', audio: false, video: false }); -``` --### Azure Communication Services --To create and start a call, use one of the `callAgent` APIs and provide a user that you created through the Communication Services identity SDK. --Call creation and start are synchronous. 
The `call` instance enables you to subscribe to call events. Subscribe to the `stateChanged` event for value changes. -```javascript -call.on('stateChanged', async () => { console.log(`Call state changed: ${call.state}`) }); -``` --#### 1:1 Call --To call another Azure Communication Services user, use the `startCall` method on `callAgent` and pass the recipient's `CommunicationUserIdentifier` that you [created with the Communication Services administration library](../quickstarts/identity/access-tokens.md). -```javascript -const userCallee = { communicationUserId: '<Azure_Communication_Services_USER_ID>' }; -const oneToOneCall = callAgent.startCall([userCallee]); -``` --#### Rooms Call --To join a `Room` call, you can instantiate a context object with the `roomId` property as the room identifier. To join the call, use the `join` method and pass the context instance. -```javascript -const context = { roomId: '<RoomId>' }; -const call = callAgent.join(context); -``` -A **Room** offers application developers better control over who can join a call, when they meet and how they collaborate. To learn more about **Rooms**, see the [Rooms overview](../concepts/rooms/room-concept.md), or see [Quickstart: Join a room call](../quickstarts/rooms/join-rooms-call.md). --#### Group Call --To start a new group call or join an ongoing group call, use the `join` method and pass an object with a `groupId` property. The `groupId` value must be a GUID. -```javascript -const context = { groupId: '<GUID>' }; -const call = callAgent.join(context); -``` --#### Teams call --Start a synchronous one-to-one or group call using the `startCall` API on `teamsCallAgent`. You can provide `MicrosoftTeamsUserIdentifier` or `PhoneNumberIdentifier` as a parameter to define the target of the call. The method returns the `TeamsCall` instance that allows you to subscribe to call events. 
-```javascript -const userCallee = { microsoftTeamsUserId: '<MICROSOFT_TEAMS_USER_ID>' }; -const oneToOneCall = teamsCallAgent.startCall(userCallee); -``` --## Accepting and joining a call --### Twilio --When using Twilio Video SDK, the Participant is created after joining the room, and it doesn't have any information about other rooms. --### Azure Communication Services --Azure Communication Services has the `CallAgent` instance, which emits an `incomingCall` event when the logged-in identity receives an incoming call. -```javascript -callAgent.on('incomingCall', async (call) => { - // Incoming call - }); -``` --The `incomingCall` event includes an `incomingCall` instance that you can accept or reject. --When starting, joining, or accepting a call with *video on*, if the specified video camera device is being used by another process or if it's disabled in the system, the call starts with *video off*, and returns a `cameraStartFailed: true` call diagnostic. --```javascript -const incomingCallHandler = async (args: { incomingCall: IncomingCall }) => { - const incomingCall = args.incomingCall; -- // Get incoming call ID - var incomingCallId = incomingCall.id; -- // Get information about this Call. - var callInfo = incomingCall.info; -- // Get information about caller - var callerInfo = incomingCall.callerInfo; - - // Accept the call - var call = await incomingCall.accept(); -- // Reject the call - incomingCall.reject(); -- // Subscribe to callEnded event and get the call end reason - incomingCall.on('callEnded', args => - { console.log(args.callEndReason); - }); -- // callEndReason is also a property of IncomingCall - var callEndReason = incomingCall.callEndReason; -}; --callAgentInstance.on('incomingCall', incomingCallHandler); --``` --After starting a call, joining a call, or accepting a call, you can also use the `callAgent` `callsUpdated` event to be notified of the new `Call` object and start subscribing to it. 
-```javascript -callAgent.on('callsUpdated', (event) => { - event.added.forEach((call) => { - // User joined call - }); - - event.removed.forEach((call) => { - // User left call - }); -}); -``` --For Azure Communication Services Teams implementation, see how to [Receive a Teams Incoming Call](../how-tos/cte-calling-sdk/manage-calls.md#receive-a-teams-incoming-call). --## Adding and removing participants to a call --### Twilio --Participants can't be added or removed from Twilio Room, they need to join the Room or disconnect from it themselves. --Local Participant in Twilio Room can be accessed this way: -```javascript -let localParticipant = twilioRoom.localParticipant; -``` --Remote Participants in Twilio Room are represented with a map that has unique Participant SID as a key: -```javascript -twilioRoom.participants; -``` --### Azure Communication Services --All remote participants are represented by `RemoteParticipant` type and available through `remoteParticipants` collection on a call instance. --The `remoteParticipants` collection returns a list of remote participants in a call: -```javascript -call.remoteParticipants; // [remoteParticipant, remoteParticipant....] -``` --**Add participant:** --To add a participant to a call, you can use `addParticipant`. Provide one of the Identifier types. It synchronously returns the `remoteParticipant` instance. --The `remoteParticipantsUpdated` event from Call is raised when a participant is successfully added to the call. -```javascript -const userIdentifier = { communicationUserId: '<Azure_Communication_Services_USER_ID>' }; -const remoteParticipant = call.addParticipant(userIdentifier); -``` --**Remove participant:** --To remove a participant from a call, use `removeParticipant`. You need to pass one of the Identifier types. This method resolves asynchronously after the participant is removed from the call. The participant is also removed from the `remoteParticipants` collection. 
-```javascript -const userIdentifier = { communicationUserId: '<Azure_Communication_Services_USER_ID>' }; -await call.removeParticipant(userIdentifier); --``` --Subscribe to the call's `remoteParticipantsUpdated` event to be notified when new participants are added to the call or removed from the call. --```javascript -call.on('remoteParticipantsUpdated', e => { - e.added.forEach(remoteParticipant => { - // Subscribe to new remote participants that are added to the call - }); - - e.removed.forEach(remoteParticipant => { - // Unsubscribe from participants that are removed from the call - }) --}); -``` --Subscribe to remote participant's `stateChanged` event for value changes. -```javascript -remoteParticipant.on('stateChanged', () => { - console.log(`Remote participants state changed: ${remoteParticipant.state}`) -}); -``` --## Video calling --## Starting and stopping video --### Twilio --```javascript -const videoTrack = await twilioVideo.createLocalVideoTrack({ constraints }); -const videoTrackPublication = await localParticipant.publishTrack(videoTrack, { options }); -``` --The camera is enabled by default. It can be disabled and enabled back if necessary: -```javascript -videoTrack.disable(); -``` -Or: -```javascript -videoTrack.enable(); -``` --If there's a later created video track, attach it locally: --```javascript -const videoElement = videoTrack.attach(); -const localVideoContainer = document.getElementById( localVideoContainerId ); -localVideoContainer.appendChild(videoElement); -``` --Twilio Tracks rely on default input devices and reflect the changes in defaults. To change an input device, you need to unpublish the previous Video Track: --```javascript -localParticipant.unpublishTrack(videoTrack); -``` --Then create a new Video Track with the correct constraints. --### Azure Communication Services -To start a video while on a call, you need to enumerate cameras using the `getCameras` method on the `deviceManager` object. 
Then create a new instance of `LocalVideoStream` with the desired camera and pass the `LocalVideoStream` object into the `startVideo` method of an existing call object: --```javascript -const deviceManager = await callClient.getDeviceManager(); -const cameras = await deviceManager.getCameras(); -const camera = cameras[0]; -const localVideoStream = new LocalVideoStream(camera); -await call.startVideo(localVideoStream); -``` --After you successfully start sending video, a `LocalVideoStream` instance of type `Video` is added to the `localVideoStreams` collection on a call instance. -```javascript -const localVideoStream = call.localVideoStreams.find( (stream) => { return stream.mediaStreamType === 'Video'} ); -``` --To stop local video while on a call, pass the `localVideoStream` instance that's being used for video: -```javascript -await call.stopVideo(localVideoStream); -``` --You can switch to a different camera device while video is being sent by calling `switchSource` on a `localVideoStream` instance: --```javascript -const deviceManager = await callClient.getDeviceManager(); -const cameras = await deviceManager.getCameras(); -const camera = cameras[1]; -localVideoStream.switchSource(camera); -``` --If the specified video device is being used by another process, or if it's disabled in the system: --- While in a call, if your video is off and you start video using `call.startVideo()`, this method returns a `SourceUnavailableError` and `cameraStartFailed` will be set to true.-- A call to the `localVideoStream.switchSource()` method causes `cameraStartFailed` to be set to true. 
See the [Call Diagnostics guide](../concepts/voice-video-calling/call-diagnostics.md) for more information about how to diagnose call-related issues.--To verify whether the local video is *on* or *off*, you can use the `isLocalVideoStarted` API, which returns true or false: -```javascript -call.isLocalVideoStarted; -``` --To listen for changes to the local video, you can subscribe and unsubscribe to the `isLocalVideoStartedChanged` event: --```javascript -// Subscribe to local video event -call.on('isLocalVideoStartedChanged', () => { - // Callback(); -}); -// Unsubscribe from local video event -call.off('isLocalVideoStartedChanged', () => { - // Callback(); -}); --``` --### Rendering a remote user's video --#### Twilio --As soon as a Remote Participant publishes a Video Track, it needs to be attached. The `trackSubscribed` event on Room or Remote Participant enables you to detect when the track can be attached: --```javascript -twilioRoom.on('participantConnected', (participant) => { - participant.on('trackSubscribed', (track) => { - const remoteVideoElement = track.attach(); - const remoteVideoContainer = document.getElementById(remoteVideoContainerId + participant.identity); - remoteVideoContainer.appendChild(remoteVideoElement); - }); -}); -``` --Or --```javascript -twilioRoom.on('trackSubscribed', (track, publication, participant) => { - const remoteVideoElement = track.attach(); - const remoteVideoContainer = document.getElementById(remoteVideoContainerId + participant.identity); - remoteVideoContainer.appendChild(remoteVideoElement); -}); -``` --#### Azure Communication Services --To list the video streams and screen sharing streams of remote participants, inspect the `videoStreams` collection: -```javascript -const remoteVideoStream: RemoteVideoStream = call.remoteParticipants[0].videoStreams[0]; -const streamType: MediaStreamType = remoteVideoStream.mediaStreamType; -``` --To render `RemoteVideoStream`, you need to subscribe to its 
`isAvailableChanged` event. If the `isAvailable` property changes to true, a remote participant is sending a stream. After that happens, create a new instance of `VideoStreamRenderer`, and then create a new `VideoStreamRendererView` instance by using the asynchronous `createView` method. You can then attach `view.target` to any UI element. --Whenever the availability of a remote stream changes, you can destroy the whole `VideoStreamRenderer` or a specific `VideoStreamRendererView`. If you decide to keep them, they display a blank video frame. --```javascript -// Reference to the HTML div where we would display a grid of all remote video streams from all participants. -let remoteVideosGallery = document.getElementById('remoteVideosGallery'); --subscribeToRemoteVideoStream = async (remoteVideoStream) => { - let renderer = new VideoStreamRenderer(remoteVideoStream); - let view; - let remoteVideoContainer = document.createElement('div'); - remoteVideoContainer.className = 'remote-video-container'; -- let loadingSpinner = document.createElement('div'); - // See the css example below for styling the loading spinner. - loadingSpinner.className = 'loading-spinner'; - remoteVideoStream.on('isReceivingChanged', () => { - try { - if (remoteVideoStream.isAvailable) { - const isReceiving = remoteVideoStream.isReceiving; - const isLoadingSpinnerActive = remoteVideoContainer.contains(loadingSpinner); - if (!isReceiving && !isLoadingSpinnerActive) { - remoteVideoContainer.appendChild(loadingSpinner); - } else if (isReceiving && isLoadingSpinnerActive) { - remoteVideoContainer.removeChild(loadingSpinner); - } - } - } catch (e) { - console.error(e); - } - }); -- const createView = async () => { - // Create a renderer view for the remote video stream. - view = await renderer.createView(); - // Attach the renderer view to the UI. 
- remoteVideoContainer.appendChild(view.target); - remoteVideosGallery.appendChild(remoteVideoContainer); - } -- // Remote participant has switched video on/off - remoteVideoStream.on('isAvailableChanged', async () => { - try { - if (remoteVideoStream.isAvailable) { - await createView(); - } else { - view.dispose(); - remoteVideosGallery.removeChild(remoteVideoContainer); - } - } catch (e) { - console.error(e); - } - }); -- // Remote participant has video on initially. - if (remoteVideoStream.isAvailable) { - try { - await createView(); - } catch (e) { - console.error(e); - } - } - - console.log(`Initial stream size: height: ${remoteVideoStream.size.height}, width: ${remoteVideoStream.size.width}`); - remoteVideoStream.on('sizeChanged', () => { - console.log(`Remote video stream size changed: new height: ${remoteVideoStream.size.height}, new width: ${remoteVideoStream.size.width}`); - }); -} -``` --Subscribe to the remote participant's `videoStreamsUpdated` event to be notified when the remote participant adds or removes video streams. 
--```javascript -remoteParticipant.on('videoStreamsUpdated', e => { - e.added.forEach(remoteVideoStream => { - // Subscribe to new remote participant's video streams - }); -- e.removed.forEach(remoteVideoStream => { - // Unsubscribe from remote participant's video streams - }); -}); -``` --### Virtual background --#### Twilio --To use Virtual Background, install the Twilio helper library: -```console -npm install @twilio/video-processors -``` --Create and load a new `Processor` instance: --```javascript -import { GaussianBlurBackgroundProcessor } from '@twilio/video-processors'; --const blurProcessor = new GaussianBlurBackgroundProcessor({ assetsPath: virtualBackgroundAssets }); --await blurProcessor.loadModel(); -``` --As soon as the model is loaded, you can add the background to the video track using the `addProcessor` method: -```javascript -videoTrack.addProcessor(blurProcessor, { inputFrameBufferType: 'video', outputFrameBufferContextType: 'webgl2' }); -``` --#### Azure Communication Services --Use the npm install command to install the [Azure Communication Services Effects SDK](../quickstarts/voice-video-calling/get-started-video-effects.md?pivots=platform-web) for JavaScript. 
-```console -npm install @azure/communication-calling-effects --save -``` --> [!NOTE] -> To use video effects with the Azure Communication Calling SDK, once you've created a LocalVideoStream, you need to get the VideoEffects feature API of the LocalVideoStream to start/stop video effects: --```javascript -import * as AzureCommunicationCallingSDK from '@azure/communication-calling'; --import { BackgroundBlurEffect, BackgroundReplacementEffect } from '@azure/communication-calling-effects'; --// Get the video effects feature API on the LocalVideoStream -// (here, localVideoStream is the LocalVideoStream object you created while setting up video calling) -const videoEffectsFeatureApi = localVideoStream.feature(AzureCommunicationCallingSDK.Features.VideoEffects); --// Subscribe to useful events -videoEffectsFeatureApi.on('effectsStarted', () => { - // Effects started -}); --videoEffectsFeatureApi.on('effectsStopped', () => { - // Effects stopped -}); --videoEffectsFeatureApi.on('effectsError', (error) => { - // Effects error -}); -``` --To blur the background: --```javascript -// Create the effect instance -const backgroundBlurEffect = new BackgroundBlurEffect(); --// Recommended: Check support -const backgroundBlurSupported = await backgroundBlurEffect.isSupported(); --if (backgroundBlurSupported) { - // Use the video effects feature API we created to start effects - await videoEffectsFeatureApi.startEffects(backgroundBlurEffect); -} -``` --For background replacement with an image, you need to provide the URL of the image you want as the background to this effect. Supported image formats are: PNG, JPG, JPEG, TIFF, and BMP. The supported aspect ratio is 16:9. 
--```javascript -const backgroundImage = 'https://linkToImageFile'; --// Create the effect instance -const backgroundReplacementEffect = new BackgroundReplacementEffect({ - backgroundImageUrl: backgroundImage -}); --// Recommended: Check support -const backgroundReplacementSupported = await backgroundReplacementEffect.isSupported(); --if (backgroundReplacementSupported) { - // Use the video effects feature API as before to start/stop effects - await videoEffectsFeatureApi.startEffects(backgroundReplacementEffect); -} -``` --Change the image for this effect by passing it to the `configure` method: -```javascript -const newBackgroundImage = 'https://linkToNewImageFile'; --await backgroundReplacementEffect.configure({ - backgroundImageUrl: newBackgroundImage -}); -``` --To switch effects, use the same method on the video effects feature API: --```javascript -// Switch to background blur -await videoEffectsFeatureApi.startEffects(backgroundBlurEffect); --// Switch to background replacement -await videoEffectsFeatureApi.startEffects(backgroundReplacementEffect); -``` --At any time, if you want to check which effects are active, use the `activeEffects` property. The `activeEffects` property returns an array with the names of the currently active effects and returns an empty array if no effects are active. -```javascript -// Using the video effects feature api -const currentActiveEffects = videoEffectsFeatureApi.activeEffects; -``` --To stop effects: -```javascript -await videoEffectsFeatureApi.stopEffects(); -``` ---## Audio --### Starting and stopping audio --#### Twilio --```javascript -const audioTrack = await twilioVideo.createLocalAudioTrack({ constraints }); -const audioTrackPublication = await localParticipant.publishTrack(audioTrack, { options }); -``` --The microphone is enabled by default. 
You can disable it and re-enable it as needed: -```javascript -audioTrack.disable(); -``` --Or -```javascript -audioTrack.enable(); -``` --Any created Audio Track should be attached by the Local Participant the same way as a Video Track: --```javascript -const audioElement = audioTrack.attach(); -const localAudioContainer = document.getElementById(localAudioContainerId); -localAudioContainer.appendChild(audioElement); -``` --And by the Remote Participant: --```javascript -twilioRoom.on('participantConnected', (participant) => { - participant.on('trackSubscribed', (track) => { - const remoteAudioElement = track.attach(); - const remoteAudioContainer = document.getElementById(remoteAudioContainerId + participant.identity); - remoteAudioContainer.appendChild(remoteAudioElement); - }); -}); -``` --Or: --```javascript -twilioRoom.on('trackSubscribed', (track, publication, participant) => { - const remoteAudioElement = track.attach(); - const remoteAudioContainer = document.getElementById(remoteAudioContainerId + participant.identity); - remoteAudioContainer.appendChild(remoteAudioElement); -}); --``` --It isn't possible to mute incoming audio in the Twilio Video SDK. --#### Azure Communication Services --```javascript -await call.startAudio(); -``` --To mute or unmute the local endpoint, you can use the mute and unmute asynchronous APIs: --```javascript -//mute local device (microphone / sent audio) -await call.mute(); --//unmute local device (microphone / sent audio) -await call.unmute(); -``` --Muting incoming audio sets the call volume to 0. To mute or unmute the incoming audio, use the `muteIncomingAudio` and `unmuteIncomingAudio` asynchronous APIs: --```javascript -//mute local device (speaker) -await call.muteIncomingAudio(); --//unmute local device (speaker) -await call.unmuteIncomingAudio(); --``` --### Detecting dominant speaker --#### Twilio --To detect the loudest Participant in the Room, use the Dominant Speaker API. 
You can enable it in the connection options when joining the Group Room with at least two participants: -```javascript -twilioRoom = await twilioVideo.connect('token', { - name: 'roomName', - audio: false, - video: false, - dominantSpeaker: true -}); -``` --When the loudest speaker in the Room changes, the `dominantSpeakerChanged` event is emitted: --```javascript -twilioRoom.on('dominantSpeakerChanged', (participant) => { - // Highlighting the loudest speaker -}); -``` --#### Azure Communication Services --Dominant speakers for a call are an extended feature of the core Call API. It enables you to obtain a list of the active speakers in the call. This is a ranked list, where the first element in the list represents the last active speaker on the call and so on. --To obtain the dominant speakers in a call, you first need to obtain the call dominant speakers feature API object: -```javascript -const callDominantSpeakersApi = call.feature(Features.CallDominantSpeakers); -``` --Next, you can obtain the list of the dominant speakers by calling `dominantSpeakers`. This has a type of `DominantSpeakersInfo`, which has the following members: --- `speakersList` contains the list of the ranked dominant speakers in the call. These are represented by their participant ID.-- `timestamp` is the latest update time for the dominant speakers in the call.-```javascript -let dominantSpeakers: DominantSpeakersInfo = callDominantSpeakersApi.dominantSpeakers; -``` --You can also subscribe to the `dominantSpeakersChanged` event to know when the dominant speakers list changes. 
---```javascript -const dominantSpeakersChangedHandler = () => { - // Get the most up-to-date list of dominant speakers - let dominantSpeakers = callDominantSpeakersApi.dominantSpeakers; -}; -callDominantSpeakersApi.on('dominantSpeakersChanged', dominantSpeakersChangedHandler); --``` --## Enabling screen sharing -### Twilio --To share the screen in Twilio Video, obtain the source track via `navigator.mediaDevices`: --Chromium-based browsers: -```javascript -const stream = await navigator.mediaDevices.getDisplayMedia({ - audio: false, - video: true - }); -const track = stream.getTracks()[0]; -``` --Firefox and Safari: -```javascript -const stream = await navigator.mediaDevices.getUserMedia({ mediaSource: 'screen' }); -const track = stream.getTracks()[0]; -``` --After you obtain the screen share track, you can publish and manage it the same way as a regular Video Track (see the "Video" section). --### Azure Communication Services --To start screen sharing while on a call, you can use the asynchronous API `startScreenSharing`: -```javascript -await call.startScreenSharing(); -``` --After you successfully start screen sharing, a `LocalVideoStream` instance of type `ScreenSharing` is created and added to the `localVideoStreams` collection on the call instance. 
--```javascript -const localVideoStream = call.localVideoStreams.find( (stream) => { return stream.mediaStreamType === 'ScreenSharing'} ); -``` --To stop screen sharing while on a call, you can use the asynchronous API `stopScreenSharing`: -```javascript -await call.stopScreenSharing(); -``` --To verify whether screen sharing is on or off, you can use the `isScreenSharingOn` API, which returns true or false: -```javascript -call.isScreenSharingOn; -``` --To listen for changes to the screen share, subscribe and unsubscribe to the `isScreenSharingOnChanged` event: --```javascript -// Subscribe to screen share event -call.on('isScreenSharingOnChanged', () => { - // Callback(); -}); -// Unsubscribe from screen share event -call.off('isScreenSharingOnChanged', () => { - // Callback(); -}); --``` --## Media quality statistics --### Twilio --To collect real-time media stats, use the `getStats` method. -```javascript -const stats = twilioRoom.getStats(); -``` --### Azure Communication Services --Media quality statistics is an extended feature of the core Call API. You first need to obtain the `mediaStatsFeature` API object: --```javascript -const mediaStatsFeature = call.feature(Features.MediaStats); -``` ---To receive the media statistics data, you can subscribe to the `sampleReported` event or the `summaryReported` event: --- The `sampleReported` event triggers every second. Suitable as a data source for UI display or your own data pipeline.-- The `summaryReported` event contains the aggregated values of the data over intervals. Useful when you just need a summary.--If you want control over the interval of the `summaryReported` event, you need to define `mediaStatsCollectorOptions` of type `MediaStatsCollectorOptions`. Otherwise, the SDK uses default values. 
-```javascript -const mediaStatsCollectorOptions: SDK.MediaStatsCollectorOptions = { - aggregationInterval: 10, - dataPointsPerAggregation: 6 -}; --const mediaStatsCollector = mediaStatsFeature.createCollector(mediaStatsCollectorOptions); --mediaStatsCollector.on('sampleReported', (sample) => { - console.log('media stats sample', sample); -}); --mediaStatsCollector.on('summaryReported', (summary) => { - console.log('media stats summary', summary); -}); -``` --If you don't need to use the media statistics collector, you can call the dispose method of `mediaStatsCollector`. --```javascript -mediaStatsCollector.dispose(); -``` ---You don't need to call the dispose method of `mediaStatsCollector` every time a call ends. The collectors are reclaimed internally when the call ends. --For more information, see [Media quality statistics](../concepts/voice-video-calling/media-quality-sdk.md?pivots=platform-web). --## Diagnostics --### Twilio --To test connectivity, Twilio offers the Preflight API. This is a test call performed to identify signaling and media connectivity issues. --An access token is required to launch the test: --```javascript -const preflightTest = twilioVideo.runPreflight(token); --// Emits when a particular call step completes -preflightTest.on('progress', (progress) => { - console.log(`Preflight progress: ${progress}`); -}); --// Emits if the test has failed and returns error and partial test results -preflightTest.on('failed', (error, report) => { - console.error(`Preflight error: ${error}`); - console.log(`Partial preflight test report: ${report}`); -}); --// Emits when the test has been completed successfully and returns the report -preflightTest.on('completed', (report) => { - console.log(`Preflight test report: ${report}`); -}); -``` --Another way to identify network issues during the call is by using the Network Quality API, which monitors a Participant's network and provides quality metrics. 
You can enable it in the connection options when a participant joins the Group Room: --```javascript -twilioRoom = await twilioVideo.connect('token', { - name: 'roomName', - audio: false, - video: false, - networkQuality: { - local: 3, // Local Participant's Network Quality verbosity - remote: 1 // Remote Participants' Network Quality verbosity - } -}); -``` --When the network quality for a Participant changes, a `networkQualityLevelChanged` event is generated: -```javascript -participant.on('networkQualityLevelChanged', (networkQualityLevel, networkQualityStats) => { - // Processing Network Quality stats -}); -``` --### Azure Communication Services -Azure Communication Services provides a feature called User Facing Diagnostics (UFD) that you can use to examine various properties of a call to identify the issue. User Facing Diagnostics events are raised by underlying issues (for example, a poor network or a muted microphone) that could give a user a poor call experience. --User-facing diagnostics is an extended feature of the core Call API and enables you to diagnose an active call. -```javascript -const userFacingDiagnostics = call.feature(Features.UserFacingDiagnostics); -``` --Subscribe to the `diagnosticChanged` event to monitor when any user-facing diagnostic changes: -```javascript -/** - * Each diagnostic has the following data: - * - diagnostic is the type of diagnostic, e.g. NetworkSendQuality, DeviceSpeakWhileMuted - * - value is DiagnosticQuality or DiagnosticFlag: - * - DiagnosticQuality = enum { Good = 1, Poor = 2, Bad = 3 }. - * - DiagnosticFlag = true | false. 
- * - valueType = 'DiagnosticQuality' | 'DiagnosticFlag' - */ -const diagnosticChangedListener = (diagnosticInfo: NetworkDiagnosticChangedEventArgs | MediaDiagnosticChangedEventArgs) => { - console.log(`Diagnostic changed: ` + - `Diagnostic: ${diagnosticInfo.diagnostic}` + - `Value: ${diagnosticInfo.value}` + - `Value type: ${diagnosticInfo.valueType}`); -- if (diagnosticInfo.valueType === 'DiagnosticQuality') { - if (diagnosticInfo.value === DiagnosticQuality.Bad) { - console.error(`${diagnosticInfo.diagnostic} is bad quality`); -- } else if (diagnosticInfo.value === DiagnosticQuality.Poor) { - console.error(`${diagnosticInfo.diagnostic} is poor quality`); - } -- } else if (diagnosticInfo.valueType === 'DiagnosticFlag') { - if (diagnosticInfo.value === true) { - console.error(`${diagnosticInfo.diagnostic}`); - } - } -}; --userFacingDiagnostics.network.on('diagnosticChanged', diagnosticChangedListener); -userFacingDiagnostics.media.on('diagnosticChanged', diagnosticChangedListener); -``` --To learn more about User Facing Diagnostics and the different diagnostic values available, see [User Facing Diagnostics](../concepts/voice-video-calling/user-facing-diagnostics.md?pivots=platform-web). --Azure Communication Services also provides a precall diagnostics API. To access the Pre-Call API, you need to initialize a `callClient` and provision an Azure Communication Services access token. Then you can access the `PreCallDiagnostics` feature and the `startTest` method. 
--```javascript -import { CallClient, Features } from "@azure/communication-calling"; -import { AzureCommunicationTokenCredential } from '@azure/communication-common'; --const callClient = new CallClient(); -const tokenCredential = new AzureCommunicationTokenCredential("INSERT ACCESS TOKEN"); -const preCallDiagnosticsResult = await callClient.feature(Features.PreCallDiagnostics).startTest(tokenCredential); -``` --The Pre-Call API returns a full diagnostic of the device including details like device permissions, availability and compatibility, call quality stats and in-call diagnostics. The results are returned as a `PreCallDiagnosticsResult` object. --```javascript -export declare type PreCallDiagnosticsResult = { - deviceAccess: Promise<DeviceAccess>; - deviceEnumeration: Promise<DeviceEnumeration>; - inCallDiagnostics: Promise<InCallDiagnostics>; - browserSupport?: Promise<DeviceCompatibility>; - id: string; - callMediaStatistics?: Promise<MediaStatsCallFeature>; -}; -``` --You can learn more about ensuring precall readiness in [Pre-Call diagnostics](../concepts/voice-video-calling/pre-call-diagnostics.md). --## Event listeners --### Twilio --```javascript -twilioRoom.on('participantConnected', (participant) => { - // Participant connected -}); --twilioRoom.on('participantDisconnected', (participant) => { - // Participant disconnected -}); --``` --### Azure Communication Services --Each object in the JavaScript Calling SDK has properties and collections. Their values change throughout the lifetime of the object. Use the `on()` method to subscribe to objects' events, and use the `off()` method to unsubscribe from objects' events. 
--**Properties** --- You must inspect their initial values, and subscribe to the `'\<property\>Changed'` event for future value updates.--**Collections** --- You must inspect their initial values, and subscribe to the `'\<collection\>Updated'` event for future value updates.-- The `'\<collection\>Updated'` event's payload has an `added` array that contains values that were added to the collection.-- The `'\<collection\>Updated'` event's payload also has a `removed` array that contains values that were removed from the collection.--## Leaving and ending sessions --### Twilio -```javascript -twilioVideo.disconnect(); -``` ---### Azure Communication Services -```javascript -call.hangUp(); --// Set the 'forEveryone' property to true to end the call for all participants -call.hangUp({ forEveryone: true }); --``` -## Cleaning Up -If you want to [clean up and remove a Communication Services subscription](../quickstarts/create-communication-resource.md?tabs=windows&pivots=platform-azp#clean-up-resources), you can delete the resource or resource group. |
confidential-computing | Choose Confidential Containers Offerings | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-computing/choose-confidential-containers-offerings.md | Your current setup and operational needs dictate the most relevant path through - **Memory Isolation**: VM level isolation with unique memory encryption key per VM. - **Programming model**: Zero to minimal changes for containerized applications. Support is limited to containers that are Linux based (containers using a Linux base image for the container). -You can find more information on [Getting started with CVM worker nodes with a lift and shift workload to CVM node pool.](../aks/use-cvm.md) +You can find more information on [Getting started with CVM worker nodes with a lift and shift workload to CVM node pool.](../aks/use-cvm.md). ### Confidential Containers on AKS You can find more information on [Getting started with CVM worker nodes with a l - **Programming model**: Zero to minimal changes for containerized applications (containers using a Linux base image for the container). - **Ideal Workloads**: Applications with sensitive data processing, multi-party computations, and regulatory compliance requirements. -You can find more information on [Getting started with CVM worker nodes with a lift and shift workload to CVM node pool.](../aks/use-cvm.md) +You can find more information at [Confidential Containers with Azure Kubernetes Service](../aks/confidential-containers-overview.md). ### Confidential Computing Nodes with Intel SGX |
confidential-computing | Confidential Containers On Aks Preview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-computing/confidential-containers-on-aks-preview.md | In alignment with the guidelines set by the [Confidential Computing Consortium]( * Code integrity: Runtime enforcement is always available through customer defined policies for containers and container configuration, such as immutable policies and container signing. * Isolation from operator: Security designs that assume least privilege and highest isolation shielding from all untrusted parties including customer/tenant admins. It includes hardening existing Kubernetes control plane access (kubelet) to confidential pods. -But with these features of confidentiality, the product maintains its ease of use: it supports all unmodified Linux containers with high Kubernetes feature conformance. Additionally, it supports heterogenous node pools (GPU, general-purpose nodes) in a single cluster to optimize for cost. +But with these features of confidentiality, the product should additionally maintain its ease of use: it supports all unmodified Linux containers with high Kubernetes feature conformance. It also supports heterogeneous node pools (GPU, general-purpose nodes) in a single cluster to optimize for cost. ## What forms Confidential Containers on AKS? |
connectors | Enable Stateful Affinity Built In Connectors | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/enable-stateful-affinity-built-in-connectors.md | To run these connector operations in stateful mode, you must enable this capabil After you enable virtual network integration for your logic app, you must update your logic app's underlying website configuration (**<*logic-app-name*>.azurewebsites.net**) by using one of the following methods: +- [Azure portal](#azure-portal) (bearer token not required) - [Azure Resource Management API](#azure-resource-management-api) (bearer token required) - [Azure PowerShell](#azure-powershell) (bearer token *not* required) +### Azure portal ++To configure virtual network private ports using the Azure portal, follow these steps: ++1. In the [Azure portal](https://portal.azure.com), find and open your Standard logic app resource. +1. On the logic app menu, under **Settings**, select **Configuration**. +1. On the **Configuration** page, select **General settings**. +1. Under **Platform settings**, in the **VNet Private Ports** box, enter the ports that you want to use. + ### Azure Resource Management API To complete this task with the [Azure Resource Management API - Update By Id](/rest/api/resources/resources/update-by-id), review the following requirements, syntax, and parameter values. |
container-apps | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/policy-reference.md | Title: Built-in policy definitions for Azure Container Apps description: Lists Azure Policy built-in policy definitions for Azure Container Apps. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 01/30/2024 Last updated : 02/06/2024 |
container-instances | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-instances/policy-reference.md | |
container-registry | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/policy-reference.md | Title: Built-in policy definitions for Azure Container Registry description: Lists Azure Policy built-in policy definitions for Azure Container Registry. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 01/30/2024 Last updated : 02/06/2024 |
cosmos-db | Custom Partitioning Analytical Store | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/custom-partitioning-analytical-store.md | You could use one or more partition keys for your analytical data. If you are us * Currently partitioned store can only point to the primary storage account associated with the Synapse workspace. Selecting custom storage accounts isn't supported at this point. -* Custom partitioning is only available for API for NoSQL in Azure Cosmos DB. API for MongoDB, Gremlin and Cassandra aren't supported at this time. +* Custom partitioning is only available for API for NoSQL in Azure Cosmos DB. API for MongoDB, Gremlin and Cassandra are in preview at this time. ## Pricing |
cosmos-db | Merge | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/merge.md | $parameters = @{ Invoke-AzCosmosDBSqlContainerMerge @parameters ``` -For **shared-throughput databases**, use `Invoke-AzCosmosDBSqlDatabaseMerge` with the `-WhatIf` parameter to preview the merge without actually performing the operation. ----```azurepowershell-interactive -$parameters = @{ - ResourceGroupName = "<resource-group-name>" - AccountName = "<cosmos-account-name>" - Name = "<cosmos-database-name>" - WhatIf = $true -} -Invoke-AzCosmosDBSqlDatabaseMerge @parameters -``` --Start the merge by running the same command without the `-WhatIf` parameter. ----```azurepowershell-interactive -$parameters = @{ - ResourceGroupName = "<resource-group-name>" - AccountName = "<cosmos-account-name>" - Name = "<cosmos-database-name>" -} -Invoke-AzCosmosDBSqlDatabaseMerge @parameters --``` - #### [API for NoSQL](#tab/nosql/azure-cli) For **provisioned throughput** containers, start the merge by using [`az cosmosdb sql container merge`](/cli/azure/cosmosdb/sql/container#az-cosmosdb-sql-container-merge). $parameters = @{ Invoke-AzCosmosDBMongoDBCollectionMerge @parameters ``` -For **shared-throughput** databases, use `Invoke-AzCosmosDBMongoDBDatabaseMerge` with the `-WhatIf` parameter to preview the merge without actually performing the operation. ----```azurepowershell-interactive -$parameters = @{ - ResourceGroupName = "<resource-group-name>" - AccountName = "<cosmos-account-name>" - Name = "<cosmos-database-name>" - WhatIf = $true -} -Invoke-AzCosmosDBMongoDBDatabaseMerge @parameters -``` --Start the merge by running the same command without the `-WhatIf` parameter. 
----```azurepowershell-interactive -$parameters = @{ - ResourceGroupName = "<resource-group-name>" - AccountName = "<cosmos-account-name>" - Name = "<cosmos-database-name>" -} -Invoke-AzCosmosDBMongoDBDatabaseMerge @parameters -``` - #### [API for MongoDB](#tab/mongodb/azure-cli) For **provisioned containers**, start the merge by using [`az cosmosdb mongodb collection merge`](/cli/azure/cosmosdb/mongodb/collection#az-cosmosdb-mongodb-collection-merge). |
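The CLI path above starts the merge with [`az cosmosdb mongodb collection merge`](/cli/azure/cosmosdb/mongodb/collection#az-cosmosdb-mongodb-collection-merge). As a minimal sketch, only the command name comes from the linked reference; the long-form flag names below are assumptions following common `az` conventions, so verify them against the CLI docs before relying on them:

```python
# Hypothetical helper: assemble the argv for the merge command named above.
# Flag names (--resource-group, --account-name, --database-name, --name) are
# assumed, not confirmed; check the az CLI reference before use.
def build_merge_command(resource_group, account, database, collection):
    """Return the argv list for merging a provisioned-throughput collection."""
    return [
        "az", "cosmosdb", "mongodb", "collection", "merge",
        "--resource-group", resource_group,
        "--account-name", account,
        "--database-name", database,
        "--name", collection,
    ]

cmd = build_merge_command("<resource-group-name>", "<cosmos-account-name>",
                          "<cosmos-database-name>", "<cosmos-collection-name>")
print(" ".join(cmd))
```

The same pattern applies to `az cosmosdb sql container merge` for the API for NoSQL tab.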
cosmos-db | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/policy-reference.md | Title: Built-in policy definitions for Azure Cosmos DB description: Lists Azure Policy built-in policy definitions for Azure Cosmos DB. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 01/30/2024 Last updated : 02/06/2024 |
data-factory | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/policy-reference.md | |
data-factory | Whats New Archive | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/whats-new-archive.md | This archive page retains updates from older months. Check out our [What's New video archive](https://www.youtube.com/playlist?list=PLt4mCx89QIGS1rQlNt2-7iuHHAKSomVLv) for all of our monthly updates. +## May 2023 ++### Data Factory in Microsoft Fabric ++[Data factory in Microsoft Fabric](/fabric/data-factory/) provides cloud-scale data movement and data transformation services that allow you to solve the most complex data factory and ETL scenarios. It's intended to make your data factory experience easy to use, powerful, and truly enterprise-grade. + ## April 2023 ### Data flow |
data-factory | Whats New | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/whats-new.md | This page is updated monthly, so revisit it regularly. For older months' update Check out our [What's New video archive](https://www.youtube.com/playlist?list=PLt4mCx89QIGS1rQlNt2-7iuHHAKSomVLv) for all of our monthly update videos. +## January 2024 ++### Data movement ++- The new Salesforce connector now supports OAuth authentication on Bulk API 2.0 for both source and sink. [Learn more](connector-salesforce.md) +- The new Salesforce Service Cloud connector now supports OAuth authentication on Bulk API 2.0 for both source and sink. [Learn more](connector-salesforce-service-cloud.md) +- The Google Ads connector now supports upgrading to the newer driver version with the native Google Ads Query Language (GAQL). [Learn more](connector-google-adwords.md#upgrade-the-google-ads-driver-version) ++### Region expansion ++Azure Data Factory is now available in Israel Central and Italy North. You can co-locate your ETL workflow in these new regions if you are utilizing them for storing and managing your modern data warehouse. [Learn more](https://techcommunity.microsoft.com/t5/azure-data-factory-blog/continued-region-expansion-azure-data-factory-is-generally/ba-p/4029391) + ## November 2023 ### Continuous integration and continuous deployment General Availability of Time to Live (TTL) for Managed Virtual Network [Learn mo Azure Data Factory is generally available in Poland Central [Learn more](https://techcommunity.microsoft.com/t5/azure-data-factory-blog/continued-region-expansion-azure-data-factory-is-generally/ba-p/3965769) - ## September 2023 ### Pipelines The Amazon S3 connector is now supported as a sink destination using Mapping Dat We introduce optional Source settings for DelimitedText and JSON sources in top-level CDC resource. 
The top-level CDC resource in data factory now supports optional source configurations for Delimited and JSON sources. You can now select the column/row delimiters for delimited sources and set the document type for JSON sources. [Learn more](https://techcommunity.microsoft.com/t5/azure-data-factory-blog/introducing-optional-source-settings-for-delimitedtext-and-json/ba-p/3824274) -## May 2023 --### Data Factory in Microsoft Fabric --[Data factory in Microsoft Fabric](/fabric/data-factory/) provides cloud-scale data movement and data transformation services that allow you to solve the most complex data factory and ETL scenarios. It's intended to make your data factory experience easy to use, powerful, and truly enterprise-grade. - ## Related content - [What's new archive](whats-new-archive.md) |
data-lake-analytics | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-analytics/policy-reference.md | Title: Built-in policy definitions for Azure Data Lake Analytics description: Lists Azure Policy built-in policy definitions for Azure Data Lake Analytics. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 01/30/2024 Last updated : 02/06/2024 |
data-lake-store | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-store/policy-reference.md | Title: Built-in policy definitions for Azure Data Lake Storage Gen1 description: Lists Azure Policy built-in policy definitions for Azure Data Lake Storage Gen1. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 01/30/2024 Last updated : 02/06/2024 |
databox-online | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/policy-reference.md | Title: Built-in policy definitions for Azure Stack Edge description: Lists Azure Policy built-in policy definitions for Azure Stack Edge. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 01/30/2024 Last updated : 02/06/2024 |
databox | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox/policy-reference.md | Title: Built-in policy definitions for Azure Data Box description: Lists Azure Policy built-in policy definitions for Azure Data Box. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 01/30/2024 Last updated : 02/06/2024 |
ddos-protection | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ddos-protection/policy-reference.md | |
defender-for-cloud | Agentless Vulnerability Assessment Aws | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/agentless-vulnerability-assessment-aws.md | Vulnerability assessment for AWS, powered by Microsoft Defender Vulnerability Ma > [!NOTE] > This feature supports scanning of images in the ECR only. Images that are stored in other container registries should be imported into ECR for coverage. Learn how to [import container images to a container registry](/azure/container-registry/container-registry-import-images). -In every account where enablement of this capability is completed, all images stored in ECR that meet the following criteria for scan triggers are scanned for vulnerabilities without any extra configuration of users or registries. Recommendations with vulnerability reports are provided for all images in ECR as well as images that are currently running in EKS that were pulled from an ECR registry. Images are scanned shortly after being added to a registry, and rescanned for new vulnerabilities once every 24 hours. +In every account where enablement of this capability is completed, all images stored in ECR that meet the criteria for scan triggers are scanned for vulnerabilities without any extra configuration of users or registries. Recommendations with vulnerability reports are provided for all images in ECR as well as images that are currently running in EKS that were pulled from an ECR registry or any other Defender for Cloud supported registry (ACR, GCR, or GAR). Images are scanned shortly after being added to a registry, and rescanned for new vulnerabilities once every 24 hours. Container vulnerability assessment powered by Microsoft Defender Vulnerability Management has the following capabilities: |
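The paragraph above states that images are scanned shortly after being pushed to the registry and rescanned once every 24 hours. A toy sketch of that freshness rule, illustrative only and not Defender for Cloud's implementation:

```python
from datetime import datetime, timedelta, timezone

RESCAN_INTERVAL = timedelta(hours=24)  # per the rescan cadence described above

def rescan_due(last_scanned, now=None):
    """Return True if an image's last vulnerability scan is older than 24 hours."""
    now = now or datetime.now(timezone.utc)
    return now - last_scanned >= RESCAN_INTERVAL

t0 = datetime(2024, 2, 1, 12, 0, tzinfo=timezone.utc)
print(rescan_due(t0, t0 + timedelta(hours=23)))  # False: within the 24-hour window
print(rescan_due(t0, t0 + timedelta(hours=25)))  # True: interval has elapsed
```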
defender-for-cloud | Azure Devops Extension | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/azure-devops-extension.md | The Microsoft Security DevOps uses the following Open Source tools: | [Trivy](https://github.com/aquasecurity/trivy) | container images, Infrastructure as Code (IaC) | [Apache License 2.0](https://github.com/aquasecurity/trivy/blob/main/LICENSE) | > [!NOTE]-> Effective September 20, 2023, the secrets scanning (CredScan) tool within the Microsoft Security DevOps (MSDO) Extension for Azure DevOps has been deprecated. MSDO secrets scanning will be replaced with [GitHub Advanced Security for Azure DevOps](https://azure.microsoft.com/products/devops/github-advanced-security). +> Effective September 20, 2023, the secrets scanning (CredScan) tool within the Microsoft Security DevOps (MSDO) Extension for Azure DevOps has been deprecated. MSDO secrets scanning will be replaced with [GitHub Advanced Security for Azure DevOps](https://azure.microsoft.com/products/devops/github-advanced-security). ## Prerequisites |
defender-for-cloud | Concept Agentless Data Collection | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/concept-agentless-data-collection.md | -Microsoft Defender for Cloud improves compute posture for Azure, AWS and GCP environments with machine scanning. For requirements and support, see the [compute support matrix in Defender for Cloud](support-matrix-defender-for-servers.md). +Microsoft Defender for Cloud improves compute posture for Azure, AWS and GCP environments with machine scanning. For requirements and support, see the [compute support matrix in Defender for Cloud](support-matrix-defender-for-servers.md). Agentless scanning for virtual machines (VM) provides: Agentless scanning for virtual machines (VM) provides: - Deep analysis of operating system configuration and other machine meta data. - [Vulnerability assessment](enable-agentless-scanning-vms.md) using Defender Vulnerability Management. - [Secret scanning](secret-scanning.md) to locate plain text secrets in your compute environment.-- Threat detection with [agentless malware scanning](agentless-malware-scanning.md), using [Microsoft Defender Antivirus](/microsoft-365/security/defender-endpoint/microsoft-defender-antivirus-windows?view=o365-worldwide).+- Threat detection with [agentless malware scanning](agentless-malware-scanning.md), using [Microsoft Defender Antivirus](/microsoft-365/security/defender-endpoint/microsoft-defender-antivirus-windows). Agentless scanning assists you in the identification process of actionable posture issues without the need for installed agents, network connectivity, or any effect on machine performance. Agentless scanning is available through both the [Defender Cloud Security Posture Management (CSPM)](concept-cloud-security-posture-management.md) plan and [Defender for Servers P2](plan-defender-for-servers-select-plan.md#plan-features) plan. |
defender-for-cloud | Concept Attack Path | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/concept-attack-path.md | Last updated 05/07/2023 > [!VIDEO https://aka.ms/docs/player?id=36a5c440-00e6-4bd8-be1f-a27fbd007119] -One of the biggest challenges that security teams face today is the number of security issues they face on a daily basis. There are numerous security issues that need to be resolved and never enough resources to address them all. +One of the biggest challenges that security teams face today is the number of security issues they face on a daily basis. There are numerous security issues that need to be resolved and never enough resources to address them all. Defender for Cloud's contextual security capabilities assists security teams to assess the risk behind each security issue, and identify the highest risk issues that need to be resolved soonest. Defender for Cloud assists security teams to reduce the risk of an impactful breach to their environment in the most effective way. All of these capabilities are available as part of the [Defender Cloud Security ## What is cloud security graph? -The cloud security graph is a graph-based context engine that exists within Defender for Cloud. The cloud security graph collects data from your multicloud environment and other data sources. For example, the cloud assets inventory, connections and lateral movement possibilities between resources, exposure to internet, permissions, network connections, vulnerabilities and more. The data collected is then used to build a graph representing your multicloud environment. +The cloud security graph is a graph-based context engine that exists within Defender for Cloud. The cloud security graph collects data from your multicloud environment and other data sources. 
For example, the cloud assets inventory, connections and lateral movement possibilities between resources, exposure to internet, permissions, network connections, vulnerabilities and more. The data collected is then used to build a graph representing your multicloud environment. Defender for Cloud then uses the generated graph to perform an attack path analysis and find the issues with the highest risk that exist within your environment. You can also query the graph using the cloud security explorer. Defender for Cloud then uses the generated graph to perform an attack path analy ## What is attack path analysis? -Attack path analysis is a graph-based algorithm that scans the cloud security graph. The scans expose exploitable paths that attackers might use to breach your environment to reach your high-impact assets. Attack path analysis exposes attack paths and suggests recommendations as to how best remediate issues that will break the attack path and prevent successful breach. +Attack path analysis is a graph-based algorithm that scans the cloud security graph. The scans expose exploitable paths that attackers might use to breach your environment to reach your high-impact assets. Attack path analysis exposes attack paths and suggests recommendations as to how best remediate issues that will break the attack path and prevent successful breach. When you take your environment's contextual information into account, attack path analysis identifies issues that might lead to a breach on your environment, and helps you to remediate the highest risk ones first. For example, its exposure to the internet, permissions, lateral movement, and more. Learn how to use [attack path analysis](how-to-manage-attack-path.md). ## What is cloud security explorer? -By running graph-based queries on the cloud security graph with the cloud security explorer, you can proactively identify security risks in your multicloud environments. 
Your security team can use the query builder to search for and locate risks, while taking your organization's specific contextual and conventional information into account. +By running graph-based queries on the cloud security graph with the cloud security explorer, you can proactively identify security risks in your multicloud environments. Your security team can use the query builder to search for and locate risks, while taking your organization's specific contextual and conventional information into account. Cloud security explorer provides you with the ability to perform proactive exploration features. You can search for security risks within your organization by running graph-based path-finding queries on top the contextual security data that is already provided by Defender for Cloud, such as cloud misconfigurations, vulnerabilities, resource context, lateral movement possibilities between resources and more. |
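Attack path analysis and the cloud security explorer both run graph-based path-finding over the cloud security graph. A toy illustration of the underlying idea, using a made-up resource graph and plain breadth-first search; none of the node names or logic come from the product:

```python
from collections import deque

# Illustrative-only resource graph: an edge means "can reach / move laterally to".
graph = {
    "internet": ["vm-exposed"],
    "vm-exposed": ["vm-internal"],
    "vm-internal": ["storage-sensitive"],
    "vm-isolated": [],
}

def find_attack_path(source, target):
    """Breadth-first search for a shortest path from source to target, or None."""
    queue = deque([[source]])
    visited = {source}
    while queue:
        path = queue.popleft()
        if path[-1] == target:
            return path
        for nxt in graph.get(path[-1], []):
            if nxt not in visited:
                visited.add(nxt)
                queue.append(path + [nxt])
    return None

print(find_attack_path("internet", "storage-sensitive"))
# → ['internet', 'vm-exposed', 'vm-internal', 'storage-sensitive']
```

Breaking any edge on the returned path (for example, removing the internet exposure of `vm-exposed`) is what "breaking the attack path" means in the text above.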
defender-for-cloud | Concept Cloud Security Posture Management | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/concept-cloud-security-posture-management.md | The following table summarizes each plan and their cloud availability. | EASM insights in network exposure | - | :::image type="icon" source="./media/icons/yes-icon.png"::: | Azure, AWS, GCP | | [Permissions management (Preview)](enable-permissions-management.md) | - | :::image type="icon" source="./media/icons/yes-icon.png"::: | Azure, AWS, GCP | - > [!NOTE] > Starting March 7, 2024, Defender CSPM must be enabled to have premium DevOps security capabilities that include code-to-cloud contextualization powering security explorer and attack paths and pull request annotations for Infrastructure-as-Code security findings. See DevOps security [support and prerequisites](devops-support.md) to learn more. - ## Integrations (preview) Microsoft Defender for Cloud now has built-in integrations to help you use third-party systems to seamlessly manage and track tickets, events, and customer interactions. You can push recommendations to a third-party ticketing tool, and assign responsibility to a team for remediation. |
defender-for-cloud | Concept Data Security Posture Prepare | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/concept-data-security-posture-prepare.md | In order to protect GCP resources in Defender for Cloud, you can set up a Google Defender CSPM attack paths and cloud security graph insights include information about storage resources that are exposed to the internet and allow public access. The following table provides more details. -**State** | **Azure storage accounts** | **AWS S3 Buckets** | **GCP Storage Buckets** | - | | | -**Exposed to the internet** | An Azure storage account is considered exposed to the internet if either of these settings enabled:<br/><br/> Storage_account_name > **Networking** > **Public network access** > **Enabled from all networks**<br/><br/> or<br/><br/> Storage_account_name > **Networking** > **Public network access** > **Enable from selected virtual networks and IP addresses**. | An AWS S3 bucket is considered exposed to the internet if the AWS account/AWS S3 bucket policies don't have a condition set for IP addresses. | All GCP storage buckets are exposed to the internet by default. | -**Allows public access** | An Azure storage account container is considered as allowing public access if these settings are enabled on the storage account:<br/><br/> Storage_account_name > **Configuration** > **Allow blob public access** > **Enabled**.<br/><br/>and **either** of these settings:<br/><br/> Storage_account_name > **Containers** > container_name > **Public access level** set to **Blob (anonymous read access for blobs only)**<br/><br/> Or, storage_account_name > **Containers** > container_name > **Public access level** set to **Container (anonymous read access for containers and blobs)**. 
| An AWS S3 bucket is considered to allow public access if both the AWS account and the AWS S3 bucket have **Block all public access** set to **Off**, and **either** of these settings is set:<br/><br/> In the policy, **RestrictPublicBuckets** isn't enabled, and the **Principal** setting is set to * and **Effect** is set to **Allow**.<br/><br/> Or, in the access control list, **IgnorePublicAcl** isn't enabled, and permission is allowed for **Everyone**, or for **Authenticated users**. | A GCP storage bucket is considered to allow public access if: it has an IAM (Identity and Access Management) role that meets these criteria: <br/><br/> The role is granted to the principal **allUsers** or **allAuthenticatedUsers**. <br/><br/>The role has at least one storage permission that *isn't* **storage.buckets.create** or **storage.buckets.list**. Public access in GCP is called ΓÇ£Public to internetΓÇ£. +| **State** | **Azure storage accounts** | **AWS S3 Buckets** | **GCP Storage Buckets** | +| | | | | +|**Exposed to the internet** | An Azure storage account is considered exposed to the internet if either of these settings enabled:<br/><br/> Storage_account_name > **Networking** > **Public network access** > **Enabled from all networks**<br/><br/> or<br/><br/> Storage_account_name > **Networking** > **Public network access** > **Enable from selected virtual networks and IP addresses**. | An AWS S3 bucket is considered exposed to the internet if the AWS account/AWS S3 bucket policies don't have a condition set for IP addresses. | All GCP storage buckets are exposed to the internet by default. 
| +|**Allows public access** | An Azure storage account container is considered as allowing public access if these settings are enabled on the storage account:<br/><br/> Storage_account_name > **Configuration** > **Allow blob public access** > **Enabled**.<br/><br/>and **either** of these settings:<br/><br/> Storage_account_name > **Containers** > container_name > **Public access level** set to **Blob (anonymous read access for blobs only)**<br/><br/> Or, storage_account_name > **Containers** > container_name > **Public access level** set to **Container (anonymous read access for containers and blobs)**. | An AWS S3 bucket is considered to allow public access if both the AWS account and the AWS S3 bucket have **Block all public access** set to **Off**, and **either** of these settings is set:<br/><br/> In the policy, **RestrictPublicBuckets** isn't enabled, and the **Principal** setting is set to * and **Effect** is set to **Allow**.<br/><br/> Or, in the access control list, **IgnorePublicAcl** isn't enabled, and permission is allowed for **Everyone**, or for **Authenticated users**. | A GCP storage bucket is considered to allow public access if: it has an IAM (Identity and Access Management) role that meets these criteria: <br/><br/> The role is granted to the principal **allUsers** or **allAuthenticatedUsers**. <br/><br/>The role has at least one storage permission that *isn't* **storage.buckets.create** or **storage.buckets.list**. Public access in GCP is called "Public to internet".| Database resources don't allow public access but can still be exposed to the internet. |
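Per the table's AWS column, an S3 bucket counts as exposed to the internet when its account/bucket policies set no condition on IP addresses. A simplified sketch of that single rule; the policy documents follow the standard IAM JSON shape, the helper itself is hypothetical and ignores the other criteria in the table:

```python
def has_ip_condition(policy):
    """True if any policy statement constrains access by source IP (aws:SourceIp)."""
    for stmt in policy.get("Statement", []):
        for _operator, conditions in stmt.get("Condition", {}).items():
            if "aws:SourceIp" in conditions:
                return True
    return False

def exposed_to_internet(policy):
    # Per the table above: no IP-address condition anywhere => exposed.
    return not has_ip_condition(policy)

restricted = {"Statement": [{
    "Effect": "Allow", "Action": "s3:GetObject", "Resource": "*",
    "Condition": {"IpAddress": {"aws:SourceIp": "203.0.113.0/24"}},
}]}
open_policy = {"Statement": [{"Effect": "Allow", "Action": "s3:GetObject", "Resource": "*"}]}

print(exposed_to_internet(restricted))   # False: an IP condition restricts access
print(exposed_to_internet(open_policy))  # True: no IP condition anywhere
```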
defender-for-cloud | Concept Defender For Cosmos | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/concept-defender-for-cosmos.md | Last updated 11/27/2022 Microsoft Defender for Azure Cosmos DB detects potential SQL injections, known bad actors based on Microsoft Threat Intelligence, suspicious access patterns, and potential exploitation of your database through compromised identities, or malicious insiders. -Defender for Azure Cosmos DB uses advanced threat detection capabilities, and [Microsoft Threat Intelligence](https://www.microsoft.com/insidetrack/microsoft-uses-threat-intelligence-to-protect-detect-and-respond-to-threats) data to provide contextual security alerts. Those alerts also include steps to mitigate the detected threats and prevent future attacks. +Defender for Azure Cosmos DB uses advanced threat detection capabilities, and [Microsoft Threat Intelligence](https://www.microsoft.com/insidetrack/microsoft-uses-threat-intelligence-to-protect-detect-and-respond-to-threats) data to provide contextual security alerts. Those alerts also include steps to mitigate the detected threats and prevent future attacks. -You can [enable protection for all your databases](quickstart-enable-database-protections.md) (recommended), or [enable Microsoft Defender for Azure Cosmos DB](quickstart-enable-database-protections.md) at either the subscription level, or the resource level. +You can [enable protection for all your databases](quickstart-enable-database-protections.md) (recommended), or [enable Microsoft Defender for Azure Cosmos DB](quickstart-enable-database-protections.md) at either the subscription level, or the resource level. -Defender for Azure Cosmos DB continually analyzes the telemetry stream generated by the Azure Cosmos DB service. When potentially malicious activities are detected, security alerts are generated. 
These alerts are displayed in Defender for Cloud together with the details of the suspicious activity along with the relevant investigation steps, remediation actions, and security recommendations. +Defender for Azure Cosmos DB continually analyzes the telemetry stream generated by the Azure Cosmos DB service. When potentially malicious activities are detected, security alerts are generated. These alerts are displayed in Defender for Cloud together with the details of the suspicious activity along with the relevant investigation steps, remediation actions, and security recommendations. -Defender for Azure Cosmos DB doesn't access the Azure Cosmos DB account data, and doesn't have any effect on its performance. +Defender for Azure Cosmos DB doesn't access the Azure Cosmos DB account data, and doesn't have any effect on its performance. ## Availability |Aspect|Details| |-|:-|-|Release state:| General Availability (GA) | +|Release state:| General Availability (GA) | | Protected Azure Cosmos DB API | :::image type="icon" source="./media/icons/yes-icon.png"::: Azure Cosmos DB for NoSQL <br> :::image type="icon" source="./media/icons/no-icon.png"::: Azure Cosmos DB for Apache Cassandra <br> :::image type="icon" source="./media/icons/no-icon.png"::: Azure Cosmos DB for MongoDB <br> :::image type="icon" source="./media/icons/no-icon.png"::: Azure Cosmos DB for Table <br> :::image type="icon" source="./media/icons/no-icon.png"::: Azure Cosmos DB for Apache Gremlin | |Clouds:|:::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds<br>:::image type="icon" source="./media/icons/no-icon.png"::: Azure Government <br>:::image type="icon" source="./media/icons/no-icon.png"::: Microsoft Azure operated by 21Vianet | ## What are the benefits of Microsoft Defender for Azure Cosmos DB -Microsoft Defender for Azure Cosmos DB uses advanced threat detection capabilities and Microsoft Threat Intelligence data. 
Defender for Azure Cosmos DB continuously monitors your Azure Cosmos DB accounts for threats such as SQL injection, compromised identities and data exfiltration. +Microsoft Defender for Azure Cosmos DB uses advanced threat detection capabilities and Microsoft Threat Intelligence data. Defender for Azure Cosmos DB continuously monitors your Azure Cosmos DB accounts for threats such as SQL injection, compromised identities and data exfiltration. -This service provides action-oriented security alerts in Microsoft Defender for Cloud with details of the suspicious activity and guidance on how to mitigate the threats. -You can use this information to quickly remediate security issues and improve the security of your Azure Cosmos DB accounts. +This service provides action-oriented security alerts in Microsoft Defender for Cloud with details of the suspicious activity and guidance on how to mitigate the threats. +You can use this information to quickly remediate security issues and improve the security of your Azure Cosmos DB accounts. -Alerts include details of the incident that triggered them, and recommendations on how to investigate and remediate threats. Alerts can be exported to Microsoft Sentinel or any other third-party SIEM or any other external tool. To learn how to stream alerts, see [Stream alerts to a SIEM, SOAR, or IT classic deployment model solution](export-to-siem.md). +Alerts include details of the incident that triggered them, and recommendations on how to investigate and remediate threats. Alerts can be exported to Microsoft Sentinel or any other third-party SIEM or any other external tool. To learn how to stream alerts, see [Stream alerts to a SIEM, SOAR, or IT classic deployment model solution](export-to-siem.md). > [!TIP] > For a comprehensive list of all Defender for Azure Cosmos DB alerts, see the [alerts reference page](alerts-reference.md#alerts-azurecosmos). 
This is useful for workload owners who want to know what threats can be detected and help SOC teams gain familiarity with detections before investigating them. Learn more about what's in a Defender for Cloud security alert, and how to manage your alerts in [Manage and respond to security alerts in Microsoft Defender for Cloud](managing-and-responding-alerts.md). ## Alert types -Threat intelligence security alerts are triggered for: +Threat intelligence security alerts are triggered for: - **Potential SQL injection attacks**: <br>- Due to the structure and capabilities of Azure Cosmos DB queries, many known SQL injection attacks can't work in Azure Cosmos DB. However, there are some variations of SQL injections that can succeed and might result in exfiltrating data from your Azure Cosmos DB accounts. Defender for Azure Cosmos DB detects both successful and failed attempts, and helps you harden your environment to prevent these threats. - + Due to the structure and capabilities of Azure Cosmos DB queries, many known SQL injection attacks can't work in Azure Cosmos DB. However, there are some variations of SQL injections that can succeed and might result in exfiltrating data from your Azure Cosmos DB accounts. Defender for Azure Cosmos DB detects both successful and failed attempts, and helps you harden your environment to prevent these threats. + - **Anomalous database access patterns**: <br>- For example, access from a TOR exit node, known suspicious IP addresses, unusual applications, and unusual locations. - + For example, access from a TOR exit node, known suspicious IP addresses, unusual applications, and unusual locations. + - **Suspicious database activity**: <br>- For example, suspicious key-listing patterns that resemble known malicious lateral movement techniques and suspicious data extraction patterns. + For example, suspicious key-listing patterns that resemble known malicious lateral movement techniques and suspicious data extraction patterns. 
## Next steps -In this article, you learned about Microsoft Defender for Azure Cosmos DB. +In this article, you learned about Microsoft Defender for Azure Cosmos DB. > [!div class="nextstepaction"] > [Enable Microsoft Defender for Azure Cosmos DB](quickstart-enable-database-protections.md) |
defender-for-cloud | Concept Integration 365 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/concept-integration-365.md | Last updated 01/03/2024 # Alerts and incidents in Microsoft Defender XDR -Microsoft Defender for Cloud is now integrated with Microsoft Defender XDR. This integration allows security teams to access Defender for Cloud alerts and incidents within the Microsoft Defender Portal. This integration provides richer context to investigations that span cloud resources, devices, and identities. +Microsoft Defender for Cloud is now integrated with Microsoft Defender XDR. This integration allows security teams to access Defender for Cloud alerts and incidents within the Microsoft Defender Portal. This integration provides richer context to investigations that span cloud resources, devices, and identities. -The partnership with Microsoft Defender XDR allows security teams to get the complete picture of an attack, including suspicious and malicious events that happen in their cloud environment. Security teams can accomplish this goal through immediate correlations of alerts and incidents. +The partnership with Microsoft Defender XDR allows security teams to get the complete picture of an attack, including suspicious and malicious events that happen in their cloud environment. Security teams can accomplish this goal through immediate correlations of alerts and incidents. Microsoft Defender XDR offers a comprehensive solution that combines protection, detection, investigation, and response capabilities. The solution protects against attacks on devices, email, collaboration, identity, and cloud apps. Our detection and investigation capabilities are now extended to cloud entities, offering security operations teams a single pane of glass to significantly improve their operational efficiency. 
-Incidents and alerts are now part of [Microsoft Defender XDR's public API](/microsoft-365/security/defender/api-overview?view=o365-worldwide). This integration allows exporting of security alerts data to any system using a single API. As Microsoft Defender for Cloud, we're committed to providing our users with the best possible security solutions, and this integration is a significant step towards achieving that goal. +Incidents and alerts are now part of [Microsoft Defender XDR's public API](/microsoft-365/security/defender/api-overview). This integration allows exporting of security alerts data to any system using a single API. As Microsoft Defender for Cloud, we're committed to providing our users with the best possible security solutions, and this integration is a significant step towards achieving that goal. -## Investigation experience in Microsoft Defender XDR +## Investigation experience in Microsoft Defender XDR The following table describes the detection and investigation experience in Microsoft Defender XDR with Defender for Cloud alerts. | Area | Description | |--|--|-| Incidents | All Defender for Cloud incidents are integrated to Microsoft Defender XDR. <br> - Searching for cloud resource assets in the [incident queue](/microsoft-365/security/defender/incident-queue?view=o365-worldwide) is supported. <br> - The [attack story](/microsoft-365/security/defender/investigate-incidents?view=o365-worldwide#attack-story) graph shows cloud resource. <br> - The [assets tab](/microsoft-365/security/defender/investigate-incidents?view=o365-worldwide#assets) in an incident page shows the cloud resource. <br> - Each virtual machine has its own entity page containing all related alerts and activity. <br> <br> There are no duplications of incidents from other Defender workloads. | -| Alerts | All Defender for Cloud alerts, including multicloud, internal and external providersΓÇÖ alerts, are integrated to Microsoft Defender XDR. 
Defenders for Cloud alerts show on the Microsoft Defender XDR [alert queue](/microsoft-365/security/defender-endpoint/alerts-queue-endpoint-detection-response?view=o365-worldwide). <br>Microsoft Defender XDR<br> The `cloud resource` asset shows up in the Asset tab of an alert. Resources are clearly identified as an Azure, Amazon, or a Google Cloud resource. <br> <br> Defenders for Cloud alerts are automatically be associated with a tenant. <br> <br> There are no duplications of alerts from other Defender workloads.| +| Incidents | All Defender for Cloud incidents are integrated to Microsoft Defender XDR. <br> - Searching for cloud resource assets in the [incident queue](/microsoft-365/security/defender/incident-queue) is supported. <br> - The [attack story](/microsoft-365/security/defender/investigate-incidents#attack-story) graph shows the cloud resource. <br> - The [assets tab](/microsoft-365/security/defender/investigate-incidents#assets) in an incident page shows the cloud resource. <br> - Each virtual machine has its own entity page containing all related alerts and activity. <br> <br> There are no duplications of incidents from other Defender workloads. | +| Alerts | All Defender for Cloud alerts, including multicloud, internal and external providers' alerts, are integrated to Microsoft Defender XDR. Defender for Cloud alerts show on the Microsoft Defender XDR [alert queue](/microsoft-365/security/defender-endpoint/alerts-queue-endpoint-detection-response). <br> <br> The `cloud resource` asset shows up in the Asset tab of an alert. Resources are clearly identified as an Azure, Amazon, or a Google Cloud resource. <br> <br> Defender for Cloud alerts are automatically associated with a tenant.
<br> <br> There are no duplications of alerts from other Defender workloads.| | Alert and incident correlation | Alerts and incidents are automatically correlated, providing robust context to security operations teams to understand the complete attack story in their cloud environment. | | Threat detection | Accurate matching of virtual entities to device entities to ensure precision and effective threat detection. |-| Unified API | Defender for Cloud alerts and incidents are now included in [Microsoft Defender XDR's public API](/microsoft-365/security/defender/api-overview?view=o365-worldwide), allowing customers to export their security alerts data into other systems using one API. | +| Unified API | Defender for Cloud alerts and incidents are now included in [Microsoft Defender XDR's public API](/microsoft-365/security/defender/api-overview), allowing customers to export their security alerts data into other systems using one API. | -Learn more about [handling alerts in Microsoft Defender XDR](/microsoft-365/security/defender/microsoft-365-security-center-defender-cloud?view=o365-worldwide). +Learn more about [handling alerts in Microsoft Defender XDR](/microsoft-365/security/defender/microsoft-365-security-center-defender-cloud). ## Sentinel customers Microsoft Sentinel customers can [benefit from the Defender for Cloud integratio First you need to [enable incident integration in your Microsoft 365 Defender connector](../sentinel/connect-microsoft-365-defender.md). -Then, enable the `Tenant-based Microsoft Defender for Cloud (Preview)` connector to synchronize your subscriptions with your tenant-based Defender for Cloud incidents to stream through the Microsoft 365 Defender incidents connector. +Then, enable the `Tenant-based Microsoft Defender for Cloud (Preview)` connector to synchronize your subscriptions with your tenant-based Defender for Cloud incidents to stream through the Microsoft 365 Defender incidents connector. 
-The connector is available through the Microsoft Defender for Cloud solution, version 3.0.0, in the Content Hub. If you have an earlier version of this solution, you can upgrade it in the Content Hub. +The connector is available through the Microsoft Defender for Cloud solution, version 3.0.0, in the Content Hub. If you have an earlier version of this solution, you can upgrade it in the Content Hub. If you have the legacy subscription-based Microsoft Defender for Cloud alerts connector enabled (which is displayed as `Subscription-based Microsoft Defender for Cloud (Legacy)`), we recommend you disconnect the connector in order to prevent duplicating alerts in your logs. We recommend that you prevent any enabled analytic rules (whether scheduled or created through Microsoft incident creation rules) from creating incidents from your Defender for Cloud alerts. -You can use automation rules to close incidents immediately and prevent specific types of Defender for Cloud alerts from becoming incidents. You can also use the built-in tuning capabilities in the Microsoft 365 Defender portal to prevent alerts from becoming incidents. +You can use automation rules to close incidents immediately and prevent specific types of Defender for Cloud alerts from becoming incidents. You can also use the built-in tuning capabilities in the Microsoft 365 Defender portal to prevent alerts from becoming incidents. -Customers who integrated their Microsoft 365 Defender incidents into Sentinel and want to keep their subscription-based settings and avoid tenant-based syncing can [opt out of syncing incidents and alerts](/microsoft-365/security/defender/microsoft-365-security-center-defender-cloud?view=o365-worldwide) through the Microsoft 365 Defender connector.
+Customers who integrated their Microsoft 365 Defender incidents into Sentinel and want to keep their subscription-based settings and avoid tenant-based syncing can [opt out of syncing incidents and alerts](/microsoft-365/security/defender/microsoft-365-security-center-defender-cloud) through the Microsoft 365 Defender connector. Learn how [Defender for Cloud and Microsoft 365 Defender handle your data's privacy](data-security.md#defender-for-cloud-and-microsoft-defender-365-defender-integration). |
defender-for-cloud | Configure Servers Coverage | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/configure-servers-coverage.md | Last updated 02/05/2024 # Configure Defender for Servers features -Microsoft Defender for Cloud's Defender for Servers plans contains components that monitor your environments to provide extended coverage on your servers. Each of these components can be enabled, disabled or configured to your meet your specific requirements. +Microsoft Defender for Cloud's Defender for Servers plans contain components that monitor your environments to provide extended coverage on your servers. Each of these components can be enabled, disabled, or configured to meet your specific requirements. | Component | Availability | Description | Learn more | |--|--|--|--| Vulnerability assessment for machines allows you to select between two vulnerabi ## Configure endpoint protection -With Microsoft Defender for Servers, you enable the protections provided by [Microsoft Defender for Endpoint](/microsoft-365/security/defender-endpoint/microsoft-defender-endpoint?view=o365-worldwide) to your server resources. Defender for Endpoint includes automatic agent deployment to your servers, and security data integration with Defender for Cloud. +With Microsoft Defender for Servers, you enable the protections provided by [Microsoft Defender for Endpoint](/microsoft-365/security/defender-endpoint/microsoft-defender-endpoint) to your server resources. Defender for Endpoint includes automatic agent deployment to your servers, and security data integration with Defender for Cloud. 
To configure endpoint protection: You can also check the coverage for all of your subscriptions and resources ## Disable Defender for Servers plan or features -To disable The Defender for Servers plan or any of the features of the plan, navigate to the Environment settings page of the relevant subscription or workspace and toggle the relevant switch to **Off**. +To disable the Defender for Servers plan or any of the features of the plan, navigate to the Environment settings page of the relevant subscription or workspace and toggle the relevant switch to **Off**. > [!NOTE] > When you disable the Defender for Servers plan on a subscription, it doesn't disable it on a workspace. To disable the plan on a workspace, you must navigate to the plans page for the workspace and toggle the switch to **Off**. |
defender-for-cloud | Connect Azure Subscription | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/connect-azure-subscription.md | Microsoft Defender for Cloud is a cloud-native application protection platform ( - A cloud security posture management (CSPM) solution that surfaces actions that you can take to prevent breaches - A cloud workload protection platform (CWPP) with specific protections for servers, containers, storage, databases, and other workloads -Defender for Cloud includes Foundational CSPM capabilities and access to [Microsoft Defender XDR](/microsoft-365/security/defender/microsoft-365-defender?view=o365-worldwide) for free. You can add additional paid plans to secure all aspects of your cloud resources. To learn more about these plans and their costs, see the Defender for Cloud [pricing page](https://azure.microsoft.com/pricing/details/defender-for-cloud/). +Defender for Cloud includes Foundational CSPM capabilities and access to [Microsoft Defender XDR](/microsoft-365/security/defender/microsoft-365-defender) for free. You can add additional paid plans to secure all aspects of your cloud resources. To learn more about these plans and their costs, see the Defender for Cloud [pricing page](https://azure.microsoft.com/pricing/details/defender-for-cloud/). Defender for Cloud helps you find and fix security vulnerabilities. Defender for Cloud also applies access and application controls to block malicious activity, detect threats using analytics and intelligence, and respond quickly when under attack. If you want to disable any of the plans, toggle the individual plan to **off**. When you enable Defender for Cloud, Defender for Cloud's alerts are automatically integrated into the Microsoft Defender Portal. No further steps are needed. -The integration between Microsoft Defender for Cloud and Microsoft Defender XDR brings your cloud environments into Microsoft Defender XDR. 
With Defender for Cloud's alerts and cloud correlations integrated into Microsoft Defender XDR, SOC teams can now access all security information from a single interface. +The integration between Microsoft Defender for Cloud and Microsoft Defender XDR brings your cloud environments into Microsoft Defender XDR. With Defender for Cloud's alerts and cloud correlations integrated into Microsoft Defender XDR, SOC teams can now access all security information from a single interface. Learn more about Defender for Cloud's [alerts in Microsoft Defender XDR](concept-integration-365.md). |
defender-for-cloud | Defender For Containers Enable | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-containers-enable.md | A full list of supported alerts is available in the [reference table of all Defe The expected response is `No resource found`. Within 30 minutes, Defender for Cloud detects this activity and triggers a security alert.+ > [!NOTE] + > To simulate agentless alerts for Defender for Containers, Azure Arc isn't a prerequisite. 1. In the Azure portal, open Microsoft Defender for Cloud's security alerts page and look for the alert on the relevant resource: |
defender-for-cloud | Export To Siem | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/export-to-siem.md | There are built-in Azure tools that are available that ensure you can view your ## Stream alerts to Defender XDR with the Defender XDR API -Defender for Cloud natively integrates with [Microsoft Defender XDR](/microsoft-365/security/defender/microsoft-365-defender?view=o365-worldwide) allows you to use Defender XDR's incidents and alerts API to stream alerts and incidents into non-Microsoft solutions. Defender for Cloud customers can access one API for all Microsoft security products and can use this integration as an easier way to export alerts and incidents. +Defender for Cloud natively integrates with [Microsoft Defender XDR](/microsoft-365/security/defender/microsoft-365-defender), which allows you to use Defender XDR's incidents and alerts API to stream alerts and incidents into non-Microsoft solutions. Defender for Cloud customers can access one API for all Microsoft security products and can use this integration as an easier way to export alerts and incidents. -Learn how to [integrate SIEM tools with Defender XDR](/microsoft-365/security/defender/configure-siem-defender?view=o365-worldwide). +Learn how to [integrate SIEM tools with Defender XDR](/microsoft-365/security/defender/configure-siem-defender). ## Stream alerts to Microsoft Sentinel Defender for Cloud natively integrates with [Microsoft Sentinel](../sentinel/ove ### Microsoft Sentinel's connectors for Defender for Cloud -Microsoft Sentinel includes built-in connectors for Microsoft Defender for Cloud at the subscription and tenant levels. +Microsoft Sentinel includes built-in connectors for Microsoft Defender for Cloud at the subscription and tenant levels. 
You can: Before you set up the Azure services for exporting alerts, make sure you have: - if it **has the SecurityCenterFree solution**, you'll need a minimum of read permissions for the workspace solution: `Microsoft.OperationsManagement/solutions/read` - if it **doesn't have the SecurityCenterFree solution**, you'll need write permissions for the workspace solution: `Microsoft.OperationsManagement/solutions/action` --> -### Set up the Azure services +### Set up the Azure services You can set up your Azure environment to support continuous export using either: You can set up your Azure environment to support continuous export using either: 1. Download and run [the PowerShell script](https://github.com/Azure/Microsoft-Defender-for-Cloud/tree/main/Powershell%20scripts/3rd%20party%20SIEM%20integration). -1. Enter the required parameters. - +1. Enter the required parameters. + 1. Execute the script. The script performs all of the steps for you. When the script finishes, use the output to install the solution in the SIEM platform. The script performs all of the steps for you. When the script finishes, use the 1. Define a policy for the event hub with `Send` permissions. -**If you're streaming alerts to QRadar** +**If you're streaming alerts to QRadar**: 1. Create an event hub `Listen` policy. 1. Copy and save the connection string of the policy to use in QRadar. -1. Create a consumer group. +1. Create a consumer group. 1. Copy and save the name to use in the SIEM platform. To stream alerts into **ArcSight**, **SumoLogic**, **Syslog servers**, **LogRhyt |:|:| :| | SumoLogic | No | Instructions for setting up SumoLogic to consume data from an event hub are available at [Collect Logs for the Azure Audit App from Event Hubs](https://help.sumologic.com/docs/send-data/collect-from-other-data-sources/azure-monitoring/collect-logs-azure-monitor/). 
| | ArcSight | No | The ArcSight Azure Event Hubs smart connector is available as part of [the ArcSight smart connector collection](https://community.microfocus.com/cyberres/arcsight/f/arcsight-product-announcements/163662/announcing-general-availability-of-arcsight-smart-connectors-7-10-0-8114-0). |- | Syslog server | No | If you want to stream Azure Monitor data directly to a syslog server, you can use a [solution based on an Azure function](https://github.com/miguelangelopereira/azuremonitor2syslog/). - | LogRhythm | No| Instructions to set up LogRhythm to collect logs from an event hub are available [here](https://logrhythm.com/six-tips-for-securing-your-azure-cloud-environment/). - |Logz.io | Yes | For more information, see [Getting started with monitoring and logging using Logz.io for Java apps running on Azure](/azure/developer/java/fundamentals/java-get-started-with-logzio) + | Syslog server | No | If you want to stream Azure Monitor data directly to a syslog server, you can use a [solution based on an Azure function](https://github.com/miguelangelopereira/azuremonitor2syslog/).| + | LogRhythm | No| Instructions to set up LogRhythm to collect logs from an event hub are available [here](https://logrhythm.com/six-tips-for-securing-your-azure-cloud-environment/).| + |Logz.io | Yes | For more information, see [Getting started with monitoring and logging using Logz.io for Java apps running on Azure](/azure/developer/java/fundamentals/java-get-started-with-logzio)| 1. (Optional) Stream the raw logs to the event hub and connect to your preferred solution. Learn more in [Monitoring data available](../azure-monitor/essentials/stream-monitoring-data-event-hubs.md#monitoring-data-available). |
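Alerts streamed through an event hub arrive as JSON messages that the downstream SIEM must unpack. The following is a minimal sketch of parsing such a message before forwarding it; the envelope shape (a `records` array, as commonly used by Azure Monitor exports) and the field names in the sample payload are illustrative assumptions, not a documented schema:

```python
import json

def extract_alerts(event_body: bytes) -> list[dict]:
    """Unpack a hypothetical exported-alert envelope into individual alert dicts."""
    envelope = json.loads(event_body)
    # Exported monitoring data is often wrapped in a "records" array; this is
    # an assumption here -- fall back to treating the payload as one record.
    records = envelope.get("records", [envelope])
    return [
        {
            "name": r.get("AlertDisplayName", "unknown"),
            "severity": r.get("Severity", "Informational"),
        }
        for r in records
    ]

# Hypothetical sample payload for illustration only.
sample = json.dumps({
    "records": [
        {"AlertDisplayName": "Suspicious process executed", "Severity": "High"}
    ]
}).encode("utf-8")

print(extract_alerts(sample))
```

In practice the bytes would come from an event hub consumer (for example, the `azure-eventhub` client library) using the `Listen` policy and consumer group created in the steps above.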
defender-for-cloud | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/policy-reference.md | Title: Built-in policy definitions description: Lists Azure Policy built-in policy definitions for Microsoft Defender for Cloud. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 01/30/2024 Last updated : 02/06/2024 |
defender-for-cloud | Tutorial Enable Servers Plan | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/tutorial-enable-servers-plan.md | By enabling Defender for Servers on a Log Analytics workspace, you aren't enabli ## Enable Defender for Servers at the resource level -To protect all of your existing and future resources, we recommend you enable Defender for Servers on your entire Azure subscription. +To protect all of your existing and future resources, we recommend you [enable Defender for Servers on your entire Azure subscription](#enable-on-an-azure-subscription-aws-account-or-gcp-project). -You can exclude specific resources or manage security configurations at a lower hierarchy level by enabling the Defender for Servers plan at the resource level with REST API or at scale. +You can exclude specific resources or manage security configurations at a lower hierarchy level by enabling the Defender for Servers plan at the resource level. You can enable the plan on the resource level with REST API or at scale. The supported resource types include: After enabling the plan, you have the ability to [configure the features of the ## Next steps [Configure Defender for Servers features](configure-servers-coverage.md).-[Overview of Microsoft Defender for Servers](defender-for-servers-introduction.md) ++[Overview of Microsoft Defender for Servers](defender-for-servers-introduction.md). |
deployment-environments | Overview What Is Azure Deployment Environments | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/deployment-environments/overview-what-is-azure-deployment-environments.md | Title: What is Azure Deployment Environments? -description: Enable developer teams to spin up app infrastructure with project-based templates, minimize setup time & maximize security, compliance, and cost efficiency. +description: Enable developer teams to spin up infrastructure for deploying apps with project-based templates, while adding governance for Azure resource types, security, and cost. |
deployment-environments | Quickstart Create Access Environments | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/deployment-environments/quickstart-create-access-environments.md | Title: Create and access an environment in the developer portal + Title: Create a deployment environment -description: Learn how to create and access an environment in an Azure Deployment Environments project through the developer portal. +description: Learn how to create and access an environment in Azure Deployment Environments through the developer portal. An environment has all Azure resources preconfigured for deploying your application. Last updated 12/01/2023 # Quickstart: Create and access an environment in Azure Deployment Environments -This quickstart shows you how to create and access an [environment](concept-environments-key-concepts.md#environments) in an existing Azure Deployment Environments project. +This quickstart shows you how to create and access an [environment](concept-environments-key-concepts.md#environments) in Azure Deployment Environments by using the developer portal. ++As a developer, you can create environments associated with a [project](concept-environments-key-concepts.md#projects) in Azure Deployment Environments. An environment has all Azure resources preconfigured for deploying your application. ## Prerequisites |
deployment-environments | Quickstart Create And Configure Devcenter | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/deployment-environments/quickstart-create-and-configure-devcenter.md | Title: Create and configure a dev center for Azure Deployment Environments + Title: Set up a dev center for Azure Deployment Environments -description: Learn how to configure a dev center, attach an identity, and attach a catalog in Azure Deployment Environments. +description: Learn how to set up the resources to get started with Azure Deployment Environments. Configure a dev center, attach an identity, and attach a catalog for using IaC templates. Last updated 12/01/2023 In this quickstart, you set up all the resources in Azure Deployment Environments to enable self-service deployment environments for development teams. Learn how to create and configure a dev center, add a catalog to the dev center, and define an environment type. -A platform engineering team typically sets up a dev center, attaches external catalogs to the dev center, creates projects, and provides access to development teams. Development teams then create [environments](concept-environments-key-concepts.md#environments) by using [environment definitions](concept-environments-key-concepts.md#environment-definitions), connect to individual resources, and deploy applications. To learn more about the components of Azure Deployment Environments, see [Key concepts for Azure Deployment Environments](concept-environments-key-concepts.md). +A dev center is the top-level resource in Azure Deployment Environments; it contains the collection of development projects. In the dev center, you specify the common configuration for your projects, such as catalogs with application templates, and the types of environments development teams can deploy to. 
++A platform engineering team typically sets up the dev center, attaches external catalogs to the dev center, creates projects, and provides access to development teams. Development teams then create [environments](concept-environments-key-concepts.md#environments) by using [environment definitions](concept-environments-key-concepts.md#environment-definitions), connect to individual resources, and deploy applications. To learn more about the components of Azure Deployment Environments, see [Key concepts for Azure Deployment Environments](concept-environments-key-concepts.md). The following diagram shows the steps to configure a dev center for Azure Deployment Environments in the Azure portal. |
deployment-environments | Quickstart Create And Configure Projects | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/deployment-environments/quickstart-create-and-configure-projects.md | Title: Create and configure an Azure Deployment Environments project + Title: Create a project in Azure Deployment Environments -description: Learn how to create a project in Azure Deployment Environments and associate the project with a dev center. +description: Learn how to create a project for a dev center in Azure Deployment Environments. In a project, you can define environment types and environments that are specific to a software development project. -# Quickstart: Create and configure an Azure Deployment Environments project +# Quickstart: Create and configure a project in Azure Deployment Environments -This quickstart shows you how to create a project in Azure Deployment Environments, then associate the project with the dev center you created in [Quickstart: Create and configure a dev center](./quickstart-create-and-configure-devcenter.md). After you complete this quickstart, developers can use the developer portal to create environments to deploy their applications. +This quickstart shows you how to create a project in Azure Deployment Environments, then associate the project with the dev center you created in [Quickstart: Create and configure a dev center](./quickstart-create-and-configure-devcenter.md). After you complete this quickstart, developers can use the developer portal to create environments in the project to deploy their applications. ++A project contains the specific configuration for environment types and environment definitions related to a development project. For example, you might create a project for the implementation of an ecommerce application, which has a development, staging, and production environment. For another project, you might define a different configuration. 
The following diagram shows the steps to configure a project associated with a dev center for Deployment Environments in the Azure portal. |
dev-box | Tutorial Connect To Dev Box With Remote Desktop App | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/tutorial-connect-to-dev-box-with-remote-desktop-app.md | Title: 'Tutorial: Use a Remote Desktop client to connect to a dev box' + Title: 'Tutorial: Access a dev box with a remote desktop client' -description: In this tutorial, you download and use a remote desktop client to connect to a dev box in Microsoft Dev Box. +description: In this tutorial, you learn how to connect to and access your dev box in Microsoft Dev Box by using a remote desktop (RDP) client app. -In this tutorial, you download and use a remote desktop client application to connect to a dev box. +In this tutorial, you download and use a remote desktop (RDP) client application to connect to and access a dev box. Remote desktop apps let you use and control a dev box from almost any device. For your desktop or laptop, you can choose to download the Remote Desktop client for Windows Desktop or Microsoft Remote Desktop for Mac. You can also download a remote desktop app for your mobile device: Microsoft Remote Desktop for iOS or Microsoft Remote Desktop for Android. > [!TIP] > Many remote desktop apps allow you to [use multiple monitors](tutorial-configure-multiple-monitors.md) when you connect to your dev box. -Alternately, you can connect to your dev box through the browser from the Microsoft Dev Box developer portal. +Alternatively, you can access your dev box through the browser from the Microsoft Dev Box developer portal. In this tutorial, you learn how to: To complete this tutorial, you must have access to a dev box through the develop ## Download the remote desktop client and connect to your dev box -You can use a remote desktop client application to connect to your dev box in Microsoft Dev Box. Remote desktop clients are available for many operating systems and devices. +You can use a remote desktop client application to access your dev box in Microsoft Dev Box. 
Remote desktop clients are available for many operating systems and devices. Select the relevant tab to view the steps to download and use the Remote Desktop client application from Windows or non-Windows operating systems. |
digital-twins | Concepts Models | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/concepts-models.md | Models for Azure Digital Twins are defined using the Digital Twins Definition La You can view the full language description for DTDL v3 in GitHub: [DTDL Version 3 Language Description](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v3/DTDL.v3.md). This page includes DTDL reference details and examples to help you get started writing your own DTDL models. -DTDL is based on JSON-LD and is programming-language independent. DTDL isn't exclusive to Azure Digital Twins. It is also used to represent device data in other IoT services such as [IoT Plug and Play](../iot-develop/overview-iot-plug-and-play.md). +DTDL is based on JSON-LD and is programming-language independent. DTDL isn't exclusive to Azure Digital Twins. It is also used to represent device data in other IoT services such as [IoT Plug and Play](../iot/overview-iot-plug-and-play.md). The rest of this article summarizes how the language is used in Azure Digital Twins. |
digital-twins | How To Parse Models | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/how-to-parse-models.md | The capabilities of the parser include: * Determine whether a model is assignable from another model. > [!NOTE]-> [IoT Plug and Play](../iot-develop/overview-iot-plug-and-play.md) devices use a small syntax variant to describe their functionality. This syntax variant is a semantically compatible subset of the DTDL that is used in Azure Digital Twins. When using the parser library, you do not need to know which syntax variant was used to create the DTDL for your digital twin. The parser will always, by default, return the same model for both IoT Plug and Play and Azure Digital Twins syntax. +> [IoT Plug and Play](../iot/overview-iot-plug-and-play.md) devices use a small syntax variant to describe their functionality. This syntax variant is a semantically compatible subset of the DTDL that is used in Azure Digital Twins. When using the parser library, you do not need to know which syntax variant was used to create the DTDL for your digital twin. The parser will always, by default, return the same model for both IoT Plug and Play and Azure Digital Twins syntax. ## Code with the parser library |
digital-twins | Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/overview.md | In Azure Digital Twins, you define the digital entities that represent the peopl You can think of these model definitions as a specialized vocabulary to describe your business. For a building management solution, for example, you might define a model that defines a Building type, a Floor type, and an Elevator type. Models are defined in a JSON-like language called [Digital Twins Definition Language (DTDL)](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v3/DTDL.v3.md). In ADT, DTDL models describe types of entities according to their state properties, commands, and relationships. You can design your own model sets from scratch, or get started with a pre-existing set of [DTDL industry ontologies](concepts-ontologies.md) based on common vocabulary for your industry. >[!TIP]->Version 2 of DTDL is also used for data models throughout other Azure IoT services, including [IoT Plug and Play](../iot-develop/overview-iot-plug-and-play.md) and [Time Series Insights](../time-series-insights/overview-what-is-tsi.md). This compatibility helps you connect your Azure Digital Twins solution with other parts of the Azure ecosystem. +>Version 2 of DTDL is also used for data models throughout other Azure IoT services, including [IoT Plug and Play](../iot/overview-iot-plug-and-play.md) and [Time Series Insights](../time-series-insights/overview-what-is-tsi.md). This compatibility helps you connect your Azure Digital Twins solution with other parts of the Azure ecosystem. Once you've defined your data models, use them to create [digital twins](concepts-twins-graph.md) that represent each specific entity in your environment. For example, you might use the Building model definition to create several Building-type twins (Building 1, Building 2, and so on). 
You can also use the relationships in the model definitions to connect twins to each other, forming a conceptual graph. |
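As a concrete illustration of the DTDL vocabulary described above, a minimal Building interface might look like the following sketch. The `@id` values and the `floorCount`/`contains` names are invented for this example; only the overall shape follows the DTDL v3 language description linked above:

```json
{
  "@context": "dtmi:dtdl:context;3",
  "@id": "dtmi:example:Building;1",
  "@type": "Interface",
  "displayName": "Building",
  "contents": [
    {
      "@type": "Property",
      "name": "floorCount",
      "schema": "integer"
    },
    {
      "@type": "Relationship",
      "name": "contains",
      "target": "dtmi:example:Floor;1"
    }
  ]
}
```

A model like this would let each Building-type twin carry a `floorCount` state value and hold `contains` relationships to Floor-type twins, forming the conceptual graph the overview describes.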
dms | Tutorial Mysql Azure External To Flex Online Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/tutorial-mysql-azure-external-to-flex-online-portal.md | To complete this tutorial, you need to: * Ensure that the user has "REPLICATION CLIENT" and "REPLICATION SLAVE" permissions on the source server for reading and applying the bin log. * If you're targeting an online migration, you need to configure the binlog expiration on the source server to ensure that binlog files aren't purged before the replica commits the changes. We recommend at least two days to start. The parameter depends on the version of your MySQL server. For MySQL 5.7 the parameter is expire_logs_days (by default it's set to 0, meaning no automatic purge). For MySQL 8.0 it's binlog_expire_logs_seconds (by default it's set to 30 days). After a successful cutover, you can reset the value. * To complete a schema migration successfully, on the source server, the user performing the migration requires the following privileges:- * "READ" privilege on the source database. - * "SELECT" privilege for the ability to select objects from the database - * If migrating views, the user must have the "SHOW VIEW" privilege. - * If migrating triggers, the user must have the "TRIGGER" privilege. - * If migrating routines (procedures and/or functions), the user must be named in the definer clause of the routine. Alternatively, based on version, the user must have the following privilege: - * For 5.7, have "SELECT" access to the "mysql.proc" table. - * For 8.0, have "SHOW_ROUTINE" privilege or have the "CREATE ROUTINE," "ALTER ROUTINE," or "EXECUTE" privilege granted at a scope that includes the routine. - * If migrating events, the user must have the "EVENT" privilege for the database from which the events are to be shown. 
+ * ["SELECT"](https://dev.mysql.com/doc/refman/8.0/en/privileges-provided.html#priv_select) privilege at the server level on the source. + * If migrating views, user must have the ["SHOW VIEW"](https://dev.mysql.com/doc/refman/8.0/en/privileges-provided.html#priv_show-view) privilege on the source server and the ["CREATE VIEW"](https://dev.mysql.com/doc/refman/8.0/en/privileges-provided.html#priv_create-view) privilege on the target server. + * If migrating triggers, user must have the ["TRIGGER"](https://dev.mysql.com/doc/refman/8.0/en/privileges-provided.html#priv_trigger) privilege on the source and target server. + * If migrating routines (procedures and/or functions), the user must have the ["CREATE ROUTINE"](https://dev.mysql.com/doc/refman/8.0/en/privileges-provided.html#priv_create-routine) and ["ALTER ROUTINE"](https://dev.mysql.com/doc/refman/8.0/en/privileges-provided.html#priv_alter-routine) privileges granted at the server level on the target. + * If migrating events, the user must have the ["EVENT"](https://dev.mysql.com/doc/refman/8.0/en/privileges-provided.html#priv_event) privilege on the source and target server. + * If migrating users/logins, the user must have the ["CREATE USER"](https://dev.mysql.com/doc/refman/8.0/en/privileges-provided.html#priv_create-user) privilege on the target server. + * ["DROP"](https://dev.mysql.com/doc/refman/8.0/en/privileges-provided.html#priv_drop) privilege at the server level on the target, in order to drop tables that might already exist. For example, when retrying a migration. + * ["REFERENCES"](https://dev.mysql.com/doc/refman/8.0/en/privileges-provided.html#priv_references) privilege at the server level on the target, in order to create tables with foreign keys. + * If migrating to MySQL 8.0, the user must have the ["SESSION_VARIABLES_ADMIN"](https://dev.mysql.com/doc/refman/8.0/en/privileges-provided.html#priv_session-variables-admin) privilege on the target server. 
+ * ["CREATE"](https://dev.mysql.com/doc/refman/8.0/en/privileges-provided.html#priv_create) privilege at the server level on the target. + * ["INSERT"](https://dev.mysql.com/doc/refman/8.0/en/privileges-provided.html#priv_insert) privilege at the server level on the target. + * ["UPDATE"](https://dev.mysql.com/doc/refman/8.0/en/privileges-provided.html#priv_update) privilege at the server level on the target. + * ["DELETE"](https://dev.mysql.com/doc/refman/8.0/en/privileges-provided.html#priv_delete) privilege at the server level on the target. ## Limitations |
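The binlog retention prerequisite described in the entry above can be applied with a couple of statements on the (external) source server. A sketch for the recommended two-day starting window — the values are the article's recommendation expressed in each version's unit (two days is 172,800 seconds):

```sql
-- MySQL 5.7: retention is expressed in days.
SET GLOBAL expire_logs_days = 2;

-- MySQL 8.0: retention is expressed in seconds (2 days = 172800 seconds).
SET GLOBAL binlog_expire_logs_seconds = 172800;

-- Verify the effective values.
SHOW VARIABLES LIKE '%expire_logs%';
```

After a successful cutover, the value can be reset to its previous setting, as the article notes.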
dms | Tutorial Mysql Azure Mysql Offline Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/tutorial-mysql-azure-mysql-offline-portal.md | To complete this tutorial, you need to: * Azure Database for MySQL supports only InnoDB tables. To convert MyISAM tables to InnoDB, see the article [Converting Tables from MyISAM to InnoDB](https://dev.mysql.com/doc/refman/5.7/en/converting-tables-to-innodb.html) * The user must have the privileges to read data on the source database. * To complete a schema migration successfully, on the source server, the user performing the migration requires the following privileges:- * "READ" privilege on the source database. - * "SELECT" privilege for the ability to select objects from the database - * If migrating views, the user must have the "SHOW VIEW" privilege. - * If migrating triggers, the user must have the "TRIGGER" privilege. - * If migrating routines (procedures and/or functions), the user must be named in the definer clause of the routine. Alternatively, based on version, the user must have the following privilege: - * For 5.7, have "SELECT" access to the "mysql.proc" table. - * For 8.0, have "SHOW_ROUTINE" privilege or have the "CREATE ROUTINE," "ALTER ROUTINE," or "EXECUTE" privilege granted at a scope that includes the routine. - * If migrating events, the user must have the "EVENT" privilege for the database from which the events are to be shown. + * ["SELECT"](https://dev.mysql.com/doc/refman/8.0/en/privileges-provided.html#priv_select) privilege at the server level on the source. + * If migrating views, user must have the ["SHOW VIEW"](https://dev.mysql.com/doc/refman/8.0/en/privileges-provided.html#priv_show-view) privilege on the source server and the ["CREATE VIEW"](https://dev.mysql.com/doc/refman/8.0/en/privileges-provided.html#priv_create-view) privilege on the target server. 
+ * If migrating triggers, user must have the ["TRIGGER"](https://dev.mysql.com/doc/refman/8.0/en/privileges-provided.html#priv_trigger) privilege on the source and target server. + * If migrating routines (procedures and/or functions), the user must have the ["CREATE ROUTINE"](https://dev.mysql.com/doc/refman/8.0/en/privileges-provided.html#priv_create-routine) and ["ALTER ROUTINE"](https://dev.mysql.com/doc/refman/8.0/en/privileges-provided.html#priv_alter-routine) privileges granted at the server level on the target. + * If migrating events, the user must have the ["EVENT"](https://dev.mysql.com/doc/refman/8.0/en/privileges-provided.html#priv_event) privilege on the source and target server. + * If migrating users/logins, the user must have the ["CREATE USER"](https://dev.mysql.com/doc/refman/8.0/en/privileges-provided.html#priv_create-user) privilege on the target server. + * ["DROP"](https://dev.mysql.com/doc/refman/8.0/en/privileges-provided.html#priv_drop) privilege at the server level on the target, in order to drop tables that might already exist. For example, when retrying a migration. + * ["REFERENCES"](https://dev.mysql.com/doc/refman/8.0/en/privileges-provided.html#priv_references) privilege at the server level on the target, in order to create tables with foreign keys. + * If migrating to MySQL 8.0, the user must have the ["SESSION_VARIABLES_ADMIN"](https://dev.mysql.com/doc/refman/8.0/en/privileges-provided.html#priv_session-variables-admin) privilege on the target server. + * ["CREATE"](https://dev.mysql.com/doc/refman/8.0/en/privileges-provided.html#priv_create) privilege at the server level on the target. + * ["INSERT"](https://dev.mysql.com/doc/refman/8.0/en/privileges-provided.html#priv_insert) privilege at the server level on the target. + * ["UPDATE"](https://dev.mysql.com/doc/refman/8.0/en/privileges-provided.html#priv_update) privilege at the server level on the target. 
+ * ["DELETE"](https://dev.mysql.com/doc/refman/8.0/en/privileges-provided.html#priv_delete) privilege at the server level on the target. ## Sizing the target Azure Database for MySQL instance |
dms | Tutorial Mysql Azure Single To Flex Offline Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/tutorial-mysql-azure-single-to-flex-offline-portal.md | To complete this tutorial, you need to: * Create or use an existing instance of Azure Database for MySQL – Single Server (the source server). * To complete a schema migration successfully, on the source server, the user performing the migration requires the following privileges:- * "READ" privilege on the source database. - * "SELECT" privilege for the ability to select objects from the database - * If migrating views, user must have the "SHOW VIEW" privilege. - * If migrating triggers, user must have the "TRIGGER" privilege. - * If migrating routines (procedures and/or functions), the user must be named in the definer clause of the routine. Alternatively, based on version, the user must have the following privilege: - * For 5.7, have "SELECT" access to the "mysql.proc" table. - * For 8.0, have "SHOW_ROUTINE" privilege or have the "CREATE ROUTINE," "ALTER ROUTINE," or "EXECUTE" privilege granted at a scope that includes the routine. - * If migrating events, the user must have the "EVENT" privilege for the database from which the event is to be shown. + * ["SELECT"](https://dev.mysql.com/doc/refman/8.0/en/privileges-provided.html#priv_select) privilege at the server level on the source. + * If migrating views, user must have the ["SHOW VIEW"](https://dev.mysql.com/doc/refman/8.0/en/privileges-provided.html#priv_show-view) privilege on the source server and the ["CREATE VIEW"](https://dev.mysql.com/doc/refman/8.0/en/privileges-provided.html#priv_create-view) privilege on the target server. + * If migrating triggers, user must have the ["TRIGGER"](https://dev.mysql.com/doc/refman/8.0/en/privileges-provided.html#priv_trigger) privilege on the source and target server. 
+ * If migrating routines (procedures and/or functions), the user must have the ["CREATE ROUTINE"](https://dev.mysql.com/doc/refman/8.0/en/privileges-provided.html#priv_create-routine) and ["ALTER ROUTINE"](https://dev.mysql.com/doc/refman/8.0/en/privileges-provided.html#priv_alter-routine) privileges granted at the server level on the target. + * If migrating events, the user must have the ["EVENT"](https://dev.mysql.com/doc/refman/8.0/en/privileges-provided.html#priv_event) privilege on the source and target server. + * If migrating users/logins, the user must have the ["CREATE USER"](https://dev.mysql.com/doc/refman/8.0/en/privileges-provided.html#priv_create-user) privilege on the target server. + * ["DROP"](https://dev.mysql.com/doc/refman/8.0/en/privileges-provided.html#priv_drop) privilege at the server level on the target, in order to drop tables that might already exist. For example, when retrying a migration. + * ["REFERENCES"](https://dev.mysql.com/doc/refman/8.0/en/privileges-provided.html#priv_references) privilege at the server level on the target, in order to create tables with foreign keys. + * If migrating to MySQL 8.0, the user must have the ["SESSION_VARIABLES_ADMIN"](https://dev.mysql.com/doc/refman/8.0/en/privileges-provided.html#priv_session-variables-admin) privilege on the target server. + * ["CREATE"](https://dev.mysql.com/doc/refman/8.0/en/privileges-provided.html#priv_create) privilege at the server level on the target. + * ["INSERT"](https://dev.mysql.com/doc/refman/8.0/en/privileges-provided.html#priv_insert) privilege at the server level on the target. + * ["UPDATE"](https://dev.mysql.com/doc/refman/8.0/en/privileges-provided.html#priv_update) privilege at the server level on the target. + * ["DELETE"](https://dev.mysql.com/doc/refman/8.0/en/privileges-provided.html#priv_delete) privilege at the server level on the target. ## Limitations |
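The privilege lists in these DMS prerequisites can be granted with a few statements. A sketch under stated assumptions: the account name `dms_user` and the `'%'` host are hypothetical placeholders, `<strong-password>` must be replaced, the source statements run on the source server, and the target statements run on the target flexible server:

```sql
-- Hypothetical migration account; substitute your own user, host, and password.
CREATE USER IF NOT EXISTS 'dms_user'@'%' IDENTIFIED BY '<strong-password>';

-- On the source server: read access to data and object definitions.
GRANT SELECT, SHOW VIEW, TRIGGER, EVENT ON *.* TO 'dms_user'@'%';

-- On the target server: privileges to recreate the schema and load data.
GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, REFERENCES, CREATE VIEW,
      TRIGGER, EVENT, CREATE ROUTINE, ALTER ROUTINE, CREATE USER
      ON *.* TO 'dms_user'@'%';

-- MySQL 8.0 targets only: dynamic privilege for setting session variables.
GRANT SESSION_VARIABLES_ADMIN ON *.* TO 'dms_user'@'%';
```

For online migrations, the source account additionally needs `REPLICATION CLIENT` and `REPLICATION SLAVE`, which are likewise global (`ON *.*`) grants.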
dms | Tutorial Mysql Azure Single To Flex Online Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/tutorial-mysql-azure-single-to-flex-online-portal.md | To complete this tutorial, you need to: * Ensure that the user has "REPLICATION CLIENT" and "REPLICATION SLAVE" permissions on the source server for reading and applying the bin log. * If you're targeting an online migration, configure the binlog_expire_logs_seconds parameter on the source server to ensure that binlog files aren't purged before the replica commits the changes. We recommend at least two days to start. After a successful cutover, you can reset the value. * To complete a schema migration successfully, on the source server, the user performing the migration requires the following privileges:- * "READ" privilege on the source database. - * "SELECT" privilege for the ability to select objects from the database - * If migrating views, the user must have the "SHOW VIEW" privilege. - * If migrating triggers, the user must have the "TRIGGER" privilege. - * If migrating routines (procedures and/or functions), the user must be named in the definer clause of the routine. Alternatively, based on version, the user must have the following privilege: - * For 5.7, have "SELECT" access to the "mysql.proc" table. - * For 8.0, have "SHOW_ROUTINE" privilege or have the "CREATE ROUTINE," "ALTER ROUTINE," or "EXECUTE" privilege granted at a scope that includes the routine. - * If migrating events, the user must have the "EVENT" privilege for the database from which the events are to be shown. + * ["SELECT"](https://dev.mysql.com/doc/refman/8.0/en/privileges-provided.html#priv_select) privilege at the server level on the source. 
+ * If migrating views, user must have the ["SHOW VIEW"](https://dev.mysql.com/doc/refman/8.0/en/privileges-provided.html#priv_show-view) privilege on the source server and the ["CREATE VIEW"](https://dev.mysql.com/doc/refman/8.0/en/privileges-provided.html#priv_create-view) privilege on the target server. + * If migrating triggers, user must have the ["TRIGGER"](https://dev.mysql.com/doc/refman/8.0/en/privileges-provided.html#priv_trigger) privilege on the source and target server. + * If migrating routines (procedures and/or functions), the user must have the ["CREATE ROUTINE"](https://dev.mysql.com/doc/refman/8.0/en/privileges-provided.html#priv_create-routine) and ["ALTER ROUTINE"](https://dev.mysql.com/doc/refman/8.0/en/privileges-provided.html#priv_alter-routine) privileges granted at the server level on the target. + * If migrating events, the user must have the ["EVENT"](https://dev.mysql.com/doc/refman/8.0/en/privileges-provided.html#priv_event) privilege on the source and target server. + * If migrating users/logins, the user must have the ["CREATE USER"](https://dev.mysql.com/doc/refman/8.0/en/privileges-provided.html#priv_create-user) privilege on the target server. + * ["DROP"](https://dev.mysql.com/doc/refman/8.0/en/privileges-provided.html#priv_drop) privilege at the server level on the target, in order to drop tables that might already exist. For example, when retrying a migration. + * ["REFERENCES"](https://dev.mysql.com/doc/refman/8.0/en/privileges-provided.html#priv_references) privilege at the server level on the target, in order to create tables with foreign keys. + * If migrating to MySQL 8.0, the user must have the ["SESSION_VARIABLES_ADMIN"](https://dev.mysql.com/doc/refman/8.0/en/privileges-provided.html#priv_session-variables-admin) privilege on the target server. + * ["CREATE"](https://dev.mysql.com/doc/refman/8.0/en/privileges-provided.html#priv_create) privilege at the server level on the target. 
+ * ["INSERT"](https://dev.mysql.com/doc/refman/8.0/en/privileges-provided.html#priv_insert) privilege at the server level on the target. + * ["UPDATE"](https://dev.mysql.com/doc/refman/8.0/en/privileges-provided.html#priv_update) privilege at the server level on the target. + * ["DELETE"](https://dev.mysql.com/doc/refman/8.0/en/privileges-provided.html#priv_delete) privilege at the server level on the target. ## Limitations |
event-grid | Partner Events Overview For Partners | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/partner-events-overview-for-partners.md | Last updated 04/10/2023 # Partner Events overview for partners - Azure Event Grid-Event Grid's **Partner Events** allows customers to **subscribe to events** that originate in a registered system using the same mechanism they would use for any other event source on Azure, such as an Azure service. Those registered systems integrate with Event Grid are known as "partners". This feature also enables customers to **send events** to partner systems that support receiving and routing events to customer's solutions/endpoints in their platform. Typically, partners are software-as-a-service (SaaS) or [ERP](https://en.wikipedia.org/wiki/Enterprise_resource_planning) providers, but they might be corporate platforms wishing to make their events available to internal teams. They purposely integrate with Event Grid to realize end-to-end customer use cases that end on Azure (customers subscribe to events sent by partner) or end on a partner system (customers subscribe to Microsoft events sent by Azure Event Grid). Customers bank on Azure Event Grid to send events published by a partner to supported destinations such as webhooks, Azure Functions, Azure Event Hubs, or Azure Service Bus, to name a few. Customers also rely on Azure Event Grid to route events that originate in Microsoft services, such as Azure Storage, Outlook, Teams, or Microsoft Entra ID, to partner systems where customer's solutions can react to them. With Partner Events, customers can build event-driven solutions across platforms and network boundaries to receive or send events reliably, securely and at a scale. +Event Grid's **Partner Events** allows customers to **subscribe to events** that originate in a registered system using the same mechanism they would use for any other event source on Azure, such as an Azure service. 
Those registered systems that integrate with Event Grid are known as "partners". This feature also enables customers to **send events** to partner systems that support receiving and routing events to customers' solutions/endpoints in their platform. Typically, partners are software-as-a-service (SaaS) or [ERP](https://en.wikipedia.org/wiki/Enterprise_resource_planning) providers, but they might be corporate platforms wishing to make their events available to internal teams. They purposely integrate with Event Grid to realize end-to-end customer use cases that end on Azure (customers subscribe to events sent by a partner) or end on a partner system (customers subscribe to Microsoft events sent by Azure Event Grid). Customers rely on Azure Event Grid to send events published by a partner to supported destinations such as webhooks, Azure Functions, Azure Event Hubs, or Azure Service Bus, to name a few. Customers also rely on Azure Event Grid to route events that originate in Microsoft services, such as Outlook, Teams, or Microsoft Entra ID, so that customers' solutions can react to them. With Partner Events, customers can build event-driven solutions across platforms and network boundaries to receive or send events reliably, securely, and at scale. > [!NOTE] > This is a conceptual article that's required reading before you decide to onboard as a partner to Azure Event Grid. For step-by-step instructions on how to onboard as an Event Grid partner using the Azure portal, see [How to onboard as an Event Grid partner (Azure portal)](onboard-partner.md). |
event-grid | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/policy-reference.md | Title: Built-in policy definitions for Azure Event Grid description: Lists Azure Policy built-in policy definitions for Azure Event Grid. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 01/30/2024 Last updated : 02/06/2024 |
event-grid | Powershell Webhook Secure Delivery Microsoft Entra App | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/scripts/powershell-webhook-secure-delivery-microsoft-entra-app.md | Title: Azure PowerShell - Secure WebHook delivery with Microsoft Entra Application in Azure Event Grid description: Describes how to deliver events to HTTPS endpoints protected by Microsoft Entra Application using Azure Event Grid ms.devlang: powershell-+ Previously updated : 10/14/2021 Last updated : 02/02/2024 # Secure WebHook delivery with Microsoft Entra Application in Azure Event Grid try { Function CreateAppRole([string] $Name, [string] $Description) {- $appRole = New-Object Microsoft.Open.AzureAD.Model.AppRole + $appRole = New-Object Microsoft.Graph.PowerShell.Models.MicrosoftGraphAppRole $appRole.AllowedMemberTypes = New-Object System.Collections.Generic.List[string]- $appRole.AllowedMemberTypes.Add("Application"); - $appRole.AllowedMemberTypes.Add("User"); + $appRole.AllowedMemberTypes += "Application"; + $appRole.AllowedMemberTypes += "User"; $appRole.DisplayName = $Name $appRole.Id = New-Guid $appRole.IsEnabled = $true try { return $appRole } - # Creates Azure Event Grid Azure AD Application if not exists + # Creates Azure Event Grid Microsoft Entra Application if not exists # You don't need to modify this id- # But Azure Event Grid Azure AD Application Id is different for different clouds + # But Azure Event Grid Entra Application Id is different for different clouds $eventGridAppId = "4962773b-9cdb-44cf-a8bf-237846a00ab7" # Azure Public Cloud # $eventGridAppId = "54316b56-3481-47f9-8f30-0300f5542a7b" # Azure Government Cloud- $eventGridRoleName = "AzureEventGridSecureWebhookSubscriber" # You don't need to modify this role name - $eventGridSP = Get-AzureADServicePrincipal -Filter ("appId eq '" + $eventGridAppId + "'") - if ($eventGridSP -match "Microsoft.EventGrid") + $eventGridSP = Get-MgServicePrincipal -Filter ("appId eq '" + 
$eventGridAppId + "'") + if ($eventGridSP.DisplayName -match "Microsoft.EventGrid") {- Write-Host "The Azure AD Application is already defined.`n" + Write-Host "The Event Grid Microsoft Entra Application is already defined.`n" } else {- Write-Host "Creating the Azure Event Grid Azure AD Application" - $eventGridSP = New-AzureADServicePrincipal -AppId $eventGridAppId + Write-Host "Creating the Azure Event Grid Microsoft Entra Application" + $eventGridSP = New-MgServicePrincipal -AppId $eventGridAppId } - # Creates the Azure app role for the webhook Azure AD application -- $app = Get-AzureADApplication -ObjectId $webhookAppObjectId + # Creates the Azure app role for the webhook Microsoft Entra application + $eventGridRoleName = "AzureEventGridSecureWebhookSubscriber" # You don't need to modify this role name + $app = Get-MgApplication -ApplicationId $webhookAppObjectId $appRoles = $app.AppRoles - Write-Host "Azure AD App roles before addition of the new role..." - Write-Host $appRoles + Write-Host "Microsoft Entra App roles before addition of the new role..." + Write-Host $appRoles.DisplayName - if ($appRoles -match $eventGridRoleName) + if ($appRoles.DisplayName -match $eventGridRoleName) { Write-Host "The Azure Event Grid role is already defined.`n" } else { - Write-Host "Creating the Azure Event Grid role in Azure AD Application: " $webhookAppObjectId + Write-Host "Creating the Azure Event Grid role in Microsoft Entra Application: " $webhookAppObjectId $newRole = CreateAppRole -Name $eventGridRoleName -Description "Azure Event Grid Role"- $appRoles.Add($newRole) - Set-AzureADApplication -ObjectId $app.ObjectId -AppRoles $appRoles + $appRoles += $newRole + Update-MgApplication -ApplicationId $webhookAppObjectId -AppRoles $appRoles } - Write-Host "Azure AD App roles after addition of the new role..." - Write-Host $appRoles + Write-Host "Microsoft Entra App roles after addition of the new role..." 
+ Write-Host $appRoles.DisplayName # Creates the user role assignment for the app that will create event subscription - $servicePrincipal = Get-AzureADServicePrincipal -Filter ("appId eq '" + $app.AppId + "'") - $eventSubscriptionWriterSP = Get-AzureADServicePrincipal -Filter ("appId eq '" + $eventSubscriptionWriterAppId + "'") + $servicePrincipal = Get-MgServicePrincipal -Filter ("appId eq '" + $app.AppId + "'") + $eventSubscriptionWriterSP = Get-MgServicePrincipal -Filter ("appId eq '" + $eventSubscriptionWriterAppId + "'") if ($null -eq $eventSubscriptionWriterSP) {- Write-Host "Create new Azure AD Application" - $eventSubscriptionWriterSP = New-AzureADServicePrincipal -AppId $eventSubscriptionWriterAppId + Write-Host "Create new Microsoft Entra Application" + $eventSubscriptionWriterSP = New-MgServicePrincipal -AppId $eventSubscriptionWriterAppId } try {- Write-Host "Creating the Azure AD Application role assignment: " $eventSubscriptionWriterAppId + Write-Host "Creating the Microsoft Entra Application role assignment: " $eventSubscriptionWriterAppId $eventGridAppRole = $app.AppRoles | Where-Object -Property "DisplayName" -eq -Value $eventGridRoleName- New-AzureADServiceAppRoleAssignment -Id $eventGridAppRole.Id -ResourceId $servicePrincipal.ObjectId -ObjectId $eventSubscriptionWriterSP.ObjectId -PrincipalId $eventSubscriptionWriterSP.ObjectId + New-MgServicePrincipalAppRoleAssignment -ServicePrincipalId $eventSubscriptionWriterSP.Id -PrincipalId $eventSubscriptionWriterSP.Id -ResourceId $servicePrincipal.Id -AppRoleId $eventGridAppRole.Id } catch { if( $_.Exception.Message -like '*Permission being assigned already exists on the object*') {- Write-Host "The Azure AD Application role is already defined.`n" + Write-Host "The Microsoft Entra Application role is already defined.`n" } else { try { Break } - # Creates the service app role assignment for Event Grid Azure AD Application + # Creates the service app role assignment for Event Grid Microsoft Entra 
Application $eventGridAppRole = $app.AppRoles | Where-Object -Property "DisplayName" -eq -Value $eventGridRoleName- New-AzureADServiceAppRoleAssignment -Id $eventGridAppRole.Id -ResourceId $servicePrincipal.ObjectId -ObjectId $eventGridSP.ObjectId -PrincipalId $eventGridSP.ObjectId + New-MgServicePrincipalAppRoleAssignment -ServicePrincipalId $eventGridSP.Id -PrincipalId $eventGridSP.Id -ResourceId $servicePrincipal.Id -AppRoleId $eventGridAppRole.Id # Print output references for backup - Write-Host ">> Webhook's Azure AD Application Id: $($app.AppId)" - Write-Host ">> Webhook's Azure AD Application ObjectId Id: $($app.ObjectId)" + Write-Host ">> Webhook's Microsoft Entra Application Id: $($app.AppId)" + Write-Host ">> Webhook's Microsoft Entra Application Object Id: $($app.Id)" } catch { Write-Host ">> Exception:" catch { ## Script explanation -For more details refer to [Secure WebHook delivery with Microsoft Entra ID in Azure Event Grid](../secure-webhook-delivery.md) +For more information, see [Secure WebHook delivery with Microsoft Entra ID in Azure Event Grid](../secure-webhook-delivery.md). |
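The migrated script above replaces retired AzureAD-module cmdlets with their Microsoft Graph PowerShell equivalents (`Get-MgApplication`, `Update-MgApplication`, `New-MgServicePrincipalAppRoleAssignment`, and so on), which require an authenticated Graph session rather than an AzureAD one. A minimal setup sketch; the exact delegated scopes are an assumption here and depend on what your tenant administrator allows:

```azurepowershell
# One-time install of the Microsoft Graph PowerShell module.
Install-Module Microsoft.Graph -Scope CurrentUser

# Sign in with delegated scopes broad enough for the script's
# application reads/writes and app role assignments (assumed scopes).
Connect-MgGraph -Scopes "Application.ReadWrite.All", "AppRoleAssignment.ReadWrite.All"
```

Without a `Connect-MgGraph` session holding write permissions on applications and app role assignments, the `Update-MgApplication` and `New-Mg*AppRoleAssignment` calls in the script fail with authorization errors.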
event-grid | Powershell Webhook Secure Delivery Microsoft Entra User | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/scripts/powershell-webhook-secure-delivery-microsoft-entra-user.md | Title: Azure PowerShell - Secure WebHook delivery with Microsoft Entra user in Azure Event Grid description: Describes how to deliver events to HTTPS endpoints protected by Microsoft Entra user using Azure Event Grid ms.devlang: powershell-+ Previously updated : 09/29/2021 Last updated : 02/02/2024 # Secure WebHook delivery with Microsoft Entra user in Azure Event Grid Here are the high-level steps from the script: 1. Create a service principal for **Microsoft.EventGrid** if it doesn't already exist. 1. Create a role named **AzureEventGridSecureWebhookSubscriber** in the **Microsoft Entra app for your Webhook**.-1. Add service principal of user who will be creating the subscription to the AzureEventGridSecureWebhookSubscriber role. +1. Add the service principal of the user who creates the subscription to the AzureEventGridSecureWebhookSubscriber role. 1. Add the service principal of Microsoft.EventGrid to the AzureEventGridSecureWebhookSubscriber role. -## Sample script - stable +## Sample script ```azurepowershell # NOTE: Before running this script, ensure you're signed in to Azure by using the "az login" command. 
-$webhookAppObjectId = "[REPLACE_WITH_YOUR_ID]" +$webhookAppObjectId = "[REPLACE_WITH_YOUR_ID]" $eventSubscriptionWriterUserPrincipalName = "[REPLACE_WITH_USER_PRINCIPAL_NAME_OF_THE_USER_WHO_WILL_CREATE_THE_SUBSCRIPTION]" # Start execution try { Function CreateAppRole([string] $Name, [string] $Description) {- $appRole = New-Object Microsoft.Open.AzureAD.Model.AppRole + $appRole = New-Object Microsoft.Graph.PowerShell.Models.MicrosoftGraphAppRole $appRole.AllowedMemberTypes = New-Object System.Collections.Generic.List[string]- $appRole.AllowedMemberTypes.Add("Application"); - $appRole.AllowedMemberTypes.Add("User"); + $appRole.AllowedMemberTypes += "Application"; + $appRole.AllowedMemberTypes += "User"; $appRole.DisplayName = $Name $appRole.Id = New-Guid $appRole.IsEnabled = $true try { return $appRole } - # Creates Azure Event Grid Azure AD Application if not exists + # Creates Azure Event Grid Microsoft Entra Application if not exists # You don't need to modify this id- # But Azure Event Grid Azure AD Application Id is different for different clouds + # But Azure Event Grid Microsoft Entra Application Id is different for different clouds $eventGridAppId = "4962773b-9cdb-44cf-a8bf-237846a00ab7" # Azure Public Cloud # $eventGridAppId = "54316b56-3481-47f9-8f30-0300f5542a7b" # Azure Government Cloud- $eventGridRoleName = "AzureEventGridSecureWebhookSubscriber" # You don't need to modify this role name - $eventGridSP = Get-AzureADServicePrincipal -Filter ("appId eq '" + $eventGridAppId + "'") - if ($eventGridSP -match "Microsoft.EventGrid") + $eventGridSP = Get-MgServicePrincipal -Filter ("appId eq '" + $eventGridAppId + "'") + if ($eventGridSP.DisplayName -match "Microsoft.EventGrid") {- Write-Host "The Azure AD Application is already defined.`n" + Write-Host "The Event Grid Microsoft Entra Application is already defined.`n" } else {- Write-Host "Creating the Azure Event Grid Azure AD Application" - $eventGridSP = New-AzureADServicePrincipal -AppId $eventGridAppId + 
Write-Host "Creating the Azure Event Grid Microsoft Entra Application" + $eventGridSP = New-MgServicePrincipal -AppId $eventGridAppId } - # Creates the Azure app role for the webhook Azure AD application + # Creates the Azure app role for the webhook Microsoft Entra application + $eventGridRoleName = "AzureEventGridSecureWebhookSubscriber" # You don't need to modify this role name - $app = Get-AzureADApplication -ObjectId $webhookAppObjectId + $app = Get-MgApplication -ApplicationId $webhookAppObjectId $appRoles = $app.AppRoles - Write-Host "Azure AD App roles before addition of the new role..." - Write-Host $appRoles + Write-Host "Microsoft Entra App roles before addition of the new role..." + Write-Host $appRoles.DisplayName - if ($appRoles -match $eventGridRoleName) + if ($appRoles.DisplayName -match $eventGridRoleName) { Write-Host "The Azure Event Grid role is already defined.`n" } else { - Write-Host "Creating the Azure Event Grid role in Azure AD Application: " $webhookAppObjectId + Write-Host "Creating the Azure Event Grid role in Microsoft Entra Application: " $webhookAppObjectId $newRole = CreateAppRole -Name $eventGridRoleName -Description "Azure Event Grid Role"- $appRoles.Add($newRole) - Set-AzureADApplication -ObjectId $app.ObjectId -AppRoles $appRoles + $appRoles += $newRole + Update-MgApplication -ApplicationId $webhookAppObjectId -AppRoles $appRoles } - Write-Host "Azure AD App roles after addition of the new role..." - Write-Host $appRoles + Write-Host "Microsoft Entra App roles after addition of the new role..." 
+ Write-Host $appRoles.DisplayName # Creates the user role assignment for the user who will create event subscription - $servicePrincipal = Get-AzureADServicePrincipal -Filter ("appId eq '" + $app.AppId + "'") + $servicePrincipal = Get-MgServicePrincipal -Filter ("appId eq '" + $app.AppId + "'") try {- Write-Host "Creating the Azure Ad App Role assignment for user: " $eventSubscriptionWriterUserPrincipalName - $eventSubscriptionWriterUser = Get-AzureAdUser -ObjectId $eventSubscriptionWriterUserPrincipalName + Write-Host "Creating the Microsoft Entra App Role assignment for user: " $eventSubscriptionWriterUserPrincipalName + $eventSubscriptionWriterUser = Get-MgUser -UserId $eventSubscriptionWriterUserPrincipalName $eventGridAppRole = $app.AppRoles | Where-Object -Property "DisplayName" -eq -Value $eventGridRoleName- New-AzureADUserAppRoleAssignment -Id $eventGridAppRole.Id -ResourceId $servicePrincipal.ObjectId -ObjectId $eventSubscriptionWriterUser.ObjectId -PrincipalId $eventSubscriptionWriterUser.ObjectId + New-MgUserAppRoleAssignment -UserId $eventSubscriptionWriterUser.Id -PrincipalId $eventSubscriptionWriterUser.Id -ResourceId $servicePrincipal.Id -AppRoleId $eventGridAppRole.Id } catch { if( $_.Exception.Message -like '*Permission being assigned already exists on the object*') {- Write-Host "The Azure AD User Application role is already defined.`n" + Write-Host "The Microsoft Entra User Application role is already defined.`n" } else { try { Break } - # Creates the service app role assignment for Event Grid Azure AD Application + # Creates the service app role assignment for Event Grid Microsoft Entra Application $eventGridAppRole = $app.AppRoles | Where-Object -Property "DisplayName" -eq -Value $eventGridRoleName- New-AzureADServiceAppRoleAssignment -Id $eventGridAppRole.Id -ResourceId $servicePrincipal.ObjectId -ObjectId $eventGridSP.ObjectId -PrincipalId $eventGridSP.ObjectId + New-MgServicePrincipalAppRoleAssignment -ServicePrincipalId $eventGridSP.Id 
-PrincipalId $eventGridSP.Id -ResourceId $servicePrincipal.Id -AppRoleId $eventGridAppRole.Id # Print output references for backup - Write-Host ">> Webhook's Azure AD Application Id: $($app.AppId)" - Write-Host ">> Webhook's Azure AD Application ObjectId Id: $($app.ObjectId)" + Write-Host ">> Webhook's Microsoft Entra Application Id: $($app.AppId)" + Write-Host ">> Webhook's Microsoft Entra Application Object Id: $($app.Id)" } catch { Write-Host ">> Exception:" catch { ## Script explanation -For more details refer to [Secure WebHook delivery with Microsoft Entra ID in Azure Event Grid](../secure-webhook-delivery.md) +For more information, see [Secure WebHook delivery with Microsoft Entra ID in Azure Event Grid](../secure-webhook-delivery.md). |
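The migrated script above creates an `AzureEventGridSecureWebhookSubscriber` app role on the webhook application with `Update-MgApplication`. As a rough sketch of the result (the `id` GUID is a placeholder; the script generates one with `New-Guid`), the entry added to the application manifest's `appRoles` collection looks like this:

```json
{
  "appRoles": [
    {
      "allowedMemberTypes": [ "Application", "User" ],
      "description": "Azure Event Grid Role",
      "displayName": "AzureEventGridSecureWebhookSubscriber",
      "id": "00000000-0000-0000-0000-000000000000",
      "isEnabled": true,
      "value": null
    }
  ]
}
```

Checking the app manifest in the portal after running the script is a quick way to confirm the role was written.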
event-grid | Secure Webhook Delivery | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/secure-webhook-delivery.md | Title: Secure WebHook delivery with Microsoft Entra ID in Azure Event Grid description: Describes how to deliver events to HTTPS endpoints protected by Microsoft Entra ID using Azure Event Grid - Previously updated : 10/12/2022+ Last updated : 02/02/2024 # Deliver events to Microsoft Entra protected endpoints There are two subsections in this section. Read through both the scenarios or th This section shows how to configure the event subscription by using a Microsoft Entra user. -1. Create a Microsoft Entra application for the webhook configured to work with the Microsoft directory (single tenant). +1. Create a Microsoft Entra application for the webhook configured to work with the Microsoft Entra directory (single tenant). 2. Open the [Azure Shell](https://portal.azure.com/#cloudshell/) in the tenant and select the PowerShell environment. This section shows how to configure the event subscription by using a Microsoft - **$webhookAadTenantId**: Azure tenant ID ```Shell- PS /home/user>$webhookAadTenantId = "[REPLACE_WITH_YOUR_TENANT_ID]" - PS /home/user>Connect-AzureAD -TenantId $webhookAadTenantId + $webhookAadTenantId = "[REPLACE_WITH_YOUR_TENANT_ID]" + Connect-MgGraph -TenantId $webhookAadTenantId -Scopes "Application.ReadWrite.All, AppRoleAssignment.ReadWrite.All" ``` 4. Open the [following script](scripts/powershell-webhook-secure-delivery-microsoft-entra-user.md) and update the values of **$webhookAppObjectId** and **$eventSubscriptionWriterUserPrincipalName** with your identifiers, then continue to run the script. This section shows how to configure the event subscription by using a Microsoft If you see the following error message, you need to elevate to the service principal. An extra access check has been introduced as part of create or update of event subscription on March 30, 2021 to address a security vulnerability. 
The subscriber client's service principal needs to be either an owner or have a role assigned on the destination application service principal. ```- New-AzureADServiceAppRoleAssignment: Error occurred while executing NewServicePrincipalAppRoleAssignment + New-MgServicePrincipalAppRoleAssignment: Error occurred while executing NewServicePrincipalAppRoleAssignment Code: Authorization_RequestDenied Message: Insufficient privileges to complete the operation. ``` This section shows how to configure the event subscription by using a Microsoft This section shows how to configure the event subscription by using a Microsoft Entra application. -1. Create a Microsoft Entra application for the Event Grid subscription writer configured to work with the Microsoft directory (Single tenant). +1. Create a Microsoft Entra application for the Event Grid subscription writer configured to work with the Microsoft Entra directory (Single tenant). 2. Create a secret for the Microsoft Entra application and save the value (you need this value later). 3. Go to the **Access control (IAM)** page for the Event Grid topic and assign **Event Grid Contributor** role to the Event Grid subscription writer app. This step allows you to have access to the Event Grid resource when you sign in to Azure with the Microsoft Entra application by using Azure CLI. -4. Create a Microsoft Entra application for the webhook configured to work with the Microsoft directory (Single tenant). +4. Create a Microsoft Entra application for the webhook configured to work with the Microsoft Entra directory (Single tenant). 5. Open the [Azure Shell](https://portal.azure.com/#cloudshell/) in the tenant and select the PowerShell environment. 
This section shows how to configure the event subscription by using a Microsoft - **$webhookAadTenantId**: Azure tenant ID ```Shell- PS /home/user>$webhookAadTenantId = "[REPLACE_WITH_YOUR_TENANT_ID]" - PS /home/user>Connect-AzureAD -TenantId $webhookAadTenantId + $webhookAadTenantId = "[REPLACE_WITH_YOUR_TENANT_ID]" + Connect-MgGraph -TenantId $webhookAadTenantId -Scopes "Application.ReadWrite.All, AppRoleAssignment.ReadWrite.All" ``` 7. Open the [following script](scripts/powershell-webhook-secure-delivery-microsoft-entra-app.md) and update the values of **$webhookAppObjectId** and **$eventSubscriptionWriterAppId** with your identifiers, then continue to run the script. This section shows how to configure the event subscription by using a Microsoft 8. Sign in as the Event Grid subscription writer Microsoft Entra Application by running the command. ```azurecli- PS /home/user>az login --service-principal -u [REPLACE_WITH_EVENT_GRID_SUBSCRIPTION_WRITER_APP_ID] -p [REPLACE_WITH_EVENT_GRID_SUBSCRIPTION_WRITER_APP_SECRET_VALUE] --tenant [REPLACE_WITH_TENANT_ID] + az login --service-principal -u [REPLACE_WITH_EVENT_GRID_SUBSCRIPTION_WRITER_APP_ID] -p [REPLACE_WITH_EVENT_GRID_SUBSCRIPTION_WRITER_APP_SECRET_VALUE] --tenant [REPLACE_WITH_TENANT_ID] ``` 9. Create your subscription by running the command. 
```azurecli- PS /home/user>az eventgrid system-topic event-subscription create --name [REPLACE_WITH_SUBSCRIPTION_NAME] -g [REPLACE_WITH_RESOURCE_GROUP] --system-topic-name [REPLACE_WITH_SYSTEM_TOPIC] --endpoint [REPLACE_WITH_WEBHOOK_ENDPOINT] --event-delivery-schema [REPLACE_WITH_WEBHOOK_EVENT_SCHEMA] --azure-active-directory-tenant-id [REPLACE_WITH_TENANT_ID] --azure-active-directory-application-id-or-uri [REPLACE_WITH_APPLICATION_ID_FROM_SCRIPT] --endpoint-type webhook + az eventgrid system-topic event-subscription create --name [REPLACE_WITH_SUBSCRIPTION_NAME] -g [REPLACE_WITH_RESOURCE_GROUP] --system-topic-name [REPLACE_WITH_SYSTEM_TOPIC] --endpoint [REPLACE_WITH_WEBHOOK_ENDPOINT] --event-delivery-schema [REPLACE_WITH_WEBHOOK_EVENT_SCHEMA] --azure-active-directory-tenant-id [REPLACE_WITH_TENANT_ID] --azure-active-directory-application-id-or-uri [REPLACE_WITH_APPLICATION_ID_FROM_SCRIPT] --endpoint-type webhook ``` > [!NOTE] Based on the diagram, follow the next steps to configure both tenants. Do the following steps in **Tenant A**: -1. Create a Microsoft Entra application for the Event Grid subscription writer configured to work with any Microsoft Entra directory (multitenant). +1. Create a Microsoft Entra application for the Event Grid subscription writer configured to work with any Microsoft Entra directory (multitenant). 2. Create a secret for the Microsoft Entra application, and save the value (you need this value later). Do the following steps in **Tenant A**: Do the following steps in **Tenant B**: -1. Create a Microsoft Entra Application for the webhook configured to work with the Microsoft directory (single tenant). +1. Create a Microsoft Entra Application for the webhook configured to work with the Microsoft Entra directory (single tenant). 5. Open the [Azure Shell](https://portal.azure.com/#cloudshell/), and select the PowerShell environment. 6. Modify the **$webhookAadTenantId** value to connect to the **Tenant B**. 
- Variables: - **$webhookAadTenantId**: Azure Tenant ID for the **Tenant B** ```Shell- PS /home/user>$webhookAadTenantId = "[REPLACE_WITH_YOUR_TENANT_ID]" - PS /home/user>Connect-AzureAD -TenantId $webhookAadTenantId + $webhookAadTenantId = "[REPLACE_WITH_YOUR_TENANT_ID]" + Connect-MgGraph -TenantId $webhookAadTenantId -Scopes "Application.ReadWrite.All, AppRoleAssignment.ReadWrite.All" ``` 7. Open the [following script](scripts/powershell-webhook-secure-delivery-microsoft-entra-app.md), and update the values of **$webhookAppObjectId** and **$eventSubscriptionWriterAppId** with your identifiers, then continue to run the script. Do the following steps in **Tenant B**: If you see the following error message, you need to elevate to the service principal. An extra access check has been introduced as part of create or update of event subscription on March 30, 2021 to address a security vulnerability. The subscriber client's service principal needs to be either an owner or have a role assigned on the destination application service principal. ```- New-AzureADServiceAppRoleAssignment: Error occurred while executing NewServicePrincipalAppRoleAssignment + New-MgServicePrincipalAppRoleAssignment: Error occurred while executing NewServicePrincipalAppRoleAssignment Code: Authorization_RequestDenied Message: Insufficient privileges to complete the operation. ``` Back in **Tenant A**, do the following steps: 1. Open the [Azure Shell](https://portal.azure.com/#cloudshell/), and sign in as the Event Grid subscription writer Microsoft Entra Application by running the command. ```azurecli- PS /home/user>az login --service-principal -u [REPLACE_WITH_APP_ID] -p [REPLACE_WITH_SECRET_VALUE] --tenant [REPLACE_WITH_TENANT_ID] + az login --service-principal -u [REPLACE_WITH_APP_ID] -p [REPLACE_WITH_SECRET_VALUE] --tenant [REPLACE_WITH_TENANT_ID] ``` 2. Create your subscription by running the command. 
```azurecli- PS /home/user>az eventgrid system-topic event-subscription create --name [REPLACE_WITH_SUBSCRIPTION_NAME] -g [REPLACE_WITH_RESOURCE_GROUP] --system-topic-name [REPLACE_WITH_SYSTEM_TOPIC] --endpoint [REPLACE_WITH_WEBHOOK_ENDPOINT] --event-delivery-schema [REPLACE_WITH_WEBHOOK_EVENT_SCHEMA] --azure-active-directory-tenant-id [REPLACE_WITH_TENANT_B_ID] --azure-active-directory-application-id-or-uri [REPLACE_WITH_APPLICATION_ID_FROM_SCRIPT] --endpoint-type webhook + az eventgrid system-topic event-subscription create --name [REPLACE_WITH_SUBSCRIPTION_NAME] -g [REPLACE_WITH_RESOURCE_GROUP] --system-topic-name [REPLACE_WITH_SYSTEM_TOPIC] --endpoint [REPLACE_WITH_WEBHOOK_ENDPOINT] --event-delivery-schema [REPLACE_WITH_WEBHOOK_EVENT_SCHEMA] --azure-active-directory-tenant-id [REPLACE_WITH_TENANT_B_ID] --azure-active-directory-application-id-or-uri [REPLACE_WITH_APPLICATION_ID_FROM_SCRIPT] --endpoint-type webhook ``` > [!NOTE] |
event-hubs | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/policy-reference.md | Title: Built-in policy definitions for Azure Event Hubs description: Lists Azure Policy built-in policy definitions for Azure Event Hubs. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 01/30/2024 Last updated : 02/06/2024 |
governance | Built In Initiatives | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/built-in-initiatives.md | Title: List of built-in policy initiatives description: List built-in policy initiatives for Azure Policy. Categories include Regulatory Compliance, Guest Configuration, and more. Previously updated : 01/30/2024 Last updated : 02/06/2024 The name of each built-in links to the initiative definition source on the [!INCLUDE [azure-policy-reference-policysets-security-center](../../../../includes/policy/reference/bycat/policysets-security-center.md)] +## SQL +++## Synapse ++ ## Tags [!INCLUDE [azure-policy-reference-policysets-tags](../../../../includes/policy/reference/bycat/policysets-tags.md)] |
governance | Built In Policies | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/built-in-policies.md | Title: List of built-in policy definitions description: List built-in policy definitions for Azure Policy. Categories include Tags, Regulatory Compliance, Key Vault, Kubernetes, Guest Configuration, and more. Previously updated : 01/30/2024 Last updated : 02/06/2024 The name of each built-in links to the policy definition in the Azure portal. Us [!INCLUDE [azure-policy-reference-policies-sql-server](../../../../includes/policy/reference/bycat/policies-sql-server.md)] +## Stack HCI ++ ## Storage [!INCLUDE [azure-policy-reference-policies-storage](../../../../includes/policy/reference/bycat/policies-storage.md)] |
hdinsight-aks | Hdinsight On Aks Autoscale Clusters | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/hdinsight-on-aks-autoscale-clusters.md | The following table describes the cluster types that are compatible with the Aut |Workload |Load Based |Schedule Based| |-|-|-| |Flink |Planned |Yes|-|Trino |Planned |Yes**| +|Trino |Yes** |Yes**| |Spark |Yes** |Yes**| **Graceful decommissioning is configurable. |
hdinsight-aks | Trino Connect To Metastore | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/trino/trino-connect-to-metastore.md | The following example covers the addition of Hive catalog and metastore database **There are a few important sections you need to add to your cluster ARM template to configure the Hive catalog and Hive metastore database:** -- `secretsProfile` – It specifies Azure Key Vault and list of secrets to be used in Trino cluster, required to connect to external Hive metastore.-- `serviceConfigsProfiles` - It includes configuration for Trino catalogs. For more information, see [Add catalogs to existing cluster](trino-add-catalogs.md).-- `trinoProfile.catalogOptions.hive` – List of Hive or iceberg or delta catalogs with parameters of external Hive metastore database for each catalog. To use external metastore database, catalog must be present in this list.-+### Metastore configuration +Configure the external Hive metastore database in the `config.properties` file: +```json +{ + "fileName": "config.properties", + "values": { + "hive.metastore.hdi.metastoreDbConnectionURL": "jdbc:sqlserver://mysqlserver1.database.windows.net;database=myhmsdb1;encrypt=true;trustServerCertificate=true;create=false;loginTimeout=30", + "hive.metastore.hdi.metastoreDbConnectionUserName": "trinoadmin", + "hive.metastore.hdi.metastoreDbConnectionPasswordSecret": "hms-db-pwd", + "hive.metastore.hdi.metastoreWarehouseDir": "abfs://container1@myadlsgen2account1.dfs.core.windows.net/hive/warehouse" + } +} +``` +| Property| Description| Example| +|||| +|hive.metastore.hdi.metastoreDbConnectionURL|JDBC connection string to database.|jdbc:sqlserver://mysqlserver1.database.windows.net;database=myhmsdb1;encrypt=true;trustServerCertificate=true;create=false;loginTimeout=30| +|hive.metastore.hdi.metastoreDbConnectionUserName|SQL user name to connect to database.|trinoadmin| +|hive.metastore.hdi.metastoreDbConnectionPasswordSecret|Secret referenceName 
configured in secretsProfile with password.|hms-db-pwd| +|hive.metastore.hdi.metastoreWarehouseDir|ABFS URI to location in storage where data is stored.|`abfs://container1@myadlsgen2account1.dfs.core.windows.net/hive/warehouse`| +### Metastore authentication +Configure authentication to the external Hive metastore database by specifying Azure Key Vault secrets. +> [!NOTE] +> `referenceName` should match the value provided in `hive.metastore.hdi.metastoreDbConnectionPasswordSecret` +```json +"secretsProfile": { + "keyVaultResourceId": "/subscriptions/{USER_SUBSCRIPTION_ID}/resourceGroups/{USER_RESOURCE_GROUP}/providers/Microsoft.KeyVault/vaults/{USER_KEYVAULT_NAME}", + "secrets": [ + { + "referenceName": "hms-db-pwd", + "type": "secret", + "keyVaultObjectName": "hms-db-pwd" + } ] +}, +``` | Property| Description| Example| |||| |secretsProfile.keyVaultResourceId|Azure resource ID string to Azure Key Vault where secrets for Hive metastore are stored.|/subscriptions/0000000-0000-0000-0000-000000000000/resourceGroups/trino-rg/providers/Microsoft.KeyVault/vaults/trinoakv| |secretsProfile.secrets[*].referenceName|Unique reference name of the secret to use later in clusterProfile.|Secret1_ref| |secretsProfile.secrets[*].type|Type of object in Azure Key Vault, only "secret" is supported.|secret| |secretsProfile.secrets[*].keyVaultObjectName|Name of secret object in Azure Key Vault containing actual secret value.|secret1|-|trinoProfile.catalogOptions.hive|List of Hive or iceberg or delta catalogs with parameters of external Hive metastore database, require parameters for each. To use external metastore database, catalog must be present in this list. 
-|trinoProfile.catalogOptions.hive[*].catalogName|Name of Trino catalog configured in `serviceConfigsProfiles`, which configured to use external Hive metastore database.|hive1| -|trinoProfile.catalogOptions.hive[*].metastoreDbConnectionURL|JDBC connection string to database.|jdbc:sqlserver://mysqlserver1.database.windows.net;database=myhmsdb1;encrypt=true;trustServerCertificate=true;create=false;loginTimeout=30| -|trinoProfile.catalogOptions.hive[*].metastoreDbConnectionUserName|SQL user name to connect to database.|trinoadmin| -|trinoProfile.catalogOptions.hive[*].metastoreDbConnectionPasswordSecret|Secret referenceName configured in secretsProfile with password.|hms-db-pwd| -|trinoProfile.catalogOptions.hive[*].metastoreWarehouseDir|ABFS URI to location in storage where data is stored.|`abfs://container1@myadlsgen2account1.dfs.core.windows.net/hive/warehouse`| -To configure external Hive metastore to an existing Trino cluster, add the required sections in your cluster ARM template by referring to the following example: +### Catalog configuration +For a Trino catalog to use the external Hive metastore, it must specify the `hive.metastore=hdi` property. 
For more information, see [Add catalogs to existing cluster](trino-add-catalogs.md): +``` +{ + "fileName": "hive1.properties", + "values": { + "connector.name": "hive", + "hive.metastore": "hdi" + } +} +``` +### Complete example +To configure external Hive metastore to an existing Trino cluster, add the required sections in your cluster ARM template by referring to the following example: ```json { To configure external Hive metastore to an existing Trino cluster, add the requi { "serviceName": "trino", "configs": [+ { + "component": "common", + "files": [ + { + "fileName": "config.properties", + "values": { + "hive.metastore.hdi.metastoreDbConnectionURL": "jdbc:sqlserver://mysqlserver1.database.windows.net;database=myhmsdb1;encrypt=true;trustServerCertificate=true;create=false;loginTimeout=30", + "hive.metastore.hdi.metastoreDbConnectionUserName": "trinoadmin", + "hive.metastore.hdi.metastoreDbConnectionPasswordSecret": "hms-db-pwd", + "hive.metastore.hdi.metastoreWarehouseDir": "abfs://container1@myadlsgen2account1.dfs.core.windows.net/hive/warehouse" + } + } + ] + }, { "component": "catalogs", "files": [ { "fileName": "hive1.properties", "values": {- "connector.name": "hive" + "connector.name": "hive", + "hive.metastore": "hdi" } } ] } ] }- ], - "trinoProfile": { - "catalogOptions": { - "hive": [ - { - "catalogName": "hive1", - "metastoreDbConnectionURL": "jdbc:sqlserver://mysqlserver1.database.windows.net;database=myhmsdb1;encrypt=true;trustServerCertificate=true;create=false;loginTimeout=30", - "metastoreDbConnectionUserName": "trinoadmin", - "metastoreDbConnectionPasswordSecret": "hms-db-pwd", - "metastoreWarehouseDir": "abfs://container1@myadlsgen2account1.dfs.core.windows.net/hive/warehouse" - } - ] - } - } + ] } } } create schema hive1.schema1; create table hive1.schema1.tpchorders as select * from tpch.tiny.orders; select * from hive1.schema1.tpchorders limit 100; ```++## Alternative configuration +Alternatively external Hive metastore database parameters 
can be specified in `trinoProfile.catalogOptions.hive` together with `hive.metastore=hdi` catalog property: ++| Property| Description| Example| +|||| +|trinoProfile.catalogOptions.hive|List of Hive or iceberg or delta catalogs with parameters of external Hive metastore database; parameters are required for each. To use external metastore database, catalog must be present in this list. +|trinoProfile.catalogOptions.hive[*].catalogName|Name of Trino catalog configured in `serviceConfigsProfiles`, which is configured to use external Hive metastore database.|hive1| +|trinoProfile.catalogOptions.hive[*].metastoreDbConnectionURL|JDBC connection string to database.|jdbc:sqlserver://mysqlserver1.database.windows.net;database=myhmsdb1;encrypt=true;trustServerCertificate=true;create=false;loginTimeout=30| +|trinoProfile.catalogOptions.hive[*].metastoreDbConnectionUserName|SQL user name to connect to database.|trinoadmin| +|trinoProfile.catalogOptions.hive[*].metastoreDbConnectionPasswordSecret|Secret referenceName configured in secretsProfile with password.|hms-db-pwd| +|trinoProfile.catalogOptions.hive[*].metastoreWarehouseDir|ABFS URI to location in storage where data is stored.|`abfs://container1@myadlsgen2account1.dfs.core.windows.net/hive/warehouse`| ++### Complete example +```json +{ + "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#", + "contentVersion": "1.0.0.0", + "parameters": {}, + "resources": [ + { + "type": "microsoft.hdinsight/clusterpools/clusters", + "apiVersion": "<api-version>", + "name": "<cluster-pool-name>/<cluster-name>", + "location": "<region, e.g. 
westeurope>", + "tags": {}, + "properties": { + "clusterType": "Trino", ++ "clusterProfile": { + "secretsProfile": { + "keyVaultResourceId": "/subscriptions/{USER_SUBSCRIPTION_ID}/resourceGroups/{USER_RESOURCE_GROUP}/providers/Microsoft.KeyVault/vaults/{USER_KEYVAULT_NAME}", + "secrets": [ + { + "referenceName": "hms-db-pwd", + "type": "secret", + "keyVaultObjectName": "hms-db-pwd" + } ] + }, + "serviceConfigsProfiles": [ + { + "serviceName": "trino", + "configs": [ + { + "component": "catalogs", + "files": [ + { + "fileName": "hive1.properties", + "values": { + "connector.name": "hive", + "hive.metastore": "hdi" + } + } + ] + } + ] + } + ], + "trinoProfile": { + "catalogOptions": { + "hive": [ + { + "catalogName": "hive1", + "metastoreDbConnectionURL": "jdbc:sqlserver://mysqlserver1.database.windows.net;database=myhmsdb1;encrypt=true;trustServerCertificate=true;create=false;loginTimeout=30", + "metastoreDbConnectionUserName": "trinoadmin", + "metastoreDbConnectionPasswordSecret": "hms-db-pwd", + "metastoreWarehouseDir": "abfs://container1@myadlsgen2account1.dfs.core.windows.net/hive/warehouse" + } + ] + } + } + } + } + } + ] +} +``` |
hdinsight-aks | Trino Connectors | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/trino/trino-connectors.md | Trino in HDInsight on AKS enables seamless integration with data sources. You ca * [MongoDB](https://trino.io/docs/410/connector/mongodb.html) * [MySQL](https://trino.io/docs/410/connector/mysql.html) * [Oracle](https://trino.io/docs/410/connector/oracle.html)-* [Phoenix](https://trino.io/docs/410/connector/phoenix.html) * [PostgreSQL](https://trino.io/docs/410/connector/postgresql.html) * [Prometheus](https://trino.io/docs/410/connector/prometheus.html) * [Redis](https://trino.io/docs/410/connector/redis.html) |
hdinsight | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/policy-reference.md | Title: Built-in policy definitions for Azure HDInsight description: Lists Azure Policy built-in policy definitions for Azure HDInsight. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 01/30/2024 Last updated : 02/06/2024 |
healthcare-apis | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/policy-reference.md | Title: Built-in policy definitions for Azure API for FHIR description: Lists Azure Policy built-in policy definitions for Azure API for FHIR. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 01/30/2024 Last updated : 02/06/2024 |
healthcare-apis | Events Faqs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/events/events-faqs.md | -**Applies to:** [!INCLUDE [Yes icon](../includes/applies-to.md)][!INCLUDE [FHIR service](../includes/fhir-service.md)], [!INCLUDE [DICOM service](../includes/DICOM-service.md)] - Events let you subscribe to data changes in the FHIR® or DICOM® service and get notified through Azure Event Grid. You can use events to trigger workflows, automate tasks, send alerts, and more. In this FAQ, you'll find answers to some common questions about events. **Can I use events with a non-Microsoft FHIR or DICOM service?** |
healthcare-apis | Events Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/events/events-overview.md | -**Applies to:** [!INCLUDE [Yes icon](../includes/applies-to.md)][!INCLUDE [FHIR service](../includes/fhir-service.md)], [!INCLUDE [DICOM service](../includes/DICOM-service.md)] - Events in Azure Health Data Services allow you to subscribe to and receive notifications of changes to health data in the FHIR® service or the DICOM® service. Events also enable you to trigger other actions or services based on changes to health data, such as starting workflows, sending email, text messages, or alerts. Events are: |
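Because these notifications arrive through Azure Event Grid, each one is wrapped in an Event Grid event envelope. The following sketch shows the general shape of such an event for a FHIR resource creation; the `eventType` value and the `data` field names are illustrative assumptions here, and the bracketed values are placeholders, so check the service's event schema reference for the exact contract:

```json
{
  "id": "{event-id}",
  "topic": "/subscriptions/{subscription-id}/resourceGroups/{resource-group}/providers/Microsoft.HealthcareApis/workspaces/{workspace-name}",
  "subject": "{fhir-service}/Patient/{resource-id}",
  "eventType": "Microsoft.HealthcareApis.FhirResourceCreated",
  "eventTime": "2024-02-06T12:00:00Z",
  "data": {
    "resourceType": "Patient",
    "resourceFhirAccount": "{fhir-service}.fhir.azurehealthcareapis.com",
    "resourceFhirId": "{resource-id}",
    "resourceVersionId": 1
  },
  "dataVersion": "1"
}
```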
iot-central | Concepts Architecture | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/concepts-architecture.md | IoT Central can also control devices by calling commands on the device. For exam The telemetry, properties, and commands that a device implements are collectively known as the device capabilities. You define these capabilities in a model that's shared between the device and the IoT Central application. In IoT Central, this model is part of the device template that defines a specific type of device. To learn more, see [Assign a device to a device template](concepts-device-templates.md#assign-a-device-to-a-device-template). -The [device implementation](tutorial-connect-device.md) should follow the [IoT Plug and Play conventions](../../iot-develop/concepts-convention.md) to ensure that it can communicate with IoT Central. For more information, see the various language [SDKs and samples](../../iot-develop/about-iot-sdks.md). +The [device implementation](tutorial-connect-device.md) should follow the [IoT Plug and Play conventions](../../iot/concepts-convention.md) to ensure that it can communicate with IoT Central. For more information, see the various language [SDKs and samples](../../iot-develop/about-iot-sdks.md). Devices connect to IoT Central using one of the supported protocols: [MQTT, AMQP, or HTTP](../../iot-hub/iot-hub-devguide-protocols.md). |
iot-central | Concepts Device Implementation | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/concepts-device-implementation.md | An IoT Central device template includes a _model_ that specifies the behaviors a Each model has a unique _digital twin model identifier_ (DTMI), such as `dtmi:com:example:Thermostat;1`. When a device connects to IoT Central, it sends the DTMI of the model it implements. IoT Central can then assign the correct device template to the device. -[IoT Plug and Play](../../iot-develop/overview-iot-plug-and-play.md) defines a set of [conventions](../../iot-develop/concepts-convention.md) that a device should follow when it implements a Digital Twin Definition Language (DTDL) model. +[IoT Plug and Play](../../iot/overview-iot-plug-and-play.md) defines a set of [conventions](../../iot/concepts-convention.md) that a device should follow when it implements a Digital Twin Definition Language (DTDL) model. The [Azure IoT device SDKs](#device-sdks) include support for the IoT Plug and Play conventions. A DTDL model can be a _no-component_ or a _multi-component_ model: > [!TIP] > You can [import and export a complete device model or individual interface](howto-set-up-template.md#interfaces-and-components) from an IoT Central device template as a DTDL v2 file. -To learn more about device models, see the [IoT Plug and Play modeling guide](../../iot-develop/concepts-modeling-guide.md) +To learn more about device models, see the [IoT Plug and Play modeling guide](../../iot/concepts-modeling-guide.md) ### Conventions A device should follow the IoT Plug and Play conventions when it exchanges data > [!NOTE] > Currently, IoT Central does not fully support the DTDL **Array** and **Geospatial** data types. -To learn more about the IoT Plug and Play conventions, see [IoT Plug and Play conventions](../../iot-develop/concepts-convention.md). 
+To learn more about the IoT Plug and Play conventions, see [IoT Plug and Play conventions](../../iot/concepts-convention.md). -To learn more about the format of the JSON messages that a device exchanges with IoT Central, see [Telemetry, property, and command payloads](../../iot-develop/concepts-message-payloads.md). +To learn more about the format of the JSON messages that a device exchanges with IoT Central, see [Telemetry, property, and command payloads](../../iot/concepts-message-payloads.md). ### Device SDKs If the device gets any of the following errors when it connects, it should use a - Operator blocked device. - Internal error 500 from the service. -To learn more about device error codes, see [Troubleshooting device connections](troubleshoot-connection.md). +To learn more about device error codes, see [Troubleshooting device connections](troubleshooting.md). To learn more about implementing automatic reconnections, see [Manage device reconnections to create resilient applications](../../iot-develop/concepts-manage-device-reconnections.md). |
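The device-implementation article above centers on the DTDL model identified by a DTMI such as `dtmi:com:example:Thermostat;1`. As a minimal illustrative sketch (the interface contents are assumptions, not taken from the source), a no-component DTDL v2 model looks like:

```json
{
  "@context": "dtmi:dtdl:context;2",
  "@id": "dtmi:com:example:Thermostat;1",
  "@type": "Interface",
  "displayName": "Thermostat",
  "contents": [
    {
      "@type": [ "Telemetry", "Temperature" ],
      "name": "temperature",
      "schema": "double",
      "unit": "degreeCelsius"
    },
    {
      "@type": "Property",
      "name": "targetTemperature",
      "schema": "double",
      "writable": true
    }
  ]
}
```

A device sends this DTMI when it connects, which is how IoT Central picks the matching device template.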
iot-central | Concepts Device Templates | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/concepts-device-templates.md | -A solution builder adds device templates to an IoT Central application. A device developer writes the device code that implements the behaviors defined in the device template. To learn more about the data that a device exchanges with IoT Central, see [Telemetry, property, and command payloads](../../iot-develop/concepts-message-payloads.md). +A solution builder adds device templates to an IoT Central application. A device developer writes the device code that implements the behaviors defined in the device template. To learn more about the data that a device exchanges with IoT Central, see [Telemetry, property, and command payloads](../../iot/concepts-message-payloads.md). A device template includes the following sections: The JSON file that defines the device model uses the [Digital Twin Definition La ] ``` -To learn more about DTDL models, see the [IoT Plug and Play modeling guide](../../iot-develop/concepts-modeling-guide.md). +To learn more about DTDL models, see the [IoT Plug and Play modeling guide](../../iot/concepts-modeling-guide.md). > [!NOTE] > IoT Central defines some extensions to the DTDL v2 language. To learn more, see [IoT Central extension](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/DTDL.iotcentral.v2.md). A solution developer creates views that let operators monitor and manage connect ## Next steps -Now that you've learned about device templates, a suggested next step is to read [Telemetry, property, and command payloads](../../iot-develop/concepts-message-payloads.md) to learn more about the data a device exchanges with IoT Central. +Now that you've learned about device templates, a suggested next step is to read [Telemetry, property, and command payloads](../../iot/concepts-message-payloads.md) to learn more about the data a device exchanges with IoT Central. |
iot-central | Concepts Faq Apaas Paas | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/concepts-faq-apaas-paas.md | So that you can seamlessly migrate devices from your IoT Central applications to - The device must be an IoT Plug and Play device that uses a [Digital Twins Definition Language (DTDL) V2](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/DTDL.v2.md) model. IoT Central requires all devices to have a DTDL model. These models simplify the interoperability between an IoT PaaS solution and IoT Central. -- The device must follow the [IoT Plug and Play conventions](../../iot-develop/concepts-convention.md). +- The device must follow the [IoT Plug and Play conventions](../../iot/concepts-convention.md). - IoT Central uses the DPS to provision the devices. The PaaS solution must also use DPS to provision the devices. - The updatable DPS pattern ensures that the device can move seamlessly between IoT Central applications and the PaaS solution without any downtime. |
iot-central | Howto Connect Eflow | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-connect-eflow.md | - Title: Connect Azure IoT Edge for Linux on Windows (EFLOW) -description: Learn how to connect an Azure IoT Edge for Linux on Windows (EFLOW) device to an IoT Central application -- Previously updated : 11/27/2023-----# Connect an IoT Edge for Linux on Windows device to IoT Central --[Azure IoT Edge for Linux on Windows (EFLOW)](/windows/iot/iot-enterprise/azure-iot-edge-for-linux-on-windows) lets you run Azure IoT Edge in a Linux container on your Windows device. In this article, you learn how to provision an EFLOW device and manage it from your IoT Central application. --In this how-to article, you learn how to: --* Import a device manifest for an IoT Edge device. -* Create a device template for an IoT Edge device. -* Create an IoT Edge device in IoT Central. -* Connect and provision an EFLOW device. --## Prerequisites --To complete the steps in this article, you need: --* An active Azure subscription. If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin. --* An [IoT Central application created](howto-create-iot-central-application.md) from the **Custom application** template. To learn more, see [Create an IoT Central application](howto-create-iot-central-application.md). --* A Windows device that meets the following minimum requirements: -- * Windows 10<sup>1</sup>/11 (Pro, Enterprise, IoT Enterprise) or Windows Server 2019<sup>1</sup>/2022 - * Minimum free memory: 1 GB - * Minimum free disk space: 10 GB -- <sup>1</sup> Windows 10 and Windows Server 2019 minimum build 17763 with all current cumulative updates installed. 
--To follow the steps in this article, download the [EnvironmentalSensorManifest-1-4.json](https://raw.githubusercontent.com/Azure-Samples/iot-central-docs-samples/main/iotedge/EnvironmentalSensorManifest-1-4.json) file to your computer. --## Import a deployment manifest --You use a deployment manifest to specify the modules to run on an IoT Edge device. IoT Central manages the deployment manifests for the IoT Edge devices in your solution. To import the deployment manifest for this example: --1. In your IoT Central application, navigate to **Edge manifests**. --1. Select **+ New**. Enter a name such as *Environmental Sensor* for your deployment manifest, and then upload the *EnvironmentalSensorManifest-1-4.json* file you downloaded previously. --1. Select **Next** and then **Create**. --The example deployment manifest includes a custom module called *SimulatedTemperatureSensor*. --## Add device template --In this section, you create an IoT Central device template for an IoT Edge device. You import an IoT Edge manifest to get started, and then modify the template to add telemetry definitions and views: --### Create the device template and import the manifest --1. Create a device template and choose **Azure IoT Edge** as the template type. --1. On the **Customize** page of the wizard, enter a name such as *Environmental Sensor Edge Device* for the device template. --1. On the **Review** page, select **Create**. --1. On the **Create a model** page, select **Custom model**. --1. In the model, select **Modules** and then **Import modules from manifest**. Select the **Environmental Sensor** deployment manifest and then select **Import**. --1. Select the **management** interface in the **SimulatedTemperatureSensor** module to view the two properties defined in the manifest: ---### Add telemetry to the device template --An IoT Edge manifest doesn't define the telemetry a module sends. You add the telemetry definitions to the device template in IoT Central. 
The **SimulatedTemperatureSensor** module sends telemetry messages that look like the following JSON: --```json -{ - "machine": { - "temperature": 75.0, - "pressure": 40.2 - }, - "ambient": { - "temperature": 23.0, - "humidity": 30.0 - }, - "timeCreated": "" -} -``` --To add the telemetry definitions to the device template: --1. Select the **management** interface in the **Environmental Sensor Edge Device** template. --1. Select **+ Add capability**. Enter *machine* as the **Display name** and select the **Capability type** as **Telemetry**. --1. Select **Object** as the schema type, and then select **Define**. On the object definition page, add *temperature* and *pressure* as attributes of type **Double** and then select **Apply**. --1. Select **+ Add capability**. Enter *ambient* as the **Display name** and select the **Capability type** as **Telemetry**. --1. Select **Object** as the schema type, and then select **Define**. On the object definition page, add *temperature* and *humidity* as attributes of type **Double** and then select **Apply**. --1. Select **+ Add capability**. Enter *timeCreated* as the **Display name** and make sure that the **Capability type** is **Telemetry**. --1. Select **DateTime** as the schema type. --1. Select **Save** to update the template. --The **management** interface now includes the **machine**, **ambient**, and **timeCreated** telemetry types: ---### Add views to template --To enable an operator to view the telemetry from the device, define a view in the device template. --1. Select **Views** in the **Environmental Sensor Edge Device** template. --1. On the **Select to add a new view** page, select the **Visualizing the device** tile. --1. Change the view name to *View IoT Edge device telemetry*. --1. Under **Start with devices**, select the **ambient/temperature**, **ambient/humidity**, **machine/pressure**, and **machine/temperature** telemetry types. Then select **Add tile**. --1. 
Select **Save** to save the **View IoT Edge device telemetry** view. --### Publish the template --Before you can add a device that uses the **Environmental Sensor Edge Device** template, you must publish the template. --Navigate to the **Environmental Sensor Edge Device** template and select **Publish**. On the **Publish this device template to the application** panel, select **Publish** to publish the template --## Add an IoT Edge device --Before you can connect a device to IoT Central, you must register the device in your application: --1. In your IoT Central application, navigate to the **Devices** page and select **Environmental Sensor Edge Device** in the list of available templates. --1. Select **+ New** to add a new device from the template. --1. On the **Create new device** page, select the **Environmental Sensor** deployment manifest, and then select **Create**. --You now have a new device with the status **Registered**: ---### Get the device credentials --When you deploy the IoT Edge device later in this how-to article, you need the credentials that allow the device to connect to your IoT Central application. To get the device credentials: --1. On the **Device** page, select the device you created. --1. Select **Connect**. --1. On the **Device connection** page, make a note of the **ID Scope**, the **Device ID**, and the **Primary Key**. You use these values later. --1. Select **Close**. --You've now finished configuring your IoT Central application to enable an IoT Edge device to connect. --## Install and provision an EFLOW device --To install and provision your EFLOW device: --1. In an elevated PowerShell session, run the following commands to download IoT Edge for Linux on Windows. 
-- ```powershell - $msiPath = $([io.Path]::Combine($env:TEMP, 'AzureIoTEdge.msi')) - $ProgressPreference = 'SilentlyContinue' - Invoke-WebRequest "https://aka.ms/AzEFLOWMSI_1_4_LTS_X64" -OutFile $msiPath - ``` -- > [!TIP] - > The previous commands download an X64 image; for ARM64, use `https://aka.ms/AzEFLOWMSI_1_4_LTS_ARM64`. --1. Install IoT Edge for Linux on Windows on your device. -- ```powershell - Start-Process -Wait msiexec -ArgumentList "/i","$([io.Path]::Combine($env:TEMP, 'AzureIoTEdge.msi'))","/qn" - ``` -- > [!TIP] - > You can specify custom IoT Edge for Linux on Windows installation and VHDX directories by adding `INSTALLDIR="<FULLY_QUALIFIED_PATH>"` and `VHDXDIR="<FULLY_QUALIFIED_PATH>"` parameters to the install command. --1. Create the IoT Edge for Linux on Windows deployment. The deployment creates your Linux VM and installs the IoT Edge runtime for you. -- ```powershell - Deploy-Eflow - ``` --1. Use the **ID scope**, **Device ID**, and the **Primary Key** you made a note of previously. -- ```powershell - Provision-EflowVm -provisioningType DpsSymmetricKey -scopeId <ID_SCOPE_HERE> -registrationId <DEVICE_ID_HERE> -symmKey <PRIMARY_KEY_HERE> - ``` --To learn about other ways you can deploy and provision an EFLOW device, see [Install and provision Azure IoT Edge for Linux on a Windows device](../../iot-edge/how-to-install-iot-edge-on-windows.md). --Go to the **Device Details** page in your IoT Central application and you can see telemetry flowing from your EFLOW device: ---> [!TIP] -> You may need to wait several minutes for the IoT Edge device to start sending telemetry. --## Clean up resources --If you want to uninstall EFLOW from your device, use the following steps. --1. Open **Settings** on Windows -1. Select **Add or Remove Programs** -1. Select **Azure IoT Edge LTS** app -1. 
Select **Uninstall** --## Next steps --Now that you've learned how to connect an (EFLOW) device to IoT Central, the suggested next step is to learn how to [Connect devices through an IoT Edge transparent gateway](how-to-connect-iot-edge-transparent-gateway.md). |
iot-central | Howto Control Devices With Rest Api | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-control-devices-with-rest-api.md | To learn how to control devices by using the IoT Central UI, see ## Components and modules -Components let you group and reuse device capabilities. To learn more about components and device models, see the [IoT Plug and Play modeling guide](../../iot-develop/concepts-modeling-guide.md). +Components let you group and reuse device capabilities. To learn more about components and device models, see the [IoT Plug and Play modeling guide](../../iot/concepts-modeling-guide.md). Not all device templates use components. The following screenshot shows the device template for a simple [thermostat](https://github.com/Azure/iot-plugandplay-models/blob/main/dtmi/com/example/thermostat-2.json) where all the capabilities are defined in a single interface called the **Root component**: |
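To make the component concept above concrete: a DTDL v2 interface groups capabilities, and a component nests one interface inside another by reference. The following fragment is an invented illustration (the DTMIs and the `deviceInformation` name are not from the article):

```json
{
  "@context": "dtmi:dtdl:context;2",
  "@id": "dtmi:com:example:Thermostat;1",
  "@type": "Interface",
  "displayName": "Thermostat",
  "contents": [
    { "@type": "Telemetry", "name": "temperature", "schema": "double" },
    {
      "@type": "Component",
      "name": "deviceInformation",
      "schema": "dtmi:com:example:DeviceInformation;1"
    }
  ]
}
```

A simple device such as the thermostat in the screenshot defines everything directly in the root interface instead of using components.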
iot-central | Howto Create Custom Rules | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-create-custom-rules.md | - Title: Extend Azure IoT Central by using custom rules -description: Configure an IoT Central application to send notifications when a device stops sending telemetry by using Azure Stream Analytics, Azure Functions, and SendGrid. -- Previously updated : 11/27/2023-----# Solution developer ---# Extend Azure IoT Central with custom rules using Stream Analytics, Azure Functions, and SendGrid --This how-to guide shows you how to extend your IoT Central application with custom rules and notifications. The example shows sending a notification to an operator when a device stops sending telemetry. The solution uses an [Azure Stream Analytics](../../stream-analytics/index.yml) query to detect when a device stops sending telemetry. The Stream Analytics job uses [Azure Functions](../../azure-functions/index.yml) to send notification emails using [SendGrid](https://sendgrid.com/docs/for-developers/partners/microsoft-azure/). --This how-to guide shows you how to extend IoT Central beyond what it can already do with the built-in rules and actions. --In this how-to guide, you learn how to: --* Stream telemetry from an IoT Central application using *continuous data export*. -* Create a Stream Analytics query that detects when a device stops sending data. -* Send an email notification using the Azure Functions and SendGrid services. --## Prerequisites --To complete the steps in this how-to guide, you need an active Azure subscription. --If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin. --## Create the Azure resources --Run the following script to create the Azure resources you need to configure this scenario. 
Run this script in a bash environment such as the Azure Cloud Shell: --> [!NOTE] -> The `az login` command is necessary even in the Cloud Shell. --```azurecli -SUFFIX=$RANDOM --# Event Hubs namespace name -EHNS=detect-stopped-devices-ehns-$SUFFIX --# IoT Central app name -CA=detect-stopped-devices-app-$SUFFIX --# Storage account -STOR=dtsstorage$SUFFIX --# Function App -FUNC=detect-stopped-devices-function-$SUFFIX --# ASA -ASA=detect-stopped-devices-asa-$SUFFIX --# Other variables -RG=DetectStoppedDevices -EH=centralexport -LOCATION=eastus -DESTID=ehdest01 -EXPID=telexp01 --# Sign in -az login --# Create the Azure resources -az group create -n $RG --location $LOCATION --# Create IoT Central app -az iot central app create --name $CA --resource-group $RG \ - --template "iotc-condition" \ - --subdomain $CA \ - --display-name "In-store analytics - Condition Monitoring (custom rules scenario)" --# Configure managed identity for IoT Central app -az iot central app identity assign --name $CA --resource-group $RG --system-assigned -PI=$(az iot central app identity show --name $CA --resource-group $RG --query "principalId" --output tsv) --# Create Event Hubs -az eventhubs namespace create --name $EHNS --resource-group $RG --location $LOCATION -az eventhubs eventhub create --name $EH --resource-group $RG --namespace-name $EHNS --# Create Function App -az storage account create --name $STOR --location $LOCATION --resource-group $RG --sku Standard_LRS -az functionapp create --name $FUNC --storage-account $STOR --consumption-plan-location $LOCATION \ - --functions-version 4 --resource-group $RG --# Create Azure Stream Analytics -az stream-analytics job create --job-name $ASA --resource-group $RG --location $LOCATION --# Create the IoT Central data export -az role assignment create --assignee $PI --role "Azure Event Hubs Data Sender" --resource-group $RG -az iot central export destination create --app-id $CA --dest-id $DESTID \ - --type eventhubs@v1 --name "Event Hubs 
destination" --authorization "{ - \"eventHubName\": \"$EH\", - \"hostName\": \"$EHNS.servicebus.windows.net\", - \"type\": \"systemAssignedManagedIdentity\" - }" --az iot central export create --app-id $CA --export-id $EXPID --enabled false \ - --display-name "All telemetry" --source telemetry --destinations "[ - { - \"id\": \"$DESTID\" - } - ]" --echo "Event Hubs hostname: $EHNS.servicebus.windows.net" -echo "Event hub: $EH" -echo "IoT Central app: $CA.azureiotcentral.com" -echo "Function App hostname: $FUNC.azurewebsites.net" -echo "Stream Analytics job: $ASA" -echo "Resource group: $RG" -``` --Make a note of the values output at the end of the script, you use them later in the set-up process. --The script creates: --* A resource group called `DetectStoppedDevices` that contains all the resources. -* An Event Hubs namespace with an event hub called `centralexport`. -* An IoT Central application with two simulated thermostat devices. Telemetry from the two devices is exported to the `centralexport` event hub. This IoT Central data export definition is currently disabled. -* An Azure Stream Analytics job. -* An Azure Function App. --### SendGrid account and API Keys --If you don't have a SendGrid account, create a [free account](https://app.sendgrid.com/) before you begin. --1. From the SendGrid Dashboard, select **Settings** on the left menu, select **Settings > API Keys**. -1. Select **Create API Key**. -1. Name the new API key **AzureFunctionAccess**. -1. Select **Create & View**. ---Make a note of the generated API key, you use it later. --Create a **Single Sender Verification** in your SendGrid account for the email address you'll use as the **From** address. --## Define the function --This solution uses an Azure Functions app to send an email notification when the Stream Analytics job detects a stopped device. To create your function app: --1. In the Azure portal, navigate to the **Function App** instance in the **DetectStoppedDevices** resource group. -1. 
Select **Functions**, then **+ Create** to create a new function. -1. Select **HTTP Trigger** as the function template. -1. Select **Create**. ---### Edit code for HTTP Trigger --The portal creates a default function called **HttpTrigger1**. Select **Code + Test**: ---1. Replace the C# code with the following code: -- ```csharp - #r "Newtonsoft.Json" - #r "SendGrid" - using System; - using SendGrid.Helpers.Mail; - using Microsoft.Azure.WebJobs.Host; - using Microsoft.AspNetCore.Mvc; - using Microsoft.Extensions.Primitives; - using Newtonsoft.Json; -- public static SendGridMessage Run(HttpRequest req, ILogger log) - { - string requestBody = new StreamReader(req.Body).ReadToEnd(); - log.LogInformation(requestBody); - var notifications = JsonConvert.DeserializeObject<IList<Notification>>(requestBody); -- SendGridMessage message = new SendGridMessage(); - message.Subject = "Contoso device notification"; -- var content = "The following device(s) have stopped sending telemetry:<br/><br/><table><tr><th>Device ID</th><th>Time</th></tr>"; - foreach(var notification in notifications) { - log.LogInformation($"No message received - Device: {notification.deviceid}, Time: {notification.time}"); - content += $"<tr><td>{notification.deviceid}</td><td>{notification.time}</td></tr>"; - } - content += "</table>"; - message.AddContent("text/html", content); -- return message; - } -- public class Notification - { - public string deviceid { get; set; } - public string time { get; set; } - } - ``` --1. Select **Save** to save the function. --### Configure function to use SendGrid --To send emails with SendGrid, you need to configure the bindings for your function as follows: --1. Select **Integration**. -1. Select **HTTP ($return)**. -1. Select **Delete.** -1. Select **+ Add output**. -1. Select **SendGrid** as the binding type. -1. For the **SendGrid API Key App Setting**, select **New**. -1. Enter the *Name* and *Value* of your SendGrid API key. 
If you followed the previous instructions, the name of your SendGrid API key is **AzureFunctionAccess**. -1. Add the following information: -- | Setting | Value | - | - | -- | - | Message parameter name | $return | - | To address | Enter your To Address | - | From address | Enter your SendGrid verified single sender email address | - | Message subject | Device stopped | - | Message text | The device connected to IoT Central has stopped sending telemetry. | --1. Select **Save**. ---### Test the function works --To test the function in the portal, first make sure the **Logs** panel is visible on the **Code + Test** page. Then select **Test/Run**. Use the following JSON as the **Request body**: --```json -[{"deviceid":"test-device-1","time":"2019-05-02T14:23:39.527Z"},{"deviceid":"test-device-2","time":"2019-05-02T14:23:50.717Z"},{"deviceid":"test-device-3","time":"2019-05-02T14:24:28.919Z"}] -``` --The function log messages appear in the **Logs** panel: ---After a few minutes, the **To** email address receives an email with the following content: --```txt -The following device(s) have stopped sending telemetry: --Device ID Time -test-device-1 2019-05-02T14:23:39.527Z -test-device-2 2019-05-02T14:23:50.717Z -test-device-3 2019-05-02T14:24:28.919Z -``` --## Add Stream Analytics query --This solution uses a Stream Analytics query to detect when a device stops sending telemetry for more than 180 seconds. The query uses the telemetry from the event hub as its input. The job sends the query results to the function app. In this section, you configure the Stream Analytics job: --1. In the Azure portal, navigate to your Stream Analytics job in the **DetectStoppedDevices** resource group. Under **Jobs topology**, select **Inputs**, select **+ Add stream input**, and then select **Event Hub**. -1. 
Use the information in the following table to configure the input using the event hub you created previously, then select **Save**: -- | Setting | Value | - | - | -- | - | Input alias | *centraltelemetry* | - | Subscription | Your subscription | - | Event Hubs namespace | Your Event Hubs namespace. The name starts with **detect-stopped-devices-ehns-**. | - | Event hub name | Use existing - **centralexport** | - | Event hub consumer group | Use existing - **$default** | --1. Under **Jobs topology**, select **Outputs**, select **+ Add**, and then select **Azure Function**. -1. Use the information in the following table to configure the output, then select **Save**: -- | Setting | Value | - | - | -- | - | Output alias | *emailnotification* | - | Subscription | Your subscription | - | Function app | Your Function app. The name starts with **detect-stopped-devices-function-**. | - | Function | HttpTrigger1 | --1. Under **Jobs topology**, select **Query** and replace the existing query with the following SQL: -- ```sql - with - LeftSide as - ( - SELECT - -- Get the device ID - deviceId as deviceid1, - EventEnqueuedUtcTime AS time1 - FROM - -- Use the event enqueued time for time-based operations - [centraltelemetry] TIMESTAMP BY EventEnqueuedUtcTime - ), - RightSide as - ( - SELECT - -- Get the device ID - deviceId as deviceid2, - EventEnqueuedUtcTime AS time2 - FROM - -- Use the event enqueued time for time-based operations - [centraltelemetry] TIMESTAMP BY EventEnqueuedUtcTime - ) -- SELECT - LeftSide.deviceid1 as deviceid, - LeftSide.time1 as time - INTO - [emailnotification] - FROM - LeftSide - LEFT OUTER JOIN - RightSide - ON - LeftSide.deviceid1=RightSide.deviceid2 AND DATEDIFF(second,LeftSide,RightSide) BETWEEN 1 AND 180 - where - -- Find records where a device didn't send a message for 180 seconds - RightSide.deviceid2 is NULL - ``` --1. Select **Save**. -1. 
To start the Stream Analytics job, select **Overview**, then **Start**, then **Now**, and then **Start**: ---## Configure export in IoT Central --On the [Azure IoT Central My apps](https://apps.azureiotcentral.com/myapps) page, locate the IoT Central application the script created. The name of the app is **In-store analytics - Condition Monitoring (custom rules scenario)**. --To enable the data export to Event Hubs, navigate to the **Data Export** page and enable the **All telemetry** export. -Wait until the export status is **Running** before you continue. --## Test --To test the solution, you can block one of the devices to simulate a stopped device: --1. In your IoT Central application, navigate to the **Devices** page and select one of the two thermostat devices. -1. Select **Block** to stop the device sending telemetry. -1. After about two minutes, the **To** email address receives one or more emails that look like the following example: -- ```txt - The following device(s) have stopped sending telemetry: -- Device ID Time - Thermostat-Zone1 2022-11-01T12:45:14.686Z - ``` --## Tidy up --To tidy up after this how-to and avoid unnecessary costs, delete the **DetectStoppedDevices** resource group in the Azure portal. --## Next steps --In this how-to guide, you learned how to: --* Stream telemetry from an IoT Central application using the data export feature. -* Create a Stream Analytics query that detects when a device stops sending data. -* Send an email notification using the Azure Functions and SendGrid services. --Now that you know how to create custom rules and notifications, the suggested next step is to learn how to [Extend Azure IoT Central with custom analytics](howto-create-custom-analytics.md). |
iot-central | Howto Export To Azure Data Explorer | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-export-to-azure-data-explorer.md | To create the Azure Data Explorer destination in IoT Central on the **Data expor :::image type="content" source="media/howto-export-data/export-destination-managed.png" alt-text="Screenshot of Azure Data Explorer export destination that uses a managed identity."::: -If you don't see data arriving in your destination service, see [Troubleshoot issues with data exports from your Azure IoT Central application](troubleshoot-data-export.md). +If you don't see data arriving in your destination service, see [Troubleshoot issues with data exports from your Azure IoT Central application](troubleshooting.md). |
iot-central | Howto Export To Blob Storage | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-export-to-blob-storage.md | To create the Blob Storage destination in IoT Central on the **Data export** pag 1. Select **Save**. -If you don't see data arriving in your destination service, see [Troubleshoot issues with data exports from your Azure IoT Central application](troubleshoot-data-export.md). +If you don't see data arriving in your destination service, see [Troubleshoot issues with data exports from your Azure IoT Central application](troubleshooting.md). |
iot-central | Howto Export To Event Hubs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-export-to-event-hubs.md | To create the Event Hubs destination in IoT Central on the **Data export** page: 1. Select **Save**. -If you don't see data arriving in your destination service, see [Troubleshoot issues with data exports from your Azure IoT Central application](troubleshoot-data-export.md). +If you don't see data arriving in your destination service, see [Troubleshoot issues with data exports from your Azure IoT Central application](troubleshooting.md). |
iot-central | Howto Export To Service Bus | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-export-to-service-bus.md | To create the Service Bus destination in IoT Central on the **Data export** page 1. Select **Save**. -If you don't see data arriving in your destination service, see [Troubleshoot issues with data exports from your Azure IoT Central application](troubleshoot-data-export.md). +If you don't see data arriving in your destination service, see [Troubleshoot issues with data exports from your Azure IoT Central application](troubleshooting.md). |
iot-central | Howto Monitor Devices Azure Cli | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-monitor-devices-azure-cli.md | az iot central device twin show --app-id <app-id> --device-id <device-id> ## Next steps -A suggested next step is to learn [how to connect Azure IoT Edge for Linux on Windows (EFLOW)](./howto-connect-eflow.md). +A suggested next step is to learn [Troubleshoot why data from your devices isn't showing up in Azure IoT Central](troubleshooting.md). |
iot-central | Howto Set Up Template | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-set-up-template.md | To view and manage the interfaces in your device model: :::image type="content" source="media/howto-set-up-template/device-template.png" alt-text="Screenshot that shows root interface for a model"::: -1. Select the ellipsis to add an inherited interface or component to the root interface. To learn more about interfaces and component see [multiple components](../../iot-pnp/concepts-modeling-guide.md#multiple-components) in the modeling guide. +1. Select the ellipsis to add an inherited interface or component to the root interface. To learn more about interfaces and components, see [multiple components](../../iot/concepts-modeling-guide.md) in the modeling guide. :::image type="content" source="media/howto-set-up-template/add-interface.png" alt-text="Screenshot that shows how to add interface or component." lightbox="media/howto-set-up-template/add-interface.png"::: The following table shows the configuration settings for a command capability: | Response | If enabled, a definition of the command response, including: name, display name, schema, unit, and display unit. | |Initial value | The default parameter value. This is an IoT Central extension to DTDL. | -To learn more about how devices implement commands, see [Telemetry, property, and command payloads > Commands and long running commands](../../iot-develop/concepts-message-payloads.md#commands). +To learn more about how devices implement commands, see [Telemetry, property, and command payloads > Commands and long running commands](../../iot/concepts-message-payloads.md#commands). #### Offline commands |
iot-central | Howto Transform Data | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-transform-data.md | The following table shows three example transformation types: | Transformation | Description | Example | Notes | ||-|-|-|-| Message Format | Convert to or manipulate JSON messages. | CSV to JSON | At ingress. IoT Central only accepts value JSON messages. To learn more, see [Telemetry, property, and command payloads](../../iot-develop/concepts-message-payloads.md). | +| Message Format | Convert to or manipulate JSON messages. | CSV to JSON | At ingress. IoT Central only accepts valid JSON messages. To learn more, see [Telemetry, property, and command payloads](../../iot/concepts-message-payloads.md). | | Computations | Math functions that [Azure Functions](../../azure-functions/index.yml) can execute. | Unit conversion from Fahrenheit to Celsius. | Transform using the egress pattern to take advantage of scalable device ingress through direct connection to IoT Central. Transforming the data lets you use IoT Central features such as visualizations and jobs. | | Message Enrichment | Enrichments from external data sources not found in device properties or telemetry. To learn more about internal enrichments, see [Export IoT data to cloud destinations using Blob Storage](howto-export-to-blob-storage.md). | Add weather information to messages using [location data](howto-use-location-data.md) from devices. | Transform using the egress pattern to take advantage of scalable device ingress through direct connection to IoT Central. | |
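The "Computations" row above (Fahrenheit to Celsius) reduces to the formula C = (F − 32) × 5/9. A minimal sketch of such an egress-side transformation follows; the `temperatureF`/`temperatureC` field names are invented, and a real solution would host this logic in an Azure Function:

```python
def transform(message: dict) -> dict:
    """Replace a Fahrenheit telemetry field with its Celsius equivalent.

    The field names are hypothetical, not part of any IoT Central schema.
    """
    out = dict(message)
    fahrenheit = out.pop("temperatureF")
    out["temperatureC"] = round((fahrenheit - 32) * 5 / 9, 2)
    return out

print(transform({"deviceId": "thermostat-1", "temperatureF": 212.0}))
# → {'deviceId': 'thermostat-1', 'temperatureC': 100.0}
```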
iot-central | Howto Use Commands | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-use-commands.md | A device can: By default, commands expect a device to be connected and fail if the device can't be reached. If you select the **Queue if offline** option in the device template UI a command can be queued until a device comes online. These *offline commands* are described in a separate section later in this article. -To learn about the IoT Pug and Play command conventions, see [IoT Plug and Play conventions](../../iot-develop/concepts-convention.md). +To learn about the IoT Plug and Play command conventions, see [IoT Plug and Play conventions](../../iot/concepts-convention.md). -To learn more about the command data that a device exchanges with IoT Central, see [Telemetry, property, and command payloads](../../iot-develop/concepts-message-payloads.md). +To learn more about the command data that a device exchanges with IoT Central, see [Telemetry, property, and command payloads](../../iot/concepts-message-payloads.md). To learn how to manage commands by using the IoT Central REST API, see [How to use the IoT Central REST API to control devices.](../core/howto-control-devices-with-rest-api.md) The following table shows the configuration settings for a command capability: | Request | The payload for the device command.| | Response | The payload of the device command response.| -To learn about the Digital Twin Definition Language (DTDL) that Azure IoT Central uses to define commands in a device template, see [IoT Plug and Play conventions > Commands](../../iot-develop/concepts-convention.md#commands). +To learn about the Digital Twin Definition Language (DTDL) that Azure IoT Central uses to define commands in a device template, see [IoT Plug and Play conventions > Commands](../../iot/concepts-convention.md#commands). Optional fields, such as display name and description, let you add more details to the interface and capabilities. 
You can call commands on a device that isn't assigned to a device template. To c ## Next steps -Now that you've learned how to use commands in your Azure IoT Central application, see [Telemetry, property, and command payloads](../../iot-develop/concepts-message-payloads.md) to learn more about command parameters and [Create and connect a client application to your Azure IoT Central application](tutorial-connect-device.md) to see complete code samples in different languages. +Now that you've learned how to use commands in your Azure IoT Central application, see [Telemetry, property, and command payloads](../../iot/concepts-message-payloads.md) to learn more about command parameters and [Create and connect a client application to your Azure IoT Central application](tutorial-connect-device.md) to see complete code samples in different languages. |
iot-central | Howto Use Location Data | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-use-location-data.md | You can use location telemetry to create a geofencing rule that generates an ale Now that you've learned how to use properties in your Azure IoT Central application, see: -* [Telemetry, property, and command payloads](../../iot-develop/concepts-message-payloads.md) +* [Telemetry, property, and command payloads](../../iot/concepts-message-payloads.md) * [Create and connect a client application to your Azure IoT Central application](tutorial-connect-device.md) |
iot-central | Howto Use Properties | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-use-properties.md | Properties represent point-in-time values. For example, a device can use a prope You can also define cloud properties in an Azure IoT Central application. Cloud property values are never exchanged with a device and are out of scope for this article. -To learn about the IoT Pug and Play property conventions, see [IoT Plug and Play conventions](../../iot-develop/concepts-convention.md). +To learn about the IoT Plug and Play property conventions, see [IoT Plug and Play conventions](../../iot/concepts-convention.md). -To learn more about the property data that a device exchanges with IoT Central, see [Telemetry, property, and command payloads](../../iot-develop/concepts-message-payloads.md). +To learn more about the property data that a device exchanges with IoT Central, see [Telemetry, property, and command payloads](../../iot/concepts-message-payloads.md). To learn how to manage properties by using the IoT Central REST API, see [How to use the IoT Central REST API to control devices.](../core/howto-control-devices-with-rest-api.md). The following table shows the configuration settings for a property capability. | Comment | Any comments about the property capability. | | Description | A description of the property capability. | -To learn about the Digital Twin Definition Language (DTDL) that Azure IoT Central uses to define properties in a device template, see [IoT Plug and Play conventions > Read-only properties](../../iot-develop/concepts-convention.md#read-only-properties). +To learn about the Digital Twin Definition Language (DTDL) that Azure IoT Central uses to define properties in a device template, see [IoT Plug and Play conventions > Read-only properties](../../iot/concepts-convention.md#read-only-properties). 
Optional fields, such as display name and description, let you add more details to the interface and capabilities. By default, properties are read-only. Read-only properties let a device report p Azure IoT Central uses device twins to synchronize property values between the device and the Azure IoT Central application. Device property values use device twin reported properties. For more information, see [device twins](../../iot-hub/tutorial-device-twins.md). -A device sends property updates as a JSON payload. For more information, see [Telemetry, property, and command payloads](../../iot-develop/concepts-message-payloads.md). +A device sends property updates as a JSON payload. For more information, see [Telemetry, property, and command payloads](../../iot/concepts-message-payloads.md). You can use the Azure IoT device SDK to send a property update to your Azure IoT Central application. An IoT Central operator sets writable properties on a form. Azure IoT Central se For example implementations in multiple languages, see [Create and connect a client application to your Azure IoT Central application](tutorial-connect-device.md). -The response message should include the `ac` and `av` fields. The `ad` field is optional. To learn more, see [IoT Plug and Play conventions > Writable properties](../../iot-develop/concepts-convention.md#writable-properties). +The response message should include the `ac` and `av` fields. The `ad` field is optional. To learn more, see [IoT Plug and Play conventions > Writable properties](../../iot/concepts-convention.md#writable-properties). When the operator sets a writable property in the Azure IoT Central UI, the application uses a device twin desired property to send the value to the device. The device then responds by using a device twin reported property. When Azure IoT Central receives the reported property value, it updates the property view with a status of **Accepted**. 
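The `ac`/`av`/`ad` acknowledgment convention described above can be sketched as a small payload builder. The property name and values below are placeholders; a real device would send this object as a device twin reported property through one of the Azure IoT device SDKs (not shown here).

```python
import json

def writable_property_ack(name, value, version, description="completed"):
    """Build a reported-property acknowledgment carrying the ac/av/ad fields."""
    return {
        name: {
            "value": value,
            "ac": 200,          # status code for applying the desired value
            "av": version,      # desired-property version being acknowledged
            "ad": description,  # optional human-readable detail
        }
    }

# Placeholder property name and value, for illustration only.
ack = writable_property_ack("targetTemperature", 21.5, version=7)
print(json.dumps(ack))
```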
You can update the writable properties in this view: Now that you've learned how to use properties in your Azure IoT Central application, see: -* [IoT Plug and Play conventions](../../iot-develop/concepts-convention.md) -* [Telemetry, property, and command payloads](../../iot-develop/concepts-message-payloads.md) +* [IoT Plug and Play conventions](../../iot/concepts-convention.md) +* [Telemetry, property, and command payloads](../../iot/concepts-message-payloads.md) * [Create and connect a client application to your Azure IoT Central application](tutorial-connect-device.md) |
iot-central | Overview Iot Central Operator | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/overview-iot-central-operator.md | To automate device management tasks, you can use: ## Troubleshoot and remediate device issues -The [troubleshooting guide](troubleshoot-connection.md) helps you to diagnose and remediate common issues. You can use the **Devices** page to block devices that appear to be malfunctioning until the problem is resolved. +The [troubleshooting guide](troubleshooting.md) helps you to diagnose and remediate common issues. You can use the **Devices** page to block devices that appear to be malfunctioning until the problem is resolved. ## Next steps |
iot-central | Overview Iot Central Solution Builder | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/overview-iot-central-solution-builder.md | You can use the data export and rules capabilities in IoT Central to integrate w - [Export IoT data to cloud destinations using Blob Storage](howto-export-to-blob-storage.md). - [Transform data for IoT Central](howto-transform-data.md) - [Use workflows to integrate your Azure IoT Central application with other cloud services](howto-configure-rules-advanced.md)-- [Extend Azure IoT Central with custom rules using Stream Analytics, Azure Functions, and SendGrid](howto-create-custom-rules.md) - [Extend Azure IoT Central with custom analytics using Azure Databricks](howto-create-custom-analytics.md) ## Integrate with companion applications |
iot-central | Troubleshoot Data Export | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/troubleshoot-data-export.md | - Title: Troubleshoot data exports from Azure IoT Central -description: Troubleshoot data exports in IoT Central for issues such as managed identity permissions and virtual network configuration --- Previously updated : 06/12/2023------# Troubleshoot issues with data exports from your Azure IoT Central application --This document helps you find out why the data your IoT Central application isn't reaching it's intended destination or isn't arriving in the correct format. --## Managed identity issues --You're using a managed identity to authorize the connection to an export destination. Data isn't arriving at the export destination. --Before you configure or enable the export destination, make sure that you complete the following steps: --- Enable the managed identity for the IoT Central application. To verify that the managed identity is enabled, go to the **Identity** page for your application in the Azure portal or use the following CLI command:-- ```azurecli - az iot central app identity show --name {your app name} --resource-group {your resource group name} - ``` --- Configure the permissions for the managed identity. To view the assigned permissions, select **Azure role assignments** on the **Identity** page for your app in the Azure portal or use the `az role assignment list` CLI command. The required permissions are:-- | Destination | Permission | - |-|| - | Azure Blob storage | Storage Blob Data Contributor | - | Azure Service Bus | Azure Service Bus Data Sender | - | Azure Event Hubs | Azure Event Hubs Data Sender | - | Azure Data Explorer | Admin | -- If the permissions were not set correctly before you created the destination in your IoT Central application, try removing the destination and then adding it again. 
--- Configure any virtual networks, private endpoints, and firewall policies.--> [!NOTE] -> If you're using a managed identity to authorize the connection to an export destination, IoT Central doesn't export data from simulated devices. --To learn more, see [Export data](howto-export-data.md?tabs=managed-identity). --## Destination connection issues --The export definition page shows information about failed connections to the export destination: ---## Next steps --If you need more help, you can contact the Azure experts on the [Microsoft Q&A and Stack Overflow forums](https://azure.microsoft.com/support/community/). Alternatively, you can file an [Azure support ticket](https://portal.azure.com/#create/Microsoft.Support). --For more information, see [Azure IoT support and help options](../../iot/iot-support-help.md). |
iot-central | Troubleshooting | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/troubleshooting.md | + + Title: Troubleshooting in Azure IoT Central +description: Troubleshoot and resolve issues with device connections and data export configurations in your IoT Central application +++ Last updated : 02/06/2024+++++#Customer intent: As a device developer, I want to understand why data from my devices isn't showing up in IoT Central, and the steps I can take to rectify the issue. +++# Troubleshooting in Azure IoT Central ++This article includes troubleshooting guidance for device connectivity issues and data export configuration issues in your IoT Central applications. ++## Device connectivity issues ++This section helps you determine if your data is reaching IoT Central. ++If you haven't already done so, install the `az cli` tool and `azure-iot` extension. ++To learn how to install the `az cli`, see [Install the Azure CLI](/cli/azure/install-azure-cli). ++To [install](/cli/azure/azure-cli-reference-for-IoT#extension-reference-installation) the `azure-iot` extension, run the following command: ++```azurecli +az extension add --name azure-iot +``` ++> [!NOTE] +> You may be prompted to install the `uamqp` library the first time you run an extension command. ++When you've installed the `azure-iot` extension, start your device to see if the messages it's sending are making their way to IoT Central. ++Use the following commands to sign in to the subscription where you have your IoT Central application: ++```azurecli +az login +az account set --subscription <your-subscription-id> +``` ++To monitor the telemetry your device is sending, use the following command: ++```azurecli +az iot central diagnostics monitor-events --app-id <iot-central-app-id> --device-id <device-name> +``` ++If the device has connected successfully to IoT Central, you see output similar to the following example: ++```output +Monitoring telemetry. 
+Filtering on device: device-001 +{ + "event": { + "origin": "device-001", + "module": "", + "interface": "", + "component": "", + "payload": { + "temp": 65.57910343679293, + "humid": 36.16224660107426 + } + } +} +``` ++To monitor the property updates your device is exchanging with IoT Central, use the following preview command: ++```azurecli +az iot central diagnostics monitor-properties --app-id <iot-central-app-id> --device-id <device-name> +``` ++If the device successfully sends property updates, you see output similar to the following example: ++```output +Changes in reported properties: +version : 32 +{'state': 'true', 'name': {'value': {'value': 'Contoso'}, 'status': 'completed', 'desiredVersion': 7, 'ad': 'completed', 'av': 7, 'ac +': 200}, 'brightness': {'value': {'value': 2}, 'status': 'completed', 'desiredVersion': 7, 'ad': 'completed', 'av': 7, 'ac': 200}, 'p +rocessorArchitecture': 'ARM', 'swVersion': '1.0.0'} +``` ++If you see data appear in your terminal, then the data is making it as far as your IoT Central application. ++If you don't see any data appear after a few minutes, try pressing the `Enter` or `return` key on your keyboard, in case the output is stuck. ++If you're still not seeing any data appear on your terminal, it's likely that your device is having network connectivity issues, or isn't sending data correctly to IoT Central. 
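When debugging, it can help to pull just the telemetry payload out of each `monitor-events` record. This is an illustrative helper based only on the output shape shown above, not an official parser.

```python
import json

def extract_payload(event_record: str) -> dict:
    """Pull the telemetry payload out of one monitor-events JSON record."""
    return json.loads(event_record)["event"]["payload"]

# Record shaped like the monitor-events output shown above.
sample = """
{
  "event": {
    "origin": "device-001",
    "module": "",
    "interface": "",
    "component": "",
    "payload": {"temp": 65.58, "humid": 36.16}
  }
}
"""
print(extract_payload(sample))
```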
++### Check the provisioning status of your device ++If your data isn't appearing in the CLI monitor, check the provisioning status of your device by running the following command: ++```azurecli +az iot central device registration-info --app-id <iot-central-app-id> --device-id <device-name> +``` ++The following output shows an example of a device that's blocked from connecting: ++```json +{ + "@device_id": "v22upeoqx6", + "device_registration_info": { + "device_status": "blocked", + "display_name": "Environmental Sensor - v22upeoqx6", + "id": "v22upeoqx6", + "instance_of": "urn:krhsi_k0u:modelDefinition:w53jukkazs", + "simulated": false + }, + "dps_state": { + "error": "Device is blocked from connecting to IoT Central application. Unblock the device in IoT Central and retry. Learn more: +https://aka.ms/iotcentral-docs-dps-SAS", + "status": null + } +} +``` ++| Device provisioning status | Description | Possible mitigation | +| - | - | - | +| Provisioned | No immediately recognizable issue. | N/A | +| Registered | The device hasn't yet connected to IoT Central. | Check your device logs for connectivity issues. | +| Blocked | The device is blocked from connecting to IoT Central. | Device is blocked from connecting to the IoT Central application. Unblock the device in IoT Central and retry. To learn more, see [Device status values](howto-manage-devices-individually.md#device-status-values). | +| Unapproved | The device isn't approved. | Device isn't approved to connect to the IoT Central application. Approve the device in IoT Central and retry. To learn more, see [Device status values](howto-manage-devices-individually.md#device-status-values) | +| Unassigned | The device isn't assigned to a device template. | Assign the device to a device template so that IoT Central knows how to parse the data. 
| ++Learn more about [Device status values in the UI](howto-manage-devices-individually.md#device-status-values) and [Device status values in the REST API](howto-manage-devices-with-rest-api.md#get-a-device). ++### Error codes ++If you're still unable to diagnose why your data isn't showing up in `monitor-events`, the next step is to look for error codes reported by your device. ++Start a debugging session on your device, or collect logs from your device. Check for any error codes that the device reports. ++The following tables show the common error codes and possible actions to mitigate. ++If you're seeing issues related to your authentication flow: ++| Error code | Description | Possible Mitigation | +| - | - | - | +| 400 | The body of the request isn't valid. For example, it can't be parsed, or the object can't be validated. | Ensure that you're sending the correct request body as part of the attestation flow, or use a device SDK. | +| 401 | The authorization token can't be validated. For example, it has expired or doesn't apply to the request's URI. This error code is also returned to devices as part of the TPM attestation flow. | Ensure that your device has the correct credentials. | +| 404 | The Device Provisioning Service instance, or a resource such as an enrollment doesn't exist. | [File a ticket with customer support](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/overview). | +| 412 | The `ETag` in the request doesn't match the `ETag` of the existing resource, as per RFC7232. | [File a ticket with customer support](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/overview). | +| 429 | The service is throttling operations. For specific service limits, see [IoT Hub Device Provisioning Service limits](../../azure-resource-manager/management/azure-subscription-service-limits.md#iot-hub-device-provisioning-service-limits). | Reduce message frequency, split responsibilities among more devices. 
| +| 500 | An internal error occurred. | [File a ticket with customer support](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/overview) to see if they can help you further. | ++### Detailed authorization error codes ++| Error | Sub error code | Notes | +| - | - | - | +| 401 Unauthorized | 401002 | The device is using invalid or expired credentials. DPS reports this error. | +| 401 Unauthorized | 400209 | The device is either waiting for approval by an operator or an operator has blocked it. | +| 401 IoTHubUnauthorized | | The device is using expired security token. IoT Hub reports this error. | +| 401 IoTHubUnauthorized | DEVICE_DISABLED | The device is disabled in this IoT hub and has moved to another IoT hub. Reprovision the device. | +| 401 IoTHubUnauthorized | DEVICE_BLOCKED | An operator has blocked this device. | ++### File upload error codes ++Here's a list of common error codes you might see when a device tries to upload a file to the cloud. Remember that before your device can upload a file, you must configure [device file uploads](howto-configure-file-uploads.md) in your application. ++| Error code | Description | Possible Mitigation | +| - | - | - | +| 403006 | You've exceeded the number of concurrent file upload operations. Each device client is limited to 10 concurrent file uploads. | Ensure the device promptly notifies IoT Central that the file upload operation has completed. If that doesn't work, try reducing the request timeout. | ++## Unmodeled data issues ++When you've established that your device is sending data to IoT Central, the next step is to ensure that your device is sending data in a valid format. 
++To detect which categories your issue is in, run the most appropriate Azure CLI command for your scenario: ++- To validate telemetry, use the preview command: ++ ```azurecli + az iot central diagnostics validate-messages --app-id <iot-central-app-id> --device-id <device-name> + ``` ++- To validate property updates, use the preview command: ++ ```azurecli + az iot central diagnostics validate-properties --app-id <iot-central-app-id> --device-id <device-name> + ``` ++You may be prompted to install the `uamqp` library the first time you run a `validate` command. ++The three common types of issue that cause device data to not appear in IoT Central are: ++- Device template to device data mismatch. +- Data is invalid JSON. +- Old versions of IoT Edge cause telemetry from components to display incorrectly as unmodeled data. ++### Device template to device data mismatch ++A device must use the same name and casing as used in the device template for any telemetry field names in the payload it sends. The following output shows an example warning message where the device is sending a telemetry value called `Temperature`, when it should be `temperature`: ++```output +Validating telemetry. +Filtering on device: sample-device-01. +Exiting after 300 second(s), or 10 message(s) have been parsed (whichever happens first). +[WARNING] [DeviceId: sample-device-01] [TemplateId: urn:modelDefinition:ofhmazgddj:vmjwwjuvdzg] Device is sending data that has not been defined in the device template. Following capabilities have NOT been defined in the device template '['Temperature']'. 
Following capabilities have been defined in the device template (grouped by components) '{'thermostat1': ['temperature', 'targetTemperature', 'maxTempSinceLastReboot', 'getMaxMinReport'], 'thermostat2': ['temperature', 'targetTemperature', 'maxTempSinceLastReboot', 'getMaxMinReport'], 'deviceInformation': ['manufacturer', 'model', 'swVersion', 'osName', 'processorArchitecture', 'processorManufacturer', 'totalStorage', 'totalMemory']}'. +``` ++A device must use the same name and casing as used in the device template for any property names in the payload it sends. The following output shows an example warning message where the property `osVersion` isn't defined in the device template: ++```output +Command group 'iot central diagnostics' is in preview and under development. Reference and support levels: https://aka.ms/CLI_refstatus +[WARNING] [DeviceId: sample-device-01] [TemplateId: urn:modelDefinition:ofhmazgddj:vmjwwjuvdzg] Device is sending data that has not been defined in the device template. Following capabilities have NOT been defined in the device template '['osVersion']'. Following capabilities have been defined in the device template (grouped by components) '{'thermostat1': ['temperature', 'targetTemperature', 'maxTempSinceLastReboot', 'getMaxMinReport', 'rundiagnostics'], 'thermostat2': ['temperature', 'targetTemperature', 'maxTempSinceLastReboot', 'getMaxMinReport', 'rundiagnostics'], 'deviceInformation': ['manufacturer', 'model', 'swVersion', 'osName', 'processorArchitecture', 'processorManufacturer', 'totalStorage', 'totalMemory']}'. +``` ++A device must use the data types defined in the device template for any telemetry or property values. For example, you see a schema mismatch if the type defined in the device template is boolean, but the device sends a string. 
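The naming and casing rules above can be checked locally before deploying firmware. The sketch below mimics the warning output by comparing payload field names against capability names grouped by component; the template dictionary is hand-written for illustration, not fetched from IoT Central.

```python
def find_unmodeled_fields(payload: dict, template: dict) -> list:
    """Return payload fields matching no capability name (comparison is case-sensitive)."""
    modeled = {name for capabilities in template.values() for name in capabilities}
    return sorted(field for field in payload if field not in modeled)

# Capability names grouped by component, mirroring the warning output above.
template = {
    "thermostat1": ["temperature", "targetTemperature", "maxTempSinceLastReboot"],
    "deviceInformation": ["manufacturer", "model", "swVersion"],
}

# 'Temperature' uses the wrong casing, so it doesn't match 'temperature'.
payload = {"Temperature": 65.5, "targetTemperature": 21}
print(find_unmodeled_fields(payload, template))
```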
The following output shows an example error message where the device is using a string value for a property that's defined as a double: ++```output +Command group 'iot central diagnostics' is in preview and under development. Reference and support levels: https://aka.ms/CLI_refstatus +Validating telemetry. +Filtering on device: sample-device-01. +Exiting after 300 second(s), or 10 message(s) have been parsed (whichever happens first). +[ERROR] [DeviceId: sample-device-01] [TemplateId: urn:modelDefinition:ofhmazgddj:vmjwwjuvdzg] Datatype of telemetry field 'temperature' does not match the datatype double. Data sent by the device : curr_temp. For more information, see: https://aka.ms/iotcentral-payloads +``` ++The validation commands also report an error if the same telemetry name is defined in multiple interfaces, but the device isn't IoT Plug and Play compliant. ++If you prefer to use a GUI, use the IoT Central **Raw data** view to see if something isn't being modeled. +++When you've detected the issue, you may need to update device firmware, or create a new device template that models previously unmodeled data. ++If you choose to create a new template that models the data correctly, migrate devices from your old template to the new template. To learn more, see [Manage devices in your Azure IoT Central application](howto-manage-devices-individually.md). ++### Invalid JSON ++If there are no errors reported, but a value isn't appearing, then it's probably malformed JSON in the payload the device sends. To learn more, see [Telemetry, property, and command payloads](../../iot/concepts-message-payloads.md). ++You can't use the validate commands or the **Raw data** view in the UI to detect if the device is sending malformed JSON. ++### IoT Edge version ++To display telemetry from components hosted in IoT Edge modules correctly, use [IoT Edge version 1.2.4](https://github.com/Azure/azure-iotedge/releases/tag/1.2.4) or later. 
If you use an earlier version, telemetry from components in IoT Edge modules displays as *_unmodeleddata*. ++## Data export managed identity issues ++You're using a managed identity to authorize the connection to an export destination. Data isn't arriving at the export destination. ++Before you configure or enable the export destination, make sure that you complete the following steps: ++- Enable the managed identity for the IoT Central application. To verify that the managed identity is enabled, go to the **Identity** page for your application in the Azure portal or use the following CLI command: ++ ```azurecli + az iot central app identity show --name {your app name} --resource-group {your resource group name} + ``` ++- Configure the permissions for the managed identity. To view the assigned permissions, select **Azure role assignments** on the **Identity** page for your app in the Azure portal or use the `az role assignment list` CLI command. The required permissions are: ++ | Destination | Permission | + |-|| + | Azure Blob storage | Storage Blob Data Contributor | + | Azure Service Bus | Azure Service Bus Data Sender | + | Azure Event Hubs | Azure Event Hubs Data Sender | + | Azure Data Explorer | Admin | ++ If the permissions were not set correctly before you created the destination in your IoT Central application, try removing the destination and then adding it again. ++- Configure any virtual networks, private endpoints, and firewall policies. ++> [!NOTE] +> If you're using a managed identity to authorize the connection to an export destination, IoT Central doesn't export data from simulated devices. ++To learn more, see [Export data](howto-export-data.md?tabs=managed-identity). 
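As a quick pre-flight check, the permission table above can be encoded in a small lookup so a script can tell you which role to verify with `az role assignment list`. The destination keys below are illustrative labels, not values from any Azure API.

```python
# Mapping mirrors the permission table above.
REQUIRED_ROLE = {
    "blob-storage": "Storage Blob Data Contributor",
    "service-bus": "Azure Service Bus Data Sender",
    "event-hubs": "Azure Event Hubs Data Sender",
    "data-explorer": "Admin",
}

def role_for(destination: str) -> str:
    """Return the role the managed identity needs for an export destination."""
    if destination not in REQUIRED_ROLE:
        raise ValueError(f"unknown destination: {destination}")
    return REQUIRED_ROLE[destination]

print(role_for("event-hubs"))
```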
++## Data export destination connection issues ++The export definition page shows information about failed connections to the export destination: +++## Next steps ++If you need more help, you can contact the Azure experts on the [Microsoft Q&A and Stack Overflow forums](https://azure.microsoft.com/support/community/). Alternatively, you can file an [Azure support ticket](https://portal.azure.com/#create/Microsoft.Support). ++For more information, see [Azure IoT support and help options](../../iot/iot-support-help.md). |
iot-dps | How To Send Additional Data | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/how-to-send-additional-data.md | Common scenarios for sending optional payloads are: * [Custom allocation policies](concepts-custom-allocation.md) can use the device payload to help select an IoT hub for a device or set its initial twin. For example, you may want to allocate your devices based on the device model. In this case, you can configure the device to report its model information when it registers. DPS will pass the device's payload to the custom allocation webhook. Then your webhook can decide which IoT hub the device will be provisioned to based on the device model information. If needed, the webhook can also return data back to the device as a JSON object in the webhook response. To learn more, see [Use device payloads in custom allocation](concepts-custom-allocation.md#use-device-payloads-in-custom-allocation). -* [IoT Plug and Play (PnP)](../iot-develop/overview-iot-plug-and-play.md) devices *may* use the payload to send their model ID when they register with DPS. You can find examples of this usage in the PnP samples in the SDK or sample repositories. For example, [C# PnP thermostat](https://github.com/Azure/azure-iot-sdk-csharp/blob/main/iothub/device/samples/solutions/PnpDeviceSamples/Thermostat/Program.cs) or [Node.js PnP temperature controller](https://github.com/Azure/azure-iot-sdk-node/blob/main/device/samples/javascript/pnp_temperature_controller.js). +* [IoT Plug and Play (PnP)](../iot/overview-iot-plug-and-play.md) devices *may* use the payload to send their model ID when they register with DPS. You can find examples of this usage in the PnP samples in the SDK or sample repositories. 
For example, [C# PnP thermostat](https://github.com/Azure/azure-iot-sdk-csharp/blob/main/iothub/device/samples/solutions/PnpDeviceSamples/Thermostat/Program.cs) or [Node.js PnP temperature controller](https://github.com/Azure/azure-iot-sdk-node/blob/main/device/samples/javascript/pnp_temperature_controller.js). -* [IoT Central](../iot-central/core/overview-iot-central.md) devices that connect through DPS *should* follow [IoT Plug and Play conventions](..//iot-develop/concepts-convention.md) and send their model ID when they register. IoT Central uses the model ID to assign the device to the correct device template. To learn more, see [Device implementation and best practices for IoT Central](../iot-central/core/concepts-device-implementation.md). +* [IoT Central](../iot-central/core/overview-iot-central.md) devices that connect through DPS *should* follow [IoT Plug and Play conventions](../iot/concepts-convention.md) and send their model ID when they register. IoT Central uses the model ID to assign the device to the correct device template. To learn more, see [Device implementation and best practices for IoT Central](../iot-central/core/concepts-device-implementation.md). ## Device sends data payload to DPS |
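The model ID payload mentioned above follows the IoT Plug and Play convention of a JSON object with a `modelId` field. A minimal sketch using a placeholder DTMI; sending it requires a provisioning client from one of the Azure IoT device SDKs (not shown here).

```python
import json

def build_registration_payload(model_id: str) -> str:
    """Serialize the optional DPS registration payload carrying a model ID."""
    return json.dumps({"modelId": model_id})

# Placeholder DTMI; a real device uses the model ID from its own DTDL model.
payload = build_registration_payload("dtmi:com:example:Thermostat;1")
print(payload)
```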
iot-hub-device-update | Device Update Agent Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub-device-update/device-update-agent-overview.md | -* The *interface layer* builds on top of [Azure IoT Plug and Play](../iot-develop/overview-iot-plug-and-play.md), allowing for messaging to flow between the Device Update agent and Device Update service. +* The *interface layer* builds on top of [Azure IoT Plug and Play](../iot/overview-iot-plug-and-play.md), allowing for messaging to flow between the Device Update agent and Device Update service. * The *platform layer* is responsible for the high-level update actions of download, install, and apply that may be platform- or device-specific. :::image type="content" source="media/understand-device-update/client-agent-reference-implementations.png" alt-text="Agent Implementations." lightbox="media/understand-device-update/client-agent-reference-implementations.png"::: |
iot-hub-device-update | Device Update Plug And Play | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub-device-update/device-update-plug-and-play.md | -Device Update for IoT Hub uses [IoT Plug and Play](../iot-develop/overview-iot-plug-and-play.md) to discover and manage devices that are over-the-air update capable. The Device Update service sends and receives properties and messages to and from devices using IoT Plug and Play interfaces. +Device Update for IoT Hub uses [IoT Plug and Play](../iot/overview-iot-plug-and-play.md) to discover and manage devices that are over-the-air update capable. The Device Update service sends and receives properties and messages to and from devices using IoT Plug and Play interfaces. For more information: -* Understand the [IoT Plug and Play device client](../iot-develop/concepts-developer-guide-device.md). +* Understand the [IoT Plug and Play device client](../iot/concepts-developer-guide-device.md). * See how the [Device Update agent is implemented](https://github.com/Azure/iot-hub-device-update/blob/main/docs/agent-reference/how-to-build-agent-code.md). ## Device Update Models -Model ID is how smart devices advertise their capabilities to Azure IoT applications with IoT Plug and Play.To learn more on how to build smart devices to advertise their capabilities to Azure IoT applications visit [IoT Plug and Play device developer guide](../iot-develop/concepts-developer-guide-device.md). +Model ID is how smart devices advertise their capabilities to Azure IoT applications with IoT Plug and Play. To learn more about how to build smart devices that advertise their capabilities to Azure IoT applications, see the [IoT Plug and Play device developer guide](../iot/concepts-developer-guide-device.md). -Device Update for IoT Hub requires the IoT Plug and Play smart device to announce a model ID as part of the device connection. 
[Learn how to announce a model ID](../iot-develop/concepts-developer-guide-device.md#model-id-announcement). +Device Update for IoT Hub requires the IoT Plug and Play smart device to announce a model ID as part of the device connection. [Learn how to announce a model ID](../iot/concepts-developer-guide-device.md#model-id-announcement). Device Update has 2 PnP models defined that support DU features. The Device Update model, '**dtmi:azure:iot:deviceUpdateContractModel;2**', supports the core functionality and uses the device update core interface to send update actions and metadata to devices and receive update status from devices. IoT Hub device twin example: ``` >[!NOTE]->The device or module must add the `{"__t": "c"}` marker to indicate that the element refers to a component. For more information, see [IoT Plug and Play conventions](../iot-develop/concepts-convention.md#sample-multiple-components-writable-property). +>The device or module must add the `{"__t": "c"}` marker to indicate that the element refers to a component. For more information, see [IoT Plug and Play conventions](../iot/concepts-convention.md#sample-multiple-components-writable-property). #### State The **action** field represents the actions taken by the Device Update agent as ## Device information interface -The device information interface is a concept used within [IoT Plug and Play architecture](../iot-develop/overview-iot-plug-and-play.md). It contains device-to-cloud properties that provide information about the hardware and operating system of the device. Device Update for IoT Hub uses the `DeviceInformation.manufacturer` and `DeviceInformation.model` properties for telemetry and diagnostics. To learn more, see this [example of the device information interface](https://devicemodels.azure.com/dtmi/azure/devicemanagement/deviceinformation-1.json). +The device information interface is a concept used within [IoT Plug and Play architecture](../iot/overview-iot-plug-and-play.md). 
It contains device-to-cloud properties that provide information about the hardware and operating system of the device. Device Update for IoT Hub uses the `DeviceInformation.manufacturer` and `DeviceInformation.model` properties for telemetry and diagnostics. To learn more, see this [example of the device information interface](https://devicemodels.azure.com/dtmi/azure/devicemanagement/deviceinformation-1.json). -The expected component name in your model is **deviceInformation** when this interface is implemented. [Learn about Azure IoT Plug and Play Components](../iot-develop/concepts-modeling-guide.md) +The expected component name in your model is **deviceInformation** when this interface is implemented. [Learn about Azure IoT Plug and Play Components](../iot/concepts-modeling-guide.md) |Name|Type|Schema|Direction|Description|Example| |-|-|||--|--| |
iot-hub | Iot Concepts And Iot Hub | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-concepts-and-iot-hub.md | Examples of telemetry received from a device can include sensor data such as spe Properties can be read or set from the IoT hub and can be used to send notifications when an action has completed. An example of a specific property on a device is temperature. Temperature can be a writable property that can be updated on the device or read from a temperature sensor attached to the device. -You can enable properties in IoT Hub using [Device twins](iot-hub-devguide-device-twins.md) or [Plug and Play](../iot-develop/overview-iot-plug-and-play.md). +You can enable properties in IoT Hub using [Device twins](iot-hub-devguide-device-twins.md) or [Plug and Play](../iot/overview-iot-plug-and-play.md). -To learn more about the differences between device twins and Plug and Play, see [Plug and Play](../iot-develop/concepts-digital-twin.md#device-twins-and-digital-twins). +To learn more about the differences between device twins and Plug and Play, see [Plug and Play](../iot/concepts-digital-twin.md#device-twins-and-digital-twins). ## Device commands |
iot-hub | Iot Hub Devguide C2d Guidance | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-devguide-c2d-guidance.md | IoT Hub provides three options for device apps to expose functionality to a back * [Cloud-to-device messages](iot-hub-devguide-messages-c2d.md) for one-way notifications to the device app. -To learn how [Azure IoT Plug and Play](../iot-develop/overview-iot-plug-and-play.md) uses these options to control IoT Plug and Play devices, see [IoT Plug and Play service developer guide](../iot-develop/concepts-developer-guide-service.md). +To learn how [Azure IoT Plug and Play](../iot/overview-iot-plug-and-play.md) uses these options to control IoT Plug and Play devices, see [IoT Plug and Play service developer guide](../iot/concepts-developer-guide-service.md). [!INCLUDE [iot-hub-basic](../../includes/iot-hub-basic-whole.md)] |
iot-hub | Iot Hub Devguide Device Twins | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-devguide-device-twins.md | Refer to [Device-to-cloud communication guidance](iot-hub-devguide-d2c-guidance. Refer to [Cloud-to-device communication guidance](iot-hub-devguide-c2d-guidance.md) for guidance on using desired properties, direct methods, or cloud-to-device messages. -To learn how device twins relate to the device model used by an Azure IoT Plug and Play device, see [Understand IoT Plug and Play digital twins](../iot-develop/concepts-digital-twin.md). +To learn how device twins relate to the device model used by an Azure IoT Plug and Play device, see [Understand IoT Plug and Play digital twins](../iot/concepts-digital-twin.md). ## Device twins In the previous example, the `telemetryConfig` device twin desired and reported > The preceding snippets are examples, optimized for readability, of one way to encode a device configuration and its status. IoT Hub does not impose a specific schema for the device twin desired and reported properties in the device twins. > [!IMPORTANT]-> IoT Plug and Play defines a schema that uses several additional properties to synchronize changes to desired and reported properties. If your solution uses IoT Plug and Play, you must follow the Plug and Play conventions when updating twin properties. For more information and an example, see [Writable properties in IoT Plug and Play](../iot-develop/concepts-convention.md#writable-properties). +> IoT Plug and Play defines a schema that uses several additional properties to synchronize changes to desired and reported properties. If your solution uses IoT Plug and Play, you must follow the Plug and Play conventions when updating twin properties. For more information and an example, see [Writable properties in IoT Plug and Play](../iot/concepts-convention.md#writable-properties). You can use twins to synchronize long-running operations such as firmware updates. 
For more information on how to use properties to synchronize and track a long running operation across devices, see [Use desired properties to configure devices](tutorial-device-twins.md). |
iot-hub | Iot Hub Devguide Messages D2c | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-devguide-messages-d2c.md | In addition to device telemetry, message routing also enables sending non-teleme * Digital twin change events * Device connection state events -For example, if a route is created with the data source set to **Device Twin Change Events**, IoT Hub sends messages to the endpoint that contain the change in the device twin. Similarly, if a route is created with the data source set to **Device Lifecycle Events**, IoT Hub sends a message indicating whether the device or module was deleted or created. For more information about device lifecycle events, see [Device and module lifecycle notifications](./iot-hub-devguide-identity-registry.md#device-and-module-lifecycle-notifications). When using [Azure IoT Plug and Play](../iot-develop/overview-iot-plug-and-play.md), a developer can create routes with the data source set to **Digital Twin Change Events** and IoT Hub sends messages whenever a digital twin property is set or changed, a digital twin is replaced, or when a change event happens for the underlying device twin. Finally, if a route is created with data source set to **Device Connection State Events**, IoT Hub sends a message indicating whether the device was connected or disconnected. +For example, if a route is created with the data source set to **Device Twin Change Events**, IoT Hub sends messages to the endpoint that contain the change in the device twin. Similarly, if a route is created with the data source set to **Device Lifecycle Events**, IoT Hub sends a message indicating whether the device or module was deleted or created. For more information about device lifecycle events, see [Device and module lifecycle notifications](./iot-hub-devguide-identity-registry.md#device-and-module-lifecycle-notifications). 
When using [Azure IoT Plug and Play](../iot/overview-iot-plug-and-play.md), a developer can create routes with the data source set to **Digital Twin Change Events** and IoT Hub sends messages whenever a digital twin property is set or changed, a digital twin is replaced, or when a change event happens for the underlying device twin. Finally, if a route is created with data source set to **Device Connection State Events**, IoT Hub sends a message indicating whether the device was connected or disconnected. [IoT Hub also integrates with Azure Event Grid](iot-hub-event-grid.md) to publish device events to support real-time integrations and automation of workflows based on these events. See key [differences between message routing and Event Grid](iot-hub-event-grid-routing-comparison.md) to learn which works best for your scenario. |
iot-hub | Iot Hub Devguide Module Twins | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-devguide-module-twins.md | In the previous example, the `telemetryConfig` module twin desired and reported > The preceding snippets are examples, optimized for readability, of one way to encode a module configuration and its status. IoT Hub does not impose a specific schema for the module twin desired and reported properties in the module twins. > [!IMPORTANT]-> IoT Plug and Play defines a schema that uses several additional properties to synchronize changes to desired and reported properties. If your solution uses IoT Plug and Play, you must follow the Plug and Play conventions when updating twin properties. For more information and an example, see [Writable properties in IoT Plug and Play](../iot-develop/concepts-convention.md#writable-properties). +> IoT Plug and Play defines a schema that uses several additional properties to synchronize changes to desired and reported properties. If your solution uses IoT Plug and Play, you must follow the Plug and Play conventions when updating twin properties. For more information and an example, see [Writable properties in IoT Plug and Play](../iot/concepts-convention.md#writable-properties). ## Back-end operations |
iot-hub | Iot Hub Devguide Sdks | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-devguide-sdks.md | -* [**IoT Hub device SDKs**](#azure-iot-hub-device-sdks) enable you to build apps that run on your IoT devices using the device client or module client. These apps send telemetry to your IoT hub, and optionally receive messages, jobs, methods, or twin updates from your IoT hub. You can use these SDKs to build device apps that use [Azure IoT Plug and Play](../iot-develop/overview-iot-plug-and-play.md) conventions and models to advertise their capabilities to IoT Plug and Play-enabled applications. You can also use the module client to author [modules](../iot-edge/iot-edge-modules.md) for [Azure IoT Edge runtime](../iot-edge/about-iot-edge.md). +* [**IoT Hub device SDKs**](#azure-iot-hub-device-sdks) enable you to build apps that run on your IoT devices using the device client or module client. These apps send telemetry to your IoT hub, and optionally receive messages, jobs, methods, or twin updates from your IoT hub. You can use these SDKs to build device apps that use [Azure IoT Plug and Play](../iot/overview-iot-plug-and-play.md) conventions and models to advertise their capabilities to IoT Plug and Play-enabled applications. You can also use the module client to author [modules](../iot-edge/iot-edge-modules.md) for [Azure IoT Edge runtime](../iot-edge/about-iot-edge.md). * [**IoT Hub service SDKs**](#azure-iot-hub-service-sdks) enable you to build backend applications to manage your IoT hub, and optionally send messages, schedule jobs, invoke direct methods, or send desired property updates to your IoT devices or modules. |
iot-hub | Iot Hub Device Streams Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-device-streams-overview.md | To learn more about using Azure Monitor with IoT Hub, see [Monitor IoT Hub](moni ## Regional availability -During public preview, IoT Hub device streams are available in the Central US, Central US EUAP, North Europe, and Southeast Asia regions. Please make sure you create your hub in one of these regions. +During public preview, IoT Hub device streams are available in the Central US, East US EUAP, North Europe, and Southeast Asia regions. Please make sure you create your hub in one of these regions. ## SDK availability |
iot-hub | Iot Hub Scaling | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-scaling.md | The standard tier of IoT Hub enables all features, and is required for any IoT s | [Device twins](iot-hub-devguide-device-twins.md), [module twins](iot-hub-devguide-module-twins.md), and [device management](iot-hub-device-management-overview.md) | | Yes | | [Device streams (preview)](iot-hub-device-streams-overview.md) | | Yes | | [Azure IoT Edge](../iot-edge/about-iot-edge.md) | | Yes |-| [IoT Plug and Play](../iot-develop/overview-iot-plug-and-play.md) | | Yes | +| [IoT Plug and Play](../iot/overview-iot-plug-and-play.md) | | Yes | IoT Hub also offers a free tier that is meant for testing and evaluation. It has all the capabilities of the standard tier, but includes limited messaging allowances. You can't upgrade from the free tier to either the basic or standard tier. |
iot-hub | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/policy-reference.md | Title: Built-in policy definitions for Azure IoT Hub description: Lists Azure Policy built-in policy definitions for Azure IoT Hub. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 01/30/2024 Last updated : 02/06/2024 |
iot | Concepts Architecture | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot/concepts-architecture.md | + + Title: IoT Plug and Play architecture | Microsoft Docs +description: Understand the key architectural elements of an IoT Plug and Play solution. ++ Last updated : 1/23/2024++++++# IoT Plug and Play architecture ++IoT Plug and Play enables solution builders to integrate IoT devices with their solutions without any manual configuration. At the core of IoT Plug and Play is a device _model_ that describes a device's capabilities to an IoT Plug and Play-enabled application. This model is structured as a set of interfaces that define: ++- _Properties_ that represent the read-only or writable state of a device or other entity. For example, a device serial number may be a read-only property and a target temperature on a thermostat may be a writable property. +- _Telemetry_ that's the data emitted by a device, whether the data is a regular stream of sensor readings, an occasional error, or an information message. +- _Commands_ that describe a function or operation that can be done on a device. For example, a command could reboot a gateway or take a picture using a remote camera. ++Every model and interface has a unique ID. ++The following diagram shows the key elements of an IoT Plug and Play solution: +++## Model repository ++The [model repository](./concepts-model-repository.md) is a store for model and interface definitions. You define models and interfaces using the [Digital Twins Definition Language (DTDL)](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/README.md). ++The web UI lets you manage the models and interfaces. ++The model repository has built-in role-based access controls that let you manage access to interface definitions. ++## Devices ++A device builder implements the code to run on an IoT device using one of the [Azure IoT device SDKs](../iot-develop/about-iot-sdks.md). 
The device SDKs help the device builder to: ++- Connect securely to an IoT hub. +- Register the device with your IoT hub and announce the model ID that identifies the collection of DTDL interfaces the device implements. +- Synchronize the properties defined in the DTDL interfaces between the device and your IoT hub. +- Add command handlers for the commands defined in the DTDL interfaces. +- Send telemetry to the IoT hub. ++## IoT Edge gateway ++An IoT Edge gateway acts as an intermediary to connect IoT Plug and Play devices that can't connect directly to an IoT hub. To learn more, see [How an IoT Edge device can be used as a gateway](../iot-edge/iot-edge-as-gateway.md). ++## IoT Edge modules ++An _IoT Edge module_ lets you deploy and manage business logic on the edge. Azure IoT Edge modules are the smallest unit of computation managed by IoT Edge, and can contain Azure services (such as Azure Stream Analytics) or your own solution-specific code. ++The _IoT Edge hub_ is one of the modules that make up the Azure IoT Edge runtime. It acts as a local proxy for IoT Hub by exposing the same protocol endpoints as IoT Hub. This consistency means that clients (whether devices or modules) can connect to the IoT Edge runtime just as they would to IoT Hub. ++The device SDKs help a module builder to: ++- Use the IoT Edge hub to connect securely to your IoT hub. +- Register the module with your IoT hub and announce the model ID that identifies the collection of DTDL interfaces the device implements. +- Synchronize the properties defined in the DTDL interfaces between the device and your IoT hub. +- Add command handlers for the commands defined in the DTDL interfaces. +- Send telemetry to the IoT hub. ++## IoT Hub ++[IoT Hub](../iot-hub/about-iot-hub.md) is a cloud-hosted service that acts as a central message hub for bi-directional communication between your IoT solution and the devices it manages. 
++An IoT hub: ++- Makes the model ID implemented by a device available to a backend solution. +- Maintains the digital twin associated with each IoT Plug and Play device connected to the hub. +- Forwards telemetry streams to other services for processing or storage. +- Routes digital twin change events to other services to enable device monitoring. ++## Backend solution ++A backend solution monitors and controls connected devices by interacting with digital twins in the IoT hub. Use one of the Azure IoT service SDKs to implement your backend solution. To understand the capabilities of a connected device, the solution backend: ++1. Retrieves the model ID the device registered with the IoT hub. +1. Uses the model ID to retrieve the interface definitions from any model repository. +1. Uses the model parser to extract information from the interface definitions. ++The backend solution can use the information from the interface definitions to: ++- Read property values reported by devices. +- Update writable properties on a device. +- Call commands implemented by a device. +- Understand the format of telemetry sent by a device. ++## Next steps ++Now that you have an overview of the architecture of an IoT Plug and Play solution, the next steps are to learn more about: ++- [The model repository](./concepts-model-repository.md) +- [Digital twin model integration](./concepts-model-discovery.md) +- [Developing for IoT Plug and Play](./concepts-developer-guide-device.md) |
iot | Concepts Convention | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot/concepts-convention.md | + + Title: IoT Plug and Play conventions | Microsoft Docs +description: Description of the conventions IoT Plug and Play expects devices to use when they send telemetry and properties, and handle commands and property updates. ++ Last updated : 1/23/2024+++++# IoT Plug and Play conventions ++IoT Plug and Play devices should follow a set of conventions when they exchange messages with an IoT hub. IoT Plug and Play devices use the MQTT protocol to communicate with IoT Hub. IoT Hub also supports the AMQP protocol, which is available in some IoT device SDKs. ++A device can include [modules](../iot-hub/iot-hub-devguide-module-twins.md), or be implemented in an [IoT Edge module](../iot-edge/about-iot-edge.md) hosted by the IoT Edge runtime. ++You describe the telemetry, properties, and commands that an IoT Plug and Play device implements with a [Digital Twins Definition Language (DTDL)](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/README.md) _model_. There are two types of model referred to in this article: ++- **No component** - A model with no components. The model declares telemetry, properties, and commands as top-level elements in the contents section of the main interface. In the Azure IoT explorer tool, this model appears as a single _default component_. +- **Multiple components** - A model composed of two or more interfaces. A main interface, which appears as the _default component_, with telemetry, properties, and commands. One or more interfaces declared as components with more telemetry, properties, and commands. ++For more information, see [IoT Plug and Play modeling guide](concepts-modeling-guide.md). ++## Identify the model ++To announce the model it implements, an IoT Plug and Play device or module includes the model ID in the MQTT connection packet by adding `model-id` to the `USERNAME` field. 
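As a concrete illustration of this convention, the following Python sketch builds an MQTT `USERNAME` string that carries the `model-id` query string parameter. The hostname, device ID, model ID, and `api-version` value are placeholder assumptions; in practice the device SDK composes this field for you.

```python
from urllib.parse import quote

def mqtt_username(hub_hostname: str, device_id: str, model_id: str) -> str:
    # The USERNAME field combines the hub hostname, the device ID, an
    # API version, and the URL-encoded model ID the device announces.
    # The api-version shown here is illustrative; your SDK may target
    # a different version.
    return (f"{hub_hostname}/{device_id}/"
            f"?api-version=2021-04-12&model-id={quote(model_id, safe='')}")

print(mqtt_username("myhub.azure-devices.net", "thermostat-01",
                    "dtmi:com:example:Thermostat;1"))
```

The model ID must be URL-encoded because DTMI identifiers contain `:` and `;`, which aren't valid unescaped in a query string value.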
++To identify the model that a device or module implements, a service can get the model ID from: ++- The device twin `modelId` field. +- The digital twin `$metadata.$model` field. +- A digital twin change notification. ++## Telemetry ++- Telemetry sent from a no component device doesn't require any extra metadata. The system adds the `dt-dataschema` property. +- Telemetry sent from a device using components must add the component name to the telemetry message. +- When using MQTT, add the `$.sub` property with the component name to the telemetry topic; the system adds the `dt-subject` property. +- When using AMQP, add the `dt-subject` property with the component name as a message annotation. ++> [!NOTE] +> Telemetry from components requires one message per component. ++For more telemetry examples, see [Payloads > Telemetry](concepts-message-payloads.md#telemetry). ++## Read-only properties ++A device sets a read-only property, which it then reports to the back-end application. ++### Sample no component read-only property ++A device or module can send any valid JSON that follows the DTDL rules. ++DTDL that defines a property on an interface: ++```json +{ + "@context": "dtmi:dtdl:context;2", + "@id": "dtmi:example:Thermostat;1", + "@type": "Interface", + "contents": [ + { + "@type": "Property", + "name": "temperature", + "schema": "double" + } + ] +} +``` ++Sample reported property payload: ++```json +"reported" : +{ + "temperature" : 21.3 +} +``` ++### Sample multiple components read-only property ++The device or module must add the `{"__t": "c"}` marker to indicate that the element refers to a component. 
++DTDL that references a component: ++```json +{ + "@context": "dtmi:dtdl:context;2", + "@id": "dtmi:com:example:TemperatureController;1", + "@type": "Interface", + "displayName": "Temperature Controller", + "contents": [ + { + "@type" : "Component", + "schema": "dtmi:com:example:Thermostat;1", + "name": "thermostat1" + } + ] +} +``` ++DTDL that defines the component: ++```json +{ + "@context": "dtmi:dtdl:context;2", + "@id": "dtmi:com:example:Thermostat;1", + "@type": "Interface", + "contents": [ + { + "@type": "Property", + "name": "temperature", + "schema": "double" + } + ] +} +``` ++Sample reported property payload: ++```json +"reported": { + "thermostat1": { + "__t": "c", + "temperature": 21.3 + } +} +``` ++For more read-only property examples, see [Payloads > Properties](concepts-message-payloads.md#properties). ++## Writable properties ++A back-end application sets a writable property that IoT Hub then sends to the device. ++The device or module should confirm that it received the property by sending a reported property. The reported property should include: ++- `value` - the actual value of the property (typically the received value, but the device may decide to report a different value). +- `ac` - an acknowledgment code that uses an HTTP status code. +- `av` - an acknowledgment version that refers to the `$version` of the desired property. You can find this value in the desired property JSON payload. +- `ad` - an optional acknowledgment description. 
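To make the shape of these acknowledgments concrete, here's a small Python helper (hypothetical, not part of any Azure SDK) that composes a reported-property payload from the four fields, optionally wrapping it in a component carrying the `{"__t": "c"}` marker:

```python
from typing import Any, Optional

def ack_payload(name: str, value: Any, ac: int, av: int,
                ad: str = "", component: Optional[str] = None) -> dict:
    # Compose the value/ac/av/ad acknowledgment for one writable property.
    ack: dict = {"value": value, "ac": ac, "av": av}
    if ad:
        ack["ad"] = ad  # the description field is optional
    body = {name: ack}
    if component:
        # Properties defined in a component need the component marker.
        body = {component: {"__t": "c", **body}}
    return body

# Acknowledge a desired targetTemperature of 21.3 at desired $version 3:
reported = ack_payload("targetTemperature", 21.3, ac=200, av=3, ad="complete")
```

The resulting dictionary matches the reported-property JSON shown in the samples that follow; serialize it with `json.dumps` before patching the twin.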
++### Acknowledgment responses ++When reporting writable properties, the device should compose the acknowledgment message, by using the four fields in the previous list, to indicate the actual device state, as described in the following table: ++|Status(ac)|Version(av)|Value(value)|Description(ad)| +|:|:|:|:| +|200|Desired version|Desired value|Desired property value accepted| +|202|Desired version|Value accepted by the device|Desired property value accepted, update in progress (should finish with 200)| +|203|0|Value set by the device|Property set from the device, not reflecting any desired property| +|400|Desired version|Actual value used by the device|Desired property value not accepted| +|500|Desired version|Actual value used by the device|Exception when applying the property| ++When a device starts up, it should request the device twin, and check for any writable property updates. If the version of a writable property increased while the device was offline, the device should send a reported property response to confirm that it received the update. ++When a device starts up for the first time, it can send an initial value for a reported property if it doesn't receive an initial desired property from the IoT hub. In this case, the device can send the default value with `av` set to `0` and `ac` set to `203`. For example: ++```json +"reported": { + "targetTemperature": { + "value": 20.0, + "ac": 203, + "av": 0, + "ad": "initialize" + } +} +``` ++A device can use the reported property to provide other information to the hub. 
For example, the device could respond with a series of in-progress messages such as: ++```json +"reported": { + "targetTemperature": { + "value": 35.0, + "ac": 202, + "av": 3, + "ad": "In-progress - reporting current temperature" + } +} +``` ++When the device reaches the target temperature, it sends the following message: ++```json +"reported": { + "targetTemperature": { + "value": 20.0, + "ac": 200, + "av": 4, + "ad": "Reached target temperature" + } +} +``` ++A device could report an error such as: ++```json +"reported": { + "targetTemperature": { + "value": 120.0, + "ac": 500, + "av": 3, + "ad": "Target temperature out of range. Valid range is 10 to 99." + } +} +``` ++### Object type ++If a writable property is defined as an object, the service must send a complete object to the device. The device should acknowledge the update by sending sufficient information back to the service for the service to understand how the device has acted on the update. This response could include: ++- The entire object. +- Just the fields that the device updated. +- A subset of the fields. ++For large objects, consider minimizing the size of the object you include in the acknowledgment. 
++The following example shows a writable property defined as an `Object` with four fields: ++DTDL: ++```json +{ + "@type": "Property", + "name": "samplingRange", + "schema": { + "@type": "Object", + "fields": [ + { + "name": "startTime", + "schema": "dateTime" + }, + { + "name": "lastTime", + "schema": "dateTime" + }, + { + "name": "count", + "schema": "integer" + }, + { + "name": "errorCount", + "schema": "integer" + } + ] + }, + "displayName": "Sampling range", + "writable": true +} +``` ++To update this writable property, send a complete object from the service that looks like the following example: ++```json +{ + "samplingRange": { + "startTime": "2021-08-17T12:53:00.000Z", + "lastTime": "2021-08-17T14:54:00.000Z", + "count": 100, + "errorCount": 5 + } +} +``` ++The device responds with an acknowledgment that looks like the following example: ++```json +{ + "samplingRange": { + "ac": 200, + "av": 5, + "ad": "Weighing status updated", + "value": { + "startTime": "2021-08-17T12:53:00.000Z", + "lastTime": "2021-08-17T14:54:00.000Z", + "count": 100, + "errorCount": 5 + } + } +} +``` ++### Sample no component writable property ++When a device receives multiple desired properties in a single payload, it can send the reported property responses across multiple payloads or combine the responses into a single payload. ++A device or module can send any valid JSON that follows the DTDL rules. 
++DTDL: ++```json +{ + "@context": "dtmi:dtdl:context;2", + "@id": "dtmi:example:Thermostat;1", + "@type": "Interface", + "contents": [ + { + "@type": "Property", + "name": "targetTemperature", + "schema": "double", + "writable": true + }, + { + "@type": "Property", + "name": "targetHumidity", + "schema": "double", + "writable": true + } + ] +} +``` ++Sample desired property payload: ++```json +"desired" : +{ + "targetTemperature" : 21.3, + "targetHumidity" : 80, + "$version" : 3 +} +``` ++Sample reported property first payload: ++```json +"reported": { + "targetTemperature": { + "value": 21.3, + "ac": 200, + "av": 3, + "ad": "complete" + } +} +``` ++Sample reported property second payload: ++```json +"reported": { + "targetHumidity": { + "value": 80, + "ac": 200, + "av": 3, + "ad": "complete" + } +} +``` ++> [!NOTE] +> You could choose to combine these two reported property payloads into a single payload. ++### Sample multiple components writable property ++The device or module must add the `{"__t": "c"}` marker to indicate that the element refers to a component. ++The marker is sent only for updates to properties defined in a component. Updates to properties defined in the default component don't include the marker; see [Sample no component writable property](#sample-no-component-writable-property). ++When a device receives multiple reported properties in a single payload, it can send the reported property responses across multiple payloads or combine the responses into a single payload. 
++The device or module should confirm that it received the properties by sending reported properties: ++DTDL that references a component: ++```json +{ + "@context": "dtmi:dtdl:context;2", + "@id": "dtmi:com:example:TemperatureController;1", + "@type": "Interface", + "displayName": "Temperature Controller", + "contents": [ + { + "@type" : "Component", + "schema": "dtmi:com:example:Thermostat;1", + "name": "thermostat1" + } + ] +} +``` ++DTDL that defines the component: ++```json +{ + "@context": "dtmi:dtdl:context;2", + "@id": "dtmi:com:example:Thermostat;1", + "@type": "Interface", + "contents": [ + { + "@type": "Property", + "name": "targetTemperature", + "schema": "double", + "writable": true + } + ] +} +``` ++Sample desired property payload: ++```json +"desired": { + "thermostat1": { + "__t": "c", + "targetTemperature": 21.3, + "targetHumidity": 80, + "$version" : 3 + } +} +``` ++Sample reported property first payload: ++```json +"reported": { + "thermostat1": { + "__t": "c", + "targetTemperature": { + "value": 23, + "ac": 200, + "av": 3, + "ad": "complete" + } + } +} +``` ++Sample reported property second payload: ++```json +"reported": { + "thermostat1": { + "__t": "c", + "targetHumidity": { + "value": 80, + "ac": 200, + "av": 3, + "ad": "complete" + } + } +} +``` ++> [!NOTE] +> You could choose to combine these two reported property payloads into a single payload. ++For more writable property examples, see [Payloads > Properties](concepts-message-payloads.md#writable-property-types). ++## Commands ++No component interfaces use the command name without a prefix. ++On a device or module, multiple component interfaces use command names with the following format: `componentName*commandName`. ++For more command examples, see [Payloads > Commands](concepts-message-payloads.md#commands). 
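A service-side handler can split incoming direct-method names according to the `componentName*commandName` convention. The following Python sketch is illustrative (the method names are made up):

```python
from typing import Optional, Tuple

def parse_command_name(method_name: str) -> Tuple[Optional[str], str]:
    # "componentName*commandName" -> (component, command);
    # a name without "*" belongs to the default component.
    component, sep, command = method_name.partition("*")
    if sep:
        return component, command
    return None, method_name

print(parse_command_name("thermostat1*getMaxMinReport"))
print(parse_command_name("reboot"))
```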
++> [!TIP] +> IoT Central has its own conventions for implementing [Long-running commands](../iot-central/core/howto-use-commands.md#long-running-commands) and [Offline commands](../iot-central/core/howto-use-commands.md#offline-commands). ++## Next steps ++Now that you've learned about IoT Plug and Play conventions, here are some other resources: ++- [Digital Twins Definition Language (DTDL)](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/README.md) +- [C device SDK](https://github.com/Azure/azure-iot-sdk-c/) +- [IoT REST API](/rest/api/iothub/device) +- [IoT Plug and Play modeling guide](concepts-modeling-guide.md) |
iot | Concepts Developer Guide Device | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot/concepts-developer-guide-device.md | + + Title: Device developer guide - IoT Plug and Play | Microsoft Docs +description: "Description of IoT Plug and Play for device developers. Includes examples in the following languages: C, C#, Java, JavaScript, Python, and Embedded C." ++ Last updated : 1/23/2024++++zone_pivot_groups: programming-languages-set-twenty-seven ++#- id: programming-languages-set-twenty-seven +# Title: Embedded C +++# IoT Plug and Play device developer guide ++IoT Plug and Play lets you build IoT devices that advertise their capabilities to Azure IoT applications. IoT Plug and Play devices don't require manual configuration when a customer connects them to IoT Plug and Play-enabled applications such as IoT Central. ++You can implement an IoT device directly by using [modules](../iot-hub/iot-hub-devguide-module-twins.md), or by using [IoT Edge modules](../iot-edge/about-iot-edge.md). ++This guide describes the basic steps required to create a device, module, or IoT Edge module that follows the [IoT Plug and Play conventions](../iot/concepts-convention.md). ++To build an IoT Plug and Play device, module, or IoT Edge module, follow these steps: ++1. Ensure your device is using either the MQTT or MQTT over WebSockets protocol to connect to Azure IoT Hub. +1. Create a [Digital Twins Definition Language (DTDL)](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/README.md) model to describe your device. To learn more, see [Understand components in IoT Plug and Play models](concepts-modeling-guide.md). +1. Update your device or module to announce the `model-id` as part of the device connection. +1. 
Implement telemetry, properties, and commands that follow the [IoT Plug and Play conventions](concepts-convention.md) ++Once your device or module implementation is ready, use the [Azure IoT explorer](../iot/howto-use-iot-explorer.md) to validate that the device follows the IoT Plug and Play conventions. ++++++++++++++++++++## Next steps ++Now that you've learned about IoT Plug and Play device development, here are some other resources: ++- [Digital Twins Definition Language (DTDL)](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/README.md) +- [C device SDK](https://github.com/Azure/azure-iot-sdk-c/) +- [IoT REST API](/rest/api/iothub/device) +- [Understand components in IoT Plug and Play models](concepts-modeling-guide.md) +- [IoT Plug and Play service developer guide](concepts-developer-guide-service.md) |
iot | Concepts Developer Guide Service | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot/concepts-developer-guide-service.md | + + Title: Service developer guide - IoT Plug and Play | Microsoft Docs +description: Description of IoT Plug and Play for service developers ++ Last updated : 1/23/2024++++zone_pivot_groups: programming-languages-set-ten ++# - id: programming-languages-set-ten +# Title: Python +++# IoT Plug and Play service developer guide ++IoT Plug and Play lets you build IoT devices that advertise their capabilities to Azure IoT applications. IoT Plug and Play devices don't require manual configuration when a customer connects them to IoT Plug and Play-enabled applications. ++IoT Plug and Play lets you use devices that have announced their model ID with your IoT hub. For example, you can access the properties and commands of a device directly. ++If you're using IoT Central, you can use the IoT Central UI and REST API to interact with IoT Plug and Play devices connected to your application. ++## Service SDKs ++Use the Azure IoT service SDKs in your solution to interact with devices and modules. For example, you can use the service SDKs to read and update twin properties and invoke commands. Supported languages include C#, Java, Node.js, and Python. +++The service SDKs let you access device information from a solution component such as a desktop or web application. The service SDKs include two namespaces and object models that you can use to retrieve the model ID: ++- IoT Hub service client. This service exposes the model ID as a device twin property. ++- Digital Twins client. The new Digital Twins API operates on [Digital Twins Definition Language (DTDL)](concepts-digital-twin.md) model constructs such as components, properties, and commands. The Digital Twin APIs make it easier for solution builders to create IoT Plug and Play solutions. 
++++++++++++++## Next steps ++Now that you've learned about device modeling, here are some more resources: ++- [Digital Twins Definition Language (DTDL)](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/README.md) +- [C device SDK](https://github.com/Azure/azure-iot-sdk-c/) +- [IoT REST API](/rest/api/iothub/device) +- [IoT Plug and Play modeling guide](concepts-modeling-guide.md) |
iot | Concepts Digital Twin | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot/concepts-digital-twin.md | + + Title: Understand IoT Plug and Play digital twins +description: Understand how IoT Plug and Play uses digital twins ++ Last updated : 1/23/2024+++++# Understand IoT Plug and Play digital twins ++An IoT Plug and Play device implements a model described by the [Digital Twins Definition Language (DTDL)](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/README.md) schema. A model describes the set of components, properties, commands, and telemetry messages that a particular device can have. ++> [!NOTE] +> DTDL isn't exclusive to IoT Plug and Play. Other IoT services, such as [Azure Digital Twins](../digital-twins/overview.md), use it to represent entire environments such as buildings and energy networks. ++The Azure IoT service SDKs include APIs that let a service interact with a device's digital twin. For example, a service can read device properties from the digital twin or use the digital twin to call a command on a device. To learn more, see [IoT Hub digital twin examples](concepts-developer-guide-service.md#iot-hub-digital-twin-examples). ++The example IoT Plug and Play device in this article implements a [Temperature Controller model](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/samples/TemperatureController.json) that has [Thermostat](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/samples/Thermostat.json) components. ++## Device twins and digital twins ++Along with a digital twin, Azure IoT Hub also maintains a *device twin* for every connected device. A device twin is similar to a digital twin in that it's a representation of a device's properties. An IoT hub initializes a digital twin and a device twin the first time an IoT Plug and Play device is provisioned. The Azure IoT service SDKs include APIs for interacting with device twins. 
++Device twins are JSON documents that store device state information, including metadata, configurations, and conditions. To learn more, see [IoT Hub service client examples](concepts-developer-guide-service.md#iot-hub-service-client-examples). Device and solution builders can both use the same set of device twin APIs and SDKs to implement devices and solutions using IoT Plug and Play conventions. In a device twin, the state of a writable property is split across the *desired properties* and *reported properties* sections. All read-only properties are available within the reported properties section. ++The digital twin APIs operate on high-level DTDL constructs such as components, properties, and commands, and make it easier for solution builders to create IoT Plug and Play solutions. In a digital twin, there's a unified view of the current and desired state of the property. The synchronization state for a given property is stored in the corresponding default component `$metadata` section. 
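As a sketch of that difference, the following Python snippet (an illustrative projection, not an Azure SDK API) merges the desired and reported sides of one writable property from a device twin into the unified value-plus-`$metadata` view that a digital twin presents:

```python
def project_writable_property(name, desired, desired_version, reported):
    """Merge the device twin's desired/reported views of one writable
    property into a digital-twin-style value plus $metadata entry."""
    ack = reported[name]  # {"value": ..., "ac": ..., "av": ..., "ad": ...}
    return {
        name: ack["value"],  # the current (reported) value
        "$metadata": {
            name: {
                "desiredValue": desired[name],
                "desiredVersion": desired_version,
                "ackVersion": ack["av"],
                "ackCode": ack["ac"],
                "ackDescription": ack["ad"],
            }
        },
    }


# Values taken from the twin examples in this article.
view = project_writable_property(
    "targetTemperature",
    desired={"targetTemperature": 21.8},
    desired_version=4,
    reported={
        "targetTemperature": {
            "value": 21.8, "ac": 200, "av": 4,
            "ad": "Successfully executed patch",
        }
    },
)
print(view)
```

IoT Hub performs this synchronization for you; the sketch only illustrates how the two representations relate.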
++### Device twin JSON example ++The following snippet shows an IoT Plug and Play device twin formatted as a JSON object: ++```json +{ + "deviceId": "sample-device", + "modelId": "dtmi:com:example:TemperatureController;1", + "version": 15, + "properties": { + "desired": { + "thermostat1": { + "__t": "c", + "targetTemperature": 21.8 + }, + "$metadata": {...}, + "$version": 4 + }, + "reported": { + "serialNumber": "alwinexlepaho8329", + "thermostat1": { + "maxTempSinceLastReboot": 25.3, + "__t": "c", + "targetTemperature": { + "value": 21.8, + "ac": 200, + "ad": "Successfully executed patch", + "av": 4 + } + }, + "$metadata": {...}, + "$version": 11 + } + } +} +``` ++### Digital twin example ++The following snippet shows the digital twin formatted as a JSON object: ++```json +{ + "$dtId": "sample-device", + "serialNumber": "alwinexlepaho8329", + "thermostat1": { + "maxTempSinceLastReboot": 25.3, + "targetTemperature": 21.8, + "$metadata": { + "targetTemperature": { + "desiredValue": 21.8, + "desiredVersion": 4, + "ackVersion": 4, + "ackCode": 200, + "ackDescription": "Successfully executed patch", + "lastUpdateTime": "2020-07-17T06:11:04.9309159Z" + }, + "maxTempSinceLastReboot": { + "lastUpdateTime": "2020-07-17T06:10:31.9609233Z" + } + } + }, + "$metadata": { + "$model": "dtmi:com:example:TemperatureController;1", + "serialNumber": { + "lastUpdateTime": "2020-07-17T06:10:31.9609233Z" + } + } +} +``` ++The following table describes the fields in the digital twin JSON object: ++| Field name | Description | +| | | +| `$dtId` | A user-provided string representing the ID of the device digital twin. | +| `{propertyName}` | The value of a property in JSON. | +| `$metadata.$model` | [Optional] The ID of the model interface that characterizes this digital twin. | +| `$metadata.{propertyName}.desiredValue` | [Only for writable properties] The desired value of the specified property. 
| +| `$metadata.{propertyName}.desiredVersion` | [Only for writable properties] The version of the desired value maintained by IoT Hub.| +| `$metadata.{propertyName}.ackVersion` | [Required, only for writable properties] The version acknowledged by the device implementing the digital twin; it must be greater than or equal to the desired version. | +| `$metadata.{propertyName}.ackCode` | [Required, only for writable properties] The `ack` code returned by the device app implementing the digital twin. | +| `$metadata.{propertyName}.ackDescription` | [Optional, only for writable properties] The `ack` description returned by the device app implementing the digital twin. | +| `$metadata.{propertyName}.lastUpdateTime` | IoT Hub maintains the timestamp of the last update of the property by the device. The timestamps are in UTC and encoded in the ISO 8601 format YYYY-MM-DDTHH:MM:SS.mmmZ. | +| `{componentName}` | A JSON object containing the component's property values and metadata. | +| `{componentName}.{propertyName}` | The value of the component's property in JSON. | +| `{componentName}.$metadata` | The metadata information for the component. | ++### Properties ++Properties are data fields that represent the state of an entity just like the properties in many object-oriented programming languages. ++#### Read-only property ++DTDL schema: ++```json +{ + "@type": "Property", + "name": "serialNumber", + "displayName": "Serial Number", + "description": "Serial number of the device.", + "schema": "string" +} +``` ++In this example, `alwinexlepaho8329` is the current value of the `serialNumber` read-only property reported by the device. 
++The following snippets show the side-by-side JSON representation of the `serialNumber` property: ++ :::column span=""::: + **Device twin** ++```json +"properties": { + "reported": { + "serialNumber": "alwinexlepaho8329" + } +} +``` ++ :::column-end::: + :::column span=""::: + **Digital twin** ++```json +"serialNumber": "alwinexlepaho8329" +``` ++ :::column-end::: ++#### Writable property ++The following examples show a writable property in the default component. ++DTDL: ++```json +{ + "@type": "Property", + "name": "fanSpeed", + "displayName": "Fan Speed", + "writable": true, + "schema": "double" +} +``` ++ :::column span=""::: + **Device twin** ++```json +{ + "properties": { + "desired": { + "fanSpeed": 2.0, + }, + "reported": { + "fanSpeed": { + "value": 3.0, + "ac": 200, + "av": 1, + "ad": "Successfully executed patch version 1" + } + } + }, +} +``` ++ :::column-end::: + :::column span=""::: + **Digital twin** ++```json +{ + "fanSpeed": 3.0, + "$metadata": { + "fanSpeed": { + "desiredValue": 2.0, + "desiredVersion": 2, + "ackVersion": 1, + "ackCode": 200, + "ackDescription": "Successfully executed patch version 1", + "lastUpdateTime": "2020-07-17T06:10:31.9609233Z" + } + } +} +``` ++ :::column-end::: ++In this example, `3.0` is the current value of the `fanSpeed` property reported by the device. `2.0` is the desired value set by the solution. The desired value and synchronization state of a root-level property is set within root-level `$metadata` for a digital twin. When the device comes online, it can apply this update and report back the updated value. ++### Components ++Components let you build a model interface as an assembly of other interfaces. 
For example, the [Thermostat](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/samples/Thermostat.json) interface can be incorporated as components `thermostat1` and `thermostat2` in the [Temperature Controller model](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/samples/TemperatureController.json) model. ++In a device twin, a component is identified by the `{ "__t": "c"}` marker. In a digital twin, the presence of `$metadata` marks a component. ++In this example, `thermostat1` is a component with two properties: ++- `maxTempSinceLastReboot` is a read-only property. +- `targetTemperature` is a writable property that's been successfully synchronized by the device. The desired value and synchronization state of these properties are in the component's `$metadata`. ++The following snippets show the side-by-side JSON representation of the `thermostat1` component: ++ :::column span=""::: + **Device twin** ++```json +"properties": { + "desired": { + "thermostat1": { + "__t": "c", + "targetTemperature": 21.8 + }, + "$metadata": { + }, + "$version": 4 + }, + "reported": { + "thermostat1": { + "maxTempSinceLastReboot": 25.3, + "__t": "c", + "targetTemperature": { + "value": 21.8, + "ac": 200, + "ad": "Successfully executed patch", + "av": 4 + } + }, + "$metadata": { + }, + "$version": 11 + } +} +``` ++ :::column-end::: + :::column span=""::: + **Digital twin** ++```json +"thermostat1": { + "maxTempSinceLastReboot": 25.3, + "targetTemperature": 21.8, + "$metadata": { + "targetTemperature": { + "desiredValue": 21.8, + "desiredVersion": 4, + "ackVersion": 4, + "ackCode": 200, + "ackDescription": "Successfully executed patch", + "lastUpdateTime": "2020-07-17T06:11:04.9309159Z" + }, + "maxTempSinceLastReboot": { + "lastUpdateTime": "2020-07-17T06:10:31.9609233Z" + } + } +} +``` ++ :::column-end::: ++## Digital twin APIs ++The digital twin APIs include **Get Digital Twin**, **Update Digital Twin**, **Invoke Component Command** and **Invoke 
Command** operations for managing a digital twin. You can use the [REST APIs](/rest/api/iothub/service/digitaltwin) directly or call them through one of the [service SDKs](concepts-developer-guide-service.md#service-sdks). ++## Digital twin change events ++When digital twin change events are enabled, an event is triggered whenever the current or desired value of the component or property changes. Digital twin change events are generated in [JSON Patch](http://jsonpatch.com/) format. Corresponding events are generated in the device twin format if twin change events are enabled. ++To learn how to enable routing for device and digital twin events, see [Use IoT Hub message routing to send device-to-cloud messages to different endpoints](../iot-hub/iot-hub-devguide-messages-d2c.md#non-telemetry-events). To understand the message format, see [Create and read IoT Hub messages](../iot-hub/iot-hub-devguide-messages-construct.md). ++For example, the following digital twin change event is triggered when `targetTemperature` is set by the solution: ++```json +iothub-connection-device-id:sample-device +iothub-enqueuedtime:7/17/2020 6:11:04 AM +iothub-message-source:digitalTwinChangeEvents +correlation-id:275d463fa034 +content-type:application/json-patch+json +content-encoding:utf-8 +[ + { + "op": "add", + "path": "/thermostat1/$metadata/targetTemperature", + "value": { + "desiredValue": 21.8, + "desiredVersion": 4 + } + } +] +``` ++The following digital twin change event is triggered when the device reports that the above desired change was applied: ++```json +iothub-connection-device-id:sample-device +iothub-enqueuedtime:7/17/2020 6:11:05 AM +iothub-message-source:digitalTwinChangeEvents +correlation-id:275d464a2c80 +content-type:application/json-patch+json +content-encoding:utf-8 +[ + { + "op": "add", + "path": "/thermostat1/$metadata/targetTemperature/ackCode", + "value": 200 + }, + { + "op": "add", + "path": "/thermostat1/$metadata/targetTemperature/ackDescription", + "value": 
"Successfully executed patch" + }, + { + "op": "add", + "path": "/thermostat1/$metadata/targetTemperature/ackVersion", + "value": 4 + }, + { + "op": "add", + "path": "/thermostat1/$metadata/targetTemperature/lastUpdateTime", + "value": "2020-07-17T06:11:04.9309159Z" + }, + { + "op": "add", + "path": "/thermostat1/targetTemperature", + "value": 21.8 + } +] +``` ++> [!NOTE] +> Twin change notification messages are doubled when turned on in both device and digital twin change notification. ++## Next steps ++Now that you've learned about digital twins, here are some more resources: ++- [How to use IoT Plug and Play digital twin APIs](howto-manage-digital-twin.md) +- [Interact with a device from your solution](./tutorial-service.md) +- [IoT Digital Twin REST API](/rest/api/iothub/service/digitaltwin) +- [Azure IoT explorer](../iot/howto-use-iot-explorer.md) |
iot | Concepts Message Payloads | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot/concepts-message-payloads.md | + + Title: Plug and Play device message payloads +description: Understand the format of the telemetry, property, and command messages that a Plug and Play device can exchange with a service. ++ Last updated : 1/23/2024+++++# This article applies to device developers. +++# Telemetry, property, and command payloads ++A [device model](concepts-modeling-guide.md) defines the: ++* Telemetry a device sends to a service. +* Properties a device synchronizes with a service. +* Commands that the service calls on a device. ++> [!TIP] +> Azure IoT Central is a service that follows the Plug and Play conventions. In IoT Central, the device model is part of a [device template](../iot-central/core/concepts-device-templates.md). IoT Central currently supports [DTDL v2](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/DTDL.v2.md) with an [IoT Central extension](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/DTDL.iotcentral.v2.md). An IoT Central application expects to receive UTF-8 encoded JSON data. ++This article describes the JSON payloads that devices send and receive for telemetry, properties, and commands defined in a DTDL device model. ++The article doesn't describe every possible type of telemetry, property, and command payload, but the examples illustrate key types. ++Each example shows a snippet from the device model that defines the type and example JSON payloads to illustrate how the device should interact with a Plug and Play aware service such as IoT Central. ++The example JSON snippets in this article use [Digital Twin Definition Language (DTDL) V2](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/DTDL.v2.md). There are also some [DTDL extensions that IoT Central](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/DTDL.iotcentral.v2.md) uses. 
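To make the model/payload relationship concrete, the sketch below (a hypothetical helper, not part of any SDK) checks a telemetry payload against a minimal name-to-schema map taken from a device model; only a few DTDL primitive schemas are handled:

```python
import numbers

# Map a few DTDL primitive schemas to the Python types their JSON values
# decode to. This is deliberately incomplete - a sketch, not a validator.
DTDL_JSON_TYPES = {
    "boolean": bool,
    "double": numbers.Real,
    "integer": int,
    "string": str,
}


def validate_telemetry(payload: dict, model: dict) -> list:
    """Return a list of problems; an empty list means the payload matches."""
    problems = []
    for name, value in payload.items():
        schema = model.get(name)
        if schema is None:
            problems.append(f"unmodeled telemetry: {name}")
        elif schema in DTDL_JSON_TYPES and not isinstance(value, DTDL_JSON_TYPES[schema]):
            problems.append(f"{name}: expected {schema}, got {type(value).__name__}")
    return problems


model = {"temperature": "double", "humidity": "double"}
print(validate_telemetry({"temperature": 21.3, "pressure": 101.2}, model))
```

Unmodeled names are exactly what IoT Central surfaces in its **Unmodeled data** column, described in the next section.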
++For sample device code that shows some of these payloads in use, see the [Connect a sample IoT Plug and Play device application running on Linux or Windows to IoT Hub tutorial](./tutorial-connect-device.md) or the [Create and connect a client application to your Azure IoT Central application](../iot-central/core/tutorial-connect-device.md) tutorial. ++## View raw data ++If you're using IoT Central, you can view the raw data that a device sends to an application. This view is useful for troubleshooting issues with the payload sent from a device. To view the raw data a device is sending: ++1. Navigate to the device from the **Devices** page. ++1. Select the **Raw data** tab: ++ :::image type="content" source="media/concepts-message-payloads/raw-data.png" alt-text="Screenshot that shows the raw data view." lightbox="media/concepts-message-payloads/raw-data.png"::: ++ On this view, you can select the columns to display and set a time range to view. The **Unmodeled data** column shows data from the device that doesn't match any property or telemetry definitions in the device template. ++For more troubleshooting tips, see [Troubleshoot why data from your devices isn't showing up in Azure IoT Central](../iot-central/core/troubleshoot-connection.md). ++## Telemetry ++To learn more about the DTDL telemetry naming rules, see [DTDL > Telemetry](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/DTDL.v2.md#telemetry). You can't start a telemetry name using the `_` character. ++Don't create telemetry types with the following names. IoT Central uses these reserved names internally. If you try to use these names, IoT Central will ignore your data: ++* `EventEnqueuedUtcTime` +* `EventProcessedUtcTime` +* `PartitionId` +* `EventHub` +* `User` +* `$metadata` +* `$version` ++### Telemetry in components ++If the telemetry is defined in a component, add a custom message property called `$.sub` with the name of the component as defined in the device model. 
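Conceptually, such a message pairs a JSON body with a `$.sub` application property naming the component. The plain-dict sketch below only models that shape — real SDK message types differ:

```python
import json


def make_component_telemetry(component: str, payload: dict) -> dict:
    """Model a telemetry message whose body is JSON and whose application
    properties carry the '$.sub' component marker. This is a conceptual
    shape only; the Azure SDK message classes are not reproduced here."""
    return {
        "body": json.dumps(payload),
        "properties": {"$.sub": component},  # component name from the model
    }


msg = make_component_telemetry("thermostat1", {"temperature": 21.3})
print(msg)
```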
To learn more, see [Tutorial: Connect an IoT Plug and Play multiple component device applications](tutorial-multiple-components.md). This tutorial shows how to use different programming languages to send telemetry from a component. ++> [!IMPORTANT] +> To display telemetry from components hosted in IoT Edge modules correctly, use [IoT Edge version 1.2.4](https://github.com/Azure/azure-iotedge/releases/tag/1.2.4) or later. If you use an earlier version, telemetry from your components in IoT Edge modules displays as *_unmodeleddata*. ++### Telemetry in inherited interfaces ++If the telemetry is defined in an inherited interface, your device sends the telemetry as if it is defined in the root interface. Given the following device model: ++```json +[ + { + "@id": "dtmi:contoso:device;1", + "@type": "Interface", + "contents": [ + { + "@type": [ + "Property", + "Cloud", + "StringValue" + ], + "displayName": { + "en": "Device Name" + }, + "name": "DeviceName", + "schema": "string" + } + ], + "displayName": { + "en": "Contoso Device" + }, + "extends": [ + "dtmi:contoso:sensor;1" + ], + "@context": [ + "dtmi:iotcentral:context;2", + "dtmi:dtdl:context;2" + ] + }, + { + "@context": [ + "dtmi:iotcentral:context;2", + "dtmi:dtdl:context;2" + ], + "@id": "dtmi:contoso:sensor;1", + "@type": [ + "Interface", + "NamedInterface" + ], + "contents": [ + { + "@type": [ + "Telemetry", + "NumberValue" + ], + "displayName": { + "en": "Meter Voltage" + }, + "name": "MeterVoltage", + "schema": "double" + } + ], + "displayName": { + "en": "Contoso Sensor" + }, + "name": "ContosoSensor" + } +] +``` ++The device sends meter voltage telemetry using the following payload. The device doesn't include the interface name in the payload: ++```json +{ + "MeterVoltage": 5.07 +} +``` ++### Primitive types ++This section shows examples of primitive telemetry types that a device can stream. 
++The following snippet from a device model shows the definition of a `boolean` telemetry type: ++```json +{ + "@type": "Telemetry", + "displayName": { + "en": "BooleanTelemetry" + }, + "name": "BooleanTelemetry", + "schema": "boolean" +} +``` ++A device client should send the telemetry as JSON that looks like the following example: ++```json +{ "BooleanTelemetry": true } +``` ++The following snippet from a device model shows the definition of a `string` telemetry type: ++```json +{ + "@type": "Telemetry", + "displayName": { + "en": "StringTelemetry" + }, + "name": "StringTelemetry", + "schema": "string" +} +``` ++A device client should send the telemetry as JSON that looks like the following example: ++```json +{ "StringTelemetry": "A string value - could be a URL" } +``` ++The following snippet from a device model shows the definition of an `integer` telemetry type: ++```json +{ + "@type": "Telemetry", + "displayName": { + "en": "IntegerTelemetry" + }, + "name": "IntegerTelemetry", + "schema": "integer" +} ++``` ++A device client should send the telemetry as JSON that looks like the following example: ++```json +{ "IntegerTelemetry": 23 } +``` ++The following snippet from a device model shows the definition of a `double` telemetry type: ++```json +{ + "@type": "Telemetry", + "displayName": { + "en": "DoubleTelemetry" + }, + "name": "DoubleTelemetry", + "schema": "double" +} +``` ++A device client should send the telemetry as JSON that looks like the following example: ++```json +{ "DoubleTelemetry": 56.78 } +``` ++The following snippet from a device model shows the definition of a `dateTime` telemetry type: ++```json +{ + "@type": "Telemetry", + "displayName": { + "en": "DateTimeTelemetry" + }, + "name": "DateTimeTelemetry", + "schema": "dateTime" +} +``` ++A device client should send the telemetry as JSON that looks like the following example - `DateTime` types must be in ISO 8061 format: ++```json +{ "DateTimeTelemetry": "2020-08-30T19:16:13.853Z" } +``` ++The 
following snippet from a device model shows the definition of a `duration` telemetry type: ++```json +{ + "@type": "Telemetry", + "displayName": { + "en": "DurationTelemetry" + }, + "name": "DurationTelemetry", + "schema": "duration" +} +``` ++A device client should send the telemetry as JSON that looks like the following example - durations must be in ISO 8601 format: ++```json +{ "DurationTelemetry": "PT10H24M6.169083011336625S" } +``` ++### Complex types ++This section shows examples of complex telemetry types that a device can stream. ++The following snippet from a device model shows the definition of an `Enum` telemetry type: ++```json +{ + "@type": "Telemetry", + "displayName": { + "en": "EnumTelemetry" + }, + "name": "EnumTelemetry", + "schema": { + "@type": "Enum", + "displayName": { + "en": "Enum" + }, + "valueSchema": "integer", + "enumValues": [ + { + "displayName": { + "en": "Item1" + }, + "enumValue": 0, + "name": "Item1" + }, + { + "displayName": { + "en": "Item2" + }, + "enumValue": 1, + "name": "Item2" + }, + { + "displayName": { + "en": "Item3" + }, + "enumValue": 2, + "name": "Item3" + } + ] + } +} +``` ++A device client should send the telemetry as JSON that looks like the following example. Possible values are `0`, `1`, and `2` that display in IoT Central as `Item1`, `Item2`, and `Item3`: ++```json +{ "EnumTelemetry": 1 } +``` ++The following snippet from a device model shows the definition of an `Object` telemetry type. 
This object has three fields with types `dateTime`, `integer`, and `Enum`: ++```json +{ + "@type": "Telemetry", + "displayName": { + "en": "ObjectTelemetry" + }, + "name": "ObjectTelemetry", + "schema": { + "@type": "Object", + "displayName": { + "en": "Object" + }, + "fields": [ + { + "displayName": { + "en": "Property1" + }, + "name": "Property1", + "schema": "dateTime" + }, + { + "displayName": { + "en": "Property2" + }, + "name": "Property2", + "schema": "integer" + }, + { + "displayName": { + "en": "Property3" + }, + "name": "Property3", + "schema": { + "@type": "Enum", + "displayName": { + "en": "Enum" + }, + "valueSchema": "integer", + "enumValues": [ + { + "displayName": { + "en": "Item1" + }, + "enumValue": 0, + "name": "Item1" + }, + { + "displayName": { + "en": "Item2" + }, + "enumValue": 1, + "name": "Item2" + }, + { + "displayName": { + "en": "Item3" + }, + "enumValue": 2, + "name": "Item3" + } + ] + } + } + ] + } +} +``` ++A device client should send the telemetry as JSON that looks like the following example. `DateTime` types must be ISO 8061 compliant. 
Possible values for `Property3` are `0`, `1`, and `2` that display in IoT Central as `Item1`, `Item2`, and `Item3`: ++```json +{ + "ObjectTelemetry": { + "Property1": "2020-09-09T03:36:46.195Z", + "Property2": 37, + "Property3": 2 + } +} +``` ++The following snippet from a device model shows the definition of a `vector` telemetry type: ++```json +{ + "@type": "Telemetry", + "displayName": { + "en": "VectorTelemetry" + }, + "name": "VectorTelemetry", + "schema": "vector" +} +``` ++A device client should send the telemetry as JSON that looks like the following example: ++```json +{ + "VectorTelemetry": { + "x": 74.72395045538597, + "y": 74.72395045538597, + "z": 74.72395045538597 + } +} +``` ++The following snippet from a device model shows the definition of a `geopoint` telemetry type: ++```json +{ + "@type": "Telemetry", + "displayName": { + "en": "GeopointTelemetry" + }, + "name": "GeopointTelemetry", + "schema": "geopoint" +} +``` ++> [!NOTE] +> The **geopoint** schema type is part of the [IoT Central extension](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/DTDL.iotcentral.v2.md) to DTDL. IoT Central currently supports the **geopoint** schema type and the **location** semantic type for backwards compatibility. ++A device client should send the telemetry as JSON that looks like the following example. IoT Central displays the value as a pin on a map: ++```json +{ + "GeopointTelemetry": { + "lat": 47.64263, + "lon": -122.13035, + "alt": 0 + } +} +``` ++### Event and state types ++This section shows examples of telemetry events and states that a device sends to an IoT Central application. ++> [!NOTE] +> The **event** and **state** schema types are part of the [IoT Central extension](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/DTDL.iotcentral.v2.md) to DTDL. 
++The following snippet from a device model shows the definition of a `integer` event type: ++```json +{ + "@type": [ + "Telemetry", + "Event" + ], + "displayName": { + "en": "IntegerEvent" + }, + "name": "IntegerEvent", + "schema": "integer" +} +``` ++A device client should send the event data as JSON that looks like the following example: ++```json +{ "IntegerEvent": 74 } +``` ++The following snippet from a device model shows the definition of a `integer` state type: ++```json +{ + "@type": [ + "Telemetry", + "State" + ], + "displayName": { + "en": "IntegerState" + }, + "name": "IntegerState", + "schema": { + "@type": "Enum", + "valueSchema": "integer", + "enumValues": [ + { + "displayName": { + "en": "Level1" + }, + "enumValue": 1, + "name": "Level1" + }, + { + "displayName": { + "en": "Level2" + }, + "enumValue": 2, + "name": "Level2" + }, + { + "displayName": { + "en": "Level3" + }, + "enumValue": 3, + "name": "Level3" + } + ] + } +} +``` ++A device client should send the state as JSON that looks like the following example. Possible integer state values are `1`, `2`, or `3`: ++```json +{ "IntegerState": 2 } +``` ++## Properties ++To learn more about the DTDL property naming rules, see [DTDL > Property](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/DTDL.v2.md#property). You can't start a property name using the `_` character. ++### Properties in components ++If the property is defined in a component, wrap the property in the component name. The following example sets the `maxTempSinceLastReboot` in the `thermostat2` component. The marker `__t` indicates that this section defines a component: ++```json +{ + "thermostat2" : { + "__t" : "c", + "maxTempSinceLastReboot" : 38.7 + } +} +``` ++To learn more, see [Tutorial: Create and connect a client application to your Azure IoT Central application](./tutorial-connect-device.md). ++### Primitive types ++This section shows examples of primitive property types that a device sends to a service. 
++The following snippet from a device model shows the definition of a `boolean` property type: ++```json +{ + "@type": "Property", + "displayName": { + "en": "BooleanProperty" + }, + "name": "BooleanProperty", + "schema": "boolean", + "writable": false +} +``` ++A device client should send a JSON payload that looks like the following example as a reported property in the device twin: ++```json +{ "BooleanProperty": false } +``` ++The following snippet from a device model shows the definition of a `long` property type: ++```json +{ + "@type": "Property", + "displayName": { + "en": "LongProperty" + }, + "name": "LongProperty", + "schema": "long", + "writable": false +} +``` ++A device client should send a JSON payload that looks like the following example as a reported property in the device twin: ++```json +{ "LongProperty": 439 } +``` ++The following snippet from a device model shows the definition of a `date` property type: ++```json +{ + "@type": "Property", + "displayName": { + "en": "DateProperty" + }, + "name": "DateProperty", + "schema": "date", + "writable": false +} +``` ++A device client should send a JSON payload that looks like the following example as a reported property in the device twin. 
`Date` types must be ISO 8601 compliant: ++```json +{ "DateProperty": "2020-05-17" } +``` ++The following snippet from a device model shows the definition of a `duration` property type: ++```json +{ + "@type": "Property", + "displayName": { + "en": "DurationProperty" + }, + "name": "DurationProperty", + "schema": "duration", + "writable": false +} +``` ++A device client should send a JSON payload that looks like the following example as a reported property in the device twin - durations must be ISO 8601 Duration compliant: ++```json +{ "DurationProperty": "PT10H24M6.169083011336625S" } +``` ++The following snippet from a device model shows the definition of a `float` property type: ++```json +{ + "@type": "Property", + "displayName": { + "en": "FloatProperty" + }, + "name": "FloatProperty", + "schema": "float", + "writable": false +} +``` ++A device client should send a JSON payload that looks like the following example as a reported property in the device twin: ++```json +{ "FloatProperty": 1.9 } +``` ++The following snippet from a device model shows the definition of a `string` property type: ++```json +{ + "@type": "Property", + "displayName": { + "en": "StringProperty" + }, + "name": "StringProperty", + "schema": "string", + "writable": false +} +``` ++A device client should send a JSON payload that looks like the following example as a reported property in the device twin: ++```json +{ "StringProperty": "A string value - could be a URL" } +``` ++### Complex types ++This section shows examples of complex property types that a device sends to a service. 
++The following snippet from a device model shows the definition of an `Enum` property type: ++```json +{ + "@type": "Property", + "displayName": { + "en": "EnumProperty" + }, + "name": "EnumProperty", + "writable": false, + "schema": { + "@type": "Enum", + "displayName": { + "en": "Enum" + }, + "valueSchema": "integer", + "enumValues": [ + { + "displayName": { + "en": "Item1" + }, + "enumValue": 0, + "name": "Item1" + }, + { + "displayName": { + "en": "Item2" + }, + "enumValue": 1, + "name": "Item2" + }, + { + "displayName": { + "en": "Item3" + }, + "enumValue": 2, + "name": "Item3" + } + ] + } +} +``` ++A device client should send a JSON payload that looks like the following example as a reported property in the device twin. Possible values are `0`, `1`, and `2`, which display in IoT Central as `Item1`, `Item2`, and `Item3`: ++```json +{ "EnumProperty": 1 } +``` ++The following snippet from a device model shows the definition of an `Object` property type. This object has two fields with types `string` and `integer`: ++```json +{ + "@type": "Property", + "displayName": { + "en": "ObjectProperty" + }, + "name": "ObjectProperty", + "writable": false, + "schema": { + "@type": "Object", + "displayName": { + "en": "Object" + }, + "fields": [ + { + "displayName": { + "en": "Field1" + }, + "name": "Field1", + "schema": "integer" + }, + { + "displayName": { + "en": "Field2" + }, + "name": "Field2", + "schema": "string" + } + ] + } +} +``` ++A device client should send a JSON payload that looks like the following example as a reported property in the device twin: ++```json +{ + "ObjectProperty": { + "Field1": 37, + "Field2": "A string value" + } +} +``` ++The following snippet from a device model shows the definition of a `vector` property type: ++```json +{ + "@type": "Property", + "displayName": { + "en": "VectorProperty" + }, + "name": "VectorProperty", + "schema": "vector", + "writable": false +} +``` ++A device client should send a JSON payload that looks like the 
following example as a reported property in the device twin: ++```json +{ + "VectorProperty": { + "x": 74.72395045538597, + "y": 74.72395045538597, + "z": 74.72395045538597 + } +} +``` ++The following snippet from a device model shows the definition of a `geopoint` property type: ++```json +{ + "@type": "Property", + "displayName": { + "en": "GeopointProperty" + }, + "name": "GeopointProperty", + "schema": "geopoint", + "writable": false +} +``` ++> [!NOTE] +> The **geopoint** schema type is part of the [IoT Central extension](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/DTDL.iotcentral.v2.md) to DTDL. IoT Central currently supports the **geopoint** schema type and the **location** semantic type for backwards compatibility. ++A device client should send a JSON payload that looks like the following example as a reported property in the device twin: ++```json +{ + "GeopointProperty": { + "lat": 47.64263, + "lon": -122.13035, + "alt": 0 + } +} +``` ++### Writable property types ++This section shows examples of writable property types that a device receives from a service. ++If the writable property is defined in a component, the desired property message includes the component name. The following example shows the message requesting the device to update the `targetTemperature` in the `thermostat2` component. The marker `__t` indicates that this section defines a component: ++```json +{ + "thermostat2": { + "targetTemperature": { + "value": 57 + }, + "__t": "c" + }, + "$version": 3 +} +``` ++To learn more, see [Connect an IoT Plug and Play multiple component device applications](tutorial-multiple-components.md). ++The device or module should confirm that it received the property by sending a reported property. The reported property should include: ++* `value` - the actual value of the property (typically the received value, but the device may decide to report a different value). +* `ac` - an acknowledgment code that uses an HTTP status code. 
+* `av` - an acknowledgment version that refers to the `$version` of the desired property. You can find this value in the desired property JSON payload. +* `ad` - an optional acknowledgment description. ++To learn more about these fields, see [IoT Plug and Play conventions > Acknowledgment responses](concepts-convention.md#acknowledgment-responses) ++The following snippet from a device model shows the definition of a writable `string` property type: ++```json +{ + "@type": "Property", + "displayName": { + "en": "StringPropertyWritable" + }, + "name": "StringPropertyWritable", + "writable": true, + "schema": "string" +} +``` ++The device receives the following payload from the service: ++```json +{ + "StringPropertyWritable": "A string from IoT Central", "$version": 7 +} +``` ++The device should send the following JSON payload to the service after it processes the update. This message includes the version number of the original update received from the service. ++> [!TIP] +> If the service is IoT Central, it marks the property as **synced** in the UI when it receives this message: ++```json +{ + "StringPropertyWritable": { + "value": "A string from IoT Central", + "ac": 200, + "ad": "completed", + "av": 7 + } +} +``` ++The following snippet from a device model shows the definition of a writable `Enum` property type: ++```json +{ + "@type": "Property", + "displayName": { + "en": "EnumPropertyWritable" + }, + "name": "EnumPropertyWritable", + "writable": true, + "schema": { + "@type": "Enum", + "displayName": { + "en": "Enum" + }, + "valueSchema": "integer", + "enumValues": [ + { + "displayName": { + "en": "Item1" + }, + "enumValue": 0, + "name": "Item1" + }, + { + "displayName": { + "en": "Item2" + }, + "enumValue": 1, + "name": "Item2" + }, + { + "displayName": { + "en": "Item3" + }, + "enumValue": 2, + "name": "Item3" + } + ] + } +} +``` ++The device receives the following payload from the service: ++```json +{ + "EnumPropertyWritable": 1 , "$version": 10 +} +``` 
++The device should send the following JSON payload to the service after it processes the update. This message includes the version number of the original update received from the service. ++> [!TIP] +> If the service is IoT Central, it marks the property as **synced** in the UI when it receives this message: ++```json +{ + "EnumPropertyWritable": { + "value": 1, + "ac": 200, + "ad": "completed", + "av": 10 + } +} +``` ++## Commands ++To learn more about the DTDL command naming rules, see [DTDL > Command](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/DTDL.v2.md#command). You can't start a command name using the `_` character. ++If the command is defined in a component, the name of the command the device receives includes the component name. For example, if the command is called `getMaxMinReport` and the component is called `thermostat2`, the device receives a request to execute a command called `thermostat2*getMaxMinReport`. ++The following snippet from a device model shows the definition of a command that has no parameters and that doesn't expect the device to return anything: ++```json +{ + "@type": "Command", + "displayName": { + "en": "CommandBasic" + }, + "name": "CommandBasic" +} +``` ++The device receives an empty payload in the request and should return an empty payload in the response with a `200` HTTP response code to indicate success. 
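The `componentName*commandName` convention described above means a device-side handler first has to split the incoming command name before dispatching it. A small illustrative sketch; the helper is hypothetical and not part of the device SDKs:

```python
def parse_command_name(raw):
    """Split a command name received from IoT Hub into (component, command).

    Component commands arrive as 'componentName*commandName'; commands
    defined on the root interface have no '*' separator.
    """
    component, sep, command = raw.partition("*")
    if not sep:
        return None, raw  # root-interface command
    return component, command

print(parse_command_name("thermostat2*getMaxMinReport"))  # ('thermostat2', 'getMaxMinReport')
print(parse_command_name("reboot"))                       # (None, 'reboot')
```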
++The following snippet from a device model shows the definition of a command that has an integer parameter and that expects the device to return an integer value: ++```json +{ + "@type": "Command", + "request": { + "@type": "CommandPayload", + "displayName": { + "en": "RequestParam" + }, + "name": "RequestParam", + "schema": "integer" + }, + "response": { + "@type": "CommandPayload", + "displayName": { + "en": "ResponseParam" + }, + "name": "ResponseParam", + "schema": "integer" + }, + "displayName": { + "en": "CommandSimple" + }, + "name": "CommandSimple" +} +``` ++The device receives an integer value as the request payload. The device should return an integer value as the response payload with a `200` HTTP response code to indicate success. ++The following snippet from a device model shows the definition of a command that has an object parameter and that expects the device to return an object. In this example, both objects have integer and string fields: ++```json +{ + "@type": "Command", + "request": { + "@type": "CommandPayload", + "displayName": { + "en": "RequestParam" + }, + "name": "RequestParam", + "schema": { + "@type": "Object", + "displayName": { + "en": "Object" + }, + "fields": [ + { + "displayName": { + "en": "Field1" + }, + "name": "Field1", + "schema": "integer" + }, + { + "displayName": { + "en": "Field2" + }, + "name": "Field2", + "schema": "string" + } + ] + } + }, + "response": { + "@type": "CommandPayload", + "displayName": { + "en": "ResponseParam" + }, + "name": "ResponseParam", + "schema": { + "@type": "Object", + "displayName": { + "en": "Object" + }, + "fields": [ + { + "displayName": { + "en": "Field1" + }, + "name": "Field1", + "schema": "integer" + }, + { + "displayName": { + "en": "Field2" + }, + "name": "Field2", + "schema": "string" + } + ] + } + }, + "displayName": { + "en": "CommandComplex" + }, + "name": "CommandComplex" +} +``` ++The following snippet shows an example request payload sent to the device: ++```json +{ "Field1": 
56, "Field2": "A string value" } +``` ++The following snippet shows an example response payload sent from the device. Use a `200` HTTP response code to indicate success: ++```json +{ "Field1": 87, "Field2": "Another string value" } +``` ++> [!TIP] +> IoT Central has its own conventions for implementing [Long-running commands](../iot-central/core/howto-use-commands.md#long-running-commands) and [Offline commands](../iot-central/core/howto-use-commands.md#offline-commands). ++## Next steps ++Now that you've learned about device payloads, a suggested next step is to read the [Device developer guide](concepts-developer-guide-device.md). |
iot | Concepts Model Discovery | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot/concepts-model-discovery.md | + + Title: Use IoT Plug and Play models in a solution | Microsoft Docs +description: As a solution builder, learn about how you can use IoT Plug and Play models in your IoT solution. ++ Last updated : 1/23/2024+++++# Use IoT Plug and Play models in an IoT solution ++This article describes how, in an IoT solution, you can identify the model ID of an IoT Plug and Play device and then retrieve its model definition. ++There are two broad categories of IoT solution: ++- A *purpose-built solution* works with a known set of models for the IoT Plug and Play devices that connect to the solution. You use these models when you develop the solution. ++- A *model-driven solution* works with the model of any IoT Plug and Play device. Building a model-driven solution is more complex, but the benefit is that your solution works with any devices that are added in the future. A model-driven IoT solution retrieves a model and uses it to determine the telemetry, properties, and commands the device implements. ++To use an IoT Plug and Play model, an IoT solution: ++1. Identifies the model ID of the model implemented by the IoT Plug and Play device, module, or IoT Edge module connected to the solution. ++1. Uses the model ID to retrieve the model definition of the connected device from a model repository or custom store. ++## Identify model ID ++When an IoT Plug and Play device connects to IoT Hub, it registers the model ID of the model it implements with IoT Hub. ++IoT Hub notifies the solution with the device model ID as part of the device connection flow. 
++A solution can get the model ID of the IoT Plug and Play device by using one of the following three methods: ++### Get Device Twin API ++The solution can use the [Get Device Twin](/java/api/com.microsoft.azure.sdk.iot.device.deviceclient.getdevicetwin) API to retrieve the model ID of the IoT Plug and Play device. ++> [!TIP] +> For modules and IoT Edge modules, use [ModuleClient.getTwin](/java/api/com.microsoft.azure.sdk.iot.device.moduleclient.gettwin). ++In the following device twin response snippet, `modelId` contains the model ID of an IoT Plug and Play device: ++```json +{ + "deviceId": "sample-device", + "etag": "AAAAAAAAAAc=", + "deviceEtag": "NTk0ODUyODgx", + "status": "enabled", + "statusUpdateTime": "0001-01-01T00:00:00Z", + "connectionState": "Disconnected", + "lastActivityTime": "2020-07-17T06:12:26.8402249Z", + "cloudToDeviceMessageCount": 0, + "authenticationType": "sas", + "x509Thumbprint": { + "primaryThumbprint": null, + "secondaryThumbprint": null + }, + "modelId": "dtmi:com:example:TemperatureController;1", + "version": 15, + "properties": {...} +} +``` ++### Get Digital Twin API ++The solution can use the [Get Digital Twin](/rest/api/iothub/service/digitaltwin/getdigitaltwin) API to retrieve the model ID of the model implemented by the IoT Plug and Play device. ++In the following digital twin response snippet, `$metadata.$model` contains the model ID of an IoT Plug and Play device: ++```json +{ + "$dtId": "sample-device", + "$metadata": { + "$model": "dtmi:com:example:TemperatureController;1", + "serialNumber": { + "lastUpdateTime": "2020-07-17T06:10:31.9609233Z" + } + } +} +``` ++### Digital twin change event notification ++A device connection results in a [Digital Twin change event](concepts-digital-twin.md#digital-twin-change-events) notification. A solution needs to subscribe to this event notification. 
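Whichever retrieval method the solution uses, extracting the model ID from the returned JSON is a few lines of code. A minimal sketch against the response shapes shown above; the helper names are illustrative, not part of any SDK:

```python
def model_id_from_device_twin(twin):
    """Device twin responses carry the model ID in the top-level 'modelId' field."""
    return twin.get("modelId")

def model_id_from_digital_twin(digital_twin):
    """Digital twin responses carry the model ID under '$metadata'/'$model'."""
    return digital_twin.get("$metadata", {}).get("$model")

twin = {"deviceId": "sample-device", "modelId": "dtmi:com:example:TemperatureController;1"}
print(model_id_from_device_twin(twin))  # dtmi:com:example:TemperatureController;1
```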
To learn how to enable routing for digital twin events, see [Use IoT Hub message routing to send device-to-cloud messages to different endpoints](../iot-hub/iot-hub-devguide-messages-d2c.md#non-telemetry-events). ++The solution can use the event shown in the following snippet to learn about the IoT Plug and Play device that's connecting and get its model ID: ++```json +iothub-connection-device-id:sample-device +iothub-enqueuedtime:7/22/2020 8:02:27 PM +iothub-message-source:digitalTwinChangeEvents +correlation-id:100f322dc2c5 +content-type:application/json-patch+json +content-encoding:utf-8 +[ + { + "op": "replace", + "path": "/$metadata/$model", + "value": "dtmi:com:example:TemperatureController;1" + } +] +``` ++## Retrieve a model definition ++A solution uses the model ID identified earlier to retrieve the corresponding model definition. ++A solution can get the model definition by using one of the following options: ++### Model repository ++Solutions can use the [model repository](concepts-model-repository.md) to retrieve models. Either the device builders or the solution builders must upload their models to the repository beforehand so the solution can retrieve them. ++After you identify the model ID for a new device connection, follow these steps: ++1. Retrieve the model definition using the model ID from the model repository. For more information, see [Device model repository](concepts-model-repository.md). ++1. Using the model definition of the connected device, you can enumerate the capabilities of the device. ++1. Using the enumerated capabilities of the device, you can enable users to [interact with the device](tutorial-service.md). ++### Custom store ++Solutions can store these model definitions in a local file system, in a public file store, or use a custom implementation. ++After you identify the model ID for a new device connection, follow these steps: ++1. Retrieve the model definition using the model ID from your custom store. ++1. 
Using the model definition of the connected device, you can enumerate the capabilities of the device. ++1. Using the enumerated capabilities of the device, you can enable users to [interact with the device](tutorial-service.md). ++## Next steps ++Now that you've learned how to integrate IoT Plug and Play models in an IoT solution, some suggested next steps are: ++- [Interact with a device from your solution](tutorial-service.md) +- [IoT Digital Twin REST API](/rest/api/iothub/service/digitaltwin) +- [Azure IoT explorer](../iot/howto-use-iot-explorer.md) |
iot | Concepts Model Parser | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot/concepts-model-parser.md | + + Title: Understand the Azure Digital Twins model parser | Microsoft Docs +description: As a developer, learn how to use the DTDL parser to validate models. ++ Last updated : 1/23/2024++++++# Understand the digital twins model parser ++The Digital Twins Definition Language (DTDL) is described in the [DTDL Specification](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/README.md). Users can use the _Digital Twins Model Parser_ NuGet package to validate and query a DTDL model. The DTDL model may be defined in multiple files. ++## Install the DTDL model parser ++The parser is available in NuGet.org with the ID: [DTDLParser](https://www.nuget.org/packages/DTDLParser). To install the parser, use any compatible NuGet package manager such as the one in Visual Studio or in the `dotnet` CLI. ++```bash +dotnet add package DTDLParser +``` ++> [!NOTE] +> At the time of writing, the parser version is `1.0.52`. ++## Use the parser to validate and inspect a model ++The DTDLParser is a library that you can use to: ++- Determine whether one or more models are valid according to the language v2 or v3 specifications. +- Identify specific modeling errors. +- Inspect model contents. ++A model can be composed of one or more interfaces described in JSON files. You can use the parser to load all the files that define a model and then validate all the files as a whole, including any references between the files. ++The [DTDLParser for .NET](https://github.com/digitaltwinconsortium/DTDLParser) repository includes the following samples that illustrate the use of the parser: ++- [DTDLParserResolveSample](https://github.com/digitaltwinconsortium/DTDLParser/blob/main/samples/DTDLParserResolveSample) shows how to parse an interface with external references, resolve the dependencies using the `Azure.IoT.ModelsRepository` client. 
+- [DTDLParserJSInteropSample](https://github.com/digitaltwinconsortium/DTDLParser/blob/main/samples/DTDLParserJSInteropSample) shows how to use the DTDL Parser from JavaScript running in the browser, using .NET JSInterop. ++The DTDLParser for .NET repository also includes a [collection of tutorials](https://github.com/digitaltwinconsortium/DTDLParser/blob/main/tutorials/README.md) that show you how to use the parser to validate and inspect models. ++## Next steps ++The model parser API reviewed in this article enables many scenarios to automate or validate tasks that depend on DTDL models. For example, you could dynamically build a UI from the information in the model. |
iot | Concepts Model Repository | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot/concepts-model-repository.md | + + Title: Understand the IoT Plug and Play device models repository | Microsoft Docs +description: As a solution developer or an IT professional, learn about the basic concepts of the device models repository for IoT Plug and Play devices. ++ Last updated : 1/23/2024+++++# Device models repository ++The device models repository (DMR) enables device builders to manage and share IoT Plug and Play device models. The device models are JSON-LD documents defined using the [Digital Twins Modeling Language (DTDL)](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/README.md). ++The DMR defines a pattern to store DTDL interfaces in a folder structure based on the digital twin model identifier (DTMI). You can locate an interface in the DMR by converting the DTMI to a relative path. For example, the `dtmi:com:example:Thermostat;1` DTMI translates to `/dtmi/com/example/thermostat-1.json` and can be obtained from the public base URL `devicemodels.azure.com` at the URL [https://devicemodels.azure.com/dtmi/com/example/thermostat-1.json](https://devicemodels.azure.com/dtmi/com/example/thermostat-1.json). ++## Index, expanded and metadata ++The DMR conventions include other artifacts for simplifying consumption of hosted models. These features are _optional_ for custom or private repositories. ++- _Index_. All available DTMIs are exposed through an *index* composed of a sequence of JSON files, for example: [https://devicemodels.azure.com/index.page.2.json](https://devicemodels.azure.com/index.page.2.json) +- _Expanded_. A file with all the dependencies is available for each interface, for example: [https://devicemodels.azure.com/dtmi/com/example/temperaturecontroller-1.expanded.json](https://devicemodels.azure.com/dtmi/com/example/temperaturecontroller-1.expanded.json) +- _Metadata_. 
This file exposes key attributes of a repository and is refreshed periodically with the latest published models snapshot. It includes features that a repository implements such as whether the model index or expanded model files are available. You can access the DMR metadata at [https://devicemodels.azure.com/metadata.json](https://devicemodels.azure.com/metadata.json) ++## Public device models repository ++Microsoft hosts a public DMR with these characteristics: ++- Curated models. Microsoft reviews and approves all available interfaces using a GitHub pull request (PR) validation workflow. +- Immutability. After it's published, an interface can't be updated. +- Hyper-scale. Microsoft provides the required infrastructure to create a secure, scalable endpoint where you can publish and consume device models. ++## Custom device models repository ++Use the same DMR pattern to create a custom DMR in any storage medium, such as local file system or custom HTTP web server. You can retrieve device models from the custom DMR in the same way as from the public DMR by changing the base URL used to access the DMR. ++> [!NOTE] +> Microsoft provides tools to validate device models in the public DMR. You can reuse these tools in custom repositories. ++## Public models ++The public device models stored in the DMR are available for everyone to consume and integrate in their applications. Public device models enable an open eco-system for device builders and solution developers to share and reuse their IoT Plug and Play device models. ++See the [Publish a model](#publish-a-model) section to learn how to publish a model in the DMR and make it public. ++Users can browse, search, and view public interfaces from the official [GitHub repository](https://github.com/Azure/iot-plugandplay-models). 
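The DTMI-to-path convention described earlier (lowercase the DTMI, replace `:` with `/` and `;` with `-`, then append `.json`) is simple enough to sketch directly. This is an illustrative implementation of the convention only; the official `ModelsRepository` client libraries perform the same mapping:

```python
def dtmi_to_path(dtmi):
    """Convert a DTMI to its relative path under a device models repository."""
    if not dtmi.lower().startswith("dtmi:"):
        raise ValueError(f"not a DTMI: {dtmi}")
    return dtmi.lower().replace(":", "/").replace(";", "-") + ".json"

base = "https://devicemodels.azure.com"
print(f"{base}/{dtmi_to_path('dtmi:com:example:Thermostat;1')}")
# https://devicemodels.azure.com/dtmi/com/example/thermostat-1.json
```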
++All interfaces in the `dtmi` folders are also available from the public endpoint [https://devicemodels.azure.com](https://devicemodels.azure.com) ++### Resolve models ++To programmatically access these interfaces, you can use the `ModelsRepositoryClient` available in the NuGet package [Azure.IoT.ModelsRepository](https://www.nuget.org/packages/Azure.IoT.ModelsRepository). This client is configured by default to query the public DMR available at [devicemodels.azure.com](https://devicemodels.azure.com/) and can be configured to query any custom repository. ++The client accepts a `DTMI` as input and returns a dictionary with all required interfaces: ++```cs +using Azure.IoT.ModelsRepository; ++var client = new ModelsRepositoryClient(); +ModelResult models = client.GetModel("dtmi:com:example:TemperatureController;1"); +models.Content.Keys.ToList().ForEach(k => Console.WriteLine(k)); +``` ++The expected output displays the `DTMI` of the three interfaces found in the dependency chain: ++```txt +dtmi:com:example:TemperatureController;1 +dtmi:com:example:Thermostat;1 +dtmi:azure:DeviceManagement:DeviceInformation;1 +``` ++The `ModelsRepositoryClient` can be configured to query a custom DMR, available through HTTP or HTTPS, and to specify the dependency resolution by using the `ModelDependencyResolution` flag: ++- Disabled. Returns the specified interface only, without any dependencies. +- Enabled. Returns all the interfaces in the dependency chain. ++> [!TIP] +> Custom repositories might not expose the `.expanded.json` file. When this file isn't available, the client falls back to processing each dependency locally. ++The following sample code shows how to initialize the `ModelsRepositoryClient` by using a custom repository base URL, in this case using the `raw` URLs from the GitHub API without using the `expanded` form since it's not available in the `raw` endpoint. 
The `AzureEventSourceListener` is initialized to inspect the HTTP request performed by the client: ++```cs +using AzureEventSourceListener listener = AzureEventSourceListener.CreateConsoleLogger(); ++var client = new ModelsRepositoryClient( + new Uri("https://raw.githubusercontent.com/Azure/iot-plugandplay-models/main")); ++ModelResult model = await client.GetModelAsync( + "dtmi:com:example:TemperatureController;1", + dependencyResolution: ModelDependencyResolution.Enabled); ++model.Content.Keys.ToList().ForEach(k => Console.WriteLine(k)); +``` ++There are more samples available in the Azure SDK GitHub repository: [Azure.Iot.ModelsRepository/samples](https://github.com/Azure/azure-sdk-for-net/tree/master/sdk/modelsrepository/Azure.IoT.ModelsRepository/samples). ++## Publish a model ++> [!IMPORTANT] +> You must have a GitHub account to be able to submit models to the public DMR. ++1. Fork the public GitHub repository: [https://github.com/Azure/iot-plugandplay-models](https://github.com/Azure/iot-plugandplay-models). +1. Clone the forked repository. Optionally create a new branch to keep your changes isolated from the `main` branch. +1. Add the new interfaces to the `dtmi` folder using the folder/filename convention. To learn more, see [Import a Model to the `dtmi/` folder](#import-a-model-to-the-dtmi-folder). +1. Validate the models locally using the `dmr-client` tool. To learn more, see [validate models](#validate-models). +1. Commit the changes locally and push to your fork. +1. From your fork, create a pull request that targets the `main` branch. See [Creating an issue or pull request](https://docs.github.com/free-pro-team@latest/desktop/contributing-and-collaborating-using-github-desktop/creating-an-issue-or-pull-request) docs. +1. Review the [pull request requirements](https://github.com/Azure/iot-plugandplay-models/blob/main/pr-reqs.md). 
++The pull request triggers a set of GitHub Actions that validate the submitted interfaces, and makes sure your pull request satisfies all the requirements. ++Microsoft will respond to a pull request with all checks in three business days. ++### `dmr-client` tools ++The tools used to validate the models during the PR checks can also be used to add and validate the DTDL interfaces locally. ++> [!NOTE] +> This tool requires the [.NET SDK](https://dotnet.microsoft.com/download) version 3.1 or greater. ++### Install `dmr-client` ++```cmd/sh +dotnet tool install --global Microsoft.IoT.ModelsRepository.CommandLine --version 1.0.0-beta.6 +``` ++### Import a model to the `dtmi/` folder ++If you have your model already stored in json files, you can use the `dmr-client import` command to add them to the `dtmi/` folder with the correct file names: ++```cmd/sh +# from the local repo root folder +dmr-client import --model-file "MyThermostat.json" +``` ++> [!TIP] +> You can use the `--local-repo` argument to specify the local repository root folder. ++### Validate models ++You can validate your models with the `dmr-client validate` command: ++```cmd/sh +dmr-client validate --model-file ./my/model/file.json +``` ++> [!NOTE] +> The validation uses the latest DTDL parser version to ensure all the interfaces are compatible with the DTDL language specification. ++To validate external dependencies, they must exist in the local repository. To validate models, use the `--repo` option to specify a `local` or `remote` folder to resolve dependencies: ++```cmd/sh +# from the repo root folder +dmr-client validate --model-file ./my/model/file.json --repo . +``` ++### Strict validation ++The DMR includes extra [requirements](https://github.com/Azure/iot-plugandplay-models/blob/main/pr-reqs.md), use the `strict` flag to validate your model against them: ++```cmd/sh +dmr-client validate --model-file ./my/model/file.json --repo . 
--strict true +``` ++Check the console output for any error messages. ++### Export models ++Models can be exported from a given repository (local or remote) to a single file using a JSON Array: ++```cmd/sh +dmr-client export --dtmi "dtmi:com:example:TemperatureController;1" -o TemperatureController.expanded.json +``` ++### Create the repository `index` ++The DMR can include an *index* with a list of all the DTMIs available at the time of publishing. This file can be split into multiple files as described in the [DMR Tools Wiki](https://github.com/Azure/iot-plugandplay-models-tools/wiki/Model-Index). ++To generate the index in a custom or private DMR, use the index command: ++```cmd/sh +dmr-client index -r . -o index.json +``` ++> [!NOTE] +> The public DMR is configured to provide an updated index available at: https://devicemodels.azure.com/index.json ++### Create *expanded* files ++Expanded files can be generated using the command: ++```cmd/sh +dmr-client expand -r . +``` ++## Next steps ++The suggested next step is to review the [IoT Plug and Play architecture](concepts-architecture.md). |
iot | Concepts Modeling Guide | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot/concepts-modeling-guide.md | + + Title: Understand IoT Plug and Play device models | Microsoft Docs +description: Understand the Digital Twins Definition Language (DTDL) modeling language for IoT Plug and Play devices. The article describes primitive and complex datatypes, reuse patterns that use components and inheritance, and semantic types. The article provides guidance on the choice of device twin model identifier and tooling support for model authoring. ++ Last updated : 1/23/2024++++#Customer intent: As a device builder, I want to understand how to design and author a DTDL model for an IoT Plug and Play device. ++++# IoT Plug and Play modeling guide ++At the core of IoT Plug and Play is a device _model_ that describes a device's capabilities to an IoT Plug and Play-enabled application. This model is structured as a set of interfaces that define: ++- _Properties_ that represent the read-only or writable state of a device or other entity. For example, a device serial number may be a read-only property, and a target temperature on a thermostat may be a writable property. +- _Telemetry_ fields that define the data emitted by a device, whether the data is a regular stream of sensor readings, an occasional error, or an information message. +- _Commands_ that describe a function or operation that can be done on a device. For example, a command could reboot a gateway or take a picture using a remote camera. ++To learn more about how IoT Plug and Play uses device models, see [IoT Plug and Play device developer guide](concepts-developer-guide-device.md) and [IoT Plug and Play service developer guide](concepts-developer-guide-service.md). ++To define a model, you use the Digital Twins Definition Language (DTDL). DTDL uses a JSON variant called [JSON-LD](https://json-ld.org/). 
The following snippet shows the model for a thermostat device that: ++- Has a unique model ID: `dtmi:com:example:Thermostat;1`. +- Sends temperature telemetry. +- Has a writable property to set the target temperature. +- Has a read-only property to report the maximum temperature since the last reboot. +- Responds to a command that requests maximum, minimum and average temperatures over a time period. ++```json +{ + "@context": "dtmi:dtdl:context;2", + "@id": "dtmi:com:example:Thermostat;1", + "@type": "Interface", + "displayName": "Thermostat", + "description": "Reports current temperature and provides desired temperature control.", + "contents": [ + { + "@type": [ + "Telemetry", + "Temperature" + ], + "name": "temperature", + "displayName": "Temperature", + "description": "Temperature in degrees Celsius.", + "schema": "double", + "unit": "degreeCelsius" + }, + { + "@type": [ + "Property", + "Temperature" + ], + "name": "targetTemperature", + "schema": "double", + "displayName": "Target Temperature", + "description": "Allows to remotely specify the desired target temperature.", + "unit": "degreeCelsius", + "writable": true + }, + { + "@type": [ + "Property", + "Temperature" + ], + "name": "maxTempSinceLastReboot", + "schema": "double", + "unit": "degreeCelsius", + "displayName": "Max temperature since last reboot.", + "description": "Returns the max temperature since last device reboot." 
+ }, + { + "@type": "Command", + "name": "getMaxMinReport", + "displayName": "Get Max-Min report.", + "description": "This command returns the max, min and average temperature from the specified time to the current time.", + "request": { + "name": "since", + "displayName": "Since", + "description": "Period to return the max-min report.", + "schema": "dateTime" + }, + "response": { + "name": "tempReport", + "displayName": "Temperature Report", + "schema": { + "@type": "Object", + "fields": [ + { + "name": "maxTemp", + "displayName": "Max temperature", + "schema": "double" + }, + { + "name": "minTemp", + "displayName": "Min temperature", + "schema": "double" + }, + { + "name": "avgTemp", + "displayName": "Average Temperature", + "schema": "double" + }, + { + "name": "startTime", + "displayName": "Start Time", + "schema": "dateTime" + }, + { + "name": "endTime", + "displayName": "End Time", + "schema": "dateTime" + } + ] + } + } + } + ] +} +``` ++The thermostat model has a single interface. Later examples in this article show more complex models that use components and inheritance. ++This article describes how to design and author your own models and covers topics such as data types, model structure, and tools. ++To learn more, see the [Digital Twins Definition Language](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/README.md) specification. ++> [!NOTE] +> IoT Central currently supports [DTDL v2](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/DTDL.v2.md) with an [IoT Central extension](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/DTDL.iotcentral.v2.md). ++## Model structure ++Properties, telemetry, and commands are grouped into interfaces. This section describes how you can use interfaces to describe simple and complex models by using components and inheritance. ++### Model IDs ++Every interface has a unique digital twin model identifier (DTMI). Complex models use DTMIs to identify components. 
Applications can use the DTMIs that devices send to locate model definitions in a repository. ++DTMIs should follow the naming convention required by the [IoT Plug and Play model repository](https://github.com/Azure/iot-plugandplay-models): ++- The DTMI prefix is `dtmi:`. +- The DTMI suffix is the version number for the model, such as `;2`. +- The body of the DTMI maps to the folder and file in the model repository where the model is stored. The version number is part of the file name. ++For example, the model identified by the DTMI `dtmi:com:Example:Thermostat;2` is stored in the *dtmi/com/example/thermostat-2.json* file. ++The following snippet shows the outline of an interface definition with its unique DTMI: ++```json +{ + "@context": "dtmi:dtdl:context;2", + "@id": "dtmi:com:example:Thermostat;2", + "@type": "Interface", + "displayName": "Thermostat", + "description": "Reports current temperature and provides desired temperature control.", + "contents": [ + ... + ] +} +``` ++### No components ++A simple model, such as the thermostat shown previously, doesn't use embedded or cascaded components. Telemetry, properties, and commands are defined in the `contents` node of the interface. ++The following example shows part of a simple model that doesn't use components: ++```json +{ + "@context": "dtmi:dtdl:context;2", + "@id": "dtmi:com:example:Thermostat;1", + "@type": "Interface", + "displayName": "Thermostat", + "description": "Reports current temperature and provides desired temperature control.", + "contents": [ + { + "@type": [ + "Telemetry", + "Temperature" + ], + "name": "temperature", + "displayName": "Temperature", + "description": "Temperature in degrees Celsius.", + "schema": "double", + "unit": "degreeCelsius" + }, + { + "@type": [ + "Property", +... +``` ++Tools such as Azure IoT Explorer and the IoT Central device template designer label a standalone interface like the thermostat as a _default component_. 
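The folder and file mapping described in the Model IDs section above can be expressed as a short sketch. The following Python helper is illustrative only (it is not part of `dmr-client` or any Azure SDK); it assumes the DMR convention that folder and file names are lowercase and the version number joins the file name:

```python
def dtmi_to_path(dtmi: str) -> str:
    """Map a DTMI such as 'dtmi:com:Example:Thermostat;2' to its
    repository file path, per the DMR naming convention (sketch)."""
    if not dtmi.startswith("dtmi:") or ";" not in dtmi:
        raise ValueError(f"Invalid DTMI: {dtmi}")
    body, _, version = dtmi.rpartition(";")
    if not version.isdigit():
        raise ValueError(f"Invalid DTMI version: {dtmi}")
    # Folder names are lowercase; the version joins the file name.
    segments = body.lower().split(":")
    return "/".join(segments[:-1]) + "/" + segments[-1] + "-" + version + ".json"

print(dtmi_to_path("dtmi:com:Example:Thermostat;2"))
# dtmi/com/example/thermostat-2.json
```

This is the same mapping the example above shows: `dtmi:com:Example:Thermostat;2` resolves to *dtmi/com/example/thermostat-2.json*.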
++The following screenshot shows how the model displays in the Azure IoT Explorer tool: +++The following screenshot shows how the model displays as the default component in the IoT Central device template designer. Select **View identity** to see the DTMI of the model: +++The model ID is stored in a device twin property as the following screenshot shows: +++A DTDL model without components is a useful simplification for a device or an IoT Edge module with a single set of telemetry, properties, and commands. A model that doesn't use components makes it easy to migrate an existing device or module to be an IoT Plug and Play device or module - you create a DTDL model that describes your actual device or module without the need to define any components. ++> [!TIP] +> A module can be a device [module](../iot-hub/iot-hub-devguide-module-twins.md) or an [IoT Edge module](../iot-edge/about-iot-edge.md). ++### Reuse ++There are two ways to reuse interface definitions. ++- Use multiple components in a model to reference other interface definitions. +- Use inheritance to extend existing interface definitions. ++### Multiple components ++Components let you build a model interface as an assembly of other interfaces. ++For example, the [Thermostat](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/samples/Thermostat.json) interface is defined as a model. You can incorporate this interface as one or more components when you define the [Temperature Controller model](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/samples/TemperatureController.json). In the following example, these components are called `thermostat1` and `thermostat2`. ++For a DTDL model with multiple components, there are two or more component sections. 
Each section has `@type` set to `Component` and explicitly refers to a schema as shown in the following snippet: ++```json +{ + "@context": "dtmi:dtdl:context;2", + "@id": "dtmi:com:example:TemperatureController;1", + "@type": "Interface", + "displayName": "Temperature Controller", + "description": "Device with two thermostats and remote reboot.", + "contents": [ + { + "@type": [ + "Telemetry", + "DataSize" + ], + "name": "workingSet", + "displayName": "Working Set", + "description": "Current working set of the device memory in KiB.", + "schema": "double", + "unit": "kibibyte" + }, + { + "@type": "Property", + "name": "serialNumber", + "displayName": "Serial Number", + "description": "Serial number of the device.", + "schema": "string" + }, + { + "@type": "Command", + "name": "reboot", + "displayName": "Reboot", + "description": "Reboots the device after waiting the number of seconds specified.", + "request": { + "name": "delay", + "displayName": "Delay", + "description": "Number of seconds to wait before rebooting the device.", + "schema": "integer" + } + }, + { + "@type" : "Component", + "schema": "dtmi:com:example:Thermostat;1", + "name": "thermostat1", + "displayName": "Thermostat One", + "description": "Thermostat One of Two." + }, + { + "@type" : "Component", + "schema": "dtmi:com:example:Thermostat;1", + "name": "thermostat2", + "displayName": "Thermostat Two", + "description": "Thermostat Two of Two." + }, + { + "@type": "Component", + "schema": "dtmi:azure:DeviceManagement:DeviceInformation;1", + "name": "deviceInformation", + "displayName": "Device Information interface", + "description": "Optional interface with basic device hardware information." + } + ] +} +``` ++This model has three components defined in the contents section - two `Thermostat` components and a `DeviceInformation` component. The contents section also includes property, telemetry, and command definitions. ++The following screenshots show how this model appears in IoT Central. 
The property, telemetry, and command definitions in the temperature controller appear in the top-level **Default component**. The property, telemetry, and command definitions for each thermostat appear in the component definitions: ++++To learn how to write device code that interacts with components, see [IoT Plug and Play device developer guide](concepts-developer-guide-device.md). ++To learn how to write service code that interacts with components on a device, see [IoT Plug and Play service developer guide](concepts-developer-guide-service.md). ++### Inheritance ++Inheritance lets you reuse capabilities in a base interface to extend the capabilities of an interface. For example, several device models can share common capabilities such as a serial number: +++The following snippet shows a DTDL model that uses the `extends` keyword to define the inheritance relationship shown in the previous diagram: ++```json +[ + { + "@context": "dtmi:dtdl:context;2", + "@id": "dtmi:com:example:Thermostat;1", + "@type": "Interface", + "contents": [ + { + "@type": "Telemetry", + "name": "temperature", + "schema": "double", + "unit": "degreeCelsius" + }, + { + "@type": "Property", + "name": "targetTemperature", + "schema": "double", + "unit": "degreeCelsius", + "writable": true + } + ], + "extends": [ + "dtmi:com:example:baseDevice;1" + ] + }, + { + "@context": "dtmi:dtdl:context;2", + "@id": "dtmi:com:example:baseDevice;1", + "@type": "Interface", + "contents": [ + { + "@type": "Property", + "name": "SerialNumber", + "schema": "double", + "writable": false + } + ] + } +] +``` ++The following screenshot shows this model in the IoT Central device template environment: +++When you write device or service-side code, your code doesn't need to do anything special to handle inherited interfaces. In the example shown in this section, your device code reports the serial number as if it's part of the thermostat interface. 
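To see how inherited capabilities surface on the extending interface, the following Python sketch flattens an `extends` chain by merging the `contents` of each base interface. This is an illustration only, not the official digital twins model parser:

```python
def resolve_contents(model_id, models):
    """Return the combined contents of an interface plus everything it
    extends. `models` maps DTMI -> interface definition (parsed JSON)."""
    interface = models[model_id]
    contents = list(interface.get("contents", []))
    for base_id in interface.get("extends", []):
        contents.extend(resolve_contents(base_id, models))
    return contents

# The two interfaces from the inheritance example, abbreviated.
models = {
    "dtmi:com:example:baseDevice;1": {
        "contents": [{"@type": "Property", "name": "SerialNumber", "schema": "double"}]
    },
    "dtmi:com:example:Thermostat;1": {
        "extends": ["dtmi:com:example:baseDevice;1"],
        "contents": [{"@type": "Telemetry", "name": "temperature", "schema": "double"}]
    },
}

names = [c["name"] for c in resolve_contents("dtmi:com:example:Thermostat;1", models)]
print(names)  # ['temperature', 'SerialNumber']
```

The flattened list is why the device can report `SerialNumber` as if it were declared directly on the thermostat interface.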
++### Tips ++You can combine components and inheritance when you create a model. The following diagram shows a `thermostat` model inheriting from a `baseDevice` interface. The `baseDevice` interface has a component that itself inherits from another interface: +++The following snippet shows a DTDL model that uses the `extends` and `component` keywords to define the inheritance relationship and component usage shown in the previous diagram: ++```json +[ + { + "@context": "dtmi:dtdl:context;2", + "@id": "dtmi:com:example:Thermostat;1", + "@type": "Interface", + "contents": [ + { + "@type": "Telemetry", + "name": "temperature", + "schema": "double", + "unit": "degreeCelsius" + }, + { + "@type": "Property", + "name": "targetTemperature", + "schema": "double", + "unit": "degreeCelsius", + "writable": true + } + ], + "extends": [ + "dtmi:com:example:baseDevice;1" + ] + }, + { + "@context": "dtmi:dtdl:context;2", + "@id": "dtmi:com:example:baseDevice;1", + "@type": "Interface", + "contents": [ + { + "@type": "Property", + "name": "SerialNumber", + "schema": "double", + "writable": false + }, + { + "@type" : "Component", + "schema": "dtmi:com:example:baseComponent;1", + "name": "baseComponent" + } + ] + } +] +``` ++## Data types ++Use data types to define telemetry, properties, and command parameters. Data types can be primitive or complex. Complex data types use primitives or other complex types. The maximum depth for complex types is five levels. 
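The five-level depth limit can be checked mechanically. The following Python sketch is a hypothetical validator (not part of any SDK) that approximates nesting depth by counting each complex wrapper plus the primitive it ultimately reaches:

```python
def schema_depth(schema) -> int:
    """Approximate nesting depth of a DTDL schema: a primitive counts
    as 1, and each Array/Map/Object wrapper adds one level (sketch)."""
    if isinstance(schema, str):  # primitive such as "double"
        return 1
    t = schema.get("@type")
    if t == "Array":
        return 1 + schema_depth(schema["elementSchema"])
    if t == "Map":
        return 1 + schema_depth(schema["mapValue"]["schema"])
    if t == "Object":
        return 1 + max(schema_depth(f["schema"]) for f in schema["fields"])
    return 1  # Enum and other schemas terminate here

# An array of objects whose fields are primitives: depth 3.
array_of_objects = {
    "@type": "Array",
    "elementSchema": {
        "@type": "Object",
        "fields": [{"name": "reading", "schema": "double"}],
    },
}
print(schema_depth(array_of_objects))  # 3
```

A model author could run such a check before publishing to catch schemas that exceed the limit.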
++### Primitive types ++The following table shows the set of primitive types you can use: ++| Primitive type | Description | +| | | +| `boolean` | A boolean value | +| `date` | A full-date as defined in [section 5.6 of RFC 3339](https://tools.ietf.org/html/rfc3339#section-5.6) | +| `dateTime` | A date-time as defined in [RFC 3339](https://tools.ietf.org/html/rfc3339) | +| `double` | An IEEE 8-byte floating point | +| `duration` | A duration in ISO 8601 format | +| `float` | An IEEE 4-byte floating point | +| `integer` | A signed 4-byte integer | +| `long` | A signed 8-byte integer | +| `string` | A UTF8 string | +| `time` | A full-time as defined in [section 5.6 of RFC 3339](https://tools.ietf.org/html/rfc3339#section-5.6) | ++The following snippet shows an example telemetry definition that uses the `double` type in the `schema` field: ++```json +{ + "@type": "Telemetry", + "name": "temperature", + "displayName": "Temperature", + "schema": "double" +} +``` ++### Complex data types ++Complex data types are one of *array*, *enumeration*, *map*, *object*, or one of the geospatial types. ++#### Arrays ++An array is an indexable data type where all elements are the same type. The element type can be a primitive or complex type. ++The following snippet shows an example telemetry definition that uses the `Array` type in the `schema` field. The elements of the array are booleans: ++```json +{ + "@type": "Telemetry", + "name": "ledState", + "schema": { + "@type": "Array", + "elementSchema": "boolean" + } +} +``` ++#### Enumerations ++An enumeration describes a type with a set of named labels that map to values. The values can be either integers or strings, but the labels are always strings. ++The following snippet shows an example telemetry definition that uses the `Enum` type in the `schema` field. 
The values in the enumeration are integers: ++```json +{ + "@type": "Telemetry", + "name": "state", + "schema": { + "@type": "Enum", + "valueSchema": "integer", + "enumValues": [ + { + "name": "offline", + "displayName": "Offline", + "enumValue": 1 + }, + { + "name": "online", + "displayName": "Online", + "enumValue": 2 + } + ] + } +} +``` ++#### Maps ++A map is a type with key-value pairs where the values all have the same type. The key in a map must be a string. The values in a map can be any type, including another complex type. ++The following snippet shows an example property definition that uses the `Map` type in the `schema` field. The values in the map are strings: ++```json +{ + "@type": "Property", + "name": "modules", + "writable": true, + "schema": { + "@type": "Map", + "mapKey": { + "name": "moduleName", + "schema": "string" + }, + "mapValue": { + "name": "moduleState", + "schema": "string" + } + } +} +``` ++#### Objects ++An object type is made up of named fields. The types of the fields in an object can be primitive or complex. ++The following snippet shows an example telemetry definition that uses the `Object` type in the `schema` field. The fields in the object are `dateTime`, `duration`, and `string` types: ++```json +{ + "@type": "Telemetry", + "name": "monitor", + "schema": { + "@type": "Object", + "fields": [ + { + "name": "start", + "schema": "dateTime" + }, + { + "name": "interval", + "schema": "duration" + }, + { + "name": "status", + "schema": "string" + } + ] + } +} +``` ++#### Geospatial types ++DTDL provides a set of geospatial types, based on [GeoJSON](https://geojson.org/), for modeling geographic data structures: `point`, `multiPoint`, `lineString`, `multiLineString`, `polygon`, and `multiPolygon`. These types are predefined nested structures of arrays, objects, and enumerations. 
++The following snippet shows an example telemetry definition that uses the `point` type in the `schema` field: ++```json +{ + "@type": "Telemetry", + "name": "location", + "schema": "point" +} +``` ++Because the geospatial types are array-based, they can't currently be used in property definitions. ++## Semantic types ++The data type of a property or telemetry definition specifies the format of the data that a device exchanges with a service. The semantic type provides information about telemetry and properties that an application can use to determine how to process or display a value. Each semantic type has one or more associated units. For example, celsius and fahrenheit are units for the temperature semantic type. IoT Central dashboards and analytics can use the semantic type information to determine how to plot telemetry or property values and display units. To learn how you can use the model parser to read the semantic types, see [Understand the digital twins model parser](concepts-model-parser.md). ++The following snippet shows an example telemetry definition that includes semantic type information. The semantic type `Temperature` is added to the `@type` array, and the `unit` value, `degreeCelsius` is one of the valid units for the semantic type: ++```json +{ + "@type": [ + "Telemetry", + "Temperature" + ], + "name": "temperature", + "schema": "double", + "unit": "degreeCelsius" +} +``` ++## Localization ++Applications, such as IoT Central, use information in the model to dynamically build a UI around the data that's exchanged with an IoT Plug and Play device. For example, tiles on a dashboard can display names and descriptions for telemetry, properties, and commands. ++The optional `description` and `displayName` fields in the model hold strings intended for use in a UI. These fields can hold localized strings that an application can use to render a localized UI. 
++The following snippet shows an example temperature telemetry definition that includes localized strings: ++```json +{ + "@type": [ + "Telemetry", + "Temperature" + ], + "description": { + "en": "Temperature in degrees Celsius.", + "it": "Temperatura in gradi Celsius." + }, + "displayName": { + "en": "Temperature", + "it": "Temperatura" + }, + "name": "temperature", + "schema": "double", + "unit": "degreeCelsius" +} +``` ++Adding localized strings is optional. The following example has only a single, default language: ++```json +{ + "@type": [ + "Telemetry", + "Temperature" + ], + "description": "Temperature in degrees Celsius.", + "displayName": "Temperature", + "name": "temperature", + "schema": "double", + "unit": "degreeCelsius" +} +``` ++## Lifecycle and tools ++The four lifecycle stages for a device model are *author*, *publish*, *use*, and *version*: ++### Author ++DTDL device models are JSON documents that you can create in a text editor. However, in IoT Central you can use the device template GUI environment to create a DTDL model. In IoT Central you can: ++- Create interfaces that define properties, telemetry, and commands. +- Use components to assemble multiple interfaces together. +- Define inheritance relationships between interfaces. +- Import and export DTDL model files. ++To learn more, see [Define a new IoT device type in your Azure IoT Central application](../iot-central/core/howto-set-up-template.md). ++There's a DTDL authoring extension for VS Code. ++To install the DTDL extension for VS Code, go to [DTDL editor for Visual Studio Code](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.vscode-dtdl). You can also search for **DTDL** in the **Extensions** view in VS Code. 
++When you've installed the extension, use it to help you author DTDL model files in VS Code: ++- The extension provides syntax validation in DTDL model files, highlighting errors as shown in the following screenshot: ++ :::image type="content" source="media/concepts-modeling-guide/model-validation.png" alt-text="Screenshot that shows DTDL model validation in VS Code."::: ++- Use intellisense and autocomplete when you're editing DTDL models: ++ :::image type="content" source="media/concepts-modeling-guide/model-intellisense.png" alt-text="Screenshot that shows intellisense for DTDL models in VS Code."::: ++- Create a new DTDL interface. The **DTDL: Create Interface** command creates a JSON file with a new interface. The interface includes example telemetry, property, and command definitions. ++### Publish ++To make your DTDL models shareable and discoverable, you publish them in a device models repository. ++Before you publish a model in the public repository, you can use the `dmr-client` tools to validate your model. ++To learn more, see [Device models repository](concepts-model-repository.md). ++### Use ++Applications, such as IoT Central, use device models. In IoT Central, a model is part of the device template that describes the capabilities of the device. IoT Central uses the device template to dynamically build a UI for the device, including dashboards and analytics. ++> [!NOTE] +> IoT Central defines some extensions to the DTDL language. To learn more, see [IoT Central extension](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/DTDL.iotcentral.v2.md). ++A custom solution can use the [digital twins model parser](concepts-model-parser.md) to understand the capabilities of a device that implements the model. To learn more, see [Use IoT Plug and Play models in an IoT solution](concepts-model-discovery.md). ++### Version ++To ensure devices and server-side solutions that use models continue to work, published models are immutable. 
++The DTMI includes a version number that you can use to create multiple versions of a model. Devices and server-side solutions can use the specific version they were designed to use. ++IoT Central implements more versioning rules for device models. If you version a device template and its model in IoT Central, you can migrate devices from previous versions to later versions. However, migrated devices can't use new capabilities without a firmware upgrade. To learn more, see [Edit a device template](../iot-central/core/howto-edit-device-template.md). ++## Limits and constraints ++The following list summarizes some key constraints and limits on models: ++- Currently, the maximum depth for arrays, maps, and objects is five levels. +- You can't use arrays in property definitions. +- You can extend interfaces to a depth of 10 levels. +- An interface can extend at most two other interfaces. +- A component can't contain another component. ++## Next steps ++Now that you've learned about device modeling, here are some more resources: ++- [Digital Twins Definition Language (DTDL)](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/README.md) +- [Model repositories](./concepts-model-repository.md) |
iot | Howto Convert To Pnp | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot/howto-convert-to-pnp.md | + + Title: Convert an existing device to use IoT Plug and Play | Microsoft Docs +description: This article describes how to convert your existing device code to work with IoT Plug and Play by creating a device model and then sending the model ID when the device connects. ++ Last updated : 1/23/2024+++++# How to convert an existing device to be an IoT Plug and Play device ++This article outlines the steps you should follow to convert an existing device to an IoT Plug and Play device. It describes how to create the model that every IoT Plug and Play device requires, and the necessary code changes to enable the device to function as an IoT Plug and Play device. ++For the code samples, this article shows C code that uses an MQTT library to connect to an IoT hub. You can apply the changes described in this article to devices implemented with other languages and SDKs. ++To convert your existing device to be an IoT Plug and Play device: ++1. Review your device code to understand the telemetry, properties, and commands it implements. +1. Create a model that describes the telemetry, properties, and commands your device implements. +1. Modify the device code to announce the model ID when it connects to your service. ++## Review your device code ++Before you create a model for your device, you need to understand the existing capabilities of your device: ++- The telemetry the device sends on a regular basis. +- The read-only and writable properties the device synchronizes with your service. +- The commands invoked from the service that the device responds to. ++For example, review the following device code snippets that implement various device capabilities. ++The following snippet shows the device sending temperature telemetry: ++```c +#define TOPIC "devices/" DEVICEID "/messages/events/" ++// ... 
++void Thermostat_SendCurrentTemperature() +{ + char msg[] = "{\"temperature\":25.6}"; + int msgId = rand(); + int rc = mosquitto_publish(mosq, &msgId, TOPIC, sizeof(msg) - 1, msg, 1, true); + if (rc != MOSQ_ERR_SUCCESS) + { + printf("Error: %s\r\n", mosquitto_strerror(rc)); + } +} +``` ++The name of the telemetry field is `temperature` and its type is float or double. The model definition for this telemetry type looks like the following JSON. To learn more, see [Design a model](#design-a-model) below: ++```json +{ + "@type": [ + "Telemetry" + ], + "name": "temperature", + "displayName": "Temperature", + "description": "Temperature in degrees Celsius.", + "schema": "double" +} +``` ++The following snippet shows the device reporting a property value: ++```c +#define DEVICETWIN_MESSAGE_PATCH "$iothub/twin/PATCH/properties/reported/?$rid=patch_temp" ++static void SendMaxTemperatureSinceReboot() +{ + char msg[] = "{\"maxTempSinceLastReboot\": 42.500}"; + int msgId = rand(); + int rc = mosquitto_publish(mosq, &msgId, DEVICETWIN_MESSAGE_PATCH, sizeof(msg) - 1, msg, 1, true); + if (rc != MOSQ_ERR_SUCCESS) + { + printf("Error: %s\r\n", mosquitto_strerror(rc)); + } +} +``` ++The name of the property is `maxTempSinceLastReboot` and its type is float or double. This property is reported by the device; the device never receives an update for this value from the service. The model definition for this property looks like the following JSON. To learn more, see [Design a model](#design-a-model) below: ++```json +{ + "@type": [ + "Property" + ], + "name": "maxTempSinceLastReboot", + "schema": "double", + "displayName": "Max temperature since last reboot.", + "description": "Returns the max temperature since last device reboot." 
+} +``` ++The following snippet shows the device responding to messages from the service: ++```c +void message_callback(struct mosquitto* mosq, void* obj, const struct mosquitto_message* message) +{ + printf("Message received: %s payload: %s \r\n", message->topic, (char*)message->payload); + + if (strncmp(message->topic, "$iothub/methods/POST/getMaxMinReport/?$rid=1",37) == 0) + { + char* pch; + char* context; + int msgId = 0; + pch = strtok_s((char*)message->topic, "=",&context); + while (pch != NULL) + { + pch = strtok_s(NULL, "=", &context); + if (pch != NULL) { + char * pEnd; + msgId = strtol(pch,&pEnd,16 ); + } + } + char topic[64]; + sprintf_s(topic, "$iothub/methods/res/200/?$rid=%d", msgId); + char msg[] = "{\"maxTemp\":83.51,\"minTemp\":77.68}"; + int rc = mosquitto_publish(mosq, &msgId, topic, sizeof(msg) - 1, msg, 1, true); + if (rc != MOSQ_ERR_SUCCESS) + { + printf("Error: %s\r\n", mosquitto_strerror(rc)); + } + } ++ if (strncmp(message->topic, "$iothub/twin/PATCH/properties/desired/?$version=1", 38) == 0) + { + char* pch; + char* context; + int version = 0; + pch = strtok_s((char*)message->topic, "=", &context); + while (pch != NULL) + { + pch = strtok_s(NULL, "=", &context); + if (pch != NULL) { + char* pEnd; + version = strtol(pch, &pEnd, 10); + } + } + // To do: Parse payload and extract target value + char msg[128]; + int value = 46; + sprintf_s(msg, "{\"targetTemperature\":{\"value\":%d,\"ac\":200,\"av\":%d,\"ad\":\"success\"}}", value, version); + int rc = mosquitto_publish(mosq, &version, DEVICETWIN_MESSAGE_PATCH, strlen(msg), msg, 1, true); + if (rc != MOSQ_ERR_SUCCESS) + { + printf("Error: %s\r\n", mosquitto_strerror(rc)); + } + } +} +``` ++The `$iothub/methods/POST/getMaxMinReport/` topic receives a request for a command called `getMaxMinReport` from the service. This request can include a payload with command parameters. The device sends a response with a payload that includes `maxTemp` and `minTemp` values. 
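The topic handling in the callback above splits the topic string to recover the method name and request ID, which the device needs to build the `$iothub/methods/res/...` response topic. The following Python sketch (an illustration with a hypothetical helper name, not part of any Azure SDK) shows the same parsing outside of C:

```python
def parse_method_topic(topic: str):
    """Split a direct-method topic such as
    '$iothub/methods/POST/getMaxMinReport/?$rid=1'
    into the method name and the request ID (illustrative helper)."""
    prefix = "$iothub/methods/POST/"
    if not topic.startswith(prefix):
        raise ValueError(f"Not a direct-method topic: {topic}")
    method, _, rid = topic[len(prefix):].partition("/?$rid=")
    return method, rid

method, rid = parse_method_topic("$iothub/methods/POST/getMaxMinReport/?$rid=1")
response_topic = f"$iothub/methods/res/200/?$rid={rid}"
print(method, response_topic)
# getMaxMinReport $iothub/methods/res/200/?$rid=1
```

The request ID echoed in the response topic is how the service correlates the device's reply with the original method invocation.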
++The `$iothub/twin/PATCH/properties/desired/` topic receives property updates from the service. This example assumes the property update is for the `targetTemperature` property. It responds with an acknowledgment that looks like `{\"targetTemperature\":{\"value\":46,\"ac\":200,\"av\":12,\"ad\":\"success\"}}`. ++In summary, the sample implements the following capabilities: ++| Name | Capability type | Details | +| - | -- | - | +| temperature | Telemetry | Assume the data type is double | +| maxTempSinceLastReboot | Property | Assume the data type is double | +| targetTemperature | Writable property | Data type is integer | +| getMaxMinReport | Command | Returns JSON with `maxTemp` and `minTemp` fields of type double | ++## Design a model ++Every IoT Plug and Play device has a model that describes the features and capabilities of the device. The model uses the [Digital Twin Definition Language (DTDL)](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/README.md) to describe the device capabilities. ++For a simple model that maps the existing capabilities of your device, use the *Telemetry*, *Property*, and *Command* DTDL elements. 
++A DTDL model for the sample described in the previous section looks like the following example: ++```json +{ + "@context": "dtmi:dtdl:context;2", + "@id": "dtmi:com:example:ConvertSample;1", + "@type": "Interface", + "displayName": "Simple device", + "description": "Example that shows model for simple device converted to act as an IoT Plug and Play device.", + "contents": [ + { + "@type": [ + "Telemetry", + "Temperature" + ], + "name": "temperature", + "displayName": "Temperature", + "description": "Temperature in degrees Celsius.", + "schema": "double", + "unit": "degreeCelsius" + }, + { + "@type": [ + "Property", + "Temperature" + ], + "name": "targetTemperature", + "schema": "double", + "displayName": "Target Temperature", + "description": "Allows to remotely specify the desired target temperature.", + "unit": "degreeCelsius", + "writable": true + }, + { + "@type": [ + "Property", + "Temperature" + ], + "name": "maxTempSinceLastReboot", + "schema": "double", + "unit": "degreeCelsius", + "displayName": "Max temperature since last reboot.", + "description": "Returns the max temperature since last device reboot." + }, + { + "@type": "Command", + "name": "getMaxMinReport", + "displayName": "Get Max-Min report.", + "description": "This command returns the max and min temperature.", + "request": { + }, + "response": { + "name": "tempReport", + "displayName": "Temperature Report", + "schema": { + "@type": "Object", + "fields": [ + { + "name": "maxTemp", + "displayName": "Max temperature", + "schema": "double" + }, + { + "name": "minTemp", + "displayName": "Min temperature", + "schema": "double" + } + ] + } + } + } + ] +} +``` ++In this model: ++- The `name` and `schema` values map to the data the device sends and receives. +- All the capabilities are grouped in a single interface. +- The `@type` fields identify the DTDL types such as **Property** and **Command**. 
+- Fields such as `unit`, `displayName`, and `description` provide extra information for the service to use. For example, IoT Central uses these values when it displays data on device dashboards. ++To learn more, see [IoT Plug and Play conventions](concepts-convention.md) and [IoT Plug and Play modeling guide](concepts-modeling-guide.md). ++## Update the code ++If your device is already working with IoT Hub or IoT Central, you don't need to make any changes to the implementation of its telemetry, property, and command capabilities. To make the device follow the IoT Plug and Play conventions, modify the way that the device connects to your service so that it announces the ID of the model you created. The service can then use the model to understand the device capabilities. For example, IoT Central can use the model ID to automatically retrieve the model from a repository and generate a device template for your device. ++IoT devices connect to your IoT service either through the Device Provisioning Service (DPS) or directly with a connection string. ++If your device uses DPS to connect, include the model ID in the payload you send when you register the device. For the example model shown previously, the payload looks like: ++```json +{ + "modelId" : "dtmi:com:example:ConvertSample;1" +} +``` ++To learn more, see [Runtime Registration - Register Device](/rest/api/iot-dps/device/runtime-registration/register-device). ++If your device uses DPS to connect or connects directly with a connection string, include the model ID when your code connects to IoT Hub. For example: ++```c +#define USERNAME IOTHUBNAME ".azure-devices.net/" DEVICEID "/?api-version=2020-09-30&model-id=dtmi:com:example:ConvertSample;1" ++// ... ++mosquitto_username_pw_set(mosq, USERNAME, PWD); ++// ... 
++rc = mosquitto_connect(mosq, HOST, PORT, 10); +``` ++## Next steps ++Now that you know how to convert an existing device to be an IoT Plug and Play device, a suggested next step is to read the [IoT Plug and Play modeling guide](concepts-modeling-guide.md). |
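The writable-property acknowledgment shown at the top of the conversion article above can be reproduced without any SDK. The helper name `build_ack` is my own, but the `value`/`ac`/`av`/`ad` fields follow the IoT Plug and Play conventions and rebuild the exact payload shown earlier:

```python
import json

def build_ack(name, value, code=200, version=1, description="success"):
    # value: the accepted value, ac: a status code, av: the acknowledged
    # desired-property version, ad: a human-readable description.
    return {name: {"value": value, "ac": code, "av": version, "ad": description}}

# A desired-property patch as received on $iothub/twin/PATCH/properties/desired/
desired = {"targetTemperature": 46, "$version": 12}

ack = build_ack("targetTemperature", desired["targetTemperature"],
                version=desired["$version"])
print(json.dumps(ack))
# → {"targetTemperature": {"value": 46, "ac": 200, "av": 12, "ad": "success"}}
```

A real device would report this payload back through the reported-properties topic after applying the new target temperature.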
iot | Howto Manage Digital Twin | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot/howto-manage-digital-twin.md | + + Title: How to manage IoT Plug and Play digital twins +description: How to manage an IoT Plug and Play device by using the digital twin APIs ++ Last updated : 1/23/2024+++++# Manage IoT Plug and Play digital twins ++IoT Plug and Play supports **Get digital twin** and **Update digital twin** operations to manage digital twins. You can use either the [REST APIs](/rest/api/iothub/service/digitaltwin) or one of the [service SDKs](concepts-developer-guide-service.md#service-sdks). ++## Update a digital twin ++An IoT Plug and Play device implements a model described by the [Digital Twins Definition Language (DTDL)](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/README.md). Solution developers can use the **Update Digital Twin API** to update the state of components and the properties of the digital twin. ++The IoT Plug and Play device used as an example in this article implements the [Temperature Controller model](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/samples/TemperatureController.json) with [Thermostat](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/samples/Thermostat.json) components. ++The following snippet shows the response to a **Get digital twin** request formatted as a JSON object. 
To learn more about the digital twin format, see [Understand IoT Plug and Play digital twins](./concepts-digital-twin.md#digital-twin-example): ++```json +{ + "$dtId": "sample-device", + "serialNumber": "alwinexlepaho8329", + "thermostat1": { + "maxTempSinceLastReboot": 25.3, + "targetTemperature": 20.4, + "$metadata": { + "targetTemperature": { + "desiredValue": 20.4, + "desiredVersion": 4, + "ackVersion": 4, + "ackCode": 200, + "ackDescription": "Successfully executed patch", + "lastUpdateTime": "2020-07-17T06:11:04.9309159Z" + }, + "maxTempSinceLastReboot": { + "lastUpdateTime": "2020-07-17T06:10:31.9609233Z" + } + } + }, + "$metadata": { + "$model": "dtmi:com:example:TemperatureController;1", + "serialNumber": { + "lastUpdateTime": "2020-07-17T06:10:31.9609233Z" + } + } +} +``` ++Digital twins let you update an entire component or property using a [JSON Patch](http://jsonpatch.com/). ++For example, you can update the `targetTemperature` property as follows: ++```json +[ + { + "op": "add", + "path": "/thermostat1/targetTemperature", + "value": 21.4 + } +] +``` ++The previous update sets the desired value of a property in the corresponding component-level `$metadata` as shown in the following snippet. IoT Hub updates the desired version of the property: ++```json +"thermostat1": { + "targetTemperature": 20.4, + "$metadata": { + "targetTemperature": { + "desiredValue": 21.4, + "desiredVersion": 5, + "ackVersion": 4, + "ackCode": 200, + "ackDescription": "Successfully executed patch", + "lastUpdateTime": "2020-07-17T06:11:04.9309159Z" + } + } +} +``` ++### Add, replace, or remove a component ++Component level operations require an empty object `$metadata` marker within the value. ++An add or replace component operation sets the desired values of all the provided properties. It also clears desired values for any writable properties not provided with the update. ++Removing a component clears the desired values of all writable properties present. 
A device eventually synchronizes this removal and stops reporting the individual properties. The component is then removed from the digital twin. ++The following JSON Patch sample shows how to add, replace, or remove a component: ++```json +[ + { + "op": "add", + "path": "/thermostat1", + "value": { + "targetTemperature": 21.4, + "anotherWritableProperty": 42, + "$metadata": {} + } + }, + { + "op": "replace", + "path": "/thermostat1", + "value": { + "targetTemperature": 21.4, + "$metadata": {} + } + }, + { + "op": "remove", + "path": "/thermostat2" + } +] +``` ++### Add, replace, or remove a property ++An add or replace operation sets the desired value of a property. The device can synchronize state and report an update of the value along with an `ack` code, version, and description. ++Removing a property clears the desired value of the property if it's set. The device can then stop reporting this property, and the property is removed from the component. If this property is the last one in the component, then the component is removed as well. ++The following JSON Patch sample shows how to add, replace, or remove a property within a component: ++```json +[ + { + "op": "add", + "path": "/thermostat1/targetTemperature", + "value": 21.4 + }, + { + "op": "replace", + "path": "/thermostat1/anotherWritableProperty", + "value": 42 + }, + { + "op": "remove", + "path": "/thermostat2/targetTemperature" + } +] +``` ++### Rules for setting the desired value of a digital twin property ++**Name** ++The name of a component or property must be a valid DTDL name. ++Allowed characters are a-z, A-Z, 0-9 (not as the first character), and underscore (not as the first or last character). ++A name can be 1-64 characters long. ++**Property value** ++The value must be a valid [DTDL Property](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v3/DTDL.v3.md#property). ++All primitive types are supported. Within complex types, enums, maps, and objects are supported. 
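The **Name** rules above can be captured in a short validation sketch. The regular expression is my own rendering of those rules, not an official artifact:

```python
import re

# First character: a letter. Last character: a letter or digit (no trailing
# underscore). Middle characters: letters, digits, underscores. Length 1-64.
DTDL_NAME = re.compile(r"^[A-Za-z](?:[A-Za-z0-9_]{0,62}[A-Za-z0-9])?$")

def is_valid_dtdl_name(name: str) -> bool:
    return bool(DTDL_NAME.fullmatch(name))

print(is_valid_dtdl_name("targetTemperature"))  # True
print(is_valid_dtdl_name("1temperature"))       # False: digit as first character
print(is_valid_dtdl_name("temperature_"))       # False: trailing underscore
```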
To learn more, see [DTDL Schemas](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v3/DTDL.v3.md#schema). ++Properties don't support arrays or any complex schema that contains an array. ++A maximum depth of five levels is supported for a complex object. ++All field names within a complex object should be valid DTDL names. ++All map keys should be valid DTDL names. ++## Troubleshoot update digital twin API errors ++The digital twin API throws the following generic error message: ++`ErrorCode:ArgumentInvalid;'{propertyName}' exists within the device twin and is not digital twin conformant property. Please refer to aka.ms/dtpatch to update this to be conformant.` ++If you see this error, make sure the update patch follows the [rules for setting the desired value of a digital twin property](#rules-for-setting-the-desired-value-of-a-digital-twin-property). ++When you update a component, make sure that the [empty object $metadata marker](#add-replace-or-remove-a-component) is set. ++Updates can fail if a device's reported values don't conform to the [IoT Plug and Play conventions](./concepts-convention.md#writable-properties). ++## Next steps ++Now that you've learned about digital twins, here are some more resources: ++- [Interact with a device from your solution](tutorial-service.md) +- [IoT Digital Twin REST API](/rest/api/iothub/service/digitaltwin) +- [Azure IoT explorer](../iot/howto-use-iot-explorer.md) |
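As a stand-alone illustration of the add, replace, and remove semantics in the digital twin article above, the sketch below applies patch operations to a plain dictionary. It's deliberately simplified (the helper name and structure are mine): the real Update Digital Twin API also maintains the `$metadata` desired values and versions shown earlier.

```python
def apply_patch(twin: dict, patch: list) -> dict:
    """Apply a minimal subset of JSON Patch (add/replace/remove) to a dict."""
    for op in patch:
        keys = op["path"].lstrip("/").split("/")
        parent = twin
        for key in keys[:-1]:
            parent = parent.setdefault(key, {})
        if op["op"] in ("add", "replace"):
            parent[keys[-1]] = op["value"]
        elif op["op"] == "remove":
            parent.pop(keys[-1], None)
    return twin

twin = {"thermostat1": {"targetTemperature": 20.4},
        "thermostat2": {"targetTemperature": 18.0}}

apply_patch(twin, [
    {"op": "add", "path": "/thermostat1/targetTemperature", "value": 21.4},
    {"op": "remove", "path": "/thermostat2/targetTemperature"},
])
print(twin)  # → {'thermostat1': {'targetTemperature': 21.4}, 'thermostat2': {}}
```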
iot | Howto Use Iot Explorer | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot/howto-use-iot-explorer.md | For a list of the IoT features supported by the latest version of the tool, see ## Next steps -In this how-to article, you learned how to install and use Azure IoT explorer to interact with your IoT Plug and Play devices. A suggested next step is to learn how to [Manage IoT Plug and Play digital twins](../iot-develop/howto-manage-digital-twin.md). +In this how-to article, you learned how to install and use Azure IoT explorer to interact with your IoT Plug and Play devices. A suggested next step is to learn how to [Manage IoT Plug and Play digital twins](./howto-manage-digital-twin.md). |
iot | Iot Device Sdks Lifecycle And Support | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot/iot-device-sdks-lifecycle-and-support.md | - Title: Azure IoT device SDKs lifecycle and support -description: Describe the lifecycle and support for our IoT Hub and DPS device SDKs ----- Previously updated : 4/13/2023---# Azure IoT Device SDK lifecycle and support --The article describes the Azure IoT Device SDK lifecycle and support policy. For more information, see [Azure SDK Lifecycle and support policy](https://azure.github.io/azure-sdk/policies_support.html). --## Package lifecycle --The releases fall into the following categories, each with a defined support structure. -1. **Beta** - Also known as Preview or Release Candidate. Available for early access and feedback purposes and **is not recommended** for use in production. The preview version support is limited to GitHub issues. Preview releases typically live for less than six months, after which they're either deprecated or released as active. --1. **Active** - Generally available and fully supported, receives new feature updates, as well as bug and security fixes. We recommend that customers use the **latest version** because that version receives fixes and updates. --1. **Deprecated** - Superseded by a more recent release. Deprecation occurs at the same time the new release becomes active. Deprecated releases address the most critical bug fixes and security fixes for another **12 months**. --## Get support --If you experience problems while using the Azure IoT SDKs, there are several ways to seek support: --* **Reporting bugs** - All customers can report bugs on the issues page for the GitHub repository associated with the relevant SDK. 
--* **Microsoft Customer Support team** - Users who have a [support plan](https://azure.microsoft.com/support/plans/) can engage the Microsoft Customer Support team by creating a support ticket directly from the [Azure portal](https://portal.azure.com/signin/index/?feature.settingsportalinstance=mpac). |
iot | Iot Glossary | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot/iot-glossary.md | Applies to: IoT Hub, IoT Central A description, written in the [Digital Twins Definition Language](#digital-twins-definition-language), of the capabilities of a [device](#device). Capabilities include [telemetry](#telemetry), [properties](#properties), and [commands](#command). -[Learn more](../iot-develop/concepts-modeling-guide.md) +[Learn more](../iot/concepts-modeling-guide.md) Casing rules: Always lowercase. |
iot | Iot Overview Device Development | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot/iot-overview-device-development.md | As a device developer, when you implement an IoT Plug and Play device there are To learn more, see: -- [What is IoT Plug and Play?](../iot-develop/overview-iot-plug-and-play.md)-- [IoT Plug and Play modeling guide](../iot-develop/concepts-modeling-guide.md)+- [What is IoT Plug and Play?](../iot/overview-iot-plug-and-play.md) +- [IoT Plug and Play modeling guide](../iot/concepts-modeling-guide.md) ## Containerized device code |
iot | Iot Overview Device Management | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot/iot-overview-device-management.md | In Azure IoT, *command and control* refers to the processes that let you send co Azure IoT solutions can use the following primitives for both device management and command and control: - *Device twins* to share and synchronize state data with the cloud. For example, a device can use the device twin to report the current state of a valve it controls to the cloud and to receive a desired target temperature from the cloud.-- *Digital twins* to represent a device in the digital world. For example, a digital twin can represent a device's physical location, its capabilities, and its relationships with other devices. To learn more about the differences between device twins and digital twins, see [Understand IoT Plug and Play digital twins](../iot-pnp/concepts-digital-twin.md).+- *Digital twins* to represent a device in the digital world. For example, a digital twin can represent a device's physical location, its capabilities, and its relationships with other devices. To learn more about the differences between device twins and digital twins, see [Understand IoT Plug and Play digital twins](concepts-digital-twin.md). - *Direct methods* to receive commands from the cloud. A direct method can have parameters and return a response. For example, the cloud can call a direct method to request the device to reboot in 30 seconds. - *Cloud-to-device* messages to receive one-way notifications from the cloud. For example, a notification that an update is ready to download. |
iot | Iot Sdks | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot/iot-sdks.md | The following tables list the various SDKs you can use to build IoT solutions. Use the device SDKs to develop code to run on IoT devices that connect to IoT Hub or IoT Central. -To learn more about how to use the device SDKs, see [What is Azure IoT device and application development?](../iot-develop/about-iot-develop.md). +To learn more about how to use the device SDKs, see [What is Azure IoT device and application development?](../iot-develop/about-iot-develop.md). ### Embedded device SDKs Use the embedded device SDKs to develop code to run on IoT devices that connect To learn more about when to use the embedded device SDKs, see [C SDK and Embedded C SDK usage scenarios](../iot-develop/concepts-using-c-sdk-and-embedded-c-sdk.md). +### Device SDK lifecycle and support ++This section summarizes the Azure IoT Device SDK lifecycle and support policy. For more information, see [Azure SDK Lifecycle and support policy](https://azure.github.io/azure-sdk/policies_support.html). ++#### Package lifecycle ++Packages are released in the following categories. Each category has a defined support structure. ++1. **Beta** - Also known as Preview or Release Candidate. Available for early access and feedback purposes and **is not recommended** for use in production. The preview version support is limited to GitHub issues. Preview releases typically live for less than six months, after which they're either deprecated or released as active. ++1. **Active** - Generally available and fully supported, receives new feature updates, as well as bug and security fixes. We recommend that customers use the **latest version** because that version receives fixes and updates. ++1. **Deprecated** - Superseded by a more recent release. Deprecation occurs at the same time the new release becomes active. 
Deprecated releases address the most critical bug fixes and security fixes for another **12 months**. ++#### Get support ++If you experience problems while using the Azure IoT SDKs, there are several ways to seek support: ++* **Reporting bugs** - All customers can report bugs on the issues page for the GitHub repository associated with the relevant SDK. ++* **Microsoft Customer Support team** - Users who have a [support plan](https://azure.microsoft.com/support/plans/) can engage the Microsoft Customer Support team by creating a support ticket directly from the [Azure portal](https://portal.azure.com/signin/index/?feature.settingsportalinstance=mpac). + ## IoT Hub service SDKs [!INCLUDE [iot-hub-sdks-service](../../includes/iot-hub-sdks-service.md)] -To learn more about using the service SDKs to interact with devices through an IoT hub, see [IoT Plug and Play service developer guide](../iot-develop/concepts-developer-guide-service.md). +To learn more about using the service SDKs to interact with devices through an IoT hub, see [IoT Plug and Play service developer guide](../iot/concepts-developer-guide-service.md). ## IoT Hub management SDKs Alternatives to the management SDKs include the [Azure CLI](../iot-hub/iot-hub-c Suggested next steps include: -- [Device developer guide](../iot-develop/concepts-developer-guide-device.md)-- [Service developer guide](../iot-develop/concepts-developer-guide-service.md)+- [Device developer guide](../iot/concepts-developer-guide-device.md) +- [Service developer guide](../iot/concepts-developer-guide-service.md) |
iot | Iot Services And Technologies | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot/iot-services-and-technologies.md | Azure IoT technologies and services provide you with options to create a wide va You can choose a device to use from the [Azure Certified for IoT device catalog](https://devicecatalog.azure.com). You can implement your own embedded code using the open-source [device SDKs](./iot-sdks.md). The device SDKs support multiple operating systems, such as Linux, Windows, and real-time operating systems. There are SDKs for multiple programming languages, such as [C](https://github.com/Azure/azure-iot-sdk-c), [Node.js](https://github.com/Azure/azure-iot-sdk-node), [Java](https://github.com/Azure/azure-iot-sdk-java), [.NET](https://github.com/Azure/azure-iot-sdk-csharp), and [Python](https://github.com/Azure/azure-iot-sdk-python). -You can further simplify how you create the embedded code for your devices by following the [IoT Plug and Play](../iot-develop/overview-iot-plug-and-play.md) conventions. IoT Plug and Play enables solution developers to integrate devices with their solutions without writing any embedded code. At the core of IoT Plug and Play, is a _device capability model_ schema that describes device capabilities. Use the device capability model to configure a cloud-based solution such as an IoT Central application. +You can further simplify how you create the embedded code for your devices by following the [IoT Plug and Play](../iot/overview-iot-plug-and-play.md) conventions. IoT Plug and Play enables solution developers to integrate devices with their solutions without writing any embedded code. At the core of IoT Plug and Play is a _device capability model_ schema that describes device capabilities. Use the device capability model to configure a cloud-based solution such as an IoT Central application. 
[Azure IoT Edge](../iot-edge/about-iot-edge.md) lets you offload parts of your IoT workload from your Azure cloud services to your devices. IoT Edge can reduce latency in your solution, reduce the amount of data your devices exchange with the cloud, and enable off-line scenarios. You can manage IoT Edge devices from IoT Central. |
iot | Overview Iot Plug And Play | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot/overview-iot-plug-and-play.md | + + Title: Introduction to IoT Plug and Play | Microsoft Docs +description: Learn about IoT Plug and Play. IoT Plug and Play is based on an open modeling language that enables smart IoT devices to declare their capabilities. IoT devices present that declaration, called a device model, when they connect to cloud solutions. The cloud solution can then automatically understand the device and start interacting with it, all without writing any code. ++ Last updated : 1/23/2024+++++#Customer intent: As a device builder, I need to know what is IoT Plug and Play, so I can understand how it can help me build and market my IoT devices. +++# What is IoT Plug and Play? ++IoT Plug and Play enables solution builders to integrate IoT devices with their solutions without any manual configuration. At the core of IoT Plug and Play is a device _model_ that a device uses to advertise its capabilities to an IoT Plug and Play-enabled application. This model is structured as a set of elements that define: ++- _Properties_ that represent the read-only or writable state of a device or other entity. For example, a device serial number may be a read-only property and a target temperature on a thermostat may be a writable property. +- _Telemetry_ that's the data emitted by a device, whether the data is a regular stream of sensor readings, an occasional error, or an information message. +- _Commands_ that describe a function or operation that can be done on a device. For example, a command could reboot a gateway or take a picture using a remote camera. ++You can group these elements in interfaces to reuse across models to make collaboration easier and to speed up development. 
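For illustration, a minimal DTDL interface grouping one element of each kind might look like the following sketch (the `@id` and element names here are hypothetical, not from any published model):

```json
{
  "@context": "dtmi:dtdl:context;2",
  "@id": "dtmi:com:example:MinimalSensor;1",
  "@type": "Interface",
  "displayName": "Minimal sensor",
  "contents": [
    { "@type": "Property", "name": "serialNumber", "schema": "string" },
    { "@type": "Telemetry", "name": "temperature", "schema": "double" },
    { "@type": "Command", "name": "reboot" }
  ]
}
```

A property with no `writable` field defaults to read-only, matching the serial number example above.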
++To make IoT Plug and Play work with [Azure Digital Twins](../digital-twins/overview.md), you define models and interfaces using the [Digital Twins Definition Language (DTDL)](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/README.md). IoT Plug and Play and the DTDL are open to the community, and Microsoft welcomes collaboration with customers, partners, and industry. Both are based on open W3C standards such as JSON-LD and RDF, which enables easier adoption across services and tooling. ++There's no extra cost for using IoT Plug and Play and DTDL. Standard rates for [Azure IoT Hub](../iot-hub/about-iot-hub.md) and other Azure services remain the same. ++This article outlines: ++- The typical roles associated with a project that uses IoT Plug and Play. +- How to use IoT Plug and Play devices in your application. +- How to develop an IoT device application that supports IoT Plug and Play. ++## User roles ++IoT Plug and Play is used by two types of developer: ++- A _solution builder_ who is responsible for developing an IoT solution using Azure IoT Hub and other Azure resources, and for identifying IoT devices to integrate. To learn more, see [IoT Plug and Play service developer guide](concepts-developer-guide-service.md). +- A _device builder_ who creates the code that runs on a device connected to your solution. To learn more, see [IoT Plug and Play device developer guide](concepts-developer-guide-device.md). ++## Use IoT Plug and Play devices ++As a solution builder, you can use [IoT Central](../iot-central/core/overview-iot-central.md) or [IoT Hub](../iot-hub/about-iot-hub.md) to develop a cloud-hosted IoT solution that uses IoT Plug and Play devices. ++The web UI in IoT Central lets you monitor device conditions, create rules, and manage millions of devices and their data throughout their life cycle. IoT Plug and Play devices connect directly to an IoT Central application. Here you can use customizable dashboards to monitor and control your devices. 
You can also use device templates in the IoT Central web UI to create and edit DTDL models. ++IoT Hub - a managed cloud service - acts as a message hub for secure, bi-directional communication between your IoT application and your devices. When you connect an IoT Plug and Play device to an IoT hub, you can use the [Azure IoT explorer](../iot/howto-use-iot-explorer.md) tool to view the telemetry, properties, and commands defined in the DTDL model. ++To learn more, see [IoT Plug and Play architecture](concepts-architecture.md) ++## Develop an IoT device application ++As a device builder, you can develop an IoT hardware product that supports IoT Plug and Play. The process includes three key steps: ++1. Define the device model. You author a set of JSON files that define your device's capabilities using the [DTDL](https://github.com/Azure/opendigitaltwins-dtdl). A model describes a complete entity such as a physical product, and defines the set of interfaces implemented by that entity. Interfaces are shared contracts that uniquely identify the telemetry, properties, and commands supported by a device. You can reuse interfaces across different models. ++1. Implement your device software or firmware such that your telemetry, properties, and commands follow the [IoT Plug and Play conventions](concepts-convention.md). ++1. Ensure the device announces the model ID as part of the MQTT connection. The Azure IoT SDKs include constructs to provide the model ID at connection time. ++## Device certification ++The [IoT Plug and Play device certification program](../certification/program-requirements-pnp.md) verifies that a device meets the IoT Plug and Play certification requirements. You can add a certified device to the public [Certified for Azure IoT device catalog](https://aka.ms/devicecatalog) where it's discoverable by other solution builders. 
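The model ID announcement in step 3 can be illustrated for a raw MQTT connection. The query-string format mirrors the C snippet in the device-conversion article earlier in this digest; the hub, device, and model names below are placeholders:

```python
# Placeholder values; substitute your own hub, device, and model IDs.
hub_name = "myhub"
device_id = "sample-device"
model_id = "dtmi:com:example:Thermostat;1"

# The model ID travels in the query string of the MQTT username.
username = (f"{hub_name}.azure-devices.net/{device_id}"
            f"/?api-version=2020-09-30&model-id={model_id}")
print(username)
```

When you use an Azure IoT device SDK instead of raw MQTT, the SDK builds this string for you from the model ID you supply at connection time.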
++## Next steps ++Now that you have an overview of IoT Plug and Play, the suggested next step is to try out one of the quickstarts: ++- [Connect a device to IoT Hub](./tutorial-connect-device.md) +- [Interact with a device from your solution](./tutorial-service.md) |
iot | Set Up Environment | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot/set-up-environment.md | + + Title: Tutorial - Set up the IoT resources you need for IoT Plug and Play | Microsoft Docs +description: Tutorial - Create an IoT Hub and Device Provisioning Service instance to use with the IoT Plug and Play quickstarts and tutorials. ++ Last updated : 1/23/2024++++ms.devlang: azurecli +++# Tutorial: Set up your environment for the IoT Plug and Play quickstarts and tutorials ++Before you can complete any of the IoT Plug and Play quickstarts and tutorials, you need to configure an IoT hub and the Device Provisioning Service (DPS) in your Azure subscription. You'll also need local copies of the model files used by the sample applications and the Azure IoT explorer tool. ++## Prerequisites ++If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin. +++## Create the resources ++Create an Azure resource group for the resources: ++```azurecli-interactive +az group create --name my-pnp-resourcegroup --location centralus +``` ++Create an IoT hub. The following command uses the name `my-pnp-hub` as an example for the name of the IoT hub to create. Choose a unique name for your IoT hub to use in place of `my-pnp-hub`: ++```azurecli-interactive +az iot hub create --name my-pnp-hub --resource-group my-pnp-resourcegroup --sku F1 --partition-count 2 +``` ++Create a DPS instance. The following command uses the name `my-pnp-dps` as an example for the name of the DPS instance to create. Choose a unique name for your DPS instance to use in place of `my-pnp-dps`: ++```azurecli-interactive +az iot dps create --name my-pnp-dps --resource-group my-pnp-resourcegroup +``` ++To link the DPS instance to your IoT hub, use the following commands. 
Replace `my-pnp-dps` and `my-pnp-hub` with the unique names you chose previously: ++```azurecli-interactive +hubConnectionString=$(az iot hub connection-string show -n my-pnp-hub --key primary --query connectionString -o tsv) +az iot dps linked-hub create --dps-name my-pnp-dps --resource-group my-pnp-resourcegroup --location centralus --connection-string $hubConnectionString +``` ++## Retrieve the settings ++Some quickstarts and tutorials use the connection string for your IoT hub. You also need the connection string when you set up the Azure IoT explorer tool. Retrieve the connection string and make a note of it now. Replace `my-pnp-hub` with the unique name you chose for your IoT hub: ++```azurecli-interactive +az iot hub connection-string show -n my-pnp-hub --key primary --query connectionString +``` ++Most of the quickstarts and tutorials use the *ID scope* of your DPS configuration. Retrieve the ID scope and make a note of it now. Replace `my-pnp-dps` with the unique name you chose for your DPS instance: ++```azurecli-interactive +az iot dps show --name my-pnp-dps --query properties.idScope +``` ++All the quickstarts and tutorials use a DPS device enrollment. Use the following command to create a `my-pnp-device` *individual device enrollment* in your DPS instance. Replace `my-pnp-dps` with the unique name you chose for your DPS instance. 
Make a note of the registration ID and primary key values to use in the quickstarts and tutorials: ++```azurecli-interactive +az iot dps enrollment create --attestation-type symmetrickey --dps-name my-pnp-dps --resource-group my-pnp-resourcegroup --enrollment-id my-pnp-device --device-id my-pnp-device --query '{registrationID:registrationId,primaryKey:attestation.symmetricKey.primaryKey}' +``` ++## Create environment variables ++Create five environment variables to configure the samples in the quickstarts and tutorials to use the Device Provisioning Service (DPS) to connect to your IoT hub: ++* **IOTHUB_DEVICE_SECURITY_TYPE**: the value `DPS`. +* **IOTHUB_DEVICE_DPS_ID_SCOPE**: the DPS ID scope you made a note of previously. +* **IOTHUB_DEVICE_DPS_DEVICE_ID**: the value `my-pnp-device`. +* **IOTHUB_DEVICE_DPS_DEVICE_KEY**: the enrollment primary key you made a note of previously. +* **IOTHUB_DEVICE_DPS_ENDPOINT**: the value `global.azure-devices-provisioning.net` ++The service samples need the following environment variables to identify the hub and device to connect to: ++* **IOTHUB_CONNECTION_STRING**: the IoT hub connection string you made a note of previously. +* **IOTHUB_DEVICE_ID**: `my-pnp-device`. 
++For example, in a Linux bash shell: ++```bash +export IOTHUB_DEVICE_SECURITY_TYPE="DPS" +export IOTHUB_DEVICE_DPS_ID_SCOPE="<Your ID scope>" +export IOTHUB_DEVICE_DPS_DEVICE_ID="my-pnp-device" +export IOTHUB_DEVICE_DPS_DEVICE_KEY="<Your enrollment primary key>" +export IOTHUB_DEVICE_DPS_ENDPOINT="global.azure-devices-provisioning.net" +export IOTHUB_CONNECTION_STRING="<Your IoT hub connection string>" +export IOTHUB_DEVICE_ID="my-pnp-device" +``` ++For example, at the Windows command line: ++```cmd +set IOTHUB_DEVICE_SECURITY_TYPE=DPS +set IOTHUB_DEVICE_DPS_ID_SCOPE=<Your ID scope> +set IOTHUB_DEVICE_DPS_DEVICE_ID=my-pnp-device +set IOTHUB_DEVICE_DPS_DEVICE_KEY=<Your enrollment primary key> +set IOTHUB_DEVICE_DPS_ENDPOINT=global.azure-devices-provisioning.net +set IOTHUB_CONNECTION_STRING=<Your IoT hub connection string> +set IOTHUB_DEVICE_ID=my-pnp-device +``` ++## Download the model files ++The quickstarts and tutorials use sample model files for the temperature controller and thermostat devices. To download the sample model files: ++1. Create a folder called *models* on your local machine. ++1. Right-click [TemperatureController.json](https://raw.githubusercontent.com/Azure/opendigitaltwins-dtdl/master/DTDL/v2/samples/TemperatureController.json) and save the JSON file to the *models* folder. ++1. Right-click [Thermostat.json](https://raw.githubusercontent.com/Azure/opendigitaltwins-dtdl/master/DTDL/v2/samples/Thermostat.json) and save the JSON file to the *models* folder. ++## Install the Azure IoT explorer ++The quickstarts and tutorials use the **Azure IoT explorer** tool. Go to [Azure IoT explorer releases](https://github.com/Azure/azure-iot-explorer/releases) and expand the list of assets for the most recent release. Download and install the most recent version of the application for your operating system. ++The first time you run the tool, you're prompted for the IoT hub connection string. Use the connection string you made a note of previously. 
++Configure the tool to use the model files you downloaded previously. From the home page in the tool, select **IoT Plug and Play Settings**, then **+ Add > Local folder**. Select the *models* folder you created previously. Then select **Save** to save the settings. ++To learn more, see [Install and use Azure IoT explorer](../iot/howto-use-iot-explorer.md). ++## Clean up resources ++You can use the IoT hub and DPS instance for all the IoT Plug and Play quickstarts and tutorials, so you only need to complete the steps in this article once. When you're finished, you can remove them from your subscription with the following command: ++```azurecli-interactive +az group delete --name my-pnp-resourcegroup +``` ++## Next steps ++Now that you've set up your environment, you can try one of the quickstarts or tutorials such as: ++> [!div class="nextstepaction"] +> [Connect a sample IoT Plug and Play device application to IoT Hub](tutorial-connect-device.md) |
iot | Tutorial Connect Device | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot/tutorial-connect-device.md | + + Title: Tutorial - Connect IoT Plug and Play sample device code to Azure IoT Hub | Microsoft Docs +description: Tutorial - Build and run IoT Plug and Play sample device code (C, C#, Java, JavaScript, or Python) on Linux or Windows that connects to an IoT hub. Use the Azure IoT explorer tool to view the information sent by the device to the hub. ++ Last updated : 1/23/2024++++zone_pivot_groups: programming-languages-set-twenty-seven ++#- id: programming-languages-set-twenty-seven +# Title: Embedded C +#Customer intent: As a device builder, I want to see a working IoT Plug and Play device sample connecting to IoT Hub and sending properties and telemetry, and responding to commands. As a solution builder, I want to use a tool to view the properties, commands, and telemetry an IoT Plug and Play device reports to the IoT hub it connects to. +++# Tutorial: Connect a sample IoT Plug and Play device application running on Linux or Windows to IoT Hub +++++++++++++++++++++++++## Next steps ++In this tutorial, you've learned how to connect an IoT Plug and Play device to an IoT hub. To learn more about how to build a solution that interacts with your IoT Plug and Play devices, see: ++> [!div class="nextstepaction"] +> [How-to: Connect to and interact with a device](./tutorial-service.md) |
iot | Tutorial Migrate Device To Module | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot/tutorial-migrate-device-to-module.md | + + Title: Tutorial - Connect a generic Azure IoT Plug and Play module | Microsoft Docs +description: Tutorial - Use sample C# IoT Plug and Play device code in a generic module. ++ Last updated : 1/23/2024++++#Customer intent: As a device builder, I want to learn how to implement a module that works with IoT Plug and Play. +++# Tutorial: Connect an IoT Plug and Play module (C#) ++This tutorial shows you how to connect a generic IoT Plug and Play [module](../iot-hub/iot-hub-devguide-module-twins.md). ++A device is an IoT Plug and Play device if it: ++* Publishes its model ID when it connects to an IoT hub. +* Implements the properties and methods described in the Digital Twins Definition Language (DTDL) model identified by the model ID. ++To learn more about how devices use a DTDL and model ID, see [IoT Plug and Play developer guide](./concepts-developer-guide-device.md). Modules use model IDs and DTDL models in the same way. ++To demonstrate how to implement an IoT Plug and Play module, this tutorial shows you how to: ++> [!div class="checklist"] +> * Add a device with a module to your IoT hub. +> * Convert the thermostat C# device sample into a generic module. +> * Use the service SDK to interact with the module. ++## Prerequisites +++To complete this tutorial, install the following software in your local development environment: ++* Install the latest .NET for your operating system from [https://dot.net](https://dot.net). +* [Git](https://git-scm.com/download/). ++Use the Azure IoT explorer tool to add a new device called **my-module-device** to your IoT hub. ++Add a module called **my-module** to the **my-module-device**: ++1. In the Azure IoT explorer tool, navigate to the **my-module-device** device. ++1. Select **Module identity**, and then select **+ Add**. ++1. 
Enter **my-module** as the module identity name and select **Save**. ++1. In the list of module identities, select **my-module**. Then copy the primary connection string. You use this module connection string later in this tutorial. ++1. Select the **Module twin** tab and notice that there are no desired or reported properties: ++ ```json + { + "deviceId": "my-module-device", + "moduleId": "my-module", + "etag": "AAAAAAAAAAE=", + "deviceEtag": "NjgzMzQ1MzQ1", + "status": "enabled", + "statusUpdateTime": "0001-01-01T00:00:00Z", + "connectionState": "Disconnected", + "lastActivityTime": "0001-01-01T00:00:00Z", + "cloudToDeviceMessageCount": 0, + "authenticationType": "sas", + "x509Thumbprint": { + "primaryThumbprint": null, + "secondaryThumbprint": null + }, + "modelId": "", + "version": 2, + "properties": { + "desired": { + "$metadata": { + "$lastUpdated": "0001-01-01T00:00:00Z" + }, + "$version": 1 + }, + "reported": { + "$metadata": { + "$lastUpdated": "0001-01-01T00:00:00Z" + }, + "$version": 1 + } + } + } + ``` ++## Download the code ++If you haven't already done so, clone the Azure IoT Hub Device C# SDK GitHub repository to your local machine: ++Open a command prompt in a folder of your choice. Use the following command to clone the [Azure IoT C# SDK](https://github.com/Azure/azure-iot-sdk-csharp) GitHub repository into this location: ++```cmd/sh +git clone https://github.com/Azure/azure-iot-sdk-csharp.git +``` ++## Prepare the project ++To open and prepare the sample project: ++1. Navigate to the *azure-iot-sdk-csharp/iothub/device/samples/solutions/PnpDeviceSamples/Thermostat* folder. ++1. 
Add the following environment variables to your shell environment: ++ | Name | Value | + | - | -- | + | IOTHUB_DEVICE_SECURITY_TYPE | connectionString | + | IOTHUB_MODULE_CONNECTION_STRING | The module connection string you made a note of previously | ++ To learn more about the sample configuration, see the [sample readme](https://github.com/Azure/azure-iot-sdk-csharp/tree/main/iothub/device/samples/solutions/PnpDeviceSamples#readme). ++## Modify the code ++To modify the code to work as a module instead of a device: ++1. In your text editor or IDE, open *Parameter.cs* and modify the line that sets the **PrimaryConnectionString** variable as follows: ++ ```csharp + public string PrimaryConnectionString { get; set; } = Environment.GetEnvironmentVariable("IOTHUB_MODULE_CONNECTION_STRING"); + ``` ++1. In your text editor or IDE, open *Program.cs* and replace the nine instances of the `DeviceClient` class with the `ModuleClient` class. ++ > [!TIP] + > Use the Visual Studio search and replace feature with **Match case** and **Match whole word** enabled to replace `DeviceClient` with `ModuleClient`. ++1. In your text editor or IDE, open *Thermostat.cs* and replace both instances of the `DeviceClient` class with the `ModuleClient` class. ++1. Save the changes to the files you modified. ++1. To run the sample in your shell environment, make sure you're in the *azure-iot-sdk-csharp/iothub/device/samples/solutions/PnpDeviceSamples/Thermostat* folder and that the environment variables are set. 
Then run: ++ ```cmd/sh + dotnet build + dotnet run + ``` ++If you run the code and then use the Azure IoT explorer to view the updated module twin, you see the updated device twin with the model ID and module reported property: ++```json +{ + "deviceId": "my-module-device", + "moduleId": "my-module", + "etag": "AAAAAAAAAAE=", + "deviceEtag": "MTk0ODMyMjI4", + "status": "enabled", + "statusUpdateTime": "0001-01-01T00:00:00Z", + "connectionState": "Connected", + "lastActivityTime": "2022-11-16T13:56:43.1711257Z", + "cloudToDeviceMessageCount": 0, + "authenticationType": "sas", + "x509Thumbprint": { + "primaryThumbprint": null, + "secondaryThumbprint": null + }, + "modelId": "dtmi:com:example:Thermostat;1", + "version": 5, + "properties": { + "desired": { + "$metadata": { + "$lastUpdated": "0001-01-01T00:00:00Z" + }, + "$version": 1 + }, + "reported": { + "targetTemperature": { + "value": 0, + "ac": 203, + "av": 0, + "ad": "Initialized with default value" + }, + "maxTempSinceLastReboot": 23.4, + "$metadata": { + "$lastUpdated": "2022-11-16T14:06:59.4376422Z", + "targetTemperature": { + "$lastUpdated": "2022-11-16T13:55:55.6688872Z", + "value": { + "$lastUpdated": "2022-11-16T13:55:55.6688872Z" + }, + "ac": { + "$lastUpdated": "2022-11-16T13:55:55.6688872Z" + }, + "av": { + "$lastUpdated": "2022-11-16T13:55:55.6688872Z" + }, + "ad": { + "$lastUpdated": "2022-11-16T13:55:55.6688872Z" + } + }, + "maxTempSinceLastReboot": { + "$lastUpdated": "2022-11-16T14:06:59.4376422Z" + } + }, + "$version": 2 + } + } +} +``` ++## Interact with a device module ++The service SDKs let you retrieve the model ID of connected IoT Plug and Play devices and modules. You can use the service SDKs to set writable properties and call commands: ++1. In another shell environment, navigate to the *azure-iot-sdk-csharp\iothub\service\samples\solutions\PnpServiceSamples\Thermostat* folder. ++1. 
Add the following environment variables to your shell environment: ++ | Name | Value | + | - | -- | + | IOTHUB_DEVICE_ID | my-module-device | + | IOTHUB_CONNECTION_STRING | The value you made a note of when you completed [Set up your environment](set-up-environment.md) | ++ > [!TIP] + > You can also find your IoT hub connection string in the Azure IoT explorer tool. ++1. In your text editor or IDE, open the *ThermostatSample.cs* file and modify the line that calls a command as follows: ++ ```csharp + CloudToDeviceMethodResult result = await _serviceClient.InvokeDeviceMethodAsync(_deviceId, "my-module", commandInvocation); + ``` ++1. In the *ThermostatSample.cs* file, modify the line that retrieves the device twin as follows: ++ ```csharp + Twin twin = await s_registryManager.GetTwinAsync(s_deviceId, "my-module"); + ``` ++1. Save your changes. ++1. Make sure the module client sample is still running, and then run this service sample: ++ ```cmd/sh + dotnet build + dotnet run + ``` ++The output from the service sample shows the model ID from the device twin and the command call: ++```cmd +[11/16/2022 14:27:56]dbug: Microsoft.Azure.Devices.Samples.ThermostatSample[0] + Get the my-module-device device twin. +... +[11/16/2022 14:27:58]dbug: Microsoft.Azure.Devices.Samples.ThermostatSample[0] + The my-module-device device twin has a model with ID dtmi:com:example:Thermostat;1. +[11/16/2022 14:27:58]dbug: Microsoft.Azure.Devices.Samples.ThermostatSample[0] + Update the targetTemperature property on the my-module-device device twin to 44. +[11/16/2022 14:27:58]dbug: Microsoft.Azure.Devices.Samples.ThermostatSample[0] + Get the my-module-device device twin. +... +[11/16/2022 14:27:58]dbug: Microsoft.Azure.Devices.Samples.ThermostatSample[0] + Invoke the getMaxMinReport command on my-module-device device twin. +[11/16/2022 14:27:59]dbug: Microsoft.Azure.Devices.Samples.ThermostatSample[0] + Command getMaxMinReport was invoked on device twin my-module-device. 
+Device returned status: 200. +Report: {"maxTemp":23.4,"minTemp":23.4,"avgTemp":23.39999999999999,"startTime":"2022-11-16T14:26:00.7446533+00:00","endTime":"2022-11-16T14:27:54.3102604+00:00"} +``` ++The output from the module client shows the command handler's response: ++```cmd +[11/16/2022 14:27:59]Microsoft.Azure.Devices.Client.Samples.ThermostatSample[0] Command: Received - Generating max, min and avg temperature report since 16/11/2022 14:25:58. +[11/16/2022 14:27:59]Microsoft.Azure.Devices.Client.Samples.ThermostatSample[0] Command: MaxMinReport since 16/11/2022 14:25:58: maxTemp=23.4, minTemp=23.4, avgTemp=23.39999999999999, startTime=16/11/2022 14:26:00, endTime=16/11/2022 14:27:54 +``` ++## Convert to an IoT Edge module ++To convert this sample to work as an IoT Plug and Play IoT Edge module, you must containerize the application. You don't need to make any further code changes. The connection string environment variable is injected by the IoT Edge runtime at startup. To learn more, see [Use Visual Studio 2019 to develop and debug modules for Azure IoT Edge](../iot-edge/how-to-visual-studio-develop-module.md). ++To learn how to deploy your containerized module, see: ++* [Run Azure IoT Edge on Ubuntu Virtual Machines](../iot-edge/how-to-install-iot-edge-ubuntuvm.md). +* [Install the Azure IoT Edge runtime on Debian-based Linux systems](../iot-edge/how-to-provision-single-device-linux-symmetric.md). ++You can use the Azure IoT explorer tool to see: ++* The model ID of your IoT Edge device in the module twin. +* Telemetry from the IoT Edge device. +* IoT Edge module twin property updates triggering IoT Plug and Play notifications. +* How the IoT Edge module reacts to your IoT Plug and Play commands. ++## Clean up resources +++## Next steps ++In this tutorial, you've learned how to connect an IoT Plug and Play device with modules to an IoT hub. 
To learn more about IoT Plug and Play device models, see: ++> [!div class="nextstepaction"] +> [IoT Plug and Play modeling developer guide](./concepts-developer-guide-device.md) |
iot | Tutorial Multiple Components | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot/tutorial-multiple-components.md | + + Title: Tutorial - Connect IoT Plug and Play multiple component device applications to IoT Hub | Microsoft Docs +description: Tutorial - Build and run IoT Plug and Play sample device code that uses multiple components and connects to an IoT hub. The tutorial shows you how to use C, C#, Java, JavaScript, or Python. Use the Azure IoT explorer tool to view the information sent by the device to the hub. ++ Last updated : 1/23/2024++++zone_pivot_groups: programming-languages-set-twenty-six ++#- id: programming-languages-set-twenty-six +# Title: Python +#Customer intent: As a device builder, I want to see a working IoT Plug and Play device sample connecting to IoT Hub and using multiple components to send properties and telemetry, and responding to commands. As a solution builder, I want to use a tool to view the properties, commands, and telemetry an IoT Plug and Play device reports to the IoT hub it connects to. +++# Tutorial: Connect IoT Plug and Play multiple component device applications running on Linux or Windows to IoT Hub +++++++++++++++++## Clean up resources +++## Next steps ++In this tutorial, you've learned how to connect an IoT Plug and Play device with components to an IoT hub. To learn more about IoT Plug and Play device models, see: ++> [!div class="nextstepaction"] +> [IoT Plug and Play modeling developer guide](concepts-developer-guide-device.md) |
iot | Tutorial Service | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot/tutorial-service.md | + + Title: Tutorial - Interact with an IoT Plug and Play device connected to your Azure IoT solution | Microsoft Docs +description: Tutorial - Use C#, JavaScript, Java, or Python to connect to and interact with an IoT Plug and Play device that's connected to your Azure IoT solution. ++ Last updated : 1/23/2024++++zone_pivot_groups: programming-languages-set-ten ++# - id: programming-languages-set-ten +# Title: Python +#Customer intent: As a solution builder, I want to connect to and interact with an IoT Plug and Play device that's connected to my solution. For example, to collect telemetry from the device or to control the behavior of the device. +++# Tutorial: Interact with an IoT Plug and Play device that's connected to your solution ++++++++++++++## Clean up resources ++If you've finished with the quickstarts and tutorials, see [Clean up resources](set-up-environment.md#clean-up-resources). ++## Next steps ++In this tutorial, you learned how to connect an IoT Plug and Play device to an IoT solution. To learn more about IoT Plug and Play device models, see: ++> [!div class="nextstepaction"] +> [IoT Plug and Play modeling developer guide](concepts-developer-guide-device.md) |
key-vault | Rbac Guide | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/general/rbac-guide.md | More about Azure Key Vault management guidelines, see: | Key Vault Administrator| Perform all data plane operations on a key vault and all objects in it, including certificates, keys, and secrets. Cannot manage key vault resources or manage role assignments. Only works for key vaults that use the 'Azure role-based access control' permission model. | 00482a5a-887f-4fb3-b363-3b7fe8e74483 | | Key Vault Reader | Read metadata of key vaults and its certificates, keys, and secrets. Cannot read sensitive values such as secret contents or key material. Only works for key vaults that use the 'Azure role-based access control' permission model. | 21090545-7ca7-4776-b22c-e363652d74d2 | | Key Vault Certificates Officer | Perform any action on the certificates of a key vault, except manage permissions. Only works for key vaults that use the 'Azure role-based access control' permission model. | a4417e6f-fecd-4de8-b567-7b0420556985 |-| Key Vault Certificates User | Read entire certificate contents including secret and key portion. Only works for key vaults that use the 'Azure role-based access control' permission model. | a4417e6f-fecd-4de8-b567-7b0420556985 | +| Key Vault Certificate User | Read entire certificate contents including secret and key portion. Only works for key vaults that use the 'Azure role-based access control' permission model. | db79e9a7-68ee-4b58-9aeb-b90e7c24fcba | | Key Vault Crypto Officer | Perform any action on the keys of a key vault, except manage permissions. Only works for key vaults that use the 'Azure role-based access control' permission model. | 14b46e9e-c2b7-41b4-b07b-48a6ebf60603 | | Key Vault Crypto Service Encryption User | Read metadata of keys and perform wrap/unwrap operations. Only works for key vaults that use the 'Azure role-based access control' permission model. 
| e147488a-f6f5-4113-8e2d-b22465e65bf6 | | Key Vault Crypto User | Perform cryptographic operations using keys. Only works for key vaults that use the 'Azure role-based access control' permission model. | 12338af0-0e69-4776-bea7-57ae8d297424 | |
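A role from this table can be granted at vault scope with the Azure CLI. A hedged sketch (the assignee and scope values below are placeholders; `--role` accepts either the role name or the role ID from the table):

```azurecli-interactive
az role assignment create \
  --role "Key Vault Crypto User" \
  --assignee "<object-id-of-user-or-service-principal>" \
  --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.KeyVault/vaults/<vault-name>"
```

Role assignments only take effect on key vaults that use the 'Azure role-based access control' permission model.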
key-vault | Quick Create Cli | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/managed-hsm/quick-create-cli.md | You need to provide the following inputs to create a Managed HSM resource: - Azure location. - A list of initial administrators. -The following example creates an HSM named **ContosoMHSM**, in the resource group **ContosoResourceGroup**, residing in the **East US 2** location, with **the current signed in user** as the only administrator, with **7 days retention period** for soft-delete. Read more about [Managed HSM soft-delete](soft-delete-overview.md) +The following example creates an HSM named **ContosoMHSM**, in the resource group **ContosoResourceGroup**, residing in the **East US 2** location, with **the current signed in user** as the only administrator, with **7 days retention period** for soft-delete. The Managed HSM will continue to be billed until it is purged in a **soft-delete period**. For more information, see [Managed HSM soft-delete and purge protection](recovery.md#what-are-soft-delete-and-purge-protection) and read more about [Managed HSM soft-delete](soft-delete-overview.md). ```azurecli-interactive oid=$(az ad signed-in-user show --query id -o tsv) |
key-vault | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/policy-reference.md | Title: Built-in policy definitions for Key Vault description: Lists Azure Policy built-in policy definitions for Key Vault. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 01/30/2024 Last updated : 02/06/2024 |
lab-services | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/policy-reference.md | Title: Built-in policy definitions for Lab Services description: Lists Azure Policy built-in policy definitions for Azure Lab Services. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 01/30/2024 Last updated : 02/06/2024 |
lighthouse | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lighthouse/samples/policy-reference.md | Title: Built-in policy definitions for Azure Lighthouse description: Lists Azure Policy built-in policy definitions for Azure Lighthouse. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 01/30/2024 Last updated : 02/06/2024 |
logic-apps | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/policy-reference.md | Title: Built-in policy definitions for Azure Logic Apps description: Lists Azure Policy built-in policy definitions for Azure Logic Apps. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 01/30/2024 Last updated : 02/06/2024 ms.suite: integration |
machine-learning | How To Auto Train Image Models | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-auto-train-image-models.md | |
machine-learning | How To Configure Environment | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-configure-environment.md | When running a local Jupyter Notebook server, it's recommended that you create a 1. Launch the Jupyter Notebook server - > [!TIP] - For example notebooks, see the [AzureML-Examples](https://github.com/Azure/azureml-examples) repository. SDK examples are located under [/sdk/python](https://github.com/Azure/azureml-examples/tree/main/sdk/python). For example, the [Configuration notebook](https://github.com/Azure/azureml-examples/blob/main/sdk/python/jobs/configuration.ipynb) example. +> [!TIP] +> For example notebooks, see the [AzureML-Examples](https://github.com/Azure/azureml-examples) repository. SDK examples are located under [/sdk/python](https://github.com/Azure/azureml-examples/tree/main/sdk/python). For example, the [Configuration notebook](https://github.com/Azure/azureml-examples/blob/main/sdk/python/jobs/configuration.ipynb) example. ### Visual Studio Code |
machine-learning | How To Manage Quotas | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-manage-quotas.md | To request an exception from the Azure Machine Learning product team, use the st | Endpoint name| Endpoint names must <li> Begin with a letter <li> Be 3-32 characters in length <li> Only consist of letters and numbers <sup>2</sup> | - | All types of endpoints <sup>3</sup> | | Deployment name| Deployment names must <li> Begin with a letter <li> Be 3-32 characters in length <li> Only consist of letters and numbers <sup>2</sup> | - | All types of endpoints <sup>3</sup> | | Number of endpoints per subscription | 100 | Yes | All types of endpoints <sup>3</sup> |+| Number of endpoints per cluster | 60 | - | Kubernetes online endpoint | | Number of deployments per subscription | 500 | Yes | All types of endpoints <sup>3</sup>| | Number of deployments per endpoint | 20 | Yes | All types of endpoints <sup>3</sup> |+| Number of deployments per cluster | 100 | - | Kubernetes online endpoint | | Number of instances per deployment | 50 <sup>4</sup> | Yes | Managed online endpoint | | Max request time-out at endpoint level | 180 seconds | - | Managed online endpoint | | Max request time-out at endpoint level | 300 seconds | - | Kubernetes online endpoint | |
machine-learning | How To Monitor Model Performance | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-monitor-model-performance.md | |
machine-learning | How To Prepare Datasets For Automl Images | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-prepare-datasets-for-automl-images.md | |
machine-learning | How To R Interactive Development | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-r-interactive-development.md | For data stored in a data asset [created in Azure Machine Learning](how-to-creat [!Notebook-r[](~/azureml-examples-mavaisma-r-azureml/tutorials/using-r-with-azureml/02-develop-in-interactive-r/work-with-data-assets.ipynb?name=configure-ml_client)] - 1. Use this code to retrieve the asset. Make sure to replace `<DATA_NAME>` and `<VERSION_NUMBER>` with the name and number of your data asset. + 1. Use this code to retrieve the asset. Make sure to replace `<MY_NAME>` and `<MY_VERSION>` with the name and number of your data asset. > [!TIP] > In studio, select **Data** in the left navigation to find the name and version number of your data asset. Beyond the issues described earlier, use R as you would in any other environment ## Next steps -* [Adapt your R script to run in production](how-to-r-modify-script-for-production.md) +* [Adapt your R script to run in production](how-to-r-modify-script-for-production.md) |
machine-learning | How To Train Pytorch | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-train-pytorch.md | |
machine-learning | How To Use Parallel Job In Pipeline | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-parallel-job-in-pipeline.md | |
machine-learning | How To Use Sweep In Pipeline | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-sweep-in-pipeline.md | |
machine-learning | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/policy-reference.md | Title: Built-in policy definitions for Azure Machine Learning description: Lists Azure Policy built-in policy definitions for Azure Machine Learning. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 01/30/2024 Last updated : 02/06/2024 |
machine-learning | How To Create Manage Runtime | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/prompt-flow/how-to-create-manage-runtime.md | Submit this run via CLI: pfazure run create --file run.yml ``` ++ # [Python SDK](#tab/python) ```python print(base_run) For a full end-to-end code-first example, see [Integrate prompt flow with LLM-based application DevOps](./how-to-integrate-with-llm-app-devops.md) ++ ### Reference files outside of the flow folder - automatic runtime only Sometimes, you might want to reference a `requirements.txt` file that is outside of the flow folder. For example, you might have a complex project that includes multiple flows, and they share the same `requirements.txt` file. To do this, you can add the `additional_includes` field to `flow.dag.yaml`. The value of this field is a list of file and folder paths relative to the flow folder. For example, if requirements.txt is in the parent folder of the flow folder, you can add `../requirements.txt` to the `additional_includes` field. |
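The `requirements.txt` example above can be sketched as a fragment of `flow.dag.yaml` (an illustrative snippet only; the rest of the DAG, such as inputs and nodes, is elided):

```yaml
# flow.dag.yaml (fragment)
additional_includes:
  - ../requirements.txt  # path is relative to the flow folder
```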
machine-learning | How To End To End Azure Devops With Prompt Flow | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/prompt-flow/how-to-end-to-end-azure-devops-with-prompt-flow.md | Title: LLMOps with prompt flow and Azure DevOps description: Learn how to set up a sample LLMOps environment and pipeline on Azure DevOps for prompt flow project -+ Previously updated : 10/24/2023 Last updated : 01/02/2024 - cli-v2 - sdk-v2 - ignite-2023 -# LLMOps with prompt flow and Azure DevOps (preview) +# LLMOps with prompt flow and Azure DevOps Large Language Operations, or **LLMOps**, has become the cornerstone of efficient prompt engineering and LLM-infused application development and deployment. As the demand for LLM-infused applications continues to soar, organizations find themselves in need of a cohesive and streamlined process to manage their end-to-end lifecycle. LLMOps with prompt flow is a "LLMOps template and guidance" to help you build LL - **Variant and Hyperparameter Experimentation**: Experiment with multiple variants and hyperparameters, evaluating flow variants with ease. Variants and hyperparameters are like ingredients in a recipe. This platform allows you to experiment with different combinations of variants across multiple nodes in a flow. -- **Multiple Deployment Targets**: The repo supports deployment of flows to Kubernetes, Azure Managed computes driven through configuration ensuring that your flows can scale as needed.+- **Multiple Deployment Targets**: The repo supports deployment of flows to **Azure App Services, Kubernetes, Azure Managed computes** driven through configuration ensuring that your flows can scale as needed. It also generates **Docker images** infused with Flow runtime and your flows for deployment to **any target platform and Operating system** supporting Docker. :::image type="content" source="./media/how-to-end-to-end-azure-devops-with-prompt-flow/endpoints.png" alt-text="Screenshot of endpoints." 
lightbox = "./media/how-to-end-to-end-azure-devops-with-prompt-flow/endpoints.png"::: - **A/B Deployment**: Seamlessly implement A/B deployments, enabling you to compare different flow versions effortlessly. Just as in traditional A/B testing for websites, this platform facilitates A/B deployment for prompt flow. This means you can effortlessly compare different versions of a flow in a real-world setting to determine which performs best. LLMOps with prompt flow is a "LLMOps template and guidance" to help you build LL - **Many-to-many dataset/flow relationships**: Accommodate multiple datasets for each standard and evaluation flow, ensuring versatility in flow test and evaluation. The platform is designed to accommodate multiple datasets for each flow. +- **Conditional Data and Model registration**: The platform creates a new version for dataset in Azure Machine Learning Data Asset and flows in model registry only when there is a change in them, not otherwise. + - **Comprehensive Reporting**: Generate detailed reports for each variant configuration, allowing you to make informed decisions. Provides detailed Metric collection, experiment and variant bulk runs for all runs and experiments, enabling data-driven decisions in csv as well as HTML files. :::image type="content" source="./media/how-to-end-to-end-azure-devops-with-prompt-flow/variants.png" alt-text="Screenshot of flow variants report." lightbox = "./media/how-to-end-to-end-azure-devops-with-prompt-flow/variants.png"::: :::image type="content" source="./media/how-to-end-to-end-azure-devops-with-prompt-flow/metrics.png" alt-text="Screenshot of metrics report." lightbox = "./media/how-to-end-to-end-azure-devops-with-prompt-flow/metrics.png"::: The repository for this article is available at [LLMOps with Prompt flow templat From here on, you can learn **LLMOps with prompt flow** by following the end-to-end samples we provided, which help you build LLM-infused applications using prompt flow and Azure DevOps. 
Its primary objective is to provide assistance in the development of such applications, leveraging the capabilities of prompt flow and LLMOps. > [!TIP]-> We recommend you understand how we integrate [LLMOps with prompt flow](how-to-integrate-with-llm-app-devops.md). +> We recommend you understand how to integrate [LLMOps with prompt flow](how-to-integrate-with-llm-app-devops.md). > [!IMPORTANT] > Prompt flow is currently in public preview. This preview is provided without a service-level agreement, and is not recommended for production workloads. Certain features might not be supported or might have constrained capabilities. |
machine-learning | How To End To End Llmops With Prompt Flow | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/prompt-flow/how-to-end-to-end-llmops-with-prompt-flow.md | -# LLMOps with prompt flow and GitHub (preview) +# LLMOps with prompt flow and GitHub Large Language Operations, or **LLMOps**, has become the cornerstone of efficient prompt engineering and LLM-infused application development and deployment. As the demand for LLM-infused applications continues to soar, organizations find themselves in need of a cohesive and streamlined process to manage their end-to-end lifecycle. LLMOps with prompt flow is a "LLMOps template and guidance" to help you build LL - **Variant and Hyperparameter Experimentation**: Experiment with multiple variants and hyperparameters, evaluating flow variants with ease. Variants and hyperparameters are like ingredients in a recipe. This platform allows you to experiment with different combinations of variants across multiple nodes in a flow. -- **Multiple Deployment Targets**: The repo supports deployment of flows to Kubernetes, Azure Managed computes driven through configuration ensuring that your flows can scale as needed.+- **Multiple Deployment Targets**: The repo supports deployment of flows to **Azure App Services, Kubernetes, Azure Managed computes** driven through configuration ensuring that your flows can scale as needed. It also generates **Docker images** infused with Flow runtime and your flows for deployment to **any target platform and Operating system** supporting Docker. :::image type="content" source="./media/how-to-end-to-end-azure-devops-with-prompt-flow/endpoints.png" alt-text="Screenshot of endpoints." lightbox = "./media/how-to-end-to-end-azure-devops-with-prompt-flow/endpoints.png"::: - **A/B Deployment**: Seamlessly implement A/B deployments, enabling you to compare different flow versions effortlessly. 
Just as in traditional A/B testing for websites, this platform facilitates A/B deployment for prompt flow. This means you can effortlessly compare different versions of a flow in a real-world setting to determine which performs best. LLMOps with prompt flow is a "LLMOps template and guidance" to help you build LL - **Many-to-many dataset/flow relationships**: Accommodate multiple datasets for each standard and evaluation flow, ensuring versatility in flow test and evaluation. The platform is designed to accommodate multiple datasets for each flow. -- **Comprehensive Reporting**: Generate detailed reports for each variant configuration, allowing you to make informed decisions. Provides detailed Metric collection, experiment and variant bulk runs for all runs and experiments, enabling data-driven decisions in csv as well as HTML files.+- **Conditional Data and Model registration**: The platform creates a new version for dataset in Azure Machine Learning Data Asset and flows in model registry only when there is a change in them, not otherwise. ++- **Comprehensive Reporting**: Generate detailed reports for each **variant configuration**, allowing you to make informed decisions. Provides detailed Metric collection, experiment and variant bulk runs for all runs and experiments, enabling data-driven decisions in csv as well as HTML files. :::image type="content" source="./media/how-to-end-to-end-azure-devops-with-prompt-flow/variants.png" alt-text="Screenshot of flow variants report." lightbox = "./media/how-to-end-to-end-azure-devops-with-prompt-flow/variants.png"::: :::image type="content" source="./media/how-to-end-to-end-azure-devops-with-prompt-flow/metrics.png" alt-text="Screenshot of metrics report." 
lightbox = "./media/how-to-end-to-end-azure-devops-with-prompt-flow/metrics.png"::: The repository for this article is available at [LLMOps with Prompt flow templat From here on, you can learn **LLMOps with prompt flow** by following the end-to-end samples we provided, which help you build LLM-infused applications using prompt flow and GitHub. Its primary objective is to provide assistance in the development of such applications, leveraging the capabilities of prompt flow and LLMOps. > [!TIP]-> We recommend you understand how we integrate [LLMOps with prompt flow](how-to-integrate-with-llm-app-devops.md). +> We recommend you understand how to integrate [LLMOps with prompt flow](how-to-integrate-with-llm-app-devops.md). > [!IMPORTANT] > Prompt flow is currently in public preview. This preview is provided without a service-level agreement, and isn't recommended for production workloads. Certain features might not be supported or might have constrained capabilities. |
machine-learning | Reference Automl Images Schema | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-automl-images-schema.md | description: Learn how to format your JSONL files for data consumption in automa -+ |
machine-learning | Reference Yaml Job Pipeline | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-yaml-job-pipeline.md | |
machine-learning | Samples Notebooks | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/samples-notebooks.md | Title: Example Jupyter Notebooks (v2) -description: Learn how to find and use the Juypter Notebooks designed to help you explore the SDK (v2) and serve as models for your own machine learning projects. +description: Learn how to find and use the Jupyter Notebooks designed to help you explore the SDK (v2) and serve as models for your own machine learning projects. - Previously updated : 08/30/2022 Last updated : 02/05/2024 #Customer intent: As a professional data scientist, I find and run example Jupyter Notebooks for Azure Machine Learning. This article shows you how to access the repository from the following environme ## Option 1: Access on Azure Machine Learning compute instance (recommended) -The easiest way to get started with the samples is to complete the [Create resources to get started](quickstart-create-resources.md). Once completed, you'll have a dedicated notebook server pre-loaded with the SDK and the Azure Machine Learning Notebooks repository. No downloads or installation necessary. +The easiest way to get started with the samples is to complete the [Create resources to get started](quickstart-create-resources.md). Once completed, you'll have a dedicated notebook server preloaded with the SDK and the Azure Machine Learning Notebooks repository. No downloads or installation necessary. To view example notebooks: 1. Sign in to [studio](https://ml.azure.com) and select your workspace if necessary. 1. Select **Notebooks**. 1. Select the **Samples** tab. Use the **SDK v2** folder for examples using Python SDK v2.-+1. Open the notebook you want to run. Select **Clone this notebook** to create a copy in your workspace file share. This action will copy the notebook along with any dependent resources. 
## Option 2: Access on your own notebook server If you'd like to bring your own notebook server for local development, follow th [!INCLUDE [aml-your-server](includes/aml-your-server-v2.md)] -These instructions install the base SDK packages necessary for the quickstart and tutorial notebooks. Other sample notebooks may require you to install extra components. For more information, see [Install the Azure Machine Learning SDK for Python](https://aka.ms/sdk-v2-install). -+These instructions install the base SDK packages necessary for the quickstart and tutorial notebooks. Other sample notebooks might require you to install extra components. For more information, see [Install the Azure Machine Learning SDK for Python](https://aka.ms/sdk-v2-install). ## Option 3: Access on a DSVM The Data Science Virtual Machine (DSVM) is a customized VM image built specifica 1. [Create an Azure Machine Learning workspace](how-to-manage-workspace.md). -1. Download a workspace configuration file: -- * Sign in to [Azure Machine Learning studio](https://ml.azure.com) - * Select your workspace settings in the upper right - * Select **Download config file** -- ![Screenshot of download config.json.](./media/aml-dsvm-server/download-config.png) --1. From the directory where you added the configuration file, clone the [the AzureML-Examples repository](https://aka.ms/aml-notebooks). +1. Clone the [AzureML-Examples repository](https://aka.ms/aml-notebooks). ```bash git clone https://github.com/Azure/azureml-examples.git --depth 1 ``` -1. Start the notebook server from the directory, which now contains the clone and the config file. +1. Start the notebook server from the directory that contains the clone. ```bash jupyter notebook ``` +## Connect to a workspace ++Some of the samples use `MLClient.from_config()` to connect to a workspace. For these samples to work, you need a configuration file in a directory on the path to the sample.
++The configuration file is created for you on the Azure Machine Learning compute instance. To use the code on your own notebook server or DSVM, create the configuration file manually. Use either of the following methods: ++* Write a [configuration file](how-to-configure-environment.md) (**aml_config/config.json**) in the root of your cloned repository. ++* Download the workspace configuration file: ++ * Sign in to [Azure Machine Learning studio](https://ml.azure.com) + * Select your workspace settings in the upper right + * Select **Download config file** + * Place the file in the root of your cloned repository. ++ ![Screenshot of download config.json.](./media/aml-dsvm-server/download-config.png) + ## Next steps Explore the [AzureML-Examples](https://github.com/Azure/azureml-examples) repository to discover what Azure Machine Learning can do. For more examples of MLOps, see [https://github.com/Azure/mlops-v2](https://gith Try these tutorials: -- [Train and deploy an image classification model with MNIST](tutorial-train-deploy-notebook.md)+- [Quickstart: Get started with Azure Machine Learning](tutorial-azure-ml-in-a-day.md) - [Tutorial: Train an object detection model with AutoML and Python](tutorial-auto-train-image-models.md) |
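The section above notes that `MLClient.from_config()` needs a configuration file (**aml_config/config.json**) on the path to the sample. As a minimal sketch of the manual method, the file can be written from a shell; the subscription ID, resource group, and workspace name below are placeholders you must replace with your own values:

```shell
# Sketch of a manually created workspace config file (placeholder values --
# substitute your own subscription ID, resource group, and workspace name).
mkdir -p aml_config
cat > aml_config/config.json <<'EOF'
{
    "subscription_id": "aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee",
    "resource_group": "my-resource-group",
    "workspace_name": "my-workspace"
}
EOF
```

`MLClient.from_config()` searches the working directory and its parent directories for this file, so starting Jupyter from the repository root keeps it discoverable for all samples in the clone.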
machine-learning | Tutorial Auto Train Image Models | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/tutorial-auto-train-image-models.md | |
machine-learning | Tutorial Experiment Train Models Using Features | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/tutorial-experiment-train-models-using-features.md | The [fifth tutorial in the series](./tutorial-develop-feature-set-with-custom-so * Learn about [feature store concepts](./concept-what-is-managed-feature-store.md) and [top-level entities in managed feature store](./concept-top-level-entities-in-managed-feature-store.md). * Learn about [identity and access control for managed feature store](./how-to-setup-access-control-feature-store.md). * View the [troubleshooting guide for managed feature store](./troubleshooting-managed-feature-store.md).-* View the [YAML reference](./reference-yaml-overview.md). +* View the [YAML reference](./reference-yaml-overview.md). |
managed-grafana | How To Connect To Data Source Privately | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-grafana/how-to-connect-to-data-source-privately.md | -# Connect to a data source privately (preview) +# Connect to a data source privately -In this guide, you learn how to connect your Azure Managed Grafana instance to a data source using Managed Private Endpoint. Azure Managed Grafana's managed private endpoints are endpoints created in a Managed Virtual Network that the Managed Grafana service uses. They establish private links from that network to your Azure data sources. Azure Managed Grafana sets up and manages these private endpoints on your behalf. You can create managed private endpoints from your Azure Managed Grafana to access other Azure managed services (for example, Azure Monitor private link scope or Azure Monitor workspace). +In this guide, you learn how to connect your Azure Managed Grafana instance to a data source using Managed Private Endpoint. Azure Managed Grafana's managed private endpoints are endpoints created in a Managed Virtual Network that the Azure Managed Grafana service uses. They establish private links from that network to your Azure data sources. Azure Managed Grafana sets up and manages these private endpoints on your behalf. You can create managed private endpoints from your Azure Managed Grafana to access other Azure managed services (for example, Azure Monitor private link scope or Azure Monitor workspace) and your own self-hosted data sources (for example, connecting to your self-hosted Prometheus behind a private link service). When you use managed private endpoints, traffic between your Azure Managed Grafana and its data sources traverses exclusively over the Microsoft backbone network without going through the internet. Managed private endpoints protect against data exfiltration.
A managed private endpoint uses a private IP address from your Managed Virtual Network to effectively bring your Azure Managed Grafana workspace into that network. Each managed private endpoint is mapped to a specific resource in Azure and not the entire service. Customers can limit connectivity to only resources approved by their organizations. -A private endpoint connection is created in a "Pending" state when you create a managed private endpoint in your Managed Grafana workspace. An approval workflow is started. The private link resource owner is responsible for approving or rejecting the new connection. If the owner approves the connection, the private link is established. Otherwise, the private link won't be set up. Managed Grafana shows the current connection status. Only a managed private endpoint in an approved state can be used to send traffic to the private link resource that is connected to the managed private endpoint. +A private endpoint connection is created in a "Pending" state when you create a managed private endpoint in your Azure Managed Grafana workspace. An approval workflow is started. The private link resource owner is responsible for approving or rejecting the new connection. If the owner approves the connection, the private link is established. Otherwise, the private link isn't set up. Azure Managed Grafana shows the current connection status. Only a managed private endpoint in an approved state can be used to send traffic to the private link resource that is connected to the managed private endpoint. While managed private endpoints are free, there may be charges associated with private link usage on a data source. For more information, see your data source's pricing details. -> [!IMPORTANT] -> Managed Private Endpoint is currently in preview.
See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability. +> [!NOTE] +> Managed private endpoints are currently only available in Azure Global. -## Supported Azure data sources +## Supported data sources -Managed private endpoints work with Azure services that support private link. Using them, you can connect your Managed Grafana workspace to the following Azure data stores over private connectivity: +Managed private endpoints work with Azure services that support private link. Using them, you can connect your Azure Managed Grafana workspace to the following Azure data stores over private connectivity: -1. Azure Monitor private link scope (for example, Log Analytics workspace) -1. Azure Monitor workspace, for Managed Service for Prometheus -1. Azure Data Explorer -1. Azure Cosmos DB for Mongo DB -1. Azure SQL server +- Azure Cosmos DB for Mongo DB +- Azure Cosmos DB for PostgreSQL +- Azure Data Explorer +- Azure Monitor private link scope (for example, Log Analytics workspace) +- Azure Monitor workspace, for Managed Service for Prometheus +- Azure SQL managed instance +- Azure SQL server +- Private link services ## Prerequisites To follow the steps in this guide, you must have: ## Create a managed private endpoint for Azure Monitor workspace -You can create a managed private endpoint in your Managed Grafana workspace to connect to a [supported Azure data source](#supported-azure-data-sources) using a private link. +You can create a managed private endpoint in your Azure Managed Grafana workspace to connect to a [supported data source](#supported-data-sources) using a private link. 1. In the Azure portal, navigate to your Grafana workspace and then select **Networking (Preview)**.-1. Select **Managed private endpoint**, and then select **Create**. +1. 
Select **Managed Private Endpoint**, and then select **Create**. - :::image type="content" source="media/managed-private-endpoint/create-mpe.png" alt-text="Screenshot of the Azure portal create managed private endpoint." lightbox="media/managed-private-endpoint/create-mpe.png"::: + :::image type="content" source="media/managed-private-endpoint/create.png" alt-text="Screenshot of the Azure portal create managed private endpoint." lightbox="media/managed-private-endpoint/create.png"::: 1. In the *New managed private endpoint* pane, fill out the required information for the resource to connect to. - :::image type="content" source="media/managed-private-endpoint/new-mpe-details.png" alt-text="Screenshot of the Azure portal new managed private endpoint details." lightbox="media/managed-private-endpoint/new-mpe-details.png"::: + :::image type="content" source="media/managed-private-endpoint/new-details-azure-monitor.png" alt-text="Screenshot of the Azure portal new managed private endpoint details for Azure Monitor workspace."::: 1. Select an Azure *Resource type* (for example, **Microsoft.Monitor/accounts** for Azure Monitor Managed Service for Prometheus).-1. Click **Create** to add the managed private endpoint resource. +1. Select **Create** to add the managed private endpoint resource. 1. Contact the owner of the target Azure Monitor workspace to approve the connection request. > [!NOTE]-> After the new private endpoint connection is approved, all network traffic between your Managed Grafana workspace and the selected data source will flow only through the Azure backbone network. +> After the new private endpoint connection is approved, all network traffic between your Azure Managed Grafana workspace and the selected data source will flow only through the Azure backbone network.
## Create a managed private endpoint to Azure Private Link service -If you have a data source internal to your virtual network, such as an InfluxDB server hosted on an Azure virtual machine, you can connect your Managed Grafana workspace to it. You first need to add a private link access to that resource using the Azure Private Link service. The exact steps required to set up a private link is dependent on the type of Azure resource. Refer to the documentation of the hosting service you have. For example, [this article](../aks/private-clusters.md#use-a-private-endpoint-connection) describes to configure a private link to an Azure Kubernetes Service cluster. +If you have a data source internal to your virtual network, such as an InfluxDB server hosted on an Azure virtual machine, or a Loki server hosted inside your AKS cluster, you can connect your Azure Managed Grafana to it. You first need to add private link access to that resource using the Azure Private Link service. The exact steps required to set up a private link depend on the type of Azure resource. Refer to the documentation of the hosting service you have. For example, [this article](https://cloud-provider-azure.sigs.k8s.io/topics/pls-integration/) describes how to create a private link service in Azure Kubernetes Service by specifying a Kubernetes service object. Once you've set up the private link service, you can create a managed private endpoint in your Grafana workspace that connects to the new private link. 1. In the Azure portal, navigate to your Grafana resource and then select **Networking (Preview)**.-1. Select **Managed private endpoint**, and then select **Create**. +1. Select **Managed Private Endpoint**, and then select **Create**. - :::image type="content" source="media/managed-private-endpoint/create-mpe.png" alt-text="Screenshot of the Azure portal create managed private endpoint."
lightbox="media/managed-private-endpoint/create-mpe.png"::: + :::image type="content" source="media/managed-private-endpoint/create.png" alt-text="Screenshot of the Azure portal create managed private endpoint." lightbox="media/managed-private-endpoint/create.png"::: 1. In the *New managed private endpoint* pane, fill out the required information for the resource to connect to. + :::image type="content" source="media/managed-private-endpoint/new-details-private-link.png" alt-text="Screenshot of the Azure portal new managed private endpoint details for Private link services."::: + > [!TIP]- > The *Private link service url* field is optional unless you need TLS. If you specify a URL, Managed Grafana will ensure that the host IP address for that URL matches the private endpoint's IP address. Due to security reasons, AMG have an allowed list of the URL. + > The *Domain name* field is optional. If you specify a domain name, Azure Managed Grafana ensures that this domain name resolves to the managed private endpoint's private IP inside this Grafana's service-managed network. You can use this domain name in your Grafana data source's URL configuration instead of the private IP address. You're required to use the domain name if you've enabled TLS or Server Name Indication (SNI) for your self-hosted data store. -1. Click **Create** to add the managed private endpoint resource. +1. Select **Create** to add the managed private endpoint resource. 1. Contact the owner of the target private link service to approve the connection request.-1. After the connection request is approved, click **Refresh** to see the connection status and private IP address. +1. After the connection request is approved, select **Refresh** to ensure the connection status is **Approved** and the private IP address is shown.
> [!NOTE]-> After the new private endpoint connection is approved, all network traffic between your Managed Grafana workspace and the selected data source will flow only through the Azure backbone network. +> The **Refresh** step cannot be skipped, since refreshing triggers a network sync operation by Azure Managed Grafana. Once the new managed private endpoint connection is shown approved, all network traffic between your Azure Managed Grafana workspace and the selected data source will only flow through the Azure backbone network. ## Next steps -In this how-to guide, you learned how to configure private access between a Managed Grafana workspace and a data source. To learn how to set up private access from your users to a Managed Grafana workspace, see [Set up private access](how-to-set-up-private-access.md). +In this how-to guide, you learned how to configure private access between an Azure Managed Grafana workspace and a data source. To learn how to set up private access from your users to an Azure Managed Grafana workspace, see [Set up private access](how-to-set-up-private-access.md). |
mariadb | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/policy-reference.md | |
migrate | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/policy-reference.md | Title: Built-in policy definitions for Azure Migrate description: Lists Azure Policy built-in policy definitions for Azure Migrate. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 01/30/2024 Last updated : 02/06/2024 |
mysql | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/policy-reference.md | |
networking | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/networking/policy-reference.md | Title: Built-in policy definitions for Azure networking services description: Lists Azure Policy built-in policy definitions for Azure networking services. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 01/30/2024 Last updated : 02/06/2024 |
notification-hubs | Push Notifications Android Specific Devices Firebase Cloud Messaging | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/notification-hubs/push-notifications-android-specific-devices-firebase-cloud-messaging.md | description: Learn how to use Notification Hubs to push notifications to specifi - mobile-android ms.devlang: java Previously updated : 06/30/2023 Last updated : 02/06/2024 --ms.lastreviewed: 04/30/2019 ++ms.lastreviewed: 02/06/2024 # Tutorial: Send notifications to specific devices using Notification Hubs and Google Firebase Cloud Messaging ms.lastreviewed: 04/30/2019 ## Overview -This tutorial shows you how to use Azure Notification Hubs to broadcast breaking news notifications to an Android app. When complete, you will be able to register for breaking news categories you are interested in, and receive only push notifications for those categories. This scenario is a common pattern for many apps where notifications have to be sent to groups of users that have previously declared interest in them, for example, RSS reader, apps for music fans, etc. +> [!NOTE] +> For information about Firebase Cloud Messaging deprecation and migration steps, see [Google Firebase Cloud Messaging migration](notification-hubs-gcm-to-fcm.md). +This tutorial shows you how to use Azure Notification Hubs to broadcast breaking news notifications to an Android app. When complete, you will be able to register for breaking news categories you are interested in, and receive only push notifications for those categories. This scenario is a common pattern for many apps where notifications have to be sent to groups of users that have previously declared interest in them, for example, RSS reader, apps for music fans, etc. Broadcast scenarios are enabled by including one or more *tags* when creating a registration in the notification hub. When notifications are sent to a tag, all devices that have registered for the tag will receive the notification. 
Because tags are simply strings, they do not have to be provisioned in advance. For more information about tags, see [Notification Hubs Routing and Tag Expressions](notification-hubs-tags-segment-push-message.md). |
notification-hubs | Push Notifications Android Specific Users Firebase Cloud Messaging | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/notification-hubs/push-notifications-android-specific-users-firebase-cloud-messaging.md | description: Learn how to send push notifications to specific Android apps by us - mobile-android ms.devlang: java Previously updated : 09/11/2019 Last updated : 02/06/2024 --ms.lastreviewed: 09/11/2019 ++ms.lastreviewed: 02/06/2024 # Tutorial: Send push notifications to specific Android apps using Azure Notification Hubs [!INCLUDE [notification-hubs-selector-aspnet-backend-notify-users](../../includes/notification-hubs-selector-aspnet-backend-notify-users.md)] +> [!NOTE] +> For information about Firebase Cloud Messaging deprecation and migration steps, see [Google Firebase Cloud Messaging migration](notification-hubs-gcm-to-fcm.md). + This tutorial shows you how to use Azure Notification Hubs to send push notifications to a specific app user on a specific device. An ASP.NET WebAPI backend is used to authenticate clients and to generate notifications, as shown in the guidance article [Registering from your app backend](notification-hubs-push-notification-registration-management.md#registration-management-from-a-backend). This tutorial builds on the notification hub that you created in the [Tutorial: Push notifications to Android devices by using Azure Notification Hubs and Firebase Cloud Messaging](notification-hubs-android-push-notification-google-fcm-get-started.md). In this tutorial, you take the following steps: |
notification-hubs | Xamarin Notification Hubs Push Notifications Android Gcm | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/notification-hubs/xamarin-notification-hubs-push-notifications-android-gcm.md | Title: Send push notifications to Xamarin.Android apps using Azure Notification description: In this tutorial, you learn how to use Azure Notification Hubs to send push notifications to a Xamarin Android application. - mobile-xamarin-android ms.devlang: csharp Previously updated : 08/27/2021---ms.lastreviewed: 08/01/2019 Last updated : 02/06/2024+++ms.lastreviewed: 02/06/2024 + # Tutorial: Send push notifications to Xamarin.Android apps using Notification Hubs ms.lastreviewed: 08/01/2019 ## Overview +> [!NOTE] +> For information about Firebase Cloud Messaging deprecation and migration steps, see [Google Firebase Cloud Messaging migration](notification-hubs-gcm-to-fcm.md). + This tutorial shows you how to use Azure Notification Hubs to send push notifications to a Xamarin.Android application. You create a blank Xamarin.Android app that receives push notifications by using Firebase Cloud Messaging (FCM). You use your notification hub to broadcast push notifications to all the devices running your app. The finished code is available in the [NotificationHubs app](https://github.com/Azure/azure-notificationhubs-dotnet/tree/master/Samples/Xamarin/GetStartedXamarinAndroid) sample. In this tutorial, you take the following steps: |
operator-insights | How To Install Mcc Edr Agent | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-insights/how-to-install-mcc-edr-agent.md | You must have a service principal with a certificate credential that can access > [!IMPORTANT] > You may need a Microsoft Entra tenant administrator in your organization to perform this setup for you. -1. Create or obtain a Microsoft Entra ID service principal. Follow the instructions detailed in [Create a Microsoft Entra app and service principal in the portal](/entra/identity-platform/howto-create-service-principal-portal). +1. Create or obtain a Microsoft Entra ID service principal. Follow the instructions detailed in [Create a Microsoft Entra app and service principal in the portal](/entra/identity-platform/howto-create-service-principal-portal). Leave the **Redirect URI** field empty. 1. Note the Application (client) ID, and your Microsoft Entra Directory (tenant) ID (these IDs are UUIDs of the form xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx, where each character is a hexadecimal digit). ### Prepare certificates -It's up to you whether you use the same certificate and key for each VM, or use a unique certificate and key for each. Using a certificate per VM provides better security and has a smaller impact if a key is leaked or the certificate expires. However, this method adds a higher maintainability and operational complexity. +The ingestion agent only supports certificate-based authentication for service principals. It's up to you whether you use the same certificate and key for each VM, or use a unique certificate and key for each. Using a certificate per VM provides better security and has a smaller impact if a key is leaked or the certificate expires. However, this method adds higher maintenance and operational complexity. 1. Obtain a certificate. We strongly recommend using trusted certificate(s) from a certificate authority.-1.
Add the certificate(s) as credential(s) to your service principal, following [Create a Microsoft Entra app and service principal in the portal](/entra/identity-platform/howto-create-service-principal-portal). -1. We **strongly recommend** additionally storing the certificates in a secure location such as Azure Key Vault. Doing so allows you to configure expiry alerting and gives you time to regenerate new certificates and apply them to your ingestion agents before they expire. Once a certificate expires, the agent is unable to authenticate to Azure and no longer uploads data. For details of this approach see [Renew your Azure Key Vault certificates](../key-vault/certificates/overview-renew-certificate.md). +2. Add the certificate(s) as credential(s) to your service principal, following [Create a Microsoft Entra app and service principal in the portal](/entra/identity-platform/howto-create-service-principal-portal). +3. We **strongly recommend** additionally storing the certificates in a secure location such as Azure Key Vault. Doing so allows you to configure expiry alerting and gives you time to regenerate new certificates and apply them to your ingestion agents before they expire. Once a certificate expires, the agent is unable to authenticate to Azure and no longer uploads data. For details of this approach see [Renew your Azure Key Vault certificates](../key-vault/certificates/overview-renew-certificate.md). - You need the 'Key Vault Certificates Officer' role on the Azure Key Vault in order to add the certificate to the Key Vault. See [Assign Azure roles using the Azure portal](../role-based-access-control/role-assignments-portal.md) for details of how to assign roles in Azure. -1. Ensure the certificate(s) are available in pkcs12 format, with no passphrase protecting them. On Linux, you can convert a certificate and key from PEM format using openssl: +4. Ensure the certificate(s) are available in pkcs12 format, with no passphrase protecting them.
On Linux, you can convert a certificate and key from PEM format using openssl: `openssl pkcs12 -nodes -export -in <pem-certificate-filename> -inkey <pem-key-filename> -out <pkcs12-certificate-filename>` -5. Ensure the certificate(s) are base64 encoded. On Linux, you can based64 encode a pkcs12-formatted certificate by using the command: +> [!IMPORTANT] +> The pkcs12 file must not be protected with a passphrase. When OpenSSL prompts you for an export password, press <kbd>Enter</kbd> to supply an empty passphrase. ++5. Validate your pkcs12 file. This displays information about the pkcs12 file including the certificate and private key: ++ `openssl pkcs12 -nodes -in <pkcs12-certificate-filename> -info` ++6. Ensure the pkcs12 file is base64 encoded. On Linux, you can base64 encode a pkcs12-formatted certificate by using the command: `base64 -w 0 <pkcs12-certificate-filename> > <base64-encoded-pkcs12-certificate-filename>` |
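The conversion, validation, and encoding steps above can be rehearsed end to end with a throwaway self-signed certificate before touching the real service principal credential. This is a minimal sketch assuming `openssl` and GNU `base64` are available on the Linux VM; all filenames and the subject name are illustrative.

```shell
# Generate a throwaway key and self-signed certificate (illustrative names).
openssl req -x509 -newkey rsa:2048 -nodes -keyout test.key -out test.pem \
  -subj "/CN=aoi-agent-test" -days 1

# Convert to pkcs12 with no passphrase (-passout pass: supplies an empty export password).
openssl pkcs12 -nodes -export -in test.pem -inkey test.key \
  -out test.p12 -passout pass:

# Validate the pkcs12 file: confirm it parses and contains the certificate and key.
openssl pkcs12 -nodes -in test.p12 -info -passin pass: > /dev/null && echo "pkcs12 ok"

# Base64 encode, then decode and compare to prove the encoding is lossless.
base64 -w 0 test.p12 > test.p12.b64
base64 -d test.p12.b64 | cmp -s - test.p12 && echo "base64 round-trip ok"
```

Once this dry run works, repeat the `pkcs12 -export` and `base64` steps against your real PEM certificate and key.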
operator-insights | How To Install Sftp Agent | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-insights/how-to-install-sftp-agent.md | You must have a service principal with a certificate credential that can access > [!IMPORTANT] > You might need a Microsoft Entra tenant administrator in your organization to perform this setup for you. -1. Create or obtain a Microsoft Entra ID service principal. Follow the instructions detailed in [Create a Microsoft Entra app and service principal in the portal](/entra/identity-platform/howto-create-service-principal-portal). +1. Create or obtain a Microsoft Entra ID service principal. Follow the instructions detailed in [Create a Microsoft Entra app and service principal in the portal](/entra/identity-platform/howto-create-service-principal-portal). Leave the **Redirect URI** field empty. 1. Note the Application (client) ID, and your Microsoft Entra Directory (tenant) ID (these IDs are UUIDs of the form xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx, where each character is a hexadecimal digit). ### Prepare certificates -It's up to you whether you use the same certificate and key for each VM, or use a unique certificate and key for each.  Using a certificate per VM provides better security and has a smaller impact if a key is leaked or the certificate expires. However, this method adds a higher maintainability and operational complexity. +The ingestion agent only supports certificate-based authentication for service principals. It's up to you whether you use the same certificate and key for each VM, or use a unique certificate and key for each.  Using a certificate per VM provides better security and has a smaller impact if a key is leaked or the certificate expires. However, this method adds higher maintenance and operational complexity. 1. Obtain a certificate. We strongly recommend using trusted certificate(s) from a certificate authority.-1. 
Add the certificate(s) as credential(s) to your service principal, following [Create a Microsoft Entra app and service principal in the portal](/entra/identity-platform/howto-create-service-principal-portal). -1. We **strongly recommend** additionally storing the certificates in a secure location such as Azure Key vault.  Doing so allows you to configure expiry alerting and gives you time to regenerate new certificates and apply them to your ingestion agents before they expire.  Once a certificate expires, the agent is unable to authenticate to Azure and no longer uploads data.  For details of this approach see [Renew your Azure Key Vault certificates Azure portal](../key-vault/certificates/overview-renew-certificate.md). +2. Add the certificate(s) as credential(s) to your service principal, following [Create a Microsoft Entra app and service principal in the portal](/entra/identity-platform/howto-create-service-principal-portal). +3. We **strongly recommend** additionally storing the certificates in a secure location such as Azure Key Vault.  Doing so allows you to configure expiry alerting and gives you time to regenerate new certificates and apply them to your ingestion agents before they expire.  Once a certificate expires, the agent is unable to authenticate to Azure and no longer uploads data.  For details of this approach, see [Renew your Azure Key Vault certificates](../key-vault/certificates/overview-renew-certificate.md). - You need the 'Key Vault Certificates Officer' role on the Azure Key Vault in order to add the certificate to the Key Vault. See [Assign Azure roles using the Azure portal](../role-based-access-control/role-assignments-portal.md) for details of how to assign roles in Azure. -1. Ensure the certificate(s) are available in pkcs12 format, with no passphrase protecting them. On Linux, you can convert a certificate and key from PEM format using openssl: +4. 
Ensure the certificate(s) are available in pkcs12 format, with no passphrase protecting them. On Linux, you can convert a certificate and key from PEM format using openssl: `openssl pkcs12 -nodes -export -in <pem-certificate-filename> -inkey <pem-key-filename> -out <pkcs12-certificate-filename>` -5. Ensure the certificate(s) are base64 encoded. On Linux, you can based64 encode a pkcs12-formatted certificate by using the command: + > [!IMPORTANT] + > The pkcs12 file must not be protected with a passphrase. When OpenSSL prompts you for an export password, press <kbd>Enter</kbd> to supply an empty passphrase. ++5. Validate your pkcs12 file. This displays information about the pkcs12 file including the certificate and private key: ++ `openssl pkcs12 -nodes -in <pkcs12-certificate-filename> -info` ++6. Ensure the pkcs12 file is base64 encoded. On Linux, you can base64 encode a pkcs12-formatted certificate by using the command: `base64 -w 0 <pkcs12-certificate-filename> > <base64-encoded-pkcs12-certificate-filename>` Repeat these steps for each VM onto which you want to install the agent: 7. Ensure the SFTP server's public SSH key is listed on the VM's global known_hosts file located at `/etc/ssh/ssh_known_hosts`. > [!TIP]-> Use the Linux command `ssh-keyscan` to add a server's SSH key to a VM's `known_hosts` file manually. For example, `sudo sh -c 'ssh-keyscan -H 10.213.0.6 >> /etc/ssh/ssh_known_hosts'`. +> Use the Linux command `ssh-keyscan` to add a server's SSH public key to a VM's `known_hosts` file manually. For example, `ssh-keyscan -H <server-ip> | sudo tee -a /etc/ssh/ssh_known_hosts`. ## Configure the connection between the SFTP server and VM If your agent VMs don't have access to public DNS, then you need to add entries This process assumes that you're connecting to Azure over ExpressRoute and are using Private Links and/or Service Endpoints. If you're connecting over public IP addressing, you **cannot** use this workaround and must use public DNS. 
-Create the following resources from a virtual network that is peered to your ingestion agents: --- A Service Endpoint to Azure Storage-- A Private Link or Service Endpoint to the Key Vault created by your Data Product.  The Key Vault is the same one you found in [Grant permissions for the Data Product Key Vault](#grant-permissions-for-the-data-product-key-vault).--Steps: -+1. Create the following resources from a virtual network that is peered to your ingestion agents: + - A Service Endpoint to Azure Storage + - A Private Link or Service Endpoint to the Key Vault created by your Data Product.  The Key Vault is the same one you found in [Grant permissions for the Data Product Key Vault](#grant-permissions-for-the-data-product-key-vault). 1. Note the IP addresses of these two connections.-2. Note the ingestion URL for your Data Product.  You can find the ingestion URL on your Data Product overview page in the Azure portal, in the form *\<account name\>.blob.core.windows.net*. -3. Note the URL of the Data Product Key Vault.  The URL appears as *\<vault name\>.vault.azure.net*. -4. Add a line to */etc/hosts* on the VM linking the two values in this format, for each of the storage and Key Vault: +1. Note the ingestion URL for your Data Product.  You can find the ingestion URL on your Data Product overview page in the Azure portal, in the form *\<account name\>.blob.core.windows.net*. +1. Note the URL of the Data Product Key Vault.  The URL appears as *\<vault name\>.vault.azure.net*. +1. Add a line to */etc/hosts* on the VM linking the two values in this format, for each of the storage and Key Vault: ``` <Storage private IP>   <ingestion URL> <Key Vault private IP>  <Key Vault URL> ````-5. Additionally to this, the public IP of the the URL *login.microsoftonline.com* must be added to */etc/hosts*. You can use any of the public addresses resolved by DNS clients. +1. Additionally to this, the public IP of the URL *login.microsoftonline.com* must be added to */etc/hosts*. 
You can use any of the public addresses resolved by DNS clients. ``` <Public IP>   login.microsoftonline.com ```` |
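The */etc/hosts* additions described above can be scripted. This sketch writes to a temporary file rather than the real */etc/hosts*, and every IP address and hostname is a made-up placeholder purely for illustration; on a real VM you would substitute your actual private endpoint IPs and Data Product URLs and edit */etc/hosts* with sudo.

```shell
# Illustrative values - substitute your real private IPs and URLs.
STORAGE_IP="10.213.1.4"
INGESTION_URL="contosoaoi.blob.core.windows.net"
KEYVAULT_IP="10.213.1.5"
KEYVAULT_URL="contoso-dp-kv.vault.azure.net"
LOGIN_IP="20.190.128.1"   # a public IP resolved for login.microsoftonline.com

HOSTS_FILE=$(mktemp)      # stands in for /etc/hosts in this sketch

{
  printf '%s\t%s\n' "$STORAGE_IP" "$INGESTION_URL"
  printf '%s\t%s\n' "$KEYVAULT_IP" "$KEYVAULT_URL"
  printf '%s\t%s\n' "$LOGIN_IP" "login.microsoftonline.com"
} >> "$HOSTS_FILE"

# Confirm all three entries landed.
grep -c . "$HOSTS_FILE"   # prints 3
```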
operator-insights | How To Manage Mcc Edr Agent | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-insights/how-to-manage-mcc-edr-agent.md | The MCC EDR agent is a software package that is installed onto a Linux Virtual M To upgrade to a new release of the agent, repeat the following steps on each VM that has the old agent. -> [!WARNING] -> When the agent restarts, a small number of EDRs being handled may be dropped.  It is not possible to gracefully upgrade without dropping any data.  For safety, upgrade agents one at a time, only upgrading the next when you are sure the previous was successful. - 1. Copy the RPM to the VM.  In an SSH session, change to the directory where the RPM was copied. 1. Save a copy of the existing */etc/az-mcc-edr-uploader/config.yaml* configuration file. |
operator-insights | Mcc Edr Agent Configuration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-insights/mcc-edr-agent-configuration.md | site_id: london-lab01 # The identifier for this agent agent_id: mcc-edr-agent01 # Config for secrets providers. We currently support reading secrets from Azure Key Vault and from the local filesystem. -# Multiple secret providers can be defined and each must be given -# a unique name. +# Multiple secret providers can be defined and each must be given a unique name. # The name can then be referenced for secrets later in the config. secret_providers: - name: dp_keyvault secret_providers: tenant_id: ad5421f5-99e4-44a9-8a46-cc30f34e8dc7 identity_name: 98f3263d-218e-4adf-b939-eacce6a590d2 cert_path: /path/to/local/certkey.pkcs-# Source configuration. This controls how EDRs are ingested from -# MCC. +# Source configuration. This controls how EDRs are ingested from MCC. source: - # The TCP port to listen on. Must match the port MCC is - # configured to send to. + # The TCP port to listen on. Must match the port MCC is configured to send to. listen_port: 36001 - # The maximum amount of data to buffer in memory before uploading. + # The maximum amount of data to buffer in memory before uploading. message_queue_capacity_in_bytes: 33554432 - # The maximum size of a single blob (file) to store in the input - # storage account in Azure. + # The maximum size of a single blob (file) to store in the input storage account in Azure. maximum_blob_size_in_bytes: 134217728 # Quick check on the maximum RAM that the agent should use. - # This is a guide to check the other tuning parameters, rather - # than a hard limit. + # This is a guide to check the other tuning parameters, rather than a hard limit. maximum_overall_capacity_in_bytes: 1275068416 - # The maximum time to wait when no data is received before - # uploading pending batched data to Azure. 
+ # The maximum time to wait when no data is received before uploading pending batched data to Azure. blob_rollover_period_in_seconds: 300 - # EDRs greater than this size are dropped. Subsequent EDRs continue to be processed. This - # condition likely indicates MCC sending larger than expected EDRs. MCC is not normally expected + # EDRs greater than this size are dropped. Subsequent EDRs continue to be processed. + # This condition likely indicates MCC sending larger than expected EDRs. MCC is not normally expected # to send EDRs larger than the default size. If EDRs are being dropped because of this limit, # investigate and confirm that the EDRs are valid, and then increase this value. soft_maximum_edr_size_in_bytes: 20480 source: # corrupt EDRs to Azure. You should not need to change this value. hard_maximum_edr_size_in_bytes: 100000 sink: - # The container within the ingestion account. This *must* be in - # the format Azure Operator Insights expects. Do not adjust - # without consulting your support representative. + # The container within the ingestion account. + # This *must* be in the format Azure Operator Insights expects. + # Do not adjust without consulting your support representative. container_name: edr # Optional. How often, in hours, the agent should refresh its ADLS token. Defaults to 1. adls_token_cache_period_hours: 1 sink: # The name of a secret in the corresponding provider. # This will be the name of a secret in the Key Vault. # This is created by the Data Product and should not be changed. - secret_name: adls-sas-token -# Optional. The maximum size of each block that is uploaded to Azure. -# Each blob is composed of one or more blocks. Defaults to 32MiB (=33554432 bytes). - block_size_in_bytes : 33554432 + secret_name: input-storage-sas + # Optional. The maximum size of each block that is uploaded to Azure. + # Each blob is composed of one or more blocks. Defaults to 32MiB (=33554432 bytes). + block_size_in_bytes: 33554432 ``` |
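The buffering parameters in the sample configuration above interact: intuitively, the overall capacity guide should comfortably exceed the queue capacity plus at least one maximum-size blob. This quick shell arithmetic check, using the default values from the sample, is an illustration of that intuition rather than a documented formula.

```shell
# Default values from the sample configuration.
QUEUE_CAPACITY=33554432          # message_queue_capacity_in_bytes (32 MiB)
MAX_BLOB_SIZE=134217728          # maximum_blob_size_in_bytes (128 MiB)
OVERALL_CAPACITY=1275068416      # maximum_overall_capacity_in_bytes (~1.19 GiB)

if [ $((QUEUE_CAPACITY + MAX_BLOB_SIZE)) -le "$OVERALL_CAPACITY" ]; then
  echo "capacity guide looks consistent"
else
  echo "warning: overall capacity may be too small for the configured buffers" >&2
fi
```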
operator-insights | Sftp Agent Configuration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-insights/sftp-agent-configuration.md | file_sources: # The name of a secret in the corresponding provider. # This will be the name of a secret in the Key Vault. # This is created by the Data Product and should not be changed.- secret_name: adls-sas-token + secret_name: input-storage-sas # The container within the ingestion account. This *must* be exactly the name of the container that Azure Operator Insights expects. container_name: example-container # Optional. A string giving an optional base path to use in Azure Blob Storage. Reserved URL characters must be percent-encoded. It may be required depending on the Data Product. file_sources: maximum_parallel_uploads: 10 # Optional. The maximum size of each block that is uploaded to Azure. # Each blob is composed of one or more blocks. Defaults to 32MiB (=33554432 Bytes).- block_size_in_bytes : 33554432 + block_size_in_bytes: 33554432 ``` |
operator-nexus | Howto Service Principal Rotation | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/howto-service-principal-rotation.md | + + Title: Azure Operator Nexus service principal rotation +description: Instructions on service principal rotation lifecycle management. +++ Last updated : 02/05/2024+++++# Service principal rotation on the target cluster ++This document provides an overview of the process of performing service principal rotation on the target cluster. ++## Prerequisites ++1. The [Azure CLI][installation-instruction] must be installed. +2. The `networkcloud` CLI extension is required. If the `networkcloud` extension isn't installed, it can be installed following the steps listed [here](https://github.com/MicrosoftDocs/azure-docs-pr/blob/main/articles/operator-nexus/howto-install-cli-extensions.md). +3. Access to the Azure portal for the target cluster. +4. You must be logged in to the same subscription as your target cluster via `az login`. +5. The target cluster must be in a running and healthy state. +6. Service principal rotation should be performed before the configured credentials expire. +7. The service principal must have owner privileges on the subscription of the target cluster. ++## Append secondary credential to the existing service principal ++List existing credential info for the service principal: ++```azurecli +az ad app credential list --id "<SP Application (client) ID>" +``` ++Append a secondary credential to the service principal. Copy the generated password somewhere safe. ++```azurecli +az ad app credential reset --id "<SP Application (client) ID>" --append --display-name "<human-readable description>" +``` +## Create a new service principal ++The new service principal must have owner privileges scoped to the target cluster's subscription. 
++```azurecli +az ad sp create-for-rbac -n "<service principal display name>" --role owner --scopes /subscriptions/<subscription-id> +``` ++## Rotate service principal on the target cluster ++The service principal can be rotated on the target cluster by supplying the new information: either an updated secondary credential, or an entirely new service principal for the target cluster. ++```azurecli +az networkcloud cluster update --resource-group "<resourceGroupName>" --cluster-service-principal application-id="<sp app id>" password="<cleartext password>" principal-id="<sp id>" tenant-id="<tenant id>" -n <cluster name> --subscription <subscription-id> +``` ++## Verify new service principal update on the target cluster ++Cluster show lists the new service principal details after it's rotated on the target cluster. ++```azurecli +az networkcloud cluster show --name "clusterName" --resource-group "resourceGroup" +``` ++In the output, you can find the details under the `clusterServicePrincipal` property. ++``` +"clusterServicePrincipal": { + "applicationId": "<sp application id>", + "principalId": "<sp principal id>", + "tenantId": "<tenant id>" + } +``` ++> [!NOTE] +> Ensure you're using the correct service principal ID (object ID in Azure) when updating it. There are two different object IDs retrievable from Azure for the same service principal name. Follow these steps to find the right one: +> 1. Avoid retrieving the object ID from the service principal of type application that appears when you search for the service principal in the Azure portal search bar. +> 2. Instead, search for the service principal name under "Enterprise applications" in Azure services to find the correct object ID, and use it as the principal ID. ++If you still have questions, [contact support](https://portal.azure.com/?#blade/Microsoft_Azure_Support/HelpAndSupportBlade). +For more information about Support plans, see [Azure Support plans](https://azure.microsoft.com/support/plans/response/). |
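After rotating, the verification against the `az networkcloud cluster show` output can be scripted. The JSON below is a stand-in for the real command output, and all IDs are placeholders; on a live system you would pipe the `az` output into the check instead of the sample string.

```shell
# Placeholder for: az networkcloud cluster show --name <cluster> --resource-group <rg> --output json
CLUSTER_JSON='{
  "clusterServicePrincipal": {
    "applicationId": "11111111-2222-3333-4444-555555555555",
    "principalId": "66666666-7777-8888-9999-000000000000",
    "tenantId": "aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee"
  }
}'

# The application ID you expect after rotation (placeholder).
EXPECTED_APP_ID="11111111-2222-3333-4444-555555555555"

if printf '%s' "$CLUSTER_JSON" | grep -q "\"applicationId\": \"$EXPECTED_APP_ID\""; then
  echo "service principal rotation confirmed"
else
  echo "cluster still reports a different application ID" >&2
fi
```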
postgresql | Concepts Backup Restore | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-backup-restore.md | Azure Backup and Azure Database for PostgreSQL flexible server services have bui - In preview, you can perform LTR backups for all databases, single db backup support will be added in the future. +For more information about performing a long-term backup, visit the [how-to guide](../../backup/backup-azure-database-postgresql-flex.md). ## Frequently asked questions |
postgresql | Concepts Networking Private | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-networking-private.md | Here are some concepts to be familiar with when you're using virtual networks wh At this time, we don't support NSGs where an ASG is part of the rule with Azure Database for PostgreSQL flexible server. We currently advise using [IP-based source or destination filtering](../../virtual-network/network-security-groups-overview.md#security-rules) in an NSG. > [!IMPORTANT]- > High availability and other Features of Azure Database for PostgreSQL flexible server require ability to send/receive traffic to **destination port 5432** within Azure virtual network subnet where Azure Database for PostgreSQL flexible server is deployed, as well as to **Azure storage** for log archival. If you create **[Network Security Groups (NSG)](../../virtual-network/network-security-groups-overview.md)** to deny traffic flow to or from your Azure Database for PostgreSQL flexible server instance within the subnet where it's deployed, **make sure to allow traffic to destination port 5432** within the subnet, and also to Azure storage by using **[service tag](../../virtual-network/service-tags-overview.md) Azure Storage** as a destination. Also, if you elect to use [Microsoft Entra authentication](concepts-azure-ad-authentication.md) to authenticate logins to your Azure Database for PostgreSQL flexible server instance, allow outbound traffic to Microsoft Entra ID using Microsoft Entra [service tag](../../virtual-network/service-tags-overview.md). + > High availability and other Features of Azure Database for PostgreSQL flexible server require ability to send/receive traffic to **destination port 5432** within Azure virtual network subnet where Azure Database for PostgreSQL flexible server is deployed, as well as to **Azure storage** for log archival. 
If you create **[Network Security Groups (NSG)](../../virtual-network/network-security-groups-overview.md)** to deny traffic flow to or from your Azure Database for PostgreSQL flexible server instance within the subnet where it's deployed, **make sure to allow traffic to destination port 5432** within the subnet, and also to Azure storage by using **[service tag](../../virtual-network/service-tags-overview.md) Azure Storage** as a destination. You can further [filter](../../virtual-network/tutorial-filter-network-traffic.md) this exception rule by adding your Azure region to the label like *us-east.storage*. Also, if you elect to use [Microsoft Entra authentication](concepts-azure-ad-authentication.md) to authenticate logins to your Azure Database for PostgreSQL flexible server instance, allow outbound traffic to Microsoft Entra ID using Microsoft Entra [service tag](../../virtual-network/service-tags-overview.md). > When setting up [Read Replicas across Azure regions](./concepts-read-replicas.md), Azure Database for PostgreSQL flexible server requires ability to send/receive traffic to **destination port 5432** for both primary and replica, as well as to **[Azure storage](../../virtual-network/service-tags-overview.md#available-service-tags)** in primary and replica regions from both primary and replica servers. * **Private DNS zone integration**. Azure private DNS zone integration allows you to resolve the private DNS within the current virtual network or any in-region peered virtual network where the private DNS zone is linked. |
postgresql | Concepts Scaling Resources | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-scaling-resources.md | When you update your Azure Database for PostgreSQL flexible server instance in s This process allows for seamless updates while minimizing downtime and ensuring cost-efficiency. This scaling process is triggered when changes are made to the storage and compute tiers. The experience remains consistent for both high-availability (HA) and non-HA servers. This feature is enabled in all Azure regions. *No customer action is required* to use this capability. +For servers configured with read replicas, scaling operations must follow a specific sequence to ensure data consistency and minimize downtime. For details about that sequence, refer to [scaling with read replicas](./concepts-read-replicas.md#scale). + > [!NOTE] > The near-zero downtime scaling process is the _default_ operation. When the following limitations are encountered, the system switches to regular scaling, which involves more downtime compared to the near-zero downtime scaling. |
postgresql | Concepts Security | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-security.md | Newly created databases in Azure Database for PostgreSQL - Flexible Server have ``` In this example, user *user1* can connect and has all privileges in our test database *Test_db*, but not any other db on the server. We further recommend, instead of giving this user\role *ALL PRIVILEGES* on that database and its objects, providing more selective permissions, such as *SELECT*, *INSERT*, *EXECUTE*, etc. For more information about privileges in PostgreSQL databases, see the [GRANT](https://www.postgresql.org/docs/current/sql-grant.html) and [REVOKE](https://www.postgresql.org/docs/current/sql-revoke.html) commands in the PostgreSQL docs. +### PostgreSQL 16 changes with role based security ++In a PostgreSQL database, a role can have a number of attributes that define its privileges. One such attribute is the [**CREATEROLE** attribute](https://www.postgresql.org/docs/current/role-attributes.html), which is important to PostgreSQL management of users and roles. PostgreSQL 16 introduced significant changes to this attribute. +In PostgreSQL 16, users with the **CREATEROLE** attribute can no longer hand out membership in any role to anyone; like other users, they can only hand out memberships in roles for which they hold **ADMIN OPTION**. Also, in PostgreSQL 16, the **CREATEROLE** attribute still allows a non-superuser to provision new users, but they can only drop users that they themselves created. Attempting to drop a user not created by the user holding the **CREATEROLE** attribute results in an error. 
++ ## Row level security [Row level security (RLS)](https://www.postgresql.org/docs/current/ddl-rowsecurity.html) is an Azure Database for PostgreSQL - Flexible Server security feature that allows database administrators to define policies to control how specific rows of data display and operate for one or more roles. Row level security is an additional filter you can apply to an Azure Database for PostgreSQL - Flexible Server database table. When a user tries to perform an action on a table, this filter is applied before the query criteria or other filtering, and the data is narrowed or rejected according to your security policy. You can create row level security policies for specific commands like *SELECT*, *INSERT*, *UPDATE*, and *DELETE*, or specify them for ALL commands. Use cases for row level security include PCI compliant implementations, classified environments, as well as shared hosting / multitenant applications. |
postgresql | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/policy-reference.md | |
role-based-access-control | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/policy-reference.md | Title: Built-in policy definitions for Azure RBAC description: Lists Azure Policy built-in policy definitions for Azure RBAC. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 01/30/2024 Last updated : 02/06/2024 |
sap | High Availability Guide Rhel Nfs Azure Files | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/high-availability-guide-rhel-nfs-azure-files.md | The following items are prefixed with: ```bash # mount temporarily the volume sudo mkdir -p /saptmp- sudo mount -t nfs sapnfs.file.core.windows.net:/sapnfsafs/sapnw1 /saptmp -o vers=4,minorversion=1,sec=sys + sudo mount -t nfs sapnfs.file.core.windows.net:/sapnfsafs/sapnw1 /saptmp -o noresvport,vers=4,minorversion=1,sec=sys # create the SAP directories sudo cd /saptmp sudo mkdir -p sapmntNW1 The following items are prefixed with: ```bash vi /etc/fstab # Add the following lines to fstab, save and exit- sapnfs.file.core.windows.net:/sapnfsafs/saptrans /usr/sap/trans nfs vers=4,minorversion=1,sec=sys 0 0 - sapnfs.file.core.windows.net:/sapnfsafs/sapnw1/sapmntNW1 /sapmnt/NW1 nfs vers=4,minorversion=1,sec=sys 0 0 - sapnfs.file.core.windows.net:/sapnfsafs/sapnw1/usrsapNW1sys/ /usr/sap/NW1/SYS nfs vers=4,minorversion=1,sec=sys 0 0 + sapnfs.file.core.windows.net:/sapnfsafs/saptrans /usr/sap/trans nfs noresvport,vers=4,minorversion=1,sec=sys 0 0 + sapnfs.file.core.windows.net:/sapnfsafs/sapnw1/sapmntNW1 /sapmnt/NW1 nfs noresvport,vers=4,minorversion=1,sec=sys 0 0 + sapnfs.file.core.windows.net:/sapnfsafs/sapnw1/usrsapNW1sys/ /usr/sap/NW1/SYS nfs noresvport,vers=4,minorversion=1,sec=sys 0 0 # Mount the file systems mount -a The following items are prefixed with: sudo pcs node standby sap-cl2 sudo pcs resource create fs_NW1_ASCS Filesystem device='sapnfs.file.core.windows.net:/sapnfsafs/sapnw1/usrsapNW1ascs' \- directory='/usr/sap/NW1/ASCS00' fstype='nfs' force_unmount=safe options='sec=sys,vers=4.1' \ + directory='/usr/sap/NW1/ASCS00' fstype='nfs' force_unmount=safe options='noresvport,vers=4,minorversion=1,sec=sys' \ op start interval=0 timeout=60 op stop interval=0 timeout=120 op monitor interval=200 timeout=40 \ --group g-NW1_ASCS The following items are prefixed with: sudo pcs node standby 
sap-cl1 sudo pcs resource create fs_NW1_AERS Filesystem device='sapnfs.file.core.windows.net:/sapnfsafs/sapnw1/usrsapNW1ers' \- directory='/usr/sap/NW1/ERS01' fstype='nfs' force_unmount=safe options='sec=sys,vers=4.1' \ + directory='/usr/sap/NW1/ERS01' fstype='nfs' force_unmount=safe options='noresvport,vers=4,minorversion=1,sec=sys' \ op start interval=0 timeout=60 op stop interval=0 timeout=120 op monitor interval=200 timeout=40 \ --group g-NW1_AERS The following items are prefixed with: ```bash vi /etc/fstab # Add the following lines to fstab, save and exit- sapnfs.file.core.windows.net:/sapnfsafs/saptrans /usr/sap/trans nfs vers=4,minorversion=1,sec=sys 0 0 - sapnfs.file.core.windows.net:/sapnfsafs/sapnw1/sapmntNW1 /sapmnt/NW1 nfs vers=4,minorversion=1,sec=sys 0 0 + sapnfs.file.core.windows.net:/sapnfsafs/saptrans /usr/sap/trans nfs noresvport,vers=4,minorversion=1,sec=sys 0 0 + sapnfs.file.core.windows.net:/sapnfsafs/sapnw1/sapmntNW1 /sapmnt/NW1 nfs noresvport,vers=4,minorversion=1,sec=sys 0 0 # Mount the file systems mount -a |
sap | High Availability Guide Rhel With Dialog Instance | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/high-availability-guide-rhel-with-dialog-instance.md | The following items are prefixed with either **[A]** - applicable to all nodes, ```bash # mount temporarily the volume sudo mkdir -p /saptmp- sudo mount -t nfs sapnfs.file.core.windows.net:/sapnfsafs/sapnw1 /saptmp -o vers=4,minorversion=1,sec=sys + sudo mount -t nfs sapnfs.file.core.windows.net:/sapnfsafs/sapnw1 /saptmp -o noresvport,vers=4,minorversion=1,sec=sys # create the SAP directories sudo cd /saptmp The following items are prefixed with either **[A]** - applicable to all nodes, # If using NFS on Azure files sudo pcs resource create fs_NW1_PAS Filesystem device='sapnfs.file.core.windows.net:/sapnfsafs/sapnw1/usrsapNW1D02' \- directory='/usr/sap/NW1/D02' fstype='nfs' force_unmount=safe options='sec=sys,vers=4.1' \ + directory='/usr/sap/NW1/D02' fstype='nfs' force_unmount=safe options='noresvport,vers=4,minorversion=1,sec=sys' \ op start interval=0 timeout=60 \ op stop interval=0 timeout=120 \ op monitor interval=200 timeout=40 \ The following items are prefixed with either **[A]** - applicable to all nodes, # If using NFS on Azure files sudo pcs resource create fs_NW1_AAS Filesystem device='sapnfs.file.core.windows.net:/sapnfsafs/sapnw1/usrsapNW1D03' \- directory='/usr/sap/NW1/D03' fstype='nfs' force_unmount=safe options='sec=sys,vers=4.1' \ + directory='/usr/sap/NW1/D03' fstype='nfs' force_unmount=safe options='noresvport,vers=4,minorversion=1,sec=sys' \ op start interval=0 timeout=60 \ op stop interval=0 timeout=120 \ op monitor interval=200 timeout=40 \ |
sap | High Availability Guide Suse Nfs Azure Files | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/high-availability-guide-suse-nfs-azure-files.md | The following items are prefixed with either **[A]** - applicable to all nodes, ```bash # mount temporarily the volume sudo mkdir -p /saptmp- sudo mount -t nfs sapnfs.file.core.windows.net:/sapnfsafs/sapnw1 /saptmp -o vers=4,minorversion=1,sec=sys + sudo mount -t nfs sapnfs.file.core.windows.net:/sapnfsafs/sapnw1 /saptmp -o noresvport,vers=4,minorversion=1,sec=sys # create the SAP directories sudo cd /saptmp sudo mkdir -p sapmntNW1 The following items are prefixed with either **[A]** - applicable to all nodes, ```bash vi /etc/fstab # Add the following lines to fstab, save and exit- sapnfs.file.core.windows.net:/sapnfsafs/saptrans /usr/sap/trans nfs vers=4,minorversion=1,sec=sys 0 0 - sapnfs.file.core.windows.net:/sapnfsafs/sapnw1/sapmntNW1 /sapmnt/NW1 nfs vers=4,minorversion=1,sec=sys 0 0 - sapnfs.file.core.windows.net:/sapnfsafs/sapnw1/usrsapNW1sys/ /usr/sap/NW1/SYS nfs vers=4,minorversion=1,sec=sys 0 0 + sapnfs.file.core.windows.net:/sapnfsafs/saptrans /usr/sap/trans nfs noresvport,vers=4,minorversion=1,sec=sys 0 0 + sapnfs.file.core.windows.net:/sapnfsafs/sapnw1/sapmntNW1 /sapmnt/NW1 nfs noresvport,vers=4,minorversion=1,sec=sys 0 0 + sapnfs.file.core.windows.net:/sapnfsafs/sapnw1/usrsapNW1sys/ /usr/sap/NW1/SYS nfs noresvport,vers=4,minorversion=1,sec=sys 0 0 # Mount the file systems mount -a The following items are prefixed with either **[A]** - applicable to all nodes, ```bash sudo crm node standby sap-cl2- sudo crm configure primitive fs_NW1_ASCS Filesystem device='sapnfs.file.core.windows.net:/sapnfsafs/sapnw1/usrsapNW1ascs' directory='/usr/sap/NW1/ASCS00' fstype='nfs' options='sec=sys,vers=4.1' \ + sudo crm configure primitive fs_NW1_ASCS Filesystem device='sapnfs.file.core.windows.net:/sapnfsafs/sapnw1/usrsapNW1ascs' directory='/usr/sap/NW1/ASCS00' fstype='nfs' 
options='noresvport,vers=4,minorversion=1,sec=sys' \ op start timeout=60s interval=0 \ op stop timeout=60s interval=0 \ op monitor interval=20s timeout=40s The following items are prefixed with either **[A]** - applicable to all nodes, ```bash sudo crm node online sap-cl2 sudo crm node standby sap-cl1- sudo crm configure primitive fs_NW1_ERS Filesystem device='sapnfs.file.core.windows.net:/sapnfsafs/sapnw1/usrsapNW1ers' directory='/usr/sap/NW1/ERS01' fstype='nfs' options='sec=sys,vers=4.1' \ + sudo crm configure primitive fs_NW1_ERS Filesystem device='sapnfs.file.core.windows.net:/sapnfsafs/sapnw1/usrsapNW1ers' directory='/usr/sap/NW1/ERS01' fstype='nfs' options='noresvport,vers=4,minorversion=1,sec=sys' \ op start timeout=60s interval=0 \ op stop timeout=60s interval=0 \ op monitor interval=20s timeout=40s The following items are prefixed with either **[A]** - applicable to both PAS an ```bash vi /etc/fstab # Add the following lines to fstab, save and exit- sapnfs.file.core.windows.net:/sapnfsafs/saptrans /usr/sap/trans nfs vers=4,minorversion=1,sec=sys 0 0 - sapnfs.file.core.windows.net:/sapnfsafs/sapnw1/sapmntNW1 /sapmnt/NW1 nfs vers=4,minorversion=1,sec=sys 0 0 + sapnfs.file.core.windows.net:/sapnfsafs/saptrans /usr/sap/trans nfs noresvport,vers=4,minorversion=1,sec=sys 0 0 + sapnfs.file.core.windows.net:/sapnfsafs/sapnw1/sapmntNW1 /sapmnt/NW1 nfs noresvport,vers=4,minorversion=1,sec=sys 0 0 # Mount the file systems mount -a |
sap | High Availability Guide Suse Nfs Simple Mount | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/high-availability-guide-suse-nfs-simple-mount.md | The following items are prefixed with: ```bash # Temporarily mount the volume. sudo mkdir -p /saptmp- sudo mount -t nfs sapnfsafs.file.core.windows.net:/sapnfsafs/sapnw1 /saptmp -o vers=4.1,sec=sys + sudo mount -t nfs sapnfsafs.file.core.windows.net:/sapnfsafs/sapnw1 /saptmp -o noresvport,vers=4,minorversion=1,sec=sys # Create the SAP directories. sudo cd /saptmp sudo mkdir -p sapmntNW1 The following items are prefixed with: With the simple mount configuration, the Pacemaker cluster doesn't control the file systems. ```bash- echo "sapnfsafs.file.core.windows.net:/sapnfsafs/sapnw1/sapmntNW1 /sapmnt/NW1 nfs vers=4.1,sec=sys 0 0" >> /etc/fstab - echo "sapnfsafs.file.core.windows.net:/sapnfsafs/sapnw1/usrsapNW1/ /usr/sap/NW1 nfs vers=4.1,sec=sys 0 0" >> /etc/fstab - echo "sapnfsafs.file.core.windows.net:/sapnfsafs/saptrans /usr/sap/trans nfs vers=4.1,sec=sys 0 0" >> /etc/fstab + echo "sapnfsafs.file.core.windows.net:/sapnfsafs/sapnw1/sapmntNW1 /sapmnt/NW1 nfs noresvport,vers=4,minorversion=1,sec=sys 0 0" >> /etc/fstab + echo "sapnfsafs.file.core.windows.net:/sapnfsafs/sapnw1/usrsapNW1/ /usr/sap/NW1 nfs noresvport,vers=4,minorversion=1,sec=sys 0 0" >> /etc/fstab + echo "sapnfsafs.file.core.windows.net:/sapnfsafs/saptrans /usr/sap/trans nfs noresvport,vers=4,minorversion=1,sec=sys 0 0" >> /etc/fstab # Mount the file systems. mount -a ``` If you're using NFS on Azure Files, use the following instructions to prepare th 2. Mount the file systems. 
```bash- echo "sapnfsafs.file.core.windows.net:/sapnfsafs/sapnw1/sapmntNW1 /sapmnt/NW1 nfs vers=4.1,sec=sys 0 0" >> /etc/fstab - echo "sapnfsafs.file.core.windows.net:/sapnfsafs/saptrans /usr/sap/trans nfs vers=4.1,sec=sys 0 0" >> /etc/fstab + echo "sapnfsafs.file.core.windows.net:/sapnfsafs/sapnw1/sapmntNW1 /sapmnt/NW1 nfs noresvport,vers=4,minorversion=1,sec=sys 0 0" >> /etc/fstab + echo "sapnfsafs.file.core.windows.net:/sapnfsafs/saptrans /usr/sap/trans nfs noresvport,vers=4,minorversion=1,sec=sys 0 0" >> /etc/fstab # Mount the file systems. mount -a ``` |
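All of the fstab entries above carry the same option string. A pre-check sketch like the following (entry value and required-option list are illustrative, not part of the guide) can confirm no entry is missing one of the recommended Azure Files NFS options before running `mount -a`:

```bash
# Hypothetical pre-check: verify an fstab-style NFS entry carries all of the
# recommended options. Field 4 of an fstab line is the mount-option string.
ENTRY="sapnfsafs.file.core.windows.net:/sapnfsafs/saptrans /usr/sap/trans nfs noresvport,vers=4,minorversion=1,sec=sys 0 0"
OPTS=$(printf '%s' "$ENTRY" | awk '{print $4}')
MISSING=0
for required in noresvport vers=4 minorversion=1 sec=sys; do
  case ",$OPTS," in
    *",$required,"*) ;;                                  # option present
    *) echo "missing option: $required"; MISSING=1 ;;
  esac
done
[ "$MISSING" -eq 0 ] && echo "all recommended options present"
```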
sap | High Availability Guide Suse Pacemaker | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/high-availability-guide-suse-pacemaker.md | Run the following commands on the nodes of the new cluster that you want to crea [...] ``` + > [!NOTE] + > If the SBD_DELAY_START property value is set to "no", change the value to "yes". You must also check the SBD service file to ensure that the value of TimeoutStartSec is greater than the value of SBD_DELAY_START. For more information, see [SBD file configuration](https://documentation.suse.com/sle-ha/15-SP5/html/SLE-HA-all/cha-ha-storage-protect.html#pro-ha-storage-protect-sbd-config) + 1. **[A]** Create the `softdog` configuration file. ```bash This section applies only if you want to use an SBD device with an Azure shared $Location = "MyAzureRegion" ``` -1. Define the size of the disk based on available disk size for Premium SSDs. In this example, P1 disk size of 4G is mentioned. +2. Define the size of the disk based on available disk size for Premium SSDs. This example uses a P1 disk size of 4 GB. ```bash $DiskSizeInGB = 4 $DiskName = "SBD-disk1" ``` -1. With parameter -MaxSharesCount, define the maximum number of cluster nodes to attach the shared disk for the SBD device. +3. Use the -MaxSharesCount parameter to define the maximum number of cluster nodes that can attach the shared disk for the SBD device. ```bash $ShareNodes = 2 ``` -1. For an SBD device that uses LRS for an Azure premium shared disk, use the following storage SkuName: +4. For an SBD device that uses LRS for an Azure premium shared disk, use the following storage SkuName: ```bash $SkuName = "Premium_LRS" ``` -1. For an SBD device that uses ZRS for an Azure premium shared disk, use the following storage SkuName: +5. For an SBD device that uses ZRS for an Azure premium shared disk, use the following storage SkuName: ```bash $SkuName = "Premium_ZRS" ``` -1. Set up an Azure shared disk. +6. Set up an Azure shared disk. 
```bash $diskConfig = New-AzDiskConfig -Location $Location -SkuName $SkuName -CreateOption Empty -DiskSizeGB $DiskSizeInGB -MaxSharesCount $ShareNodes $dataDisk = New-AzDisk -ResourceGroupName $ResourceGroup -DiskName $DiskName -Disk $diskConfig ``` -1. Attach the disk to the cluster VMs. +7. Attach the disk to the cluster VMs. ```bash $VM1 = "prod-cl1-0" If you want to deploy resources by using the Azure CLI or the Azure portal, you ### Set up an Azure shared disk SBD device -1. **[A]** Make sure that the attached disk is available. +1. **[A]** Install iSCSI package. ++ ```bash + sudo zypper install open-iscsi + ``` ++2. **[A]** Enable the iSCSI and SBD services. ++ ```bash + sudo systemctl enable iscsid + sudo systemctl enable iscsi + sudo systemctl enable sbd + ``` ++3. **[A]** Make sure that the attached disk is available. ```bash # lsblk If you want to deploy resources by using the Azure CLI or the Azure portal, you [5:0:0:0] disk Msft Virtual Disk 1.0 /dev/sdc ``` -1. **[A]** Retrieve the IDs of the attached disks. +4. **[A]** Retrieve the IDs of the attached disks. ```bash # ls -l /dev/disk/by-id/scsi-* | grep sdc If you want to deploy resources by using the Azure CLI or the Azure portal, you The commands list device IDs for the SBD device. We recommend using the ID that starts with scsi-3. In the preceding example, the ID is **/dev/disk/by-id/scsi-3600224804208a67da8073b2a9728af19**. -1. **[1]** Create the SBD device. +5. **[1]** Create the SBD device. Use the device ID from step 2 to create the new SBD devices on the first cluster node. If you want to deploy resources by using the Azure CLI or the Azure portal, you # sudo sbd -d /dev/disk/by-id/scsi-3600224804208a67da8073b2a9728af19 -1 60 -4 120 create ``` -1. **[A]** Adapt the SBD configuration. +6. **[A]** Adapt the SBD configuration. a. Open the SBD config file. If you want to deploy resources by using the Azure CLI or the Azure portal, you [...] ``` -1. Create the `softdog` configuration file. 
+ > [!NOTE] + > If the SBD_DELAY_START property value is set to "no", change the value to "yes". You must also check the SBD service file to ensure that the value of TimeoutStartSec is greater than the value of SBD_DELAY_START. For more information, see [SBD file configuration](https://documentation.suse.com/sle-ha/15-SP5/html/SLE-HA-all/cha-ha-storage-protect.html#pro-ha-storage-protect-sbd-config) ++7. Create the `softdog` configuration file. ```bash echo softdog | sudo tee /etc/modules-load.d/softdog.conf ``` -1. Load the module. +8. Load the module. ```bash sudo modprobe -v softdog To create a service principal, do the following: 2. Select **App registrations**. 3. Select **New registration**. 4. Enter a name for the registration, and then select **Accounts in this organization directory only**.-5. For **Application type**, select **Web**, enter a sign-on URL (for example, *http://localhost*), and then select **Add**. +5. For **Application type**, select **Web**, enter a sign-on URL (for example, `http://localhost`), and then select **Add**. The sign-on URL isn't used and can be any valid URL. 6. Select **Certificates and secrets**, and then select **New client secret**. 7. Enter a description for a new key, select **Two years**, and then select **Add**. Make sure to assign the custom role to the service principal at all VM (cluster > The 'pcmk_host_map' option is required in the command only if the hostnames and the Azure VM names are *not* identical. Specify the mapping in the format *hostname:vm-name*. > Refer to the bold section in the following command. 
- #### [Managed identity](#tab/msi) +#### [Managed identity](#tab/msi) - ```bash - # replace the bold strings with your subscription ID and resource group of the VM - - sudo crm configure primitive rsc_st_azure stonith:fence_azure_arm \ - params msi=true subscriptionId="subscription ID" resourceGroup="resource group" \ - pcmk_monitor_retries=4 pcmk_action_limit=3 power_timeout=240 pcmk_reboot_timeout=900 pcmk_delay_max=15 pcmk_host_map="prod-cl1-0:prod-cl1-0-vm-name;prod-cl1-1:prod-cl1-1-vm-name" \ - op monitor interval=3600 timeout=120 + ```bash + # replace the bold strings with your subscription ID and resource group of the VM ++ sudo crm configure primitive rsc_st_azure stonith:fence_azure_arm \ + params msi=true subscriptionId="subscription ID" resourceGroup="resource group" \ + pcmk_monitor_retries=4 pcmk_action_limit=3 power_timeout=240 pcmk_reboot_timeout=900 pcmk_delay_max=15 pcmk_host_map="prod-cl1-0:prod-cl1-0-vm-name;prod-cl1-1:prod-cl1-1-vm-name" \ + op monitor interval=3600 timeout=120 - sudo crm configure property stonith-timeout=900 - ``` + sudo crm configure property stonith-timeout=900 + ``` - #### [Service principal](#tab/spn) +#### [Service principal](#tab/spn) - ```bash - # replace the bold strings with your subscription ID, resource group of the VM, tenant ID, service principal application ID and password - - sudo crm configure primitive rsc_st_azure stonith:fence_azure_arm \ - params subscriptionId="subscription ID" resourceGroup="resource group" tenantId="tenant ID" login="application ID" passwd="password" \ - pcmk_monitor_retries=4 pcmk_action_limit=3 power_timeout=240 pcmk_reboot_timeout=900 pcmk_delay_max=15 pcmk_host_map="prod-cl1-0:prod-cl1-0-vm-name;prod-cl1-1:prod-cl1-1-vm-name" \ - op monitor interval=3600 timeout=120 + ```bash + # replace the bold strings with your subscription ID, resource group of the VM, tenant ID, service principal application ID and password - sudo crm configure property stonith-timeout=900 - ``` + sudo crm 
configure primitive rsc_st_azure stonith:fence_azure_arm \ + params subscriptionId="subscription ID" resourceGroup="resource group" tenantId="tenant ID" login="application ID" passwd="password" \ + pcmk_monitor_retries=4 pcmk_action_limit=3 power_timeout=240 pcmk_reboot_timeout=900 pcmk_delay_max=15 pcmk_host_map="prod-cl1-0:prod-cl1-0-vm-name;prod-cl1-1:prod-cl1-1-vm-name" \ + op monitor interval=3600 timeout=120 - + sudo crm configure property stonith-timeout=900 + ``` - If you're using fencing device, based on service principal configuration, read [Change from SPN to MSI for Pacemaker clusters using Azure fencing](https://techcommunity.microsoft.com/t5/running-sap-applications-on-the/sap-on-azure-high-availability-change-from-spn-to-msi-for/ba-p/3609278) and learn how to convert to managed identity configuration. + - > [!IMPORTANT] - > The monitoring and fencing operations are deserialized. As a result, if there's a longer-running monitoring operation and simultaneous fencing event, there's no delay to the cluster failover because the monitoring operation is already running. + If you're using fencing device, based on service principal configuration, read [Change from SPN to MSI for Pacemaker clusters using Azure fencing](https://techcommunity.microsoft.com/t5/running-sap-applications-on-the/sap-on-azure-high-availability-change-from-spn-to-msi-for/ba-p/3609278) and learn how to convert to managed identity configuration. - > [!TIP] - >The Azure fence agent requires outbound connectivity to the public endpoints, as documented, along with possible solutions, in [Public endpoint connectivity for VMs using standard ILB](./high-availability-guide-standard-load-balancer-outbound-connections.md). + > [!IMPORTANT] + > The monitoring and fencing operations are deserialized. As a result, if there's a longer-running monitoring operation and simultaneous fencing event, there's no delay to the cluster failover because the monitoring operation is already running. 
++ > [!TIP] + >The Azure fence agent requires outbound connectivity to the public endpoints, as documented, along with possible solutions, in [Public endpoint connectivity for VMs using standard ILB](./high-availability-guide-standard-load-balancer-outbound-connections.md). ## Configure Pacemaker for Azure scheduled events Azure offers [scheduled events](../../virtual-machines/linux/scheduled-events.md ``` Minimum version requirements:+ - SLES 12 SP5: `resource-agents-4.3.018.a7fb5035-3.98.1` - SLES 15 SP1: `resource-agents-4.3.0184.6ee15eb2-150100.4.72.1` - SLES 15 SP2: `resource-agents-4.4.0+git57.70549516-150200.3.56.1` - SLES 15 SP3: `resource-agents-4.8.0+git30.d0077df0-150300.8.31.1` - SLES 15 SP4 and newer: `resource-agents-4.10.0+git40.0f4de473-150400.3.19.1` -2. **[1]** Configure the resources in Pacemaker. +1. **[1]** Configure the resources in Pacemaker. ```bash #Place the cluster in maintenance mode sudo crm configure property maintenance-mode=true ``` -3. **[1]** Set the pacemaker cluster health node strategy and constraint +1. **[1]** Set the pacemaker cluster health node strategy and constraint ```bash sudo crm configure property node-health-strategy=custom Azure offers [scheduled events](../../virtual-machines/linux/scheduled-events.md > > Don't define any other resources in the cluster starting with "health-", besides the resources described in the next steps of the documentation. -4. **[1]** Set initial value of the cluster attributes. +1. **[1]** Set initial value of the cluster attributes. Run for each cluster node. For scale-out environments including majority maker VM. ```bash Azure offers [scheduled events](../../virtual-machines/linux/scheduled-events.md sudo crm_attribute --node prod-cl1-1 --name '#health-azure' --update 0 ``` -5. **[1]** Configure the resources in Pacemaker. +1. **[1]** Configure the resources in Pacemaker. Important: The resources must start with 'health-azure'. 
```bash Azure offers [scheduled events](../../virtual-machines/linux/scheduled-events.md > > WARNING: health-azure-events: unknown attribute 'allow-unhealthy-nodes'. -6. Take the Pacemaker cluster out of maintenance mode +1. Take the Pacemaker cluster out of maintenance mode ```bash sudo crm configure property maintenance-mode=false ``` -7. Clear any errors during enablement and verify that the health-azure-events resources have started successfully on all cluster nodes. +1. Clear any errors during enablement and verify that the health-azure-events resources have started successfully on all cluster nodes. ```bash sudo crm resource cleanup Azure offers [scheduled events](../../virtual-machines/linux/scheduled-events.md > [!NOTE] > After you've configured the Pacemaker resources for the azure-events agent, if you place the cluster in or out of maintenance mode, you might get warning messages such as: >- > WARNING: cib-bootstrap-options: unknown attribute 'hostName_ **hostname**' + > WARNING: cib-bootstrap-options: unknown attribute 'hostName_**hostname**' > WARNING: cib-bootstrap-options: unknown attribute 'azure-events_globalPullState' > WARNING: cib-bootstrap-options: unknown attribute 'hostName_ **hostname**' > These warning messages can be ignored. |
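The scheduled-events steps above warn that the Pacemaker health resources must be named with the `health-azure` prefix and that no other resources starting with "health-" may exist. A trivial naming guard illustrating that constraint (the resource name here is the one from the steps; the check itself is not part of the guide):

```bash
# Illustrative guard: azure-events health resources must carry the
# 'health-azure' prefix required by the node-health configuration.
RESOURCE_NAME="health-azure-events"
case "$RESOURCE_NAME" in
  health-azure*) NAME_OK=yes ;;
  *)             NAME_OK=no  ;;
esac
echo "prefix check: $NAME_OK"    # prefix check: yes
```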
sap | Universal Print Sap Frontend | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/universal-print-sap-frontend.md | SAP defines front-end printing with several [constraints](https://help.sap.com/d ## Next steps Check out the documentation: -- [SAP's print queue API](https://api.sap.com/api/API_CLOUD_PRINT_PULL_SRV/overview)+- [Integrating SAP S/4HANA Cloud and Local Printers](https://help.sap.com/docs/SAP_S4HANA_CLOUD/0f69f8fb28ac4bf48d2b57b9637e81fa/1e39bb68bbda4c48af4a79d35f5837e0.html?locale=en-US&version=latest) - [Universal Print API](/graph/api/resources/print) |
search | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/policy-reference.md | Title: Built-in policy definitions for Azure Cognitive Search description: Lists Azure Policy built-in policy definitions for Azure Cognitive Search. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 01/30/2024 Last updated : 02/06/2024 |
search | Search Howto Index Csv Blobs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-howto-index-csv-blobs.md | id, datePublished, tags 2, 2016-07-07, "cloud,mobile" ``` +If a field inside the CSV file contains the delimiter, it should be wrapped in quotes. If the field contains a quote, it must be escaped by doubling it (`""`). ++```text +id, datePublished, tags +1, 2020-01-05, "tags,with,""quoted text""" +``` + Without the `delimitedText` parsing mode, the entire contents of the CSV file would be treated as one search document. Whenever you're creating multiple search documents from a single blob, be sure to review [Indexing blobs to produce multiple search documents](search-howto-index-one-to-many-blobs.md) to understand how document key assignments work. The blob indexer is capable of finding or generating values that uniquely define each new document. Specifically, it can create a transitory `AzureSearch_DocumentKey` that's generated when a blob is parsed into smaller parts, where the value is then used as the search document's key in the index. |
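The quoting rule added in the CSV row above can be undone mechanically: strip the wrapping quotes, then collapse each doubled quote to a single one. A minimal shell illustration, using the field value from the sample row:

```bash
# Unescape one quoted CSV field: drop the outer quotes, then "" -> ".
RAW='"tags,with,""quoted text"""'
INNER=${RAW#\"}                                # drop leading quote
INNER=${INNER%\"}                              # drop trailing quote
VALUE=$(printf '%s' "$INNER" | sed 's/""/"/g') # collapse doubled quotes
echo "$VALUE"    # tags,with,"quoted text"
```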
security | Azure Marketplace Images | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/azure-marketplace-images.md | -Your image must meet these security configuration recommendations. This helps maintain a high level of security for partner solution images in the Azure Marketplace. +Prior to uploading images to the Azure Marketplace, your image must be updated with several security configuration requirements. These requirements help maintain a high level of security for partner solution images across the Azure Marketplace. -Always run a security vulnerability detection on your image prior to submitting. If you detect a security vulnerability in your own published image, you must inform your customers in a timely manner of both the vulnerability and how to correct it. +Make sure to run a security vulnerability detection on your image prior to submitting it to the Azure Marketplace. If you detect a security vulnerability in your own already published image, you must inform your customers in a timely manner both of the vulnerability's details and how to correct it in current deployments. -## Open Source-based Images +## Linux and open source OS images | Category | Check | | -- | -- | Always run a security vulnerability detection on your image prior to submitting. | Security | The VHD image only includes necessary locked accounts that do not have default passwords that would allow interactive login; no back doors. | | Security | Disable firewall rules unless application functionally relies on them, such as a firewall appliance. | | Security | Remove all sensitive information from the VHD image, such as test SSH keys, known hosts file, log files, and unnecessary certificates. |-| Security | Avoid using LVM. 
| -| Security | Include the latest versions of required libraries: </br> - OpenSSL v1.0 or greater </br> - Python 2.5 or above (Python 2.6+ is highly recommended) </br> - Python pyasn1 package if not already installed </br> - d.OpenSSL v 1.0 or greater | -| Security | Clear Bash/Shell history entries. | +| Security | Avoid using LVM. LVM is vulnerable to write caching issues with VM hypervisors and also increases data recovery complexity for users of your image. | +| Security | Include the latest versions of required libraries: </br> - OpenSSL v1.0 or greater </br> - Python 2.5 or above (Python 2.6+ is highly recommended) </br> - Python pyasn1 package if not already installed </br> - d.OpenSSL v 1.0 or greater | +| Security | Clear Bash/Shell history entries. History entries can include private information or plain-text credentials for other systems. | | Networking | Include the SSH server by default. Set SSH keep alive to sshd config with the following option: ClientAliveInterval 180. |-| Networking | Remove any custom network configuration from the image. Delete the resolv.conf: `rm /etc/resolv.conf`. | +| Networking | Remove any custom network configuration from the image. Delete the resolv.conf: `rm /etc/resolv.conf`. | | Deployment | Install the latest Azure Linux Agent.</br> - Install using the RPM or Deb package. </br> - You may also use the manual install process, but the installer packages are recommended and preferred. </br> - If installing the agent manually from the GitHub repository, first copy the `waagent` file to `/usr/sbin` and run (as root): </br>`# chmod 755 /usr/sbin/waagent` </br>`# /usr/sbin/waagent -install` </br>The agent configuration file is placed at `/etc/waagent.conf`. | | Deployment | Ensure Azure Support can provide our partners with serial console output when needed and provide adequate timeout for OS disk mounting from cloud storage. Add the following parameters to the image Kernel Boot Line: `console=ttyS0 earlyprintk=ttyS0 rootdelay=300`. 
| | Deployment | No swap partition on the OS disk. Swap can be requested for creation on the local resource disk by the Linux Agent. | | Deployment | Create a single root partition for the OS disk. | | Deployment | 64-bit operating system only. | -## Windows Server-based Images +## Windows Server images | Category | Check | | | -- | |
security | Trusted Hardware Identity Management | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/trusted-hardware-identity-management.md | The Open Enclave SDK and Azure Attestation don't look at the `nextUpdate` date, The Azure Data Center Attestation Primitives (DCAP) library, a replacement for Intel Quote Provider Library (QPL), fetches quote generation collateral and quote validation collateral directly from the Trusted Hardware Identity Management service. Fetching collateral directly from the Trusted Hardware Identity Management service ensures that all Azure hosts have collateral readily available within the Azure cloud to reduce external dependencies. The current recommended version of the DCAP library is 1.11.2. -### Where can I download the latest DCAP packages? +### Where can I download the latest Azure DCAP library? Use the following links to download the packages: Use the following links to download the packages: - [Ubuntu 18.04](https://packages.microsoft.com/ubuntu/18.04/prod/pool/main/64.deb) - [Windows](https://www.nuget.org/packages/Microsoft.Azure.DCAP/1.12.0) +For newer versions of Ubuntu (for example, Ubuntu 22.04), you have to use the [Intel QPL](#how-do-i-use-intel-qpl-with-trusted-hardware-identity-management). + ### Why do Trusted Hardware Identity Management and Intel have different baselines? Trusted Hardware Identity Management and Intel provide different baseline levels of the trusted computing base. When customers assume that Intel has the latest baselines, they must ensure that all the requirements are satisfied. This approach can lead to a breakage if customers haven't updated to the specified requirements. Trusted Hardware Identity Management takes a slower approach to updating the TCB baseline, so customers can make the necessary changes at their own pace. 
Although this approach provides an older TCB baseline, customers won't experience a breakage if they haven't met the requirements of the new TCB baseline. This is why the TCB baseline from Trusted Hardware Identity Management is a different version from Intel's baseline. We want to empower customers to meet the requirements of the new TCB baseline at their pace, instead of forcing them to update and causing a disruption that would require reprioritization of workstreams. -### With Coffee Lake, I could get my certificates directly from the Intel PCK. Why, with Ice Lake, do I need to get the certificates from Trusted Hardware Identity Management? And how can I fetch those certificates? --The certificates are fetched and cached in the Trusted Hardware Identity Management service through a platform manifest and indirect registration. As a result, the key caching policy is set to never store root keys for a platform. Expect direct calls to the Intel service from inside the VM to fail. +### With Intel Xeon E Processors, I could get my certificates directly from the Intel PCS. Why, with Intel Xeon Scalable processors starting from the 4th generation, do I need to get the certificates from Trusted Hardware Identity Management? And how can I fetch those certificates? -To retrieve the certificate, you must install the [Azure DCAP library](#what-is-the-azure-dcap-library) that replaces Intel QPL. This library directs the fetch requests to the Trusted Hardware Identity Management service running in the Azure cloud. For download links, see [Where can I download the latest DCAP packages?](#where-can-i-download-the-latest-dcap-packages). 
+Starting with the 4th Generation of Intel® Xeon® Scalable Processors, Azure performs indirect registration at Intel's Registration Service using the Platform Manifest and stores the resulting PCK certificate in the Trusted Hardware Identity Management (THIM) service. +Azure uses indirect registration because Intel's registration service doesn't store root keys for a platform in this case, which is reflected by `false` in the `CachedKeys` flag in PCK certificates. +Because indirect registration is used, all subsequent communication with the Intel PCS requires the Platform Manifest, which Azure doesn't provide to virtual machines (VMs). +Instead, VMs have to reach out to THIM to receive PCK certificates. +To retrieve a PCK certificate, you can use either the [Intel QPL](#how-do-i-use-intel-qpl-with-trusted-hardware-identity-management) or the [Azure DCAP library](#what-is-the-azure-dcap-library). ### How do I use Intel QPL with Trusted Hardware Identity Management? |
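For the Intel QPL path mentioned above, the quote provider is typically pointed at a collateral endpoint through its `sgx_default_qcnl.conf` file. The sketch below is an assumption-laden illustration — the THIM endpoint URL and the `/etc/sgx_default_qcnl.conf` path are taken from THIM guidance, not verified here, and the file is written locally rather than to `/etc`:

```bash
# Sketch only: point Intel QPL at the THIM collateral endpoint. The pccs_url
# value is an assumed THIM endpoint; in a real VM this file lives at
# /etc/sgx_default_qcnl.conf.
cat > sgx_default_qcnl.conf <<'EOF'
{
  "pccs_url": "https://global.acccache.azure.net/sgx/certification/v3/",
  "use_secure_cert": true
}
EOF
grep -q '"pccs_url"' sgx_default_qcnl.conf && echo "QPL config written"
```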
sentinel | Audit Track Tasks | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/audit-track-tasks.md | Your analysts can see the list of tasks they need to perform for a particular in This article explains how you, as a SOC manager, can audit the history of Microsoft Sentinel incident tasks, and track the changes made to them throughout their life cycle, in order to gauge the efficacy of your task assignments and their contribution to your SOC's efficiency and proper functioning. -> [!IMPORTANT] -> -> The **Incident tasks** feature is currently in **PREVIEW**. See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability. - ## Structure of Tasks array in the SecurityIncident table The *SecurityIncident* table is an audit table—it stores not the incidents themselves, but rather records of the life of an incident: its creation and any changes made to it. Any time an incident is created or a change is made to an incident, a record is generated in this table showing the now-current state of the incident. Apart from the **Incident tasks workbook**, you can audit task activity by query 1. Let's add a task to the incident, and then we'll come back here, run the query again, and see the changes in the results. 1. On the **Incidents** page, enter the incident ID number in the Search bar.- 1. Open the incident details page and select **Tasks (Preview)** from the toolbar. + 1. Open the incident details page and select **Tasks** from the toolbar. 1. Add a new task, give it the name "This task is a test task!", then select **Save**. The last task shown below is what you should end up with: :::image type="content" source="media/audit-track-tasks/incident-task-list-task-added.png" alt-text="Screenshot shows incident tasks panel."::: |
sentinel | Automate Incident Handling With Automation Rules | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/automate-incident-handling-with-automation-rules.md | Automation rules are a way to centrally manage automation in Microsoft Sentinel, Automation rules apply to the following categories of use cases: - Perform basic automation tasks for incident handling without using playbooks. For example:- - [Add incident tasks](incident-tasks.md) (in Preview) for analysts to follow. + - [Add incident tasks](incident-tasks.md) for analysts to follow. - Suppress noisy incidents. - Triage new incidents by changing their status from New to Active and assigning an owner. - Tag incidents to classify them. Currently the only condition that can be configured for the alert creation trigg Actions can be defined to run when the conditions (see above) are met. You can define many actions in a rule, and you can choose the order in which they'll run (see below). The following actions can be defined using automation rules, without the need for the [advanced functionality of a playbook](automate-responses-with-playbooks.md): -- Adding a task to an incident - you can create a [checklist of tasks for analysts to follow](incident-tasks.md) throughout the processes of triage, investigation, and remediation of the incident, to ensure that no critical steps are missed. +- Adding a task to an incident – you can create a [checklist of tasks for analysts to follow](incident-tasks.md) throughout the processes of triage, investigation, and remediation of the incident, to ensure that no critical steps are missed. - Changing the status of an incident, keeping your workflow up to date. |
sentinel | Automate Responses With Playbooks | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/automate-responses-with-playbooks.md | There are circumstances, though, that call for running playbooks manually. For e - When creating a new playbook, you'll want to test it before putting it in production. - There may be situations where you'll want to have more control and human input into when and whether a certain playbook runs. - You [run a playbook manually](tutorial-respond-threats-playbook.md#run-a-playbook-on-demand) by opening an incident, alert, or entity and selecting and running the associated playbook displayed there. Currently this feature is generally available for alerts, and in preview for incidents and entities. -- ### Set an automated response Security operations teams can significantly reduce their workload by fully automating the routine responses to recurring types of incidents and alerts, allowing you to concentrate more on unique incidents and alerts, analyzing patterns, threat hunting, and more. |
sentinel | Automation | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/automation.md | Microsoft Sentinel, in addition to being a Security Information and Event Manage ## Automation rules -Automation rules allow users to centrally manage the automation of incident handling. Besides letting you assign playbooks to incidents and alerts, automation rules also allow you to automate responses for multiple analytics rules at once, automatically tag, assign, or close incidents without the need for playbooks, create lists of tasks for your analysts to perform when triaging, investigating, and remediating incidents, and control the order of actions that are executed. Automation rules also allow you to apply automations when an incident is **updated** (now in **Preview**), as well as when it's created. This new capability will further streamline automation use in Microsoft Sentinel and will enable you to simplify complex workflows for your incident orchestration processes. +Automation rules allow users to centrally manage the automation of incident handling. Besides letting you assign playbooks to incidents and alerts, automation rules also allow you to automate responses for multiple analytics rules at once, automatically tag, assign, or close incidents without the need for playbooks, create lists of tasks for your analysts to perform when triaging, investigating, and remediating incidents, and control the order of actions that are executed. Automation rules also allow you to apply automations when an incident is updated, as well as when it's created. This new capability will further streamline automation use in Microsoft Sentinel and will enable you to simplify complex workflows for your incident orchestration processes. Learn more with this [complete explanation of automation rules](automate-incident-handling-with-automation-rules.md). |
sentinel | Create Manage Use Automation Rules | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/create-manage-use-automation-rules.md | The first step in designing and defining your automation rule is figuring out wh You also want to determine your use case. What are you trying to accomplish with this automation? Consider the following options: -- (**Preview**) Create tasks for your analysts to follow in triaging, investigating, and remediating incidents.+- Create tasks for your analysts to follow in triaging, investigating, and remediating incidents. - Suppress noisy incidents (see [this article on handling false positives](false-positives.md#add-exceptions-by-using-automation-rules) instead) - Triage new incidents by changing their status from New to Active and assigning an owner. - Tag incidents to classify them. |
sentinel | Create Tasks Automation Rule | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/create-tasks-automation-rule.md | This article explains how to use automation rules to create lists of incident ta [Incident tasks](incident-tasks.md) can be created automatically not only by automation rules, but also by playbooks, and also manually, ad-hoc, from within an incident. -> [!IMPORTANT] -> -> The **Incident tasks** feature is currently in **PREVIEW**. See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability. - ## Use cases for different roles This article addresses the following scenarios that apply to SOC managers, senior analysts, and automation engineers: In the **Automation** page, you can filter the view of automation rules to see o 1. Unmark the **Select all** checkbox. -1. Scroll down and mark the **Add task (Preview)** checkbox. +1. Scroll down and mark the **Add task** checkbox. 1. Select **OK** and see the results. Give your automation rule a name that describes what it does. :::image type="content" source="media/create-tasks-automation-rule/create-new-automation-rule.png" alt-text="Screenshot of first part of automation rule wizard."::: -1. Under **Actions**, select **Add task (preview)**. +1. Under **Actions**, select **Add task**. :::image type="content" source="media/create-tasks-automation-rule/add-task-action.png" alt-text="Screenshot of choosing the Add Task action in an automation rule."::: |
sentinel | Create Tasks Playbook | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/create-tasks-playbook.md | This article explains how to use playbooks to create (and optionally perform) in [Incident tasks](incident-tasks.md) can be created automatically not only by playbooks, but also by automation rules, and also manually, ad-hoc, from within an incident. -> [!IMPORTANT] -> -> The **Incident tasks** feature is currently in **PREVIEW**. See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability. - ## Use cases for different roles This article addresses the following scenarios that apply to SOC managers, senior analysts, and automation engineers: In this example we're going to add a playbook action that adds a task to the inc To add and configure these actions, take the following steps: -1. From the **Microsoft Sentinel** connector, add the **Add task to incident (Preview)** action. +1. From the **Microsoft Sentinel** connector, add the **Add task to incident** action. Choose the **Incident ARM ID** dynamic content item for the **Incident ARM id** field. Enter **Reset user password** as the **Title**. Add a description if you want. :::image type="content" source="media/create-tasks-playbook/add-task-reset-password.png" alt-text="Screenshot shows playbook actions to add a task to reset a user's password."::: To add and configure these actions, take the following steps: :::image type="content" source="media/create-tasks-playbook/confirm-compromised.png" alt-text="Screenshot shows sending entities to AADIP to confirm compromise."::: -1. Add the **Mark a task as completed (Preview)** action from the Microsoft Sentinel connector. +1. Add the **Mark a task as completed** action from the Microsoft Sentinel connector. 
Add the **Incident task ID** dynamic content item to the **Task ARM id** field. :::image type="content" source="media/create-tasks-playbook/mark-complete.png" alt-text="Screenshot shows how to add a playbook action to mark an incident task complete."::: In this example we're going to add a playbook action that researches an IP address :::image type="content" source="media/create-tasks-playbook/set-condition.png" alt-text="Screenshot shows how to set a true-false condition in a playbook."::: 1. Inside the **True** option, select **Add an action**. - Select the **Add task to incident (Preview)** action from the **Microsoft Sentinel** connector. + Select the **Add task to incident** action from the **Microsoft Sentinel** connector. Choose the **Incident ARM ID** dynamic content item for the **Incident ARM id** field. Enter **Mark user as compromised** as the **Title**. Add a description if you want. :::image type="content" source="media/create-tasks-playbook/condition-true.png" alt-text="Screenshot shows playbook actions to add a task to mark a user as compromised."::: 1. Inside the **False** option, select **Add an action**. - Select the **Add task to incident (Preview)** action from the **Microsoft Sentinel** connector. + Select the **Add task to incident** action from the **Microsoft Sentinel** connector. Choose the **Incident ARM ID** dynamic content item for the **Incident ARM id** field. Enter **Reach out to the user to confirm the activity** as the **Title**. Add a description if you want. |
sentinel | Customer Managed Keys | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/customer-managed-keys.md | This article provides background information and steps to configure a [customer- ## Prerequisites -1. Configure a Log Analytics dedicated cluster with at least a 500 GB/day commitment tier. When multiple workspaces are linked to the same dedicated cluster, they share the same customer-managed key. Learn about [Log Analytics Dedicated Cluster Pricing](../azure-monitor/logs/logs-dedicated-clusters.md#cluster-pricing-model). +1. Configure a Log Analytics dedicated cluster with at least a 100 GB/day commitment tier. When multiple workspaces are linked to the same dedicated cluster, they share the same customer-managed key. Learn about [Log Analytics Dedicated Cluster Pricing](../azure-monitor/logs/logs-dedicated-clusters.md#cluster-pricing-model). 1. Configure the CMK within Azure Monitor. Don't onboard the workspace to Sentinel yet. Learn about the [CMK provisioning steps](../azure-monitor/logs/customer-managed-keys.md?tabs=portal#customer-managed-key-provisioning-steps). 1. Contact the [Microsoft Sentinel Product Group](mailto:onboardrecoeng@microsoft.com) - you must receive onboarding confirmation as part of completing the steps in this guide before you use the workspace. |
sentinel | Feature Availability | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/feature-availability.md | This article describes the features available in Microsoft Sentinel across diffe |[Create incidents manually](create-incident-manually.md) |GA |✅ |✅| ✅ | |[Cross-tenant/Cross-workspace incidents view](multiple-workspace-view.md) |GA |✅ |✅ |✅ | |[Incident advanced search](investigate-cases.md#search-for-incidents) |GA |✅ |✅| ✅ |-|[Incident tasks](incident-tasks.md) |Public preview |✅ |✅| ✅ | -|[Microsoft Defender XDR incident integration](microsoft-365-defender-sentinel-integration.md#working-with-microsoft-defender-xdr-incidents-in-microsoft-sentinel-and-bi-directional-sync) |GA |✅ |✅| ❌ | +|[Incident tasks](incident-tasks.md) |GA |✅ |✅| ✅ | +|[Microsoft 365 Defender incident integration](microsoft-365-defender-sentinel-integration.md#working-with-microsoft-defender-xdr-incidents-in-microsoft-sentinel-and-bi-directional-sync) |GA |✅ |✅| ❌ | |[Microsoft Teams integrations](collaborate-in-microsoft-teams.md) |Public preview |✅ |✅| ❌ | |[Playbook template gallery](use-playbook-templates.md) |Public preview |✅ |✅| ❌ | |[Run playbooks on entities](respond-threats-during-investigation.md) |Public preview |✅ |✅ |❌ | |
sentinel | Incident Tasks | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/incident-tasks.md | Last updated 11/14/2022 # Use tasks to manage incidents in Microsoft Sentinel -> [!IMPORTANT] -> -> The **Incident tasks** feature is currently in **PREVIEW**. See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability. - One of the most important factors in running your security operations (SecOps) effectively and efficiently is the **standardization of processes**. SecOps analysts are expected to perform a list of steps, or tasks, in the process of triaging, investigating, or remediating an incident. Standardizing and formalizing the list of tasks can help keep your SOC running smoothly, ensuring the same requirements apply to all analysts. This way, regardless of who is on-shift, an incident will always get the same treatment and SLAs. Analysts won't need to spend time thinking about what to do, or worry about missing a critical step. Those steps are defined by the SOC manager or senior analysts (tier 2/3) based on common security knowledge (such as NIST), their experience with past incidents, or recommendations provided by the security vendor that detected the incident. ## Use cases |
sentinel | Investigate Incidents | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/investigate-incidents.md | As you're setting up to investigate an incident, assemble the things you'll need :::image type="content" source="media/investigate-incidents/top-toolbar.png" alt-text="Screenshot of the button bar on the incident details page."::: -1. Select **Tasks (Preview)** to [see the tasks assigned for this incident](work-with-tasks.md#view-and-follow-incident-tasks), or to [add your own tasks](work-with-tasks.md#manually-add-an-ad-hoc-task-to-an-incident). +1. Select **Tasks** to [see the tasks assigned for this incident](work-with-tasks.md#view-and-follow-incident-tasks), or to [add your own tasks](work-with-tasks.md#manually-add-an-ad-hoc-task-to-an-incident). Learn more about [using incident tasks](incident-tasks.md) to improve process standardization in your SOC. |
sentinel | Watchlists Create | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/watchlists-create.md | Watchlists in Microsoft Sentinel allow you to correlate data from a data source Upload a watchlist file from a local folder or from your Azure Storage account. To create a watchlist file, you have the option to download one of the watchlist templates from Microsoft Sentinel to populate with your data. Then upload that file when you create the watchlist in Microsoft Sentinel. -Local file uploads are currently limited to files of up to 3.8 MB in size. If you have large watchlist file that's up to 500 MB in size, upload the file to your Azure Storage account. Before you create a watchlist, review the [limitations of watchlists](watchlists.md). +Local file uploads are currently limited to files of up to 3.8 MB in size. A file that's over 3.8 MB in size and up to 500 MB is considered a [large watchlist](#create-a-large-watchlist-from-file-in-azure-storage-preview). Upload the file to an Azure Storage account. Before you create a watchlist, review the [limitations of watchlists](watchlists.md). When you create a watchlist, the watchlist name and alias must each be between 3 and 64 characters. The first and last characters must be alphanumeric. But you can include whitespaces, hyphens, and underscores in between the first and last characters. |
sentinel | Watchlists | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/watchlists.md | Before you create a watchlist, be aware of the following limitations: - The use of watchlists should be limited to reference data, as they aren't designed for large data volumes. - The **total number of active watchlist items** across all watchlists in a single workspace is currently limited to **10 million**. Deleted watchlist items don't count against this total. If you require the ability to reference large data volumes, consider ingesting them using [custom logs](../azure-monitor/agents/data-sources-custom-logs.md) instead. - Watchlists are refreshed in your workspace every 12 days, updating the `TimeGenerated` field.-- Watchlists can only be referenced from within the same workspace. Cross-workspace and/or Lighthouse scenarios are currently not supported.+- Using Lighthouse to manage watchlists across different workspaces is not supported at this time. - Local file uploads are currently limited to files of up to 3.8 MB in size. - File uploads from an Azure Storage account (in preview) are currently limited to files up to 500 MB in size.+- Watchlists must adhere to the same column and table restrictions as KQL entities. For more information, see [KQL entity names](/azure/data-explorer/kusto/query/schema-entities/entity-names). ## Options to create watchlists |
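The two size limits documented above (3.8 MB for local file uploads, 500 MB for Azure Storage uploads in preview) determine which upload path a watchlist file can take. As a minimal sketch, the helper below encodes those documented thresholds so a client-side script can decide the route before calling Azure; the function name `choose_upload_route` and the route labels are hypothetical, not part of any Sentinel API.

```python
# Illustrative sketch only: encode the documented Microsoft Sentinel
# watchlist file-size limits so a script can pick an upload path up front.
LOCAL_UPLOAD_LIMIT_BYTES = int(3.8 * 1024**2)    # local file uploads: up to 3.8 MB
LARGE_WATCHLIST_LIMIT_BYTES = 500 * 1024**2      # Azure Storage uploads (preview): up to 500 MB

def choose_upload_route(size_bytes: int) -> str:
    """Return which documented upload path a watchlist file of this size can use."""
    if size_bytes <= LOCAL_UPLOAD_LIMIT_BYTES:
        return "local-upload"
    if size_bytes <= LARGE_WATCHLIST_LIMIT_BYTES:
        return "azure-storage-large-watchlist"
    raise ValueError("file exceeds the 500 MB large-watchlist limit")

print(choose_upload_route(2 * 1024**2))  # a 2 MB file fits the local upload path
```

A file just over 3.8 MB would be routed to the Azure Storage (large watchlist) path instead of failing outright, matching the guidance in the `watchlists-create.md` change above.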
sentinel | Whats New | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/whats-new.md | The listed features were released in the last three months. For information abou ## February 2024 +- [Incident tasks now generally available (GA)](#incident-tasks-now-generally-available-ga) - [AWS and GCP data connectors now support Azure Government clouds](#aws-and-gcp-data-connectors-now-support-azure-government-clouds) - [Windows DNS Events via AMA connector now generally available (GA)](#windows-dns-events-via-ama-connector-now-generally-available-ga) +### Incident tasks now generally available (GA) ++Incident tasks, which help you standardize your incident investigation and response practices so you can more effectively manage incident workflow, are now generally available (GA) in Microsoft Sentinel. ++- Learn more about incident tasks in the Microsoft Sentinel documentation: + - [Use tasks to manage incidents in Microsoft Sentinel](incident-tasks.md) + - [Work with incident tasks in Microsoft Sentinel](work-with-tasks.md) + - [Audit and track changes to incident tasks in Microsoft Sentinel](audit-track-tasks.md) ++- See [this blog post by Benji Kovacevic](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/create-tasks-repository-in-microsoft-sentinel/ba-p/4038563) that shows how you can use incident tasks in combination with watchlists, automation rules, and playbooks to build a task management solution with two parts: + - A repository of incident tasks. + - A mechanism that automatically attaches tasks to newly created incidents, according to the incident title, and assigns them to the proper personnel. + ### AWS and GCP data connectors now support Azure Government clouds Microsoft Sentinel data connectors for Amazon Web Services (AWS) and Google Cloud Platform (GCP) now include supporting configurations to ingest data into workspaces in Azure Government clouds. 
Windows DNS events can now be ingested to Microsoft Sentinel using the Azure Mon ## January 2024 +[Reduce false positives for SAP systems with analytics rules](#reduce-false-positives-for-sap-systems-with-analytics-rules) + ### Reduce false positives for SAP systems with analytics rules Use analytics rules together with the [Microsoft Sentinel solution for SAP® applications](sap/solution-overview.md) to lower the number of false positives triggered from your SAP® systems. The Microsoft Sentinel solution for SAP® applications now includes the following enhancements: |
sentinel | Work With Tasks | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/work-with-tasks.md | This article explains how SOC analysts can use incident tasks to manage their in You can see the list of tasks you need to perform for a particular incident on the incident details page, and mark them complete as you go. -> [!IMPORTANT] -> -> The **Incident tasks** feature is currently in **PREVIEW**. See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability. - ## Use cases for different roles This article addresses the following scenarios, which apply to SOC analysts: The **Microsoft Sentinel Responder** role is required to create automation rules ## View and follow incident tasks -1. In the **Incidents** page, select an incident from the list, and select **View full details** under **Tasks (Preview)** in the details panel, or select **View full details** at the bottom of the details panel. +1. In the **Incidents** page, select an incident from the list, and select **View full details** under **Tasks** in the details panel, or select **View full details** at the bottom of the details panel. :::image type="content" source="media/work-with-tasks/tasks-from-incident-info-panel.png" alt-text="Screenshot of link to enter the tasks panel from the incident info panel on the main incidents screen."::: -1. If you opted to enter the full details page, select **Tasks (Preview)** from the top banner. +1. If you opted to enter the full details page, select **Tasks** from the top banner. :::image type="content" source="media/work-with-tasks/incident-details-screen.png" alt-text="Screenshot shows incident details screen with tasks panel open." lightbox="media/work-with-tasks/incident-details-screen.png"::: -1. 
The **Incident tasks (Preview)** panel will open on the right side of whichever screen you were in (the main incidents page or the incident details page). You'll see the list of tasks defined for this incident, along with how or by whom it was created - whether manually or by an automation rule or a playbook. +1. The **Incident tasks** panel will open on the right side of whichever screen you were in (the main incidents page or the incident details page). You'll see the list of tasks defined for this incident, along with how or by whom it was created - whether manually or by an automation rule or a playbook. :::image type="content" source="media/work-with-tasks/incident-tasks-panel.png" alt-text="Screenshot shows incident tasks panel as seen from incident details page."::: The **Microsoft Sentinel Responder** role is required to create automation rules You can also add tasks for yourself, on the spot, to an incident's task list. This task will apply only to the open incident. This helps if your investigation leads you in new directions and you think of new things you need to check. Adding these as tasks ensures that you won't forget to do them, and that there will be a record of what you did, that other analysts and managers can benefit from. -1. Select **+ Add task** from the top of the **Incident tasks (Preview)** panel. +1. Select **+ Add task** from the top of the **Incident tasks** panel. :::image type="content" source="media/work-with-tasks/add-task-ad-hoc-1.png" alt-text="Screenshot shows how to manually add a task to your task list."::: |
service-bus-messaging | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/policy-reference.md | Title: Built-in policy definitions for Azure Service Bus Messaging description: Lists Azure Policy built-in policy definitions for Azure Service Bus Messaging. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 01/30/2024 Last updated : 02/06/2024 |
service-bus-messaging | Service Bus Dotnet Get Started With Queues | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/service-bus-dotnet-get-started-with-queues.md | |
service-bus-messaging | Service Bus Dotnet How To Use Topics Subscriptions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/service-bus-dotnet-how-to-use-topics-subscriptions.md | |
service-bus-messaging | Service Bus Integrate With Rabbitmq | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/service-bus-integrate-with-rabbitmq.md | |
service-bus-messaging | Service Bus Java How To Use Jms Api Amqp | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/service-bus-java-how-to-use-jms-api-amqp.md | - - seo-java-july2019 - - seo-java-august2019 - - seo-java-september2019 - devx-track-java - devx-track-extended-java - ignite-2023 |
service-bus-messaging | Service Bus Manage With Ps | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/service-bus-manage-with-ps.md | Title: Use PowerShell to manage Azure Service Bus resources | Microsoft Docs description: This article explains how to use Azure PowerShell module to create and manage Service Bus entities (namespaces, queues, topics, subscriptions). Last updated 09/20/2021-+ # Use PowerShell to manage Service Bus resources |
service-bus-messaging | Service Bus Migrate Standard Premium | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/service-bus-migrate-standard-premium.md | Title: Migrate Azure Service Bus namespaces - standard to premium description: Guide to allow migration of existing Azure Service Bus standard namespaces to premium - Last updated 08/17/2023 |
service-bus-messaging | Service Bus Outages Disasters | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/service-bus-outages-disasters.md | Title: Insulate Azure Service Bus applications against outages and disasters description: This article provides techniques to protect applications against a potential Azure Service Bus outage. - Last updated 12/15/2022 |
service-bus-messaging | Service Bus Partitioning | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/service-bus-partitioning.md | Last updated 10/12/2022 ms.devlang: csharp - devx-track-csharp- - ignite-2022 - ignite-2023 |
service-bus-messaging | Service Bus Performance Improvements | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/service-bus-performance-improvements.md | description: Describes how to use Service Bus to optimize performance when excha Last updated 06/30/2023 ms.devlang: csharp-+ # Best Practices for performance improvements using Service Bus Messaging |
service-bus-messaging | Service Bus Premium Messaging | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/service-bus-premium-messaging.md | Title: Azure Service Bus premium and standard tiers description: This article describes standard and premium tiers of Azure Service Bus. Compares these tiers and provides technical differences. -+ Last updated 05/02/2023 |
service-bus-messaging | Service Bus Python How To Use Queues | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/service-bus-python-how-to-use-queues.md | |
service-bus-messaging | Transport Layer Security Audit Minimum Version | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/transport-layer-security-audit-minimum-version.md | |
service-bus-messaging | Transport Layer Security Configure Client Version | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/transport-layer-security-configure-client-version.md | |
service-bus-messaging | Transport Layer Security Configure Minimum Version | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/transport-layer-security-configure-minimum-version.md | |
service-bus-messaging | Transport Layer Security Enforce Minimum Version | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/transport-layer-security-enforce-minimum-version.md | |
service-connector | Concept Region Support | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/concept-region-support.md | |
service-connector | Concept Service Connector Internals | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/concept-service-connector-internals.md | description: Learn about Service Connector internals, the architecture, the conn -+ Last updated 01/17/2023 |
service-connector | How To Integrate Confluent Kafka | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/how-to-integrate-confluent-kafka.md | |
service-connector | How To Integrate Cosmos Cassandra | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/how-to-integrate-cosmos-cassandra.md | |
service-connector | How To Integrate Cosmos Db | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/how-to-integrate-cosmos-db.md | This table indicates that all combinations of client types and authentication me Use the connection details below to connect compute services to Azure Cosmos DB. This page also shows default environment variable names and values (or Spring Boot configuration) you get when you create the service connection, as well as sample code. For each example below, replace the placeholder texts `<mongo-db-admin-user>`, `<password>`, `<Azure-Cosmos-DB-API-for-MongoDB-account>`, `<subscription-ID>`, `<resource-group-name>`, `<client-secret>`, and `<tenant-id>` with your own information. For more information about naming conventions, check the [Service Connector internals](concept-service-connector-internals.md#configuration-naming-convention) article. -### Secret / Connection string --#### SpringBoot client type --| Default environment variable name | Description | Example value | -|--|-|-| -| spring.data.mongodb.database | Your database | `<database-name>` | -| spring.data.mongodb.uri | Your database URI | `mongodb://<mongo-db-admin-user>:<password>@<mongo-db-server>.mongo.cosmos.azure.com:10255/?ssl=true&replicaSet=globaldb&retrywrites=false&maxIdleTimeMS=120000&appName=@<mongo-db-server>@` | --#### Other client types --| Default environment variable name | Description | Example value | -|--|-|-| -| AZURE_COSMOS_CONNECTIONSTRING | MongoDB API connection string | `mongodb://<mongo-db-admin-user>:<password>@<mongo-db-server>.mongo.cosmos.azure.com:10255/?ssl=true&replicaSet=globaldb&retrywrites=false&maxIdleTimeMS=120000&appName=@<mongo-db-server>@` | --#### Sample code --Refer to the steps and code below to connect to Azure Cosmos DB for MongoDB using a connection string. 
- ### System-assigned managed identity | Default environment variable name | Description | Example value | Refer to the steps and code below to connect to Azure Cosmos DB for MongoDB usin Refer to the steps and code below to connect to Azure Cosmos DB for MongoDB using a user-assigned managed identity. [!INCLUDE [code sample for mongo](./includes/code-cosmosmongo-me-id.md)] +### Connection string ++#### SpringBoot client type ++| Default environment variable name | Description | Example value | +|--|-|-| +| spring.data.mongodb.database | Your database | `<database-name>` | +| spring.data.mongodb.uri | Your database URI | `mongodb://<mongo-db-admin-user>:<password>@<mongo-db-server>.mongo.cosmos.azure.com:10255/?ssl=true&replicaSet=globaldb&retrywrites=false&maxIdleTimeMS=120000&appName=@<mongo-db-server>@` | ++#### Other client types ++| Default environment variable name | Description | Example value | +|--|-|-| +| AZURE_COSMOS_CONNECTIONSTRING | MongoDB API connection string | `mongodb://<mongo-db-admin-user>:<password>@<mongo-db-server>.mongo.cosmos.azure.com:10255/?ssl=true&replicaSet=globaldb&retrywrites=false&maxIdleTimeMS=120000&appName=@<mongo-db-server>@` | ++#### Sample code ++Refer to the steps and code below to connect to Azure Cosmos DB for MongoDB using a connection string. + ### Service principal | Default environment variable name | Description | Example value | |
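The tables above document that Service Connector exposes the MongoDB API connection string to non-Spring clients through the `AZURE_COSMOS_CONNECTIONSTRING` environment variable. As a minimal sketch of consuming that convention, the code below reads the variable and splits out the pieces an app typically needs; the sample value is a simplified placeholder in the documented format (not real credentials), and `read_cosmos_settings` is a hypothetical helper, not part of any Azure SDK.

```python
import os
from urllib.parse import urlsplit, parse_qs

# Placeholder value in the documented AZURE_COSMOS_CONNECTIONSTRING format.
# In a connected app, Service Connector sets this variable for you.
os.environ.setdefault(
    "AZURE_COSMOS_CONNECTIONSTRING",
    "mongodb://admin:secret@contoso.mongo.cosmos.azure.com:10255/"
    "?ssl=true&replicaSet=globaldb&retrywrites=false",
)

def read_cosmos_settings() -> dict:
    """Parse the Service Connector-provided connection string into its parts."""
    uri = os.environ["AZURE_COSMOS_CONNECTIONSTRING"]
    parts = urlsplit(uri)
    options = {k: v[0] for k, v in parse_qs(parts.query).items()}
    return {
        "host": parts.hostname,
        "port": parts.port,
        "user": parts.username,
        "tls_required": options.get("ssl") == "true",
    }

settings = read_cosmos_settings()
print(settings["host"], settings["port"], settings["tls_required"])
```

In practice you would pass the raw URI straight to a MongoDB driver rather than parse it yourself; the parsing here only illustrates what the documented connection string carries.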
service-connector | How To Integrate Cosmos Gremlin | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/how-to-integrate-cosmos-gremlin.md | |
service-connector | How To Integrate Cosmos Sql | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/how-to-integrate-cosmos-sql.md | |
service-connector | How To Integrate Cosmos Table | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/how-to-integrate-cosmos-table.md | |
service-connector | How To Integrate Event Hubs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/how-to-integrate-event-hubs.md | |
service-connector | How To Integrate Key Vault | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/how-to-integrate-key-vault.md | |
service-connector | How To Integrate Mysql | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/how-to-integrate-mysql.md | |
service-connector | How To Integrate Postgres | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/how-to-integrate-postgres.md | |
service-connector | How To Integrate Redis Cache | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/how-to-integrate-redis-cache.md | |
service-connector | How To Integrate Service Bus | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/how-to-integrate-service-bus.md | description: Integrate Azure Service Bus into your application with Service Conn - Last updated 02/02/2024 |
service-connector | How To Integrate Signalr | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/how-to-integrate-signalr.md | + - kr2b-contr-experiment # Integrate Azure SignalR Service with Service Connector |
service-connector | How To Integrate Sql Database | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/how-to-integrate-sql-database.md | |
service-connector | How To Integrate Storage Blob | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/how-to-integrate-storage-blob.md | description: Integrate Azure Blob Storage into your application with Service Con - Last updated 02/02/2024 |
service-connector | How To Integrate Storage File | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/how-to-integrate-storage-file.md | |
service-connector | How To Integrate Storage Queue | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/how-to-integrate-storage-queue.md | |
service-connector | How To Integrate Storage Table | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/how-to-integrate-storage-table.md | |
service-connector | How To Troubleshoot Front End Error | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/how-to-troubleshoot-front-end-error.md | |
service-connector | Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/overview.md | description: Understand typical use case scenarios for Service Connector, and le - Last updated 10/19/2023 |
service-connector | Quickstart Cli App Service Connection | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/quickstart-cli-app-service-connection.md | |
service-connector | Quickstart Cli Spring Cloud Connection | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/quickstart-cli-spring-cloud-connection.md | Title: Quickstart - Create a service connection in Azure Spring Apps with the Azure CLI description: Quickstart showing how to create a service connection in Azure Spring Apps with the Azure CLI -displayName: Last updated 10/31/2022 ms.devlang: azurecli-+ # Quickstart: Create a service connection in Azure Spring Apps with the Azure CLI |
service-connector | Quickstart Portal App Service Connection | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/quickstart-portal-app-service-connection.md | description: Quickstart showing how to create a service connection in App Servic - Last updated 10/05/2023 #Customer intent: As an app developer, I want to connect several services together so that I can ensure I have the right connectivity to access my Azure resources. |
service-connector | Quickstart Portal Container Apps | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/quickstart-portal-container-apps.md | description: This quickstart shows how to create a service connection in Azure C - Last updated 10/31/2023- #Customer intent: As an app developer, I want to connect Azure Container Apps to a storage account in the Azure portal using Service Connector. |
service-connector | Quickstart Portal Spring Cloud Connection | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/quickstart-portal-spring-cloud-connection.md | + - kr2b-contr-experiment #Customer intent: As an app developer, I want to connect an application deployed to Azure Spring Apps to a storage account in the Azure portal. |
service-connector | Tutorial Csharp Webapp Storage Cli | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/tutorial-csharp-webapp-storage-cli.md | |
service-connector | Tutorial Django Webapp Postgres Cli | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/tutorial-django-webapp-postgres-cli.md | Title: 'Tutorial: Using Service Connector to build a Django app with Postgres on Azure App Service' description: Create a Python web app with a PostgreSQL database and deploy it to Azure. The tutorial uses the Django framework, the app is hosted on Azure App Service on Linux, and the App Service and Database is connected with Service Connector. ms.devlang: python-+ |
service-connector | Tutorial Java Spring Confluent Kafka | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/tutorial-java-spring-confluent-kafka.md | Title: 'Tutorial: Deploy a Spring Boot app connected to Apache Kafka on Confluent Cloud with Service Connector in Azure Spring Apps' description: Create a Spring Boot app connected to Apache Kafka on Confluent Cloud with Service Connector in Azure Spring Apps. ms.devlang: java-+ |
service-connector | Tutorial Java Spring Mysql | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/tutorial-java-spring-mysql.md | |
service-connector | Tutorial Portal Key Vault | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/tutorial-portal-key-vault.md | description: Tutorial showing how to store your web app's secrets in Azure Key V - Last updated 10/31/2023 When no longer needed, delete the resource group and all related resources creat ## Next steps > [!div class="nextstepaction"]-> [Service Connector internals](./concept-service-connector-internals.md) +> [Service Connector internals](./concept-service-connector-internals.md) |
service-fabric | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/policy-reference.md | |
service-fabric | Service Fabric Best Practices Security | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-best-practices-security.md | |
spring-apps | How To Staging Environment | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/enterprise/how-to-staging-environment.md | To build the application, follow these steps: 1. Create the app in your Azure Spring Apps instance: + ### [Standard plan](#tab/standard) + ```azurecli az spring app create \ --resource-group <resource-group-name> \ To build the application, follow these steps: --assign-endpoint ``` + ### [Enterprise plan](#tab/enterprise) ++ ```azurecli + az spring app create \ + --resource-group <resource-group-name> \ + --service <Azure-Spring-Apps-instance-name> \ + --name demo \ + --assign-endpoint + ``` + 1. Deploy the app to Azure Spring Apps: ```azurecli To build the application, follow these steps: 1. Create the green deployment: + ### [Standard plan](#tab/standard) + ```azurecli az spring app deployment create \ --resource-group <resource-group-name> \ To build the application, follow these steps: --artifact-path target\hellospring-0.0.1-SNAPSHOT.jar ``` + ### [Enterprise plan](#tab/enterprise) ++ ```azurecli + az spring app deployment create \ + --resource-group <resource-group-name> \ + --service <Azure-Spring-Apps-instance-name> \ + --app demo \ + --name green \ + --artifact-path target\hellospring-0.0.1-SNAPSHOT.jar + ``` + ## View apps and deployments Use the following steps to view deployed apps. |
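The staging workflow excerpted above creates a `green` deployment next to the default one but ends before showing the swap. As a hedged sketch (resource names in angle brackets are placeholders, and the `az spring app set-deployment` command is an assumption from the same Azure CLI `spring` extension, shown commented because it needs a live Azure context), promoting the verified staging deployment could look like this:

```shell
# Placeholder values from the article's own examples.
resource_group="<resource-group-name>"
service_name="<Azure-Spring-Apps-instance-name>"
app_name="demo"
staging_deployment="green"

# Route production traffic to the green deployment once it's verified:
# az spring app set-deployment \
#   --resource-group "$resource_group" \
#   --service "$service_name" \
#   --name "$app_name" \
#   --deployment "$staging_deployment"
echo "would promote $staging_deployment to production for $app_name"
```

The previous production deployment stays available, so the same command can swap back if the new build misbehaves.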
spring-apps | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/enterprise/policy-reference.md | Title: Built-in policy definitions for Azure Spring Apps description: Lists Azure Policy built-in policy definitions for Azure Spring Apps. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 01/30/2024 Last updated : 02/06/2024 |
static-web-apps | Local Development | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/static-web-apps/local-development.md | When published to the cloud, an Azure Static Web Apps site links together many s These services must communicate with each other, and Azure Static Web Apps handles this integration for you in the cloud. -Running locally, however, these services aren't automatically tied together. +However, when you run your application locally these services aren't automatically tied together. To provide a similar experience as to what you get in Azure, the [Azure Static Web Apps CLI](https://github.com/Azure/static-web-apps-cli) provides the following To provide a similar experience as to what you get in Azure, the [Azure Static W > [!NOTE] > Often sites built with a front-end framework require a proxy configuration setting to correctly handle requests under the `api` route. When using the Azure Static Web Apps CLI the proxy location value is `/api`, and without the CLI the value is `http://localhost:7071/api`. -## How it works -The following chart shows how requests are handled locally. ---> [!IMPORTANT] -> Go to `http://localhost:4280` to access the application served by the CLI. --- **Requests** made to port `4280` are forwarded to the appropriate server depending on the type of request.--- **Static content** requests, such as HTML or CSS, are either handled by the internal CLI static content server, or by the front-end framework server for debugging.--- **Authentication and authorization** requests are handled by an emulator, which provides a fake identity profile to your app.--- **Functions Core Tools runtime**<sup>1</sup> handles requests to the site's API.--- **Responses** from all services are returned to the browser as if they were all a single application.--The following article details the steps for running a node-based application, but the process is the same for any language or environment. 
Once you start the UI and the Azure Functions API apps independently, then start the Static Web Apps CLI and point it to the running apps using the following command: --```console -swa start http://localhost:<DEV-SERVER-PORT-NUMBER> --api-location http://localhost:7071 -``` --Optionally, if you use the `swa init` command, the Static Web Apps CLI looks at your application code and build a _swa-cli.config.json_ configuration file for the CLI. When you use the _swa-cli.config.json_ file, you can run `swa start` to launch your application locally. --<sup>1</sup> The Azure Functions Core Tools are automatically installed by the CLI if they aren't already on your system. +The following article details the steps for running a node-based application, but the process is the same for any language or environment. ## Prerequisites Open a terminal to the root folder of your existing Azure Static Web Apps site. |--|--|--| | Serve a specific folder | `swa start ./<OUTPUT_FOLDER_NAME>` | Replace `<OUTPUT_FOLDER_NAME>` with the name of your output folder. | | Use a running framework development server | `swa start http://localhost:3000` | This command works when you have an instance of your application running under port `3000`. Update the port number if your configuration is different. |-| Start a Functions app in a folder | `swa start ./<OUTPUT_FOLDER_NAME> --api-location ./api` | Replace `<OUTPUT_FOLDER_NAME>` with the name of your output folder. This command expects your application's API to have files in the _api_ folder. Update this value if your configuration is different. | +| Start a Functions app in a folder | `swa start ./<OUTPUT_FOLDER_NAME> --api-location ./api` | Replace `<OUTPUT_FOLDER_NAME>` with the name of your output folder. This command expects your application's API to have files in the `api` folder. Update this value if your configuration is different. 
| | Use a running Functions app | `swa start ./<OUTPUT_FOLDER_NAME> --api-location http://localhost:7071` | Replace `<OUTPUT_FOLDER_NAME>` with the name of your output folder. This command expects your Azure Functions application to be available through port `7071`. Update the port number if your configuration is different. | ## Authorization and authentication emulation Once logged in: - You can use the `/.auth/me` endpoint, or a function endpoint to retrieve the user's [client principal](./user-information.md). -- Navigating to `/.auth/logout` clears the client principal and logs out the mock user.+- Navigating to `/.auth/logout` clears the client principal and signs out the mock user. ## Debugging |
static-web-apps | Static Web Apps Cli Api Server | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/static-web-apps/static-web-apps-cli-api-server.md | + + Title: API server Azure Static Web Apps CLI +description: API server Azure Static Web Apps CLI ++++ Last updated : 02/02/2024++++# Start the API server with the Azure Static Web Apps CLI ++In Azure Static Web Apps, you can use the [integrated managed Functions](/azure/static-web-apps/apis-functions) to add API endpoints to your application. You can run an Azure Functions app locally using the [Azure Functions Core Tools CLI](/azure/azure-functions/functions-run-local). The Core Tools CLI lets you run and debug your API endpoints locally. ++You can start the core tools manually or automatically. ++## Manual start ++To use the SWA CLI emulator alongside the API server: ++1. Start the API server using the Azure Functions Core Tools CLI or the [Visual Studio Code Extension](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-azurefunctions). ++ Once the core tools are running, copy the URL of the local API server. + + ```bash + func host start + ``` ++1. In a separate terminal, start the SWA CLI using the `--api-devserver-url` option to pass it the local API server URL. ++ For example: + + ```bash + swa start ./my-dist --api-devserver-url http://localhost:7071 + ``` ++## Automatic start ++To set up an automatic start, you first need to have an Azure Functions application project located in an `api` folder in your local development environment. ++1. Launch the API server alongside the SWA emulator: ++ ```bash + swa start ./my-dist --api-location ./api + ``` ++1. Alternatively, combine the launch with a running dev server: ++ ```bash + swa start http://localhost:3000 --api-location ./api + ``` + +## Next steps ++> [!div class="nextstepaction"] +> [Deploy to Azure](static-web-apps-cli-deploy.md) |
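The manual-start flow in the API server article above can be sketched as a pair of terminal sessions; `7071` is the Functions host default port, and `./my-dist` is a placeholder output folder taken from the article's own example:

```shell
# The Functions Core Tools host defaults to this port.
api_port=7071

# Terminal 1: start the Azure Functions host.
# func host start

# Terminal 2: point the SWA CLI at the running API server.
# swa start ./my-dist --api-devserver-url "http://localhost:$api_port"
echo "http://localhost:$api_port"
```

Requests to the SWA CLI's `/api` route are then proxied to the Functions host on that port.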
static-web-apps | Static Web Apps Cli Configuration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/static-web-apps/static-web-apps-cli-configuration.md | The Azure Static Web Apps (SWA) CLI gets configuration information for your stat The configuration file can contain multiple configurations, each identified by a unique configuration name. - If only a single configuration is present in the *swa-cli.config.json* file, `swa start` uses it by default.-- If options are loaded from a config file, then command line options are ignored. For example, if you run `swa start app --ssl`, the `ssl=true` option is not be picked up by the CLI. -## Configuration file example +- If options are loaded from a config file, then command line options are ignored. ++## Example configuration file ++The following code snippet shows the configuration file's shape. ```json { The configuration file can contain multiple configurations, each identified by a } ``` +When you only have one configuration section, as shown by this example, then the `swa start` command automatically uses these values. + ## Initialize a configuration file -Use `swa init` to kickstart the workflow to create a configuration file for a new or existing project. If the project exists, `swa init` tries to guess the configuration settings for you. +You can initialize your configuration file with the `swa init` command. If you run the command against an existing project, then `swa init` tries to guess the configuration settings for you. By default, the process creates these settings in a *swa-cli.config.json* in the current working directory of your project. This is the default file name and location that `swa` uses when searching for project configuration values. By default, the process creates these settings in a *swa-cli.config.json* in the swa --config <PATH> ``` -If the file contains only one named configuration, then it is used by default. 
If multiple configurations are defined, you need to specify the one to use as an option. +If the file contains only one named configuration, then that configuration is used by default. If multiple configurations are defined, then you pass the desired configuration name in as an option. ```azstatic-cli-swa --config-name +swa --<CONFIG_NAME> ``` When the configuration file option is used, the settings are stored in JSON format. Once created, you can manually edit the file to update settings or use `swa init` to make updates. ## View configuration -The Static Webs CLI provides a `--print-config` option so you can determine resolved options for your current setup. +The Static Web Apps CLI provides a `--print-config` option so you can review your current configuration. -Here is an example of what that output looks like when run on a new project with default settings. +Here's an example of what that output looks like when run on a new project with default settings. ```azstatic-cli swa --print-config Running `swa --print-config` provides the current configuration defaults. ## Validate configuration -The swa-cli.config.json file can be validated against the following schema: https://aka.ms/azure/static-web-apps-cli/schema +You can validate the *swa-cli.config.json* file against the following schema: https://aka.ms/azure/static-web-apps-cli/schema |
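As an illustration of the multiple-configuration shape the configuration article describes, a hypothetical *swa-cli.config.json* with two named entries might look like the following (the names `app` and `app-prod` and the folder paths are invented for this example; the `$schema` URL is the one the article itself references):

```json
{
  "$schema": "https://aka.ms/azure/static-web-apps-cli/schema",
  "configurations": {
    "app": {
      "appLocation": ".",
      "outputLocation": "./dist"
    },
    "app-prod": {
      "appLocation": ".",
      "outputLocation": "./build"
    }
  }
}
```

With more than one entry like this, you pass the desired configuration name when running `swa` commands; with a single entry, the CLI uses it by default.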
static-web-apps | Static Web Apps Cli Deploy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/static-web-apps/static-web-apps-cli-deploy.md | -The `deploy` command deploys the current project to Azure Static Web Apps. +The Azure Static Web Apps CLI (SWA CLI) features the `deploy` command to deploy the current project to Azure Static Web Apps. -Some common use cases include: +Common deployment scenarios include: -- Deploy a front-end app without an API-- Deploy a front-end app with an API-- Deploy a Blazor app+- A front-end app without an API +- A front-end app with an API +- Blazor apps ## Deployment token -The SWA CLI supports deploying using a deployment token. This is usually useful when deploying from a CI/CD environment. You can get a deployment token either from: +The SWA CLI supports deploying using a deployment token to enable setups in CI/CD environments. -- The [Azure portal](https://portal.azure.com/): **Home → Static Web App → Your Instance → Overview → Manage deployment token**+You can get a deployment token from: -- If you are using the [Azure CLI](https://aka.ms/azcli), you can get the deployment token of your project using the following command:+- **Azure portal**: **Home → Static Web App → Your Instance → Overview → Manage deployment token** -```azstatic-cli -az staticwebapp secrets list --name <APPLICATION_NAME> --query "properties.apiKey" -``` +- **Azure CLI**: Using the `secrets list` command: -- If you are using the Azure Static Web Apps CLI, you can use the following command:+ ```azstatic-cli + az staticwebapp secrets list --name <APPLICATION_NAME> --query "properties.apiKey" + ``` -```azstatic-cli -swa deploy --print-token -``` +- **Azure Static Web Apps CLI**: Using the `deploy` command: -You can then use that value with the `--deployment-token <token>` or you can create an environment variable called `SWA_CLI_DEPLOYMENT_TOKEN` and set it to the deployment token. 
+ ```azstatic-cli + swa deploy --print-token + ``` ++You can then use the token value with the `--deployment-token <TOKEN>` option, or you can create an environment variable called `SWA_CLI_DEPLOYMENT_TOKEN` and set it to the deployment token. > [!IMPORTANT] > Don't store deployment tokens in a public repository. ## Deploy a front-end app without an API -You can deploy a front-end application without an API to Azure Static Web Apps by running the following steps: +You can deploy a front-end application without an API to Azure Static Web Apps. If your front-end application requires a build step, run `swa build` or refer to your application build instructions. -If your front-end application requires a build step, run `swa build` or refer to your application build instructions. +Select the option that best suits your needs to configure your deployment. -* **Option 1:** From build folder you would like to deploy, run the deploy command: +- **Option 1:** From the build folder you would like to deploy, run the deploy command: ```azstatic-cli cd build/ If your front-end application requires a build step, run `swa build` or refer to > [!NOTE] > The *build* folder must contain the static content of your app to be deployed. -* **Option 2:** You can also deploy a specific folder: +- **Option 2:** You can also deploy a specific folder: 1. If your front-end application requires a build step, run `swa build` or refer to your application build instructions. If your front-end application requires a build step, run `swa build` or refer to ## Deploy a front-end app with an API -To deploy both the front-end app and an API to Azure Static Web Apps, use the following steps. +Use the following steps to deploy an application that has API endpoints. 1. If your front-end application requires a build step, run `swa build` or refer to your application build instructions. -2. Make sure the API language runtime version in the *staticwebapp.config.json* file is set correctly, for example: +1. 
Ensure the API language runtime version in the *staticwebapp.config.json* file is set correctly, for example: -```json -{ - "platform": { - "apiRuntime": "node:16" - } -} -``` --> [!NOTE] -> If your project doesn't have the *staticwebapp.config.json* file, add one under your `outputLocation` folder. + ```json + { + "platform": { + "apiRuntime": "node:16" + } + } + ``` + + > [!NOTE] + > If your project doesn't have the *staticwebapp.config.json* file, add one under your `outputLocation` folder. -3. Deploy your app: +1. Deploy your app: -```azstatic-cli -swa deploy ./my-dist --api-location ./api -``` + ```azstatic-cli + swa deploy ./my-dist --api-location ./api + ``` ### Deploy a Blazor app -To deploy a Blazor app with an API to Azure Static Web Apps, use the following steps: +You can deploy a Blazor app using the following steps. 1. Build your Blazor app in **Release** mode: -```azstatic-cli -dotnet publish -c Release -o bin/publish -``` + ```azstatic-cli + dotnet publish -c Release -o bin/publish + ``` -2. From the root of your project, run the deploy command: +1. From the root of your project, run the deploy command: -```azstatic-cli -swa deploy ./bin/publish/wwwroot --api-location ./Api -``` + ```azstatic-cli + swa deploy ./bin/publish/wwwroot --api-location ./Api + ``` -## Deploy using the `swa-cli.config.json` +## Deploy using a configuration file > [!NOTE] > The path for `outputLocation` must be relative to the `appLocation`. 
-If you are using a [`swa-cli.config.json`](./static-web-apps-cli-configuration.md) configuration file in your project and have a single configuration entry, for example: --```json -{ - "configurations": { - "my-app": { - "appLocation": "./", - "apiLocation": "api", - "outputLocation": "frontend", - "start": { - "outputLocation": "frontend" - }, - "deploy": { - "outputLocation": "frontend" +If you're using a [`swa-cli.config.json`](./static-web-apps-cli-configuration.md) configuration file in your project with a single configuration entry, then you can deploy your application by running the following steps. ++For reference, an example of a single configuration entry looks like the following code snippet. ++ ```json + { + "configurations": { + "my-app": { + "appLocation": "./", + "apiLocation": "api", + "outputLocation": "frontend", + "start": { + "outputLocation": "frontend" + }, + "deploy": { + "outputLocation": "frontend" + } } } }-} -``` --Then you can deploy your application by running the following steps: + ``` 1. If your front-end application requires a build step, run `swa build` or refer to your application build instructions. -2. Deploy your app: +2. Deploy your app. 
-```azstatic-cli -swa deploy -``` --If you have multiple configuration entries, you can provide the entry ID to specify which one to use: --```azstatic-cli -swa deploy my-otherapp -``` + ```azstatic-cli + swa deploy + ``` + + If you have multiple configuration entries, you can provide the entry ID to specify which one to use: + + ```azstatic-cli + swa deploy my-otherapp + ``` ## Options -Here are the options you can use with `swa deploy`: +The following are options you can use with `swa deploy`: - `-a, --app-location <path>`: the folder containing the source code of the front-end application (default: "`.`") - `-i, --api-location <path>`: the folder containing the source code of the API application Here are the options you can use with `swa deploy`: - `-C, --client-id <clientId>`: Azure client ID - `-CS, --client-secret <clientSecret>`: Azure client secret - `-n, --app-name <appName>`: Azure Static Web App application name-- `-cc, --clear-credentials`: clear persisted credentials before login (default: `false`)+- `-cc, --clear-credentials`: clear persisted credentials before sign in (default: `false`) - `-u, --use-keychain`: enable using the operating system native keychain for persistent credentials (default: `true`) - `-nu, --no-use-keychain`: disable using the operating system native keychain - `-h, --help`: display help for command Deploy to a specific environment. ```azstatic-cli swa deploy --env production ```++## Next steps ++> [!div class="nextstepaction"] +> [Configure your deployment](static-web-apps-cli-configuration.md) |
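The deploy article above notes that the deployment token can come from an environment variable instead of the `--deployment-token` flag. A minimal sketch of that setup (the token value is a placeholder, and the `swa deploy` invocation is shown commented because it requires a real app and token):

```shell
# Placeholder: obtain the real token from the portal, `az staticwebapp secrets list`,
# or `swa deploy --print-token`. Never commit this value to a public repository.
export SWA_CLI_DEPLOYMENT_TOKEN="<deployment-token>"

# swa deploy picks the token up from the environment, so no flag is needed:
# swa deploy ./my-dist --api-location ./api
test -n "$SWA_CLI_DEPLOYMENT_TOKEN" && echo "token is set"
```

This pattern suits CI/CD environments, where the token is injected as a pipeline secret rather than written on the command line.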
static-web-apps | Static Web Apps Cli Emulator | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/static-web-apps/static-web-apps-cli-emulator.md | + + Title: Emulator Azure Static Web Apps CLI +description: Emulator Azure Static Web Apps CLI ++++ Last updated : 12/15/2023++++# Start the Static Web Apps CLI emulator ++Static Web Apps is a cloud-based platform that hosts and runs your web apps. When you run your app locally, you need special tools to help you approximate how your app would run in the cloud. ++The Static Web Apps CLI (SWA CLI) includes an emulator that mimics how your app would run on Azure, but instead runs exclusively on your machine. ++The `swa start` command launches the emulator with default settings. By default, the emulator uses port `4280`. ++For more information about individual commands, see the [CLI reference](/azure/static-web-apps/static-web-apps-cli#swa-start). ++## Serve static files from your filesystem ++The SWA CLI allows you to directly serve your static content from your filesystem with no other required tools. You can either serve the static content from your current directory or a specific folder. ++| Serve from... | Command | Notes | +|||| +| Current folder | `swa start` | By default, the CLI starts and serves static content (HTML, image, script, and CSS files) from the current working directory. | +| Specific folder | `swa start ./my-dist` | You can override the behavior to start the emulator with a different static assets folder. | ++## Use development server ++As you develop your app's front-end, you might want to use the framework's default development server. Using a framework's server allows you to take advantage of benefits like live reload and hot module replacement (HMR). ++For example, Angular developers often use `ng serve` or `npm start` to run the development server. 
++You can set up the SWA CLI to proxy requests to the dev server, which gives you the benefits of your framework's tooling while still working with the SWA CLI. ++There are two steps to using a framework's dev server along with the SWA CLI: ++1. Start your framework's local dev server as usual. Make sure to note the URL (including the port) used by the framework. ++1. Start the SWA CLI in a new terminal, passing in the dev server URL. ++ ```bash + swa start <DEV_SERVER_URL> + ``` ++> [!NOTE] +> Make sure to replace the `<DEV_SERVER_URL>` placeholder with your own value. ++### Launch dev server ++You can simplify your workflow further by having the SWA CLI launch the dev server for you. ++You can pass a custom command to the `--run` parameter of the `swa start` command. ++```bash +swa start <DEV_SERVER_URL> --run <DEV_SERVER_LAUNCH_COMMAND> +``` ++Here are some examples of starting the emulator with a few different frameworks: ++| Framework | Command | +||| +| React | `swa start http://localhost:3000 --run "npm start"` | +| Blazor | `swa start http://localhost:5000 --run "dotnet watch run"` | +| Jekyll | `swa start http://localhost:4000 --run "jekyll serve"` | ++You can also use the `--run` parameter if you want to run a custom script as you launch the dev server. ++```bash +swa start http://localhost:4200 --run "./startup.sh" +``` ++Using the above command, you can access the application with the emulated services from `http://localhost:4280`. +++## Next steps ++> [!div class="nextstepaction"] +> [Start the API server](static-web-apps-cli-api-server.md) |
static-web-apps | Static Web Apps Cli Install | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/static-web-apps/static-web-apps-cli-install.md | + + Title: Install Azure Static Web Apps CLI +description: Learn how to install Azure Static Web Apps CLI ++++ Last updated : 02/05/2024++++# Install the Static Web Apps CLI (SWA CLI) ++You have different options available to install the Azure Static Web Apps CLI. ++| Resource | Command | +||| +| [`npm`](https://docs.npmjs.com/cli/v6/commands/npm-install) | `npm install -g @azure/static-web-apps-cli` | +| [`yarn`](https://classic.yarnpkg.com/lang/en/docs/cli/install/) | `yarn add @azure/static-web-apps-cli` | +| [`pnpm`](https://pnpm.io/cli/install) | `pnpm install -g @azure/static-web-apps-cli` | ++> [!NOTE] +> SWA CLI only supports Node versions 16 and below. ++## Validate install ++Installing the package makes the `swa` command available on your machine. You can verify the installation is successful by requesting the CLI version. ++```bash +swa --version +# When installed, the version number is printed out +``` ++## Usage ++To begin using the CLI, you can run the `swa` command alone and follow the interactive prompts. ++The SWA CLI interactive prompts guide you through the options that are important as you develop your web app. ++Run the `swa` command to begin setting up your application. ++```bash +swa +``` ++The `swa` command generates a configuration file, builds your project, and gives you the option to deploy to Azure. ++For details on all the SWA CLI commands, see the [CLI reference](static-web-apps-cli.yml). ++## Using npx ++You can run any Static Web Apps CLI commands directly using npx. For example: ++```bash +npx @azure/static-web-apps-cli --version +``` ++Alternatively, you can start the emulator via the `start` command: ++```bash +npx @azure/static-web-apps-cli start +``` ++## Next steps ++> [!div class="nextstepaction"] +> [Start the emulator](static-web-apps-cli-emulator.md) |
static-web-apps | Static Web Apps Cli Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/static-web-apps/static-web-apps-cli-overview.md | + + Title: About the Azure Static Web Apps CLI +description: Learn how to use the Azure Static Web Apps CLI ++++ Last updated : 02/06/2024++++# Azure Static Web Apps CLI overview ++Azure Static Web Apps websites are hosted in the cloud and often connect together a collection of cloud services. During development, and any time you need to run your app locally, you need tools to mimic how your app runs in the cloud. ++The Static Web Apps CLI (SWA CLI) includes a series of local services that approximate how your app would run on Azure, but instead they run exclusively on your machine. ++The Azure Static Web Apps CLI provides the following: ++- A local static site server +- A proxy to the front-end framework development server +- A proxy to your API endpoints - available through Azure Functions Core Tools +- A mock authentication and authorization server +- Local routes and configuration settings enforcement +++## Get started ++Get started working with the Static Web Apps CLI with the following resources. ++| Resource | Description | +||| +| [Install the Static Web Apps CLI (SWA CLI)](static-web-apps-cli-install.md) | Install the Azure Static Web Apps CLI to your machine. | +| [Configure your environment](static-web-apps-cli-configuration.md) | Set up how your application reads configuration information. | +| [Start the website emulator](static-web-apps-cli-emulator.md) | Start the service to locally serve your website. | +| [Start the local API server](static-web-apps-cli-api-server.md) | Start the service to locally serve your API endpoints. | +| [Deploy to Azure](static-web-apps-cli-deploy.md) | Deploy your application to production on Azure. | ++> [!NOTE] +> Often sites built with a front-end framework require a proxy configuration setting to correctly handle requests under the `api` route. 
When using the Azure Static Web Apps CLI, the proxy location value is `/api`, and without the CLI the value is `http://localhost:7071/api`. ++## Next steps ++> [!div class="nextstepaction"] +> [Install the CLI](static-web-apps-cli-install.md) |
storage | Assign Azure Role Data Access | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/assign-azure-role-data-access.md | To assign a role scoped to a blob container or a storage account, you should spe The scope for a container is in the form: ```-/subscriptions/<subscription>/resourceGroups/<resource-group>/providers/Microsoft.Storage/storageAccounts/<storage-account>/blobServices/default/containers/<container-name> +/subscriptions/<subscription-id>/resourceGroups/<resource-group-name>/providers/Microsoft.Storage/storageAccounts/<storage-account-name>/blobServices/default/containers/<container-name> ``` The scope for a storage account is in the form: ```-/subscriptions/<subscription>/resourceGroups/<resource-group>/providers/Microsoft.Storage/storageAccounts/<storage-account> +/subscriptions/<subscription-id>/resourceGroups/<resource-group-name>/providers/Microsoft.Storage/storageAccounts/<storage-account-name> ``` To assign a role scoped to a storage account, specify a string containing the scope of the container for the `--scope` parameter. -The following example assigns the **Storage Blob Data Contributor** role to a user, scoped to a container named *sample-container*. Make sure to replace the sample values and the placeholder values in brackets with your own values: +The following example assigns the **Storage Blob Data Contributor** role to a user. The role assignment is scoped to the level of the container. 
Make sure to replace the sample values and the placeholder values in brackets (`<>`) with your own values: ```powershell New-AzRoleAssignment -SignInName <email> ` -RoleDefinitionName "Storage Blob Data Contributor" `- -Scope "/subscriptions/<subscription>/resourceGroups/sample-resource-group/providers/Microsoft.Storage/storageAccounts/<storage-account>/blobServices/default/containers/sample-container" + -Scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group-name>/providers/Microsoft.Storage/storageAccounts/<storage-account-name>/blobServices/default/containers/<container-name>" ``` -The following example assigns the **Storage Blob Data Reader** role to a user by specifying the object ID. The role assignment is scoped to a storage account named **storage-account**. Make sure to replace the sample values and the placeholder values in brackets with your own values: +The following example assigns the **Storage Blob Data Reader** role to a user by specifying the object ID. The role assignment is scoped to the level of the storage account. Make sure to replace the sample values and the placeholder values in brackets (`<>`) with your own values: ```powershell New-AzRoleAssignment -ObjectID "ab12cd34-ef56-ab12-cd34-ef56ab12cd34" ` -RoleDefinitionName "Storage Blob Data Reader" `- -Scope "/subscriptions/<subscription>/resourceGroups/sample-resource-group/providers/Microsoft.Storage/storageAccounts/storage-account" + -Scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group-name>/providers/Microsoft.Storage/storageAccounts/<storage-account-name>" ``` Your output should be similar to the following: To assign an Azure role to a security principal with Azure CLI, use the [az role To assign a role scoped to a container, specify a string containing the scope of the container for the `--scope` parameter. 
The scope for a container is in the form: ```-/subscriptions/<subscription>/resourceGroups/<resource-group>/providers/Microsoft.Storage/storageAccounts/<storage-account>/blobServices/default/containers/<container-name> +/subscriptions/<subscription-id>/resourceGroups/<resource-group-name>/providers/Microsoft.Storage/storageAccounts/<storage-account-name>/blobServices/default/containers/<container-name> ``` -The following example assigns the **Storage Blob Data Contributor** role to a user, scoped to the level of the container. Make sure to replace the sample values and the placeholder values in brackets with your own values: +The following example assigns the **Storage Blob Data Contributor** role to a user. The role assignment is scoped to the level of the container. Make sure to replace the sample values and the placeholder values in brackets (`<>`) with your own values: ```azurecli-interactive az role assignment create \ --role "Storage Blob Data Contributor" \ --assignee <email> \- --scope "/subscriptions/<subscription>/resourceGroups/<resource-group>/providers/Microsoft.Storage/storageAccounts/<storage-account>/blobServices/default/containers/<container>" + --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group-name>/providers/Microsoft.Storage/storageAccounts/<storage-account-name>/blobServices/default/containers/<container-name>" ``` For information about assigning roles with Azure CLI at the subscription, resource group, or storage account scope, see [Assign Azure roles using Azure CLI](../../role-based-access-control/role-assignments-cli.md). |
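To make the scope formats above concrete, here's a minimal shell sketch that assembles both scope strings. All of the values below are hypothetical placeholders, not real resources:

```shell
# Hypothetical sample values -- replace with your own
subscription_id="aaaabbbb-0000-1111-2222-333344445555"
resource_group_name="sample-resource-group"
storage_account_name="samplestorage"
container_name="sample-container"

# Scope at the storage account level
account_scope="/subscriptions/${subscription_id}/resourceGroups/${resource_group_name}/providers/Microsoft.Storage/storageAccounts/${storage_account_name}"

# The container-level scope is the account scope plus a fixed suffix
container_scope="${account_scope}/blobServices/default/containers/${container_name}"

echo "${container_scope}"
```

Because the container scope only appends `/blobServices/default/containers/<container-name>` to the account scope, narrowing or widening the role assignment is just a matter of passing the longer or shorter string to `--scope` (Azure CLI) or `-Scope` (PowerShell).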
storage | Storage Quickstart Blobs Dotnet | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-quickstart-blobs-dotnet.md | Title: "Quickstart: Azure Blob Storage library - .NET" description: In this quickstart, you learn how to use the Azure Blob Storage client library for .NET to create a container and a blob in Blob (object) storage. Next, you learn how to download the blob to your local computer, and how to list all of the blobs in a container. Previously updated : 01/30/2024 Last updated : 02/06/2024 ms.devlang: csharp ai-usage: ai-assisted+zone_pivot_groups: azure-blob-storage-quickstart-options # Quickstart: Azure Blob Storage client library for .NET ++> [!NOTE] +> The **Build from scratch** option walks you step by step through the process of creating a new project, installing packages, writing the code, and running a basic console app. This approach is recommended if you want to understand all the details involved in creating an app that connects to Azure Blob Storage. If you prefer to automate deployment tasks and start with a completed project, choose [Start with a template](storage-quickstart-blobs-dotnet.md?pivots=blob-storage-quickstart-template). ++++> [!NOTE] +> The **Start with a template** option uses the Azure Developer CLI to automate deployment tasks and starts you off with a completed project. This approach is recommended if you want to explore the code as quickly as possible without going through the setup tasks. If you prefer step-by-step instructions to build the app, choose [Build from scratch](storage-quickstart-blobs-dotnet.md?pivots=blob-storage-quickstart-scratch). ++ Get started with the Azure Blob Storage client library for .NET. Azure Blob Storage is Microsoft's object storage solution for the cloud, and is optimized for storing massive amounts of unstructured data. + In this article, you follow steps to install the package and try out example code for basic tasks. 
+++In this article, you use the [Azure Developer CLI](/azure/developer/azure-developer-cli/overview) to deploy Azure resources and run a completed console app with just a few commands. ++ [API reference documentation](/dotnet/api/azure.storage.blobs) | [Library source code](https://github.com/Azure/azure-sdk-for-net/tree/master/sdk/storage/Azure.Storage.Blobs) | [Package (NuGet)](https://www.nuget.org/packages/Azure.Storage.Blobs) | [Samples](../common/storage-samples-dotnet.md?toc=/azure/storage/blobs/toc.json#blob-samples) + This video shows you how to start using the Azure Blob Storage client library for .NET. > [!VIDEO cdae65e7-1892-48fe-934a-70edfbe147be] The steps in the video are also described in the following sections. + ## Prerequisites + - Azure subscription - [create one for free](https://azure.microsoft.com/free/) - Azure storage account - [create a storage account](../common/storage-account-create.md)-- Current [.NET SDK](https://dotnet.microsoft.com/download/dotnet) for your operating system. Be sure to get the SDK and not the runtime.+- Latest [.NET SDK](https://dotnet.microsoft.com/download/dotnet) for your operating system. Be sure to get the SDK and not the runtime. ++++- Azure subscription - [create one for free](https://azure.microsoft.com/free/) +- Latest [.NET SDK](https://dotnet.microsoft.com/download/dotnet) for your operating system. This code sample uses .NET 8.0. Be sure to get the SDK and not the runtime. +- [Azure Developer CLI](/azure/developer/azure-developer-cli/install-azd) + ## Setting up + This section walks you through preparing a project to work with the Azure Blob Storage client library for .NET. ### Create the project using System.IO; Console.WriteLine("Hello, World!"); ``` +++With [Azure Developer CLI](/azure/developer/azure-developer-cli/install-azd) installed, you can create a storage account and run the sample code with just a few commands. 
You can run the project in your local development environment, or in a [DevContainer](https://code.visualstudio.com/docs/devcontainers/containers). ++### Initialize the Azure Developer CLI template and deploy resources ++From an empty directory, follow these steps to initialize the `azd` template, provision Azure resources, and get started with the code: ++- Clone the quickstart repository assets from GitHub and initialize the template locally: ++ ```console + azd init --template blob-storage-quickstart-dotnet + ``` ++ You'll be prompted for the following information: ++ - **Environment name**: This value is used as a prefix for all Azure resources created by Azure Developer CLI. The name must be unique across all Azure subscriptions and must be between 3 and 24 characters long. The name can contain numbers and lowercase letters only. ++- Log in to Azure: ++ ```console + azd auth login + ``` +- Provision and deploy the resources to Azure: ++ ```console + azd up + ``` ++ You'll be prompted for the following information: ++ - **Subscription**: The Azure subscription that your resources are deployed to. + - **Location**: The Azure region where your resources are deployed. + + The deployment might take a few minutes to complete. The output from the `azd up` command includes the name of the newly created storage account, which you'll need later to run the code. ++## Run the sample code ++At this point, the resources are deployed to Azure and the project is ready to run. Follow these steps to update the name of the storage account in the code and run the sample console app: ++- **Update the storage account name**: Navigate to the `src` directory and edit `Program.cs`. Find the `<storage-account-name>` placeholder and replace it with the actual name of the storage account created by the `azd up` command. Save the changes. +- **Run the project**: If you're using Visual Studio, press F5 to build and run the code and interact with the console app. 
If you're using the .NET CLI, navigate to your application directory, build the project using `dotnet build`, and run the application using `dotnet run`. +- **Observe the output**: This app creates a test file in your local *data* folder and uploads it to a container in the storage account. The example then lists the blobs in the container and downloads the file with a new name so that you can compare the old and new files. ++To learn more about how the sample code works, see [Code examples](#code-examples). ++When you're finished testing the code, see the [Clean up resources](#clean-up-resources) section to delete the resources created by the `azd up` command. ++ ## Object model Azure Blob Storage is optimized for storing massive amounts of unstructured data. Unstructured data doesn't adhere to a particular data model or definition, such as text or binary data. Blob storage offers three types of resources: The sample code snippets in the following sections demonstrate how to perform th - [Download a blob](#download-a-blob) - [Delete a container](#delete-a-container) + > [!IMPORTANT] > Make sure you've installed the correct NuGet packages and added the necessary using statements in order for the code samples to work, as described in the [setting up](#setting-up) section. +++> [!NOTE] +> The Azure Developer CLI template includes a project with sample code already in place. The following examples provide detail for each part of the sample code. The template implements the recommended passwordless authentication method, as described in the [Authenticate to Azure](#authenticate-to-azure-and-authorize-access-to-blob-data) section. The connection string method is shown as an alternative, but isn't used in the template and isn't recommended for production code. 
++ [!INCLUDE [storage-quickstart-credential-free-include](../../../includes/storage-quickstart-credential-free-include.md)] ### Create a container Create a new container in your storage account by calling the [CreateBlobContainerAsync](/dotnet/api/azure.storage.blobs.blobserviceclient.createblobcontainerasync) method on the `blobServiceClient` object. In this example, the code appends a GUID value to the container name to ensure that it's unique. -Add this code to the end of the `Program.cs` file: ++Add the following code to the end of the `Program.cs` file: + ```csharp // TODO: Replace <storage-account-name> with your actual storage account name To learn more about creating a container, and to explore more code samples, see Upload a blob to a container using [UploadAsync](/dotnet/api/azure.storage.blobs.blobclient.uploadasync). The example code creates a text file in the local *data* directory to upload to the container. -Add the following code to the end of the `Program.cs` class: ++Add the following code to the end of the `Program.cs` file: + ```csharp // Create a local file in the ./data/ directory for uploading and downloading To learn more about uploading blobs, and to explore more code samples, see [Uplo List the blobs in the container by calling the [GetBlobsAsync](/dotnet/api/azure.storage.blobs.blobcontainerclient.getblobsasync) method. + Add the following code to the end of the `Program.cs` file: + ```csharp Console.WriteLine("Listing blobs..."); To learn more about listing blobs, and to explore more code samples, see [List b Download the blob we created earlier by calling the [DownloadToAsync](/dotnet/api/azure.storage.blobs.specialized.blobbaseclient.downloadtoasync) method. The example code appends the string "DOWNLOADED" to the file name so that you can see both files in the local file system. 
+ Add the following code to the end of the `Program.cs` file: + ```csharp // Download the blob to a local file // Append the string "DOWNLOADED" before the .txt extension The following code cleans up the resources the app created by deleting the conta The app pauses for user input by calling `Console.ReadLine` before it deletes the blob, container, and local files. This is a good chance to verify that the resources were created correctly, before they're deleted. + Add the following code to the end of the `Program.cs` file: + ```csharp // Clean up Console.Write("Press any key to begin clean up"); Console.WriteLine("Done"); To learn more about deleting a container, and to explore more code samples, see [Delete and restore a blob container with .NET](storage-blob-container-delete.md). + ## The completed code After completing these steps, the code in your `Program.cs` file should now resemble the following: Done Before you begin the clean-up process, check your *data* folder for the two files. You can open them and observe that they're identical. -After you verify the files, press the **Enter** key to delete the test files and finish the demo. ++## Clean up resources +++After you verify the files and finish testing, press the **Enter** key to delete the test files along with the container you created in the storage account. You can also use [Azure CLI](storage-quickstart-blobs-cli.md#clean-up-resources) to delete resources. ++++When you're done with the quickstart, you can clean up the resources you created by running the following command: ++```console +azd down +``` ++You'll be prompted to confirm the deletion of the resources. Enter `y` to confirm. + ## Next steps |
storage | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/policy-reference.md | Title: Built-in policy definitions for Azure Storage description: Lists Azure Policy built-in policy definitions for Azure Storage. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 01/30/2024 Last updated : 02/06/2024 |
storage | Elastic San Delete | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/elastic-san/elastic-san-delete.md | Your Elastic storage area network (SAN) resources can be deleted at different re ### Windows -To delete iSCSI connections to volumes, you'll need to get **StorageTargetIQN**, **StorageTargetPortalHostName**, and **StorageTargetPortalPort** from your Azure Elastic SAN volume. --Run the following commands to get these values: --```azurepowershell -# Get the target name and iSCSI portal name to connect a volume to a client -$connectVolume = Get-AzElasticSanVolume -ResourceGroupName $resourceGroupName -ElasticSanName $sanName -VolumeGroupName $searchedVolumeGroup -Name $searchedVolume -$connectVolume.storagetargetiqn -$connectVolume.storagetargetportalhostname -$connectVolume.storagetargetportalport -``` --Note down the values for **StorageTargetIQN**, **StorageTargetPortalHostName**, and **StorageTargetPortalPort**, you'll need them for the next commands. --In your compute client, retrieve the sessionID for the Elastic SAN volumes you'd like to disconnect using `iscsicli SessionList`. --Replace **yourStorageTargetIQN**, **yourStorageTargetPortalHostName**, and **yourStorageTargetPortalPort** with the values you kept, then run the following commands from your compute client to disconnect an Elastic SAN volume. +You can use the following script to delete your connections. To execute it, you require the following parameters: +- $ResourceGroupName: Resource Group Name +- $ElasticSanName: Elastic SAN Name +- $VolumeGroupName: Volume Group Name +- $VolumeName: List of Volumes to be disconnected (comma separated) +Copy the script from [here](https://github.com/Azure-Samples/azure-elastic-san/blob/main/PSH%20(Windows)%20Multi-Session%20Connect%20Scripts/ElasticSanDocScripts0523/disconnect.ps1) and save it as a .ps1 file, for example, disconnect.ps1. Then execute it with the required parameters. 
The following is an example of how to run the script: ```-iscsicli RemovePersistentTarget ROOT\ISCSIPRT\0000_0 $yourStorageTargetIQN -1 $yourStorageTargetPortalPort $yourStorageTargetPortalHostName --iscsicli LogoutTarget <sessionID> +./disconnect.ps1 $ResourceGroupName $ElasticSanName $VolumeGroupName $VolumeName ``` ### Linux -To delete iSCSI connections to volumes, you'll need to get **StorageTargetIQN**, **StorageTargetPortalHostName**, and **StorageTargetPortalPort** from your Azure Elastic SAN volume. --Run the following command to get these values: --```azurecli -az elastic-san volume-group list -e $sanName -g $resourceGroupName -v $searchedVolumeGroup -n $searchedVolume -``` +You can use the following script to delete your connections. To execute it, you require the following parameters: -Note down the values for **StorageTargetIQN**, **StorageTargetPortalHostName**, and **StorageTargetPortalPort**, you'll need them for the next commands. +- subscription: Subscription ID +- g: Resource Group Name +- e: Elastic SAN Name +- v: Volume Group Name +- n <vol1, vol2, ...>: Names of the volumes to disconnect, comma separated -Replace **yourStorageTargetIQN**, **yourStorageTargetPortalHostName**, and **yourStorageTargetPortalPort** with the values you kept, then run the following commands from your compute client to connect an Elastic SAN volume. +Copy the script from [here](https://github.com/Azure-Samples/azure-elastic-san/blob/main/CLI%20(Linux)%20Multi-Session%20Connect%20Scripts/disconnect_for_documentation.py) and save it as a .py file, for example, disconnect.py. Then execute it with the required parameters. The following is an example of how you'd run the script: ```-iscsiadm --mode node --target **yourStorageTargetIQN** --portal **yourStorageTargetPortalHostName**:**yourStorageTargetPortalPort** --logout +./disconnect.py --subscription <subid> -g <rgname> -e <esanname> -v <vgname> -n <vol1, vol2> ``` ## Delete a SAN |
stream-analytics | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/policy-reference.md | Title: Built-in policy definitions for Azure Stream Analytics description: Lists Azure Policy built-in policy definitions for Azure Stream Analytics. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 01/30/2024 Last updated : 02/06/2024 |
synapse-analytics | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/policy-reference.md | Title: Built-in policy definitions description: Lists Azure Policy built-in policy definitions for Azure Synapse Analytics. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 01/30/2024 Last updated : 02/06/2024 |
synapse-analytics | Workspace Data Exfiltration Protection | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/security/workspace-data-exfiltration-protection.md | This article will explain data exfiltration protection in Azure Synapse Analytic ## Securing data egress from Synapse workspaces Azure Synapse Analytics workspaces support enabling data exfiltration protection for workspaces. With exfiltration protection, you can guard against malicious insiders accessing your Azure resources and exfiltrating sensitive data to locations outside of your organizationΓÇÖs scope. -At the time of workspace creation, you can choose to configure the workspace with a managed virtual network and additional protection against data exfiltration. When a workspace is created with a [managed virtual network](./synapse-workspace-managed-vnet.md), Data integration and Spark resources are deployed in the managed virtual network. The workspaceΓÇÖs dedicated SQL pools and serverless SQL pools have multi-tenant capabilities and as such, need to exist outside the managed virtual network. For workspaces with data exfiltration protection, resources within the managed virtual network always communicate over [managed private endpoints](./synapse-workspace-managed-private-endpoints.md) and the Synapse SQL resources can only connect to authorized Azure resources (targets of approved managed private endpoint connections from the workspace). +At the time of workspace creation, you can choose to configure the workspace with a managed virtual network and additional protection against data exfiltration. When a workspace is created with a [managed virtual network](./synapse-workspace-managed-vnet.md), Data integration and Spark resources are deployed in the managed virtual network. The workspaceΓÇÖs dedicated SQL pools and serverless SQL pools have multi-tenant capabilities and as such, need to exist outside the managed virtual network. 
For workspaces with data exfiltration protection, resources within the managed virtual network always communicate over [managed private endpoints](./synapse-workspace-managed-private-endpoints.md). When data exfiltration protection is enabled, Synapse SQL resources can connect to and query any authorized Azure Storage using OPENROWSET or EXTERNAL TABLE, since the ingress traffic is not controlled by the data exfiltration protection. However, the egress traffic via [CREATE EXTERNAL TABLE AS SELECT](/sql/t-sql/statements/create-external-table-as-select-transact-sql?view=azure-sqldw-latest&preserve-view=true) will be controlled by the data exfiltration protection. > [!Note] > You cannot change the workspace configuration for managed virtual network and data exfiltration protection after the workspace is created. |
synapse-analytics | Apache Spark Version Support | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/apache-spark-version-support.md | The runtimes have the following advantages: > [!WARNING] > End of Support Notification for Azure Synapse Runtime for Apache Spark 2.4 and Apache Spark 3.1.-> * Effective September 29, 2023, the Azure Synapse will discontinue official support for Spark 2.4 Runtimes. -> * Effective January 26, 2024, the Azure Synapse will discontinue official support for Spark 3.1 Runtimes. +> * Effective September 29, 2023, Azure Synapse will discontinue official support for Spark 2.4 Runtimes. +> * Effective January 26, 2024, Azure Synapse will discontinue official support for Spark 3.1 Runtimes. > * Post these dates, we will not be addressing any support tickets related to Spark 2.4 or 3.1. There will be no release pipeline in place for bug or security fixes for Spark 2.4 and 3.1. Utilizing Spark 2.4 or 3.1 post the support cutoff dates is undertaken at one's own risk. We strongly discourage its continued use due to potential security and functionality concerns.-> * We strongly advise to proactively upgrade their workloads to a more recent version of the runtime (e.g., [Azure Synapse Runtime for Apache Spark 3.3 (GA)](./apache-spark-33-runtime.md)). +> * We strongly advise proactively upgrading workloads to a more recent version of the runtime (e.g., [Azure Synapse Runtime for Apache Spark 3.3 (GA)](./apache-spark-33-runtime.md)). The following table lists the runtime name, Apache Spark version, and release date for supported Azure Synapse Runtime releases. |
update-manager | Manage Dynamic Scoping | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-manager/manage-dynamic-scoping.md | To view the list of Dynamic scopes associated to a given maintenance configurati 1. Select **Machines** > **Maintenance configurations**. 1. In the **Maintenance configurations** page, select the name of the maintenance configuration for which you want to view the Dynamic scope. 1. In the given maintenance configuration page, select **Dynamic scopes** to view all the Dynamic scopes that are associated with the maintenance configuration.-1. The schedules associated to dynamic scopes are displayed in the following two areas by design: +1. The schedules associated with dynamic scopes are displayed in the following two areas: - **Update manager** > **Machines** > **Associated schedules** column - In your virtual machine home page > **Updates** > **Scheduling** tab.--To view the VMs that are associated to the schedule, go to the existing schedule and view under **Dynamic scopes** tab. +1. To view the VMs that are associated with the schedule, go to the existing schedule and view the **Dynamic scopes** tab. ## Edit a Dynamic scope |
update-manager | Scheduled Patching | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-manager/scheduled-patching.md | Title: Scheduling recurring updates in Azure Update Manager description: This article details how to use Azure Update Manager to set update schedules that install recurring updates on your machines. Previously updated : 02/03/2024 Last updated : 02/05/2024 Update Manager uses a maintenance control schedule instead of creating its own s All VMs in a common [availability set](../virtual-machines/availability-set-overview.md) aren't updated concurrently. -VMs in a common availability set are updated within Update Domain boundaries. VMs across multiple Update Domains aren't updated concurrently. +VMs in a common availability set are updated within Update Domain boundaries. VMs across multiple Update Domains aren't updated concurrently. ++If machines in the same availability set are patched at the same time by different schedules, they might not get patched, or patching could fail, if the maintenance window is exceeded. To avoid this, we recommend that you either increase the maintenance window or split the machines belonging to the same availability set across multiple schedules at different times. + ## Configure reboot settings |
update-manager | Whats New | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-manager/whats-new.md | +## February 2024 ++### Migration scripts to move machines and schedules from Automation Update Management to Azure Update Manager (preview) ++Migration scripts allow you to move all machines and schedules in an automation account from Automation Update Management to Azure Update Manager in an automated fashion. [Learn more](guidance-migration-automation-update-management-azure-update-manager.md). +++### Updates blade in Azure Update Manager (preview) ++The purpose of this new blade is to present information from the updates pivot instead of the machines pivot. It's particularly useful for central IT admins and security admins who care about vulnerabilities in the system and want to act on them by applying updates. [Learn more](deploy-manage-updates-using-updates-view.md). + ## December 2023 ### Pre and Post Events (preview) |
virtual-machine-scale-sets | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/policy-reference.md | |
virtual-machines | Disk Encryption Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/disk-encryption-overview.md | Here's a comparison of Disk Storage SSE, ADE, encryption at host, and Confidenti | | **Azure Disk Storage Server-Side Encryption** | **Encryption at Host** | **Azure Disk Encryption** | **Confidential disk encryption (For the OS disk only)** | |--|--|--|--|--| | Encryption at rest (OS and data disks) | ✅ | ✅ | ✅ | ✅ | -| Temp disk encryption | ❌ | ✅ | ✅ | ❌ | +| Temp disk encryption | ❌ | ✅ Only supported with platform-managed keys | ✅ | ❌ | | Encryption of caches | ❌ | ✅ | ✅ | ✅ | | Data flows encrypted between Compute and Storage | ❌ | ✅ | ✅ | ✅ | | Customer control of keys | ✅ When configured with DES | ✅ When configured with DES | ✅ When configured with KEK | ✅ When configured with DES | |
virtual-machines | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/policy-reference.md | Title: Built-in policy definitions for Azure Virtual Machines description: Lists Azure Policy built-in policy definitions for Azure Virtual Machines. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 01/30/2024 Last updated : 02/06/2024 |
virtual-network | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/policy-reference.md | Title: Built-in policy definitions for Azure Virtual Network description: Lists Azure Policy built-in policy definitions for Azure Virtual Network. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 01/30/2024 Last updated : 02/06/2024 |