Updates from: 04/01/2024 01:08:25
Service Microsoft Docs article Related commit history on GitHub Change details
ai-services Batch Synthesis Properties https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/batch-synthesis-properties.md
# Batch synthesis properties for text to speech

> [!IMPORTANT]
-> The Batch synthesis API is currently in public preview. Once it's generally available, the Long Audio API will be deprecated. For more information, see [Migrate to batch synthesis API](migrate-to-batch-synthesis.md).
+> The Batch synthesis API is generally available. The Long Audio API will be retired on April 1st, 2027. For more information, see [Migrate to batch synthesis API](migrate-to-batch-synthesis.md).
-The Batch synthesis API (Preview) can synthesize a large volume of text input (long and short) asynchronously. Publishers and audio content platforms can create long audio content in a batch. For example: audio books, news articles, and documents. The batch synthesis API can create synthesized audio longer than 10 minutes.
+The Batch synthesis API can synthesize a large volume of text input (long and short) asynchronously. Publishers and audio content platforms can create long audio content in a batch. For example: audio books, news articles, and documents. The batch synthesis API can create synthesized audio longer than 10 minutes.
Some properties in JSON format are required when you create a new batch synthesis job. Other properties are optional. The batch synthesis response includes other properties to provide information about the synthesis status and results. For example, the `outputs.result` property contains the location of the batch synthesis result files with audio output and logs.
Batch synthesis properties are described in the following table.
| Property | Description |
|-|-|
|`createdDateTime`|The date and time when the batch synthesis job was created.<br/><br/>This property is read-only.|
-|`customProperties`|A custom set of optional batch synthesis configuration settings.<br/><br/>This property is stored for your convenience to associate the synthesis jobs that you created with the synthesis jobs that you get or list. This property is stored, but isn't used by the Speech service.<br/><br/>You can specify up to 10 custom properties as key and value pairs. The maximum allowed key length is 64 characters, and the maximum allowed value length is 256 characters.|
-|`customVoices`|The map of a custom voice name and its deployment ID.<br/><br/>For example: `"customVoices": {"your-custom-voice-name": "502ac834-6537-4bc3-9fd6-140114daa66d"}`<br/><br/>You can use the voice name in your `synthesisConfig.voice` (when the `textType` is set to `"PlainText"`) or within the SSML text of `inputs` (when the `textType` is set to `"SSML"`).<br/><br/>This property is required to use a custom voice. If you try to use a custom voice that isn't defined here, the service returns an error.|
+|`customVoices`|The map of a custom voice name and its deployment ID.<br/><br/>For example: `"customVoices": {"your-custom-voice-name": "502ac834-6537-4bc3-9fd6-140114daa66d"}`<br/><br/>You can use the voice name in your `synthesisConfig.voice` (when the `inputKind` is set to `"PlainText"`) or within the SSML text of `inputs` (when the `inputKind` is set to `"SSML"`).<br/><br/>This property is required to use a custom voice. If you try to use a custom voice that isn't defined here, the service returns an error.|
|`description`|The description of the batch synthesis.<br/><br/>This property is optional.|
-|`displayName`|The name of the batch synthesis. Choose a name that you can refer to later. The display name doesn't have to be unique.<br/><br/>This property is required.|
-|`id`|The batch synthesis job ID.<br/><br/>This property is read-only.|
-|`inputs`|The plain text or SSML to be synthesized.<br/><br/>When the `textType` is set to `"PlainText"`, provide plain text as shown here: `"inputs": [{"text": "The rainbow has seven colors."}]`. When the `textType` is set to `"SSML"`, provide text in the [Speech Synthesis Markup Language (SSML)](speech-synthesis-markup.md) as shown here: `"inputs": [{"text": "<speak version='\''1.0'\'' xml:lang='\''en-US'\''><voice xml:lang='\''en-US'\'' xml:gender='\''Female'\'' name='\''en-US-AvaMultilingualNeural'\''>The rainbow has seven colors.</voice></speak>"}]`.<br/><br/>Include up to 1,000 text objects if you want multiple audio output files. Here's example input text that should be synthesized to two audio output files: `"inputs": [{"text": "synthesize this to a file"},{"text": "synthesize this to another file"}]`. However, if the `properties.concatenateResult` property is set to `true`, then each synthesized result is written to the same audio output file.<br/><br/>You don't need separate text inputs for new paragraphs. Within any of the (up to 1,000) text inputs, you can specify new paragraphs using the "\r\n" (newline) string. Here's example input text with two paragraphs that should be synthesized to the same audio output file: `"inputs": [{"text": "synthesize this to a file\r\nsynthesize this to another paragraph in the same file"}]`<br/><br/>There are no paragraph limits, but the maximum JSON payload size (including all text inputs and other properties) is 500 kilobytes.<br/><br/>This property is required when you create a new batch synthesis job. This property isn't included in the response when you get the synthesis job.|
+|`id`|The batch synthesis job ID that you passed in the request path.<br/><br/>This property is required in the path.|
+|`inputs`|The plain text or SSML to be synthesized.<br/><br/>When the `inputKind` is set to `"PlainText"`, provide plain text as shown here: `"inputs": [{"text": "The rainbow has seven colors."}]`. When the `inputKind` is set to `"SSML"`, provide text in the [Speech Synthesis Markup Language (SSML)](speech-synthesis-markup.md) as shown here: `"inputs": [{"text": "<speak version='\''1.0'\'' xml:lang='\''en-US'\''><voice xml:lang='\''en-US'\'' xml:gender='\''Female'\'' name='\''en-US-AvaMultilingualNeural'\''>The rainbow has seven colors.</voice></speak>"}]`.<br/><br/>Include up to 1,000 text objects if you want multiple audio output files. Here's example input text that should be synthesized to two audio output files: `"inputs": [{"text": "synthesize this to a file"},{"text": "synthesize this to another file"}]`. However, if the `properties.concatenateResult` property is set to `true`, then each synthesized result is written to the same audio output file.<br/><br/>You don't need separate text inputs for new paragraphs. Within any of the (up to 1,000) text inputs, you can specify new paragraphs using the "\r\n" (newline) string. Here's example input text with two paragraphs that should be synthesized to the same audio output file: `"inputs": [{"text": "synthesize this to a file\r\nsynthesize this to another paragraph in the same file"}]`<br/><br/>There are no paragraph limits, but the maximum JSON payload size (including all text inputs and other properties) is 500 kilobytes.<br/><br/>This property is required when you create a new batch synthesis job. This property isn't included in the response when you get the synthesis job.|
+|`internalId`|The internal batch synthesis job ID.<br/><br/>This property is read-only.|
|`lastActionDateTime`|The most recent date and time when the `status` property value changed.<br/><br/>This property is read-only.|
|`outputs.result`|The location of the batch synthesis result files with audio output and logs.<br/><br/>This property is read-only.|
|`properties`|A defined set of optional batch synthesis configuration settings.|
-|`properties.audioSize`|The audio output size in bytes.<br/><br/>This property is read-only.|
-|`properties.billingDetails`|The number of words that were processed and billed by `customNeural` versus `neural` (prebuilt) voices.<br/><br/>This property is read-only.|
+|`properties.sizeInBytes`|The audio output size in bytes.<br/><br/>This property is read-only.|
+|`properties.billingDetails`|The number of words that were processed and billed by `customNeuralCharacters` versus `neuralCharacters` (prebuilt) voices.<br/><br/>This property is read-only.|
|`properties.concatenateResult`|Determines whether to concatenate the result. This optional `bool` value ("true" or "false") is "false" by default.|
-|`properties.decompressOutputFiles`|Determines whether to unzip the synthesis result files in the destination container. This property can only be set when the `destinationContainerUrl` property is set or BYOS (Bring Your Own Storage) is configured for the Speech resource. This optional `bool` value ("true" or "false") is "false" by default.|
+|`properties.decompressOutputFiles`|Determines whether to unzip the synthesis result files in the destination container. This property can only be set when the `destinationContainerUrl` property is set. This optional `bool` value ("true" or "false") is "false" by default.|
|`properties.destinationContainerUrl`|The batch synthesis results can be stored in a writable Azure container. If you don't specify a container URI with [shared access signatures (SAS)](../../storage/common/storage-sas-overview.md) token, the Speech service stores the results in a container managed by Microsoft. SAS with stored access policies isn't supported. When the synthesis job is deleted, the result data is also deleted.<br/><br/>This optional property isn't included in the response when you get the synthesis job.|
-|`properties.duration`|The audio output duration. The value is an ISO 8601 encoded duration.<br/><br/>This property is read-only.|
-|`properties.durationInTicks`|The audio output duration in ticks.<br/><br/>This property is read-only.|
+|`properties.destinationPath`|The prefix path under which the batch synthesis results are stored. If you don't specify a prefix path, the default prefix path is `YourSpeechResourceId/YourSynthesisId`.<br/><br/>This optional property can only be set when the `destinationContainerUrl` property is set.|
+|`properties.durationInMilliseconds`|The audio output duration in milliseconds.<br/><br/>This property is read-only.|
|`properties.failedAudioCount`|The count of batch synthesis inputs that failed to produce audio output.<br/><br/>This property is read-only.|
|`properties.outputFormat`|The audio output format.<br/><br/>For information about the accepted values, see [audio output formats](rest-text-to-speech.md#audio-outputs). The default output format is `riff-24khz-16bit-mono-pcm`.|
|`properties.sentenceBoundaryEnabled`|Determines whether to generate sentence boundary data. This optional `bool` value ("true" or "false") is "false" by default.<br/><br/>If sentence boundary data is requested, then a corresponding `[nnnn].sentence.json` file is included in the results data ZIP file.|
|`properties.succeededAudioCount`|The count of batch synthesis inputs that succeeded in producing audio output.<br/><br/>This property is read-only.|
-|`properties.timeToLive`|A duration after the synthesis job is created, when the synthesis results will be automatically deleted. The value is an ISO 8601 encoded duration. For example, specify `PT12H` for 12 hours. This optional setting is `P31D` (31 days) by default. The maximum time to live is 31 days. The date and time of automatic deletion (for synthesis jobs with a status of "Succeeded" or "Failed") is equal to the `lastActionDateTime` + `timeToLive` properties.<br/><br/>Otherwise, you can call the [delete](./batch-synthesis.md#delete-batch-synthesis) synthesis method to remove the job sooner.|
+|`properties.timeToLiveInHours`|A duration in hours after the synthesis job is created, when the synthesis results will be automatically deleted. This optional setting is `744` (31 days) by default. The maximum time to live is 31 days. The date and time of automatic deletion (for synthesis jobs with a status of "Succeeded" or "Failed") is equal to the `lastActionDateTime` + `timeToLiveInHours` properties.<br/><br/>Otherwise, you can call the [delete](./batch-synthesis.md#delete-batch-synthesis) synthesis method to remove the job sooner.|
|`properties.wordBoundaryEnabled`|Determines whether to generate word boundary data. This optional `bool` value ("true" or "false") is "false" by default.<br/><br/>If word boundary data is requested, then a corresponding `[nnnn].word.json` file is included in the results data ZIP file.|
|`status`|The batch synthesis processing status.<br/><br/>The status should progress from "NotStarted" to "Running", and finally to either "Succeeded" or "Failed".<br/><br/>This property is read-only.|
-|`synthesisConfig`|The configuration settings to use for batch synthesis of plain text.<br/><br/>This property is only applicable when `textType` is set to `"PlainText"`.|
-|`synthesisConfig.pitch`|The pitch of the audio output.<br/><br/>For information about the accepted values, see the [adjust prosody](speech-synthesis-markup-voice.md#adjust-prosody) table in the Speech Synthesis Markup Language (SSML) documentation. Invalid values are ignored.<br/><br/>This optional property is only applicable when `textType` is set to `"PlainText"`.|
-|`synthesisConfig.rate`|The rate of the audio output.<br/><br/>For information about the accepted values, see the [adjust prosody](speech-synthesis-markup-voice.md#adjust-prosody) table in the Speech Synthesis Markup Language (SSML) documentation. Invalid values are ignored.<br/><br/>This optional property is only applicable when `textType` is set to `"PlainText"`.|
-|`synthesisConfig.style`|For some voices, you can adjust the speaking style to express different emotions like cheerfulness, empathy, and calm. You can optimize the voice for different scenarios like customer service, newscast, and voice assistant.<br/><br/>For information about the available styles per voice, see [voice styles and roles](language-support.md?tabs=tts#voice-styles-and-roles).<br/><br/>This optional property is only applicable when `textType` is set to `"PlainText"`.|
-|`synthesisConfig.voice`|The voice that speaks the audio output.<br/><br/>For information about the available prebuilt neural voices, see [language and voice support](language-support.md?tabs=tts). To use a custom voice, you must specify a valid custom voice and deployment ID mapping in the `customVoices` property.<br/><br/>This property is required when `textType` is set to `"PlainText"`.|
-|`synthesisConfig.volume`|The volume of the audio output.<br/><br/>For information about the accepted values, see the [adjust prosody](speech-synthesis-markup-voice.md#adjust-prosody) table in the Speech Synthesis Markup Language (SSML) documentation. Invalid values are ignored.<br/><br/>This optional property is only applicable when `textType` is set to `"PlainText"`.|
-|`textType`|Indicates whether the `inputs` text property should be plain text or SSML. The possible case-insensitive values are "PlainText" and "SSML". When the `textType` is set to `"PlainText"`, you must also set the `synthesisConfig` voice property.<br/><br/>This property is required.|
-
+|`synthesisConfig`|The configuration settings to use for batch synthesis of plain text.<br/><br/>This property is only applicable when `inputKind` is set to `"PlainText"`.|
+|`synthesisConfig.backgroundAudio`|The background audio for each audio output.<br/><br/>This optional property is only applicable when `inputKind` is set to `"PlainText"`.|
+|`synthesisConfig.backgroundAudio.fadein`|The duration of the background audio fade-in in milliseconds. The default value is `0`, which is equivalent to no fade in. Accepted values: `0` to `10000` inclusive.<br/><br/>For information, see the attributes table under [add background audio](speech-synthesis-markup-voice.md#add-background-audio) in the Speech Synthesis Markup Language (SSML) documentation. Invalid values are ignored.<br/><br/>This optional property is only applicable when `inputKind` is set to `"PlainText"`.|
+|`synthesisConfig.backgroundAudio.fadeout`|The duration of the background audio fade-out in milliseconds. The default value is `0`, which is equivalent to no fade out. Accepted values: `0` to `10000` inclusive.<br/><br/>For information, see the attributes table under [add background audio](speech-synthesis-markup-voice.md#add-background-audio) in the Speech Synthesis Markup Language (SSML) documentation. Invalid values are ignored.<br/><br/>This optional property is only applicable when `inputKind` is set to `"PlainText"`.|
+|`synthesisConfig.backgroundAudio.src`|The URI location of the background audio file.<br/><br/>For information, see the attributes table under [add background audio](speech-synthesis-markup-voice.md#add-background-audio) in the Speech Synthesis Markup Language (SSML) documentation. Invalid values are ignored.<br/><br/>This property is required when `synthesisConfig.backgroundAudio` is set.|
+|`synthesisConfig.backgroundAudio.volume`|The volume of the background audio file. Accepted values: `0` to `100` inclusive. The default value is `1`.<br/><br/>For information, see the attributes table under [add background audio](speech-synthesis-markup-voice.md#add-background-audio) in the Speech Synthesis Markup Language (SSML) documentation. Invalid values are ignored.<br/><br/>This optional property is only applicable when `inputKind` is set to `"PlainText"`.|
+|`synthesisConfig.pitch`|The pitch of the audio output.<br/><br/>For information about the accepted values, see the [adjust prosody](speech-synthesis-markup-voice.md#adjust-prosody) table in the Speech Synthesis Markup Language (SSML) documentation. Invalid values are ignored.<br/><br/>This optional property is only applicable when `inputKind` is set to `"PlainText"`.|
+|`synthesisConfig.rate`|The rate of the audio output.<br/><br/>For information about the accepted values, see the [adjust prosody](speech-synthesis-markup-voice.md#adjust-prosody) table in the Speech Synthesis Markup Language (SSML) documentation. Invalid values are ignored.<br/><br/>This optional property is only applicable when `inputKind` is set to `"PlainText"`.|
+|`synthesisConfig.role`|For some voices, you can adjust the speaking role-play. The voice can imitate a different age and gender, but the voice name isn't changed. For example, a male voice can raise the pitch and change the intonation to imitate a female voice, but the voice name isn't changed. If the role is missing or isn't supported for your voice, this attribute is ignored.<br/><br/>For information about the available styles per voice, see [voice styles and roles](language-support.md?tabs=tts#voice-styles-and-roles).<br/><br/>This optional property is only applicable when `inputKind` is set to `"PlainText"`.|
+|`synthesisConfig.speakerProfileId`|The speaker profile ID of a personal voice.<br/><br/>For information about available personal voice base model names, see [integrate personal voice](personal-voice-how-to-use.md#integrate-personal-voice-in-your-application).<br/>For information about how to get the speaker profile ID, see [language and voice support](personal-voice-create-voice.md).<br/><br/>This property is required when you use a personal voice.|
+|`synthesisConfig.style`|For some voices, you can adjust the speaking style to express different emotions like cheerfulness, empathy, and calm. You can optimize the voice for different scenarios like customer service, newscast, and voice assistant.<br/><br/>For information about the available styles per voice, see [voice styles and roles](language-support.md?tabs=tts#voice-styles-and-roles).<br/><br/>This optional property is only applicable when `inputKind` is set to `"PlainText"`.|
+|`synthesisConfig.styleDegree`|The intensity of the speaking style. You can specify a stronger or softer style to make the speech more expressive or subdued. The range of accepted values is 0.01 to 2 inclusive. The default value is 1, which means the predefined style intensity. The minimum unit is 0.01, which results in a slight tendency for the target style. A value of 2 results in a doubling of the default style intensity. If the style degree is missing or isn't supported for your voice, this attribute is ignored.<br/><br/>For information about the available styles per voice, see [voice styles and roles](language-support.md?tabs=tts#voice-styles-and-roles).<br/><br/>This optional property is only applicable when `synthesisConfig.style` is set.|
+|`synthesisConfig.voice`|The voice that speaks the audio output.<br/><br/>For information about the available prebuilt neural voices, see [language and voice support](language-support.md?tabs=tts). To use a custom voice, you must specify a valid custom voice and deployment ID mapping in the `customVoices` property. To use a personal voice, you need to specify the `synthesisConfig.speakerProfileId` property. <br/><br/>This property is required when `inputKind` is set to `"PlainText"`.|
+|`synthesisConfig.volume`|The volume of the audio output.<br/><br/>For information about the accepted values, see the [adjust prosody](speech-synthesis-markup-voice.md#adjust-prosody) table in the Speech Synthesis Markup Language (SSML) documentation. Invalid values are ignored.<br/><br/>This optional property is only applicable when `inputKind` is set to `"PlainText"`.|
+|`inputKind`|Indicates whether the `inputs` text property should be plain text or SSML. The possible case-insensitive values are "PlainText" and "SSML". When the `inputKind` is set to `"PlainText"`, you must also set the `synthesisConfig` voice property.<br/><br/>This property is required.|
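For orientation, here's a minimal create request body assembled from the properties in this table (a sketch only: the voice name and input text are illustrative, and the field shapes follow the examples above):

```json
{
  "inputKind": "PlainText",
  "synthesisConfig": {
    "voice": "en-US-AvaMultilingualNeural"
  },
  "inputs": [
    {
      "text": "The rainbow has seven colors."
    }
  ],
  "properties": {
    "outputFormat": "riff-24khz-16bit-mono-pcm",
    "wordBoundaryEnabled": false,
    "sentenceBoundaryEnabled": false
  }
}
```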
+## Batch synthesis latency and best practices

When using batch synthesis for generating synthesized speech, it's important to consider the latency involved and follow best practices for achieving optimal results.
An HTTP 204 error indicates that the request was successful, but the resource doesn't exist. For example:
- You tried to get or delete a synthesis job that doesn't exist.
- You successfully deleted a synthesis job.
-### HTTP 400 error
+### HTTP 400 error
Here are examples that can result in the 400 error:
+
- The `outputFormat` is unsupported or invalid. Provide a valid format value, or leave `outputFormat` empty to use the default setting.
-- The number of requested text inputs exceeded the limit of 1,000.
-- The `top` query parameter exceeded the limit of 100.
+- The number of requested text inputs exceeded the limit of 10,000.
- You tried to use an invalid deployment ID or a custom voice that isn't successfully deployed. Make sure the Speech resource has access to the custom voice, and the custom voice is successfully deployed. You must also ensure that the mapping of `{"your-custom-voice-name": "your-deployment-ID"}` is correct in your batch synthesis request.
-- You tried to delete a batch synthesis job that isn't started or hasn't completed running. You can only delete batch synthesis jobs that have a status of "Succeeded" or "Failed".
-- You tried to use a *F0* Speech resource, but the region only supports the *Standard* Speech resource pricing tier.
-- You tried to create a new batch synthesis job that would exceed the limit of 200 active jobs. Each Speech resource can have up to 200 batch synthesis jobs that don't have a status of "Succeeded" or "Failed".
+- You tried to use a _F0_ Speech resource, but the region only supports the _Standard_ Speech resource pricing tier.
+- You tried to create a new batch synthesis job that would exceed the limit of 300 active jobs. Each Speech resource can have up to 300 batch synthesis jobs that don't have a status of "Succeeded" or "Failed".
### HTTP 404 error
The specified entity can't be found. Make sure the synthesis ID is correct.
### HTTP 429 error
-There are too many recent requests. Each client application can submit up to 50 requests per 5 seconds for each Speech resource. Reduce the number of requests per second.
-
-You can check the rate limit and quota remaining via the HTTP headers as shown in the following example:
-
-```http
-X-RateLimit-Limit: 50
-X-RateLimit-Remaining: 49
-X-RateLimit-Reset: 2022-11-11T01:49:43Z
-```
+There are too many recent requests. Each client application can submit up to 100 requests per 10 seconds for each Speech resource. Reduce the number of requests per second.
### HTTP 500 error
HTTP 500 Internal Server Error indicates that the request failed. The response body contains details about the error.
### HTTP error example
-Here's an example request that results in an HTTP 400 error, because the `top` query parameter is set to a value greater than 100.
+Here's an example request that results in an HTTP 400 error, because the `inputs` property is required to create a job.
```console
-curl -v -X GET "https://YourSpeechRegion.customvoice.api.speech.microsoft.com/api/texttospeech/3.1-preview1/batchsynthesis?skip=0&top=200" -H "Ocp-Apim-Subscription-Key: YourSpeechKey"
+curl -v -X PUT -H "Ocp-Apim-Subscription-Key: YourSpeechKey" -H "Content-Type: application/json" -d '{
+ "inputKind": "SSML"
+}' "https://YourSpeechRegion.api.cognitive.microsoft.com/texttospeech/batchsyntheses/YourSynthesisId?api-version=2024-04-01"
```

In this case, the response headers include `HTTP/1.1 400 Bad Request`.
The response body resembles the following JSON example:
```json
{
- "code": "InvalidRequest",
- "message": "The top parameter should not be greater than 100.",
- "innerError": {
- "code": "InvalidParameter",
- "message": "The top parameter should not be greater than 100."
+ "error": {
+ "code": "BadRequest",
+ "message": "The inputs is required."
  }
}
```
ai-services Batch Synthesis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/batch-synthesis.md
- Title: Batch synthesis API (Preview) for text to speech - Speech service
+ Title: Batch synthesis API for text to speech - Speech service
description: Learn how to use the batch synthesis API for asynchronous synthesis of long-form text to speech.
Last updated 1/18/2024
-# Batch synthesis API (Preview) for text to speech
+# Batch synthesis API for text to speech
-The Batch synthesis API (Preview) can synthesize a large volume of text input (long and short) asynchronously. Publishers and audio content platforms can create long audio content in a batch. For example: audio books, news articles, and documents. The batch synthesis API can create synthesized audio longer than 10 minutes.
+The Batch synthesis API can synthesize a large volume of text input (long and short) asynchronously. Publishers and audio content platforms can create long audio content in a batch. For example: audio books, news articles, and documents. The batch synthesis API can create synthesized audio longer than 10 minutes.
> [!IMPORTANT]
-> The Batch synthesis API is currently in public preview. Once it's generally available, the Long Audio API will be deprecated. For more information, see [Migrate to batch synthesis API](migrate-to-batch-synthesis.md).
+> The Batch synthesis API is generally available. The Long Audio API will be retired on April 1st, 2027. For more information, see [Migrate to batch synthesis API](migrate-to-batch-synthesis.md).
The batch synthesis API is asynchronous and doesn't return synthesized audio in real-time. You submit text files to be synthesized, poll for the status, and download the audio output when the status indicates success. The text inputs must be plain text or [Speech Synthesis Markup Language (SSML)](speech-synthesis-markup.md) text.
This diagram provides a high-level overview of the workflow.
You can use the following REST API operations for batch synthesis:
-| Operation | Method | REST API call |
-| - | -- | |
-| [Create batch synthesis](#create-batch-synthesis) | `POST` | texttospeech/3.1-preview1/batchsynthesis |
-| [Get batch synthesis](#get-batch-synthesis) | `GET` | texttospeech/3.1-preview1/batchsynthesis/{id} |
-| [List batch synthesis](#list-batch-synthesis) | `GET` | texttospeech/3.1-preview1/batchsynthesis |
-| [Delete batch synthesis](#delete-batch-synthesis) | `DELETE` | texttospeech/3.1-preview1/batchsynthesis/{id} |
+| Operation | Method | REST API call |
+| - | -- | - |
+| [Create batch synthesis](#create-batch-synthesis) | `PUT` | texttospeech/batchsyntheses/YourSynthesisId |
+| [Get batch synthesis](#get-batch-synthesis) | `GET` | texttospeech/batchsyntheses/YourSynthesisId |
+| [List batch synthesis](#list-batch-synthesis) | `GET` | texttospeech/batchsyntheses |
+| [Delete batch synthesis](#delete-batch-synthesis) | `DELETE` | texttospeech/batchsyntheses/YourSynthesisId |
+
+<!-- | [Get operation for status monitor](#get-operation) | `GET` | texttospeech/operations/YourOperationId | -->
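As a rough sketch of the poll step in this workflow, the following loop requests the job status until it reaches a terminal state (assumptions: the job was already created, `jq` is installed, and you replace the placeholder region, key, and synthesis ID):

```azurecli-interactive
# Poll the batch synthesis job until it reaches a terminal state (sketch; assumes jq).
while true; do
  status=$(curl -s "https://YourSpeechRegion.api.cognitive.microsoft.com/texttospeech/batchsyntheses/YourSynthesisId?api-version=2024-04-01" \
    -H "Ocp-Apim-Subscription-Key: YourSpeechKey" | jq -r '.status')
  echo "Status: $status"
  if [ "$status" = "Succeeded" ] || [ "$status" = "Failed" ]; then
    break
  fi
  sleep 30
done
```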
For code samples, see [GitHub](https://github.com/Azure-Samples/cognitive-services-speech-sdk/tree/master/samples/batch-synthesis).

## Create batch synthesis
-To submit a batch synthesis request, construct the HTTP POST request body according to the following instructions:
+To submit a batch synthesis request, construct the HTTP PUT request path and body according to the following instructions:
-- Set the required `textType` property.
-- If the `textType` property is set to "PlainText", then you must also set the `voice` property in the `synthesisConfig`. In the example below, the `textType` is set to "SSML", so the `speechSynthesis` isn't set.
-- Set the required `displayName` property. Choose a name that you can refer to later. The display name doesn't have to be unique.
-- Optionally you can set the `description`, `timeToLive`, and other properties. For more information, see [batch synthesis properties](batch-synthesis-properties.md).
+- Set the required `inputKind` property.
+- If the `inputKind` property is set to "PlainText", then you must also set the `voice` property in the `synthesisConfig`. In the example below, the `inputKind` is set to "SSML", so the `synthesisConfig` isn't set.
+- Optionally you can set the `description`, `timeToLiveInHours`, and other properties. For more information, see [batch synthesis properties](batch-synthesis-properties.md).
> [!NOTE]
-> The maximum JSON payload size that will be accepted is 500 kilobytes. Each Speech resource can have up to 200 batch synthesis jobs that are running concurrently.
+> The maximum JSON payload size that will be accepted is 2 megabytes. Each Speech resource can have up to 300 batch synthesis jobs that are running concurrently.
+
+Set the required `YourSynthesisId` in the path. `YourSynthesisId` must be unique. It must be 3 to 64 characters long, contain only numbers, letters, hyphens, underscores, and dots, and start and end with a letter or number (for example, `my-job-03`).
-Make an HTTP POST request using the URI as shown in the following example. Replace `YourSpeechKey` with your Speech resource key, replace `YourSpeechRegion` with your Speech resource region, and set the request body properties as previously described.
+Make an HTTP PUT request using the URI as shown in the following example. Replace `YourSpeechKey` with your Speech resource key, replace `YourSpeechRegion` with your Speech resource region, and set the request body properties as previously described.
```azurecli-interactive
-curl -v -X POST -H "Ocp-Apim-Subscription-Key: YourSpeechKey" -H "Content-Type: application/json" -d '{
- "displayName": "batch synthesis sample",
+curl -v -X PUT -H "Ocp-Apim-Subscription-Key: YourSpeechKey" -H "Content-Type: application/json" -d '{
"description": "my ssml test",
- "textType": "SSML",
+ "inputKind": "SSML",
"inputs": [ {
- "text": "<speak version='\''1.0'\'' xml:lang='\''en-US'\''>
- <voice name='\''en-US-JennyNeural'\''>
- The rainbow has seven colors.
- </voice>
- </speak>",
- },
+ "content": "<speak version=\"1.0\" xml:lang=\"en-US\"><voice name=\"en-US-JennyNeural\">The rainbow has seven colors.</voice></speak>"
+ }
], "properties": { "outputFormat": "riff-24khz-16bit-mono-pcm",
curl -v -X POST -H "Ocp-Apim-Subscription-Key: YourSpeechKey" -H "Content-Type: application/json" -d '{
"sentenceBoundaryEnabled": false, "concatenateResult": false, "decompressOutputFiles": false
- },
-}' "https://YourSpeechRegion.customvoice.api.speech.microsoft.com/api/texttospeech/3.1-preview1/batchsynthesis"
+ }
+}' "https://YourSpeechRegion.api.cognitive.microsoft.com/texttospeech/batchsyntheses/YourSynthesisId?api-version=2024-04-01"
```

You should receive a response body in the following format:

```json
{
- "textType": "SSML",
- "synthesisConfig": {},
+ "id": "YourSynthesisId",
+ "internalId": "7ab84171-9070-4d3b-88d4-1b8cc1cb928a",
+ "status": "NotStarted",
+ "createdDateTime": "2024-03-12T07:23:18.0097387Z",
+ "lastActionDateTime": "2024-03-12T07:23:18.0097388Z",
+ "inputKind": "SSML",
"customVoices": {}, "properties": {
- "timeToLive": "P31D",
+ "timeToLiveInHours": 744,
"outputFormat": "riff-24khz-16bit-mono-pcm", "concatenateResult": false, "decompressOutputFiles": false, "wordBoundaryEnabled": false, "sentenceBoundaryEnabled": false
- },
- "lastActionDateTime": "2022-11-16T15:07:04.121Z",
- "status": "NotStarted",
- "id": "1e2e0fe8-e403-417c-a382-b55eb2ea943d",
- "createdDateTime": "2022-11-16T15:07:04.121Z",
- "displayName": "batch synthesis sample",
- "description": "my ssml test"
+ }
}
```
The `status` property should progress from `NotStarted` status, to `Running`, and finally to `Succeeded` or `Failed` status.
## Get batch synthesis
-To get the status of the batch synthesis job, make an HTTP GET request using the URI as shown in the following example. Replace `YourSynthesisId` with your batch synthesis ID, replace `YourSpeechKey` with your Speech resource key, and replace `YourSpeechRegion` with your Speech resource region.
+To get the status of the batch synthesis job, make an HTTP GET request using the URI as shown in the following example. Replace `YourSpeechKey` with your Speech resource key, and replace `YourSpeechRegion` with your Speech resource region.
```azurecli-interactive
-curl -v -X GET "https://YourSpeechRegion.customvoice.api.speech.microsoft.com/api/texttospeech/3.1-preview1/batchsynthesis/YourSynthesisId" -H "Ocp-Apim-Subscription-Key: YourSpeechKey"
+curl -v -X GET "https://YourSpeechRegion.api.cognitive.microsoft.com/texttospeech/batchsyntheses/YourSynthesisId?api-version=2024-04-01" -H "Ocp-Apim-Subscription-Key: YourSpeechKey"
```

You should receive a response body in the following format:

```json
{
- "textType": "SSML",
- "synthesisConfig": {},
- "customVoices": {},
- "properties": {
- "audioSize": 100000,
- "durationInTicks": 31250000,
- "succeededAudioCount": 1,
- "failedAudioCount": 0,
- "duration": "PT3.125S",
- "billingDetails": {
- "customNeural": 0,
- "neural": 33
- },
- "timeToLive": "P31D",
- "outputFormat": "riff-24khz-16bit-mono-pcm",
- "concatenateResult": false,
- "decompressOutputFiles": false,
- "wordBoundaryEnabled": false,
- "sentenceBoundaryEnabled": false
- },
- "outputs": {
- "result": "https://cvoiceprodeus.blob.core.windows.net/batch-synthesis-output/41b83de2-380d-45dc-91af-722b68cfdc8e/results.zip?SAS_Token"
- },
- "lastActionDateTime": "2022-11-05T14:00:32.523Z",
- "status": "Succeeded",
- "id": "41b83de2-380d-45dc-91af-722b68cfdc8e",
- "createdDateTime": "2022-11-05T14:00:31.523Z",
- "displayName": "batch synthesis sample",
- "description": "my test"
+ "id": "YourSynthesisId",
+ "internalId": "7ab84171-9070-4d3b-88d4-1b8cc1cb928a",
+ "status": "Succeeded",
+ "createdDateTime": "2024-03-12T07:23:18.0097387Z",
+ "lastActionDateTime": "2024-03-12T07:23:18.7979669",
+ "inputKind": "SSML",
+ "customVoices": {},
+ "properties": {
+ "timeToLiveInHours": 744,
+ "outputFormat": "riff-24khz-16bit-mono-pcm",
+ "concatenateResult": false,
+ "decompressOutputFiles": false,
+ "wordBoundaryEnabled": false,
+ "sentenceBoundaryEnabled": false,
+ "sizeInBytes": 120000,
+ "succeededAudioCount": 1,
+ "failedAudioCount": 0,
+ "durationInMilliseconds": 2500,
+ "billingDetails": {
+ "neuralCharacters": 29
+ }
+ },
+ "outputs": {
+ "result": "https://stttssvcuse.blob.core.windows.net/batchsynthesis-output/29f2105f997c4bfea176d39d05ff201e/YourSynthesisId/results.zip?SAS_Token"
}
+}
```

From `outputs.result`, you can download a ZIP file that contains the audio (such as `0001.wav`), summary, and debug details. For more information, see [batch synthesis results](#batch-synthesis-results).

## List batch synthesis
-To list all batch synthesis jobs for the Speech resource, make an HTTP GET request using the URI as shown in the following example. Replace `YourSpeechKey` with your Speech resource key and replace `YourSpeechRegion` with your Speech resource region. Optionally, you can set the `skip` and `top` (page size) query parameters in URL. The default value for `skip` is 0 and the default value for `top` is 100.
+To list all batch synthesis jobs for the Speech resource, make an HTTP GET request using the URI as shown in the following example. Replace `YourSpeechKey` with your Speech resource key and replace `YourSpeechRegion` with your Speech resource region. Optionally, you can set the `skip` and `maxpagesize` (up to 100) query parameters in the URL. The default value for `skip` is 0 and the default value for `maxpagesize` is 100.
```azurecli-interactive
-curl -v -X GET "https://YourSpeechRegion.customvoice.api.speech.microsoft.com/api/texttospeech/3.1-preview1/batchsynthesis?skip=0&top=2" -H "Ocp-Apim-Subscription-Key: YourSpeechKey"
+curl -v -X GET "https://YourSpeechRegion.api.cognitive.microsoft.com/texttospeech/batchsyntheses?api-version=2024-04-01&skip=1&maxpagesize=2" -H "Ocp-Apim-Subscription-Key: YourSpeechKey"
```

You should receive a response body in the following format:

```json
{
- "values": [
+ "value": [
{
- "textType": "SSML",
- "synthesisConfig": {},
+ "id": "my-job-03",
+ "internalId": "5f7e9ab6-2c92-4dcb-b5ee-ec0983ee4db0",
+ "status": "Succeeded",
+ "createdDateTime": "2024-03-12T07:28:32.5690441Z",
+ "lastActionDateTime": "2024-03-12T07:28:33.0042293",
+ "inputKind": "SSML",
"customVoices": {}, "properties": {
- "audioSize": 100000,
- "durationInTicks": 31250000,
- "succeededAudioCount": 1,
- "failedAudioCount": 0,
- "duration": "PT3.125S",
- "billingDetails": {
- "customNeural": 0,
- "neural": 33
- },
- "timeToLive": "P31D",
+ "timeToLiveInHours": 744,
"outputFormat": "riff-24khz-16bit-mono-pcm", "concatenateResult": false, "decompressOutputFiles": false, "wordBoundaryEnabled": false,
- "sentenceBoundaryEnabled": false
+ "sentenceBoundaryEnabled": false,
+ "sizeInBytes": 120000,
+ "succeededAudioCount": 1,
+ "failedAudioCount": 0,
+ "durationInMilliseconds": 2500,
+ "billingDetails": {
+ "neuralCharacters": 29
+ }
}, "outputs": {
- "result": "https://cvoiceprodeus.blob.core.windows.net/batch-synthesis-output/41b83de2-380d-45dc-91af-722b68cfdc8e/results.zip?SAS_Token"
- },
- "lastActionDateTime": "2022-11-05T14:00:32.523Z",
- "status": "Succeeded",
- "id": "41b83de2-380d-45dc-91af-722b68cfdc8e",
- "createdDateTime": "2022-11-05T14:00:31.523Z",
- "displayName": "batch synthesis sample",
- "description": "my test"
- }
+ "result": "https://stttssvcuse.blob.core.windows.net/batchsynthesis-output/29f2105f997c4bfea176d39d05ff201e/my-job-03/results.zip?SAS_Token"
+ }
+ },
{
- "textType": "PlainText",
- "synthesisConfig": {
- "voice": "en-US-JennyNeural",
- "style": "chat",
- "rate": "+30.00%",
- "pitch": "x-high",
- "volume": "80"
- },
+ "id": "my-job-02",
+ "internalId": "5577585f-4710-4d4f-aab6-162d14bd7ee0",
+ "status": "Succeeded",
+ "createdDateTime": "2024-03-12T07:28:29.6418211Z",
+ "lastActionDateTime": "2024-03-12T07:28:30.0910306",
+ "inputKind": "SSML",
"customVoices": {}, "properties": {
- "audioSize": 79384,
- "durationInTicks": 24800000,
- "succeededAudioCount": 1,
- "failedAudioCount": 0,
- "duration": "PT2.48S",
- "billingDetails": {
- "customNeural": 0,
- "neural": 33
- },
- "timeToLive": "P31D",
+ "timeToLiveInHours": 744,
"outputFormat": "riff-24khz-16bit-mono-pcm", "concatenateResult": false, "decompressOutputFiles": false, "wordBoundaryEnabled": false,
- "sentenceBoundaryEnabled": false
+ "sentenceBoundaryEnabled": false,
+ "sizeInBytes": 120000,
+ "succeededAudioCount": 1,
+ "failedAudioCount": 0,
+ "durationInMilliseconds": 2500,
+ "billingDetails": {
+ "neuralCharacters": 29
+ }
}, "outputs": {
- "result": "https://cvoiceprodeus.blob.core.windows.net/batch-synthesis-output/38e249bf-2607-4236-930b-82f6724048d8/results.zip?SAS_Token"
- },
- "lastActionDateTime": "2022-11-05T18:52:23.210Z",
- "status": "Succeeded",
- "id": "38e249bf-2607-4236-930b-82f6724048d8",
- "createdDateTime": "2022-11-05T18:52:22.807Z",
- "displayName": "batch synthesis sample",
- "description": "my test"
- },
+ "result": "https://stttssvcuse.blob.core.windows.net/batchsynthesis-output/29f2105f997c4bfea176d39d05ff201e/my-job-02/results.zip?SAS_Token"
+ }
+ }
],
- // The next page link of the list of batch synthesis.
- "@nextLink": "https://{region}.customvoice.api.speech.microsoft.com/api/texttospeech/3.1-preview1/batchsynthesis?skip=0&top=2"
-}
+ "nextLink": "https://YourSpeechRegion.api.cognitive.microsoft.com/texttospeech/batchsyntheses?skip=3&maxpagesize=2&api-version=2024-04-01"
+}
```

From `outputs.result`, you can download a ZIP file that contains the audio (such as `0001.wav`), summary, and debug details. For more information, see [batch synthesis results](#batch-synthesis-results).
-The `values` property in the json response lists your synthesis requests. The list is paginated, with a maximum page size of 100. The `"@nextLink"` property is provided as needed to get the next page of the paginated list.
+The `value` property in the JSON response lists your synthesis requests. The list is paginated, with a maximum page size of 100. The `"nextLink"` property is provided as needed to get the next page of the paginated list.
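As a hedged sketch, you can walk the paginated list by requesting the URL from `nextLink` until it's no longer returned (assumes `jq` is installed):

```azurecli-interactive
# List all job IDs by following nextLink page by page (sketch; assumes jq).
url="https://YourSpeechRegion.api.cognitive.microsoft.com/texttospeech/batchsyntheses?api-version=2024-04-01"
while [ -n "$url" ] && [ "$url" != "null" ]; do
  page=$(curl -s "$url" -H "Ocp-Apim-Subscription-Key: YourSpeechKey")
  echo "$page" | jq -r '.value[].id'
  url=$(echo "$page" | jq -r '.nextLink')
done
```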
## Delete batch synthesis
-Delete the batch synthesis job history after you retrieved the audio output results. The Speech service keeps batch synthesis history for up to 31 days, or the duration of the request `timeToLive` property, whichever comes sooner. The date and time of automatic deletion (for synthesis jobs with a status of "Succeeded" or "Failed") is equal to the `lastActionDateTime` + `timeToLive` properties.
+Delete the batch synthesis job history after you retrieved the audio output results. The Speech service keeps batch synthesis history for up to 31 days, or the duration of the request `timeToLiveInHours` property, whichever comes sooner. The date and time of automatic deletion (for synthesis jobs with a status of "Succeeded" or "Failed") is equal to the `lastActionDateTime` + `timeToLiveInHours` properties.
To delete a batch synthesis job, make an HTTP DELETE request using the URI as shown in the following example. Replace `YourSynthesisId` with your batch synthesis ID, replace `YourSpeechKey` with your Speech resource key, and replace `YourSpeechRegion` with your Speech resource region.

```azurecli-interactive
-curl -v -X DELETE "https://YourSpeechRegion.customvoice.api.speech.microsoft.com/api/texttospeech/3.1-preview1/batchsynthesis/YourSynthesisId" -H "Ocp-Apim-Subscription-Key: YourSpeechKey"
+curl -v -X DELETE "https://YourSpeechRegion.api.cognitive.microsoft.com/texttospeech/batchsyntheses/YourSynthesisId?api-version=2024-04-01" -H "Ocp-Apim-Subscription-Key: YourSpeechKey"
```

The response headers include `HTTP/1.1 204 No Content` if the delete request was successful.

## Batch synthesis results
-After you [get a batch synthesis job](#get-batch-synthesis) with `status` of "Succeeded", you can download the audio output results. Use the URL from the `outputs.result` property of the [get batch synthesis](#get-batch-synthesis) response.
+After you [get a batch synthesis job](#get-batch-synthesis) with `status` of "Succeeded", you can download the audio output results. Use the URL from the `outputs.result` property of the [get batch synthesis](#get-batch-synthesis) response.
To get the batch synthesis results file, make an HTTP GET request using the URI as shown in the following example. Replace `YourOutputsResultUrl` with the URL from the `outputs.result` property of the [get batch synthesis](#get-batch-synthesis) response. Replace `YourSpeechKey` with your Speech resource key.
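Here's a sketch of such a download request (the local file name `results.zip` is illustrative):

```azurecli-interactive
curl -v -X GET "YourOutputsResultUrl" -H "Ocp-Apim-Subscription-Key: YourSpeechKey" --output results.zip
```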
The summary file contains the synthesis results for each text input. Here's an example:
```json
{
- "jobID": "41b83de2-380d-45dc-91af-722b68cfdc8e",
- "status": "Succeeded",
- "results": [
+ "jobID": "7ab84171-9070-4d3b-88d4-1b8cc1cb928a",
+ "status": "Succeeded",
+ "results": [
{
- "texts": [
- "<speak version='1.0' xml:lang='en-US'>\n\t\t\t\t<voice name='en-US-JennyNeural'>\n\t\t\t\t\tThe rainbow has seven colors.\n\t\t\t\t</voice>\n\t\t\t</speak>"
+ "contents": [
+ "<speak version=\"1.0\" xml:lang=\"en-US\"><voice name=\"en-US-JennyNeural\">The rainbow has seven colors.</voice></speak>"
],
- "status": "Succeeded",
- "billingDetails": {
- "CustomNeural": "0",
- "Neural": "33"
- },
- "audioFileName": "0001.wav",
- "properties": {
- "audioSize": "100000",
- "duration": "PT3.1S",
- "durationInTicks": "31250000"
+ "status": "Succeeded",
+ "audioFileName": "0001.wav",
+ "properties": {
+ "sizeInBytes": "120000",
+ "durationInMilliseconds": "2500"
      }
    }
  ]
}
```
-If sentence boundary data was requested (`"sentenceBoundaryEnabled": true`), then a corresponding `[nnnn].sentence.json` file is included in the results. Likewise, if word boundary data was requested (`"wordBoundaryEnabled": true`), then a corresponding `[nnnn].word.json` file is included in the results.
+If sentence boundary data was requested (`"sentenceBoundaryEnabled": true`), then a corresponding `[nnnn].sentence.json` file is included in the results. Likewise, if word boundary data was requested (`"wordBoundaryEnabled": true`), then a corresponding `[nnnn].word.json` file is included in the results.
Here's an example word data file with both audio offset and duration in milliseconds:

```json
[
  {
- "Text": "the",
- "AudioOffset": 38,
- "Duration": 153
+ "Text": "The",
+ "AudioOffset": 50,
+ "Duration": 137
}, { "Text": "rainbow",
- "AudioOffset": 201,
- "Duration": 326
+ "AudioOffset": 200,
+ "Duration": 350
}, { "Text": "has",
- "AudioOffset": 567,
- "Duration": 96
+ "AudioOffset": 562,
+ "Duration": 175
}, { "Text": "seven",
- "AudioOffset": 673,
- "Duration": 96
+ "AudioOffset": 750,
+ "Duration": 300
}, { "Text": "colors",
- "AudioOffset": 778,
- "Duration": 451
+ "AudioOffset": 1062,
+ "Duration": 625
},
+ {
+ "Text": ".",
+ "AudioOffset": 1700,
+ "Duration": 100
+ }
]
```
HTTP 200 OK indicates that the request was successful.
### HTTP 201 Created
-HTTP 201 Created indicates that the create batch synthesis request (via HTTP POST) was successful.
+HTTP 201 Created indicates that the create batch synthesis request (via HTTP PUT) was successful.
### HTTP 204 error

An HTTP 204 error indicates that the request was successful, but the resource doesn't exist. For example:
-- You tried to get or delete a synthesis job that doesn't exist.
-- You successfully deleted a synthesis job.
-### HTTP 400 error
+- You tried to get or delete a synthesis job that doesn't exist.
+- You successfully deleted a synthesis job.
+
+### HTTP 400 error
Here are examples that can result in the 400 error:
+
- The `outputFormat` is unsupported or invalid. Provide a valid format value, or leave `outputFormat` empty to use the default setting.
-- The number of requested text inputs exceeded the limit of 1,000.
-- The `top` query parameter exceeded the limit of 100.
+- The number of requested text inputs exceeded the limit of 10,000.
- You tried to use an invalid deployment ID or a custom voice that isn't successfully deployed. Make sure the Speech resource has access to the custom voice, and the custom voice is successfully deployed. You must also ensure that the mapping of `{"your-custom-voice-name": "your-deployment-ID"}` is correct in your batch synthesis request.
-- You tried to delete a batch synthesis job that isn't started or hasn't completed running. You can only delete batch synthesis jobs that have a status of "Succeeded" or "Failed".
-- You tried to use a *F0* Speech resource, but the region only supports the *Standard* Speech resource pricing tier.
-- You tried to create a new batch synthesis job that would exceed the limit of 200 active jobs. Each Speech resource can have up to 200 batch synthesis jobs that don't have a status of "Succeeded" or "Failed".
+- You tried to use a _F0_ Speech resource, but the region only supports the _Standard_ Speech resource pricing tier.
+- You tried to create a new batch synthesis job that would exceed the limit of 300 active jobs. Each Speech resource can have up to 300 batch synthesis jobs that don't have a status of "Succeeded" or "Failed".
### HTTP 404 error

The specified entity can't be found. Make sure the synthesis ID is correct.

### HTTP 429 error
-
-There are too many recent requests. Each client application can submit up to 50 requests per 5 seconds for each Speech resource. Reduce the number of requests per second.
-
-You can check the rate limit and quota remaining via the HTTP headers as shown in the following example:
-```http
-X-RateLimit-Limit: 50
-X-RateLimit-Remaining: 49
-X-RateLimit-Reset: 2022-11-11T01:49:43Z
-```
+There are too many recent requests. Each client application can submit up to 100 requests per 10 seconds for each Speech resource. Reduce the number of requests per second.
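If you do receive a 429 response, a simple client-side mitigation is to wait and retry, as in this sketch (the attempt count and delay are illustrative, not service guidance):

```azurecli-interactive
# Retry a request on HTTP 429 with a fixed delay (sketch).
for attempt in 1 2 3; do
  code=$(curl -s -o response.json -w "%{http_code}" \
    "https://YourSpeechRegion.api.cognitive.microsoft.com/texttospeech/batchsyntheses?api-version=2024-04-01" \
    -H "Ocp-Apim-Subscription-Key: YourSpeechKey")
  if [ "$code" != "429" ]; then
    break
  fi
  sleep 10
done
```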
### HTTP 500 error
HTTP 500 Internal Server Error indicates that the request failed. The response body contains details about the error.
### HTTP error example
-Here's an example request that results in an HTTP 400 error, because the `top` query parameter is set to a value greater than 100.
+Here's an example request that results in an HTTP 400 error, because the `inputs` property is required to create a job.
```console
-curl -v -X GET "https://YourSpeechRegion.customvoice.api.speech.microsoft.com/api/texttospeech/3.1-preview1/batchsynthesis?skip=0&top=200" -H "Ocp-Apim-Subscription-Key: YourSpeechKey"
+curl -v -X PUT -H "Ocp-Apim-Subscription-Key: YourSpeechKey" -H "Content-Type: application/json" -d '{
+ "inputKind": "SSML"
+}' "https://YourSpeechRegion.api.cognitive.microsoft.com/texttospeech/batchsyntheses/YourSynthesisId?api-version=2024-04-01"
```

In this case, the response headers include `HTTP/1.1 400 Bad Request`.
The response body resembles the following JSON example:
```json
{
- "code": "InvalidRequest",
- "message": "The top parameter should not be greater than 100.",
- "innerError": {
- "code": "InvalidParameter",
- "message": "The top parameter should not be greater than 100."
+ "error": {
+ "code": "BadRequest",
+ "message": "The inputs is required."
  }
}
```
ai-services Migrate To Batch Synthesis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/migrate-to-batch-synthesis.md
# Migrate code from Long Audio API to Batch synthesis API
-The [Batch synthesis API](batch-synthesis.md) (Preview) provides asynchronous synthesis of long-form text to speech. This article describes the benefits of upgrading from Long Audio API to Batch synthesis API, and details about how to do so.
+The [Batch synthesis API](batch-synthesis.md) provides asynchronous synthesis of long-form text to speech. This article describes the benefits of upgrading from Long Audio API to Batch synthesis API, and details about how to do so.
> [!IMPORTANT]
-> [Batch synthesis API](batch-synthesis.md) is currently in public preview. Once it's generally available, the Long Audio API will be deprecated.
+> [Batch synthesis API](batch-synthesis.md) is generally available. The Long Audio API will be retired on April 1st, 2027.
-## Base path
+## Base path and version
-You must update the base path in your code from `/texttospeech/v3.0/longaudiosynthesis` to `/texttospeech/3.1-preview1/batchsynthesis`. For example, to list synthesis jobs for your Speech resource in the `eastus` region, use `https://eastus.customvoice.api.speech.microsoft.com/api/texttospeech/3.1-preview1/batchsynthesis` instead of `https://eastus.customvoice.api.speech.microsoft.com/api/texttospeech/v3.0/longaudiosynthesis`.
+Update the endpoint from `https://YourSpeechRegion.customvoice.api.speech.microsoft.com` to `https://YourSpeechRegion.api.cognitive.microsoft.com`, or use a custom domain instead: `https://{customDomainName}.cognitiveservices.azure.com/`.
+
+Update the base path in your code from `/texttospeech/v3.0/longaudiosynthesis` to `/texttospeech/batchsyntheses`.
+
+Move the API version from the base path to a query string parameter: instead of embedding the version in the path as in `/texttospeech/v3.0/longaudiosynthesis`, append `?api-version=2024-04-01` to the request URI.
+
+For example, to list synthesis jobs for your Speech resource in the `eastus` region, use `https://eastus.api.cognitive.microsoft.com/texttospeech/batchsyntheses?api-version=2024-04-01` instead of `https://eastus.customvoice.api.speech.microsoft.com/api/texttospeech/v3.0/longaudiosynthesis`.
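To make the change concrete, here's a sketch of the same list request against the old and new endpoints:

```console
# Long Audio API (retiring): API version embedded in the base path.
curl -v -X GET "https://eastus.customvoice.api.speech.microsoft.com/api/texttospeech/v3.0/longaudiosynthesis" -H "Ocp-Apim-Subscription-Key: YourSpeechKey"

# Batch synthesis API: API version passed as a query string parameter.
curl -v -X GET "https://eastus.api.cognitive.microsoft.com/texttospeech/batchsyntheses?api-version=2024-04-01" -H "Ocp-Apim-Subscription-Key: YourSpeechKey"
```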
## Regions and endpoints
-Batch synthesis API is available in all [Speech regions](regions.md).
+Batch synthesis API is available in more [Speech regions](regions.md).
The Long Audio API is limited to the following regions:
-| Region | Endpoint |
-|--|-|
-| Australia East | `https://australiaeast.customvoice.api.speech.microsoft.com` |
-| East US | `https://eastus.customvoice.api.speech.microsoft.com` |
-| India Central | `https://centralindia.customvoice.api.speech.microsoft.com` |
+| Region | Endpoint |
+| - | - |
+| Australia East | `https://australiaeast.customvoice.api.speech.microsoft.com` |
+| East US | `https://eastus.customvoice.api.speech.microsoft.com` |
+| India Central | `https://centralindia.customvoice.api.speech.microsoft.com` |
| South Central US | `https://southcentralus.customvoice.api.speech.microsoft.com` |
-| Southeast Asia | `https://southeastasia.customvoice.api.speech.microsoft.com` |
-| UK South | `https://uksouth.customvoice.api.speech.microsoft.com` |
-| West Europe | `https://westeurope.customvoice.api.speech.microsoft.com` |
+| Southeast Asia | `https://southeastasia.customvoice.api.speech.microsoft.com` |
+| UK South | `https://uksouth.customvoice.api.speech.microsoft.com` |
+| West Europe | `https://westeurope.customvoice.api.speech.microsoft.com` |
## Voices list
The Long Audio API is limited to the set of voices returned by a GET request to its voices endpoint.
## Text inputs
-Batch synthesis text inputs are sent in a JSON payload of up to 500 kilobytes.
+Batch synthesis text inputs are sent in a JSON payload of up to 2 megabytes.
Long Audio API text inputs are uploaded from a file that meets the following requirements:
-* One plain text (.txt) or SSML text (.txt) file encoded as [UTF-8 with Byte Order Mark (BOM)](https://www.w3.org/International/questions/qa-utf8-bom.en#bom). Don't use compressed files such as ZIP. If you have more than one input file, you must submit multiple requests.
-* Contains more than 400 characters for plain text or 400 [billable characters](./text-to-speech.md#pricing-note) for SSML text, and less than 10,000 paragraphs. For plain text, each paragraph is separated by a new line. For SSML text, each SSML piece is considered a paragraph. Separate SSML pieces by different paragraphs.
+
+- One plain text (.txt) or SSML text (.txt) file encoded as [UTF-8 with Byte Order Mark (BOM)](https://www.w3.org/International/questions/qa-utf8-bom.en#bom). Don't use compressed files such as ZIP. If you have more than one input file, you must submit multiple requests.
+- Contains more than 400 characters for plain text or 400 [billable characters](./text-to-speech.md#pricing-note) for SSML text, and less than 10,000 paragraphs. For plain text, each paragraph is separated by a new line. For SSML text, each SSML piece is considered a paragraph. Separate SSML pieces by different paragraphs.
With Batch synthesis API, you can use any of the [supported SSML elements](speech-synthesis-markup.md), including the `audio`, `mstts:backgroundaudio`, and `lexicon` elements. The long audio API doesn't support the `audio`, `mstts:backgroundaudio`, and `lexicon` elements.
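For example, here's a sketch of a batch synthesis `inputs` entry whose SSML uses the `mstts:backgroundaudio` element (the background audio URL is a placeholder, and the attribute values are illustrative):

```json
{
  "inputs": [
    {
      "content": "<speak version=\"1.0\" xmlns=\"http://www.w3.org/2001/10/synthesis\" xmlns:mstts=\"http://www.w3.org/2001/mstts\" xml:lang=\"en-US\"><mstts:backgroundaudio src=\"https://contoso.example.com/background.wav\" volume=\"50\" fadein=\"3000\" fadeout=\"4000\"/><voice name=\"en-US-JennyNeural\">The rainbow has seven colors.</voice></speak>"
    }
  ]
}
```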
Batch synthesis API supports all [text to speech audio output formats](rest-text-to-speech.md#audio-outputs).
The Long Audio API is limited to the following set of audio output formats. The sample rate for long audio voices is 24kHz, not 48kHz. Other sample rates can be obtained through upsampling or downsampling when synthesizing.
-* riff-8khz-16bit-mono-pcm
-* riff-16khz-16bit-mono-pcm
-* riff-24khz-16bit-mono-pcm
-* riff-48khz-16bit-mono-pcm
-* audio-16khz-32kbitrate-mono-mp3
-* audio-16khz-64kbitrate-mono-mp3
-* audio-16khz-128kbitrate-mono-mp3
-* audio-24khz-48kbitrate-mono-mp3
-* audio-24khz-96kbitrate-mono-mp3
-* audio-24khz-160kbitrate-mono-mp3
+- riff-8khz-16bit-mono-pcm
+- riff-16khz-16bit-mono-pcm
+- riff-24khz-16bit-mono-pcm
+- riff-48khz-16bit-mono-pcm
+- audio-16khz-32kbitrate-mono-mp3
+- audio-16khz-64kbitrate-mono-mp3
+- audio-16khz-128kbitrate-mono-mp3
+- audio-24khz-48kbitrate-mono-mp3
+- audio-24khz-96kbitrate-mono-mp3
+- audio-24khz-160kbitrate-mono-mp3
## Getting results
-With batch synthesis API, use the URL from the `outputs.result` property of the HTTP GET batch synthesis response. The [results](batch-synthesis.md#batch-synthesis-results) are in a ZIP file that contains the audio (such as `0001.wav`), summary, and debug details.
+With batch synthesis API, use the URL from the `outputs.result` property of the HTTP GET batch synthesis response. The [results](batch-synthesis.md#batch-synthesis-results) are in a ZIP file that contains the audio (such as `0001.wav`), summary, and debug details.
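As a quick sketch of consuming that ZIP from Python (the result URL below is a placeholder; a real one comes from the `outputs.result` property of the GET response):

```python
import io
import zipfile

import requests

# Placeholder URL; a real one comes from the `outputs.result` property.
result_url = "https://example.blob.core.windows.net/batch/results.zip"

archive_bytes = requests.get(result_url).content
with zipfile.ZipFile(io.BytesIO(archive_bytes)) as archive:
    archive.extractall("batch-synthesis-results")
    print(archive.namelist())  # audio such as 0001.wav, plus summary and debug details
```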
Long Audio API text inputs and results are returned via two separate content URLs as shown in the following example. The one with `"kind": "LongAudioSynthesisScript"` is the input script submitted. The other one with `"kind": "LongAudioSynthesisResult"` is the result of this request. Both ZIP files can be downloaded from the URL in their `links.contentUrl` property.
## Cleaning up resources
-Batch synthesis API supports up to 200 batch synthesis jobs that don't have a status of "Succeeded" or "Failed". The Speech service keeps each synthesis history for up to 31 days, or the duration of the request `timeToLive` property, whichever comes sooner. The date and time of automatic deletion (for synthesis jobs with a status of "Succeeded" or "Failed") is equal to the `lastActionDateTime` + `timeToLive` properties.
+Batch synthesis API supports up to 300 batch synthesis jobs that don't have a status of "Succeeded" or "Failed". The Speech service keeps each synthesis history for up to 31 days, or the duration of the request `timeToLiveInHours` property, whichever comes sooner. The date and time of automatic deletion (for synthesis jobs with a status of "Succeeded" or "Failed") is equal to the `lastActionDateTime` + `timeToLiveInHours` properties.
+The Long Audio API is limited to 20,000 requests for each Azure subscription account. The Speech service doesn't remove job history automatically. You must remove the previous job run history before making new requests that would otherwise exceed the limit.
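To make the retention arithmetic and manual cleanup concrete, here's a minimal sketch; the endpoint path and API version are assumptions to verify against the batch synthesis reference:

```python
from datetime import datetime, timedelta

import requests

# A job with status "Succeeded" or "Failed" is deleted automatically at
# lastActionDateTime + timeToLiveInHours.
last_action = datetime.fromisoformat("2024-04-01T12:00:00")
time_to_live_in_hours = 744  # 31 days, the maximum retention
print("Auto-delete at:", last_action + timedelta(hours=time_to_live_in_hours))

# Deleting a finished job yourself frees a slot immediately. The endpoint
# path and api-version below are assumptions; verify against the reference.
region, key = "eastus", "YourSpeechKey"
resp = requests.delete(
    f"https://{region}.api.cognitive.microsoft.com/texttospeech/batchsyntheses/my-audio-book",
    params={"api-version": "2024-04-01"},
    headers={"Ocp-Apim-Subscription-Key": key},
)
print(resp.status_code)  # 204 on successful deletion
```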
## Next steps
ai-services Regions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/regions.md
The Speech service allows your application to convert audio to text, perform spe
Keep in mind the following points:
-* If your application uses a [Speech SDK](speech-sdk.md), you provide the region identifier, such as `westus`, when you create a `SpeechConfig`. Make sure the region matches the region of your subscription.
-* If your application uses one of the Speech service REST APIs, the region is part of the endpoint URI you use when making requests.
-* Keys created for a region are valid only in that region. If you attempt to use them with other regions, you get authentication errors.
+- If your application uses a [Speech SDK](speech-sdk.md), you provide the region identifier, such as `westus`, when you create a `SpeechConfig` (see the sketch after this list). Make sure the region matches the region of your subscription.
+- If your application uses one of the Speech service REST APIs, the region is part of the endpoint URI you use when making requests.
+- Keys created for a region are valid only in that region. If you attempt to use them with other regions, you get authentication errors.
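To make the first point concrete, here's a minimal sketch with the Speech SDK for Python; the key and region are placeholders:

```python
import azure.cognitiveservices.speech as speechsdk

# The region identifier must match the region of your Speech resource;
# keys created for one region fail authentication in every other region.
speech_config = speechsdk.SpeechConfig(subscription="YourSpeechKey", region="westus")

synthesizer = speechsdk.SpeechSynthesizer(speech_config=speech_config)
result = synthesizer.speak_text_async("Region check").get()
print(result.reason)  # ResultReason.SynthesizingAudioCompleted on success
```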
> [!NOTE]
> Speech service doesn't store or process customer data outside the region the customer deploys the service instance in.
Keep in mind the following points:
The following regions are supported for Speech service features such as speech to text, text to speech, pronunciation assessment, and translation. The geographies are listed in alphabetical order.
-| Geography | Region | Region identifier |
-| -- | -- | -- |
-| Africa | South Africa North | `southafricanorth` <sup>6</sup>|
-| Asia Pacific | East Asia | `eastasia` <sup>5</sup>|
-| Asia Pacific | Southeast Asia | `southeastasia` <sup>1,2,3,4,5,7,9</sup>|
-| Asia Pacific | Australia East | `australiaeast` <sup>1,2,3,4,7</sup>|
-| Asia Pacific | Central India | `centralindia` <sup>1,2,3,4,5</sup>|
-| Asia Pacific | Japan East | `japaneast` <sup>2,5</sup>|
-| Asia Pacific | Japan West | `japanwest` |
-| Asia Pacific | Korea Central | `koreacentral` <sup>2</sup>|
-| Canada | Canada Central | `canadacentral` <sup>1</sup>|
-| Europe | North Europe | `northeurope` <sup>1,2,4,5,7</sup>|
-| Europe | West Europe | `westeurope` <sup>1,2,3,4,5,7,9</sup>|
-| Europe | France Central | `francecentral` |
-| Europe | Germany West Central | `germanywestcentral` |
-| Europe | Norway East | `norwayeast` |
-| Europe | Sweden Central | `swedencentral`<sup>8</sup> |
-| Europe | Switzerland North | `switzerlandnorth` <sup>6</sup>|
-| Europe | Switzerland West | `switzerlandwest` |
-| Europe | UK South | `uksouth` <sup>1,2,3,4,7</sup>|
-| Middle East | UAE North | `uaenorth` <sup>6</sup>|
-| South America | Brazil South | `brazilsouth` <sup>6</sup>|
-| Qatar | Qatar Central | `qatarcentral`<sup>8</sup> |
-| US | Central US | `centralus` |
-| US | East US | `eastus` <sup>1,2,3,4,5,7,9</sup>|
-| US | East US 2 | `eastus2` <sup>1,2,4,5</sup>|
-| US | North Central US | `northcentralus` <sup>4,6</sup>|
-| US | South Central US | `southcentralus` <sup>1,2,3,4,5,6,7</sup>|
-| US | West Central US | `westcentralus` <sup>5</sup>|
-| US | West US | `westus` <sup>2,5</sup>|
-| US | West US 2 | `westus2` <sup>1,2,4,5,7</sup>|
-| US | West US 3 | `westus3` |
+| Geography | Region | Region identifier |
+| - | -- | -- |
+| Africa | South Africa North | `southafricanorth` <sup>6</sup> |
+| Asia Pacific | East Asia | `eastasia` <sup>5</sup> |
+| Asia Pacific | Southeast Asia | `southeastasia` <sup>1,2,4,5,7,9</sup> |
+| Asia Pacific | Australia East | `australiaeast` <sup>1,2,4,7</sup> |
+| Asia Pacific | Central India | `centralindia` <sup>1,2,4,5</sup> |
+| Asia Pacific | Japan East | `japaneast` <sup>2,5</sup> |
+| Asia Pacific | Japan West | `japanwest` <sup>3</sup> |
+| Asia Pacific | Korea Central | `koreacentral` <sup>2</sup> |
+| Canada | Canada Central | `canadacentral` <sup>1</sup> |
+| Europe | North Europe | `northeurope` <sup>1,2,4,5,7</sup> |
+| Europe | West Europe | `westeurope` <sup>1,2,4,5,7,9</sup> |
+| Europe | France Central | `francecentral` |
+| Europe | Germany West Central | `germanywestcentral` |
+| Europe | Norway East | `norwayeast` |
+| Europe | Sweden Central | `swedencentral`<sup>8</sup> |
+| Europe | Switzerland North | `switzerlandnorth` <sup>6</sup> |
+| Europe | Switzerland West | `switzerlandwest` <sup>3</sup> |
+| Europe | UK South | `uksouth` <sup>1,2,4,7</sup> |
+| Middle East | UAE North | `uaenorth` <sup>6</sup> |
+| South America | Brazil South | `brazilsouth` <sup>6</sup> |
+| Qatar | Qatar Central | `qatarcentral`<sup>3,8</sup> |
+| US | Central US | `centralus` |
+| US | East US | `eastus` <sup>1,2,4,5,7,9</sup> |
+| US | East US 2 | `eastus2` <sup>1,2,4,5</sup> |
+| US | North Central US | `northcentralus` <sup>4,6</sup> |
+| US | South Central US | `southcentralus` <sup>1,2,4,5,6,7</sup> |
+| US | West Central US | `westcentralus` <sup>3,5</sup> |
+| US | West US | `westus` <sup>2,5</sup> |
+| US | West US 2 | `westus2` <sup>1,2,4,5,7</sup> |
+| US | West US 3 | `westus3` <sup>3</sup> |
<sup>1</sup> The region has dedicated hardware for custom speech training. If you plan to train a custom model with audio data, use one of the regions with dedicated hardware for faster training. Then you can [copy the trained model](how-to-custom-speech-train-model.md#copy-a-model) to another region.
<sup>2</sup> The region is available for custom neural voice training. You can copy a trained neural voice model to other regions for deployment.
-<sup>3</sup> The Long Audio API is available in the region.
+<sup>3</sup> The region doesn't support Batch Synthesis API.
<sup>4</sup> The region supports custom keyword advanced models.
The following regions are supported for Speech service features such as speech t
Available regions for intent recognition via the Speech SDK are in the following table.
| Global region | Region | Region identifier |
+| - | - | -- |
+| Asia | East Asia | `eastasia` |
+| Asia | Southeast Asia | `southeastasia` |
+| Australia | Australia East | `australiaeast` |
+| Europe | North Europe | `northeurope` |
+| Europe | West Europe | `westeurope` |
+| North America | East US | `eastus` |
+| North America | East US 2 | `eastus2` |
+| North America | South Central US | `southcentralus` |
+| North America | West Central US | `westcentralus` |
+| North America | West US | `westus` |
+| North America | West US 2 | `westus2` |
+| South America | Brazil South | `brazilsouth` |
This is a subset of the publishing regions supported by the [Language Understanding service (LUIS)](../luis/luis-reference-regions.md).
This is a subset of the publishing regions supported by the [Language Understand
The [Speech SDK](speech-sdk.md) supports voice assistant capabilities through [Direct Line Speech](./direct-line-speech.md) for regions in the following table.
+| Global region | Region | Region identifier |
+| - | - | -- |
+| North America | West US | `westus` |
+| North America | West US 2 | `westus2` |
+| North America | East US | `eastus` |
+| North America | East US 2 | `eastus2` |
+| North America | West Central US | `westcentralus` |
+| North America | South Central US | `southcentralus` |
+| Europe | West Europe | `westeurope` |
+| Europe | North Europe | `northeurope` |
+| Asia | East Asia | `eastasia` |
+| Asia | Southeast Asia | `southeastasia` |
+| India | Central India | `centralindia` |
ai-studio Content Filtering https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/concepts/content-filtering.md
Last updated 2/22/2024
# Content filtering in Azure AI Studio
ai-studio Evaluation Approach Gen Ai https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/concepts/evaluation-approach-gen-ai.md
Title: Evaluation of generative AI applications with Azure AI Studio
description: Explore the broader domain of monitoring and evaluating large language models through the establishment of precise metrics, the development of test sets for measurement, and the implementation of iterative testing.
Last updated 3/28/2024
# Evaluation of generative AI applications
ai-studio Evaluation Improvement Strategies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/concepts/evaluation-improvement-strategies.md
Title: Harms mitigation strategies with Azure AI
description: Explore various strategies for addressing the challenges posed by large language models and mitigating potential harms.
Last updated 2/22/2024
# Harms mitigation strategies with Azure AI
ai-studio Evaluation Metrics Built In https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/concepts/evaluation-metrics-built-in.md
Title: Evaluation and monitoring metrics for generative AI
description: Discover the supported built-in metrics for evaluating large language models, understand their application and usage, and learn how to interpret them effectively.
Last updated 03/28/2024
# Evaluation and monitoring metrics for generative AI
ai-studio Vulnerability Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/concepts/vulnerability-management.md
Title: Vulnerability management
description: Learn how Azure AI Studio manages vulnerabilities in images that the service provides, and how you can get the latest security updates for the components that you manage.
Last updated: 02/22/2024
# Vulnerability management for Azure AI Studio
ai-studio Configure Managed Network https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/configure-managed-network.md
Title: How to configure a managed network for Azure AI
+ Title: How to configure a managed network for Azure AI hubs
-description: Learn how to configure a managed network for Azure AI
+description: Learn how to configure a managed network for Azure AI hubs
Last updated: 3/30/2024
-# How to configure a managed network for Azure AI
+# How to configure a managed network for Azure AI hubs
[!INCLUDE [Azure AI Studio preview](../includes/preview-ai-studio.md)]
-We have two network isolation aspects. One is the network isolation to access an Azure AI. Another is the network isolation of computing resources in your Azure AI and Azure AI projects such as Compute Instance, Serverless and Managed Online Endpoint. This document explains the latter highlighted in the diagram. You can use Azure AI built-in network isolation to protect your computing resources.
+We have two network isolation aspects. One is the network isolation to access an Azure AI hub. Another is the network isolation of computing resources in your Azure AI hub and Azure AI projects such as compute instance, serverless and managed online endpoint. This document explains the latter highlighted in the diagram. You can use Azure AI hub built-in network isolation to protect your computing resources.
You need to configure the following network isolation settings:
- Choose network isolation mode. You have two options: allow internet outbound mode or allow only approved outbound mode.
-- Create private endpoint outbound rules to your private Azure resources. Note that private Azure AI Services and Azure AI Search are not supported yet.
+- Create private endpoint outbound rules to your private Azure resources. Note that private Azure AI services and Azure AI Search are not supported yet.
- If you use Visual Studio Code integration with allow only approved outbound mode, create FQDN outbound rules described in the [use Visual Studio Code](#scenario-use-visual-studio-code) section.
- If you use HuggingFace models in Models with allow only approved outbound mode, create FQDN outbound rules described in the [use HuggingFace models](#scenario-use-huggingface-models) section.
## Network isolation architecture and isolation modes
-When you enable managed virtual network isolation, a managed virtual network is created for the Azure AI. Managed compute resources you create for the Azure AI automatically use this managed VNet. The managed VNet can use private endpoints for Azure resources that are used by your Azure AI, such as Azure Storage, Azure Key Vault, and Azure Container Registry.
+When you enable managed virtual network isolation, a managed virtual network is created for the Azure AI hub. Managed compute resources you create for the Azure AI hub automatically use this managed VNet. The managed VNet can use private endpoints for Azure resources that are used by your Azure AI hub, such as Azure Storage, Azure Key Vault, and Azure Container Registry.
There are three different configuration modes for outbound traffic from the managed VNet:
There are three different configuration modes for outbound traffic from the mana
| -- | -- | -- |
| Allow internet outbound | Allow all internet outbound traffic from the managed VNet. | You want unrestricted access to machine learning resources on the internet, such as python packages or pretrained models.<sup>1</sup> |
| Allow only approved outbound | Outbound traffic is allowed by specifying service tags. | * You want to minimize the risk of data exfiltration, but you need to prepare all required machine learning artifacts in your private environment.</br>* You want to configure outbound access to an approved list of services, service tags, or FQDNs. |
-| Disabled | Inbound and outbound traffic isn't restricted. | You want public inbound and outbound from the Azure AI. |
+| Disabled | Inbound and outbound traffic isn't restricted. | You want public inbound and outbound from the Azure AI hub. |
<sup>1</sup> You can use outbound rules with _allow only approved outbound_ mode to achieve the same result as using allow internet outbound. The differences are:
There are three different configuration modes for outbound traffic from the mana
* Adding FQDN outbound rules __increases your costs__ because this rule type uses Azure Firewall.
* The default rules for _allow only approved outbound_ are designed to minimize the risk of data exfiltration. Any outbound rules you add might increase your risk.
-The managed VNet is preconfigured with [required default rules](#list-of-required-rules). It's also configured for private endpoint connections to your Azure AI, Azure AI's default storage, container registry and key vault __if they're configured as private__ or __the Azure AI isolation mode is set to allow only approved outbound__. After choosing the isolation mode, you only need to consider other outbound requirements you might need to add.
+The managed VNet is preconfigured with [required default rules](#list-of-required-rules). It's also configured for private endpoint connections to your Azure AI hub, Azure AI hub's default storage, container registry and key vault __if they're configured as private__ or __the Azure AI hub isolation mode is set to allow only approved outbound__. After choosing the isolation mode, you only need to consider other outbound requirements you might need to add.
The following diagram shows a managed VNet configured to __allow internet outbound__:
The following diagram shows a managed VNet configured to __allow internet outbou
The following diagram shows a managed VNet configured to __allow only approved outbound__:
> [!NOTE]
-> In this configuration, the storage, key vault, and container registry used by the Azure AI are flagged as private. Since they are flagged as private, a private endpoint is used to communicate with them.
+> In this configuration, the storage, key vault, and container registry used by the Azure AI hub are flagged as private. Since they are flagged as private, a private endpoint is used to communicate with them.
:::image type="content" source="../media/how-to/network/only-approved-outbound.svg" alt-text="Diagram of managed VNet isolation configured for allow only approved outbound." lightbox="../media/how-to/network/only-approved-outbound.png":::
The following diagram shows a managed VNet configured to __allow only approved o
# [Azure CLI](#tab/azure-cli)
-Not available in AI CLI, but you can use [Azure Machine Learning CLI](../../machine-learning/how-to-managed-network.md#configure-a-managed-virtual-network-to-allow-internet-outbound). Use your Azure AI name as workspace name in Azure Machine Learning CLI.
+Not available in AI CLI, but you can use [Azure Machine Learning CLI](../../machine-learning/how-to-managed-network.md#configure-a-managed-virtual-network-to-allow-internet-outbound). Use your Azure AI hub name as workspace name in Azure Machine Learning CLI.
# [Python SDK](#tab/python)
Not available.
# [Azure portal](#tab/portal)
-* __Create a new Azure AI__:
+* __Create a new Azure AI hub__:
- 1. Sign in to the [Azure portal](https://portal.azure.com), and choose Azure AI from Create a resource menu.
+ 1. Sign in to the [Azure portal](https://portal.azure.com), and choose Azure AI Studio from Create a resource menu.
+ 1. Select **+ New Azure AI**.
    1. Provide the required information on the __Basics__ tab.
    1. From the __Networking__ tab, select __Private with Internet Outbound__.
    1. To add an _outbound rule_, select __Add user-defined outbound rules__ from the __Networking__ tab. From the __Workspace outbound rules__ sidebar, provide the following information:
        * __Rule name__: A name for the rule. The name must be unique for this workspace.
- * __Destination type__: Private Endpoint is the only option when the network isolation is private with internet outbound. Azure AI managed VNet doesn't support creating a private endpoint to all Azure resource types. For a list of supported resources, see the [Private endpoints](#private-endpoints) section.
+ * __Destination type__: Private Endpoint is the only option when the network isolation is private with internet outbound. Azure AI hub managed VNet doesn't support creating a private endpoint to all Azure resource types. For a list of supported resources, see the [Private endpoints](#private-endpoints) section.
        * __Subscription__: The subscription that contains the Azure resource you want to add a private endpoint for.
        * __Resource group__: The resource group that contains the Azure resource you want to add a private endpoint for.
        * __Resource type__: The type of the Azure resource.
Not available.
* __Update an existing workspace__:
- 1. Sign in to the [Azure portal](https://portal.azure.com), and select the Azure AI that you want to enable managed VNet isolation for.
+ 1. Sign in to the [Azure portal](https://portal.azure.com), and select the Azure AI hub that you want to enable managed VNet isolation for.
    1. Select __Networking__, then select __Private with Internet Outbound__.
        * To _add_ an _outbound rule_, select __Add user-defined outbound rules__ from the __Networking__ tab. From the __Workspace outbound rules__ sidebar, provide the same information as used when creating a workspace in the 'Create a new workspace' section.
Not available.
# [Azure CLI](#tab/azure-cli)
-Not available in AI CLI, but you can use [Azure Machine Learning CLI](../../machine-learning/how-to-managed-network.md#configure-a-managed-virtual-network-to-allow-only-approved-outbound). Use your Azure AI name as workspace name in Azure Machine Learning CLI.
+Not available in AI CLI, but you can use [Azure Machine Learning CLI](../../machine-learning/how-to-managed-network.md#configure-a-managed-virtual-network-to-allow-only-approved-outbound). Use your Azure AI hub name as workspace name in Azure Machine Learning CLI.
# [Python SDK](#tab/python)
Not available.
# [Azure portal](#tab/portal)
-* __Create a new Azure AI__:
+* __Create a new Azure AI hub__:
- 1. Sign in to the [Azure portal](https://portal.azure.com), and choose Azure AI from Create a resource menu.
+ 1. Sign in to the [Azure portal](https://portal.azure.com), and choose Azure AI Studio from Create a resource menu.
+ 1. Select **+ New Azure AI**.
    1. Provide the required information on the __Basics__ tab.
    1. From the __Networking__ tab, select __Private with Approved Outbound__.
Not available.
        * __Sub Resource__: The sub resource of the Azure resource type.
    > [!TIP]
- > Azure AI managed VNet doesn't support creating a private endpoint to all Azure resource types. For a list of supported resources, see the [Private endpoints](#private-endpoints) section.
+ > Azure AI hub managed VNet doesn't support creating a private endpoint to all Azure resource types. For a list of supported resources, see the [Private endpoints](#private-endpoints) section.
If the destination type is __Service Tag__, provide the following information:
Not available.
* __Update an existing workspace__:
- 1. Sign in to the [Azure portal](https://portal.azure.com), and select the Azure AI that you want to enable managed VNet isolation for.
+ 1. Sign in to the [Azure portal](https://portal.azure.com), and select the Azure AI hub that you want to enable managed VNet isolation for.
    1. Select __Networking__, then select __Private with Approved Outbound__.
        * To _add_ an _outbound rule_, select __Add user-defined outbound rules__ from the __Networking__ tab. From the __Workspace outbound rules__ sidebar, provide the same information as when creating a workspace in the previous 'Create a new workspace' section.
Not available.
# [Azure CLI](#tab/azure-cli)
-Not available in AI CLI, but you can use [Azure Machine Learning CLI](../../machine-learning/how-to-managed-network.md#manage-outbound-rules). Use your Azure AI name as workspace name in Azure Machine Learning CLI.
+Not available in AI CLI, but you can use [Azure Machine Learning CLI](../../machine-learning/how-to-managed-network.md#manage-outbound-rules). Use your Azure AI hub name as workspace name in Azure Machine Learning CLI.
# [Python SDK](#tab/python)
Not available.
# [Azure portal](#tab/portal)
-1. Sign in to the [Azure portal](https://portal.azure.com), and select the Azure AI that you want to enable managed VNet isolation for.
+1. Sign in to the [Azure portal](https://portal.azure.com), and select the Azure AI hub that you want to enable managed VNet isolation for.
1. Select __Networking__. The __Azure AI Outbound access__ section allows you to manage outbound rules.
    * To _add_ an _outbound rule_, select __Add user-defined outbound rules__ from the __Networking__ tab. From the __Azure AI outbound rules__ sidebar, provide the following information:
Not available.
> These rules are automatically added to the managed VNet.
__Private endpoints__:
-* When the isolation mode for the managed VNet is `Allow internet outbound`, private endpoint outbound rules are automatically created as required rules from the managed VNet for the Azure AI and associated resources __with public network access disabled__ (Key Vault, Storage Account, Container Registry, Azure AI).
-* When the isolation mode for the managed VNet is `Allow only approved outbound`, private endpoint outbound rules are automatically created as required rules from the managed VNet for the Azure AI and associated resources __regardless of public network access mode for those resources__ (Key Vault, Storage Account, Container Registry, Azure AI).
+* When the isolation mode for the managed VNet is `Allow internet outbound`, private endpoint outbound rules are automatically created as required rules from the managed VNet for the Azure AI hub and associated resources __with public network access disabled__ (Key Vault, Storage Account, Container Registry, Azure AI hub).
+* When the isolation mode for the managed VNet is `Allow only approved outbound`, private endpoint outbound rules are automatically created as required rules from the managed VNet for the Azure AI hub and associated resources __regardless of public network access mode for those resources__ (Key Vault, Storage Account, Container Registry, Azure AI hub).
__Outbound__ service tag rules:
To allow installation of __Python packages for training and deployment__, add ou
Visual Studio Code relies on specific hosts and ports to establish a remote connection.
#### Hosts
-If you plan to use __Visual Studio Code__ with Azure AI, add outbound _FQDN_ rules to allow traffic to the following hosts:
+If you plan to use __Visual Studio Code__ with the Azure AI hub, add outbound _FQDN_ rules to allow traffic to the following hosts:
> [!WARNING]
> FQDN outbound rules are implemented using Azure Firewall. If you use outbound FQDN rules, charges for Azure Firewall are included in your billing. For more information, see [Pricing](#pricing).
You must allow network traffic to ports 8704 to 8710. The VS Code server dynamic
### Scenario: Use HuggingFace models
-If you plan to use __HuggingFace models__ with Azure AI, add outbound _FQDN_ rules to allow traffic to the following hosts:
+If you plan to use __HuggingFace models__ with the Azure AI hub, add outbound _FQDN_ rules to allow traffic to the following hosts:
> [!WARNING]
> FQDN outbound rules are implemented using Azure Firewall. If you use outbound FQDN rules, charges for Azure Firewall are included in your billing. For more information, see [Pricing](#pricing).
If you plan to use __HuggingFace models__ with Azure AI, add outbound _FQDN_ rul
Private endpoints are currently supported for the following Azure
-* Azure AI
+* Azure AI hub
* Azure Machine Learning
* Azure Machine Learning registries
* Azure Storage (all sub resource types)
Private endpoints are currently supported for the following Azure
When you create a private endpoint, you provide the _resource type_ and _subresource_ that the endpoint connects to. Some resources have multiple types and subresources. For more information, see [what is a private endpoint](/azure/private-link/private-endpoint-overview).
-When you create a private endpoint for Azure AI dependency resources, such as Azure Storage, Azure Container Registry, and Azure Key Vault, the resource can be in a different Azure subscription. However, the resource must be in the same tenant as the Azure AI.
+When you create a private endpoint for Azure AI hub dependency resources, such as Azure Storage, Azure Container Registry, and Azure Key Vault, the resource can be in a different Azure subscription. However, the resource must be in the same tenant as the Azure AI hub.
A private endpoint is automatically created for a connection if the target resource is an Azure resource listed above. A valid target ID is expected for the private endpoint. A valid target ID for the connection can be the ARM ID of a parent resource. The target ID is also expected in the target of the connection or in `metadata.resourceid`. For more on connections, see [How to add a new connection in Azure AI Studio](connections-add.md).
## Pricing
-The Azure AI managed VNet feature is free. However, you're charged for the following resources that are used by the managed VNet:
+The Azure AI hub managed VNet feature is free. However, you're charged for the following resources that are used by the managed VNet:
* Azure Private Link - Private endpoints used to secure communications between the managed VNet and Azure resources relies on Azure Private Link. For more information on pricing, see [Azure Private Link pricing](https://azure.microsoft.com/pricing/details/private-link/).
-* FQDN outbound rules - FQDN outbound rules are implemented using Azure Firewall. If you use outbound FQDN rules, charges for Azure Firewall are included in your billing. Azure Firewall SKU is standard. Azure Firewall is provisioned per Azure AI.
+* FQDN outbound rules - FQDN outbound rules are implemented using Azure Firewall. If you use outbound FQDN rules, charges for Azure Firewall are included in your billing. Azure Firewall SKU is standard. Azure Firewall is provisioned per Azure AI hub.
> [!IMPORTANT]
> The firewall isn't created until you add an outbound FQDN rule. If you don't use FQDN rules, you will not be charged for Azure Firewall. For more information on pricing, see [Azure Firewall pricing](https://azure.microsoft.com/pricing/details/azure-firewall/).
The Azure AI managed VNet feature is free. However, you're charged for the follo
## Limitations
* Azure AI Studio currently doesn't support bring your own virtual network; it only supports managed VNet isolation.
-* Azure AI services provisioned with Azure AI and Azure AI Search attached with Azure AI should be public.
+* Azure AI services provisioned with Azure AI hub and Azure AI Search attached with Azure AI hub should be public.
* The "Add your data" feature in the Azure AI Studio playground doesn't support private storage account. * Once you enable managed VNet isolation of your Azure AI, you can't disable it. * Managed VNet uses private endpoint connection to access your private resources. You can't have a private endpoint and a service endpoint at the same time for your Azure resources, such as a storage account. We recommend using private endpoints in all scenarios.
ai-studio Costs Plan Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/costs-plan-manage.md
# Plan and manage costs for Azure AI Studio
This article describes how you plan for and manage costs for Azure AI Studio. First, you use the Azure pricing calculator to estimate costs before you add any resources for the service. Next, as you add Azure resources, review the estimated costs. You use Azure AI services in Azure AI Studio. Costs for Azure AI services are only a portion of the monthly costs in your Azure bill. You're billed for all Azure services and resources used in your Azure subscription, including third-party services.
ai-studio Create Manage Runtime https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/create-manage-runtime.md
Title: Create and manage prompt flow runtimes
description: Learn how to create and manage prompt flow runtimes in Azure AI Studio.
Last updated 2/22/2024
# Create and manage prompt flow runtimes in Azure AI Studio
ai-studio Create Secure Ai Hub https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/create-secure-ai-hub.md
Title: Create a secure AI hub
description: Create an Azure AI hub inside a managed virtual network. The managed virtual network secures access to managed resources such as computes.
Last updated: 03/22/2024
# Customer intent: As an administrator, I want to create a secure AI hub and project with a managed virtual network so that I can secure access to the AI hub and project resources.
ai-studio Data Image Add https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/data-image-add.md
Last updated 12/11/2023
# Azure OpenAI on your data with images using GPT-4 Turbo with Vision (preview)
ai-studio Evaluate Flow Results https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/evaluate-flow-results.md
Title: How to view evaluation results in Azure AI Studio
description: This article provides instructions on how to view evaluation results in Azure AI Studio.
Last updated 3/28/2024
# How to view evaluation results in Azure AI Studio
ai-studio Evaluate Generative Ai App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/evaluate-generative-ai-app.md
Title: How to evaluate with Azure AI Studio and SDK
description: Evaluate your generative AI application with Azure AI Studio UI and SDK.
Last updated 3/28/2024
zone_pivot_groups: azure-ai-studio-sdk
ai-studio Evaluate Prompts Playground https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/evaluate-prompts-playground.md
Title: How to manually evaluate prompts in Azure AI Studio playground
description: Quickly test and evaluate prompts in Azure AI Studio playground.
Last updated 2/22/2024
# Manually evaluate prompts in Azure AI Studio playground
ai-studio Fine Tune Model Llama https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/fine-tune-model-llama.md
Title: Fine-tune a Llama 2 model in Azure AI Studio
description: Learn how to fine-tune a Llama 2 model in Azure AI Studio.
Last updated 12/11/2023
ai-studio Flow Bulk Test Evaluation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/flow-bulk-test-evaluation.md
Title: Submit batch run and evaluate a flow
description: Learn how to submit a batch run and use built-in evaluation methods in prompt flow to evaluate how well your flow performs with a large dataset in Azure AI Studio.
Last updated 2/24/2024
# Submit a batch run and evaluate a flow
ai-studio Flow Deploy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/flow-deploy.md
Title: Deploy a flow as a managed online endpoint for real-time inference
description: Learn how to deploy a flow as a managed online endpoint for real-time inference with Azure AI Studio.
Last updated 2/24/2024
# Deploy a flow for real-time inference
ai-studio Flow Develop Evaluation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/flow-develop-evaluation.md
Title: Develop an evaluation flow
description: Learn how to customize or create your own evaluation flow tailored to your tasks and objectives, and then use it in a batch run as an evaluation method in prompt flow with Azure AI Studio.
Last updated 2/24/2024
# Develop an evaluation flow in Azure AI Studio
ai-studio Flow Develop https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/flow-develop.md
Title: How to build with prompt flow
description: This article provides instructions on how to build with prompt flow.
Last updated 2/24/2024
# Develop a prompt flow
ai-studio Flow Process Image https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/flow-process-image.md
Title: Process images in prompt flow
description: Learn how to use images in prompt flow.
Last updated 2/26/2024
ai-studio Flow Tune Prompts Using Variants https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/flow-tune-prompts-using-variants.md
Title: Tune prompts using variants
description: Learn how to tune prompts using variants in Prompt flow with Azure AI Studio.
Last updated 2/24/2024
# Tune prompts using variants in Azure AI Studio
ai-studio Model Catalog https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/model-catalog.md
Title: Explore the model catalog in Azure AI Studio
description: This article introduces foundation model capabilities and the model catalog in Azure AI Studio.
Last updated 2/22/2024
# Explore the model catalog in Azure AI Studio
ai-studio Azure Open Ai Gpt 4V Tool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/prompt-flow-tools/azure-open-ai-gpt-4v-tool.md
Title: Azure OpenAI GPT-4 Turbo with Vision tool in Azure AI Studio
description: This article introduces the Azure OpenAI GPT-4 Turbo with Vision tool for flows in Azure AI Studio.
Last updated 2/26/2024
ai-studio Content Safety Tool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/prompt-flow-tools/content-safety-tool.md
Title: Content Safety tool for flows in Azure AI Studio
description: This article introduces the Content Safety tool for flows in Azure AI Studio.
ai-studio Embedding Tool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/prompt-flow-tools/embedding-tool.md
Title: Embedding tool for flows in Azure AI Studio
description: This article introduces the Embedding tool for flows in Azure AI Studio.
ai-studio Faiss Index Lookup Tool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/prompt-flow-tools/faiss-index-lookup-tool.md
Title: Faiss Index Lookup tool for flows in Azure AI Studio
description: This article introduces the Faiss Index Lookup tool for flows in Azure AI Studio.
ai-studio Index Lookup Tool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/prompt-flow-tools/index-lookup-tool.md
Title: Index lookup tool for flows in Azure AI Studio
+ Title: Index Lookup tool for flows in Azure AI Studio
-description: This article introduces the Index Lookup tool for flows in Azure AI Studio.
+description: This article introduces the Index Lookup tool for flows in Azure AI Studio.
Last updated: 3/6/2024
# Index Lookup tool for Azure AI Studio
ai-studio Llm Tool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/prompt-flow-tools/llm-tool.md
Title: LLM tool for flows in Azure AI Studio
description: This article introduces the LLM tool for flows in Azure AI Studio.
ai-studio Prompt Flow Tools Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/prompt-flow-tools/prompt-flow-tools-overview.md
Title: Overview of prompt flow tools in Azure AI Studio
description: Learn about prompt flow tools that are available in Azure AI Studio.
Last updated 2/6/2024
ai-studio Prompt Tool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/prompt-flow-tools/prompt-tool.md
Title: Prompt tool for flows in Azure AI Studio
description: This article introduces the Prompt tool for flows in Azure AI Studio.
ai-studio Python Tool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/prompt-flow-tools/python-tool.md
Title: Python tool for flows in Azure AI Studio
description: This article introduces the Python tool for flows in Azure AI Studio.
ai-studio Serp Api Tool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/prompt-flow-tools/serp-api-tool.md
Title: Serp API tool for flows in Azure AI Studio
description: This article introduces the Serp API tool for flows in Azure AI Studio.
ai-studio Vector Db Lookup Tool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/prompt-flow-tools/vector-db-lookup-tool.md
Title: Vector DB Lookup tool for flows in Azure AI Studio
description: This article introduces the Vector DB Lookup tool for flows in Azure AI Studio.
ai-studio Vector Index Lookup Tool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/prompt-flow-tools/vector-index-lookup-tool.md
Title: Vector index lookup tool for flows in Azure AI Studio
description: This article introduces the Vector index lookup tool for flows in Azure AI Studio.
ai-studio Prompt Flow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/prompt-flow.md
Title: Prompt flow in Azure AI Studio
description: This article introduces prompt flow in Azure AI Studio.
Last updated 2/22/2024
# Prompt flow in Azure AI Studio
ai-studio Troubleshoot Secure Connection Project https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/troubleshoot-secure-connection-project.md
Title: Troubleshoot private endpoint connection
description: 'Learn how to troubleshoot connectivity problems to a project that is configured with a private endpoint.'
Last updated: 01/19/2024
# Troubleshoot connection to a project with a private endpoint
ai-studio Assistants https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/quickstarts/assistants.md
Last updated 03/19/2024
ai-studio Content Safety https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/quickstarts/content-safety.md
Last updated 11/15/2023
# QuickStart: Moderate text and images with content safety in Azure AI Studio
ai-studio Multimodal Vision https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/quickstarts/multimodal-vision.md
Last updated 12/11/2023
# Quickstart: Get started using GPT-4 Turbo with Vision on your images and videos in Azure AI Studio
aks Istio Scale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/istio-scale.md
+ Title: Istio service mesh AKS add-on performance
+description: Istio service mesh AKS add-on performance
+Last updated: 03/19/2024
+# Istio service mesh add-on performance
+The Istio-based service mesh add-on is logically split into a control plane (`istiod`) and a data plane. The data plane is composed of Envoy sidecar proxies inside workload pods. Istiod manages and configures these Envoy proxies. This article presents the performance of both the control and data plane for revision asm-1-19, including resource consumption, sidecar capacity, and latency overhead. Additionally, it provides suggestions for addressing potential strain on resources during periods of heavy load.
+
+## Control plane performance
+[Istiod's CPU and memory requirements][control-plane-performance] correlate with the rate of deployment and configuration changes and the number of proxies connected. The scenarios tested were:
+
+- Pod churn: examines the impact of pod churning on `istiod`. To reduce variables, only one service is used for all sidecars.
+- Multiple services: examines the maximum number of sidecars `istiod` can manage when they're spread across many services (1,000 in this test).
+
+#### Test specifications
+- One `istiod` instance with default settings
+- Horizontal pod autoscaling disabled
+- Tested with two network plugins: Azure CNI Overlay and Azure CNI Overlay with Cilium ([recommended network plugins for large-scale clusters](/azure/aks/azure-cni-overlay?tabs=kubectl#choosing-a-network-model-to-use))
+- Node SKU: Standard D16 v3 (16 vCPU, 64-GB memory)
+- Kubernetes version: 1.28.5
+- Istio revision: asm-1-19
+
+### Pod churn
+The [ClusterLoader2 framework][clusterloader2] was used to determine the maximum number of sidecars Istiod can manage when there's sidecar churning. The churn percent is defined as the percent of sidecars churned down/up during the test. For example, 50% churn for 10,000 sidecars would mean that 5,000 sidecars were churned down, then 5,000 sidecars were churned up. The churn percents tested were determined from the typical churn percentage during deployment rollouts (`maxUnavailable`). The churn rate was calculated by determining the total number of sidecars churned (up and down) over the actual time taken to complete the churning process.
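As a worked example of that churn-rate definition (with an assumed duration, purely for illustration):

```python
# 50% churn on a 10,000-sidecar mesh: 5,000 sidecars churned down, then
# 5,000 churned up, so 10,000 churn events in total.
total_sidecars = 10_000
churn_percent = 0.50
duration_seconds = 320  # assumed wall-clock time, for illustration only

churn_events = 2 * int(total_sidecars * churn_percent)
churn_rate = churn_events / duration_seconds
print(f"{churn_rate:.2f} sidecars/sec")  # 31.25 with these assumed numbers
```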
+
+#### Sidecar capacity and Istiod CPU and memory
+
+**Azure CNI overlay**
+
+| Churn (%) | Churn Rate (sidecars/sec) | Sidecar Capacity | Istiod Memory (GB) | Istiod CPU |
+|-|--|--|-|--|
+| 0 | -- | 25000 | 32.1 | 15 |
+| 25 | 31.2 | 15000 | 22.2 | 15 |
+| 50 | 31.2 | 15000 | 25.4 | 15 |
+**Azure CNI overlay with Cilium**
+
+| Churn (%) | Churn Rate (sidecars/sec) | Sidecar Capacity | Istiod Memory (GB) | Istiod CPU |
+|-|--|--|-|--|
+| 0 |-- | 30000 | 41.2 | 15 |
+| 25 | 41.7 | 25000 | 36.1 | 16 |
+| 50 | 37.9 | 25000 | 42.7 | 16 |
+### Multiple services
+The [ClusterLoader2 framework][clusterloader2] was used to determine the maximum number of sidecars `istiod` can manage with 1,000 services. The results can be compared to the 0% churn test (one service) in the pod churn scenario. Each service had `N` sidecars contributing to the overall maximum sidecar count. The API Server resource usage was observed to determine if there was any significant stress from the add-on.
+
+**Sidecar capacity**
+
+| Azure CNI Overlay | Azure CNI Overlay with Cilium |
+|||
+| 20000 | 20000 |
+
+**CPU and memory**
+
+| Resource | Azure CNI Overlay | Azure CNI Overlay with Cilium |
+||--||
+| API Server Memory (GB) | 38.9 | 9.7 |
+| API Server CPU | 6.1 | 4.7 |
+| Istiod Memory (GB) | 40.4 | 42.6 |
+| Istiod CPU | 15 | 16 |
+## Data plane performance
+Various factors impact [sidecar performance][data-plane-performance] such as request size, number of proxy worker threads, and number of client connections. Additionally, any request flowing through the mesh traverses the client-side proxy and then the server-side proxy. Therefore, latency and resource consumption are measured to determine the data plane performance.
+
+[Fortio][fortio] was used to create the load. The test was conducted with the [Istio benchmark repository][istio-benchmark] that was modified for use with the add-on.
+
+#### Test specifications
+- Tested with two network plugins: Azure CNI Overlay and Azure CNI Overlay with Cilium ([recommended network plugins for large-scale clusters](/azure/aks/azure-cni-overlay?tabs=kubectl#choosing-a-network-model-to-use))
+- Node SKU: Standard D16 v5 (16 vCPU, 64-GB memory)
+- Kubernetes version: 1.28.5
+- Two proxy workers
+- 1-KB payload
+- 1000 QPS at varying client connections
+- `http/1.1` protocol and mutual TLS enabled
+- 26 data points collected
+
+#### CPU and memory
+The memory and CPU usage for both the client and server proxy for 16 client connections and 1000 QPS across all network plugin scenarios is roughly 0.4 vCPU and 72 MB.
+
+#### Latency
+The sidecar Envoy proxy collects raw telemetry data after responding to a client, which doesn't directly affect the request's total processing time. However, this process delays the start of handling the next request, contributing to queue wait times and influencing average and tail latencies. Depending on the traffic pattern, the actual tail latency varies.
+
+The following evaluates the impact of adding sidecar proxies to the data path, showcasing the P90 and P99 latency.
+
+| Azure CNI Overlay |Azure CNI Overlay with Cilium |
+|:-:|:-:|
+[ ![Diagram that compares P99 latency for Azure CNI Overlay.](./media/aks-istio-addon/latency-box-plot/overlay-azure-p99.png) ](./media/aks-istio-addon/latency-box-plot/overlay-azure-p99.png#lightbox) | [ ![Diagram that compares P99 latency for Azure CNI Overlay with Cilium.](./media/aks-istio-addon/latency-box-plot/overlay-cilium-p99.png) ](./media/aks-istio-addon/latency-box-plot/overlay-cilium-p99.png#lightbox)
+[ ![Diagram that compares P90 latency for Azure CNI Overlay.](./media/aks-istio-addon/latency-box-plot/overlay-azure-p90.png) ](./media/aks-istio-addon/latency-box-plot/overlay-azure-p90.png#lightbox) | [ ![Diagram that compares P90 latency for Azure CNI Overlay with Cilium.](./media/aks-istio-addon/latency-box-plot/overlay-cilium-p90.png) ](./media/aks-istio-addon/latency-box-plot/overlay-cilium-p90.png#lightbox)
+
+## Service entry
+Istio's ServiceEntry custom resource definition enables adding other services into Istio's internal service registry. A [ServiceEntry][serviceentry] allows services already in the mesh to route to or access the specified services. However, configuring multiple ServiceEntries with the `resolution` field set to DNS can cause a [heavy load on DNS servers][understanding-dns]. The following suggestions can help reduce the load (a minimal sketch follows the list):
+
+- Switch to `resolution: NONE` to avoid proxy DNS lookups entirely. Suitable for most use cases.
+- Increase TTL (Time To Live) if you control the domains being resolved.
+- Limit the ServiceEntry scope with `exportTo`.
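As an illustration of the first and third suggestions, here's a minimal sketch, not taken from the add-on docs, that applies a ServiceEntry with `resolution: NONE` and a narrowed `exportTo` through the Kubernetes Python client; the host, name, and namespace are placeholders:

```python
from kubernetes import client, config

config.load_kube_config()
api = client.CustomObjectsApi()

# Placeholder host, name, and namespace.
service_entry = {
    "apiVersion": "networking.istio.io/v1beta1",
    "kind": "ServiceEntry",
    "metadata": {"name": "external-api", "namespace": "default"},
    "spec": {
        "hosts": ["api.example.com"],
        "location": "MESH_EXTERNAL",
        "ports": [{"number": 443, "name": "tls", "protocol": "TLS"}],
        "resolution": "NONE",   # sidecars skip proxy DNS lookups entirely
        "exportTo": ["."],      # visible only to this namespace
    },
}

api.create_namespaced_custom_object(
    group="networking.istio.io",
    version="v1beta1",
    namespace="default",
    plural="serviceentries",
    body=service_entry,
)
```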
+[control-plane-performance]: https://istio.io/latest/docs/ops/deployment/performance-and-scalability/#control-plane-performance
+[data-plane-performance]: https://istio.io/latest/docs/ops/deployment/performance-and-scalability/#data-plane-performance
+[clusterloader2]: https://github.com/kubernetes/perf-tests/tree/master/clusterloader2#clusterloader
+[fortio]: https://fortio.org/
+[istio-benchmark]: https://github.com/istio/tools/tree/master/perf/benchmark#istio-performance-benchmarking
+[serviceentry]: https://istio.io/latest/docs/reference/config/networking/service-entry/
+[understanding-dns]: https://preliminary.istio.io/latest/docs/ops/configuration/traffic-management/dns/#proxy-dns-resolution
azure-functions Dotnet Isolated In Process Differences https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/dotnet-isolated-in-process-differences.md
There are two execution models for .NET functions:
[!INCLUDE [functions-dotnet-execution-model](../../includes/functions-dotnet-execution-model.md)]
This article describes the current state of the functional and behavioral differences between the two models. To migrate from the in-process model to the isolated worker model, see [Migrate .NET apps from the in-process model to the isolated worker model][migrate].
## Execution model comparison table
Use the following table to compare feature and functional differences between the two models:
-| Feature/behavior | Isolated worker process | In-process<sup>3</sup> |
+| Feature/behavior | Isolated worker model | In-process model<sup>3</sup> |
| - | - | - |
| [Supported .NET versions](#supported-versions) | Long Term Support (LTS) versions,<br/>Standard Term Support (STS) versions,<br/>.NET Framework | Long Term Support (LTS) versions<sup>6</sup> |
| Core packages | [Microsoft.Azure.Functions.Worker](https://www.nuget.org/packages/Microsoft.Azure.Functions.Worker/)<br/>[Microsoft.Azure.Functions.Worker.Sdk](https://www.nuget.org/packages/Microsoft.Azure.Functions.Worker.Sdk) | [Microsoft.NET.Sdk.Functions](https://www.nuget.org/packages/Microsoft.NET.Sdk.Functions/) |
azure-functions Functions Add Output Binding Storage Queue Vs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-add-output-binding-storage-queue-vs.md
Because you're using a Queue storage output binding, you need the Storage bindin
Install-Package Microsoft.Azure.Functions.Worker.Extensions.Storage.Queues
```
# [In-process model](#tab/in-process)
+ [!INCLUDE [functions-in-process-model-retirement-note](../../includes/functions-in-process-model-retirement-note.md)]
```bash
Install-Package Microsoft.Azure.WebJobs.Extensions.Storage
```
azure-functions Functions Bindings Azure Data Explorer Input https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-azure-data-explorer-input.md
The Azure Data Explorer input binding retrieves data from a database.
[!INCLUDE [functions-bindings-csharp-intro-with-csx](../../includes/functions-bindings-csharp-intro-with-csx.md)]
# [Isolated worker model](#tab/isolated-process)
More samples for the Azure Data Explorer input binding (out of process) are available in the [GitHub repository](https://github.com/Azure/Webjobs.Extensions.Kusto/tree/main/samples/samples-outofproc).
azure-functions Functions Bindings Azure Data Explorer Output https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-azure-data-explorer-output.md
For information on setup and configuration details, see the [overview](functions
[!INCLUDE [functions-bindings-csharp-intro-with-csx](../../includes/functions-bindings-csharp-intro-with-csx.md)]
### [Isolated worker model](#tab/isolated-process)
More samples for the Azure Data Explorer output binding are available in the [GitHub repository](https://github.com/Azure/Webjobs.Extensions.Kusto/tree/main/samples/samples-outofproc).
azure-functions Functions Bindings Azure Data Explorer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-azure-data-explorer.md
dotnet add package Microsoft.Azure.Functions.Worker.Extensions.Kusto --prereleas
# [In-process model](#tab/in-process)
Functions run in the same process as the Functions host. To learn more, see [Develop C# class library functions using Azure Functions](functions-dotnet-class-library.md). Add the extension to your project by installing [this NuGet package](https://www.nuget.org/packages/Microsoft.Azure.WebJobs.Extensions.Kusto).
azure-functions Functions Bindings Azure Sql Input https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-azure-sql-input.md
For information on setup and configuration details, see the [overview](./functio
[!INCLUDE [functions-bindings-csharp-intro-with-csx](../../includes/functions-bindings-csharp-intro-with-csx.md)] + # [Isolated worker model](#tab/isolated-process) More samples for the Azure SQL input binding are available in the [GitHub repository](https://github.com/Azure/azure-functions-sql-extension/tree/main/samples/samples-outofproc).
azure-functions Functions Bindings Azure Sql Output https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-azure-sql-output.md
For information on setup and configuration details, see the [overview](./functio
[!INCLUDE [functions-bindings-csharp-intro-with-csx](../../includes/functions-bindings-csharp-intro-with-csx.md)] + # [Isolated worker model](#tab/isolated-process) More samples for the Azure SQL output binding are available in the [GitHub repository](https://github.com/Azure/azure-functions-sql-extension/tree/main/samples/samples-outofproc).
azure-functions Functions Bindings Azure Sql Trigger https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-azure-sql-trigger.md
namespace AzureSQL.ToDo
# [In-process model](#tab/in-process)
More samples for the Azure SQL trigger are available in the [GitHub repository](https://github.com/Azure/azure-functions-sql-extension/tree/main/samples/samples-csharp).
azure-functions Functions Bindings Azure Sql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-azure-sql.md
dotnet add package Microsoft.Azure.Functions.Worker.Extensions.Sql --prerelease
# [In-process model](#tab/in-process) + Functions execute in the same process as the Functions host. To learn more, see [Develop C# class library functions using Azure Functions](functions-dotnet-class-library.md). Add the extension to your project by installing this [NuGet package](https://www.nuget.org/packages/Microsoft.Azure.WebJobs.Extensions.Sql).
azure-functions Functions Bindings Cache Trigger Redislist https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-cache-trigger-redislist.md
namespace Microsoft.Azure.Functions.Worker.Extensions.Redis.Samples.RedisListTri
### [In-process model](#tab/in-process) + ```csharp using Microsoft.Extensions.Logging;
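A minimal in-process sketch of the list trigger, assuming a connection string app setting named `redisConnectionString` and a list named `listTest` (both placeholders):

```csharp
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Redis;
using Microsoft.Extensions.Logging;

public static class ListTrigger
{
    [FunctionName("ListTrigger")]
    public static void Run(
        // Polls the listTest list and fires for each new element.
        [RedisListTrigger("redisConnectionString", "listTest")] string entry,
        ILogger logger)
    {
        logger.LogInformation($"List entry: {entry}");
    }
}
```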
azure-functions Functions Bindings Cache Trigger Redispubsub https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-cache-trigger-redispubsub.md
namespace Microsoft.Azure.Functions.Worker.Extensions.Redis.Samples.RedisPubSubT
### [In-process model](#tab/in-process) + This sample listens to the channel `pubsubTest`. ```csharp
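A minimal in-process sketch of that pub/sub trigger, assuming a connection string app setting named `redisConnectionString` (a placeholder):

```csharp
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Redis;
using Microsoft.Extensions.Logging;

public static class PubSubTrigger
{
    [FunctionName("PubSubTrigger")]
    public static void Run(
        // Fires once for each message published to the pubsubTest channel.
        [RedisPubSubTrigger("redisConnectionString", "pubsubTest")] string message,
        ILogger logger)
    {
        logger.LogInformation($"Received message: {message}");
    }
}
```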
azure-functions Functions Bindings Cache Trigger Redisstream https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-cache-trigger-redisstream.md
namespace Microsoft.Azure.Functions.Worker.Extensions.Redis.Samples.RedisStreamT
### [In-process model](#tab/in-process) + ```csharp using Microsoft.Extensions.Logging;
azure-functions Functions Bindings Cache https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-cache.md
dotnet add package Microsoft.Azure.Functions.Worker.Extensions.Redis --prereleas
### [In-process model](#tab/in-process) + Functions run in the same process as the Functions host. To learn more, see [Develop C# class library functions using Azure Functions](functions-dotnet-class-library.md). Add the extension to your project by installing [this NuGet package](https://www.nuget.org/packages/Microsoft.Azure.WebJobs.Extensions.Redis).
azure-functions Functions Bindings Cosmosdb V2 Input https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-cosmosdb-v2-input.md
Unless otherwise noted, examples in this article target version 3.x of the [Azur
[!INCLUDE [functions-bindings-csharp-intro](../../includes/functions-bindings-csharp-intro.md)] + # [Isolated worker model](#tab/isolated-process) This section contains examples that require version 3.x of the Azure Cosmos DB extension and 5.x of the Azure Storage extension. If they aren't already present in your function app, add a reference to the following NuGet packages:
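For orientation, a sketch of the isolated-worker input binding follows; the database, container, and setting names are placeholders, and the property names (`Connection`, `Id`, `PartitionKey`) follow recent versions of the extension:

```csharp
using System.Net;
using Microsoft.Azure.Functions.Worker;
using Microsoft.Azure.Functions.Worker.Http;

public class ToDoItem
{
    public string id { get; set; }
    public string Description { get; set; }
}

public static class DocByIdFromRoute
{
    [Function("DocByIdFromRoute")]
    public static HttpResponseData Run(
        [HttpTrigger(AuthorizationLevel.Anonymous, "get", Route = "todoitems/{partitionKey}/{id}")] HttpRequestData req,
        // Looks up one document using the id and partition key from the route.
        [CosmosDBInput("ToDoItems", "Items",
            Connection = "CosmosDBConnection",
            Id = "{id}",
            PartitionKey = "{partitionKey}")] ToDoItem toDoItem)
    {
        // A null item means no document matched the id and partition key.
        return req.CreateResponse(toDoItem is null ? HttpStatusCode.NotFound : HttpStatusCode.OK);
    }
}
```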
azure-functions Functions Bindings Cosmosdb V2 Output https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-cosmosdb-v2-output.md
This article supports both programming models.
::: zone-end ::: zone pivot="programming-language-csharp" [!INCLUDE [functions-bindings-csharp-intro](../../includes/functions-bindings-csharp-intro.md)]+ ::: zone-end ## Example
azure-functions Functions Bindings Cosmosdb V2 Trigger https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-cosmosdb-v2-trigger.md
An isolated worker process class library compiled C# function runs in a process
# [In-process model](#tab/in-process) + An in-process class library is a compiled C# function that runs in the same process as the Functions runtime.
azure-functions Functions Bindings Cosmosdb V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-cosmosdb-v2.md
Functions execute in an isolated C# worker process. To learn more, see [Guide fo
# [In-process model](#tab/in-process) + Functions execute in the same process as the Functions host. To learn more, see [Develop C# class library functions using Azure Functions](functions-dotnet-class-library.md). In a variation of this model, Functions can be run using [C# scripting], which is supported primarily for C# portal editing. To update existing binding extensions for C# script apps running in the portal without having to republish your function app, see [Update your extensions].
azure-functions Functions Bindings Event Grid Output https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-event-grid-output.md
The following example shows how the custom type is used in both the trigger and
# [In-process model](#tab/in-process) + The following example shows a C# function that publishes a `CloudEvent` using version 3.x of the extension: ```cs
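A sketch of that pattern, assuming app settings named `EventGridEndpoint` and `EventGridKey` for the custom topic (placeholder names):

```csharp
using System.Threading.Tasks;
using Azure.Messaging;
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;

public static class CloudEventPublisher
{
    [FunctionName("CloudEventPublisher")]
    public static async Task<IActionResult> Run(
        [HttpTrigger(AuthorizationLevel.Anonymous, "post")] HttpRequest req,
        // TopicEndpointUri and TopicKeySetting name the app settings that hold
        // the custom topic endpoint and access key.
        [EventGrid(TopicEndpointUri = "EventGridEndpoint", TopicKeySetting = "EventGridKey")] IAsyncCollector<CloudEvent> eventCollector)
    {
        var body = await new System.IO.StreamReader(req.Body).ReadToEndAsync();
        // Each CloudEvent added to the collector is published to the topic.
        await eventCollector.AddAsync(new CloudEvent("IncomingRequest", "IncomingRequest", body));
        return new OkResult();
    }
}
```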
azure-functions Functions Bindings Event Grid Trigger https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-event-grid-trigger.md
The type of the input parameter used with an Event Grid trigger depends on these
[!INCLUDE [functions-bindings-csharp-intro](../../includes/functions-bindings-csharp-intro.md)] + # [Isolated worker model](#tab/isolated-process) When running your C# function in an isolated worker process, you need to define a custom type for event properties. The following example defines a `MyEventType` class.
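A sketch of such a class follows; the property set mirrors the Event Grid event schema and can be trimmed to the fields your function actually reads:

```csharp
using System;
using System.Collections.Generic;
using Microsoft.Azure.Functions.Worker;

// A custom type mirroring the Event Grid event schema.
public class MyEventType
{
    public string Id { get; set; }
    public string Topic { get; set; }
    public string Subject { get; set; }
    public string EventType { get; set; }
    public DateTime EventTime { get; set; }
    public IDictionary<string, object> Data { get; set; }
}

public static class EventGridFunction
{
    [Function("EventGridFunction")]
    public static void Run([EventGridTrigger] MyEventType input, FunctionContext context)
    {
        // The trigger payload is deserialized into MyEventType.
    }
}
```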
azure-functions Functions Bindings Event Grid https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-event-grid.md
Functions execute in an isolated C# worker process. To learn more, see [Guide fo
# [In-process model](#tab/in-process) + Functions execute in the same process as the Functions host. To learn more, see [Develop C# class library functions using Azure Functions](functions-dotnet-class-library.md). In a variation of this model, Functions can be run using [C# scripting], which is supported primarily for C# portal editing. To update existing binding extensions for C# script apps running in the portal without having to republish your function app, see [Update your extensions].
azure-functions Functions Bindings Event Hubs Output https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-event-hubs-output.md
The following example shows a [C# function](dotnet-isolated-process-guide.md) th
# [In-process model](#tab/in-process) + The following example shows a [C# function](functions-dotnet-class-library.md) that writes a message to an event hub, using the method return value as the output: ```csharp
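For reference, a minimal in-process sketch of that return-value pattern; the event hub and connection setting names are placeholders:

```csharp
using System;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public static class EventHubOutput
{
    [FunctionName("EventHubOutput")]
    [return: EventHub("outputEventHubMessage", Connection = "EventHubConnectionAppSetting")]
    public static string Run([TimerTrigger("0 */5 * * * *")] TimerInfo myTimer, ILogger log)
    {
        string message = $"Message created at: {DateTime.Now}";
        log.LogInformation(message);
        // The returned string is sent to the event hub as a single event.
        return message;
    }
}
```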
azure-functions Functions Bindings Http Webhook Output https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-http-webhook-output.md
A return value attribute isn't required. To learn more, see [Usage](#usage).
# [In-process model](#tab/in-process) + A return value attribute isn't required. To learn more, see [Usage](#usage).
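To illustrate, a minimal in-process function can simply return an `IActionResult` with no output binding attribute:

```csharp
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;

public static class HttpReturnExample
{
    [FunctionName("HttpReturnExample")]
    public static IActionResult Run(
        [HttpTrigger(AuthorizationLevel.Function, "get")] HttpRequest req)
    {
        // No output binding attribute is needed: the returned IActionResult
        // becomes the HTTP response.
        return new OkObjectResult("Hello from Azure Functions");
    }
}
```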
azure-functions Functions Bindings Http Webhook Trigger https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-http-webhook-trigger.md
This article supports both programming models.
[!INCLUDE [functions-bindings-csharp-intro](../../includes/functions-bindings-csharp-intro.md)] + The code in this article defaults to .NET Core syntax, used in Functions version 2.x and higher. For information on the 1.x syntax, see the [1.x functions templates](https://github.com/Azure/azure-functions-templates/tree/v1.x/Functions.Templates/Templates). # [Isolated worker model](#tab/isolated-process)
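For reference, the standard isolated-worker HTTP trigger shape looks like this sketch:

```csharp
using System.Net;
using Microsoft.Azure.Functions.Worker;
using Microsoft.Azure.Functions.Worker.Http;

public static class HttpExample
{
    [Function("HttpExample")]
    public static HttpResponseData Run(
        [HttpTrigger(AuthorizationLevel.Function, "get", "post")] HttpRequestData req)
    {
        // HttpRequestData and HttpResponseData are the isolated-worker HTTP types.
        var response = req.CreateResponse(HttpStatusCode.OK);
        response.WriteString("Welcome to Azure Functions!");
        return response;
    }
}
```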
azure-functions Functions Bindings Http Webhook https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-http-webhook.md
Functions execute in an isolated C# worker process. To learn more, see [Guide fo
# [In-process model](#tab/in-process) + Functions execute in the same process as the Functions host. To learn more, see [Develop C# class library functions using Azure Functions](functions-dotnet-class-library.md).
azure-functions Functions Bindings Kafka Output https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-kafka-output.md
An [isolated worker process class library](dotnet-isolated-process-guide.md) com
# [In-process model](#tab/in-process) + An [in-process class library](functions-dotnet-class-library.md) is a compiled C# function that runs in the same process as the Functions runtime.
azure-functions Functions Bindings Kafka Trigger https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-kafka-trigger.md
An [isolated worker process class library](dotnet-isolated-process-guide.md) com
# [In-process model](#tab/in-process) + An [in-process class library](functions-dotnet-class-library.md) is a compiled C# function that runs in the same process as the Functions runtime.
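A minimal in-process sketch of the trigger, assuming an app setting named `BrokerList` and placeholder topic and consumer group values:

```csharp
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Kafka;
using Microsoft.Extensions.Logging;

public static class KafkaTriggerExample
{
    [FunctionName("KafkaTriggerExample")]
    public static void Run(
        // BrokerList is an app setting holding the broker addresses; the topic
        // and consumer group names are placeholders.
        [KafkaTrigger("BrokerList", "topic", ConsumerGroup = "$Default")] KafkaEventData<string> kafkaEvent,
        ILogger log)
    {
        log.LogInformation($"Kafka message: {kafkaEvent.Value}");
    }
}
```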
azure-functions Functions Bindings Kafka https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-kafka.md
Add the extension to your project by installing this [NuGet package](https://www
# [In-process model](#tab/in-process) + Functions execute in the same process as the Functions host. To learn more, see [Develop C# class library functions using Azure Functions](functions-dotnet-class-library.md). Add the extension to your project by installing this [NuGet package](https://www.nuget.org/packages/Microsoft.Azure.WebJobs.Extensions.Kafka).
azure-functions Functions Bindings Rabbitmq Output https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-rabbitmq-output.md
For information on setup and configuration details, see the [overview](functions
[!INCLUDE [functions-bindings-csharp-intro-with-csx](../../includes/functions-bindings-csharp-intro-with-csx.md)] + # [Isolated worker model](#tab/isolated-process) :::code language="csharp" source="~/azure-functions-dotnet-worker/samples/Extensions/RabbitMQ/RabbitMQFunction.cs" range="12-23":::
azure-functions Functions Bindings Rabbitmq Trigger https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-rabbitmq-trigger.md
For information on setup and configuration details, see the [overview](functions
[!INCLUDE [functions-bindings-csharp-intro-with-csx](../../includes/functions-bindings-csharp-intro-with-csx.md)] + # [Isolated worker model](#tab/isolated-process) :::code language="csharp" source="~/azure-functions-dotnet-worker/samples/Extensions/RabbitMQ/RabbitMQFunction.cs" range="12-23" :::
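For orientation, a hedged isolated-worker sketch of the trigger; the queue name and the `rabbitMQConnectionAppSetting` setting are placeholders, and the `ConnectionStringSetting` property name is an assumption based on the extension's conventions:

```csharp
using Microsoft.Azure.Functions.Worker;
using Microsoft.Extensions.Logging;

public static class RabbitMQFunction
{
    [Function("RabbitMQFunction")]
    public static void Run(
        // Fires for each message on the named queue; the connection string comes
        // from the app setting named here.
        [RabbitMQTrigger("queue", ConnectionStringSetting = "rabbitMQConnectionAppSetting")] string message,
        FunctionContext context)
    {
        context.GetLogger("RabbitMQFunction").LogInformation("Received: {message}", message);
    }
}
```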
azure-functions Functions Bindings Rabbitmq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-rabbitmq.md
Add the extension to your project by installing this [NuGet package](https://www
# [In-process model](#tab/in-process) + Functions execute in the same process as the Functions host. To learn more, see [Develop C# class library functions using Azure Functions](functions-dotnet-class-library.md). Add the extension to your project by installing this [NuGet package](https://www.nuget.org/packages/Microsoft.Azure.WebJobs.Extensions.RabbitMQ).
azure-functions Functions Bindings Return Value https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-return-value.md
See [Output bindings in the .NET worker guide](./dotnet-isolated-process-guide.m
# [In-process model](#tab/in-process) In a C# class library, apply the output binding attribute to the method return value. In C# and C# script, alternative ways to send data to an output binding are `out` parameters and [collector objects](functions-reference-csharp.md#writing-multiple-output-values).
azure-functions Functions Bindings Sendgrid https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-sendgrid.md
Functions execute in an isolated C# worker process. To learn more, see [Guide fo
# [In-process model](#tab/in-process) + Functions execute in the same process as the Functions host. To learn more, see [Develop C# class library functions using Azure Functions](functions-dotnet-class-library.md).
You can add the extension to your project by explicitly installing the [NuGet pa
::: zone pivot="programming-language-csharp" [!INCLUDE [functions-bindings-csharp-intro-with-csx](../../includes/functions-bindings-csharp-intro-with-csx.md)] + # [Isolated worker model](#tab/isolated-process) We don't currently have an example for using the SendGrid binding in a function app running in an isolated worker process.
azure-functions Functions Bindings Service Bus Output https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-service-bus-output.md
This article supports both programming models.
[!INCLUDE [functions-bindings-csharp-intro](../../includes/functions-bindings-csharp-intro.md)] + # [Isolated worker model](#tab/isolated-process) This code defines and initializes the `ILogger`:
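A sketch of an isolated-worker output binding alongside the injected `ILogger`; the queue and connection setting names are placeholders:

```csharp
using Microsoft.Azure.Functions.Worker;
using Microsoft.Azure.Functions.Worker.Http;
using Microsoft.Extensions.Logging;

public class ServiceBusOutputFunction
{
    private readonly ILogger<ServiceBusOutputFunction> _logger;

    // The ILogger is defined here and initialized through constructor injection.
    public ServiceBusOutputFunction(ILogger<ServiceBusOutputFunction> logger)
    {
        _logger = logger;
    }

    [Function("ServiceBusOutput")]
    [ServiceBusOutput("outputQueue", Connection = "ServiceBusConnection")]
    public string Run([HttpTrigger(AuthorizationLevel.Anonymous, "post")] HttpRequestData req)
    {
        _logger.LogInformation("Sending a message to outputQueue");
        // The returned string becomes the Service Bus message body.
        return "Hello from an HTTP-triggered function";
    }
}
```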
azure-functions Functions Bindings Service Bus Trigger https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-service-bus-trigger.md
This article supports both programming models.
[!INCLUDE [functions-bindings-csharp-intro](../../includes/functions-bindings-csharp-intro.md)] + # [Isolated worker model](#tab/isolated-process) This code defines and initializes the `ILogger`:
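A sketch showing the trigger together with the constructor-injected `ILogger`; queue and connection setting names are placeholders:

```csharp
using Microsoft.Azure.Functions.Worker;
using Microsoft.Extensions.Logging;

public class ServiceBusTriggerFunction
{
    private readonly ILogger<ServiceBusTriggerFunction> _logger;

    // The ILogger is defined here and initialized through constructor injection.
    public ServiceBusTriggerFunction(ILogger<ServiceBusTriggerFunction> logger)
    {
        _logger = logger;
    }

    [Function("ServiceBusTrigger")]
    public void Run([ServiceBusTrigger("myqueue", Connection = "ServiceBusConnection")] string message)
    {
        _logger.LogInformation("Message body: {message}", message);
    }
}
```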
azure-functions Functions Bindings Service Bus https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-service-bus.md
Add the extension to your project by installing this [NuGet package](https://www.nu
# [In-process model](#tab/in-process) + _This section describes using a [class library](./functions-dotnet-class-library.md). For [C# scripting], you would need to instead [install the extension bundle][Update your extensions], version 2.x or later._ Functions execute in the same process as the Functions host. To learn more, see [Develop C# class library functions using Azure Functions](functions-dotnet-class-library.md).
azure-functions Functions Bindings Signalr Service Input https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-signalr-service-input.md
For information on setup and configuration details, see the [overview](functions
[!INCLUDE [functions-bindings-csharp-intro-with-csx](../../includes/functions-bindings-csharp-intro-with-csx.md)] + # [Isolated worker model](#tab/isolated-process) The following example shows a [C# function](dotnet-isolated-process-guide.md) that acquires SignalR connection information using the input binding and returns it over HTTP.
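A sketch of that negotiate pattern, assuming a hub named `serverless` (a placeholder):

```csharp
using Microsoft.Azure.Functions.Worker;
using Microsoft.Azure.Functions.Worker.Http;

public static class Negotiate
{
    [Function("Negotiate")]
    public static string Run(
        [HttpTrigger(AuthorizationLevel.Anonymous, "post")] HttpRequestData req,
        // The input binding generates a service URL and access token for the hub.
        [SignalRConnectionInfoInput(HubName = "serverless")] string connectionInfo)
    {
        // Returning the serialized connection info completes the negotiate handshake.
        return connectionInfo;
    }
}
```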
azure-functions Functions Bindings Signalr Service Output https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-signalr-service-output.md
For information on setup and configuration details, see the [overview](functions
[!INCLUDE [functions-bindings-csharp-intro-with-csx](../../includes/functions-bindings-csharp-intro-with-csx.md)] + # [Isolated worker model](#tab/isolated-process) The following example shows a function that sends a message using the output binding to all connected clients. The *newMessage* is the name of the method to be invoked on each client.
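A sketch of that broadcast, again assuming a hub named `serverless`; `SignalRMessageAction` carries the client method name and its arguments:

```csharp
using Microsoft.Azure.Functions.Worker;
using Microsoft.Azure.Functions.Worker.Http;

public static class BroadcastMessage
{
    [Function("BroadcastMessage")]
    [SignalROutput(HubName = "serverless")]
    public static SignalRMessageAction Run(
        [HttpTrigger(AuthorizationLevel.Anonymous, "post")] HttpRequestData req)
    {
        // With no user or group set, the message goes to all connected clients.
        // "newMessage" is the method name each client implements.
        return new SignalRMessageAction("newMessage")
        {
            Arguments = new object[] { "Hello, everyone!" }
        };
    }
}
```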
azure-functions Functions Bindings Signalr Service Trigger https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-signalr-service-trigger.md
For information on setup and configuration details, see the [overview](functions
[!INCLUDE [functions-bindings-csharp-intro-with-csx](../../includes/functions-bindings-csharp-intro-with-csx.md)] + # [Isolated worker model](#tab/isolated-process) The following sample shows a C# function that receives a message event from clients and logs the message content.
azure-functions Functions Bindings Signalr Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-signalr-service.md
Add the extension to your project by installing this [NuGet package](https://www
# [In-process model](#tab/in-process) + Functions execute in the same process as the Functions host. To learn more, see [Develop C# class library functions using Azure Functions](functions-dotnet-class-library.md). Add the extension to your project by installing this [NuGet package].
azure-functions Functions Bindings Storage Blob Input https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-storage-blob-input.md
For information on setup and configuration details, see the [overview](./functio
[!INCLUDE [functions-bindings-csharp-intro](../../includes/functions-bindings-csharp-intro.md)] + # [Isolated worker model](#tab/isolated-process) The following example is a [C# function](dotnet-isolated-process-guide.md) that runs in an isolated worker process and uses a blob trigger with both blob input and blob output bindings. The function is triggered by the creation of a blob in the *test-samples-trigger* container. It reads a text file from the *test-samples-input* container and creates a new text file in an output container based on the name of the triggered file.
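For orientation, a condensed sketch of that trigger, input, and output combination:

```csharp
using Microsoft.Azure.Functions.Worker;
using Microsoft.Extensions.Logging;

public static class BlobFunction
{
    [Function("BlobFunction")]
    [BlobOutput("test-samples-output/{name}-output.txt")]
    public static string Run(
        // Fires when a blob lands in test-samples-trigger; {name} flows to the output path.
        [BlobTrigger("test-samples-trigger/{name}")] string myTriggerItem,
        [BlobInput("test-samples-input/sample1.txt")] string myBlob,
        FunctionContext context)
    {
        var logger = context.GetLogger("BlobFunction");
        logger.LogInformation("Triggered item: {item}", myTriggerItem);
        // The returned string is written to the output container.
        return myBlob;
    }
}
```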
azure-functions Functions Bindings Storage Blob Output https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-storage-blob-output.md
For information on setup and configuration details, see the [overview](./functio
[!INCLUDE [functions-bindings-csharp-intro](../../includes/functions-bindings-csharp-intro.md)] + # [Isolated worker model](#tab/isolated-process) The following example is a [C# function](dotnet-isolated-process-guide.md) that runs in an isolated worker process and uses a blob trigger with both blob input and blob output bindings. The function is triggered by the creation of a blob in the *test-samples-trigger* container. It reads a text file from the *test-samples-input* container and creates a new text file in an output container based on the name of the triggered file.
azure-functions Functions Bindings Storage Blob Trigger https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-storage-blob-trigger.md
For information on setup and configuration details, see the [overview](./functio
[!INCLUDE [functions-bindings-csharp-intro](../../includes/functions-bindings-csharp-intro.md)] + # [Isolated worker model](#tab/isolated-process) The following example is a [C# function](dotnet-isolated-process-guide.md) that runs in an isolated worker process and uses a blob trigger with both blob input and blob output bindings. The function is triggered by the creation of a blob in the *test-samples-trigger* container. It reads a text file from the *test-samples-input* container and creates a new text file in an output container based on the name of the triggered file.
azure-functions Functions Bindings Storage Blob https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-storage-blob.md
Functions execute in an isolated C# worker process. To learn more, see [Guide fo
# [In-process model](#tab/in-process) + Functions execute in the same process as the Functions host. To learn more, see [Develop C# class library functions using Azure Functions](functions-dotnet-class-library.md). In a variation of this model, Functions can be run using [C# scripting], which is supported primarily for C# portal editing. To update existing binding extensions for C# script apps running in the portal without having to republish your function app, see [Update your extensions].
azure-functions Functions Bindings Storage Queue Output https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-storage-queue-output.md
This article supports both programming models.
[!INCLUDE [functions-bindings-csharp-intro](../../includes/functions-bindings-csharp-intro.md)] + # [Isolated worker model](#tab/isolated-process) :::code language="csharp" source="~/azure-functions-dotnet-worker/samples/Extensions/Queue/QueueFunction.cs" id="docsnippet_queue_output_binding" :::
azure-functions Functions Bindings Storage Queue Trigger https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-storage-queue-trigger.md
Use the queue trigger to start a function when a new item is received on a queue
[!INCLUDE [functions-bindings-csharp-intro](../../includes/functions-bindings-csharp-intro.md)] + # [Isolated worker model](#tab/isolated-process) The following example shows a [C# function](dotnet-isolated-process-guide.md) that polls the `input-queue` queue and writes several messages to an output queue each time a queue item is processed.
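A condensed sketch of that pattern; each element of the returned array becomes a separate output message:

```csharp
using Microsoft.Azure.Functions.Worker;
using Microsoft.Extensions.Logging;

public static class QueueFunction
{
    [Function("QueueFunction")]
    [QueueOutput("output-queue")]
    public static string[] Run(
        [QueueTrigger("input-queue")] string message,
        FunctionContext context)
    {
        var logger = context.GetLogger("QueueFunction");
        logger.LogInformation("Processing: {message}", message);
        // Each element of the returned array becomes its own message on output-queue.
        return new[] { $"first: {message}", $"second: {message}" };
    }
}
```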
azure-functions Functions Bindings Storage Queue https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-storage-queue.md
Functions execute in an isolated C# worker process. To learn more, see [Guide fo
# [In-process model](#tab/in-process) + Functions execute in the same process as the Functions host. To learn more, see [Develop C# class library functions using Azure Functions](functions-dotnet-class-library.md). In a variation of this model, Functions can be run using [C# scripting], which is supported primarily for C# portal editing. To update existing binding extensions for C# script apps running in the portal without having to republish your function app, see [Update your extensions].
azure-functions Functions Bindings Storage Table Input https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-storage-table-input.md
An [isolated worker process class library](dotnet-isolated-process-guide.md) com
# [In-process model](#tab/in-process) + An [in-process class library](functions-dotnet-class-library.md) is a compiled C# function that runs in the same process as the Functions runtime.
azure-functions Functions Bindings Storage Table Output https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-storage-table-output.md
For information on setup and configuration details, see the [overview](./functio
[!INCLUDE [functions-bindings-csharp-intro](../../includes/functions-bindings-csharp-intro.md)] + # [Isolated worker model](#tab/isolated-process) The following `MyTableData` class represents a row of data in the table:
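A sketch of such a class and an output binding that writes one row per invocation; the table, queue, and connection setting names are placeholders:

```csharp
using System;
using Microsoft.Azure.Functions.Worker;

// Each MyTableData instance represents one table row; PartitionKey and RowKey
// are required by Azure Table storage.
public class MyTableData
{
    public string PartitionKey { get; set; }
    public string RowKey { get; set; }
    public string Text { get; set; }
}

public static class TableOutputFunction
{
    [Function("TableOutputFunction")]
    [TableOutput("OutputTable", Connection = "AzureWebJobsStorage")]
    public static MyTableData Run([QueueTrigger("table-items")] string queueItem)
    {
        // The returned row is written to OutputTable when the function completes.
        return new MyTableData
        {
            PartitionKey = "message",
            RowKey = Guid.NewGuid().ToString(),
            Text = queueItem
        };
    }
}
```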
azure-functions Functions Bindings Storage Table https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-storage-table.md
Functions execute in an isolated C# worker process. To learn more, see [Guide fo
# [In-process model](#tab/in-process) + Functions execute in the same process as the Functions host. To learn more, see [Develop C# class library functions using Azure Functions](functions-dotnet-class-library.md). In a variation of this model, Functions can be run using [C# scripting], which is supported primarily for C# portal editing. To update existing binding extensions for C# script apps running in the portal without having to republish your function app, see [Update your extensions].
azure-functions Functions Bindings Timer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-timer.md
This example shows a C# function that executes each time the minutes have a valu
[!INCLUDE [functions-bindings-csharp-intro](../../includes/functions-bindings-csharp-intro.md)] + # [Isolated worker model](#tab/isolated-process) :::code language="csharp" source="~/azure-functions-dotnet-worker/samples/Extensions/Timer/TimerFunction.cs" range="11-17":::
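For reference, a minimal isolated-worker timer sketch, assuming a recent version of the Timer extension that ships the `TimerInfo` type:

```csharp
using Microsoft.Azure.Functions.Worker;
using Microsoft.Extensions.Logging;

public static class TimerFunction
{
    [Function("TimerFunction")]
    // Six-field NCRONTAB expression: fires at second 0 of every fifth minute.
    public static void Run([TimerTrigger("0 */5 * * * *")] TimerInfo timerInfo, FunctionContext context)
    {
        var logger = context.GetLogger("TimerFunction");
        logger.LogInformation("Next occurrence: {next}", timerInfo.ScheduleStatus?.Next);
    }
}
```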
azure-functions Functions Bindings Twilio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-twilio.md
Functions execute in an isolated C# worker process. To learn more, see [Guide fo
# [In-process model](#tab/in-process) + Functions execute in the same process as the Functions host. To learn more, see [Develop C# class library functions using Azure Functions](functions-dotnet-class-library.md).
Unless otherwise noted, these examples are specific to version 2.x and later ver
::: zone pivot="programming-language-csharp" [!INCLUDE [functions-bindings-csharp-intro-with-csx](../../includes/functions-bindings-csharp-intro-with-csx.md)] + # [Isolated worker model](#tab/isolated-process) The Twilio binding isn't currently supported for a function app running in an isolated worker process.
azure-functions Functions Bindings Warmup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-warmup.md
The following considerations apply when using a warmup trigger:
[!INCLUDE [functions-bindings-csharp-intro-with-csx](../../includes/functions-bindings-csharp-intro-with-csx.md)] + # [Isolated worker model](#tab/isolated-process) The following example shows a [C# function](dotnet-isolated-process-guide.md) that runs on each new instance when added to your app.
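A minimal isolated-worker warmup sketch:

```csharp
using Microsoft.Azure.Functions.Worker;
using Microsoft.Extensions.Logging;

public static class WarmupFunction
{
    [Function("Warmup")]
    public static void Run([WarmupTrigger] object warmupContext, FunctionContext context)
    {
        // Runs once on each new instance before it serves production traffic:
        // a good place to pre-load dependencies and warm caches.
        context.GetLogger("Warmup").LogInformation("Instance is warming up.");
    }
}
```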
azure-functions Functions Dotnet Class Library https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-dotnet-class-library.md
Last updated 10/12/2022
<!-- When updating this article, make corresponding changes to any duplicate content in functions-reference-csharp.md -->
-This article is an introduction to developing Azure Functions by using C# in .NET class libraries.
+
+This article is an introduction to developing Azure Functions by using C# in .NET class libraries. These class libraries are used to run _in-process with the Functions runtime_. Your .NET functions can alternatively run _isolated from the Functions runtime_, which offers several advantages. To learn more, see [the isolated worker model](dotnet-isolated-process-guide.md). For a comprehensive comparison between these two models, see [Differences between the in-process model and the isolated worker model](dotnet-isolated-in-process-differences.md).
>[!IMPORTANT] >This article supports .NET class library functions that run in-process with the runtime. Your C# functions can also run out-of-process and isolated from the Functions runtime. The isolated worker process model is the only way to run non-LTS versions of .NET and .NET Framework apps in current versions of the Functions runtime. To learn more, see [.NET isolated worker process functions](dotnet-isolated-process-guide.md).
azure-functions Functions Versions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-versions.md
In Visual Studio, you select the runtime version when you create a project. Azur
# [Version 4.x](#tab/v4) ```xml
-<TargetFramework>net6.0</TargetFramework>
+<TargetFramework>net8.0</TargetFramework>
<AzureFunctionsVersion>v4</AzureFunctionsVersion> ```
-You can also choose `net6.0`, `net7.0`, `net8.0`, or `net48` as the target framework if you are using [.NET isolated worker process functions](dotnet-isolated-process-guide.md).
-
-> [!NOTE]
-> Azure Functions 4.x requires the `Microsoft.NET.Sdk.Functions` extension be at least `4.0.0`.
+You can choose `net8.0`, `net7.0`, `net6.0`, or `net48` as the target framework if you are using the [isolated worker model](dotnet-isolated-process-guide.md). If you are using the [in-process model](./functions-dotnet-class-library.md), you can only choose `net6.0`, and you must include the `Microsoft.NET.Sdk.Functions` extension set to at least `4.0.0`.
# [Version 1.x](#tab/v1)
azure-functions Migrate Cosmos Db Version 3 Version 4 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/migrate-cosmos-db-version-3-version-4.md
Update your `.csproj` project file to use the latest extension version for your
### [In-process model](#tab/in-process) + ```xml <Project Sdk="Microsoft.NET.Sdk"> <PropertyGroup>
azure-functions Migrate Dotnet To Isolated Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/migrate-dotnet-to-isolated-model.md
Last updated 01/17/2024
# Migrate .NET apps from the in-process model to the isolated worker model
+> [!IMPORTANT]
+> [Support will end for the in-process model on November 10, 2026](https://aka.ms/azure-functions-retirements/in-process-model). We highly recommend that you migrate your apps to the isolated worker model by following the instructions in this article.
+ This article walks you through the process of safely migrating your .NET function app from the [in-process model](./functions-dotnet-class-library.md) to the [isolated worker model][isolated-guide]. To learn about the high-level differences between these models, see the [execution mode comparison](./dotnet-isolated-in-process-differences.md). This guide assumes that your app is running on version 4.x of the Functions runtime. If not, you should instead follow the guides for upgrading your host version:
azure-functions Migrate Service Bus Version 4 Version 5 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/migrate-service-bus-version-4-version-5.md
Update your `.csproj` project file to use the latest extension version for your
### [In-process model](#tab/in-process) + ```xml <Project Sdk="Microsoft.NET.Sdk"> <PropertyGroup>
azure-functions Migrate Version 1 Version 4 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/migrate-version-1-version-4.md
On version 1.x of the Functions runtime, your C# function app targets .NET Frame
[!INCLUDE [functions-dotnet-migrate-v4-versions](../../includes/functions-dotnet-migrate-v4-versions.md)] > [!TIP]
-> **Unless your app depends on a library or API only available to .NET Framework, we recommend upgrading to .NET 8 on the isolated worker model.** Many apps on version 1.x target .NET Framework only because that is what was available when they were created. Additional capabilities are available to more recent versions of .NET, and if your app is not forced to stay on .NET Framework due to a dependency, you should target a more recent version. .NET 8 is the fully released version with the longest support window from .NET.
+> **Unless your app depends on a library or API only available to .NET Framework, we recommend updating to .NET 8 on the isolated worker model.** Many apps on version 1.x target .NET Framework only because that is what was available when they were created. Additional capabilities are available to more recent versions of .NET, and if your app is not forced to stay on .NET Framework due to a dependency, you should target a more recent version. .NET 8 is the fully released version with the longest support window from .NET.
>
-> Migrating to the isolated worker model will require additional code changes as part of this migration, but it will give your app [additional benefits](./dotnet-isolated-in-process-differences.md), including the ability to more easily target future versions of .NET. If you are moving to an LTS or STS version of .NET using the isolated worker model, the [.NET Upgrade Assistant] can also handle many of the necessary code changes for you.
+> Although you can choose to instead use the in-process model, this is not recommended if it can be avoided. [Support will end for the in-process model on November 10, 2026](https://aka.ms/azure-functions-retirements/in-process-model), so you'll need to move to the isolated worker model before then. Doing so while migrating to version 4.x will decrease the total effort required, and the isolated worker model will give your app [additional benefits](./dotnet-isolated-in-process-differences.md), including the ability to more easily target future versions of .NET. If you are moving to the isolated worker model, the [.NET Upgrade Assistant] can also handle many of the necessary code changes for you.
This guide doesn't present specific examples for .NET 7 or .NET 6 on the isolated worker model. If you need to target these versions, you can adapt the .NET 8 isolated worker model examples.
The following example is a `.csproj` project file that runs on version 1.x:
Use one of the following procedures to update this XML file to run in Functions version 4.x:
-# [.NET 8 (isolated)](#tab/net8)
+# [.NET 8](#tab/net8)
[!INCLUDE [functions-dotnet-migrate-project-v4-isolated-net8](../../includes/functions-dotnet-migrate-project-v4-isolated-net8.md)]
-# [.NET 6 (in-process)](#tab/net6-in-proc)
-- # [.NET Framework 4.8](#tab/netframework48) [!INCLUDE [functions-dotnet-migrate-project-v4-isolated-net-framework](../../includes/functions-dotnet-migrate-project-v4-isolated-net-framework.md)]
+# [.NET 6 (in-process model)](#tab/net6-in-proc)
+
Use one of the following procedures to update this XML file to run in Functions
Based on the model you are migrating to, you might need to update or change the packages your application references. When you adopt the target packages, you then need to update the namespace of using statements and some types you reference. You can see the effect of these namespace changes on `using` statements in the [HTTP trigger template examples](#http-trigger-template) later in this article.
-# [.NET 8 (isolated)](#tab/net8)
+# [.NET 8](#tab/net8)
[!INCLUDE [functions-dotnet-migrate-packages-v4-isolated](../../includes/functions-dotnet-migrate-packages-v4-isolated.md)]
-# [.NET 6 (in-process)](#tab/net6-in-proc)
-- # [.NET Framework 4.8](#tab/netframework48) [!INCLUDE [functions-dotnet-migrate-packages-v4-isolated](../../includes/functions-dotnet-migrate-packages-v4-isolated.md)]
+# [.NET 6 (in-process model)](#tab/net6-in-proc)
++ The [Notification Hubs](./functions-bindings-notification-hubs.md) and [Mobile Apps](./functions-bindings-mobile-apps.md) bindings are supported only in version 1.x of the runtime. When upgrading to version 4.x of the runtime, you need to remove these bindings in favor of working with these services directly using their SDKs.
The [Notification Hubs](./functions-bindings-notification-hubs.md) and [Mobile A
In most cases, migrating requires you to add the following program.cs file to your project:
-# [.NET 8 (isolated)](#tab/net8)
+# [.NET 8](#tab/net8)
```csharp using Microsoft.Azure.Functions.Worker;
This example includes [ASP.NET Core integration] to improve performance and prov
[!INCLUDE [functions-dotnet-migrate-isolated-program-cs](../../includes/functions-dotnet-migrate-isolated-program-cs.md)]
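For reference, a minimal isolated-worker `Program.cs` using the ASP.NET Core integration looks something like this sketch (package versions and builder options vary):

```csharp
using Microsoft.Azure.Functions.Worker;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Hosting;

var host = new HostBuilder()
    // Enables the ASP.NET Core integration, so HTTP triggers can use
    // HttpRequest and IActionResult instead of HttpRequestData/HttpResponseData.
    .ConfigureFunctionsWebApplication()
    .ConfigureServices(services =>
    {
        services.AddApplicationInsightsTelemetryWorkerService();
        services.ConfigureFunctionsApplicationInsights();
    })
    .Build();

host.Run();
```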
-# [.NET 6 (in-process)](#tab/net6-in-proc)
-
-A program.cs file isn't required when running in-process.
- # [.NET Framework 4.8](#tab/netframework48) ```csharp
namespace Company.FunctionApp
[!INCLUDE [functions-dotnet-migrate-isolated-program-cs](../../includes/functions-dotnet-migrate-isolated-program-cs.md)]
+# [.NET 6 (in-process model)](#tab/net6-in-proc)
+
+A program.cs file isn't required when running in-process.
+ ### host.json file
Settings in the host.json file apply at the function app level, both locally and
To run on version 4.x, you must add `"version": "2.0"` to the host.json file. You should also consider adding `logging` to your configuration, as in the following examples:
-# [.NET 8 (isolated)](#tab/net8)
+# [.NET 8](#tab/net8)
:::code language="json" source="~/functions-quickstart-templates//Functions.Templates/ProjectTemplate_v4.x/CSharp-Isolated/host.json"::: The `host.json` file only controls logging from the Functions host runtime, and in the isolated worker model, some of these logs come from your application directly, giving you more control. See [Managing log levels in the isolated worker model](./dotnet-isolated-process-guide.md#managing-log-levels) for details on how to filter these logs.
-# [.NET 6 (in-process)](#tab/net6-in-proc)
-- # [.NET Framework 4.8](#tab/netframework48) :::code language="json" source="~/functions-quickstart-templates/Functions.Templates/ProjectTemplate_v4.x/CSharp-Isolated/host.json"::: The `host.json` file only controls logging from the Functions host runtime, and in the isolated worker model, some of these logs come from your application directly, giving you more control. See [Managing log levels in the isolated worker model](./dotnet-isolated-process-guide.md#managing-log-levels) for details on how to filter these logs.
+# [.NET 6 (in-process model)](#tab/net6-in-proc)
++ ### local.settings.json file
The local.settings.json file is only used when running locally. For information,
When you migrate to version 4.x, make sure that your local.settings.json file has at least the following elements:
-# [.NET 8 (isolated)](#tab/net8)
+# [.NET 8](#tab/net8)
:::code language="json" source="~/functions-quickstart-templates/Functions.Templates/ProjectTemplate_v4.x/CSharp-Isolated/local.settings.json"::: > [!NOTE] > When migrating from running in-process to running in an isolated worker process, you need to change the `FUNCTIONS_WORKER_RUNTIME` value to "dotnet-isolated".
-# [.NET 6 (in-process)](#tab/net6-in-proc)
-- # [.NET Framework 4.8](#tab/netframework48) :::code language="json" source="~/functions-quickstart-templates/Functions.Templates/ProjectTemplate_v4.x/CSharp-Isolated/local.settings.json":::
When you migrate to version 4.x, make sure that your local.settings.json file ha
> [!NOTE] > When migrating from running in-process to running in an isolated worker process, you need to change the `FUNCTIONS_WORKER_RUNTIME` value to "dotnet-isolated".
+# [.NET 6 (in-process model)](#tab/net6-in-proc)
++ ### Class name changes Some key classes changed names between version 1.x and version 4.x. These changes are a result either of changes in .NET APIs or in differences between in-process and isolated worker process. The following table indicates key .NET classes used by Functions that could change when migrating:
-# [.NET 8 (isolated)](#tab/net8)
+# [.NET 8](#tab/net8)
| Version 1.x | .NET 8 | | | |
Some key classes changed names between version 1.x and version 4.x. These change
| `HttpRequestMessage` | `HttpRequestData`, `HttpRequest` (using [ASP.NET Core integration])| | `HttpResponseMessage` | `HttpResponseData`, `IActionResult` (using [ASP.NET Core integration])|
-# [.NET 6 (in-process)](#tab/net6-in-proc)
-
-| Version 1.x | .NET 6 (in-process) |
-| | |
-| `FunctionName` (attribute) | `FunctionName` (attribute) |
-| `TraceWriter` | `ILogger<T>`, `ILogger` |
-| `HttpRequestMessage` | `HttpRequest` |
-| `HttpResponseMessage` | `IActionResult` |
# [.NET Framework 4.8](#tab/netframework48)
Some key classes changed names between version 1.x and version 4.x. These change
| `HttpRequestMessage` | `HttpRequestData` | | `HttpResponseMessage` | `HttpResponseData` |
+# [.NET 6 (in-process model)](#tab/net6-in-proc)
+
+| Version 1.x | .NET 6 (in-process) |
+| | |
+| `FunctionName` (attribute) | `FunctionName` (attribute) |
+| `TraceWriter` | `ILogger<T>`, `ILogger` |
+| `HttpRequestMessage` | `HttpRequest` |
+| `HttpResponseMessage` | `IActionResult` |
namespace Company.Function
In version 4.x, the HTTP trigger template looks like the following example:
-# [.NET 8 (isolated)](#tab/net8)
+# [.NET 8](#tab/net8)
```csharp using Microsoft.AspNetCore.Http;
namespace Company.Function
} ```
-# [.NET 6 (in-process)](#tab/net6-in-proc)
-- # [.NET Framework 4.8](#tab/netframework48) ```csharp
namespace Company.Function
} ```
+# [.NET 6 (in-process model)](#tab/net6-in-proc)
++ ::: zone-end
azure-functions Migrate Version 3 Version 4 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/migrate-version-3-version-4.md
On version 3.x of the Functions runtime, your C# function app targets .NET Core
[!INCLUDE [functions-dotnet-migrate-v4-versions](../../includes/functions-dotnet-migrate-v4-versions.md)] > [!TIP]
-> **If you're migrating from .NET 5 (on the isolated worker model), we recommend upgrading to .NET 8 on the isolated worker model.** This provides a quick migration path to the fully released version with the longest support window from .NET.
+> **We recommend updating to .NET 8 on the isolated worker model.** .NET 8 is the fully released version with the longest support window from .NET.
>
-> **If you're migrating from .NET Core 3.1 (on the in-process model), we recommend upgrading to .NET 6 on the in-process model.** This provides a quick migration path. However, you might also consider upgrading to .NET 8 on the isolated worker model. Switching to the isolated worker model will require additional code changes as part of this migration, but it will give your app [additional benefits](./dotnet-isolated-in-process-differences.md), including the ability to more easily target future versions of .NET. If you are moving to an LTS or STS version of .NET using the isolated worker model, the [.NET Upgrade Assistant] can also handle many of the necessary code changes for you.
+> Although you can choose to instead use the in-process model, this is not recommended if it can be avoided. [Support will end for the in-process model on November 10, 2026](https://aka.ms/azure-functions-retirements/in-process-model), so you'll need to move to the isolated worker model before then. Doing so while migrating to version 4.x will decrease the total effort required, and the isolated worker model will give your app [additional benefits](./dotnet-isolated-in-process-differences.md), including the ability to more easily target future versions of .NET. If you are moving to the isolated worker model, the [.NET Upgrade Assistant] can also handle many of the necessary code changes for you.
This guide doesn't present specific examples for .NET 7 or .NET 6 on the isolated worker model. If you need to target these versions, you can adapt the .NET 8 isolated worker model examples.
The following example is a `.csproj` project file that uses .NET Core 3.1 on ver
Use one of the following procedures to update this XML file to run in Functions version 4.x:
-# [.NET 8 (isolated)](#tab/net8)
+# [.NET 8](#tab/net8)
[!INCLUDE [functions-dotnet-migrate-project-v4-isolated-net8](../../includes/functions-dotnet-migrate-project-v4-isolated-net8.md)]
-# [.NET 6 (in-process)](#tab/net6-in-proc)
-- # [.NET Framework 4.8](#tab/netframework48) [!INCLUDE [functions-dotnet-migrate-project-v4-isolated-net-framework](../../includes/functions-dotnet-migrate-project-v4-isolated-net-framework.md)]
+# [.NET 6 (in-process model)](#tab/net6-in-proc)
++ ### Package and namespace changes Based on the model you are migrating to, you might need to update or change the packages your application references. When you adopt the target packages, you then need to update the namespace of using statements and some types you reference. You can see the effect of these namespace changes on `using` statements in the [HTTP trigger template examples](#http-trigger-template) later in this article.
-# [.NET 8 (isolated)](#tab/net8)
+# [.NET 8](#tab/net8)
[!INCLUDE [functions-dotnet-migrate-packages-v4-isolated](../../includes/functions-dotnet-migrate-packages-v4-isolated.md)]
-# [.NET 6 (in-process)](#tab/net6-in-proc)
-- # [.NET Framework 4.8](#tab/netframework48) [!INCLUDE [functions-dotnet-migrate-packages-v4-isolated](../../includes/functions-dotnet-migrate-packages-v4-isolated.md)]
+# [.NET 6 (in-process model)](#tab/net6-in-proc)
++ ### Program.cs file When migrating to run in an isolated worker process, you must add the following program.cs file to your project:
-# [.NET 8 (isolated)](#tab/net8)
+# [.NET 8](#tab/net8)
```csharp using Microsoft.Azure.Functions.Worker;
This example includes [ASP.NET Core integration] to improve performance and prov
[!INCLUDE [functions-dotnet-migrate-isolated-program-cs](../../includes/functions-dotnet-migrate-isolated-program-cs.md)]
-# [.NET 6 (in-process)](#tab/net6-in-proc)
-
-A program.cs file isn't required when running in-process.
- # [.NET Framework 4.8](#tab/netframework48) ```csharp
namespace Company.FunctionApp
[!INCLUDE [functions-dotnet-migrate-isolated-program-cs](../../includes/functions-dotnet-migrate-isolated-program-cs.md)]
+# [.NET 6 (in-process model)](#tab/net6-in-proc)
+
+A `Program.cs` file isn't required when you are using the in-process model.
+ ### local.settings.json file
The local.settings.json file is only used when running locally. For information,
When you migrate to version 4.x, make sure that your local.settings.json file has at least the following elements:
-# [.NET 8 (isolated)](#tab/net8)
+# [.NET 8](#tab/net8)
:::code language="json" source="~/functions-quickstart-templates/Functions.Templates/ProjectTemplate_v4.x/CSharp-Isolated/local.settings.json"::: > [!NOTE] > When migrating from running in-process to running in an isolated worker process, you need to change the `FUNCTIONS_WORKER_RUNTIME` value to "dotnet-isolated".
-# [.NET 6 (in-process)](#tab/net6-in-proc)
-- # [.NET Framework 4.8](#tab/netframework48) :::code language="json" source="~/functions-quickstart-templates/Functions.Templates/ProjectTemplate_v4.x/CSharp-Isolated/local.settings.json":::
When you migrate to version 4.x, make sure that your local.settings.json file ha
> [!NOTE] > When migrating from running in-process to running in an isolated worker process, you need to change the `FUNCTIONS_WORKER_RUNTIME` value to "dotnet-isolated".
+# [.NET 6 (in-process model)](#tab/net6-in-proc)
++ ### host.json file
When you migrate to version 4.x, make sure that your local.settings.json file ha
No changes are required to your `host.json` file. However, if you kept your Application Insights configuration in this file from your in-process model project, you might want to make additional changes in your `Program.cs` file. The `host.json` file only controls logging from the Functions host runtime, and in the isolated worker model, some of these logs come from your application directly, giving you more control. See [Managing log levels in the isolated worker model](./dotnet-isolated-process-guide.md#managing-log-levels) for details on how to filter these logs.
-# [.NET 6 (in-process)](#tab/net6-in-proc)
-
-No changes are required to your `host.json` file.
- # [.NET Framework 4.8](#tab/netframework48) No changes are required to your `host.json` file. However, if you kept your Application Insights configuration in this file from your in-process model project, you might want to make additional changes in your `Program.cs` file. The `host.json` file only controls logging from the Functions host runtime, and in the isolated worker model, some of these logs come from your application directly, giving you more control. See [Managing log levels in the isolated worker model](./dotnet-isolated-process-guide.md#managing-log-levels) for details on how to filter these logs. +
+# [.NET 6 (in-process)](#tab/net6-in-proc)
+
+No changes are required to your `host.json` file.
+
No changes are required to your `host.json` file. However, if your Application I
Some key classes changed names between versions. These changes are a result either of changes in .NET APIs or in differences between in-process and isolated worker process. The following table indicates key .NET classes used by Functions that could change when migrating:
-# [.NET 8 (isolated)](#tab/net8)
+# [.NET 8](#tab/net8)
| .NET Core 3.1 | .NET 5 | .NET 8 | | | | |
Some key classes changed names between versions. These changes are a result eith
| `IActionResult` | `HttpResponseData` | `HttpResponseData`, `IActionResult` (using [ASP.NET Core integration])| | `FunctionsStartup` (attribute) | Uses [`Program.cs`](#programcs-file) instead | Uses [`Program.cs`](#programcs-file) instead |
-# [.NET 6 (in-process)](#tab/net6-in-proc)
-
-| .NET Core 3.1 | .NET 5 | .NET 6 (in-process) |
-| | | |
-| `FunctionName` (attribute) | `Function` (attribute) | `FunctionName` (attribute) |
-| `ILogger` | `ILogger` | `ILogger` |
-| `HttpRequest` | `HttpRequestData` | `HttpRequest` |
-| `IActionResult` | `HttpResponseData` | `IActionResult` |
-| `FunctionsStartup` (attribute) | Uses [`Program.cs`](#programcs-file) instead | `FunctionsStartup` (attribute) |
- # [.NET Framework 4.8](#tab/netframework48) | .NET Core 3.1 | .NET 5 |.NET Framework 4.8 |
Some key classes changed names between versions. These changes are a result eith
| `IActionResult` | `HttpResponseData` | `HttpResponseData`| | `FunctionsStartup` (attribute) | Uses [`Program.cs`](#programcs-file) instead | Uses [`Program.cs`](#programcs-file) instead |
+# [.NET 6 (in-process model)](#tab/net6-in-proc)
+
+| .NET Core 3.1 | .NET 5 | .NET 6 (in-process) |
+| | | |
+| `FunctionName` (attribute) | `Function` (attribute) | `FunctionName` (attribute) |
+| `ILogger` | `ILogger` | `ILogger` |
+| `HttpRequest` | `HttpRequestData` | `HttpRequest` |
+| `IActionResult` | `HttpResponseData` | `IActionResult` |
+| `FunctionsStartup` (attribute) | Uses [`Program.cs`](#programcs-file) instead | `FunctionsStartup` (attribute) |
+ [ASP.NET Core integration]: ./dotnet-isolated-process-guide.md#aspnet-core-integration
This section highlights other code changes to consider as you work through the m
[!INCLUDE [functions-dotnet-migrate-isolated-other-code-changes](../../includes/functions-dotnet-migrate-isolated-other-code-changes.md)]
-# [.NET 6 (in-process)](#tab/net6-in-proc)
-
-Make sure to check [Breaking changes between 3.x and 4.x](#breaking-changes-between-3x-and-4x) for additional changes you might need to make to your project.
- # [.NET Framework 4.8](#tab/netframework48) This section highlights other code changes to consider as you work through the migration. These changes are not needed by all applications, but you should evaluate if any are relevant to your scenarios. Make sure to check [Breaking changes between 3.x and 4.x](#breaking-changes-between-3x-and-4x) for additional changes you might need to make to your project. [!INCLUDE [functions-dotnet-migrate-isolated-other-code-changes](../../includes/functions-dotnet-migrate-isolated-other-code-changes.md)]
+# [.NET 6 (in-process)](#tab/net6-in-proc)
+
+Make sure to check [Breaking changes between 3.x and 4.x](#breaking-changes-between-3x-and-4x) for additional changes you might need to make to your project.
+ ### HTTP trigger template
The differences between in-process and isolated worker process can be seen in HT
The HTTP trigger template for the migrated version looks like the following example:
-# [.NET 8 (isolated)](#tab/net8)
+# [.NET 8](#tab/net8)
```csharp using Microsoft.AspNetCore.Http;
namespace Company.Function
} ```
-# [.NET 6 (in-process)](#tab/net6-in-proc)
-
-Same as version 3.x (in-process).
- # [.NET Framework 4.8](#tab/netframework48) ```csharp
namespace Company.Function
} } ```+
+# [.NET 6 (in-process model)](#tab/net6-in-proc)
+
+Same as version 3.x (in-process).
+ ::: zone-end
azure-monitor Agents Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/agents-overview.md
This section provides answers to common questions.
An agent is only required to collect data from the operating system and workloads in virtual machines. The virtual machines can be located in Azure, another cloud environment, or on-premises. See [Azure Monitor Agent overview](./agents-overview.md).
-### How can I be notified when data collection from the Log Analytics agent stops?
+### Does Azure Monitor Agent support data collection for the various Log Analytics solutions and Azure services like Microsoft Defender for Cloud and Microsoft Sentinel?
-Use the steps described in [Create a new log search alert](../alerts/alerts-metric.md) to be notified when data collection stops. Use the following settings for the alert rule:
--- **Define alert condition**: Specify your Log Analytics workspace as the resource target.-- **Alert criteria**:
- - **Signal Name**: *Custom log search*.
- - **Search query**: `Heartbeat | summarize LastCall = max(TimeGenerated) by Computer | where LastCall < ago(15m)`.
- - **Alert logic**: **Based on** *number of results*, **Condition** *Greater than*, **Threshold value** *0*.
- - **Evaluated based on**: **Period (in minutes)** *30*, **Frequency (in minutes)** *10*.
-- **Define alert details**:
- - **Name**: *Data collection stopped*.
- - **Severity**: *Warning*.
-
-Specify an existing or new [action group](../alerts/action-groups.md) so that when the log search alert matches criteria, you're notified if you have a heartbeat missing for more than 15 minutes.
-
-### Will Azure Monitor Agent support data collection for the various Log Analytics solutions and Azure services like Microsoft Defender for Cloud and Microsoft Sentinel?
-
-Review the list of [Azure Monitor Agent extensions currently available in preview](#supported-services-and-features). These extensions are the same solutions and services now available by using the new Azure Monitor Agent instead.
+For a list of features and services that use Azure Monitor Agent for data collection, see [Migrate to Azure Monitor Agent from Log Analytics agent](../agents/azure-monitor-agent-migration.md#migrate-additional-services-and-features).
-You might see more extensions getting installed for the solution or service to collect extra data or perform transformation or processing as required for the solution or service. Then use Azure Monitor Agent to route the final data to Azure Monitor.
+Some services might install other extensions to collect more data or to transform or process data, and then use Azure Monitor Agent to route the final data to Azure Monitor.
The following diagram explains the new extensibility architecture. :::image type="content" source="./media/azure-monitor-agent/extensibility-arch-new.png" lightbox="./media/azure-monitor-agent/extensibility-arch-new.png" alt-text="Diagram that shows extensions architecture.":::
-### Is Azure Monitor Agent at parity with the Log Analytics agents?
-
-Review the [current limitations](./azure-monitor-agent-overview.md#current-limitations) of Azure Monitor Agent when compared with Log Analytics agents.
- ### Does Azure Monitor Agent support non-Azure environments like other clouds or on-premises? Both on-premises machines and machines connected to other clouds are supported for servers today, after you have the Azure Arc agent installed. For purposes of running Azure Monitor Agent and data collection rules, the Azure Arc requirement comes at *no extra cost or resource consumption*. The Azure Arc agent is only used as an installation mechanism. You don't need to enable the paid management features if you don't want to use them.
azure-monitor Action Groups https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/action-groups.md
Last updated 05/02/2023 -+ # Action groups
You might have a limited number of voice actions per action group.
> [!NOTE] >
-> If you can't select your country/region code in the Azure portal, voice calls aren't supported for your country/region. If your country/region code isn't available, you can vote to have your country/region added at [Share your ideas](https://feedback.azure.com/d365community/idea/e527eaa6-2025-ec11-b6e6-000d3a4f09d0). In the meantime, as a workaround, configure your action group to call a webhook to a third-party voice call provider that offers support in your country/region.
+> If you can't select your country/region code in the Azure portal, voice calls aren't supported for your country/region. If your country/region code isn't available, you can vote to have your country/region added at [Share your ideas](https://feedback.azure.com/d365community/idea/e527eaa6-2025-ec11-b6e6-000d3a4f09d0). In the meantime, as a workaround, configure your action group to call a webhook to a third-party voice call provider that offers support in your country/region. If a country/region is marked with an '*', calls come from a USA-based phone number.
### Countries/Regions with Voice notification support

| Country code | Country |
|:---|:---|
You might have a limited number of voice actions per action group.
| 55 | Brazil |
| 1 | Canada |
| 56 | Chile |
+| 86 | China* |
| 420 | Czech Republic |
| 45 | Denmark |
| 372 | Estonia |
| 358 | Finland |
| 33 | France |
| 49 | Germany |
-| 852 | Hong Kong |
+| 852 | Hong Kong* |
+| 91 | India* |
| 353 | Ireland |
| 972 | Israel |
+| 39 | Italy* |
+| 81 | Japan* |
| 352 | Luxembourg |
| 60 | Malaysia |
| 52 | Mexico |
You might have a limited number of voice actions per action group.
| 64 | New Zealand |
| 47 | Norway |
| 351 | Portugal |
-| 40 | Romania |
+| 40 | Romania* |
+| 7 | Russia* |
| 65 | Singapore |
| 27 | South Africa |
+| 82 | South Korea |
| 34 | Spain |
| 46 | Sweden |
| 41 | Switzerland |
-| 886 | Taiwan |
-| 971 | United Arab Emirates |
+| 886 | Taiwan* |
+| 971 | United Arab Emirates* |
| 44 | United Kingdom |
| 1 | United States |
azure-vmware Configure Windows Server Failover Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/configure-windows-server-failover-cluster.md
Title: Configure Windows Server Failover Cluster on Azure VMware Solution vSAN
description: Learn how to configure Windows Server Failover Cluster (WSFC) on Azure VMware Solution vSAN with native shared disks. Previously updated : 12/07/2023 Last updated : 3/29/2024
Azure VMware Solution provides native support for virtualized WSFC. It supports
The following diagram illustrates the architecture of WSFC virtual nodes on an Azure VMware Solution private cloud. It shows where Azure VMware Solution resides, including the WSFC virtual servers (blue box), in relation to the broader Azure platform. This diagram illustrates a typical hub-spoke architecture, but a similar setup is possible using Azure Virtual WAN. Both offer all the value other Azure services can bring you. ## Supported configurations
batch Batch Upgrade Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/batch-upgrade-policy.md
Title: Provision a pool with Auto OS Upgrade description: Learn how to create a Batch pool with Auto OS Upgrade so that customers can have control over their OS upgrade strategy to ensure safe, workload-aware OS upgrade deployments. Previously updated : 02/29/2024 Last updated : 03/27/2024 # Create an Azure Batch pool with Automatic Operating System (OS) Upgrade > [!IMPORTANT]
-> - Support for pools with Auto OS Upgrade in Azure Batch is currently in public preview, and is currently controlled by an account-level feature flag. If you want to use this feature, please start a [support request](../azure-portal/supportability/how-to-create-azure-support-request.md) and provide your batch account to request its activation.
> - This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
> - For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
public async Task CreateUpgradePolicyPool()
## FAQs -- How can I enable Auto OS Upgrade?-
- Start a [support request](../azure-portal/supportability/how-to-create-azure-support-request.md) and provide your batch account to request its activation.
- - Will my tasks be disrupted if I enable Auto OS Upgrade? Tasks won't be disrupted when *automaticOSUpgradePolicy.osRollingUpgradeDeferral* is set to 'true'. In that case, the upgrade is postponed until the node becomes idle. Otherwise, the node upgrades when it receives a new OS version, regardless of whether it's currently running a task. So we strongly advise enabling *automaticOSUpgradePolicy.osRollingUpgradeDeferral*.
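
For orientation, here's a minimal sketch of where the deferral setting sits in the pool body sent to the Batch management REST API. The endpoint shape and `api-version` are assumptions, the IDs are placeholders, and a real request needs the full pool definition; the property names follow the *automaticOSUpgradePolicy* settings discussed above.

```python
import requests

# Hypothetical identifiers; substitute your own subscription, resource group,
# Batch account, pool name, and a valid ARM bearer token.
POOL_URL = (
    "https://management.azure.com/subscriptions/<sub-id>/resourceGroups/<rg>"
    "/providers/Microsoft.Batch/batchAccounts/<account>/pools/<pool>"
    "?api-version=2024-02-01"  # assumed api-version
)

body = {
    "properties": {
        # vmSize and deploymentConfiguration omitted for brevity; a real
        # request must include the complete pool definition.
        "upgradePolicy": {
            "mode": "Automatic",
            "automaticOSUpgradePolicy": {
                "enableAutomaticOSUpgrade": True,
                # Defer the upgrade until the node is idle so tasks aren't disrupted.
                "osRollingUpgradeDeferral": True,
            },
        },
    }
}

resp = requests.put(POOL_URL, json=body, headers={"Authorization": "Bearer <token>"})
resp.raise_for_status()
```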
communication-services Chat Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/analytics/chat-metrics.md
Title: Chat metrics definitions for Azure Communication Service
description: This document covers definitions of chat metrics available in the Azure portal. - Last updated 06/23/2023
communication-services Call Automation Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/analytics/logs/call-automation-metrics.md
Title: Call automation metrics definitions for Azure Communication Service
description: This document covers definitions of call automation metrics available in the Azure portal. - Last updated 06/23/2023
communication-services Rooms Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/analytics/rooms-metrics.md
Title: Rooms metrics definitions for Azure Communication Service
description: This document covers definitions of rooms metrics available in the Azure portal. - Last updated 06/26/2023
communication-services Sms Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/analytics/sms-metrics.md
Title: SMS metrics definitions for Azure Communication Service
description: This document covers definitions of SMS metrics available in the Azure portal. - Last updated 06/23/2023
communication-services Turn Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/analytics/turn-metrics.md
Title: TURN metrics definitions for Azure Communication Services
description: This document covers definitions of TURN metrics available in the Azure portal. - Last updated 06/26/2023
communication-services Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/authentication.md
The Azure platform provides role-based access (Azure RBAC) to control access to
To set up a service principal, [create a registered application from the Azure CLI](../quickstarts/identity/service-principal.md?pivots=platform-azcli). Then, the endpoint and credentials can be used to authenticate the SDKs. See examples of how [service principal](../quickstarts/identity/service-principal.md) is used.
-Communication services supports Microsoft Entra authentication for Communication services resources. You can find more details, about the managed identity support in the [Microsoft Entra documentation](../../active-directory/managed-identities-azure-resources/services-support-managed-identities.md).
+Communication Services supports Microsoft Entra authentication for Communication Services resources. You can find more details about managed identity support in the [Microsoft Entra documentation](/entra/identity/managed-identities-azure-resources/managed-identities-status).
++ Use our [Trusted authentication service hero sample](../samples/trusted-auth-sample.md) to map Azure Communication Services access tokens with your Microsoft Entra ID.
communication-services Bring Your Own Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/call-automation/call-recording/bring-your-own-storage.md
The same Azure Communication Services Call Recording APIs are used to export rec
## Azure Managed Identities
-Bring your own Azure storage uses [Azure Managed Identities](../../../../active-directory/managed-identities-azure-resources/overview.md) to access user-owned resources securely. Azure Managed Identities provides an identity for the application to use when it needs to access Azure resources, eliminating the need for developers to manage credentials.
+Bring your own Azure storage uses [Azure Managed Identities](/entra/identity/managed-identities-azure-resources/overview) to access user-owned resources securely. Azure Managed Identities provides an identity for the application to use when it needs to access Azure resources, eliminating the need for developers to manage credentials.
## Known issues
communication-services Credentials Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/credentials-best-practices.md
const fetchTokenFromMyServerForUser = async function (abortSignal, username) {
} ```
-In this example, we use the Microsoft Authentication Library (MSAL) to refresh the Microsoft Entra access token. Following the guide to [acquire a Microsoft Entra token to call an API](../../active-directory/develop/scenario-spa-acquire-token.md), we first try to obtain the token without the user's interaction. If that's not possible, we trigger one of the interactive flows.
+In this example, we use the Microsoft Authentication Library (MSAL) to refresh the Microsoft Entra access token. Following the guide to [acquire a Microsoft Entra token to call an API](/entra/identity-platform/scenario-spa-acquire-token), we first try to obtain the token without the user's interaction. If that's not possible, we trigger one of the interactive flows.
```javascript const refreshAadToken = async function (abortSignal, username) {
To minimize the number of roundtrips to the Azure Communication Identity API, ma
# [JavaScript](#tab/javascript)
-Option 1: Trigger the token acquisition flow with [`AuthenticationParameters.forceRefresh`](../../active-directory/develop/msal-js-pass-custom-state-authentication-request.md) set to `true`.
+Option 1: Trigger the token acquisition flow with [`AuthenticationParameters.forceRefresh`](/entra/identity-platform/msal-js-pass-custom-state-authentication-request) set to `true`.
```javascript // Extend the `refreshAadToken` function
communication-services Email Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/email-metrics.md
Title: Email metric definitions for Azure Communication Services
description: This document covers definitions of Azure Communication Services email metrics available in the Azure portal. -
Primitives in Azure Communication Services emit metrics for API requests. These
All API request metrics contain three dimensions that you can use to filter your metrics data. These dimensions can be aggregated together using the `Count` aggregation type and support all standard Azure Aggregation time series including `Sum`, `Average`, `Min`, and `Max`.
-More information on supported aggregation types and time series aggregations can be found [Advanced features of Azure Metrics Explorer](../../azure-monitor/essentials/metrics-charts.md#aggregation)
+For more information about supported aggregation types and time series aggregations, see [Advanced features of Azure Metrics Explorer](../../azure-monitor/essentials/metrics-charts.md#aggregation).
- **Operation** - All operations or routes that can be called on the Azure Communication Services Chat gateway. - **Status Code** - The status code response sent after the request. - **StatusSubClass** - The status code series sent after the response. ### Email Service Delivery Status Updates
-The `Email Service Delivery Status Updates` metric lets the email sender track SMTP and Enhanced SMTP status codes and get an idea of how many hard bounces they are encountering.
+The `Email Service Delivery Status Updates` metric lets the email sender track SMTP and Enhanced SMTP status codes and get an idea of how many hard bounces they're encountering.
The following dimensions are available on the `Email Service Delivery Status Updates` metric: | Dimension | Description | | -- | - | | Result | High level status of the message delivery: Success, Failure. |
-| MessageStatus | Terminal state of the Delivered, Failed, Suppressed. Emails are suppressed when a user sends an email to an email address that is known not to exist. Sending emails to addresses that do not exist trigger a hard bounce. |
+| MessageStatus | Terminal state of the message delivery: Delivered, Failed, or Suppressed. Emails are suppressed when a user sends an email to an email address that is known not to exist. Sending emails to addresses that don't exist triggers a hard bounce. |
| IsHardBounce | True when a message delivery failed due to a hard bounce or if an item was suppressed due to a previous hard bounce. |
| SenderDomain | The domain portion of the sender's email address. |
| SmtpStatusCode | SMTP error code for failed deliveries. |
-| EnhancedSmtpStatusCode | The EnhancedSmtpStatusCode status code will be emitted if it is available. This status code provides additional details not available with the SmtpStatusCode. |
+| EnhancedSmtpStatusCode | The EnhancedSmtpStatusCode status code will be emitted if it's available. This status code provides other details not available with the SmtpStatusCode. |
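
As a rough illustration, the sketch below queries this metric with the `azure-monitor-query` Python SDK and splits it by the `IsHardBounce` dimension. The resource ID is a placeholder, and the metric name is taken from the display name above; verify the programmatic metric and dimension names for your resource before relying on them.

```python
from datetime import timedelta
from azure.identity import DefaultAzureCredential
from azure.monitor.query import MetricsQueryClient, MetricAggregationType

client = MetricsQueryClient(DefaultAzureCredential())

# Placeholder resource ID for a Communication Services resource.
resource_id = (
    "/subscriptions/<sub-id>/resourceGroups/<rg>"
    "/providers/Microsoft.Communication/communicationServices/<name>"
)

result = client.query_resource(
    resource_id,
    metric_names=["Email Service Delivery Status Updates"],  # display name; verify
    timespan=timedelta(days=1),
    aggregations=[MetricAggregationType.COUNT],
    filter="IsHardBounce eq '*'",  # split the series by the IsHardBounce dimension
)

for metric in result.metrics:
    for series in metric.timeseries:
        for point in series.data:
            print(point.timestamp, point.count)
```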
:::image type="content" source="./media/acs-email-delivery-status-hardbounce-metrics.png" alt-text="Screenshot showing the Email delivery status update metric - IsHardBounce.":::
+
:::image type="content" source="./media/acs-email-delivery-status-smtp-metrics.png" alt-text="Screenshot showing the Email delivery status update metric - SmtpStatusCode.":::

### Email Service API requests
-The following operations are available for the `Email Service API Requests` metric. These standard dimensions are supported: StatusCode, StatusCodeClass, StatusCodeReason and Operation.
+The following operations are available for the `Email Service API Requests` metric. These standard dimensions are supported: StatusCode, StatusCodeClass, StatusCodeReason, and Operation.
| Operation | Description | | -- | - |
communication-services Calling Chat https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/interop/calling-chat.md
To enable calling between your Communication Services users and Teams tenant, al
## Get Teams user ID To start a call with a Teams user or Teams Voice application, you need an identifier of the target. You have the following options to retrieve the ID:-- User interface of [Microsoft Entra ID](../troubleshooting-info.md?#getting-user-id) or with on-premises directory synchronization [Microsoft Entra Connect](../../../active-directory/hybrid/how-to-connect-sync-whatis.md)
+- User interface of [Microsoft Entra ID](../troubleshooting-info.md?#getting-user-id) or with on-premises directory synchronization [Microsoft Entra Connect](/entra/identity/hybrid/connect/how-to-connect-sync-whatis)
- Programmatically via [Microsoft Graph API](/graph/api/resources/users) ## Calling
communication-services Custom Teams Endpoint Authentication Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/interop/custom-teams-endpoint-authentication-overview.md
The following sequence diagram details single-tenant authentication.
:::image type="content" source="./media/custom-teams-endpoint/authentication-case-single-tenant-azure-rbac.svg" alt-text="A sequence diagram that details authentication of Fabrikam Teams users. The client application gets an Azure Communication Services access token for a single tenant Microsoft Entra application." lightbox="./media/custom-teams-endpoint/authentication-case-single-tenant-azure-rbac.svg"::: Before we begin:-- Alice or her Microsoft Entra administrator needs to give the custom Teams application consent, prior to the first attempt to sign in. Learn more about [consent](../../../active-directory/develop/consent-framework.md).
+- Alice or her Microsoft Entra administrator needs to give the custom Teams application consent, prior to the first attempt to sign in. Learn more about [consent](/entra/identity-platform/application-consent-experience).
- The Azure Communication Services resource admin needs to grant Alice permission to perform her role. Learn more about [Azure RBAC role assignment](../../../role-based-access-control/role-assignments-portal.md). Steps:
The following sequence diagram details multi-tenant authentication.
:::image type="content" source="./media/custom-teams-endpoint/authentication-case-multiple-tenants-hmac.svg" alt-text="A sequence diagram that details authentication of Teams users and Azure Communication Services access tokens for multi-tenant Microsoft Entra applications." lightbox="./media/custom-teams-endpoint/authentication-case-multiple-tenants-hmac.svg"::: Before we begin:-- Alice or her Microsoft Entra administrator needs to give Contoso's Microsoft Entra application consent before the first attempt to sign in. Learn more about [consent](../../../active-directory/develop/consent-framework.md).
+- Alice or her Microsoft Entra administrator needs to give Contoso's Microsoft Entra application consent before the first attempt to sign in. Learn more about [consent](/entra/identity-platform/application-consent-experience).
Steps:
-1. Authenticate Alice using the Fabrikam application: Alice is authenticated through Fabrikam's application. A standard OAuth flow with Microsoft Authentication Library (MSAL) is used. Make sure you configure MSAL with a correct [authority](../../../active-directory/develop/msal-client-application-configuration.md#authority). If authentication is successful, the Contoso client application receives a Microsoft Entra access token with a value of 'A1' and an Object ID of a Microsoft Entra user with a value of 'A2'. Token details are outlined below. Authentication from the developer perspective is explored in this [quickstart](../../quickstarts/manage-teams-identity.md).
+1. Authenticate Alice using the Fabrikam application: Alice is authenticated through Fabrikam's application. A standard OAuth flow with Microsoft Authentication Library (MSAL) is used. Make sure you configure MSAL with a correct [authority](/entra/identity-platform/msal-client-application-configuration#authority). If authentication is successful, the Contoso client application receives a Microsoft Entra access token with a value of 'A1' and an Object ID of a Microsoft Entra user with a value of 'A2'. Token details are outlined below. Authentication from the developer perspective is explored in this [quickstart](../../quickstarts/manage-teams-identity.md).
1. Get an access token for Alice: The Contoso application, using a custom authentication artifact with value 'B', performs authorization logic to decide whether Alice has permission to exchange the Microsoft Entra access token for an Azure Communication Services access token. After successful authorization, the Contoso application performs control plane logic, using artifacts 'A1', 'A2', and 'A3'. An Azure Communication Services access token 'D' is generated for Alice within the Contoso application. This access token can be used for data plane actions in Azure Communication Services, like Calling. The 'A2' and 'A3' artifacts are passed along with the artifact 'A1'. The validation assures that the Microsoft Entra token was issued to the expected user and application, and it prevents attackers from using Microsoft Entra access tokens issued to other applications or other users. For more information on how to get 'A' artifacts, see [Receive the Microsoft Entra user token and object ID via the MSAL library](../../quickstarts/manage-teams-identity.md?pivots=programming-language-csharp#step-1-receive-the-azure-ad-user-token-and-object-id-via-the-msal-library) and [Getting Application ID](../troubleshooting-info.md#getting-application-id).
1. Call Bob: Alice makes a call to Teams user Bob with Fabrikam's application. The call takes place via the Calling SDK with an Azure Communication Services access token. Learn more about developing custom Teams apps [in this quickstart](../../quickstarts/voice-video-calling/get-started-with-voice-video-calling-custom-teams-client.md).
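
To make the token-exchange step concrete, here's a hedged sketch using the `azure-communication-identity` Python SDK. The authorization logic Contoso runs with artifact 'B' is the application's own code and isn't shown; the placeholder values map to the artifacts described above.

```python
from azure.communication.identity import CommunicationIdentityClient

client = CommunicationIdentityClient.from_connection_string("<acs-connection-string>")

# Exchange the Microsoft Entra access token (A1) for an Azure Communication
# Services access token (D). A3 is the application ID and A2 the user's object
# ID; the service validates that A1 was issued to that app and user.
acs_token = client.get_token_for_teams_user(
    aad_token="<A1-entra-access-token>",
    client_id="<A3-application-id>",
    user_object_id="<A2-user-object-id>",
)
print(acs_token.token, acs_token.expires_on)
```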
Artifacts:
- Artifact A2 - Type: Object ID of a Microsoft Entra user - Source: Fabrikam's Microsoft Entra tenant
- - Authority: `https://login.microsoftonline.com/<tenant>/` or `https://login.microsoftonline.com/organizations/` (based on your [scenario](../../../active-directory/develop/msal-client-application-configuration.md#authority))
+ - Authority: `https://login.microsoftonline.com/<tenant>/` or `https://login.microsoftonline.com/organizations/` (based on your [scenario](/entra/identity-platform/msal-client-application-configuration#authority))
- Artifact A3 - Type: Microsoft Entra application ID - Source: Contoso application registration's Microsoft Entra tenant
communication-services Azure Ad Api Permissions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/interop/teams-user/azure-ad-api-permissions.md
None.
- Application admin - Cloud application admin
-Find more details in [Microsoft Entra documentation](../../../../active-directory/roles/permissions-reference.md).
+Find more details in [Microsoft Entra documentation](/entra/identity/role-based-access-control/permissions-reference).
communication-services Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/metrics.md
Title: Metric definitions for Azure Communication Services
description: This document covers definitions of metrics available in the Azure portal. -
communication-services Troubleshoot Web Voip Quality https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/voice-video-calling/troubleshoot-web-voip-quality.md
For more information, see [End of Call Survey overview](end-of-call-survey-conce
## Next steps
-For more information about using Call Quality Dashboard (CQD) to view interop call logs, see [Use CQD to manage call and meeting quality in Microsoft Teams](https://learn.microsoft.com/microsoftteams/quality-of-experience-review-guide).
+For more information about using Call Quality Dashboard (CQD) to view interop call logs, see [Use CQD to manage call and meeting quality in Microsoft Teams](/microsoftteams/quality-of-experience-review-guide).
For more information about Calling SDK error codes, see [Troubleshooting in Azure Communication Services](../troubleshooting-info.md#calling-sdk-error-codes). You can use these codes to help determine why a call ended with disruptions.
communication-services Teams Interop Call Automation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/how-tos/call-automation/teams-interop-call-automation.md
In this quickstart, we use the Azure Communication Services Call Automation APIs
## Step 1: Authorization for your Azure Communication Services Resource to enable calling to Microsoft Teams users
-To enable calling through Call Automation APIs, a [Microsoft Teams Administrator](/azure/active-directory/roles/permissions-reference#teams-administrator) or [Global Administrator](/en-us/azure/active-directory/roles/permissions-reference#global-administrator) must explicitly enable the Communication Services resource(s) access to their tenant to allow calling.
+To enable calling through Call Automation APIs, a [Microsoft Teams Administrator](/entra/identity/role-based-access-control/permissions-reference#teams-administrator) or [Global Administrator](/entra/identity/role-based-access-control/permissions-reference#global-administrator) must explicitly enable the Communication Services resource(s) access to their tenant to allow calling.
[Set-CsTeamsAcsFederationConfiguration (MicrosoftTeamsPowerShell)](/powershell/module/teams/set-csteamsacsfederationconfiguration) Tenant level setting that enables/disables federation between their tenant and specific Communication Services resources.
communication-services Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/how-tos/managed-identity.md
Assigning a user-assigned identity to your Azure Communication Services resource
First, you need to create a user-assigned managed identity resource.
-1. Create a user-assigned managed identity resource according to [these instructions](~/articles/active-directory/managed-identities-azure-resources/how-manage-user-assigned-managed-identities.md#create-a-user-assigned-managed-identity).
+1. Create a user-assigned managed identity resource according to [these instructions](/entra/identity/managed-identities-azure-resources/how-manage-user-assigned-managed-identities).
2. In the left navigation for your app's page, scroll down to the **Settings** group.
For more information on using the golang Management SDK, see [Azure Communicatio
## Next steps Now that you have learned how to enable Managed Identity with Azure Communication Services, consider implementing this feature in your own applications to simplify your authentication process and improve security. -- [Managed Identities](~/articles/active-directory/managed-identities-azure-resources/overview.md)-- [Manage user-assigned managed identities](~/articles/active-directory/managed-identities-azure-resources/how-manage-user-assigned-managed-identities.md)
+- [Managed Identities](/entra/identity/managed-identities-azure-resources/overview)
+- [Manage user-assigned managed identities](/entra/identity/managed-identities-azure-resources/how-manage-user-assigned-managed-identities)
communication-services Eligible Teams Licenses https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/eligible-teams-licenses.md
# Teams License requirements to use Azure Communication Services support for Teams users
-To use Azure Communication Services support for Teams users, you need a Microsoft Entra instance with users that have a valid Teams license. Furthermore, license must be assigned to the administrators or relevant users. Also, note that [MSA accounts (personal Microsoft accounts)](../../active-directory/external-identities/microsoft-account.md) are not supported. This article describes the service plans requirements to use Azure Communication Services support for Teams users.
+To use Azure Communication Services support for Teams users, you need a Microsoft Entra instance with users that have a valid Teams license. Furthermore, the license must be assigned to the administrators or relevant users. Also note that [MSA accounts (personal Microsoft accounts)](/entra/external-id/microsoft-account) aren't supported. This article describes the service plan requirements to use Azure Communication Services support for Teams users.
## Eligible products and service plans
Ensure that your Microsoft Entra users have at least one of the following eligib
| TEAMS_AR_DOD | fd500458-c24c-478e-856c-a6067a8376cd | Office 365 E3_USGOV_DOD | | | | Microsoft 365 E3_USGOV_DOD |
-For more information, see [Microsoft Entra Product names and service plan identifiers](../../active-directory/enterprise-users/licensing-service-plan-reference.md).
+For more information, see [Microsoft Entra Product names and service plan identifiers](/entra/identity/users/licensing-service-plan-reference).
### How to find assigned service plans and products?
communication-services Smtp Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/email/send-email-smtp/smtp-authentication.md
In this quick start, you learn about how to use an Entra application to create t
- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). - An Azure Communication Email Resource created and ready with a provisioned domain [Get started with Creating Email Communication Resource](../create-email-communication-resource.md) - An active Azure Communication Services Resource connected with Email Domain and a Connection String. [Get started by Connecting Email Resource with a Communication Resource](../connect-email-communication-resource.md)-- An Entra application with access to the Azure Communication Services Resource. [Register an application with Microsoft Entra ID and create a service principal](../../../../active-directory/develop/howto-create-service-principal-portal.md#register-an-application-with-microsoft-entra-id-and-create-a-service-principal)-- A client secret for the Entra application with access to the Azure Communication Service Resource. [Create a new client secret](../../../../active-directory/develop/howto-create-service-principal-portal.md#option-3-create-a-new-client-secret)
+- An Entra application with access to the Azure Communication Services Resource. [Register an application with Microsoft Entra ID and create a service principal](/entra/identity-platform/howto-create-service-principal-portal#register-an-application-with-microsoft-entra-id-and-create-a-service-principal)
+- A client secret for the Entra application with access to the Azure Communication Service Resource. [Create a new client secret](/entra/identity-platform/howto-create-service-principal-portal#option-3-create-a-new-client-secret)
## Using a Microsoft Entra application with access to the Azure Communication Services Resource for SMTP
communication-services Manage Teams Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/manage-teams-identity.md
The following application settings influence the experience:
- The *Supported account types* property defines whether the application is single tenant ("Accounts in this organizational directory only") or multitenant ("Accounts in any organizational directory"). For this scenario, you can use multitenant. - *Redirect URI* defines the URI where the authentication request is redirected after authentication. For this scenario, you can use **Public client/native (mobile & desktop)** and enter **`http://localhost`** as the URI.
-For more detailed information, see [Register an application with the Microsoft identity platform](../../active-directory/develop/quickstart-register-app.md#register-an-application).
+For more detailed information, see [Register an application with the Microsoft identity platform](/entra/identity-platform/quickstart-register-app#register-an-application).
When the application is registered, you'll see an [identifier in the overview](../concepts/troubleshooting-info.md#getting-application-id). This identifier, *Application (client) ID*, is used in the next steps.
The developer's required actions are shown in following diagram:
By using the MSAL, developers can acquire Microsoft Entra user tokens from the Microsoft identity platform endpoint to authenticate users and access secure web APIs. It can be used to provide secure access to Communication Services. The MSAL supports many different application architectures and platforms, including .NET, JavaScript, Java, Python, Android, and iOS.
-For more information about setting up environments in public documentation, see [Microsoft Authentication Library overview](../../active-directory/develop/msal-overview.md).
+For more information about setting up environments in public documentation, see [Microsoft Authentication Library overview](/entra/identity-platform/msal-overview).
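
For instance, a minimal MSAL for Python sketch of this acquisition might look as follows. The client ID is a placeholder, and the scope shown is an assumption about the permissions your scenario needs; adjust both to your app registration.

```python
import msal

# Placeholder app registration; 'organizations' supports multitenant sign-in.
app = msal.PublicClientApplication(
    client_id="<application-client-id>",
    authority="https://login.microsoftonline.com/organizations",
)

# Assumed scope; use the permissions your scenario requires.
scopes = ["https://auth.msft.communication.azure.com/Teams.ManageCalls"]

# Prefer a cached (silent) token; fall back to an interactive sign-in.
accounts = app.get_accounts()
result = app.acquire_token_silent(scopes, account=accounts[0]) if accounts else None
if not result:
    result = app.acquire_token_interactive(scopes=scopes)

entra_token = result["access_token"]
```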
> [!NOTE] > The following sections describe how to exchange the Microsoft Entra access token for the access token of Teams user for the console application.
communication-services Trusted Auth Sample https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/samples/trusted-auth-sample.md
Since this sample only focuses on the server APIs, the client application is not
To be able to run this sample, you will need to: -- Register a Client and Server (Web API) applications in Microsoft Entra ID as part of [On Behalf Of workflow](../../active-directory/develop/v2-oauth2-on-behalf-of-flow.md). Follow instructions on [registrations set up guideline](https://github.com/Azure-Samples/communication-services-authentication-hero-csharp/blob/main/docs/deployment-guides/set-up-app-registrations.md)
+- Register Client and Server (Web API) applications in Microsoft Entra ID as part of the [On Behalf Of workflow](/entra/identity-platform/v2-oauth2-on-behalf-of-flow). Follow instructions on [registrations set up guideline](https://github.com/Azure-Samples/communication-services-authentication-hero-csharp/blob/main/docs/deployment-guides/set-up-app-registrations.md)
- A deployed Azure Communication Services resource. [Create an Azure Communication Services resource](../quickstarts/create-communication-resource.md?tabs=linux&pivots=platform-azp). - Update the Server (Web API) application with information from the app registrations.
communication-services Events Playbook https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/tutorials/events-playbook.md
Microsoft Graph enables event management platforms to empower organizers to sche
1. Create an account that will own the meetings and is branded appropriately. This is the account that will create the events and receive notifications for them. We recommend not using a personal production account, given the overhead it might incur in the form of reminders.
- 2. As part of the application setup, the service account is used to login into the solution once. With this permission the application can retrieve and store an access token on behalf of the service account that will own the meetings. Your application will need to store the tokens generated from the login and place them in a secure location such as a key vault. The application will need to store both the access token and the refresh token. Learn more about [auth tokens](../../active-directory/develop/access-tokens.md). and [refresh tokens](../../active-directory/develop/refresh-tokens.md).
+ 2. As part of the application setup, the service account is used to log in to the solution once. With this permission, the application can retrieve and store an access token on behalf of the service account that will own the meetings. Your application will need to store the tokens generated from the login and place them in a secure location such as a key vault. The application will need to store both the access token and the refresh token. Learn more about [auth tokens](/entra/identity-platform/access-tokens) and [refresh tokens](/entra/identity-platform/refresh-tokens).
- 3. The application will require "on behalf of" permissions with the [offline scope](../../active-directory/develop/v2-permissions-and-consent.md#offline_access) to act on behalf of the service account for the purpose of creating meetings. Individual Microsoft Graph APIs require different scopes, learn more in the links detailed below as we introduce the required APIs.
+ 3. The application will require "on behalf of" permissions with the [offline scope](/entra/identity-platform/permissions-consent-overview#offline_access) to act on behalf of the service account for the purpose of creating meetings. Individual Microsoft Graph APIs require different scopes; learn more in the links detailed below as we introduce the required APIs.
4. Refresh tokens can be revoked in the event of a breach or account termination.
communication-services Integrate Azure Function https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/tutorials/integrate-azure-function.md
With extra configuration, this sample supports connecting to a Microsoft Entra p
Note that we currently don't support Microsoft Entra ID in sample code. Follow the links below to enable it in your app and Azure Function:
-[Register your app under Microsoft Entra ID (using Android platform settings)](../../active-directory/develop/tutorial-v2-android.md).
+[Register your app under Microsoft Entra ID (using Android platform settings)](/entra/identity-platform/tutorial-v2-android).
[Configure your App Service or Azure Functions app to use Microsoft Entra ID log in](../../app-service/configure-authentication-provider-aad.md).
cosmos-db Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/introduction.md
adobe-target: true
[!INCLUDE[NoSQL, MongoDB, Cassandra, Gremlin, Table, PostgreSQL](includes/appliesto-nosql-mongodb-cassandra-gremlin-table-postgresql.md)]
+> "OpenAI relies on Cosmos DB to dynamically scale their ChatGPT service – one of the fastest-growing consumer apps ever – enabling high reliability and low maintenance." – Satya Nadella, Microsoft chairman and chief executive officer
+ Today's applications are required to be highly responsive and always online. They must respond in real time to large changes in usage at peak hours, store ever increasing volumes of data, and make this data available to users in milliseconds. To achieve low latency and high availability, instances of these applications need to be deployed in datacenters that are close to their users. Recently, the surge of AI-powered applications created another layer of complexity, because many of these applications currently integrate a multitude of data stores. For example, some teams built applications that simultaneously connect to MongoDB, Postgres, Redis, and Gremlin. These databases differ in implementation workflow and operational performances, posing extra complexity for scaling applications.
cosmos-db Vector Search https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/vcore/vector-search.md
Use the vector database in Azure Cosmos DB for MongoDB vCore to seamlessly conne
## What is a vector database?
-A vector database is a database designed to store and manage vector embeddings, which are mathematical representations of data in a high-dimensional space. In this space, each dimension corresponds to a feature of the data, and tens of thousands of dimensions might be used to represent sophisticated data. A vector's position in this space represents its characteristics. Words, phrases, or entire documents, and images, audio, and other types of data can all be vectorized. Vector search is used to query these embeddings.
+A [vector database](../../vector-database.md) is a database designed to store and manage vector embeddings, which are mathematical representations of data in a high-dimensional space. In this space, each dimension corresponds to a feature of the data, and tens of thousands of dimensions might be used to represent sophisticated data. A vector's position in this space represents its characteristics. Words, phrases, or entire documents, and images, audio, and other types of data can all be vectorized. Vector search is used to query these embeddings.
## What is vector search?
cosmos-db Vector Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/vector-database.md
Previously updated : 03/29/2024 Last updated : 03/30/2024 # Vector database
Last updated 03/29/2024
Vector databases are used in numerous domains and situations across analytical and generative AI, including natural language processing, video and image recognition, recommendation system, search, etc.
-Many AI-enhanced systems that emerged in 2023 use standalone vector databases that are distinct from "traditional" databases in their tech stacks. Instead of adding a separate vector database, you can use our integrated vector database when working with multi-modal data. By doing so, you avoid the extra cost of moving data to a separate database. Moreover, this architecture keeps your vector embeddings and original data together, and you can better achieve data consistency, scale, and performance. The latter reason is why OpenAI built its ChatGPT service on top of Azure Cosmos DB.
+In 2023, a notable trend in software was the integration of AI enhancements, often achieved by incorporating specialized standalone vector databases into existing tech stacks. This article explains what vector databases are and presents an alternative architecture that you might want to consider: using an integrated vector database in the NoSQL or relational database you already use, especially when working with multi-modal data. This approach not only reduces cost but also helps you achieve greater data consistency, scale, and performance.
-Here's how to implement our integrated vector database, thereby taking advantage of its single-digit millisecond response times, automatic and instant scalability, and guaranteed speed at any scale:
-
-| | Description |
-| | |
-| **[Azure Cosmos DB for Mongo DB vCore](#how-to-implement-vector-database-functionalities-using-our-api-for-mongodb-vcore)** | Store your application data and vector embeddings together in a single MongoDB-compatible service featuring natively integrated vector database. |
-| **[Azure Cosmos DB for PostgreSQL](#how-to-implement-vector-database-functionalities-using-our-api-for-postgresql)** | Store your data and vectors together in a scalable PostgreSQL offering with natively integrated vector database. |
-| **[Azure Cosmos DB for NoSQL with Azure AI Search](#how-to-implement-vector-database-functionalities-using-our-nosql-api-and-ai-search)** | Augment your Azure Cosmos DB data with semantic and vector search capabilities of Azure AI Search. |
+> [!TIP]
+> Data consistency, scale, and performance guarantees are why OpenAI built its ChatGPT service on top of Azure Cosmos DB. You, too, can take advantage of its integrated vector database, as well as its single-digit millisecond response times, automatic and instant scalability, and guaranteed speed at any scale. Please consult the [implementation samples](#how-to-implement-integrated-vector-database-functionalities) section of this article and [try](#next-step) the lifetime free tier or one of the free trial options.
## What is a vector database?
A vector database is a database designed to store and manage [vector embeddings]
In a vector database, embeddings are indexed and queried through [vector search](#vector-search) algorithms based on their vector distance or similarity. A robust mechanism is necessary to identify the most relevant data. Some well-known vector search algorithms include Hierarchical Navigable Small World (HNSW), Inverted File (IVF), DiskANN, etc.
-Besides the above functionalities of a typical vector database, our integrated vector database also converts the existing raw data in your account into embeddings and stores them as vectors. This way, you avoid the extra cost of moving data to a separate vector database. Moreover, this architecture keeps your vector embeddings and original data together, and you can better achieve data consistency, scale, and performance.
+Besides the typical vector database functionalities above, an integrated vector database in a highly performant NoSQL or relational database converts the existing raw data in your account into embeddings and stores them alongside your original data. This way, you can avoid the extra cost of replicating your data in a separate vector database. Moreover, this architecture keeps your vector embeddings and original data together, which better facilitates multi-modal data operations, and you can achieve greater data consistency, scale, and performance.
## What are some vector database use cases?
Vector databases are used in numerous domains and situations across analytical a
- identify data anomalies or fraudulent activities that are dissimilar from predominant or normal patterns - implement persistent memory for AI agents
-Besides these typical use cases for vector database, our integrated vector database is also an ideal solution for production-level LLM caching thanks to its low latency, high scalability, and high availability.
+> [!TIP]
+> Besides these typical use cases for vector databases, our integrated vector database is also an ideal solution for production-level LLM caching thanks to its low latency, high scalability, and high availability.
It's especially popular to use vector databases to enable [retrieval-augmented generation (RAG)](#retrieval-augmented-generation) that harnesses LLMs and custom data or domain-specific information. This approach allows you to:
It's especially popular to use vector databases to enable [retrieval-augmented g
This process involves extracting pertinent information from a custom data source and integrating it into the model request through prompt engineering. Before sending a request to the LLM, the user input/query/request is also transformed into an embedding, and vector search techniques are employed to locate the most similar embeddings within the database. This technique enables the identification of the most relevant data records in the database. These retrieved records are then supplied as input to the LLM request using [prompt engineering](#prompts-and-prompt-engineering).
-Here are multiple ways to implement RAG on your data by using our vector database functionalities:
+## Vector database related concepts
+
+### Embeddings
+
+An embedding is a special format of data representation that machine learning models and algorithms can easily use. The embedding is an information dense representation of the semantic meaning of a piece of text. Each embedding is a vector of floating-point numbers, such that the distance between two embeddings in the vector space is correlated with semantic similarity between two inputs in the original format. For example, if two texts are similar, then their vector representations should also be similar. A vector database extension that allows you to store your embeddings with your original data ensures data consistency, scale, and performance. [[Go back](#what-is-a-vector-database)]
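
As a brief illustration, here's how you might generate such an embedding with the Azure OpenAI Python SDK; the endpoint, key, and deployment name are placeholders.

```python
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://<your-resource>.openai.azure.com",  # placeholder
    api_key="<api-key>",
    api_version="2024-02-01",
)

response = client.embeddings.create(
    model="<embedding-deployment>",  # e.g. a text-embedding-ada-002 deployment
    input="Azure Cosmos DB is a fully managed NoSQL database.",
)

vector = response.data[0].embedding  # a list of floats
print(f"{len(vector)} dimensions, first values: {vector[:3]}")
```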
+
+### Vector search
+
+Vector search is a method that helps you find similar items based on their data characteristics rather than by exact matches on a property field. This technique is useful in applications such as searching for similar text, finding related images, making recommendations, or even detecting anomalies. It works by taking the vector representations (lists of numbers) of your data, created with a machine learning model through an embeddings API, such as [Azure OpenAI Embeddings](../ai-services/openai/how-to/embeddings.md) or [Hugging Face on Azure](https://azure.microsoft.com/solutions/hugging-face-on-azure). It then measures the distance between the data vectors and your query vector. The data vectors that are closest to your query vector are the ones found to be most semantically similar. Using a native vector search feature offers an efficient way to store, index, and search high-dimensional vector data directly alongside other application data. This approach removes the necessity of migrating your data to costlier alternative vector databases and provides a seamless integration of your AI-driven applications. [[Go back](#what-is-a-vector-database)]
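
At its core, the distance measurement reduces to simple vector math. Here's a toy sketch using cosine similarity over made-up three-dimensional vectors; real embeddings have hundreds or thousands of dimensions.

```python
import numpy as np

def cosine_similarity(a, b):
    """1.0 means the vectors point the same way; lower means less similar."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

query = np.array([0.1, 0.9, 0.2])
docs = {
    "doc-a": np.array([0.12, 0.85, 0.25]),  # close to the query
    "doc-b": np.array([0.90, 0.05, 0.10]),  # far from the query
}

# Rank the documents by similarity to the query vector.
ranked = sorted(docs, key=lambda name: cosine_similarity(query, docs[name]), reverse=True)
print(ranked)  # ['doc-a', 'doc-b']
```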
+
+### Prompts and prompt engineering
+
+A prompt refers to a specific text or information that can serve as an instruction to an LLM, or as contextual data that the LLM can build upon. A prompt can take various forms, such as a question, a statement, or even a code snippet. Prompts can serve as:
+
+- Instructions: provide directives to the LLM
+- Primary content: gives information to the LLM for processing
+- Examples: help condition the model to a particular task or process
+- Cues: direct the LLM's output in the right direction
+- Supporting content: represents supplemental information the LLM can use to generate output
+
+The process of creating good prompts for a scenario is called prompt engineering. For more information about prompts and best practices for prompt engineering, see Azure OpenAI Service [prompt engineering techniques](../ai-services/openai/concepts/advanced-prompt-engineering.md). [[Go back](#what-are-some-vector-database-use-cases)]
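
As a small, hypothetical illustration of these components working together, the sketch below assembles a prompt string from instructions, supporting content, primary content, and a cue; `retrieved_chunks` stands in for records returned by a vector search.

```python
# Hypothetical records returned by a vector search over your own data.
retrieved_chunks = [
    "Azure Cosmos DB offers single-digit millisecond response times.",
    "The lifetime free tier includes provisioned throughput and storage.",
]

supporting_content = "\n".join(retrieved_chunks)
prompt = (
    "Answer using only the supporting content below.\n"              # instructions
    f"Supporting content:\n{supporting_content}\n"                   # supporting content
    "Question: What response times does Azure Cosmos DB offer?\n"    # primary content
    "Answer:"                                                        # cue
)
print(prompt)
```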
+
+### Tokens
+
+Tokens are small chunks of text generated by splitting the input text into smaller segments. These segments can either be words or groups of characters, varying in length from a single character to an entire word. For instance, the word hamburger would be divided into tokens such as ham, bur, and ger, while a short and common word like pear would be considered a single token. LLMs like ChatGPT, GPT-3.5, or GPT-4 break words into tokens for processing. [[Go back](#what-are-some-vector-database-use-cases)]
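
You can inspect this splitting yourself with a tokenizer library such as `tiktoken`; the exact splits depend on the encoding, so the pieces you see may differ from the ham/bur/ger example above.

```python
import tiktoken

# cl100k_base is the encoding used by the GPT-3.5/GPT-4 model family.
encoding = tiktoken.get_encoding("cl100k_base")

for word in ("hamburger", "pear"):
    token_ids = encoding.encode(word)
    pieces = [encoding.decode([t]) for t in token_ids]
    print(word, token_ids, pieces)
```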
+
+### Retrieval-augmented generation
+
+Retrieval-augmented generation (RAG) is an architecture that augments the capabilities of LLMs like ChatGPT, GPT-3.5, or GPT-4 by adding an information retrieval system like vector search that provides grounding data, such as those stored in a vector database. This approach allows your LLM to generate contextually relevant and accurate responses based on your custom data sourced from vectorized documents, images, audio, video, etc.
+
+A simple RAG pattern using Azure Cosmos DB for NoSQL could be:
+
+1. Insert data into an Azure Cosmos DB for NoSQL database and collection
+2. Create embeddings from a data property using Azure OpenAI Embeddings
+3. Link the Azure Cosmos DB for NoSQL to Azure Cognitive Search (for vector indexing/search)
+4. Create a vector index over the embeddings properties
+5. Create a function to perform vector similarity search based on a user prompt
+6. Perform question answering over the data using an Azure OpenAI Completions model
+
+The RAG pattern, with prompt engineering, serves the purpose of enhancing response quality by offering more contextual information to the model. RAG enables the model to apply a broader knowledge base by incorporating relevant external sources into the generation process, resulting in more comprehensive and informed responses. For more information on "grounding" LLMs, see [grounding LLMs](https://techcommunity.microsoft.com/t5/fasttrack-for-azure/grounding-llms/ba-p/3843857). [[Go back](#what-are-some-vector-database-use-cases)]
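
The following sketch strings steps 5 and 6 of that pattern together in Python, assuming an Azure AI Search index already populated with embeddings (steps 1-4). The endpoints, deployments, index name, and field names are placeholders.

```python
from azure.identity import DefaultAzureCredential
from azure.search.documents import SearchClient
from azure.search.documents.models import VectorizedQuery
from openai import AzureOpenAI

aoai = AzureOpenAI(azure_endpoint="https://<aoai>.openai.azure.com",
                   api_key="<key>", api_version="2024-02-01")
search = SearchClient("https://<search>.search.windows.net", "<index-name>",
                      DefaultAzureCredential())

question = "What does the return policy say about refunds?"

# Step 5: embed the user prompt and run a vector similarity search.
embedding = aoai.embeddings.create(model="<embedding-deployment>",
                                   input=question).data[0].embedding
results = search.search(
    search_text=None,
    vector_queries=[VectorizedQuery(vector=embedding, k_nearest_neighbors=3,
                                    fields="embedding")],  # assumed vector field
)
grounding = "\n".join(doc["content"] for doc in results)  # assumed text field

# Step 6: answer the question grounded in the retrieved records.
completion = aoai.chat.completions.create(
    model="<chat-deployment>",
    messages=[
        {"role": "system", "content": f"Answer using only this context:\n{grounding}"},
        {"role": "user", "content": question},
    ],
)
print(completion.choices[0].message.content)
```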
+
+Here are multiple ways to implement RAG on your data by using our integrated vector database functionalities:
+
+## How to implement integrated vector database functionalities
-## How to implement vector database functionalities using our API for MongoDB vCore
+You can implement integrated vector database functionalities for the following [Azure Cosmos DB APIs](choose-api.md):
-Use the natively integrated vector database in [Azure Cosmos DB for MongoDB vCore](mongodb/vcore/vector-search.md), which offers an efficient way to store, index, and search high-dimensional vector data directly alongside other application data. This approach removes the necessity of migrating your data to costlier alternative vector databases and provides a seamless integration of your AI-driven applications.
+### API for MongoDB
-### Vector database implementation code samples
+Use the natively [integrated vector database in Azure Cosmos DB for MongoDB vCore](mongodb/vcore/vector-search.md), which offers an efficient way to store, index, and search high-dimensional vector data directly alongside other application data. This approach removes the necessity of migrating your data to costlier alternative vector databases and provides a seamless integration of your AI-driven applications.
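
For a feel of the API shape, here's a hedged pymongo sketch that creates a vector index and runs a similarity query against a vCore cluster. The names, dimensions, and option values are illustrative; check the linked article for the exact options your cluster version supports.

```python
from pymongo import MongoClient

client = MongoClient("<vcore-connection-string>")  # placeholder
collection = client["mydb"]["products"]

# Create an IVF vector index over the 'embedding' field (cosine similarity).
client["mydb"].command({
    "createIndexes": "products",
    "indexes": [{
        "name": "vector_index",
        "key": {"embedding": "cosmosSearch"},
        "cosmosSearchOptions": {
            "kind": "vector-ivf",
            "numLists": 1,
            "similarity": "COS",
            "dimensions": 1536,
        },
    }],
})

# Find the 3 nearest neighbors of a query vector.
pipeline = [{
    "$search": {
        "cosmosSearch": {"vector": [0.1] * 1536, "path": "embedding", "k": 3},
        "returnStoredSource": True,
    }
}]
for doc in collection.aggregate(pipeline):
    print(doc.get("name"))
```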
+
+#### Code samples
- [.NET RAG Pattern retail reference solution](https://github.com/Azure/Vector-Search-AI-Assistant-MongoDBvCore) - [.NET tutorial - recipe chatbot](https://github.com/microsoft/AzureDataRetrievalAugmentedGenerationSamples/tree/main/C%23/CosmosDB-MongoDBvCore)
Use the natively integrated vector database in [Azure Cosmos DB for MongoDB vCor
- [Python - LlamaIndex integration](https://docs.llamaindex.ai/en/stable/examples/vector_stores/AzureCosmosDBMongoDBvCoreDemo.html) - [Python - Semantic Kernel memory integration](https://github.com/microsoft/semantic-kernel/tree/main/python/semantic_kernel/connectors/memory/azure_cosmosdb)
-## How to implement vector database functionalities using our API for PostgreSQL
+### API for PostgreSQL
Use the natively integrated vector database in [Azure Cosmos DB for PostgreSQL](postgresql/howto-use-pgvector.md), which offers an efficient way to store, index, and search high-dimensional vector data directly alongside other application data. This approach removes the necessity of migrating your data to costlier alternative vector databases and provides a seamless integration of your AI-driven applications.
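
Since this offering exposes the pgvector extension, a minimal sketch with `psycopg2` might look like the following; the connection string, table, and dimensions are placeholders.

```python
import psycopg2

conn = psycopg2.connect("<cosmos-for-postgresql-connection-string>")  # placeholder
cur = conn.cursor()

cur.execute("CREATE EXTENSION IF NOT EXISTS vector;")
cur.execute(
    "CREATE TABLE IF NOT EXISTS docs ("
    "  id bigserial PRIMARY KEY,"
    "  content text,"
    "  embedding vector(1536));"
)
conn.commit()

# pgvector's <=> operator is cosine distance, so ascending order returns the
# closest matches first.
query_embedding = [0.1] * 1536
cur.execute(
    "SELECT content FROM docs ORDER BY embedding <=> %s::vector LIMIT 3;",
    (str(query_embedding),),
)
print(cur.fetchall())
```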
-### Vector database implementation code samples
+#### Code samples
- Python: [Python notebook tutorial - food review chatbot](https://github.com/microsoft/AzureDataRetrievalAugmentedGenerationSamples/tree/main/Python/CosmosDB-PostgreSQL_CognitiveSearch)
-## How to implement vector database functionalities using our NoSQL API and AI Search
+### NoSQL API
The natively integrated vector database in our NoSQL API will become available in mid-2024. In the meantime, you may implement RAG patterns with Azure Cosmos DB for NoSQL and [Azure AI Search](../search/vector-search-overview.md). This approach enables powerful integration of your data residing in the NoSQL API into your AI-oriented applications.
-### Vector database implementation code samples
+#### Code samples
- [.NET tutorial - Build and Modernize AI Applications](https://github.com/Azure/Build-Modern-AI-Apps-Hackathon) - [.NET tutorial - Bring Your Data to ChatGPT](https://github.com/Azure/Vector-Search-AI-Assistant/tree/cognitive-search-vector)
The natively integrated vector database in our NoSQL API will become available i
> [!div class="nextstepaction"] > [Use the Azure Cosmos DB lifetime free tier](free-tier.md)
-## Vector database related concepts
-
-### Embeddings
-
-An embedding is a special format of data representation that machine learning models and algorithms can easily use. The embedding is an information dense representation of the semantic meaning of a piece of text. Each embedding is a vector of floating-point numbers, such that the distance between two embeddings in the vector space is correlated with semantic similarity between two inputs in the original format. For example, if two texts are similar, then their vector representations should also be similar. A vector database extension that allows you to store your embeddings with your original data ensures data consistency, scale, and performance.
-
-### Vector search
-
-Vector search is a method that helps you find similar items based on their data characteristics rather than by exact matches on a property field. This technique is useful in applications such as searching for similar text, finding related images, making recommendations, or even detecting anomalies. It works by taking the vector representations (lists of numbers) of your data that you created by using a machine learning model by using an embeddings API, such as [Azure OpenAI Embeddings](../ai-services/openai/how-to/embeddings.md) or [Hugging Face on Azure](https://azure.microsoft.com/solutions/hugging-face-on-azure). It then measures the distance between the data vectors and your query vector. The data vectors that are closest to your query vector are the ones that are found to be most similar semantically. Using a native vector search feature offers an efficient way to store, index, and search high-dimensional vector data directly alongside other application data. This approach removes the necessity of migrating your data to costlier alternative vector databases and provides a seamless integration of your AI-driven applications.
-
-### Retrieval-augmented generation
-
-Retrieval-augmentated generation (RAG) is an architecture that augments the capabilities of LLMs like ChatGPT, GPT-3.5, or GPT-4 by adding an information retrieval system like vector search that provides grounding data, such as those stored in a vector database. This approach allows your LLM to generate contextually relevant and accurate responses based on your custom data sourced from vectorized documents, images, audio, video, etc.
-
-A simple RAG pattern using Azure Cosmos DB for NoSQL could be:
-
-1. Insert data into an Azure Cosmos DB for NoSQL database and collection
-2. Create embeddings from a data property using Azure OpenAI Embeddings
-3. Link the Azure Cosmos DB for NoSQL to Azure Cognitive Search (for vector indexing/search)
-4. Create a vector index over the embeddings properties
-5. Create a function to perform vector similarity search based on a user prompt
-6. Perform question answering over the data using an Azure OpenAI Completions model
-
-The RAG pattern, with prompt engineering, serves the purpose of enhancing response quality by offering more contextual information to the model. RAG enables the model to apply a broader knowledge base by incorporating relevant external sources into the generation process, resulting in more comprehensive and informed responses. For more information on "grounding" LLMs, see [grounding LLMs](https://techcommunity.microsoft.com/t5/fasttrack-for-azure/grounding-llms/ba-p/3843857).
-
-### Prompts and prompt engineering
-
-A prompt refers to a specific text or information that can serve as an instruction to an LLM, or as contextual data that the LLM can build upon. A prompt can take various forms, such as a question, a statement, or even a code snippet. Prompts can serve as:
--- Instructions provide directives to the LLM-- Primary content: gives information to the LLM for processing-- Examples: help condition the model to a particular task or process-- Cues: direct the LLM's output in the right direction-- Supporting content: represents supplemental information the LLM can use to generate output-
-The process of creating good prompts for a scenario is called prompt engineering. For more information about prompts and best practices for prompt engineering, see Azure OpenAI Service [prompt engineering techniques](../ai-services/openai/concepts/advanced-prompt-engineering.md).
-
-### Tokens
-
-Tokens are small chunks of text generated by splitting the input text into smaller segments. These segments can either be words or groups of characters, varying in length from a single character to an entire word. For instance, the word hamburger would be divided into tokens such as ham, bur, and ger while a short and common word like pear would be considered a single token. LLMs like ChatGPT, GPT-3.5, or GPT-4 break words into tokens for processing.
-
-## Related content
+## More vector databases
-- [Azure Cosmos DB for MongoDB vCore Integrated Vector Database](mongodb/vcore/vector-search.md) - [Azure PostgreSQL Server pgvector Extension](../postgresql/flexible-server/how-to-use-pgvector.md) - [Azure AI Search](../search/search-what-is-azure-search.md) - [Open Source Vector Database List](/semantic-kernel/memories/vector-db#available-connectors-to-vector-databases)
defender-for-cloud Alerts Schemas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/alerts-schemas.md
Title: Schemas for the Microsoft Defender for Cloud alerts
+ Title: Alerts schema
description: This article describes the different schemas used by Microsoft Defender for Cloud for security alerts.-+ Previously updated : 11/09/2021 Last updated : 03/25/2024
+#customer intent: As a reader, I want to understand the different schemas used by Microsoft Defender for Cloud for security alerts so that I can effectively work with the alerts.
-# Security alerts schemas
+# Alerts schemas
-If your subscription has Defender for Cloud [Defender plans](defender-for-cloud-introduction.md#protect-cloud-workloads) enabled, you receive security alerts when Defender for Cloud detects threats to your resources.
-You can view these security alerts in Microsoft Defender for Cloud's pages - [overview dashboard](overview-page.md), [alerts](managing-and-responding-alerts.md), [resource health pages](investigate-resource-health.md), or [workload protections dashboard](workload-protections-dashboard.md) - and through external tools such as:
+Defender for Cloud provides alerts that help you identify, understand, and respond to security threats. Alerts are generated when Defender for Cloud detects suspicious activity or a security-related issue in your environment. You can view these alerts in the Defender for Cloud portal, or you can export them to external tools for further analysis and response.
+
+You can review security alerts from the [overview dashboard](overview-page.md), [alerts](managing-and-responding-alerts.md) page, [resource health pages](investigate-resource-health.md), or [workload protections dashboard](workload-protections-dashboard.md).
+
+The following external tools can be used to consume alerts from Defender for Cloud:
- [Microsoft Sentinel](../sentinel/index.yml) - Microsoft's cloud-native SIEM. The Sentinel Connector gets alerts from Microsoft Defender for Cloud and sends them to the [Log Analytics workspace](../azure-monitor/logs/quick-create-workspace.md) for Microsoft Sentinel.
- Third-party SIEMs - Send data to [Azure Event Hubs](../event-hubs/index.yml). Then integrate your Event Hubs data with a third-party SIEM. Learn more in [Stream alerts to a SIEM, SOAR, or IT Service Management solution](export-to-siem.md).
- [The REST API](/rest/api/defenderforcloud/operation-groups?view=rest-defenderforcloud-2020-01-01&preserve-view=true) - If you're using the REST API to access alerts, see the [online Alerts API documentation](/rest/api/defenderforcloud/alerts).
-If you're using any programmatic methods to consume the alerts, you need the correct schema to find the fields that are relevant to you. Also, if you're exporting to an Event Hubs or trying to trigger Workflow Automation with generic HTTP connectors, use the schemas to properly parse the JSON objects.
+If you're using any programmatic methods to consume the alerts, you need the correct schema to find the fields that are relevant to you. Also, if you're exporting to Event Hubs or trying to trigger Workflow Automation with generic HTTP connectors, use the schemas to properly parse the JSON objects.
>[!IMPORTANT]
-> The schema is slightly different for each of these scenarios, so make sure you select the relevant tab.
+> Since the schema is different for each of these scenarios, ensure you select the relevant tab.
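For example, here's a hedged sketch of reading exported alerts from an Event Hubs stream with the `azure-eventhub` Python SDK. The connection string and hub name are placeholders, and the two property names read from the payload (`alertDisplayName`, `severity`) are illustrative; confirm the exact field names against the schema for your scenario.

```python
# Minimal sketch: consume continuously exported alerts from Event Hubs.
import json
from azure.eventhub import EventHubConsumerClient

client = EventHubConsumerClient.from_connection_string(
    "<event-hubs-connection-string>",   # placeholder
    consumer_group="$Default",
    eventhub_name="<hub-name>")         # placeholder

def on_event(partition_context, event):
    alert = json.loads(event.body_as_str())
    # Field names are illustrative; check the schema tab for your scenario.
    print(alert.get("alertDisplayName"), alert.get("severity"))

with client:
    client.receive(on_event=on_event, starting_position="-1")  # from stream start
```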
## The schemas
The schema and a JSON representation for security alerts sent to MS Graph, are a
-## Next steps
-
-This article described the schemas that Microsoft Defender for Cloud's threat protection tools use when sending security alert information.
-
-For more information on the ways to access security alerts from outside Defender for Cloud, see:
+## Related articles
+- [Log Analytics workspaces](../azure-monitor/logs/quick-create-workspace.md) - Azure Monitor stores log data in a Log Analytics workspace, a container that includes data and configuration information
- [Microsoft Sentinel](../sentinel/index.yml) - Microsoft's cloud-native SIEM
- [Azure Event Hubs](../event-hubs/index.yml) - Microsoft's fully managed, real-time data ingestion service
-- [Continuously export Defender for Cloud data](continuous-export.md)
-- [Log Analytics workspaces](../azure-monitor/logs/quick-create-workspace.md) - Azure Monitor stores log data in a Log Analytics workspace, a container that includes data and configuration information
+
+## Next step
+
+> [!div class="nextstepaction"]
+> [Continuously export Defender for Cloud data](continuous-export.md)
defender-for-cloud Episode Thirty One https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/episode-thirty-one.md
Last updated 05/16/2023
- [11:35](/shows/mdc-in-the-field/data-aware-security-posture#time=11m35s) - Demonstration ## Recommended resources
- - Learn more about [Data Aware Security Posture](concept-data-security-posture.md)
- - Subscribe to [Microsoft Security on YouTube](https://www.youtube.com/playlist?list=PL3ZTgFEc7LysiX4PfHhdJPR7S8mGO14YS)
- - Learn more about [Microsoft Security](https://msft.it/6002T9HQY)
+
+- Learn more about [Data Aware Security Posture](concept-data-security-posture.md)
+- Subscribe to [Microsoft Security on YouTube](https://www.youtube.com/playlist?list=PL3ZTgFEc7LysiX4PfHhdJPR7S8mGO14YS)
+- Learn more about [Microsoft Security](https://msft.it/6002T9HQY)
- Follow us on social media:
- - [LinkedIn](https://www.linkedin.com/showcase/microsoft-security/)
- - [Twitter](https://twitter.com/msftsecurity)
+ - [LinkedIn](https://www.linkedin.com/showcase/microsoft-security/)
+ - [Twitter](https://twitter.com/msftsecurity)
- Join our [Tech Community](https://aka.ms/SecurityTechCommunity) ## Next steps > [!div class="nextstepaction"]
-> [API Security with Defender for APIs](episode-thirty-two.md)
+> [API Security with Defender for APIs](episode-thirty-two.md)
defender-for-cloud Episode Thirty Seven https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/episode-thirty-seven.md
Last updated 08/29/2023
# Capabilities to counter identity-based supply chain attacks
-**Episode description**: In this episode of Defender for Cloud in the Field, Security Researcher, Hagai Kestenberg joins Yuri Diogenes to talk about Defender for Cloud capabilities to counter identity-based supply chain attacks. Hagai explains the different types of supply chain attacks and focuses on the risks of identity-based supply chain attacks. Hagai makes recommendations to mitigate this type of attack and explain the new capability in Defender for Resource Manager that can be used to identify this type of attack. Hagai also demonstrates the new alert generated by Defender for Resource Manager when this type of attack is identified.
+**Episode description**: In this episode of Defender for Cloud in the Field, Security Researcher Hagai Kestenberg joins Yuri Diogenes to talk about Defender for Cloud capabilities to counter identity-based supply chain attacks. Hagai explains the different types of supply chain attacks and focuses on the risks of identity-based supply chain attacks. Hagai makes recommendations to mitigate this type of attack and explains the new capability in Defender for Resource Manager that can be used to identify this type of attack. Hagai also demonstrates the new alert generated by Defender for Resource Manager when this type of attack is identified.
> [!VIDEO https://aka.ms/docs/player?id=d69fb652-46a7-4f8c-8632-8cf2cbc3685a]
defender-for-cloud Episode Thirty Three https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/episode-thirty-three.md
Last updated 06/13/2023
## Next steps > [!div class="nextstepaction"]
-> [New AWS Connector in Microsoft Defender for Cloud](episode-one.md)
+> [New AWS Connector in Microsoft Defender for Cloud](episode-one.md)
defender-for-cloud Episode Thirty Two https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/episode-thirty-two.md
Last updated 06/08/2023
- [15:53](/shows/mdc-in-the-field/api-security#time=15m53s) - Demonstration ## Recommended resources
- - Learn more about [Defender for APIs](defender-for-apis-introduction.md)
- - Subscribe to [Microsoft Security on YouTube](https://www.youtube.com/playlist?list=PL3ZTgFEc7LysiX4PfHhdJPR7S8mGO14YS)
- - Learn more about [Microsoft Security](https://msft.it/6002T9HQY)
+
+- Learn more about [Defender for APIs](defender-for-apis-introduction.md)
+- Subscribe to [Microsoft Security on YouTube](https://www.youtube.com/playlist?list=PL3ZTgFEc7LysiX4PfHhdJPR7S8mGO14YS)
+- Learn more about [Microsoft Security](https://msft.it/6002T9HQY)
- Follow us on social media:
- - [LinkedIn](https://www.youtube.com/redirect?event=video_description&redir_token=QUFFLUhqbFk5TXZuQld2NlpBRV9BQlJqMktYSm95WWhCZ3xBQ3Jtc0tsQU13MkNPWGNFZzVuem5zc05wcnp0VGxybHprVTkwS2todWw0b0VCWUl4a2ZKYVktNGM1TVFHTXpmajVLcjRKX0cwVFNJaDlzTld4MnhyenBuUGRCVmdoYzRZTjFmYXRTVlhpZGc4MHhoa3N6ZDhFMA&q=https%3A%2F%2Fwww.linkedin.com%2Fshowcase%2Fmicrosoft-security%2F)
- - [Twitter](https://twitter.com/msftsecurity)
+ - [LinkedIn](https://www.youtube.com/redirect?event=video_description&redir_token=QUFFLUhqbFk5TXZuQld2NlpBRV9BQlJqMktYSm95WWhCZ3xBQ3Jtc0tsQU13MkNPWGNFZzVuem5zc05wcnp0VGxybHprVTkwS2todWw0b0VCWUl4a2ZKYVktNGM1TVFHTXpmajVLcjRKX0cwVFNJaDlzTld4MnhyenBuUGRCVmdoYzRZTjFmYXRTVlhpZGc4MHhoa3N6ZDhFMA&q=https%3A%2F%2Fwww.linkedin.com%2Fshowcase%2Fmicrosoft-security%2F)
+ - [Twitter](https://twitter.com/msftsecurity)
- Join our [Tech Community](https://aka.ms/SecurityTechCommunity) ## Next steps > [!div class="nextstepaction"]
-> [Agentless Container Posture Management in Defender for Cloud](episode-thirty-three.md)
+> [Agentless Container Posture Management in Defender for Cloud](episode-thirty-three.md)
defender-for-cloud Episode Thirty https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/episode-thirty.md
Last updated 05/14/2023
- [03:15](/shows/mdc-in-the-field/new-custom-recommendations#time=03m15s) - Creating a custom recommendation based on a template - [08:20](/shows/mdc-in-the-field/new-custom-recommendations#time=08m20s) - Creating a custom recommendation from scratch - [12:27](/shows/mdc-in-the-field/new-custom-recommendations#time=12m27s) - Custom recommendation update interval-- [14:30](/shows/mdc-in-the-field/new-custom-recommendations#time=14m30s) - Filtering custom recommendations in the Defender for Cloud dashboard
+- [14:30](/shows/mdc-in-the-field/new-custom-recommendations#time=14m30s) - Filtering custom recommendations in the Defender for Cloud dashboard
- [16:40](/shows/mdc-in-the-field/new-custom-recommendations#time=16m40s) - Prerequisites to use the custom recommendations feature
-
+ ## Recommended resources
- - Learn how to [create custom recommendations and security standards](create-custom-recommendations.md)
- - Subscribe to [Microsoft Security on YouTube](https://www.youtube.com/playlist?list=PL3ZTgFEc7LysiX4PfHhdJPR7S8mGO14YS)
- - Join our [Tech Community](https://aka.ms/SecurityTechCommunity)
- - Learn more about [Microsoft Security](https://msft.it/6002T9HQY)
+
+- Learn how to [create custom recommendations and security standards](create-custom-recommendations.md)
+- Subscribe to [Microsoft Security on YouTube](https://www.youtube.com/playlist?list=PL3ZTgFEc7LysiX4PfHhdJPR7S8mGO14YS)
+- Join our [Tech Community](https://aka.ms/SecurityTechCommunity)
+- Learn more about [Microsoft Security](https://msft.it/6002T9HQY)
- Follow us on social media:
- - [LinkedIn](https://www.linkedin.com/showcase/microsoft-security/)
- - [Twitter](https://twitter.com/msftsecurity)
+ - [LinkedIn](https://www.linkedin.com/showcase/microsoft-security/)
+ - [Twitter](https://twitter.com/msftsecurity)
- Join our [Tech Community](https://aka.ms/SecurityTechCommunity)
defender-for-cloud Episode Twelve https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/episode-twelve.md
Netta explains how Defender for Servers applies Azure Arc as a bridge to onboard
Introduce yourself to [Microsoft Defender for Servers](defender-for-servers-introduction.md). -- Subscribe to [Microsoft Security on YouTube](https://www.youtube.com/redirect?event=video_description&redir_token=QUFFLUhqa0ZoTml2Qm9kZ2pjRzNMUXFqVUwyNl80YVNtd3xBQ3Jtc0trVm9QM2Z0NlpOeC1KSUE2UEd1cVJ5aHQ0MTN6WjJEYmNlOG9rWC1KZ1ZqaTNmcHdOOHMtWXRLSGhUTVBhQlhhYzlUc2xmTHZtaUpkd1c4LUQzLWt1YmRTbkVQVE5EcTJIM0Foc042SGdQZU5acVRJbw&q=https%3A%2F%2Faka.ms%2FSubscribeMicrosoftSecurity)
+- Subscribe to [Microsoft Security on YouTube](https://www.youtube.com/redirect?event=video_description&redir_token=QUFFLUhqa0ZoTml2Qm9kZ2pjRzNMUXFqVUwyNl80YVNtd3xBQ3Jtc0trVm9QM2Z0NlpOeC1KSUE2UEd1cVJ5aHQ0MTN6WjJEYmNlOG9rWC1KZ1ZqaTNmcHdOOHMtWXRLSGhUTVBhQlhhYzlUc2xmTHZtaUpkd1c4LUQzLWt1YmRTbkVQVE5EcTJIM0Foc042SGdQZU5acVRJbw&q=https%3A%2F%2Faka.ms%2FSubscribeMicrosoftSecurity)
-- Follow us on social media:
+- Follow us on social media:
[LinkedIn](https://www.youtube.com/redirect?event=video_description&redir_token=QUFFLUhqbFk5TXZuQld2NlpBRV9BQlJqMktYSm95WWhCZ3xBQ3Jtc0tsQU13MkNPWGNFZzVuem5zc05wcnp0VGxybHprVTkwS2todWw0b0VCWUl4a2ZKYVktNGM1TVFHTXpmajVLcjRKX0cwVFNJaDlzTld4MnhyenBuUGRCVmdoYzRZTjFmYXRTVlhpZGc4MHhoa3N6ZDhFMA&q=https%3A%2F%2Fwww.linkedin.com%2Fshowcase%2Fmicrosoft-security%2F) [Twitter](https://twitter.com/msftsecurity) -- Join our [Tech Community](https://aka.ms/SecurityTechCommunity)
+- Join our [Tech Community](https://aka.ms/SecurityTechCommunity)
-- For more about [Microsoft Security](https://msft.it/6002T9HQY)
+- For more about [Microsoft Security](https://msft.it/6002T9HQY)
## Next steps > [!div class="nextstepaction"]
-> [Defender for Storage](episode-thirteen.md)
+> [Defender for Storage](episode-thirteen.md)
defender-for-cloud Episode Twenty Eight https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/episode-twenty-eight.md
Last updated 04/27/2023
-# Zero Trust and Defender for Cloud | Defender for Cloud in the field
+# Zero Trust and Defender for Cloud | Defender for Cloud in the field
**Episode description**: In this episode of Defender for Cloud in the Field, Mekonnen Kassa joins Yuri Diogenes to discuss the importance of using Zero Trust. Mekonnen covers the principles of Zero Trust, the importance of switching your mindset to adopt this strategy and how Defender for Cloud can help. Mekonnen also talks about best practices to get started, visibility and analytics as part of Zero Trust, and what tools can be leveraged to achieve it.
Last updated 04/27/2023
- [18:09](/shows/mdc-in-the-field/zero-trust#time=18m09s) - Final recommendations to start your Zero Trust journey ## Recommended resources
- - Learn more about [Zero Trust](https://www.microsoft.com/security/business/zero-trust)
- - Subscribe to [Microsoft Security on YouTube](https://www.youtube.com/playlist?list=PL3ZTgFEc7LysiX4PfHhdJPR7S8mGO14YS)
- - Join our [Tech Community](https://aka.ms/SecurityTechCommunity)
- - For more about [Microsoft Security](https://msft.it/6002T9HQY)
+
+- Learn more about [Zero Trust](https://www.microsoft.com/security/business/zero-trust)
+- Subscribe to [Microsoft Security on YouTube](https://www.youtube.com/playlist?list=PL3ZTgFEc7LysiX4PfHhdJPR7S8mGO14YS)
+- Join our [Tech Community](https://aka.ms/SecurityTechCommunity)
+- For more about [Microsoft Security](https://msft.it/6002T9HQY)
- Follow us on social media:
- - [LinkedIn](https://www.linkedin.com/showcase/microsoft-security/)
- - [Twitter](https://twitter.com/msftsecurity)
+ - [LinkedIn](https://www.linkedin.com/showcase/microsoft-security/)
+ - [Twitter](https://twitter.com/msftsecurity)
- Join our [Tech Community](https://aka.ms/SecurityTechCommunity)
Last updated 04/27/2023
## Next steps > [!div class="nextstepaction"]
-> [Security Policy Enhancements in Defender for Cloud](episode-twenty-nine.md)
+> [Security Policy Enhancements in Defender for Cloud](episode-twenty-nine.md)
defender-for-cloud Episode Twenty Five https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/episode-twenty-five.md
Title: AWS ECR coverage in Defender for Containers | Defender for Cloud in the field- description: Learn about AWS ECR coverage in Defender for Containers Last updated 04/27/2023
Last updated 04/27/2023
- [07:33](/shows/mdc-in-the-field/aws-ecr#time=07m33s) - Demonstration ## Recommended resources
- - [Learn more](defender-for-containers-vulnerability-assessment-elastic.md) about AWS ECR Coverage in Defender for Containers.
- - Subscribe to [Microsoft Security on YouTube](https://www.youtube.com/playlist?list=PL3ZTgFEc7LysiX4PfHhdJPR7S8mGO14YS)
- - Join our [Tech Community](https://aka.ms/SecurityTechCommunity)
- - For more about [Microsoft Security](https://msft.it/6002T9HQY)
+
+- [Learn more](defender-for-containers-vulnerability-assessment-elastic.md) about AWS ECR Coverage in Defender for Containers.
+- Subscribe to [Microsoft Security on YouTube](https://www.youtube.com/playlist?list=PL3ZTgFEc7LysiX4PfHhdJPR7S8mGO14YS)
+- Join our [Tech Community](https://aka.ms/SecurityTechCommunity)
+- For more about [Microsoft Security](https://msft.it/6002T9HQY)
- Follow us on social media:
- - [LinkedIn](https://www.youtube.com/redirect?event=video_description&redir_token=QUFFLUhqbFk5TXZuQld2NlpBRV9BQlJqMktYSm95WWhCZ3xBQ3Jtc0tsQU13MkNPWGNFZzVuem5zc05wcnp0VGxybHprVTkwS2todWw0b0VCWUl4a2ZKYVktNGM1TVFHTXpmajVLcjRKX0cwVFNJaDlzTld4MnhyenBuUGRCVmdoYzRZTjFmYXRTVlhpZGc4MHhoa3N6ZDhFMA&q=https%3A%2F%2Fwww.linkedin.com%2Fshowcase%2Fmicrosoft-security%2F)
- - [Twitter](https://twitter.com/msftsecurity)
+ - [LinkedIn](https://www.youtube.com/redirect?event=video_description&redir_token=QUFFLUhqbFk5TXZuQld2NlpBRV9BQlJqMktYSm95WWhCZ3xBQ3Jtc0tsQU13MkNPWGNFZzVuem5zc05wcnp0VGxybHprVTkwS2todWw0b0VCWUl4a2ZKYVktNGM1TVFHTXpmajVLcjRKX0cwVFNJaDlzTld4MnhyenBuUGRCVmdoYzRZTjFmYXRTVlhpZGc4MHhoa3N6ZDhFMA&q=https%3A%2F%2Fwww.linkedin.com%2Fshowcase%2Fmicrosoft-security%2F)
+ - [Twitter](https://twitter.com/msftsecurity)
- Join our [Tech Community](https://aka.ms/SecurityTechCommunity)
Last updated 04/27/2023
## Next steps > [!div class="nextstepaction"]
-> [Governance capability improvements in Defender for Cloud](episode-twenty-six.md)
+> [Governance capability improvements in Defender for Cloud](episode-twenty-six.md)
defender-for-cloud Episode Twenty Four https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/episode-twenty-four.md
Last updated 04/27/2023
- [12:56](/shows/mdc-in-the-field/defender-sql-enhancements#time=12m56s) - Demonstration ## Recommended resources
- - [Learn more](https://techcommunity.microsoft.com/t5/microsoft-defender-for-cloud/new-express-configuration-for-vulnerability-assessment-in/ba-p/3695390) about Defender for SQL Vulnerability Assessment (VA).
- - Subscribe to [Microsoft Security on YouTube](https://www.youtube.com/playlist?list=PL3ZTgFEc7LysiX4PfHhdJPR7S8mGO14YS)
- - Join our [Tech Community](https://aka.ms/SecurityTechCommunity)
- - For more about [Microsoft Security](https://msft.it/6002T9HQY)
+
+- [Learn more](https://techcommunity.microsoft.com/t5/microsoft-defender-for-cloud/new-express-configuration-for-vulnerability-assessment-in/ba-p/3695390) about Defender for SQL Vulnerability Assessment (VA).
+- Subscribe to [Microsoft Security on YouTube](https://www.youtube.com/playlist?list=PL3ZTgFEc7LysiX4PfHhdJPR7S8mGO14YS)
+- Join our [Tech Community](https://aka.ms/SecurityTechCommunity)
+- For more about [Microsoft Security](https://msft.it/6002T9HQY)
- Follow us on social media:
- - [LinkedIn](https://www.linkedin.com/showcase/microsoft-security/)
- - [Twitter](https://twitter.com/msftsecurity)
+ - [LinkedIn](https://www.linkedin.com/showcase/microsoft-security/)
+ - [Twitter](https://twitter.com/msftsecurity)
- Join our [Tech Community](https://aka.ms/SecurityTechCommunity)
Last updated 04/27/2023
## Next steps > [!div class="nextstepaction"]
-> [AWS ECR Coverage in Defender for Containers](episode-twenty-five.md)
+> [AWS ECR Coverage in Defender for Containers](episode-twenty-five.md)
defender-for-cloud Episode Twenty Nine https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/episode-twenty-nine.md
Last updated 04/27/2023
- [12:02](/shows/mdc-in-the-field/security-policy#time=12m02s) - What's next? ## Recommended resources
- - Learn more about [managing security policies](tutorial-security-policy.md)
- - Subscribe to [Microsoft Security on YouTube](https://www.youtube.com/playlist?list=PL3ZTgFEc7LysiX4PfHhdJPR7S8mGO14YS)
- - Join our [Tech Community](https://aka.ms/SecurityTechCommunity)
- - For more about [Microsoft Security](https://msft.it/6002T9HQY)
+
+- Learn more about [managing security policies](tutorial-security-policy.md)
+- Subscribe to [Microsoft Security on YouTube](https://www.youtube.com/playlist?list=PL3ZTgFEc7LysiX4PfHhdJPR7S8mGO14YS)
+- Join our [Tech Community](https://aka.ms/SecurityTechCommunity)
+- For more about [Microsoft Security](https://msft.it/6002T9HQY)
- Follow us on social media:
- - [LinkedIn](https://www.youtube.com/redirect?event=video_description&redir_token=QUFFLUhqbFk5TXZuQld2NlpBRV9BQlJqMktYSm95WWhCZ3xBQ3Jtc0tsQU13MkNPWGNFZzVuem5zc05wcnp0VGxybHprVTkwS2todWw0b0VCWUl4a2ZKYVktNGM1TVFHTXpmajVLcjRKX0cwVFNJaDlzTld4MnhyenBuUGRCVmdoYzRZTjFmYXRTVlhpZGc4MHhoa3N6ZDhFMA&q=https%3A%2F%2Fwww.linkedin.com%2Fshowcase%2Fmicrosoft-security%2F)
- - [Twitter](https://twitter.com/msftsecurity)
+ - [LinkedIn](https://www.youtube.com/redirect?event=video_description&redir_token=QUFFLUhqbFk5TXZuQld2NlpBRV9BQlJqMktYSm95WWhCZ3xBQ3Jtc0tsQU13MkNPWGNFZzVuem5zc05wcnp0VGxybHprVTkwS2todWw0b0VCWUl4a2ZKYVktNGM1TVFHTXpmajVLcjRKX0cwVFNJaDlzTld4MnhyenBuUGRCVmdoYzRZTjFmYXRTVlhpZGc4MHhoa3N6ZDhFMA&q=https%3A%2F%2Fwww.linkedin.com%2Fshowcase%2Fmicrosoft-security%2F)
+ - [Twitter](https://twitter.com/msftsecurity)
- Join our [Tech Community](https://aka.ms/SecurityTechCommunity)
Last updated 04/27/2023
## Next steps > [!div class="nextstepaction"]
-> [New Custom Recommendations for AWS and GCP in Defender for Cloud](episode-thirty.md)
+> [New Custom Recommendations for AWS and GCP in Defender for Cloud](episode-thirty.md)
defender-for-cloud Episode Twenty One https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/episode-twenty-one.md
Last updated 04/27/2023
- [13:49](/shows/mdc-in-the-field/security-explorer#time=13m49s) - Manual attestation - ## Recommended resources
- - [Learn more](./regulatory-compliance-dashboard.md) about improving your regulatory compliance.
- - Subscribe to [Microsoft Security on YouTube](https://www.youtube.com/playlist?list=PL3ZTgFEc7LysiX4PfHhdJPR7S8mGO14YS)
- - Join our [Tech Community](https://aka.ms/SecurityTechCommunity)
- - For more about [Microsoft Security](https://msft.it/6002T9HQY)
+
+- [Learn more](./regulatory-compliance-dashboard.md) about improving your regulatory compliance.
+- Subscribe to [Microsoft Security on YouTube](https://www.youtube.com/playlist?list=PL3ZTgFEc7LysiX4PfHhdJPR7S8mGO14YS)
+- Join our [Tech Community](https://aka.ms/SecurityTechCommunity)
+- For more about [Microsoft Security](https://msft.it/6002T9HQY)
- Follow us on social media:
- - [LinkedIn](https://www.linkedin.com/showcase/microsoft-security/)
- - [Twitter](https://twitter.com/msftsecurity)
+ - [LinkedIn](https://www.linkedin.com/showcase/microsoft-security/)
+ - [Twitter](https://twitter.com/msftsecurity)
- Join our [Tech Community](https://aka.ms/SecurityTechCommunity)
Last updated 04/27/2023
## Next steps > [!div class="nextstepaction"]
-> [Defender External Attack Surface Management (Defender EASM)](episode-twenty-two.md)
+> [Defender External Attack Surface Management (Defender EASM)](episode-twenty-two.md)
defender-for-cloud Episode Twenty Seven https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/episode-twenty-seven.md
Last updated 04/27/2023
- [22:52](/shows/mdc-in-the-field/demystify-servers#time=22m52s) - Deploying Defender for Servers at scale ## Recommended resources
- - Learn more about [Defender for Servers](plan-defender-for-servers.md)
- - Subscribe to [Microsoft Security on YouTube](https://www.youtube.com/playlist?list=PL3ZTgFEc7LysiX4PfHhdJPR7S8mGO14YS)
- - Join our [Tech Community](https://aka.ms/SecurityTechCommunity)
- - For more about [Microsoft Security](https://msft.it/6002T9HQY)
+
+- Learn more about [Defender for Servers](plan-defender-for-servers.md)
+- Subscribe to [Microsoft Security on YouTube](https://www.youtube.com/playlist?list=PL3ZTgFEc7LysiX4PfHhdJPR7S8mGO14YS)
+- Join our [Tech Community](https://aka.ms/SecurityTechCommunity)
+- For more about [Microsoft Security](https://msft.it/6002T9HQY)
- Follow us on social media:
- - [LinkedIn](https://www.linkedin.com/showcase/microsoft-security/)
- - [Twitter](https://twitter.com/msftsecurity)
+ - [LinkedIn](https://www.linkedin.com/showcase/microsoft-security/)
+ - [Twitter](https://twitter.com/msftsecurity)
- Join our [Tech Community](https://aka.ms/SecurityTechCommunity)
Last updated 04/27/2023
## Next steps > [!div class="nextstepaction"]
-> [Zero Trust and Defender for Cloud](episode-twenty-eight.md)
+> [Zero Trust and Defender for Cloud](episode-twenty-eight.md)
defender-for-cloud Episode Twenty Six https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/episode-twenty-six.md
Last updated 04/27/2023
- [19:00](/shows/mdc-in-the-field/governance-improvements#time=19m00s) - Learn more about governance ## Recommended resources
- - Learn how to [drive your organization to remediate security recommendations with governance](governance-rules.md)
- - Subscribe to [Microsoft Security on YouTube](https://www.youtube.com/playlist?list=PL3ZTgFEc7LysiX4PfHhdJPR7S8mGO14YS)
- - Join our [Tech Community](https://aka.ms/SecurityTechCommunity)
- - For more about [Microsoft Security](https://msft.it/6002T9HQY)
+
+- Learn how to [drive your organization to remediate security recommendations with governance](governance-rules.md)
+- Subscribe to [Microsoft Security on YouTube](https://www.youtube.com/playlist?list=PL3ZTgFEc7LysiX4PfHhdJPR7S8mGO14YS)
+- Join our [Tech Community](https://aka.ms/SecurityTechCommunity)
+- For more about [Microsoft Security](https://msft.it/6002T9HQY)
- Follow us on social media:
- - [LinkedIn](https://www.linkedin.com/showcase/microsoft-security/)
- - [Twitter](https://twitter.com/msftsecurity)
+ - [LinkedIn](https://www.linkedin.com/showcase/microsoft-security/)
+ - [Twitter](https://twitter.com/msftsecurity)
- Join our [Tech Community](https://aka.ms/SecurityTechCommunity)
Last updated 04/27/2023
## Next steps > [!div class="nextstepaction"]
-> [Demystifying Defender for Servers](episode-twenty-seven.md)
+> [Demystifying Defender for Servers](episode-twenty-seven.md)
defender-for-cloud Episode Twenty Three https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/episode-twenty-three.md
Last updated 04/27/2023
## Recommended resources
- - [Learn more](/defender/threat-intelligence/what-is-microsoft-defender-threat-intelligence-defender-ti) about Defender TI.
- - Subscribe to [Microsoft Security on YouTube](https://www.youtube.com/playlist?list=PL3ZTgFEc7LysiX4PfHhdJPR7S8mGO14YS)
- - Join our [Tech Community](https://aka.ms/SecurityTechCommunity)
- - For more about [Microsoft Security](https://msft.it/6002T9HQY)
+- [Learn more](/defender/threat-intelligence/what-is-microsoft-defender-threat-intelligence-defender-ti) about Defender TI.
+- Subscribe to [Microsoft Security on YouTube](https://www.youtube.com/playlist?list=PL3ZTgFEc7LysiX4PfHhdJPR7S8mGO14YS)
+- Join our [Tech Community](https://aka.ms/SecurityTechCommunity)
+- For more about [Microsoft Security](https://msft.it/6002T9HQY)
- Follow us on social media:
- - [LinkedIn](https://www.youtube.com/redirect?event=video_description&redir_token=QUFFLUhqbFk5TXZuQld2NlpBRV9BQlJqMktYSm95WWhCZ3xBQ3Jtc0tsQU13MkNPWGNFZzVuem5zc05wcnp0VGxybHprVTkwS2todWw0b0VCWUl4a2ZKYVktNGM1TVFHTXpmajVLcjRKX0cwVFNJaDlzTld4MnhyenBuUGRCVmdoYzRZTjFmYXRTVlhpZGc4MHhoa3N6ZDhFMA&q=https%3A%2F%2Fwww.linkedin.com%2Fshowcase%2Fmicrosoft-security%2F)
- - [Twitter](https://twitter.com/msftsecurity)
+ - [LinkedIn](https://www.youtube.com/redirect?event=video_description&redir_token=QUFFLUhqbFk5TXZuQld2NlpBRV9BQlJqMktYSm95WWhCZ3xBQ3Jtc0tsQU13MkNPWGNFZzVuem5zc05wcnp0VGxybHprVTkwS2todWw0b0VCWUl4a2ZKYVktNGM1TVFHTXpmajVLcjRKX0cwVFNJaDlzTld4MnhyenBuUGRCVmdoYzRZTjFmYXRTVlhpZGc4MHhoa3N6ZDhFMA&q=https%3A%2F%2Fwww.linkedin.com%2Fshowcase%2Fmicrosoft-security%2F)
+ - [Twitter](https://twitter.com/msftsecurity)
- Join our [Tech Community](https://aka.ms/SecurityTechCommunity)
Last updated 04/27/2023
## Next steps > [!div class="nextstepaction"]
-> [Enhancements in Defender for SQL Vulnerability Assessment](episode-twenty-four.md)
+> [Enhancements in Defender for SQL Vulnerability Assessment](episode-twenty-four.md)
defender-for-cloud Episode Twenty Two https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/episode-twenty-two.md
Last updated 04/27/2023
- [11:51](/shows/mdc-in-the-field/security-explorer#time=11m51s) - Demonstration ## Recommended resources
- - [Learn more](concept-easm.md) about external attack surface management.
- - Subscribe to [Microsoft Security on YouTube](https://www.youtube.com/playlist?list=PL3ZTgFEc7LysiX4PfHhdJPR7S8mGO14YS)
- - Join our [Tech Community](https://aka.ms/SecurityTechCommunity)
- - For more about [Microsoft Security](https://msft.it/6002T9HQY)
+
+- [Learn more](concept-easm.md) about external attack surface management.
+- Subscribe to [Microsoft Security on YouTube](https://www.youtube.com/playlist?list=PL3ZTgFEc7LysiX4PfHhdJPR7S8mGO14YS)
+- Join our [Tech Community](https://aka.ms/SecurityTechCommunity)
+- For more about [Microsoft Security](https://msft.it/6002T9HQY)
- Follow us on social media:
- - [LinkedIn](https://www.youtube.com/redirect?event=video_description&redir_token=QUFFLUhqbFk5TXZuQld2NlpBRV9BQlJqMktYSm95WWhCZ3xBQ3Jtc0tsQU13MkNPWGNFZzVuem5zc05wcnp0VGxybHprVTkwS2todWw0b0VCWUl4a2ZKYVktNGM1TVFHTXpmajVLcjRKX0cwVFNJaDlzTld4MnhyenBuUGRCVmdoYzRZTjFmYXRTVlhpZGc4MHhoa3N6ZDhFMA&q=https%3A%2F%2Fwww.linkedin.com%2Fshowcase%2Fmicrosoft-security%2F)
- - [Twitter](https://twitter.com/msftsecurity)
+ - [LinkedIn](https://www.youtube.com/redirect?event=video_description&redir_token=QUFFLUhqbFk5TXZuQld2NlpBRV9BQlJqMktYSm95WWhCZ3xBQ3Jtc0tsQU13MkNPWGNFZzVuem5zc05wcnp0VGxybHprVTkwS2todWw0b0VCWUl4a2ZKYVktNGM1TVFHTXpmajVLcjRKX0cwVFNJaDlzTld4MnhyenBuUGRCVmdoYzRZTjFmYXRTVlhpZGc4MHhoa3N6ZDhFMA&q=https%3A%2F%2Fwww.linkedin.com%2Fshowcase%2Fmicrosoft-security%2F)
+ - [Twitter](https://twitter.com/msftsecurity)
- Join our [Tech Community](https://aka.ms/SecurityTechCommunity)
Last updated 04/27/2023
## Next steps > [!div class="nextstepaction"]
-> [Defender Threat Intelligence (Defender TI)](episode-twenty-three.md)
+> [Defender Threat Intelligence (Defender TI)](episode-twenty-three.md)
defender-for-cloud Episode Twenty https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/episode-twenty.md
Last updated 04/27/2023
- [19:25](/shows/mdc-in-the-field/security-explorer#time=19m25s) - Saving cloud security explorer queries - ## Recommended resources
- - [Learn more](./concept-attack-path.md) about Attack path.
- - Subscribe to [Microsoft Security on YouTube](https://www.youtube.com/playlist?list=PL3ZTgFEc7LysiX4PfHhdJPR7S8mGO14YS)
- - Join our [Tech Community](https://aka.ms/SecurityTechCommunity)
- - For more about [Microsoft Security](https://msft.it/6002T9HQY)
+
+- [Learn more](./concept-attack-path.md) about Attack path.
+- Subscribe to [Microsoft Security on YouTube](https://www.youtube.com/playlist?list=PL3ZTgFEc7LysiX4PfHhdJPR7S8mGO14YS)
+- Join our [Tech Community](https://aka.ms/SecurityTechCommunity)
+- For more about [Microsoft Security](https://msft.it/6002T9HQY)
- Follow us on social media:
- - [LinkedIn](https://www.linkedin.com/showcase/microsoft-security/)
- - [Twitter](https://twitter.com/msftsecurity)
+ - [LinkedIn](https://www.linkedin.com/showcase/microsoft-security/)
+ - [Twitter](https://twitter.com/msftsecurity)
- Join our [Tech Community](https://aka.ms/SecurityTechCommunity)
Last updated 04/27/2023
## Next steps > [!div class="nextstepaction"]
-> [Latest updates in the regulatory compliance dashboard](episode-twenty-one.md)
+> [Latest updates in the regulatory compliance dashboard](episode-twenty-one.md)
defender-for-cloud Episode Two https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/episode-two.md
Last updated 04/27/2023
-# Integrate Microsoft Purview with Microsoft Defender for Cloud
+# Integrate Microsoft Purview with Microsoft Defender for Cloud
**Episode description**: In this episode of Defender for Cloud in the field, David Trigano joins Yuri Diogenes to share the new integration of Microsoft Defender for Cloud with Microsoft Purview, which was released at Ignite 2021.
David explains the use case scenarios for this integration and how the data clas
Learn more about the [integration with Microsoft Purview](information-protection.md). -- Subscribe to [Microsoft Security on YouTube](https://www.youtube.com/redirect?event=video_description&redir_token=QUFFLUhqa0ZoTml2Qm9kZ2pjRzNMUXFqVUwyNl80YVNtd3xBQ3Jtc0trVm9QM2Z0NlpOeC1KSUE2UEd1cVJ5aHQ0MTN6WjJEYmNlOG9rWC1KZ1ZqaTNmcHdOOHMtWXRLSGhUTVBhQlhhYzlUc2xmTHZtaUpkd1c4LUQzLWt1YmRTbkVQVE5EcTJIM0Foc042SGdQZU5acVRJbw&q=https%3A%2F%2Faka.ms%2FSubscribeMicrosoftSecurity)
+- Subscribe to [Microsoft Security on YouTube](https://www.youtube.com/redirect?event=video_description&redir_token=QUFFLUhqa0ZoTml2Qm9kZ2pjRzNMUXFqVUwyNl80YVNtd3xBQ3Jtc0trVm9QM2Z0NlpOeC1KSUE2UEd1cVJ5aHQ0MTN6WjJEYmNlOG9rWC1KZ1ZqaTNmcHdOOHMtWXRLSGhUTVBhQlhhYzlUc2xmTHZtaUpkd1c4LUQzLWt1YmRTbkVQVE5EcTJIM0Foc042SGdQZU5acVRJbw&q=https%3A%2F%2Faka.ms%2FSubscribeMicrosoftSecurity)
-- Follow us on social media:
+- Follow us on social media:
[LinkedIn](https://www.youtube.com/redirect?event=video_description&redir_token=QUFFLUhqbFk5TXZuQld2NlpBRV9BQlJqMktYSm95WWhCZ3xBQ3Jtc0tsQU13MkNPWGNFZzVuem5zc05wcnp0VGxybHprVTkwS2todWw0b0VCWUl4a2ZKYVktNGM1TVFHTXpmajVLcjRKX0cwVFNJaDlzTld4MnhyenBuUGRCVmdoYzRZTjFmYXRTVlhpZGc4MHhoa3N6ZDhFMA&q=https%3A%2F%2Fwww.linkedin.com%2Fshowcase%2Fmicrosoft-security%2F) [Twitter](https://twitter.com/msftsecurity) -- Join our [Tech Community](https://aka.ms/SecurityTechCommunity)
+- Join our [Tech Community](https://aka.ms/SecurityTechCommunity)
-- For more about [Microsoft Security](https://msft.it/6002T9HQY)
+- For more about [Microsoft Security](https://msft.it/6002T9HQY)
## Next steps > [!div class="nextstepaction"]
-> [Watch Episode 3](episode-three.md)
+> [Watch Episode 3](episode-three.md)
defender-for-cloud Investigate Resource Health https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/investigate-resource-health.md
Title: Tutorial - Investigate the health of your resources description: 'Tutorial: Learn how to investigate the health of your resources using Microsoft Defender for Cloud.' Previously updated : 01/24/2023 Last updated : 02/21/2024 # Tutorial: Investigate the health of your resources The resource health page provides a snapshot view of the overall health of a single resource. You can review detailed information about the resource and all recommendations that apply to that resource. Also, if you're using any of the [advanced protection plans of Microsoft Defender for Cloud](defender-for-cloud-introduction.md), you can see outstanding security alerts for that specific resource too.
-This single page, currently in preview, in Defender for Cloud's portal pages shows:
+This single page in Defender for Cloud's portal shows:
1. **Resource information** - The resource group and subscription it's attached to, the geographic location, and more.
1. **Applied security feature** - Whether a Microsoft Defender plan is enabled for the resource.
In this tutorial you'll learn how to:
To step through the features covered in this tutorial: - You need an Azure subscription. If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/) before you begin.-- To apply security recommendations, you must be signed in with an account that has the relevant permissions (Resource Group Contributor, Resource Group Owner, Subscription Contributor, or Subscription Owner)-- To dismiss alerts, you must be signed in with an account that has the relevant permissions (Security Admin, Subscription Contributor, or Subscription Owner)+
+- [Microsoft Defender for Cloud enabled on your subscription](connect-azure-subscription.md).
+
+- **To apply security recommendations**: you must be signed in with an account that has the relevant permissions (Resource Group Contributor, Resource Group Owner, Subscription Contributor, or Subscription Owner)
+
+- **To dismiss alerts**: you must be signed in with an account that has the relevant permissions (Security Admin, Subscription Contributor, or Subscription Owner)
## Access the health information for a resource > [!TIP] > In the following screenshots, we're opening a virtual machine, but the resource health page can show you the details for all resource types.
-To open the resource health page for a resource:
+**To open the resource health page for a resource**:
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+
+1. Search for and select **Microsoft Defender for Cloud**.
+
+1. Select **Inventory**.
-1. Select any resource from the [asset inventory page](asset-inventory.md).
+1. Select any resource.
:::image type="content" source="media/investigate-resource-health/inventory-select-resource.png" alt-text="Select a resource from the asset inventory to view the resource health page." lightbox="./media/investigate-resource-health/inventory-select-resource.png":::
-1. Use the left pane of the resource health page for an overview of the subscription, status, and monitoring information about the resource. You can also see whether enhanced security features are enabled for the resource:
+1. Review the left pane of the resource health page for an overview of the subscription, status, and monitoring information about the resource. You can also see whether enhanced security features are enabled for the resource:
:::image type="content" source="media/investigate-resource-health/resource-health-left-pane.png" alt-text="The left pane of Microsoft Defender for Cloud's resource health page shows the subscription, status, and monitoring information about the resource. It also includes the total number of outstanding security recommendations and security alerts.":::
To open the resource health page for a resource:
The resource health page lists the recommendations for which your resource is "unhealthy" and the alerts that are active. -- To ensure your resource is hardened according to the policies applied to your subscriptions, fix the issues described in the recommendations:
- 1. From the right pane, select a recommendation.
- 1. Continue as instructed on screen.
+### Harden a resource
+
+To ensure your resource is hardened according to the policies applied to your subscriptions, fix the issues described in the recommendations:
+
+1. From the right pane, select a recommendation.
+
+1. Continue as instructed on screen.
+
+ > [!TIP]
+ > The instructions for fixing issues raised by security recommendations differ for each of Defender for Cloud's recommendations.
+ >
+ > To decide which recommendations to resolve first, look at the severity of each one and its [potential impact on your secure score](secure-score-security-controls.md).
+
+### Investigate a security alert
- > [!TIP]
- > The instructions for fixing issues raised by security recommendations differ for each of Defender for Cloud's recommendations.
- >
- > To decide which recommendations to resolve first, look at the severity of each one and its [potential impact on your secure score](secure-score-security-controls.md).
+1. From the right pane, select an alert.
-- To investigate a security alert:
- 1. From the right pane, select an alert.
- 1. Follow the instructions in [Respond to security alerts](managing-and-responding-alerts.md#respond-to-a-security-alert).
+1. Follow the instructions in [Respond to security alerts](managing-and-responding-alerts.md#respond-to-a-security-alert).
## Next steps
defender-for-cloud Sql Azure Vulnerability Assessment Find https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/sql-azure-vulnerability-assessment-find.md
Title: Find vulnerabilities in your Azure SQL databases
description: Learn how to find software vulnerabilities with the express configuration on Azure SQL Database, Azure SQL Managed Instance, and Azure Synapse Analytics. Previously updated : 11/29/2022 Last updated : 03/25/2024
defender-for-cloud Sql Information Protection Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/sql-information-protection-policy.md
Title: SQL information protection policy
-description: Learn how to customize information protection policies in Microsoft Defender for Cloud.
+description: Learn how to customize information protection policies in Microsoft Defender for Cloud to secure your data effectively and meet compliance requirements.
Previously updated : 11/09/2021 Last updated : 03/25/2024
+#customer intent: As a user, I want to learn how to customize information protection policies in Microsoft Defender for Cloud so that I can secure my data effectively.
+ # SQL information protection policy in Microsoft Defender for Cloud SQL information protection's [data discovery and classification mechanism](/azure/azure-sql/database/data-discovery-and-classification-overview) provides advanced capabilities for discovering, classifying, labeling, and reporting the sensitive data in your databases. It's built into [Azure SQL Database](/azure/azure-sql/database/sql-database-paas-overview), [Azure SQL Managed Instance](/azure/azure-sql/managed-instance/sql-managed-instance-paas-overview), and [Azure Synapse Analytics](../synapse-analytics/sql-data-warehouse/sql-data-warehouse-overview-what-is.md).
Learn more in [Grant and request tenant-wide visibility](tenant-wide-permissions
- [Get-AzSqlInformationProtectionPolicy](/powershell/module/az.security/get-azsqlinformationprotectionpolicy): Retrieves the effective tenant SQL information protection policy. - [Set-AzSqlInformationProtectionPolicy](/powershell/module/az.security/set-azsqlinformationprotectionpolicy): Sets the effective tenant SQL information protection policy.
-## Next steps
+## Related articles
+
+- [Azure SQL Database Data Discovery and Classification](/azure/azure-sql/database/data-discovery-and-classification-overview)
-In this article, you learned about defining an information protection policy in Microsoft Defender for Cloud. To learn more about using SQL Information Protection to classify and protect sensitive data in your SQL databases, see [Azure SQL Database Data Discovery and Classification](/azure/azure-sql/database/data-discovery-and-classification-overview).
+- [Microsoft Defender for Cloud data security](data-security.md)
-For more information on security policies and data security in Defender for Cloud, see the following articles:
+## Next step
-- [Setting security policies in Microsoft Defender for Cloud](tutorial-security-policy.md): Learn how to configure security policies for your Azure subscriptions and resource groups-- [Microsoft Defender for Cloud data security](data-security.md): Learn how Defender for Cloud manages and safeguards data
+> [!div class="nextstepaction"]
+> [Setting security policies in Microsoft Defender for Cloud](tutorial-security-policy.md)
defender-for-iot Concept Recommendations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/device-builders/concept-recommendations.md
Security recommendations are actionable and aim to aid customers in complying wi
In this article, you will find a list of recommendations, which can be triggered on your IoT Hub.
-> [!NOTE]
-> The Microsoft Defender for IoT legacy experience under IoT Hub has been replaced by our new Defender for IoT standalone experience, in the Defender for IoT area of the Azure portal. The legacy experience under IoT Hub will not be supported after **March 31, 2023**.
- ## Built in recommendations in IoT Hub Recommendation alerts provide insight and suggestions for actions to improve the security posture of your environment.
defender-for-iot How To Investigate Cis Benchmark https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/device-builders/how-to-investigate-cis-benchmark.md
Perform basic and advanced investigations based on OS baseline recommendations.
-> [!NOTE]
-> The Microsoft Defender for IoT legacy experience under IoT Hub has been replaced by our new Defender for IoT standalone experience, in the Defender for IoT area of the Azure portal. The legacy experience under IoT Hub will not be supported after **March 31, 2023**.
- ## Basic OS baseline security recommendation investigation You can investigate OS baseline recommendations by navigating to [Defender for IoT in the Azure portal](https://portal.azure.com/#blade/Microsoft_Azure_IoT_Defender/IoTDefenderDashboard/Getting_Started). For more information, see how to [Investigate security recommendations](quickstart-investigate-security-recommendations.md).
defender-for-iot How To Investigate Device https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/device-builders/how-to-investigate-device.md
In this guide, use the investigation suggestions provided to help determine the
> * Find your device data > * Investigate using KQL queries
-> [!NOTE]
-> The Microsoft Defender for IoT legacy experience under IoT Hub has been replaced by our new Defender for IoT standalone experience, in the Defender for IoT area of the Azure portal. The legacy experience under IoT Hub will not be supported after **March 31, 2023**.
->
-> For more information, see [Tutorial: Investigate security recommendations](tutorial-investigate-security-recommendations.md) and [Tutorial: Investigate security alerts](tutorial-investigate-security-alerts.md).
- ## How can I access my data? By default, Defender for IoT stores your security alerts and recommendations in your Log Analytics workspace. You can also choose to store your raw security data.
defender-for-iot How To Security Data Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/device-builders/how-to-security-data-access.md
Last updated 03/28/2022
Defender for IoT stores security alerts, recommendations, and raw security data (if you choose to save it) in your Log Analytics workspace.
-> [!NOTE]
-> The Microsoft Defender for IoT legacy experience under IoT Hub has been replaced by our new Defender for IoT standalone experience, in the Defender for IoT area of the Azure portal. The legacy experience under IoT Hub will not be supported after **March 31, 2023**.
->
-> For more information, see [Tutorial: Configure Microsoft Defender for IoT agent-based solution](tutorial-configure-agent-based-solution.md).
## Log Analytics To configure which Log Analytics workspace is used:
For details on querying data from Log Analytics, see [Get started with log queri
Security alerts are stored in _AzureSecurityOfThings.SecurityAlert_ table in the Log Analytics workspace configured for the Defender for IoT solution.
-We've provided a number of useful queries to help you get started exploring security alerts.
+We provide many useful queries to help you get started exploring security alerts.
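As one hedged way to run such queries programmatically, the sketch below uses the `azure-monitor-query` Python SDK against the `SecurityAlert` table; the workspace ID is a placeholder, and the signed-in identity is assumed to have read access to the workspace.

```python
# Minimal sketch: query exported Defender for IoT alerts from Log Analytics.
from datetime import timedelta
from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

client = LogsQueryClient(DefaultAzureCredential())
response = client.query_workspace(
    workspace_id="<workspace-id>",      # placeholder
    query="SecurityAlert | take 10",    # sample query over the alerts table
    timespan=timedelta(days=7))
for table in response.tables:           # assumes the query succeeded in full
    for row in table.rows:
        print(row)
```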
### Sample records
SecurityAlert
Security recommendations are stored in _AzureSecurityOfThings.SecurityRecommendation_ table in the Log Analytics workspace configured for the Defender for IoT solution.
-We've provided a number of useful queries to help you get started exploring security recommendations.
+We provide many useful queries to help you get started exploring security recommendations.
### Sample records
SecurityRecommendation
| TimeGenerated | IoTHubId | DeviceId | RecommendationSeverity | RecommendationState | RecommendationDisplayName | Description | RecommendationAdditionalData |
|--|--|--|--|--|--|--|--|
-| 2019-03-22T10:21:06.060 | /subscriptions/<subscription_id>/resourceGroups/<resource_group>/providers/Microsoft.Devices/IotHubs/<iot_hub> | <device_name> | Medium | Active | Permissive firewall rule in the input chain was found | A rule in the firewall has been found that contains a permissive pattern for a wide range of IP addresses or Ports | {"Rules":"[{\"SourceAddress\":\"\",\"SourcePort\":\"\",\"DestinationAddress\":\"\",\"DestinationPort\":\"1337\"}]"} |
-| 2019-03-22T10:50:27.237 | /subscriptions/<subscription_id>/resourceGroups/<resource_group>/providers/Microsoft.Devices/IotHubs/<iot_hub> | <device_name> | Medium | Active | Permissive firewall rule in the input chain was found | A rule in the firewall has been found that contains a permissive pattern for a wide range of IP addresses or Ports | {"Rules":"[{\"SourceAddress\":\"\",\"SourcePort\":\"\",\"DestinationAddress\":\"\",\"DestinationPort\":\"1337\"}]"} |
+| 2019-03-22T10:21:06.060 | /subscriptions/<subscription_id>/resourceGroups/<resource_group>/providers/Microsoft.Devices/IotHubs/<iot_hub> | <device_name> | Medium | Active | Permissive firewall rule in the input chain was found | A rule in the firewall was found that contains a permissive pattern for a wide range of IP addresses or Ports | {"Rules":"[{\"SourceAddress\":\"\",\"SourcePort\":\"\",\"DestinationAddress\":\"\",\"DestinationPort\":\"1337\"}]"} |
+| 2019-03-22T10:50:27.237 | /subscriptions/<subscription_id>/resourceGroups/<resource_group>/providers/Microsoft.Devices/IotHubs/<iot_hub> | <device_name> | Medium | Active | Permissive firewall rule in the input chain was found | A rule in the firewall was found that contains a permissive pattern for a wide range of IP addresses or Ports | {"Rules":"[{\"SourceAddress\":\"\",\"SourcePort\":\"\",\"DestinationAddress\":\"\",\"DestinationPort\":\"1337\"}]"} |
### Device summary
defender-for-iot Quickstart Create Custom Alerts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/device-builders/quickstart-create-custom-alerts.md
Last updated 01/01/2023
Using custom security groups and alerts lets you take full advantage of the end-to-end security information and categorical device knowledge to ensure better security across your IoT solution.
-> [!NOTE]
-> The Microsoft Defender for IoT legacy experience under IoT Hub has been replaced by our new Defender for IoT standalone experience, in the Defender for IoT area of the Azure portal. The legacy experience under IoT Hub will not be supported after **March 31, 2023**.
## Why use custom alerts? You know your IoT devices best.
defender-for-iot Tutorial Investigate Security Alerts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/device-builders/tutorial-investigate-security-alerts.md
In this tutorial you'll learn how to:
> - Investigate security alert details > - Investigate alerts in Log Analytics workspace
-> [!NOTE]
-> The Microsoft Defender for IoT legacy experience under IoT Hub has been replaced by our new Defender for IoT standalone experience, in the Defender for IoT area of the Azure portal. The legacy experience under IoT Hub will not be supported after **March 31, 2023**.
- ## Prerequisites - An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
defender-for-iot Tutorial Investigate Security Recommendations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/device-builders/tutorial-investigate-security-recommendations.md
In this tutorial you'll learn how to:
> - Investigate security recommendation details > - Investigate recommendations in a Log Analytics workspace
-> [!NOTE]
-> The Microsoft Defender for IoT legacy experience under IoT Hub has been replaced by our new Defender for IoT standalone experience, in the Defender for IoT area of the Azure portal. The legacy experience under IoT Hub will not be supported after **March 31, 2023**.
- ## Prerequisites - An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
defender-for-iot Getting Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/getting-started.md
For more information, see [Free trial](billing.md#free-trial).
## Prerequisites
-Before you start, all you need is an email address to be used as the contact for your new Microsoft Tenant.
+Before you start, you need:
-You also need to enter credit card details for your new Azure subscription, although you aren't charged until you switch from the **Free Trial** to the **Pay-As-You-Go** plan.
+1. An email address to be used as the contact for your new Microsoft Tenant
+1. Global Admin permissions (an Entra ID role on the tenant)
+1. Credit card details for your new Azure subscription, although you aren't charged until you switch from the **Free Trial** to the **Pay-As-You-Go** plan
## Add a trial license
defender-for-iot Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/release-notes.md
Cloud features may be dependent on a specific sensor version. Such features are
| Version / Patch | Release date | Scope | Supported until | | - | | -- | - | | **24.1** | | | |
-| 24.1.0 |02/2024 | Major |01/2025 |
+| 24.1.2 |02/2024 | Major |01/2025 |
| **23.1** | | | |
| 23.1.3 | 09/2023 | Patch | 08/2024 |
| 23.1.2 | 07/2023 | Major | 06/2024 |
defender-for-iot Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/whats-new.md
Features released earlier than nine months ago are described in the [What's new
|Service area |Updates | |||
-| **OT networks** | **Version 24.1.0**:<br> - [Alert suppression rules from the Azure portal (Public preview)](#alert-suppression-rules-from-the-azure-portal-public-preview)<br>- [Focused alerts in OT/IT environments](#focused-alerts-in-otit-environments)<br>- [Alert ID now aligned on the Azure portal and sensor console](#alert-id-now-aligned-on-the-azure-portal-and-sensor-console)<br>- [Newly supported protocols](#newly-supported-protocols)<br><br>**Cloud features**<br>- [New license renewal reminder in the Azure portal](#new-license-renewal-reminder-in-the-azure-portal) <br><br>- [New OT appliance hardware profile](#new-ot-appliance-hardware-profile) <br><br>- [New fields for SNMP MIB OIDs](#new-fields-for-snmp-mib-oids)|
+| **OT networks** | **Version 24.1.2**:<br> - [Alert suppression rules from the Azure portal (Public preview)](#alert-suppression-rules-from-the-azure-portal-public-preview)<br>- [Focused alerts in OT/IT environments](#focused-alerts-in-otit-environments)<br>- [Alert ID now aligned on the Azure portal and sensor console](#alert-id-now-aligned-on-the-azure-portal-and-sensor-console)<br>- [Newly supported protocols](#newly-supported-protocols)<br><br>**Cloud features**<br>- [New license renewal reminder in the Azure portal](#new-license-renewal-reminder-in-the-azure-portal) <br><br>- [New OT appliance hardware profile](#new-ot-appliance-hardware-profile) <br><br>- [New fields for SNMP MIB OIDs](#new-fields-for-snmp-mib-oids)|
### Alert suppression rules from the Azure portal (Public preview)
event-grid Mqtt Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/mqtt-overview.md
IoT applications are software designed to interact with and process data from Io
### Client authentication
-Event Grid has a client registry that stores information about the clients permitted to connect to it. Before a client can connect, there must be an entry for that client in the client registry. As a client connects to MQTT broker, it needs to authenticate with MQTT broker based on credentials stored in the identity registry. MQTT broker supports X.509 certificate authentication that is the industry authentication standard in IoT devices and [Microsoft Entra ID](mqtt-client-microsoft-entra-token-and-rbac.md) that is Azure's authentication standard for applications.[Learn more about MQTT client authentication.](mqtt-client-authentication.md)
+Event Grid has a client registry that stores information about the clients permitted to connect to it. Before a client can connect, there must be an entry for that client in the client registry. As a client connects to MQTT broker, it needs to authenticate with MQTT broker based on credentials stored in the identity registry. MQTT broker supports X.509 certificate authentication, which is the industry authentication standard in IoT devices, and [Microsoft Entra ID](mqtt-client-microsoft-entra-token-and-rbac.md), which is Azure's authentication standard for applications. [Learn more about MQTT client authentication](mqtt-client-authentication.md).
### Access control
event-grid Mqtt Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/mqtt-support.md
# MQTT features supported by Azure Event GridΓÇÖs MQTT broker feature
-MQTT is a publish-subscribe messaging transport protocol that was designed for constrained environments. It's efficient, scalable, and reliable, which made it the gold standard for communication in IoT scenarios. MQTT broker supports clients that publish and subscribe to messages over MQTT v3.1.1, MQTT v3.1.1 over WebSockets, MQTT v5, and MQTT v5 over WebSockets. MQTT broker also supports cross MQTT version (MQTT 3.1.1 and MQTT 5) communication.
+MQTT is a publish-subscribe messaging transport protocol that was designed for constrained environments. It's efficient, scalable, and reliable, which made it the gold standard for communication in IoT scenarios. MQTT broker supports clients that publish and subscribe to messages over MQTT v3.1.1, MQTT v3.1.1 over WebSockets, MQTT v5, and MQTT v5 over WebSockets. MQTT broker also supports cross MQTT version (MQTT 3.1.1 and MQTT 5) communication.
-MQTT v5 has introduced many improvements over MQTT v3.1.1 to deliver a more seamless, transparent, and efficient communication. It added:
+MQTT v5 introduced many improvements over MQTT v3.1.1 to deliver more seamless, transparent, and efficient communication. It added:
- Better error reporting.
- More transparent communication between clients through features like user properties and content type.
- More control to clients over the communication through features like message and session expiry.
The CONNECT packet should include the following properties:
- The ClientId field is required, and it should include the session name of the client. The session name needs to be unique across the namespace. You can use the client authentication name as the session name if each client is using one session per client. If one client is using multiple sessions, it needs to use different values for ClientId for each of its sessions.
- The Username field is required if you didn't select a value in the alternativeAuthenticationNameSources during namespace creation. In that case, you need to provide your client's authentication name in the Username field. That name needs to match the authentication name provided and the value in the client's certificate field that was specified during the client resource creation.
-Learn more about [Client authentication](mqtt-client-authentication.md)
+Learn more about [Client authentication.](mqtt-client-authentication.md)
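To make the ClientId and Username requirements above concrete, here's a minimal connection sketch using the open-source paho-mqtt package (version 1.6, not covered by this article); the hostname, client names, and certificate paths are hypothetical placeholders, not values defined here:

```python
import paho.mqtt.client as mqtt

# Hypothetical placeholders: substitute your namespace's MQTT hostname
# and the certificate files registered for your client.
MQTT_HOST = "<your-namespace-mqtt-hostname>"

# ClientId carries the session name, which must be unique across the namespace.
client = mqtt.Client(client_id="device1-session1", protocol=mqtt.MQTTv5)

# Username carries the client authentication name, matching the value in the
# certificate field specified when the client resource was created.
client.username_pw_set(username="device1")
client.tls_set(certfile="device1.pem", keyfile="device1.key")

client.connect(MQTT_HOST, port=8883)
```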
### Multi-session support
For example, the following combinations of Username and ClientIds in the CONNECT
:::image type="content" source="media/mqtt-support/mqtt-multi-session-high-res.png" alt-text="Diagram of a multi-session example." border="false":::
-For more information, see [How to establish multiple sessions for a single client](mqtt-establishing-multiple-sessions-per-client.md)
+For more information, see [How to establish multiple sessions for a single client.](mqtt-establishing-multiple-sessions-per-client.md)
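As an illustration, one client identity can hold two sessions by reusing the same Username while giving each session its own ClientId. A sketch with assumed names (paho-mqtt, placeholder hostname and certificates):

```python
import paho.mqtt.client as mqtt

MQTT_HOST = "<your-namespace-mqtt-hostname>"  # hypothetical placeholder

# One authentication name ("device1") holding two independent sessions,
# each identified by a distinct ClientId (session name).
sessions = []
for session_name in ("device1-telemetry", "device1-commands"):
    session = mqtt.Client(client_id=session_name, protocol=mqtt.MQTTv5)
    session.username_pw_set(username="device1")
    session.tls_set(certfile="device1.pem", keyfile="device1.key")
    session.connect(MQTT_HOST, port=8883)
    sessions.append(session)
```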
#### Handling sessions:
-- If a client tries to take over another client's active session by presenting its session name with a different authentication name, its connection request is rejected with an unauthorized error. For example, if Client B tries to connect to session 123 that is assigned at that time for client A, Client B's connection request is rejected. That being said, if the same client tries to reconnect with the same session names and the same authentication name, it is able to take over its existing session.
+- If a client tries to take over another client's active session by presenting its session name with a different authentication name, its connection request is rejected with an unauthorized error. For example, if Client B tries to connect to session 123 that is assigned at that time for client A, Client B's connection request is rejected. That being said, if the same client tries to reconnect with the same session names and the same authentication name, it's able to take over its existing session.
- If a client resource is deleted without ending its session, other clients can't use its session name until the session expires. For example, if client B creates a session with session name 123 and then client B gets deleted, client A can't connect to session 123 until it expires.
- The limit for the number of sessions per client applies to online and offline sessions at any point in time. For example, consider a namespace with the maximum client sessions per authentication name set to 1. If client A connects with a persistent session 123 and then gets disconnected, client A won't be able to connect with a new session 456, since its session 123 is still active even if it's offline. Accordingly, we recommend that the same client always reconnects with the same static session names as opposed to generating a new session name with every reconnect.
MQTT broker supports QoS 0 and 1, which define the guarantee of message delivery
### Persistent sessions
MQTT broker supports persistent sessions for MQTT v3.1.1 such that MQTT broker preserves information about a client's session in case of disconnections to ensure reliability of the communication. This information includes the client's subscriptions and missed/unacknowledged QoS 1 messages. Clients can configure a persistent session by setting the cleanSession flag in the CONNECT packet to false.
#### Clean start and session expiry
-MQTT v5 has introduced the clean start and session expiry features as an improvement over MQTT v3.1.1 in handling session persistence. Clean Start is a feature that allows a client to start a new session with MQTT broker, discarding any previous session data. Session Expiry allows a client to inform MQTT broker when an inactive session is considered expired and automatically removed. In the CONNECT packet, a client can set Clean Start flag to true and/or short session expiry interval for security reasons or to avoid any potential data conflicts that might have occurred during the previous session. A client can also set a clean start to false and/or long session expiry interval to ensure the reliability and efficiency of persistent sessions.
+MQTT v5 introduced the clean start and session expiry features as an improvement over MQTT v3.1.1 in handling session persistence. Clean Start is a feature that allows a client to start a new session with MQTT broker, discarding any previous session data. Session Expiry allows a client to inform MQTT broker when an inactive session is considered expired and automatically removed. In the CONNECT packet, a client can set the Clean Start flag to true and/or a short session expiry interval for security reasons, or to avoid any potential data conflicts that might have occurred during the previous session. A client can also set the clean start flag to false and/or a long session expiry interval to ensure the reliability and efficiency of persistent sessions.
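As a sketch (paho-mqtt, placeholder hostname; authentication setup omitted for brevity), a v5 client opting into a persistent session clears the clean start flag and requests an expiry interval in the CONNECT packet:

```python
import paho.mqtt.client as mqtt
from paho.mqtt.properties import Properties
from paho.mqtt.packettypes import PacketTypes

client = mqtt.Client(client_id="device1-session1", protocol=mqtt.MQTTv5)

# Ask the broker to keep this session for one hour after a disconnect.
connect_props = Properties(PacketTypes.CONNECT)
connect_props.SessionExpiryInterval = 3600  # seconds

# clean_start=False resumes the existing session instead of discarding it.
client.connect("<your-namespace-mqtt-hostname>", port=8883,
               clean_start=False, properties=connect_props)
```

A requested interval above the namespace's configured maximum is adjusted down, as described in the next section.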
#### Maximum session expiry interval configuration
-You can configure the maximum session expiry interval allowed for all your clients connecting to the Event Grid namespace. For MQTT v3.1.1 clients, the configured limit is applied as the default session expiry interval for all persistent sessions. For MQTT v5 clients, the configured limit is applied as the maximum value for the Session Expiry Interval property in the CONNECT packet; any value that exceeds the limit will be adjusted. The default value for this namespace property is 1 hour and can be extended up to 8 hours. Use the following steps to configure the maximum session expiry interval in the Azure portal:
+You can configure the maximum session expiry interval allowed for all your clients connecting to the Event Grid namespace. For MQTT v3.1.1 clients, the configured limit is applied as the default session expiry interval for all persistent sessions. For MQTT v5 clients, the configured limit is applied as the maximum value for the Session Expiry Interval property in the CONNECT packet; any value that exceeds the limit is adjusted. The default value for this namespace property is 1 hour and can be extended up to 8 hours. Use the following steps to configure the maximum session expiry interval in the Azure portal:
- Go to your namespace in the Azure portal.
- Under **Configuration**, change the value for the **Maximum session expiry interval in hours** to the desired limit.
- Select **Apply**.
You can configure the maximum session expiry interval allowed for all your clien
#### Session overflow
MQTT broker maintains a queue of messages for each active MQTT session that isn't connected, until the client connects with MQTT broker again to receive the messages in the queue. If a client doesn't connect to receive the queued QoS 1 messages, the session queue starts accumulating the messages until it reaches its limit: 100 messages or 1 MB. Once the queue reaches its limit during the lifespan of the session, the session is terminated.
+### Last Will and Testament (LWT) messages (preview)
+Last Will and Testament (LWT) notifies your MQTT clients about the abrupt disconnection of other MQTT clients. You can use LWT to ensure a predictable and reliable flow of communication among MQTT clients during unexpected disconnections, which is valuable for scenarios where real-time communication, system reliability, and coordinated actions are critical. Clients that collaborate to perform complex tasks can react to LWT messages from each other by adjusting their behavior, redistributing tasks, or taking over certain responsibilities to maintain the system's performance and stability.
+To use LWT, a client can specify the will message, will topic, and the rest of the will properties in the CONNECT packet during connection. When the client disconnects abruptly, the MQTT broker publishes the will message to all the clients that subscribed to the will topic.
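A minimal sketch with paho-mqtt (topic and payload are illustrative; authentication setup from the earlier CONNECT sketch omitted for brevity): the will is registered before connecting, and the broker publishes it if the client later disconnects abruptly.

```python
import paho.mqtt.client as mqtt

client = mqtt.Client(client_id="device1-session1", protocol=mqtt.MQTTv5)

# Register the will before connecting: if this client disconnects abruptly,
# the broker publishes "offline" to subscribers of the will topic.
client.will_set("devices/device1/status", payload="offline", qos=1)

client.connect("<your-namespace-mqtt-hostname>", port=8883)
```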
+### User properties
+MQTT broker supports user properties on MQTT v5 PUBLISH packets that allow you to add custom key-value pairs in the message header to provide more context about the message. The use cases for user properties are versatile. You can use this feature to include the purpose or origin of the message so the receiver can handle the message without parsing the payload, saving computing resources. For example, a message with a user property indicating its purpose as a "warning" could trigger different handling logic than one with the purpose of "information."
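For instance, a publisher might tag a message's purpose in a user property so subscribers can branch on it without decoding the payload. A sketch with paho-mqtt (topic, property names, and values are assumptions; authentication setup omitted):

```python
import paho.mqtt.client as mqtt
from paho.mqtt.properties import Properties
from paho.mqtt.packettypes import PacketTypes

client = mqtt.Client(client_id="device1-session1", protocol=mqtt.MQTTv5)
client.connect("<your-namespace-mqtt-hostname>", port=8883)

# Custom key-value pair in the PUBLISH header; receivers can route on
# "purpose" without parsing the payload.
pub_props = Properties(PacketTypes.PUBLISH)
pub_props.UserProperty = ("purpose", "warning")
client.publish("devices/device1/events", b"temperature high", qos=1,
               properties=pub_props)
```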
MQTTv5 introduced fields in the MQTT PUBLISH packet header that provide context
:::image type="content" source="media/mqtt-support/mqtt-request-response-high-res.png" alt-text="Diagram of the request-response pattern example." border="false"::: ### Message expiry interval:
-In MQTT v5, message expiry interval allows messages to have a configurable lifespan. The message expiry interval is defined as the time interval between the time a message is published to MQTT broker and the time when the MQTT broker needs to discard the message if it hasn't been delivered. This feature is useful in scenarios where messages are only valid for a certain amount of time, such as time-sensitive commands, real-time data streaming, or security alerts. By setting a message expiry interval, MQTT broker can automatically remove outdated messages, ensuring that only relevant information is available to subscribers. If a message's expiry interval is set to zero, it means the message should never expire.
+In MQTT v5, message expiry interval allows messages to have a configurable lifespan. The message expiry interval is defined as the time interval between the time a message is published to MQTT broker and the time when the MQTT broker needs to discard the undelivered message. This feature is useful in scenarios where messages are only valid for a certain amount of time, such as time-sensitive commands, real-time data streaming, or security alerts. By setting a message expiry interval, MQTT broker can automatically remove outdated messages, ensuring that only relevant information is available to subscribers. If a message's expiry interval is set to zero, it means the message should never expire.
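A sketch of a time-sensitive publish with paho-mqtt (the 60-second lifespan and topic are arbitrary examples; authentication setup omitted):

```python
import paho.mqtt.client as mqtt
from paho.mqtt.properties import Properties
from paho.mqtt.packettypes import PacketTypes

client = mqtt.Client(client_id="device1-session1", protocol=mqtt.MQTTv5)
client.connect("<your-namespace-mqtt-hostname>", port=8883)

# Discard the message if it isn't delivered within 60 seconds.
pub_props = Properties(PacketTypes.PUBLISH)
pub_props.MessageExpiryInterval = 60  # seconds; zero means never expire
client.publish("devices/device1/alerts", b"door open", qos=1,
               properties=pub_props)
```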
### Topic aliases:
In MQTT v5, topic aliases allow a client to use a shorter alias in place of the full topic name in the published message. MQTT broker maintains a mapping between the topic alias and the actual topic name. This feature can save network bandwidth and reduce the size of the message header, particularly for topics with long names. It's useful in scenarios where the same topic is repeatedly published in multiple messages, such as in sensor networks. MQTT broker supports up to 10 topic aliases. A client can use a Topic Alias field in the PUBLISH packet to replace the full topic name with the corresponding alias.
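As a sketch (paho-mqtt, illustrative topic, authentication setup omitted), the first publish binds an alias to the long topic name; per the MQTT v5 specification, later packets can then carry just the alias, although support for sending an empty topic name varies by client library:

```python
import paho.mqtt.client as mqtt
from paho.mqtt.properties import Properties
from paho.mqtt.packettypes import PacketTypes

client = mqtt.Client(client_id="device1-session1", protocol=mqtt.MQTTv5)
client.connect("<your-namespace-mqtt-hostname>", port=8883)

# Bind alias 1 to the long topic name (the broker supports aliases 1-10).
pub_props = Properties(PacketTypes.PUBLISH)
pub_props.TopicAlias = 1
client.publish("factories/plant42/lines/7/sensors/temperature", b"21.5",
               qos=1, properties=pub_props)
```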
In MQTT v5, topic aliases allow a client to use a shorter alias in place of the
In MQTT v5, flow control refers to the mechanism for managing the rate and size of messages that a client can handle. Flow control can be configured by setting the Maximum Packet Size and Receive Maximum parameters in the CONNECT packet. The Receive Maximum parameter allows the client to limit the number of messages sent by the broker to the number of messages that the client is able to handle. The Maximum Packet Size parameter defines the maximum size of packets that the client can receive. MQTT broker has a message size limit of 512 KiB. This feature ensures reliability and stability of the communication for constrained devices with limited processing speed or storage capabilities.
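A sketch of a constrained client negotiating flow control in its CONNECT packet (paho-mqtt; the specific limits are illustrative and authentication setup is omitted):

```python
import paho.mqtt.client as mqtt
from paho.mqtt.properties import Properties
from paho.mqtt.packettypes import PacketTypes

client = mqtt.Client(client_id="device1-session1", protocol=mqtt.MQTTv5)

connect_props = Properties(PacketTypes.CONNECT)
connect_props.ReceiveMaximum = 16            # at most 16 unacknowledged QoS 1 messages
connect_props.MaximumPacketSize = 64 * 1024  # refuse packets larger than 64 KiB

client.connect("<your-namespace-mqtt-hostname>", port=8883,
               properties=connect_props)
```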
### Negative acknowledgments and server-initiated disconnect packet
-For MQTT v5, MQTT broker is able to send negative acknowledgments (NACKs) and server-initiated disconnect packets that provide the client with more information about failures for message delivery or connection. These features help the client diagnose the reason behind a failure and take appropriate mitigating actions. MQTT broker uses the reason codes that are defined in the [MQTT v5 Specification](https://docs.oasis-open.org/mqtt/mqtt/v5.0/mqtt-v5.0.html)
+For MQTT v5, MQTT broker can send negative acknowledgments (NACKs) and server-initiated disconnect packets that provide the client with more information about failures for message delivery or connection. These features help the client diagnose the reason behind a failure and take appropriate mitigating actions. MQTT broker uses the reason codes that are defined in the [MQTT v5 Specification.](https://docs.oasis-open.org/mqtt/mqtt/v5.0/mqtt-v5.0.html)
## Current limitations
MQTT broker is adding more MQTT v5 and MQTT v3.1.1 features in the future to ali
MQTT v5 currently differs from the [MQTT v5 Specification](https://docs.oasis-open.org/mqtt/mqtt/v5.0/mqtt-v5.0.html) in the following ways:
- Shared Subscriptions aren't supported yet.
- Retain flag isn't supported yet.
-- Will Message isn't supported yet. Receiving a CONNECT request with Will Message results in CONNACK with 0x83 (Implementation specific error).
+- Will delay interval isn't supported yet.
- Maximum QoS is 1.
- Maximum Packet Size is 512 KiB.
- Message ordering isn't guaranteed.
MQTT v5 currently differs from the [MQTT v5 Specification](https://docs.oasis-op
- Assigned Client Identifiers aren't supported yet.
- Topic Alias Maximum is 10. The server doesn't assign any topic aliases for outgoing messages at this time. Clients can assign and use topic aliases within the set limit.
- CONNACK doesn't return Response Information property even if the CONNECT request contains Request Response Information property.
-- User Properties on CONNECT, SUBSCRIBE, DISCONNECT, PUBACK, AUTH packets are not used by the service so they're not supported. If any of these requests include user properties, the request will fail.
+- User Properties on CONNECT, SUBSCRIBE, DISCONNECT, PUBACK, AUTH packets aren't used by the service so they're not supported. If any of these requests include user properties, the request fails.
- If the server receives a PUBACK from a client with non-success response code, the connection is terminated.
-- Keep Alive Maximum is 1160 seconds.
+- Keep Alive Maximum is 1,160 seconds.
### MQTTv3.1.1 current limitations
MQTT v3.1.1 currently differs from the [MQTT v3.1.1 Specification](http://docs.oasis-open.org/mqtt/mqtt/v3.1.1/os/mqtt-v3.1.1-os.html) in the following ways:
-- Will Message isn't supported yet. Receiving a CONNECT request with Will Message results in a connection failure.
- QoS2 and Retain Flag aren't supported yet. A publish request with a retain flag or with a QoS2 fails and closes the connection.
- Message ordering isn't guaranteed.
-- Keep Alive Maximum is 1160 seconds.
+- Keep Alive Maximum is 1,160 seconds.
## Code samples:
-[This repository](https://github.com/Azure-Samples/MqttApplicationSamples) contains C#, C, and python code samples that show how to send telemetry, send commands, and broadcast alerts. Note that the certificates created through the samples are fit for testing, but they aren't fit for production environments.
+[This repository](https://github.com/Azure-Samples/MqttApplicationSamples) contains C#, C, and Python code samples that show how to send telemetry, send commands, and broadcast alerts. The certificates created through the samples are fit for testing, but they aren't fit for production environments.
## Next steps:
expressroute About Fastpath https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/about-fastpath.md
Title: About Azure ExpressRoute FastPath
-description: Learn about Azure ExpressRoute FastPath to send network traffic by bypassing the gateway
+description: Learn about Azure ExpressRoute FastPath to send network traffic by bypassing the gateway.
Previously updated : 01/05/2023 Last updated : 03/24/2024
ExpressRoute virtual network gateway is designed to exchange network routes and
### Circuits
-FastPath is available on all ExpressRoute circuits. Limited Generally Available (GA) support for Private Endpoint/Private Link connectivity and Public preview support for VNet peering and UDR connectivity over FastPath is only available for connections associated to ExpressRoute Direct circuits.
+FastPath is available on all ExpressRoute circuits. Limited general availability (GA) support for Private Endpoint/Private Link connectivity and public preview support for virtual network peering and UDR connectivity over FastPath are available only for connections associated with ExpressRoute Direct circuits.
### Gateways
-FastPath still requires a virtual network gateway to be created to exchange routes between virtual network and on-premises network. For more information about virtual network gateways and ExpressRoute, including performance information and gateway SKUs, see [ExpressRoute virtual network gateways](expressroute-about-virtual-network-gateways.md).
+FastPath still requires a virtual network gateway to be created to exchange routes between your virtual network and on-premises network. For more information about virtual network gateways and ExpressRoute, including performance information and gateway SKUs, see [ExpressRoute virtual network gateways](expressroute-about-virtual-network-gateways.md).
To configure FastPath, the virtual network gateway must be either:
While FastPath supports most configurations, it doesn't support the following fe
* Basic Load Balancer: If you deploy a Basic internal load balancer in your virtual network or the Azure PaaS service you deploy in your virtual network uses a Basic internal load balancer, the network traffic from your on-premises network to the virtual IPs hosted on the Basic load balancer is sent to the virtual network gateway. The solution is to upgrade the Basic load balancer to a [Standard load balancer](../load-balancer/load-balancer-overview.md).
-* Private Link: FastPath Connectivity to a private endpoint or Private Link service over an ExpressRoute Direct circuit is supported for limited scenarios. For more information, see [enable FastPath and Private Link for 100 Gbps ExpressRoute Direct](expressroute-howto-linkvnet-arm.md#fastpath-virtual-network-peering-user-defined-routes-udrs-and-private-link-support-for-expressroute-direct-connections). FastPath connectivity to a Private endpoint/Private Link service is not supported for ExpressRoute partner circuits.
+* Private Link: FastPath connectivity to a private endpoint or Private Link service over an ExpressRoute Direct circuit is supported for limited scenarios. For more information, see [enable FastPath and Private Link for 100-Gbps ExpressRoute Direct](expressroute-howto-linkvnet-arm.md#fastpath-virtual-network-peering-user-defined-routes-udrs-and-private-link-support-for-expressroute-direct-connections). FastPath connectivity to a Private endpoint/Private Link service isn't supported for ExpressRoute partner circuits.
-* DNS Private Resolver: Azure ExpressRoute FastPath does not support connectivity to [DNS Private Resolver](../dns/dns-private-resolver-overview.md).
+* DNS Private Resolver: Azure ExpressRoute FastPath doesn't support connectivity to [DNS Private Resolver](../dns/dns-private-resolver-overview.md).
### IP address limits

| ExpressRoute SKU | Bandwidth | FastPath IP limit |
-| -- | -- | -- |
-| ExpressRoute Direct Port | 100Gbps | 200,000 |
-| ExpressRoute Direct Port | 10Gbps | 100,000 |
-| ExpressRoute provider circuit | 10Gbps and lower | 25,000 |
+|--|--|--|
+| ExpressRoute Direct Port | 100 Gbps | 200,000 |
+| ExpressRoute Direct Port | 10 Gbps | 100,000 |
+| ExpressRoute provider circuit | 10 Gbps and lower | 25,000 |
> [!NOTE]
> * ExpressRoute Direct has a cumulative limit at the port level.
> * Traffic flows through the ExpressRoute gateway when these limits are reached.

## Limited General Availability (GA)
-FastPath support for Virtual Network Peering, User Defined Routes (UDRs) and Private Endpoint/Private Link connectivity is available for limited scenarios for 100/10Gbps ExpressRoute Direct connections. Virtual Network Peering and UDR support is available globally across all Azure regions. Private Endpoint/ Private Link connectivity is available in the following Azure regions:
+FastPath support for Virtual Network Peering, User Defined Routes (UDRs), and Private Endpoint/Private Link connectivity is available for limited scenarios on 100/10-Gbps ExpressRoute Direct connections. Virtual Network Peering and UDR support are available globally across all Azure regions. Private Endpoint/Private Link connectivity is available in the following Azure regions:
- Australia East
- East Asia
- East US
FastPath Private endpoint/Private Link connectivity is supported for the followi
> * Private Link pricing will not apply to traffic sent over ExpressRoute FastPath. For more information about pricing, check out the [Private Link pricing page](https://azure.microsoft.com/pricing/details/private-link/).
> * FastPath supports a max of 100-Gbps connectivity to a single Availability Zone (AZ).
-For more information about supported scenarios and to enroll in the limited GA offering, complete this [Microsoft Form](https://aka.ms/FastPathLimitedGA)
+For more information about supported scenarios and to enroll in the limited GA offering, complete this [Microsoft Form](https://aka.ms/FastPathLimitedGA).
+
## Next steps

- To enable FastPath, see [Configure ExpressRoute FastPath](expressroute-howto-linkvnet-arm.md#configure-expressroute-fastpath).
expressroute Designing For High Availability With Expressroute https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/designing-for-high-availability-with-expressroute.md
In this section, let us review optional (depending on your Azure deployment and
### Availability Zone aware ExpressRoute virtual network gateways
-An Availability Zone in an Azure region is a combination of a fault domain and an update domain. If you opt for zone-redundant Azure IaaS deployment, you may also want to configure zone-redundant virtual network gateways that terminate ExpressRoute private peering. To learn further, see [About zone-redundant virtual network gateways in Azure Availability Zones][zone redundant vgw]. To configure zone-redundant virtual network gateway, see [Create a zone-redundant virtual network gateway in Azure Availability Zones][conf zone redundant vgw].
+An Availability Zone in an Azure region is a combination of a fault domain and an update domain. To achieve the highest resiliency and availability, you should configure a zone-redundant ExpressRoute virtual network gateway. To learn more, see [About zone-redundant virtual network gateways in Azure Availability Zones][zone redundant vgw]. To configure a zone-redundant virtual network gateway, see [Create a zone-redundant virtual network gateway in Azure Availability Zones][conf zone redundant vgw].
### Improving failure detection time
expressroute Expressroute Monitoring Metrics Alerts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-monitoring-metrics-alerts.md
description: Learn about Azure ExpressRoute monitoring, metrics, and alerts usin
Previously updated : 02/08/2023 Last updated : 03/31/2024
This article helps you understand ExpressRoute monitoring, metrics, and alerts using Azure Monitor. Azure Monitor is a one-stop shop for all metrics, alerting, and diagnostic logs across all of Azure.
->[!NOTE]
->Using **Classic Metrics** is not recommended.
+> [!NOTE]
+> Using **Classic Metrics** is not recommended.
>
## ExpressRoute metrics
To view **Metrics**, go to the *Azure Monitor* page and select *Metrics*. To view **ExpressRoute** metrics, filter by Resource Type *ExpressRoute circuits*. To view **Global Reach** metrics, filter by Resource Type *ExpressRoute circuits* and select an ExpressRoute circuit resource that has Global Reach enabled. To view **ExpressRoute Direct** metrics, filter Resource Type by *ExpressRoute Ports*.
-Once a metric is selected, the default aggregation will be applied. Optionally, you can apply splitting, which will show the metric with different dimensions.
+Once a metric is selected, the default aggregation is applied. Optionally, you can apply splitting, which shows the metric with different dimensions.
> [!IMPORTANT]
> When viewing ExpressRoute metrics in the Azure portal, select a time granularity of **5 minutes or greater** for best possible results.
Once a metric is selected, the default aggregation will be applied. Optionally,
### Aggregation Types:
-Metrics explorer supports SUM, MAX, MIN, AVG and COUNT as [aggregation types](../azure-monitor/essentials/metrics-charts.md#aggregation). You should use the recommended Aggregation type when reviewing the insights for each ExpressRoute metric.
+Metrics explorer supports sum, maximum, minimum, average, and count as [aggregation types](../azure-monitor/essentials/metrics-charts.md#aggregation). You should use the recommended aggregation type when reviewing the insights for each ExpressRoute metric.
* Sum: The sum of all values captured during the aggregation interval.
* Count: The number of measurements captured during the aggregation interval.
Metrics explorer supports SUM, MAX, MIN, AVG and COUNT as [aggregation types](..
| Metric | Category | Unit | Aggregation Type | Description | Dimensions | Exportable via Diagnostic Settings? |
|--|--|--|--|--|--|--|
-| [Arp Availability](#arp) | Availability | Percent | Average | ARP Availability from MSEE towards all peers. | Peering Type, Peer | Yes |
-| [Bgp Availability](#bgp) | Availability | Percent | Average | BGP Availability from MSEE towards all peers. | Peering Type, Peer | Yes |
+| [ARP Availability](#arp) | Availability | Percent | Average | ARP Availability from MSEE towards all peers. | Peering Type, Peer | Yes |
+| [BGP Availability](#bgp) | Availability | Percent | Average | BGP Availability from MSEE towards all peers. | Peering Type, Peer | Yes |
| [BitsInPerSecond](#circuitbandwidth) | Traffic | BitsPerSecond | Average | Bits ingressing Azure per second | Peering Type | Yes |
| [BitsOutPerSecond](#circuitbandwidth) | Traffic | BitsPerSecond | Average | Bits egressing Azure per second | Peering Type | Yes |
| DroppedInBitsPerSecond | Traffic | BitsPerSecond | Average | Ingress bits of data dropped per second | Peering Type | Yes |
You can also view the bits out per second across both links of the ExpressRoute
Aggregation type: *Avg*
-You can view the line protocol across each link of the ExpressRoute Direct port pair. The Line Protocol indicates if the physical link is up and running over ExpressRoute Direct. Monitor this dashboard and set alerts to know when the physical connection has gone down.
+You can view the line protocol across each link of the ExpressRoute Direct port pair. The Line Protocol indicates if the physical link is up and running over ExpressRoute Direct. Monitor this dashboard and set alerts to know when the physical connection goes down.
:::image type="content" source="./media/expressroute-monitoring-metrics-alerts/line-protocol-per-link.jpg" alt-text="ER Direct line protocol":::
When you deploy an ExpressRoute gateway, Azure manages the compute and functions
* Active flows
* Max flows created per second
-It's highly recommended you set alerts for each of these metrics so that you're aware of when your gateway could be seeing performance issues.
+We highly recommend you set alerts for each of these metrics so that you're aware of when your gateway could be seeing performance issues.
### <a name = "gwbits"></a>Bits received per second - Split by instance
This metric captures inbound bandwidth utilization on the ExpressRoute virtual n
Aggregation type: *Avg*
-You can view the CPU utilization of each gateway instance. The CPU utilization may spike briefly during routine host maintenance but prolong high CPU utilization could indicate your gateway is reaching a performance bottleneck. Increasing the size of the ExpressRoute gateway may resolve this issue. Set an alert for how frequent the CPU utilization exceeds a certain threshold.
+You can view the CPU utilization of each gateway instance. The CPU utilization might spike briefly during routine host maintenance, but prolonged high CPU utilization could indicate your gateway is reaching a performance bottleneck. Increasing the size of the ExpressRoute gateway might resolve this issue. Set an alert for how frequently the CPU utilization exceeds a certain threshold.
:::image type="content" source="./media/expressroute-monitoring-metrics-alerts/cpu-split.jpg" alt-text="Screenshot of CPU utilization - split metrics.":::
This metric captures the number of inbound packets traversing the ExpressRoute g
Aggregation type: *Max*
-This metric shows the number of routes the ExpressRoute gateway is advertising to the circuit. The address spaces may include virtual networks that are connected using VNet peering and uses remote ExpressRoute gateway. You should expect the number of routes to remain consistent unless there are frequent changes to the virtual network address spaces. Set an alert for when the number of advertised routes drop below the threshold for the number of virtual network address spaces you're aware of.
+This metric shows the number of routes the ExpressRoute gateway is advertising to the circuit. The address spaces might include virtual networks that are connected using virtual network peering and use the remote ExpressRoute gateway. You should expect the number of routes to remain consistent unless there are frequent changes to the virtual network address spaces. Set an alert for when the number of advertised routes drops below the threshold for the number of virtual network address spaces you're aware of.
:::image type="content" source="./media/expressroute-monitoring-metrics-alerts/count-of-routes-advertised-to-peer.png" alt-text="Screenshot of count of routes advertised to peer.":::
This metric shows the number of routes the ExpressRoute gateway is advertising t
Aggregation type: *Max*
-This metric shows the number of routes the ExpressRoute gateway is learning from peers connected to the ExpressRoute circuit. These routes can be either from another virtual network connected to the same circuit or learned from on-premises. Set an alert for when the number of learned routes drop below a certain threshold. This could indicate either the gateway is seeing a performance problem or remote peers are no longer advertising routes to the ExpressRoute circuit.
+This metric shows the number of routes the ExpressRoute gateway is learning from peers connected to the ExpressRoute circuit. These routes can be either from another virtual network connected to the same circuit or learned from on-premises. Set an alert for when the number of learned routes drops below a certain threshold. A drop can indicate either that the gateway is seeing a performance problem or that remote peers are no longer advertising routes to the ExpressRoute circuit.
:::image type="content" source="./media/expressroute-monitoring-metrics-alerts/count-of-routes-learned-from-peer.png" alt-text="Screenshot of count of routes learned from peer.":::
This metric shows the number of routes the ExpressRoute gateway is learning from
Aggregation type: *Sum*
-This metric shows the frequency of routes being learned from or advertised to remote peers. You should first investigate your on-premises devices to understand why the network is changing so frequently. A high frequency in routes change could indicate a performance problem on the ExpressRoute gateway where scaling the gateway SKU up may resolve the problem. Set an alert for a frequency threshold to be aware of when your ExpressRoute gateway is seeing abnormal route changes.
+This metric shows the frequency of routes being learned from or advertised to remote peers. You should first investigate your on-premises devices to understand why the network is changing so frequently. A high frequency of route changes could indicate a performance problem on the ExpressRoute gateway, where scaling up the gateway SKU might resolve the problem. Set an alert for a frequency threshold to be aware of when your ExpressRoute gateway is seeing abnormal route changes.
:::image type="content" source="./media/expressroute-monitoring-metrics-alerts/frequency-of-routes-changed.png" alt-text="Screenshot of frequency of routes changed metric.":::
This metric shows the frequency of routes being learned from or advertised to re
Aggregation type: *Max*
-This metric shows the number of virtual machines that are using the ExpressRoute gateway. The number of virtual machines may include VMs from peered virtual networks that use the same ExpressRoute gateway. Set an alert for this metric if the number of VMs goes above a certain threshold that could affect the gateway performance.
+This metric shows the number of virtual machines that are using the ExpressRoute gateway. The number of virtual machines might include VMs from peered virtual networks that use the same ExpressRoute gateway. Set an alert for this metric if the number of VMs goes above a certain threshold that could affect the gateway performance.
:::image type="content" source="./media/expressroute-monitoring-metrics-alerts/number-of-virtual-machines-virtual-network.png" alt-text="Screenshot of number of virtual machines in the virtual network metric.":::
Aggregation type: *Max*
Split by: Gateway Instance and Direction (Inbound/Outbound)
-This metric display maximum number of flows created per second on the ExpressRoute Gateway. Through split at instance level and direction, you can see max flow creation rate per gateway instance and inbound/outbound direction respectively. For more information, see [understand network flow limits](../virtual-network/virtual-machine-network-throughput.md#network-flow-limits).
+This metric displays the maximum number of flows created per second on the ExpressRoute gateway. By splitting at the instance level and by direction, you can see the maximum flow creation rate per gateway instance and per inbound/outbound direction. For more information, see [understand network flow limits](../virtual-network/virtual-machine-network-throughput.md#network-flow-limits).
:::image type="content" source="./media/expressroute-monitoring-metrics-alerts/max-flows-per-second.png" alt-text="Screenshot of the maximum number of flows created per second metrics dashboard.":::
Aggregation type: *Avg* (of percentage of total utilized CPU)
*Granularity: 5 min*
-You can view the CPU utilization of each ExpressRoute Traffic Collector instance. The CPU utilization may spike briefly during routine host maintenance, but prolonged high CPU utilization could indicate your ExpressRoute Traffic Collector is reaching a performance bottleneck.
+You can view the CPU utilization of each ExpressRoute Traffic Collector instance. The CPU utilization might spike briefly during routine host maintenance, but prolonged high CPU utilization could indicate your ExpressRoute Traffic Collector is reaching a performance bottleneck.
**Guidance:** Set an alert for when avg CPU utilization exceeds a certain threshold.
Aggregation type: *Avg* (of percentage of total utilized Memory)
*Granularity: 5 min*
-You can view the memory utilization of each ExpressRoute Traffic Collector instance. Memory utilization may spike briefly during routine host maintenance, but prolonged high memory utilization could indicate your Azure Traffic Collector is reaching a performance bottleneck.
+You can view the memory utilization of each ExpressRoute Traffic Collector instance. Memory utilization might spike briefly during routine host maintenance, but prolonged high memory utilization could indicate your Azure Traffic Collector is reaching a performance bottleneck.
**Guidance:** Set an alert for when avg memory utilization exceeds a certain threshold.
Aggregation type: *Count*
*Granularity: 5 min*
-You can view the count of number of flow records processed by ExpressRoute Traffic Collector, aggregated across ExpressRoute Circuits. Customer can split the metrics across each ExpressRoute Traffic Collector instance or ExpressRoute circuit when multiple circuits are associated to the ExpressRoute Traffic Collector. Monitoring this metric will help you understand if you need to deploy more ExpressRoute Traffic Collector instances or migrate ExpressRoute circuit association from one ExpressRoute Traffic Collector deployment to another.
+You can view the count of flow records processed by ExpressRoute Traffic Collector, aggregated across ExpressRoute circuits. Customers can split the metrics across each ExpressRoute Traffic Collector instance or ExpressRoute circuit when multiple circuits are associated with the ExpressRoute Traffic Collector. Monitoring this metric helps you understand whether you need to deploy more ExpressRoute Traffic Collector instances or migrate ExpressRoute circuit association from one ExpressRoute Traffic Collector deployment to another.
-**Guidance:** Splitting by circuits is recommended when multiple ExpressRoute circuits are associated with an ExpressRoute Traffic Collector deployment. This will help determine the flow count of each ExpressRoute circuit and ExpressRoute Traffic Collector utilization by each ExpressRoute circuit.
+**Guidance:** Splitting by circuits is recommended when multiple ExpressRoute circuits are associated with an ExpressRoute Traffic Collector deployment. This metric helps determine the flow count of each ExpressRoute circuit and ExpressRoute Traffic Collector utilization by each ExpressRoute circuit.
:::image type="content" source="./media/expressroute-monitoring-metrics-alerts/flow-records.png" alt-text="Screenshot of average flow records for an ExpressRoute circuit." lightbox="./media/expressroute-monitoring-metrics-alerts/flow-records.png":::
You can view the count of number of flow records processed by ExpressRoute Traff
:::image type="content" source="./media/expressroute-monitoring-metrics-alerts/select-expressroute-gateway.png" alt-text="Screenshot of the selecting ExpressRoute virtual network gateway from the select a resource page.":::
-1. On the *Select a signal* page, select a metric, resource health, or activity log that you want to be alerted. Depending on the signal you select, you may need to enter additional information such as a threshold value. You may also combine multiple signals into a single alert. Select **Next: Actions >** to define who and how they get notify.
+1. On the *Select a signal* page, select a metric, resource health, or activity log that you want to be alerted on. Depending on the signal you select, you might need to enter additional information such as a threshold value. You can also combine multiple signals into a single alert. Select **Next: Actions >** to define who gets notified and how.
:::image type="content" source="./media/expressroute-monitoring-metrics-alerts/signal.png" alt-text="Screenshot of list of signals that can be alerted for ExpressRoute gateways.":::
-1. Select **+ Select action groups** to choose an existing action group you previously created or select **+ Create action group** to define a new one. In the action group, you determine how notifications get sent and who will receive them.
+1. Select **+ Select action groups** to choose an existing action group you previously created or select **+ Create action group** to define a new one. In the action group, you determine how notifications get sent and who receives them.
:::image type="content" source="./media/expressroute-monitoring-metrics-alerts/action-group.png" alt-text="Screenshot of add action groups page.":::
You can view the count of number of flow records processed by ExpressRoute Traff
### Alerts based on each peering
-After you select a metric, certain metrics allow you to set up dimensions based on peering or a specific peer (virtual networks).
+After you select a metric, certain metrics allow you to set up dimensions based on peering or a specific peer (virtual networks).
### Configure alerts for activity logs on circuits
When selecting signals to be alerted on, you can select **Activity Log** signal
## More metrics in Log Analytics
-You can also view ExpressRoute metrics by going to your ExpressRoute circuit resource and selecting the *Logs* tab. For any metrics you query, the output will contain the columns below.
+You can also view ExpressRoute metrics by going to your ExpressRoute circuit resource and selecting the *Logs* tab. For any metrics you query, the output contains the following columns.
| **Column** | **Type** | **Description** |
|--|--|--|
| TimeGrain | string | PT1M (metric values are pushed every minute) |
-| Count | real | Usually equal to 2 (each MSEE pushes a single metric value every minute) |
+| Count | real | Usually 2 (each MSEE pushes a single metric value every minute) |
| Minimum | real | The minimum of the two metric values pushed by the two MSEEs |
| Maximum | real | The maximum of the two metric values pushed by the two MSEEs |
| Average | real | Equal to (Minimum + Maximum)/2 |
Set up your ExpressRoute connection.
* [Create and modify a circuit](expressroute-howto-circuit-arm.md)
* [Create and modify peering configuration](expressroute-howto-routing-arm.md)
-* [Link a VNet to an ExpressRoute circuit](expressroute-howto-linkvnet-arm.md)
+* [Link a virtual network to an ExpressRoute circuit](expressroute-howto-linkvnet-arm.md)
expressroute Monitor Expressroute https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/monitor-expressroute.md
Previously updated : 01/04/2023 Last updated : 03/31/2024
# Monitoring Azure ExpressRoute
This article describes the monitoring data generated by Azure ExpressRoute. Azur
## ExpressRoute insights
-Some services in Azure have a special focused pre-built monitoring dashboard in the Azure portal that provides a starting point for monitoring your service. These special dashboards are called "insights".
+Some services in Azure have a special focused prebuilt monitoring dashboard in the Azure portal that provides a starting point for monitoring your service. These special dashboards are called *insights*.
ExpressRoute uses Network insights to provide a detailed topology mapping of all ExpressRoute components (peerings, connections, gateways) in relation with one another. Network insights for ExpressRoute also have preloaded metrics dashboard for availability, throughput, packet drops, and gateway metrics. For more information, see [Azure ExpressRoute Insights using Networking Insights](expressroute-network-insights.md).
For reference, you can see a list of [all resource metrics supported in Azure Mo
* To view **Global Reach** metrics, filter by Resource Type *ExpressRoute circuits* and select an ExpressRoute circuit resource that has Global Reach enabled.
* To view **ExpressRoute Direct** metrics, filter Resource Type by *ExpressRoute Ports*.
-Once a metric is selected, the default aggregation will be applied. Optionally, you can apply splitting, which will show the metric with different dimensions.
+Once a metric is selected, the default aggregation is applied. Optionally, you can apply splitting, which shows the metric with different dimensions.
## Analyzing logs
To view these tables, navigate to your ExpressRoute circuit resource and select
Here are some queries that you can enter into the Log search bar to help you monitor your Azure ExpressRoute resources. These queries work with the [new language](../azure-monitor/logs/log-query-overview.md).
-* To query for BGP route table learned over the last 12 hours.
+* To query the Border Gateway Protocol (BGP) route table learned over the last 12 hours:
```Kusto
AzureDiagnostics
```
The following table lists common and recommended alert rules for ExpressRoute.
:::image type="content" source="./media/expressroute-monitoring-metrics-alerts/select-expressroute-gateway.png" alt-text="Screenshot of the selecting ExpressRoute virtual network gateway from the select a resource page.":::
-1. On the *Select a signal* page, select a metric, resource health, or activity log that you want to be alerted. Depending on the signal you select, you may need to enter additional information such as a threshold value. You may also combine multiple signals into a single alert. Select **Next: Actions >** to define who and how they get notify.
+1. On the *Select a signal* page, select a metric, resource health, or activity log that you want to be alerted on. Depending on the signal you select, you might need to enter additional information such as a threshold value. You can also combine multiple signals into a single alert. Select **Next: Actions >** to define who gets notified and how.
:::image type="content" source="./media/expressroute-monitoring-metrics-alerts/signal.png" alt-text="Screenshot of list of signals that can be alerted for ExpressRoute gateways.":::
-1. Select **+ Select action groups** to choose an existing action group you previously created or select **+ Create action group** to define a new one. In the action group, you determine how notifications get sent and who will receive them.
+1. Select **+ Select action groups** to choose an existing action group you previously created or select **+ Create action group** to define a new one. In the action group, you determine how notifications get sent and who receives them.
:::image type="content" source="./media/expressroute-monitoring-metrics-alerts/action-group.png" alt-text="Screenshot of add action groups page.":::
expressroute Site To Site Vpn Over Microsoft Peering https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/site-to-site-vpn-over-microsoft-peering.md
Previously updated : 01/03/2023 Last updated : 03/31/2024
To configure a site-to-site VPN connection over ExpressRoute, you must use Expre
* If you already have an ExpressRoute circuit, but don't have Microsoft peering configured, configure Microsoft peering using the [Create and modify peering for an ExpressRoute circuit](expressroute-howto-routing-arm.md#msft) article.
-Once you've configured your circuit and Microsoft peering, you can easily view it using the **Overview** page in the Azure portal.
+Once you configure your circuit and Microsoft peering, you can easily view it using the **Overview** page in the Azure portal.
:::image type="content" source="./media/site-to-site-vpn-over-microsoft-peering/circuit.png" alt-text="Screenshot of the overview page of an ExpressRoute circuit.":::
Configure a route filter. For steps, see [Configure route filters for Microsoft
### <a name="verifybgp"></a>2.2 Verify BGP routes
-Once you have successfully created Microsoft peering over your ExpressRoute circuit and associated a route filter with the circuit, you can verify the BGP routes received from MSEEs on the PE devices that are peering with the MSEEs. The verification command varies, depending on the operating system of your PE devices.
+Once you've successfully created Microsoft peering over your ExpressRoute circuit and associated a route filter with the circuit, you can verify the BGP routes received from the Microsoft Enterprise Edge (MSEE) routers on the PE devices that peer with the MSEEs. The verification command varies, depending on the operating system of your PE devices.
#### Cisco examples
This example uses a Cisco IOS-XE command. In the example, a virtual routing and
```
show ip bgp vpnv4 vrf 10 summary
```
-The following partial output shows that 68 prefixes were received from the neighbor \*.243.229.34 with the ASN 12076 (MSEE):
+The following partial output shows that 68 prefixes were received from the neighbor \*.243.229.34 with the Autonomous System Number (ASN) 12076 (MSEE):
```
...
```
The following diagram shows the abstracted overview of the example network:
### About the Azure Resource Manager template examples
-In the examples, the VPN gateway and the IPsec tunnel terminations are configured using an Azure Resource Manager template. If you're new to using Resource Manager templates, or to understand the Resource Manager template basics, see [Understand the structure and syntax of Azure Resource Manager templates](../azure-resource-manager/templates/syntax.md). The template in this section creates a green field Azure environment (VNet). However, if you have an existing VNet, you can reference it in the template. If you aren't familiar with VPN gateway IPsec/IKE site-to-site configurations, see [Create a site-to-site connection](../vpn-gateway/vpn-gateway-create-site-to-site-rm-powershell.md).
+In the examples, the VPN gateway and the IPsec tunnel terminations are configured using an Azure Resource Manager template. If you're new to using Resource Manager templates, or want to understand the Resource Manager template basics, see [Understand the structure and syntax of Azure Resource Manager templates](../azure-resource-manager/templates/syntax.md). The template in this section creates a greenfield Azure environment (virtual network). However, if you have an existing virtual network, you can reference it in the template. If you aren't familiar with VPN gateway IPsec/IKE site-to-site configurations, see [Create a site-to-site connection](../vpn-gateway/vpn-gateway-create-site-to-site-rm-powershell.md).
> [!NOTE]
> You do not need to use Azure Resource Manager templates in order to create this configuration. You can create this configuration using the Azure portal or PowerShell.
In this example, the variable declarations correspond to the example network. Wh
```
},
```
-### <a name="vnet"></a>3.2 Create virtual network (VNet)
+### <a name="vnet"></a>3.2 Create virtual network (virtual network)
-If you're associating an existing VNet with the VPN tunnels, you can skip this step.
+If you're associating an existing virtual network with the VPN tunnels, you can skip this step.
```json
{
```
The final action of the script creates IPsec tunnels between the Azure VPN gatew
## <a name="device"></a>4. Configure the on-premises VPN device
-The Azure VPN gateway is compatible with many VPN devices from different vendors. For configuration information and devices that have been validated to work with VPN gateway, see [About VPN devices](../vpn-gateway/vpn-gateway-about-vpn-devices.md).
+The Azure VPN gateway is compatible with many VPN devices from different vendors. For configuration information and devices that are validated to work with VPN gateway, see [About VPN devices](../vpn-gateway/vpn-gateway-about-vpn-devices.md).
When configuring your VPN device, you need the following items:
* A shared key. This value is the same shared key that you specify when creating your site-to-site VPN connection. The examples use a basic shared key. We recommend that you generate a more complex key to use.
* The Public IP address of your VPN gateway. You can view the public IP address by using the Azure portal, PowerShell, or CLI. To find the Public IP address of your VPN gateway using the Azure portal, navigate to Virtual network gateways, then select the name of your gateway.
-Typically eBGP peers are directly connected (often over a WAN connection). However, when you're configuring eBGP over IPsec VPN tunnels via ExpressRoute Microsoft peering, there are multiple routing domains between the eBGP peers. Use the **ebgp-multihop** command to establish the eBGP neighbor relationship between the two not-directly connected peers. The integer that follows ebgp-multihop command specifies the TTL value in the BGP packets. The command **maximum-paths eibgp 2** enables load balancing of traffic between the two BGP paths.
+Typically, eBGP peers are directly connected (often over a WAN connection). However, when you're configuring eBGP over IPsec VPN tunnels via ExpressRoute Microsoft peering, there are multiple routing domains between the eBGP peers. Use the **ebgp-multihop** command to establish the eBGP neighbor relationship between the two not-directly connected peers. The integer that follows the **ebgp-multihop** command specifies the time to live (TTL) value in the BGP packets. The command **maximum-paths eibgp 2** enables load balancing of traffic between the two BGP paths.
### <a name="cisco1"></a>Cisco CSR1000 example
Peer: 52.175.253.112 port 4500 fvrf: (none) ivrf: (none)
Outbound: #pkts enc'ed 477 drop 0 life (KB/Sec) 4607953/437
```
-The line protocol on the Virtual Tunnel Interface (VTI) doesn't change to "up" until IKE phase 2 has completed. The following command verifies the security association:
+The line protocol on the Virtual Tunnel Interface (VTI) doesn't change to "up" until IKE phase 2 completes. The following command verifies the security association:
```
csr1#show crypto ikev2 sa
csr1#show crypto ipsec sa | inc encaps|decaps
#pkts decaps: 746, #pkts decrypt: 746, #pkts verify: 746
```
-### <a name="verifye2e"></a>Verify end-to-end connectivity between the inside network on-premises and the Azure VNet
+### <a name="verifye2e"></a>Verify end-to-end connectivity between the inside network on-premises and the Azure virtual network
If the IPsec tunnels are up and the static routes are correctly set, you should be able to ping the IP address of the remote BGP peer:
Total number of prefixes 2
* [Configure Network Performance Monitor for ExpressRoute](how-to-npm.md)
-* [Add a site-to-site connection to a VNet with an existing VPN gateway connection](../vpn-gateway/add-remove-site-to-site-connections.md)
+* [Add a site-to-site connection to a virtual network with an existing VPN gateway connection](../vpn-gateway/add-remove-site-to-site-connections.md)
frontdoor Apex Domain https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/apex-domain.md
Previously updated : 02/07/2023 Last updated : 03/31/2024
# Apex domains in Azure Front Door
-Apex domains, also called *root domains* or *naked domains*, are at the root of a DNS zone and don't contain subdomains. For example, `contoso.com` is an apex domain.
+Apex domains, also called *root domains* or *naked domains*, are at the root of a Domain Name System (DNS) zone and don't contain subdomains. For example, `contoso.com` is an apex domain.
Azure Front Door supports apex domains, but requires special considerations. This article describes how apex domains work in Azure Front Door.
Azure Front Door doesn't expose the frontend public IP address associated with y
> [!WARNING]
> Don't create an A record with the public IP address of your Azure Front Door endpoint. Your Azure Front Door endpoint's public IP address might change and we don't provide any guarantees that it will remain the same.
-However, this problem can be resolved by using alias records in Azure DNS. Unlike CNAME records, alias records are created at the zone apex. You can point a zone apex record to an Azure Front Door profile that has public endpoints. Multiple application owners can point to the same Azure Front Door endpoint that's used for any other domain within their DNS zone. For example, `contoso.com` and `www.contoso.com` can point to the same Azure Front Door endpoint.
+However, this problem can be resolved by using alias records in Azure DNS. Unlike CNAME records, alias records are created at the zone apex. You can point a zone apex record to an Azure Front Door profile that has public endpoints. Multiple application owners can point to the same Azure Front Door endpoint used for any other domain within their DNS zone. For example, `contoso.com` and `www.contoso.com` can point to the same Azure Front Door endpoint.
-Mapping your apex or root domain to your Azure Front Door profile uses *CNAME flattening*, sometimes called *DNS chasing*. CNAME flattening is where a DNS provider recursively resolves CNAME entries until it resolves an IP address. This functionality is supported by Azure DNS for Azure Front Door endpoints.
+Mapping your apex or root domain to your Azure Front Door profile uses *CNAME flattening*, sometimes called *DNS chasing*. CNAME flattening is where a DNS provider recursively resolves CNAME entries until it resolves an IP address. Azure DNS supports this functionality for Azure Front Door endpoints.
> [!NOTE] > Other DNS providers support CNAME flattening or DNS chasing. However, Azure Front Door recommends using Azure DNS for hosting your apex domains.
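To see CNAME flattening from a client's perspective, you can resolve the apex domain directly: the zone apex answers with A/AAAA records even though no CNAME record exists there. A minimal Python sketch of the lookup, using only the standard library; the domain name is a placeholder:

```python
import socket

# Placeholder: an apex domain whose Azure DNS alias record points at an
# Azure Front Door endpoint; replace with your own domain.
APEX_DOMAIN = "contoso.com"

# Because the alias record is flattened into A/AAAA records at the zone
# apex, an ordinary address lookup succeeds even though no CNAME exists.
for info in socket.getaddrinfo(APEX_DOMAIN, 443, proto=socket.IPPROTO_TCP):
    family, _, _, _, sockaddr = info
    print(family.name, sockaddr[0])
```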
To validate a domain, you need to create a DNS TXT record. The name of the TXT r
For example, suppose you want to use the apex domain `contoso.com` with Azure Front Door. First, you should add the domain to your Azure Front Door profile, and note the TXT record value that you need to use. Then, you should configure a DNS record with the following properties:

| Property | Value |
-|-|-|
+|--|--|
| Record name | `_dnsauth` |
| Record value | *use the value provided by Azure Front Door* |
| Time to live (TTL) | 1 hour |
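After you create the record, you can confirm that the TXT value is visible in public DNS before Azure Front Door revalidates the domain. A minimal sketch using the third-party `dnspython` package; the record name and expected value are placeholders:

```python
import dns.resolver  # pip install dnspython

# Placeholders: your apex domain and the TXT value shown in the portal.
RECORD_NAME = "_dnsauth.contoso.com"
EXPECTED = "token-value-from-portal"

# Query public DNS for the validation record and compare the values.
answers = dns.resolver.resolve(RECORD_NAME, "TXT")
values = [b"".join(rdata.strings).decode() for rdata in answers]
print("validated" if EXPECTED in values else f"not found, saw: {values}")
```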
## Azure Front Door-managed TLS certificate rotation

-When you use an Azure Front Door-managed certificate, Azure Front Door attempts to automatically rotate (renew) the certificate. Before it does so, Azure Front Door checks whether the DNS CNAME record is still pointed to the Azure Front Door endpoint. Apex domains don't have a CNAME record pointing to an Azure Front Door endpoint, so the auto-rotation for managed certificate fails until the domain ownership is revalidated.
+When you use an Azure Front Door-managed certificate, Azure Front Door attempts to automatically rotate (renew) the certificate. Before it does so, Azure Front Door checks whether the DNS CNAME record is still pointed to the Azure Front Door endpoint. Apex domains don't have a CNAME record pointing to an Azure Front Door endpoint, so the autorotation for the managed certificate fails until the domain ownership is revalidated.
Select the **Pending revalidation** link and then select the **Regenerate** button to regenerate the TXT token. After that, add the TXT token to the DNS provider settings.
frontdoor Front Door How To Onboard Apex Domain https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/front-door-how-to-onboard-apex-domain.md
Title: Onboard a root or apex domain to Azure Front Door
-description: Learn how to onboard a root or apex domain to an existing Front Door using the Azure portal.
+description: Learn how to onboard a root or apex domain to an existing Azure Front Door using the Azure portal.
Previously updated : 02/07/2023 Last updated : 03/31/2024 zone_pivot_groups: front-door-tiers
zone_pivot_groups: front-door-tiers
Azure Front Door uses CNAME records to validate domain ownership for the onboarding of custom domains. Azure Front Door doesn't expose the frontend IP address associated with your Front Door profile. So you can't map your apex domain to an IP address if your intent is to onboard it to Azure Front Door.
-The DNS protocol prevents the assignment of CNAME records at the zone apex. For example, if your domain is `contoso.com`; you can create CNAME records for `somelabel.contoso.com`; but you can't create CNAME for `contoso.com` itself. This restriction presents a problem for application owners who have load-balanced applications behind Azure Front Door. Since using a Front Door profile requires creation of a CNAME record, it isn't possible to point at the Front Door profile from the zone apex.
+The Domain Name System (DNS) protocol prevents the assignment of CNAME records at the zone apex. For example, if your domain is `contoso.com`, you can create CNAME records for `somelabel.contoso.com`, but you can't create a CNAME record for `contoso.com` itself. This restriction presents a problem for application owners who load balance applications behind Azure Front Door. Since using an Azure Front Door profile requires creation of a CNAME record, it isn't possible to point at the Azure Front Door profile from the zone apex.
-This problem can be resolved by using alias records in Azure DNS. Unlike CNAME records, alias records are created at the zone apex. Application owners can use it to point their zone apex record to a Front Door profile that has public endpoints. Application owners point to the same Front Door profile that's used for any other domain within their DNS zone. For example, `contoso.com` and `www.contoso.com` can point to the same Front Door profile.
+This problem can be resolved by using alias records in Azure DNS. Unlike CNAME records, alias records are created at the zone apex. Application owners can use them to point their zone apex record to an Azure Front Door profile that has public endpoints. Application owners can point to the same Azure Front Door profile used for any other domain within their DNS zone. For example, `contoso.com` and `www.contoso.com` can point to the same Azure Front Door profile.
-Mapping your apex or root domain to your Front Door profile requires *CNAME flattening* or *DNS chasing*, which is where the DNS provider recursively resolves CNAME entries until it resolves an IP address. This functionality is supported by Azure DNS for Azure Front Door endpoints.
+Mapping your apex or root domain to your Azure Front Door profile requires *CNAME flattening* or *DNS chasing*, which is when the DNS provider recursively resolves CNAME entries until it resolves an IP address. Azure DNS supports this functionality for Azure Front Door endpoints.
> [!NOTE] > There are other DNS providers as well that support CNAME flattening or DNS chasing. However, Azure Front Door recommends using Azure DNS for its customers for hosting their domains.
-You can use the Azure portal to onboard an apex domain on your Azure Front Door and enable HTTPS on it by associating it with a TLS certificate. Apex domains are also referred as *root* or *naked* domains.
+You can use the Azure portal to onboard an apex domain on your Azure Front Door and enable HTTPS on it by associating it with a Transport Layer Security (TLS) certificate. Apex domains are also referred to as *root* or *naked* domains.
::: zone-end
You can use the Azure portal to onboard an apex domain on your Azure Front Door
1. Select **Domains** from under *Settings* on the left side pane for your Azure Front Door profile and then select **+ Add** to add a new custom domain.
- :::image type="content" source="./media/front-door-apex-domain/add-domain.png" alt-text="Screenshot of adding a new domain to Front Door profile.":::
+ :::image type="content" source="./media/front-door-apex-domain/add-domain.png" alt-text="Screenshot of adding a new domain to an Azure Front Door profile.":::
-1. On **Add a domain** page, you'll enter information about the custom domain. You can choose Azure-managed DNS (recommended) or you can choose to use your DNS provider.
+1. On the **Add a domain** page, enter information about the custom domain. You can choose Azure-managed DNS (recommended) or you can choose to use your DNS provider.
- **Azure-managed DNS** - select an existing DNS zone and for *Custom domain*, select **Add new**. Select **APEX domain** from the pop-up and then select **OK** to save.
- :::image type="content" source="./media/front-door-apex-domain/add-custom-domain.png" alt-text="Screenshot of adding a new custom domain to Front Door profile.":::
+ :::image type="content" source="./media/front-door-apex-domain/add-custom-domain.png" alt-text="Screenshot of adding a new custom domain to an Azure Front Door profile.":::
- **Another DNS provider** - make sure the DNS provider supports CNAME flattening and follow the steps for [adding a custom domain](standard-premium/how-to-add-custom-domain.md#add-a-new-custom-domain).
-1. Select the **Pending** validation state. A new page will appear with DNS TXT record information needed to validate the custom domain. The TXT record is in the form of `_dnsauth.<your_subdomain>`.
+1. Select the **Pending** validation state. A new page appears with DNS TXT record information needed to validate the custom domain. The TXT record is in the form of `_dnsauth.<your_subdomain>`.
:::image type="content" source="./media/front-door-apex-domain/pending-validation.png" alt-text="Screenshot of custom domain pending validation.":::
- - **Azure DNS-based zone** - select the **Add** button and a new TXT record with the displayed record value will be created in the Azure DNS zone.
+ - **Azure DNS-based zone** - select the **Add** button to create a new TXT record with the displayed value in the Azure DNS zone.
:::image type="content" source="./media/front-door-apex-domain/validate-custom-domain.png" alt-text="Screenshot of validate a new custom domain."::: - If you're using another DNS provider, manually create a new TXT record of name `_dnsauth.<your_subdomain>` with the record value as shown on the page.
-1. Close the *Validate the custom domain* page and return to the *Domains* page for the Front Door profile. You should see the *Validation state* change from **Pending** to **Approved**. If not, wait up to 10 minutes for changes to reflect. If your validation doesn't get approved, make sure your TXT record is correct and name servers are configured correctly if you're using Azure DNS.
+1. Close the *Validate the custom domain* page and return to the *Domains* page for the Azure Front Door profile. You should see the *Validation state* change from **Pending** to **Approved**. If not, wait up to 10 minutes for the changes to take effect. If your validation doesn't get approved, make sure your TXT record is correct and name servers are configured correctly if you're using Azure DNS.
:::image type="content" source="./media/front-door-apex-domain/validation-approved.png" alt-text="Screenshot of new custom domain passing validation.":::
You can use the Azure portal to onboard an apex domain on your Azure Front Door
- **A DNS provider that supports CNAME flattening** - you must manually enter the alias record name.
-1. Once the alias record gets created and the custom domain is associated to the Azure Front Door endpoint, traffic will start flowing.
+1. Once the alias record gets created and the custom domain is associated with the Azure Front Door endpoint, traffic starts flowing.
:::image type="content" source="./media/front-door-apex-domain/cname-record-added.png" alt-text="Screenshot of completed APEX domain configuration.":::

> [!NOTE]
> * The **DNS state** column is used for the CNAME mapping check. Since an apex domain doesn't support a CNAME record, the DNS state will show 'CNAME record is currently not detected' even after you add the alias record to the DNS provider.
-> * When placing service like an Azure Web App behind Azure Front Door, you need to configure with the web app with the same domain name as the root domain in Front Door. You also need to configure the backend host header with that domain name to prevent a redirect loop.
+> * When placing a service like an Azure Web App behind Azure Front Door, you need to configure the web app with the same domain name as the root domain in Azure Front Door. You also need to configure the backend host header with that domain name to prevent a redirect loop.
> * Apex domains don't have CNAME records pointing to the Azure Front Door profile; therefore, managed certificate autorotation will always fail unless domain validation is completed between rotations.

## Enable HTTPS on your custom domain
Follow the guidance for [configuring HTTPS for your custom domain](standard-prem
1. Select the record **type** as *A* record and then select *Yes* for **Alias record set**. **Alias type** should be set to *Azure resource*.
-1. Select the Azure subscription where your Front Door profile gets hosted. Then select the Front Door resource from the **Azure resource** dropdown.
+1. Select the Azure subscription that contains your Azure Front Door profile. Then select the Azure Front Door resource from the **Azure resource** dropdown.
1. Select **OK** to submit your changes. :::image type="content" source="./media/front-door-apex-domain/front-door-apex-alias-record.png" alt-text="Alias record for zone apex":::
-1. The above step will create a zone apex record pointing to your Front Door resource and also a CNAME record mapping 'afdverify' (example - `afdverify.contosonews.com`) to that will be used for onboarding the domain on your Front Door profile.
+1. The previous step creates a zone apex record that points to your Azure Front Door resource, and also a CNAME record mapping *afdverify* (for example, `afdverify.contosonews.com`) that is used for onboarding the domain on your Azure Front Door profile. A scripted version of this step is sketched after this procedure.
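If you manage DNS programmatically, the same alias record can be created with the Azure SDK. The following Python sketch uses the `azure-mgmt-dns` package; the subscription, resource group, zone, and Front Door resource names are placeholder assumptions, and the exact model fields can vary by SDK version:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.dns import DnsManagementClient
from azure.mgmt.dns.models import RecordSet, SubResource

# Placeholder values; substitute your own subscription, zone, and profile.
SUBSCRIPTION_ID = "00000000-0000-0000-0000-000000000000"
RESOURCE_GROUP = "my-dns-rg"
ZONE = "contosonews.com"
FRONT_DOOR_ID = (
    f"/subscriptions/{SUBSCRIPTION_ID}/resourceGroups/my-fd-rg"
    "/providers/Microsoft.Network/frontDoors/my-front-door"
)

client = DnsManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

# Alias A record at the zone apex ("@") that tracks the Front Door resource.
client.record_sets.create_or_update(
    RESOURCE_GROUP,
    ZONE,
    "@",
    "A",
    RecordSet(ttl=3600, target_resource=SubResource(id=FRONT_DOOR_ID)),
)
```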
-## Onboard the custom domain on your Front Door
+## Onboard the custom domain on your Azure Front Door
-1. On the Front Door designer tab, select on '+' icon on the Frontend hosts section to add a new custom domain.
+1. On the Azure Front Door designer tab, select the **+** icon in the **Frontend hosts** section to add a new custom domain.
1. Enter the root or apex domain name in the custom host name field, example `contosonews.com`.
-1. Once the CNAME mapping from the domain to your Front Door is validated, select on **Add** to add the custom domain.
+1. Once the CNAME mapping from the domain to your Azure Front Door is validated, select **Add** to add the custom domain.
1. Select **Save** to submit the changes.
Follow the guidance for [configuring HTTPS for your custom domain](standard-prem
:::image type="content" source="./media/front-door-apex-domain/front-door-onboard-apex-custom-domain.png" alt-text="Custom domain HTTPS settings"::: > [!WARNING]
- > Front Door managed certificate management type is not currently supported for apex or root domains. The only option available for enabling HTTPS on an apex or root domain for Front Door is using your own custom TLS/SSL certificate hosted on Azure Key Vault.
+ > Azure Front Door managed certificate management type is not currently supported for apex or root domains. The only option available for enabling HTTPS on an apex or root domain for Azure Front Door is using your own custom TLS/SSL certificate hosted on Azure Key Vault.
-1. Ensure that you have setup the right permissions for Front Door to access your key Vault as noted in the UI, before proceeding to the next step.
+1. Ensure that you set up the right permissions for Azure Front Door to access your Key Vault, as noted in the UI, before proceeding to the next step.
1. Choose a **Key Vault account** from your current subscription and then select the appropriate **Secret** and **Secret version** to map to the right certificate.
Follow the guidance for [configuring HTTPS for your custom domain](standard-prem
## Next steps

-- Learn how to [create a Front Door](quickstart-create-front-door.md).
-- Learn [how Front Door works](front-door-routing-architecture.md).
+- Learn how to [create an Azure Front Door profile](quickstart-create-front-door.md).
+- Learn [how Azure Front Door works](front-door-routing-architecture.md).
frontdoor Front Door Http Headers Protocol https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/front-door-http-headers-protocol.md
Previously updated : 01/16/2023 Last updated : 03/27/2024 # Protocol support for HTTP headers in Azure Front Door
-This article outlines the protocol that Front Door supports with parts of the call path (see image). In the following sections, you'll find information about HTTP headers supported by Front Door.
+This article outlines the protocol that Azure Front Door supports with parts of the call path (see image). In the following sections, you can find information about the HTTP headers supported by Azure Front Door.
> [!IMPORTANT]
-> Front Door doesn't certify any HTTP headers that aren't documented here.
+> Azure Front Door doesn't certify any HTTP headers that aren't documented here.
-## From client to the Front Door
+## From client to Azure Front Door
-Azure Front Door accepts most headers for the incoming request without modifying them. Some reserved headers are removed from the incoming request if sent, including headers with the X-FD-* prefix.
+Azure Front Door accepts most headers for the incoming request without modifying them. Some reserved headers are removed from the incoming request if sent, including headers with the `X-FD-*` prefix.
-The debug request header, "X-Azure-DebugInfo", provides extra debugging information about the Front Door. You'll need to send "X-Azure-DebugInfo: 1" request header from the client to the AzureFront Door to receive [optional response headers](#optional-debug-response-headers) when Front Door response to the client.
+The debug request header, `X-Azure-DebugInfo`, provides extra debugging information about Azure Front Door. You need to send the `X-Azure-DebugInfo: 1` request header from the client to Azure Front Door to receive [optional response headers](#optional-debug-response-headers) when Azure Front Door responds to the client.
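For example, a client can opt in to the debug headers like this. A minimal Python sketch using the `requests` package; the endpoint hostname is a placeholder, and which debug headers come back depends on the service behavior described later in this article:

```python
import requests

# Placeholder endpoint; replace with your own Front Door hostname.
URL = "https://contoso.azurefd.net/"

resp = requests.get(URL, headers={"X-Azure-DebugInfo": "1"}, timeout=10)

# The optional debug response headers are only returned when the request
# opts in with X-Azure-DebugInfo: 1.
for name in ("X-Azure-Ref", "X-Cache", "X-Azure-OriginStatusCode"):
    print(name, "=", resp.headers.get(name))
```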
## From the Front Door to the backend
-Azure Front Door includes headers for an incoming request unless they're removed because of restrictions. Front Door also adds the following headers:
+Azure Front Door includes headers for an incoming request unless they're removed because of restrictions. Azure Front Door also appends the following headers:
| Header | Example and description | | - | - |
-| Via | *Via: 1.1 Azure* </br> Front Door adds the client's HTTP version followed by *Azure* as the value for the Via header. This header indicates the client's HTTP version and that Front Door was an intermediate recipient for the request between the client and the backend. |
-| X-Azure-ClientIP | *X-Azure-ClientIP: 127.0.0.1* </br> Represents the client IP address associated with the request being processed. For example, a request coming from a proxy might add the X-Forwarded-For header to indicate the IP address of the original caller. |
-| X-Azure-SocketIP | *X-Azure-SocketIP: 127.0.0.1* </br> Represents the socket IP address associated with the TCP connection that the current request originated from. A request's client IP address might not be equal to its socket IP address because the client IP can be arbitrarily overwritten by a user.|
-| X-Azure-Ref | *X-Azure-Ref: 0zxV+XAAAAABKMMOjBv2NT4TY6SQVjC0zV1NURURHRTA2MTkANDM3YzgyY2QtMzYwYS00YTU0LTk0YzMtNWZmNzA3NjQ3Nzgz* </br> A unique reference string that identifies a request served by Front Door. It's used to search access logs and critical for troubleshooting.|
-| X-Azure-RequestChain | *X-Azure-RequestChain: hops=1* </br> A header that Front Door uses to detect request loops, and users shouldn't take a dependency on it. |
-| X-Azure-FDID | *X-Azure-FDID: 55ce4ed1-4b06-4bf1-b40e-4638452104da* <br/> A reference string that identifies the request came from a specific Front Door resource. The value can be seen in the Azure portal or retrieved using the management API. You can use this header in combination with IP ACLs to lock down your endpoint to only accept requests from a specific Front Door resource. See the FAQ for [more detail](front-door-faq.yml#what-are-the-steps-to-restrict-the-access-to-my-backend-to-only-azure-front-door-) |
-| X-Forwarded-For | *X-Forwarded-For: 127.0.0.1* </br> The X-Forwarded-For (XFF) HTTP header field often identifies the originating IP address of a client connecting to a web server through an HTTP proxy or load balancer. If there's an existing XFF header, then Front Door appends the client socket IP to it or adds the XFF header with the client socket IP. |
-| X-Forwarded-Host | *X-Forwarded-Host: contoso.azurefd.net* </br> The X-Forwarded-Host HTTP header field is a common method used to identify the original host requested by the client in the Host HTTP request header. This is because the host name from Front Door may differ for the backend server handling the request. Any previous value will be overridden by Front Door. |
-| X-Forwarded-Proto | *X-Forwarded-Proto: http* </br> The X-Forwarded-Proto HTTP header field is often used to identify the originating protocol of an HTTP request. Front Door based on configuration might communicate with the backend by using HTTPS. This is true even if the request to the reverse proxy is HTTP. Any previous value will be overridden by Front Door. |
-| X-FD-HealthProbe | X-FD-HealthProbe HTTP header field is used to identify the health probe from Front Door. If this header is set to 1, the request is from the health probe. It can be used to restrict access from Front Door with a particular value for the X-Forwarded-Host header field. |
+| Via | `Via: 1.1 Azure` </br> Front Door adds the client's HTTP version followed by *Azure* as the value for the Via header. This header indicates the client's HTTP version and that Front Door was an intermediate recipient for the request between the client and the backend. |
+| X-Azure-ClientIP | `X-Azure-ClientIP: 127.0.0.1` </br> Represents the client IP address associated with the request being processed. For example, a request coming from a proxy might add the X-Forwarded-For header to indicate the IP address of the original caller. |
+| X-Azure-SocketIP | `X-Azure-SocketIP: 127.0.0.1` </br> Represents the socket IP address associated with the TCP connection that the current request originated from. A request's client IP address might not be equal to its socket IP address because the client IP can be arbitrarily overwritten by a user.|
+| X-Azure-Ref | `X-Azure-Ref: 0zxV+XAAAAABKMMOjBv2NT4TY6SQVjC0zV1NURURHRTA2MTkANDM3YzgyY2QtMzYwYS00YTU0LTk0YzMtNWZmNzA3NjQ3Nzgz` </br> A unique reference string that identifies a request served by Azure Front Door. This string is used to search access logs and critical for troubleshooting.|
+| X-Azure-RequestChain | `X-Azure-RequestChain: hops=1` </br> A header that Front Door uses to detect request loops, and users shouldn't take a dependency on it. |
+| X-Azure-FDID | `X-Azure-FDID: 55ce4ed1-4b06-4bf1-b40e-4638452104da` <br/> A reference string that identifies the request came from a specific Front Door resource. The value can be seen in the Azure portal or retrieved using the management API. You can use this header in combination with IP ACLs to lock down your endpoint to only accept requests from a specific Front Door resource. See the FAQ for [more detail](front-door-faq.yml#what-are-the-steps-to-restrict-the-access-to-my-backend-to-only-azure-front-door-) |
+| X-Forwarded-For | `X-Forwarded-For: 127.0.0.1` </br> The X-Forwarded-For (XFF) HTTP header field often identifies the originating IP address of a client connecting to a web server through an HTTP proxy or load balancer. If there's an existing XFF header, then Front Door appends the client socket IP to it or adds the XFF header with the client socket IP. |
+| X-Forwarded-Host | `X-Forwarded-Host: contoso.azurefd.net` </br> The X-Forwarded-Host HTTP header field is a common method used to identify the original host requested by the client in the Host HTTP request header. This is because the host name from Azure Front Door might differ for the backend server handling the request. Any previous value is overridden by Azure Front Door. |
+| X-Forwarded-Proto | `X-Forwarded-Proto: http` </br> The `X-Forwarded-Proto` HTTP header field is often used to identify the originating protocol of an HTTP request. Depending on configuration, Azure Front Door might communicate with the backend by using HTTPS, even if the request to the reverse proxy is HTTP. Any previous value is overridden by Azure Front Door. |
+| X-FD-HealthProbe | `X-FD-HealthProbe` HTTP header field is used to identify the health probe from Front Door. If this header is set to 1, the request is from the health probe. It can be used to restrict access from Front Door with a particular value for the `X-Forwarded-Host` header field. |
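To act on these headers at the origin, for example to accept traffic only from your own Front Door resource via `X-Azure-FDID` as the table suggests, a backend can check the header on every request. A minimal Flask sketch; the Front Door ID value is a placeholder taken from the example above:

```python
from flask import Flask, abort, request

app = Flask(__name__)

# Placeholder: the Front Door ID shown in the Azure portal for your profile.
EXPECTED_FDID = "55ce4ed1-4b06-4bf1-b40e-4638452104da"

@app.before_request
def require_front_door():
    # Reject requests that didn't come through the expected Front Door
    # resource. Combine this check with IP ACLs for defense in depth.
    if request.headers.get("X-Azure-FDID") != EXPECTED_FDID:
        abort(403)

@app.route("/")
def index():
    client_chain = request.headers.get("X-Forwarded-For", request.remote_addr)
    return f"hello from origin; client chain: {client_chain}"
```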
## From the Front Door to the client
Any headers sent to Azure Front Door from the backend are also passed through to
| Header | Example and description | | - | - |
-| X-Azure-Ref | *X-Azure-Ref: 0zxV+XAAAAABKMMOjBv2NT4TY6SQVjC0zV1NURURHRTA2MTkANDM3YzgyY2QtMzYwYS00YTU0LTk0YzMtNWZmNzA3NjQ3Nzgz* </br> This is a unique reference string that identifies a request served by Front Door, which is critical for troubleshooting as it's used to search access logs.|
-| X-Cache | *X-Cache:* This header describes the caching status of the request. For more information, see [Caching with Azure Front Door](front-door-caching.md#response-headers). |
+| X-Azure-Ref | `X-Azure-Ref: 0zxV+XAAAAABKMMOjBv2NT4TY6SQVjC0zV1NURURHRTA2MTkANDM3YzgyY2QtMzYwYS00YTU0LTk0YzMtNWZmNzA3NjQ3Nzgz` </br> This is a unique reference string that identifies a request served by Front Door, which is critical for troubleshooting as it's used to search access logs.|
+| X-Cache | `X-Cache:` This header describes the caching status of the request. For more information, see [Caching with Azure Front Door](front-door-caching.md#response-headers). |
### Optional debug response headers
-You need to send "X-Azure-DebugInfo: 1" request header to enable the following optional response headers.
+You need to send the `X-Azure-DebugInfo: 1` request header to enable the following optional response headers.
| Header | Example and description | | - | - |
-| X-Azure-OriginStatusCode | *X-Azure-OriginStatusCode: 503* </br> This header contains the HTTP status code returned by the backend. Using this header you can identify the HTTP status code returned by the application running in your backend without going through backend logs. This status code might be different from the HTTP status code in the response sent to the client by Front Door. This header allows you to determine if the backend is misbehaving or if the issue is with the Front Door service. |
-| X-Azure-InternalError | This header will contain the error code that Front Door comes across when processing the request. This error indicates the issue is internal to the Front Door service/infrastructure. Report issue to support. |
-| X-Azure-ExternalError | *X-Azure-ExternalError: 0x830c1011, The certificate authority is unfamiliar.* </br> This header shows the error code that Front Door servers come across while establishing connectivity to the backend server to process a request. This header will help identify issues in the connection between Front Door and the backend application. This header will include a detailed error message to help you identify connectivity issues to your backend (for example, DNS resolution, invalid cert, and so on.). |
+| X-Azure-OriginStatusCode | `X-Azure-OriginStatusCode: 503` </br> This header contains the HTTP status code returned by the backend. Using this header you can identify the HTTP status code returned by the application running in your backend without going through backend logs. This status code might be different from the HTTP status code in the response sent to the client by Front Door. This header allows you to determine if the backend is misbehaving or if the issue is with the Front Door service. |
+| X-Azure-InternalError | This header contains the error code that Azure Front Door comes across when processing the request. This error indicates the issue is internal to the Azure Front Door service/infrastructure. Report the issue to support. |
+| X-Azure-ExternalError | `X-Azure-ExternalError: 0x830c1011, The certificate authority is unfamiliar` </br> This header shows the error code that Front Door servers come across while establishing connectivity to the backend server to process a request. This header helps identify issues in the connection between Front Door and the backend application. This header includes a detailed error message to help you identify connectivity issues to your backend (for example, DNS resolution, invalid certificate, and so on). |
## Next steps
frontdoor Front Door Wildcard Domain https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/front-door-wildcard-domain.md
Previously updated : 02/07/2023 Last updated : 03/31/2024 zone_pivot_groups: front-door-tiers
By using wildcard domains, you can simplify the configuration of your Azure Fron
Wildcard domains give you several advantages, including: -- You don't need to onboard each subdomain in your Azure Front Door profile. For example, suppose you create new subdomains every customer, and route all customers' requests to a single origin group. Whenever you add a new customer, Azure Front Door understands how to route traffic to your origin group even though the subdomain hasn't been explicitly configured.-- You don't need to generate a new TLS certificate, or manage any subdomain-specific HTTPS settings, to bind a certificate for each subdomain.
+- You don't need to onboard each subdomain in your Azure Front Door profile. For example, suppose you create a new subdomain for every customer and route all customers' requests to a single origin group. Whenever you add a new customer, Azure Front Door understands how to route traffic to your origin group even though the subdomain isn't explicitly configured.
+- You don't need to generate a new Transport Layer Security (TLS) certificate, or manage any subdomain-specific HTTPS settings, to bind a certificate for each subdomain.
- You can use a single web application firewall (WAF) policy for all of your subdomains. Commonly, wildcard domains are used to support software as a service (SaaS) solutions, and other multitenant applications. When you build these application types, you need to give special consideration to how you route traffic to your origin servers. For more information, see [Use Azure Front Door in a multitenant solution](/azure/architecture/guide/multitenant/service/front-door).
Commonly, wildcard domains are used to support software as a service (SaaS) solu
## Add a wildcard domain and certificate binding
-You can add a wildcard domain following similar steps to those for subdomains. For more information about adding a subdomain to Azure Front Door, see [Configure a custom domain on Azure Front Door using the Azure portal](standard-premium/how-to-add-custom-domain.md).
+You can add a wildcard domain by following steps similar to those for subdomains. For more information about adding a subdomain to Azure Front Door, see [Configure a custom domain on Azure Front Door using the Azure portal](standard-premium/how-to-add-custom-domain.md).
> [!NOTE] > * Azure DNS supports wildcard records.
Subdomains like `www.image.contoso.com` aren't a single-level subdomain of `*.co
## Adding wildcard domains
-You can add a wildcard domain under the section for front-end hosts or domains. Similar to subdomains, Azure Front Door (classic) validates that there's CNAME record mapping for your wildcard domain. This DNS mapping can be a direct CNAME record mapping like `*.contoso.com` mapped to `endpoint.azurefd.net`. Or you can use afdverify temporary mapping. For example, `afdverify.contoso.com` mapped to `afdverify.endpoint.azurefd.net` validates the CNAME record map for the wildcard.
+You can add a wildcard domain under the section for front-end hosts or domains. Similar to subdomains, Azure Front Door (classic) validates that there's CNAME record mapping for your wildcard domain. This Domain Name System (DNS) mapping can be a direct CNAME record mapping like `*.contoso.com` mapped to `endpoint.azurefd.net`. Or you can use a temporary afdverify mapping. For example, `afdverify.contoso.com` mapped to `afdverify.endpoint.azurefd.net` validates the CNAME record map for the wildcard.
> [!NOTE] > Azure DNS supports wildcard records.
If a subdomain is added for a wildcard domain that already has a certificate ass
::: zone pivot="front-door-standard-premium"
-WAF policies can be attached to wildcard domains, similar to other domains. A different WAF policy can be applied to a subdomain of a wildcard domain. Subdomains will automatically inherit the WAF policy from the wildcard domain if there is no explicit WAF policy associated to the subdomain. However, if the subdomain is added to a different profile from the wildcard domain profile, the subdomain cannot inherit the WAF policy associated with the wildcard domain.
+WAF policies can be attached to wildcard domains, similar to other domains. A different WAF policy can be applied to a subdomain of a wildcard domain. Subdomains automatically inherit the WAF policy from the wildcard domain if there's no explicit WAF policy associated with the subdomain. However, if the subdomain is added to a different profile from the wildcard domain profile, the subdomain can't inherit the WAF policy associated with the wildcard domain.
::: zone-end
frontdoor How To Configure Caching https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/how-to-configure-caching.md
Title: Configure caching
description: This article shows you how to configure caching on Azure Front Door. -+ Previously updated : 01/16/2023 Last updated : 03/31/2024
Before you can create an Azure Front Door endpoint with Front Door manager, you
To create an Azure Front Door profile and endpoint, see [Create an Azure Front Door profile](create-front-door-portal.md).
-Caching can significantly decrease latency and reduce the load on origin servers. However, not all types of traffic can benefit from caching. Static assets such as images, CSS, and JavaScript files are ideal for caching. While dynamic assets, such as authenticated API endpoints, shouldn't be cached to prevent the leakage of personal information. It's recommended to have separate routes for static and dynamic assets, with caching disabled for the latter.
+Caching can significantly decrease latency and reduce the load on origin servers. However, not all types of traffic can benefit from caching. Static assets such as images, CSS, and JavaScript files are ideal for caching. Dynamic assets, such as authenticated API endpoints, shouldn't be cached, to prevent the leakage of personal information. We recommend having separate routes for static and dynamic assets, with caching disabled for the latter.
> [!WARNING]
> Before you enable caching, thoroughly review the caching documentation, and test all possible scenarios before enabling caching. As noted previously, with misconfiguration you can inadvertently cache user-specific data that can be shared by multiple users, resulting in privacy incidents.
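One common way to keep dynamic responses out of any cache, including Azure Front Door's, is to have the origin mark them explicitly. A minimal Flask sketch of the separate-routes pattern described above; the route paths and payloads are illustrative only:

```python
from flask import Flask, jsonify, make_response

app = Flask(__name__)

@app.route("/static-demo/site.css")
def static_asset():
    # Static asset: safe to cache at the edge for a day.
    resp = make_response("body { color: #333; }")
    resp.headers["Content-Type"] = "text/css"
    resp.headers["Cache-Control"] = "public, max-age=86400"
    return resp

@app.route("/api/profile")
def profile():
    # Authenticated, user-specific data: instruct caches to never store it.
    resp = make_response(jsonify({"user": "alice", "plan": "premium"}))
    resp.headers["Cache-Control"] = "no-store"
    return resp
```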
Caching can significantly decrease latency and reduce the load on origin servers
* Learn about the use of [origins and origin groups](origin.md) in an Azure Front Door configuration. * Learn about [rules match conditions](rules-match-conditions.md) in an Azure Front Door rule set.
-* Learn more about [policy settings](../web-application-firewall/afds/waf-front-door-policy-settings.md) for WAF with Azure Front Door.
+* Learn more about [policy settings](../web-application-firewall/afds/waf-front-door-policy-settings.md) for Web Application Firewall (WAF) with Azure Front Door.
* Learn how to create [custom rules](../web-application-firewall/afds/waf-front-door-custom-rules.md) to protect your Azure Front Door profile.
frontdoor How To Enable Private Link Storage Static Website https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/how-to-enable-private-link-storage-static-website.md
Previously updated : 03/03/2023 Last updated : 03/31/2024
In this section, you map the Private Link service to a private endpoint created
| Priority | Different origin can have different priorities to provide primary, secondary, and backup origins. | | Weight | 1000 (default). Assign weights to your different origin when you want to distribute traffic.| | Region | Select the region that is the same or closest to your origin. |
- | Target sub resource | The type of sub-resource for the resource selected previously that your private endpoint can access. You can select *web* or *web_secondary*. |
+ | Target sub resource | The type of subresource for the resource selected previously that your private endpoint can access. You can select *web* or *web_secondary*. |
| Request message | Custom message to see while approving the Private Endpoint. | 1. Then select **Add** to save your configuration. Then select **Update** to save your changes.
When creating a private endpoint connection to the storage static website's seco
:::image type="content" source="./media/how-to-enable-private-link-storage-static-website/private-endpoint-storage-static-website-secondary.png" alt-text="Screenshot of enabling private link to a storage static website secondary.":::
-Once the origin has been added and the private endpoint connection has been approved, you can test your private link connection to your storage static website.
+Once the origin is added and the private endpoint connection is approved, you can test your private link connection to your storage static website.
## Next steps
frontdoor How To Compression https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/standard-premium/how-to-compression.md
Previously updated : 01/16/2023 Last updated : 03/31/2024 # Improve performance by compressing files in Azure Front Door
-File compression is an effective method to improve file transfer speed and increase page-load performance. The compression reduces the size of the file before it's sent by the server. File compression can reduce bandwidth costs and provide a better experience for your users.
+File compression is an effective method to improve file transfer speed and increase page-load performance. The server compresses the file to reduce its size before sending it. File compression can reduce bandwidth costs and provide a better experience for your users.
There are two ways to enable file compression:
There are two ways to enable file compression:
## Enabling compression
-> [!Note]
+> [!NOTE]
> In Azure Front Door, compression is part of **Enable Caching** in Route. Only when you **Enable Caching** can you take advantage of compression in Azure Front Door. You can enable compression in the following ways: * During quick create - When you enable caching, you can enable compression.
-* During custom create - Enable caching and compression when you're adding a route.
+* During custom create - Enable caching and compression when you're adding a route.
* In Front Door manager. * On the Optimization page.
You can enable compression in the following ways:
1. Within the endpoint, select the **route** you want to enable compression on.
- :::image type="content" source="../media/how-to-compression/front-door-compression-endpoint-manager-1.png" alt-text="Screenshot of the Front Door manager landing page." lightbox="../media/how-to-compression/front-door-compression-endpoint-manager-1-expanded.png":::
+ :::image type="content" source="../media/how-to-compression/front-door-compression-endpoint-manager-1.png" alt-text="Screenshot of the Azure Front Door manager landing page." lightbox="../media/how-to-compression/front-door-compression-endpoint-manager-1-expanded.png":::
1. Ensure **Enable caching** is checked, then select the checkbox for **Enable compression**.
- :::image type="content" source="../media/how-to-compression/front-door-compression-endpoint-manager-2.png" alt-text="Screenshot of Front Door Manager showing the 'Enable compression' radio button.":::
+ :::image type="content" source="../media/how-to-compression/front-door-compression-endpoint-manager-2.png" alt-text="Screenshot of Azure Front Door Manager showing the 'Enable compression' radio button.":::
1. Select **Update** to save the configuration.
You can enable compression in the following ways:
:::image type="content" source="../media/how-to-compression/front-door-compression-endpoint-manager-2.png" alt-text="Screenshot of the Optimizations page showing the 'Enable compression' radio button.":::
-1. Click **Update**.
+1. Select **Update**.
## Modify compression content type
You can modify the default list of MIME types on Optimizations page.
## Disabling compression You can disable compression in the following ways:
-* Disable compression in Front Door manager route.
+* Disable compression in Azure Front Door manager route.
* Disable compression in Optimizations page.
-### Disable compression in Front Door manager
+### Disable compression in Azure Front Door manager
1. From the Azure Front Door Standard/Premium profile page, go to **Front Door manager** under Settings.
frontdoor How To Enable Private Link Storage Account https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/standard-premium/how-to-enable-private-link-storage-account.md
Previously updated : 03/18/2022 Last updated : 03/31/2024 # Connect Azure Front Door Premium to a storage account origin with Private Link
-This article will guide you through how to configure Azure Front Door Premium tier to connect to your storage account origin privately using the Azure Private Link service.
+This article guides you through configuring the Azure Front Door Premium tier to connect privately to your storage account origin by using the Azure Private Link service.
## Prerequisites
Sign in to the [Azure portal](https://portal.azure.com).
## Enable Private Link to a storage account
-In this section, you'll map the Private Link service to a private endpoint created in Azure Front Door's private network.
+In this section, you map the Private Link service to a private endpoint created in Azure Front Door's private network.
1. Within your Azure Front Door Premium profile, under *Settings*, select **Origin groups**.
In this section, you'll map the Private Link service to a private endpoint creat
:::image type="content" source="../media/how-to-enable-private-link-storage-account/private-endpoint-storage-account.png" alt-text="Screenshot of enabling private link to a storage account.":::
-1. The table below has information of what values to select in the respective fields while enabling private link with Azure Front Door. Select or enter the following settings to configure the storage blob you want Azure Front Door Premium to connect with privately.
+1. The following table describes the values to select in the respective fields when enabling Private Link with Azure Front Door. Select or enter the following settings to configure the storage blob you want Azure Front Door Premium to connect with privately.
| Setting | Value | | - | -- |
In this section, you'll map the Private Link service to a private endpoint creat
| Priority | Different origin can have different priorities to provide primary, secondary, and backup origins. | | Weight | 1000 (default). Assign weights to your different origin when you want to distribute traffic.| | Region | Select the region that is the same or closest to your origin. |
- | Target sub resource | The type of sub-resource for the resource selected above that your private endpoint will be able to access. You can select *blob* or *web*. |
+ | Target sub resource | The type of subresource for the resource selected previously that your private endpoint can access. You can select *blob* or *web*. |
| Request message | Custom message to see while approving the Private Endpoint. | 1. Then select **Add** to save your configuration. Then select **Update** to save the origin group settings.
In this section, you'll map the Private Link service to a private endpoint creat
:::image type="content" source="../media/how-to-enable-private-link-storage-account/private-endpoint-pending-approval.png" alt-text="Screenshot of pending storage private endpoint request.":::
-1. Once approved, it should look like the screenshot below. It will take a few minutes for the connection to fully establish. You can now access your storage account from Azure Front Door Premium.
+1. Once approved, it should look like the following screenshot. It takes a few minutes for the connection to fully establish. You can now access your storage account from Azure Front Door Premium.
:::image type="content" source="../media/how-to-enable-private-link-storage-account/private-endpoint-approved.png" alt-text="Screenshot of approved storage endpoint request.":::
frontdoor How To Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/standard-premium/how-to-logs.md
Title: 'Logs - Azure Front Door'
+ Title: Configure Azure Front Door logs
description: This article explains how to configure Azure Front Door logs. Previously updated : 02/23/2023 Last updated : 03/27/2024
Azure Front Door captures several types of logs. Logs can help you monitor your application, track requests, and debug your Front Door configuration. For more information about Azure Front Door's logs, see [Monitor metrics and logs in Azure Front Door](../front-door-diagnostics.md).
-Access logs, health probe logs, and WAF logs aren't enabled by default. In this article, you'll learn how to enable diagnostic logs for your Azure Front Door profile.
+Access logs, health probe logs, and Web Application Firewall (WAF) logs aren't enabled by default. In this article, you learn how to enable diagnostic logs for your Azure Front Door profile.
## Configure logs

1. Sign in to the [Azure portal](https://portal.azure.com).
-1. Search for Azure Front Door and select the Azure Front Door profile.
+1. Search for **Azure Front Door** and then select the relevant Azure Front Door profile.
-1. In the profile, go to **Monitoring**, select **Diagnostic Setting**. Select **Add diagnostic setting**.
+1. Within the profile, navigate to **Monitoring**, select **Diagnostic Setting** and then choose **Add diagnostic setting**.
:::image type="content" source="../media/how-to-logging/front-door-logging-1.png" alt-text="Screenshot of diagnostic settings landing page.":::

1. Under **Diagnostic settings**, enter a name for **Diagnostic settings name**.
-1. Select theΓÇ»**log** from **FrontDoorAccessLog**, **FrontDoorHealthProbeLog**, and **FrontDoorWebApplicationFirewallLog**.
+1. Select the **log** options for **FrontDoorAccessLog**, **FrontDoorHealthProbeLog**, and **FrontDoorWebApplicationFirewallLog**.
-1. Select theΓÇ»**Destination details**. Destination options are:
+1. Select the **Destination details**. The destination options are:
* **Send to Log Analytics** * Azure Log Analytics in Azure Monitor is best used for general real-time monitoring and analysis of Azure Front Door performance.
Access logs, health probe logs, and WAF logs aren't enabled by default. In this
* Select the *Subscription, Event hub namespace, Event hub name (optional)*, and *Event hub policy name*. > [!TIP]
- > Most Azure customers use Log Analytics.
+ > Microsoft recommends using Log Analytics for real-time monitoring and analysis of Azure Front Door performance.
:::image type="content" source="../media/how-to-logging/front-door-logging-2.png" alt-text="Screenshot of diagnostic settings page.":::
-1. Click on **Save**.
+1. Select **Save** to begin logging.
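The same diagnostic setting can also be scripted. Below is a minimal sketch using the `azure-mgmt-monitor` Python package; the resource IDs are placeholders, and the model shapes can differ slightly between SDK versions:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.monitor import MonitorManagementClient
from azure.mgmt.monitor.models import DiagnosticSettingsResource, LogSettings

# Placeholder IDs; substitute your own subscription, profile, and workspace.
SUBSCRIPTION_ID = "00000000-0000-0000-0000-000000000000"
PROFILE_ID = (
    f"/subscriptions/{SUBSCRIPTION_ID}/resourceGroups/my-rg"
    "/providers/Microsoft.Cdn/profiles/my-front-door"
)
WORKSPACE_ID = (
    f"/subscriptions/{SUBSCRIPTION_ID}/resourceGroups/my-rg"
    "/providers/Microsoft.OperationalInsights/workspaces/my-logs"
)

client = MonitorManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

# Enable the three Front Door log categories and send them to Log Analytics.
client.diagnostic_settings.create_or_update(
    resource_uri=PROFILE_ID,
    name="frontdoor-diagnostics",
    parameters=DiagnosticSettingsResource(
        workspace_id=WORKSPACE_ID,
        logs=[
            LogSettings(category=category, enabled=True)
            for category in (
                "FrontDoorAccessLog",
                "FrontDoorHealthProbeLog",
                "FrontDoorWebApplicationFirewallLog",
            )
        ],
    ),
)
```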
## View your activity logs

To view activity logs:
-1. Select your Front Door profile.
+1. Select your Azure Front Door profile.
1. Select **Activity log.**
frontdoor How To Monitor Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/standard-premium/how-to-monitor-metrics.md
Previously updated : 02/23/2023 Last updated : 03/31/2024 # Real-time monitoring in Azure Front Door
-Azure Front Door is integrated with Azure Monitor. You can use metrics in real time to measure traffic to your application, and to track, troubleshoot, and debug issues.
+Azure Front Door is integrated with Azure Monitor. You can use metrics in real time to measure traffic to your application, and to track, troubleshoot, and debug issues.
-You can also configure alerts for each metric such as a threshold for 4XXErrorRate or 5XXErrorRate. When the error rate exceeds the threshold, it will trigger an alert as configured. For more information, see [Create, view, and manage metric alerts using Azure Monitor](../../azure-monitor/alerts/alerts-metric.md).
+You can also configure alerts for each metric such as a threshold for 4XXErrorRate or 5XXErrorRate. When the error rate exceeds the threshold, it triggers an alert as configured. For more information, see [Create, view, and manage metric alerts using Azure Monitor](../../azure-monitor/alerts/alerts-metric.md).
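As a concrete illustration of the threshold logic (not the Azure Monitor implementation itself), the error-rate metric compares the share of 5XX responses in a time window against your configured threshold, and an alert fires while the rate stays above it. A small Python sketch with made-up numbers:

```python
def error_rate(error_count: int, total_count: int) -> float:
    """Percentage of requests in the window that returned errors."""
    return 0.0 if total_count == 0 else 100.0 * error_count / total_count

# Hypothetical five-minute window: 12,000 requests, 900 of them 5XX.
THRESHOLD_PERCENT = 5.0
rate = error_rate(900, 12_000)  # 7.5%

if rate > THRESHOLD_PERCENT:
    print(f"5XXErrorRate {rate:.1f}% exceeds {THRESHOLD_PERCENT}%: alert fires")
```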
## Access metrics in the Azure portal
You can also configure alerts for each metric such as a threshold for 4XXErrorRa
1. Select **New alert rule** for metrics listed in Metrics section.
-Alert will be charged based on Azure Monitor. For more information about alerts, see [Azure Monitor alerts](../../azure-monitor/alerts/alerts-overview.md).
+Alerts are charged based on Azure Monitor pricing. For more information about alerts, see [Azure Monitor alerts](../../azure-monitor/alerts/alerts-overview.md).
## Next steps
frontdoor How To Reports https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/standard-premium/how-to-reports.md
Previously updated : 02/23/2023 Last updated : 03/31/2024
Reports support any selected date range from the previous 90 days. With data poi
- **Domains** - Select one or more endpoints or custom domains. By default, all endpoints and custom domains are selected.
- * If you delete an endpoint or a custom domain in one profile and then recreate the same endpoint or domain in another profile, the report counts the new endpoint as a second endpoint.
- * If you delete a custom domain and bind it to a different endpoint, the behavior depends on how you view the report. If you view the report by custom domain then they'll be treated as one custom domain. If you view the report by endpoint, they'll be treated as separate items.
+ * If you delete an endpoint or a custom domain in one profile and then recreate the same endpoint or domain in another profile, the report counts the new endpoint as a different endpoint.
+ * If you delete a custom domain and bind it to a different endpoint, the behavior depends on how you view the report. If you view the report by custom domains, then they're treated as one custom domain. If you view the report by endpoint, they're treated as separate items.
:::image type="content" source="../media/how-to-reports/front-door-reports-dimension-domain.png" alt-text="Screenshot of Reports for domain dimension.":::
The **traffic by domain** report provides a grid view of all the domains under t
:::image type="content" source="../media/how-to-reports/front-door-reports-landing-page.png" alt-text="Screenshot of the landing page for reports.":::
-In this report you can view:
+In this report, you can view:
* Request counts * Data transferred out from Azure Front Door to client
-* Requests with status code (3XX, 4XX and 5XX) of each domain
+* Requests with status code (3XX, 4XX, and 5XX) of each domain
Domains include endpoint domains and custom domains.
The following items are included in the reports:
* A world map view of the top 50 countries/regions by data transferred out or requests of your choice. * Two line charts showing a trend view of the top five countries/regions by data transferred out and requests of your choice.
-* A grid of the top countries/regions with corresponding data transferred out from Azure Front Door to clients, the percentage of data transferred out, the number of requests, the percentage of requests by the country/region, cache hit ratio, 4XX response code counts, and 5XX response code counts.
+* A grid of the top countries or regions with corresponding data transferred out from Azure Front Door to clients, the percentage of data transferred out, the number of requests, the percentage of requests by the country or region, cache hit ratio, 4XX response code counts, and 5XX response code counts.
## Caching report
The caching report includes:
Cache hits/misses describe the request number cache hits and cache misses for client requests.
-* Hits: the client requests that are served directly from Azure Front Door edge PoPs. Refers to those requests whose values for CacheStatus in the raw access logs are *HIT*, *PARTIAL_HIT*, or *REMOTE_HIT*.
-* Miss: the client requests that are served by Azure Front Door edge POPs fetching contents from origin. Refers to those requests whose values for the field CacheStatus in the raw access raw logs are *MISS*.
+* Hits: client requests that get served directly from Azure Front Door edge PoPs. Refers to those requests whose values for CacheStatus in the raw access logs are *HIT*, *PARTIAL_HIT*, or *REMOTE_HIT*.
+* Miss: client requests that get served by Azure Front Door edge POPs fetching contents from origin. Refers to those requests whose values for the field CacheStatus in the raw access raw logs are *MISS*.
**Cache hit ratio** describes the percentage of cached requests that are served from the edge directly. The formula of the cache hit ratio is: `(HIT + PARTIAL_HIT + REMOTE_HIT) / (HIT + MISS + PARTIAL_HIT + REMOTE_HIT) * 100%`.
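Expressed in code, the same calculation looks like the following sketch; the counter names mirror the `CacheStatus` values from the access logs, and the sample numbers are invented:

```python
def cache_hit_ratio(hit: int, partial_hit: int, remote_hit: int, miss: int) -> float:
    """Percentage of eligible requests served from an Azure Front Door edge."""
    served_from_edge = hit + partial_hit + remote_hit
    total = served_from_edge + miss
    return 0.0 if total == 0 else 100.0 * served_from_edge / total

# Example: 800 HIT, 50 PARTIAL_HIT, 150 REMOTE_HIT, 250 MISS -> 80.0
print(cache_hit_ratio(hit=800, partial_hit=50, remote_hit=150, miss=250))
```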
Requests that meet the following requirements are included in the calculation:
It excludes all of the following cases: * Requests that are denied because of a Rule Set.
-* Requests that contain matching Rules Set, which has been set to disable the cache.
-* Requests that are blocked by the Azure Front Door WAF.
+* Requests that match a Rule Set that is configured to disable caching.
+* Requests that get blocked by the Azure Front Door WAF.
* Requests when the origin response headers indicate that they shouldn't be cached. For example, requests with `Cache-Control: private`, `Cache-Control: no-cache`, or `Pragma: no-cache` headers prevent the response from being cached.

## Top URL report
-The **top URL report** allow you to view the amount of traffic incurred through a particular endpoint or custom domain. You'll see data for the most requested 50 assets during any period in the past 90 days.
+The **top URL report** allows you to view the amount of traffic incurred through a particular endpoint or custom domain. You can see data for the 50 most requested assets during any period in the past 90 days.
:::image type="content" source="../media/how-to-reports/front-door-reports-top-url.png" alt-text="Screenshot of the 'top URL' report.":::
-Popular URLs will be displayed with the following values:
+Popular URLs are displayed with the following values:
* URL, which refers to the full path of the requested asset in the format of `http(s)://contoso.com/index.html/images/example.jpg`. URL refers to the value of the RequestUri field in the raw access log.
Popular URLs will be displayed with the following values:
* Requests with response codes of 4XX. * Requests with response codes of 5XX.
-User can sort URLs by request count, request count percentage, data transferred, and data transferred percentage. All the metrics are aggregated by hour and might vary based on the timeframe selected.
+You can sort URLs by request count, request count percentage, data transferred, and data transferred percentage. The system aggregates all metrics by hour, and they might vary based on the selected time frame.
> [!NOTE] > Top URLs might change over time. To get an accurate list of the top 50 URLs, Azure Front Door counts all your URL requests by hour and keeps the running total over the course of a day. The URLs at the bottom of the 50 URLs may rise onto or drop off the list over the day, so the totals for these URLs are approximations.
User can sort URLs by request count, request count percentage, data transferred,
## Top referrer report
-The **top referrer** report shows you the top 50 referrers to a particular Azure Front Door endpoint or custom domain. You can view data for any period in the past 90 days. A referrer indicates the URL from which a request was generated. Referrer may come from a search engine or other websites. If a user types a URL (for example, `https://contoso.com/https://docsupdatetracker.net/index.html`) directly into the address bar of a browser, the referrer for the requested is *Empty*.
+The **top referrer** report shows you the top 50 referrers to a particular Azure Front Door endpoint or custom domain. You can view data for any period in the past 90 days. A referrer indicates the URL from which a request was generated. The referrer might come from a search engine or other websites. If a user types a URL (for example, `https://contoso.com/index.html`) directly into the address bar of a browser, the referrer for the request is *Empty*.
:::image type="content" source="../media/how-to-reports/front-door-reports-top-referrer.png" alt-text="Screenshot of the 'top referrer' report.":::
The top referrer report includes the following values.
* Requests with response codes of 4XX.
* Requests with response codes of 5XX.
-You can sort by request count, request %, data transferred and data transferred %. All the metrics are aggregated by hour and may vary per the time frame selected.
+You can sort by request count, request %, data transferred, and data transferred %. The system aggregates all metrics by hour, and they might vary based on the selected time frame.
## Top user agent report
The **security report** provides graphical and statistical views of WAF activity.
| Dimensions | Description |
|---|---|
| Overview metrics - Matched WAF rules | Requests that match custom WAF rules, managed WAF rules and bot protection rules. |
-| Overview metrics - Blocked Requests | The percentage of requests that are blocked by WAF rules among all the requests that matched WAF rules. |
+| Overview metrics - Blocked Requests | The percentage of requests that get blocked by WAF rules among all the requests that matched WAF rules. |
| Overview metrics - Matched Managed Rules | Requests that match managed WAF rules. |
| Overview metrics - Matched Custom Rule | Requests that match custom WAF rules. |
| Overview metrics - Matched Bot Rule | Requests that match bot protection rules. |
governance First Query Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/resource-graph/first-query-portal.md
Title: 'Quickstart: Your first portal query'
-description: In this quickstart, you follow the steps to run your first query from Azure portal using Azure Resource Graph Explorer.
Previously updated : 10/12/2022
+ Title: 'Quickstart: Run first Azure Resource Graph query in portal'
+description: In this quickstart, you run your first Azure Resource Graph Explorer query using Azure portal.
Last updated : 03/29/2024 -
-# Quickstart: Run your first Resource Graph query using Azure Resource Graph Explorer
-The power of Azure Resource Graph is available directly in the Azure portal through Azure Resource
-Graph Explorer. Resource Graph Explorer provides browsable information about the Azure Resource
-Manager resource types and properties that you can query. Resource Graph Explorer also provides a
-clean interface for working with multiple queries, evaluating the results, and even converting the
-results of some queries into a chart that can be pinned to an Azure dashboard.
+# Quickstart: Run your first Resource Graph query using Azure Resource Graph Explorer
-At the end of this quickstart, you'll have used Azure portal and Resource Graph Explorer to run your
-first Resource Graph query and pinned the results to a dashboard.
+The power of Azure Resource Graph is available directly in the Azure portal through Azure Resource Graph Explorer. Resource Graph Explorer allows you to query information about the Azure Resource Manager resource types and properties. Resource Graph Explorer also provides an interface for working with multiple queries, evaluating the results, and even converting the results of some queries into a chart that can be pinned to an Azure dashboard.
## Prerequisites
-If you don't have an Azure subscription, create a [free](https://azure.microsoft.com/free/) account
-before you begin.
+If you don't have an Azure account, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
## Run your first Resource Graph query
-Open the [Azure portal](https://portal.azure.com) to find and use the Resource Graph Explorer
-following these steps to run your first Resource Graph query:
+Run your first query from the Azure portal using Azure Resource Graph Explorer.
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+1. Search for _resource graph_ and select **Resource Graph Explorer**.
+
+ :::image type="content" source="./media/first-query-portal/search-resource-graph.png" alt-text="Screenshot of the Azure portal to search for resource graph.":::
+
+1. In the **Query 1** portion of the window, copy and paste the following query. Then select **Run query**.
+
+ ```kusto
+ resources
+ | project name, type
+ | limit 5
+ ```
+
+ :::image type="content" source="./media/first-query-portal/run-query.png" alt-text="Screenshot of Azure Resource Graph Explorer that highlights run query, results, and messages.":::
-1. Select **All services** in the left pane. Search for and select **Resource Graph Explorer**.
+ This query example doesn't provide a sort modifier like `order by`. If you run this query multiple times, it's likely to yield a different set of resources per request.
-1. In the **Query 1** portion of the window, enter the query
- `Resources | project name, type | limit 5` and select **Run query**.
+1. Review the query response in the **Results** tab and select the **Messages** tab to see details about the query, including the count of results and duration of the query. Errors, if any, are displayed in **Messages**.
- > [!NOTE]
- > As this query example doesn't provide a sort modifier such as `order by`, running this query
- > multiple times is likely to yield a different set of resources per request.
+1. Update the query to `order by` the **name** property. Then, select **Run query**.
-1. Review the query response in the **Results** tab. Select the **Messages** tab to see details
- about the query, including the count of results and duration of the query. Errors, if any, are
- displayed under this tab.
+ ```kusto
+ resources
+ | project name, type
+ | limit 5
+ | order by name asc
+ ```
-1. Update the query to `order by` the **Name** property:
- `Resources | project name, type | limit 5 | order by name asc`. Then, select **Run query**.
+ Like the first query, running this query multiple times is likely to yield a different set of resources per request. The order of the query commands is important. In this example, the `order by` comes after the `limit`. This command order first limits the query results and then orders them.
- > [!NOTE]
- > Just as with the first query, running this query multiple times is likely to yield a different
- > set of resources per request. The order of the query commands is important. In this example,
- > the `order by` comes after the `limit`. This command order first limits the query results and
- > then orders them.
+1. Update the query to `order by` the **name** property and then `limit` to the top five results. Then, select **Run query**.
-1. Update the query to first `order by` the **Name** property and then `limit` to the top five
- results: `Resources | project name, type | order by name asc | limit 5`. Then, select **Run
- query**.
+ ```kusto
+ resources
+ | project name, type
+ | order by name asc
+ | limit 5
+ ```
-When the final query is run several times, assuming that nothing in your environment is changing,
-the results returned are consistent and ordered by the **Name** property, but still limited to the
-top five results.
+ When the final query is run several times, and with no changes in your environment, the results are consistent and ordered by the **name** property, but still limited to the top five results.
### Schema browser
-The schema browser is located in the left pane of Resource Graph Explorer. This list of resources
-shows all the _resource types_ of Azure resources that are both supported by Azure Resource Graph
-and that exist in a tenant that you have access to. Expanding a resource type or subproperties show
-child properties that can be used to create a Resource Graph query.
+The schema browser is located in the left pane of Resource Graph Explorer. This list of resources shows all the _resource types_ of Azure resources supported by Azure Resource Graph and that exist in your tenant. Select a resource type or property to show child properties that can be used to create a Resource Graph query.
+
+Select a table name from the schema browser and it gets added to the query. When you select a resource type, it gets added to the query, like `where type == "<resource type>"`. If you select a property, it gets added to the next line in the query, like `where <propertyName> == "INSERT_VALUE_HERE"`. You can use the schema browser to find properties that you can use in queries. Be sure to replace `INSERT_VALUE_HERE` with your own value, and adjust the query with conditions, operators, and functions.
+
+This example shows a query that was built from the schema browser by selecting the table `authorizationresources` with resource type `microsoft.authorization/roledefinitions` and the property `roleName`.
+
+```kusto
+authorizationresources
+| where type == "microsoft.authorization/roledefinitions"
+| where properties['roleName'] == "INSERT_VALUE_HERE"
+```
-Selecting the resource type places `where type =="<resource type>"` into the query box. Selecting
-one of the child properties adds `where <propertyName> == "INSERT_VALUE_HERE"` into the query box.
-The schema browser is a great way to discover properties for use in queries. Be sure to replace
-_INSERT\_VALUE\_HERE_ with your own value, adjust the query with conditions, operators, and
-functions to achieve your intended results.
## Download query results as a CSV file
-To download CSV results from the Azure portal, browse to the Azure Resource Graph Explorer and run a
-query. On the toolbar, click **Download as CSV** as shown in the following screenshot:
+To download comma-separated values (CSV) results from the Azure portal, browse to the Azure Resource Graph Explorer and run a query. On the toolbar, select **Download as CSV** as shown in the following screenshot:
-> [!NOTE]
-> When using the comma-separated value (CSV) export functionality of Azure Resource Graph Explorer, the result set is limited to 55,000 records. This is a platform limit that cannot be overridden by filing an Azure support ticket.
+When you use the **Download as CSV** export functionality of Azure Resource Graph Explorer, the result set is limited to 55,000 records. This limitation is a platform limit that can't be overridden by filing an Azure support ticket.
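If you need more than the 55,000 records that the portal export returns, you can page through the full result set programmatically instead. Here's a minimal Python sketch, not part of the original quickstart, that uses the `azure-identity` and `azure-mgmt-resourcegraph` packages; the subscription ID is a placeholder:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.resourcegraph import ResourceGraphClient
from azure.mgmt.resourcegraph.models import QueryRequest, QueryRequestOptions

client = ResourceGraphClient(DefaultAzureCredential())

total = 0
skip_token = None
while True:
    # Fetch one page of up to 1,000 records, resuming from the previous skip token.
    response = client.resources(QueryRequest(
        subscriptions=["<subscription-id>"],  # placeholder
        query="resources | project name, type | order by name asc",
        options=QueryRequestOptions(top=1000, skip_token=skip_token),
    ))
    total += response.count           # records in this page; rows are in response.data
    skip_token = response.skip_token  # None when there are no more pages
    if not skip_token:
        break

print(f"Retrieved {total} records")
```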
## Create a chart from the Resource Graph query
-After running the previous query, if you select the **Charts** tab, you get a message that "the
-result set isn't compatible with a pie chart visualization." Queries that list results can't be made
-into a chart, but queries that provide counts of resources can. Using the
-[Sample query - Count virtual machines by OS type](./samples/starter.md#count-virtual-machines-by-os-type), let's create a
-visualization from the Resource Graph query.
+After running the previous query, if you select the **Charts** tab, you get a message that "the result set isn't compatible with a pie chart visualization." Queries that list results can't be made into a chart, but queries that provide counts of resources can.
1. In the **Query 1** portion of the window, enter the following query and select **Run query**.

```kusto
- Resources
- | where type =~ 'Microsoft.Compute/virtualMachines'
+ resources
+ | where type == "microsoft.compute/virtualmachines"
| summarize count() by tostring(properties.storageProfile.osDisk.osType)
```

1. Select the **Results** tab and note that the response for this query provides counts.
-1. Select the **Charts** tab. Now, the query results in visualizations. Change the type from _Select
- chart type..._ to either _Bar chart_ or _Donut chart_ to experiment with the available
- visualization options.
+1. Select the **Charts** tab. Change the type from _Select chart type..._ to either _Bar chart_ or _Donut chart_.
-## Pin the query visualization to a dashboard
-
-When you have results from a query that can be visualized, that data visualization can then be
-pinned to one of your dashboards. After running the previous query, follow these steps:
+ :::image type="content" source="./media/first-query-portal/query-chart.png" alt-text="Screenshot of Azure Resource Graph Explorer with charts drop-down menu highlighted.":::
-1. Select **Save** and provide the name "VMs by OS Type". Then select **Save** at the bottom of the
- right pane.
+## Pin the query visualization to a dashboard
-1. Select **Run query** to rerun the query now that it's been saved.
+When you have results from a query that can be visualized, that data visualization can be pinned to your Azure portal dashboard. After running the previous query, follow these steps:
+1. Select **Save** and provide the name _VM by OS type_. Then select **Save** at the bottom of the right pane.
+1. Select **Run query** to rerun the query you saved.
1. On the **Charts** tab, select a data visualization. Then select **Pin to dashboard**.
+1. From **Pin to Dashboard**, select the existing dashboard where you want the chart to appear.
-1. Either select the portal notification that appears or select **Dashboard** from the left pane.
-
-The query is now available on your dashboard with the title of the tile matching the query name. If
-the query was unsaved when it was pinned, it's named 'Query 1' instead.
-
-The query and resulting data visualization run and update each time the dashboard loads, providing
-real-time and dynamic insights to your Azure environment directly in your workflow.
-
-> [!NOTE]
-> Queries that result in a list can also be pinned to the dashboard. The feature isn't limited to
-> data visualizations of queries.
-
-## Import example Resource Graph Explorer dashboards
-
-To provide examples of Resource Graph queries and how Resource Graph Explorer can be used to enhance
-your Azure portal workflow, try out these example dashboards.
+The query is now available on your dashboard with the title **VM by OS type**. If the query wasn't saved before it was pinned, the name is _Query 1_ instead.
-- [Resource Graph Explorer - Sample Dashboard #1](https://github.com/Azure-Samples/Governance/blob/master/src/resource-graph/portal-dashboards/sample-1/resourcegraphexplorer-sample-1.json)
+The query and resulting data visualization run and update each time the dashboard loads, providing real-time and dynamic insights to your Azure environment directly in your workflow.
- :::image type="content" source="./media/first-query-portal/arge-sample1-small.png" alt-text="Example image for Sample Dashboard #1" lightbox="./media/first-query-portal/arge-sample1-large.png":::
+Queries that result in a list can also be pinned to the dashboard. The feature isn't limited to data visualizations of queries.
-- [Resource Graph Explorer - Sample Dashboard #2](https://github.com/Azure-Samples/Governance/blob/master/src/resource-graph/portal-dashboards/sample-2/resourcegraphexplorer-sample-2.json)-
- :::image type="content" source="./media/first-query-portal/arge-sample2-small.png" alt-text="Example image for Sample Dashboard #2" lightbox="./media/first-query-portal/arge-sample2-large.png":::
-
-> [!NOTE]
-> Counts and charts in the above example dashboard screenshots vary depending on your Azure
-> environment.
-
-1. Select and download the sample dashboard you want to evaluate.
-
-1. In the Azure portal, select **Dashboard** from the left pane.
-
-1. Select **Upload**, then locate and select the downloaded sample dashboard file. Then select
- **Open**.
-
-The imported dashboard is automatically displayed. Since it now exists in your Azure portal, you may
-explore and make changes as needed or create new dashboards from the example to share with your
-teams. For more information about working with dashboards, see
-[Create and share dashboards in the Azure portal](../../azure-portal/azure-portal-dashboards.md).
+For more information about working with dashboards, see [Create a dashboard in the Azure portal](../../azure-portal/azure-portal-dashboards.md).
## Clean up resources
-If you wish to remove the sample Resource Graph dashboards from your Azure portal environment, you
-can do so with the following steps:
-
-1. Select **Dashboard** from the left pane.
-
-1. From the dashboard dropdown list, select the sample Resource Graph dashboard you wish to delete.
+If you want to remove the sample Resource Graph chart from your Azure portal dashboard, follow these steps:
-1. Select **Delete** from the dashboard menu at the top of the dashboard and select **Ok** to
- confirm.
+1. Select **Dashboard** from the _hamburger menu_ (three horizontal lines) on the top, left side of any portal page.
+1. On your dashboard, find the **VM by OS type** chart and select the ellipsis (`...`) to display the menu.
+1. Select **Remove from dashboard**, and then select **Save** to confirm.
## Next steps
-In this quickstart, you've used Azure Resource Graph Explorer to run your first query and looked at
-dashboard examples powered by Resource Graph. To learn more about the Resource Graph language,
-continue to the query language details page.
+In this quickstart, you used Azure Resource Graph Explorer to run your first query and looked at dashboard examples powered by Resource Graph. To learn more about the Resource Graph language, continue to the query language details page.
> [!div class="nextstepaction"]
-> [Get more information about the query language](./concepts/query-language.md)
+> [Understanding the Azure Resource Graph query language](./concepts/query-language.md)
hdinsight Using Json In Hive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hadoop/using-json-in-hive.md
Title: Analyze & process JSON with Apache Hive - Azure HDInsight
description: Learn how to use JSON documents and analyze them by using Apache Hive in Azure HDInsight.
Previously updated : 04/24/2023
Last updated : 03/31/2024

# Process and analyze JSON documents by using Apache Hive in Azure HDInsight
The **INSERT** statement populates the **StudentOneLine** table with the flatten
The **SELECT** statement only returns one row.
-Here is the output of the **SELECT** statement:
+Here's the output of the **SELECT** statement:
:::image type="content" source="./media/using-json-in-hive/hdinsight-flatten-json.png" alt-text="HDInsight flattening the JSON document." border="true":::
SELECT
FROM StudentsOneLine;
```
-Here is the output when you run this query in the console window:
+Here's the output when you run this query in the console window:
:::image type="content" source="./media/using-json-in-hive/hdinsight-get-json-object.png" alt-text="Apache Hive gets json object UDF." border="true":::

There are limitations of the get_json_object UDF:

* Because each field in the query requires reparsing of the query, it affects the performance.
-* **GET\_JSON_OBJECT()** returns the string representation of an array. To convert this array to a Hive array, you have to use regular expressions to replace the square brackets "[" and "]", and then you also have to call split to get the array.
+* **GET\_JSON_OBJECT()** returns the string representation of an array. To convert this array to a Hive array, you have to use regular expressions to replace the square brackets "[" and "]", and then you also have to call split to get the array.
This conversion is why the Hive wiki recommends that you use **json_tuple**.
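To see the difference outside of a Hive console, here's a small PySpark sketch with a made-up one-row document; Spark SQL inherits both functions from Hive with the same semantics:

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("json-functions-demo").getOrCreate()

# Hypothetical table with one JSON document stored as a string column.
df = spark.createDataFrame(
    [('{"StudentId": "trgfg-5454-fdfdg-4346", "Grade": 7}',)],
    ["json_body"],
)

# get_json_object: one JSONPath lookup per field, so the document is reparsed per field.
df.select(
    F.get_json_object("json_body", "$.StudentId").alias("student_id"),
    F.get_json_object("json_body", "$.Grade").alias("grade"),
).show()

# json_tuple: the document is parsed once and all requested fields come back as columns.
df.select(
    F.json_tuple("json_body", "StudentId", "Grade").alias("student_id", "grade")
).show()
```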
The type of JSON operator in Hive that you choose depends on your scenario. With
For related articles, see:
-* [Use Apache Hive and HiveQL with Apache Hadoop in HDInsight to analyze a sample Apache log4j file](./hdinsight-use-hive.md)
+* [Use Apache Hive and HiveQL with Apache Hadoop in HDInsight to analyze a sample Apache `log4j` file](./hdinsight-use-hive.md)
* [Analyze flight delay data by using Interactive Query in HDInsight](../interactive-query/interactive-query-tutorial-analyze-flight-data.md)
iot-operations Quickstart Deploy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/get-started/quickstart-deploy.md
This script automates the following steps:
* Apply all the required configurations for Azure IoT Operations, including:
- * Enable a firewall rule and port forwarding for port 8883 to enable incoming connections to Azure IoT Operations MQ broker.
+ * Enable a firewall rule and port forwarding for port 8883 to enable incoming connections to Azure IoT Operations broker.
* Install Storage local-path provisioner.
az iot ops verify-host
This helper command checks connectivity to Azure Resource Manager and Microsoft Container Registry endpoints.
-## Configure cluster and deploy Azure IoT Operations Preview
+## Deploy Azure IoT Operations Preview
-Part of the deployment process is to configure your cluster so that it can communicate securely with your Azure IoT Operations components and key vault. The Azure CLI command `az iot ops init` does this for you. Once your cluster is configured, then you can deploy Azure IoT Operations.
+In this section, you use the [az iot ops init](/cli/azure/iot/ops#az-iot-ops-init) command to configure your cluster so that it can communicate securely with your Azure IoT Operations components and key vault, then deploy Azure IoT Operations.
-In this section, you use the Azure CLI to create a key vault, build the `az iot ops init` command based on your resources, and then deploy Azure IoT Operations components to your Arc-enabled Kubernetes cluster.
+1. Create a key vault. Replace the placeholder parameters with your own information.
-### Create a key vault
-
-You can use an existing key vault for your secrets, but verify that the **Permission model** is set to **Vault access policy**. You can check this setting in the Azure portal in the **Access configuration** section of an existing key vault. Or use the [az keyvault show](/cli/azure/keyvault#az-keyvault-show) command to check that `enableRbacAuthorization` is false.
-
-To create a new key vault, use the following command:
-
-```azurecli
-az keyvault create --enable-rbac-authorization false --name "<KEYVAULT_NAME>" --resource-group "<RESOURCE_GROUP_NAME>"
-```
-
-### Deploy Azure IoT Operations
-
-In this section, you use the Azure CLI to deploy Azure IoT Operations, but the Azure portal has a helper wizard to build the correct CLI command based on your cluster, cloud resources, and configuration choices.
-
-1. In a web browser, open the [Azure portal](https://portal.azure.com). In the Azure portal search bar, search for and select **Azure Arc**.
-
-1. Select **Azure IoT Operations (preview)** from the **Application Services** section of the Azure Arc menu.
-
- :::image type="content" source="./media/quickstart-deploy/arc-iot-operations.png" alt-text="Screenshot of selecting Azure IoT Operations from Azure Arc.":::
-
-1. Select **Create**.
-
-1. On the **Basics** tab of the **Install Azure IoT Operations Arc Extension** page, provide the following information:
-
- | Field | Value |
- | -- | -- |
- | **Subscription** | Select the subscription that contains your Arc-enabled Kubernetes cluster. |
- | **Resource group** | Select the resource group that contains your Arc-enabled Kubernetes cluster. |
- | **Cluster name** | Select your cluster. When you do, the **Custom location** and **Deployment details** sections autofill. |
-
- :::image type="content" source="./media/quickstart-deploy/install-extension-basics.png" alt-text="Screenshot of the basics tab for installing the Azure IoT Operations Arc extension in the Azure portal.":::
-
-1. Select **Next: Configuration**.
-
-1. On the **Configuration** tab, provide the following information:
-
- | Field | Value |
- | -- | -- |
- | **Deploy a simulated PLC** | Switch this toggle to **Yes**. The simulated PLC creates demo data that you use in the following quickstarts. |
- | **Mode** | Set the MQ configuration mode to **Auto**. |
-
- :::image type="content" source="./media/quickstart-deploy/install-extension-configuration.png" alt-text="Screenshot of the configuration tab for installing the Azure IoT Operations Arc extension in the Azure portal.":::
+ | Placeholder | Value |
+ | -- | -- |
+ | **RESOURCE_GROUP** | The name of your resource group that contains the connected cluster. |
+ | **KEYVAULT_NAME** | A name for a new key vault. |
-1. Select **Next: Automation**.
+ ```azurecli
+ az keyvault create --enable-rbac-authorization false --name "<KEYVAULT_NAME>" --resource-group "<RESOURCE_GROUP>"
+ ```
-1. On the **Automation** tab, provide the following information:
+ >[!TIP]
+ > You can use an existing key vault for your secrets, but verify that the **Permission model** is set to **Vault access policy**. You can check this setting in the Azure portal in the **Access configuration** section of an existing key vault. Or use the [az keyvault show](/cli/azure/keyvault#az-keyvault-show) command to check that `enableRbacAuthorization` is false.
- | Field | Value |
- | -- | -- |
- | **Subscription** | Select the subscription that contains your Arc-enabled Kubernetes cluster. |
- | **Azure Key Vault** | Use the **Select a key vault** drop-down menu to choose the key vault that you set up in the previous section. |
+1. Run the following CLI command on your development machine or in your codespace terminal. Replace the placeholder parameters with your own information.
-1. Once you select a key vault, the **Automation** tab uses all the information you selected in the previous tabs to populate an Azure CLI command that configures your cluster and deploys Azure IoT Operations. Copy the CLI command.
+ | Placeholder | Value |
+ | -- | -- |
+ | **CLUSTER_NAME** | The name of your connected cluster. |
+ | **RESOURCE_GROUP** | The name of your resource group that contains the connected cluster. |
+ | **KEYVAULT_NAME** | The name of your key vault. |
- :::image type="content" source="./media/quickstart-deploy/install-extension-automation.png" alt-text="Screenshot of copying the CLI command from the automation tab for installing the Azure IoT Operations Arc extension in the Azure portal.":::
+ ```azurecli
+ az iot ops init --simulate-plc --cluster <CLUSTER_NAME> --resource-group <RESOURCE_GROUP> --kv-id $(az keyvault show --name <KEYVAULT_NAME> -o tsv --query id)
+ ```
-1. Run the copied `az iot ops init` command on your development machine or in your codespace terminal.
+ If you get an error that says *Your device is required to be managed to access your resource*, run `az login` again and make sure that you sign in interactively with a browser.
>[!TIP]
- >If you get an error that says *Your device is required to be managed to access your resource*, run `az login` again and make sure that you sign in interactively with a browser.
+ >If you've run `az iot ops init` before, it automatically created an app registration in Microsoft Entra ID for you. You can reuse that registration rather than creating a new one each time. To use an existing app registration, add the optional parameter `--sp-app-id <APPLICATION_CLIENT_ID>`.
1. These quickstarts use the **OPC PLC simulator** to generate sample data. To configure the simulator for the quickstart scenario, run the following command:
- > [!IMPORTANT]
- > Don't use the following example in production, use it for simulation and test purposes only. The example lowers the security level for the OPC PLC so that it accepts connections from any client without an explicit peer certificate trust operation.
+ > [!IMPORTANT]
+ > Don't use the following example in production; use it for simulation and test purposes only. The example lowers the security level for the OPC PLC so that it accepts connections from any client without an explicit peer certificate trust operation.
- ```azurecli
- az k8s-extension update --version 0.3.0-preview --name opc-ua-broker --release-train preview --cluster-name $CLUSTER_NAME --resource-group $RESOURCE_GROUP --cluster-type connectedClusters --auto-upgrade-minor-version false --config opcPlcSimulation.deploy=true --config opcPlcSimulation.autoAcceptUntrustedCertificates=true
- ```
+ ```azurecli
+ az k8s-extension update --version 0.3.0-preview --name opc-ua-broker --release-train preview --cluster-name <CLUSTER_NAME> --resource-group <RESOURCE_GROUP> --cluster-type connectedClusters --auto-upgrade-minor-version false --config opcPlcSimulation.deploy=true --config opcPlcSimulation.autoAcceptUntrustedCertificates=true
+ ```
## View resources in your cluster
It can take several minutes for the deployment to complete. Continue running the
To view your cluster on the Azure portal, use the following steps:
-1. In the Azure portal, navigate to the resource group that contains your cluster.
+1. In the [Azure portal](https://portal.azure.com), navigate to the resource group that contains your cluster.
1. From the **Overview** of the resource group, select the name of your cluster.
iot-operations Howto Configure Aks Edge Essentials Layered Network https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/manage-layered-network/howto-configure-aks-edge-essentials-layered-network.md
Follow the steps in [Quickstart: Deploy Azure IoT Operations Preview to an Arc-e
- In earlier steps, you completed the [prerequisites](../get-started/quickstart-deploy.md#prerequisites) and [connected your cluster to Azure Arc](../get-started/quickstart-deploy.md#connect-a-kubernetes-cluster-to-azure-arc) for Azure IoT Operations. You can review these steps to make sure nothing is missing.
-- Start from the [Configure cluster and deploy Azure IoT Operations](../get-started/quickstart-deploy.md#configure-cluster-and-deploy-azure-iot-operations-preview) and complete all the further steps.
+- Start from [Deploy Azure IoT Operations Preview](../get-started/quickstart-deploy.md#deploy-azure-iot-operations-preview) and complete all the remaining steps.
## Next steps
machine-learning Concept Error Analysis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-error-analysis.md
Previously updated : 08/17/2022
Last updated : 03/29/2024

# Assess errors in machine learning models
machine-learning How To Deploy Pipeline Component As Batch Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-pipeline-component-as-batch-endpoint.md
Title: 'How to deploy pipeline as batch endpoint'
-description: Learn how to deploy pipeline component as batch endpoint to trigger the pipeline using REST endpoint
+description: Learn how to deploy pipeline component as batch endpoint to trigger the pipeline using REST endpoint.
- ignite-2023
Previously updated : 4/28/2023
Last updated : 03/29/2024

# Deploy your pipeline as batch endpoint
After building your machine learning pipeline, you can [deploy your pipeline as
## Pipeline component deployment as batch endpoint
-Pipeline component deployment as batch endpoint is the feature that allows you to achieve the goals for the previously-listed scenarios. This is the equivalent feature with published pipeline/pipeline endpoint in SDK v1.
+Pipeline component deployment as batch endpoint is the feature that lets you achieve the goals of the previously listed scenarios. It's the equivalent of the published pipeline/pipeline endpoint feature in SDK v1.
To deploy your pipeline as a batch endpoint, we recommend that you first convert your pipeline into a [pipeline component](./how-to-use-pipeline-component.md), and then deploy the pipeline component as a batch endpoint. For more information on deploying pipelines as batch endpoints, see [How to deploy pipeline component as batch endpoint](how-to-use-batch-pipeline-deployments.md).
-It's also possible to deploy your pipeline job as a batch endpoint. In this case, Azure Machine Learning can accept that job as the input to your batch endpoint and create the pipeline component automatically for you. For more information. see [Deploy existing pipeline jobs to batch endpoints](how-to-use-batch-pipeline-from-job.md).
+It's also possible to deploy your pipeline job as a batch endpoint. In this case, Azure Machine Learning can accept that job as the input to your batch endpoint and create the pipeline component automatically for you. For more information, see [Deploy existing pipeline jobs to batch endpoints](how-to-use-batch-pipeline-from-job.md).
> [!NOTE]
> The consumer of the batch endpoint that invokes the pipeline job should be the user application, not the final end user. The application should control the inputs to the endpoint to prevent malicious inputs.
machine-learning How To Responsible Ai Insights Sdk Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-responsible-ai-insights-sdk-cli.md
Previously updated : 11/09/2022 Last updated : 03/29/2024
In the following sections are specifications of the Responsible AI components an
### Limitations
-The current set of components have a number of limitations on their use:
+The current set of components has several limitations:
- All models must be registered in Azure Machine Learning in MLflow format with a sklearn (scikit-learn) flavor.
- The models must be loadable in the component environment.
The constructor component also accepts the following parameters:
| Parameter name | Description | Type |
|---|---|---|
| `title` | Brief description of the dashboard. | String |
-| `task_type` | Specifies whether the model is for classification or regression. | String, `classification` or `regression` |
+| `task_type` | Specifies whether the model is for classification, regression, or forecasting. | String, `classification`, `regression`, or `forecasting` |
| `target_column_name` | The name of the column in the input datasets, which the model is trying to predict. | String |
| `maximum_rows_for_test_dataset` | The maximum number of rows allowed in the test dataset, for performance reasons. | Integer, defaults to 5,000 |
| `categorical_column_names` | The columns in the datasets, which represent categorical data. | Optional list of strings<sup>1</sup> |
| `classes` | The full list of class labels in the training dataset. | Optional list of strings<sup>1</sup> |
+| `feature_metadata`| Specifies additional information the dashboard might need depending on task type. For forecasting, this includes specifying which column is the `datetime` column and which column is the `time_series_id` column. For vision, this might include mean pixel value or location data of an image.| Optional list of strings<sup>1</sup> |
+| `use_model_dependency`| Specifies whether the model requires a separate Docker container to be served in, because of conflicting dependencies with the RAI dashboard. This setting must be enabled for forecasting; it typically isn't enabled for other scenarios. | Boolean |
-<sup>1</sup> The lists should be supplied as a single JSON-encoded string for `categorical_column_names` and `classes` inputs.
+<sup>1</sup> The lists should be supplied as a single JSON-encoded string for the `categorical_column_names`, `classes`, and `feature_metadata` inputs.
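For example, a short Python sketch of that encoding, with hypothetical column and class names:

```python
import json

# Each list-typed input is passed to the component as one JSON-encoded string.
categorical_column_names = json.dumps(["Gender", "City"])  # '["Gender", "City"]'
classes = json.dumps(["Approved", "Denied"])               # '["Approved", "Denied"]'
print(categorical_column_names, classes)
```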
The constructor component has a single output named `rai_insights_dashboard`. This is an empty dashboard, which the individual tool components operate on. All the results are assembled by the `Gather RAI Insights dashboard` component at the end.
machine-learning Tutorial Feature Store Domain Specific Language https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/tutorial-feature-store-domain-specific-language.md
+
+ Title: "Tutorial 7: Develop a feature set using Domain Specific Language (preview)"
+
+description: This is part 7 of the managed feature store tutorial series.
+Last updated : 03/29/2024
+#Customer intent: As a professional data scientist, I want to know how to build and deploy a model with Azure Machine Learning by using Python in a Jupyter Notebook.
++
+# Tutorial 7: Develop a feature set using Domain Specific Language (preview)
++
+An Azure Machine Learning managed feature store lets you discover, create, and operationalize features. Features serve as the connective tissue in the machine learning lifecycle, starting from the prototyping phase, where you experiment with various features. That lifecycle continues to the operationalization phase, where you deploy your models, and proceeds to the inference steps that look up feature data. For more information about feature stores, visit [feature store concepts](./concept-what-is-managed-feature-store.md).
+
+This tutorial describes how to develop a feature set using Domain Specific Language. The Domain Specific Language (DSL) for the managed feature store provides a simple and user-friendly way to define the most commonly used feature aggregations. With the feature store SDK, users can perform the most commonly used aggregations with a DSL *expression*. Aggregations that use the DSL *expression* ensure consistent results, compared with user-defined functions (UDFs). Additionally, those aggregations avoid the overhead of writing UDFs.
+
+This tutorial shows how to:
+
+> [!div class="checklist"]
+> * Create a new, minimal feature store workspace
+> * Locally develop and test a feature, through use of Domain Specific Language (DSL)
+> * Develop a feature set through use of User Defined Functions (UDFs) that perform the same transformations as a feature set created with DSL
+> * Compare the results of the feature sets created with DSL, and feature sets created with UDFs
+> * Register a feature store entity with the feature store
+> * Register the feature set created using DSL with the feature store
+> * Generate sample training data using the created features
+
+## Prerequisites
+
+> [!NOTE]
+> This tutorial uses an Azure Machine Learning notebook with **Serverless Spark Compute**.
+
+Before you proceed with this tutorial, make sure that you cover these prerequisites:
+
+1. An Azure Machine Learning workspace. If you don't have one, visit [Quickstart: Create workspace resources](./quickstart-create-resources.md?view=azureml-api-2) to learn how to create one.
+1. To perform the steps in this tutorial, your user account needs either the **Owner** or **Contributor** role to the resource group where the feature store will be created.
+
+## Set up
+
+ This tutorial relies on the Python feature store core SDK (`azureml-featurestore`). This SDK is used for create, read, update, and delete (CRUD) operations on feature stores, feature sets, and feature store entities.
+
+ You don't need to explicitly install these resources for this tutorial, because in the set-up instructions shown here, the `conda.yml` file covers them.
+
+ To prepare the notebook environment for development:
+
+ 1. Clone the [examples repository - (azureml-examples)](https://github.com/azure/azureml-examples) to your local machine with this command:
+
+ `git clone --depth 1 https://github.com/Azure/azureml-examples`
+
+ You can also download a zip file from the [examples repository (azureml-examples)](https://github.com/azure/azureml-examples). At this page, first select the `code` dropdown, and then select `Download ZIP`. Then, unzip the contents into a folder on your local machine.
+
+ 1. Upload the feature store samples directory to project workspace
+ 1. Open Azure Machine Learning studio UI of your Azure Machine Learning workspace
+ 1. Select **Notebooks** in left navigation panel
+ 1. Select your user name in the directory listing
+ 1. Select the ellipses (**...**), and then select **Upload folder**
+ 1. Select the feature store samples folder from the cloned directory path: `azureml-examples/sdk/python/featurestore-sample`
+
+ 1. Run the tutorial
+
+ * Option 1: Create a new notebook, and execute the instructions in this document, step by step
+ * Option 2: Open existing notebook `featurestore_sample/notebooks/sdk_only/7. Develop a feature set using Domain Specific Language (DSL).ipynb`. You can keep this document open, and refer to it for more explanation and documentation links
+
+ 1. To configure the notebook environment, you must upload the `conda.yml` file
+
+ 1. Select **Notebooks** on the left navigation panel, and then select the **Files** tab
+ 1. Navigate to the `env` directory (select **Users** > *your_user_name* > **featurestore_sample** > **project** > **env**), and then select the `conda.yml` file
+ 1. Select **Download**
+ 1. Select **Serverless Spark Compute** in the top navigation **Compute** dropdown. This operation might take one to two minutes. Wait for the status bar in the top to display the **Configure session** link
+ 1. Select **Configure session** in the top status bar
+ 1. Select **Settings**
+ 1. Select **Apache Spark version** as `Spark version 3.3`
+ 1. Optionally, increase the **Session timeout** (idle time) if you want to avoid frequent restarts of the serverless Spark session
+ 1. Under **Configuration settings**, define *Property* `spark.jars.packages` and *Value* `com.microsoft.azure:azureml-fs-scala-impl:1.0.4`
+ :::image type="content" source="./media/tutorial-feature-store-domain-specific-language/dsl-spark-jars-property.png" lightbox="./media/tutorial-feature-store-domain-specific-language/dsl-spark-jars-property.png" alt-text="This screenshot shows the Spark session property for a package that contains the jar file used by managed feature store domain-specific language.":::
+ 1. Select **Python packages**
+ 1. Select **Upload conda file**
+ 1. Select the `conda.yml` you downloaded on your local device
+ 1. Select **Apply**
+
+ > [!TIP]
+ > Except for this specific step, you must run all the other steps every time you start a new Spark session, or after the session times out.
+
+ 1. This code cell sets up the root directory for the samples and starts the Spark session. It needs about 10 minutes to install all the dependencies and start the Spark session:
+
+ [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/7. Develop a feature set using Domain Specific Language (DSL).ipynb?name=setup-root-dir)]
+
+## Provision the necessary resources
+
+ 1. Create a minimal feature store:
+
+ Create a feature store in a region of your choice, from the Azure Machine Learning studio UI or with Azure Machine Learning Python SDK code.
+
+ * Option 1: Create feature store from the Azure Machine Learning studio UI
+
+ 1. Navigate to the feature store UI [landing page](https://ml.azure.com/featureStores)
+ 1. Select **+ Create**
+ 1. The **Basics** tab appears
+ 1. Choose a **Name** for your feature store
+ 1. Select the **Subscription**
+ 1. Select the **Resource group**
+ 1. Select the **Region**
+ 1. Select **Apache Spark version** 3.3, and then select **Next**
+ 1. The **Materialization** tab appears
+ 1. Toggle **Enable materialization**
+ 1. Select **Subscription** and **User identity** to **Assign user managed identity**
+ 1. Select **From Azure subscription** under **Offline store**
+ 1. Select **Store name** and **Azure Data Lake Gen2 file system name**, then select **Next**
+ 1. On the **Review** tab, verify the displayed information and then select **Create**
+
+ * Option 2: Create a feature store using the Python SDK
+ Provide `featurestore_name`, `featurestore_resource_group_name`, and `featurestore_subscription_id` values, and execute this cell to create a minimal feature store:
+
+ [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/7. Develop a feature set using Domain Specific Language (DSL).ipynb?name=create-min-fs)]
+
+ 1. Assign permissions to your user identity on the offline store:
+
+ If feature data is materialized, then you must assign the **Storage Blob Data Reader** role to your user identity to read feature data from offline materialization store.
+ 1. Open the [Azure ML global landing page](https://ml.azure.com/home)
+ 1. Select **Feature stores** in the left navigation
+ 1. You'll see the list of feature stores that you have access to. Select the feature store that you created above
+ 1. Select the storage account link under **Account name** on the **Offline materialization store** card, to navigate to the ADLS Gen2 storage account for the offline store
+ :::image type="content" source="./media/tutorial-feature-store-domain-specific-language/offline-store-link.png" lightbox="./media/tutorial-feature-store-domain-specific-language/offline-store-link.png" alt-text="This screenshot shows the storage account link for the offline materialization store on the feature store UI.":::
+ 1. Visit [this resource](../role-based-access-control/role-assignments-portal.md) for more information about how to assign the **Storage Blob Data Reader** role to your user identity on the ADLS Gen2 storage account for offline store. Allow some time for permissions to propagate.
+
+## Available DSL expressions and benchmarks
+
+ Currently, these aggregation expressions are supported:
+ - Average - `avg`
+ - Sum - `sum`
+ - Count - `count`
+ - Min - `min`
+ - Max - `max`
+
+ This table provides benchmarks that compare the performance of aggregations that use a DSL *expression* with aggregations that use a UDF, using a representative dataset of size 23.5 GB with the following attributes:
+ - `numberOfSourceRows`: 348,244,374
+ - `numberOfOfflineMaterializedRows`: 227,361,061
+
+ |Function|*Expression*|UDF execution time|DSL execution time|
+ |--|--|--|--|
+ |`get_offline_features(use_materialized_store=false)`|`sum`, `avg`, `count`|~2 hours|< 5 minutes|
+ |`get_offline_features(use_materialized_store=true)`|`sum`, `avg`, `count`|~1.5 hours|< 5 minutes|
+ |`materialize()`|`sum`, `avg`, `count`|~1 hour|< 15 minutes|
+
+ > [!NOTE]
+ > The `min` and `max` DSL expressions provide no performance improvement over UDFs. We recommend that you use UDFs for `min` and `max` transformations.
+
+## Create a feature set specification using DSL expressions
+
+ 1. Execute this code cell to create a feature set specification, using DSL expressions and parquet files as source data.
+
+ [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/7. Develop a feature set using Domain Specific Language (DSL).ipynb?name=create-dsl-parq-fset)]
+
+ 1. This code cell defines the start and end times for the feature window.
+
+ [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/7. Develop a feature set using Domain Specific Language (DSL).ipynb?name=define-feat-win)]
+
+ 1. This code cell uses `to_spark_dataframe()` to get a dataframe in the defined feature window from the above feature set specification defined using DSL expressions:
+
+ [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/7. Develop a feature set using Domain Specific Language (DSL).ipynb?name=sparkdf-dsl-parq)]
+
+ 1. Print some sample feature values from the feature set defined with DSL expressions:
+
+ [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/7. Develop a feature set using Domain Specific Language (DSL).ipynb?name=display-dsl-parq)]
+
+## Create a feature set specification using UDF
+
+ 1. Create a feature set specification that uses UDF to perform the same transformations:
+
+ [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/7. Develop a feature set using Domain Specific Language (DSL).ipynb?name=create-udf-parq-fset)]
+
+ This transformation code shows that the UDF defines the same transformations as the DSL expressions:
+
+ ```python
+ from pyspark.ml import Transformer
+ from pyspark.sql import DataFrame
+ from pyspark.sql import functions as F
+ from pyspark.sql.window import Window
+
+ # Imports shown for completeness; the sample notebook defines these earlier.
+ class TransactionFeatureTransformer(Transformer):
+     def _transform(self, df: DataFrame) -> DataFrame:
+         days = lambda i: i * 86400
+         # Rolling 3-day and 7-day windows per account, ordered by event time.
+         w_3d = (
+             Window.partitionBy("accountID")
+             .orderBy(F.col("timestamp").cast("long"))
+             .rangeBetween(-days(3), 0)
+         )
+         w_7d = (
+             Window.partitionBy("accountID")
+             .orderBy(F.col("timestamp").cast("long"))
+             .rangeBetween(-days(7), 0)
+         )
+         res = (
+             df.withColumn("transaction_7d_count", F.count("transactionID").over(w_7d))
+             .withColumn("transaction_amount_7d_sum", F.sum("transactionAmount").over(w_7d))
+             .withColumn("transaction_amount_7d_avg", F.avg("transactionAmount").over(w_7d))
+             .withColumn("transaction_3d_count", F.count("transactionID").over(w_3d))
+             .withColumn("transaction_amount_3d_sum", F.sum("transactionAmount").over(w_3d))
+             .withColumn("transaction_amount_3d_avg", F.avg("transactionAmount").over(w_3d))
+             .select(
+                 "accountID",
+                 "timestamp",
+                 "transaction_3d_count",
+                 "transaction_amount_3d_sum",
+                 "transaction_amount_3d_avg",
+                 "transaction_7d_count",
+                 "transaction_amount_7d_sum",
+                 "transaction_amount_7d_avg",
+             )
+         )
+         return res
+ ```
+
+ 1. Use `to_spark_dataframe()` to get a dataframe from the above feature set specification, defined using UDF:
+
+ [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/7. Develop a feature set using Domain Specific Language (DSL).ipynb?name=sparkdf-udf-parq)]
+
+ 1. Compare the results and verify consistency between the results from the DSL expressions and the transformations performed with UDF. To verify, select one of the `accountID` values to compare the values in the two dataframes:
+
+ [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/7. Develop a feature set using Domain Specific Language (DSL).ipynb?name=display-dsl-acct)]
+
+ [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/7. Develop a feature set using Domain Specific Language (DSL).ipynb?name=display-udf-acct)]
+
+## Export feature set specifications as YAML
+
+ To register the feature set specification with the feature store, it must be saved in a specific format. To review the generated `transactions-dsl` feature set specification, open this file from the file tree: `featurestore/featuresets/transactions-dsl/spec/FeaturesetSpec.yaml`
+
+ The feature set specification contains these elements:
+
+ 1. `source`: Reference to a storage resource; in this case, a parquet file in a blob storage
+ 1. `features`: List of features and their datatypes. If you provide transformation code, the code must return a dataframe that maps to the features and data types
+ 1. `index_columns`: The join keys required to access values from the feature set
+
+ For more information, read the [top level feature store entities document](./concept-top-level-entities-in-managed-feature-store.md) and the [feature set specification YAML reference](./reference-yaml-featureset-spec.md) resources.
+
+ As an extra benefit of persisting the feature set specification, it can be source controlled.
+
+ 1. Execute this code cell to write YAML specification file for the feature set, using parquet data source and DSL expressions:
+
+ [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/7. Develop a feature set using Domain Specific Language (DSL).ipynb?name=dump-dsl-parq-fset-spec)]
+
+ 1. Execute this code cell to write a YAML specification file for the feature set, using UDF:
+
+ [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/7. Develop a feature set using Domain Specific Language (DSL).ipynb?name=dump-udf-parq-fset-spec)]
+
+## Initialize SDK clients
+
+ The following steps of this tutorial use two SDKs.
+
+ 1. Feature store CRUD SDK: The Azure Machine Learning (AzureML) SDK `MLClient` (package name `azure-ai-ml`), similar to the one used with Azure Machine Learning workspace. This SDK facilitates feature store CRUD operations
+
+ - Create
+ - Read
+ - Update
+ - Delete
+
+ for feature store and feature set entities, because feature store is implemented as a type of Azure Machine Learning workspace
+
+ 1. Feature store core SDK: This SDK (`azureml-featurestore`) facilitates feature set development and consumption:
+
+ [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/7. Develop a feature set using Domain Specific Language (DSL).ipynb?name=init-python-clients)]
+
+## Register `account` entity with the feature store
+
+ Create an account entity that has a join key `accountID` of `string` type:
+
+ [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/7. Develop a feature set using Domain Specific Language (DSL).ipynb?name=register-account-entity)]
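As a rough sketch of what that notebook cell does, the following Python uses the `azure-ai-ml` CRUD SDK to define and register the entity. The client parameters are placeholders, and the notebook cell may differ in detail:

```python
from azure.ai.ml import MLClient
from azure.ai.ml.entities import DataColumn, DataColumnType, FeatureStoreEntity
from azure.identity import DefaultAzureCredential

# Placeholders: use your own subscription, resource group, and feature store name.
fs_client = MLClient(
    DefaultAzureCredential(),
    subscription_id="<subscription-id>",
    resource_group_name="<resource-group>",
    workspace_name="<feature-store-name>",
)

# An entity named "account" whose join key is the string column accountID.
account_entity = FeatureStoreEntity(
    name="account",
    version="1",
    index_columns=[DataColumn(name="accountID", type=DataColumnType.STRING)],
)

poller = fs_client.feature_store_entities.begin_create_or_update(account_entity)
print(poller.result().name)
```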
+
+## Register the feature set with the feature store
+
+ 1. Register the `transactions-dsl` feature set (that uses DSL) with the feature store, with offline materialization enabled, using the exported feature set specification:
+
+ [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/7. Develop a feature set using Domain Specific Language (DSL).ipynb?name=register-dsl-trans-fset)]
+
+ 1. Materialize the feature set to persist the transformed feature data to the offline store:
+
+ [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/7. Develop a feature set using Domain Specific Language (DSL).ipynb?name=mater-dsl-trans-fset)]
+
+ 1. Execute this code cell to track the progress of the materialization job:
+
+ [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/7. Develop a feature set using Domain Specific Language (DSL).ipynb?name=track-mater-job)]
+
+ 1. Print sample data from the feature set. The output information shows that the data was retrieved from the materialization store. The `get_offline_features()` method used to retrieve the training/inference data also uses the materialization store by default:
+
+ [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/7. Develop a feature set using Domain Specific Language (DSL).ipynb?name=lookup-trans-dsl-fset)]
+
+## Generate a training dataframe using the registered feature set
+
+### Load observation data
+
+ Observation data is typically the core data used in training and inference steps. Then, the observation data is joined with the feature data, to create a complete training data resource. Observation data is the data captured during the time of the event. In this case, it has core transaction data including transaction ID, account ID, and transaction amount. Since this data is used for training, it also has the target variable appended (`is_fraud`).
+
+ 1. First, explore the observation data:
+
+ [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/7. Develop a feature set using Domain Specific Language (DSL).ipynb?name=load-obs-data)]
+
+ 1. Select features that would be part of the training data, and use the feature store SDK to generate the training data:
+
+ [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/7. Develop a feature set using Domain Specific Language (DSL).ipynb?name=select-features-dsl)]
+
+ 1. The `get_offline_features()` function appends the features to the observation data with a point-in-time join. Display the training dataframe obtained from the point-in-time join:
+
+ [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/7. Develop a feature set using Domain Specific Language (DSL).ipynb?name=get-offline-features-dsl)]
+
+### Generate a training dataframe from feature sets using DSL and UDF
+
+ 1. Register the `transactions-udf` feature set (that uses UDF) with the feature store, using the exported feature set specification. Enable offline materialization for this feature set while registering with the feature store:
+
+ [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/7. Develop a feature set using Domain Specific Language (DSL).ipynb?name=register-udf-trans-fset)]
+
+ 1. Select features from the feature sets (created using DSL and UDF) that you would like to become part of the training data, and use the feature store SDK to generate the training data:
+
+ [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/7. Develop a feature set using Domain Specific Language (DSL).ipynb?name=select-features-dsl-udf)]
+
+ 1. The function `get_offline_features()` appends the features to the observation data with a point-in-time join. Display the training dataframe obtained from the point-in-time join:
+
+ [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/7. Develop a feature set using Domain Specific Language (DSL).ipynb?name=get-offline-features-dsl-udf)]
+
+The features are appended to the training data with a point-in-time join. The generated training data can be used for subsequent training and batch inferencing steps.
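To make the point-in-time join concrete, here's a standalone PySpark sketch with made-up data. It isn't how `get_offline_features()` is implemented internally; it only illustrates the semantics: each observation row picks up the latest feature values with a timestamp at or before the observation timestamp, never a future value:

```python
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.window import Window

spark = SparkSession.builder.appName("point-in-time-join-demo").getOrCreate()

observations = spark.createDataFrame(
    [("A1", "2023-01-10 12:00:00", 250.0, 0)],
    ["accountID", "timestamp", "transactionAmount", "is_fraud"],
).withColumn("timestamp", F.to_timestamp("timestamp"))

features = spark.createDataFrame(
    [("A1", "2023-01-09 00:00:00", 3), ("A1", "2023-01-11 00:00:00", 5)],
    ["accountID", "feature_timestamp", "transaction_3d_count"],
).withColumn("feature_timestamp", F.to_timestamp("feature_timestamp"))

# Keep only feature rows at or before each observation, then take the latest one.
joined = observations.join(features, "accountID", "left").where(
    F.col("feature_timestamp") <= F.col("timestamp")
)
w = Window.partitionBy("accountID", "timestamp").orderBy(F.col("feature_timestamp").desc())
point_in_time = (
    joined.withColumn("rank", F.row_number().over(w))
    .where(F.col("rank") == 1)
    .drop("rank", "feature_timestamp")
)
point_in_time.show()  # keeps the 2023-01-09 feature row, not the future 2023-01-11 row
```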
+
+## Clean up
+
+The [fifth tutorial in the series](./tutorial-develop-feature-set-with-custom-source.md#clean-up) describes how to delete the resources.
+
+## Next steps
+
+* [Part 2: Experiment and train models using features](./tutorial-experiment-train-models-using-features.md)
+* [Part 3: Enable recurrent materialization and run batch inference](./tutorial-enable-recurrent-materialization-run-batch-inference.md)
postgresql Concepts Networking Private Link https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-networking-private-link.md
Private endpoints support network policies. Network policies enable support for
## Private Link and DNS
-When using a private endpoint, you need to connect to the same Azure service but use the private endpoint IP address. The intimate endpoint connection requires separate DNS settings to resolve the private IP address to the resource name.
-Private DNS zones provide domain name resolution within a virtual network without a custom DNS solution. You link the private DNS zones to each virtual network to provide DNS services to that network.
+When using a private endpoint, you need to connect to the same Azure service but use the private endpoint IP address. The private endpoint connection requires separate **Domain Name System (DNS)** settings to resolve the private IP address to the resource name.
+**[Private DNS zones](../../dns/private-dns-overview.md)** provide domain name resolution within a virtual network without a custom DNS solution. You link the **private DNS zones** to each virtual network to provide DNS services to that network.
+
+**Private DNS zones** provide separate DNS zone names for each Azure service. For example, if you configured a private DNS zone for the storage account blob service in the previous image, the DNS zone name is **privatelink.blob.core.windows.net**. See [Azure Private Endpoint DNS configuration](../../private-link/private-endpoint-dns.md) for the private DNS zone names of all Azure services.
-Private DNS zones provide separate DNS zone names for each Azure service. For example, if you configured a private DNS zone for the storage account blob service in the previous image, the DNS zones name is **privatelink.blob.core.windows.net**. Check out the Microsoft documentation here to see more of the private DNS zone names for all Azure services.
> [!NOTE] > Private endpoint private DNS zone configurations are generated automatically only if you use the recommended naming scheme: **privatelink.postgres.database.azure.com** > On newly provisioned public access (non-VNet-injected) servers, there's a temporary DNS layout change. The server's FQDN is now a CNAME record, resolving to an A record in the format **servername.privatelink.postgres.database.azure.com**. In the near future, this format will apply only when private endpoints are created on the server.
+### Hybrid DNS for Azure and on-premises resources
+
+**Domain Name System (DNS)** is a critical design topic in the overall landing zone architecture. Some organizations might want to use their existing investments in DNS, while others may want to adopt native Azure capabilities for all their DNS needs.
+You can use [Azure DNS Private Resolver service](../../dns/dns-private-resolver-overview.md) in conjunction with Azure Private DNS zones for cross-premises name resolution. DNS Private Resolver can forward DNS requests to another DNS server, and it also provides an IP address that external DNS servers can use to forward requests. As a result, external on-premises DNS servers can resolve names located in a private DNS zone.
+
+For more information about using [Azure DNS Private Resolver](../../dns/dns-private-resolver-overview.md) with an on-premises DNS forwarder to forward DNS traffic to Azure DNS, see [On-premises workloads using a DNS forwarder](../../private-link/private-endpoint-dns-integration.md#on-premises-workloads-using-a-dns-forwarder) and [this tutorial](../../private-link/tutorial-dns-on-premises-private-resolver.md). The solutions described extend an on-premises network that already has a DNS solution in place to resolve resources in Azure.
+
+### Private Link and DNS integration in hub and spoke network architectures
+
+Private DNS zones are typically hosted centrally in the same Azure subscription where the hub VNet is deployed. This central hosting practice is driven by cross-premises DNS name resolution and other needs for central DNS resolution, such as Active Directory. In most cases, only networking and identity administrators have permissions to manage DNS records in the zones.
+
+In such an architecture, the following is usually configured (a validation sketch follows this list):
+* On-premises DNS servers have conditional forwarders configured for each private endpoint public DNS zone, pointing to the Private DNS Resolver hosted in the hub VNet.
+* The Private DNS Resolver hosted in the hub VNet uses the Azure-provided DNS (168.63.129.16) as a forwarder.
+* The hub VNet must be linked to the private DNS zones for Azure services (such as *privatelink.postgres.database.azure.com*, for Azure Database for PostgreSQL - Flexible Server).
+* All Azure VNets use the Private DNS Resolver hosted in the hub VNet.
+* Because the Private DNS Resolver is just a forwarder and isn't authoritative for the customer's corporate domains (for example, Active Directory domain names), it should have outbound endpoint forwarders to the customer's corporate domains, pointing to the on-premises DNS servers, or to DNS servers deployed in Azure, that are authoritative for such zones.
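+
+One way to validate this configuration end to end is to check which IP address a client actually resolves for the server's FQDN. Here's a minimal sketch in Python (the server name is a placeholder); run from a spoke VNet, or from an on-premises machine with the forwarding chain in place, it should print the private endpoint's private IP rather than a public address:
+
+```python
+# Resolve a Flexible Server FQDN and print the resulting IP addresses.
+import socket
+
+fqdn = "myserver.postgres.database.azure.com"  # placeholder server name
+addresses = sorted({info[4][0] for info in socket.getaddrinfo(fqdn, 5432)})
+print(f"{fqdn} resolves to: {', '.join(addresses)}")
+```
+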
++ ## Private Link and Network Security Groups
resource-mover About Move Process https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/resource-mover/about-move-process.md
description: Learn about the process for moving resources across regions with Az
Previously updated : 02/02/2023 Last updated : 03/29/2024 #Customer intent: As an Azure admin, I want to understand how Azure Resource Mover works.
resource-mover Common Questions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/resource-mover/common-questions.md
Title: Common questions about Azure Resource Mover?
-description: Get answers to common questions about Azure Resource Mover
+description: Get answers to common questions about Azure Resource Mover.
Previously updated : 10/12/2023 Last updated : 03/29/2024
Azure Resource Mover is currently available as follows:
Using Resource Mover, you can currently move the following resources across regions: -- Azure VMs and associated disks (Azure Spot VMs are not currently supported)-- NICs
+- Azure virtual machines and associated disks (Azure Spot virtual machines are not currently supported)
+- Network interface cards (NICs)
- Availability sets - Azure virtual networks -- Public IP addresses (Public IP will not be retained across Azure region)
+- Public IP addresses (public IP addresses aren't retained across Azure regions)
- Network security groups (NSGs) - Internal and public load balancers - Azure SQL databases and elastic pools ### Can I move disks across regions?
-You can't select disks as resources to the moved across regions. However, disks are moved as part of a VM move.
+You can't select disks as resources to be moved across regions. However, disks are moved as part of a virtual machine move.
### How can I move my resources across subscriptions?
No. Resource Mover service doesn't store customer data, it only stores metadata
### Where is the metadata for moving across regions stored?
-It's stored in an [Azure Cosmos DB](../cosmos-db/database-encryption-at-rest.md) database, and in [Azure Blob storage](../storage/common/storage-service-encryption.md), in a Microsoft subscription. Currently, metadata is stored in East US 2 and North Europe. We will expand this coverage to other regions. This doesn't restrict you from moving resources across any public region.
+It's stored in an [Azure Cosmos DB](../cosmos-db/database-encryption-at-rest.md) database, and in [Azure Blob storage](../storage/common/storage-service-encryption.md), in a Microsoft subscription. Currently, metadata is stored in East US 2 and North Europe. We plan to expand this coverage to other regions. This doesn't restrict you from moving resources across any public region.
### Is the collected metadata encrypted?
Change the source/target combinations as needed using the change option in the p
### What happens when I remove a resource from a list of move resources?
-You can remove resources that you've added to the move list. The exact remove behavior depends on the resource state. [Learn more](remove-move-resources.md#vm-resource-state-after-removing).
+You can remove resources that you added to the move list. The exact remove behavior depends on the resource state. [Learn more](remove-move-resources.md#vm-resource-state-after-removing).
## Next steps
resource-mover Manage Resources Created Move Process https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/resource-mover/manage-resources-created-move-process.md
Title: Manage resources that are created during the VM move process in Azure Resource Mover
-description: Learn how to manage resources that are created during the VM move process in Azure Resource Mover
+ Title: Manage resources that are created during the virtual machine move process in Azure Resource Mover
+description: Learn how to manage resources that are created during the virtual machine move process in Azure Resource Mover.
Previously updated : 10/31/2023 Last updated : 03/29/2024
-# Manage resources created for the VM move
+# Manage resources created for the virtual machine move
-This article describes how to manage resources that are created explicitly by [Azure Resource Mover](overview.md) to facilitate the VM move process.
+This article describes how to manage resources that are created explicitly by [Azure Resource Mover](overview.md) to facilitate the virtual machine move process.
-After moving VMs across regions, there are a number of resources created by Resource Mover that should be cleaned up manually.
+After moving virtual machines across regions, there are a number of resources created by Resource Mover that should be cleaned up manually.
-## Delete resources created for VM move
+## Delete resources created for virtual machine move
-Manually delete the move collection, and Site Recovery resources created for the VM move.
+Manually delete the move collection, and Site Recovery resources created for the virtual machine move.
1. Review the resources in resource group ```ResourceMoverRG-<sourceregion>-<target-region>-<metadataRegionShortName>```.
-2. Check that the VM and all other source resources in the move collection have been moved/deleted. This ensures that there are no pending resources using them.
+2. Check that the virtual machine and all other source resources in the move collection have been moved/deleted. This ensures that there are no pending resources using them.
2. Delete these resources. - The move collection name is ```movecollection-<sourceregion>-<target-region>-<metadata-region>```.
Manually delete the move collection, and Site Recovery resources created for the
## Next steps
-Try [moving a VM](tutorial-move-region-virtual-machines.md) to another region with Resource Mover.
+Try [moving a virtual machine](tutorial-move-region-virtual-machines.md) to another region with Resource Mover.
resource-mover Modify Target Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/resource-mover/modify-target-settings.md
description: Learn how to modify destination settings when moving Azure VMs betw
Previously updated : 10/31/2023 Last updated : 03/29/2024
-#Customer intent: As an Azure admin, I want to modify destination settings when moving resources to another region.
+#Customer intent: As an Azure admin, I want to modify destination settings when moving resources to another region using Azure Resource Mover.
# Modify destination settings
When moving Azure SQL Database resources, you can modify the destination setting
### Edit SQL destination settings
-You modify the destination settings for a Azure SQL Database resource as follows:
+You modify the destination settings for an Azure SQL Database resource as follows:
1. In **Across regions**, for the resource you want to modify, click the **Destination configuration** entry. 2. In **Configuration settings**, specify the destination settings summarized in the table above.
resource-mover Move Across Region Dashboard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/resource-mover/move-across-region-dashboard.md
Previously updated : 10/31/2023 Last updated : 03/29/2024
The **Move across region dashboard** page combines all monitoring information of
[![Move across region dashboard tab](media\move-across-region-dashboard\move-across-region-dashboard-tab.png)](media\move-across-region-dashboard\move-across-region-dashboard-tab.png) 2. The dashboard lists all the move combinations created by you. The following two sections are used to capture the status of your move across regions. In **Resources by move status**, monitor the percentage and number of resources in each state.
- In **Error Summary**, monitor the active errors that needs to be resolved before you can successfully move to the destination region.
+ In **Error Summary**, monitor the active errors that need to be resolved before you can successfully move to the destination region.
[![Status and issues section](media\move-across-region-dashboard\move-across-region-dashboard-status-issues.png)](media\move-across-region-dashboard\move-across-region-dashboard-status-issues.png) > [!NOTE] > Only the source-destination combinations that are already created in your chosen subscription are listed in the dashboard.
resource-mover Move Region Availability Zone https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/resource-mover/move-region-availability-zone.md
description: Learn how to move Azure VMs to availability zones with Azure Resour
Previously updated : 09/29/2023 Last updated : 03/29/2024
-#Customer intent: As an Azure admin, I want to move Azure VMs to a different Azure region.
+#Customer intent: As an Azure admin, I want to move Azure VMs to a different Azure region using Azure Resource Mover.
# Move Azure VMs to an availability zone in another region
resource-mover Move Region Within Resource Group https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/resource-mover/move-region-within-resource-group.md
description: Learn how to move resources within a resource group to another regi
Previously updated : 02/10/2023 Last updated : 03/29/2024
-#Customer intent: As an Azure admin, I want to move Azure resources to a different Azure region.
+#Customer intent: As an Azure admin, I want to move Azure resources to a different Azure region using Azure Resource Mover.
# Move resources across regions (from resource group)
resource-mover Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/resource-mover/overview.md
description: Learn about Azure Resource Mover
Previously updated : 02/02/2023 Last updated : 03/29/2024
resource-mover Remove Move Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/resource-mover/remove-move-resources.md
description: Learn how to remove resources from a move collection in Azure Resou
Previously updated : 10/30/2023 Last updated : 03/29/2024 #Customer intent: As an Azure admin, I want remove resources I've added to a move collection.
resource-mover Select Move Tool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/resource-mover/select-move-tool.md
description: Review options and tools for moving Azure resources across regions
Previously updated : 12/23/2022 Last updated : 03/29/2024
-#Customer intent: As an Azure admin, I need to compare tools for moving resources in Azure.
+#Customer intent: As an Azure admin, I need to compare tools for moving resources in Azure using Azure Resource Mover.
# Choose a tool for moving Azure resources
resource-mover Support Matrix Extension Resource Types https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/resource-mover/support-matrix-extension-resource-types.md
Previously updated : 03/02/2023 Last updated : 03/29/2024
resource-mover Support Matrix Move Region Azure Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/resource-mover/support-matrix-move-region-azure-vm.md
description: Review support for moving Azure VMs between regions with Azure Reso
Previously updated : 03/21/2023 Last updated : 03/29/2024
resource-mover Support Matrix Move Region Sql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/resource-mover/support-matrix-move-region-sql.md
description: Review support for moving Azure SQL resources between regions with
Previously updated : 03/21/2023 Last updated : 03/29/2024
resource-mover Tutorial Move Region Encrypted Virtual Machines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/resource-mover/tutorial-move-region-encrypted-virtual-machines.md
description: Learn how to move encrypted Azure VMs to another region by using Az
Previously updated : 10/12/2023 Last updated : 03/29/2024
-#Customer intent: As an Azure admin, I want to move Azure VMs to a different Azure region.
+#Customer intent: As an Azure admin, I want to move Azure VMs to a different Azure region using Azure Resource Mover.
# Move encrypted Azure VMs across regions
resource-mover Tutorial Move Region Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/resource-mover/tutorial-move-region-powershell.md
description: Learn how to move resources across regions using PowerShell in Azur
Previously updated : 10/30/2023 Last updated : 03/29/2024
-#Customer intent: As an Azure admin, I want to move Azure VMs to a different Azure region using Azure Resource Mover with PowerShell
+#Customer intent: As an Azure admin, I want to move Azure VMs to a different Azure region using Azure Resource Mover with PowerShell.
# Move resources across regions in PowerShell
resource-mover Tutorial Move Region Sql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/resource-mover/tutorial-move-region-sql.md
description: Learn how to move Azure SQL resources to another region with Azure
Previously updated : 02/10/2023 Last updated : 03/29/2024
-#Customer intent: As an Azure admin, I want to move SQL Server databases to a different Azure region.
+#Customer intent: As an Azure admin, I want to move SQL Server databases to a different Azure region using Azure Resource Mover.
# Move Azure SQL Database resources to another region
resource-mover Tutorial Move Region Virtual Machines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/resource-mover/tutorial-move-region-virtual-machines.md
description: Learn how to move Azure VMs to another region with Azure Resource M
Previously updated : 10/12/2023 Last updated : 03/29/2024
-#Customer intent: As an Azure admin, I want to move Azure VMs to a different Azure region.
+#Customer intent: As an Azure admin, I want to move Azure VMs to a different Azure region using Azure Resource Mover.
# Move Azure VMs across regions
resource-mover Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/resource-mover/whats-new.md
Previously updated : 03/09/2023 Last updated : 03/29/2024 # What's new in Resource Mover
role-based-access-control Built In Roles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/built-in-roles.md
The following table provides a brief description of each built-in role. Click th
> | <a name='azure-arc-kubernetes-cluster-admin'></a>[Azure Arc Kubernetes Cluster Admin](./built-in-roles/containers.md#azure-arc-kubernetes-cluster-admin) | Lets you manage all resources in the cluster. | 8393591c-06b9-48a2-a542-1bd6b377f6a2 | > | <a name='azure-arc-kubernetes-viewer'></a>[Azure Arc Kubernetes Viewer](./built-in-roles/containers.md#azure-arc-kubernetes-viewer) | Lets you view all resources in cluster/namespace, except secrets. | 63f0a09d-1495-4db4-a681-037d84835eb4 | > | <a name='azure-arc-kubernetes-writer'></a>[Azure Arc Kubernetes Writer](./built-in-roles/containers.md#azure-arc-kubernetes-writer) | Lets you update everything in cluster/namespace, except (cluster)roles and (cluster)role bindings. | 5b999177-9696-4545-85c7-50de3797e5a1 |
-> | <a name='azure-kubernetes-fleet-manager-rbac-admin'></a>[Azure Kubernetes Fleet Manager RBAC Admin](./built-in-roles/containers.md#azure-kubernetes-fleet-manager-rbac-admin) | This role grants admin access - provides write permissions on most objects within a namespace, with the exception of ResourceQuota object and the namespace object itself. Applying this role at cluster scope will give access across all namespaces. | 434fb43a-c01c-447e-9f67-c3ad923cfaba |
-> | <a name='azure-kubernetes-fleet-manager-rbac-cluster-admin'></a>[Azure Kubernetes Fleet Manager RBAC Cluster Admin](./built-in-roles/containers.md#azure-kubernetes-fleet-manager-rbac-cluster-admin) | Lets you manage all resources in the fleet manager cluster. | 18ab4d3d-a1bf-4477-8ad9-8359bc988f69 |
-> | <a name='azure-kubernetes-fleet-manager-rbac-reader'></a>[Azure Kubernetes Fleet Manager RBAC Reader](./built-in-roles/containers.md#azure-kubernetes-fleet-manager-rbac-reader) | Allows read-only access to see most objects in a namespace. It does not allow viewing roles or role bindings. This role does not allow viewing Secrets, since reading the contents of Secrets enables access to ServiceAccount credentials in the namespace, which would allow API access as any ServiceAccount in the namespace (a form of privilege escalation). Applying this role at cluster scope will give access across all namespaces. | 30b27cfc-9c84-438e-b0ce-70e35255df80 |
-> | <a name='azure-kubernetes-fleet-manager-rbac-writer'></a>[Azure Kubernetes Fleet Manager RBAC Writer](./built-in-roles/containers.md#azure-kubernetes-fleet-manager-rbac-writer) | Allows read/write access to most objects in a namespace. This role does not allow viewing or modifying roles or role bindings. However, this role allows accessing Secrets as any ServiceAccount in the namespace, so it can be used to gain the API access levels of any ServiceAccount in the namespace. Applying this role at cluster scope will give access across all namespaces. | 5af6afb3-c06c-4fa4-8848-71a8aee05683 |
+> | <a name='azure-kubernetes-fleet-manager-contributor-role'></a>[Azure Kubernetes Fleet Manager Contributor Role](./built-in-roles/containers.md#azure-kubernetes-fleet-manager-contributor-role) | Grants read/write access to Azure resources provided by Azure Kubernetes Fleet Manager, including fleets, fleet members, fleet update strategies, fleet update runs, etc. | 63bb64ad-9799-4770-b5c3-24ed299a07bf |
+> | <a name='azure-kubernetes-fleet-manager-rbac-admin'></a>[Azure Kubernetes Fleet Manager RBAC Admin](./built-in-roles/containers.md#azure-kubernetes-fleet-manager-rbac-admin) | Grants read/write access to Kubernetes resources within a namespace in the fleet-managed hub cluster - provides write permissions on most objects within a namespace, with the exception of ResourceQuota object and the namespace object itself. Applying this role at cluster scope will give access across all namespaces. | 434fb43a-c01c-447e-9f67-c3ad923cfaba |
+> | <a name='azure-kubernetes-fleet-manager-rbac-cluster-admin'></a>[Azure Kubernetes Fleet Manager RBAC Cluster Admin](./built-in-roles/containers.md#azure-kubernetes-fleet-manager-rbac-cluster-admin) | Grants read/write access to all Kubernetes resources in the fleet-managed hub cluster. | 18ab4d3d-a1bf-4477-8ad9-8359bc988f69 |
+> | <a name='azure-kubernetes-fleet-manager-rbac-reader'></a>[Azure Kubernetes Fleet Manager RBAC Reader](./built-in-roles/containers.md#azure-kubernetes-fleet-manager-rbac-reader) | Grants read-only access to most Kubernetes resources within a namespace in the fleet-managed hub cluster. It does not allow viewing roles or role bindings. This role does not allow viewing Secrets, since reading the contents of Secrets enables access to ServiceAccount credentials in the namespace, which would allow API access as any ServiceAccount in the namespace (a form of privilege escalation). Applying this role at cluster scope will give access across all namespaces. | 30b27cfc-9c84-438e-b0ce-70e35255df80 |
+> | <a name='azure-kubernetes-fleet-manager-rbac-writer'></a>[Azure Kubernetes Fleet Manager RBAC Writer](./built-in-roles/containers.md#azure-kubernetes-fleet-manager-rbac-writer) | Grants read/write access to most Kubernetes resources within a namespace in the fleet-managed hub cluster. This role does not allow viewing or modifying roles or role bindings. However, this role allows accessing Secrets as any ServiceAccount in the namespace, so it can be used to gain the API access levels of any ServiceAccount in the namespace. Applying this role at cluster scope will give access across all namespaces. | 5af6afb3-c06c-4fa4-8848-71a8aee05683 |
> | <a name='azure-kubernetes-service-cluster-admin-role'></a>[Azure Kubernetes Service Cluster Admin Role](./built-in-roles/containers.md#azure-kubernetes-service-cluster-admin-role) | List cluster admin credential action. | 0ab0b1a8-8aac-4efd-b8c2-3ee1fb270be8 | > | <a name='azure-kubernetes-service-cluster-monitoring-user'></a>[Azure Kubernetes Service Cluster Monitoring User](./built-in-roles/containers.md#azure-kubernetes-service-cluster-monitoring-user) | List cluster monitoring user credential action. | 1afdec4b-e479-420e-99e7-f82237c7c5e6 | > | <a name='azure-kubernetes-service-cluster-user-role'></a>[Azure Kubernetes Service Cluster User Role](./built-in-roles/containers.md#azure-kubernetes-service-cluster-user-role) | List cluster user credential action. | 4abbcc35-e782-43d8-92c5-2d3f1bd2253f |
The following table provides a brief description of each built-in role. Click th
> | <a name='billing-reader'></a>[Billing Reader](./built-in-roles/management-and-governance.md#billing-reader) | Allows read access to billing data | fa23ad8b-c56e-40d8-ac0c-ce449e1d2c64 | > | <a name='blueprint-contributor'></a>[Blueprint Contributor](./built-in-roles/management-and-governance.md#blueprint-contributor) | Can manage blueprint definitions, but not assign them. | 41077137-e803-4205-871c-5a86e6a753b4 | > | <a name='blueprint-operator'></a>[Blueprint Operator](./built-in-roles/management-and-governance.md#blueprint-operator) | Can assign existing published blueprints, but cannot create new blueprints. Note that this only works if the assignment is done with a user-assigned managed identity. | 437d2ced-4a38-4302-8479-ed2bcb43d090 |
+> | <a name='carbon-optimization-reader'></a>[Carbon Optimization Reader](./built-in-roles/management-and-governance.md#carbon-optimization-reader) | Allow read access to Azure Carbon Optimization data | fa0d39e6-28e5-40cf-8521-1eb320653a4c |
> | <a name='cost-management-contributor'></a>[Cost Management Contributor](./built-in-roles/management-and-governance.md#cost-management-contributor) | Can view costs and manage cost configuration (e.g. budgets, exports) | 434105ed-43f6-45c7-a02f-909b2ba83430 | > | <a name='cost-management-reader'></a>[Cost Management Reader](./built-in-roles/management-and-governance.md#cost-management-reader) | Can view cost data and configuration (e.g. budgets, exports) | 72fafb9e-0641-4937-9268-a91bfd8191a3 | > | <a name='hierarchy-settings-administrator'></a>[Hierarchy Settings Administrator](./built-in-roles/management-and-governance.md#hierarchy-settings-administrator) | Allows users to edit and delete Hierarchy Settings | 350f8d15-c687-4448-8ae1-157740a3936d |
role-based-access-control Containers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/built-in-roles/containers.md
Lets you update everything in cluster/namespace, except (cluster)roles and (clus
} ```
+## Azure Kubernetes Fleet Manager Contributor Role
+
+Grants read/write access to Azure resources provided by Azure Kubernetes Fleet Manager, including fleets, fleet members, fleet update strategies, fleet update runs, etc.
+
+> [!div class="mx-tableFixed"]
+> | Actions | Description |
+> | | |
+> | [Microsoft.ContainerService](../permissions/containers.md#microsoftcontainerservice)/fleets/* | |
+> | [Microsoft.Resources](../permissions/management-and-governance.md#microsoftresources)/deployments/* | Create and manage a deployment |
+> | **NotActions** | |
+> | *none* | |
+> | **DataActions** | |
+> | *none* | |
+> | **NotDataActions** | |
+> | *none* | |
+
+```json
+{
+ "assignableScopes": [
+ "/"
+ ],
+ "description": "Grants read/write access to Azure resources provided by Azure Kubernetes Fleet Manager, including fleets, fleet members, fleet update strategies, fleet update runs, etc.",
+ "id": "/providers/Microsoft.Authorization/roleDefinitions/63bb64ad-9799-4770-b5c3-24ed299a07bf",
+ "name": "63bb64ad-9799-4770-b5c3-24ed299a07bf",
+ "permissions": [
+ {
+ "actions": [
+ "Microsoft.ContainerService/fleets/*",
+ "Microsoft.Resources/deployments/*"
+ ],
+ "notActions": [],
+ "dataActions": [],
+ "notDataActions": []
+ }
+ ],
+ "roleName": "Azure Kubernetes Fleet Manager Contributor Role",
+ "roleType": "BuiltInRole",
+ "type": "Microsoft.Authorization/roleDefinitions"
+}
+```
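+
+Like any built-in role, this one can be assigned at the scope of a single fleet with your usual tooling. The following is a minimal sketch using the `azure-mgmt-authorization` Python SDK; the subscription, resource group, fleet, and principal IDs are placeholders, and the role definition GUID comes from the JSON above:
+
+```python
+# Assign the Azure Kubernetes Fleet Manager Contributor Role at fleet scope.
+# Every <...> value is a placeholder.
+import uuid
+
+from azure.identity import DefaultAzureCredential
+from azure.mgmt.authorization import AuthorizationManagementClient
+from azure.mgmt.authorization.models import RoleAssignmentCreateParameters
+
+subscription_id = "<subscription-id>"
+client = AuthorizationManagementClient(DefaultAzureCredential(), subscription_id)
+
+scope = (
+    f"/subscriptions/{subscription_id}/resourceGroups/<resource-group>"
+    "/providers/Microsoft.ContainerService/fleets/<fleet-name>"
+)
+role_definition_id = (
+    f"/subscriptions/{subscription_id}/providers/Microsoft.Authorization"
+    "/roleDefinitions/63bb64ad-9799-4770-b5c3-24ed299a07bf"
+)
+
+assignment = client.role_assignments.create(
+    scope,
+    str(uuid.uuid4()),  # each role assignment is named by a new GUID
+    RoleAssignmentCreateParameters(
+        role_definition_id=role_definition_id,
+        principal_id="<principal-object-id>",
+    ),
+)
+print(assignment.id)
+```
+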
+ ## Azure Kubernetes Fleet Manager RBAC Admin
-This role grants admin access - provides write permissions on most objects within a namespace, with the exception of ResourceQuota object and the namespace object itself. Applying this role at cluster scope will give access across all namespaces.
+Grants read/write access to Kubernetes resources within a namespace in the fleet-managed hub cluster - provides write permissions on most objects within a namespace, with the exception of ResourceQuota object and the namespace object itself. Applying this role at cluster scope will give access across all namespaces.
+
+[Learn more](/azure/kubernetes-fleet/access-fleet-kubernetes-api)
> [!div class="mx-tableFixed"] > | Actions | Description |
This role grants admin access - provides write permissions on most objects withi
"assignableScopes": [ "/" ],
- "description": "This role grants admin access - provides write permissions on most objects within a a namespace, with the exception of ResourceQuota object and the namespace object itself. Applying this role at cluster scope will give access across all namespaces.",
+ "description": "Grants read/write access to Kubernetes resources within a namespace in the fleet-managed hub cluster - provides write permissions on most objects within a a namespace, with the exception of ResourceQuota object and the namespace object itself. Applying this role at cluster scope will give access across all namespaces.",
"id": "/providers/Microsoft.Authorization/roleDefinitions/434fb43a-c01c-447e-9f67-c3ad923cfaba", "name": "434fb43a-c01c-447e-9f67-c3ad923cfaba", "permissions": [
This role grants admin access - provides write permissions on most objects withi
## Azure Kubernetes Fleet Manager RBAC Cluster Admin
-Lets you manage all resources in the fleet manager cluster.
+Grants read/write access to all Kubernetes resources in the fleet-managed hub cluster.
+
+[Learn more](/azure/kubernetes-fleet/access-fleet-kubernetes-api)
> [!div class="mx-tableFixed"] > | Actions | Description |
Lets you manage all resources in the fleet manager cluster.
"assignableScopes": [ "/" ],
- "description": "Lets you manage all resources in the fleet manager cluster.",
+ "description": "Grants read/write access to all Kubernetes resources in the fleet-managed hub cluster.",
"id": "/providers/Microsoft.Authorization/roleDefinitions/18ab4d3d-a1bf-4477-8ad9-8359bc988f69", "name": "18ab4d3d-a1bf-4477-8ad9-8359bc988f69", "permissions": [
Lets you manage all resources in the fleet manager cluster.
## Azure Kubernetes Fleet Manager RBAC Reader
-Allows read-only access to see most objects in a namespace. It does not allow viewing roles or role bindings. This role does not allow viewing Secrets, since reading the contents of Secrets enables access to ServiceAccount credentials in the namespace, which would allow API access as any ServiceAccount in the namespace (a form of privilege escalation). Applying this role at cluster scope will give access across all namespaces.
+Grants read-only access to most Kubernetes resources within a namespace in the fleet-managed hub cluster. It does not allow viewing roles or role bindings. This role does not allow viewing Secrets, since reading the contents of Secrets enables access to ServiceAccount credentials in the namespace, which would allow API access as any ServiceAccount in the namespace (a form of privilege escalation). Applying this role at cluster scope will give access across all namespaces.
+
+[Learn more](/azure/kubernetes-fleet/access-fleet-kubernetes-api)
> [!div class="mx-tableFixed"] > | Actions | Description |
Allows read-only access to see most objects in a namespace. It does not allow vi
"assignableScopes": [ "/" ],
- "description": "Allows read-only access to see most objects in a namespace. It does not allow viewing roles or role bindings. This role does not allow viewing Secrets, since reading the contents of Secrets enables access to ServiceAccount credentials in the namespace, which would allow API access as any ServiceAccount in the namespace (a form of privilege escalation). Applying this role at cluster scope will give access across all namespaces.",
+ "description": "Grants read-only access to most Kubernetes resources within a namespace in the fleet-managed hub cluster. It does not allow viewing roles or role bindings. This role does not allow viewing Secrets, since reading the contents of Secrets enables access to ServiceAccount credentials in the namespace, which would allow API access as any ServiceAccount in the namespace (a form of privilege escalation). Applying this role at cluster scope will give access across all namespaces.",
"id": "/providers/Microsoft.Authorization/roleDefinitions/30b27cfc-9c84-438e-b0ce-70e35255df80", "name": "30b27cfc-9c84-438e-b0ce-70e35255df80", "permissions": [
Allows read-only access to see most objects in a namespace. It does not allow vi
## Azure Kubernetes Fleet Manager RBAC Writer
-Allows read/write access to most objects in a namespace. This role does not allow viewing or modifying roles or role bindings. However, this role allows accessing Secrets as any ServiceAccount in the namespace, so it can be used to gain the API access levels of any ServiceAccount in the namespace. Applying this role at cluster scope will give access across all namespaces.
+Grants read/write access to most Kubernetes resources within a namespace in the fleet-managed hub cluster. This role does not allow viewing or modifying roles or role bindings. However, this role allows accessing Secrets as any ServiceAccount in the namespace, so it can be used to gain the API access levels of any ServiceAccount in the namespace. Applying this role at cluster scope will give access across all namespaces.
+
+[Learn more](/azure/kubernetes-fleet/access-fleet-kubernetes-api)
> [!div class="mx-tableFixed"] > | Actions | Description |
Allows read/write access to most objects in a namespace. This role does not allo
"assignableScopes": [ "/" ],
- "description": "Allows read/write access to most objects in a namespace.This role does not allow viewing or modifying roles or role bindings. However, this role allows accessing Secrets as any ServiceAccount in the namespace, so it can be used to gain the API access levels of any ServiceAccount in the namespace. Applying this role at cluster scope will give access across all namespaces.",
+ "description": "Grants read/write access to most Kubernetes resources within a namespace in the fleet-managed hub cluster. This role does not allow viewing or modifying roles or role bindings. However, this role allows accessing Secrets as any ServiceAccount in the namespace, so it can be used to gain the API access levels of any ServiceAccount in the namespace.  Applying this role at cluster scope will give access across all namespaces.",
"id": "/providers/Microsoft.Authorization/roleDefinitions/5af6afb3-c06c-4fa4-8848-71a8aee05683", "name": "5af6afb3-c06c-4fa4-8848-71a8aee05683", "permissions": [
role-based-access-control Management And Governance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/built-in-roles/management-and-governance.md
Can assign existing published blueprints, but cannot create new blueprints. Note
} ```
+## Carbon Optimization Reader
+
+Allow read access to Azure Carbon Optimization data
+
+[Learn more](/azure/carbon-optimization/permissions)
+
+> [!div class="mx-tableFixed"]
+> | Actions | Description |
+> | | |
+> | [Microsoft.Carbon](../permissions/management-and-governance.md#microsoftcarbon)/carbonEmissionReports/action | API for Carbon Emissions Reports |
+> | **NotActions** | |
+> | *none* | |
+> | **DataActions** | |
+> | *none* | |
+> | **NotDataActions** | |
+> | *none* | |
+
+```json
+{
+ "assignableScopes": [
+ "/"
+ ],
+ "description": "Allow read access to Azure Carbon Optimization data",
+ "id": "/providers/Microsoft.Authorization/roleDefinitions/fa0d39e6-28e5-40cf-8521-1eb320653a4c",
+ "name": "fa0d39e6-28e5-40cf-8521-1eb320653a4c",
+ "permissions": [
+ {
+ "actions": [
+ "Microsoft.Carbon/carbonEmissionReports/action"
+ ],
+ "notActions": [],
+ "dataActions": [],
+ "notDataActions": []
+ }
+ ],
+ "roleName": "Carbon Optimization Reader",
+ "roleType": "BuiltInRole",
+ "type": "Microsoft.Authorization/roleDefinitions"
+}
+```
+ ## Cost Management Contributor Can view costs and manage cost configuration (e.g. budgets, exports)
role-based-access-control Containers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/permissions/containers.md
Azure service: [Container Registry](/azure/container-registry/)
Accelerate your containerized application development without compromising security.
-Azure service: [Azure Kubernetes Service (AKS)](/azure/aks/)
+Azure service: [Azure Kubernetes Service (AKS)](/azure/aks/intro-kubernetes)
> [!div class="mx-tableFixed"] > | Action | Description |
Azure service: [Azure Kubernetes Service (AKS)](/azure/aks/)
> | Microsoft.ContainerService/fleets/write | Create or Update a fleet | > | Microsoft.ContainerService/fleets/delete | Delete a fleet | > | Microsoft.ContainerService/fleets/listCredentials/action | List fleet credentials |
+> | Microsoft.ContainerService/fleets/autoUpgradeProfiles/read | Get a fleet auto upgrade profile |
+> | Microsoft.ContainerService/fleets/autoUpgradeProfiles/write | Create or Update a fleet auto upgrade profile |
+> | Microsoft.ContainerService/fleets/autoUpgradeProfiles/delete | Delete a fleet auto upgrade profile |
> | Microsoft.ContainerService/fleets/members/read | Get a fleet member | > | Microsoft.ContainerService/fleets/members/write | Create or Update a fleet member | > | Microsoft.ContainerService/fleets/members/delete | Delete a fleet member |
Azure service: [Azure Kubernetes Service (AKS)](/azure/aks/)
> | Microsoft.ContainerService/locations/guardrailsVersions/read | Get Guardrails Versions | > | Microsoft.ContainerService/locations/kubernetesversions/read | List available Kubernetes versions in the region. | > | Microsoft.ContainerService/locations/meshRevisionProfiles/read | Read service mesh revision profiles in a location |
+> | Microsoft.ContainerService/locations/nodeimageversions/read | List available Node Image versions in the region. |
> | Microsoft.ContainerService/locations/operationresults/read | Gets the status of an asynchronous operation result | > | Microsoft.ContainerService/locations/operations/read | Gets the status of an asynchronous operation | > | Microsoft.ContainerService/locations/orchestrators/read | Lists the supported orchestrators |
Azure service: [Azure Kubernetes Service (AKS)](/azure/aks/)
> | Microsoft.ContainerService/managedClusters/extensionaddons/read | Gets an extension addon | > | Microsoft.ContainerService/managedClusters/extensionaddons/write | Creates a new extension addon or updates an existing one | > | Microsoft.ContainerService/managedClusters/extensionaddons/delete | Deletes an extension addon |
+> | Microsoft.ContainerService/managedClusters/loadBalancers/read | Gets a load balancer configuration |
+> | Microsoft.ContainerService/managedClusters/loadBalancers/write | Creates a new LoadBalancerConfiguration or updates an existing one |
+> | Microsoft.ContainerService/managedClusters/loadBalancers/delete | Deletes a load balancer configuration |
> | Microsoft.ContainerService/managedClusters/maintenanceConfigurations/read | Gets a maintenance configuration | > | Microsoft.ContainerService/managedClusters/maintenanceConfigurations/write | Creates a new MaintenanceConfiguration or updates an existing one | > | Microsoft.ContainerService/managedClusters/maintenanceConfigurations/delete | Deletes a maintenance configuration |
role-based-access-control Management And Governance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/permissions/management-and-governance.md
Azure service: [Azure Blueprints](/azure/governance/blueprints/)
> | Microsoft.Blueprint/blueprints/versions/delete | Delete any blueprints | > | Microsoft.Blueprint/blueprints/versions/artifacts/read | Read any blueprint artifacts |
+## Microsoft.Carbon
+
+Azure service: [Azure carbon optimization](/azure/carbon-optimization/overview)
+
+> [!div class="mx-tableFixed"]
+> | Action | Description |
+> | | |
+> | Microsoft.Carbon/carbonEmissionReports/action | API for Carbon Emissions Reports |
+> | Microsoft.Carbon/queryCarbonEmissionDataAvailableDateRange/action | API for query carbon emission data available date range |
+> | Microsoft.Carbon/register/action | Register the subscription for Microsoft.Carbon |
+> | Microsoft.Carbon/unregister/action | Unregister the subscription for Microsoft.Carbon |
+> | Microsoft.Carbon/operations/read | read operations |
+ ## Microsoft.Consumption Programmatic access to cost and usage data for your Azure resources.
role-based-access-control Resource Provider Operations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/resource-provider-operations.md
Click the resource provider name in the following list to see the list of permis
> | | | | > | [Microsoft.ContainerInstance](./permissions/containers.md#microsoftcontainerinstance) | Easily run containers on Azure without managing servers. | [Container Instances](/azure/container-instances/) | > | [Microsoft.ContainerRegistry](./permissions/containers.md#microsoftcontainerregistry) | Store and manage container images across all types of Azure deployments. | [Container Registry](/azure/container-registry/) |
-> | [Microsoft.ContainerService](./permissions/containers.md#microsoftcontainerservice) | Accelerate your containerized application development without compromising security. | [Azure Kubernetes Service (AKS)](/azure/aks/) |
+> | [Microsoft.ContainerService](./permissions/containers.md#microsoftcontainerservice) | Accelerate your containerized application development without compromising security. | [Azure Kubernetes Service (AKS)](/azure/aks/intro-kubernetes) |
> | [Microsoft.RedHatOpenShift](./permissions/containers.md#microsoftredhatopenshift) | | [Azure Red Hat OpenShift](/azure/openshift/) | <a name='microsoftdocumentdb'></a>
Click the resource provider name in the following list to see the list of permis
> | [Microsoft.Automation](./permissions/management-and-governance.md#microsoftautomation) | Simplify cloud management with process automation. | [Automation](/azure/automation/) | > | [Microsoft.Billing](./permissions/management-and-governance.md#microsoftbilling) | Manage your subscriptions and see usage and billing. | [Cost Management + Billing](/azure/cost-management-billing/) | > | [Microsoft.Blueprint](./permissions/management-and-governance.md#microsoftblueprint) | Enabling quick, repeatable creation of governed environments. | [Azure Blueprints](/azure/governance/blueprints/) |
+> | [Microsoft.Carbon](./permissions/management-and-governance.md#microsoftcarbon) | | [Azure carbon optimization](/azure/carbon-optimization/overview) |
> | [Microsoft.Consumption](./permissions/management-and-governance.md#microsoftconsumption) | Programmatic access to cost and usage data for your Azure resources. | [Cost Management](/azure/cost-management-billing/) | > | [Microsoft.CostManagement](./permissions/management-and-governance.md#microsoftcostmanagement) | Optimize what you spend on the cloud, while maximizing cloud potential. | [Cost Management](/azure/cost-management-billing/) | > | [Microsoft.Features](./permissions/management-and-governance.md#microsoftfeatures) | | [Azure Resource Manager](/azure/azure-resource-manager/) |
storage Storage Ref Azcopy Bench https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-ref-azcopy-bench.md
description: This article provides reference information for the azcopy bench co
Previously updated : 05/26/2022 Last updated : 03/29/2024
Run an upload that doesn't delete the transferred files. (These files can then s
`--number-of-folders` (uint) If larger than 0, create folders to divide up the data.
+`--put-blob-size-mb` Use this size (specified in MiB) as a threshold to determine whether to upload a blob as a single PUT request when uploading to Azure Storage. The default value is automatically calculated based on file size. Decimal fractions are allowed (for example, 0.25).
+ `--put-md5` Create an MD5 hash of each file, and save the hash as the Content-MD5 property of the destination blob/file. (By default the hash is NOT created.) Identical to the same-named parameter in the copy command `--size-per-file` (string) Size of each auto-generated data file. Must be a number immediately followed by K, M or G. E.g. 12k or 200G (default "250M")
storage Storage Ref Azcopy Copy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-ref-azcopy-copy.md
description: This article provides reference information for the azcopy copy com
Previously updated : 10/31/2023 Last updated : 03/29/2024
Copy a subset of buckets by using a wildcard symbol (*) in the bucket name from
`--preserve-smb-info` For SMB-aware locations, flag is set to true by default. Preserves SMB property info (last write time, creation time, attribute bits) between SMB-aware resources (Windows and Azure Files). Only the attribute bits supported by Azure Files are transferred; any others are ignored. This flag applies to both files and folders, unless a file-only filter is specified (for example, include-pattern). The info transferred for folders is the same as that for files, except for `Last Write Time` which is never preserved for folders. (default true)
+`--preserve-symlinks` If enabled, symlink destinations are preserved as the blob content, rather than uploading the file or folder on the other end of the symlink.
+
+`--put-blob-size-mb` Use this size (specified in MiB) as a threshold to determine whether to upload a blob as a single PUT request when uploading to Azure Storage. The default value is automatically calculated based on file size. Decimal fractions are allowed (for example, 0.25).
+ `--put-md5` Create an MD5 hash of each file, and save the hash as the Content-MD5 property of the destination blob or file. (By default the hash is NOT created.) Only available when uploading. `--recursive` Look into subdirectories recursively when uploading from local file system.
storage Storage Ref Azcopy Sync https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-ref-azcopy-sync.md
description: This article provides reference information for the azcopy sync com
Previously updated : 02/09/2023 Last updated : 03/29/2024
Note: if include and exclude flags are used together, only files matching the in
`--mirror-mode` Disable last-modified-time based comparison and overwrites the conflicting files and blobs at the destination if this flag is set to true. Default is false
+`--put-blob-size-mb` Use this size (specified in MiB) as a threshold to determine whether to upload a blob as a single PUT request when uploading to Azure Storage. The default value is automatically calculated based on file size. Decimal fractions are allowed (for example, 0.25).
+ `--preserve-permissions` False by default. Preserves ACLs between aware resources (Windows and Azure Files, or ADLS Gen 2 to ADLS Gen 2). For Hierarchical Namespace accounts, you'll need a container SAS or OAuth token with Modify Ownership and Modify Permissions permissions. For downloads, you'll also need the `--backup` flag to restore permissions where the new Owner won't be the user running AzCopy. This flag applies to both files and folders, unless a file-only filter is specified (for example, include-pattern). `--preserve-smb-info` For SMB-aware locations, flag will be set to true by default. Preserves SMB property info (last write time, creation time, attribute bits) between SMB-aware resources (Azure Files). This flag applies to both files and folders, unless a file-only filter is specified (for example, include-pattern). The info transferred for folders is the same as that for files, except for Last Write Time that isn't preserved for folders. (default true)
virtual-machines Network Watcher Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/extensions/network-watcher-linux.md
Title: Network Watcher Agent VM extension - Linux
-description: Deploy the Network Watcher Agent virtual machine extension on Linux virtual machines.
+ Title: Manage Network Watcher Agent VM extension - Linux
+description: Learn about the Network Watcher Agent virtual machine extension for Linux virtual machines and how to install and uninstall it.
- Previously updated : 03/26/2024-+ Last updated : 03/31/2024++
+#CustomerIntent: As an Azure administrator, I want to install the Network Watcher Agent VM extension and manage it so that I can use Network Watcher features to diagnose and monitor my Linux virtual machines (VMs).
-# Network Watcher Agent virtual machine extension for Linux
+# Manage Network Watcher Agent virtual machine extension for Linux
> [!CAUTION]
-> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
+> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly. For more information, see the [CentOS End Of Life guidance](../workloads/centos/centos-end-of-life.md).
-[Azure Network Watcher](../../network-watcher/network-watcher-monitoring-overview.md) is a network performance monitoring, diagnostic, and analytics service that allows monitoring for Azure networks. The Network Watcher Agent virtual machine extension is a requirement for some of the Network Watcher features on Azure virtual machines (VMs), such as capturing network traffic on demand, and other advanced functionality.
+The Network Watcher Agent virtual machine extension is a requirement for some of the Azure Network Watcher features that capture network traffic to diagnose and monitor Azure virtual machines (VMs). For more information, see [What is Azure Network Watcher?](../../network-watcher/network-watcher-overview.md)
-This article details the supported platforms and deployment options for the Network Watcher Agent VM extension for Linux. Installation of the agent doesn't disrupt, or require a reboot of the virtual machine. You can install the extension on virtual machines that you deploy. If the virtual machine is deployed by an Azure service, check the documentation for the service to determine whether or not it permits installing extensions in the virtual machine.
+In this article, you learn how to install and uninstall the Network Watcher Agent for Linux. Installing the agent doesn't disrupt the virtual machine or require a reboot. If the virtual machine is deployed by an Azure service, check the documentation of the service to determine whether it permits installing extensions on the virtual machine.
> [!NOTE] > Network Watcher Agent extension is not supported on AKS clusters. ## Prerequisites
-### Operating system
+# [**Portal**](#tab/portal)
+
+- An Azure Linux virtual machine (VM). For more information, see [Supported Linux distributions and versions](#supported-operating-systems).
+
+- Outbound TCP connectivity to `169.254.169.254` over `port 80` and `168.63.129.16` over `port 8037`. The agent uses these IP addresses to communicate with the Azure platform.
+
+- Internet connectivity: Network Watcher Agent requires internet connectivity for some features to properly work. For example, it requires connectivity to your storage account to upload packet captures. For more information, see [Packet capture overview](../../network-watcher/packet-capture-overview.md).
+
+# [**PowerShell**](#tab/powershell)
+
+- An Azure Linux virtual machine (VM). For more information, see [Supported Linux distributions and versions](#supported-operating-systems).
+
+- Outbound TCP connectivity to `169.254.169.254` over `port 80` and `168.63.129.16` over `port 8037`. The agent uses these IP addresses to communicate with the Azure platform.
+
+- Internet connectivity: Network Watcher Agent requires internet connectivity for some features to properly work. For example, it requires connectivity to your storage account to upload packet captures. For more information, see [Packet capture overview](../../network-watcher/packet-capture-overview.md).
+
+- Azure Cloud Shell or Azure PowerShell.
+
+ The steps in this article run the Azure PowerShell cmdlets interactively in [Azure Cloud Shell](/azure/cloud-shell/overview). To run the commands in the Cloud Shell, select **Open Cloud Shell** at the upper-right corner of a code block. Select **Copy** to copy the code and then paste it into Cloud Shell to run it. You can also run the Cloud Shell from within the Azure portal.
+
+ You can also [install Azure PowerShell locally](/powershell/azure/install-azure-powershell) to run the cmdlets. This article requires the Azure PowerShell `Az` module. To find the installed version, run `Get-Module -ListAvailable Az`. If you run PowerShell locally, sign in to Azure using the [Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount) cmdlet.
+
+# [**Azure CLI**](#tab/cli)
+
+- An Azure Linux virtual machine (VM). For more information, see [Supported Linux distributions and versions](#supported-operating-systems).
+
+- Outbound TCP connectivity to `169.254.169.254` over `port 80` and `168.63.129.16` over `port 8037`. The agent uses these IP addresses to communicate with the Azure platform.
+
+- Internet connectivity: Network Watcher Agent requires internet connectivity for some features to properly work. For example, it requires connectivity to your storage account to upload packet captures. For more information, see [Packet capture overview](../../network-watcher/packet-capture-overview.md).
+
+- Azure Cloud Shell or Azure CLI.
+
+ The steps in this article run the Azure CLI commands interactively in [Azure Cloud Shell](/azure/cloud-shell/overview). To run the commands in the Cloud Shell, select **Open Cloud Shell** at the upper-right corner of a code block. Select **Copy** to copy the code, and paste it into Cloud Shell to run it. You can also run the Cloud Shell from within the Azure portal.
+
+ You can also [install Azure CLI locally](/cli/azure/install-azure-cli) to run the commands. If you run Azure CLI locally, sign in to Azure using the [az login](/cli/azure/reference-index#az-login) command.
+
+# [**Resource Manager**](#tab/arm)
+
+- An Azure Linux virtual machine (VM). For more information, see [Supported Linux distributions and versions](#supported-operating-systems).
+
+- Outbound TCP connectivity to `169.254.169.254` over `port 80` and `168.63.129.16` over `port 8037`. The agent uses these IP addresses to communicate with the Azure platform.
+
+- Internet connectivity: Network Watcher Agent requires internet connectivity for some features to properly work. For example, it requires connectivity to your storage account to upload packet captures. For more information, see [Packet capture overview](../../network-watcher/packet-capture-overview.md).
+
+- Azure PowerShell or Azure CLI installed locally to deploy the template.
+
+ - You can [install Azure PowerShell locally](/powershell/azure/install-azure-powershell) to run the cmdlets. Use [Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount) cmdlet to sign in to Azure.
-The Network Watcher Agent extension can be configured for the following Linux distributions:
+ - You can [install Azure CLI locally](/cli/azure/install-azure-cli) to run the commands. Use [az login](/cli/azure/reference-index#az-login) command to sign in to Azure.
+
+---
+
+## Supported operating systems
+
+Network Watcher Agent extension for Linux can be installed on the following Linux distributions:
| Distribution | Version |
| --- | --- |
| Oracle Linux | 6.10, 7 and 8+ |
| Red Hat Enterprise Linux (RHEL) | 6.10, 7, 8 and 9.2 |
| Rocky Linux | 9.1 |
-| SUSE Linux Enterprise Server (SLES) | 12 and 15 (SP2, SP3 and SP4) |
+| SUSE Linux Enterprise Server (SLES) | 12 and 15 (SP2, SP3, and SP4) |
| Ubuntu | 16+ |

> [!NOTE]
> - Red Hat Enterprise Linux 6.X and Oracle Linux 6.x have reached their end-of-life (EOL). RHEL 6.10 has available [extended life cycle (ELS) support](https://www.redhat.com/en/resources/els-datasheet) through [June 30, 2024](https://access.redhat.com/product-life-cycles/?product=Red%20Hat%20Enterprise%20Linux,OpenShift%20Container%20Platform%204).
> - Oracle Linux version 6.10 has available [ELS support](https://www.oracle.com/a/ocom/docs/linux/oracle-linux-extended-support-ds.pdf) through [July 1, 2024](https://www.oracle.com/a/ocom/docs/elsp-lifetime-069338.pdf).
-### Internet connectivity
-
-Some of the Network Watcher Agent functionality requires that the virtual machine is connected to the Internet. Without the ability to establish outgoing connections, some of the Network Watcher Agent features may malfunction, or become unavailable. For more information about Network Watcher functionality that requires the agent, see the [Network Watcher documentation](../../network-watcher/index.yml).
-
## Extension schema

The following JSON shows the schema for the Network Watcher Agent extension. The extension doesn't require or support any user-supplied settings; it relies on its default configuration.

```json
{
- "type": "Microsoft.Compute/virtualMachines/extensions",
"name": "[concat(parameters('vmName'), '/AzureNetworkWatcherExtension')]",
- "apiVersion": "[variables('apiVersion')]",
+ "type": "Microsoft.Compute/virtualMachines/extensions",
+ "apiVersion": "2023-03-01",
"location": "[resourceGroup().location]", "dependsOn": [
- "[concat('Microsoft.Compute/virtualMachines/', variables('vmName'))]"
+ "[concat('Microsoft.Compute/virtualMachines/', parameters('vmName'))]"
], "properties": { "autoUpgradeMinorVersion": true,
  "publisher": "Microsoft.Azure.NetworkWatcher",
  "type": "NetworkWatcherAgentLinux",
  "typeHandlerVersion": "1.4"
}
}
+```
+
+## List installed extensions
+
+# [**Portal**](#tab/portal)
+
+From the virtual machine page in the Azure portal, you can view the installed extension by following these steps:
+
+1. Under **Settings**, select **Extensions + applications**.
+
+1. In the **Extensions** tab, you can see all installed extensions on the virtual machine. If the list is long, you can use the search box to filter the list.
+
+ :::image type="content" source="./media/network-watcher/list-vm-extensions.png" alt-text="Screenshot that shows how to view installed extensions on a VM in the Azure portal." lightbox="./media/network-watcher/list-vm-extensions.png":::
+
+# [**PowerShell**](#tab/powershell)
+
+Use the [Get-AzVMExtension](/powershell/module/az.compute/get-azvmextension) cmdlet to list all installed extensions on the virtual machine:
+
+```azurepowershell-interactive
+# List the installed extensions on the virtual machine.
+Get-AzVMExtension -ResourceGroupName 'myResourceGroup' -VMName 'myVM' | format-table Name, Publisher, ExtensionType, AutoUpgradeMinorVersion, EnableAutomaticUpgrade
```
-### Property values
+The output of the cmdlet lists the installed extensions:
-| Name | Value / Example |
-| - | - |
-| apiVersion | 2023-03-01 |
-| publisher | Microsoft.Azure.NetworkWatcher |
-| type | NetworkWatcherAgentLinux |
-| typeHandlerVersion | 1.4 |
+```output
+Name                         Publisher                      ExtensionType            AutoUpgradeMinorVersion EnableAutomaticUpgrade
+----                         ---------                      -------------            ----------------------- ----------------------
+AzureNetworkWatcherExtension Microsoft.Azure.NetworkWatcher NetworkWatcherAgentLinux True                    True
+```
-## Template deployment
+# [**Azure CLI**](#tab/cli)
-You can deploy Azure VM extensions with an Azure Resource Manager template (ARM template) using the previous JSON [schema](#extension-schema).
+Use the [az vm extension list](/cli/azure/vm/extension#az-vm-extension-list) command to list all installed extensions on the virtual machine:
-## Azure classic CLI deployment
+```azurecli
+# List the installed extensions on the virtual machine.
+az vm extension list --resource-group 'myResourceGroup' --vm-name 'myVM' --out table
+```
+
+The output of the command lists the installed extensions:
+
+```output
+Name                          ProvisioningState    Publisher                       Version    AutoUpgradeMinorVersion
+----------------------------  -------------------  ------------------------------  ---------  -------------------------
+AzureNetworkWatcherExtension  Succeeded            Microsoft.Azure.NetworkWatcher  1.4        True
+```
+# [**Resource Manager**](#tab/arm)
-The following example deploys the Network Watcher Agent VM extension to an existing VM deployed through the classic deployment model:
+N/A
-```console
-azure config mode asm
-azure vm extension set myVM1 NetworkWatcherAgentLinux Microsoft.Azure.NetworkWatcher 1.4
+
+---
+## Install Network Watcher Agent VM extension
+
+# [**Portal**](#tab/portal)
+
+From the virtual machine page in the Azure portal, you can install the Network Watcher Agent VM extension by following these steps:
+
+1. Under **Settings**, select **Extensions + applications**.
+
+1. Select **+ Add**. If the extension is already installed, you can see it in the list of extensions.
+
+ :::image type="content" source="./media/network-watcher/vm-extensions.png" alt-text="Screenshot that shows the VM's extensions page in the Azure portal." lightbox="./media/network-watcher/vm-extensions.png":::
+
+1. In the **Install an Extension** search box, enter *Network Watcher Agent for Linux*. Select the extension from the list, and then select **Next**.
+
+ :::image type="content" source="./media/network-watcher/install-extension-linux.png" alt-text="Screenshot that shows how to install Network Watcher Agent for Linux in the Azure portal." lightbox="./media/network-watcher/install-extension-linux.png":::
+
+1. Select **Review + create** and then select **Create**.
+
+# [**PowerShell**](#tab/powershell)
+
+Use the [Set-AzVMExtension](/powershell/module/az.compute/set-azvmextension) cmdlet to install the Network Watcher Agent VM extension on the virtual machine:
+
+```azurepowershell-interactive
+# Install Network Watcher Agent for Linux on the virtual machine.
+Set-AzVMExtension -Name 'AzureNetworkWatcherExtension' -Publisher 'Microsoft.Azure.NetworkWatcher' -ExtensionType 'NetworkWatcherAgentLinux' -EnableAutomaticUpgrade 1 -TypeHandlerVersion '1.4' -ResourceGroupName 'myResourceGroup' -VMName 'myVM'
+```
+
+Once the installation completes successfully, you see the following output:
+
+```output
+RequestId IsSuccessStatusCode StatusCode ReasonPhrase
+--------- ------------------- ---------- ------------
+                         True         OK
```
-## Azure CLI deployment
+# [**Azure CLI**](#tab/cli)
-The following example deploys the Network Watcher Agent VM extension to an existing VM deployed through Resource
+Use the [az vm extension set](/cli/azure/vm/extension#az-vm-extension-set) command to install the Network Watcher Agent VM extension on the virtual machine:
```azurecli
-az vm extension set --resource-group myResourceGroup1 --vm-name myVM1 --name NetworkWatcherAgentLinux --publisher Microsoft.Azure.NetworkWatcher --version 1.4
+# Install Network Watcher Agent for Linux on the virtual machine.
+az vm extension set --name 'NetworkWatcherAgentLinux' --extension-instance-name 'AzureNetworkWatcherExtension' --publisher 'Microsoft.Azure.NetworkWatcher' --enable-auto-upgrade 'true' --version '1.4' --resource-group 'myResourceGroup' --vm-name 'myVM'
```
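To confirm the extension provisioned successfully, you can query its state afterward; a sketch using the same placeholder resource names:

```azurecli
# Check the provisioning state of the installed extension.
az vm extension show --name 'AzureNetworkWatcherExtension' --resource-group 'myResourceGroup' --vm-name 'myVM' --query 'provisioningState' --out tsv
```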
-## Troubleshooting and support
+# [**Resource Manager**](#tab/arm)
-### Troubleshooting
+Use the following Azure Resource Manager template (ARM template) to install Network Watcher Agent VM extension on a Linux virtual machine:
-You can retrieve data about the state of extension deployments using either the Azure portal or Azure CLI.
+```json
+{
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {
+ "vmName": {
+ "type": "string"
+ }
+ },
+ "variables": {},
+ "resources": [
+ {
+ "name": "[parameters('vmName')]",
+ "type": "Microsoft.Compute/virtualMachines",
+ "apiVersion": "2023-03-01",
+ "location": "[resourceGroup().location]",
+ "properties": {
+ }
+ },
+ {
+ "name": "[concat(parameters('vmName'), '/AzureNetworkWatcherExtension')]",
+ "type": "Microsoft.Compute/virtualMachines/extensions",
+ "apiVersion": "2023-03-01",
+ "location": "[resourceGroup().location]",
+ "dependsOn": [
+ "[concat('Microsoft.Compute/virtualMachines/', parameters('vmName'))]"
+ ],
+ "properties": {
+ "autoUpgradeMinorVersion": true,
+ "publisher": "Microsoft.Azure.NetworkWatcher",
+ "type": "NetworkWatcherAgentLinux",
+ "typeHandlerVersion": "1.4"
+ }
+ }
+ ],
+ "outputs": {}
+}
+```
-The following example shows the deployment state of the NetworkWatcherAgentLinux extension for a VM deployed through Resource Manager, using the Azure CLI:
+You can use either Azure PowerShell or Azure CLI to deploy the Resource Manager template:
+
+```azurepowershell
+# Deploy the JSON template file using Azure PowerShell.
+New-AzResourceGroupDeployment -ResourceGroupName 'myResourceGroup' -TemplateFile 'agent.json'
+```
```azurecli
-az vm extension show --name NetworkWatcherAgentLinux --resource-group myResourceGroup1 --vm-name myVM1
+# Deploy the JSON template file using the Azure CLI.
+az deployment group create --resource-group 'myResourceGroup' --template-file 'agent.json'
```
+
+---
+## Uninstall Network Watcher Agent VM extension
+
+# [**Portal**](#tab/portal)
+
+From the virtual machine page in the Azure portal, you can uninstall the Network Watcher Agent VM extension by following these steps:
+
+1. Under **Settings**, select **Extensions + applications**.
+
+1. Select **AzureNetworkWatcherExtension** from the list of extensions, and then select **Uninstall**.
+
+ :::image type="content" source="./media/network-watcher/uninstall-extension-linux.png" alt-text="Screenshot that shows how to uninstall Network Watcher Agent for Linux in the Azure portal." lightbox="./media/network-watcher/uninstall-extension-linux.png":::
+
+ > [!NOTE]
+ > The Network Watcher Agent VM extension might appear in the list under a name different from **AzureNetworkWatcherExtension**.
+
+# [**PowerShell**](#tab/powershell)
+
+Use the [Remove-AzVMExtension](/powershell/module/az.compute/remove-azvmextension) cmdlet to remove the Network Watcher Agent VM extension from the virtual machine:
+
+```azurepowershell-interactive
+# Uninstall Network Watcher Agent VM extension.
+Remove-AzVMExtension -Name 'AzureNetworkWatcherExtension' -ResourceGroupName 'myResourceGroup' -VMName 'myVM'
+```
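To verify the removal, you can list the remaining extensions; the agent should no longer appear:

```azurepowershell-interactive
# Confirm that the Network Watcher Agent extension is gone.
Get-AzVMExtension -ResourceGroupName 'myResourceGroup' -VMName 'myVM' | Format-Table Name, ExtensionType
```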
+
+# [**Azure CLI**](#tab/cli)
+
+Use the [az vm extension delete](/cli/azure/vm/extension#az-vm-extension-delete) command to remove the Network Watcher Agent VM extension from the virtual machine:
+
+```azurecli-interactive
+# Uninstall Network Watcher Agent VM extension.
+az vm extension delete --name 'AzureNetworkWatcherExtension' --resource-group 'myResourceGroup' --vm-name 'myVM'
+```
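Similarly, you can confirm the removal by listing the installed extensions again:

```azurecli-interactive
# Confirm that the Network Watcher Agent extension is gone.
az vm extension list --resource-group 'myResourceGroup' --vm-name 'myVM' --out table
```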
+
+# [**Resource Manager**](#tab/arm)
+
+N/A
+
+---
+
+## Frequently asked questions (FAQ)
+
+To get answers to most frequently asked questions about Network Watcher Agent, see [Network Watcher Agent FAQ](../../network-watcher/frequently-asked-questions.yml#network-watcher-agent).
## Related content

- [Update Azure Network Watcher extension to the latest version](network-watcher-update.md).
virtual-machines Network Watcher Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/extensions/network-watcher-windows.md
Title: Manage Network Watcher Agent VM extension - Windows
+ Title: Manage Network Watcher Agent VM extension - Windows
description: Learn about the Network Watcher Agent virtual machine extension on Windows virtual machines and how to deploy it.
- Previously updated : 03/29/2024
+ Last updated : 03/31/2024
+#CustomerIntent: As an Azure administrator, I want to learn about Network Watcher Agent VM extension so that I can use Network watcher features to diagnose and monitor my virtual machines (VMs).

# Manage Network Watcher Agent virtual machine extension for Windows
-[Azure Network Watcher](../../network-watcher/network-watcher-monitoring-overview.md) is a network performance monitoring, diagnostic, and analytics service that allows monitoring for Azure networks. The Network Watcher Agent virtual machine extension is a requirement for some of the Network Watcher features on Azure virtual machines (VMs). For more information, see [Network Watcher Agent FAQ](../../network-watcher/frequently-asked-questions.yml#network-watcher-agent).
+The Network Watcher Agent virtual machine extension is a requirement for some Azure Network Watcher features that capture network traffic to diagnose and monitor Azure virtual machines (VMs). For more information, see [What is Azure Network Watcher?](../../network-watcher/network-watcher-overview.md)
-In this article, you learn about the supported platforms and deployment options for the Network Watcher Agent VM extension for Windows. Installation of the agent doesn't disrupt, or require a reboot of the virtual machine. You can install the extension on virtual machines that you deploy. If the virtual machine is deployed by an Azure service, check the documentation for the service to determine whether or not it permits installing extensions in the virtual machine.
+In this article, you learn how to install and uninstall Network Watcher Agent for Windows. Installation of the agent doesn't disrupt or require a reboot of the virtual machine. If the virtual machine is deployed by an Azure service, check the documentation of the service to determine whether or not it permits installing extensions in the virtual machine.
## Prerequisites
In this article, you learn about the supported platforms and deployment options
- An Azure Windows virtual machine (VM). For more information, see [Supported Windows versions](#supported-operating-systems).

-- Internet connectivity: some of the Network Watcher Agent functionality requires that the virtual machine is connected to the Internet. For example, without the ability to establish outgoing connections, the Network Watcher Agent can't upload packet captures to your storage account. For more information, see [Packet capture overview](../../network-watcher/packet-capture-overview.md).
+- Outbound TCP connectivity to `169.254.169.254` over `port 80` and `168.63.129.16` over `port 8037`. The agent uses these IP addresses to communicate with the Azure platform.
+
+- Internet connectivity: Network Watcher Agent requires internet connectivity for some features to properly work. For example, it requires connectivity to your storage account to upload packet captures. For more information, see [Packet capture overview](../../network-watcher/packet-capture-overview.md).
# [**PowerShell**](#tab/powershell)

- An Azure Windows virtual machine (VM). For more information, see [Supported Windows versions](#supported-operating-systems).

-- Internet connectivity: some of the Network Watcher Agent functionality requires that the virtual machine is connected to the Internet. For example, without the ability to establish outgoing connections, the Network Watcher Agent can't upload packet captures to your storage account. For more information, see [Packet capture overview](../../network-watcher/packet-capture-overview.md).
+- Outbound TCP connectivity to `169.254.169.254` over `port 80` and `168.63.129.16` over `port 8037`. The agent uses these IP addresses to communicate with the Azure platform.
+
+- Internet connectivity: Network Watcher Agent requires internet connectivity for some features to properly work. For example, it requires connectivity to your storage account to upload packet captures. For more information, see [Packet capture overview](../../network-watcher/packet-capture-overview.md).
- Azure Cloud Shell or Azure PowerShell.
In this article, you learn about the supported platforms and deployment options
- An Azure Windows virtual machine (VM). For more information, see [Supported Windows versions](#supported-operating-systems).

-- Internet connectivity: some of the Network Watcher Agent functionality requires that the virtual machine is connected to the Internet. For example, without the ability to establish outgoing connections, the Network Watcher Agent can't upload packet captures to your storage account. For more information, see [Packet capture overview](../../network-watcher/packet-capture-overview.md).
+- Outbound TCP connectivity to `169.254.169.254` over `port 80` and `168.63.129.16` over `port 8037`. The agent uses these IP addresses to communicate with the Azure platform.
+
+- Internet connectivity: Network Watcher Agent requires internet connectivity for some features to properly work. For example, it requires connectivity to your storage account to upload packet captures. For more information, see [Packet capture overview](../../network-watcher/packet-capture-overview.md).
- Azure Cloud Shell or Azure CLI.
In this article, you learn about the supported platforms and deployment options
- An Azure Windows virtual machine (VM). For more information, see [Supported Windows versions](#supported-operating-systems).

-- Internet connectivity: some of the Network Watcher Agent functionality requires that the virtual machine is connected to the Internet. For example, without the ability to establish outgoing connections, the Network Watcher Agent can't upload packet captures to your storage account. For more information, see [Packet capture overview](../../network-watcher/packet-capture-overview.md).
+- Outbound TCP connectivity to `169.254.169.254` over `port 80` and `168.63.129.16` over `port 8037`. The agent uses these IP addresses to communicate with the Azure platform.
+
+- Internet connectivity: Network Watcher Agent requires internet connectivity for some features to properly work. For example, it requires connectivity to your storage account to upload packet captures. For more information, see [Packet capture overview](../../network-watcher/packet-capture-overview.md).
- Azure PowerShell or Azure CLI installed locally to deploy the template.
Use [Get-AzVMExtension](/powershell/module/az.compute/get-azvmextension) cmdlet
```azurepowershell-interactive
# List the installed extensions on the virtual machine.
-Get-AzVMExtension -VMName 'myVM' -ResourceGroupName 'myResourceGroup' | format-table Name, Publisher, ExtensionType, EnableAutomaticUpgrade
+Get-AzVMExtension -ResourceGroupName 'myResourceGroup' -VMName 'myVM' | format-table Name, Publisher, ExtensionType, AutoUpgradeMinorVersion, EnableAutomaticUpgrade
```

The output of the cmdlet lists the installed extensions:

```output
-Name                         Publisher                      ExtensionType              EnableAutomaticUpgrade
-----                         ---------                      -------------              ----------------------
-AzureNetworkWatcherExtension Microsoft.Azure.NetworkWatcher NetworkWatcherAgentWindows True
-AzurePolicyforWindows        Microsoft.GuestConfiguration   ConfigurationforWindows    True
+Name                         Publisher                      ExtensionType              AutoUpgradeMinorVersion EnableAutomaticUpgrade
+----                         ---------                      -------------              ----------------------- ----------------------
+AzureNetworkWatcherExtension Microsoft.Azure.NetworkWatcher NetworkWatcherAgentWindows True                    True
```
The output of the command lists the installed extensions:
Name                          ProvisioningState    Publisher                       Version    AutoUpgradeMinorVersion
----------------------------  -------------------  ------------------------------  ---------  -------------------------
AzureNetworkWatcherExtension  Succeeded            Microsoft.Azure.NetworkWatcher  1.4        True
-AzurePolicyforWindows Succeeded Microsoft.GuestConfiguration 1.1 True
```

# [**Resource Manager**](#tab/arm)
New-AzResourceGroupDeployment -ResourceGroupName 'myResourceGroup' -TemplateFile
```azurecli
# Deploy the JSON template file using the Azure CLI.
-az deployment group create --resource-group 'myResourceGroup' --template-file
+az deployment group create --resource-group 'myResourceGroup' --template-file 'agent.json'
```
From the virtual machine page in the Azure portal, you can uninstall the Network
:::image type="content" source="./media/network-watcher/uninstall-extension-windows.png" alt-text="Screenshot that shows how to uninstall Network Watcher Agent for Windows in the Azure portal." lightbox="./media/network-watcher/uninstall-extension-windows.png":::

> [!NOTE]
- > In the list of extensions, you might see Network Watcher Agent VM extension named differently than **AzureNetworkWatcherExtension**.
+ > The Network Watcher Agent VM extension might appear in the list under a name different from **AzureNetworkWatcherExtension**.
# [**PowerShell**](#tab/powershell)
N/A
+## Frequently asked questions (FAQ)
+
+To get answers to most frequently asked questions about Network Watcher Agent, see [Network Watcher Agent FAQ](../../network-watcher/frequently-asked-questions.yml#network-watcher-agent).
## Related content
virtual-wan Monitor Virtual Wan Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/monitor-virtual-wan-reference.md
You can review per peer and instance metrics by selecting **Apply splitting** an
| Metric | Description |
| --- | --- |
| **Gateway Bandwidth** | Average site-to-site aggregate bandwidth of a gateway in bytes per second.|
+| **Gateway Inbound Flows** | Number of distinct 5-tuple flows (protocol, local IP address, remote IP address, local port, and remote port) flowing into a VPN Gateway. Limit is 250k flows.|
+| **Gateway Outbound Flows** | Number of distinct 5-tuple flows (protocol, local IP address, remote IP address, local port, and remote port) flowing out of a VPN Gateway. Limit is 250k flows.|
| **Tunnel Bandwidth** | Average bandwidth of a tunnel in bytes per second.|
| **Tunnel Egress Bytes** | Outgoing bytes of a tunnel. |
| **Tunnel Egress Packets** | Outgoing packet count of a tunnel. |
| **Tunnel Ingress Bytes** | Incoming bytes of a tunnel.|
| **Tunnel Ingress Packet** | Incoming packet count of a tunnel.|
| **Tunnel Peak PPS** | Number of packets per second per link connection in the last minute.|
-| **Tunnel Flow Count** | Number of distinct flows created per link connection.|
+| **Tunnel Flow Count** | Number of distinct 3-tuple (protocol, local IP address, remote IP address) flows created per link connection.|
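If it helps to see how such a metric is consumed, the values in this table can be retrieved through Azure Monitor. A hedged sketch with the Azure CLI, where the resource ID is a placeholder and the internal metric name (`TunnelTotalFlowCount` here) is an assumption that may differ from the display name:

```azurecli
# Query the tunnel flow count metric for a Virtual WAN VPN gateway (resource ID and metric name are assumptions).
az monitor metrics list \
  --resource "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Network/vpnGateways/<gateway-name>" \
  --metric "TunnelTotalFlowCount" \
  --interval PT5M \
  --aggregation Maximum
```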
### <a name="p2s-metrics"></a>Point-to-site VPN gateway metrics
vpn-gateway Monitor Vpn Gateway Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/monitor-vpn-gateway-reference.md
Metrics in Azure Monitor are numerical values that describe some aspect of a sys
| **BGP Peer Status** | Count | 5 minutes | Average BGP connectivity status per peer and per instance. |
| **BGP Routes Advertised** | Count | 5 minutes | Number of routes advertised per peer and per instance. |
| **BGP Routes Learned** | Count | 5 minutes | Number of routes learned per peer and per instance. |
+| **Gateway Inbound Flows** | Count | 5 minutes | Number of distinct 5-tuple flows flowing into a VPN Gateway. Limit is 250k flows. |
+| **Gateway Outbound Flows** | Count | 5 minutes | Number of distinct 5-tuple flows flowing out of a VPN Gateway. Limit is 250k flows. |
| **Gateway P2S Bandwidth** | Bytes/s | 1 minute | Average combined bandwidth utilization of all point-to-site connections on the gateway. |
| **Gateway S2S Bandwidth** | Bytes/s | 5 minutes | Average combined bandwidth utilization of all site-to-site connections on the gateway. |
| **P2S Connection Count** | Count | 1 minute | Count of point-to-site connections on the gateway. |
Metrics in Azure Monitor are numerical values that describe some aspect of a sys
| **Tunnel MMSA Count** | Count | 5 minutes | Number of main mode security associations present. |
| **Tunnel Peak PPS** | Count | 5 minutes | Max number of packets per second per tunnel. |
| **Tunnel QMSA Count** | Count | 5 minutes | Number of quick mode security associations present. |
-| **Tunnel Total Flow Count** | Count | 5 minutes | Number of distinct flows created per tunnel. |
+| **Tunnel Total Flow Count** | Count | 5 minutes | Number of distinct 3-tuple flows created per tunnel. |
| **User Vpn Route Count** | Count | 5 minutes | Number of user VPN routes configured on the VPN Gateway. |
-| **VNet Address Prefix Count** | Count | 5 minutes | Number of virtual network address prefixes that are used/advertised by the gateway. |
+| **VNet Address Prefix Count** | Count | 5 minutes | Number of virtual network address prefixes that are used/advertised by the gateway. |
## Resource logs
The following resource logs are available in Azure:
## Next steps

* For more information about VPN Gateway monitoring, see [Monitoring Azure VPN Gateway](monitor-vpn-gateway.md).
-* To learn more about metrics in Azure Monitor, see [Metrics in Azure Monitor](../azure-monitor/essentials/data-platform-metrics.md).
+* To learn more about metrics in Azure Monitor, see [Metrics in Azure Monitor](../azure-monitor/essentials/data-platform-metrics.md).
vpn-gateway Vpn Gateway Howto Setup Alerts Virtual Network Gateway Metric https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/vpn-gateway-howto-setup-alerts-virtual-network-gateway-metric.md
For steps, see [Tutorial: Create a metric alert for an Azure resource](../azure-
| **BGP Peer Status** | Count | 5 minutes | Average BGP connectivity status per peer and per instance. |
| **BGP Routes Advertised** | Count | 5 minutes | Number of routes advertised per peer and per instance. |
| **BGP Routes Learned** | Count | 5 minutes | Number of routes learned per peer and per instance. |
+| **Gateway Inbound Flows** | Count | 5 minutes | Number of distinct 5-tuple flows flowing into a VPN Gateway. Limit is 250k flows. |
+| **Gateway Outbound Flows** | Count | 5 minutes | Number of distinct 5-tuple flows flowing out of a VPN Gateway. Limit is 250k flows. |
| **Gateway P2S Bandwidth** | Bytes/s | 1 minute | Average combined bandwidth utilization of all point-to-site connections on the gateway. |
| **Gateway S2S Bandwidth** | Bytes/s | 5 minutes | Average combined bandwidth utilization of all site-to-site connections on the gateway. |
| **P2S Connection Count** | Count | 1 minute | Count of point-to-site connections on the gateway. |
For steps, see [Tutorial: Create a metric alert for an Azure resource](../azure-
| **Tunnel MMSA Count** | Count | 5 minutes | Number of main mode security associations present. |
| **Tunnel Peak PPS** | Count | 5 minutes | Max number of packets per second per tunnel. |
| **Tunnel QMSA Count** | Count | 5 minutes | Number of quick mode security associations present. |
-| **Tunnel Total Flow Count** | Count | 5 minutes | Number of distinct flows created per tunnel. |
+| **Tunnel Total Flow Count** | Count | 5 minutes | Number of distinct 3-tuple flows created per tunnel. |
| **User Vpn Route Count** | Count | 5 minutes | Number of user VPN routes configured on the VPN Gateway. |
| **VNet Address Prefix Count** | Count | 5 minutes | Number of VNet address prefixes that are used/advertised by the gateway. |
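To illustrate alerting on one of these metrics, here's a hedged sketch with `az monitor metrics alert create`; the internal metric name (`TunnelTotalFlowCount`), the threshold, and the action group ID are assumptions for illustration:

```azurecli
# Alert when the per-tunnel flow count approaches a limit (metric name, threshold, and action group are placeholders).
az monitor metrics alert create \
  --name 'tunnel-flow-count-alert' \
  --resource-group '<resource-group>' \
  --scopes "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Network/virtualNetworkGateways/<gateway-name>" \
  --condition "max TunnelTotalFlowCount > 200000" \
  --window-size 5m \
  --evaluation-frequency 5m \
  --action '<action-group-id>'
```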