Updates from: 02/12/2024 02:06:58
Service Microsoft Docs article Related commit history on GitHub Change details
ai-services How To Pronunciation Assessment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/how-to-pronunciation-assessment.md
Title: Use pronunciation assessment
-description: Learn about pronunciation assessment features that are currently publicly available.
+description: Learn about pronunciation assessment features that are currently publicly available. Choose the programming solution for your needs.
- devx-track-python - ignite-2023 Previously updated : 10/25/2023 Last updated : 02/07/2024 zone_pivot_groups: programming-languages-ai-services
+#Customer intent: As a developer, I want to implement pronunciation assessment on spoken language using a technology that works in my environment to give feedback on accuracy and fluency.
# Use pronunciation assessment
-In this article, you learn how to evaluate pronunciation with speech to text through the Speech SDK. To [get pronunciation assessment results](#get-pronunciation-assessment-results), you apply the `PronunciationAssessmentConfig` settings to a `SpeechRecognizer` object.
+In this article, you learn how to evaluate pronunciation with speech to text through the Speech SDK. Pronunciation assessment evaluates speech pronunciation and gives speakers feedback on the accuracy and fluency of spoken audio.
-> [!NOTE]
-> For information about availability of pronunciation assessment, see [supported languages](language-support.md?tabs=pronunciation-assessment) and [available regions](regions.md#speech-service).
->
-> As a baseline, usage of pronunciation assessment costs the same as speech to text for pay-as-you-go or commitment tier [pricing](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services). If you [purchase a commitment tier](../commitment-tier.md) for speech to text, the spend for pronunciation assessment goes towards meeting the commitment.
->
-> For pricing differences between scripted and unscripted assessment, see [the pricing note](./pronunciation-assessment-tool.md#pricing).
+## Use pronunciation assessment in streaming mode
+
+Pronunciation assessment supports uninterrupted streaming mode. The recording time can be unlimited through the Speech SDK. As long as you don't stop recording, the evaluation process doesn't finish and you can pause and resume evaluation conveniently.
-## Pronunciation assessment in streaming mode
+For information about availability of pronunciation assessment, see [supported languages](language-support.md?tabs=pronunciation-assessment) and [available regions](regions.md#speech-service).
-Pronunciation assessment supports uninterrupted streaming mode. The recording time can be unlimited through the Speech SDK. As long as you don't stop recording, the evaluation process doesn't finish and you can pause and resume evaluation conveniently. In streaming mode, the `AccuracyScore`, `FluencyScore`, `ProsodyScore`, and `CompletenessScore` will vary over time throughout the recording and evaluation process.
+As a baseline, usage of pronunciation assessment costs the same as speech to text for pay-as-you-go or commitment tier [pricing](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services). If you [purchase a commitment tier](../commitment-tier.md) for speech to text, the spend for pronunciation assessment goes towards meeting the commitment. For more information, see [Pricing](./pronunciation-assessment-tool.md#pricing).
::: zone pivot="programming-language-csharp"
For how to use Pronunciation Assessment in streaming mode in your own applicatio
::: zone-end
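For orientation, the following C# fragment is a minimal sketch of what streaming-mode assessment can look like. It isn't taken from the linked sample, and it assumes that `speechConfig` and `audioConfig` already exist and that the code runs inside an async method.

```csharp
// Minimal sketch (not from the linked sample): continuous recognition with
// pronunciation assessment applied; scores arrive per recognized segment.
var pronunciationAssessmentConfig = new PronunciationAssessmentConfig(
    referenceText: "",                        // empty for an unscripted assessment
    gradingSystem: GradingSystem.HundredMark,
    granularity: Granularity.Phoneme);

using var speechRecognizer = new SpeechRecognizer(speechConfig, audioConfig);
pronunciationAssessmentConfig.ApplyTo(speechRecognizer);

speechRecognizer.Recognized += (s, e) =>
{
    // Read per-segment scores while the stream continues.
    var result = PronunciationAssessmentResult.FromResult(e.Result);
    Console.WriteLine($"Accuracy: {result.AccuracyScore}, Fluency: {result.FluencyScore}");
};

await speechRecognizer.StartContinuousRecognitionAsync();
// ...keep streaming audio; call StopContinuousRecognitionAsync() to finish the evaluation.
```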
-## Configuration parameters
+## Set configuration parameters
::: zone pivot="programming-language-go" > [!NOTE]
-> Pronunciation assessment is not available with the Speech SDK for Go. You can read about the concepts in this guide, but you must select another programming language for implementation details.
+> Pronunciation assessment is not available with the Speech SDK for Go. You can read about the concepts in this guide. Select another programming language for your solution.
::: zone-end
-In the `SpeechRecognizer`, you can specify the language that you're learning or practicing improving pronunciation. The default locale is `en-US` if not otherwise specified. To learn how to specify the learning language for pronunciation assessment in your own application, see [sample code](https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/samples/csharp/sharedcontent/console/speech_recognition_samples.cs#LL1086C13-L1086C98).
+In the `SpeechRecognizer`, you can specify the language to learn or practice improving pronunciation. The default locale is `en-US`. To learn how to specify the learning language for pronunciation assessment in your own application, see [sample code](https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/samples/csharp/sharedcontent/console/speech_recognition_samples.cs#LL1086C13-L1086C98).
> [!TIP]
-> If you aren't sure which locale to set when a language has multiple locales (such as Spanish), try each locale (such as `es-ES` and `es-MX`) separately. Evaluate the results to determine which locale scores higher for your specific scenario.
+> If you aren't sure which locale to set for a language that has multiple locales, try each locale separately. For instance, for Spanish, try `es-ES` and `es-MX`. Determine which locale scores higher for your scenario.
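For illustration, here's a minimal C# sketch of setting the learning locale before you create the recognizer. The key and region values are placeholders, and the locale is only an example.

```csharp
// Minimal sketch: set the locale that the learner is practicing before you
// create the SpeechRecognizer. Replace the placeholders with your own values.
var speechConfig = SpeechConfig.FromSubscription("YourSubscriptionKey", "YourServiceRegion");
speechConfig.SpeechRecognitionLanguage = "es-ES"; // try "es-MX" as well and compare scores
```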
-You must create a `PronunciationAssessmentConfig` object. Optionally you can set `EnableProsodyAssessment` and `EnableContentAssessmentWithTopic` to enable prosody and content assessment. For more information, see [configuration methods](#configuration-methods).
+You must create a `PronunciationAssessmentConfig` object. You can set `EnableProsodyAssessment` and `EnableContentAssessmentWithTopic` to enable prosody and content assessment. For more information, see [configuration methods](#configuration-methods).
::: zone pivot="programming-language-csharp"
var pronunciationAssessmentConfig = new PronunciationAssessmentConfig(
pronunciationAssessmentConfig.EnableProsodyAssessment(); pronunciationAssessmentConfig.EnableContentAssessmentWithTopic("greeting"); ```
-
+ ::: zone-end ::: zone pivot="programming-language-cpp"
SPXPronunciationAssessmentConfiguration *pronunicationConfig =
::: zone-end - ::: zone pivot="programming-language-swift" ```swift
pronAssessmentConfig.enableContentAssessment(withTopic: "greeting")
This table lists some of the key configuration parameters for pronunciation assessment.
-| Parameter | Description |
+| Parameter | Description |
|--|-|
-| `ReferenceText` | The text that the pronunciation is evaluated against.<br/><br/>The `ReferenceText` parameter is optional. Set the reference text if you want to run a [scripted assessment](#scripted-assessment-results) for the reading language learning scenario. Don't set the reference text if you want to run an [unscripted assessment](#unscripted-assessment-results) for the speaking language learning scenario.<br/><br/>For pricing differences between scripted and unscripted assessment, see [the pricing note](./pronunciation-assessment-tool.md#pricing) |
-| `GradingSystem` | The point system for score calibration. The `FivePoint` system gives a 0-5 floating point score, and `HundredMark` gives a 0-100 floating point score. Default: `FivePoint`. |
-| `Granularity` | Determines the lowest level of evaluation granularity. Scores for levels greater than or equal to the minimal value are returned. Accepted values are `Phoneme`, which shows the score on the full text, word, syllable, and phoneme level, `Syllable`, which shows the score on the full text, word, and syllable level, `Word`, which shows the score on the full text and word level, or `FullText`, which shows the score on the full text level only. The provided full reference text can be a word, sentence, or paragraph, and it depends on your input reference text. Default: `Phoneme`.|
-| `EnableMiscue` | Enables miscue calculation when the pronounced words are compared to the reference text. Enabling miscue is optional. If this value is `True`, the `ErrorType` result value can be set to `Omission` or `Insertion` based on the comparison. Accepted values are `False` and `True`. Default: `False`. To enable miscue calculation, set the `EnableMiscue` to `True`. You can refer to the code snippet below the table. |
-| `ScenarioId` | A GUID indicating a customized point system. |
+| `ReferenceText` | The text that the pronunciation is evaluated against.<br/><br/>The `ReferenceText` parameter is optional. Set the reference text if you want to run a [scripted assessment](#scripted-assessment-results) for the reading language learning scenario. Don't set the reference text if you want to run an [unscripted assessment](#unscripted-assessment-results).<br/><br/>For pricing differences between scripted and unscripted assessment, see [Pricing](./pronunciation-assessment-tool.md#pricing). |
+| `GradingSystem` | The point system for score calibration. `FivePoint` gives a 0-5 floating point score. `HundredMark` gives a 0-100 floating point score. Default: `FivePoint`. |
+| `Granularity` | Determines the lowest level of evaluation granularity. Returns scores for levels greater than or equal to the minimal value. Accepted values are `Phoneme`, which shows the score on the full text, word, syllable, and phoneme level, `Syllable`, which shows the score on the full text, word, and syllable level, `Word`, which shows the score on the full text and word level, or `FullText`, which shows the score on the full text level only. The provided full reference text can be a word, sentence, or paragraph. It depends on your input reference text. Default: `Phoneme`.|
+| `EnableMiscue` | Enables miscue calculation when the pronounced words are compared to the reference text. Enabling miscue is optional. If this value is `True`, the `ErrorType` result value can be set to `Omission` or `Insertion` based on the comparison. Values are `False` and `True`. Default: `False`. To enable miscue calculation, set `EnableMiscue` to `True`. You can refer to the code snippet below the table. |
+| `ScenarioId` | A GUID for a customized point system. |
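The code snippet referenced in the `EnableMiscue` row isn't included in this diff. As a hedged illustration, here's a minimal C# sketch of these parameters; the reference text is only an example.

```csharp
// Minimal sketch: a scripted assessment with miscue calculation enabled.
// If you omit gradingSystem and granularity, the defaults are FivePoint and Phoneme.
var pronunciationAssessmentConfig = new PronunciationAssessmentConfig(
    referenceText: "good morning",
    gradingSystem: GradingSystem.HundredMark,
    granularity: Granularity.Phoneme,
    enableMiscue: true);
```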
### Configuration methods
This table lists some of the optional methods you can set for the `Pronunciation
> [!NOTE] > Content and prosody assessments are only available in the [en-US](./language-support.md?tabs=pronunciation-assessment) locale.
-| Method | Description |
+| Method | Description |
|--|-|
-| `EnableProsodyAssessment` | Enables prosody assessment for your pronunciation evaluation. This feature assesses aspects like stress, intonation, speaking speed, and rhythm, providing insights into the naturalness and expressiveness of your speech.<br/><br/>Enabling prosody assessment is optional. If this method is called, the `ProsodyScore` result value is returned. |
-| `EnableContentAssessmentWithTopic` | Enables content assessment. A content assessment is part of the [unscripted assessment](#unscripted-assessment-results) for the speaking language learning scenario. By providing a topic description via this method, you can enhance the assessment's understanding of the specific topic being spoken about. For example, in C# call `pronunciationAssessmentConfig.EnableContentAssessmentWithTopic("greeting");`, you can replace 'greeting' with your desired text to describe a topic. The topic value has no length limit and currently only supports `en-US` locale . |
+| `EnableProsodyAssessment` | Enables prosody assessment for your pronunciation evaluation. This feature assesses aspects like stress, intonation, speaking speed, and rhythm. This feature provides insights into the naturalness and expressiveness of your speech.<br/><br/>Enabling prosody assessment is optional. If this method is called, the `ProsodyScore` result value is returned. |
+| `EnableContentAssessmentWithTopic` | Enables content assessment. A content assessment is part of the [unscripted assessment](#unscripted-assessment-results) for the speaking language learning scenario. By providing a description, you can enhance the assessment's understanding of the specific topic being spoken about. For example, in C# call `pronunciationAssessmentConfig.EnableContentAssessmentWithTopic("greeting");`. You can replace 'greeting' with your desired text to describe a topic. The description has no length limit and currently only supports the `en-US` locale. |
-## Get pronunciation assessment results
+## Get pronunciation assessment results
-When speech is recognized, you can request the pronunciation assessment results as SDK objects or a JSON string.
+When speech is recognized, you can request the pronunciation assessment results as SDK objects or a JSON string.
::: zone pivot="programming-language-csharp"
using (var speechRecognizer = new SpeechRecognizer(
} ``` ::: zone pivot="programming-language-cpp"
-Word, syllable, and phoneme results aren't available via SDK objects with the Speech SDK for C++. Word, syllable, and phoneme results are only available in the JSON string.
+Word, syllable, and phoneme results aren't available by using SDK objects with the Speech SDK for C++. Word, syllable, and phoneme results are only available in the JSON string.
```cpp auto speechRecognizer = SpeechRecognizer::FromConfig(
auto pronunciationAssessmentResultJson = speechRecognitionResult->Properties.Get
``` To learn how to specify the learning language for pronunciation assessment in your own application, see [sample code](https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/samples/cpp/windows/console/samples/speech_recognition_samples.cpp#L624).
-
+ ::: zone pivot="programming-language-java"
-For Android application development, the word, syllable, and phoneme results are available via SDK objects with the Speech SDK for Java. The results are also available in the JSON string. For Java Runtime (JRE) application development, the word, syllable, and phoneme results are only available in the JSON string.
+For Android application development, the word, syllable, and phoneme results are available by using SDK objects with the Speech SDK for Java. The results are also available in the JSON string. For Java Runtime (JRE) application development, the word, syllable, and phoneme results are only available in the JSON string.
```Java SpeechRecognizer speechRecognizer = new SpeechRecognizer(
pronunciationAssessmentConfig.close();
speechRecognitionResult.close(); ``` ::: zone pivot="programming-language-javascript"
pronunciationAssessmentConfig.applyTo(speechRecognizer);
speechRecognizer.recognizeOnceAsync((speechRecognitionResult: SpeechSDK.SpeechRecognitionResult) => { // The pronunciation assessment result as a Speech SDK object
- var pronunciationAssessmentResult = SpeechSDK.PronunciationAssessmentResult.fromResult(speechRecognitionResult);
+ var pronunciationAssessmentResult = SpeechSDK.PronunciationAssessmentResult.fromResult(speechRecognitionResult);
- // The pronunciation assessment result as a JSON string
- var pronunciationAssessmentResultJson = speechRecognitionResult.properties.getProperty(SpeechSDK.PropertyId.SpeechServiceResponse_JsonResult);
+ // The pronunciation assessment result as a JSON string
+ var pronunciationAssessmentResultJson = speechRecognitionResult.properties.getProperty(SpeechSDK.PropertyId.SpeechServiceResponse_JsonResult);
}, {}); ```
To learn how to specify the learning language for pronunciation assessment in yo
::: zone-end ::: zone pivot="programming-language-objectivec"
-
+ ```ObjectiveC SPXSpeechRecognizer* speechRecognizer = \ [[SPXSpeechRecognizer alloc] initWithSpeechConfiguration:speechConfig
NSString* pronunciationAssessmentResultJson = [speechRecognitionResult.propertie
To learn how to specify the learning language for pronunciation assessment in your own application, see [sample code](https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/samples/objective-c/ios/speech-samples/speech-samples/ViewController.m#L862). ::: zone pivot="programming-language-swift"
let pronunciationAssessmentResultJson = speechRecognitionResult!.properties?.get
### Result parameters
-Depending on whether you're using [scripted](#scripted-assessment-results) or [unscripted](#unscripted-assessment-results) assessment, you can get different pronunciation assessment results. Scripted assessment is for the reading language learning scenario, and unscripted assessment is for the speaking language learning scenario.
+Depending on whether you're using [scripted](#scripted-assessment-results) or [unscripted](#unscripted-assessment-results) assessment, you can get different pronunciation assessment results. Scripted assessment is for the reading language learning scenario. Unscripted assessment is for the speaking language learning scenario.
> [!NOTE]
-> For pricing differences between scripted and unscripted assessment, see [the pricing note](./pronunciation-assessment-tool.md#pricing).
+> For pricing differences between scripted and unscripted assessment, see [Pricing](./pronunciation-assessment-tool.md#pricing).
#### Scripted assessment results
-This table lists some of the key pronunciation assessment results for the scripted assessment (reading scenario) and the supported granularity for each.
+This table lists some of the key pronunciation assessment results for the scripted assessment, or reading scenario.
-| Parameter | Description |Granularity|
-|--|-|-|
-| `AccuracyScore` | Pronunciation accuracy of the speech. Accuracy indicates how closely the phonemes match a native speaker's pronunciation. Syllable, word, and full text accuracy scores are aggregated from phoneme-level accuracy score, and refined with assessment objectives.|Phoneme level,<br>Syllable level (en-US only),<br>Word level,<br>Full Text level|
-| `FluencyScore` | Fluency of the given speech. Fluency indicates how closely the speech matches a native speaker's use of silent breaks between words. |Full Text level|
+| Parameter | Description | Granularity |
+|:-|:|:|
+| `AccuracyScore` | Pronunciation accuracy of the speech. Accuracy indicates how closely the phonemes match a native speaker's pronunciation. Syllable, word, and full text accuracy scores are aggregated from the phoneme-level accuracy score, and refined with assessment objectives. | Phoneme level,<br>Syllable level (en-US only),<br>Word level,<br>Full Text level |
+| `FluencyScore` | Fluency of the given speech. Fluency indicates how closely the speech matches a native speaker's use of silent breaks between words. | Full Text level |
| `CompletenessScore` | Completeness of the speech, calculated by the ratio of pronounced words to the input reference text. |Full Text level|
-| `ProsodyScore` | Prosody of the given speech. Prosody indicates how natural the given speech is, including stress, intonation, speaking speed, and rhythm. | Full Text level|
-| `PronScore` | Overall score indicating the pronunciation quality of the given speech. `PronScore` is aggregated from `AccuracyScore`, `FluencyScore`, and `CompletenessScore` with weight. |Full Text level|
-| `ErrorType` | This value indicates whether a word is omitted, inserted, improperly inserted with a break, or missing a break at punctuation compared to the reference text. It also indicates whether a word is badly pronounced, or monotonically rising, falling, or flat on the utterance. Possible values are `None` (meaning no error on this word), `Omission`, `Insertion`, `Mispronunciation`, `UnexpectedBreak`, `MissingBreak`, and `Monotone`. The error type can be `Mispronunciation` when the pronunciation `AccuracyScore` for a word is below 60.| Word level|
+| `ProsodyScore` | Prosody of the given speech. Prosody indicates how natural the given speech is, including stress, intonation, speaking speed, and rhythm. | Full Text level|
+| `PronScore` | Overall score of the pronunciation quality of the given speech. `PronScore` is aggregated from `AccuracyScore`, `FluencyScore`, and `CompletenessScore` with weight. |Full Text level|
+| `ErrorType` | This value indicates the error type compared to the reference text: whether a word is omitted, inserted, improperly inserted with a break, or missing a break at punctuation. It also indicates whether a word is badly pronounced, or monotonically rising, falling, or flat on the utterance. Possible values are `None` for no error on this word, `Omission`, `Insertion`, `Mispronunciation`, `UnexpectedBreak`, `MissingBreak`, and `Monotone`. The error type can be `Mispronunciation` when the pronunciation `AccuracyScore` for a word is below 60. | Word level |
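To show how the full-text scores surface in code, here's a minimal C# sketch. It assumes a `speechRecognitionResult` produced by a recognizer that had the assessment configuration applied, as in the earlier example.

```csharp
// Minimal sketch: read the full-text scores from the SDK result object.
var assessment = PronunciationAssessmentResult.FromResult(speechRecognitionResult);
Console.WriteLine($"Accuracy: {assessment.AccuracyScore}");
Console.WriteLine($"Fluency: {assessment.FluencyScore}");
Console.WriteLine($"Completeness: {assessment.CompletenessScore}");
Console.WriteLine($"Overall: {assessment.PronunciationScore}");
```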
#### Unscripted assessment results
-This table lists some of the key pronunciation assessment results for the unscripted assessment (speaking scenario) and the supported granularity for each.
+This table lists some of the key pronunciation assessment results for the unscripted assessment, or speaking scenario.
+
+`VocabularyScore`, `GrammarScore`, and `TopicScore` parameters roll up to the combined content assessment.
> [!NOTE]
-> VocabularyScore, GrammarScore, and TopicScore parameters roll up to the combined content assessment.
->
> Content and prosody assessments are only available in the [en-US](./language-support.md?tabs=pronunciation-assessment) locale.
-| Response parameter | Description |Granularity|
-|--|-|-|
-| `AccuracyScore` | Pronunciation accuracy of the speech. Accuracy indicates how closely the phonemes match a native speaker's pronunciation. Syllable, word, and full text accuracy scores are aggregated from phoneme-level accuracy score, and refined with assessment objectives. | Phoneme level,<br>Syllable level (en-US only),<br>Word level,<br>Full Text level|
-| `FluencyScore` | Fluency of the given speech. Fluency indicates how closely the speech matches a native speaker's use of silent breaks between words. | Full Text level|
-| `ProsodyScore` | Prosody of the given speech. Prosody indicates how natural the given speech is, including stress, intonation, speaking speed, and rhythm. | Full Text level|
-| `VocabularyScore` | Proficiency in lexical usage. It evaluates the speaker's effective usage of words and their appropriateness within the given context to express ideas accurately, and the level of lexical complexity. | Full Text level|
-| `GrammarScore` | Correctness in using grammar and variety of sentence patterns. Grammatical errors are jointly evaluated by lexical accuracy, grammatical accuracy, and diversity of sentence structures. | Full Text level|
-| `TopicScore` | Level of understanding and engagement with the topic, which provides insights into the speaker's ability to express their thoughts and ideas effectively and the ability to engage with the topic. | Full Text level|
-| `PronScore` | Overall score indicating the pronunciation quality of the given speech. This is aggregated from AccuracyScore, FluencyScore, and CompletenessScore with weight. | Full Text level|
-| `ErrorType` | This value indicates whether a word is badly pronounced, improperly inserted with a break, missing a break at punctuation, or monotonically rising, falling, or flat on the utterance. Possible values are `None` (meaning no error on this word), `Mispronunciation`, `UnexpectedBreak`, `MissingBreak`, and `Monotone`. | Word level|
-
-The following table describes the prosody assessment results in more detail:
-
-| Field | Description |
-|-|--|
-| `ProsodyScore` | Prosody score of the entire utterance. |
-| `Feedback` | Feedback on the word level, including Break and Intonation. |
-|`Break` | |
-| `ErrorTypes` | Error types related to breaks, including `UnexpectedBreak` and `MissingBreak`. In the current version, we don't provide the break error type. You need to set thresholds on the following fields "UnexpectedBreak - Confidence" and "MissingBreak - confidence", respectively to decide whether there's an unexpected break or missing break before the word. |
-| `UnexpectedBreak` | Indicates an unexpected break before the word. |
-| `MissingBreak` | Indicates a missing break before the word. |
-| `Thresholds` | Suggested thresholds on both confidence scores are 0.75. That means, if the value of 'UnexpectedBreak - Confidence' is larger than 0.75, it can be decided to have an unexpected break. If the value of 'MissingBreak - confidence' is larger than 0.75, it can be decided to have a missing break. If you want to have variable detection sensitivity on these two breaks, it's suggested to assign different thresholds to the 'UnexpectedBreak - Confidence' and 'MissingBreak - Confidence' fields. |
-|`Intonation`| Indicates intonation in speech. |
-| `ErrorTypes` | Error types related to intonation, currently supporting only Monotone. If 'Monotone' exists in the field 'ErrorTypes', the utterance is detected to be monotonic. Monotone is detected on the whole utterance, but the tag is assigned to all the words. All the words in the same utterance share the same monotone detection information. |
-| `Monotone` | Indicates monotonic speech. |
-| `Thresholds (Monotone Confidence)` | The fields 'Monotone - SyllablePitchDeltaConfidence' are reserved for user-customized monotone detection. If you're unsatisfied with the provided monotone decision, you can adjust the thresholds on these fields to customize the detection according to your preferences. |
+| Response parameter | Description | Granularity |
+|:-|:|:|
+| `AccuracyScore` | Pronunciation accuracy of the speech. Accuracy indicates how closely the phonemes match a native speaker's pronunciation. Syllable, word, and full text accuracy scores are aggregated from phoneme-level accuracy score, and refined with assessment objectives. | Phoneme level,<br>Syllable level (en-US only),<br>Word level,<br>Full Text level |
+| `FluencyScore` | Fluency of the given speech. Fluency indicates how closely the speech matches a native speaker's use of silent breaks between words. | Full Text level |
+| `ProsodyScore` | Prosody of the given speech. Prosody indicates how natural the given speech is, including stress, intonation, speaking speed, and rhythm. | Full Text level |
+| `VocabularyScore` | Proficiency in lexical usage. It evaluates the speaker's effective usage of words and their appropriateness within the given context to express ideas accurately, and the level of lexical complexity. | Full Text level |
+| `GrammarScore` | Correctness in using grammar and variety of sentence patterns. Lexical accuracy, grammatical accuracy, and diversity of sentence structures are jointly evaluated to determine grammatical errors. | Full Text level|
+| `TopicScore` | Level of understanding and engagement with the topic, which provides insights into the speaker's ability to express their thoughts and ideas effectively and the ability to engage with the topic. | Full Text level|
+| `PronScore` | Overall score of the pronunciation quality of the given speech. This value is aggregated from `AccuracyScore`, `FluencyScore`, and `CompletenessScore` with weight. | Full Text level |
+| `ErrorType` | Indicates whether a word is badly pronounced, improperly inserted with a break, or missing a break at punctuation. It also indicates whether a pronunciation is monotonically rising, falling, or flat on the utterance. Possible values are `None` for no error on this word, `Mispronunciation`, `UnexpectedBreak`, `MissingBreak`, and `Monotone`. | Word level |
+
+The following table describes the prosody assessment results in more detail:
+
+| Field | Description |
+|:|:|
+| `ProsodyScore` | Prosody score of the entire utterance. |
+| `Feedback` | Feedback on the word level, including `Break` and `Intonation`. |
+| `Break` | |
+| `ErrorTypes` | Error types related to breaks, including `UnexpectedBreak` and `MissingBreak`. The current version doesn't provide the break error type. You need to set thresholds on the fields `UnexpectedBreak - Confidence` and `MissingBreak - confidence` to decide whether there's an unexpected break or missing break before the word. |
+| `UnexpectedBreak` | Indicates an unexpected break before the word. |
+| `MissingBreak` | Indicates a missing break before the word. |
+| `Thresholds` | Suggested thresholds on both confidence scores are 0.75. That means, if the value of `UnexpectedBreak - Confidence` is larger than 0.75, it has an unexpected break. If the value of `MissingBreak - confidence` is larger than 0.75, it has a missing break. If you want to have variable detection sensitivity on these two breaks, you can assign different thresholds to the `UnexpectedBreak - Confidence` and `MissingBreak - Confidence` fields. |
+| `Intonation` | Indicates intonation in speech. |
+| `ErrorTypes` | Error types related to intonation, currently supporting only Monotone. If the `Monotone` exists in the field `ErrorTypes`, the utterance is detected to be monotonic. Monotone is detected on the whole utterance, but the tag is assigned to all the words. All the words in the same utterance share the same monotone detection information. |
+| `Monotone` | Indicates monotonic speech. |
+| `Thresholds (Monotone Confidence)` | The fields `Monotone - SyllablePitchDeltaConfidence` are reserved for user-customized monotone detection. If you're unsatisfied with the provided monotone decision, adjust the thresholds on these fields to customize the detection according to your preferences. |
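To make the break thresholds concrete, here's a minimal C# sketch. The JSON fragment and its nesting are hypothetical, because the exact shape of the prosody `Feedback` payload isn't shown in this article; only the 0.75 threshold logic follows the table above.

```csharp
using System;
using System.Text.Json;

class BreakThresholdSketch
{
    static void Main()
    {
        // Hypothetical per-word break feedback; real field names and nesting can differ.
        const string wordBreakJson = @"{
            ""UnexpectedBreak"": { ""Confidence"": 0.82 },
            ""MissingBreak"":    { ""Confidence"": 0.10 }
        }";
        const double threshold = 0.75; // suggested threshold for both confidences

        using var doc = JsonDocument.Parse(wordBreakJson);
        double unexpected = doc.RootElement.GetProperty("UnexpectedBreak").GetProperty("Confidence").GetDouble();
        double missing = doc.RootElement.GetProperty("MissingBreak").GetProperty("Confidence").GetDouble();

        Console.WriteLine($"Unexpected break before this word: {unexpected > threshold}"); // True
        Console.WriteLine($"Missing break before this word: {missing > threshold}");       // False
    }
}
```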
### JSON result example
-The [scripted](#scripted-assessment-results) pronunciation assessment results for the spoken word "hello" are shown as a JSON string in the following example. Here's what you should know:
+The [scripted](#scripted-assessment-results) pronunciation assessment results for the spoken word "hello" are shown as a JSON string in the following example.
+
- The phoneme [alphabet](#phoneme-alphabet-format) is IPA.
-- The [syllables](#syllable-groups) are returned alongside phonemes for the same word.
-- You can use the `Offset` and `Duration` values to align syllables with their corresponding phonemes. For example, the starting offset (11700000) of the second syllable ("loʊ") aligns with the third phoneme ("l"). The offset represents the time at which the recognized speech begins in the audio stream, and it's measured in 100-nanosecond units. To learn more about `Offset` and `Duration`, see [response properties](rest-speech-to-text-short.md#response-properties).
-- There are five `NBestPhonemes` corresponding to the number of [spoken phonemes](#spoken-phoneme) requested.
-- Within `Phonemes`, the most likely [spoken phonemes](#spoken-phoneme) was `"ə"` instead of the expected phoneme `"ɛ"`. The expected phoneme `"ɛ"` only received a confidence score of 47. Other potential matches received confidence scores of 52, 17, and 2.
+- The [syllables](#syllable-groups) are returned alongside phonemes for the same word.
+- You can use the `Offset` and `Duration` values to align syllables with their corresponding phonemes. For example, the starting offset (11700000) of the second syllable `loʊ` aligns with the third phoneme, `l`. The offset represents the time at which the recognized speech begins in the audio stream. The value is measured in 100-nanosecond units. To learn more about `Offset` and `Duration`, see [response properties](rest-speech-to-text-short.md#response-properties).
+- There are five `NBestPhonemes` that correspond to the number of [spoken phonemes](#assess-spoken-phonemes) requested.
+- Within `Phonemes`, the most likely [spoken phonemes](#assess-spoken-phonemes) was `ə` instead of the expected phoneme `ɛ`. The expected phoneme `ɛ` only received a confidence score of 47. Other potential matches received confidence scores of 52, 17, and 2.
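The `Offset` and `Duration` ticks mentioned above convert to seconds by dividing by 10,000,000. The following minimal C# sketch uses the values from the JSON example that follows.

```csharp
// Minimal sketch: Offset and Duration are in 100-nanosecond units (ticks).
// Values below come from the JSON example that follows.
const long syllableOffsetTicks = 11700000; // second syllable "loʊ"
const long phonemeOffsetTicks  = 11700000; // third phoneme "l"

Console.WriteLine(syllableOffsetTicks / 10_000_000.0);        // 1.17 seconds into the stream
Console.WriteLine(syllableOffsetTicks == phonemeOffsetTicks); // True: the syllable and phoneme align
```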
```json {
- "Id": "bbb42ea51bdb46d19a1d685e635fe173",
- "RecognitionStatus": 0,
- "Offset": 7500000,
- "Duration": 13800000,
- "DisplayText": "Hello.",
- "NBest": [
- {
- "Confidence": 0.975003,
- "Lexical": "hello",
- "ITN": "hello",
- "MaskedITN": "hello",
- "Display": "Hello.",
- "PronunciationAssessment": {
- "AccuracyScore": 100,
- "FluencyScore": 100,
- "CompletenessScore": 100,
- "PronScore": 100
- },
- "Words": [
- {
- "Word": "hello",
- "Offset": 7500000,
- "Duration": 13800000,
- "PronunciationAssessment": {
- "AccuracyScore": 99.0,
- "ErrorType": "None"
- },
- "Syllables": [
- {
- "Syllable": "hɛ",
- "PronunciationAssessment": {
- "AccuracyScore": 91.0
- },
- "Offset": 7500000,
+ "Id": "bbb42ea51bdb46d19a1d685e635fe173",
+ "RecognitionStatus": 0,
+ "Offset": 7500000,
+ "Duration": 13800000,
+ "DisplayText": "Hello.",
+ "NBest": [
+ {
+ "Confidence": 0.975003,
+ "Lexical": "hello",
+ "ITN": "hello",
+ "MaskedITN": "hello",
+ "Display": "Hello.",
+ "PronunciationAssessment": {
+ "AccuracyScore": 100,
+ "FluencyScore": 100,
+ "CompletenessScore": 100,
+ "PronScore": 100
+ },
+ "Words": [
+ {
+ "Word": "hello",
+ "Offset": 7500000,
+ "Duration": 13800000,
+ "PronunciationAssessment": {
+ "AccuracyScore": 99.0,
+ "ErrorType": "None"
+ },
+ "Syllables": [
+ {
+ "Syllable": "hɛ",
+ "PronunciationAssessment": {
+ "AccuracyScore": 91.0
+ },
+ "Offset": 7500000,
"Duration": 4100000
- },
- {
- "Syllable": "loʊ",
- "PronunciationAssessment": {
- "AccuracyScore": 100.0
- },
- "Offset": 11700000,
+ },
+ {
+ "Syllable": "loʊ",
+ "PronunciationAssessment": {
+ "AccuracyScore": 100.0
+ },
+ "Offset": 11700000,
"Duration": 9600000
- }
- ],
- "Phonemes": [
- {
- "Phoneme": "h",
- "PronunciationAssessment": {
- "AccuracyScore": 98.0,
- "NBestPhonemes": [
- {
- "Phoneme": "h",
- "Score": 100.0
- },
- {
- "Phoneme": "oʊ",
- "Score": 52.0
- },
- {
- "Phoneme": "ə",
- "Score": 35.0
- },
- {
- "Phoneme": "k",
- "Score": 23.0
- },
- {
- "Phoneme": "æ",
- "Score": 20.0
- }
- ]
- },
- "Offset": 7500000,
+ }
+ ],
+ "Phonemes": [
+ {
+ "Phoneme": "h",
+ "PronunciationAssessment": {
+ "AccuracyScore": 98.0,
+ "NBestPhonemes": [
+ {
+ "Phoneme": "h",
+ "Score": 100.0
+ },
+ {
+ "Phoneme": "oʊ",
+ "Score": 52.0
+ },
+ {
+ "Phoneme": "ə",
+ "Score": 35.0
+ },
+ {
+ "Phoneme": "k",
+ "Score": 23.0
+ },
+ {
+ "Phoneme": "æ",
+ "Score": 20.0
+ }
+ ]
+ },
+ "Offset": 7500000,
"Duration": 3500000
- },
- {
- "Phoneme": "ɛ",
- "PronunciationAssessment": {
- "AccuracyScore": 47.0,
- "NBestPhonemes": [
- {
- "Phoneme": "ə",
- "Score": 100.0
- },
- {
- "Phoneme": "l",
- "Score": 52.0
- },
- {
- "Phoneme": "ɛ",
- "Score": 47.0
- },
- {
- "Phoneme": "h",
- "Score": 17.0
- },
- {
- "Phoneme": "æ",
- "Score": 2.0
- }
- ]
- },
- "Offset": 11100000,
+ },
+ {
+ "Phoneme": "ɛ",
+ "PronunciationAssessment": {
+ "AccuracyScore": 47.0,
+ "NBestPhonemes": [
+ {
+ "Phoneme": "ə",
+ "Score": 100.0
+ },
+ {
+ "Phoneme": "l",
+ "Score": 52.0
+ },
+ {
+ "Phoneme": "ɛ",
+ "Score": 47.0
+ },
+ {
+ "Phoneme": "h",
+ "Score": 17.0
+ },
+ {
+ "Phoneme": "æ",
+ "Score": 2.0
+ }
+ ]
+ },
+ "Offset": 11100000,
"Duration": 500000
- },
- {
- "Phoneme": "l",
- "PronunciationAssessment": {
- "AccuracyScore": 100.0,
- "NBestPhonemes": [
- {
- "Phoneme": "l",
- "Score": 100.0
- },
- {
- "Phoneme": "oʊ",
- "Score": 46.0
- },
- {
- "Phoneme": "ə",
- "Score": 5.0
- },
- {
- "Phoneme": "ɛ",
- "Score": 3.0
- },
- {
- "Phoneme": "u",
- "Score": 1.0
- }
- ]
- },
- "Offset": 11700000,
+ },
+ {
+ "Phoneme": "l",
+ "PronunciationAssessment": {
+ "AccuracyScore": 100.0,
+ "NBestPhonemes": [
+ {
+ "Phoneme": "l",
+ "Score": 100.0
+ },
+ {
+ "Phoneme": "oʊ",
+ "Score": 46.0
+ },
+ {
+ "Phoneme": "ə",
+ "Score": 5.0
+ },
+ {
+ "Phoneme": "ɛ",
+ "Score": 3.0
+ },
+ {
+ "Phoneme": "u",
+ "Score": 1.0
+ }
+ ]
+ },
+ "Offset": 11700000,
"Duration": 1100000
- },
- {
- "Phoneme": "oʊ",
- "PronunciationAssessment": {
- "AccuracyScore": 100.0,
- "NBestPhonemes": [
- {
- "Phoneme": "oʊ",
- "Score": 100.0
- },
- {
- "Phoneme": "d",
- "Score": 29.0
- },
- {
- "Phoneme": "t",
- "Score": 24.0
- },
- {
- "Phoneme": "n",
- "Score": 22.0
- },
- {
- "Phoneme": "l",
- "Score": 18.0
- }
- ]
- },
- "Offset": 12900000,
+ },
+ {
+ "Phoneme": "oʊ",
+ "PronunciationAssessment": {
+ "AccuracyScore": 100.0,
+ "NBestPhonemes": [
+ {
+ "Phoneme": "oʊ",
+ "Score": 100.0
+ },
+ {
+ "Phoneme": "d",
+ "Score": 29.0
+ },
+ {
+ "Phoneme": "t",
+ "Score": 24.0
+ },
+ {
+ "Phoneme": "n",
+ "Score": 22.0
+ },
+ {
+ "Phoneme": "l",
+ "Score": 18.0
+ }
+ ]
+ },
+ "Offset": 12900000,
"Duration": 8400000
- }
- ]
- }
- ]
- }
- ]
+ }
+ ]
+ }
+ ]
+ }
+ ]
} ```
You can get pronunciation assessment scores for:
- Syllable groups
- Phonemes in [SAPI](/previous-versions/windows/desktop/ee431828(v=vs.85)#american-english-phoneme-table) or [IPA](https://en.wikipedia.org/wiki/IPA) format
-## Supported features per locale
+### Supported features per locale
-The following table summarizes which feature are supported per locale. You can read more on the specific feature in the sections below.
+The following table summarizes which features are supported per locale. For more specifics, see the following sections.
-| Phoneme alphabet | IPA | SAPI |
-|--|-|-|
-| Phoneme name | `en-US` | `en-US`, `en-GB`, `zh-CN` |
-| Syllable group | `en-US` | `en-US`, `en-GB` |
-| Spoken phoneme | `en-US` | `en-US`, `en-GB` |
+| Phoneme alphabet | IPA | SAPI |
+|:--|:--|:--|
+| Phoneme name | `en-US` | `en-US`, `en-GB`, `zh-CN` |
+| Syllable group | `en-US` | `en-US`, `en-GB` |
+| Spoken phoneme | `en-US` | `en-US`, `en-GB` |
-## Syllable groups
+### Syllable groups
-Pronunciation assessment can provide syllable-level assessment results. Grouping in syllables is more legible and aligned with speaking habits, as a word is typically pronounced syllable by syllable rather than phoneme by phoneme.
+Pronunciation assessment can provide syllable-level assessment results. A word is typically pronounced syllable by syllable rather than phoneme by phoneme. Grouping in syllables is more legible and aligned with speaking habits.
Pronunciation assessment supports syllable groups only in `en-US` with IPA and in both `en-US` and `en-GB` with SAPI. The following table compares example phonemes with the corresponding syllables.

| Sample word | Phonemes | Syllables |
-|--|-|-|
-|technological|teknələdʒɪkl|tek·nə·lɑ·dʒɪkl|
-|hello|hɛloʊ|hɛ·loʊ|
-|luck|lʌk|lʌk|
-|photosynthesis|foʊtəsɪnθəsɪs|foʊ·tə·sɪn·θə·sɪs|
+|:|:|:-|
+| technological | teknələdʒɪkl | tek·nə·lɑ·dʒɪkl |
+| hello | hɛloʊ | hɛ·loʊ |
+| luck | lʌk |lʌk |
+| photosynthesis | foʊtəsɪnθəsɪs | foʊ·tə·sɪn·θə·sɪs |
-To request syllable-level results along with phonemes, set the granularity [configuration parameter](#configuration-parameters) to `Phoneme`.
+To request syllable-level results along with phonemes, set the granularity [configuration parameter](#set-configuration-parameters) to `Phoneme`.
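For example, in C#, a minimal sketch (the `referenceText` variable is assumed to exist):

```csharp
// Minimal sketch: Granularity.Phoneme returns full-text, word, syllable, and
// phoneme-level results (syllable level in supported locales only).
var pronunciationAssessmentConfig = new PronunciationAssessmentConfig(
    referenceText,
    GradingSystem.HundredMark,
    Granularity.Phoneme);
```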
-## Phoneme alphabet format
+### Phoneme alphabet format
Pronunciation assessment supports phoneme name in `en-US` with IPA and in `en-US`, `en-GB` and `zh-CN` with SAPI.
-For locales that support phoneme name, the phoneme name is provided together with the score, to help identify which phonemes were pronounced accurately or inaccurately. For other locales, you can only get the phoneme score.
+For locales that support phoneme name, the phoneme name is provided together with the score. Phoneme names help identify which phonemes were pronounced accurately or inaccurately. For other locales, you can only get the phoneme score.
The following table compares example SAPI phonemes with the corresponding IPA phonemes.

| Sample word | SAPI Phonemes | IPA phonemes |
-|--|-|-|
-|hello|h eh l ow|h ɛ l oʊ|
-|luck|l ah k|l ʌ k|
-|photosynthesis|f ow t ax s ih n th ax s ih s|f oʊ t ə s ɪ n θ ə s ɪ s|
+|:|:--|:-|
+| hello | h eh l ow | h ɛ l oʊ |
+| luck | l ah k | l ʌ k |
+| photosynthesis | f ow t ax s ih n th ax s ih s | f oʊ t ə s ɪ n θ ə s ɪ s |
-To request IPA phonemes, set the phoneme alphabet to `"IPA"`. If you don't specify the alphabet, the phonemes are in SAPI format by default.
+To request IPA phonemes, set the phoneme alphabet to `IPA`. If you don't specify the alphabet, the phonemes are in SAPI format by default.
::: zone pivot="programming-language-csharp" ```csharp pronunciationAssessmentConfig.PhonemeAlphabet = "IPA"; ```
-
+ ::: zone-end ::: zone pivot="programming-language-cpp"
pronunciationAssessmentConfig.PhonemeAlphabet = "IPA";
```cpp auto pronunciationAssessmentConfig = PronunciationAssessmentConfig::CreateFromJson("{\"referenceText\":\"good morning\",\"gradingSystem\":\"HundredMark\",\"granularity\":\"Phoneme\",\"phonemeAlphabet\":\"IPA\"}"); ```
-
+ ::: zone pivot="programming-language-java"
var pronunciationAssessmentConfig = SpeechSDK.PronunciationAssessmentConfig.from
::: zone-end ::: zone pivot="programming-language-objectivec"
-
+ ```ObjectiveC pronunciationAssessmentConfig.phonemeAlphabet = @"IPA"; ``` ::: zone-end - ::: zone pivot="programming-language-swift" ```swift
pronunciationAssessmentConfig?.phonemeAlphabet = "IPA"
::: zone-end
+## Assess spoken phonemes
-## Spoken phoneme
-
-With spoken phonemes, you can get confidence scores indicating how likely the spoken phonemes matched the expected phonemes.
+With spoken phonemes, you can get confidence scores that indicate how likely the spoken phonemes matched the expected phonemes.
Pronunciation assessment supports spoken phonemes in `en-US` with IPA and in both `en-US` and `en-GB` with SAPI.
-For example, to obtain the complete spoken sound for the word "Hello", you can concatenate the first spoken phoneme for each expected phoneme with the highest confidence score. In the following assessment result, when you speak the word "hello", the expected IPA phonemes are "h ɛ l oʊ". However, the actual spoken phonemes are "h ə l oʊ". You have five possible candidates for each expected phoneme in this example. The assessment result shows that the most likely spoken phoneme was `"ə"` instead of the expected phoneme `"ɛ"`. The expected phoneme `"ɛ"` only received a confidence score of 47. Other potential matches received confidence scores of 52, 17, and 2.
+For example, to obtain the complete spoken sound for the word `Hello`, you can concatenate the first spoken phoneme for each expected phoneme with the highest confidence score. In the following assessment result, when you speak the word `hello`, the expected IPA phonemes are `h ɛ l oʊ`. However, the actual spoken phonemes are `h ə l oʊ`. You have five possible candidates for each expected phoneme in this example. The assessment result shows that the most likely spoken phoneme was `ə` instead of the expected phoneme `ɛ`. The expected phoneme `ɛ` only received a confidence score of 47. Other potential matches received confidence scores of 52, 17, and 2.
```json {
- "Id": "bbb42ea51bdb46d19a1d685e635fe173",
- "RecognitionStatus": 0,
- "Offset": 7500000,
- "Duration": 13800000,
- "DisplayText": "Hello.",
- "NBest": [
- {
- "Confidence": 0.975003,
- "Lexical": "hello",
- "ITN": "hello",
- "MaskedITN": "hello",
- "Display": "Hello.",
- "PronunciationAssessment": {
- "AccuracyScore": 100,
- "FluencyScore": 100,
- "CompletenessScore": 100,
- "PronScore": 100
- },
- "Words": [
- {
- "Word": "hello",
- "Offset": 7500000,
- "Duration": 13800000,
- "PronunciationAssessment": {
- "AccuracyScore": 99.0,
- "ErrorType": "None"
- },
- "Syllables": [
- {
- "Syllable": "hɛ",
- "PronunciationAssessment": {
- "AccuracyScore": 91.0
- },
- "Offset": 7500000,
+ "Id": "bbb42ea51bdb46d19a1d685e635fe173",
+ "RecognitionStatus": 0,
+ "Offset": 7500000,
+ "Duration": 13800000,
+ "DisplayText": "Hello.",
+ "NBest": [
+ {
+ "Confidence": 0.975003,
+ "Lexical": "hello",
+ "ITN": "hello",
+ "MaskedITN": "hello",
+ "Display": "Hello.",
+ "PronunciationAssessment": {
+ "AccuracyScore": 100,
+ "FluencyScore": 100,
+ "CompletenessScore": 100,
+ "PronScore": 100
+ },
+ "Words": [
+ {
+ "Word": "hello",
+ "Offset": 7500000,
+ "Duration": 13800000,
+ "PronunciationAssessment": {
+ "AccuracyScore": 99.0,
+ "ErrorType": "None"
+ },
+ "Syllables": [
+ {
+ "Syllable": "hɛ",
+ "PronunciationAssessment": {
+ "AccuracyScore": 91.0
+ },
+ "Offset": 7500000,
"Duration": 4100000
- },
- {
- "Syllable": "loʊ",
- "PronunciationAssessment": {
- "AccuracyScore": 100.0
- },
- "Offset": 11700000,
+ },
+ {
+ "Syllable": "loʊ",
+ "PronunciationAssessment": {
+ "AccuracyScore": 100.0
+ },
+ "Offset": 11700000,
"Duration": 9600000
- }
- ],
- "Phonemes": [
- {
- "Phoneme": "h",
- "PronunciationAssessment": {
- "AccuracyScore": 98.0,
- "NBestPhonemes": [
- {
- "Phoneme": "h",
- "Score": 100.0
- },
- {
- "Phoneme": "oʊ",
- "Score": 52.0
- },
- {
- "Phoneme": "ə",
- "Score": 35.0
- },
- {
- "Phoneme": "k",
- "Score": 23.0
- },
- {
- "Phoneme": "æ",
- "Score": 20.0
- }
- ]
- },
- "Offset": 7500000,
+ }
+ ],
+ "Phonemes": [
+ {
+ "Phoneme": "h",
+ "PronunciationAssessment": {
+ "AccuracyScore": 98.0,
+ "NBestPhonemes": [
+ {
+ "Phoneme": "h",
+ "Score": 100.0
+ },
+ {
+ "Phoneme": "oʊ",
+ "Score": 52.0
+ },
+ {
+ "Phoneme": "ə",
+ "Score": 35.0
+ },
+ {
+ "Phoneme": "k",
+ "Score": 23.0
+ },
+ {
+ "Phoneme": "æ",
+ "Score": 20.0
+ }
+ ]
+ },
+ "Offset": 7500000,
"Duration": 3500000
- },
- {
- "Phoneme": "ɛ",
- "PronunciationAssessment": {
- "AccuracyScore": 47.0,
- "NBestPhonemes": [
- {
- "Phoneme": "ə",
- "Score": 100.0
- },
- {
- "Phoneme": "l",
- "Score": 52.0
- },
- {
- "Phoneme": "ɛ",
- "Score": 47.0
- },
- {
- "Phoneme": "h",
- "Score": 17.0
- },
- {
- "Phoneme": "æ",
- "Score": 2.0
- }
- ]
- },
- "Offset": 11100000,
+ },
+ {
+ "Phoneme": "ɛ",
+ "PronunciationAssessment": {
+ "AccuracyScore": 47.0,
+ "NBestPhonemes": [
+ {
+ "Phoneme": "ə",
+ "Score": 100.0
+ },
+ {
+ "Phoneme": "l",
+ "Score": 52.0
+ },
+ {
+ "Phoneme": "ɛ",
+ "Score": 47.0
+ },
+ {
+ "Phoneme": "h",
+ "Score": 17.0
+ },
+ {
+ "Phoneme": "æ",
+ "Score": 2.0
+ }
+ ]
+ },
+ "Offset": 11100000,
"Duration": 500000
- },
- {
- "Phoneme": "l",
- "PronunciationAssessment": {
- "AccuracyScore": 100.0,
- "NBestPhonemes": [
- {
- "Phoneme": "l",
- "Score": 100.0
- },
- {
- "Phoneme": "oʊ",
- "Score": 46.0
- },
- {
- "Phoneme": "ə",
- "Score": 5.0
- },
- {
- "Phoneme": "ɛ",
- "Score": 3.0
- },
- {
- "Phoneme": "u",
- "Score": 1.0
- }
- ]
- },
- "Offset": 11700000,
+ },
+ {
+ "Phoneme": "l",
+ "PronunciationAssessment": {
+ "AccuracyScore": 100.0,
+ "NBestPhonemes": [
+ {
+ "Phoneme": "l",
+ "Score": 100.0
+ },
+ {
+ "Phoneme": "oʊ",
+ "Score": 46.0
+ },
+ {
+ "Phoneme": "ə",
+ "Score": 5.0
+ },
+ {
+ "Phoneme": "ɛ",
+ "Score": 3.0
+ },
+ {
+ "Phoneme": "u",
+ "Score": 1.0
+ }
+ ]
+ },
+ "Offset": 11700000,
"Duration": 1100000
- },
- {
- "Phoneme": "oʊ",
- "PronunciationAssessment": {
- "AccuracyScore": 100.0,
- "NBestPhonemes": [
- {
- "Phoneme": "oʊ",
- "Score": 100.0
- },
- {
- "Phoneme": "d",
- "Score": 29.0
- },
- {
- "Phoneme": "t",
- "Score": 24.0
- },
- {
- "Phoneme": "n",
- "Score": 22.0
- },
- {
- "Phoneme": "l",
- "Score": 18.0
- }
- ]
- },
- "Offset": 12900000,
+ },
+ {
+ "Phoneme": "oʊ",
+ "PronunciationAssessment": {
+ "AccuracyScore": 100.0,
+ "NBestPhonemes": [
+ {
+ "Phoneme": "oʊ",
+ "Score": 100.0
+ },
+ {
+ "Phoneme": "d",
+ "Score": 29.0
+ },
+ {
+ "Phoneme": "t",
+ "Score": 24.0
+ },
+ {
+ "Phoneme": "n",
+ "Score": 22.0
+ },
+ {
+ "Phoneme": "l",
+ "Score": 18.0
+ }
+ ]
+ },
+ "Offset": 12900000,
"Duration": 8400000
- }
- ]
- }
- ]
- }
- ]
+ }
+ ]
+ }
+ ]
+ }
+ ]
} ```
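Building on this result, here's a minimal C# sketch that concatenates the top-scoring `NBestPhonemes` candidate for each expected phoneme to recover the spoken sound. For illustration it reads the JSON from a local file named `assessment.json`, which is an assumption; in a real app you would read the JSON string from the recognition result properties.

```csharp
using System;
using System.IO;
using System.Linq;
using System.Text.Json;

class SpokenPhonemeSketch
{
    static void Main()
    {
        // assessment.json is assumed to hold the JSON result shown above.
        string assessmentJson = File.ReadAllText("assessment.json");

        using var doc = JsonDocument.Parse(assessmentJson);
        JsonElement phonemes = doc.RootElement
            .GetProperty("NBest")[0]
            .GetProperty("Words")[0]
            .GetProperty("Phonemes");

        // Take the top-scoring candidate for each expected phoneme and join them.
        string spoken = string.Concat(
            phonemes.EnumerateArray().Select(p =>
                p.GetProperty("PronunciationAssessment")
                 .GetProperty("NBestPhonemes")[0]
                 .GetProperty("Phoneme")
                 .GetString()));

        Console.WriteLine(spoken); // "həloʊ" for the example above
    }
}
```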
-To indicate whether, and how many potential spoken phonemes to get confidence scores for, set the `NBestPhonemeCount` parameter to an integer value such as `5`.
-
+To specify how many potential spoken phonemes to get confidence scores for, set the `NBestPhonemeCount` parameter to an integer value such as `5`.
+ ::: zone pivot="programming-language-csharp" ```csharp pronunciationAssessmentConfig.NBestPhonemeCount = 5; ```
-
+ ::: zone pivot="programming-language-cpp" ```cpp auto pronunciationAssessmentConfig = PronunciationAssessmentConfig::CreateFromJson("{\"referenceText\":\"good morning\",\"gradingSystem\":\"HundredMark\",\"granularity\":\"Phoneme\",\"phonemeAlphabet\":\"IPA\",\"nBestPhonemeCount\":5}"); ```
-
+ ::: zone-end ::: zone pivot="programming-language-java"
auto pronunciationAssessmentConfig = PronunciationAssessmentConfig::CreateFromJs
```Java PronunciationAssessmentConfig pronunciationAssessmentConfig = PronunciationAssessmentConfig.fromJson("{\"referenceText\":\"good morning\",\"gradingSystem\":\"HundredMark\",\"granularity\":\"Phoneme\",\"phonemeAlphabet\":\"IPA\",\"nBestPhonemeCount\":5}"); ```
-
+ ::: zone pivot="programming-language-python"
var pronunciationAssessmentConfig = SpeechSDK.PronunciationAssessmentConfig.from
::: zone-end ::: zone pivot="programming-language-objectivec"
-
-
+ ```ObjectiveC pronunciationAssessmentConfig.nbestPhonemeCount = 5; ``` ::: zone-end - ::: zone pivot="programming-language-swift" ```swift
pronunciationAssessmentConfig?.nbestPhonemeCount = 5
::: zone-end
-## Next steps
+## Related content
-- Learn our quality [benchmark](https://aka.ms/pronunciationassessment/techblog)
-- Try out [pronunciation assessment in Speech Studio](pronunciation-assessment-tool.md)
-- Check out easy-to-deploy Pronunciation Assessment [demo](https://github.com/Azure-Samples/Cognitive-Speech-TTS/tree/master/PronunciationAssessment/BrowserJS) and watch the [video demo](https://www.youtube.com/watch?v=NQi4mBiNNTE) of pronunciation assessment.
+- Learn about the quality [benchmark](https://aka.ms/pronunciationassessment/techblog).
+- Try [pronunciation assessment in Speech Studio](pronunciation-assessment-tool.md).
+- Check out an easy-to-deploy Pronunciation Assessment [demo](https://github.com/Azure-Samples/Cognitive-Speech-TTS/tree/master/PronunciationAssessment/BrowserJS).
+- Watch the [video demo](https://www.youtube.com/watch?v=NQi4mBiNNTE) of pronunciation assessment.
ai-services Language Identification https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/language-identification.md
Title: Language identification - Speech service
+ Title: Implement language identification - Speech service
-description: Language identification is used to determine the language being spoken in audio when compared against a list of provided languages.
+description: Learn how language identification can determine the language being spoken in audio when compared against a list of provided languages.
Previously updated : 1/21/2024 Last updated : 02/08/2024 zone_pivot_groups: programming-languages-speech-services-nomore-variant
+#customer intent: As an application developer, I want to use language recognition or translations in order to make my apps work seamlessly for more customers.
-# Language identification
+# Implement language identification
-Language identification is used to identify languages spoken in audio when compared against a list of [supported languages](language-support.md?tabs=language-identification).
+Language identification is used to identify languages spoken in audio when compared against a list of supported languages.
Language identification (LID) use cases include:
-* [Speech to text recognition](#speech-to-text) when you need to identify the language in an audio source and then transcribe it to text.
-* [Speech translation](#speech-translation) when you need to identify the language in an audio source and then translate it to another language.
+- Speech to text recognition when you need to identify the language in an audio source and then transcribe it to text.
+- Speech translation when you need to identify the language in an audio source and then translate it to another language.
For speech recognition, the initial latency is higher with language identification. You should only include this optional feature as needed.
-## Configuration options
+## Set configuration options
-> [!IMPORTANT]
-> Language Identification APIs are simplified with the Speech SDK version 1.25 and later. The
-`SpeechServiceConnection_SingleLanguageIdPriority` and `SpeechServiceConnection_ContinuousLanguageIdPriority` properties have been removed and replaced by a single property `SpeechServiceConnection_LanguageIdMode`. Prioritizing between low latency and high accuracy is no longer necessary following recent model improvements. Now, you only need to select whether to run at-start or continuous Language Identification when doing continuous speech recognition or translation.
-
-Whether you use language identification with [speech to text](#speech-to-text) or with [speech translation](#speech-translation), there are some common concepts and configuration options.
+Whether you use language identification with [speech to text](#use-speech-to-text) or with [speech translation](#run-speech-translation), there are some common concepts and configuration options.
- Define a list of [candidate languages](#candidate-languages) that you expect in the audio. - Decide whether to use [at-start or continuous](#at-start-and-continuous-language-identification) language identification. Then you make a [recognize once or continuous recognition](#recognize-once-or-continuous) request to the Speech service.
-Code snippets are included with the concepts described next. Complete samples for each use case are provided later.
+> [!IMPORTANT]
+> Language Identification APIs are simplified with the Speech SDK version 1.25 and later. The `SpeechServiceConnection_SingleLanguageIdPriority` and `SpeechServiceConnection_ContinuousLanguageIdPriority` properties have been removed. A single property `SpeechServiceConnection_LanguageIdMode` replaces them. You no longer need to prioritize between low latency and high accuracy. For continuous speech recognition or translation, you only need to select whether to run at-start or continuous Language Identification.
+
+This article provides code snippets to describe the concepts. Links to complete samples for each use case are provided.
### Candidate languages
-You provide candidate languages with the `AutoDetectSourceLanguageConfig` object, at least one of which is expected to be in the audio. You can include up to four languages for [at-start LID](#at-start-and-continuous-language-identification) or up to 10 languages for [continuous LID](#at-start-and-continuous-language-identification). The Speech service returns one of the candidate languages provided even if those languages weren't in the audio. For example, if `fr-FR` (French) and `en-US` (English) are provided as candidates, but German is spoken, either `fr-FR` or `en-US` would be returned.
+You provide candidate languages with the `AutoDetectSourceLanguageConfig` object. You expect that at least one of the candidates is in the audio. You can include up to four languages for [at-start LID](#at-start-and-continuous-language-identification) or up to 10 languages for [continuous LID](#at-start-and-continuous-language-identification). The Speech service returns one of the candidate languages provided even if those languages weren't in the audio. For example, if `fr-FR` (French) and `en-US` (English) are provided as candidates, but German is spoken, the service returns either `fr-FR` or `en-US`.
-You must provide the full locale with dash (`-`) separator, but language identification only uses one locale per base language. Don't include multiple locales (for example, "en-US" and "en-GB") for the same language.
+You must provide the full locale with dash (`-`) separator, but language identification only uses one locale per base language. Don't include multiple locales for the same language, for example, `en-US` and `en-GB`.
::: zone pivot="programming-language-csharp"
var autoDetectSourceLanguageConfig =
``` ::: zone-end+ ::: zone pivot="programming-language-cpp" ```cpp
auto autoDetectSourceLanguageConfig =
``` ::: zone-end+ ::: zone pivot="programming-language-python" ```python
auto_detect_source_language_config = \
``` ::: zone-end+ ::: zone pivot="programming-language-java" ```java
AutoDetectSourceLanguageConfig autoDetectSourceLanguageConfig =
``` ::: zone-end+ ::: zone pivot="programming-language-javascript" ```javascript
var autoDetectSourceLanguageConfig = SpeechSDK.AutoDetectSourceLanguageConfig.fr
``` ::: zone-end+ ::: zone pivot="programming-language-objectivec" ```objective-c
For more information, see [supported languages](language-support.md?tabs=languag
Speech supports both at-start and continuous language identification (LID). > [!NOTE]
-> Continuous language identification is only supported with Speech SDKs in C#, C++, Java ([for speech to text only](#speech-to-text)), JavaScript ([for speech to text only](#speech-to-text)), and Python.
-- At-start LID identifies the language once within the first few seconds of audio. Use at-start LID if the language in the audio doesn't change. With at-start LID, a single language is detected and returned in less than 5 seconds.
-- Continuous LID can identify multiple languages for the duration of the audio. Use continuous LID if the language in the audio could change. Continuous LID doesn't support changing languages within the same sentence. For example, if you're primarily speaking Spanish and insert some English words, it will not detect the language change per word.
+> Continuous language identification is only supported with Speech SDKs in C#, C++, Java ([for speech to text only](#use-speech-to-text)), JavaScript ([for speech to text only](#use-speech-to-text)), and Python.
+>
+>- At-start LID identifies the language once within the first few seconds of audio. Use at-start LID if the language in the audio doesn't change. With at-start LID, a single language is detected and returned in less than 5 seconds.
+>- Continuous LID can identify multiple languages during the audio. Use continuous LID if the language in the audio could change. Continuous LID doesn't support changing languages within the same sentence. For example, if you're primarily speaking Spanish and insert some English words, it doesn't detect the language change per word.
You implement at-start LID or continuous LID by calling methods for [recognize once or continuous](#recognize-once-or-continuous). Continuous LID is only supported with continuous recognition. ### Recognize once or continuous
-Language identification is completed with recognition objects and operations. You make a request to the Speech service for recognition of audio.
+Language identification is completed with recognition objects and operations. Make a request to the Speech service for recognition of audio.
> [!NOTE] > Don't confuse recognition with identification. Recognition can be used with or without language identification.
-You either call the "recognize once" method, or the start and stop continuous recognition methods. You choose from:
+Either call the "recognize once" method, or the start and stop continuous recognition methods. You choose from:
- Recognize once with At-start LID. Continuous LID isn't supported for recognize once.-- Continuous recognition with at-start LID-- Continuous recognition with continuous LID
+- Use continuous recognition with at-start LID.
+- Use continuous recognition with continuous LID.
-The `SpeechServiceConnection_LanguageIdMode` property is only required for continuous LID. Without it, the Speech service defaults to at-start lid. The supported values are "AtStart" for at-start LID or "Continuous" for continuous LID.
+The `SpeechServiceConnection_LanguageIdMode` property is only required for continuous LID. Without it, the Speech service defaults to at-start LID. The supported values are `AtStart` for at-start LID or `Continuous` for continuous LID.
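As a minimal C# sketch of setting just that property (the key and region strings are placeholders, and continuous LID also needs the v2 endpoint described later in this article):

```csharp
using Microsoft.CognitiveServices.Speech;

// Placeholder key and region values.
var speechConfig = SpeechConfig.FromSubscription("YourSpeechKey", "YourServiceRegion");

// Omit this property for at-start LID (the default); set it only for continuous LID.
speechConfig.SetProperty(PropertyId.SpeechServiceConnection_LanguageIdMode, "Continuous");
```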
::: zone pivot="programming-language-csharp"
await recognizer.StopContinuousRecognitionAsync();
``` ::: zone-end+ ::: zone pivot="programming-language-cpp" ```cpp
recognizer->StopContinuousRecognitionAsync().get();
``` ::: zone-end+ ::: zone pivot="programming-language-java" ```java
recognizer.stopContinuousRecognitionAsync().get();
``` ::: zone-end+ ::: zone pivot="programming-language-python" ```python
recognizer.stop_continuous_recognition()
::: zone-end
-## Speech to text
+## Use speech to text
You use Speech to text recognition when you need to identify the language in an audio source and then transcribe it to text. For more information, see [Speech to text overview](speech-to-text.md). > [!NOTE] > Speech to text recognition with at-start language identification is supported with Speech SDKs in C#, C++, Python, Java, JavaScript, and Objective-C. Speech to text recognition with continuous language identification is only supported with Speech SDKs in C#, C++, Java, JavaScript, and Python.
->
+>
> Currently for speech to text recognition with continuous language identification, you must create a SpeechConfig from the `wss://{region}.stt.speech.microsoft.com/speech/universal/v2` endpoint string, as shown in code examples. In a future SDK release you won't need to set it. ::: zone pivot="programming-language-csharp"
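For orientation before the full sample, here's a trimmed C# sketch of continuous recognition with continuous LID; the key, region, and candidate locales are placeholders, and the complete samples linked from this article remain the authoritative versions:

```csharp
using System;
using Microsoft.CognitiveServices.Speech;
using Microsoft.CognitiveServices.Speech.Audio;

var region = "YourServiceRegion"; // placeholder
var endpointUrl = new Uri($"wss://{region}.stt.speech.microsoft.com/speech/universal/v2");
var speechConfig = SpeechConfig.FromEndpoint(endpointUrl, "YourSpeechKey"); // placeholder key
speechConfig.SetProperty(PropertyId.SpeechServiceConnection_LanguageIdMode, "Continuous");

var autoDetectSourceLanguageConfig =
    AutoDetectSourceLanguageConfig.FromLanguages(new string[] { "en-US", "de-DE" });

using var audioConfig = AudioConfig.FromDefaultMicrophoneInput();
using var recognizer = new SpeechRecognizer(speechConfig, autoDetectSourceLanguageConfig, audioConfig);

recognizer.Recognized += (s, e) =>
{
    if (e.Result.Reason == ResultReason.RecognizedSpeech)
    {
        // Read the language that was detected for this result.
        var detected = AutoDetectSourceLanguageResult.FromResult(e.Result);
        Console.WriteLine($"[{detected.Language}] {e.Result.Text}");
    }
};

await recognizer.StartContinuousRecognitionAsync();
Console.WriteLine("Speak into your microphone, then press Enter to stop.");
Console.ReadLine();
await recognizer.StopContinuousRecognitionAsync();
```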
speechRecognizer.recognizeOnceAsync((result: SpeechSDK.SpeechRecognitionResult)
> Language detection with custom models can only be used with real-time speech to text and speech translation. Batch transcription only supports language detection for default base models. ::: zone pivot="programming-language-csharp"
-This sample shows how to use language detection with a custom endpoint. If the detected language is `en-US`, then the default model is used. If the detected language is `fr-FR`, then the custom model endpoint is used. For more information, see [Deploy a custom speech model](how-to-custom-speech-deploy-model.md).
+This sample shows how to use language detection with a custom endpoint. If the detected language is `en-US`, the example uses the default model. If the detected language is `fr-FR`, the example uses the custom model endpoint. For more information, see [Deploy a custom speech model](how-to-custom-speech-deploy-model.md).
```csharp var sourceLanguageConfigs = new SourceLanguageConfig[]
var autoDetectSourceLanguageConfig =
::: zone-end ::: zone pivot="programming-language-cpp"
-This sample shows how to use language detection with a custom endpoint. If the detected language is `en-US`, then the default model is used. If the detected language is `fr-FR`, then the custom model endpoint is used. For more information, see [Deploy a custom speech model](how-to-custom-speech-deploy-model.md).
+This sample shows how to use language detection with a custom endpoint. If the detected language is `en-US`, the example uses the default model. If the detected language is `fr-FR`, the example uses the custom model endpoint. For more information, see [Deploy a custom speech model](how-to-custom-speech-deploy-model.md).
```cpp std::vector<std::shared_ptr<SourceLanguageConfig>> sourceLanguageConfigs;
auto autoDetectSourceLanguageConfig =
::: zone-end ::: zone pivot="programming-language-java"
-This sample shows how to use language detection with a custom endpoint. If the detected language is `en-US`, then the default model is used. If the detected language is `fr-FR`, then the custom model endpoint is used. For more information, see [Deploy a custom speech model](how-to-custom-speech-deploy-model.md).
+This sample shows how to use language detection with a custom endpoint. If the detected language is `en-US`, the example uses the default model. If the detected language is `fr-FR`, the example uses the custom model endpoint. For more information, see [Deploy a custom speech model](how-to-custom-speech-deploy-model.md).
```java List sourceLanguageConfigs = new ArrayList<SourceLanguageConfig>();
AutoDetectSourceLanguageConfig autoDetectSourceLanguageConfig =
::: zone-end ::: zone pivot="programming-language-python"
-This sample shows how to use language detection with a custom endpoint. If the detected language is `en-US`, then the default model is used. If the detected language is `fr-FR`, then the custom model endpoint is used. For more information, see [Deploy a custom speech model](how-to-custom-speech-deploy-model.md).
+This sample shows how to use language detection with a custom endpoint. If the detected language is `en-US`, the example uses the default model. If the detected language is `fr-FR`, the example uses the custom model endpoint. For more information, see [Deploy a custom speech model](how-to-custom-speech-deploy-model.md).
```Python en_language_config = speechsdk.languageconfig.SourceLanguageConfig("en-US")
This sample shows how to use language detection with a custom endpoint. If the d
::: zone-end ::: zone pivot="programming-language-objectivec"
-This sample shows how to use language detection with a custom endpoint. If the detected language is `en-US`, then the default model is used. If the detected language is `fr-FR`, then the custom model endpoint is used. For more information, see [Deploy a custom speech model](how-to-custom-speech-deploy-model.md).
+This sample shows how to use language detection with a custom endpoint. If the detected language is `en-US`, the example uses the default model. If the detected language is `fr-FR`, the example uses the custom model endpoint. For more information, see [Deploy a custom speech model](how-to-custom-speech-deploy-model.md).
```Objective-C SPXSourceLanguageConfiguration* enLanguageConfig = [[SPXSourceLanguageConfiguration alloc]init:@"en-US"];
var autoDetectSourceLanguageConfig = SpeechSDK.AutoDetectSourceLanguageConfig.fr
::: zone-end
-## Speech translation
+## Run speech translation
-You use Speech translation when you need to identify the language in an audio source and then translate it to another language. For more information, see [Speech translation overview](speech-translation.md).
+Use Speech translation when you need to identify the language in an audio source and then translate it to another language. For more information, see [Speech translation overview](speech-translation.md).
> [!NOTE]
-> Speech translation with language identification is only supported with Speech SDKs in C#, C++, JavaScript, and Python.
+> Speech translation with language identification is only supported with Speech SDKs in C#, C++, JavaScript, and Python.
> Currently for speech translation with language identification, you must create a SpeechConfig from the `wss://{region}.stt.speech.microsoft.com/speech/universal/v2` endpoint string, as shown in code examples. In a future SDK release you won't need to set it. ::: zone pivot="programming-language-csharp"
public static async Task MultiLingualTranslation()
} } ```+ ::: zone-end
void MultiLingualTranslation()
recognizer->StopContinuousRecognitionAsync().get(); } ```+ ::: zone-end
When you run language ID in a container, use the `SourceLanguageRecognizer` obje
For more information about containers, see the [language identification speech containers](speech-container-lid.md#use-the-container) how-to guide.
+## Implement speech to text batch transcription
-## Speech to text batch transcription
-
-To identify languages with [Batch transcription REST API](batch-transcription.md), you need to use `languageIdentification` property in the body of your [Transcriptions_Create](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Transcriptions_Create) request.
+To identify languages with the [Batch transcription REST API](batch-transcription.md), use the `languageIdentification` property in the body of your [Transcriptions_Create](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Transcriptions_Create) request.
> [!WARNING]
-> Batch transcription only supports language identification for default base models. If both language identification and a custom model are specified in the transcription request, the service will fall back to use the base models for the specified candidate languages. This may result in unexpected recognition results.
+> Batch transcription only supports language identification for default base models. If both language identification and a custom model are specified in the transcription request, the service falls back to use the base models for the specified candidate languages. This might result in unexpected recognition results.
> > If your speech to text scenario requires both language identification and custom models, use [real-time speech to text](#speech-to-text-custom-models) instead of batch transcription.
The following example shows the usage of the `languageIdentification` property w
} ```
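For illustration only, here's a hedged C# sketch that submits such a request over HTTP; the region, key, audio URL, and candidate locales are placeholders, and the JSON body mirrors the shape of the example above:

```csharp
using System;
using System.Net.Http;
using System.Text;

var region = "eastus";        // placeholder
var key = "YourSpeechKey";    // placeholder

using var http = new HttpClient();
http.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", key);

// Transcriptions_Create request body with languageIdentification candidates.
var body = """
{
  "contentUrls": [ "https://example.com/audio.wav" ],
  "displayName": "Batch transcription with language identification",
  "locale": "en-US",
  "properties": {
    "languageIdentification": {
      "candidateLocales": [ "en-US", "de-DE", "es-ES" ]
    }
  }
}
""";

var response = await http.PostAsync(
    $"https://{region}.api.cognitive.microsoft.com/speechtotext/v3.1/transcriptions",
    new StringContent(body, Encoding.UTF8, "application/json"));
Console.WriteLine(await response.Content.ReadAsStringAsync());
```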
-## Next steps
+## Related content
-* [Try the speech to text quickstart](get-started-speech-to-text.md)
-* [Improve recognition accuracy with custom speech](custom-speech-overview.md)
-* [Use batch transcription](batch-transcription.md)
+- [Try the speech to text quickstart](get-started-speech-to-text.md)
+- [Improve recognition accuracy with custom speech](custom-speech-overview.md)
+- [Use batch transcription](batch-transcription.md)
ai-services Openai Speech https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/openai-speech.md
Title: "Azure OpenAI speech to speech chat - Speech service"
-description: In this how-to guide, you can use Speech to converse with Azure OpenAI. The text recognized by the Speech service is sent to Azure OpenAI. The text response from Azure OpenAI is then synthesized by the Speech service.
+description: In this how-to guide, use Speech to converse with Azure OpenAI. Speech recognizes audio, sends it to Azure OpenAI, and synthesizes speech responses.
Previously updated : 1/21/2024 Last updated : 02/08/2024 zone_pivot_groups: programming-languages-csharp-python keywords: speech to text, openai
+#customer intent: As a developer, I want to create a voice-based chat system to talk to the OpenAI application I host through Azure to simplify AI interactions.
-# Azure OpenAI speech to speech chat
+# Azure OpenAI speech to speech chat
::: zone pivot="programming-language-csharp" [!INCLUDE [C# include](./includes/quickstarts/openai-speech/csharp.md)]
keywords: speech to text, openai
[!INCLUDE [Python include](./includes/quickstarts/openai-speech/python.md)] ::: zone-end
-## Next steps
+## Related content
- [Learn more about Speech](overview.md) - [Learn more about Azure OpenAI](../openai/overview.md)
ai-services Personal Voice How To Use https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/personal-voice-how-to-use.md
You need to use [speech synthesis markup language (SSML)](./speech-synthesis-mar
> [!NOTE] > The voice names labeled with the `Latest`, such as `DragonLatestNeural` or `PhoenixLatestNeural`, will be updated from time to time; its performance may vary with updates for ongoing improvements. If you would like to use a fixed version, select one labeled with a version number, such as `PhoenixV2Neural`. -- `DragonLatestNeural` is a base model with superior voice cloning similarity compared to `PhoenixLatestNeural`. `PhoenixLatestNeural` is a base model with more accurate pronunciation and lower latency than `DragonLatestNeural`. ΓÇâ
+- `DragonLatestNeural` is a base model with superior voice cloning similarity compared to `PhoenixLatestNeural`. `PhoenixLatestNeural` is a base model with more accurate pronunciation and lower latency than `DragonLatestNeural`.
+
+- The `Dragon` model doesn't support the `<lang xml:lang>` element in SSML.
Here's example SSML in a request for text to speech with the voice name and the speaker profile ID.
aks Istio Plugin Ca https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/istio-plugin-ca.md
az provider register --namespace Microsoft.ContainerService
``` > [!NOTE]
- > For existing clusters with Istio addon using self-signed root certificate generated by Istio CA, switching to plugin CA is not supported. You need to [disable the mesh][disable-mesh] on these clusters first and then enable it again using the above command to pass through the plugin CA inputs.
+ > For existing clusters with Istio addon using self-signed root certificate generated by Istio CA, switching to plugin CA is not supported. You need to [disable the mesh][az-aks-mesh-disable] on these clusters first and then enable it again using the above command to pass through the plugin CA inputs.
1. Verify that the `cacerts` gets created on the cluster:
aks Istio Upgrade https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/istio-upgrade.md
Last updated 05/04/2023
# Upgrade Istio-based service mesh add-on for Azure Kubernetes Service (preview)
-This article addresses upgrade experiences for Istio-based service mesh add-on for Azure Kubernetes Service (preview)
+This article addresses upgrade experiences for the Istio-based service mesh add-on for Azure Kubernetes Service (AKS).
-## How Istio components are upgraded
-
-### Minor version upgrade
+## Minor version upgrade
The Istio add-on allows upgrading the minor version using the [canary upgrade process][istio-canary-upstream]. When an upgrade is initiated, the control plane of the new (canary) revision is deployed alongside the old (stable) revision's control plane. You can then manually roll over data plane workloads while using monitoring tools to track the health of workloads during this process. If you don't observe any issues with the health of your workloads, you can complete the upgrade so that only the new revision remains on the cluster. Otherwise, you can roll back to the previous revision of Istio.
The following example illustrates how to upgrade from revision `asm-1-17` to `as
1. Check your monitoring tools and dashboards to determine whether your workloads are all running in a healthy state after the restart. Based on the outcome, you have two options:
- * **Complete the canary upgrade**: If you're satisfied that the workloads are all running in a healthy state as expected, you can complete the canary upgrade. This will remove the previous revision's control plane and leave behind the new revision's control plane on the cluster. Run the following command to complete the canary upgrade:
+ * **Complete the canary upgrade**: If you're satisfied that the workloads are all running in a healthy state as expected, you can complete the canary upgrade. Completion of the upgrade removes the previous revision's control plane and leaves behind the new revision's control plane on the cluster. Run the following command to complete the canary upgrade:
```bash az aks mesh upgrade complete --resource-group $RESOURCE_GROUP --name $CLUSTER
The following example illustrates how to upgrade from revision `asm-1-17` to `as
> [!NOTE] > Manually relabeling namespaces when moving them to a new revision can be tedious and error-prone. [Revision tags](https://istio.io/latest/docs/setup/upgrade/canary/#stable-revision-labels) solve this problem. Revision tags are stable identifiers that point to revisions and can be used to avoid relabeling namespaces. Rather than relabeling the namespace, a mesh operator can simply change the tag to point to a new revision. All namespaces labeled with that tag will be updated at the same time. However, note that you still need to restart the workloads to make sure the correct version of `istio-proxy` sidecars are injected.
-### Patch version upgrade
+## Patch version upgrade
* Istio add-on patch version availability information is published in [AKS weekly release notes][aks-release-notes]. * Patches are rolled out automatically for istiod and ingress pods as part of these AKS weekly releases, which respect the `default` [planned maintenance window](./planned-maintenance.md) set up for the cluster.
The following example illustrates how to upgrade from revision `asm-1-17` to `as
productpage-v1-979d4d9fc-p4764: docker.io/istio/examples-bookinfo-productpage-v1:1.17.0, mcr.microsoft.com/oss/istio/proxyv2:1.17.1-distroless ```
- * Restart the workloads to trigger reinjection. For example:
+ * To trigger reinjection, restart the workloads. For example:
```bash kubectl rollout restart deployments/productpage-v1 -n default
aks Quick Kubernetes Deploy Azd https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/learn/quick-kubernetes-deploy-azd.md
+
+ Title: 'Quickstart: Deploy an Azure Kubernetes Service (AKS) cluster using Azure Developer CLI (AZD)'
+description: Learn how to quickly deploy a Kubernetes cluster and deploy an application in Azure Kubernetes Service (AKS) using the AZD CLI.
+ Last updated : 02/06/2024+
+#Customer intent: As a developer or cluster operator, I want to deploy an AKS cluster and deploy an application so I can see how to run applications using the managed Kubernetes service in Azure.
++
+# Quickstart: Deploy an Azure Kubernetes Service (AKS) cluster using Azure Developer CLI (AZD)
+
+Azure Kubernetes Service (AKS) is a managed Kubernetes service that lets you quickly deploy and manage clusters. In this quickstart, you learn to:
+
+- Deploy an AKS cluster using the Azure Developer CLI (azd).
+- Run a sample multi-container application with a group of microservices simulating a retail app.
+
+> [!NOTE]
+> To get started with quickly provisioning an AKS cluster, this article includes steps to deploy a cluster with default settings for evaluation purposes only. Before deploying a production-ready cluster, we recommend that you familiarize yourself with our [baseline reference architecture][baseline-reference-architecture] to consider how it aligns with your business requirements.
+
+## Before you begin
+
+You can complete this quickstart by using either the Azure CLI or the Azure Developer CLI (AZD). The Azure Developer CLI path is more automated: it uses scripts to run the Azure CLI commands and provision resources for you.
+
+> [!NOTE]
+> For Windows users, follow the guide for the Azure CLI instead. The AZD Template repository doesn't support PowerShell commands yet.
+
+## Azure Developer CLI
+
+- An Azure account with an active subscription. Create an account for free
+- The Azure Developer CLI
+- The latest .NET 7.0 SDK
+- A Linux OS
+
+This quickstart assumes a basic understanding of Kubernetes concepts. For more information, see [Kubernetes core concepts for Azure Kubernetes Service (AKS)][kubernetes-concepts].
+
+- [!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)]
++
+- This article requires version 2.0.64 or later of the Azure CLI. If you're using Azure Cloud Shell, the latest version is already installed there.
+- Make sure that the identity you're using to create your cluster has the appropriate minimum permissions. For more information on access and identity for AKS, see [Access and identity options for Azure Kubernetes Service (AKS)](../concepts-identity.md).
+
+## Sample code
+
+All code used in the quickstart is available at [Azure-Samples/aks-store-demo](https://github.com/Azure-Samples/aks-store-demo).
+
+The quickstart application includes the following Kubernetes deployments and
++
+- **Store front**: Web application for customers to view products and place orders.
+- **Product service**: Shows product information.
+- **Order service**: Places orders.
+- **Rabbit MQ**: Message queue for an order queue.
+
+> [!NOTE]
+> We don't recommend running stateful containers, such as Rabbit MQ, without persistent storage for production use. They're used here for simplicity, but we recommend using managed services instead, such as Azure Cosmos DB or Azure Service Bus.
+
+### Template command
+
+You can quickly clone the application by passing the repository name to `azd init` with the `--template` argument.
+
+For instance, for our code sample, run: `azd init --template aks-store-demo`.
+
+### Git
+
+Alternatively, you can clone the application directly through GitHub, then run `azd init` from inside the directory to create configurations for the AZD CLI.
+
+When prompted for an environment name, you can choose anything; this quickstart uses `aksqs`.
+
+## Sign in to your Azure Cloud account
+
+The Azure Developer CLI template contains all the code needed to create the services, but you need to sign in to host them on Azure Kubernetes Service.
+
+Run `azd auth login` and follow these steps:
+
+1. Copy the device code that appears.
+2. Press Enter to open the authentication portal in a new tab.
+3. Enter your Microsoft credentials on the new page.
+4. Confirm that it's you trying to connect to Azure CLI. If you encounter any issues, skip to the Troubleshooting section.
+5. Verify the message "Device code authentication completed. Logged in to Azure." appears in your original terminal.
+
+### Troubleshooting: Can't connect to localhost
+
+Certain Azure security policies can cause conflicts when you try to sign in. As a workaround, you can perform a curl request to the localhost URL that you were redirected to after you signed in.
+
+The workaround requires the Azure CLI for authentication. If you don't have it or aren't using GitHub Codespaces, install the [Azure CLI][install-azure-cli].
+
+1. Inside a terminal, run `az login --scope https://graph.microsoft.com/.default`.
+2. Copy the "localhost" URL from the failed redirect.
+3. In a new terminal window, type `curl` and paste your URL.
+4. If it works, the code for a webpage saying "You have logged into Microsoft Azure!" appears.
+5. Close the terminal and go back to the previous terminal.
+6. Copy and note down which subscription ID you want to use.
+7. Paste the subscription ID into the command `az account set -n {sub}`.
+
+- If you have multiple Azure subscriptions, select the appropriate subscription for billing using the [az account set](/cli/azure/account#az-account-set) command.
+
+## Create resources for your cluster
+
+This step can take several minutes to complete, depending on your internet speed.
+
+1. Create all your resources with the `azd up` command.
+2. Select the Azure subscription and region for your AKS cluster.
+3. Wait as azd automatically runs the commands for pre-provision and post-provision steps.
+4. At the end, your output shows the newly created deployments and services.
+
+ ```output
+ deployment.apps/rabbitmq created
+ service/rabbitmq created
+ deployment.apps/order-service created
+ service/order-service created
+ deployment.apps/product-service created
+ service/product-service created
+ deployment.apps/store-front created
+ service/store-front created
+ ```
+
+## Test the application
+
+When your application is created, a Kubernetes service exposes the application's front-end service to the internet. This process can take a few minutes to complete. Once completed, follow these steps to verify and test the application by opening the store-front page.
+
+1. View the status of the deployed pods with the [kubectl get pods][kubectl-get] command.
+
+ Check that all pods are in the `Running` state before proceeding:
+
+ ```console
+ kubectl get pods
+ ```
+
+1. Check for a public IP address for the front-end `store-front` application.
+
+ Monitor progress using the [kubectl get service][kubectl-get] command with the `--watch` argument:
+
+ ```azurecli
+ kubectl get service store-front --watch
+ ```
+
+ The **EXTERNAL-IP** output for the `store-front` service initially shows as *pending*:
+
+ ```output
+ NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
+ store-front LoadBalancer 10.0.100.10 <pending> 80:30025/TCP 4h4m
+ ```
+
+1. When the **EXTERNAL-IP** address changes from *pending* to a public IP address, use `CTRL-C` to stop the `kubectl` watch process.
+
+ The following sample output shows a valid public IP address assigned to the service:
+
+ ```output
+ NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
+ store-front LoadBalancer 10.0.100.10 20.62.159.19 80:30025/TCP 4h5m
+ ```
+
+1. Open a web browser using the external IP address of your service to view the Azure Store app in action.
+
+ :::image type="content" source="media/quick-kubernetes-deploy-cli/aks-store-application.png" alt-text="Screenshot of AKS Store sample application." lightbox="media/quick-kubernetes-deploy-cli/aks-store-application.png":::
+
+### Visit the store-front
+
+Once you're on the store page, you can add new items to your cart and check them out. To verify the orders, view the transaction records for your store app in the Azure portal.
+
+<!-- Image of Storefront Checkout -->
+
+## Delete the cluster
+
+Once you're finished with the quickstart, remember to clean up all your resources to avoid Azure charges.
+
+Run `azd down` to delete all the resources used in the quickstart, including your resource group, cluster, and related Azure services.
+
+> [!NOTE]
+> This sample application is for demo purposes and doesn't represent all the best practices for Kubernetes applications.
+> For guidance on creating full solutions with AKS for production, see [AKS solution guidance][aks-solution-guidance].
+
+## Next steps
+
+In this quickstart, you deployed a Kubernetes cluster and then deployed a simple multi-container application to it. You hosted a store app, but there's more to learn in the [AKS tutorial][aks-tutorial].
+
+> [!div class="nextstepaction"]
+> [AKS tutorial][aks-tutorial]
+
+<!-- LINKS - external -->
+[kubectl]: https://kubernetes.io/docs/reference/kubectl/
+[kubectl-apply]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#apply
+[kubectl-get]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#get
+
+<!-- LINKS - internal -->
+[kubernetes-concepts]: ../concepts-clusters-workloads.md
+[aks-tutorial]: ../tutorial-kubernetes-prepare-app.md
+[azure-resource-group]: ../../azure-resource-manager/management/overview.md
+[az-aks-create]: /cli/azure/aks#az-aks-create
+[az-aks-get-credentials]: /cli/azure/aks#az-aks-get-credentials
+[install-azure-cli]: /cli/azure/install-azure-cli
+[az-aks-install-cli]: /cli/azure/aks#az-aks-install-cli
+[az-group-create]: /cli/azure/group#az-group-create
+[az-group-delete]: /cli/azure/group#az-group-delete
+[kubernetes-deployment]: ../concepts-clusters-workloads.md#deployments-and-yaml-manifests
+[aks-solution-guidance]: /azure/architecture/reference-architectures/containers/aks-start-here?toc=/azure/aks/toc.json&bc=/azure/aks/breadcrumb/toc.json
+[baseline-reference-architecture]: /azure/architecture/reference-architectures/containers/aks/baseline-aks?toc=/azure/aks/toc.json&bc=/azure/aks/breadcrumb/toc.json
application-gateway Application Gateway Externally Managed Scheduled Autoscaling https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/application-gateway-externally-managed-scheduled-autoscaling.md
For those experiencing predictable daily traffic patterns and who have a reliabl
While autoscaling is commonly used, it's important to note that Application Gateway doesn't currently support prescheduled capacity adjustments natively.
-The goal is to use Azure Automation to create a schedule for running runbooks that adjust the minimum autoscaling capacity of Application Gateway to meet traffic demands.
+The goal is to use Azure Automation to create a schedule for running runbooks that adjust the minimum autoscaling capacity of Application Gateway to meet traffic demands during peak and non-peak hours.
## Set up scheduled autoscaling
To implement scheduled autoscaling:
3. Create PowerShell runbooks for increasing and decreasing min autoscaling capacity for the Application Gateway resource. 4. Create the schedules during which the runbooks need to be implemented. 5. Associate the runbooks with their respective schedules.
-6. Associate the system assigned managed identity noted in step 2 with the Application Gateway resource.
+6. Associate the system-assigned managed identity noted in step 2 with the Application Gateway and Application Gateway virtual network resources.
## Configure automation
Next, create the following two schedules:
| | | |IncreaseMin | Falls back on native autoscaling. Next run of DecreaseMin should be no-op as the count doesn't need to be adjusted. | |DecreaseMin | Additional cost to the customer for the (unintended) capacity that is provisioned for those hours. Next run of IncreaseMin should be no-op because the count doesn't need to be adjusted. | +
+- Can the autoscale configurations be changed multiple times per day?
+ Frequent adjustments to autoscale configurations aren't advised. For optimal balance, consider scheduling updates twice daily to coincide with peak and non-peak usage patterns.
+
> [!NOTE] > Send email to agschedule-autoscale@microsoft.com if you have questions or need help to set up managed and scheduled autoscale for your deployments.
azure-functions Durable Functions Http Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/durable/durable-functions-http-api.md
POST /admin/extensions/DurableTaskExtension/instances/bcf6fb5067b046fbb021b52ba7
The responses for this API do not contain any content.
-## Suspend instance (preview)
+## Suspend instance
Suspends a running orchestration instance.
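For illustration, a hedged C# sketch that calls this API over HTTP; the function app host, instance ID, and system key values are placeholders, and the URL follows the Functions 2.0 and later webhook format documented in this article:

```csharp
using System;
using System.Net.Http;

// Placeholders: function app host, orchestration instance ID, and system key.
var host = "https://myfunctionapp.azurewebsites.net";
var instanceId = "abc123";
var systemKey = "yourSystemKey";
var reason = "Need to pause workflow";

using var http = new HttpClient();
var url = $"{host}/runtime/webhooks/durabletask/instances/{instanceId}/suspend" +
          $"?reason={Uri.EscapeDataString(reason)}&code={systemKey}";

// A 202 Accepted response means the suspend request was accepted for processing.
var response = await http.PostAsync(url, content: null);
Console.WriteLine((int)response.StatusCode);
```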
Several possible status code values can be returned.
The responses for this API do not contain any content.
-## Resume instance (preview)
+## Resume instance
Resumes a suspended orchestration instance.
azure-functions Durable Functions Instance Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/durable/durable-functions-instance-management.md
A terminated instance will eventually transition into the `Terminated` state. Ho
> [!NOTE] > Instance termination doesn't currently propagate. Activity functions and sub-orchestrations run to completion, regardless of whether you've terminated the orchestration instance that called them.
-## Suspend and Resume instances (preview)
+## Suspend and Resume instances
Suspending an orchestration allows you to stop a running orchestration. Unlike with termination, you have the option to resume a suspended orchestrator at a later point in time.
public static async Task Run(
string suspendReason = "Need to pause workflow"; await client.SuspendAsync(instanceId, suspendReason);
- // ... wait for some period of time since suspending is an async operation...
+ // Wait for 30 seconds to ensure that the orchestrator state is updated to suspended.
+ await Task.Delay(TimeSpan.FromSeconds(30));
string resumeReason = "Continue workflow"; await client.ResumeAsync(instanceId, resumeReason);
public static async Task Run(
``` # [JavaScript](#tab/javascript)
-> [!NOTE]
-> This feature is currently not supported in JavaScript.
+
+```javascript
+const df = require("durable-functions");
+
+module.exports = async function(context, instanceId) {
+ const client = df.getClient(context);
+
+ const suspendReason = "Need to pause workflow";
+ await client.suspend(instanceId, suspendReason);
+
+ // Wait for 30 seconds to ensure that the orchestrator state is updated to suspended.
+ await new Promise((resolve) => setTimeout(resolve, 30000));
+
+ const resumeReason = "Continue workflow";
+ await client.resume(instanceId, resumeReason);
+};
+```
# [Python](#tab/python)
-> [!NOTE]
-> This feature is currently not supported in Python.
+
+```python
+import azure.functions as func
+import azure.durable_functions as df
+import asyncio
+
+async def main(req: func.HttpRequest, starter: str, instance_id: str):
+ client = df.DurableOrchestrationClient(starter)
+
+ suspend_reason = "Need to pause workflow"
+ await client.suspend(instance_id, suspend_reason)
+
+ # Wait for 30 seconds to ensure that the orchestrator state is updated to suspended.
+ await asyncio.sleep(30)
+
+ resume_reason = "Continue workflow"
+ await client.resume(instance_id, resume_reason)
+```
# [PowerShell](#tab/powershell)
-> [!NOTE]
-> This feature is currently not supported in PowerShell.
+
+```powershell
+param($Request, $TriggerMetadata)
+
+# Get instance id from body
+$InstanceId = $Request.Body.InstanceId
+$SuspendReason = 'Need to pause workflow'
+
+Suspend-DurableOrchestration -InstanceId $InstanceId -Reason $SuspendReason
+
+# Wait for 30 seconds to ensure that the orchestrator state is updated to suspended.
+Start-Sleep -Seconds 30
+
+$ResumeReason = 'Continue workflow'
+Resume-DurableOrchestration -InstanceId $InstanceId -Reason $ResumeReason
+```
# [Java](#tab/java)
-> [!NOTE]
-> This feature is currently not supported in Java.
+
+```java
+@FunctionName("SuspendResumeInstance")
+public void suspendResumeInstance(
+ @HttpTrigger(name = "req", methods = {HttpMethod.POST}) HttpRequestMessage<String> req,
+ @DurableClientInput(name = "durableContext") DurableClientContext durableContext) {
+ String instanceID = req.getBody();
+ DurableTaskClient client = durableContext.getClient();
+ String suspendReason = "Need to pause workflow";
+ client.suspendInstance(instanceID, suspendReason);
+
+ // Wait for 30 seconds to ensure that the orchestrator state is updated to suspended.
+ try {
+ Thread.sleep(30 * 1000);
+ } catch (InterruptedException e) {
+ Thread.currentThread().interrupt();
+ }
+
+ String resumeReason = "Continue workflow";
+ client.resumeInstance(instanceID, resumeReason);
+}
+```
azure-monitor Basic Logs Configure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/basic-logs-configure.md
All custom tables created with or migrated to the [data collection rule (DCR)-ba
| Application Gateways | [AGWAccessLogs](/azure/azure-monitor/reference/tables/AGWAccessLogs)<br>[AGWPerformanceLogs](/azure/azure-monitor/reference/tables/AGWPerformanceLogs)<br>[AGWFirewallLogs](/azure/azure-monitor/reference/tables/AGWFirewallLogs) | | Application Gateway for Containers | [AGCAccessLogs](/azure/azure-monitor/reference/tables/AGCAccessLogs) | | Application Insights | [AppTraces](/azure/azure-monitor/reference/tables/apptraces) |
-| Bare Metal Machines | [NCBMSecurityDefenderLogs](/azure/azure-monitor/reference/tables/ncbmsecuritydefenderlogs)<br>[NCBMSystemLogs](/azure/azure-monitor/reference/tables/NCBMSystemLogs)<br>[NCBMSecurityLogs](/azure/azure-monitor/reference/tables/NCBMSecurityLogs) |
+| Bare Metal Machines | [NCBMSecurityDefenderLogs](/azure/azure-monitor/reference/tables/ncbmsecuritydefenderlogs)<br>[NCBMSystemLogs](/azure/azure-monitor/reference/tables/NCBMSystemLogs)<br>[NCBMSecurityLogs](/azure/azure-monitor/reference/tables/NCBMSecurityLogs) <br>[NCBMBreakGlassAuditLogs](/azure/azure-monitor/reference/tables/ncbmbreakglassauditlogs)|
| Chaos Experiments | [ChaosStudioExperimentEventLogs](/azure/azure-monitor/reference/tables/ChaosStudioExperimentEventLogs) | | Cloud HSM | [CHSMManagementAuditLogs](/azure/azure-monitor/reference/tables/CHSMManagementAuditLogs) | | Container Apps | [ContainerAppConsoleLogs](/azure/azure-monitor/reference/tables/containerappconsoleLogs) |
defender-for-cloud Concept Cloud Security Posture Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/concept-cloud-security-posture-management.md
Title: Cloud Security Posture Management (CSPM)
description: Learn more about CSPM in Microsoft Defender for Cloud. Previously updated : 01/02/2024 Last updated : 02/11/2024 # Cloud security posture management (CSPM)
You can choose which ticketing system to integrate. For preview, only ServiceNow
- Review the [Defender for Cloud pricing page](https://azure.microsoft.com/pricing/details/defender-for-cloud/) to learn about Defender CSPM pricing. -- Defender CSPM for GCP is free until January 31, 2024.- - From March 7, 2024, advanced DevOps security posture capabilities will only be available through the paid Defender CSPM plan. Free foundational security posture management in Defender for Cloud will continue providing a number of Azure DevOps recommendations. Learn more about [DevOps security features](devops-support.md#azure-devops). - For subscriptions that use both Defender CSPM and Defender for Containers plans, free vulnerability assessment is calculated based on free image scans provided via the Defender for Containers plan, as summarized [in the Microsoft Defender for Cloud pricing page](https://azure.microsoft.com/pricing/details/defender-for-cloud/).
defender-for-cloud Overview Page https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/overview-page.md
The **Insights** pane offers customized items for your environment including:
## Next steps - [Learn more](concept-cloud-security-posture-management.md) about cloud security posture management.-- [Learn more](security-policy-concept.md) about security standards and
+- [Learn more](security-policy-concept.md) about security standards and recommendations
- [Review your asset inventory](asset-inventory.md)
logic-apps Sap https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/connectors/sap.md
Previously updated : 12/12/2023 Last updated : 02/10/2024 tags: connectors
tags: connectors
This multipart how-to guide shows how to access your SAP server from a workflow in Azure Logic Apps using the SAP connector. You can use the SAP connector's operations to create automated workflows that run when triggered by events in your SAP server or in other systems and run actions to manage resources on your SAP server.
-Both Standard and Consumption logic app workflows offer the SAP *managed* connector that's hosted and run in multi-tenant Azure. Standard workflows also offer the SAP *built-in* connector that's hosted and run in single-tenant Azure Logic Apps. If you create and host a Consumption workflow in an integration service environment (ISE), you can also use the SAP connector's ISE-native version. For more information, see [Connector technical reference](#connector-technical-reference).
+Both Standard and Consumption logic app workflows offer the SAP *managed* connector that's hosted and run in multitenant Azure. Standard workflows also offer the SAP *built-in* connector that's hosted and run in single-tenant Azure Logic Apps. If you create and host a Consumption workflow in an integration service environment (ISE), you can also use the SAP connector's ISE-native version. For more information, see [Connector technical reference](#connector-technical-reference).
## SAP compatibility
The SAP connector has different versions, based on [logic app type and host envi
| Logic app | Environment | Connector version | |--|-|-|
-| **Consumption** | Multi-tenant Azure Logic Apps | Managed connector, which appears in the designer under the **Enterprise** label. For more information, review the following documentation: <br><br>- [SAP managed connector reference](/connectors/sap/) <br>- [Managed connectors in Azure Logic Apps](../../connectors/managed.md) |
+| **Consumption** | Multitenant Azure Logic Apps | Managed connector, which appears in the designer under the **Enterprise** label. For more information, review the following documentation: <br><br>- [SAP managed connector reference](/connectors/sap/) <br>- [Managed connectors in Azure Logic Apps](../../connectors/managed.md) |
| **Consumption** | Integration service environment (ISE) | Managed connector, which appears in the designer under the **Enterprise** label, and the ISE-native version, which appears in the designer with the **ISE** label and has different message limits than the managed connector. <br><br>**Note**: Make sure to use the ISE-native version, not the managed version. <br><br>For more information, review the following documentation: <br><br>- [SAP managed connector reference](/connectors/sap/) <br>- [ISE message limits](../logic-apps-limits-and-config.md#message-size-limits) <br>- [Managed connectors in Azure Logic Apps](../../connectors/managed.md) | | **Standard** | Single-tenant Azure Logic Apps and App Service Environment v3 (Windows plans only) | Managed connector, which appears in the connector gallery under **Runtime** > **Shared**, and the built-in connector, which appears in the connector gallery under **Runtime** > **In-App** and is [service provider-based](../custom-connector-overview.md#service-provider-interface-implementation). The built-in connector can directly access Azure virtual networks with a connection string without an on-premises data gateway. For more information, review the following documentation: <br><br>- [SAP managed connector reference](/connectors/sap/) <br>- [SAP built-in connector reference](/azure/logic-apps/connectors/built-in/reference/sap/) <br><br>- [Managed connectors in Azure Logic Apps](../../connectors/managed.md) <br>- [Built-in connectors in Azure Logic Apps](../../connectors/built-in.md) |
SAP upgraded their .NET connector (NCo) to version 3.1, which changed the way th
* The logic app workflow from where you want to access your SAP server.
- * For a Consumption workflow in multi-tenant Azure Logic Apps, see [Multi-tenant prerequisites](#multi-tenant-prerequisites).
+ * For a Consumption workflow in multitenant Azure Logic Apps, see [Multitenant prerequisites](#multitenant-prerequisites).
* For a Standard workflow in single-tenant Azure Logic Apps, see [Single-tenant prerequisites](#single-tenant-prerequisites).
SAP upgraded their .NET connector (NCo) to version 3.1, which changed the way th
The SAP system requires network connectivity from the host of the SAP .NET Connector (NCo) library:
-* For Consumption logic app workflows in multi-tenant Azure Logic Apps, the on-premises data gateway hosts the SAP .NET Connector (NCo) library. If you use an on-premises data gateway cluster, all nodes of the cluster require network connectivity to the SAP system.
+* For Consumption logic app workflows in multitenant Azure Logic Apps, the on-premises data gateway hosts the SAP .NET Connector (NCo) library. If you use an on-premises data gateway cluster, all nodes of the cluster require network connectivity to the SAP system.
* For Standard logic app workflows in single-tenant Azure Logic Apps, the logic app resource hosts the SAP .NET Connector (NCo) library. So, the logic app resource itself must enable virtual network integration, and that virtual network must have network connectivity to the SAP system.
To use the SAP connector, you have to install the SAP Connector NCo client libra
* For Standard logic app workflows, you can install the latest 64-bit or 32-bit version for [SAP Connector (NCo 3.1) for Microsoft .NET 3.1.3.0 compiled with .NET Framework 4.6.2](https://support.sap.com/en/product/connectors/msnet.html). However, make sure that you install the version that matches the configuration in your Standard logic app resource. To check the version used by your logic app, follow these steps:
- 1. In the [Azure portal](https://portal.azure.com), open your Standard logic app.
+ 1. In the [Azure portal](https://portal.azure.com), open your Standard logic app resource.
1. On the logic app resource menu, under **Settings**, select **Configuration**.
- 1. On the **Configuration** pane, under **Platform settings**, check whether the **Platform** value is set to 64-bit or 32-bit.
+ 1. On the **Configuration** page, select the **General settings** tab. Under **Platform settings**, check whether the **Platform** value is set to **64 Bit** or **32 Bit**.
1. Make sure to install the version of the [SAP Connector (NCo 3.1) for Microsoft .NET 3.1.3.0 compiled with .NET Framework 4.6.2](https://support.sap.com/en/product/connectors/msnet.html) that matches your platform configuration. * From the client library's default installation folder, copy the assembly (.dll) files to another location, based on your scenario as follows. Or, optionally, if you're using only the SAP managed connector, when you install the SAP NCo client library, select **Global Assembly Cache registration**. The ISE zip archive and SAP built-in connector currently doesn't support GAC registration.
- * For a Consumption workflow that runs in multi-tenant Azure Logic Apps and uses your on-premises data gateway, copy the following assembly (.dll) files to the on-premises data gateway installation folder, for example, **C:\Program Files\On-Premises Data Gateway**. The SAP NCo 3.0 client library contains the following assemblies:
+ * For a Consumption workflow that runs in multitenant Azure Logic Apps and uses your on-premises data gateway, copy the following assembly (.dll) files to the on-premises data gateway installation folder, for example, **C:\Program Files\On-Premises Data Gateway**. The SAP NCo 3.0 client library contains the following assemblies:
- **libicudecnumber.dll** - **rscp4n.dll**
The following relationships exist between the SAP NCo client library, the .NET F
<a name="snc-prerequisites-consumption"></a>
-For Consumption workflows in multi-tenant Azure Logic Apps that use the on-premises data gateway, and optionally SNC, you must also configure the following settings.
+For Consumption workflows in multitenant Azure Logic Apps that use the on-premises data gateway, and optionally SNC, you must also configure the following settings.
* Make sure that your SNC library version and its dependencies are compatible with your SAP environment. To troubleshoot any library compatibility issues, you can use your on-premises data gateway and data gateway logs.
After you delete the SAP connections, you must delete the SAP connector from you
### [Consumption](#tab/consumption)
-<a name="multi-tenant-prerequisites"></a>
+<a name="multitenant-prerequisites"></a>
-For a Consumption workflow in multi-tenant Azure Logic Apps, the SAP managed connector integrates with SAP systems through an [on-premises data gateway](../connect-on-premises-data-sources.md). For example, in scenarios where your workflow sends a message to the SAP system, the data gateway acts as an RFC client and forwards the requests received from your workflow to SAP. Likewise, in scenarios where your workflow receives a message from SAP, the data gateway acts as an RFC server that receives requests from SAP and forwards them to your workflow.
+For a Consumption workflow in multitenant Azure Logic Apps, the SAP managed connector integrates with SAP systems through an [on-premises data gateway](../connect-on-premises-data-sources.md). For example, in scenarios where your workflow sends a message to the SAP system, the data gateway acts as an RFC client and forwards the requests received from your workflow to SAP. Likewise, in scenarios where your workflow receives a message from SAP, the data gateway acts as an RFC server that receives requests from SAP and forwards them to your workflow.
1. On a host computer or virtual machine that exists in the same virtual network as the SAP system to which you're connecting, [download and install the on-premises data gateway](../install-on-premises-data-gateway.md).
For a Consumption workflow in an ISE, the ISE provides access to resources that
> before this date are supported through August 31, 2024. For more information, see the following resources: > > - [ISE Retirement - what you need to know](https://techcommunity.microsoft.com/t5/azure-integration-services-blog/ise-retirement-what-you-need-to-know/ba-p/3645220)
-> - [Single-tenant versus multi-tenant and integration service environment for Azure Logic Apps](../single-tenant-overview-compare.md)
+> - [Single-tenant versus multitenant and integration service environment for Azure Logic Apps](../single-tenant-overview-compare.md)
> - [Azure Logic Apps pricing](https://azure.microsoft.com/pricing/details/logic-apps/) > - [Export ISE workflows to a Standard logic app](../export-from-ise-to-standard-logic-app.md) > - [Integration Services Environment will be retired on 31 August 2024 - transition to Logic Apps Standard](https://azure.microsoft.com/updates/integration-services-environment-will-be-retired-on-31-august-2024-transition-to-logic-apps-standard/)
For a Consumption workflow in an ISE, the ISE provides access to resources that
### [Consumption](#tab/consumption)
-For a Consumption workflow that runs in multi-tenant Azure Logic Apps, you can enable SNC for authentication, which applies only when you use the data gateway. Before you start, make sure that you met all the necessary [prerequisites](sap.md?tabs=multi-tenant#prerequisites) and [SNC prerequisites](sap.md?tabs=multi-tenant#snc-prerequisites).
+For a Consumption workflow that runs in multitenant Azure Logic Apps, you can enable SNC for authentication, which applies only when you use the data gateway. Before you start, make sure that you met all the necessary [prerequisites](sap.md?tabs=consumption#prerequisites) and [SNC prerequisites](sap.md?tabs=consumption#snc-prerequisites).
1. In the [Azure portal](https://portal.azure.com), open your Consumption logic app and workflow in the designer.
For a Standard workflow that runs in single-tenant Azure Logic Apps, you can ena
1. To specify your SNC Personal Security Environment (PSE) and PSE password, follow these steps:
- 1. On your logic app resource menu, under **Settings**, select **Configuration**.
+ 1. On your logic app resource menu, under **Settings**, select **Environment variables**.
- 1. On the **Application settings** tab, check whether the settings named **SAP_PSE** and **SAP__PSE_Password** already exist. If they don't exist, you have to add both settings. To add a new setting, select **New application setting**, provide the following required information, and select **OK** for each setting:
+ 1. On the **App settings** tab, check whether the settings named **SAP_PSE** and **SAP__PSE_Password** already exist. If they don't exist, you have to add each setting at the end of the settings list, provide the following required information, and select **Apply** for each setting:
| Name | Value | Description | ||-|-|
To work with the resulting ETL files, you can use [PerfView](https://github.com/
### Test your workflow
-Based on whether you have a Consumption workflow in multi-tenant Azure Logic Apps or a Standard workflow in single-tenant Azure Logic Apps, follow the corresponding steps:
+Based on whether you have a Consumption workflow in multitenant Azure Logic Apps or a Standard workflow in single-tenant Azure Logic Apps, follow the corresponding steps:
### [Consumption](#tab/consumption)
Based on whether you have a Consumption workflow in multi-tenant Azure Logic App
### [ISE](#tab/ise)
-See the steps for [SAP logging for Consumption logic apps in multi-tenant workflows](?tabs=multi-tenant#test-workflow-logging).
+See the steps for [SAP logging for Consumption logic apps in multitenant workflows](?tabs=consumption#test-workflow-logging).
See the steps for [SAP logging for Consumption logic apps in multi-tenant workfl
When you have to investigate any problems with this component, you can set up custom text file-based NCo tracing, which SAP or Microsoft support might request from you. By default, this capability is disabled because enabling this trace might negatively affect performance and quickly consume the application host's storage space.
-You can control this tracing capability at the application level by using the following settings:
+You can control this tracing capability at the application level by adding the following settings:
1. In the [Azure portal](https://portal.azure.com), open your Standard logic app resource.
-1. On the resource menu, under **Settings**, select **Configuration** to review the application settings.
+1. On the logic app menu, under **Settings**, select **Environment variables** to review the application settings.
-1. On the **Configuration** page, add the following application settings:
+1. On the **Environment variables** page, on the **App settings** tab, add the following application settings:
* **SAP_RFC_TRACE_DIRECTORY**: The directory where to store the NCo trace files, for example, **C:\home\LogFiles\NCo**.
You can control this tracing capability at the application level by using the fo
After you open the **$SAP_RFC_TRACE_DIRECTORY** folder, you'll find a file named **dev_nco_rfc.log**, one or multiple files named **dev_nco_rfcNNNN.log**, and one or multiple files named **dev_nco_rfcNNNN.trc** where **NNNN** is a thread identifier.
-1. To view the contant in a log or trace file, select the **Edit** button next to a file.
+1. To view the content in a log or trace file, select the **Edit** button next to a file.
> [!NOTE] >
logic-apps Edit App Settings Host Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/edit-app-settings-host-settings.md
Your logic app also has *host settings*, which specify the runtime configuration
## App settings, parameters, and deployment
-In *multitenant* Azure Logic Apps, deployment depends on Azure Resource Manager templates (ARM templates), which combine and handle resource provisioning for both logic apps and infrastructure. This design poses a challenge when you have to maintain environment variables for logic apps across various dev, test, and production environments. Everything in an ARM template is defined at deployment. If you need to change just a single variable, you have to redeploy everything.
+In multitenant Azure Logic Apps, deployment depends on Azure Resource Manager templates (ARM templates), which combine and handle resource provisioning for both logic apps and infrastructure. This design poses a challenge when you have to maintain environment variables for logic apps across various dev, test, and production environments. Everything in an ARM template is defined at deployment. If you need to change just a single variable, you have to redeploy everything.
In *single-tenant* Azure Logic Apps, deployment becomes easier because you can separate resource provisioning between apps and infrastructure. You can use *parameters* to abstract values that might change between environments. By defining parameters to use in your workflows, you can first focus on designing your workflows, and then insert your environment-specific variables later. You can call and reference your environment variables at runtime by using app settings and parameters. That way, you don't have to redeploy as often.
role-based-access-control Role Definitions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/role-definitions.md
Previously updated : 11/06/2023 Last updated : 02/12/2024
NotActions []
DataActions []
NotDataActions []
AssignableScopes []
+Condition
+ConditionVersion
```
-The following shows an example of the properties in a role definition when displayed using the [Azure portal](role-definitions-list.md#azure-portal), [Azure CLI](role-definitions-list.md#azure-cli), or the [REST API](role-definitions-list.md#rest-api):
+The following shows an example of the properties in a role definition when displayed using the [Azure CLI](role-definitions-list.md#azure-cli) or [REST API](role-definitions-list.md#rest-api):
```
roleName
name
+id
+roleType
type
description
actions []
notActions []
dataActions []
notDataActions []
assignableScopes []
+condition
+conditionVersion
+createdOn
+updatedOn
+createdBy
+updatedBy
```

The following table describes what the role properties mean.

| Property | Description |
| --- | --- |
-| `Name`</br>`roleName` | The display name of the role. |
-| `Id`</br>`name` | The unique ID of the role. Built-in roles have the same role ID across clouds. |
-| `IsCustom`</br>`roleType` | Indicates whether this is a custom role. Set to `true` or `CustomRole` for custom roles. Set to `false` or `BuiltInRole` for built-in roles. |
-| `Description`</br>`description` | The description of the role. |
-| `Actions`</br>`actions` | An array of strings that specifies the control plane actions that the role allows to be performed. |
-| `NotActions`</br>`notActions` | An array of strings that specifies the control plane actions that are excluded from the allowed `Actions`. |
-| `DataActions`</br>`dataActions` | An array of strings that specifies the data plane actions that the role allows to be performed to your data within that object. |
-| `NotDataActions`</br>`notDataActions` | An array of strings that specifies the data plane actions that are excluded from the allowed `DataActions`. |
-| `AssignableScopes`</br>`assignableScopes` | An array of strings that specifies the scopes that the role is available for assignment. |
+| `Name`</br>`roleName` | Display name of the role. |
+| `Id`</br>`name` | Unique ID of the role. Built-in roles have the same role ID across clouds. |
+| `id` | Fully qualified unique ID of the role. |
+| `IsCustom`</br>`roleType` | Indicates whether this role is a custom role. Set to `true` or `CustomRole` for custom roles. Set to `false` or `BuiltInRole` for built-in roles. |
+| `type` | Type of object. Set to `Microsoft.Authorization/roleDefinitions`. |
+| `Description`</br>`description` | Description of the role. |
+| `Actions`</br>`actions` | Array of strings that specifies the control plane actions that the role allows to be performed. |
+| `NotActions`</br>`notActions` | Array of strings that specifies the control plane actions that are excluded from the allowed `Actions`. |
+| `DataActions`</br>`dataActions` | Array of strings that specifies the data plane actions that the role allows to be performed to your data within that object. |
+| `NotDataActions`</br>`notDataActions` | Array of strings that specifies the data plane actions that are excluded from the allowed `DataActions`. |
+| `AssignableScopes`</br>`assignableScopes` | Array of strings that specifies the scopes that the role is available for assignment. |
+| `Condition`<br/>`condition` | For built-in roles, condition statement based on one or more actions in role definition. |
+| `ConditionVersion`<br/>`conditionVersion` | Condition version number. Defaults to 2.0 and is the only supported version. |
+| `createdOn` | Date and time role was created. |
+| `updatedOn` | Date and time role was last updated. |
+| `createdBy` | For custom roles, principal that created role. |
+| `updatedBy` | For custom roles, principal that updated role. |
### Actions format
Contributor role as displayed in [Azure PowerShell](role-definitions-list.md#azu
"Name": "Contributor", "Id": "b24988ac-6180-42a0-ab88-20f7382dd24c", "IsCustom": false,
- "Description": "Lets you manage everything except access to resources.",
+ "Description": "Grants full access to manage all resources, but does not allow you to assign roles in Azure RBAC, manage assignments in Azure Blueprints, or share image galleries.",
"Actions": [ "*" ],
Contributor role as displayed in [Azure PowerShell](role-definitions-list.md#azu
"Microsoft.Authorization/*/Write", "Microsoft.Authorization/elevateAccess/Action", "Microsoft.Blueprint/blueprintAssignments/write",
- "Microsoft.Blueprint/blueprintAssignments/delete"
+ "Microsoft.Blueprint/blueprintAssignments/delete",
+ "Microsoft.Compute/galleries/share/action",
+ "Microsoft.Purview/consents/write",
+ "Microsoft.Purview/consents/delete"
], "DataActions": [], "NotDataActions": [], "AssignableScopes": [ "/"
- ]
+ ],
+ "Condition": null,
+ "ConditionVersion": null
}
```

Contributor role as displayed in [Azure CLI](role-definitions-list.md#azure-cli):

```json
-{
- "assignableScopes": [
- "/"
- ],
- "description": "Lets you manage everything except access to resources.",
- "id": "/subscriptions/{subscriptionId}/providers/Microsoft.Authorization/roleDefinitions/b24988ac-6180-42a0-ab88-20f7382dd24c",
- "name": "b24988ac-6180-42a0-ab88-20f7382dd24c",
- "permissions": [
- {
- "actions": [
- "*"
- ],
- "notActions": [
- "Microsoft.Authorization/*/Delete",
- "Microsoft.Authorization/*/Write",
- "Microsoft.Authorization/elevateAccess/Action",
- "Microsoft.Blueprint/blueprintAssignments/write",
- "Microsoft.Blueprint/blueprintAssignments/delete"
- ],
- "dataActions": [],
- "notDataActions": []
- }
- ],
- "roleName": "Contributor",
- "roleType": "BuiltInRole",
- "type": "Microsoft.Authorization/roleDefinitions"
-}
+[
+ {
+ "assignableScopes": [
+ "/"
+ ],
+ "createdBy": null,
+ "createdOn": "2015-02-02T21:55:09.880642+00:00",
+ "description": "Grants full access to manage all resources, but does not allow you to assign roles in Azure RBAC, manage assignments in Azure Blueprints, or share image galleries.",
+ "id": "/subscriptions/{subscriptionId}/providers/Microsoft.Authorization/roleDefinitions/b24988ac-6180-42a0-ab88-20f7382dd24c",
+ "name": "b24988ac-6180-42a0-ab88-20f7382dd24c",
+ "permissions": [
+ {
+ "actions": [
+ "*"
+ ],
+ "condition": null,
+ "conditionVersion": null,
+ "dataActions": [],
+ "notActions": [
+ "Microsoft.Authorization/*/Delete",
+ "Microsoft.Authorization/*/Write",
+ "Microsoft.Authorization/elevateAccess/Action",
+ "Microsoft.Blueprint/blueprintAssignments/write",
+ "Microsoft.Blueprint/blueprintAssignments/delete",
+ "Microsoft.Compute/galleries/share/action",
+ "Microsoft.Purview/consents/write",
+ "Microsoft.Purview/consents/delete"
+ ],
+ "notDataActions": []
+ }
+ ],
+ "roleName": "Contributor",
+ "roleType": "BuiltInRole",
+ "type": "Microsoft.Authorization/roleDefinitions",
+ "updatedBy": null,
+ "updatedOn": "2023-07-10T15:10:53.947865+00:00"
+ }
+]
```

## Control and data actions
Storage Blob Data Reader role as displayed in Azure PowerShell:
"NotDataActions": [], "AssignableScopes": [ "/"
- ]
+ ],
+ "Condition": null,
+ "ConditionVersion": null
}
```

Storage Blob Data Reader role as displayed in Azure CLI:

```json
-{
- "assignableScopes": [
- "/"
- ],
- "description": "Allows for read access to Azure Storage blob containers and data",
- "id": "/subscriptions/{subscriptionId}/providers/Microsoft.Authorization/roleDefinitions/2a2b9908-6ea1-4ae2-8e65-a410df84e7d1",
- "name": "2a2b9908-6ea1-4ae2-8e65-a410df84e7d1",
- "permissions": [
- {
- "actions": [
- "Microsoft.Storage/storageAccounts/blobServices/containers/read",
- "Microsoft.Storage/storageAccounts/blobServices/generateUserDelegationKey/action"
- ],
- "notActions": [],
- "dataActions": [
- "Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read"
- ],
- "notDataActions": []
- }
- ],
- "roleName": "Storage Blob Data Reader",
- "roleType": "BuiltInRole",
- "type": "Microsoft.Authorization/roleDefinitions"
-}
+[
+ {
+ "assignableScopes": [
+ "/"
+ ],
+ "createdBy": null,
+ "createdOn": "2017-12-21T00:01:24.797231+00:00",
+ "description": "Allows for read access to Azure Storage blob containers and data",
+ "id": "/subscriptions/{subscriptionId}/providers/Microsoft.Authorization/roleDefinitions/2a2b9908-6ea1-4ae2-8e65-a410df84e7d1",
+ "name": "2a2b9908-6ea1-4ae2-8e65-a410df84e7d1",
+ "permissions": [
+ {
+ "actions": [
+ "Microsoft.Storage/storageAccounts/blobServices/containers/read",
+ "Microsoft.Storage/storageAccounts/blobServices/generateUserDelegationKey/action"
+ ],
+ "condition": null,
+ "conditionVersion": null,
+ "dataActions": [
+ "Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read"
+ ],
+ "notActions": [],
+ "notDataActions": []
+ }
+ ],
+ "roleName": "Storage Blob Data Reader",
+ "roleType": "BuiltInRole",
+ "type": "Microsoft.Authorization/roleDefinitions",
+ "updatedBy": null,
+ "updatedOn": "2021-11-11T20:13:55.297507+00:00"
+ }
+]
```

Only data plane actions can be added to the `DataActions` and `NotDataActions` properties. Resource providers identify which actions are data actions by setting the `isDataAction` property to `true`. To see a list of the actions where `isDataAction` is `true`, see [Resource provider operations](resource-provider-operations.md). Roles that do not have data actions are not required to have `DataActions` and `NotDataActions` properties within the role definition.
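To make the distinction concrete, here's a hedged sketch of a custom role definition (in the PowerShell-style property format shown earlier) that grants only a data plane action; the role name, description, and assignable scope are placeholders:

```json
{
  "Name": "Example Blob Data Reader (custom)",
  "IsCustom": true,
  "Description": "Example only: read blob data, with no control plane access.",
  "Actions": [],
  "NotActions": [],
  "DataActions": [
    "Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read"
  ],
  "NotDataActions": [],
  "AssignableScopes": [
    "/subscriptions/{subscriptionId}"
  ]
}
```

Note that the blob read string is a data plane action, so it goes in `DataActions`, not `Actions`.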
Storage Blob Data Contributor
&nbsp;&nbsp;&nbsp;&nbsp;DataActions<br> &nbsp;&nbsp;&nbsp;&nbsp;`Microsoft.Storage/storageAccounts/blobServices/containers/blobs/delete`<br> &nbsp;&nbsp;&nbsp;&nbsp;`Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read`<br>
+&nbsp;&nbsp;&nbsp;&nbsp;`Microsoft.Storage/storageAccounts/blobServices/containers/blobs/write`<br>
&nbsp;&nbsp;&nbsp;&nbsp;`Microsoft.Storage/storageAccounts/blobServices/containers/blobs/move/action`<br>
-&nbsp;&nbsp;&nbsp;&nbsp;`Microsoft.Storage/storageAccounts/blobServices/containers/blobs/write`
+&nbsp;&nbsp;&nbsp;&nbsp;`Microsoft.Storage/storageAccounts/blobServices/containers/blobs/add/action`
Since Alice has a wildcard (`*`) action at a subscription scope, their permissions inherit down to enable them to perform all control plane actions. Alice can read, write, and delete containers. However, Alice cannot perform data plane actions without taking additional steps. For example, by default, Alice cannot read the blobs inside a container. To read the blobs, Alice would have to retrieve the storage access keys and use them to access the blobs.
Examples of valid assignable scopes include:
You can define only one management group in `AssignableScopes` of a custom role.
-Although it's possible to create a custom role with a resource instance in `AssignableScopes` using the command line, it's not recommended. Each tenant supports a maximum of 5000 custom roles. Using this strategy could potentially exhaust your available custom roles. Ultimately, the level of access is determined by the custom role assignment (scope + role permissions + security principal) and not the `AssignableScopes` listed in the custom role. So, create your custom roles with `AssignableScopes` of management group, subscription, or resource group, but assign the custom roles with narrow scope, such as resource or resource group.
+Although it's possible to create a custom role with a resource instance in `AssignableScopes` using the command line, it's not recommended. Each tenant supports a maximum of 5,000 custom roles. Using this strategy could potentially exhaust your available custom roles. Ultimately, the level of access is determined by the custom role assignment (scope + role permissions + security principal) and not the `AssignableScopes` listed in the custom role. So, create your custom roles with `AssignableScopes` of management group, subscription, or resource group, but assign the custom roles with narrow scope, such as resource or resource group.
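As an illustration of that guidance, here's a hedged sketch of a custom role defined with broad assignable scopes; the role name, the `Microsoft.Support/*` action, and the scope placeholders are illustrative only, and the role would then be assigned at a narrower scope such as a single resource group or resource:

```json
{
  "Name": "Example custom role",
  "IsCustom": true,
  "Description": "Example only.",
  "Actions": [ "Microsoft.Support/*" ],
  "NotActions": [],
  "AssignableScopes": [
    "/subscriptions/{subscriptionId}",
    "/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}"
  ]
}
```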
For more information about `AssignableScopes` for custom roles, see [Azure custom roles](custom-roles.md).
security Key Management Choose https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/key-management-choose.md
Previously updated : 07/25/2023 Last updated : 02/08/2024
Use the table to compare all the solutions side by side. Begin from top to botto
| | **AKV Standard** | **AKV Premium** | **Azure Managed HSM** | **Azure Dedicated HSM** | **Azure Payment HSM** |
| --- | --- | --- | --- | --- | --- |
-| What level of **compliance** do you need? | FIPS 140-2 level 1 | FIPS 140-2 level 2, PCI DSS | FIPS 140-2 level 3, PCI DSS, PCI 3DS | FIPS 140-2 level 3, HIPPA, PCI DSS, PCI 3DS, eIDAS CC EAL4+, GSMA | FIPS 140-2 level 3, PCI PTS HSM v3, PCI DSS, PCI 3DS, PCI PIN |
+| What level of **compliance** do you need? | FIPS 140-2 level 1 | FIPS 140-2 level 3, PCI DSS, PCI 3DS** | FIPS 140-2 level 3, PCI DSS, PCI 3DS | FIPS 140-2 level 3, HIPAA, PCI DSS, PCI 3DS, eIDAS CC EAL4+, GSMA | FIPS 140-2 level 3, PCI PTS HSM v3, PCI DSS, PCI 3DS, PCI PIN |
| Do you need **key sovereignty**? | No | No | Yes | Yes | Yes |
-| What kind of **tenancy** are you looking for? | Multi Tenant | Multi Tenant | Single Tenant | Single Tenant | Single Tenant |
+| What kind of **tenancy** are you looking for? | Multitenant | Multitenant | Single Tenant | Single Tenant | Single Tenant |
| What are your **use cases**? | Encryption at Rest, CMK, custom | Encryption at Rest, CMK, custom | Encryption at Rest, TLS Offload, CMK, custom | PKCS11, TLS Offload, code/document signing, custom | Payment PIN processing, custom |
| Do you want **HSM hardware protection**? | No | Yes | Yes | Yes | Yes |
| What is your **budget**? | $ | $$ | $$$ | $$$$ | $$$$ |
Here is a list of the key management solutions we commonly see being utilized ba
## Learn more about Azure key management solutions
-**Azure Key Vault (Standard Tier)**: A FIPS 140-2 Level 1 validated multi-tenant cloud key management service that can be used to store both asymmetric and symmetric keys, secrets, and certificates. Keys stored in Azure Key Vault are software-protected and can be used for encryption-at-rest and custom applications. Azure Key Vault Standard provides a modern API and a breadth of regional deployments and integrations with Azure Services. For more information, see [About Azure Key Vault](../../key-vault/general/overview.md).
+**Azure Key Vault (Standard Tier)**: A FIPS 140-2 Level 1 validated multitenant cloud key management service that can be used to store both asymmetric and symmetric keys, secrets, and certificates. Keys stored in Azure Key Vault are software-protected and can be used for encryption-at-rest and custom applications. Azure Key Vault Standard provides a modern API and a breadth of regional deployments and integrations with Azure Services. For more information, see [About Azure Key Vault](../../key-vault/general/overview.md).
-**Azure Key Vault (Premium Tier)**: A FIPS 140-2 Level 2 validated multi-tenant HSM offering that can be used to store both asymmetric and symmetric keys, secrets, and certificates. Keys are stored in a secure hardware boundary*. Microsoft manages and operates the underlying HSM, and keys stored in Azure Key Vault Premium can be used for encryption-at-rest and custom applications. Azure Key Vault Premium also provides a modern API and a breadth of regional deployments and integrations with Azure Services. If you are an AKV Premium customer looking for higher security compliance, key sovereignty, single tenancy, and/or higher crypto operations per second, you may want to consider Managed HSM instead. For more information, see [About Azure Key Vault](../../key-vault/general/overview.md).
+**Azure Key Vault (Premium Tier)**: A FIPS 140-2 Level 3** validated multitenant HSM offering that can be used to store both asymmetric and symmetric keys, secrets, and certificates. Keys are stored in a secure hardware boundary*. Microsoft manages and operates the underlying HSM, and keys stored in Azure Key Vault Premium can be used for encryption-at-rest and custom applications. Azure Key Vault Premium also provides a modern API and a breadth of regional deployments and integrations with Azure Services. If you are an AKV Premium customer looking for key sovereignty, single tenancy, and/or higher crypto operations per second, you may want to consider Managed HSM instead. For more information, see [About Azure Key Vault](../../key-vault/general/overview.md).
**Azure Managed HSM**: A FIPS 140-2 Level 3 validated, PCI compliant, single-tenant HSM offering that gives customers full control of an HSM for encryption-at-rest, Keyless SSL/TLS offload, and custom applications. Azure Managed HSM is the only key management solution offering confidential keys. Customers receive a pool of three HSM partitions (together acting as one logical, highly available HSM appliance) fronted by a service that exposes crypto functionality through the Key Vault API. Microsoft handles the provisioning, patching, maintenance, and hardware failover of the HSMs, but doesn't have access to the keys themselves, because the service executes within Azure's Confidential Compute Infrastructure. Azure Managed HSM is integrated with the Azure SQL, Azure Storage, and Azure Information Protection PaaS services and offers support for Keyless TLS with F5 and Nginx. For more information, see [What is Azure Key Vault Managed HSM?](../../key-vault/managed-hsm/overview.md)
Here is a list of the key management solutions we commonly see being utilized ba
> [!NOTE]
> \* Azure Key Vault Premium allows the creation of both software-protected and HSM-protected keys. If using Azure Key Vault Premium, check to ensure that the key created is HSM protected.
+>
+> \*\* Except UK regions, which are FIPS 140-2 Level 2, PCI DSS.
+
## What's next
sentinel Connect Logstash Data Connection Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/connect-logstash-data-connection-rules.md
After you retrieve the required values:
|Field |Description |Default value |
|---|---|---|
+|`azure_cloud` |Used to specify the name of the Azure cloud that is being used. Available values are: `AzureCloud`, `AzureChinaCloud`, and `AzureUSGovernment`. | `AzureCloud` |
|`key_names` |An array of strings. Provide this field if you want to send a subset of the columns to Log Analytics. |None (field is empty) |
|`plugin_flush_interval` |Defines the maximal time difference (in seconds) between sending two messages to Log Analytics. |`5` |
|`retransmission_time` |Sets the amount of time in seconds for retransmitting messages once sending failed. |`10` |
To monitor the connectivity and activity of the Microsoft Sentinel output plugin
If you don't see any data in this log file, generate and send some events locally (through the input and filter plugins) to make sure that the output plugin is receiving data. Microsoft Sentinel supports only issues relating to the output plugin.
+
+### Network security
+Define network settings and enable network isolation for the Microsoft Sentinel Logstash output plugin.
+
+#### Virtual network service tags
+
+The Microsoft Sentinel output plugin supports [Azure virtual network service tags](/azure/virtual-network/service-tags-overview). Both *AzureMonitor* and *AzureActiveDirectory* tags are required.
+
+Azure Virtual Network service tags can be used to define network access controls on [network security groups](/azure/virtual-network/network-security-groups-overview#security-rules), [Azure Firewall](/azure/firewall/service-tags), and user-defined routes. Use service tags in place of specific IP addresses when you create security rules and routes. For scenarios where Azure Virtual Network service tags cannot be used, the firewall requirements are given below.
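For example, here's a hedged sketch of an outbound network security group rule in ARM resource (securityRules) format that allows HTTPS traffic to the *AzureMonitor* service tag instead of listing IP addresses; the rule name, priority, and source prefix are illustrative assumptions, and a similar rule would be needed for the *AzureActiveDirectory* tag:

```json
{
  "name": "Allow-AzureMonitor-Outbound",
  "properties": {
    "priority": 200,
    "direction": "Outbound",
    "access": "Allow",
    "protocol": "Tcp",
    "sourcePortRange": "*",
    "sourceAddressPrefix": "VirtualNetwork",
    "destinationPortRange": "443",
    "destinationAddressPrefix": "AzureMonitor"
  }
}
```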
+
+#### Firewall requirements
+
+The following table lists the firewall requirements for scenarios where Azure virtual network service tags can't be used.
+
+| Cloud |Endpoint |Purpose |Port |Direction |Bypass HTTPS inspection|
+|---|---|---|---|---|---|
+| Azure Commercial |https://login.microsoftonline.com |Authorization server (the Microsoft identity platform)|Port 443 |Outbound|Yes |
+| Azure Commercial |`https://<data collection endpoint name>.<Azure cloud region>.ingest.monitor.azure.com`| Data collection Endpoint|Port 443 |Outbound|Yes |
+| Azure Government |https://login.microsoftonline.us |Authorization server (the Microsoft identity platform)|Port 443 |Outbound|Yes |
+| Azure Government |Replace '.com' above with '.us' | Data collection Endpoint|Port 443 |Outbound|Yes |
+| Microsoft Azure operated by 21Vianet |https://login.chinacloudapi.cn |Authorization server (the Microsoft identity platform)|Port 443 |Outbound|Yes |
+| Microsoft Azure operated by 21Vianet |Replace '.com' above with '.cn' | Data collection Endpoint|Port 443 |Outbound|Yes |
+
## Limitations

- Ingestion into standard tables is limited only to [standard tables supported for custom logs ingestion](data-transformation.md#data-transformation-support-for-custom-data-connectors).
sentinel Monitor Data Connector Health https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/monitor-data-connector-health.md
Title: Monitor the health of your Microsoft Sentinel data connectors description: Use the SentinelHealth data table and the Health Monitoring workbook to keep track of your data connectors' connectivity and performance.--++ Previously updated : 11/09/2022 Last updated : 02/11/2024
To ensure complete and uninterrupted data ingestion in your Microsoft Sentinel s
The following features allow you to perform this monitoring from within Microsoft Sentinel:
-- **Data connectors health monitoring workbook**: This workbook provides additional monitors, detects anomalies, and gives insight regarding the workspace's data ingestion status. You can use the workbook's logic to monitor the general health of the ingested data, and to build custom views and rule-based alerts.
+- **Data collection health monitoring workbook**: This workbook provides additional monitors, detects anomalies, and gives insight regarding the workspace's data ingestion status. You can use the workbook's logic to monitor the general health of the ingested data, and to build custom views and rule-based alerts.
- ***SentinelHealth* data table (Preview)**: Querying this table provides insights on health drifts, such as latest failure events per connector, or connectors with changes from success to failure states, which you can use to create alerts and other automated actions. The *SentinelHealth* data table is currently supported only for [selected data connectors](#supported-data-connectors).
The following features allow you to perform this monitoring from within Microsof
## Use the health monitoring workbook
-1. From the Microsoft Sentinel portal, select **Workbooks** from the **Threat management** menu.
+1. From the Microsoft Sentinel portal, select **Content hub** from the **Content management** section of the navigation menu.
-1. In the **Workbooks** gallery, enter *health* in the search bar, and select **Data collection health monitoring** from among the results.
+1. In the **Content hub**, enter *health* in the search bar, and select **Data collection health monitoring** from among the results.
+
+1. Select **Install** from the details pane. When you see a notification message that the workbook is installed, or if you see *Configuration* instead of *Install*, proceed to the next step.
+
+1. Select **Workbooks** from the **Threat management** section of the navigation menu.
+
+1. In the **Workbooks** page, select the **Templates** tab, enter *health* in the search bar, and select **Data collection health monitoring** from among the results.
1. Select **View template** to use the workbook as is, or select **Save** to create an editable copy of the workbook. When the copy is created, select **View saved workbook**.
virtual-machine-scale-sets Virtual Machine Scale Sets Attach Detach Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/virtual-machine-scale-sets-attach-detach-vm.md
Update-AzVM -ResourceGroupName $resourceGroupName -VM $vm -VirtualMachin
- The scale set must use Flexible orchestration mode.
- The scale set must have a `platformFaultDomainCount` of **1**.
- VMs created by the scale set must be `Stopped` prior to being detached.
-- Detach of VMs created by the scale set is currently not supported in UK South and North Europe.

## Moving VMs between scale sets (Preview)