Updates from: 01/22/2024 02:18:12
Service Microsoft Docs article Related commit history on GitHub Change details
ai-services Batch Transcription Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/batch-transcription-create.md
Here are some property options that you can use to configure a transcription whe
|`languageIdentification`|Language identification is used to identify languages spoken in audio when compared against a list of [supported languages](language-support.md?tabs=language-identification).<br/><br/>If you set the `languageIdentification` property, then you must also set its enclosed `candidateLocales` property.|
|`languageIdentification.candidateLocales`|The candidate locales for language identification such as `"properties": { "languageIdentification": { "candidateLocales": ["en-US", "de-DE", "es-ES"]}}`. A minimum of 2 and a maximum of 10 candidate locales, including the main locale for the transcription, is supported.|
|`locale`|The locale of the batch transcription. This should match the expected locale of the audio data to transcribe. The locale can't be changed later.<br/><br/>This property is required.|
-|`model`|You can set the `model` property to use a specific base model or [Custom Speech](how-to-custom-speech-train-model.md) model. If you don't specify the `model`, the default base model for the locale is used. For more information, see [Using custom models](#using-custom-models) and [Using Whisper models](#using-whisper-models).|
+|`model`|You can set the `model` property to use a specific base model or [custom speech](how-to-custom-speech-train-model.md) model. If you don't specify the `model`, the default base model for the locale is used. For more information, see [Using custom models](#using-custom-models) and [Using Whisper models](#using-whisper-models).|
|`profanityFilterMode`|Specifies how to handle profanity in recognition results. Accepted values are `None` to disable profanity filtering, `Masked` to replace profanity with asterisks, `Removed` to remove all profanity from the result, or `Tags` to add profanity tags. The default value is `Masked`.|
|`punctuationMode`|Specifies how to handle punctuation in recognition results. Accepted values are `None` to disable punctuation, `Dictated` to imply explicit (spoken) punctuation, `Automatic` to let the decoder deal with punctuation, or `DictatedAndAutomatic` to use dictated and automatic punctuation. The default value is `DictatedAndAutomatic`.<br/><br/>This property isn't applicable for Whisper models.|
|`timeToLive`|A duration after the transcription job is created, when the transcription results will be automatically deleted. The value is an ISO 8601 encoded duration. For example, specify `PT12H` for 12 hours. As an alternative, you can call [Transcriptions_Delete](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Transcriptions_Delete) regularly after you retrieve the transcription results.|
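If you create the transcription through the [Speech to text REST API](rest-speech-to-text.md), these properties go in the request body. The following is only a hedged sketch of what such a request could look like: the `/speechtotext/v3.1/transcriptions` path, the `contentUrls` property, and the storage URL are assumptions and placeholders, not values taken from this article.

```bash
# Hedged sketch of a batch transcription request that sets some of the properties above.
# The transcriptions path, contentUrls property, and all values are placeholders/assumptions.
curl -v -X POST "https://YourServiceRegion.api.cognitive.microsoft.com/speechtotext/v3.1/transcriptions" \
  -H "Ocp-Apim-Subscription-Key: YourSubscriptionKey" \
  -H "Content-Type: application/json" \
  -d '{
  "displayName": "My Transcription",
  "locale": "en-US",
  "contentUrls": ["https://YourStorageAccount.blob.core.windows.net/audio/sample1.wav"],
  "properties": {
    "profanityFilterMode": "Masked",
    "punctuationMode": "DictatedAndAutomatic",
    "timeToLive": "PT12H",
    "languageIdentification": { "candidateLocales": ["en-US", "de-DE", "es-ES"] }
  }
}'
```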
spx help batch transcription create advanced
Batch transcription uses the default base model for the locale that you specify. You don't need to set any properties to use the default base model.
-Optionally, you can modify the previous [create transcription example](#create-a-batch-transcription) by setting the `model` property to use a specific base model or [Custom Speech](how-to-custom-speech-train-model.md) model.
+Optionally, you can modify the previous [create transcription example](#create-a-batch-transcription) by setting the `model` property to use a specific base model or [custom speech](how-to-custom-speech-train-model.md) model.
::: zone pivot="rest-api"
spx batch transcription create --name "My Transcription" --language "en-US" --co
::: zone-end
-To use a Custom Speech model for batch transcription, you need the model's URI. You can retrieve the model location when you create or get a model. The top-level `self` property in the response body is the model's URI. For an example, see the JSON response example in the [Create a model](how-to-custom-speech-train-model.md?pivots=rest-api#create-a-model) guide.
+To use a custom speech model for batch transcription, you need the model's URI. You can retrieve the model location when you create or get a model. The top-level `self` property in the response body is the model's URI. For an example, see the JSON response example in the [Create a model](how-to-custom-speech-train-model.md?pivots=rest-api#create-a-model) guide.
> [!TIP]
> A [hosted deployment endpoint](how-to-custom-speech-deploy-model.md) isn't required to use custom speech with the batch transcription service. You can conserve resources if the [custom speech model](how-to-custom-speech-train-model.md) is only used for batch transcription.
-Batch transcription requests for expired models fail with a 4xx error. You want to set the `model` property to a base model or custom model that hasn't yet expired. Otherwise don't include the `model` property to always use the latest base model. For more information, see [Choose a model](how-to-custom-speech-create-project.md#choose-your-model) and [Custom Speech model lifecycle](how-to-custom-speech-model-and-endpoint-lifecycle.md).
+Batch transcription requests for expired models fail with a 4xx error. You want to set the `model` property to a base model or custom model that hasn't yet expired. Otherwise don't include the `model` property to always use the latest base model. For more information, see [Choose a model](how-to-custom-speech-create-project.md#choose-your-model) and [custom speech model lifecycle](how-to-custom-speech-model-and-endpoint-lifecycle.md).
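For illustration only, a batch transcription request that targets a custom model might reference the model's URI in the `model` property as in the following fragment. This is a hedged sketch: the region, path, and model ID are placeholders, and the actual URI is the `self` value described above.

```json
{
  "model": {
    "self": "https://YourServiceRegion.api.cognitive.microsoft.com/speechtotext/v3.1/models/YourModelId"
  }
}
```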
## Using Whisper models
ai-services Bring Your Own Storage Speech Resource Speech To Text https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/bring-your-own-storage-speech-resource-speech-to-text.md
This article explains in depth how to use a BYOS-enabled Speech resource in all
## Data storage
-When using BYOS, the Speech service doesn't keep any customer artifacts after the data processing (transcription, model training, model testing) is complete. However, some metadata that isn't derived from the user content is stored within Speech service premises. For example, in the Custom speech scenario, the Service keeps certain information about the custom endpoints, like which models they use.
+When using BYOS, the Speech service doesn't keep any customer artifacts after the data processing (transcription, model training, model testing) is complete. However, some metadata that isn't derived from the user content is stored within Speech service premises. For example, in the custom speech scenario, the Service keeps certain information about the custom endpoints, like which models they use.
BYOS-associated Storage account stores the following data:
URL of this format ensures that only Microsoft Entra identities (users, service
## Custom speech
-With Custom speech, you can evaluate and improve the accuracy of speech recognition for your applications and products. A custom speech model can be used for real-time speech to text, speech translation, and batch transcription. For more information, see the [Custom speech overview](custom-speech-overview.md).
+With custom speech, you can evaluate and improve the accuracy of speech recognition for your applications and products. A custom speech model can be used for real-time speech to text, speech translation, and batch transcription. For more information, see the [custom speech overview](custom-speech-overview.md).
-There's nothing specific about how you use Custom speech with BYOS-enabled Speech resource. The only difference is where all custom model related data, which Speech service collects and produces for you, is stored. The data is stored in the following Blob containers of BYOS-associated Storage account:
+There's nothing specific about how you use custom speech with BYOS-enabled Speech resource. The only difference is where all custom model related data, which Speech service collects and produces for you, is stored. The data is stored in the following Blob containers of BYOS-associated Storage account:
-- `customspeech-models` - Location of Custom speech models
-- `customspeech-artifacts` - Location of all other Custom speech related data
+- `customspeech-models` - Location of custom speech models
+- `customspeech-artifacts` - Location of all other custom speech related data
The Blob container structure is provided for your information only and is subject to change without notice.

> [!CAUTION]
-> Speech service relies on pre-defined Blob container paths and file names for Custom speech module to correctly function. Don't move, rename or in any way alter the contents of `customspeech-models` container and Custom speech related folders of `customspeech-artifacts` container.
+> Speech service relies on pre-defined Blob container paths and file names for custom speech module to correctly function. Don't move, rename or in any way alter the contents of `customspeech-models` container and custom speech related folders of `customspeech-artifacts` container.
>
> Failure to do so will very likely result in hard-to-debug errors and may lead to the necessity of custom model retraining.
>
-> Use standard tools, like REST API and Speech Studio to interact with the Custom speech related data. See details in [Custom speech section](custom-speech-overview.md).
+> Use standard tools, like REST API and Speech Studio to interact with the custom speech related data. See details in [custom speech section](custom-speech-overview.md).
-### Use of REST API with Custom speech
+### Use of REST API with custom speech
[Speech to text REST API](rest-speech-to-text.md) fully supports BYOS-enabled Speech resources. However, because the data is now stored within the BYOS-enabled Storage account, requests like [Get Dataset Files](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Datasets_ListFiles) interact with the BYOS-associated Storage account Blob storage, instead of Speech service internal resources. It allows using the same REST API based code for both "regular" and BYOS-enabled Speech resources.
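For example, a Datasets_ListFiles request might look like the following hedged sketch (the v3.1 path and the dataset ID are assumptions/placeholders); with BYOS, the file URLs it returns point into the BYOS-associated Storage account.

```bash
# Hedged sketch; the dataset files path and YourDatasetId are placeholders/assumptions.
curl -v -X GET "https://YourServiceRegion.api.cognitive.microsoft.com/speechtotext/v3.1/datasets/YourDatasetId/files" \
  -H "Ocp-Apim-Subscription-Key: YourSubscriptionKey"
```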
ai-services Bring Your Own Storage Speech Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/bring-your-own-storage-speech-resource.md
BYOS can be used with several Azure AI services. For Speech, it can be used in t
- [Batch transcription](batch-transcription.md)
- Real-time transcription with [audio and transcription result logging](logging-audio-transcription.md) enabled
-- [Custom Speech](custom-speech-overview.md) (Custom models for Speech recognition)
+- [Custom speech](custom-speech-overview.md) (Custom models for Speech recognition)
**Text to speech**
ai-services Call Center Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/call-center-overview.md
The Speech service works well with prebuilt models. However, you might want to f
| Speech customization | Description |
| -- | -- |
-| [Custom Speech](./custom-speech-overview.md) | A speech to text feature used to evaluate and improve the speech recognition accuracy of use-case specific entities (such as alpha-numeric customer, case, and contract IDs, license plates, and names). You can also train a custom model with your own product names and industry terminology. |
+| [Custom speech](./custom-speech-overview.md) | A speech to text feature used to evaluate and improve the speech recognition accuracy of use-case specific entities (such as alpha-numeric customer, case, and contract IDs, license plates, and names). You can also train a custom model with your own product names and industry terminology. |
| [Custom neural voice](./custom-neural-voice.md) | A text to speech feature that lets you create a one-of-a-kind, customized, synthetic voice for your applications. |

### Language service
ai-services Custom Speech Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/custom-speech-overview.md
Title: Custom Speech overview - Speech service
+ Title: Custom speech overview - Speech service
-description: Custom Speech is a set of online tools that allows you to evaluate and improve the speech to text accuracy for your applications, tools, and products.
+description: Custom speech is a set of online tools that allows you to evaluate and improve the speech to text accuracy for your applications, tools, and products.
Previously updated : 1/18/2024 Last updated : 1/19/2024
-# What is Custom Speech?
+# What is custom speech?
-With Custom Speech, you can evaluate and improve the accuracy of speech recognition for your applications and products. A custom speech model can be used for [real-time speech to text](speech-to-text.md), [speech translation](speech-translation.md), and [batch transcription](batch-transcription.md).
+With custom speech, you can evaluate and improve the accuracy of speech recognition for your applications and products. A custom speech model can be used for [real-time speech to text](speech-to-text.md), [speech translation](speech-translation.md), and [batch transcription](batch-transcription.md).
Out of the box, speech recognition utilizes a Universal Language Model as a base model that is trained with Microsoft-owned data and reflects commonly used spoken language. The base model is pre-trained with dialects and phonetics representing various common domains. When you make a speech recognition request, the most recent base model for each [supported language](language-support.md?tabs=stt) is used by default. The base model works well in most speech recognition scenarios.
-A custom model can be used to augment the base model to improve recognition of domain-specific vocabulary specific to the application by providing text data to train the model. It can also be used to improve recognition based for the specific audio conditions of the application by providing audio data with reference transcriptions.
+A custom model can be used to augment the base model to improve recognition of domain-specific vocabulary specific to the application by providing text data to train the model. It can also be used to improve recognition based for the specific audio conditions of the application by providing audio data with reference transcriptions.
+
+You can also train a model with structured text when the data follows a pattern, to specify custom pronunciations, and to customize display text formatting with custom inverse text normalization, custom rewrite, and custom profanity filtering.
## How does it work?
-With Custom Speech, you can upload your own data, test and train a custom model, compare accuracy between models, and deploy a model to a custom endpoint.
+With custom speech, you can upload your own data, test and train a custom model, compare accuracy between models, and deploy a model to a custom endpoint.
-![Diagram that highlights the components that make up the Custom Speech area of the Speech Studio.](./media/custom-speech/custom-speech-overview.png)
+![Diagram that highlights the components that make up the custom speech area of the Speech Studio.](./media/custom-speech/custom-speech-overview.png)
Here's more information about the sequence of steps shown in the previous diagram:
-1. [Create a project](how-to-custom-speech-create-project.md) and choose a model. Use a <a href="https://portal.azure.com/#create/Microsoft.CognitiveServicesSpeechServices" title="Create a Speech resource" target="_blank">Speech resource</a> that you create in the Azure portal. If you train a custom model with audio data, choose a Speech resource region with dedicated hardware for training audio data. See footnotes in the [regions](regions.md#speech-service) table for more information.
+1. [Create a project](how-to-custom-speech-create-project.md) and choose a model. Use a <a href="https://portal.azure.com/#create/Microsoft.CognitiveServicesSpeechServices" title="Create a Speech resource" target="_blank">Speech resource</a> that you create in the Azure portal. If you train a custom model with audio data, choose a Speech resource region with dedicated hardware for training audio data. For more information, see footnotes in the [regions](regions.md#speech-service) table.
1. [Upload test data](./how-to-custom-speech-upload-data.md). Upload test data to evaluate the speech to text offering for your applications, tools, and products.
1. [Test recognition quality](how-to-custom-speech-inspect-data.md). Use the [Speech Studio](https://aka.ms/speechstudio/customspeech) to play back uploaded audio and inspect the speech recognition quality of your test data.
1. [Test model quantitatively](how-to-custom-speech-evaluate-data.md). Evaluate and improve the accuracy of the speech to text model. The Speech service provides a quantitative word error rate (WER), which you can use to determine if more training is required.
1. [Train a model](how-to-custom-speech-train-model.md). Provide written transcripts and related text, along with the corresponding audio data. Testing a model before and after training is optional but recommended.
> [!NOTE]
- > You pay for Custom Speech model usage and [endpoint hosting](how-to-custom-speech-deploy-model.md). You'll also be charged for custom speech model training if the base model was created on October 1, 2023 and later. You are not charged for training if the base model was created prior to October 2023. For more information, see [Azure AI Speech pricing](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/) and the [Charge for adaptation section in the speech to text 3.2 migration guide](./migrate-v3-1-to-v3-2.md#charge-for-adaptation).
-1. [Deploy a model](how-to-custom-speech-deploy-model.md). Once you're satisfied with the test results, deploy the model to a custom endpoint. Except for [batch transcription](batch-transcription.md), you must deploy a custom endpoint to use a Custom Speech model.
+ > You pay for custom speech model usage and [endpoint hosting](how-to-custom-speech-deploy-model.md). You'll also be charged for custom speech model training if the base model was created on October 1, 2023 and later. You are not charged for training if the base model was created prior to October 2023. For more information, see [Azure AI Speech pricing](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/) and the [Charge for adaptation section in the speech to text 3.2 migration guide](./migrate-v3-1-to-v3-2.md#charge-for-adaptation).
+1. [Deploy a model](how-to-custom-speech-deploy-model.md). Once you're satisfied with the test results, deploy the model to a custom endpoint. Except for [batch transcription](batch-transcription.md), you must deploy a custom endpoint to use a custom speech model.
> [!TIP]
- > A hosted deployment endpoint isn't required to use Custom Speech with the [Batch transcription API](batch-transcription.md). You can conserve resources if the custom speech model is only used for batch transcription. For more information, see [Speech service pricing](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/).
+ > A hosted deployment endpoint isn't required to use custom speech with the [Batch transcription API](batch-transcription.md). You can conserve resources if the custom speech model is only used for batch transcription. For more information, see [Speech service pricing](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/).
## Responsible AI
-An AI system includes not only the technology, but also the people who use it, the people who will be affected by it, and the environment in which it's deployed. Read the transparency notes to learn about responsible AI use and deployment in your systems.
+An AI system includes not only the technology, but also the people who use it, the people who are affected by it, and the environment in which it's deployed. Read the transparency notes to learn about responsible AI use and deployment in your systems.
* [Transparency note and use cases](/legal/cognitive-services/speech-service/speech-to-text/transparency-note?context=/azure/ai-services/speech-service/context/context)
* [Characteristics and limitations](/legal/cognitive-services/speech-service/speech-to-text/characteristics-and-limitations?context=/azure/ai-services/speech-service/context/context)
ai-services How To Custom Speech Continuous Integration Continuous Deployment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/how-to-custom-speech-continuous-integration-continuous-deployment.md
Title: CI/CD for Custom Speech - Speech service
+ Title: CI/CD for custom speech - Speech service
-description: Apply DevOps with Custom Speech and CI/CD workflows. Implement an existing DevOps solution for your own project.
+description: Apply DevOps with custom speech and CI/CD workflows. Implement an existing DevOps solution for your own project.
Previously updated : 05/08/2022 Last updated : 1/19/2024
-# CI/CD for Custom Speech
+# CI/CD for custom speech
-Implement automated training, testing, and release management to enable continuous improvement of Custom Speech models as you apply updates to training and testing data. Through effective implementation of CI/CD workflows, you can ensure that the endpoint for the best-performing Custom Speech model is always available.
+Implement automated training, testing, and release management to enable continuous improvement of custom speech models as you apply updates to training and testing data. Through effective implementation of CI/CD workflows, you can ensure that the endpoint for the best-performing custom speech model is always available.
-[Continuous integration](/devops/develop/what-is-continuous-integration) (CI) is the engineering practice of frequently committing updates in a shared repository, and performing an automated build on it. CI workflows for Custom Speech train a new model from its data sources and perform automated testing on the new model to ensure that it performs better than the previous model.
+[Continuous integration](/devops/develop/what-is-continuous-integration) (CI) is the engineering practice of frequently committing updates in a shared repository, and performing an automated build on it. CI workflows for custom speech train a new model from its data sources and perform automated testing on the new model to ensure that it performs better than the previous model.
-[Continuous delivery](/devops/deliver/what-is-continuous-delivery) (CD) takes models from the CI process and creates an endpoint for each improved Custom Speech model. CD makes endpoints easily available to be integrated into solutions.
+[Continuous delivery](/devops/deliver/what-is-continuous-delivery) (CD) takes models from the CI process and creates an endpoint for each improved custom speech model. CD makes endpoints easily available to be integrated into solutions.
Custom CI/CD solutions are possible, but for a robust, pre-built solution, use the [Speech DevOps template repository](https://github.com/Azure-Samples/Speech-Service-DevOps-Template), which executes CI/CD workflows using GitHub Actions.
-## CI/CD workflows for Custom Speech
+## CI/CD workflows for custom speech
-The purpose of these workflows is to ensure that each Custom Speech model has better recognition accuracy than the previous build. If the updates to the testing and/or training data improve the accuracy, these workflows create a new Custom Speech endpoint.
+The purpose of these workflows is to ensure that each custom speech model has better recognition accuracy than the previous build. If the updates to the testing and/or training data improve the accuracy, these workflows create a new custom speech endpoint.
-Git servers such as GitHub and Azure DevOps can run automated workflows when specific Git events happen, such as merges or pull requests. For example, a CI workflow can be triggered when updates to testing data are pushed to the *main* branch. Different Git Servers will have different tooling, but will allow scripting command-line interface (CLI) commands so that they can execute on a build server.
+Git servers such as GitHub and Azure DevOps can run automated workflows when specific Git events happen, such as merges or pull requests. For example, a CI workflow can be triggered when updates to testing data are pushed to the *main* branch. Different Git Servers have different tooling, but allow scripting command-line interface (CLI) commands so that they can execute on a build server.
-Along the way, the workflows should name and store data, tests, test files, models, and endpoints such that they can be traced back to the commit or version they came from. It is also helpful to name these assets so that it is easy to see which were created after updating testing data versus training data.
+Along the way, the workflows should name and store data, tests, test files, models, and endpoints such that they can be traced back to the commit or version they came from. It's also helpful to name these assets so that it's easy to see which were created after updating testing data versus training data.
### CI workflow for testing data updates
-The principal purpose of the CI/CD workflows is to build a new model using the training data, and to test that model using the testing data to establish whether the [Word Error Rate](how-to-custom-speech-evaluate-data.md#evaluate-word-error-rate-wer) (WER) has improved compared to the previous best-performing model (the "benchmark model"). If the new model performs better, it becomes the new benchmark model against which future models are compared.
+The principal purpose of the CI/CD workflows is to build a new model using the training data, and to test that model using the testing data to establish whether the [Word Error Rate](how-to-custom-speech-evaluate-data.md#evaluate-word-error-rate-wer) (WER) improved compared to the previous best-performing model (the "benchmark model"). If the new model performs better, it becomes the new benchmark model against which future models are compared.
-The CI workflow for testing data updates should retest the current benchmark model with the updated test data to calculate the revised WER. This ensures that when the WER of a new model is compared to the WER of the benchmark, both models have been tested against the same test data and you're comparing like with like.
+The CI workflow for testing data updates should retest the current benchmark model with the updated test data to calculate the revised WER. This ensures that when the WER of a new model is compared to the WER of the benchmark, both models were tested against the same test data and you're comparing like with like.
This workflow should trigger on updates to testing data and:

- Test the benchmark model against the updated testing data.
- Store the test output, which contains the WER of the benchmark model, using the updated data.
- The WER from these tests will become the new benchmark WER that future models must beat.
-- The CD workflow does not execute for updates to testing data.
+- The CD workflow doesn't execute for updates to testing data.
### CI workflow for training data updates
This workflow should trigger on updates to training data and:
- Test the new model against the testing data.
- Store the test output, which contains the WER.
- Compare the WER from the new model to the WER from the benchmark model.
-- If the WER does not improve, stop the workflow.
-- If the WER improves, execute the CD workflow to create a Custom Speech endpoint.
+- If the WER doesn't improve, stop the workflow.
+- If the WER improves, execute the CD workflow to create a custom speech endpoint.
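As a loose illustration of this gate (not code from the DevOps template repository; the variable names and WER values are made up):

```bash
# Hedged sketch: only run the CD workflow when the new model's WER beats the benchmark.
BENCHMARK_WER=12.4   # WER of the current benchmark model on the shared test data
NEW_WER=11.8         # WER of the newly trained model on the same test data

if (( $(echo "$NEW_WER < $BENCHMARK_WER" | bc -l) )); then
  echo "New model improves WER ($NEW_WER% < $BENCHMARK_WER%): trigger the CD workflow."
else
  echo "No WER improvement: stop the workflow."
  exit 0
fi
```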
### CD workflow
After an update to the training data improves a model's recognition, the CD work
Most teams require a manual review and approval process for deployment to a production environment. For a production deployment, you might want to make sure it happens when key people on the development team are available for support, or during low-traffic periods.
-### Tools for Custom Speech workflows
+### Tools for custom speech workflows
-Use the following tools for CI/CD automation workflows for Custom Speech:
+Use the following tools for CI/CD automation workflows for custom speech:
- [Azure CLI](/cli/azure/) to create an Azure service principal for authentication, query Azure subscriptions, and store test results in Azure Blob storage.
- [Azure AI Speech CLI](spx-overview.md) to interact with the Speech service from the command line or an automated workflow.
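For example, a workflow step might authenticate and configure these tools roughly as follows. This is a hedged sketch only: the service principal name is a placeholder, and the exact commands used by the template repository may differ.

```bash
# Hedged sketch: create a service principal with the Azure CLI, then point the Speech CLI
# at your Speech resource. All values are placeholders.
az ad sp create-for-rbac --name "sp-custom-speech-cicd"
spx config @key --set YourSubscriptionKey
spx config @region --set YourServiceRegion
```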
-## DevOps solution for Custom Speech using GitHub Actions
+## DevOps solution for custom speech using GitHub Actions
-For an already-implemented DevOps solution for Custom Speech, go to the [Speech DevOps template repo](https://github.com/Azure-Samples/Speech-Service-DevOps-Template). Create a copy of the template and begin development of custom models with a robust DevOps system that includes testing, training, and versioning using GitHub Actions. The repository provides sample testing and training data to aid in setup and explain the workflow. After initial setup, replace the sample data with your project data.
+For an already-implemented DevOps solution for custom speech, go to the [Speech DevOps template repo](https://github.com/Azure-Samples/Speech-Service-DevOps-Template). Create a copy of the template and begin development of custom models with a robust DevOps system that includes testing, training, and versioning using GitHub Actions. The repository provides sample testing and training data to aid in setup and explain the workflow. After initial setup, replace the sample data with your project data.
The [Speech DevOps template repo](https://github.com/Azure-Samples/Speech-Service-DevOps-Template) provides the infrastructure and detailed guidance to:
The [Speech DevOps template repo](https://github.com/Azure-Samples/Speech-Servic
## Next steps

-- Use the [Speech DevOps template repo](https://github.com/Azure-Samples/Speech-Service-DevOps-Template) to implement DevOps for Custom Speech with GitHub Actions.
+- Use the [Speech DevOps template repo](https://github.com/Azure-Samples/Speech-Service-DevOps-Template) to implement DevOps for custom speech with GitHub Actions.
ai-services How To Custom Speech Create Project https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/how-to-custom-speech-create-project.md
Title: Create a Custom Speech project - Speech service
+ Title: Create a custom speech project - Speech service
-description: Learn about how to create a project for Custom Speech.
+description: Learn about how to create a project for custom speech.
Previously updated : 11/29/2022 Last updated : 1/19/2024 zone_pivot_groups: speech-studio-cli-rest
-# Create a Custom Speech project
+# Create a custom speech project
-Custom Speech projects contain models, training and testing datasets, and deployment endpoints. Each project is specific to a [locale](language-support.md?tabs=stt). For example, you might create a project for English in the United States.
+Custom speech projects contain models, training and testing datasets, and deployment endpoints. Each project is specific to a [locale](language-support.md?tabs=stt). For example, you might create a project for English in the United States.
## Create a project

::: zone pivot="speech-studio"
-To create a Custom Speech project, follow these steps:
+To create a custom speech project, follow these steps:
1. Sign in to the [Speech Studio](https://aka.ms/speechstudio/customspeech).
1. Select the subscription and Speech resource to work with.
To create a Custom Speech project, follow these steps:
1. Select **Custom speech** > **Create a new project**.
1. Follow the instructions provided by the wizard to create your project.
-Select the new project by name or select **Go to project**. You will see these menu items in the left panel: **Speech datasets**, **Train custom models**, **Test models**, and **Deploy models**.
+Select the new project by name or select **Go to project**. You'll see these menu items in the left panel: **Speech datasets**, **Train custom models**, **Test models**, and **Deploy models**.
::: zone-end
Select the new project by name or select **Go to project**. You will see these m
To create a project, use the `spx csr project create` command. Construct the request parameters according to the following instructions:

- Set the required `language` parameter. The locale of the project and the contained datasets should be the same. The locale can't be changed later. The Speech CLI `language` parameter corresponds to the `locale` property in the JSON request and response.
-- Set the required `name` parameter. This is the name that will be displayed in the Speech Studio. The Speech CLI `name` parameter corresponds to the `displayName` property in the JSON request and response.
+- Set the required `name` parameter. This is the name that is displayed in the Speech Studio. The Speech CLI `name` parameter corresponds to the `displayName` property in the JSON request and response.
Here's an example Speech CLI command that creates a project:
spx help csr project
To create a project, use the [Projects_Create](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Projects_Create) operation of the [Speech to text REST API](rest-speech-to-text.md). Construct the request body according to the following instructions:

- Set the required `locale` property. This should be the locale of the contained datasets. The locale can't be changed later.
-- Set the required `displayName` property. This is the project name that will be displayed in the Speech Studio.
+- Set the required `displayName` property. This is the project name that is displayed in the Speech Studio.
Make an HTTP POST request using the URI as shown in the following [Projects_Create](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Projects_Create) example. Replace `YourSubscriptionKey` with your Speech resource key, replace `YourServiceRegion` with your Speech resource region, and set the request body properties as previously described.
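As a rough, hedged sketch of such a request (the v3.1 projects path is an assumption, and the article's own example may differ):

```bash
# Hedged sketch of a Projects_Create request; the path and values are placeholders/assumptions.
curl -v -X POST "https://YourServiceRegion.api.cognitive.microsoft.com/speechtotext/v3.1/projects" \
  -H "Ocp-Apim-Subscription-Key: YourSubscriptionKey" \
  -H "Content-Type: application/json" \
  -d '{
  "locale": "en-US",
  "displayName": "My Project"
}'
```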
The top-level `self` property in the response body is the project's URI. Use thi
## Choose your model
-There are a few approaches to using Custom Speech models:
+There are a few approaches to using custom speech models:
- The base model provides accurate speech recognition out of the box for a range of [scenarios](overview.md#speech-scenarios). Base models are updated periodically to improve accuracy and quality. We recommend that if you use base models, use the latest default base models. If a required customization capability is only available with an older model, then you can choose an older base model.
- A custom model augments the base model to include domain-specific vocabulary shared across all areas of the custom domain.
- Multiple custom models can be used when the custom domain has multiple areas, each with a specific vocabulary.
-One recommended way to see if the base model will suffice is to analyze the transcription produced from the base model and compare it with a human-generated transcript for the same audio. You can compare the transcripts and obtain a [word error rate (WER)](how-to-custom-speech-evaluate-data.md#evaluate-word-error-rate-wer) score. If the WER score is high, training a custom model to recognize the incorrectly identified words is recommended.
+One recommended way to see if the base model suffices is to analyze the transcription produced from the base model and compare it with a human-generated transcript for the same audio. You can compare the transcripts and obtain a [word error rate (WER)](how-to-custom-speech-evaluate-data.md#evaluate-word-error-rate-wer) score. If the WER score is high, training a custom model to recognize the incorrectly identified words is recommended.
Multiple models are recommended if the vocabulary varies across the domain areas. For instance, Olympic commentators report on various events, each associated with its own vernacular. Because each Olympic event vocabulary differs significantly from others, building a custom model specific to an event increases accuracy by limiting the utterance data relative to that particular event. As a result, the model doesn't need to sift through unrelated data to make a match. Regardless, training still requires a decent variety of training data. Include audio from various commentators who have different accents, gender, age, etcetera.

## Model stability and lifecycle
-A base model or custom model deployed to an endpoint using Custom Speech is fixed until you decide to update it. The speech recognition accuracy and quality will remain consistent, even when a new base model is released. This allows you to lock in the behavior of a specific model until you decide to use a newer model.
+A base model or custom model deployed to an endpoint using custom speech is fixed until you decide to update it. The speech recognition accuracy and quality remain consistent, even when a new base model is released. This allows you to lock in the behavior of a specific model until you decide to use a newer model.
Whether you train your own model or use a snapshot of a base model, you can use the model for a limited time. For more information, see [Model and endpoint lifecycle](./how-to-custom-speech-model-and-endpoint-lifecycle.md).
ai-services How To Custom Speech Deploy Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/how-to-custom-speech-deploy-model.md
Title: Deploy a Custom Speech model - Speech service
+ Title: Deploy a custom speech model - Speech service
-description: Learn how to deploy Custom Speech models.
+description: Learn how to deploy custom speech models.
Previously updated : 11/29/2022 Last updated : 1/19/2024 zone_pivot_groups: speech-studio-cli-rest
-# Deploy a Custom Speech model
+# Deploy a custom speech model
-In this article, you'll learn how to deploy an endpoint for a Custom Speech model. With the exception of [batch transcription](batch-transcription.md), you must deploy a custom endpoint to use a Custom Speech model.
+In this article, you learn how to deploy an endpoint for a custom speech model. Except for [batch transcription](batch-transcription.md), you must deploy a custom endpoint to use a custom speech model.
> [!TIP]
-> A hosted deployment endpoint isn't required to use Custom Speech with the [Batch transcription API](batch-transcription.md). You can conserve resources if the [custom speech model](how-to-custom-speech-train-model.md) is only used for batch transcription. For more information, see [Speech service pricing](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/).
+> A hosted deployment endpoint isn't required to use custom speech with the [Batch transcription API](batch-transcription.md). You can conserve resources if the [custom speech model](how-to-custom-speech-train-model.md) is only used for batch transcription. For more information, see [Speech service pricing](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/).
You can deploy an endpoint for a base or custom model, and then [update](#change-model-and-redeploy-endpoint) the endpoint later to use a better trained model.
You can deploy an endpoint for a base or custom model, and then [update](#change
To create a custom endpoint, follow these steps:

1. Sign in to the [Speech Studio](https://aka.ms/speechstudio/customspeech).
-1. Select **Custom Speech** > Your project name > **Deploy models**.
+1. Select **Custom speech** > Your project name > **Deploy models**.
- If this is your first endpoint, you'll notice that there are no endpoints listed in the table. After you create an endpoint, you use this page to track each deployed endpoint.
+ If this is your first endpoint, you notice that there are no endpoints listed in the table. After you create an endpoint, you use this page to track each deployed endpoint.
1. Select **Deploy model** to start the new endpoint wizard.
To create an endpoint and deploy a model, use the `spx csr endpoint create` comm
- Set the `project` parameter to the ID of an existing project. This is recommended so that you can also view and manage the endpoint in Speech Studio. You can run the `spx csr project list` command to get available projects.
- Set the required `model` parameter to the ID of the model that you want deployed to the endpoint.
- Set the required `language` parameter. The endpoint locale must match the locale of the model. The locale can't be changed later. The Speech CLI `language` parameter corresponds to the `locale` property in the JSON request and response.
-- Set the required `name` parameter. This is the name that will be displayed in the Speech Studio. The Speech CLI `name` parameter corresponds to the `displayName` property in the JSON request and response.
+- Set the required `name` parameter. This is the name that is displayed in the Speech Studio. The Speech CLI `name` parameter corresponds to the `displayName` property in the JSON request and response.
- Optionally, you can set the `logging` parameter. Set this to `enabled` to enable audio and diagnostic [logging](#view-logging-data) of the endpoint's traffic. The default is `false`.

Here's an example Speech CLI command to create an endpoint and deploy a model:
To create an endpoint and deploy a model, use the [Endpoints_Create](https://eas
- Set the `project` property to the URI of an existing project. This is recommended so that you can also view and manage the endpoint in Speech Studio. You can make a [Projects_List](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Projects_List) request to get available projects.
- Set the required `model` property to the URI of the model that you want deployed to the endpoint.
- Set the required `locale` property. The endpoint locale must match the locale of the model. The locale can't be changed later.
-- Set the required `displayName` property. This is the name that will be displayed in the Speech Studio.
+- Set the required `displayName` property. This is the name that is displayed in the Speech Studio.
- Optionally, you can set the `loggingEnabled` property within `properties`. Set this to `true` to enable audio and diagnostic [logging](#view-logging-data) of the endpoint's traffic. The default is `false`.

Make an HTTP POST request using the URI as shown in the following [Endpoints_Create](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Endpoints_Create) example. Replace `YourSubscriptionKey` with your Speech resource key, replace `YourServiceRegion` with your Speech resource region, and set the request body properties as previously described.
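As a rough, hedged sketch of such a request (the v3.1 endpoints path is inferred from the log-retrieval example later in this article; all IDs are placeholders, and the article's own example may differ):

```bash
# Hedged sketch of an Endpoints_Create request; all IDs and values are placeholders.
curl -v -X POST "https://YourServiceRegion.api.cognitive.microsoft.com/speechtotext/v3.1/endpoints" \
  -H "Ocp-Apim-Subscription-Key: YourSubscriptionKey" \
  -H "Content-Type: application/json" \
  -d '{
  "displayName": "My Endpoint",
  "locale": "en-US",
  "model": { "self": "https://YourServiceRegion.api.cognitive.microsoft.com/speechtotext/v3.1/models/YourModelId" },
  "project": { "self": "https://YourServiceRegion.api.cognitive.microsoft.com/speechtotext/v3.1/projects/YourProjectId" },
  "properties": { "loggingEnabled": true }
}'
```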
An endpoint can be updated to use another model that was created by the same Spe
To use a new model and redeploy the custom endpoint:

1. Sign in to the [Speech Studio](https://aka.ms/speechstudio/customspeech).
-1. Select **Custom Speech** > Your project name > **Deploy models**.
+1. Select **Custom speech** > Your project name > **Deploy models**.
1. Select the link to an endpoint by name, and then select **Change model**.
1. Select the new model that you want the endpoint to use.
1. Select **Done** to save and redeploy the endpoint.
You should receive a response body in the following format:
::: zone-end
-The redeployment takes several minutes to complete. In the meantime, your endpoint will use the previous model without interruption of service.
+The redeployment takes several minutes to complete. In the meantime, your endpoint uses the previous model without interruption of service.
## View logging data
Logging data is available for export if you configured it while creating the end
To download the endpoint logs:

1. Sign in to the [Speech Studio](https://aka.ms/speechstudio/customspeech).
-1. Select **Custom Speech** > Your project name > **Deploy models**.
+1. Select **Custom speech** > Your project name > **Deploy models**.
1. Select the link by endpoint name.
1. Under **Content logging**, select **Download log**.
Here's an example Speech CLI command that gets logs for an endpoint:
spx csr endpoint list --api-version v3.1 --endpoint YourEndpointId
```
-The location of each log file with more details are returned in the response body.
+The locations of each log file with more details are returned in the response body.
::: zone-end
Make an HTTP GET request using the "logs" URI from the previous response body. R
curl -v -X GET "https://YourServiceRegion.api.cognitive.microsoft.com/speechtotext/v3.1/endpoints/YourEndpointId/files/logs" -H "Ocp-Apim-Subscription-Key: YourSubscriptionKey"
```
-The location of each log file with more details are returned in the response body.
+The locations of each log file with more details are returned in the response body.
::: zone-end
-Logging data is available on Microsoft-owned storage for 30 days, after which it will be removed. If your own storage account is linked to the Azure AI services subscription, the logging data won't be automatically deleted.
+Logging data is available on Microsoft-owned storage for 30 days, and then it's removed. If your own storage account is linked to the Azure AI services subscription, the logging data isn't automatically deleted.
## Next steps

-- [CI/CD for Custom Speech](how-to-custom-speech-continuous-integration-continuous-deployment.md)
-- [Custom Speech model lifecycle](how-to-custom-speech-model-and-endpoint-lifecycle.md)
+- [CI/CD for custom speech](how-to-custom-speech-continuous-integration-continuous-deployment.md)
+- [Custom speech model lifecycle](how-to-custom-speech-model-and-endpoint-lifecycle.md)
ai-services How To Custom Speech Display Text Format https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/how-to-custom-speech-display-text-format.md
Previously updated : 1/10/2024 Last updated : 1/19/2024
The display text pipeline is composed of a sequence of display format builders.
- **Inverse Text Normalization (ITN)** - To convert the text of spoken form numbers to display form. For example: `"I spend twenty dollars" -> "I spend $20"`
- **Capitalization** - To upper case entity names, acronyms, or the first letter of a sentence. For example: `"she is from microsoft" -> "She is from Microsoft"`
-- **Profanity filtering** - Masking or removal of profanity words from a sentence. For example, assuming "abcd" is a profanity word, then the word will be masked by profanity masking: `"I never say abcd" -> "I never say ****"`
+- **Profanity filtering** - Masking or removal of profanity words from a sentence. For example, assuming "abcd" is a profanity word, then the word is masked by profanity masking: `"I never say abcd" -> "I never say ****"`
-The base builders of the display text pipeline are maintained by Microsoft for the general purpose display processing tasks. You get the base builders by default when you use the Speech service. For more information about out-of-the-box formatting, see [Display text format](./display-text-format.md).
+Microsoft maintains the base builders of the display text pipeline for the general purpose display processing tasks. You get the base builders by default when you use the Speech service. For more information about out-of-the-box formatting, see [Display text format](./display-text-format.md).
## Custom display text formatting
-Beside the base builders maintained by Microsoft for the general purpose display processing tasks, you can define custom display text formatting rules to customize the display text formatting pipeline for your specific scenarios. The custom display text formatting rules are defined in a custom display text formatting file.
+Besides the base builders maintained by Microsoft, you can define custom display text formatting rules to customize the display text formatting pipeline for your specific scenarios. The custom display text formatting rules are defined in a custom display text formatting file.
- [Custom ITN](#custom-itn) - Extend the functionality of base ITN by applying a rule-based custom ITN model from the customer.
- [Custom rewrite](#custom-rewrite) - Rewrite one phrase to another based on a rule-based model from the customer.
There are four categories of pattern matching with custom ITN rules.
### Patterns with literals
-For example, a developer might have an item (such as a product) named with the alphanumeric form `JO:500`. The job of our system will be to figure out that users might say the letter part as `J O`, or they might say `joe`, and the number part as `five hundred` or `five zero zero` or `five oh oh` or `five double zero`, and then build a model that maps all of these possibilities back to `JO:500` (including inserting the colon).
+For example, a developer might have an item (such as a product) named with the alphanumeric form `JO:500`. The Speech service figures out that users might say the letter part as `J O`, or they might say `joe`, and the number part as `five hundred` or `five zero zero` or `five oh oh` or `five double zero`, and then build a model that maps all of these possibilities back to `JO:500` (including inserting the colon).
-Patterns can be applied in parallel by specifying one rule per line in the display text formatting file. Here is an example of a display text formatting file that specifies two rules:
+Patterns can be applied in parallel by specifying one rule per line in the display text formatting file. Here's an example of a display text formatting file that specifies two rules:
```text
JO:500
MM:760
### Patterns with wildcards
-Suppose a customer needs to refer to a whole series of alphanumeric items named `JO:500`, `JO:600`, `JO:700`, etc. We can support this without requiring spelling out all possibilities in several ways.
+You can refer to a whole series of alphanumeric items (such as `JO:500`, `JO:600`, `JO:700`) without having to spell out all possibilities in several ways.
Character ranges can be specified with the notation `[...]`, so `JO:[5-7]00` is equivalent to writing out three patterns. There's also a set of wildcard items that can be used. One of these is `\d`, which means any digit. So `JO:\d00` covers `JO:000`, `JO:100`, and others up to `JO:900`.
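Written as rules in a display text formatting file (one rule per line), the character-range and digit-wildcard forms from this paragraph would look like this:

```text
JO:[5-7]00
JO:\d00
```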
-Like a regular expression, there are several predefined character classes for an ITN rule:
+Like a regular expression, there are multiple predefined character classes for an ITN rule:
* `\d` - match a digit from '0' to '9', and output it directly
* `\l` - match a letter (case-insensitive) and transduce it to lower case
A pattern such as `(AB|CD)-(\d)+` would represent constructs like "AB-9" or "CD-
### Patterns with explicit replacement
-The general philosophy is "you show us what the output should look like, and the Speech service figures out how people say it." But this doesn't always work because some scenarios might have quirky unpredictable ways of saying things, or the Speech service background rules might have gaps. For example, there can be colloquial pronunciations for initials and acronyms--`ZPI` might be spoken as `zippy`. In this case a pattern like `ZPI-\d\d` is unlikely to work if a user says `zippy twenty two`. For this sort of situation, there's a display text format notation `{spoken>written}`. This particular case could be written out `{zippy>ZPI}-\d\d`.
+The general philosophy is "you show us what the output should look like, and the Speech service figures out how people say it." But this doesn't always work because some scenarios might have quirky unpredictable ways of saying things, or the Speech service background rules might have gaps. For example, there can be colloquial pronunciations for initials and acronyms - `ZPI` might be spoken as `zippy`. In this case, a pattern like `ZPI-\d\d` is unlikely to work if a user says `zippy twenty two`. For this sort of situation, there's a display text format notation `{spoken>written}`. This particular case could be written out `{zippy>ZPI}-\d\d`.
This can be useful for handling things that the Speech service's mapping rules don't yet support. For example, you might write a pattern `\d0-\d0` expecting the system to understand that "-" can mean a range, and should be pronounced `to`, as in `twenty to thirty`. But perhaps it doesn't. So you can write a more explicit pattern like `\d0{to>-}\d0` and tell it how you expect the dash to be read.
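Collected into a display text formatting file (one rule per line), the two explicit-replacement examples from this section would look like this:

```text
{zippy>ZPI}-\d\d
\d0{to>-}\d0
```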
ai-services How To Custom Speech Evaluate Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/how-to-custom-speech-evaluate-data.md
Title: Test accuracy of a Custom Speech model - Speech service
+ Title: Test accuracy of a custom speech model - Speech service
description: In this article, you learn how to quantitatively measure and improve the quality of our speech to text model or your custom model. Previously updated : 12/20/2023 Last updated : 1/19/2024 zone_pivot_groups: speech-studio-cli-rest show_latex: true no-loc: [$$, '\times', '\over']
-# Test accuracy of a Custom Speech model
+# Test accuracy of a custom speech model
In this article, you learn how to quantitatively measure and improve the accuracy of the base speech to text model or your own custom models. [Audio + human-labeled transcript](how-to-custom-speech-test-and-train.md#audio--human-labeled-transcript-data-for-training-or-testing) data is required to test accuracy. You should provide from 30 minutes to 5 hours of representative audio.
You can test the accuracy of your custom model by creating a test. A test requir
Follow these steps to create a test:

1. Sign in to the [Speech Studio](https://aka.ms/speechstudio/customspeech).
-1. Select **Custom Speech** > Your project name > **Test models**.
+1. Select **Custom speech** > Your project name > **Test models**.
1. Select **Create new test**.
1. Select **Evaluate accuracy** > **Next**.
1. Select one audio + human-labeled transcription dataset, and then select **Next**. If there aren't any datasets available, cancel the setup, and then go to the **Speech datasets** menu to [upload datasets](how-to-custom-speech-upload-data.md).
Follow these steps to create a test:
To create a test, use the `spx csr evaluation create` command. Construct the request parameters according to the following instructions:

- Set the `project` parameter to the ID of an existing project. This is recommended so that you can also view the test in Speech Studio. You can run the `spx csr project list` command to get available projects.
+- Set the `project` parameter to the ID of an existing project. This parameter is recommended so that you can also view the test in Speech Studio. You can run the `spx csr project list` command to get available projects.
- Set the required `model1` parameter to the ID of a model that you want to test.
- Set the required `model2` parameter to the ID of another model that you want to test. If you don't want to compare two models, use the same model for both `model1` and `model2`.
- Set the required `dataset` parameter to the ID of a dataset that you want to use for the test.
-- Set the `language` parameter, otherwise the Speech CLI will set "en-US" by default. This should be the locale of the dataset contents. The locale can't be changed later. The Speech CLI `language` parameter corresponds to the `locale` property in the JSON request and response.
-- Set the required `name` parameter. This is the name that will be displayed in the Speech Studio. The Speech CLI `name` parameter corresponds to the `displayName` property in the JSON request and response.
+- Set the `language` parameter, otherwise the Speech CLI sets "en-US" by default. This parameter should be the locale of the dataset contents. The locale can't be changed later. The Speech CLI `language` parameter corresponds to the `locale` property in the JSON request and response.
+- Set the required `name` parameter. This parameter is the name that is displayed in the Speech Studio. The Speech CLI `name` parameter corresponds to the `displayName` property in the JSON request and response.
Here's an example Speech CLI command that creates a test:
spx help csr evaluation
To create a test, use the [Evaluations_Create](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Evaluations_Create) operation of the [Speech to text REST API](rest-speech-to-text.md). Construct the request body according to the following instructions:

- Set the `project` property to the URI of an existing project. This is recommended so that you can also view the test in Speech Studio. You can make a [Projects_List](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Projects_List) request to get available projects.
+- Set the `project` property to the URI of an existing project. This property is recommended so that you can also view the test in Speech Studio. You can make a [Projects_List](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Projects_List) request to get available projects.
- Set the `testingKind` property to `Evaluation` within `customProperties`. If you don't specify `Evaluation`, the test is treated as a quality inspection test. Whether the `testingKind` property is set to `Evaluation` or `Inspection`, or not set, you can access the accuracy scores via the API, but not in the Speech Studio.
- Set the required `model1` property to the URI of a model that you want to test.
- Set the required `model2` property to the URI of another model that you want to test. If you don't want to compare two models, use the same model for both `model1` and `model2`.
- Set the required `dataset` property to the URI of a dataset that you want to use for the test.
-- Set the required `locale` property. This should be the locale of the dataset contents. The locale can't be changed later.
-- Set the required `displayName` property. This is the name that will be displayed in the Speech Studio.
+- Set the required `locale` property. The value should be the locale of the dataset contents. The locale can't be changed later.
+- Set the required `displayName` property. This property is the name that is displayed in the Speech Studio.
Make an HTTP POST request using the URI as shown in the following example. Replace `YourSubscriptionKey` with your Speech resource key, replace `YourServiceRegion` with your Speech resource region, and set the request body properties as previously described.
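Here's a minimal sketch of such a request. It assumes the v3.1 `evaluations` endpoint, placeholder project, model, and dataset IDs, and the `self` reference shape that the v3.1 API uses for entity URIs elsewhere in this article:
```
curl -v -X POST "https://YourServiceRegion.api.cognitive.microsoft.com/speechtotext/v3.1/evaluations" \
  -H "Ocp-Apim-Subscription-Key: YourSubscriptionKey" \
  -H "Content-Type: application/json" \
  -d '{
  "project": {"self": "https://YourServiceRegion.api.cognitive.microsoft.com/speechtotext/v3.1/projects/YourProjectId"},
  "model1": {"self": "https://YourServiceRegion.api.cognitive.microsoft.com/speechtotext/v3.1/models/YourModel1Id"},
  "model2": {"self": "https://YourServiceRegion.api.cognitive.microsoft.com/speechtotext/v3.1/models/YourModel2Id"},
  "dataset": {"self": "https://YourServiceRegion.api.cognitive.microsoft.com/speechtotext/v3.1/datasets/YourDatasetId"},
  "customProperties": {"testingKind": "Evaluation"},
  "displayName": "My Evaluation",
  "locale": "en-US"
}'
```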
You should get the test results and [evaluate](#evaluate-word-error-rate-wer) th
Follow these steps to get test results: 1. Sign in to the [Speech Studio](https://aka.ms/speechstudio/customspeech).
-1. Select **Custom Speech** > Your project name > **Test models**.
+1. Select **Custom speech** > Your project name > **Test models**.
1. Select the link by test name.
1. After the test is complete, as indicated by the status set to *Succeeded*, you should see results that include the WER number for each tested model.
-This page lists all the utterances in your dataset and the recognition results, alongside the transcription from the submitted dataset. You can toggle various error types, including insertion, deletion, and substitution. By listening to the audio and comparing recognition results in each column, you can decide which model meets your needs and determine where additional training and improvements are required.
+This page lists all the utterances in your dataset and the recognition results, alongside the transcription from the submitted dataset. You can toggle various error types, including insertion, deletion, and substitution. By listening to the audio and comparing recognition results in each column, you can decide which model meets your needs and determine where more training and improvements are required.
::: zone-end
If you want to replicate WER measurements locally, you can use the sclite tool f
## Resolve errors and improve WER
-You can use the WER calculation from the machine recognition results to evaluate the quality of the model you're using with your app, tool, or product. A WER of 5-10% is considered to be good quality and is ready to use. A WER of 20% is acceptable, but you might want to consider additional training. A WER of 30% or more signals poor quality and requires customization and training.
+You can use the WER calculation from the machine recognition results to evaluate the quality of the model you're using with your app, tool, or product. A WER of 5-10% indicates good quality, and the model is ready to use. A WER of 20% is acceptable, but you might want to consider more training. A WER of 30% or more signals poor quality and requires customization and training.
-How the errors are distributed is important. When many deletion errors are encountered, it's usually because of weak audio signal strength. To resolve this issue, you need to collect audio data closer to the source. Insertion errors mean that the audio was recorded in a noisy environment and crosstalk might be present, causing recognition issues. Substitution errors are often encountered when an insufficient sample of domain-specific terms has been provided as either human-labeled transcriptions or related text.
+How the errors are distributed is important. When many deletion errors are encountered, it's usually because of weak audio signal strength. To resolve this issue, you need to collect audio data closer to the source. Insertion errors mean that the audio was recorded in a noisy environment and crosstalk might be present, causing recognition issues. Substitution errors are often encountered when an insufficient sample of domain-specific terms is provided as either human-labeled transcriptions or related text.
-By analyzing individual files, you can determine what type of errors exist, and which errors are unique to a specific file. Understanding issues at the file level will help you target improvements.
+By analyzing individual files, you can determine what type of errors exist, and which errors are unique to a specific file. Understanding issues at the file level helps you target improvements.
## Evaluate token error rate (TER)
$$
TER = \frac{I+D+S}{N} \times 100
$$
-The formula of TER calculation is also very similar to WER. The only difference is that TER is calculated based on the token level instead of word level.
+The formula for calculating TER is similar to the WER formula. The only difference is that TER is calculated at the token level instead of the word level.
* Insertion (I): Tokens that are incorrectly added in the hypothesis transcript
* Deletion (D): Tokens that are undetected in the hypothesis transcript
* Substitution (S): Tokens that were substituted between reference and hypothesis
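For example, if the reference transcripts in a test set contain 1,000 tokens and the recognition results have 10 inserted, 15 deleted, and 25 substituted tokens, the token error rate is:
$$
TER = \frac{10+15+25}{1000} \times 100 = 5\%
$$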
-In a real-world case, you may analyze both WER and TER results to get the desired improvements.
+In a real-world case, you can analyze both WER and TER results to get the desired improvements.
> [!NOTE] > To measure TER, you need to make sure the [audio + transcript testing data](./how-to-custom-speech-test-and-train.md#audio--human-labeled-transcript-data-for-training-or-testing) includes transcripts with display formatting such as punctuation, capitalization, and ITN.
ai-services How To Custom Speech Human Labeled Transcriptions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/how-to-custom-speech-human-labeled-transcriptions.md
Title: Human-labeled transcriptions guidelines - Speech service
-description: You use human-labeled transcriptions with your audio data to improve speech recognition accuracy. This is especially helpful when words are deleted or incorrectly replaced.
+description: You use human-labeled transcriptions with your audio data to improve speech recognition accuracy. This kind of transcription is especially helpful when words are deleted or incorrectly replaced.
Previously updated : 05/08/2022 Last updated : 1/19/2024
Human-labeled transcriptions are word-by-word transcriptions of an audio file. You use human-labeled transcriptions to improve recognition accuracy, especially when words are deleted or incorrectly replaced. This guide can help you create high-quality transcriptions.
-A large sample of transcription data is required to improve recognition. We suggest providing between 1 and 20 hours of audio data. The Speech service will use up to 20 hours of audio for training. This guide is broken up by locale, with sections for US English, Mandarin Chinese, and German.
+A large sample of transcription data is required to improve recognition. We suggest providing between 1 and 20 hours of audio data. The Speech service uses up to 20 hours of audio for training. This guide has sections for US English, Mandarin Chinese, and German locales.
The transcriptions for all WAV files are contained in a single plain-text file (.txt or .tsv). Each line of the transcription file contains the name of one of the audio files, followed by the corresponding transcription. The file name and transcription are separated by a tab (`\t`).
speech03.wav the lazy dog was not amused
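For illustration, a transcription file for three audio files might look like the following sketch, where a tab character separates each file name from its transcription (the first two file names and sentences are placeholders):
```
speech01.wav	the quick brown fox jumped over the fence
speech02.wav	she sells seashells by the seashore
speech03.wav	the lazy dog was not amused
```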
The transcriptions are text-normalized so the system can process them. However, you must do some important normalizations before you upload the dataset.
-Human-labeled transcriptions for languages other than English and Mandarin Chinese, must be UTF-8 encoded with a byte-order marker. For other locales transcription requirements, see the sections below.
+Human-labeled transcriptions for languages other than English and Mandarin Chinese must be UTF-8 encoded with a byte-order marker. For the transcription requirements of other locales, see the following sections.
## en-US
Here are a few examples:
| Characters to avoid | Substitution | Notes |
| - | - | -- |
-| “Hello world” | "Hello world" | The opening and closing quotations marks have been substituted with appropriate ASCII characters. |
-| John’s day | John's day | The apostrophe has been substituted with the appropriate ASCII character. |
-| It was good—no, it was great! | it was good--no, it was great! | The em dash was substituted with two hyphens. |
+| “Hello world” | "Hello world" | The opening and closing quotation marks are substituted with appropriate ASCII characters. |
+| John’s day | John's day | The apostrophe is substituted with the appropriate ASCII character. |
+| It was good—no, it was great! | it was good--no, it was great! | The em dash is substituted with two hyphens. |
### Text normalization for US English
Text normalization is the transformation of words into a consistent format used when training a model. Some normalization rules are applied to text automatically; however, we recommend using these guidelines as you prepare your human-labeled transcription data:
- Write out abbreviations in words.
-- Write out non-standard numeric strings in words (such as accounting terms).
+- Write out nonstandard numeric strings in words (such as accounting terms).
- Non-alphabetic characters or mixed alphanumeric characters should be transcribed as pronounced.
- Abbreviations that are pronounced as words shouldn't be edited (such as "radar", "laser", "RAM", or "NATO").
- Write out abbreviations that are pronounced as separate letters with each letter separated by a space.
- If you use audio, transcribe numbers as words that match the audio (for example, "101" could be pronounced as "one oh one" or "one hundred and one").
-- Avoid repeating characters, words, or groups of words more than three times, such as "yeah yeah yeah yeah". Lines with such repetitions might be dropped by the Speech service.
+- Avoid repeating characters, words, or groups of words more than three times, such as "yeah yeah yeah yeah". The Speech service might drop lines with such repetition.
Here are a few examples of normalization that you should perform on the transcription:
The following normalization rules are automatically applied to transcriptions:
- Remove all punctuation, including various types of quotation marks ("test", 'test', "test„, and «test» are OK).
- Discard rows with any special characters from this set: ¢ ¤ ¥ ¦ § © ª ¬ ® ° ± ² µ × ÿ ج¬.
- Expand numbers to spoken form, including dollar or Euro amounts.
-- Accept umlauts only for a, o, and u. Others will be replaced by "th" or be discarded.
+- Accept umlauts only for a, o, and u. Others are replaced by "th" or discarded.
Here are a few examples of normalization automatically performed on the transcription:
Here are a few examples of normalization automatically performed on the transcri
## ja-JP
-In Japanese (ja-JP), there's a maximum length of 90 characters for each sentence. Lines with longer sentences will be discarded. To add longer text, insert a period in between.
+In Japanese (ja-JP), there's a maximum length of 90 characters for each sentence. Lines with longer sentences are discarded. To add longer text, insert a period in between.
## zh-CN
Here are a few examples:
| Characters to avoid | Substitution | Notes |
| - | -- | -- |
-| "你好" | "你好" | The opening and closing quotations marks have been substituted with appropriate characters. |
-| 需要什么帮助? | 需要什么帮助?| The question mark has been substituted with appropriate character. |
+| "你好" | "你好" | The opening and closing quotations marks are substituted with appropriate characters. |
+| 需要什么帮助? | 需要什么帮助?| The question mark is substituted with the appropriate character. |
### Text normalization for Mandarin Chinese
Here are a few examples of normalization that you should perform on the transcri
The following normalization rules are automatically applied to transcriptions:
-- Remove all punctuation
-- Expand numbers to spoken form
-- Convert full-width letters to half-width letters
-- Using uppercase letters for all English words
+- Remove all punctuation.
+- Expand numbers to spoken form.
+- Convert full-width letters to half-width letters.
+- Use uppercase letters for all English words.
Here are some examples of automatic transcription normalization:
ai-services How To Custom Speech Inspect Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/how-to-custom-speech-inspect-data.md
Title: Test recognition quality of a Custom Speech model - Speech service
+ Title: Test recognition quality of a custom speech model - Speech service
-description: Custom Speech lets you qualitatively inspect the recognition quality of a model. You can play back uploaded audio and determine if the provided recognition result is correct.
+description: Custom speech lets you qualitatively inspect the recognition quality of a model. You can play back uploaded audio and determine if the provided recognition result is correct.
Previously updated : 11/29/2022 Last updated : 1/19/2024 zone_pivot_groups: speech-studio-cli-rest
-# Test recognition quality of a Custom Speech model
+# Test recognition quality of a custom speech model
-You can inspect the recognition quality of a Custom Speech model in the [Speech Studio](https://aka.ms/speechstudio/customspeech). You can play back uploaded audio and determine if the provided recognition result is correct. After a test has been successfully created, you can see how a model transcribed the audio dataset, or compare results from two models side by side.
+You can inspect the recognition quality of a custom speech model in the [Speech Studio](https://aka.ms/speechstudio/customspeech). You can play back uploaded audio and determine if the provided recognition result is correct. After a test is successfully created, you can see how a model transcribed the audio dataset, or compare results from two models side by side.
Side-by-side model testing is useful to validate which speech recognition model is best for an application. For an objective measure of accuracy, which requires transcription datasets input, see [Test model quantitatively](how-to-custom-speech-evaluate-data.md).
Side-by-side model testing is useful to validate which speech recognition model
Follow these instructions to create a test: 1. Sign in to the [Speech Studio](https://aka.ms/speechstudio/customspeech).
-1. Navigate to **Speech Studio** > **Custom Speech** and select your project name from the list.
+1. Navigate to **Speech Studio** > **Custom speech** and select your project name from the list.
1. Select **Test models** > **Create new test**.
1. Select **Inspect quality (Audio-only data)** > **Next**.
1. Choose an audio dataset that you'd like to use for testing, and then select **Next**. If there aren't any datasets available, cancel the setup, and then go to the **Speech datasets** menu to [upload datasets](how-to-custom-speech-upload-data.md).
Follow these instructions to create a test:
To create a test, use the `spx csr evaluation create` command. Construct the request parameters according to the following instructions:
-- Set the `project` parameter to the ID of an existing project. This is recommended so that you can also view the test in Speech Studio. You can run the `spx csr project list` command to get available projects.
+- Set the `project` parameter to the ID of an existing project. This parameter is recommended so that you can also view the test in Speech Studio. You can run the `spx csr project list` command to get available projects.
- Set the required `model1` parameter to the ID of a model that you want to test.
- Set the required `model2` parameter to the ID of another model that you want to test. If you don't want to compare two models, use the same model for both `model1` and `model2`.
- Set the required `dataset` parameter to the ID of a dataset that you want to use for the test.
-- Set the `language` parameter, otherwise the Speech CLI will set "en-US" by default. This should be the locale of the dataset contents. The locale can't be changed later. The Speech CLI `language` parameter corresponds to the `locale` property in the JSON request and response.
-- Set the required `name` parameter. This is the name that will be displayed in the Speech Studio. The Speech CLI `name` parameter corresponds to the `displayName` property in the JSON request and response.
+- Set the `language` parameter; otherwise, the Speech CLI sets "en-US" by default. The value should be the locale of the dataset contents. The locale can't be changed later. The Speech CLI `language` parameter corresponds to the `locale` property in the JSON request and response.
+- Set the required `name` parameter. This parameter is the name that is displayed in the Speech Studio. The Speech CLI `name` parameter corresponds to the `displayName` property in the JSON request and response.
Here's an example Speech CLI command that creates a test:
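As a minimal sketch, with placeholder IDs and display name that you replace with your own values:
```
spx csr evaluation create --api-version v3.1 --project YourProjectId --name "My Inspection" --language "en-US" --model1 YourModel1Id --model2 YourModel2Id --dataset YourDatasetId
```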
spx help csr evaluation
To create a test, use the [Evaluations_Create](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Evaluations_Create) operation of the [Speech to text REST API](rest-speech-to-text.md). Construct the request body according to the following instructions:
-- Set the `project` property to the URI of an existing project. This is recommended so that you can also view the test in Speech Studio. You can make a [Projects_List](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Projects_List) request to get available projects.
+- Set the `project` property to the URI of an existing project. This property is recommended so that you can also view the test in Speech Studio. You can make a [Projects_List](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Projects_List) request to get available projects.
- Set the required `model1` property to the URI of a model that you want to test.
- Set the required `model2` property to the URI of another model that you want to test. If you don't want to compare two models, use the same model for both `model1` and `model2`.
- Set the required `dataset` property to the URI of a dataset that you want to use for the test.
-- Set the required `locale` property. This should be the locale of the dataset contents. The locale can't be changed later.
-- Set the required `displayName` property. This is the name that will be displayed in the Speech Studio.
+- Set the required `locale` property. The value should be the locale of the dataset contents. The locale can't be changed later.
+- Set the required `displayName` property. This property is the name that is displayed in the Speech Studio.
Make an HTTP POST request using the URI as shown in the following example. Replace `YourSubscriptionKey` with your Speech resource key, replace `YourServiceRegion` with your Speech resource region, and set the request body properties as previously described.
You should get the test results and [inspect](#compare-transcription-with-audio)
Follow these steps to get test results: 1. Sign in to the [Speech Studio](https://aka.ms/speechstudio/customspeech).
-1. Select **Custom Speech** > Your project name > **Test models**.
+1. Select **Custom speech** > Your project name > **Test models**.
1. Select the link by test name.
1. After the test is complete, as indicated by the status set to *Succeeded*, you should see results that include the WER number for each tested model.
-This page lists all the utterances in your dataset and the recognition results, alongside the transcription from the submitted dataset. You can toggle various error types, including insertion, deletion, and substitution. By listening to the audio and comparing recognition results in each column, you can decide which model meets your needs and determine where additional training and improvements are required.
+This page lists all the utterances in your dataset and the recognition results, alongside the transcription from the submitted dataset. You can toggle various error types, including insertion, deletion, and substitution. By listening to the audio and comparing recognition results in each column, you can decide which model meets your needs and determine where more training and improvements are required.
::: zone-end
You can inspect the transcription output by each model tested, against the audio
To review the quality of transcriptions: 1. Sign in to the [Speech Studio](https://aka.ms/speechstudio/customspeech).
-1. Select **Custom Speech** > Your project name > **Test models**.
+1. Select **Custom speech** > Your project name > **Test models**.
1. Select the link by test name.
1. Play an audio file while reading the corresponding transcription by a model.
-If the test dataset included multiple audio files, you'll see multiple rows in the table. If you included two models in the test, transcriptions are shown in side-by-side columns. Transcription differences between models are shown in blue text font.
+If the test dataset included multiple audio files, you see multiple rows in the table. If you included two models in the test, transcriptions are shown in side-by-side columns. Transcription differences between models are shown in blue text font.
:::image type="content" source="media/custom-speech/custom-speech-inspect-compare.png" alt-text="Screenshot of comparing transcriptions by two models":::
If the test dataset included multiple audio files, you'll see multiple rows in t
::: zone pivot="speech-cli"
-The audio test dataset, transcriptions, and models tested are returned in the [test results](#get-test-results). If only one model was tested, the `model1` value will match `model2`, and the `transcription1` value will match `transcription2`.
+The audio test dataset, transcriptions, and models tested are returned in the [test results](#get-test-results). If only one model was tested, the `model1` value matches `model2`, and the `transcription1` value matches `transcription2`.
To review the quality of transcriptions: 1. Download the audio test dataset, unless you already have a copy.
If you're comparing quality between two models, pay particular attention to diff
::: zone pivot="rest-api"
-The audio test dataset, transcriptions, and models tested are returned in the [test results](#get-test-results). If only one model was tested, the `model1` value will match `model2`, and the `transcription1` value will match `transcription2`.
+The audio test dataset, transcriptions, and models tested are returned in the [test results](#get-test-results). If only one model was tested, the `model1` value matches `model2`, and the `transcription1` value matches `transcription2`.
To review the quality of transcriptions: 1. Download the audio test dataset, unless you already have a copy.
ai-services How To Custom Speech Model And Endpoint Lifecycle https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/how-to-custom-speech-model-and-endpoint-lifecycle.md
Title: Model lifecycle of Custom Speech - Speech service
+ Title: Model lifecycle of custom speech - Speech service
-description: Custom Speech provides base models for training and lets you create custom models from your data. This article describes the timelines for models and for endpoints that use these models.
+description: Custom speech provides base models for training and lets you create custom models from your data. This article describes the timelines for models and for endpoints that use these models.
Previously updated : 11/29/2022 Last updated : 1/19/2024 zone_pivot_groups: speech-studio-cli-rest
-# Custom Speech model lifecycle
+# Custom speech model lifecycle
-You can use a Custom Speech model for some time after it's deployed to your custom endpoint. But when new base models are made available, the older models are expired. You must periodically recreate and train your custom model from the latest base model to take advantage of the improved accuracy and quality.
+You can use a custom speech model for some time after you deploy it to your custom endpoint. But when new base models are made available, the older models are expired. You must periodically recreate and train your custom model from the latest base model to take advantage of the improved accuracy and quality.
Here are some key terms related to the model lifecycle:
Here are some key terms related to the model lifecycle:
Here are timelines for model adaptation and transcription expiration:
-- Training is available for one year after the quarter when the base model was created by Microsoft.
-- Transcription with a base model is available for two years after the quarter when the base model was created by Microsoft.
+- Training is available for one year after the quarter when Microsoft created the base model.
+- Transcription with a base model is available for two years after the quarter when Microsoft created the base model.
- Transcription with a custom model is available for two years after the quarter when you created the custom model.
-In this context, quarters end on January 15th, April 15th, July 15th, and October 15th.
+In this context, quarters end on January 15, April 15, July 15, and October 15.
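For example, if Microsoft created a base model on March 1, 2024, that quarter ends on April 15, 2024. In that case, you could train with the base model through April 15, 2025, and transcribe with it through April 15, 2026. The dates here are only an illustration of the quarter arithmetic.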
## What to do when a model expires
-When a custom model or base model expires, it is no longer available for transcription. You can change the model that is used by your custom speech endpoint without downtime.
+When a custom model or base model expires, it's no longer available for transcription. You can change the model that is used by your custom speech endpoint without downtime.
|Transcription route |Expired model result |Recommendation |
|---|---|---|
-|Custom endpoint|Speech recognition requests will fall back to the most recent base model for the same [locale](language-support.md?tabs=stt). You will get results, but recognition might not accurately transcribe your domain data. |Update the endpoint's model as described in the [Deploy a Custom Speech model](how-to-custom-speech-deploy-model.md) guide. |
-|Batch transcription |[Batch transcription](batch-transcription.md) requests for expired models will fail with a 4xx error. |In each [Transcriptions_Create](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Transcriptions_Create) REST API request body, set the `model` property to a base model or custom model that hasn't yet expired. Otherwise don't include the `model` property to always use the latest base model. |
+|Custom endpoint|Speech recognition requests fall back to the most recent base model for the same [locale](language-support.md?tabs=stt). You get results, but recognition might not accurately transcribe your domain data. |Update the endpoint's model as described in the [Deploy a custom speech model](how-to-custom-speech-deploy-model.md) guide. |
+|Batch transcription |[Batch transcription](batch-transcription.md) requests for expired models fail with a 4xx error. |In each [Transcriptions_Create](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Transcriptions_Create) REST API request body, set the `model` property to a base model or custom model that isn't expired. Otherwise don't include the `model` property to always use the latest base model. |
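For instance, a [Transcriptions_Create](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Transcriptions_Create) request body along the following lines pins transcription to a specific model. This is only a sketch: the property values are placeholders, and the `self` reference shape for `model` follows the pattern used for model URIs elsewhere in this article.
```
{
  "displayName": "My Transcription",
  "locale": "en-US",
  "contentUrls": ["https://contoso.com/audio/sample.wav"],
  "model": {
    "self": "https://YourServiceRegion.api.cognitive.microsoft.com/speechtotext/v3.1/models/YourModelId"
  }
}
```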
## Get base model expiration dates
::: zone pivot="speech-studio"
-The last date that you could use the base model for training was shown when you created the custom model. For more information, see [Train a Custom Speech model](how-to-custom-speech-train-model.md).
+The last date that you could use the base model for training was shown when you created the custom model. For more information, see [Train a custom speech model](how-to-custom-speech-train-model.md).
Follow these instructions to get the transcription expiration date for a base model: 1. Sign in to the [Speech Studio](https://aka.ms/speechstudio/customspeech).
-1. Select **Custom Speech** > Your project name > **Deploy models**.
-1. The expiration date for the model is shown in the **Expiration** column. This is the last date that you can use the model for transcription.
+1. Select **Custom speech** > Your project name > **Deploy models**.
+1. The expiration date for the model is shown in the **Expiration** column. This date is the last date that you can use the model for transcription.
:::image type="content" source="media/custom-speech/custom-speech-model-expiration.png" alt-text="Screenshot of the deploy models page that shows the transcription expiration date.":::
Here's an example Speech CLI command to get the training and transcription expir
spx csr model status --api-version v3.1 --model https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/models/base/b0bbc1e0-78d5-468b-9b7c-a5a43b2bb83f ```
-In the response, take note of the date in the `adaptationDateTime` property. This is the last date that you can use the base model for training. Also take note of the date in the `transcriptionDateTime` property. This is the last date that you can use the base model for transcription.
+In the response, take note of the date in the `adaptationDateTime` property. This property is the last date that you can use the base model for training. Also take note of the date in the `transcriptionDateTime` property. This date is the last date that you can use the base model for transcription.
You should receive a response body in the following format:
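A heavily trimmed, illustrative sketch of the relevant part of the response follows; the real response contains many more fields, and the exact nesting shown is an assumption based on the property names called out above:
```
{
  "properties": {
    "deprecationDates": {
      "adaptationDateTime": "2025-04-15T00:00:00Z",
      "transcriptionDateTime": "2026-04-15T00:00:00Z"
    }
  }
}
```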
Make an HTTP GET request using the model URI as shown in the following example.
curl -v -X GET "https://YourServiceRegion.api.cognitive.microsoft.com/speechtotext/v3.1/models/base/BaseModelId" -H "Ocp-Apim-Subscription-Key: YourSubscriptionKey" ```
-In the response, take note of the date in the `adaptationDateTime` property. This is the last date that you can use the base model for training. Also take note of the date in the `transcriptionDateTime` property. This is the last date that you can use the base model for transcription.
+In the response, take note of the date in the `adaptationDateTime` property. This date is the last date that you can use the base model for training. Also take note of the date in the `transcriptionDateTime` property. This date is the last date that you can use the base model for transcription.
You should receive a response body in the following format:
You should receive a response body in the following format:
Follow these instructions to get the transcription expiration date for a custom model: 1. Sign in to the [Speech Studio](https://aka.ms/speechstudio/customspeech).
-1. Select **Custom Speech** > Your project name > **Train custom models**.
-1. The expiration date the custom model is shown in the **Expiration** column. This is the last date that you can use the custom model for transcription. Base models are not shown on the **Train custom models** page.
+1. Select **Custom speech** > Your project name > **Train custom models**.
+1. The expiration date for the custom model is shown in the **Expiration** column. This date is the last date that you can use the custom model for transcription. Base models aren't shown on the **Train custom models** page.
:::image type="content" source="media/custom-speech/custom-speech-custom-model-expiration.png" alt-text="Screenshot of the train custom models page that shows the transcription expiration date.":::
You can also follow these instructions to get the transcription expiration date for a custom model:
1. Sign in to the [Speech Studio](https://aka.ms/speechstudio/customspeech).
-1. Select **Custom Speech** > Your project name > **Deploy models**.
-1. The expiration date for the model is shown in the **Expiration** column. This is the last date that you can use the model for transcription.
+1. Select **Custom speech** > Your project name > **Deploy models**.
+1. The expiration date for the model is shown in the **Expiration** column. This date is the last date that you can use the model for transcription.
:::image type="content" source="media/custom-speech/custom-speech-model-expiration.png" alt-text="Screenshot of the deploy models page that shows the transcription expiration date.":::
Here's an example Speech CLI command to get the transcription expiration date fo
spx csr model status --api-version v3.1 --model https://YourServiceRegion.api.cognitive.microsoft.com/speechtotext/v3.1/models/YourModelId ```
-In the response, take note of the date in the `transcriptionDateTime` property. This is the last date that you can use your custom model for transcription. The `adaptationDateTime` property is not applicable, since custom models are not used to train other custom models.
+In the response, take note of the date in the `transcriptionDateTime` property. This date is the last date that you can use your custom model for transcription. The `adaptationDateTime` property isn't applicable, since custom models aren't used to train other custom models.
You should receive a response body in the following format:
Make an HTTP GET request using the model URI as shown in the following example.
curl -v -X GET "https://YourServiceRegion.api.cognitive.microsoft.com/speechtotext/v3.1/models/YourModelId" -H "Ocp-Apim-Subscription-Key: YourSubscriptionKey" ```
-In the response, take note of the date in the `transcriptionDateTime` property. This is the last date that you can use your custom model for transcription. The `adaptationDateTime` property is not applicable, since custom models are not used to train other custom models.
+In the response, take note of the date in the `transcriptionDateTime` property. This date is the last date that you can use your custom model for transcription. The `adaptationDateTime` property isn't applicable, since custom models aren't used to train other custom models.
You should receive a response body in the following format:
You should receive a response body in the following format:
## Next steps
- [Train a model](how-to-custom-speech-train-model.md)
-- [CI/CD for Custom Speech](how-to-custom-speech-continuous-integration-continuous-deployment.md)
+- [CI/CD for custom speech](how-to-custom-speech-continuous-integration-continuous-deployment.md)
ai-services How To Custom Speech Test And Train https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/how-to-custom-speech-test-and-train.md
Title: "Training and testing datasets - Speech service"
-description: Learn about types of training and testing data for a Custom Speech project, along with how to use and manage that data.
+description: Learn about types of training and testing data for a custom speech project, along with how to use and manage that data.
Previously updated : 10/24/2022 Last updated : 1/19/2024
# Training and testing datasets
-In a Custom Speech project, you can upload datasets for training, qualitative inspection, and quantitative measurement. This article covers the types of training and testing data that you can use for Custom Speech.
+In a custom speech project, you can upload datasets for training, qualitative inspection, and quantitative measurement. This article covers the types of training and testing data that you can use for custom speech.
Text and audio that you use to test and train a custom model should include samples from a diverse set of speakers and scenarios that you want your model to recognize. Consider these factors when you're gathering data for custom model testing and training:
-* Include text and audio data to cover the kinds of verbal statements that your users will make when they're interacting with your model. For example, a model that raises and lowers the temperature needs training on statements that people might make to request such changes.
+* Include text and audio data to cover the kinds of verbal statements that your users make when they're interacting with your model. For example, a model that raises and lowers the temperature needs training on statements that people might make to request such changes.
* Include all speech variances that you want your model to recognize. Many factors can vary speech, including accents, dialects, language-mixing, age, gender, voice pitch, stress level, and time of day.
-* Include samples from different environments, for example, indoor, outdoor, and road noise, where your model will be used.
-* Record audio with hardware devices that the production system will use. If your model must identify speech recorded on devices of varying quality, the audio data that you provide to train your model must also represent these diverse scenarios.
+* Include samples from different environments, for example, indoor, outdoor, and road noise, where your model is used.
+* Record audio with hardware devices that the production system uses. If your model must identify speech recorded on devices of varying quality, the audio data that you provide to train your model must also represent these diverse scenarios.
* Keep the dataset diverse and representative of your project requirements. You can add more data to your model later.
* Only include data that your model needs to transcribe. Including data that isn't within your custom model's recognition requirements can harm recognition quality overall.
## Data types
-The following table lists accepted data types, when each data type should be used, and the recommended quantity. Not every data type is required to create a model. Data requirements will vary depending on whether you're creating a test or training a model.
+The following table lists accepted data types, when each data type should be used, and the recommended quantity. Not every data type is required to create a model. Data requirements vary depending on whether you're creating a test or training a model.
| Data type | Used for testing | Recommended for testing | Used for training | Recommended for training |
|--|--|-|-|-|
Training with plain text or structured text usually finishes within a few minute
> [!TIP]
> Start with plain-text data or structured-text data. This data will improve the recognition of special terms and phrases. Training with text is much faster than training with audio (minutes versus days).
>
-> Start with small sets of sample data that match the language, acoustics, and hardware where your model will be used. Small datasets of representative data can expose problems before you invest in gathering larger datasets for training. For sample Custom Speech data, see <a href="https://github.com/Azure-Samples/cognitive-services-speech-sdk/tree/master/sampledata/customspeech" target="_target">this GitHub repository</a>.
+> Start with small sets of sample data that match the language, acoustics, and hardware where your model will be used. Small datasets of representative data can expose problems before you invest in gathering larger datasets for training. For sample custom speech data, see <a href="https://github.com/Azure-Samples/cognitive-services-speech-sdk/tree/master/sampledata/customspeech" target="_target">this GitHub repository</a>.
-If you will train a custom model with audio data, choose a Speech resource region with dedicated hardware for training audio data. See footnotes in the [regions](regions.md#speech-service) table for more information. In regions with dedicated hardware for Custom Speech training, the Speech service will use up to 20 hours of your audio training data, and can process about 10 hours of data per day. In other regions, the Speech service uses up to 8 hours of your audio data, and can process about 1 hour of data per day. After the model is trained, you can copy the model to another region as needed with the [Models_CopyTo](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Models_CopyTo) REST API.
+If you train a custom model with audio data, choose a Speech resource region with dedicated hardware for training audio data. For more information, see footnotes in the [regions](regions.md#speech-service) table. In regions with dedicated hardware for custom speech training, the Speech service uses up to 20 hours of your audio training data, and can process about 10 hours of data per day. In other regions, the Speech service uses up to 8 hours of your audio data, and can process about 1 hour of data per day. After the model is trained, you can copy the model to another region as needed with the [Models_CopyTo](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Models_CopyTo) REST API.
## Consider datasets by scenario
-A model that's trained on a subset of scenarios can perform well in only those scenarios. Carefully choose data that represents the full scope of scenarios that you need your custom model to recognize. The following table shows datasets to consider for some speech recognition scenarios:
+A model trained on a subset of scenarios can perform well in only those scenarios. Carefully choose data that represents the full scope of scenarios that you need your custom model to recognize. The following table shows datasets to consider for some speech recognition scenarios:
| Scenario | Plain text data and structured text data | Audio + human-labeled transcripts | New words with pronunciation |
|---|---|---|---|
To help determine which dataset to use to address your problems, refer to the fo
You can use audio + human-labeled transcript data for both [training](how-to-custom-speech-train-model.md) and [testing](how-to-custom-speech-evaluate-data.md) purposes. You must provide human-labeled transcriptions (word by word) for comparison:
- To improve the acoustic aspects like slight accents, speaking styles, and background noises.
-- To measure the accuracy of Microsoft's speech to text accuracy when it's processing your audio files.
+- To measure the accuracy of Microsoft's speech to text when it's processing your audio files.
-For a list of base models that support training with audio data, see [Language support](language-support.md?tabs=stt). Even if a base model does support training with audio data, the service might use only part of the audio. And it will still use all the transcripts.
+For a list of base models that support training with audio data, see [Language support](language-support.md?tabs=stt). Even if a base model does support training with audio data, the service might use only part of the audio. And it still uses all the transcripts.
> [!IMPORTANT]
> If a base model doesn't support customization with audio data, only the transcription text will be used for training. If you switch to a base model that supports customization with audio data, the training time may increase from several hours to several days. The change in training time would be most noticeable when you switch to a base model in a [region](regions.md#speech-service) without dedicated hardware for training. If the audio data is not required, you should remove it to decrease the training time.
Audio with human-labeled transcripts offers the greatest accuracy improvements i
Consider these details:
-* Training with audio will bring the most benefits if the audio is also hard to understand for humans. In most cases, you should start training by using only related text.
-* If you use one of the most heavily used languages, such as US English, it's unlikely that you would need to train with audio data. For such languages, the base models already offer very good recognition results in most scenarios, so it's probably enough to train with related text.
-* Custom Speech can capture word context only to reduce substitution errors, not insertion or deletion errors.
+* Training with audio brings the most benefits if the audio is also hard to understand for humans. In most cases, you should start training by using only related text.
+* If you use one of the most heavily used languages, such as US English, it's unlikely that you would need to train with audio data. For such languages, the base models already offer good recognition results in most scenarios, so it's probably enough to train with related text.
+* Custom speech can capture word context only to reduce substitution errors, not insertion or deletion errors.
* Avoid samples that include transcription errors, but do include a diversity of audio quality.
* Avoid sentences that are unrelated to your problem domain. Unrelated sentences can harm your model.
* When the transcript quality varies, you can duplicate exceptionally good sentences, such as excellent transcriptions that include key phrases, to increase their weight.
* The Speech service automatically uses the transcripts to improve the recognition of domain-specific words and phrases, as though they were added as related text.
-* It can take several days for a training operation to finish. To improve the speed of training, be sure to create your Speech service subscription in a region that has dedicated hardware for training.
+* It can take several days for a training operation to finish. To improve the speed of training, be sure to create your Speech service subscription in a region with dedicated hardware for training.
-A large training dataset is required to improve recognition. Generally, we recommend that you provide word-by-word transcriptions for 1 to 20 hours of audio. However, even as little as 30 minutes can help improve recognition results. Although creating human-labeled transcription can take time, improvements in recognition will only be as good as the data that you provide. You should upload only high-quality transcripts.
+A large training dataset is required to improve recognition. Generally, we recommend that you provide word-by-word transcriptions for 1 to 20 hours of audio. However, even as little as 30 minutes can help improve recognition results. Although creating human-labeled transcription can take time, improvements in recognition are only as good as the data that you provide. You should upload only high-quality transcripts.
-Audio files can have silence at the beginning and end of the recording. If possible, include at least a half-second of silence before and after speech in each sample file. Although audio with low recording volume or disruptive background noise is not helpful, it shouldn't limit or degrade your custom model. Always consider upgrading your microphones and signal processing hardware before gathering audio samples.
+Audio files can have silence at the beginning and end of the recording. If possible, include at least a half-second of silence before and after speech in each sample file. Although audio with low recording volume or disruptive background noise isn't helpful, it shouldn't limit or degrade your custom model. Always consider upgrading your microphones and signal processing hardware before gathering audio samples.
> [!IMPORTANT]
> For more information about the best practices for preparing human-labeled transcripts, see [Human-labeled transcripts with audio](how-to-custom-speech-human-labeled-transcriptions.md).
-Custom Speech projects require audio files with these properties:
+Custom speech projects require audio files with these properties:
> [!IMPORTANT]
> These are requirements for Audio + human-labeled transcript training and testing. They differ from the ones for Audio only training and testing. If you want to use Audio only training and testing, [see this section](#audio-data-for-training-or-testing).
Custom Speech projects require audio files with these properties:
| File format | RIFF (WAV) |
| Sample rate | 8,000 Hz or 16,000 Hz |
| Channels | 1 (mono) |
-| Maximum length per audio | 2 hours (testing) / 60 s (training)<br/><br/>Training with audio has a maximum audio length of 60 seconds per file. For audio files longer than 60 seconds, only the corresponding transcription files will be used for training. If all audio files are longer than 60 seconds, the training will fail.|
+| Maximum length per audio | Two hours (testing) / 60 s (training)<br/><br/>Training with audio has a maximum audio length of 60 seconds per file. For audio files longer than 60 seconds, only the corresponding transcription files are used for training. If all audio files are longer than 60 seconds, the training fails.|
| Sample format | PCM, 16-bit |
| Archive format | .zip |
| Maximum zip size | 2 GB or 10,000 files |
Use this table to ensure that your plain text dataset file is formatted correctl
You must also adhere to the following restrictions:
-* Avoid repeating characters, words, or groups of words more than three times, as in "aaaa," "yeah yeah yeah yeah," or "that's it that's it that's it that's it." The Speech service might drop lines with too many repetitions.
+* Avoid repeating characters, words, or groups of words more than three times. For example, don't use "aaaa," "yeah yeah yeah yeah," or "that's it that's it that's it that's it." The Speech service might drop lines with too many repetitions.
* Don't use special characters or UTF-8 characters above `U+00A1`.
* URIs will be rejected.
* For some languages such as Japanese or Korean, importing large amounts of text data can take a long time or can time out. Consider dividing the dataset into multiple text files with up to 20,000 lines in each.
Here are key details about the supported Markdown format:
| Property | Description | Limits |
|-|-|--|
|`@list`|A list of items that can be referenced in an example sentence.|Maximum of 20 lists. Maximum of 35,000 items per list.|
-|`speech:phoneticlexicon`|A list of phonetic pronunciations according to the [Universal Phone Set](customize-pronunciation.md). Pronunciation is adjusted for each instance where the word appears in a list or training sentence. For example, if you have a word that sounds like "cat" and you want to adjust the pronunciation to "k ae t", you would add `- cat/k ae t` to the `speech:phoneticlexicon` list.|Maximum of 15,000 entries. Maximum of 2 pronunciations per word.|
-|`#ExampleSentences`|A pound symbol (`#`) delimits a section of example sentences. The section heading can only contain letters, digits, and underscores. Example sentences should reflect the range of speech that your model should expect. A training sentence can refer to items under a `@list` by using surrounding left and right curly braces (`{@list name}`). You can refer to multiple lists in the same training sentence, or none at all.|Maximum file size of 200MB.|
+|`speech:phoneticlexicon`|A list of phonetic pronunciations according to the [Universal Phone Set](customize-pronunciation.md). Pronunciation is adjusted for each instance where the word appears in a list or training sentence. For example, if you have a word that sounds like "cat" and you want to adjust the pronunciation to "k ae t", you would add `- cat/k ae t` to the `speech:phoneticlexicon` list.|Maximum of 15,000 entries. Maximum of two pronunciations per word.|
+|`#ExampleSentences`|A pound symbol (`#`) delimits a section of example sentences. The section heading can only contain letters, digits, and underscores. Example sentences should reflect the range of speech that your model should expect. A training sentence can refer to items under a `@list` by using surrounding left and right curly braces (`{@list name}`). You can refer to multiple lists in the same training sentence, or none at all.|Maximum file size of 200 MB.|
|`//`|Comments follow a double slash (`//`).|Not applicable|
Here's an example structured text file:
Refer to the following table to ensure that your pronunciation dataset files are
### Audio data for training or testing
-Audio data is optimal for testing the accuracy of Microsoft's baseline speech to text model or a custom model. Keep in mind that audio data is used to inspect the accuracy of speech with regard to a specific model's performance. If you want to quantify the accuracy of a model, use [audio + human-labeled transcripts](#audio--human-labeled-transcript-data-for-training-or-testing).
+Audio data is optimal for testing the accuracy of Microsoft's baseline speech to text model or a custom model. Keep in mind that audio data is used to inspect speech recognition accuracy for a specific model. If you want to quantify the accuracy of a model, use [audio + human-labeled transcripts](#audio--human-labeled-transcript-data-for-training-or-testing).
> [!NOTE]
> Audio only data for training is available in preview for the `en-US` locale. For other locales, to train with audio data you must also provide [human-labeled transcripts](#audio--human-labeled-transcript-data-for-training-or-testing).
-Custom Speech projects require audio files with these properties:
+Custom speech projects require audio files with these properties:
> [!IMPORTANT]
> These are requirements for Audio only training and testing. They differ from the ones for Audio + human-labeled transcript training and testing. If you want to use Audio + human-labeled transcript training and testing, [see this section](#audio--human-labeled-transcript-data-for-training-or-testing).
Custom Speech projects require audio files with these properties:
| File format | RIFF (WAV) |
| Sample rate | 8,000 Hz or 16,000 Hz |
| Channels | 1 (mono) |
-| Maximum length per audio | 2 hours |
+| Maximum length per audio | Two hours |
| Sample format | PCM, 16-bit |
| Archive format | .zip |
| Maximum archive size | 2 GB or 10,000 files |
Use <a href="http://sox.sourceforge.net" target="_blank" rel="noopener">SoX</a>
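For example, commands along these lines check a file's properties and convert it to 16 kHz, 16-bit, mono WAV. This is a sketch that assumes SoX is installed locally; the file names are placeholders:
```
sox --i input.wav
sox input.wav -r 16000 -b 16 -c 1 output.wav
```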
Learn more about [preparing display text formatting data](./how-to-custom-speech-display-text-format.md) and [display text formatting with speech to text](./display-text-format.md).
-Automatic Speech Recognition output display format is critical to downstream tasks and one-size doesn't fit all. Adding Custom Display Format rules allows users to define their own lexical-to-display format rules to improve the speech recognition service quality on top of Microsoft Azure Custom Speech Service.
+Automatic speech recognition output display format is critical to downstream tasks, and one size doesn't fit all. Adding custom display format rules allows users to define their own lexical-to-display format rules to improve the speech recognition service quality on top of the Microsoft Azure custom speech service.
It allows you to fully customize the display output. For example, you can add rewrite rules to capitalize and reformulate certain words, add profanity words and mask them in the output, define advanced ITN rules for certain patterns such as numbers, dates, and email addresses, or preserve some phrases and keep them from any display processing.
The Display Format file should have an .md extension. The maximum file size is 1
|#ITN|A list of inverse-text-normalization rules to define certain display patterns such as numbers, addresses, and dates.|Maximum of 200 lines|
|#rewrite|A list of rewrite pairs to replace certain words for reasons such as capitalization and spelling correction.|Maximum of 1,000 lines|
|#profanity|A list of unwanted words that will be masked as `******` from Display and Masked output, on top of Microsoft built-in profanity lists.|Maximum of 1,000 lines|
-|#test|A list of unit test cases to validate if the display rules work as expected, including the lexical format input and the expected display format output.|Maximum file size of 10MB|
+|#test|A list of unit test cases to validate if the display rules work as expected, including the lexical format input and the expected display format output.|Maximum file size of 10 MB|
Here's an example display format file:
ai-services How To Custom Speech Train Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/how-to-custom-speech-train-model.md
Title: Train a Custom Speech model - Speech service
+ Title: Train a custom speech model - Speech service
-description: Learn how to train Custom Speech models. Training a speech to text model can improve recognition accuracy for the Microsoft base model or a custom model.
+description: Learn how to train custom speech models. Training a speech to text model can improve recognition accuracy for the Microsoft base model or a custom model.
Previously updated : 09/15/2023 Last updated : 1/19/2024 zone_pivot_groups: speech-studio-cli-rest
-# Train a Custom Speech model
+# Train a custom speech model
-In this article, you'll learn how to train a custom model to improve recognition accuracy from the Microsoft base model. The speech recognition accuracy and quality of a Custom Speech model will remain consistent, even when a new base model is released.
+In this article, you learn how to train a custom model to improve recognition accuracy from the Microsoft base model. The speech recognition accuracy and quality of a custom speech model remains consistent, even when a new base model is released.
> [!NOTE]
-> You pay for Custom Speech model usage and [endpoint hosting](how-to-custom-speech-deploy-model.md). You'll also be charged for custom speech model training if the base model was created on October 1, 2023 and later. You are not charged for training if the base model was created prior to October 2023. For more information, see [Azure AI Speech pricing](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/) and the [Charge for adaptation section in the speech to text 3.2 migration guide](./migrate-v3-1-to-v3-2.md#charge-for-adaptation).
+> You pay for custom speech model usage and [endpoint hosting](how-to-custom-speech-deploy-model.md). You're also charged for custom speech model training if the base model was created on or after October 1, 2023. You aren't charged for training if the base model was created before October 2023. For more information, see [Azure AI Speech pricing](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/) and the [Charge for adaptation section in the speech to text 3.2 migration guide](./migrate-v3-1-to-v3-2.md#charge-for-adaptation).
-Training a model is typically an iterative process. You will first select a base model that is the starting point for a new model. You train a model with [datasets](./how-to-custom-speech-test-and-train.md) that can include text and audio, and then you test. If the recognition quality or accuracy doesn't meet your requirements, you can create a new model with additional or modified training data, and then test again.
+Training a model is typically an iterative process. You first select a base model that is the starting point for a new model. You train a model with [datasets](./how-to-custom-speech-test-and-train.md) that can include text and audio, and then you test. If the recognition quality or accuracy doesn't meet your requirements, you can create a new model with more or modified training data, and then test again.
-You can use a custom model for a limited time after it's trained. You must periodically recreate and adapt your custom model from the latest base model to take advantage of the improved accuracy and quality. For more information, see [Model and endpoint lifecycle](./how-to-custom-speech-model-and-endpoint-lifecycle.md).
+You can use a custom model for a limited time after it was trained. You must periodically recreate and adapt your custom model from the latest base model to take advantage of the improved accuracy and quality. For more information, see [Model and endpoint lifecycle](./how-to-custom-speech-model-and-endpoint-lifecycle.md).
> [!IMPORTANT] > If you plan to train a custom model with audio data, choose a Speech resource region with dedicated hardware for training audio data. After a model is trained, you can [copy it to a Speech resource](#copy-a-model) in another region as needed. >
-> In regions with dedicated hardware for Custom Speech training, the Speech service will use up to 20 hours of your audio training data, and can process about 10 hours of data per day. In other regions, the Speech service uses up to 8 hours of your audio data, and can process about 1 hour of data per day. See footnotes in the [regions](regions.md#speech-service) table for more information.
+> In regions with dedicated hardware for custom speech training, the Speech service will use up to 20 hours of your audio training data, and can process about 10 hours of data per day. In other regions, the Speech service uses up to 8 hours of your audio data, and can process about 1 hour of data per day. See footnotes in the [regions](regions.md#speech-service) table for more information.
## Create a model ::: zone pivot="speech-studio"
-After you've uploaded [training datasets](./how-to-custom-speech-test-and-train.md), follow these instructions to start training your model:
+After you upload [training datasets](./how-to-custom-speech-test-and-train.md), follow these instructions to start training your model:
1. Sign in to the [Speech Studio](https://aka.ms/speechstudio/customspeech).
-1. Select **Custom Speech** > Your project name > **Train custom models**.
+1. Select **Custom speech** > Your project name > **Train custom models**.
1. Select **Train a new model**. 1. On the **Select a baseline model** page, select a base model, and then select **Next**. If you aren't sure, select the most recent model from the top of the list. The name of the base model corresponds to the date when it was released in YYYYMMDD format. The customization capabilities of the base model are listed in parentheses after the model name in Speech Studio.
After you've uploaded [training datasets](./how-to-custom-speech-test-and-train.
To create a model with datasets for training, use the `spx csr model create` command. Construct the request parameters according to the following instructions: -- Set the `project` parameter to the ID of an existing project. This is recommended so that you can also view and manage the model in Speech Studio. You can run the `spx csr project list` command to get available projects.
+- Set the `project` parameter to the ID of an existing project. This parameter is recommended so that you can also view and manage the model in Speech Studio. You can run the `spx csr project list` command to get available projects.
- Set the required `dataset` parameter to the ID of a dataset that you want used for training. To specify multiple datasets, set the `datasets` (plural) parameter and separate the IDs with a semicolon. - Set the required `language` parameter. The dataset locale must match the locale of the project. The locale can't be changed later. The Speech CLI `language` parameter corresponds to the `locale` property in the JSON request and response.-- Set the required `name` parameter. This is the name that will be displayed in the Speech Studio. The Speech CLI `name` parameter corresponds to the `displayName` property in the JSON request and response.
+- Set the required `name` parameter. This parameter is the name that is displayed in the Speech Studio. The Speech CLI `name` parameter corresponds to the `displayName` property in the JSON request and response.
- Optionally, you can set the `base` property. For example: `--base 1aae1070-7972-47e9-a977-87e3b05c457d`. If you don't specify the `base`, the default base model for the locale is used. The Speech CLI `base` parameter corresponds to the `baseModel` property in the JSON request and response. Here's an example Speech CLI command that creates a model with datasets for training:
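As a rough sketch, such a command might look like the following. The placeholder IDs and the exact flag spellings are assumptions that mirror the parameter names described above; run `spx help csr model create` to confirm them.

```
# Create a model from an existing project and dataset (placeholder IDs).
spx csr model create --project YourProjectId --dataset YourDatasetId --language "en-US" --name "My Model" --base 1aae1070-7972-47e9-a977-87e3b05c457d
```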
spx help csr model
To create a model with datasets for training, use the [Models_Create](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Models_Create) operation of the [Speech to text REST API](rest-speech-to-text.md). Construct the request body according to the following instructions: -- Set the `project` property to the URI of an existing project. This is recommended so that you can also view and manage the model in Speech Studio. You can make a [Projects_List](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Projects_List) request to get available projects.
+- Set the `project` property to the URI of an existing project. This property is recommended so that you can also view and manage the model in Speech Studio. You can make a [Projects_List](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Projects_List) request to get available projects.
- Set the required `datasets` property to the URI of the datasets that you want used for training. - Set the required `locale` property. The model locale must match the locale of the project and base model. The locale can't be changed later.-- Set the required `displayName` property. This is the name that will be displayed in the Speech Studio.
+- Set the required `displayName` property. This property is the name that is displayed in the Speech Studio.
- Optionally, you can set the `baseModel` property. For example: `"baseModel": {"self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/models/base/1aae1070-7972-47e9-a977-87e3b05c457d"}`. If you don't specify the `baseModel`, the default base model for the locale is used. Make an HTTP POST request using the URI as shown in the following example. Replace `YourSubscriptionKey` with your Speech resource key, replace `YourServiceRegion` with your Speech resource region, and set the request body properties as previously described.
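As a hedged sketch of such a request, the following curl command assumes placeholder IDs and the v3.1 endpoint shape implied by the `baseModel` URI above; the property names follow the instructions in this section.

```
# Create a model with datasets for training (placeholder project and dataset URIs).
curl -v -X POST \
  -H "Ocp-Apim-Subscription-Key: YourSubscriptionKey" \
  -H "Content-Type: application/json" \
  -d '{
    "project": {"self": "https://YourServiceRegion.api.cognitive.microsoft.com/speechtotext/v3.1/projects/YourProjectId"},
    "datasets": [{"self": "https://YourServiceRegion.api.cognitive.microsoft.com/speechtotext/v3.1/datasets/YourDatasetId"}],
    "locale": "en-US",
    "displayName": "My Model"
  }' \
  "https://YourServiceRegion.api.cognitive.microsoft.com/speechtotext/v3.1/models"
```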
You can copy a model to another project that uses the same locale. For example,
Follow these instructions to copy a model to a project in another region: 1. Sign in to the [Speech Studio](https://aka.ms/speechstudio/customspeech).
-1. Select **Custom Speech** > Your project name > **Train custom models**.
+1. Select **Custom speech** > Your project name > **Train custom models**.
1. Select **Copy to**. 1. On the **Copy speech model** page, select a target region where you want to copy the model. :::image type="content" source="./media/custom-speech/custom-speech-copy-to-zoom.png" alt-text="Screenshot of the Copy speech model page in Speech Studio." lightbox="./media/custom-speech/custom-speech-copy-to-full.png":::
After the model is successfully copied, you'll be notified and can view it in th
::: zone pivot="speech-cli"
-Copying a model directly to a project in another region is not supported with the Speech CLI. You can copy a model to a project in another region using the [Speech Studio](https://aka.ms/speechstudio/customspeech) or [Speech to text REST API](rest-speech-to-text.md).
+Copying a model directly to a project in another region isn't supported with the Speech CLI. You can copy a model to a project in another region using the [Speech Studio](https://aka.ms/speechstudio/customspeech) or [Speech to text REST API](rest-speech-to-text.md).
::: zone-end
Models might have been copied from one project using the Speech CLI or REST API,
::: zone pivot="speech-studio"
-If you are prompted in Speech Studio, you can connect them by selecting the **Connect** button.
+If you're prompted in Speech Studio, you can connect them by selecting the **Connect** button.
:::image type="content" source="./media/custom-speech/custom-speech-connect-model.png" alt-text="Screenshot of the connect training page that shows models that can be connected to the current project.":::
If you are prompted in Speech Studio, you can connect them by selecting the **Co
To connect a model to a project, use the `spx csr model update` command. Construct the request parameters according to the following instructions: -- Set the `project` parameter to the URI of an existing project. This is recommended so that you can also view and manage the model in Speech Studio. You can run the `spx csr project list` command to get available projects.
+- Set the `project` parameter to the URI of an existing project. This parameter is recommended so that you can also view and manage the model in Speech Studio. You can run the `spx csr project list` command to get available projects.
- Set the required `modelId` parameter to the ID of the model that you want to connect to the project. Here's an example Speech CLI command that connects a model to a project:
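For illustration only, a minimal sketch might look like this; the `--modelId` and `--project` flag spellings are assumptions taken from the parameter names above, so verify them with `spx help csr model update`.

```
# Connect the copied model to an existing project (placeholder values).
spx csr model update --modelId YourModelId --project https://YourServiceRegion.api.cognitive.microsoft.com/speechtotext/v3.1/projects/YourProjectId
```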
spx help csr model
To connect a new model to a project of the Speech resource where the model was copied, use the [Models_Update](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Models_Update) operation of the [Speech to text REST API](rest-speech-to-text.md). Construct the request body according to the following instructions: -- Set the required `project` property to the URI of an existing project. This is recommended so that you can also view and manage the model in Speech Studio. You can make a [Projects_List](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Projects_List) request to get available projects.
+- Set the required `project` property to the URI of an existing project. This property is recommended so that you can also view and manage the model in Speech Studio. You can make a [Projects_List](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Projects_List) request to get available projects.
Make an HTTP PATCH request using the URI as shown in the following example. Use the URI of the new model. You can get the new model ID from the `self` property of the [Models_CopyTo](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Models_CopyTo) response body. Replace `YourSubscriptionKey` with your Speech resource key, replace `YourServiceRegion` with your Speech resource region, and set the request body properties as previously described.
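As a hedged sketch, such a PATCH request could look like the following; the model and project IDs are placeholders, and the body only sets the `project` property as described above.

```
# Connect the copied model (ID from the Models_CopyTo response) to a project.
curl -v -X PATCH \
  -H "Ocp-Apim-Subscription-Key: YourSubscriptionKey" \
  -H "Content-Type: application/json" \
  -d '{
    "project": {"self": "https://YourServiceRegion.api.cognitive.microsoft.com/speechtotext/v3.1/projects/YourProjectId"}
  }' \
  "https://YourServiceRegion.api.cognitive.microsoft.com/speechtotext/v3.1/models/YourModelId"
```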
ai-services How To Custom Speech Transcription Editor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/how-to-custom-speech-transcription-editor.md
Title: How to use the online transcription editor for Custom Speech - Speech service
+ Title: How to use the online transcription editor for custom speech - Speech service
-description: The online transcription editor allows you to create or edit audio + human-labeled transcriptions for Custom Speech.
+description: The online transcription editor allows you to create or edit audio + human-labeled transcriptions for custom speech.
Previously updated : 05/08/2022 Last updated : 1/19/2024 # How to use the online transcription editor
-The online transcription editor allows you to create or edit audio + human-labeled transcriptions for Custom Speech. The main use cases of the editor are as follows:
+The online transcription editor allows you to create or edit audio + human-labeled transcriptions for custom speech. The main use cases of the editor are as follows:
* You only have audio data, but want to build accurate audio + human-labeled datasets from scratch to use in model training. * You already have audio + human-labeled datasets, but there are errors or defects in the transcription. The editor allows you to quickly modify the transcriptions to get the best training accuracy.
You can find the **Editor** tab next to the **Training and testing dataset** tab
:::image type="content" source="media/custom-speech/custom-speech-editor.png" alt-text="Screenshot of the Speech datasets page that shows the Editor tab.":::
-Datasets in the **Training and testing dataset** tab can't be updated. You can import a copy of a training or testing dataset to the **Editor** tab, add or edit human-labeled transcriptions to match the audio, and then export the edited dataset to the **Training and testing dataset** tab. Please also note that you can't use a dataset that's in the Editor to train or test a model.
+Datasets in the **Training and testing dataset** tab can't be updated. You can import a copy of a training or testing dataset to the **Editor** tab, add or edit human-labeled transcriptions to match the audio, and then export the edited dataset to the **Training and testing dataset** tab. Also note that you can't use a dataset that's in the Editor to train or test a model.
## Import datasets to the Editor To import a dataset to the Editor, follow these steps: 1. Sign in to the [Speech Studio](https://aka.ms/speechstudio/customspeech).
-1. Select **Custom Speech** > Your project name > **Speech datasets** > **Editor**.
+1. Select **Custom speech** > Your project name > **Speech datasets** > **Editor**.
1. Select **Import data**. 1. Select datasets. You can select audio data only, audio + human-labeled data, or both. For audio-only data, you can use the default models to automatically generate machine transcription after importing to the editor. 1. Enter a name and description for the new dataset, and then select **Next**.
-1. Review your settings, and then select **Import and close** to kick off the import process. After data has been successfully imported, you can select datasets and start editing.
+1. Review your settings, and then select **Import and close** to kick off the import process. After data is successfully imported, you can select datasets and start editing.
> [!NOTE] > You can also select a dataset from the main **Speech datasets** page and export them to the Editor. Select a dataset and then select **Export to Editor**. ## Edit transcription to match audio
-Once a dataset has been imported to the Editor, you can start editing the dataset. You can add or edit human-labeled transcriptions to match the audio as you hear it. You do not edit any audio data.
+Once a dataset is imported to the Editor, you can start editing the dataset. You can add or edit human-labeled transcriptions to match the audio as you hear it. You don't edit any audio data.
To edit a dataset's transcription in the Editor, follow these steps: 1. Sign in to the [Speech Studio](https://aka.ms/speechstudio/customspeech).
-1. Select **Custom Speech** > Your project name > **Speech datasets** > **Editor**.
+1. Select **Custom speech** > Your project name > **Speech datasets** > **Editor**.
1. Select the link to a dataset by name. 1. From the **Audio + text files** table, select the link to an audio file by name.
-1. After you've made edits, select **Save**.
+1. After you make edits, select **Save**.
If there are multiple files in the dataset, you can select **Previous** and **Next** to move from file to file. Edit and save changes to each file as you go.
Datasets in the Editor can be exported to the **Training and testing dataset** t
To export datasets from the Editor, follow these steps: 1. Sign in to the [Speech Studio](https://aka.ms/speechstudio/customspeech).
-1. Select **Custom Speech** > Your project name > **Speech datasets** > **Editor**.
+1. Select **Custom speech** > Your project name > **Speech datasets** > **Editor**.
1. Select the link to a dataset by name. 1. Select one or more rows from the **Audio + text files** table. 1. Select **Export** to export all of the selected files as one new dataset.
-The files are exported as a new dataset, and will not impact or replace other training or testing datasets.
+The files are exported as a new dataset, and don't affect or replace other training or testing datasets.
## Next steps
ai-services How To Custom Speech Upload Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/how-to-custom-speech-upload-data.md
Title: "Upload training and testing datasets for Custom Speech - Speech service"
+ Title: "Upload training and testing datasets for custom speech - Speech service"
-description: Learn about how to upload data to test or train a Custom Speech model.
+description: Learn about how to upload data to test or train a custom speech model.
Previously updated : 11/29/2022 Last updated : 1/19/2024 zone_pivot_groups: speech-studio-cli-rest
-# Upload training and testing datasets for Custom Speech
+# Upload training and testing datasets for custom speech
You need audio or text data for testing the accuracy of speech recognition or training your custom models. For information about the data types supported for testing or training your model, see [Training and testing datasets](how-to-custom-speech-test-and-train.md).
You need audio or text data for testing the accuracy of speech recognition or tr
To upload your own datasets in Speech Studio, follow these steps: 1. Sign in to the [Speech Studio](https://aka.ms/speechstudio/customspeech).
-1. Select **Custom Speech** > Your project name > **Speech datasets** > **Upload data**.
+1. Select **Custom speech** > Your project name > **Speech datasets** > **Upload data**.
1. Select the **Training data** or **Testing data** tab. 1. Select a dataset type, and then select **Next**.
-1. Specify the dataset location, and then select **Next**. You can choose a local file or enter a remote location such as Azure Blob URL. If you select remote location, and you don't use trusted Azure services security mechanism (see next Note), then the remote location should be a URL that can be retrieved with a simple anonymous GET request. For example, a [SAS URL](/azure/storage/common/storage-sas-overview) or a publicly accessible URL. URLs that require extra authorization, or expect user interaction are not supported.
+1. Specify the dataset location, and then select **Next**. You can choose a local file or enter a remote location such as an Azure Blob URL. If you select a remote location, and you don't use the trusted Azure services security mechanism, then the remote location should be a URL that can be retrieved with a simple anonymous GET request. For example, a [SAS URL](/azure/storage/common/storage-sas-overview) or a publicly accessible URL. URLs that require extra authorization or expect user interaction aren't supported.
> [!NOTE] > If you use Azure Blob URL, you can ensure maximum security of your dataset files by using trusted Azure services security mechanism. You will use the same techniques as for Batch transcription and plain Storage Account URLs for your dataset files. See details [here](batch-transcription-audio-data.md#trusted-azure-services-security-mechanism).
After your dataset is uploaded, go to the **Train custom models** page to [train
To create a dataset and connect it to an existing project, use the `spx csr dataset create` command. Construct the request parameters according to the following instructions: -- Set the `project` parameter to the ID of an existing project. This is recommended so that you can also view and manage the dataset in Speech Studio. You can run the `spx csr project list` command to get available projects.
+- Set the `project` parameter to the ID of an existing project. This parameter is recommended so that you can also view and manage the dataset in Speech Studio. You can run the `spx csr project list` command to get available projects.
- Set the required `kind` parameter. The possible set of values for dataset kind are: Language, Acoustic, Pronunciation, and AudioFiles.-- Set the required `contentUrl` parameter. This is the location of the dataset. If you don't use trusted Azure services security mechanism (see next Note), then the `contentUrl` parameter should be a URL that can be retrieved with a simple anonymous GET request. For example, a [SAS URL](/azure/storage/common/storage-sas-overview) or a publicly accessible URL. URLs that require extra authorization, or expect user interaction are not supported.
+- Set the required `contentUrl` parameter. This parameter is the location of the dataset. If you don't use trusted Azure services security mechanism (see next Note), then the `contentUrl` parameter should be a URL that can be retrieved with a simple anonymous GET request. For example, a [SAS URL](/azure/storage/common/storage-sas-overview) or a publicly accessible URL. URLs that require extra authorization, or expect user interaction aren't supported.
> [!NOTE] > If you use Azure Blob URL, you can ensure maximum security of your dataset files by using trusted Azure services security mechanism. You will use the same techniques as for Batch transcription and plain Storage Account URLs for your dataset files. See details [here](batch-transcription-audio-data.md#trusted-azure-services-security-mechanism). - Set the required `language` parameter. The dataset locale must match the locale of the project. The locale can't be changed later. The Speech CLI `language` parameter corresponds to the `locale` property in the JSON request and response.-- Set the required `name` parameter. This is the name that will be displayed in the Speech Studio. The Speech CLI `name` parameter corresponds to the `displayName` property in the JSON request and response.
+- Set the required `name` parameter. This parameter is the name that is displayed in the Speech Studio. The Speech CLI `name` parameter corresponds to the `displayName` property in the JSON request and response.
Here's an example Speech CLI command that creates a dataset and connects it to an existing project:
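As a rough sketch, such a command might look like this; the placeholder project ID and content URL are hypothetical, and the flag spellings mirror the parameter names above (confirm with `spx help csr dataset create`).

```
# Create an acoustic dataset from a publicly retrievable URL and connect it to a project.
spx csr dataset create --project YourProjectId --kind Acoustic --contentUrl "https://contoso.com/mydataset.zip" --language "en-US" --name "My Dataset"
```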
spx help csr dataset
To create a dataset and connect it to an existing project, use the [Datasets_Create](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Datasets_Create) operation of the [Speech to text REST API](rest-speech-to-text.md). Construct the request body according to the following instructions: -- Set the `project` property to the URI of an existing project. This is recommended so that you can also view and manage the dataset in Speech Studio. You can make a [Projects_List](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Projects_List) request to get available projects.
+- Set the `project` property to the URI of an existing project. This property is recommended so that you can also view and manage the dataset in Speech Studio. You can make a [Projects_List](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Projects_List) request to get available projects.
- Set the required `kind` property. The possible set of values for dataset kind are: Language, Acoustic, Pronunciation, and AudioFiles.-- Set the required `contentUrl` property. This is the location of the dataset. If you don't use trusted Azure services security mechanism (see next Note), then the `contentUrl` parameter should be a URL that can be retrieved with a simple anonymous GET request. For example, a [SAS URL](/azure/storage/common/storage-sas-overview) or a publicly accessible URL. URLs that require extra authorization, or expect user interaction are not supported.
+- Set the required `contentUrl` property. This property is the location of the dataset. If you don't use trusted Azure services security mechanism (see next Note), then the `contentUrl` parameter should be a URL that can be retrieved with a simple anonymous GET request. For example, a [SAS URL](/azure/storage/common/storage-sas-overview) or a publicly accessible URL. URLs that require extra authorization, or expect user interaction aren't supported.
> [!NOTE] > If you use Azure Blob URL, you can ensure maximum security of your dataset files by using trusted Azure services security mechanism. You will use the same techniques as for Batch transcription and plain Storage Account URLs for your dataset files. See details [here](batch-transcription-audio-data.md#trusted-azure-services-security-mechanism). - Set the required `locale` property. The dataset locale must match the locale of the project. The locale can't be changed later. -- Set the required `displayName` property. This is the name that will be displayed in the Speech Studio.
+- Set the required `displayName` property. This property is the name that is displayed in the Speech Studio.
Make an HTTP POST request using the URI as shown in the following example. Replace `YourSubscriptionKey` with your Speech resource key, replace `YourServiceRegion` with your Speech resource region, and set the request body properties as previously described.
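As a hedged sketch under the same placeholder assumptions, such a request could look like the following; the property names match the instructions above, and the content URL is hypothetical.

```
# Create a dataset and connect it to an existing project.
curl -v -X POST \
  -H "Ocp-Apim-Subscription-Key: YourSubscriptionKey" \
  -H "Content-Type: application/json" \
  -d '{
    "project": {"self": "https://YourServiceRegion.api.cognitive.microsoft.com/speechtotext/v3.1/projects/YourProjectId"},
    "kind": "Acoustic",
    "contentUrl": "https://contoso.com/mydataset.zip",
    "locale": "en-US",
    "displayName": "My Dataset"
  }' \
  "https://YourServiceRegion.api.cognitive.microsoft.com/speechtotext/v3.1/datasets"
```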
The top-level `self` property in the response body is the dataset's URI. Use thi
::: zone-end > [!IMPORTANT]
-> Connecting a dataset to a Custom Speech project isn't required to train and test a custom model using the REST API or Speech CLI. But if the dataset is not connected to any project, you can't select it for training or testing in the [Speech Studio](https://aka.ms/speechstudio/customspeech).
+> Connecting a dataset to a custom speech project isn't required to train and test a custom model using the REST API or Speech CLI. But if the dataset isn't connected to any project, you can't select it for training or testing in the [Speech Studio](https://aka.ms/speechstudio/customspeech).
## Next steps
ai-services How To Custom Voice Training Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/how-to-custom-voice-training-data.md
Previously updated : 10/27/2022 Last updated : 1/21/2024 # Training data for custom neural voice
-When you're ready to create a custom Text to speech voice for your application, the first step is to gather audio recordings and associated scripts to start training the voice model. The Speech service uses this data to create a unique voice tuned to match the voice in the recordings. After you've trained the voice, you can start synthesizing speech in your applications.
+When you're ready to create a custom Text to speech voice for your application, the first step is to gather audio recordings and associated scripts to start training the voice model. The Speech service uses this data to create a unique voice tuned to match the voice in the recordings. After you train the voice, you can start synthesizing speech in your applications.
> [!TIP] > To create a voice for production use, we recommend you use a professional recording studio and voice talent. For more information, see [record voice samples to create a custom neural voice](record-custom-voice-samples.md).
When you're ready to create a custom Text to speech voice for your application,
A voice training dataset includes audio recordings, and a text file with the associated transcriptions. Each audio file should contain a single utterance (a single sentence or a single turn for a dialog system), and be less than 15 seconds long.
-In some cases, you may not have the right dataset ready and will want to test the custom neural voice training with available audio files, short or long, with or without transcripts.
+In some cases, you might not have the right dataset ready. You can test the custom neural voice training with available audio files, short or long, with or without transcripts.
This table lists data types and how each is used to create a custom Text to speech voice model.
Follow these guidelines when preparing audio.
| Property | Value | | -- | -- | | File format | RIFF (.wav), grouped into a .zip file |
-| File name | File name characters supported by Windows OS, with .wav extension.<br>The characters \ / : * ? " < > \| aren't allowed. <br>It can't start or end with a space, and can't start with a dot. <br>No duplicate file names allowed. |
-| Sampling rate | When creating a custom neural voice, 24,000 Hz is required. |
+| File name | File name characters supported by Windows OS, with .wav extension.<br>The characters `\ / : * ? " < > \|` aren't allowed. <br>It can't start or end with a space, and can't start with a dot. <br>No duplicate file names allowed. |
+| Sampling rate | When you create a custom neural voice, 24,000 Hz is required. |
| Sample format | PCM, at least 16-bit | | Audio length | Shorter than 15 seconds | | Archive format | .zip | | Maximum archive size | 2048 MB | > [!NOTE]
-> The default sampling rate for a custom neural voice is 24,000 Hz. Audio files with a sampling rate lower than 16,000 Hz will be rejected. If a .zip file contains .wav files with different sample rates, only those equal to or higher than 16,000 Hz will be imported. Your audio files with a sampling rate higher than 16,000 Hz and lower than 24,000 Hz will be up-sampled to 24,000 Hz to train a neural voice. It's recommended that you should use a sample rate of 24,000 Hz for your training data.
+> The default sampling rate for a custom neural voice is 24,000 Hz. Audio files with a sampling rate lower than 16,000 Hz will be rejected. If a .zip file contains .wav files with different sample rates, only those equal to or higher than 16,000 Hz will be imported. Your audio files with a sampling rate higher than 16,000 Hz and lower than 24,000 Hz will be up-sampled to 24,000 Hz to train a neural voice. It's recommended that you use a sample rate of 24,000 Hz for your training data.
### Transcription data for Individual utterances + matching transcript
The transcription file is a plain text file. Use these guidelines to prepare you
| -- | -- | | File format | Plain text (.txt) | | Encoding format | ANSI, ASCII, UTF-8, UTF-8-BOM, UTF-16-LE, or UTF-16-BE. For zh-CN, ANSI and ASCII encoding aren't supported. |
-| # of utterances per line | **One** - Each line of the transcription file should contain the name of one of the audio files, followed by the corresponding transcription. The file name and transcription should be separated by a tab (\t). |
+| # of utterances per line | **One** - Each line of the transcription file should contain the name of one of the audio files, followed by the corresponding transcription. You must use a tab (\t) to separate the file name and transcription. |
| Maximum file size | 2048 MB |
-Below is an example of how the transcripts are organized utterance by utterance in one .txt file:
+Here's an example of how the transcripts are organized utterance by utterance in one .txt file:
```
0000000001[tab] This is the waistline, and it's falling.
0000000002[tab] We have trouble scoring.
0000000003[tab] It was Janet Maslin.
```
-It's important that the transcripts are 100% accurate transcriptions of the corresponding audio. Errors in the transcripts will introduce quality loss during the training.
+It's important that the transcripts are 100% accurate transcriptions of the corresponding audio. Errors in the transcripts introduce quality loss during the training.
## Long audio + transcript (Preview) > [!NOTE] > For **Long audio + transcript (Preview)**, only these languages are supported: Chinese (Mandarin, Simplified), English (India), English (United Kingdom), English (United States), French (France), German (Germany), Italian (Italy), Japanese (Japan), Portuguese (Brazil), and Spanish (Mexico).
-In some cases, you may not have segmented audio available. The Speech Studio can help you segment long audio files and create transcriptions. The long-audio segmentation service will use the [Batch Transcription API](batch-transcription.md) feature of speech to text.
+In some cases, you might not have segmented audio available. The Speech Studio can help you segment long audio files and create transcriptions. The long-audio segmentation service uses the [Batch Transcription API](batch-transcription.md) feature of speech to text.
-During the processing of the segmentation, your audio files and the transcripts will also be sent to the Custom Speech service to refine the recognition model so the accuracy can be improved for your data. No data will be retained during this process. After the segmentation is done, only the utterances segmented and their mapping transcripts will be stored for your downloading and training.
+During segmentation processing, your audio files and the transcripts are also sent to the custom speech service to refine the recognition model so that accuracy can be improved for your data. No data is retained during this process. After the segmentation is done, only the segmented utterances and their mapping transcripts are stored for you to download and use in training.
> [!NOTE] > This service is charged toward your speech to text subscription usage. The long-audio segmentation service is only supported with standard (S0) Speech resources.
Follow these guidelines when preparing audio for segmentation.
| Property | Value | | -- | -- | | File format | RIFF (.wav) or .mp3, grouped into a .zip file |
-| File name | File name characters supported by Windows OS, with .wav extension. <br>The characters \ / : * ? " < > \| aren't allowed. <br>It can't start or end with a space, and can't start with a dot. <br>No duplicate file names allowed. |
-| Sampling rate | When creating a custom neural voice, 24,000 Hz is required. |
-| Sample format |RIFF(.wav): PCM, at least 16-bit<br>mp3: at least 256 KBps bit rate|
+| File name | File name characters supported by Windows OS, with .wav extension. <br>The characters `\ / : * ? " < > \|` aren't allowed. <br>It can't start or end with a space, and can't start with a dot. <br>No duplicate file names allowed. |
+| Sampling rate | When you create a custom neural voice, 24,000 Hz is required. |
+| Sample format |RIFF (.wav): PCM, at least 16-bit.<br/><br/>mp3: At least 256 kbps bit rate.|
| Audio length | Longer than 20 seconds | | Archive format | .zip | | Maximum archive size | 2048 MB, at most 1000 audio files included | > [!NOTE]
-> The default sampling rate for a custom neural voice is 24,000 Hz. Audio files with a sampling rate lower than 16,000 Hz will be rejected. Your audio files with a sampling rate higher than 16,000 Hz and lower than 24,000 Hz will be up-sampled to 24,000 Hz to train a neural voice. It's recommended that you should use a sample rate of 24,000 Hz for your training data.
+> The default sampling rate for a custom neural voice is 24,000 Hz. Audio files with a sampling rate lower than 16,000 Hz will be rejected. Your audio files with a sampling rate higher than 16,000 Hz and lower than 24,000 Hz will be up-sampled to 24,000 Hz to train a neural voice. It's recommended that you use a sample rate of 24,000 Hz for your training data.
-All audio files should be grouped into a zip file. It's OK to put .wav files and .mp3 files into one audio zip. For example, you can upload a zip file containing an audio file named 'kingstory.wav', 45 second long, and another audio named 'queenstory.mp3', 200 second long. All .mp3 files will be transformed into the .wav format after processing.
+All audio files should be grouped into a zip file. It's OK to put .wav files and .mp3 files into the same zip file. For example, you can upload a 45-second audio file named 'kingstory.wav' and a 200-second audio file named 'queenstory.mp3' in the same zip file. All .mp3 files will be transformed into the .wav format after processing.
### Transcription data for Long audio + transcript
Transcripts must be prepared to the specifications listed in this table. Each au
| # of utterances per line | No limit | | Maximum file size | 2048 MB |
-All transcripts files in this data type should be grouped into a zip file. For example, you've uploaded a zip file containing an audio file named 'kingstory.wav', 45 seconds long, and another one named 'queenstory.mp3', 200 seconds long. You'll need to upload another zip file containing two transcripts, one named 'kingstory.txt', the other one 'queenstory.txt'. Within each plain text file, you'll provide the full correct transcription for the matching audio.
+All transcript files in this data type should be grouped into a zip file. For example, you might upload a 45-second audio file named 'kingstory.wav' and a 200-second audio file named 'queenstory.mp3' in the same zip file. You need to upload another zip file containing the two corresponding transcripts, one named 'kingstory.txt' and the other named 'queenstory.txt'. Within each plain text file, you provide the full correct transcription for the matching audio.
-After your dataset is successfully uploaded, we'll help you segment the audio file into utterances based on the transcript provided. You can check the segmented utterances and the matching transcripts by downloading the dataset. Unique IDs will be assigned to the segmented utterances automatically. It's important that you make sure the transcripts you provide are 100% accurate. Errors in the transcripts can reduce the accuracy during the audio segmentation and further introduce quality loss in the training phase that comes later.
+After your dataset is successfully uploaded, we'll help you segment the audio file into utterances based on the transcript provided. You can check the segmented utterances and the matching transcripts by downloading the dataset. Unique IDs are assigned to the segmented utterances automatically. It's important that you make sure the transcripts you provide are 100% accurate. Errors in the transcripts can reduce the accuracy during the audio segmentation and further introduce quality loss in the training phase that comes later.
## Audio only (Preview) > [!NOTE] > For **Audio only (Preview)**, only these languages are supported: Chinese (Mandarin, Simplified), English (India), English (United Kingdom), English (United States), French (France), German (Germany), Italian (Italy), Japanese (Japan), Portuguese (Brazil), and Spanish (Mexico).
-If you don't have transcriptions for your audio recordings, use the **Audio only** option to upload your data. Our system can help you segment and transcribe your audio files. Keep in mind, this service will be charged toward your speech to text subscription usage.
+If you don't have transcriptions for your audio recordings, use the **Audio only** option to upload your data. Our system can help you segment and transcribe your audio files. Keep in mind, this service is charged toward your speech to text subscription usage.
Follow these guidelines when preparing audio.
Follow these guidelines when preparing audio.
| Property | Value | | -- | -- | | File format | RIFF (.wav) or .mp3, grouped into a .zip file |
-| File name | File name characters supported by Windows OS, with .wav extension. <br>The characters \ / : * ? " < > \| aren't allowed. <br>It can't start or end with a space, and can't start with a dot. <br>No duplicate file names allowed. |
-| Sampling rate | When creating a custom neural voice, 24,000 Hz is required. |
-| Sample format |RIFF(.wav): PCM, at least 16-bit<br>mp3: at least 256 KBps bit rate|
+| File name | File name characters supported by Windows OS, with .wav extension. <br>The characters `\ / : * ? " < > \|` aren't allowed. <br>It can't start or end with a space, and can't start with a dot. <br>No duplicate file names allowed. |
+| Sampling rate | When you create a custom neural voice, 24,000 Hz is required. |
+| Sample format |RIFF (.wav): PCM, at least 16-bit.<br>mp3: At least 256 kbps bit rate.|
| Audio length | No limit | | Archive format | .zip | | Maximum archive size | 2048 MB, at most 1000 audio files included | > [!NOTE]
-> The default sampling rate for a custom neural voice is 24,000 Hz. Your audio files with a sampling rate higher than 16,000 Hz and lower than 24,000 Hz will be up-sampled to 24,000 Hz to train a neural voice. It's recommended that you should use a sample rate of 24,000 Hz for your training data.
+> The default sampling rate for a custom neural voice is 24,000 Hz. Your audio files with a sampling rate higher than 16,000 Hz and lower than 24,000 Hz will be up-sampled to 24,000 Hz to train a neural voice. It's recommended that you use a sample rate of 24,000 Hz for your training data.
-All audio files should be grouped into a zip file. Once your dataset is successfully uploaded, we'll help you segment the audio file into utterances based on our speech batch transcription service. Unique IDs will be assigned to the segmented utterances automatically. Matching transcripts will be generated through speech recognition. All .mp3 files will be transformed into the .wav format after processing. You can check the segmented utterances and the matching transcripts by downloading the dataset.
+All audio files should be grouped into a zip file. Once your dataset is successfully uploaded, the Speech service helps you segment the audio file into utterances based on our speech batch transcription service. Unique IDs are assigned to the segmented utterances automatically. Matching transcripts are generated through speech recognition. All .mp3 files will be transformed into the .wav format after processing. You can check the segmented utterances and the matching transcripts by downloading the dataset.
## Next steps
ai-services How To Develop Custom Commands Application https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/how-to-develop-custom-commands-application.md
Title: 'How-to: Develop Custom Commands applications - Speech service' description: Learn how to develop and customize Custom Commands applications. These voice-command apps are best suited for task completion or command-and-control scenarios.- Previously updated : 12/15/2020 Last updated : 1/21/2024
[!INCLUDE [deprecation notice](./includes/custom-commands-retire.md)]
-In this how-to article, you learn how to develop and configure Custom Commands applications. The Custom Commands feature helps you build rich voice-command apps that are optimized for voice-first interaction experiences. The feature is best suited to task completion or command-and-control scenarios. It's particularly well suited for Internet of Things (IoT) devices and for ambient and headless devices.
+In this how-to article, you learn how to develop and configure Custom Commands applications. The Custom Commands feature helps you build rich voice-command apps that are optimized for voice-first interaction experiences. The feature is best suited to task completion or command-and-control scenarios. It's well suited for Internet of Things (IoT) devices and for ambient and headless devices.
In this article, you create an application that can turn a TV on and off, set the temperature, and set an alarm. After you create these basic commands, you'll learn about the following options for customizing commands:
A prediction resource is used for recognition when your Custom Commands applicat
### Add a TurnOn command
-In the empty Smart-Room-Lite Custom Commands application you created, add a command. The command will process an utterance, `Turn on the tv`. It will respond with the message `Ok, turning the tv on`.
+In the empty Smart-Room-Lite Custom Commands application you created, add a command. The command processes an utterance, `Turn on the tv`. It responds with the message `Ok, turning the tv on`.
1. Create a new command by selecting **New command** at the top of the left pane. The **New command** window opens. 1. For the **Name** field, provide the value `TurnOn`.
For more information about rules and completion rules, see [Custom Commands conc
### Add a SetTemperature command
-Now add one more command, `SetTemperature`. This command will take a single utterance, `Set the temperature to 40 degrees`, and respond with the message `Ok, setting temperature to 40 degrees`.
+Now add one more command, `SetTemperature`. This command takes a single utterance, `Set the temperature to 40 degrees`, and responds with the message `Ok, setting temperature to 40 degrees`.
To create the new command, follow the steps you used for the `TurnOn` command, but use the example sentence `Set the temperature to 40 degrees`.
Try out the following utterance examples by using voice or text:
- You type: *set the temperature to 40 degrees* - Expected response: Ok, setting temperature to 40 degrees - You type: *turn on the tv*-- Expected response: Ok, turning the tv on
+- Expected response: Ok, turning on the tv
- You type: *set an alarm for 9 am tomorrow* - Expected response: Ok, setting an alarm for 9 am tomorrow
In this section, you learn how to add parameters to your commands. Commands requ
Start by editing the existing `TurnOn` command to turn on and turn off multiple devices.
-1. Now that the command will handle both on and off scenarios, rename the command as *TurnOnOff*.
+1. Now that the command handles both on and off scenarios, rename the command as *TurnOnOff*.
1. In the pane on the left, select the **TurnOn** command. Then next to **New command** at the top of the pane, select the edit button. 1. In the **Rename command** window, change the name to *TurnOnOff*.
Start by editing the existing `TurnOn` command to turn on and turn off multiple
1. Add a new parameter to the command. The parameter represents whether the user wants to turn the device on or off. 1. At top of the middle pane, select **Add**. From the drop-down menu, select **Parameter**. 1. In the pane on the right, in the **Parameters** section, in the **Name** box, add `OnOff`.
- 1. Select **Required**. In the **Add response for a required parameter** window, select **Simple editor**. In the **First variation** field, add *On or Off?*.
+ 1. Select **Required**. In the **Add response for a required parameter** window, select **Simple editor**. In the **First variation** field, add *On or Off*.
1. Select **Update**. > [!div class="mx-imgBorder"]
Test the three commands together by using utterances related to different comman
- Input: *Set an alarm* - Output: For what time? - Input: *Turn on the tv*-- Output: Ok, turning the tv on
+- Output: Ok, turning on the tv
- Input: *Set an alarm* - Output: For what time? - Input: *5 pm*-- Output: Ok, alarm set for 2020-05-01 17:00:00
+- Output: Ok, alarm set for `2020-05-01 17:00:00`
## Add configurations to command parameters
The Custom Commands feature allows you to configure string-type parameters to re
Reuse the `SubjectDevice` parameter from the `TurnOnOff` command. The current configuration for this parameter is **Accept predefined inputs from internal catalog**. This configuration refers to a static list of devices in the parameter configuration. Move out this content to an external data source that can be updated independently.
-To move the content, start by adding a new web endpoint. In the pane on the left, go to the **Web endpoints** section. There, add a new web endpoint URL. Use the following configuration.
+To move the content, start by adding a new web endpoint. In the pane on the left, go to the **Web endpoints** section. Add a new web endpoint URL. Use the following configuration.
| Setting | Suggested value | |-|-|
To move the content, start by adding a new web endpoint. In the pane on the left
| **URL** | `<Your endpoint of getDevices.json>` | | **Method** | **GET** |
-Then, configure and host a web endpoint that returns a JSON file that lists the devices that can be controlled. The web endpoint should return a JSON file that's formatted like this example:
+Then, configure and host a web endpoint that returns a JSON file that lists the devices that can be controlled. The web endpoint should return a JSON file formatted like this example:
```json {
Try it out by selecting the **Train** icon at the top of the pane on the right.
- Input: *Set the temperature to 72 degrees* - Output: Ok, setting temperature to 72 degrees - Input: *Set the temperature to 45 degrees*-- Output: Sorry, I can only set temperature between 60 and 80 degrees
+- Output: Sorry, I can only set temperature between 60 degrees and 80 degrees
- Input: *make it 72 degrees instead* - Output: Ok, setting temperature to 72 degrees ## Add interaction rules
-Interaction rules are *additional* rules that handle specific or complex situations. Although you're free to author your own interaction rules, in this example you use interaction rules for the following scenarios:
+Interaction rules are extra rules that handle specific or complex situations. Although you're free to author your own interaction rules, in this example you use interaction rules for the following scenarios:
* Confirming commands * Adding a one-step correction to commands
To add a confirmation, you use the `SetTemperature` command. To achieve confirma
1. Modify the **Confirm command** interaction rule by using the following configuration: 1. Change the name to **Confirm temperature**.
- 1. The condition **All required parameters** has already been added.
+ 1. The condition **All required parameters** is already set.
1. Add a new action: **Type** > **Send speech response** > **Are you sure you want to set the temperature as {TemperatureValue} degrees?** 1. In the **Expectations** section, leave the default value of **Expecting confirmation from user**.
Try out the changes by selecting **Train**. When the training finishes, select *
### Implement corrections in a command
-In this section, you'll configure a one-step correction. This correction is used after the fulfillment action has run. You'll also see an example of how a correction is enabled by default if the command isn't fulfilled yet. To add a correction when the command isn't finished, add the new parameter `AlarmTone`.
+In this section, you configure a one-step correction. This correction is used after the fulfillment action has run. You'll also see an example of how a correction is enabled by default if the command isn't fulfilled yet. To add a correction when the command isn't finished, add the new parameter `AlarmTone`.
In the left pane, select the **SetAlarm** command. Then add the new parameter **AlarmTone**.
Next, update the response for the **DateTime** parameter to **Ready to set alarm
#### Implement a correction when a command is finished
-The Custom Commands platform allows for one-step correction even when the command has finished. This feature isn't enabled by default. It must be explicitly configured.
+The Custom Commands platform allows for one-step correction even when the command finishes. This feature isn't enabled by default. It must be explicitly configured.
Use the following steps to configure a one-step correction:
Try out the changes by selecting **Train**. Wait for the training to finish, and
- **Input**: *Set an alarm.* - **Output**: Ready to set alarm with tone as Chimes. For what time? - **Input**: *Set an alarm with the tone as Jingle for 9 am tomorrow.*-- **Output**: OK, alarm set for 2020-05-21 09:00:00. The alarm tone is Jingle.
+- **Output**: OK, alarm set for `2020-05-21 09:00:00`. The alarm tone is Jingle.
- **Input**: *No, 8 am.* - **Output**: Updating previous alarm time to 2020-05-29 08:00.
Edit the **Actions** section of the existing completion rule **ConfirmationRespo
Train and test your application by using the following input and output. Notice the variation of responses. The variation is created by multiple alternatives of the template value and also by use of adaptive expressions. * Input: *turn on the tv*
-* Output: Ok, turning the tv on
+* Output: Ok, turning on the tv
* Input: *turn on the tv* * Output: Done, turned on the tv * Input: *turn off the lights*
Another way to customize Custom Commands responses is to select an output voice.
> > You can create custom voices on the **Custom voice** project page. For more information, see [Get started with custom voice](./professional-voice-create-project.md).
-Now the application will respond in the selected voice, instead of the default voice.
+Now the application responds in the selected voice, instead of the default voice.
## Next steps
ai-services How To Get Speech Session Id https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/how-to-get-speech-session-id.md
Title: How to get Speech to text Session ID and Transcription ID
+ Title: How to get speech to text session ID and transcription ID
-description: Learn how to get Speech service Speech to text Session ID and Transcription ID
+description: Learn how to get speech to text session ID and transcription ID
Previously updated : 11/29/2022 Last updated : 1/21/2024
-# How to get Speech to text Session ID and Transcription ID
+# How to get speech to text session ID and transcription ID
-If you use [Speech to text](speech-to-text.md) and need to open a support case, you are often asked to provide a *Session ID* or *Transcription ID* of the problematic transcriptions to debug the issue. This article explains how to get these IDs.
+If you use [speech to text](speech-to-text.md) and need to open a support case, you're often asked to provide a *Session ID* or *Transcription ID* of the problematic transcriptions to debug the issue. This article explains how to get these IDs.
> [!NOTE] > * *Session ID* is used in [real-time speech to text](get-started-speech-to-text.md) and [speech translation](speech-translation.md).
-> * *Transcription ID* is used in [Batch transcription](batch-transcription.md).
+> * *Transcription ID* is used in [batch transcription](batch-transcription.md).
## Getting Session ID
If you use Speech SDK for JavaScript, get the Session ID as described in [this s
If you use [Speech CLI](spx-overview.md), you can also get the Session ID interactively. See details in [this section](#get-session-id-using-speech-cli).
-In case of [Speech to text REST API for short audio](rest-speech-to-text-short.md) you need to "inject" the session information in the requests. See details in [this section](#provide-session-id-using-rest-api-for-short-audio).
+With the [speech to text REST API for short audio](rest-speech-to-text-short.md), you need to inject the session information in the requests. See details in [this section](#provide-session-id-using-rest-api-for-short-audio).
### Enable logging in the Speech SDK
Enable logging for your application as described in [this article](how-to-use-lo
### Get Session ID from the log
-Open the log file your application produced and look for `SessionId:`. The number that would follow is the Session ID you need. In the following log excerpt example `0b734c41faf8430380d493127bd44631` is the Session ID.
+Open the log file your application produced and look for `SessionId:`. The value that follows is the Session ID you need. In the following log excerpt example, `0b734c41faf8430380d493127bd44631` is the Session ID.
``` [874193]: 218ms SPX_DBG_TRACE_VERBOSE: audio_stream_session.cpp:1238 [0000023981752A40]CSpxAudioStreamSession::FireSessionStartedEvent: Firing SessionStarted event: SessionId: 0b734c41faf8430380d493127bd44631
See an example of getting Session ID using JavaScript in [this sample](https://g
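If you use C#, a comparable approach is to read the Session ID from the recognizer's `SessionStarted` event. Here's a minimal sketch; the key, region, and file name are placeholders:

```csharp
using System;
using Microsoft.CognitiveServices.Speech;
using Microsoft.CognitiveServices.Speech.Audio;

var config = SpeechConfig.FromSubscription("YourSpeechKey", "YourRegion");
using var audioConfig = AudioConfig.FromWavFileInput("sample.wav");
using var recognizer = new SpeechRecognizer(config, audioConfig);

// The Session ID is surfaced on the SessionStarted event.
recognizer.SessionStarted += (s, e) =>
    Console.WriteLine($"SessionId: {e.SessionId}");

var result = await recognizer.RecognizeOnceAsync();
Console.WriteLine(result.Text);
```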
### Get Session ID using Speech CLI
-If you use [Speech CLI](spx-overview.md), then you'll see the Session ID in `SESSION STARTED` and `SESSION STOPPED` console messages.
+If you use [Speech CLI](spx-overview.md), then you see the Session ID in `SESSION STARTED` and `SESSION STOPPED` console messages.
You can also enable logging for your sessions and get the Session ID from the log file as described in [this section](#get-session-id-from-the-log). Run the appropriate Speech CLI command to get the information on using logs:
spx help translate log
### Provide Session ID using REST API for short audio
-Unlike Speech SDK, [Speech to text REST API for short audio](rest-speech-to-text-short.md) does not automatically generate a Session ID. You need to generate it yourself and provide it within the REST request.
+Unlike Speech SDK, [Speech to text REST API for short audio](rest-speech-to-text-short.md) doesn't automatically generate a Session ID. You need to generate it yourself and provide it within the REST request.
-Generate a GUID inside your code or using any standard tool. Use the GUID value *without dashes or other dividers*. As an example we will use `9f4ffa5113a846eba289aa98b28e766f`.
+Generate a GUID inside your code or using any standard tool. Use the GUID value *without dashes or other dividers*. As an example we'll use `9f4ffa5113a846eba289aa98b28e766f`.
-As a part of your REST request use `X-ConnectionId=<GUID>` expression. For our example, a sample request will look like this:
+As part of your REST request, use the `X-ConnectionId=<GUID>` expression. For our example, a sample request looks like this:
```http https://eastus.stt.speech.microsoft.com/speech/recognition/conversation/cognitiveservices/v1?language=en-US&X-ConnectionId=9f4ffa5113a846eba289aa98b28e766f ```
-`9f4ffa5113a846eba289aa98b28e766f` will be your Session ID.
+`9f4ffa5113a846eba289aa98b28e766f` is your Session ID.
> [!WARNING] > The value of the parameter `X-ConnectionId` should be in the format of a GUID without dashes or other dividers. All other formats aren't supported and are discarded by the service.
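If you build the request in code, a minimal C# sketch might look like the following. The endpoint matches the earlier example; the subscription key, audio file, and content type are placeholders you'd adjust for your audio.

```csharp
using System;
using System.IO;
using System.Net.Http;

// Generate a GUID without dashes to use as the Session ID.
string sessionId = Guid.NewGuid().ToString("N");

string uri = "https://eastus.stt.speech.microsoft.com/speech/recognition/conversation/cognitiveservices/v1"
           + $"?language=en-US&X-ConnectionId={sessionId}";

using var client = new HttpClient();
using var request = new HttpRequestMessage(HttpMethod.Post, uri);
request.Headers.Add("Ocp-Apim-Subscription-Key", "YourSpeechKey");
request.Content = new ByteArrayContent(File.ReadAllBytes("sample.wav"));
request.Content.Headers.TryAddWithoutValidation(
    "Content-Type", "audio/wav; codecs=audio/pcm; samplerate=16000");

var response = await client.SendAsync(request);
Console.WriteLine($"Session ID: {sessionId}");
Console.WriteLine(await response.Content.ReadAsStringAsync());
```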
The following is an example response body of a [Transcriptions_Create](https://
} ``` > [!NOTE]
-> Use the same technique to determine different IDs required for debugging issues related to [Custom Speech](custom-speech-overview.md), like uploading a dataset using [Datasets_Create](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Datasets_Create) request.
+> Use the same technique to determine different IDs required for debugging issues related to [custom speech](custom-speech-overview.md), like uploading a dataset using [Datasets_Create](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Datasets_Create) request.
> [!NOTE] > You can also see all existing transcriptions and their Transcription IDs for a given Speech resource by using [Transcriptions_Get](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Transcriptions_Get) request.
ai-services How To Lower Speech Synthesis Latency https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/how-to-lower-speech-synthesis-latency.md
Title: How to lower speech synthesis latency using Speech SDK description: How to lower speech synthesis latency using Speech SDK, including streaming, pre-connection, and so on.-++ Previously updated : 04/29/2021- Last updated : 1/21/2024+ zone_pivot_groups: programming-languages-set-nineteen
zone_pivot_groups: programming-languages-set-nineteen
# Lower speech synthesis latency using Speech SDK The synthesis latency is critical to your applications.
-In this article, we will introduce the best practices to lower the latency and bring the best performance to your end users.
+In this article, we'll introduce the best practices to lower the latency and bring the best performance to your end users.
Normally, we measure the latency by `first byte latency` and `finish latency`, as follows:
NSString *resultId = result.resultId;
::: zone-end
-The first byte latency is much lower than finish latency in most cases.
+The first byte latency is lower than finish latency in most cases.
The first byte latency is independent of text length, while finish latency increases with text length. Ideally, we want to minimize the user-experienced latency (the latency before the user hears the sound) to one network round trip time plus the first audio chunk latency of the speech synthesis service.
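For example, a minimal C# sketch of reading both latencies, assuming the SDK's synthesis latency result properties (`SpeechServiceResponse_SynthesisFirstByteLatencyMs` and `SpeechServiceResponse_SynthesisFinishLatencyMs`):

```csharp
using System;
using Microsoft.CognitiveServices.Speech;

var config = SpeechConfig.FromSubscription("YourSpeechKey", "YourRegion");
using var synthesizer = new SpeechSynthesizer(config);

using var result = await synthesizer.SpeakTextAsync("Latency test sentence.");

// Both values are reported in milliseconds on the result properties.
var firstByteLatency = result.Properties.GetProperty(
    PropertyId.SpeechServiceResponse_SynthesisFirstByteLatencyMs);
var finishLatency = result.Properties.GetProperty(
    PropertyId.SpeechServiceResponse_SynthesisFinishLatencyMs);

Console.WriteLine($"First byte latency: {firstByteLatency} ms");
Console.WriteLine($"Finish latency: {finishLatency} ms");
```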
while ([stream readData:data length:16000] > 0) {
The Speech SDK uses a websocket to communicate with the service. Ideally, the network latency should be one round trip time (RTT).
-If the connection is newly established, the network latency will include extra time to establish the connection.
+If the connection is newly established, the network latency includes extra time to establish the connection.
The establishment of a websocket connection needs the TCP handshake, SSL handshake, HTTP connection, and protocol upgrade, which introduces time delay. To avoid the connection latency, we recommend pre-connecting and reusing the `SpeechSynthesizer`. ### Pre-connect
-To pre-connect, establish a connection to the Speech service when you know the connection will be needed soon. For example, if you are building a speech bot in client, you can pre-connect to the speech synthesis service when the user starts to talk, and call `SpeakTextAsync` when the bot reply text is ready.
+To pre-connect, establish a connection to the Speech service when you know the connection is needed soon. For example, if you're building a speech bot on the client, you can pre-connect to the speech synthesis service when the user starts to talk, and call `SpeakTextAsync` when the bot reply text is ready.
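A minimal C# sketch of the pre-connect pattern, assuming the SDK's `Connection` helper for synthesizers (the key and region are placeholders):

```csharp
using System;
using Microsoft.CognitiveServices.Speech;

var config = SpeechConfig.FromSubscription("YourSpeechKey", "YourRegion");
var synthesizer = new SpeechSynthesizer(config);

// Open the connection ahead of time, for example when the user starts talking.
using var connection = Connection.FromSpeechSynthesizer(synthesizer);
connection.Open(true);

// Later, when the bot reply text is ready, synthesis reuses the open connection.
using var result = await synthesizer.SpeakTextAsync("Here's the bot reply.");
Console.WriteLine(result.Reason);
```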
::: zone pivot="programming-language-csharp"
We recommend using an object pool in service scenarios; see our sample code for [C#]
## Transmit compressed audio over the network
-When the network is unstable or with limited bandwidth, the payload size will also impact latency.
+When the network is unstable or bandwidth is limited, the payload size also affects latency.
Meanwhile, a compressed audio format helps to save the users' network bandwidth, which is especially valuable for mobile users. We support many compressed formats including `opus`, `webm`, `mp3`, `silk`, and so on. See the full list in [SpeechSynthesisOutputFormat](/cpp/cognitive-services/speech/microsoft-cognitiveservices-speech-namespace#speechsynthesisoutputformat).
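For instance, you can request a compressed output format on the speech configuration before creating the synthesizer. A brief C# sketch; the Opus format shown is just one option from the enum:

```csharp
using Microsoft.CognitiveServices.Speech;

var config = SpeechConfig.FromSubscription("YourSpeechKey", "YourRegion");

// Request a compressed container instead of raw PCM to reduce payload size.
config.SetSpeechSynthesisOutputFormat(
    SpeechSynthesisOutputFormat.Ogg24Khz16BitMonoOpus);

using var synthesizer = new SpeechSynthesizer(config);
using var result = await synthesizer.SpeakTextAsync("Hello!");
```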
We keep improving the Speech SDK's performance, so try to use the latest Speech
## Load test guideline
-You may use load test to test the speech synthesis service capacity and latency.
-Here are some guidelines.
+You can use load test to test the speech synthesis service capacity and latency. Here are some guidelines:
+ - The speech synthesis service can autoscale, but it takes time to scale out. If concurrency increases over a short time, the client might get long latency or a `429` error code (too many requests). So, we recommend that you increase your concurrency step by step in the load test. [See this article](speech-services-quotas-and-limits.md#general-best-practices-to-mitigate-throttling-during-autoscaling) for more details, especially [this example of workload patterns](speech-services-quotas-and-limits.md#example-of-a-workload-pattern-best-practice).
+ - You can use our sample using object pool ([C#](https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/samples/csharp/sharedcontent/console/speech_synthesis_server_scenario_sample.cs) and [Java](https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/samples/java/jre/console/src/com/microsoft/cognitiveservices/speech/samples/console/SpeechSynthesisScenarioSamples.java)) for load test and getting the latency numbers. You can modify the test turns and concurrency in the sample to meet your target concurrency.
+ - The service has a quota limitation based on real traffic. Therefore, if you want to perform a load test with concurrency higher than your real traffic, connect before your test.
## Next steps
ai-services How To Migrate To Custom Neural Voice https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/how-to-migrate-to-custom-neural-voice.md
Title: Migrate from custom voice to custom neural voice - Speech service description: This document helps users migrate from custom voice to custom neural voice.-++ Previously updated : 11/12/2021- Last updated : 1/21/2024+ # Migrate from custom voice to custom neural voice
> > The pricing for custom voice is different from custom neural voice. Go to the [pricing page](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/) and check the pricing details in the collapsible "Deprecated" section. Custom voice (non-neural training) is referred to as **Custom**.
-The custom neural voice lets you build higher-quality voice models while requiring less data. You can develop more realistic, natural, and conversational voices. Your customers and end users will benefit from the latest Text to speech technology, in a responsible way.
+The custom neural voice lets you build higher-quality voice models while requiring less data. You can develop more realistic, natural, and conversational voices. Your customers and end users benefit from the latest Text to speech technology, in a responsible way.
|Custom voice |Custom neural voice | |--|--|
-| The standard, or "traditional," method of custom voice breaks down spoken language into phonetic snippets that can be remixed and matched using classical programming or statistical methods. | Custom neural voice synthesizes speech using deep neural networks that have "learned" the way phonetics are combined in natural human speech rather than using classical programming or statistical methods.|
-| Custom voice<sup>1</sup> requires a large volume of voice data to produce a more human-like voice model. With fewer recorded lines, a standard custom voice model will tend to sound more obviously robotic. |The custom neural voice capability enables you to create a unique brand voice in multiple languages and styles by using a small set of recordings.|
+| The standard, or "traditional," method of custom voice breaks down spoken language into phonetic snippets that can be remixed and matched using classical programming or statistical methods. | Custom neural voice synthesizes speech using deep neural networks that have "learned" the way phonetics are combined in natural human speech--rather than using classical programming or statistical methods.|
+| Custom voice<sup>1</sup> requires a large volume of voice data to produce a more human-like voice model. With fewer recorded lines, a standard custom voice model tends to sound more obviously robotic. |The custom neural voice capability enables you to create a unique brand voice in multiple languages and styles by using a small set of recordings.|
<sup>1</sup> When creating a custom voice model, the maximum number of data files allowed to be imported per subscription is 10 .zip files for free subscription (F0) users, and 500 for standard subscription (S0) users.
Before you can migrate to custom neural voice, your [application](https://aka.ms
> Even without an Azure account, you can listen to voice samples in [Speech Studio](https://aka.ms/customvoice) and determine the right voice for your business needs. 1. Learn more about our [policy on the limit access](/legal/cognitive-services/speech-service/custom-neural-voice/limited-access-custom-neural-voice?context=%2fazure%2fcognitive-services%2fspeech-service%2fcontext%2fcontext) and then [apply here](https://aka.ms/customneural).
-1. Once your application is approved, you will be provided with the access to the "neural" training feature. Make sure you log in to [Speech Studio](https://aka.ms/speechstudio/customvoice) using the same Azure subscription that you provide in your application.
+1. Once your application is approved, you're provided with access to the "neural" training feature. Make sure you sign in to [Speech Studio](https://aka.ms/speechstudio/customvoice) using the same Azure subscription that you provide in your application.
1. Before you can [train](professional-voice-train-voice.md) and [deploy](professional-voice-deploy-endpoint.md) a custom voice model, you must [create a voice talent profile](professional-voice-create-consent.md). The profile requires an audio file recorded by the voice talent consenting to the usage of their speech data to train a custom voice model.
-1. Update your code in your apps if you have created a new endpoint with a new model.
+1. Update your code in your apps if you created a new endpoint with a new model.
## Custom voice details (deprecated)
Custom voice supports the following languages (locales).
### Regional support
-If you've created a custom voice font, use the endpoint that you've created. You can also use the endpoints listed below, replacing the `{deploymentId}` with the deployment ID for your voice model.
+If you created a custom voice font, use the endpoint that you created. You can also use the endpoints listed in this section, replacing the `{deploymentId}` with the deployment ID for your voice model.
| Region | Endpoint | |--|-|
ai-services How To Migrate To Prebuilt Neural Voice https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/how-to-migrate-to-prebuilt-neural-voice.md
Previously updated : 11/12/2021 Last updated : 1/21/2024
ai-services How To Recognize Intents From Speech Csharp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/how-to-recognize-intents-from-speech-csharp.md
Previously updated : 02/08/2022 Last updated : 1/21/2024 ms.devlang: csharp
# How to recognize intents from speech using the Speech SDK for C#
-The Azure AI services [Speech SDK](speech-sdk.md) integrates with the [Language Understanding service (LUIS)](https://www.luis.ai/home) to provide **intent recognition**. An intent is something the user wants to do: book a flight, check the weather, or make a call. The user can use whatever terms feel natural. Using machine learning, LUIS maps user requests to the intents you've defined.
+The Azure AI services [Speech SDK](speech-sdk.md) integrates with the [Language Understanding service (LUIS)](https://www.luis.ai/home) to provide **intent recognition**. An intent is something the user wants to do: book a flight, check the weather, or make a call. The user can use whatever terms feel natural. LUIS maps user requests to the intents you defined.
> [!NOTE] > A LUIS application defines the intents and entities you want to recognize. It's separate from the C# application that uses the Speech service. In this article, "app" means the LUIS app, while "application" means the C# code.
-In this guide, you use the Speech SDK to develop a C# console application that derives intents from user utterances through your device's microphone. You'll learn how to:
+In this guide, you use the Speech SDK to develop a C# console application that derives intents from user utterances through your device's microphone. You learn how to:
> [!div class="checklist"] >
LUIS uses two kinds of keys:
| Authoring | Lets you create and modify LUIS apps programmatically | | Prediction | Used to access the LUIS application in runtime |
-For this guide, you need the prediction key type. This guide uses the example Home Automation LUIS app, which you can create by following the [Use prebuilt Home automation app](../luis/luis-get-started-create-app.md) quickstart. If you've created a LUIS app of your own, you can use it instead.
+For this guide, you need the prediction key type. This guide uses the example Home Automation LUIS app, which you can create by following the [Use prebuilt Home automation app](../luis/luis-get-started-create-app.md) quickstart. If you created a LUIS app of your own, you can use it instead.
-When you create a LUIS app, LUIS automatically generates an authoring key so you can test the app using text queries. This key doesn't enable the Speech service integration and won't work with this guide. Create a LUIS resource in the Azure dashboard and assign it to the LUIS app. You can use the free subscription tier for this guide.
+When you create a LUIS app, LUIS automatically generates an authoring key so you can test the app using text queries. This key doesn't enable the Speech service integration and doesn't work with this guide. Create a LUIS resource in the Azure dashboard and assign it to the LUIS app. You can use the free subscription tier for this guide.
After you create the LUIS resource in the Azure dashboard, log into the [LUIS portal](https://www.luis.ai/home), choose your application on the **My Apps** page, then switch to the app's **Manage** page. Finally, select **Azure Resources** in the sidebar.
After you create the LUIS resource in the Azure dashboard, log into the [LUIS po
On the **Azure Resources** page:
-Select the icon next to a key to copy it to the clipboard. (You may use either key.)
+Select the icon next to a key to copy it to the clipboard. (You can use either key.)
## Create the project and add the workload
To start, create the project in Visual Studio, and make sure that Visual Studio
1. From the Visual Studio menu bar, select **Tools** > **Get Tools and Features**, which opens Visual Studio Installer and displays the **Modifying** dialog box.
-1. Check whether the **.NET desktop development** workload is available. If the workload hasn't been installed, select the check box next to it, and then select **Modify** to start the installation. It may take a few minutes to download and install.
+1. Check whether the **.NET desktop development** workload is available. If the workload isn't installed, select the check box next to it, and then select **Modify** to start the installation. It might take a few minutes to download and install.
If the check box next to **.NET desktop development** is already selected, select **Close** to exit the dialog box.
Next, create an intent recognizer using `new IntentRecognizer(config)`. Since th
Now import the model from the LUIS app using `LanguageUnderstandingModel.FromAppId()` and add the LUIS intents that you wish to recognize via the recognizer's `AddIntent()` method. These two steps improve the accuracy of speech recognition by indicating words that the user is likely to use in their requests. You don't have to add all the app's intents if you don't need to recognize them all in your application.
-To add intents, you must provide three arguments: the LUIS model (which has been created and is named `model`), the intent name, and an intent ID. The difference between the ID and the name is as follows.
+To add intents, you must provide three arguments: the LUIS model (named `model`), the intent name, and an intent ID. The difference between the ID and the name is as follows.
| `AddIntent()`&nbsp;argument | Purpose | | | - |
By default, LUIS recognizes intents in US English (`en-us`). By assigning a loca
## Continuous recognition from a file
-The following code illustrates two additional capabilities of intent recognition using the Speech SDK. The first, previously mentioned, is continuous recognition, where the recognizer emits events when results are available. These events can then be processed by event handlers that you provide. With continuous recognition, you call the recognizer's `StartContinuousRecognitionAsync()` method to start recognition instead of `RecognizeOnceAsync()`.
+The following code illustrates two more capabilities of intent recognition using the Speech SDK. The first, previously mentioned, is continuous recognition, where the recognizer emits events when results are available. These events are processed by event handlers that you provide. With continuous recognition, you call the recognizer's `StartContinuousRecognitionAsync()` method to start recognition instead of `RecognizeOnceAsync()`.
The other capability is reading the audio containing the speech to be processed from a WAV file. Implementation involves creating an audio configuration that can be used when creating the intent recognizer. The file must be single-channel (mono) with a sampling rate of 16 kHz.
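A condensed C# sketch of both capabilities follows; the LUIS key, region, app ID, intent names, and wait time are placeholders:

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.CognitiveServices.Speech;
using Microsoft.CognitiveServices.Speech.Audio;
using Microsoft.CognitiveServices.Speech.Intent;

var config = SpeechConfig.FromSubscription("YourLuisPredictionKey", "YourLuisRegion");
using var audioConfig = AudioConfig.FromWavFileInput("whatstheweatherlike.wav");
using var recognizer = new IntentRecognizer(config, audioConfig);

// Attach the LUIS app and the intents you want to recognize.
var model = LanguageUnderstandingModel.FromAppId("YourLuisAppId");
recognizer.AddIntent(model, "HomeAutomation.TurnOn", "on");
recognizer.AddIntent(model, "HomeAutomation.TurnOff", "off");

// Handle results as they arrive instead of waiting for a single utterance.
recognizer.Recognized += (s, e) =>
    Console.WriteLine($"Intent: {e.Result.IntentId}, Text: {e.Result.Text}");

await recognizer.StartContinuousRecognitionAsync();
await Task.Delay(TimeSpan.FromSeconds(15)); // let the audio file play through
await recognizer.StopContinuousRecognitionAsync();
```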
To try out these features, delete or comment out the body of the `RecognizeInten
Revise the code to include your LUIS prediction key, region, and app ID and to add the Home Automation intents, as before. Change `whatstheweatherlike.wav` to the name of your recorded audio file. Then build, copy the audio file to the build directory, and run the application.
-For example, if you say "Turn off the lights", pause, and then say "Turn on the lights" in your recorded audio file, console output similar to the following may appear:
+For example, if you say "Turn off the lights", pause, and then say "Turn on the lights" in your recorded audio file, console output similar to the following might appear:
![Audio file LUIS recognition results](media/sdk/luis-results-2.png)
ai-services How To Recognize Speech https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/how-to-recognize-speech.md
Previously updated : 09/01/2023 Last updated : 1/21/2024 ms.devlang: cpp
-# ms.devlang: cpp, csharp, golang, java, javascript, objective-c, python
zone_pivot_groups: programming-languages-speech-services keywords: speech to text, speech to text software
ai-services How To Select Audio Input Devices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/how-to-select-audio-input-devices.md
Title: Select an audio input device with the Speech SDK description: 'Learn about selecting audio input devices in the Speech SDK (C++, C#, Python, Objective-C, Java, and JavaScript) by obtaining the IDs of the audio devices connected to a system.'-++ Previously updated : 07/05/2019-
-# ms.devlang: cpp, csharp, java, javascript, objective-c, python
Last updated : 1/21/2024+
audioConfig = AudioConfiguration.fromMicrophoneInput("<device id>");
audioConfig = AudioConfiguration.fromMicrophoneInput("<device id>"); ```
-> [!Note]
+> [!NOTE]
> Microphone use isn't available for JavaScript running in Node.js. ## Audio device IDs on Windows for desktop applications
ai-services How To Speech Synthesis Viseme https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/how-to-speech-synthesis-viseme.md
Title: Get facial position with viseme
-description: Speech SDK supports viseme events during speech synthesis, which represent key poses in observed speech, such as the position of the lips, jaw, and tongue when producing a particular phoneme.
-
+description: Learn about visemes that represent key poses in observed speech, such as the position of the lips, jaw, and tongue when producing a particular phoneme.
++ Previously updated : 10/23/2022-
-# ms.devlang: cpp, csharp, java, javascript, python
Last updated : 1/21/2024+ zone_pivot_groups: programming-languages-speech-services-nomore-variant
zone_pivot_groups: programming-languages-speech-services-nomore-variant
# Get facial position with viseme > [!NOTE]
-> To explore the locales supported for Viseme ID and blend shapes, refer to [the list of all supported locales](language-support.md?tabs=tts#viseme). Scalable Vector Graphics (SVG) is only supported for the `en-US` locale.
+> To explore the locales supported for viseme ID and blend shapes, refer to [the list of all supported locales](language-support.md?tabs=tts#viseme). Scalable Vector Graphics (SVG) is only supported for the `en-US` locale.
A *viseme* is the visual description of a phoneme in spoken language. It defines the position of the face and mouth while a person is speaking. Each viseme depicts the key facial poses for a specific set of phonemes.
The overall workflow of viseme is depicted in the following flowchart:
## Viseme ID
-Viseme ID refers to an integer number that specifies a viseme. We offer 22 different visemes, each depicting the mouth position for a specific set of phonemes. There's no one-to-one correspondence between visemes and phonemes. Often, several phonemes correspond to a single viseme, because they look the same on the speaker's face when they're produced, such as `s` and `z`. For more specific information, see the table for [mapping phonemes to viseme IDs](#map-phonemes-to-visemes).
+Viseme ID refers to an integer number that specifies a viseme. We offer 22 different visemes, each depicting the mouth position for a specific set of phonemes. There's no one-to-one correspondence between visemes and phonemes. Often, several phonemes correspond to a single viseme, because they look the same on the speaker's face when they're produced, such as `s` and `z`. For more specific information, see the table for [mapping phonemes to viseme IDs](#map-phonemes-to-visemes).
Speech audio output can be accompanied by viseme IDs and `Audio offset`. The `Audio offset` indicates the offset timestamp that represents the start time of each viseme, in ticks (100 nanoseconds). ### Map phonemes to visemes
-Visemes vary by language and locale. Each locale has a set of visemes that correspond to its specific phonemes. The [SSML phonetic alphabets](speech-ssml-phonetic-sets.md) documentation maps viseme IDs to the corresponding International Phonetic Alphabet (IPA) phonemes. The table below shows a mapping relationship between viseme IDs and mouth positions, listing typical IPA phonemes for each viseme ID.
+Visemes vary by language and locale. Each locale has a set of visemes that correspond to its specific phonemes. The [SSML phonetic alphabets](speech-ssml-phonetic-sets.md) documentation maps viseme IDs to the corresponding International Phonetic Alphabet (IPA) phonemes. The table in this section shows a mapping relationship between viseme IDs and mouth positions, listing typical IPA phonemes for each viseme ID.
| Viseme ID | IPA | Mouth position| ||||
Visemes vary by language and locale. Each locale has a set of visemes that corre
For 2D characters, you can design a character that suits your scenario and use Scalable Vector Graphics (SVG) for each viseme ID to get a time-based face position.
-With temporal tags that are provided in a viseme event, these well-designed SVGs will be processed with smoothing modifications, and provide robust animation to the users. For example, the following illustration shows a red-lipped character that's designed for language learning.
+With temporal tags that are provided in a viseme event, these well-designed SVGs are processed with smoothing modifications and provide robust animation to the users. For example, the following illustration shows a red-lipped character designed for language learning.
![Screenshot showing a 2D rendering example of four red-lipped mouths, each representing a different viseme ID that corresponds to a phoneme.](media/text-to-speech/viseme-demo-2D.png)
Render the SVG animation along with the synthesized speech to see the mouth move
# [3D blend shapes](#tab/3dblendshapes)
-Each viseme event includes a series of frames in the `Animation` SDK property. These are grouped to best align the facial positions with the audio. Your 3D engine should render each group of `BlendShapes` frames immediately before the corresponding audio chunk. The `FrameIndex` value indicates how many frames preceded the current list of frames.
+Each viseme event includes a series of frames in the `Animation` SDK property. These frames are grouped to best align the facial positions with the audio. Your 3D engine should render each group of `BlendShapes` frames immediately before the corresponding audio chunk. The `FrameIndex` value indicates how many frames preceded the current list of frames.
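A short C# sketch of consuming viseme events; the text and offset conversion are illustrative, and `Animation` is only populated when blend shapes are requested:

```csharp
using System;
using Microsoft.CognitiveServices.Speech;

var config = SpeechConfig.FromSubscription("YourSpeechKey", "YourRegion");
using var synthesizer = new SpeechSynthesizer(config);

// Each viseme event carries the viseme ID, the audio offset in ticks, and,
// when blend shapes are enabled, the Animation JSON with BlendShapes frames.
synthesizer.VisemeReceived += (s, e) =>
{
    Console.WriteLine($"Viseme {e.VisemeId} at {e.AudioOffset / 10000} ms");
    if (!string.IsNullOrEmpty(e.Animation))
    {
        Console.WriteLine(e.Animation);
    }
};

using var result = await synthesizer.SpeakTextAsync("Hello, this is a viseme demo.");
```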
-The output json looks like the following sample. Each frame within `BlendShapes` contains an array of 55 facial positions represented as decimal values between 0 to 1. The decimal values are in the same order as described in the facial positions table below.
+The output json looks like the following sample. Each frame within `BlendShapes` contains an array of 55 facial positions represented as decimal values between 0 and 1.
```json {
The output json looks like the following sample. Each frame within `BlendShapes`
} ```
-The order of `BlendShapes` is as follows.
+The decimal values in the json response are in the same order as described in the following facial positions table. The order of `BlendShapes` is as follows.
| Order | Facial position in `BlendShapes`| | | -- |
ai-services How To Speech Synthesis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/how-to-speech-synthesis.md
Previously updated : 08/30/2023
-# ms.devlang: cpp, csharp, golang, java, javascript, objective-c, python
Last updated : 1/21/2024 zone_pivot_groups: programming-languages-speech-services keywords: text to speech
ai-services How To Track Speech Sdk Memory Usage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/how-to-track-speech-sdk-memory-usage.md
Title: How to track Speech SDK memory usage - Speech service
description: The Speech SDK supports numerous programming languages for speech to text and text to speech conversion, along with speech translation. This article discusses memory management tooling built into the SDK. + Previously updated : 12/10/2019-
-# ms.devlang: cpp, csharp, java, objective-c, python
Last updated : 1/21/2024+ zone_pivot_groups: programming-languages-set-two # How to track Speech SDK memory usage
-The Speech SDK is based on a native code base that's projected into multiple programming languages through a series of interoperability layers. Each language-specific projection has idiomatically correct features to manage the object lifecycle. Additionally, the Speech SDK includes memory management tooling to track resource usage with object logging and object limits.
+The Speech SDK is based on a native code base projected into multiple programming languages through a series of interoperability layers. Each language-specific projection has idiomatically correct features to manage the object lifecycle. Additionally, the Speech SDK includes memory management tooling to track resource usage with object logging and object limits.
## How to read object logs
Here's a sample log:
## Set a warning threshold
-You have the option to create a warning threshold, and if that threshold is exceeded (assuming logging is enabled), a warning message is logged. The warning message contains a dump of all objects in existence along with their count. This information can be used to better understand issues.
+You can create a warning threshold, and if that threshold is exceeded (assuming logging is enabled), a warning message is logged. The warning message contains a dump of all objects in existence along with their count. This information can be used to better understand issues.
-To enable a warning threshold, it must be specified on a `SpeechConfig` object. This object is checked when a new recognizer is created. In the following examples, let's assume that you've created an instance of `SpeechConfig` called `config`:
+To enable a warning threshold, it must be specified on a `SpeechConfig` object. This object is checked when a new recognizer is created. In the following examples, let's assume that you created an instance of `SpeechConfig` called `config`:
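For example, in C# the threshold is set as a string property on the configuration. A minimal sketch; the value `10000` is arbitrary:

```csharp
// Log a warning with an object dump once more than 10,000 SDK objects exist.
config.SetProperty("SPEECH-ObjectCountWarnThreshold", "10000");
```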
::: zone pivot="programming-language-csharp"
speech_config.set_property_by_name("SPEECH-ObjectCountWarnThreshold", "10000")?
## Set an error threshold
-Using the Speech SDK, you can set the maximum number of objects allowed at a given time. If this setting is enabled, when the maximum number is hit, attempts to create new recognizer objects will fail. Existing objects will continue to work.
+Using the Speech SDK, you can set the maximum number of objects allowed at a given time. If this setting is enabled, when the maximum number is hit, attempts to create new recognizer objects fail. Existing objects continue to work.
Here's a sample error:
class Microsoft::Cognitive
class Microsoft::Cognitive ```
-To enable an error threshold, it must be specified on a `SpeechConfig` object. This object is checked when a new recognizer is created. In the following examples, let's assume that you've created an instance of `SpeechConfig` called `config`:
+To enable an error threshold, it must be specified on a `SpeechConfig` object. This object is checked when a new recognizer is created. In the following examples, let's assume that you created an instance of `SpeechConfig` called `config`:
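A comparable C# sketch follows; the property name `SPEECH-ObjectCountErrorThreshold` is an assumption that mirrors the warning threshold property above, so confirm it against the SDK reference before relying on it:

```csharp
// Assumed property name: fail creation of new recognizer objects
// once more than 10,000 SDK objects exist.
config.SetProperty("SPEECH-ObjectCountErrorThreshold", "10000");
```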
::: zone pivot="programming-language-csharp"
ai-services How To Translate Speech https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/how-to-translate-speech.md
Previously updated : 06/08/2022 Last updated : 1/21/2024 zone_pivot_groups: programming-languages-speech-services
ai-services How To Use Audio Input Streams https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/how-to-use-audio-input-streams.md
Title: Speech SDK audio input stream concepts description: An overview of the capabilities of the Speech SDK audio input stream.
-#
Previously updated : 05/09/2023 Last updated : 1/21/2024 ms.devlang: csharp + # How to use the audio input stream The Speech SDK provides a way to stream audio into the recognizer as an alternative to microphone or file input.
See more examples of speech to text recognition with audio input stream on [GitH
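As a quick orientation, here's a minimal C# sketch of wiring a push stream into a recognizer; the 16-kHz, 16-bit, mono PCM format and the raw file name are placeholder choices:

```csharp
using System;
using System.IO;
using Microsoft.CognitiveServices.Speech;
using Microsoft.CognitiveServices.Speech.Audio;

var speechConfig = SpeechConfig.FromSubscription("YourSpeechKey", "YourRegion");

// Describe the raw PCM you plan to push: 16 kHz, 16-bit, mono.
var format = AudioStreamFormat.GetWaveFormatPCM(16000, 16, 1);
using var pushStream = AudioInputStream.CreatePushStream(format);
using var audioConfig = AudioConfig.FromStreamInput(pushStream);
using var recognizer = new SpeechRecognizer(speechConfig, audioConfig);

// Push audio bytes from your source, then close the stream when done.
byte[] buffer = File.ReadAllBytes("raw-16khz-16bit-mono-pcm.raw");
pushStream.Write(buffer);
pushStream.Close();

var result = await recognizer.RecognizeOnceAsync();
Console.WriteLine(result.Text);
```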
## Identify the format of the audio stream
-Identify the format of the audio stream. The format must be supported by the Speech SDK and the Azure AI Speech service.
+Identify the format of the audio stream.
Supported audio samples are:
ai-services How To Use Codec Compressed Audio Input Streams https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/how-to-use-codec-compressed-audio-input-streams.md
Previously updated : 04/25/2022
-# ms.devlang: cpp, csharp, golang, java, python
Last updated : 1/21/2024 zone_pivot_groups: programming-languages-speech-services
ai-services How To Use Custom Entity Pattern Matching https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/how-to-use-custom-entity-pattern-matching.md
Previously updated : 11/15/2021 Last updated : 1/21/2024 zone_pivot_groups: programming-languages-set-thirteen
The Azure AI services [Speech SDK](speech-sdk.md) has a built-in feature to provide **intent recognition** with **simple language pattern matching**. An intent is something the user wants to do: close a window, mark a checkbox, insert some text, etc.
-In this guide, you use the Speech SDK to develop a console application that derives intents from speech utterances spoken through your device's microphone. You'll learn how to:
+In this guide, you use the Speech SDK to develop a console application that derives intents from speech utterances spoken through your device's microphone. You learn how to:
> [!div class="checklist"] >
ai-services How To Use Logging https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/how-to-use-logging.md
Previously updated : 07/05/2019
-# ms.devlang: cpp, csharp, golang, java, javascript, objective-c, python
Last updated : 1/21/2024 # Enable logging in the Speech SDK
-Logging to file is an optional feature for the Speech SDK. During development logging provides additional information and diagnostics from the Speech SDK's core components. It can be enabled by setting the property `Speech_LogFilename` on a speech configuration object to the location and name of the log file. Logging is handled by a static class in Speech SDK's native library. You can turn on logging for any Speech SDK recognizer or synthesizer instance. All instances in the same process write log entries to the same log file.
+Logging to file is an optional feature for the Speech SDK. During development, logging provides additional information and diagnostics from the Speech SDK's core components. It can be enabled by setting the `Speech_LogFilename` property on a speech configuration object to the location and name of the log file. Logging is handled by a static class in Speech SDK's native library. You can turn on logging for any Speech SDK recognizer or synthesizer instance. All instances in the same process write log entries to the same log file.
## Sample
-The log file name is specified on a configuration object. Taking the `SpeechConfig` as an example and assuming that you've created an instance called `speechConfig`:
+The log file name is specified on a configuration object. Taking the `SpeechConfig` as an example and assuming that you created an instance called `speechConfig`:
```csharp speechConfig.SetProperty(PropertyId.Speech_LogFilename, "LogfilePathAndName");
import ("github.com/Microsoft/cognitive-services-speech-sdk-go/common")
speechConfig.SetProperty(common.SpeechLogFilename, "LogfilePathAndName") ```
-You can create a recognizer from the configuration object. This will enable logging for all recognizers.
+You can create a recognizer from the configuration object. This enables logging for all recognizers.
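For instance, in C#, continuing from the `speechConfig` instance in the earlier snippet:

```csharp
// Any recognizer created from this configuration writes to the configured log file.
using var recognizer = new SpeechRecognizer(speechConfig);
```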
> [!NOTE] > If you create a `SpeechSynthesizer` from the configuration object, it will not enable logging. If logging is enabled though, you will also receive diagnostics from the `SpeechSynthesizer`.
sdk.Diagnostics.SetLogOutputPath("LogfilePathAndName");
## Create a log file on different platforms
-For Windows or Linux, the log file can be in any path the user has write permission for. Write permissions to file system locations in other operating systems may be limited or restricted by default.
+For Windows or Linux, the log file can be in any path the user has write permission for. Write permissions to file system locations in other operating systems might be limited or restricted by default.
### Universal Windows Platform (UWP)
For more about file access permissions in UWP applications, see [File access per
### Android
-You can save a log file to either internal storage, external storage, or the cache directory. Files created in the internal storage or the cache directory are private to the application. It is preferable to create a log file in external storage.
+You can save a log file to either internal storage, external storage, or the cache directory. Files created in the internal storage or the cache directory are private to the application. It's preferable to create a log file in external storage.
```java File dir = context.getExternalFilesDir(null);
File logFile = new File(dir, "logfile.txt");
speechConfig.setProperty(PropertyId.Speech_LogFilename, logFile.getAbsolutePath()); ```
-The code above will save a log file to the external storage in the root of an application-specific directory. A user can access the file with the file manager (usually in `Android/data/ApplicationName/logfile.txt`). The file will be deleted when the application is uninstalled.
+The preceding code saves a log file to the external storage in the root of an application-specific directory. A user can access the file with the file manager (usually in `Android/data/ApplicationName/logfile.txt`). The file is deleted when the application is uninstalled.
You also need to request `WRITE_EXTERNAL_STORAGE` permission in the manifest file:
Within a Unity Android application, the log file can be created using the applic
string logFile = Application.persistentDataPath + "/logFile.txt"; speechConfig.SetProperty(PropertyId.Speech_LogFilename, logFile); ```
-In addition, you need to also set write permission in your Unity Player settings for Android to "External (SDCard)". The log will be written
-to a directory you can get using a tool such as AndroidStudio Device File Explorer. The exact directory path may vary between Android devices,
-location is typically the `sdcard/Android/data/your-app-packagename/files` directory.
+In addition, you need to also set write permission in your Unity Player settings for Android to "External (SDCard)". The log is written to a directory that you can get by using a tool such as AndroidStudio Device File Explorer. The exact directory path can vary between Android devices. The location is typically the `sdcard/Android/data/your-app-packagename/files` directory.
More about data and file storage for Android applications is available [here](https://developer.android.com/guide/topics/data/data-storage.html).
More about data and file storage for Android applications is available [here](ht
Only directories inside the application sandbox are accessible. Files can be created in the documents, library, and temp directories. Files in the documents directory can be made available to a user.
-If you are using Objective-C on iOS, use the following code snippet to create a log file in the application document directory:
+If you're using Objective-C on iOS, use the following code snippet to create a log file in the application document directory:
```objc NSString *filePath = [
To access a created file, add the below properties to the `Info.plist` property
<true/> ```
-If you're using Swift on iOS, please use the following code snippet to enable logs:
+If you're using Swift on iOS, use the following code snippet to enable logs:
```swift let documentsDirectoryPathString = NSSearchPathForDirectoriesInDomains(.documentDirectory, .userDomainMask, true).first! let documentsDirectoryPath = NSURL(string: documentsDirectoryPathString)!
More about iOS File System is available [here](https://developer.apple.com/libra
## Logging with multiple recognizers
-Although a log file output path is specified as a configuration property into a `SpeechRecognizer` or other SDK object, SDK logging is a singleton, *process-wide* facility with no concept of individual instances. You can think of this as the `SpeechRecognizer` constructor (or similar) implicitly calling a static and internal "Configure Global Logging" routine with the property data available in the corresponding `SpeechConfig`.
+Although a log file output path is specified as a configuration property into a `SpeechRecognizer` or other SDK object, SDK logging is a singleton, process-wide facility with no concept of individual instances. You can think of this as the `SpeechRecognizer` constructor (or similar) implicitly calling a static and internal "Configure Global Logging" routine with the property data available in the corresponding `SpeechConfig`.
-This means that you can't, as an example, configure six parallel recognizers to output simultaneously to six separate files. Instead, the latest recognizer created will configure the global logging instance to output to the file specified in its configuration properties and all SDK logging will be emitted to that file.
+This means that you can't, as an example, configure six parallel recognizers to output simultaneously to six separate files. Instead, the latest recognizer created configures the global logging instance to output to the file specified in its configuration properties, and all SDK logging is emitted to that file.
-This also means that the lifetime of the object that configured logging isn't tied to the duration of logging. Logging will not stop in response to the release of an SDK object and will continue as long as no new logging configuration is provided. Once started, process-wide logging may be stopped by setting the log file path to an empty string when creating a new object.
+This also means that the lifetime of the object that configured logging isn't tied to the duration of logging. Logging won't stop in response to the release of an SDK object and will continue as long as no new logging configuration is provided. Once started, process-wide logging can be stopped by setting the log file path to an empty string when creating a new object.
-To reduce potential confusion when configuring logging for multiple instances, it may be useful to abstract control of logging from objects doing real work. An example pair of helper routines:
+To reduce potential confusion when configuring logging for multiple instances, it might be useful to abstract control of logging from objects doing real work. An example pair of helper routines:
```cpp void EnableSpeechSdkLogging(const char* relativePath)
ai-services How To Use Meeting Transcription https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/how-to-use-meeting-transcription.md
Previously updated : 05/06/2023 Last updated : 1/21/2024 zone_pivot_groups: acs-js-csharp-python
-# ms.devlang: csharp, javascript
# Quickstart: Real-time meeting transcription
-You can transcribe meetings with the ability to add, remove, and identify multiple participants by streaming audio to the Speech service. You first create voice signatures for each participant using the REST API, and then use the voice signatures with the Speech SDK to transcribe meetings. See the Meeting Transcription [overview](meeting-transcription.md) for more information.
+You can transcribe meetings with the ability to add, remove, and identify multiple participants by streaming audio to the Speech service. You first create voice signatures for each participant using the REST API, and then use the voice signatures with the Speech SDK to transcribe meetings. See the meeting transcription [overview](meeting-transcription.md) for more information.
## Limitations
You can transcribe meetings with the ability to add, remove, and identify multip
* Requires a 7-mic circular multi-microphone array. The microphone array should meet [our specification](./speech-sdk-microphone.md). > [!NOTE]
-> The Speech SDK for C++, Java, Objective-C, and Swift support Meeting Transcription, but we haven't yet included a guide here.
+> The Speech SDK for C++, Java, Objective-C, and Swift supports meeting transcription, but we haven't yet included a guide here.
::: zone pivot="programming-language-javascript" [!INCLUDE [JavaScript Basics include](includes/how-to/meeting-transcription/real-time-javascript.md)]
You can transcribe meetings with the ability to add, remove, and identify multip
## Next steps > [!div class="nextstepaction"]
-> [Asynchronous Meeting Transcription](how-to-async-meeting-transcription.md)
+> [Asynchronous meeting transcription](how-to-async-meeting-transcription.md)
ai-services How To Use Simple Language Pattern Matching https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/how-to-use-simple-language-pattern-matching.md
Previously updated : 04/19/2022 Last updated : 1/21/2024 zone_pivot_groups: programming-languages-set-thirteen
The Azure AI services [Speech SDK](speech-sdk.md) has a built-in feature to provide **intent recognition** with **simple language pattern matching**. An intent is something the user wants to do: close a window, mark a checkbox, insert some text, etc.
-In this guide, you use the Speech SDK to develop a C++ console application that derives intents from user utterances through your device's microphone. You'll learn how to:
+In this guide, you use the Speech SDK to develop a C++ console application that derives intents from user utterances through your device's microphone. You learn how to:
> [!div class="checklist"] >
A pattern is a phrase that includes an Entity somewhere within it. An Entity is
Take me to the {floorName} ```
-All other special characters and punctuation will be ignored.
+All other special characters and punctuation are ignored.
-Intents will be added using calls to the IntentRecognizer->AddIntent() API.
+Intents are added using calls to the IntentRecognizer->AddIntent() API.
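In C#, a minimal sketch of a pattern-based intent might look like the following; the pattern, intent ID, and entity name are illustrative:

```csharp
using System;
using Microsoft.CognitiveServices.Speech;
using Microsoft.CognitiveServices.Speech.Intent;

var config = SpeechConfig.FromSubscription("YourSpeechKey", "YourRegion");
using var recognizer = new IntentRecognizer(config);

// A simple pattern with an entity placeholder in curly braces.
recognizer.AddIntent("Take me to the {floorName}", "ChangeFloors");

var result = await recognizer.RecognizeOnceAsync();
Console.WriteLine($"Intent: {result.IntentId}");
if (result.Entities.TryGetValue("floorName", out var floorName))
{
    Console.WriteLine($"floorName: {floorName}");
}
```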
::: zone pivot="programming-language-csharp" [!INCLUDE [csharp](includes/how-to/intent-recognition/csharp/simple-pattern-matching.md)]
ai-services How To Windows Voice Assistants Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/how-to-windows-voice-assistants-get-started.md
Previously updated : 04/15/2020 Last updated : 1/21/2024
This guide takes you through the steps to begin developing a voice assistant on
## Set up your development environment
-To start developing a voice assistant for Windows, you'll need to make sure you have the proper development environment.
+To start developing a voice assistant for Windows, you need to make sure you have the proper development environment.
-- **Visual Studio:** You'll need to install [Microsoft Visual Studio 2017](https://visualstudio.microsoft.com/), Community Edition or higher-- **Windows version**: A PC with a Windows Insider fast ring build of Windows and the Windows Insider version of the Windows SDK. This sample code is verified as working on Windows Insider Release Build 19025.vb_release_analog.191112-1600 using Windows SDK 19018. Any Build or SDK above the specified versions should be compatible.
+- **Visual Studio:** You need to install [Microsoft Visual Studio 2017](https://visualstudio.microsoft.com/), Community Edition or higher
+- **Windows version**: A PC with a Windows Insider fast ring build of Windows and the Windows Insider version of the Windows SDK. This sample code is verified as working on Windows Insider Release Build `19025.vb_release_analog.191112-1600` using Windows SDK 19018. Any build or SDK above the specified versions should be compatible.
- **UWP development tools**: The Universal Windows Platform development workload in Visual Studio. See the UWP [Get set up](/windows/uwp/get-started/get-set-up) page to get your machine ready for developing UWP Applications. - **A working microphone and audio output**
To start developing a voice assistant for Windows, you'll need to make 
Some resources necessary for a customized voice agent on Windows requires resources from Microsoft. The [UWP Voice Assistant Sample](windows-voice-assistants-faq.yml#the-uwp-voice-assistant-sample) provides sample versions of these resources for initial development and testing, so this section is unnecessary for initial development. - **Keyword model:** Voice activation requires a keyword model from Microsoft in the form of a .bin file. The .bin file provided in the UWP Voice Assistant Sample is trained on the keyword *Contoso*.-- **Limited Access Feature Token:** Since the ConversationalAgent APIs provide access to microphone audio, they're protected under Limited Access Feature restrictions. To use a Limited Access Feature, you'll need to obtain a Limited Access Feature token connected to the package identity of your application from Microsoft.
+- **Limited Access Feature Token:** Since the ConversationalAgent APIs provide access to microphone audio, they're protected under Limited Access Feature restrictions. To use a Limited Access Feature, you need to obtain a Limited Access Feature token connected to the package identity of your application from Microsoft.
## Establish a dialog service
-For a complete voice assistant experience, the application will need a dialog service that
+For a complete voice assistant experience, the application needs a dialog service that
- Detect a keyword in a given audio file - Listen to user input and convert it to text
For a complete voice assistant experience, the application will need a dialog se
Here are the requirements to create a basic dialog service using Direct Line Speech. - **Speech resource:** An Azure resource for Speech features such as speech to text and text to speech. Create a Speech resource on the [Azure portal](https://portal.azure.com). For more information, see [Create a new Azure AI services resource](../multi-service-resource.md?pivots=azportal).-- **Bot Framework bot:** A bot created using Bot Framework version 4.2 or above that's subscribed to [Direct Line Speech](./direct-line-speech.md) to enable voice input and output. [This guide](./tutorial-voice-enable-your-bot-speech-sdk.md) contains step-by-step instructions to make an "echo bot" and subscribe it to Direct Line Speech. You can also go [here](https://blog.botframework.com/2018/05/07/build-a-microsoft-bot-framework-bot-with-the-bot-builder-sdk-v4/) for steps on how to create a customized bot, then follow the same steps [here](./tutorial-voice-enable-your-bot-speech-sdk.md) to subscribe it to Direct Line Speech, but with your new bot rather than the "echo bot".
+- **Bot Framework bot:** A bot created using Bot Framework version 4.2 or above and subscribed to [Direct Line Speech](./direct-line-speech.md) to enable voice input and output. [This guide](./tutorial-voice-enable-your-bot-speech-sdk.md) contains step-by-step instructions to make an "echo bot" and subscribe it to Direct Line Speech. You can also go to [this Bot Framework article](https://blog.botframework.com/2018/05/07/build-a-microsoft-bot-framework-bot-with-the-bot-builder-sdk-v4/) for steps on how to create a customized bot. Then follow the same steps [here](./tutorial-voice-enable-your-bot-speech-sdk.md) to subscribe it to Direct Line Speech, but with your new bot rather than the "echo bot".
## Try out the sample app
With your Speech resource key and echo bot's bot ID, you're ready to try out the
## Create your own voice assistant for Windows
-Once you've received your Limited Access Feature token and bin file from Microsoft, you can begin on your own voice assistant on Windows.
+Once you receive your Limited Access Feature token and bin file from Microsoft, you can begin building your own voice assistant on Windows.
## Next steps
ai-services Improve Accuracy Phrase List https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/improve-accuracy-phrase-list.md
- Previously updated : 09/01/2022 + Last updated : 1/21/2024 zone_pivot_groups: programming-languages-set-two-with-js-spx
Examples of phrases include:
Phrase lists are simple and lightweight: - **Just-in-time**: A phrase list is provided just before starting the speech recognition, eliminating the need to train a custom model. -- **Lightweight**: You don't need a large data set. Simply provide a word or phrase to boost its recognition.
+- **Lightweight**: You don't need a large data set. Provide a word or phrase to boost its recognition.
You can use phrase lists with the [Speech Studio](speech-studio-overview.md), [Speech SDK](quickstarts/setup-platform.md), or [Speech Command Line Interface (CLI)](spx-overview.md). The [Batch transcription API](batch-transcription.md) doesn't support phrase lists.
-You can use phrase lists with both standard and [custom speech](custom-speech-overview.md). There are some situations where training a custom model that includes phrases is likely the best option to improve accuracy. For example, in the following cases you would use Custom Speech:
+You can use phrase lists with both standard and [custom speech](custom-speech-overview.md). There are some situations where training a custom model that includes phrases is likely the best option to improve accuracy. For example, in the following cases you would use custom speech:
- If you need to use a large list of phrases. A phrase list shouldn't have more than 500 phrases. -- If you need a phrase list for languages that are not currently supported.
+- If you need a phrase list for languages that aren't currently supported.
## Try it in Speech Studio
-You can use [Speech Studio](speech-studio-overview.md) to test how phrase list would help improve recognition for your audio. To implement a phrase list with your application in production, you'll use the Speech SDK or Speech CLI.
+You can use [Speech Studio](speech-studio-overview.md) to test how phrase list would help improve recognition for your audio. To implement a phrase list with your application in production, you use the Speech SDK or Speech CLI.
For example, let's say that you want the Speech service to recognize this sentence:
-"Hi Rehaan, this is Jessie from Contoso bank. "
+"Hi Rehaan, I'm Jessie from Contoso bank."
-After testing, you might find that it's incorrectly recognized as:
-"Hi **everyone**, this is **Jesse** from **can't do so bank**."
+You might find that a phrase is incorrectly recognized as:
+"Hi **everyone**, I'm **Jesse** from **can't do so bank**."
-In this case you would want to add "Rehaan", "Jessie", and "Contoso" to your phrase list. Then the names should be recognized correctly.
+In the previous scenario, you would want to add "Rehaan", "Jessie", and "Contoso" to your phrase list. Then the names should be recognized correctly.
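To apply the same phrases in an application, you can register them with the Speech SDK before recognition starts. The following C# sketch assumes placeholder values for the Speech resource key and region and uses the default microphone; it isn't part of the original walkthrough.

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.CognitiveServices.Speech;
using Microsoft.CognitiveServices.Speech.Audio;

class PhraseListSample
{
    static async Task Main()
    {
        // Assumed placeholders: replace with your Speech resource key and region.
        var speechConfig = SpeechConfig.FromSubscription("YourSpeechKey", "YourSpeechRegion");
        speechConfig.SpeechRecognitionLanguage = "en-US";

        using var audioConfig = AudioConfig.FromDefaultMicrophoneInput();
        using var recognizer = new SpeechRecognizer(speechConfig, audioConfig);

        // Boost recognition of the names from the example sentence.
        var phraseList = PhraseListGrammar.FromRecognizer(recognizer);
        phraseList.AddPhrase("Rehaan");
        phraseList.AddPhrase("Jessie");
        phraseList.AddPhrase("Contoso");

        var result = await recognizer.RecognizeOnceAsync();
        Console.WriteLine(result.Text);
    }
}
```

The [Speech CLI](spx-overview.md) accepts a semicolon-separated phrase list in a similar way; see its documentation for the equivalent option.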
Now try Speech Studio to see how phrase list can improve recognition accuracy.
> You may be prompted to select your Azure subscription and Speech resource, and then acknowledge billing for your region. 1. Go to **Real-time Speech to text** in [Speech Studio](https://aka.ms/speechstudio/speechtotexttool).
-1. You test speech recognition by uploading an audio file or recording audio with a microphone. For example, select **record audio with a microphone** and then say "Hi Rehaan, this is Jessie from Contoso bank. " Then select the red button to stop recording.
+1. You test speech recognition by uploading an audio file or recording audio with a microphone. For example, select **record audio with a microphone** and then say "Hi Rehaan, I'm Jessie from Contoso bank." Then select the red button to stop recording.
1. You should see the transcription result in the **Test results** text box. If "Rehaan", "Jessie", or "Contoso" were recognized incorrectly, you can add the terms to a phrase list in the next step. 1. Select **Show advanced options** and turn on **Phrase list**.
-1. Enter "Contoso;Jessie;Rehaan" in the phrase list text box. Note that multiple phrases need to be separated by a semicolon.
+1. Enter "Contoso;Jessie;Rehaan" in the phrase list text box. Multiple phrases need to be separated by a semicolon.
:::image type="content" source="./media/custom-speech/phrase-list-after-zoom.png" alt-text="Screenshot of a phrase list applied in Speech Studio." lightbox="./media/custom-speech/phrase-list-after-full.png"::: 1. Use the microphone to test recognition again. Otherwise you can select the retry arrow next to your audio file to re-run your audio. The terms "Rehaan", "Jessie", or "Contoso" should be recognized.
Allowed characters include locale-specific letters and digits, white space chara
Check out more options to improve recognition accuracy. > [!div class="nextstepaction"]
-> [Custom Speech](custom-speech-overview.md)
+> [Custom speech](custom-speech-overview.md)
ai-services Language Identification https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/language-identification.md
speechRecognizer.recognizeOnceAsync((result: SpeechSDK.SpeechRecognitionResult)
> Language detection with custom models can only be used with real-time speech to text and speech translation. Batch transcription only supports language detection for default base models. ::: zone pivot="programming-language-csharp"
-This sample shows how to use language detection with a custom endpoint. If the detected language is `en-US`, then the default model is used. If the detected language is `fr-FR`, then the custom model endpoint is used. For more information, see [Deploy a Custom Speech model](how-to-custom-speech-deploy-model.md).
+This sample shows how to use language detection with a custom endpoint. If the detected language is `en-US`, then the default model is used. If the detected language is `fr-FR`, then the custom model endpoint is used. For more information, see [Deploy a custom speech model](how-to-custom-speech-deploy-model.md).
```csharp var sourceLanguageConfigs = new SourceLanguageConfig[]
var autoDetectSourceLanguageConfig =
::: zone-end ::: zone pivot="programming-language-cpp"
-This sample shows how to use language detection with a custom endpoint. If the detected language is `en-US`, then the default model is used. If the detected language is `fr-FR`, then the custom model endpoint is used. For more information, see [Deploy a Custom Speech model](how-to-custom-speech-deploy-model.md).
+This sample shows how to use language detection with a custom endpoint. If the detected language is `en-US`, then the default model is used. If the detected language is `fr-FR`, then the custom model endpoint is used. For more information, see [Deploy a custom speech model](how-to-custom-speech-deploy-model.md).
```cpp std::vector<std::shared_ptr<SourceLanguageConfig>> sourceLanguageConfigs;
auto autoDetectSourceLanguageConfig =
::: zone-end ::: zone pivot="programming-language-java"
-This sample shows how to use language detection with a custom endpoint. If the detected language is `en-US`, then the default model is used. If the detected language is `fr-FR`, then the custom model endpoint is used. For more information, see [Deploy a Custom Speech model](how-to-custom-speech-deploy-model.md).
+This sample shows how to use language detection with a custom endpoint. If the detected language is `en-US`, then the default model is used. If the detected language is `fr-FR`, then the custom model endpoint is used. For more information, see [Deploy a custom speech model](how-to-custom-speech-deploy-model.md).
```java List sourceLanguageConfigs = new ArrayList<SourceLanguageConfig>();
AutoDetectSourceLanguageConfig autoDetectSourceLanguageConfig =
::: zone-end ::: zone pivot="programming-language-python"
-This sample shows how to use language detection with a custom endpoint. If the detected language is `en-US`, then the default model is used. If the detected language is `fr-FR`, then the custom model endpoint is used. For more information, see [Deploy a Custom Speech model](how-to-custom-speech-deploy-model.md).
+This sample shows how to use language detection with a custom endpoint. If the detected language is `en-US`, then the default model is used. If the detected language is `fr-FR`, then the custom model endpoint is used. For more information, see [Deploy a custom speech model](how-to-custom-speech-deploy-model.md).
```Python en_language_config = speechsdk.languageconfig.SourceLanguageConfig("en-US")
This sample shows how to use language detection with a custom endpoint. If the d
::: zone-end ::: zone pivot="programming-language-objectivec"
-This sample shows how to use language detection with a custom endpoint. If the detected language is `en-US`, then the default model is used. If the detected language is `fr-FR`, then the custom model endpoint is used. For more information, see [Deploy a Custom Speech model](how-to-custom-speech-deploy-model.md).
+This sample shows how to use language detection with a custom endpoint. If the detected language is `en-US`, then the default model is used. If the detected language is `fr-FR`, then the custom model endpoint is used. For more information, see [Deploy a custom speech model](how-to-custom-speech-deploy-model.md).
```Objective-C SPXSourceLanguageConfiguration* enLanguageConfig = [[SPXSourceLanguageConfiguration alloc]init:@"en-US"];
ai-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/language-support.md
Language support varies by Speech service functionality.
The table in this section summarizes the locales supported for Speech to text. See the table footnotes for more details.
-Additional remarks for Speech to text locales are included in the [Custom Speech](#custom-speech) section below.
+Additional remarks for Speech to text locales are included in the [custom speech](#custom-speech) section below.
> [!TIP] > Try out the [real-time speech to text tool](https://speech.microsoft.com/portal/speechtotexttool) without having to use any code. [!INCLUDE [Language support include](includes/language-support/stt.md)]
-### Custom Speech
+### Custom speech
-To improve Speech to text recognition accuracy, customization is available for some languages and base models. Depending on the locale, you can upload audio + human-labeled transcripts, plain text, structured text, and pronunciation data. By default, plain text customization is supported for all available base models. To learn more about customization, see [Custom Speech](./custom-speech-overview.md).
+To improve Speech to text recognition accuracy, customization is available for some languages and base models. Depending on the locale, you can upload audio + human-labeled transcripts, plain text, structured text, and pronunciation data. By default, plain text customization is supported for all available base models. To learn more about customization, see [custom speech](./custom-speech-overview.md).
These are the locales that support the [display text format feature](./how-to-custom-speech-display-text-format.md): da-DK, de-DE, en-AU, en-CA, en-GB, en-HK, en-IE, en-IN, en-NG, en-NZ, en-PH, en-SG, en-US, es-ES, es-MX, fi-FI, fr-CA, fr-FR, hi-IN, it-IT, ja-JP, ko-KR, nb-NO, nl-NL, pl-PL, pt-BR, pt-PT, sv-SE, tr-TR, zh-CN, zh-HK.
ai-services Logging Audio Transcription https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/logging-audio-transcription.md
zone_pivot_groups: programming-languages-speech-services-nomore-variant
You can enable logging for both audio input and recognized speech when using [speech to text](get-started-speech-to-text.md) or [speech translation](get-started-speech-to-text.md). For speech translation, only the audio and transcription of the original audio are logged. The translations aren't logged. This article describes how to enable, access and delete the audio and transcription logs.
-Audio and transcription logs can be used as input for [Custom Speech](custom-speech-overview.md) model training. You might have other use cases.
+Audio and transcription logs can be used as input for [custom speech](custom-speech-overview.md) model training. You might have other use cases.
> [!WARNING] > Don't depend on audio and transcription logs when the exact record of input audio is required. In the periods of peak load, the service prioritizes hardware resources for transcription tasks. This may result in minor parts of the audio not being logged. Such occasions are rare, but nevertheless possible.
https://eastus.stt.speech.microsoft.com/speech/recognition/conversation/cognitiv
### Enable audio and transcription logging for a custom model endpoint
-This method is applicable for [Custom Speech](custom-speech-overview.md) endpoints only.
+This method is applicable for [custom speech](custom-speech-overview.md) endpoints only.
Logging can be enabled or disabled in the persistent custom model endpoint settings. When logging is enabled (turned on) for a custom model endpoint, then you don't need to enable logging at the [recognition session level with the SDK or REST API](#enable-logging-for-a-single-recognition-session). Even when logging isn't enabled for a custom model endpoint, you can enable logging temporarily at the recognition session level with the SDK or REST API.
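For the session-level case, here's a minimal C# sketch that turns on audio and transcription logging with the Speech SDK's `EnableAudioLogging` method. The key, region, and audio file name are assumed placeholders.

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.CognitiveServices.Speech;
using Microsoft.CognitiveServices.Speech.Audio;

class SessionLoggingSample
{
    static async Task Main()
    {
        // Assumed placeholders: replace with your Speech resource key and region.
        var speechConfig = SpeechConfig.FromSubscription("YourSpeechKey", "YourSpeechRegion");

        // Request audio and transcription logging for this recognition session.
        speechConfig.EnableAudioLogging();

        using var audioConfig = AudioConfig.FromWavFileInput("sample.wav");
        using var recognizer = new SpeechRecognizer(speechConfig, audioConfig);

        var result = await recognizer.RecognizeOnceAsync();
        Console.WriteLine(result.Text);
    }
}
```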
Logging can be enabled or disabled in the persistent custom model endpoint setti
> For custom model endpoints, the logging setting of your deployed endpoint is prioritized over your session-level setting (SDK or REST API). If logging is enabled for the custom model endpoint, the session-level setting (whether it's set to true or false) is ignored. If logging isn't enabled for the custom model endpoint, the session-level setting determines whether logging is active. You can enable audio and transcription logging for a custom model endpoint:-- When you create the endpoint using the Speech Studio, REST API, or Speech CLI. For details about how to enable logging for a Custom Speech endpoint, see [Deploy a Custom Speech model](how-to-custom-speech-deploy-model.md#add-a-deployment-endpoint).
+- When you create the endpoint using the Speech Studio, REST API, or Speech CLI. For details about how to enable logging for a custom speech endpoint, see [Deploy a custom speech model](how-to-custom-speech-deploy-model.md#add-a-deployment-endpoint).
- When you update the endpoint ([Endpoints_Update](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Endpoints_Update)) using the [Speech to text REST API](rest-speech-to-text.md). For an example of how to update the logging setting for an endpoint, see [Turn off logging for a custom model endpoint](#turn-off-logging-for-a-custom-model-endpoint). But instead of setting the `contentLoggingEnabled` property to `false`, set it to `true` to enable logging for the endpoint. ## Turn off logging for a custom model endpoint
This method is applicable for [custom model](how-to-custom-speech-deploy-model.m
To download the endpoint logs: 1. Sign in to the [Speech Studio](https://aka.ms/speechstudio/customspeech).
-1. Select **Custom Speech** > Your project name > **Deploy models**.
+1. Select **Custom speech** > Your project name > **Deploy models**.
1. Select the link by endpoint name. 1. Under **Content logging**, select **Download log**.
ai-services Migrate V3 0 To V3 1 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/migrate-v3-0-to-v3-1.md
# Migrate code from v3.0 to v3.1 of the REST API
-The Speech to text REST API is used for [Batch transcription](batch-transcription.md) and [Custom Speech](custom-speech-overview.md). Changes from version 3.0 to 3.1 are described in the sections below.
+The Speech to text REST API is used for [Batch transcription](batch-transcription.md) and [custom speech](custom-speech-overview.md). Changes from version 3.0 to 3.1 are described in the sections below.
> [!IMPORTANT] > Speech to text REST API v3.2 is available in preview.
The `filter` property is added to the [Transcriptions_List](https://eastus.dev.c
If you use webhooks to receive notifications about transcription status, note that webhooks created via the V3.0 API can't receive notifications for V3.1 transcription requests. You need to create a new webhook endpoint via the V3.1 API to receive notifications for V3.1 transcription requests.
-## Custom Speech
+## Custom speech
### Datasets
ai-services Migrate V3 1 To V3 2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/migrate-v3-1-to-v3-2.md
# Migrate code from v3.1 to v3.2 of the REST API
-The Speech to text REST API is used for [Batch transcription](batch-transcription.md) and [Custom Speech](custom-speech-overview.md). Changes from version 3.1 to 3.2 are described in the sections below.
+The Speech to text REST API is used for [Batch transcription](batch-transcription.md) and [custom speech](custom-speech-overview.md). Changes from version 3.1 to 3.2 are described in the sections below.
> [!IMPORTANT] > Speech to text REST API v3.2 is available in preview.
Azure AI Speech now supports OpenAI's Whisper model via Speech to text REST API
> [!NOTE] > Azure OpenAI Service also supports OpenAI's Whisper model for speech to text with a synchronous REST API. To learn more, check out the [quickstart](../openai/whisper-quickstart.md). Check out [What is the Whisper model?](./whisper-overview.md) to learn more about when to use Azure AI Speech vs. Azure OpenAI Service.
-## Custom Speech
+## Custom speech
> [!IMPORTANT] > You'll be charged for custom speech model training if the base model was created on October 1, 2023 and later. You are not charged for training if the base model was created prior to October 2023. For more information, see [Azure AI Speech pricing](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/).
ai-services Regions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/regions.md
The following regions are supported for Speech service features such as speech t
| US | West US 2 | `westus2` <sup>1,2,4,5,7</sup>| | US | West US 3 | `westus3` |
-<sup>1</sup> The region has dedicated hardware for Custom Speech training. If you plan to train a custom model with audio data, use one of the regions with dedicated hardware for faster training. Then you can [copy the trained model](how-to-custom-speech-train-model.md#copy-a-model) to another region.
+<sup>1</sup> The region has dedicated hardware for custom speech training. If you plan to train a custom model with audio data, use one of the regions with dedicated hardware for faster training. Then you can [copy the trained model](how-to-custom-speech-train-model.md#copy-a-model) to another region.
<sup>2</sup> The region is available for custom neural voice training. You can copy a trained neural voice model to other regions for deployment.
ai-services Resiliency And Recovery Plan https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/resiliency-and-recovery-plan.md
The Speech service is [available in various regions](./regions.md). Speech resou
Datasets for customer-created data assets, such as customized speech models, custom voice fonts and speaker recognition voice profiles, are also **available only within the service-deployed region**. Such assets are:
-**Custom Speech**
+**Custom speech**
- Training audio/text data - Test audio/text data - Customized speech models
Data assets, models or deployments in one region can't be made visible or access
You should create Speech service resources in both a main and a secondary region by following the same steps as used for default endpoints.
-### Custom Speech
+### Custom speech
-Custom Speech service doesn't support automatic failover. We suggest the following steps to prepare for manual or automatic failover implemented in your client code. In these steps, you replicate custom models in a secondary region. With this preparation, your client code can switch to a secondary region when the primary region fails.
+Custom speech service doesn't support automatic failover. We suggest the following steps to prepare for manual or automatic failover implemented in your client code. In these steps, you replicate custom models in a secondary region. With this preparation, your client code can switch to a secondary region when the primary region fails.
1. Create your custom model in one main region (Primary). 2. Run the [Models_CopyTo](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Models_CopyTo) operation to replicate the custom model to all prepared regions (Secondary).
-3. Go to Speech Studio to load the copied model and create a new endpoint in the secondary region. See how to deploy a new model in [Deploy a Custom Speech model](./how-to-custom-speech-deploy-model.md).
+3. Go to Speech Studio to load the copied model and create a new endpoint in the secondary region. See how to deploy a new model in [Deploy a custom speech model](./how-to-custom-speech-deploy-model.md).
- If you have set a specific quota, also consider setting the same quota in the backup regions. See details in [Speech service Quotas and Limits](./speech-services-quotas-and-limits.md). 4. Configure your client to fail over on persistent errors as with the default endpoints usage.
ai-services Rest Speech To Text Short https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/rest-speech-to-text-short.md
Before you use the Speech to text REST API for short audio, consider the followi
* Requests that use the REST API for short audio and transmit audio directly can contain no more than 60 seconds of audio. The input [audio formats](#audio-formats) are more limited compared to the [Speech SDK](speech-sdk.md). * The REST API for short audio returns only final results. It doesn't provide partial results. * [Speech translation](speech-translation.md) is not supported via REST API for short audio. You need to use [Speech SDK](speech-sdk.md).
-* [Batch transcription](batch-transcription.md) and [Custom Speech](custom-speech-overview.md) are not supported via REST API for short audio. You should always use the [Speech to text REST API](rest-speech-to-text.md) for batch transcription and Custom Speech.
+* [Batch transcription](batch-transcription.md) and [custom speech](custom-speech-overview.md) are not supported via REST API for short audio. You should always use the [Speech to text REST API](rest-speech-to-text.md) for batch transcription and custom speech.
Before you use the Speech to text REST API for short audio, understand that you need to complete a token exchange as part of authentication to access the service. For more information, see [Authentication](#authentication).
ai-services Rest Speech To Text https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/rest-speech-to-text.md
# Speech to text REST API
-Speech to text REST API is used for [Batch transcription](batch-transcription.md) and [Custom Speech](custom-speech-overview.md).
+Speech to text REST API is used for [Batch transcription](batch-transcription.md) and [custom speech](custom-speech-overview.md).
> [!IMPORTANT] > Speech to text REST API v3.2 is available in preview.
Speech to text REST API is used for [Batch transcription](batch-transcription.md
Use Speech to text REST API to: -- [Custom Speech](custom-speech-overview.md): With Custom Speech, you can upload your own data, test and train a custom model, compare accuracy between models, and deploy a model to a custom endpoint. Copy models to other subscriptions if you want colleagues to have access to a model that you built, or if you want to deploy a model to more than one region.
+- [Custom speech](custom-speech-overview.md): With custom speech, you can upload your own data, test and train a custom model, compare accuracy between models, and deploy a model to a custom endpoint. Copy models to other subscriptions if you want colleagues to have access to a model that you built, or if you want to deploy a model to more than one region.
- [Batch transcription](batch-transcription.md): Transcribe audio files as a batch from multiple URLs or an Azure container. Speech to text REST API includes such features as:
Speech to text REST API includes such features as:
## Datasets
-Datasets are applicable for [Custom Speech](custom-speech-overview.md). You can use datasets to train and test the performance of different models. For example, you can compare the performance of a model trained with a specific dataset to the performance of a model trained with a different dataset.
+Datasets are applicable for [custom speech](custom-speech-overview.md). You can use datasets to train and test the performance of different models. For example, you can compare the performance of a model trained with a specific dataset to the performance of a model trained with a different dataset.
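As an illustration, the dataset operations can be called directly over HTTPS. The following minimal C# sketch lists datasets by using the v3.1 path; the environment variable names and the region-based endpoint are assumptions made for the example.

```csharp
using System;
using System.Net.Http;
using System.Threading.Tasks;

class DatasetListSample
{
    static async Task Main()
    {
        // Assumed placeholders: set these to your Speech resource key and region.
        var key = Environment.GetEnvironmentVariable("SPEECH_KEY");
        var region = Environment.GetEnvironmentVariable("SPEECH_REGION");

        using var http = new HttpClient();
        http.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", key);

        // Datasets_List: returns the datasets that belong to the Speech resource.
        var url = $"https://{region}.api.cognitive.microsoft.com/speechtotext/v3.1/datasets";
        var response = await http.GetAsync(url);
        response.EnsureSuccessStatusCode();
        Console.WriteLine(await response.Content.ReadAsStringAsync());
    }
}
```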
See [Upload training and testing datasets](how-to-custom-speech-upload-data.md?pivots=rest-api) for examples of how to upload datasets. This table includes all the operations that you can perform on datasets.
See [Upload training and testing datasets](how-to-custom-speech-upload-data.md?p
## Endpoints
-Endpoints are applicable for [Custom Speech](custom-speech-overview.md). You must deploy a custom endpoint to use a Custom Speech model.
+Endpoints are applicable for [custom speech](custom-speech-overview.md). You must deploy a custom endpoint to use a custom speech model.
See [Deploy a model](how-to-custom-speech-deploy-model.md?pivots=rest-api) for examples of how to manage deployment endpoints. This table includes all the operations that you can perform on endpoints.
See [Deploy a model](how-to-custom-speech-deploy-model.md?pivots=rest-api) for e
## Evaluations
-Evaluations are applicable for [Custom Speech](custom-speech-overview.md). You can use evaluations to compare the performance of different models. For example, you can compare the performance of a model trained with a specific dataset to the performance of a model trained with a different dataset.
+Evaluations are applicable for [custom speech](custom-speech-overview.md). You can use evaluations to compare the performance of different models. For example, you can compare the performance of a model trained with a specific dataset to the performance of a model trained with a different dataset.
-See [Test recognition quality](how-to-custom-speech-inspect-data.md?pivots=rest-api) and [Test accuracy](how-to-custom-speech-evaluate-data.md?pivots=rest-api) for examples of how to test and evaluate Custom Speech models. This table includes all the operations that you can perform on evaluations.
+See [Test recognition quality](how-to-custom-speech-inspect-data.md?pivots=rest-api) and [Test accuracy](how-to-custom-speech-evaluate-data.md?pivots=rest-api) for examples of how to test and evaluate custom speech models. This table includes all the operations that you can perform on evaluations.
|Path|Method|Version 3.1|Version 3.0| |||||
Health status provides insights about the overall health of the service and sub-
## Models
-Models are applicable for [Custom Speech](custom-speech-overview.md) and [Batch Transcription](batch-transcription.md). You can use models to transcribe audio files. For example, you can use a model trained with a specific dataset to transcribe audio files.
+Models are applicable for [custom speech](custom-speech-overview.md) and [Batch Transcription](batch-transcription.md). You can use models to transcribe audio files. For example, you can use a model trained with a specific dataset to transcribe audio files.
-See [Train a model](how-to-custom-speech-train-model.md?pivots=rest-api) and [Custom Speech model lifecycle](how-to-custom-speech-model-and-endpoint-lifecycle.md?pivots=rest-api) for examples of how to train and manage Custom Speech models. This table includes all the operations that you can perform on models.
+See [Train a model](how-to-custom-speech-train-model.md?pivots=rest-api) and [custom speech model lifecycle](how-to-custom-speech-model-and-endpoint-lifecycle.md?pivots=rest-api) for examples of how to train and manage custom speech models. This table includes all the operations that you can perform on models.
|Path|Method|Version 3.1|Version 3.0| |||||
See [Train a model](how-to-custom-speech-train-model.md?pivots=rest-api) and [Cu
## Projects
-Projects are applicable for [Custom Speech](custom-speech-overview.md). Custom Speech projects contain models, training and testing datasets, and deployment endpoints. Each project is specific to a [locale](language-support.md?tabs=stt). For example, you might create a project for English in the United States.
+Projects are applicable for [custom speech](custom-speech-overview.md). Custom speech projects contain models, training and testing datasets, and deployment endpoints. Each project is specific to a [locale](language-support.md?tabs=stt). For example, you might create a project for English in the United States.
See [Create a project](how-to-custom-speech-create-project.md?pivots=rest-api) for examples of how to create projects. This table includes all the operations that you can perform on projects.
See [Create a transcription](batch-transcription-create.md?pivots=rest-api) for
## Web hooks
-Web hooks are applicable for [Custom Speech](custom-speech-overview.md) and [Batch Transcription](batch-transcription.md). In particular, web hooks apply to [datasets](#datasets), [endpoints](#endpoints), [evaluations](#evaluations), [models](#models), and [transcriptions](#transcriptions). Web hooks can be used to receive notifications about creation, processing, completion, and deletion events.
+Web hooks are applicable for [custom speech](custom-speech-overview.md) and [Batch Transcription](batch-transcription.md). In particular, web hooks apply to [datasets](#datasets), [endpoints](#endpoints), [evaluations](#evaluations), [models](#models), and [transcriptions](#transcriptions). Web hooks can be used to receive notifications about creation, processing, completion, and deletion events.
This table includes all the web hook operations that are available with the Speech to text REST API.
This table includes all the web hook operations that are available with the Spee
## Next steps -- [Create a Custom Speech project](how-to-custom-speech-create-project.md)
+- [Create a custom speech project](how-to-custom-speech-create-project.md)
- [Get familiar with batch transcription](batch-transcription.md)
ai-services Role Based Access Control https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/role-based-access-control.md
# Role-based access control for Speech resources
-You can manage access and permissions to your Speech resources with Azure role-based access control (Azure RBAC). Assigned roles can vary across Speech resources. For example, you can assign a role to a Speech resource that should only be used to train a Custom Speech model. You can assign another role to a Speech resource that is used to transcribe audio files. Depending on who can access each Speech resource, you can effectively set a different level of access per application or user. For more information on Azure RBAC, see the [Azure RBAC documentation](../../role-based-access-control/overview.md).
+You can manage access and permissions to your Speech resources with Azure role-based access control (Azure RBAC). Assigned roles can vary across Speech resources. For example, you can assign a role to a Speech resource that should only be used to train a custom speech model. You can assign another role to a Speech resource that is used to transcribe audio files. Depending on who can access each Speech resource, you can effectively set a different level of access per application or user. For more information on Azure RBAC, see the [Azure RBAC documentation](../../role-based-access-control/overview.md).
> [!NOTE] > A Speech resource can inherit or be assigned multiple roles. The final level of access to this resource is a combination of all roles permissions from the operation level.
A role definition is a collection of permissions. When you create a Speech resou
Keep the built-in roles if your Speech resource can have full read and write access to the projects.
-For finer-grained resource access control, you can [add or remove roles](../../role-based-access-control/role-assignments-portal.md?tabs=current) using the Azure portal. For example, you could create a custom role with permission to upload Custom Speech datasets, but without permission to deploy a Custom Speech model to an endpoint.
+For finer-grained resource access control, you can [add or remove roles](../../role-based-access-control/role-assignments-portal.md?tabs=current) using the Azure portal. For example, you could create a custom role with permission to upload custom speech datasets, but without permission to deploy a custom speech model to an endpoint.
## Authentication with keys and tokens
For the SDK, you configure whether to authenticate with a Speech resource key or
Once you're signed into [Speech Studio](speech-studio-overview.md), you select a subscription and Speech resource. You don't choose whether to authenticate with a Speech resource key or Microsoft Entra token. Speech Studio gets the key or token automatically from the Speech resource. If one of the assigned [roles](#roles-for-speech-resources) has permission to list resource keys, Speech Studio will authenticate with the key. Otherwise, Speech Studio will authenticate with the Microsoft Entra token.
-If Speech Studio uses your Microsoft Entra token, but the Speech resource doesn't have a custom subdomain and private endpoint, then you can't use some features in Speech Studio. In this case, for example, the Speech resource can be used to train a Custom Speech model, but you can't use a Custom Speech model to transcribe audio files.
+If Speech Studio uses your Microsoft Entra token, but the Speech resource doesn't have a custom subdomain and private endpoint, then you can't use some features in Speech Studio. In this case, for example, the Speech resource can be used to train a custom speech model, but you can't use a custom speech model to transcribe audio files.
| Authentication credential | Feature availability | | | | |Speech resource key|Full access limited only by the assigned role permissions.| |Microsoft Entra token with custom subdomain and private endpoint|Full access limited only by the assigned role permissions.|
-|Microsoft Entra token without custom subdomain and private endpoint (not recommended)|Features are limited. For example, the Speech resource can be used to train a Custom Speech model or custom neural voice. But you can't use a Custom Speech model or custom neural voice.|
+|Microsoft Entra token without custom subdomain and private endpoint (not recommended)|Features are limited. For example, the Speech resource can be used to train a custom speech model or custom neural voice. But you can't use a custom speech model or custom neural voice.|
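As a sketch of how the two credential types map to the Speech SDK, the following C# fragment shows both options. The key, region, and token value are assumed placeholders, and building the Microsoft Entra authorization token (custom subdomain, token acquisition, and formatting) isn't shown here.

```csharp
using Microsoft.CognitiveServices.Speech;

class SpeechAuthSample
{
    static void Main()
    {
        // Option 1: Speech resource key. Access is limited only by the assigned role permissions.
        var keyConfig = SpeechConfig.FromSubscription("YourSpeechKey", "YourSpeechRegion");

        // Option 2: authorization token built from a Microsoft Entra token (placeholder value).
        // The resource needs a custom subdomain; see the Microsoft Entra authentication guidance.
        var authorizationToken = "<authorization-token-for-your-speech-resource>";
        var tokenConfig = SpeechConfig.FromAuthorizationToken(authorizationToken, "YourSpeechRegion");
    }
}
```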
## Next steps
ai-services Speech Container Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/speech-container-configuration.md
The exact syntax of the host mount location varies depending on the host operati
The custom speech containers use [volume mounts](https://docs.docker.com/storage/volumes/) to persist custom models. You can specify a volume mount by adding the `-v` (or `--volume`) option to the [docker run](https://docs.docker.com/engine/reference/commandline/run/) command. > [!NOTE]
-> The volume mount settings are only applicable for [Custom Speech to text](speech-container-cstt.md) containers.
+> The volume mount settings are only applicable for [custom speech to text](speech-container-cstt.md) containers.
Custom models are downloaded the first time that a new model is ingested as part of the custom speech container docker run command. Sequential runs of the same `ModelId` for a custom speech container will use the previously downloaded model. If the volume mount is not provided, custom models cannot be persisted.
ai-services Speech Container Cstt https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/speech-container-cstt.md
keywords: on-premises, Docker, container
# Custom speech to text containers with Docker
-The Custom speech to text container transcribes real-time speech or batch audio recordings with intermediate results. You can use a custom model that you created in the [Custom Speech portal](https://speech.microsoft.com/customspeech). In this article, you'll learn how to download, install, and run a Custom speech to text container.
+The custom speech to text container transcribes real-time speech or batch audio recordings with intermediate results. You can use a custom model that you created in the [custom speech portal](https://speech.microsoft.com/customspeech). In this article, you'll learn how to download, install, and run a custom speech to text container.
For more information about prerequisites, validating that a container is running, running multiple containers on the same host, and running disconnected containers, see [Install and run Speech containers with Docker](speech-container-howto.md). ## Container images
-The Custom speech to text container image for all supported versions and locales can be found on the [Microsoft Container Registry (MCR)](https://mcr.microsoft.com/product/azure-cognitive-services/speechservices/custom-speech-to-text/tags) syndicate. It resides within the `azure-cognitive-services/speechservices/` repository and is named `custom-speech-to-text`.
+The custom speech to text container image for all supported versions and locales can be found on the [Microsoft Container Registry (MCR)](https://mcr.microsoft.com/product/azure-cognitive-services/speechservices/custom-speech-to-text/tags) syndicate. It resides within the `azure-cognitive-services/speechservices/` repository and is named `custom-speech-to-text`.
:::image type="content" source="./media/containers/mcr-tags-custom-speech-to-text.png" alt-text="A screenshot of the search connectors and triggers dialog." lightbox="./media/containers/mcr-tags-custom-speech-to-text.png":::
Before you can [run](#run-the-container-with-docker-run) the container, you need
# [Custom model ID](#tab/custom-model)
-The custom model has to have been [trained](how-to-custom-speech-train-model.md) by using the [Speech Studio](https://aka.ms/speechstudio/customspeech). For information about how to get the model ID, see [Custom Speech model lifecycle](how-to-custom-speech-model-and-endpoint-lifecycle.md).
+The custom model must have been [trained](how-to-custom-speech-train-model.md) by using the [Speech Studio](https://aka.ms/speechstudio/customspeech). For information about how to get the model ID, see [custom speech model lifecycle](how-to-custom-speech-model-and-endpoint-lifecycle.md).
-![Screenshot that shows the Custom Speech training page.](media/custom-speech/custom-speech-model-training.png)
+![Screenshot that shows the custom speech training page.](media/custom-speech/custom-speech-model-training.png)
Obtain the **Model ID** to use as the argument to the `ModelId` parameter of the `docker run` command.
-![Screenshot that shows Custom Speech model details.](media/custom-speech/custom-speech-model-details.png)
+![Screenshot that shows custom speech model details.](media/custom-speech/custom-speech-model-details.png)
# [Base model ID](#tab/base-model)
Mounts:License={CONTAINER_LICENSE_DIRECTORY}
Mounts:Output={CONTAINER_OUTPUT_DIRECTORY} ```
-The Custom Speech to text container provides a default directory for writing the license file and billing log at runtime. The default directories are /license and /output respectively.
+The custom speech to text container provides a default directory for writing the license file and billing log at runtime. The default directories are /license and /output respectively.
When you mount these directories to the container with the `docker run -v` command, make sure the ownership of the local machine directory is set to `user:group nonroot:nonroot` before running the container.
ai-services Speech Container Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/speech-container-overview.md
The following table lists the Speech containers available in the Microsoft Conta
| Container | Features | Supported versions and locales | |--|--|--| | [Speech to text](speech-container-stt.md) | Transcribes continuous real-time speech or batch audio recordings with intermediate results. | Latest: 4.3.0<br/><br/>For all supported versions and locales, see the [Microsoft Container Registry (MCR)](https://mcr.microsoft.com/product/azure-cognitive-services/speechservices/speech-to-text/tags) and [JSON tags](https://mcr.microsoft.com/v2/azure-cognitive-services/speechservices/speech-to-text/tags/list).|
-| [Custom speech to text](speech-container-cstt.md) | Using a custom model from the [Custom Speech portal](https://speech.microsoft.com/customspeech), transcribes continuous real-time speech or batch audio recordings into text with intermediate results. | Latest: 4.3.0<br/><br/>For all supported versions and locales, see the [Microsoft Container Registry (MCR)](https://mcr.microsoft.com/product/azure-cognitive-services/speechservices/custom-speech-to-text/tags) and [JSON tags](https://mcr.microsoft.com/v2/azure-cognitive-services/speechservices/speech-to-text/tags/list). |
+| [Custom speech to text](speech-container-cstt.md) | Using a custom model from the [custom speech portal](https://speech.microsoft.com/customspeech), transcribes continuous real-time speech or batch audio recordings into text with intermediate results. | Latest: 4.3.0<br/><br/>For all supported versions and locales, see the [Microsoft Container Registry (MCR)](https://mcr.microsoft.com/product/azure-cognitive-services/speechservices/custom-speech-to-text/tags) and [JSON tags](https://mcr.microsoft.com/v2/azure-cognitive-services/speechservices/speech-to-text/tags/list). |
| [Speech language identification](speech-container-lid.md)<sup>1, 2</sup> | Detects the language spoken in audio files. | Latest: 1.12.0<br/><br/>For all supported versions and locales, see the [Microsoft Container Registry (MCR)](https://mcr.microsoft.com/product/azure-cognitive-services/speechservices/language-detection/tags) and [JSON tags](https://mcr.microsoft.com/v2/azure-cognitive-services/speechservices/language-detection/tags/list). | | [Neural text to speech](speech-container-ntts.md) | Converts text to natural-sounding speech by using deep neural network technology, which allows for more natural synthesized speech. | Latest: 2.17.0<br/><br/>For all supported versions and locales, see the [Microsoft Container Registry (MCR)](https://mcr.microsoft.com/product/azure-cognitive-services/speechservices/neural-text-to-speech/tags) and [JSON tags](https://mcr.microsoft.com/v2/azure-cognitive-services/speechservices/neural-text-to-speech/tags/list). |
ai-services Speech Services Private Link https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/speech-services-private-link.md
Speech service has REST APIs for [Speech to text](rest-speech-to-text.md) and [T
Speech to text has two REST APIs. Each API serves a different purpose, uses different endpoints, and requires a different approach when you're using it in the private-endpoint-enabled scenario. The Speech to text REST APIs are:-- [Speech to text REST API](rest-speech-to-text.md), which is used for [Batch transcription](batch-transcription.md) and [Custom Speech](custom-speech-overview.md).
+- [Speech to text REST API](rest-speech-to-text.md), which is used for [Batch transcription](batch-transcription.md) and [custom speech](custom-speech-overview.md).
- [Speech to text REST API for short audio](rest-speech-to-text-short.md), which is used for real-time speech to text. Usage of the Speech to text REST API for short audio and the Text to speech REST API in the private endpoint scenario is the same. It's equivalent to the [Speech SDK case](#speech-resource-with-a-custom-domain-name-and-a-private-endpoint-usage-with-the-speech-sdk) described later in this article.
ai-services Speech Services Quotas And Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/speech-services-quotas-and-limits.md
You can use real-time speech to text with the [Speech SDK](speech-sdk.md) or the
#### Model customization
-The limits in this table apply per Speech resource when you create a Custom Speech model.
+The limits in this table apply per Speech resource when you create a custom speech model.
| Quota | Free (F0) | Standard (S0) | |--|--|--|
How to get information for the base model:
How to get information for the custom model: 1. Go to the [Speech Studio](https://aka.ms/speechstudio/customspeech) portal.
-1. Sign in if necessary, and go to **Custom Speech**.
+1. Sign in if necessary, and go to **Custom speech**.
1. Select your project, and go to **Deployment**. 1. Select the required endpoint. 1. Copy and save the values of the following fields:
ai-services Speech Studio Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/speech-studio-overview.md
In Speech Studio, the following Speech service features are available as project
* [Batch speech to text](https://aka.ms/speechstudio/batchspeechtotext): Quickly test batch transcription capabilities to transcribe a large amount of audio in storage and receive results asynchronously. To learn more about batch speech to text, see [Batch speech to text overview](batch-transcription.md).
-* [Custom Speech](https://aka.ms/speechstudio/customspeech): Create speech recognition models that are tailored to specific vocabulary sets and styles of speaking. In contrast to the base speech recognition model, Custom Speech models become part of your unique competitive advantage because they're not publicly accessible. To get started with uploading sample audio to create a Custom Speech model, see [Upload training and testing datasets](how-to-custom-speech-upload-data.md).
+* [Custom speech](https://aka.ms/speechstudio/customspeech): Create speech recognition models that are tailored to specific vocabulary sets and styles of speaking. In contrast to the base speech recognition model, custom speech models become part of your unique competitive advantage because they're not publicly accessible. To get started with uploading sample audio to create a custom speech model, see [Upload training and testing datasets](how-to-custom-speech-upload-data.md).
* [Pronunciation assessment](https://aka.ms/speechstudio/pronunciationassessment): Evaluate speech pronunciation and give speakers feedback on the accuracy and fluency of spoken audio. Speech Studio provides a sandbox for testing this feature quickly, without code. To use the feature with the Speech SDK in your applications, see the [Pronunciation assessment](how-to-pronunciation-assessment.md) article.
ai-services Speech To Text https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/speech-to-text.md
Batch transcription is available via:
spx help batch transcription ```
-## Custom Speech
+## Custom speech
-With [Custom Speech](./custom-speech-overview.md), you can evaluate and improve the accuracy of speech recognition for your applications and products. A custom speech model can be used for [real-time speech to text](speech-to-text.md), [speech translation](speech-translation.md), and [batch transcription](batch-transcription.md).
+With [custom speech](./custom-speech-overview.md), you can evaluate and improve the accuracy of speech recognition for your applications and products. A custom speech model can be used for [real-time speech to text](speech-to-text.md), [speech translation](speech-translation.md), and [batch transcription](batch-transcription.md).
> [!TIP]
-> A [hosted deployment endpoint](how-to-custom-speech-deploy-model.md) isn't required to use Custom Speech with the [Batch transcription API](batch-transcription.md). You can conserve resources if the [custom speech model](how-to-custom-speech-train-model.md) is only used for batch transcription. For more information, see [Speech service pricing](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/).
+> A [hosted deployment endpoint](how-to-custom-speech-deploy-model.md) isn't required to use custom speech with the [Batch transcription API](batch-transcription.md). You can conserve resources if the [custom speech model](how-to-custom-speech-train-model.md) is only used for batch transcription. For more information, see [Speech service pricing](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/).
Out of the box, speech recognition utilizes a Universal Language Model as a base model that is trained with Microsoft-owned data and reflects commonly used spoken language. The base model is pre-trained with dialects and phonetics representing a variety of common domains. When you make a speech recognition request, the most recent base model for each [supported language](language-support.md?tabs=stt) is used by default. The base model works very well in most speech recognition scenarios.
-A custom model can be used to augment the base model to improve recognition of domain-specific vocabulary specific to the application by providing text data to train the model. It can also be used to improve recognition based for the specific audio conditions of the application by providing audio data with reference transcriptions. For more information, see [Custom Speech](./custom-speech-overview.md) and [Speech to text REST API](rest-speech-to-text.md).
+A custom model can be used to augment the base model to improve recognition of domain-specific vocabulary by providing text data to train the model. It can also be used to improve recognition for the specific audio conditions of the application by providing audio data with reference transcriptions. For more information, see [custom speech](./custom-speech-overview.md) and [Speech to text REST API](rest-speech-to-text.md).
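Once a custom model is deployed, the Speech SDK can target it by setting the endpoint ID on the speech configuration. This is a minimal C# sketch; the key, region, and endpoint ID are assumed placeholder values.

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.CognitiveServices.Speech;
using Microsoft.CognitiveServices.Speech.Audio;

class CustomModelSample
{
    static async Task Main()
    {
        // Assumed placeholders: replace with your Speech resource key, region, and deployed endpoint ID.
        var speechConfig = SpeechConfig.FromSubscription("YourSpeechKey", "YourSpeechRegion");
        speechConfig.EndpointId = "YourCustomEndpointId";

        using var audioConfig = AudioConfig.FromDefaultMicrophoneInput();
        using var recognizer = new SpeechRecognizer(speechConfig, audioConfig);

        var result = await recognizer.RecognizeOnceAsync();
        Console.WriteLine(result.Text);
    }
}
```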
Customization options vary by language or locale. To verify support, see [Language and voice support for the Speech service](./language-support.md?tabs=stt).
ai-services Swagger Documentation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/swagger-documentation.md
Last updated 10/03/2022
# Generate a REST API client library for the Speech to text REST API
-Speech service offers a Swagger specification to interact with a handful of REST APIs used to import data, create models, test model accuracy, create custom endpoints, queue up batch transcriptions, and manage subscriptions. Most operations available through the [Custom Speech area of the Speech Studio](https://aka.ms/speechstudio/customspeech) can be completed programmatically using these APIs.
+Speech service offers a Swagger specification to interact with a handful of REST APIs used to import data, create models, test model accuracy, create custom endpoints, queue up batch transcriptions, and manage subscriptions. Most operations available through the [custom speech area of the Speech Studio](https://aka.ms/speechstudio/customspeech) can be completed programmatically using these APIs.
> [!NOTE] > Speech service has several REST APIs for [Speech to text](rest-speech-to-text.md) and [Text to speech](rest-text-to-speech.md).
ai-services Voice Assistants https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/voice-assistants.md
Sample code for creating a voice assistant is available on GitHub. The samples c
Voice assistants that you build by using Speech service can use a full range of customization options.
-* [Custom Speech](./custom-speech-overview.md)
+* [Custom speech](./custom-speech-overview.md)
* [Custom voice](professional-voice-create-project.md) * [Custom Keyword](keyword-recognition-overview.md)
aks Vertical Pod Autoscaler https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/vertical-pod-autoscaler.md
In this section, you deploy, upgrade, or disable the Vertical Pod Autoscaler on
After a few minutes, the command completes and returns JSON-formatted information about the cluster.
-2. Optionally, to enable VPA on an existing cluster, use the `--enable-vpa` with the [az aks upgrade][az-aks-upgrade] command.
+2. Optionally, to enable VPA on an existing cluster, use the `--enable-vpa` parameter with the [az aks update](https://learn.microsoft.com/en-us/cli/azure/aks?view=azure-cli-latest#az-aks-update) command.
```azurecli-interactive
- az aks update -n myAKSCluster -g myResourceGroup --enable-vpa
+ az aks update -n myAKSCluster -g myResourceGroup --enable-vpa
``` After a few minutes, the command completes and returns JSON-formatted information about the cluster.
-3. Optionally, to disable VPA on an existing cluster, use the `--disable-vpa` with the [az aks upgrade][az-aks-upgrade] command.
+3. Optionally, to disable VPA on an existing cluster, use the `--disable-vpa` parameter with the [az aks update](https://learn.microsoft.com/en-us/cli/azure/aks?view=azure-cli-latest#az-aks-update) command.
```azurecli-interactive az aks update -n myAKSCluster -g myResourceGroup --disable-vpa
api-center Set Up Api Center https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-center/set-up-api-center.md
If you haven't already, you need to register the **Microsoft.ApiCenter** resourc
1. Select an existing resource group, or select **New** to create a new one.
- 1. Enter a **Name** for your API center. It must be unique in your subscription.
+ 1. Enter a **Name** for your API center. It must be unique in the region where you're creating your API center.
1. In **Region**, select one of the [available regions](overview.md#available-regions) for API Center preview.
azure-app-configuration Enable Dynamic Configuration Aspnet Core https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/enable-dynamic-configuration-aspnet-core.md
Title: "Tutorial: Use dynamic configuration in an ASP.NET Core app" description: In this tutorial, you learn how to dynamically update the configuration data for ASP.NET Core apps.- ms.devlang: csharp
azure-cache-for-redis Cache Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-whats-new.md
For more information, see [Remove TLS 1.0 and 1.1 from use with Azure Cache for
## June 2023
-Azure Active Directory for authentication and role-based access control is available across regions that support Azure Cache for Redis.
+Microsoft Entra ID for authentication and role-based access control is available across regions that support Azure Cache for Redis.
## May 2023
-### Azure Active Directory-based authentication and authorization (preview)
+### Microsoft Entra ID authentication and authorization (preview)
-Azure Active Directory (Azure AD) based [authentication and authorization](cache-azure-active-directory-for-authentication.md) is now available for public preview with Azure Cache for Redis. With this Azure AD integration, users can connect to their cache instance without an access key and use [role-based access control](cache-configure-role-based-access-control.md) to connect to their cache instance.
+Microsoft Entra ID-based [authentication and authorization](cache-azure-active-directory-for-authentication.md) is now available for public preview with Azure Cache for Redis. With this Microsoft Entra ID integration, users can connect to their cache instance without an access key and use [role-based access control](cache-configure-role-based-access-control.md) to manage access.
This feature is available for Azure Cache for Redis Basic, Standard, and Premium SKUs. With this update, customers can look forward to increased security and a simplified authentication process when using Azure Cache for Redis.
Active geo-replication is a powerful tool that enables Azure Cache for Redis clu
### Support for managed identity in Azure Cache for Redis in storage
-Azure Cache for Redis now supports authenticating storage account connections using managed identity. Identity is established through Azure Active Directory, and both system-assigned and user-assigned identities are supported. Support for managed identity further allows the service to establish trusted access to storage for uses including data persistence and importing/exporting cache data.
+Azure Cache for Redis now supports authenticating storage account connections using managed identity. Identity is established through Microsoft Entra ID, and both system-assigned and user-assigned identities are supported. Support for managed identity further allows the service to establish trusted access to storage for uses including data persistence and importing/exporting cache data.
For more information, see [Managed identity with Azure Cache for Redis](cache-managed-identity.md).
azure-functions Event Driven Scaling https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/event-driven-scaling.md
$resource | Set-AzResource -Force
## Scale-in behaviors
-Event-driven scaling automatically reduces capacity when demand for your functions is reduced. It does this by shutting down worker instances of your function app. Before an instance is shut down, new events stop being sent to the instance. Also, functions that are currently executing are given time to finish executing. This behavior is logged as drain mode. This shut-down period can extend up to 10 minutes for Consumption plan apps and up to 60 minutes for Premium plan apps. Event-driven scaling and this behavior don't apply to Dedicated plan apps.
+Event-driven scaling automatically reduces capacity when demand for your functions is reduced. It does this by draining instances of their current function executions and then removing those instances. This behavior is logged as drain mode. The grace period for functions that are currently executing can extend up to 10 minutes for Consumption plan apps and up to 60 minutes for Premium plan apps. Event-driven scaling and this behavior don't apply to Dedicated plan apps.
The following considerations apply for scale-in behaviors:
azure-monitor Alerts Processing Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-processing-rules.md
Severity | The rule applies only to alerts with the selected severities. |
* If you define multiple filters in a rule, all the rules apply. There's a logical AND between all filters. For example, if you set both `resource type = "Virtual Machines"` and `severity = "Sev0"`, then the rule applies only for `Sev0` alerts on virtual machines in the scope. * Each filter can include up to five values. There's a logical OR between the values.
- For example, if you set `description contains "this, that" (in the field there is no need to write the apostrophes), then the rule applies only to alerts whose description contains either `this` or `that`.
+    For example, if you set `description contains "this, that"` (you don't need to type the quotation marks in the field), then the rule applies only to alerts whose description contains either `this` or `that`.
+* Note that any spaces (before, after, or within) in the matched string affect the matching of the filter; see the sketch after this list.
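To make the combination rules concrete, the following plain-PowerShell sketch (hypothetical alert values, not an Azure Monitor API call) shows how multiple filters combine with a logical AND while multiple values within one filter combine with a logical OR:

```powershell
# Illustrative only: hypothetical alert properties used to show filter semantics.
$alert = @{ ResourceType = 'Virtual Machines'; Severity = 'Sev0'; Description = 'either this or something else' }

# A single filter with several values matches if any value matches (OR between values).
$resourceTypeMatch = $alert.ResourceType -eq 'Virtual Machines'
$severityMatch     = $alert.Severity -eq 'Sev0'
$descriptionMatch  = ($alert.Description -like '*this*') -or ($alert.Description -like '*that*')

# The rule applies only when every defined filter matches (AND between filters).
$ruleApplies = $resourceTypeMatch -and $severityMatch -and $descriptionMatch
$ruleApplies   # True for the values above
```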
### What should this rule do?
azure-monitor Autoscale Flapping https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/autoscale/autoscale-flapping.md
Previously updated : 09/13/2022 Last updated : 01/21/2024 #Customer intent: As a cloud administrator, I want to understand flapping so that I can configure autoscale correctly.
To learn more about autoscale, see the following resources:
* [Overview of common autoscale patterns](./autoscale-common-scale-patterns.md) * [Automatically scale a virtual machine scale set](../../virtual-machine-scale-sets/tutorial-autoscale-powershell.md)
-* [Use autoscale actions to send email and webhook alert notifications](./autoscale-webhook-email.md)
+* [Use autoscale actions to send email and webhook alert notifications](./autoscale-webhook-email.md)
azure-monitor Data Collection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/data-collection.md
Title: Data collection in Azure Monitor description: Monitoring data collected by Azure Monitor is separated into metrics that are lightweight and capable of supporting near real-time scenarios and logs that are used for advanced analysis. Previously updated : 07/10/2022 Last updated : 11/01/2023 # Data collection in Azure Monitor
azure-monitor Diagnostic Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/diagnostic-settings.md
This section discusses requirements and limitations.
### Time before telemetry gets to destination
-After you set up a diagnostic setting, data should start flowing to your selected destination(s) within 90 minutes. If you get no information within 24 hours, then you might be experiencing one of the following issues:
+After you set up a diagnostic setting, data should start flowing to your selected destination(s) within 90 minutes. When sending logs to a Log Analytics workspace, the table will be created automatically if it doesn't already exist. The table is only created when the first log records are received. If you get no information within 24 hours, then you might be experiencing one of the following issues:
- No logs are being generated. - Something is wrong in the underlying routing mechanism.
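One way to verify whether records have started arriving (a sketch only; the workspace ID is a placeholder, and the `AzureDiagnostics` table name is an assumption that depends on the resource's collection mode) is to query the Log Analytics workspace for recent rows:

```powershell
# Sketch: check whether diagnostic logs reached a Log Analytics workspace in the last 24 hours.
# The workspace ID is a placeholder; the table queried (AzureDiagnostics) is an assumption.
$workspaceId = "00000000-0000-0000-0000-000000000000"
$query = "AzureDiagnostics | where TimeGenerated > ago(24h) | summarize RecordCount = count()"
(Invoke-AzOperationalInsightsQuery -WorkspaceId $workspaceId -Query $query).Results
```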
azure-monitor Metrics Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/metrics-troubleshoot.md
Previously updated : 09/27/2022 Last updated : 01/21/2024 # Troubleshooting metrics charts
azure-monitor Vminsights Dependency Agent Maintenance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/vminsights-dependency-agent-maintenance.md
Since the Dependency agent works at the kernel level, support is also dependent
| Distribution | OS version | Kernel version | |:|:|:|
-| Red Hat Linux 8 | 8.5 | 4.18.0-348.\*el8_5.x86_644.18.0-348.\*el8.x86_64 |
+| Red Hat Linux 8 | 8.6 | 4.18.0-372.\*el8.x86_64, 4.18.0-372.\*el8_6.x86_64 |
+| | 8.5 | 4.18.0-348.\*el8_5.x86_64, 4.18.0-348.\*el8.x86_64 |
| | 8.4 | 4.18.0-305.\*el8.x86_64, 4.18.0-305.\*el8_4.x86_64 |
-| | 8.3 | 4.18.0-240.\*el8_3.x86_64 |
+| | 8.3 | 4.18.0-240.\*el8_3.x86_64 |
| | 8.2 | 4.18.0-193.\*el8_2.x86_64 | | | 8.1 | 4.18.0-147.\*el8_1.x86_64 | | | 8.0 | 4.18.0-80.\*el8.x86_64<br>4.18.0-80.\*el8_0.x86_64 |
Since the Dependency agent works at the kernel level, support is also dependent
| | 7.4 | 3.10.0-693 | | Red Hat Linux 6 | 6.10 | 2.6.32-754 | | | 6.9 | 2.6.32-696 |
-| CentOS Linux 8 | 8.5 | 4.18.0-348.\*el8_5.x86_644.18.0-348.\*el8.x86_64 |
+| CentOS Linux 8 | 8.6 | 4.18.0-372.\*el8.x86_64, 4.18.0-372.\*el8_6.x86_64 |
+| | 8.5 | 4.18.0-348.\*el8_5.x86_64, 4.18.0-348.\*el8.x86_64 |
| | 8.4 | 4.18.0-305.\*el8.x86_64, 4.18.0-305.\*el8_4.x86_64 | | | 8.3 | 4.18.0-240.\*el8_3.x86_64 | | | 8.2 | 4.18.0-193.\*el8_2.x86_64 |
azure-netapp-files Nfs Access Control Lists https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/nfs-access-control-lists.md
Inherit flags are a way to more easily manage your NFSv4.x ACLs, sparing you fro
### Administrative flags
-Administrative flags in NFSv4.x ACLs are special flags that are used only with Audit and Alarm ACL types. These flags define either success or failure access attempts for actions to be performed. For instance, if it's desired to audit failed access attempts to a specific file, then an administrative flag of "F" can be used to control that behavior.
+Administrative flags in NFSv4.x ACLs are special flags that are used only with Audit and Alarm ACL types. These flags define either success (`S`) or failure (`F`) access attempts for actions to be performed.
-This Audit ACL is an example of that, where `user1` is audited for failed access attempts for any permission level: `U:F:user1@contoso.com:rwatTnNcCy`.
-
-Azure NetApp Files only supports setting administrative flags for Audit ACEs. File access logging isn't currently supported. Alarm ACEs aren't supported in Azure NetApp Files.
+Azure NetApp Files supports _setting_ administrative flags for Audit ACEs; however, the ACEs don't function. In addition, Alarm ACEs aren't supported in Azure NetApp Files.
## NFSv4.x user and group principals
azure-netapp-files Solutions Windows Virtual Desktop https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/solutions-windows-virtual-desktop.md
Title: Using Azure Virtual Desktop with Azure NetApp Files | Microsoft Docs description: Provides best practice guidance and sample blueprints on deploying Azure Virtual Desktop with Azure NetApp Files. -- Last updated 08/13/2020
azure-netapp-files Storage Service Add Ons https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/storage-service-add-ons.md
Title: Storage service add-ons for Azure NetApp Files | Microsoft Docs description: Describes the services provided through the storage service add-ons for Azure NetApp Files. -- Last updated 06/15/2021
azure-netapp-files Terraform Manage Volume https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/terraform-manage-volume.md
Title: Update Terraform-managed Azure resource description: Learn how to safely update Terraform-managed Azure resources to ensure the safety of your data. -- Last updated 12/20/2023
azure-netapp-files Test Disaster Recovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/test-disaster-recovery.md
Title: Test disaster recovery for Azure NetApp Files | Microsoft Docs description: Enhance your disaster recovery preparedness with this test plan for cross-region replication. -- Last updated 09/26/2023
azure-netapp-files Tools Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/tools-reference.md
Title: Azure NetApp Files tools description: Learn about the tools available to you to maximize your experience and savings with Azure NetApp Files. -- Last updated 01/12/2023
azure-netapp-files Troubleshoot Application Volume Groups https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/troubleshoot-application-volume-groups.md
Title: Troubleshoot application volume group errors for Azure NetApp Files | Microsoft Docs description: Describes error or warning conditions and their resolutions for application volume groups for Azure NetApp Files. -- Last updated 11/19/2021
azure-netapp-files Troubleshoot Capacity Pools https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/troubleshoot-capacity-pools.md
Title: Troubleshoot capacity pool errors for Azure NetApp Files | Microsoft Docs description: Describes potential issues you might have when managing capacity pools and provides solutions for the issues. -- Last updated 04/18/2022
azure-netapp-files Troubleshoot Cross Region Replication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/troubleshoot-cross-region-replication.md
Title: Troubleshoot cross-region replication errors for Azure NetApp Files | Microsoft Docs description: Describes error messages and resolutions that can help you troubleshoot cross-region replication issues for Azure NetApp Files. -- Last updated 08/02/2022
azure-netapp-files Troubleshoot Diagnose Solve Problems https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/troubleshoot-diagnose-solve-problems.md
Title: Troubleshoot Azure NetApp Files using diagnose and solve problems tool description: Describes how to use the Azure diagnose and solve problems tool to troubleshoot issues of Azure NetApp Files. -- Last updated 10/15/2023
azure-netapp-files Troubleshoot File Locks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/troubleshoot-file-locks.md
Title: Troubleshoot file locks for an Azure NetApp Files volume | Microsoft Docs description: This article explains how to break file locks in an Azure NetApp Files volume. -- Last updated 05/03/2023
azure-netapp-files Troubleshoot Snapshot Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/troubleshoot-snapshot-policies.md
Title: Troubleshoot snapshot policy errors for Azure NetApp Files | Microsoft Docs description: Describes error messages and resolutions that can help you troubleshoot snapshot policy management issues for Azure NetApp Files. -- Last updated 09/23/2020
azure-netapp-files Troubleshoot User Access Ldap https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/troubleshoot-user-access-ldap.md
Title: Troubleshoot user access on LDAP volumes | Microsoft Docs description: Describes the steps for troubleshooting user access on LDAP-enabled volumes. -- Last updated 09/06/2023
azure-netapp-files Troubleshoot Volumes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/troubleshoot-volumes.md
Title: Troubleshoot volume errors for Azure NetApp Files | Microsoft Docs description: Describes error messages and resolutions that can help you troubleshoot Azure NetApp Files volumes. -- Last updated 02/21/2023
azure-netapp-files Understand File Locks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/understand-file-locks.md
Title: Understand file locking and lock types in Azure NetApp Files | Microsoft Docs description: Understand the concept of file locking and the different types of NFS locks. -- Last updated 06/12/2023
azure-netapp-files Understand Guidelines Active Directory Domain Service Site https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/understand-guidelines-active-directory-domain-service-site.md
Title: Understand guidelines for Active Directory Domain Services site design and planning description: Proper Active Directory Domain Services (AD DS) design and planning are key to solution architectures that use Azure NetApp Files volumes. -- Last updated 02/21/2023
azure-netapp-files Use Availability Zones https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/use-availability-zones.md
Title: Use availability zones for high availability in Azure NetApp Files | Microsoft Docs description: Azure availability zones are highly available, fault tolerant, and more scalable than traditional single or multiple data center infrastructures. -- Last updated 11/17/2022
azure-netapp-files Use Dfs N And Dfs Root Consolidation With Azure Netapp Files https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/use-dfs-n-and-dfs-root-consolidation-with-azure-netapp-files.md
Title: Use DFS-N and DFS Root Consolidation with Azure NetApp Files | Microsoft Docs description: Learn how to configure DFS-N and DFS Root Consolidation with Azure NetApp Files -- Last updated 06/30/2022
azure-netapp-files Volume Delete https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/volume-delete.md
Title: Delete an Azure NetApp Files volume | Microsoft Docs description: Describes how to delete an Azure NetApp Files volume. -- Last updated 06/22/2023
azure-netapp-files Volume Hard Quota Guidelines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/volume-hard-quota-guidelines.md
Title: What changing to volume hard quota means for your Azure NetApp Files service | Microsoft Docs description: Describes the change to using volume hard quota, how to plan for the change, and how to monitor and manage capacities. -- Last updated 09/30/2022
azure-netapp-files Volume Quota Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/volume-quota-introduction.md
Title: Understand volume quota for Azure NetApp Files | Microsoft Docs description: Provides an overview about volume quota. Also provides references about monitoring and managing volume and pool capacity. -- Last updated 04/30/2021
azure-netapp-files Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/whats-new.md
Title: What's new in Azure NetApp Files | Microsoft Docs description: Provides a summary about the latest new features and enhancements of Azure NetApp Files. -- Last updated 11/27/2023
azure-portal Azure Portal Markdown Tile https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-portal/azure-portal-markdown-tile.md
Title: Use a custom markdown tile on Azure dashboards
description: Learn how to add a markdown tile to an Azure dashboard to display static content Last updated 03/27/2023 - # Use a markdown tile on Azure dashboards to show custom content
azure-relay Service Bus Relay Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-relay/service-bus-relay-tutorial.md
Title: Expose an on-premises WCF REST service to clients using Azure Relay description: This tutorial describes how to expose an on-premises WCF REST service to an external client by using Azure WCF Relay. - Last updated 08/11/2023
backup Backup Azure Sql Automation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-sql-automation.md
Title: SQL DB in Azure VM backup & restore via PowerShell description: Back up and restore SQL Databases in Azure VMs using Azure Backup and PowerShell. Previously updated : 07/15/2022 Last updated : 01/21/2024 ms.assetid: 57854626-91f9-4677-b6a2-5d12b6a866e1
The Recovery Services vault is a Resource Manager resource, so you must place it
3. Specify the type of redundancy to use for the vault storage. * You can use [locally redundant storage](../storage/common/storage-redundancy.md#locally-redundant-storage), [geo-redundant storage](../storage/common/storage-redundancy.md#geo-redundant-storage) or [zone-redundant storage](../storage/common/storage-redundancy.md#zone-redundant-storage) .
- * The following example sets the **-BackupStorageRedundancy** option for the [Set-AzRecoveryServicesBackupProperty](/powershell/module/az.recoveryservices/set-azrecoveryservicesbackupproperty) cmd for **testvault** set to **GeoRedundant**.
+   * The following example sets the `-BackupStorageRedundancy` option of the [Set-AzRecoveryServicesBackupProperty](/powershell/module/az.recoveryservices/set-azrecoveryservicesbackupproperty) cmdlet to `GeoRedundant` for `testvault`.
```powershell $vault1 = Get-AzRecoveryServicesVault -Name "testvault"
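# Continuation sketch for the example above (assumes $vault1 from the preceding line):
# set the vault's backup storage redundancy to GeoRedundant.
Set-AzRecoveryServicesBackupProperty -Vault $vault1 -BackupStorageRedundancy GeoRedundant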
Store the vault object in a variable, and set the vault context.
* Many Azure Backup cmdlets require the Recovery Services vault object as an input, so it's convenient to store the vault object in a variable. * The vault context is the type of data protected in the vault. Set it with [Set-AzRecoveryServicesVaultContext](/powershell/module/az.recoveryservices/set-azrecoveryservicesvaultcontext). After the context is set, it applies to all subsequent cmdlets.
-The following example sets the vault context for **testvault**.
+The following example sets the vault context for `testvault`.
```powershell Get-AzRecoveryServicesVault -Name "testvault" | Set-AzRecoveryServicesVaultContext
Now that we have the required SQL DB and the policy with which it needs to be ba
Enable-AzRecoveryServicesBackupProtection -ProtectableItem $SQLDB -Policy $NewSQLPolicy ```
-The command waits until the configure backup is completed and returns the following output.
+The command waits until the backup configuration is completed and returns the following output.
```output WorkloadName Operation Status StartTime EndTime JobID
The above output means that you can restore to any point-in-time between the dis
### Determine recovery configuration
-In the case of a SQL DB restore, the following restore scenarios are supported.
+For a SQL DB restore, the following restore scenarios are supported.
* Overriding the backed-up SQL DB with data from another recovery point - OriginalWorkloadRestore * Restoring the SQL DB as a new DB in the same SQL instance - AlternateWorkloadRestore
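As a sketch of how the two scenarios above map to PowerShell (the variables `$rp`, `$targetInstance`, and `$vault` are assumed to have been fetched in earlier steps and are placeholders here):

```powershell
# Sketch only; $rp (recovery point), $targetInstance (target SQL instance), and $vault
# are assumed to exist from earlier steps in this walkthrough.

# OriginalWorkloadRestore: override the backed-up SQL DB with data from another recovery point.
$originalConfig = Get-AzRecoveryServicesBackupWorkloadRecoveryConfig -RecoveryPoint $rp `
    -OriginalWorkloadRestore -VaultId $vault.ID

# AlternateWorkloadRestore: restore the SQL DB as a new database in the chosen SQL instance.
$alternateConfig = Get-AzRecoveryServicesBackupWorkloadRecoveryConfig -RecoveryPoint $rp `
    -TargetItem $targetInstance -AlternateWorkloadRestore -VaultId $vault.ID
```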
$PairedRegionVault = Get-AzRecoveryServicesVault -ResourceGroupName SecondaryRG
$secContainer = Get-AzRecoveryServicesBackupContainer -ContainerType AzureVMAppContainer -Status Registered -VaultId $PairedRegionVault.ID -FriendlyName "secondaryVM" ```
-Once the registered container is chosen, then we fetch the SQL instances within the container to which the DB should be restored to. Here we assume that there is 1 SQL instance within the "secondaryVM" and we fetch that instance.
+Once the registered container is chosen, we fetch the SQL instances within the container to which the database should be restored. Here we assume that there's one SQL instance within the "secondaryVM" container, and we fetch that instance.
```powershell $secSQLInstance = Get-AzRecoveryServicesBackupProtectableItem -WorkloadType MSSQL -ItemType SQLInstance -VaultId $PairedRegionVault.ID -Container $secContainer
If the output is lost or if you want to get the relevant Job ID, [get the list o
### Change policy for backup items
-You can change the policy of the backed-up item from *Policy1* to *Policy2*. To switch policies for a backed-up item, fetch the relevant policy and back up item and use the [Enable-AzRecoveryServices](/powershell/module/az.recoveryservices/enable-azrecoveryservicesbackupprotection) command with backup item as the parameter.
+You can change the policy of the backed-up item from `Policy1` to `Policy2`. To switch policies for a backed-up item, fetch the relevant policy and backup item, and use the [Enable-AzRecoveryServicesBackupProtection](/powershell/module/az.recoveryservices/enable-azrecoveryservicesbackupprotection) command with the backup item as the parameter.
```powershell $TargetPol1 = Get-AzRecoveryServicesBackupProtectionPolicy -Name <PolicyName>
$anotherBkpItem = Get-AzRecoveryServicesBackupItem -WorkloadType MSSQL -BackupMa
Enable-AzRecoveryServicesBackupProtection -Item $anotherBkpItem -Policy $TargetPol1 ```
-The command waits until the configure backup is completed and returns the following output.
+The command waits until the backup configuration is completed and returns the following output.
```output WorkloadName Operation Status StartTime EndTime JobID
batch Batch Docker Container Workloads https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/batch-docker-container-workloads.md
Title: Container workloads on Azure Batch description: Learn how to run and scale apps from container images on Azure Batch. Create a pool of compute nodes that support running container tasks. Previously updated : 01/10/2024 Last updated : 01/19/2024 ms.devlang: csharp # ms.devlang: csharp, python
a `DockerCompatible` capability if the image supports Docker containers. Batch a
support, images published by Mirantis with capability noted as `DockerCompatible`. These images may only be deployed under a User Subscription pool allocation mode Batch account.
-You can also create custom images from VMs running Docker on Windows.
+You can also create a [custom image](batch-sig-images.md) to enable container functionality on Windows.
> [!NOTE] > The image SKUs `-with-containers` or `-with-containers-smalldisk` are retired. Please see the [announcement](https://techcommunity.microsoft.com/t5/containers/updates-to-the-windows-container-runtime-support/ba-p/2788799) for details and alternative container runtime options. ### Linux support
-For Linux container workloads, Batch currently supports the following Linux images published by Microsoft Azure Batch in the Azure Marketplace without the need for a custom image.
-
-#### VM sizes without RDMA
--- Publisher: `microsoft-azure-batch`
- - Offer: `centos-container`
- - Offer: `ubuntu-server-container`
+For Linux container workloads, Batch currently supports the following Linux images published in the Azure Marketplace
+without the need for a custom image.
- Publisher: `microsoft-dsvm` - Offer: `ubuntu-hpc`
-#### VM sizes with RDMA
+#### Alternate image options
-- Publisher: `microsoft-azure-batch`
- - Offer: `centos-container-rdma`
- - Offer: `ubuntu-server-container-rdma`
+Currently there are other images published by `microsoft-azure-batch` that support container workloads:
-- Publisher: `microsoft-dsvm`
- - Offer: `ubuntu-hpc`
+- Publisher: `microsoft-azure-batch`
+ - Offer: `centos-container`
+ - Offer: `centos-container-rdma` (For use exclusively on VM SKUs with Infiniband)
+ - Offer: `ubuntu-server-container`
+ - Offer: `ubuntu-server-container-rdma` (For use exclusively on VM SKUs with Infiniband)
> [!IMPORTANT]
-> It is recommended to use the `microsoft-dsvm` `ubuntu-hpc` VM image if possible.
+> It is recommended to use the `microsoft-dsvm` `ubuntu-hpc` VM image instead of images published by
+> `microsoft-azure-batch`. This image may be used on any VM SKU.
#### Notes The docker data root of the above images lies in different places: - For the Azure Batch published `microsoft-azure-batch` images (Offer: `centos-container-rdma`, etc.), the docker data root is mapped to _/mnt/batch/docker_, which is located on the temporary disk.
- - For the HPC image, or `microsoft-dsvm` (Offer: `ubuntu-hpc`, etc.), the docker data root is unchanged from the Docker default which is _/var/lib/docker_ on Linux and _C:\ProgramData\Docker_ on Windows. These folders are located on the OS disk.
+ - For the HPC image, or `microsoft-dsvm` (Offer: `ubuntu-hpc`, etc.), the docker data root is unchanged from the Docker default, which is _/var/lib/docker_ on Linux and _C:\ProgramData\Docker_ on Windows. These folders are located on the OS disk.
For non-Batch published images, the OS disk has the potential risk of being filled up quickly as container images are downloaded.
These images are only supported for use in Azure Batch pools and are geared for
- Pre-installed NVIDIA GPU drivers and NVIDIA container runtime, to streamline deployment on Azure N-series VMs. - VM images with the suffix of `-rdma` are pre-configured with support for InfiniBand RDMA VM sizes. These VM images shouldn't be used with VM sizes that don't have InfiniBand support.
-You can also create custom images from VMs running Docker on one of the Linux distributions that's compatible with Batch. If you choose to provide your own custom Linux image, see the instructions in [Use a managed image to create a custom image pool](batch-custom-images.md).
+You can also create [custom images](batch-sig-images.md) for Batch container workloads, based on one of the Linux distributions
+that's compatible with Batch. For Docker support on a custom image, install a suitable Docker-compatible runtime, such as
+a version of [Docker](https://www.docker.com) or
+[Mirantis Container Runtime](https://www.mirantis.com/software/mirantis-container-runtime/). Installing just
+a Docker-CLI compatible tool is insufficient; a Docker Engine compatible runtime is required.
-For Docker support on a custom image, install [Docker Pro](https://www.docker.com/products/pro/) or the open-source [Docker Community Edition](https://www.docker.com/community).
+> [!IMPORTANT]
+> Neither Microsoft nor Azure Batch will provide support for issues related to Docker (any version or edition),
+> Mirantis Container Runtime, or Moby runtimes. Customers electing to use these runtimes in their images should reach
+> out to the company or entity providing support for runtime issues.
-Additional considerations for using a custom Linux image:
+More considerations for using a custom Linux image:
- To take advantage of the GPU performance of Azure N-series sizes when using a custom image, pre-install NVIDIA drivers. Also, you need to install the Docker Engine Utility for NVIDIA GPUs, [NVIDIA Docker](https://github.com/NVIDIA/nvidia-docker).-- To access the Azure RDMA network, use an RDMA-capable VM size. Necessary RDMA drivers are installed in the CentOS HPC and Ubuntu images supported by Batch. Additional configuration may be needed to run MPI workloads. See [Use RDMA or GPU instances in Batch pool](batch-pool-compute-intensive-sizes.md).
+- To access the Azure RDMA network, use an RDMA-capable VM size. Necessary RDMA drivers are installed in the CentOS HPC and Ubuntu images supported by Batch. Extra configuration may be needed to run MPI workloads. See [Use RDMA or GPU instances in Batch pool](batch-pool-compute-intensive-sizes.md).
## Container configuration for Batch pool
CloudPool pool = batchClient.PoolOperations.CreatePool(
To run a container task on a container-enabled pool, specify container-specific settings. Settings include the image to use, registry, and container run options. -- Use the `ContainerSettings` property of the task classes to configure container-specific settings. These settings are defined by the [TaskContainerSettings](/dotnet/api/microsoft.azure.batch.taskcontainersettings) class. The `--rm` container option doesn't require an additional `--runtime` option since it's taken care of by Batch.
+- Use the `ContainerSettings` property of the task classes to configure container-specific settings. These settings are defined by the [TaskContainerSettings](/dotnet/api/microsoft.azure.batch.taskcontainersettings) class. The `--rm` container option doesn't require another `--runtime` option since it's taken care of by Batch.
- If you run tasks on container images, the [cloud task](/dotnet/api/microsoft.azure.batch.cloudtask) and [job manager task](/dotnet/api/microsoft.azure.batch.cloudjob.jobmanagertask) require container settings. However, the [start task](/dotnet/api/microsoft.azure.batch.starttask), [job preparation task](/dotnet/api/microsoft.azure.batch.cloudjob.jobpreparationtask), and [job release task](/dotnet/api/microsoft.azure.batch.cloudjob.jobreleasetask) don't require container settings (that is, they can run within a container context or directly on the node).
If the container image for a Batch task is configured with an [ENTRYPOINT](https
- If the image doesn't have an ENTRYPOINT, set a command line appropriate for the container, for example, `/app/myapp` or `/bin/sh -c python myscript.py`
-Optional [ContainerRunOptions](/dotnet/api/microsoft.azure.batch.taskcontainersettings.containerrunoptions) are additional arguments you provide to the `docker create` command that Batch uses to create and run the container. For example, to set a working directory for the container, set the `--workdir <directory>` option. See the [docker create](https://docs.docker.com/engine/reference/commandline/create/) reference for additional options.
+Optional [ContainerRunOptions](/dotnet/api/microsoft.azure.batch.taskcontainersettings.containerrunoptions) are other arguments you provide to the `docker create` command that Batch uses to create and run the container. For example, to set a working directory for the container, set the `--workdir <directory>` option. See the [docker create](https://docs.docker.com/engine/reference/commandline/create/) reference for more options.
### Container task working directory
cdn Cdn Add To Web App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/cdn-add-to-web-app.md
description: In this tutorial, Azure Content Delivery Network (CDN) is added to
- Last updated 02/27/2023
cdn Cdn Advanced Http Reports https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/cdn-advanced-http-reports.md
ms.assetid: ef90adc1-580e-4955-8ff1-bde3f3cafc5d - Last updated 02/27/2023
cdn Cdn Analyze Usage Patterns https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/cdn-analyze-usage-patterns.md
Title: Core Reports from Edgio | Microsoft Docs description: 'Learn how to access and view Edgio Core Reports via the Manage portal for Edgio profiles.' - ms.assetid: 5a0d9018-8bdb-48ff-84df-23648ebcf763 - Last updated 01/23/2017
cdn Cdn App Dev Net https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/cdn-app-dev-net.md
ms.assetid: 63cf4101-92e7-49dd-a155-a90e54a792ca - Last updated 02/27/2023
cdn Cdn App Dev Node https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/cdn-app-dev-node.md
documentationcenter: nodejs - ms.assetid: c4bb6a61-de3d-4f0c-9dca-202554c43dfa - Last updated 04/02/2021
cdn Cdn Azure Diagnostic Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/cdn-azure-diagnostic-logs.md
- Last updated 02/27/2023
cdn Cdn Billing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/cdn-billing.md
Last updated 02/27/2023
cdn Cdn Caching Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/cdn-caching-policy.md
documentationcenter: .NET - ms.assetid: be33aecc-6dbe-43d7-a056-10ba911e0e94 - Last updated 02/04/2017
cdn Cdn Caching Rules Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/cdn-caching-rules-tutorial.md
Title: Tutorial - Set Azure CDN caching rules | Microsoft Docs description: In this tutorial, you set an Azure CDN global caching rule and a custom caching rule. - - Last updated 04/20/2018
cdn Cdn China Delivery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/cdn-china-delivery.md
Title: China content delivery with Azure CDN | Microsoft Docs description: Learn about using Azure Content Delivery Network (CDN) to deliver content to China users. - Last updated 02/27/2023
cdn Cdn Cors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/cdn-cors.md
ms.assetid: 86740a96-4269-4060-aba3-a69f00e6f14e - Last updated 02/27/2023
cdn Cdn Create Endpoint How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/cdn-create-endpoint-how-to.md
Last updated 02/27/2023
cdn Cdn Ddos https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/cdn-ddos.md
- Last updated 02/27/2023
cdn Cdn Dynamic Site Acceleration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/cdn-dynamic-site-acceleration.md
- Last updated 02/27/2023
cdn Cdn Edge Performance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/cdn-edge-performance.md
ms.assetid: 8cc596a7-3e01-4f76-af7b-a05a1421517e - Last updated 02/27/2023
cdn Cdn How Caching Works https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/cdn-how-caching-works.md
- Last updated 02/23/2023
cdn Cdn Http Debug Headers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/cdn-http-debug-headers.md
Title: X-EC-Debug HTTP headers for Azure CDN rules engine | Microsoft Docs description: The X-EC-Debug debug cache request header provides additional information about the cache policy that is applied to the requested asset. These headers are specific to Edgio. - Last updated 04/12/2018
cdn Cdn Http Variables https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/cdn-http-variables.md
- Last updated 02/27/2023
cdn Cdn Http2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/cdn-http2.md
- Last updated 02/27/2023
cdn Cdn Improve Performance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/cdn-improve-performance.md
ms.assetid: af1cddff-78d8-476b-a9d0-8c2164e4de5d - Last updated 02/27/2023
cdn Cdn Large File Optimization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/cdn-large-file-optimization.md
- Last updated 02/27/2023
cdn Cdn Log Analysis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/cdn-log-analysis.md
- Last updated 02/27/2023
cdn Cdn Manage Expiration Of Blob Content https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/cdn-manage-expiration-of-blob-content.md
ms.assetid: ad4801e9-d09a-49bf-b35c-efdc4e6034e8 ms.devlang: csharp Last updated 02/27/2023
cdn Cdn Manage Expiration Of Cloud Service Content https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/cdn-manage-expiration-of-cloud-service-content.md
ms.assetid: bef53fcc-bb13-4002-9324-9edee9da8288 ms.devlang: csharp
cdn Cdn Media Streaming Optimization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/cdn-media-streaming-optimization.md
- Last updated 02/27/2023
cdn Cdn Migrate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/cdn-migrate.md
- Last updated 02/27/2023 - # Migrate an Azure CDN profile from Standard Edgio to Premium Edgio
cdn Cdn Msft Http Debug Headers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/cdn-msft-http-debug-headers.md
Last updated 02/27/2023
cdn Cdn Pop Abbreviations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/cdn-pop-abbreviations.md
Last updated 02/27/2023
cdn Cdn Pop List Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/cdn-pop-list-api.md
- Last updated 02/27/2023 -
cdn Cdn Pop Locations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/cdn-pop-locations.md
ms.assetid: 669ef140-a6dd-4b62-9b9d-3f375a14215e Last updated 05/30/2023
cdn Cdn Preload Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/cdn-preload-endpoint.md
ms.assetid: 5ea3eba5-1335-413e-9af3-3918ce608a83 - Last updated 02/27/2023
cdn Cdn Purge Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/cdn-purge-endpoint.md
Title: Purge an Azure CDN endpoint | Microsoft Docs description: Learn how to purge all cached content from an Azure Content Delivery Network endpoint. Edge nodes cache assets until their time-to-live expires. ms.assetid: 0b50230b-fe82-4740-90aa-95d4dde8bd4f - Last updated 02/21/2023
cdn Cdn Query String Premium https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/cdn-query-string-premium.md
Title: Control Azure CDN caching behavior with query strings - premium tier description: Azure CDN query string caching controls how files are cached when a web request contains a query string. This article describes query string caching in the Azure CDN Premium from Edgio product. - ms.assetid: 99db4a85-4f5f-431f-ac3a-69e05518c997 - Last updated 06/11/2018
cdn Cdn Real Time Alerts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/cdn-real-time-alerts.md
Title: Azure CDN real-time alerts | Microsoft Docs description: Real-time alerts in Microsoft Azure CDN. Real-time alerts provide notifications about the performance of the endpoints in your CDN profile. - ms.assetid: 1e85b809-e1a9-4473-b835-69d1b4ed3393 - Last updated 01/23/2017
cdn Cdn Real Time Stats https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/cdn-real-time-stats.md
Title: Real-time stats in Azure CDN | Microsoft Docs description: Real-time statistics provides real-time data about the performance of Azure CDN when delivering content to your clients. - ms.assetid: c7989340-1172-4315-acbb-186ba34dd52a - Last updated 01/23/2017
cdn Cdn Resource Health https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/cdn-resource-health.md
ms.assetid: bf23bd89-35b2-4aca-ac7f-68ee02953f31 Last updated 02/27/2023
cdn Cdn Sas Storage Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/cdn-sas-storage-support.md
description: Azure CDN supports the use of Shared Access Signature (SAS) to gran
- Last updated 02/27/2023
cdn Cdn Token Auth https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/cdn-token-auth.md
ms.assetid: 837018e3-03e6-4f9c-a23e-4b63d5707a64 Last updated 02/27/2023
cdn Cdn Traffic Manager https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/cdn-traffic-manager.md
Last updated 02/27/2023 - # Failover across multiple endpoints with Azure Traffic Manager
cdn Cdn Troubleshoot Compression https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/cdn-troubleshoot-compression.md
ms.assetid: a6624e65-1a77-4486-b473-8d720ce28f8b - Last updated 02/27/2023
cdn Cdn Troubleshoot Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/cdn-troubleshoot-endpoint.md
ms.assetid: b588a1eb-ab69-4fc7-ae4d-157c3e46f4a8 - Last updated 02/27/2023
cdn Cdn Verizon Custom Reports https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/cdn-verizon-custom-reports.md
Title: Custom Reports from Edgio | Microsoft Docs description: 'You can view usage patterns for your CDN by using the following reports: Bandwidth, Data Transferred, Hits, Cache Statuses, Cache Hit Ratio, IPV4/IPV6 Data Transferred.' - - Last updated 10/11/2017
cdn Cdn Verizon Http Headers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/cdn-verizon-http-headers.md
Last updated 02/27/2023
cdn Create Profile Endpoint Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/create-profile-endpoint-bicep.md
description: In this quickstart, learn how to create an Azure Content Delivery N
Last updated 03/14/2022
cdn Create Profile Endpoint Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/create-profile-endpoint-template.md
Last updated 02/27/2023
cdn Monitoring And Access Log https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/monitoring-and-access-log.md
Last updated 12/19/2023
chaos-studio Chaos Studio Service Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/chaos-studio/chaos-studio-service-limits.md
Last updated 11/01/2021 - # Azure Chaos Studio service limits
cloud-services-extended-support Available Sizes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services-extended-support/available-sizes.md
Last updated 10/13/2020- # Available sizes for Azure Cloud Services (extended support)
cloud-services-extended-support Certificates And Key Vault https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services-extended-support/certificates-and-key-vault.md
Last updated 10/13/2020- # Use certificates with Azure Cloud Services (extended support)
cloud-services-extended-support Cloud Services Model And Package https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services-extended-support/cloud-services-model-and-package.md
Last updated 10/13/2020- # What is the Azure Cloud Service model and how do I package it?
cloud-services-extended-support Configure Scaling https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services-extended-support/configure-scaling.md
Last updated 10/13/2020- # Configure scaling options with Azure Cloud Services (extended support)
cloud-services-extended-support Deploy Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services-extended-support/deploy-portal.md
Last updated 10/13/2020- # Deploy a Azure Cloud Services (extended support) using the Azure portal
cloud-services-extended-support Deploy Prerequisite https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services-extended-support/deploy-prerequisite.md
Last updated 10/13/2020- # Prerequisites for deploying Azure Cloud Services (extended support)
cloud-services-extended-support Deploy Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services-extended-support/deploy-template.md
Last updated 10/13/2020- # Deploy a Cloud Service (extended support) using ARM templates
cloud-services-extended-support Enable Alerts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services-extended-support/enable-alerts.md
Last updated 10/13/2020- # Enable monitoring for Cloud Services (extended support) using the Azure portal
cloud-services-extended-support Enable Key Vault Virtual Machine https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services-extended-support/enable-key-vault-virtual-machine.md
Last updated 05/12/2021- # Apply the Key Vault VM extension to Azure Cloud Services (extended support)
cloud-services-extended-support Enable Rdp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services-extended-support/enable-rdp.md
Last updated 10/13/2020- # Apply the Remote Desktop extension to Azure Cloud Services (extended support)
cloud-services-extended-support Extensions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services-extended-support/extensions.md
Last updated 10/13/2020- # Extensions for Cloud Services (extended support)
cloud-services-extended-support Feature Support Analysis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services-extended-support/feature-support-analysis.md
Last updated 11/8/2022- # Feature Analysis: Cloud Services (extended support) and Virtual Machine Scale Sets This article provides a feature analysis of Cloud Services (extended support) and Virtual Machine Scale Sets. For more information on Virtual Machine Scale Sets, please visit the documentation [here](../virtual-machine-scale-sets/overview.md)
cloud-services-extended-support In Place Migration Common Errors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services-extended-support/in-place-migration-common-errors.md
Last updated 2/08/2021- # Common errors and known issues when migrating to Azure Cloud Services (extended support)
cloud-services-extended-support In Place Migration Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services-extended-support/in-place-migration-overview.md
Last updated 2/08/2021- # Migrate Azure Cloud Services (classic) to Azure Cloud Services (extended support)
cloud-services-extended-support In Place Migration Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services-extended-support/in-place-migration-portal.md
Last updated 2/08/2021- # Migrate to Cloud Services (extended support) using the Azure portal
cloud-services-extended-support Post Migration Changes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services-extended-support/post-migration-changes.md
Last updated 2/08/2021- # Post migration changes
cloud-services-extended-support Schema Cscfg File https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services-extended-support/schema-cscfg-file.md
Last updated 10/14/2020
- # Azure Cloud Services (extended support) config schema (cscfg File)
cloud-services-extended-support Schema Cscfg Networkconfiguration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services-extended-support/schema-cscfg-networkconfiguration.md
Last updated 10/14/2020
- # Azure Cloud Services (extended support) config networkConfiguration schema
cloud-services-extended-support Schema Cscfg Role https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services-extended-support/schema-cscfg-role.md
Last updated 10/14/2020
- # Azure Cloud Services (extended support) config role schema
cloud-services-extended-support Schema Csdef File https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services-extended-support/schema-csdef-file.md
Last updated 10/14/2020
- # Azure Cloud Services (extended support) definition schema (csdef file)
cloud-services-extended-support Schema Csdef Loadbalancerprobe https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services-extended-support/schema-csdef-loadbalancerprobe.md
Last updated 10/14/2020
- # Azure Cloud Services (extended support) definition LoadBalancerProbe schema
cloud-services-extended-support Schema Csdef Networktrafficrules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services-extended-support/schema-csdef-networktrafficrules.md
Last updated 10/14/2020
- # Azure Cloud Services (extended support) definition NetworkTrafficRules schema
cloud-services-extended-support Schema Csdef Webrole https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services-extended-support/schema-csdef-webrole.md
Last updated 10/14/2020
- # Azure Cloud Services (extended support) definition WebRole schema
cloud-services-extended-support Schema Csdef Workerrole https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services-extended-support/schema-csdef-workerrole.md
Last updated 10/14/2020
- # Azure Cloud Services (extended-support) definition WorkerRole schema
cloud-services Applications Dont Support Tls 1 2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/applications-dont-support-tls-1-2.md
tag: top-support-issue - Last updated 02/21/2023
cloud-services Cloud Services Guestos Family1 Retirement https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-guestos-family1-retirement.md
Title: Guest OS family 1 retirement notice | Microsoft Docs
description: Provides information about when the Azure Guest OS Family 1 retirement happened and how to determine if you are affected
cloud-services Cloud Services Guestos Msrc Releases https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-guestos-msrc-releases.md
Title: List of updates applied to the Azure Guest OS | Microsoft Docs
description: This article lists the Microsoft Security Response Center updates applied to different Azure Guest OS. See if an update applies to the Guest OS you are using. ms.assetid: d0a272a9-ed01-4f4c-a0b3-bd5e841bdd77 - Last updated 01/16/2024
cloud-services Cloud Services Guestos Retirement Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-guestos-retirement-policy.md
Title: Supportability and retirement policy guide for Azure Guest OS | Microsoft
description: Provides information about what Microsoft will support as regards to the Azure Guest OS used by Cloud Services. ms.assetid: 919dd781-4dc6-4e50-bda8-9632966c5458 - Last updated 02/21/2023
cloud-services Cloud Services Guestos Update Matrix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-guestos-update-matrix.md
Title: Learn about the latest Azure Guest OS Releases | Microsoft Docs
description: The latest release news and SDK compatibility for Azure Cloud Services Guest OS. ms.assetid: 6306cafe-1153-44c7-8554-623b03d59a34 - Last updated 01/16/2024
cloud-services Cloud Services Troubleshoot Overconstrained Allocation Request https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-troubleshoot-overconstrained-allocation-request.md
Title: Troubleshoot OverconstrainedAllocationRequest when deploying a Cloud service (classic) to Azure | Microsoft Docs description: This article shows how to resolve an OverconstrainedAllocationRequest exception when deploying a Cloud service (classic) to Azure.
cloud-services Mitigate Se https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/mitigate-se.md
Title: Guidance to mitigate speculative execution in Azure
description: In this article, learn now to mitigate speculative execution side-channel vulnerabilities in Azure. tags: azure-resource-manager keywords: spectre,meltdown,specter
communication-services Media Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/voice-video-calling/media-access.md
Title: Azure Communication Services Calling SDK RAW media overview
description: Provides an overview of media access -
confidential-ledger Quickstart Ledger Explorer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-ledger/quickstart-ledger-explorer.md
Last updated 11/08/2023 -
container-instances Container Instances Image Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-instances/container-instances-image-security.md
Last updated 06/17/2022- # Security considerations for Azure Container Instances
cosmos-db Programmatic Database Migration Assistant Legacy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/programmatic-database-migration-assistant-legacy.md
- Last updated 04/20/2023
cosmos-db How To Delete By Partition Key https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/how-to-delete-by-partition-key.md
Last updated 05/23/2023- # Delete items by partition key value - API for NoSQL (preview)
cost-management-billing Cost Management Billing Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/cost-management-billing-overview.md
Title: Overview of Cost Management + Billing description: You use Cost Management + Billing features to conduct billing administrative tasks and manage billing access to costs. You also use the features to monitor and control Azure spending and to optimize Azure resource use.
-keywords:
cost-management-billing Overview Cost Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/costs/overview-cost-management.md
Title: Overview of Cost Management description: You use Cost Management features to monitor and control Azure spending and to optimize Azure resource use.
-keywords:
cost-management-billing Capabilities Allocation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/finops/capabilities-allocation.md
Title: Cost allocation description: This article helps you understand the cost allocation capability within the FinOps Framework and how to implement that in the Microsoft Cloud.
-keywords:
Last updated 06/22/2023
cost-management-billing Capabilities Analysis Showback https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/finops/capabilities-analysis-showback.md
Title: Data analysis and showback description: This article helps you understand the data analysis and showback capability within the FinOps Framework and how to implement that in the Microsoft Cloud.
-keywords:
Last updated 06/22/2023
cost-management-billing Capabilities Anomalies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/finops/capabilities-anomalies.md
Title: Managing anomalies description: This article helps you understand the managing anomalies capability within the FinOps Framework and how to implement that in the Microsoft Cloud.
-keywords:
Last updated 06/22/2023
cost-management-billing Capabilities Budgets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/finops/capabilities-budgets.md
Title: Budget management description: This article helps you understand the budget management capability within the FinOps Framework and how to implement that in the Microsoft Cloud.
-keywords:
Last updated 06/23/2023
cost-management-billing Capabilities Chargeback https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/finops/capabilities-chargeback.md
Title: Chargeback and finance integration description: This article helps you understand the chargeback and finance integration capability within the FinOps Framework and how to implement that in the Microsoft Cloud.
-keywords:
Last updated 06/23/2023
cost-management-billing Capabilities Commitment Discounts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/finops/capabilities-commitment-discounts.md
Title: Managing commitment-based discounts description: This article helps you understand the managing commitment-based discounts capability within the FinOps Framework and how to implement that in the Microsoft Cloud.
-keywords:
Last updated 06/23/2023
cost-management-billing Capabilities Culture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/finops/capabilities-culture.md
Title: Establishing a FinOps culture description: This article helps you understand the Establishing a FinOps culture capability within the FinOps Framework and how to implement that in the Microsoft Cloud.
-keywords:
Last updated 06/22/2023
cost-management-billing Capabilities Education https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/finops/capabilities-education.md
Title: FinOps education and enablement description: This article helps you understand the FinOps education and enablement capability within the FinOps Framework and how to implement that in the Microsoft Cloud.
-keywords:
Last updated 06/22/2023
cost-management-billing Capabilities Efficiency https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/finops/capabilities-efficiency.md
Title: Resource utilization and efficiency description: This article helps you understand the resource utilization and efficiency capability within the FinOps Framework and how to implement that in the Microsoft Cloud.
-keywords:
Last updated 06/23/2023
cost-management-billing Capabilities Forecasting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/finops/capabilities-forecasting.md
Title: Forecasting description: This article helps you understand the forecasting capability within the FinOps Framework and how to implement that in the Microsoft Cloud.
-keywords:
Last updated 06/22/2023
cost-management-billing Capabilities Frameworks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/finops/capabilities-frameworks.md
Title: FinOps and intersecting frameworks description: This article helps you understand the FinOps and intersecting frameworks capability within the FinOps Framework and how to implement that in the Microsoft Cloud.
-keywords:
Last updated 06/22/2023
cost-management-billing Capabilities Ingestion Normalization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/finops/capabilities-ingestion-normalization.md
Title: Data ingestion and normalization description: This article helps you understand the data ingestion and normalization capability within the FinOps Framework and how to implement that in the Microsoft Cloud.
-keywords:
Last updated 06/22/2023
cost-management-billing Capabilities Onboarding https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/finops/capabilities-onboarding.md
Title: Onboarding workloads description: This article helps you understand the onboarding workloads capability within the FinOps Framework and how to implement that in the Microsoft Cloud.
-keywords:
Last updated 06/22/2023
cost-management-billing Capabilities Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/finops/capabilities-policy.md
Title: Cloud policy and governance description: This article helps you understand the cloud policy and governance capability within the FinOps Framework and how to implement that in the Microsoft Cloud.
-keywords:
Last updated 06/22/2023
cost-management-billing Capabilities Shared Cost https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/finops/capabilities-shared-cost.md
Title: Managing shared cost description: This article helps you understand the managing shared cost capability within the FinOps Framework and how to implement that in the Microsoft Cloud.
-keywords:
Last updated 06/22/2023
cost-management-billing Capabilities Structure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/finops/capabilities-structure.md
Title: Establishing a FinOps decision and accountability structure description: This article helps you understand the establishing a FinOps decision and accountability structure capability within the FinOps Framework and how to implement that in the Microsoft Cloud.
-keywords:
Last updated 06/22/2023
cost-management-billing Capabilities Unit Costs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/finops/capabilities-unit-costs.md
Title: Measuring unit costs description: This article helps you understand the measuring unit costs capability within the FinOps Framework and how to implement that in the Microsoft Cloud.
-keywords:
Last updated 10/25/2023
cost-management-billing Capabilities Workloads https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/finops/capabilities-workloads.md
Title: Workload management and automation description: This article helps you understand the workload management and automation capability within the FinOps Framework and how to implement that in the Microsoft Cloud.
-keywords:
Last updated 06/23/2023
cost-management-billing Overview Finops https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/finops/overview-finops.md
Title: What is FinOps? description: FinOps combines financial management principles with cloud engineering and operations to provide organizations with a better understanding of their cloud spending. It also helps them make informed decisions on how to allocate and manage their cloud costs.
-keywords:
Last updated 06/21/2023
cost-management-billing Manage Licenses Centrally https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/scope-level/manage-licenses-centrally.md
Title: How Azure applies centrally assigned SQL licenses to hourly usage description: This article provides a detailed explanation about how Azure applies centrally assigned SQL licenses to hourly usage with Azure Hybrid Benefit.
-keywords:
Last updated 04/20/2023
cost-management-billing Overview Azure Hybrid Benefit Scope https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/scope-level/overview-azure-hybrid-benefit-scope.md
Title: What is centrally managed Azure Hybrid Benefit for SQL Server? description: Azure Hybrid Benefit is a licensing benefit that lets you bring your on-premises core-based Windows Server and SQL Server licenses with active Software Assurance (or subscription) to Azure.
-keywords:
Last updated 05/03/2023
cost-management-billing Sql Iaas Extension Registration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/scope-level/sql-iaas-extension-registration.md
Title: SQL IaaS extension registration options for Cost Management administrators description: This article explains the SQL IaaS extension registration options available to Cost Management administrators.
-keywords:
Last updated 04/20/2023
cost-management-billing Sql Server Hadr Licenses https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/scope-level/sql-server-hadr-licenses.md
Title: SQL Server HADR and centrally managed Azure Hybrid Benefit coexistence description: This article explains how the SQL Server HADR Software Assurance benefit and centrally managed Azure Hybrid Benefit coexist.
-keywords:
Last updated 04/20/2022
cost-management-billing Transition Existing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/scope-level/transition-existing.md
Title: Transition to centrally managed Azure Hybrid Benefit description: This article describes the changes and several transition scenarios to illustrate transitioning to centrally managed Azure Hybrid Benefit.
-keywords:
Last updated 04/20/2023
data-factory Concepts Change Data Capture Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/concepts-change-data-capture-resource.md
description: Learn more about the change data capture resource in Azure Data Factory. -
data-factory Configure Bcdr Azure Ssis Integration Runtime https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/configure-bcdr-azure-ssis-integration-runtime.md
Title: Configure Azure-SSIS integration runtime for business continuity and disaster recovery (BCDR) description: This article describes how to configure Azure-SSIS integration runtime in Azure Data Factory with Azure SQL Database/Managed Instance failover group for business continuity and disaster recovery (BCDR). -
data-factory Copy Activity Performance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/copy-activity-performance.md
Title: Copy activity performance and scalability guide description: Learn about key factors that affect the performance of data movement in Azure Data Factory and Azure Synapse Analytics pipelines when you use the copy activity.-
data-factory How To Change Data Capture Resource With Schema Evolution https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/how-to-change-data-capture-resource-with-schema-evolution.md
Title: Capture changed data with schema evolution by using a change data capture
description: Get step-by-step instructions on how to capture changed data with schema evolution from Azure SQL Database to a Delta sink by using a change data capture (CDC) resource. - - - Last updated 07/21/2023
data-factory How To Change Data Capture Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/how-to-change-data-capture-resource.md
Title: Capture changed data by using a change data capture resource
description: Get step-by-step instructions on how to capture changed data from Azure Data Lake Storage Gen2 to Azure SQL Database by using a change data capture (CDC) resource. -
data-factory How To Manage Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/how-to-manage-settings.md
Title: Managing Azure Data Factory settings and preferences
description: Learn how to manage Azure Data Factory settings and preferences. -
data-factory How To Manage Studio Preview Exp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/how-to-manage-studio-preview-exp.md
Title: Managing Azure Data Factory studio preview experience
description: Learn more about the Azure Data Factory studio preview experience. -
data-factory How To Use Trigger Parameterization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/how-to-use-trigger-parameterization.md
- Last updated 07/20/2023
data-factory Iterative Development Debugging https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/iterative-development-debugging.md
-
data-factory Scenario Dataflow Process Data Aml Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/scenario-dataflow-process-data-aml-models.md
Title: Use data flows to process data from automated machine learning (AutoML) models description: Learn how to use Azure Data Factory data flows to process data from automated machine learning (AutoML) models.-
- Last updated 07/20/2023
# Process data from automated machine learning models by using data flows
data-factory Solution Template Extract Data From Pdf https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/solution-template-extract-data-from-pdf.md
Title: Extract data from PDF
description: Learn how to use a solution template to extract data from a PDF source using Azure Data Factory. -
data-factory Solution Template Pii Detection And Masking https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/solution-template-pii-detection-and-masking.md
Title: PII detection and masking
description: Learn how to use a solution template to detect and mask PII data using Azure Data Factory. -
data-lake-store Data Lake Store Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-store/data-lake-store-best-practices.md
Title: Best practices for using Azure Data Lake Storage Gen1 | Microsoft Docs description: Learn the best practices about data ingestion, data security, and performance related to using Azure Data Lake Storage Gen1 (previously known as Azure Data Lake Store)
data-share Data Share Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-share/data-share-troubleshoot.md
Title: Troubleshoot Azure Data Share description: Learn how to troubleshoot problems with invitations and errors when you create or receive data shares in Azure Data Share.-
data-share Accept Share Invitations Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-share/scripts/powershell/accept-share-invitations-powershell.md
Title: "PowerShell script: Accept invitation from an Azure Data Share" description: This PowerShell script accepts invitations from an existing data share.-
data-share Add Datasets Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-share/scripts/powershell/add-datasets-powershell.md
Title: "PowerShell script: Add a blob dataset to an Azure Data Share" description: This PowerShell script adds a blob dataset to an existing share.-
data-share View Share Details Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-share/scripts/powershell/view-share-details-powershell.md
Title: "PowerShell script: List existing shares in Azure Data Share" description: This PowerShell script lists and displays details of shares.- Last updated 12/19/2023- # Use PowerShell to view the details of a sent data share
databox Data Box Heavy Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox/data-box-heavy-overview.md
Title: Microsoft Azure Data Box Heavy overview | Microsoft Docs description: Describes Azure Data Box, a hybrid solution that enables you to transfer massive amounts of data into Azure
databox Data Box Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox/data-box-overview.md
Title: Microsoft Azure Data Box overview | Microsoft Docs description: Describes Azure Data Box, a cloud solution that enables you to transfer massive amounts of data into Azure
dedicated-hsm Deployment Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dedicated-hsm/deployment-architecture.md
Last updated 06/03/2022
dedicated-hsm High Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dedicated-hsm/high-availability.md
Last updated 03/25/2021
dedicated-hsm Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dedicated-hsm/monitoring.md
Last updated 11/14/2022
dedicated-hsm Networking https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dedicated-hsm/networking.md
Last updated 03/25/2021
dedicated-hsm Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dedicated-hsm/overview.md
tags: azure-resource-manager
Last updated 03/25/2021
dedicated-hsm Physical Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dedicated-hsm/physical-security.md
Last updated 03/25/2021
dedicated-hsm Supportability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dedicated-hsm/supportability.md
Last updated 03/25/2021
dedicated-hsm Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dedicated-hsm/troubleshoot.md
tags: azure-resource-manager
Last updated 05/12/2022
dedicated-hsm Tutorial Deploy Hsm Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dedicated-hsm/tutorial-deploy-hsm-cli.md
Title: Tutorial deploys into an existing virtual network using the Azure CLI - Azure Dedicated HSM | Microsoft Docs description: Tutorial showing how to deploy a dedicated HSM using the CLI into an existing virtual network - - Last updated 03/25/2021
dedicated-hsm Tutorial Deploy Hsm Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dedicated-hsm/tutorial-deploy-hsm-powershell.md
Title: Tutorial deploys into an existing virtual network using PowerShell - Azure Dedicated HSM | Microsoft Docs description: Tutorial showing how to deploy a dedicated HSM using PowerShell into an existing virtual network - Last updated 03/25/2021
defender-for-cloud Container Image Mapping https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/container-image-mapping.md
After building a container image in an Azure DevOps CI/CD pipeline and pushing i
1. (Optional) Select + by **Container Images** to add other filters to your query, such as **Has vulnerabilities** to filter only container images with CVEs.
-1. After running your query, you will see the mapping between container registry and Azure DevOps pipeline. Click **...** next to the edge to see additional details on where the Azure DevOps pipeline was run.
+1. After running your query, you will see the mapping between container registry and Azure DevOps pipeline. Select **...** next to the edge to see additional details on where the Azure DevOps pipeline was run.
:::image type="content" source="media/container-image-mapping/mapping-results.png" alt-text="Screenshot that shows an advanced query for container mapping results." lightbox="media/container-image-mapping/mapping-results.png":::
-Below is an example of an advanced query that utilizes container image mapping. Starting with a Kubernetes workload that is exposed to the internet, you can trace all container images with high severity CVEs back to the Azure DevOps pipeline where the container image was built, empowering a security practitioner to kick off a developer remediation workflow.
+The following is an example of an advanced query that utilizes container image mapping. Starting with a Kubernetes workload that is exposed to the internet, you can trace all container images with high severity CVEs back to the Azure DevOps pipeline where the container image was built, empowering a security practitioner to kick off a developer remediation workflow.
:::image type="content" source="media/container-image-mapping/advanced-mapping-query.png" alt-text="Screenshot that shows basic container mapping results." lightbox="media/container-image-mapping/advanced-mapping-query.png":::
After building a container image in a GitHub workflow and pushing it to a regist
1. (Optional) Select + by **Container Images** to add other filters to your query, such as **Has vulnerabilities** to filter only container images with CVEs.
-1. After running your query, you will see the mapping between container registry and GitHub workflow. Click **...** next to the edge to see additional details on where the GitHub workflow was run.
+1. After running your query, you will see the mapping between container registry and GitHub workflow. Select **...** next to the edge to see additional details on where the GitHub workflow was run.
-Below is an example of an advanced query that utilizes container image mapping. Starting with a Kubernetes workload that is exposed to the internet, you can trace all container images with high severity CVEs back to the GitHub repository where the container image was built, empowering a security practitioner to kick off a developer remediation workflow.
+The following is an example of an advanced query that utilizes container image mapping. Starting with a Kubernetes workload that is exposed to the internet, you can trace all container images with high severity CVEs back to the GitHub repository where the container image was built, empowering a security practitioner to kick off a developer remediation workflow.
:::image type="content" source="media/container-image-mapping/advanced-mapping-query.png" alt-text="Screenshot that shows basic container mapping results." lightbox="media/container-image-mapping/advanced-mapping-query.png":::
defender-for-cloud Episode Forty Three https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/episode-forty-three.md
Last updated 01/18/2024
## Recommended resources -- Learn more about [enabling permissions management in Defender for Cloud](enable-permissions-management.md)
+- Learn more about [enabling permissions management in Defender for Cloud](enable-permissions-management.md).
- Learn more about [Microsoft Security](https://msft.it/6002T9HQY). - Subscribe to [Microsoft Security on YouTube](https://www.youtube.com/playlist?list=PL3ZTgFEc7LysiX4PfHhdJPR7S8mGO14YS).
defender-for-cloud Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/release-notes.md
See the [list of security recommendations](recommendations-reference.md).
| December 14 | [Public preview of Windows support for Containers Vulnerability Assessment powered by Microsoft Defender Vulnerability Management](#public-preview-of-windows-support-for-containers-vulnerability-assessment-powered-by-microsoft-defender-vulnerability-management) | | December 13 | [Retirement of AWS container vulnerability assessment powered by Trivy](#retirement-of-aws-container-vulnerability-assessment-powered-by-trivy) | | December 13 | [Agentless container posture for AWS in Defender for Containers and Defender CSPM (Preview)](#agentless-container-posture-for-aws-in-defender-for-containers-and-defender-cspm-preview) |
-| December 13 | [Deny effect - replacing deprecated policies](#deny-effectreplacing-deprecated-policies) |
| December 13 | [General availability (GA) support for PostgreSQL Flexible Server in Defender for open-source relational databases plan](#general-availability-support-for-postgresql-flexible-server-in-defender-for-open-source-relational-databases-plan) | | December 12 | [Container vulnerability assessment powered by Microsoft Defender Vulnerability Management now supports Google Distroless](#container-vulnerability-assessment-powered-by-microsoft-defender-vulnerability-management-now-supports-google-distroless) | | December 4 | [Defender for Storage alert released for preview: malicious blob was downloaded from a storage account](#defender-for-storage-alert-released-for-preview-malicious-blob-was-downloaded-from-a-storage-account) |
December 13, 2023
The new Agentless container posture (Preview) capabilities are available for AWS. For more information, see [Agentless container posture in Defender CSPM](concept-agentless-containers.md) and [Agentless capabilities in Defender for Containers](defender-for-containers-introduction.md#agentless-capabilities).
-### Deny effect - replacing deprecated policies
-
-December 13, 2023
-
-The [Deny effect](manage-mcsb.md#deny-and-enforce-recommendations) is used to prevent deployment of resources that don't comply with the [Microsoft Cloud Security Benchmark (MCSB) standard](concept-regulatory-compliance.md). A change in the policy's effects requires the deprecation of the current versions of the policy.
-
-To make sure you can still use the Deny effect, you must delete the old policies and assign the new policies in their place.
-
-**Deprecated policies**:
-
-- Function apps should use the latest TLS version
-- App Service apps should have local authentication methods disabled for FTP deployments
-- Function app slots should use the latest TLS version
-- App Service app slots should have local authentication methods disabled for FTP deployments
-- App Service apps should use the latest TLS version
-- App Service apps should have local authentication methods disabled for SCM site deployments
-- App Service app slots should use the latest TLS version
-- App Service app slots should have local authentication methods disabled for SCM site deployments
-
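The removed note above describes the remediation in prose only: delete the assignment that still references a deprecated policy version, then assign the replacement. A minimal Python sketch of that delete-and-reassign pattern, reusing the same `PolicyClient` calls that appear in the governance quickstarts later in this digest; the scope, assignment names, and definition ID below are placeholders, not values from the release note:

```python
# Minimal sketch (illustrative only): swap an assignment of a deprecated policy
# version for one that targets the replacement definition.
# All IDs and names are placeholders.
from azure.identity import AzureCliCredential
from azure.mgmt.resource.policy import PolicyClient
from azure.mgmt.resource.policy.models import PolicyAssignment

subscription_id = "{subscriptionId}"
scope = f"/subscriptions/{subscription_id}"

client = PolicyClient(AzureCliCredential(), subscription_id)

# Delete the assignment that still points at the deprecated policy.
client.policy_assignments.delete(scope, "{oldAssignmentName}")

# Assign the replacement definition in its place.
client.policy_assignments.create(
    scope,
    "{newAssignmentName}",
    PolicyAssignment(
        display_name="Replacement assignment",
        policy_definition_id="{newPolicyDefinitionId}",
    ),
)
```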
-Learn how to [Enable and configure at scale with an Azure built-in policy](defender-for-storage-policy-enablement.md).
-
-Check out [Azure Policy built-in definitions for Microsoft Defender for Cloud](policy-reference.md).
- ### General availability support for PostgreSQL Flexible Server in Defender for open-source relational databases plan December 13, 2023
dns Dns Alerts Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/dns-alerts-metrics.md
Title: Metrics and alerts - Azure DNS description: With this learning path, get started with Azure DNS metrics and alerts. Last updated 11/30/2023
dns Dns For Azure Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/dns-for-azure-services.md
Title: Use Azure DNS with other Azure services description: In this learning path, get started on how to use Azure DNS to resolve names for other Azure services
tags: azure dns
ms.assetid: e9b5eb94-7984-4640-9930-564bb9e82b78 Last updated 11/30/2023
dns Dns Operations Dnszones Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/dns-operations-dnszones-cli.md
Title: Manage DNS zones in Azure DNS - Azure CLI | Microsoft Docs description: You can manage DNS zones using Azure CLI. This article shows how to update, delete, and create DNS zones on Azure DNS. ms.devlang: azurecli Last updated 11/30/2023
dns Dns Operations Dnszones Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/dns-operations-dnszones-portal.md
Title: Manage DNS zones in Azure DNS - Azure portal | Microsoft Docs description: You can manage DNS zones using the Azure portal. This article describes how to update, delete, and create DNS zones on Azure DNS Last updated 11/30/2023
dns Dns Operations Dnszones https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/dns-operations-dnszones.md
Title: Manage DNS zones in Azure DNS - PowerShell | Microsoft Docs description: You can manage DNS zones using Azure PowerShell. This article describes how to update, delete, and create DNS zones on Azure DNS Last updated 11/30/2023
dns Dns Operations Recordsets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/dns-operations-recordsets.md
Title: Manage DNS records in Azure DNS using Azure PowerShell | Microsoft Docs description: Managing DNS record sets and records on Azure DNS when hosting your domain on Azure DNS. All PowerShell commands for operations on record sets and records. Last updated 11/30/2023
dns Dns Reverse Dns For Azure Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/dns-reverse-dns-for-azure-services.md
Title: Reverse DNS for Azure services - Azure DNS description: With this learning path, get started configuring reverse DNS lookups for services hosted in Azure. Last updated 01/10/2024
dns Dns Reverse Dns Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/dns-reverse-dns-overview.md
Title: Overview of reverse DNS in Azure - Azure DNS description: In this learning path, get started learning how reverse DNS works and how it can be used in Azure- Last updated 04/27/2023
dns Dns Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/dns-sdk.md
Title: Create DNS zones and record sets using the .NET SDK
description: In this learning path, get started creating DNS zones and record sets in Azure DNS by using the .NET SDK. ms.assetid: eed99b87-f4d4-4fbf-a926-263f7e30b884 ms.devlang: csharp Last updated 11/30/2023
event-grid Powershell Create Custom Topic https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/scripts/powershell-create-custom-topic.md
Title: Azure PowerShell script sample - Create custom topic | Microsoft Docs description: This article provides a sample Azure PowerShell script that shows how to create an Event Grid custom topic. ms.devlang: powershell - Last updated 09/15/2021
event-hubs Exceptions Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/exceptions-dotnet.md
Title: Azure Event Hubs - .NET exceptions description: This article provides a list of Azure Event Hubs .NET messaging exceptions and suggested actions.- ms.devlang: csharp - Last updated 09/23/2021
event-hubs Transport Layer Security Audit Minimum Version https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/transport-layer-security-audit-minimum-version.md
Title: Use Azure Policy to audit for compliance of minimum TLS version for an Azure Event Hubs namespace description: Configure Azure Policy to audit compliance of Azure Event Hubs for using a minimum version of Transport Layer Security (TLS).-
event-hubs Transport Layer Security Configure Client Version https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/transport-layer-security-configure-client-version.md
Title: Configure Transport Layer Security (TLS) for an Event Hubs client application description: Configure a client application to communicate with Azure Event Hubs using a minimum version of Transport Layer Security (TLS).-
event-hubs Transport Layer Security Configure Minimum Version https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/transport-layer-security-configure-minimum-version.md
Title: Configure the minimum TLS version for an Event Hubs namespace description: Configure an Azure Event Hubs namespace to use a minimum version of Transport Layer Security (TLS).-
event-hubs Transport Layer Security Enforce Minimum Version https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/transport-layer-security-enforce-minimum-version.md
Title: Enforce a minimum required version of Transport Layer Security (TLS) for requests to an Event Hubs namespace description: Configure an Event Hubs namespace to require a minimum version of Transport Layer Security (TLS) for clients making requests against Azure Event Hubs.- - Last updated 04/25/2022
expressroute Expressroute Howto Coexist Classic https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-howto-coexist-classic.md
Title: 'Configure ExpressRoute and S2S VPN coexisting connections: classic' description: This article walks you through configuring ExpressRoute and a Site-to-Site VPN connection that can coexist for the classic deployment model.
expressroute Using Expressroute For Microsoft365 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/using-expressroute-for-microsoft365.md
Title: 'Using ExpressRoute for Microsoft 365 Services | Microsoft Docs' description: This document objectively discusses using an ExpressRoute circuit for Microsoft 365 SaaS services.
frontdoor How To Add Custom Domain https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/standard-premium/how-to-add-custom-domain.md
Title: 'How to add a custom domain - Azure Front Door' description: In this article, you learn how to onboard a custom domain to Azure Front Door profile using the Azure portal.
governance Assign Policy Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/assign-policy-dotnet.md
- Title: "Quickstart: New policy assignment with .NET Core"
-description: In this quickstart, you use .NET Core to create an Azure Policy assignment to identify non-compliant resources.
Previously updated : 08/17/2021---
-# Quickstart: Create a policy assignment to identify non-compliant resources with .NET Core
-
-The first step in understanding compliance in Azure is to identify the status of your resources. In
-this quickstart, you create a policy assignment to identify virtual machines that aren't using
-managed disks. When complete, you'll identify virtual machines that are _non-compliant_.
-
-The .NET Core library is used to manage Azure resources. This guide explains how to use the .NET
-Core library for Azure Policy to create a policy assignment.
-
-## Prerequisites
--- An Azure subscription. If you don't have an Azure subscription, create a
- [free](https://azure.microsoft.com/free/) account before you begin.
-- An Azure service principal, including the _clientId_ and _clientSecret_. If you don't have a
- service principal for use with Azure Policy or want to create a new one, see
- [Azure management libraries for .NET authentication](/dotnet/azure/sdk/authentication#mgmt-auth).
- Skip the step to install the .NET Core packages as we'll do that in the next steps.
-
-## Create the Azure Policy project
-
-To enable .NET Core to manage Azure Policy, create a new console application and install the
-required packages.
-
-1. Check that the latest .NET Core is installed (at least **3.1.8**). If it isn't yet installed,
- download it at [dotnet.microsoft.com](https://dotnet.microsoft.com/download/dotnet-core).
-
-1. Initialize a new .NET Core console application named "policyAssignment":
-
- ```dotnetcli
- dotnet new console --name "policyAssignment"
- ```
-
-1. Change directories into the new project folder and install the required packages for Azure
- Policy:
-
- ```dotnetcli
- # Add the Azure Policy package for .NET Core
- dotnet add package Microsoft.Azure.Management.ResourceManager --version 3.10.0-preview
-
- # Add the Azure app auth package for .NET Core
- dotnet add package Microsoft.Azure.Services.AppAuthentication --version 1.5.0
- ```
-
-1. Replace the default `program.cs` with the following code and save the updated file:
-
- ```csharp
- using System;
- using System.Collections.Generic;
- using System.Threading.Tasks;
- using Microsoft.IdentityModel.Clients.ActiveDirectory;
- using Microsoft.Rest;
- using Microsoft.Azure.Management.ResourceManager;
- using Microsoft.Azure.Management.ResourceManager.Models;
-
- namespace policyAssignment
- {
- class Program
- {
- static async Task Main(string[] args)
- {
- string strTenant = args[0];
- string strClientId = args[1];
- string strClientSecret = args[2];
- string strSubscriptionId = args[3];
- string strName = args[4];
- string strDisplayName = args[5];
- string strPolicyDefID = args[6];
- string strDescription = args[7];
- string strScope = args[8];
-
- var authContext = new AuthenticationContext($"https://login.microsoftonline.com/{strTenant}");
- var authResult = await authContext.AcquireTokenAsync(
- "https://management.core.windows.net",
- new ClientCredential(strClientId, strClientSecret));
-
- using (var client = new PolicyClient(new TokenCredentials(authResult.AccessToken)))
- {
- var policyAssignment = new PolicyAssignment
- {
- DisplayName = strDisplayName,
- PolicyDefinitionId = strPolicyDefID,
- Description = strDescription
- };
- var response = await client.PolicyAssignments.CreateAsync(strScope, strName, policyAssignment);
- }
- }
- }
- }
- ```
-
-1. Build and publish the `policyAssignment` console application:
-
- ```dotnetcli
- dotnet build
- dotnet publish -o {run-folder}
- ```
-
-## Create a policy assignment
-
-In this quickstart, you create a policy assignment and assign the **Audit VMs that do not use
-managed disks** (`06a78e20-9358-41c9-923c-fb736d382a4d`) definition. This policy definition
-identifies resources that aren't compliant to the conditions set in the policy definition.
-
-1. Change directories to the `{run-folder}` you defined with the previous `dotnet publish` command.
-
-1. Enter the following command in the terminal:
-
- ```powershell
- policyAssignment.exe `
- "{tenantId}" `
- "{clientId}" `
- "{clientSecret}" `
- "{subscriptionId}" `
- "audit-vm-manageddisks" `
- "Audit VMs without managed disks Assignment" `
- "/providers/Microsoft.Authorization/policyDefinitions/06a78e20-9358-41c9-923c-fb736d382a4d" `
- "Shows all virtual machines not using managed disks" `
- "{scope}"
- ```
-
-The preceding commands use the following information:
-- `{tenantId}` - Replace with your tenant ID
-- `{clientId}` - Replace with the client ID of your service principal
-- `{clientSecret}` - Replace with the client secret of your service principal
-- `{subscriptionId}` - Replace with your subscription ID
-- **name** - The unique name for the policy assignment object. The example above uses
- _audit-vm-manageddisks_.
-- **displayName** - Display name for the policy assignment. In this case, you're using _Audit VMs
- without managed disks Assignment_.
-- **policyDefID** - The policy definition path, based on which you're using to create the
- assignment. In this case, it's the ID of policy definition _Audit VMs that do not use managed
- disks_.
-- **description** - A deeper explanation of what the policy does or why it's assigned to this scope.
-- **scope** - A scope determines what resources or grouping of resources the policy assignment gets
- enforced on. It could range from a management group to an individual resource. Be sure to replace
- `{scope}` with one of the following patterns:
- - Management group: `/providers/Microsoft.Management/managementGroups/{managementGroup}`
- - Subscription: `/subscriptions/{subscriptionId}`
- - Resource group: `/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}`
- - Resource: `/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/{resourceProviderNamespace}/[{parentResourcePath}/]`
-
-You're now ready to identify non-compliant resources to understand the compliance state of your
-environment.
-
-## Identify non-compliant resources
-
-Now that your policy assignment is created, you can identify resources that aren't compliant.
-
-1. Initialize a new .NET Core console application named "policyCompliance":
-
- ```dotnetcli
- dotnet new console --name "policyCompliance"
- ```
-
-1. Change directories into the new project folder and install the required packages for Azure
- Policy:
-
- ```dotnetcli
- # Add the Azure Policy package for .NET Core
- dotnet add package Microsoft.Azure.Management.PolicyInsights --version 3.1.0
-
- # Add the Azure app auth package for .NET Core
- dotnet add package Microsoft.Azure.Services.AppAuthentication --version 1.5.0
- ```
-
-1. Replace the default `program.cs` with the following code and save the updated file:
-
- ```csharp
- using System;
- using System.Collections.Generic;
- using System.Threading.Tasks;
- using Microsoft.IdentityModel.Clients.ActiveDirectory;
- using Microsoft.Rest;
- using Microsoft.Azure.Management.PolicyInsights;
- using Microsoft.Azure.Management.PolicyInsights.Models;
-
- namespace policyAssignment
- {
- class Program
- {
- static async Task Main(string[] args)
- {
- string strTenant = args[0];
- string strClientId = args[1];
- string strClientSecret = args[2];
- string strSubscriptionId = args[3];
- string strName = args[4];
-
- var authContext = new AuthenticationContext($"https://login.microsoftonline.com/{strTenant}");
- var authResult = await authContext.AcquireTokenAsync(
- "https://management.core.windows.net",
- new ClientCredential(strClientId, strClientSecret));
-
- using (var client = new PolicyInsightsClient(new TokenCredentials(authResult.AccessToken)))
- {
- var policyQueryOptions = new QueryOptions
- {
- Filter = $"IsCompliant eq false and PolicyAssignmentId eq '{strName}'",
- Apply = "groupby(ResourceId)"
- };
-
- var response = await client.PolicyStates.ListQueryResultsForSubscriptionAsync(
- "latest", strSubscriptionId, policyQueryOptions);
- Console.WriteLine(response.Odatacount);
- }
- }
- }
- }
- ```
-
-1. Build and publish the `policyCompliance` console application:
-
- ```dotnetcli
- dotnet build
- dotnet publish -o {run-folder}
- ```
-
-1. Change directories to the `{run-folder}` you defined with the previous `dotnet publish` command.
-
-1. Enter the following command in the terminal:
-
- ```powershell
- policyCompliance.exe `
- "{tenantId}" `
- "{clientId}" `
- "{clientSecret}" `
- "{subscriptionId}" `
- "audit-vm-manageddisks"
- ```
-
-The preceding commands use the following information:
-- `{tenantId}` - Replace with your tenant ID
-- `{clientId}` - Replace with the client ID of your service principal
-- `{clientSecret}` - Replace with the client secret of your service principal
-- `{subscriptionId}` - Replace with your subscription ID
-- **name** - The unique name for the policy assignment object. The example above uses
- _audit-vm-manageddisks_.
-
-The results in `response` match what you see in the **Resource compliance** tab of a policy
-assignment in the Azure portal view.
-
-## Clean up resources
--- Delete the policy assignment _Audit VMs without managed disks Assignment_ through the portal. The
- policy definition is a built-in, so there's no definition to remove.
--- If you wish to remove the .NET Core console applications and installed packages, delete the
- `policyAssignment` and `policyCompliance` project folders.
-
-## Next steps
-
-In this quickstart, you assigned a policy definition to identify non-compliant resources in your
-Azure environment.
-
-To learn more about assigning policy definitions to validate that new resources are compliant,
-continue to the tutorial for:
-
-> [!div class="nextstepaction"]
-> [Creating and managing policies](./tutorials/create-and-manage.md)
governance Assign Policy Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/assign-policy-javascript.md
- Title: "Quickstart: New policy assignment with JavaScript"
-description: In this quickstart, you use JavaScript to create an Azure Policy assignment to identify non-compliant resources.
Previously updated : 08/17/2021---
-# Quickstart: Create a policy assignment to identify non-compliant resources using JavaScript
-
-The first step in understanding compliance in Azure is to identify the status of your resources. In
-this quickstart, you create a policy assignment to identify virtual machines that aren't using
-managed disks. When complete, you'll identify virtual machines that are _non-compliant_.
-
-The JavaScript library is used to manage Azure resources from the command line or in scripts. This
-guide explains how to use JavaScript library to create a policy assignment.
-
-## Prerequisites
--- **Azure subscription**: If you don't have an Azure subscription, create a
- [free](https://azure.microsoft.com/free/) account before you begin.
--- **Node.js**: [Node.js](https://nodejs.org/) version 12 or higher is required.--
-## Add the Policy libraries
-
-To enable JavaScript to work with Azure Policy, the libraries must be added. These libraries work
-wherever JavaScript can be used, including [bash on Windows 10](/windows/wsl/install-win10).
-
-1. Set up a new Node.js project by running the following command.
-
- ```bash
- npm init -y
- ```
-
-1. Add a reference to the yargs library.
-
- ```bash
- npm install yargs
- ```
-
-1. Add a reference to the Azure Policy libraries.
-
- ```bash
- # arm-policy is for working with Azure Policy objects such as definitions and assignments
- npm install @azure/arm-policy
-
- # arm-policyinsights is for working with Azure Policy compliance data such as events and states
- npm install @azure/arm-policyinsights
- ```
-
-1. Add a reference to the Azure authentication library.
-
- ```bash
- npm install @azure/identity
- ```
-
- > [!NOTE]
- > Verify in _package.json_ `@azure/arm-policy` is version **5.0.1** or higher,
- > `@azure/arm-policyinsights` is version **5.0.0** or higher, and `@azure/identity` is
- > version **2.0.4** or higher.
-
-## Create a policy assignment
-
-In this quickstart, you create a policy assignment and assign the **Audit VMs that do not use
-managed disks** (`06a78e20-9358-41c9-923c-fb736d382a4d`) definition. This policy definition
-identifies resources that aren't compliant to the conditions set in the policy definition.
-
-1. Create a new file named _policyAssignment.js_ and enter the following code.
-
- ```javascript
- const argv = require("yargs").argv;
- const { DefaultAzureCredential } = require("@azure/identity");
- const { PolicyClient } = require("@azure/arm-policy");
-
- if (argv.subID && argv.name && argv.displayName && argv.policyDefID && argv.scope && argv.description) {
-
- const createAssignment = async () => {
- const credentials = new DefaultAzureCredential();
- const client = new PolicyClient(credentials, argv.subID);
-
- const result = await client.policyAssignments.create(
- argv.scope,
- argv.name,
- {
- displayName: argv.displayName,
- policyDefinitionId: argv.policyDefID,
- description: argv.description
- }
- );
- console.log(result);
- };
-
- createAssignment();
- }
- ```
-
-1. Enter the following command in the terminal:
-
- ```powershell
- node policyAssignment.js `
- --subID "{subscriptionId}" `
- --name "audit-vm-manageddisks" `
- --displayName "Audit VMs without managed disks Assignment" `
- --policyDefID "/providers/Microsoft.Authorization/policyDefinitions/06a78e20-9358-41c9-923c-fb736d382a4d" `
- --description "Shows all virtual machines not using managed disks" `
- --scope "{scope}"
- ```
-
-The preceding commands use the following information:
--- **subID** - The subscription ID for authentication context. Be sure to replace `{subscriptionId}`
- with your subscription.
-- **name** - The unique name for the policy assignment object. The example above uses
- _audit-vm-manageddisks_.
-- **displayName** - Display name for the policy assignment. In this case, you're using _Audit VMs
- without managed disks Assignment_.
-- **policyDefID** - The policy definition path, based on which you're using to create the
- assignment. In this case, it's the ID of policy definition _Audit VMs that do not use managed
- disks_.
-- **description** - A deeper explanation of what the policy does or why it's assigned to this scope.
-- **scope** - A scope determines what resources or grouping of resources the policy assignment gets
- enforced on. It could range from a management group to an individual resource. Be sure to replace
- `{scope}` with one of the following patterns:
- - Management group: `/providers/Microsoft.Management/managementGroups/{managementGroup}`
- - Subscription: `/subscriptions/{subscriptionId}`
- - Resource group: `/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}`
- - Resource: `/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/{resourceProviderNamespace}/[{parentResourcePath}/]`
-
-You're now ready to identify non-compliant resources to understand the compliance state of your
-environment.
-
-## Identify non-compliant resources
-
-Now that your policy assignment is created, you can identify resources that aren't compliant.
-
-1. Create a new file named _policyState.js_ and enter the following code.
-
- ```javascript
- const argv = require("yargs").argv;
- const { DefaultAzureCredential } = require("@azure/identity");
- const { PolicyInsightsClient } = require("@azure/arm-policyinsights");
-
- if (argv.subID && argv.name) {
-
- const getStates = async () => {
-
- const credentials = new DefaultAzureCredential();
- const client = new PolicyInsightsClient(credentials);
- const result = client.policyStates.listQueryResultsForSubscription(
- "latest",
- argv.subID,
- {
- queryOptions: {
- filter: "IsCompliant eq false and PolicyAssignmentId eq '" + argv.name + "'",
- apply: "groupby((ResourceId))"
- }
- }
- );
- console.log(result);
- };
-
- getStates();
- }
- ```
-
-1. Enter the following command in the terminal:
-
- ```bash
- node policyState.js --subID "{subscriptionId}" --name "audit-vm-manageddisks"
- ```
-
-Replace `{subscriptionId}` with the subscription you want to see the compliance results for the
-policy assignment named 'audit-vm-manageddisks' that we created in the previous steps. For a list of
-other scopes and ways to summarize the data, see
-[PolicyStates*](/javascript/api/@azure/arm-policyinsights/) methods.
-
-Your results resemble the following example:
-
-```output
-{
- 'additional_properties': {
- '@odata.nextLink': None
- },
- 'odatacontext': 'https://management.azure.com/subscriptions/{subscriptionId}/providers/Microsoft.PolicyInsights/policyStates/$metadata#latest',
- 'odatacount': 12,
- 'value': [{data}]
-}
-```
-
-The results match what you see in the **Resource compliance** tab of a policy assignment in the
-Azure portal view.
-
-## Clean up resources
--- Delete the policy assignment _Audit VMs without managed disks Assignment_ through the portal. The
- policy definition is a built-in, so there's no definition to remove.
--- If you wish to remove the installed libraries from your application, run the following command.-
- ```bash
- npm uninstall @azure/arm-policy @azure/arm-policyinsights @azure/identity yargs
- ```
-
-## Next steps
-
-In this quickstart, you assigned a policy definition to identify non-compliant resources in your
-Azure environment.
-
-To learn more about assigning policy definitions to validate that new resources are compliant,
-continue to the tutorial for:
-
-> [!div class="nextstepaction"]
-> [Creating and managing policies](./tutorials/create-and-manage.md)
governance Assign Policy Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/assign-policy-python.md
- Title: "Quickstart: New policy assignment with Python"
-description: In this quickstart, you use Python to create an Azure Policy assignment to identify non-compliant resources.
Previously updated : 10/01/2021---
-# Quickstart: Create a policy assignment to identify non-compliant resources using Python
-
-The first step in understanding compliance in Azure is to identify the status of your resources. In
-this quickstart, you create a policy assignment to identify virtual machines that aren't using
-managed disks. When complete, you'll identify virtual machines that are _non-compliant_.
-
-The Python library is used to manage Azure resources from the command line or in scripts. This guide
-explains how to use Python library to create a policy assignment.
-
-## Prerequisites
-
-If you don't have an Azure subscription, create a [free](https://azure.microsoft.com/free/) account
-before you begin.
--
-## Add the Policy library
-
-To enable Python to work with Azure Policy, the library must be added. This library works wherever
-Python can be used, including [bash on Windows 10](/windows/wsl/install-win10) or locally installed.
-
-1. Check that the latest Python is installed (at least **3.8**). If it isn't yet installed, download
- it at [Python.org](https://www.python.org/downloads/).
-
-1. Check that the latest Azure CLI is installed (at least **2.5.1**). If it isn't yet installed, see
- [Install the Azure CLI](/cli/azure/install-azure-cli).
-
- > [!NOTE]
- > Azure CLI is required to enable Python to use the **CLI-based authentication** in the following
- > examples. For information about other options, see
- > [Authenticate using the Azure management libraries for Python](/azure/developer/python/sdk/authentication-overview).
-
-1. Authenticate through Azure CLI.
-
- ```azurecli
- az login
- ```
-
-1. In your Python environment of choice, install the required libraries for Azure Policy:
-
- ```bash
- # Add the Python library for Python
- pip install azure-mgmt-policyinsights
-
- # Add the Resources library for Python
- pip install azure-mgmt-resource
-
- # Add the CLI Core library for Python for authentication (development only!)
- pip install azure-cli-core
-
- # Add the Azure identity library for Python
- pip install azure.identity
- ```
-
- > [!NOTE]
- > If Python is installed for all users, these commands must be run from an elevated console.
-
-1. Validate that the libraries have been installed. `azure-mgmt-policyinsights` should be **0.5.0**
- or higher, `azure-mgmt-resource` should be **9.0.0** or higher, and `azure-cli-core` should be
- **2.5.0** or higher.
-
- ```bash
- # Check each installed library
- pip show azure-mgmt-policyinsights azure-mgmt-resource azure-cli-core azure.identity
- ```
-
-## Create a policy assignment
-
-In this quickstart, you create a policy assignment and assign the **Audit VMs that do not use
-managed disks** (`06a78e20-9358-41c9-923c-fb736d382a4d`) definition. This policy definition
-identifies resources that aren't compliant to the conditions set in the policy definition.
-
-Run the following code to create a new policy assignment:
-
-```python
-# Import specific methods and models from other libraries
-from azure.mgmt.resource.policy import PolicyClient
-from azure.mgmt.resource.policy.models import PolicyAssignment, Identity, UserAssignedIdentitiesValue, PolicyAssignmentUpdate
-from azure.identity import AzureCliCredential
-
-# Set subscription
-subId = "{subId}"
-assignmentLocation = "westus2"
-
-# Get your credentials from Azure CLI (development only!) and get your subscription list
-credential = AzureCliCredential()
-policyClient = PolicyClient(credential, subId)
-
-# Create details for the assignment
-policyAssignmentIdentity = Identity(type="SystemAssigned")
-policyAssignmentDetails = PolicyAssignment(display_name="Audit VMs without managed disks Assignment", policy_definition_id="/providers/Microsoft.Authorization/policyDefinitions/06a78e20-9358-41c9-923c-fb736d382a4d", description="Shows all virtual machines not using managed disks", identity=policyAssignmentIdentity, location=assignmentLocation)
-
-# Create new policy assignment
-policyAssignment = policyClient.policy_assignments.create("{scope}", "audit-vm-manageddisks", policyAssignmentDetails)
-
-# Show results
-print(policyAssignment)
-```
-
-The preceding commands use the following information:
-
-Assignment details:
-- **subId** - Your subscription. Needed for authentication. Replace `{subId}` with your
- subscription.
-- **display_name** - Display name for the policy assignment. In this case, you're using _Audit VMs
- without managed disks Assignment_.
-- **policy_definition_id** - The policy definition path, based on which you're using to create the
- assignment. In this case, it's the ID of policy definition _Audit VMs that do not use managed
- disks_. In this example, the policy definition is a built-in and the path doesn't include
- management group or subscription information.
-- **scope** - A scope determines what resources or grouping of resources the policy assignment gets
- enforced on. It could range from a management group to an individual resource. Be sure to replace
- `{scope}` with one of the following patterns:
- - Management group: `/providers/Microsoft.Management/managementGroups/{managementGroup}`
- - Subscription: `/subscriptions/{subscriptionId}`
- - Resource group: `/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}`
- - Resource: `/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/{resourceProviderNamespace}/[{parentResourcePath}/]`
-- **description** - A deeper explanation of what the policy does or why it's assigned to this scope.-
-Assignment creation:
--- Scope - This scope determines where the policy assignment gets saved. The scope set in the
- assignment details must exist within this scope.
-- Name - The actual name of the assignment. For this example, _audit-vm-manageddisks_ was used.
-- Policy assignment - The Python **PolicyAssignment** object created in the previous step.
-
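As a worked example of the scope patterns listed above, here's a minimal sketch that creates the same assignment at resource-group scope; `{resourceGroupName}` is a placeholder, and the other variables come from the earlier snippet in this quickstart:

```python
# Minimal sketch: the same assignment created at resource-group scope instead.
# "{resourceGroupName}" is a placeholder for an existing resource group.
rg_scope = "/subscriptions/" + subId + "/resourceGroups/{resourceGroupName}"
rg_assignment = policyClient.policy_assignments.create(rg_scope, "audit-vm-manageddisks-rg", policyAssignmentDetails)
print(rg_assignment)
```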
-You're now ready to identify non-compliant resources to understand the compliance state of your
-environment.
-
-## Identify non-compliant resources
-
-Use the following information to identify resources that aren't compliant with the policy assignment
-you created. Run the following code:
-
-```python
-# Import specific methods and models from other libraries
-from azure.mgmt.policyinsights._policy_insights_client import PolicyInsightsClient
-from azure.mgmt.policyinsights.models import QueryOptions
-from azure.identity import AzureCliCredential
-
-# Set subscription
-subId = "{subId}"
-
-# Get your credentials from Azure CLI (development only!) and get your subscription list
-credential = AzureCliCredential()
-policyInsightsClient = PolicyInsightsClient(credential, subId)
-
-# Set the query options
-queryOptions = QueryOptions(filter="IsCompliant eq false and PolicyAssignmentId eq 'audit-vm-manageddisks'",apply="groupby((ResourceId))")
-
-# Fetch 'latest' results for the subscription
-results = policyInsightsClient.policy_states.list_query_results_for_subscription(policy_states_resource="latest", subscription_id=subId, query_options=queryOptions)
-
-# Show results
-print(results)
-```
-
-Replace `{subId}` with the subscription you want to see the compliance results for this policy
-assignment. For a list of other scopes and ways to summarize the data, see
-[Policy State methods](/python/api/azure-mgmt-policyinsights/azure.mgmt.policyinsights.operations.policystatesoperations#methods).
-
-Your results resemble the following example:
-
-```output
-{
- 'additional_properties': {
- '@odata.nextLink': None
- },
- 'odatacontext': 'https://management.azure.com/subscriptions/{subscriptionId}/providers/Microsoft.PolicyInsights/policyStates/$metadata#latest',
- 'odatacount': 12,
- 'value': [{data}]
-}
-```
-
-The results match what you see in the **Resource compliance** tab of a policy assignment in the
-Azure portal view.
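The snippet above prints the whole result object. A minimal follow-on sketch that lists the individual non-compliant resource IDs, assuming the `results` object returned above (older azure-mgmt-policyinsights releases wrap the states in a `value` list, while newer releases return an iterable of `PolicyState` objects directly):

```python
# Minimal sketch: print each non-compliant resource ID from the query results above.
states = getattr(results, "value", results)
for state in states:
    print(state.resource_id)
```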
-
-## Clean up resources
-
-To remove the assignment created, use the following command:
-
-```python
-# Import specific methods and models from other libraries
-from azure.mgmt.resource.policy import PolicyClient
-from azure.identity import AzureCliCredential
-
-# Set subscription
-subId = "{subId}"
-
-# Get your credentials from Azure CLI (development only!) and get your subscription list
-credential = AzureCliCredential()
-policyClient = PolicyClient(credential, subId)
-
-# Delete the policy assignment
-policyAssignment = policyClient.policy_assignments.delete("{scope}", "audit-vm-manageddisks")
-
-# Show results
-print(policyAssignment)
-```
-
-Replace `{subId}` with your subscription and `{scope}` with the same scope you used to create the
-policy assignment.
-
-## Next steps
-
-In this quickstart, you assigned a policy definition to identify non-compliant resources in your
-Azure environment.
-
-To learn more about assigning policy definitions to validate that new resources are compliant,
-continue to the tutorial for:
-
-> [!div class="nextstepaction"]
-> [Creating and managing policies](./tutorials/create-and-manage.md)
governance Work With Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/resource-graph/concepts/work-with-data.md
Here's a sample of a query result with the _ObjectArray_ formatting:
} ```
-Here are some examples of setting **resultFormat** to use the _ObjectArray_ format:
-
-```csharp
-var requestOptions = new QueryRequestOptions( resultFormat: ResultFormat.ObjectArray);
-var request = new QueryRequest(subscriptions, "Resources | limit 1", options: requestOptions);
-```
-
-```python
-request_options = QueryRequestOptions(
- result_format=ResultFormat.object_array
-)
-request = QueryRequest(query="Resources | limit 1", subscriptions=subs_list, options=request_options)
-response = client.resources(request)
-```
- ## Next steps - See the language in use in [Starter queries](../samples/starter.md).
governance Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/resource-graph/overview.md
Title: Overview of Azure Resource Graph description: Understand how the Azure Resource Graph service enables complex querying of resources at scale across subscriptions and tenants. Previously updated : 12/18/2023 Last updated : 01/20/2024
For more information, see
## Running your first query
-Azure Resource Graph Explorer, part of Azure portal, enables running Resource Graph queries directly
-in the Azure portal. Pin the results as dynamic charts to provide real-time dynamic information to
-your portal workflow. For more information, see
-[First query with Azure Resource Graph Explorer](./first-query-portal.md).
+Azure Resource Graph Explorer, part of Azure portal, enables running Resource Graph queries directly in the Azure portal. Pin the results as dynamic charts to provide real-time dynamic information to your portal workflow. For more information, go to [First query with Azure Resource Graph Explorer](./first-query-portal.md).
-Resource Graph supports Azure CLI, Azure PowerShell, Azure SDK for Python, and more. The query is
-structured the same for each language. Learn how to enable Resource Graph with:
+Resource Graph also supports Azure CLI, Azure PowerShell, and REST API. The query is structured the same for each language. Learn how to enable Resource Graph with:
-- [Azure portal and Resource Graph Explorer](./first-query-portal.md)
-- [Azure CLI](./first-query-azurecli.md#add-the-resource-graph-extension)
-- [Azure PowerShell](./first-query-powershell.md#add-the-resource-graph-module)
-- [Python](./first-query-python.md#add-the-resource-graph-library)
+- [Azure CLI](./first-query-azurecli.md)
+- [Azure PowerShell](./first-query-powershell.md)
+- [REST API](./first-query-rest-api.md)
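
As a minimal sketch of that point, here's a one-row query run from Azure CLI. This assumes the `resource-graph` extension is available and that you're signed in; the extension name and the `az graph query` command are standard Azure CLI, not something defined by this article.

```bash
# One-time setup (assumption: the resource-graph extension isn't installed yet)
az extension add --name resource-graph

# Run a simple Kusto-style Resource Graph query
az graph query -q "Resources | project name, type | limit 1"
```
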
## Alerts integration with Log Analytics
hdinsight Hdinsight Business Continuity Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-business-continuity-architecture.md
Title: Azure HDInsight business continuity architectures description: This article discusses the different possible business continuity architectures for HDInsight
-keywords: hadoop high availability
Last updated 06/08/2023
hdinsight Hdinsight High Availability Case Study https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-high-availability-case-study.md
Title: Azure HDInsight highly available solution architecture case study description: This article is a fictional case study of a possible Azure HDInsight highly available solution architecture.
-keywords: hadoop high availability
Last updated 06/08/2023
iot-develop Quickstart Devkit Espressif Esp32 Freertos https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-develop/quickstart-devkit-espressif-esp32-freertos.md
ms.devlang: c Last updated 11/29/2022- #Customer intent: As a device builder, I want to see a working IoT device sample connecting to Azure IoT, sending properties and telemetry, and responding to commands. As a solution builder, I want to use a tool to view the properties, commands, and telemetry an IoT Plug and Play device reports to the IoT hub it connects to.
iot-develop Quickstart Devkit Stm B L475e Freertos https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-develop/quickstart-devkit-stm-b-l475e-freertos.md
ms.devlang: c Last updated 11/29/2022- #Customer intent: As a device builder, I want to see a working IoT device sample connecting to Azure IoT, sending properties and telemetry, and responding to commands. As a solution builder, I want to use a tool to view the properties, commands, and telemetry an IoT Plug and Play device reports to the IoT hub it connects to.
iot-edge How To Deploy At Scale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-deploy-at-scale.md
Title: Deploy modules at scale in Azure portal - Azure IoT Edge description: Use the Azure portal to create automatic deployments for groups of IoT Edge devices
-keywords:
- Last updated 9/22/2022
iot-edge How To Deploy Cli At Scale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-deploy-cli-at-scale.md
Title: Deploy modules at scale using Azure CLI - Azure IoT Edge description: Use the IoT extension for the Azure CLI to create automatic deployments for groups of IoT Edge devices.
-keywords:
- Last updated 10/13/2020
iot-edge How To Deploy Modules Vscode https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-deploy-modules-vscode.md
Title: Deploy modules from Visual Studio Code - Azure IoT Edge description: Use Visual Studio Code with Azure IoT Edge for Visual Studio Code to push an IoT Edge module from your IoT Hub to your IoT Edge device, as configured by a deployment manifest. - Last updated 10/13/2020 -
iot-edge How To Deploy Vscode At Scale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-deploy-vscode-at-scale.md
Title: Deploy modules at scale using Visual Studio Code - Azure IoT Edge description: Use the IoT extension for Visual Studio Code to create automatic deployments for groups of IoT Edge devices.
-keywords:
- Last updated 1/8/2020
iot-edge How To Update Iot Edge https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-update-iot-edge.md
Title: Update IoT Edge version on devices description: How to update IoT Edge devices to run the latest versions of the security subsystem and the IoT Edge runtime
-keywords:
- Last updated 04/03/2023
iot-edge How To Use Create Options https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-use-create-options.md
Title: Write createOptions for modules - Azure IoT Edge | Microsoft Docs description: How to use createOptions in the deployment manifest to configure modules at runtime
-keywords:
- Last updated 04/01/2020
iot-edge Iot Edge For Linux On Windows Benefits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/iot-edge-for-linux-on-windows-benefits.md
Title: Why use Azure IoT Edge for Linux on Windows? | Microsoft Docs description: Benefits - Azure IoT Edge for Linux on Windows
-keywords:
Last updated 04/15/2022
iot-edge Iot Edge For Linux On Windows Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/iot-edge-for-linux-on-windows-security.md
Title: Azure IoT Edge for Linux on Windows security | Microsoft Docs description: Security framework - Azure IoT Edge for Linux on Windows
-keywords:
Last updated 08/03/2022
key-vault Rest Error Codes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/general/rest-error-codes.md
Title: REST API error codes - Azure Key Vault description: These error codes could be returned by an operation on an Azure Key Vault web service.
-keywords:
-
key-vault Service Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/general/service-limits.md
Title: Azure Key Vault Service Limits - Azure Key Vault | Microsoft Docs
description: Learn about service limits for Azure Key Vault, including key transactions and Azure Private Link integration. -
key-vault Policy Grammar https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/managed-hsm/policy-grammar.md
Title: Azure Managed HSM Secure key release policy grammar description: Managed HSM Secure key release policy grammar
-keywords:
-
key-vault Third Party Solutions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/managed-hsm/third-party-solutions.md
Title: Azure Key Vault Managed HSM - Third-party solutions | Microsoft Docs
description: Learn about third-party solutions integrated with Managed HSM. -
load-balancer Ipv6 Configure Template Json https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/basic/ipv6-configure-template-json.md
Title: Deploy an IPv6 dual stack application with Basic Load Balancer in Azure v
description: This article shows how to deploy an IPv6 dual stack application in Azure virtual network using Azure Resource Manager VM templates.
machine-learning Concept Component https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-component.md
Previously updated : 11/04/2022 Last updated : 01/19/2024
A component consists of three parts:
- Interface: input/output specifications (name, type, description, default value, etc.). - Command, Code & Environment: command, code and environment required to run the component. ## Why should I use a component?
machine-learning Concept Deep Learning Vs Machine Learning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-deep-learning-vs-machine-learning.md
Previously updated : 11/04/2022 Last updated : 01/19/2024
machine-learning Concept Soft Delete https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-soft-delete.md
description: Soft delete allows you to recover workspace data after accidental d
-
machine-learning How To Monitor Kubernetes Online Enpoint Inference Server Log https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-monitor-kubernetes-online-enpoint-inference-server-log.md
- Last updated 09/26/2023
migrate Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/policy-reference.md
Last updated 01/02/2024
- # Azure Policy built-in definitions for Azure Migrate
mysql Concepts Audit Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/concepts-audit-logs.md
Title: Audit logs
description: Describes the audit logs available in Azure Database for MySQL - Flexible Server. - Last updated 11/21/2022
mysql Concepts Connect To A Gateway Node https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/concepts-connect-to-a-gateway-node.md
- Last updated 06/20/2022
nat-gateway Nat Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/nat-gateway/nat-metrics.md
The metrics dashboard can be used to better understand the performance and healt
For more information on what each metric is showing you and how to analyze these metrics, see [How to use NAT gateway metrics](#how-to-use-nat-gateway-metrics).
+## More NAT gateway metrics guidance
+
+### What type of metrics are available for NAT gateway?
+
+NAT gateway has [multi-dimensional metrics](/azure/azure-monitor/essentials/data-platform-metrics#multi-dimensional-metrics). Multi-dimensional metrics can be filtered by different dimensions to provide greater insight into the data. For example, the [SNAT connection count](#snat-connection-count) metric can be filtered by Attempted and Failed connections to distinguish between the different types of connections that the NAT gateway makes.
+
+Refer to the dimensions column in the [metrics overview](#metrics-overview) table to see which dimensions are available for each NAT gateway metric.
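
As a hedged sketch of filtering by a dimension from the command line, the following Azure CLI call narrows a SNAT connection metric to failed connections. The resource ID is a placeholder, and the metric name `SNATConnectionCount` and dimension name `ConnectionState` are assumptions based on the description above; confirm the exact names in the metrics overview table before relying on them.

```bash
# Query a NAT gateway metric and filter it by a dimension value.
# Resource ID, metric name, and dimension name are placeholders/assumptions.
az monitor metrics list \
  --resource "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.Network/natGateways/<nat-gateway-name>" \
  --metric "SNATConnectionCount" \
  --filter "ConnectionState eq 'Failed'" \
  --interval PT1H \
  --aggregation Total
```
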
+
+### How to store NAT gateway metrics long-term
+
+All [platform metrics are stored](/azure/azure-monitor/essentials/data-platform-metrics#retention-of-metrics) for 93 days. If you need long-term access to your NAT gateway metrics data, you can retrieve the metrics by using the [metrics REST API](/rest/api/monitor/metrics/list). For more information on how to use the API, see the [Azure monitoring REST API walkthrough](/azure/azure-monitor/essentials/rest-api-walkthrough).
+
+>[!NOTE]
+>Diagnostic Settings [doesn't support the export of multi-dimensional metrics](/azure/azure-monitor/reference/supported-metrics/metrics-index#exporting-platform-metrics-to-other-locations) to another location, such as Azure Storage and Log Analytics.
+>
+>To retrieve NAT gateway metrics, use the metrics REST API.
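
As a sketch of that approach, you could call the metrics REST API through `az rest` and keep the response yourself, for example in a file or a storage account. The resource ID, metric name, timespan, and API version below are placeholders and assumptions, not values taken from this article.

```bash
# Retrieve NAT gateway metric values from the Azure Monitor metrics REST API
# and keep the raw JSON for long-term storage. All identifiers are placeholders.
resourceId="/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.Network/natGateways/<nat-gateway-name>"

az rest --method get \
  --url "https://management.azure.com${resourceId}/providers/Microsoft.Insights/metrics?metricnames=SNATConnectionCount&timespan=2024-01-01T00:00:00Z/2024-01-02T00:00:00Z&api-version=2018-01-01" \
  > nat-gateway-metrics.json
```
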
+
+### How to interpret metrics charts
+
+Refer to [troubleshooting metrics charts](/azure/azure-monitor/essentials/metrics-troubleshoot) if you run into issues with creating, customizing or interpreting charts in Azure metrics explorer.
+ ## Next steps * Learn about [Azure NAT Gateway](nat-overview.md)
network-function-manager Delete Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-function-manager/delete-functions.md
Last updated 05/10/2022 - # Tutorial: Delete network functions on Azure Stack Edge
networking Azure For Network Engineers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/networking/azure-for-network-engineers.md
Title: 'Azure for Network Engineers' description: This page explains to traditional network engineers how networks work in Azure.
networking Check Usage Against Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/networking/check-usage-against-limits.md
Title: Check Azure resource usage against limits | Microsoft Docs description: Learn how to check your Azure resource usage against Azure subscription limits. tags: azure-resource-manager Last updated 06/05/2018
networking Disaster Recovery Dns Traffic Manager https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/networking/disaster-recovery-dns-traffic-manager.md
Title: 'Disaster recovery using Azure DNS and Traffic Manager | Microsoft Docs' description: Overview of the disaster recovery solutions using Azure DNS and Traffic Manager. tags: azure-resource-manager- Last updated 11/30/2023
networking Traffic Manager Cli Websites High Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/networking/scripts/traffic-manager-cli-websites-high-availability.md
ms.devlang: azurecli Last updated 04/27/2023
networking Traffic Manager Powershell Websites High Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/networking/scripts/traffic-manager-powershell-websites-high-availability.md
ms.devlang: powershell Last updated 04/27/2023
notification-hubs Notification Hubs Aspnet Cross Platform Notification https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/notification-hubs/notification-hubs-aspnet-cross-platform-notification.md
Title: Send cross-platform notifications to users with Azure Notification Hubs (ASP.NET) description: Learn how to use Notification Hubs templates to send, in a single request, a platform-agnostic notification that targets all platforms. editor: thsomasu
notification-hubs Notification Hubs Deploy And Manage Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/notification-hubs/notification-hubs-deploy-and-manage-powershell.md
Title: Deploy and manage Notification Hubs using PowerShell description: How to create and manage Notification Hubs using PowerShell for Automation editor: jwargo
notification-hubs Notification Hubs Enterprise Push Notification Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/notification-hubs/notification-hubs-enterprise-push-notification-architecture.md
Title: Notification Hubs enterprise push architecture description: Learn about using Azure Notification Hubs in an enterprise environment editor: jwargo
notification-hubs Notification Hubs Java Push Notification Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/notification-hubs/notification-hubs-java-push-notification-tutorial.md
Title: How to use Azure Notification Hubs with Java description: Learn how to use Azure Notification Hubs from a Java back-end.
notification-hubs Notification Hubs Nodejs Push Notification Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/notification-hubs/notification-hubs-nodejs-push-notification-tutorial.md
ms.devlang: javascript Last updated 08/23/2021
notification-hubs Notification Hubs Php Push Notification Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/notification-hubs/notification-hubs-php-push-notification-tutorial.md
Title: How to use Azure Notification Hubs with PHP description: Learn how to use Azure Notification Hubs from a PHP back-end.
notification-hubs Notification Hubs Push Notification Fixer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/notification-hubs/notification-hubs-push-notification-fixer.md
ms.devlang: csharp Last updated 06/08/2023
notification-hubs Notification Hubs Push Notification Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/notification-hubs/notification-hubs-push-notification-overview.md
editor: tjsomasundaram ms.assetid: fcfb0ce8-0e19-4fa8-b777-6b9f9cdda178
notification-hubs Notification Hubs Push Notification Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/notification-hubs/notification-hubs-push-notification-security.md
editor: jwargo mobile-multiple
notification-hubs Notification Hubs Python Push Notification Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/notification-hubs/notification-hubs-python-push-notification-tutorial.md
Title: How to use Notification Hubs with Python description: Learn how to use Azure Notification Hubs from a Python application.
notification-hubs Notification Hubs Sdks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/notification-hubs/notification-hubs-sdks.md
Title: Azure Notification Hubs SDKs description: A list of the available Azure Notification Hubs SDKs editor: jwargo
notification-hubs Samples Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/notification-hubs/samples-powershell.md
editor: jwargo
Last updated 01/04/2019
notification-hubs Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/notification-hubs/samples.md
Title: Azure Notification Hubs Samples description: A list of available Azure Notification Hubs samples.
notification-hubs Create Notification Hub Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/notification-hubs/scripts/create-notification-hub-powershell.md
editor: sethmanheim
Last updated 01/14/2020
postgresql Concepts Audit https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-audit.md
- Last updated 11/30/2021
postgresql Concepts Maintenance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-maintenance.md
- Last updated 1/4/2024
postgresql Concepts Networking Private https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-networking-private.md
- Last updated 11/30/2021
postgresql How To Maintenance Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-maintenance-portal.md
- Last updated 11/30/2021
reliability Migrate Load Balancer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/migrate-load-balancer.md
Last updated 05/09/2022 - CustomerIntent: As a cloud architect/engineer, I need general guidance on migrating load balancers to using availability zones.
resource-mover About Move Process https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/resource-mover/about-move-process.md
Title: About the move process in Azure Resource Mover description: Learn about the process for moving resources across regions with Azure Resource Mover - Last updated 02/02/2023
resource-mover Common Questions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/resource-mover/common-questions.md
Title: Common questions about Azure Resource Mover? description: Get answers to common questions about Azure Resource Mover -
resource-mover Manage Resources Created Move Process https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/resource-mover/manage-resources-created-move-process.md
Title: Manage resources that are created during the VM move process in Azure Resource Mover description: Learn how to manage resources that are created during the VM move process in Azure Resource Mover-
resource-mover Modify Target Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/resource-mover/modify-target-settings.md
Title: Modify destination settings when moving Azure VMs between regions with Azure Resource Mover description: Learn how to modify destination settings when moving Azure VMs between regions with Azure Resource Mover.-
resource-mover Move Region Availability Zone https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/resource-mover/move-region-availability-zone.md
Title: Move Azure VMs to availability zones in another region with Azure Resource Mover description: Learn how to move Azure VMs to availability zones with Azure Resource Mover.-
resource-mover Move Region Within Resource Group https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/resource-mover/move-region-within-resource-group.md
Title: Move resources to another region with Azure Resource Mover description: Learn how to move resources within a resource group to another region with Azure Resource Mover.-
resource-mover Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/resource-mover/overview.md
Title: What is Azure Resource Mover? description: Learn about Azure Resource Mover - Last updated 02/02/2023
resource-mover Remove Move Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/resource-mover/remove-move-resources.md
Title: Remove resources from a move collection in Azure Resource Mover description: Learn how to remove resources from a move collection in Azure Resource Mover.-
resource-mover Select Move Tool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/resource-mover/select-move-tool.md
Title: Choose a tool for moving Azure resources across regions description: Review options and tools for moving Azure resources across regions - Last updated 12/23/2022
resource-mover Support Matrix Move Region Azure Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/resource-mover/support-matrix-move-region-azure-vm.md
Title: Support matrix for moving Azure VMs to another region with Azure Resource Mover description: Review support for moving Azure VMs between regions with Azure Resource Mover. - Last updated 03/21/2023
resource-mover Support Matrix Move Region Sql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/resource-mover/support-matrix-move-region-sql.md
Title: Support for moving Azure SQL resources between regions with Azure Resource Mover. description: Review support for moving Azure SQL resources between regions with Azure Resource Mover. - Last updated 03/21/2023
resource-mover Tutorial Move Region Encrypted Virtual Machines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/resource-mover/tutorial-move-region-encrypted-virtual-machines.md
Title: Move encrypted Azure VMs across regions by using Azure Resource Mover description: Learn how to move encrypted Azure VMs to another region by using Azure Resource Mover.-
resource-mover Tutorial Move Region Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/resource-mover/tutorial-move-region-powershell.md
Title: Move resources across regions using PowerShell in Azure Resource Mover description: Learn how to move resources across regions using PowerShell in Azure Resource Mover.-
resource-mover Tutorial Move Region Sql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/resource-mover/tutorial-move-region-sql.md
Title: Move Azure SQL resources between regions with Azure Resource Mover description: Learn how to move Azure SQL resources to another region with Azure Resource Mover - Last updated 02/10/2023
resource-mover Tutorial Move Region Virtual Machines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/resource-mover/tutorial-move-region-virtual-machines.md
Title: Move Azure VMs across regions with Azure Resource Mover description: Learn how to move Azure VMs to another region with Azure Resource Mover-
role-based-access-control Classic Administrators https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/classic-administrators.md
Title: Azure classic subscription administrators description: Describes how to add or change the Azure Co-Administrator and Service Administrator roles, and how to view the Account Administrator. Last updated 06/07/2023
role-based-access-control Custom Roles Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/custom-roles-cli.md
Title: Create or update Azure custom roles using Azure CLI - Azure RBAC description: Learn how to list, create, update, or delete Azure custom roles using Azure CLI and Azure role-based access control (Azure RBAC). ms.assetid: 3483ee01-8177-49e7-b337-4d5cb14f5e32 Last updated 12/01/2023
role-based-access-control Custom Roles Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/custom-roles-portal.md
Title: Create or update Azure custom roles using the Azure portal - Azure RBAC description: Learn how to create Azure custom roles using the Azure portal and Azure role-based access control (Azure RBAC). This includes how to list, create, update, and delete custom roles.
role-based-access-control Custom Roles Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/custom-roles-powershell.md
Title: Create or update Azure custom roles using Azure PowerShell - Azure RBAC description: Learn how to list, create, update, or delete custom roles using Azure PowerShell and Azure role-based access control (Azure RBAC). ms.assetid: 9e225dba-9044-4b13-b573-2f30d77925a9 Last updated 12/01/2023
role-based-access-control Custom Roles Rest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/custom-roles-rest.md
Title: Create or update Azure custom roles using the REST API - Azure RBAC description: Learn how to list, create, update, or delete Azure custom roles using the REST API and Azure role-based access control (Azure RBAC). - ms.assetid: 1f90228a-7aac-4ea7-ad82-b57d222ab128
role-based-access-control Deny Assignments Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/deny-assignments-portal.md
Title: List Azure deny assignments using the Azure portal - Azure RBAC description: Learn how to list the users, groups, service principals, and managed identities that have been denied access to specific Azure resource actions at particular scopes using the Azure portal and Azure role-based access control (Azure RBAC). ms.assetid: 8078f366-a2c4-4fbb-a44b-fc39fd89df81 Last updated 01/24/2022
role-based-access-control Deny Assignments Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/deny-assignments-powershell.md
Title: List Azure deny assignments using Azure PowerShell - Azure RBAC description: Learn how to list the users, groups, service principals, and managed identities that have been denied access to specific Azure resource actions at particular scopes using Azure PowerShell and Azure role-based access control (Azure RBAC). Last updated 01/24/2022
role-based-access-control Deny Assignments Rest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/deny-assignments-rest.md
Title: List Azure deny assignments using the REST API - Azure RBAC description: Learn how to list Azure deny assignments for users, groups, and applications using the REST API and Azure role-based access control (Azure RBAC). - rest-api
role-based-access-control Deny Assignments https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/deny-assignments.md
Title: Understand Azure deny assignments - Azure RBAC description: Learn about Azure deny assignments in Azure role-based access control (Azure RBAC). - Last updated 03/25/2022 - # Understand Azure deny assignments
role-based-access-control Rbac And Directory Admin Roles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/rbac-and-directory-admin-roles.md
Title: "Azure roles, Microsoft Entra roles, and classic subscription administrator roles" description: Describes the different roles in Azure - Azure roles, and Microsoft Entra roles, and classic subscription administrator roles ms.assetid: 174f1706-b959-4230-9a75-bf651227ebf6 Last updated 12/01/2023
role-based-access-control Role Assignments List Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/role-assignments-list-cli.md
Title: List Azure role assignments using Azure CLI - Azure RBAC description: Learn how to determine what resources users, groups, service principals, or managed identities have access to using Azure CLI and Azure role-based access control (Azure RBAC). ms.assetid: 3483ee01-8177-49e7-b337-4d5cb14f5e32 Last updated 01/02/2024
role-based-access-control Role Assignments List Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/role-assignments-list-powershell.md
Title: List Azure role assignments using Azure PowerShell - Azure RBAC description: Learn how to determine what resources users, groups, service principals, or managed identities have access to using Azure PowerShell and Azure role-based access control (Azure RBAC). ms.assetid: 9e225dba-9044-4b13-b573-2f30d77925a9 Last updated 07/28/2020
role-based-access-control Role Assignments List Rest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/role-assignments-list-rest.md
Title: List Azure role assignments using the REST API - Azure RBAC description: Learn how to determine what resources users, groups, service principals, or managed identities have access to using the REST API and Azure role-based access control (Azure RBAC).
role-based-access-control Role Assignments Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/role-assignments-template.md
Title: Assign Azure roles using Azure Resource Manager templates - Azure RBAC description: Learn how to grant access to Azure resources for users, groups, service principals, or managed identities using Azure Resource Manager templates and Azure role-based access control (Azure RBAC).
role-based-access-control Role Assignments https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/role-assignments.md
Title: Understand Azure role assignments - Azure RBAC description: Learn about Azure role assignments in Azure role-based access control (Azure RBAC) for fine-grained access management of Azure resources.
role-based-access-control Role Definitions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/role-definitions.md
Title: Understand Azure role definitions - Azure RBAC description: Learn about Azure role definitions in Azure role-based access control (Azure RBAC) for fine-grained access management of Azure resources.
role-based-access-control Troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/troubleshooting.md
ms.assetid: df42cca2-02d6-4f3c-9d56-260e1eb7dc44 Last updated 12/01/2023
role-based-access-control Tutorial Custom Role Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/tutorial-custom-role-cli.md
Title: "Tutorial: Create an Azure custom role with Azure CLI - Azure RBAC" description: Get started creating an Azure custom role using Azure CLI and Azure role-based access control (Azure RBAC) in this tutorial. - Last updated 12/01/2023
sap Run Ansible https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/automation/run-ansible.md
When you use [SAP Deployment Automation Framework](deployment-framework.md), you
| `playbook_04_00_00_hana_db_install` | SAP HANA database installation | | `playbook_05_00_00_sap_scs_install.yaml` | SAP central services (SCS) installation | | `playbook_05_01_sap_dbload.yaml` | Database loader |
+| `playbook_04_00_01_hana_hsr.yaml` | SAP HANA high-availability configuration |
| `playbook_05_02_sap_pas_install.yaml` | SAP primary application server (PAS) installation | | `playbook_05_03_sap_app_install.yaml` | SAP application server installation | | `playbook_05_04_sap_web_install.yaml` | SAP web dispatcher installation |
-| `playbook_04_00_01_hana_hsr.yaml` | SAP HANA high-availability configuration |
## Prerequisites
upgrade_packages: false
# TERRAFORM CREATED sap_fqdn: sap.contoso.net # kv_name is the name of the key vault containing the system credentials
-kv_name: DEVWEEUSAP01user###
+kv_name: LABSECESAP01user###
# secret_prefix is the prefix for the name of the secret stored in key vault
-secret_prefix: DEV-WEEU-SAP01
+secret_prefix: LAB-SECE-SAP01
# sap_sid is the application SID
-sap_sid: X01
+sap_sid: L00
# scs_high_availability is a boolean flag indicating # if the SAP Central Services are deployed using high availability scs_high_availability: false
db_high_availability: false
db_lb_ip: 10.110.96.13 disks:
- - { host: 'x01dxdb00l0538', LUN: 0, type: 'sap' }
- - { host: 'x01dxdb00l0538', LUN: 10, type: 'data' }
- - { host: 'x01dxdb00l0538', LUN: 11, type: 'data' }
- - { host: 'x01dxdb00l0538', LUN: 12, type: 'data' }
- - { host: 'x01dxdb00l0538', LUN: 13, type: 'data' }
- - { host: 'x01dxdb00l0538', LUN: 20, type: 'log' }
- - { host: 'x01dxdb00l0538', LUN: 21, type: 'log' }
- - { host: 'x01dxdb00l0538', LUN: 22, type: 'log' }
- - { host: 'x01dxdb00l0538', LUN: 2, type: 'backup' }
- - { host: 'x01app00l538', LUN: 0, type: 'sap' }
- - { host: 'x01app01l538', LUN: 0, type: 'sap' }
- - { host: 'x01scs00l538', LUN: 0, type: 'sap' }
+ - { host: 'l00dxdb00l0538', LUN: 0, type: 'sap' }
+ - { host: 'l00dxdb00l0538', LUN: 10, type: 'data' }
+ - { host: 'l00dxdb00l0538', LUN: 11, type: 'data' }
+ - { host: 'l00dxdb00l0538', LUN: 12, type: 'data' }
+ - { host: 'l00dxdb00l0538', LUN: 13, type: 'data' }
+ - { host: 'l00dxdb00l0538', LUN: 20, type: 'log' }
+ - { host: 'l00dxdb00l0538', LUN: 21, type: 'log' }
+ - { host: 'l00dxdb00l0538', LUN: 22, type: 'log' }
+ - { host: 'l00dxdb00l0538', LUN: 2, type: 'backup' }
+ - { host: 'l00app00l538', LUN: 0, type: 'sap' }
+ - { host: 'l00app01l538', LUN: 0, type: 'sap' }
+ - { host: 'l00scs00l538', LUN: 0, type: 'sap' }
... ```
-The `X01_hosts.yaml` file is the inventory file that Ansible uses for configuration of the SAP infrastructure. The `X01` label might differ for your deployments.
+The `L00_hosts.yaml` file is the inventory file that Ansible uses for configuration of the SAP infrastructure. The `L00` label might differ for your deployments.
```yaml
-X01_DB:
+L00_DB:
hosts:
- x01dxdb00l0538:
+ l00dxdb00l0538:
ansible_host : 10.110.96.12 ansible_user : azureadm ansible_connection : ssh
X01_DB:
vars: node_tier : hana
-X01_SCS:
+L00_SCS:
hosts:
- x01scs00l538:
+ l00scs00l538:
ansible_host : 10.110.32.25 ansible_user : azureadm ansible_connection : ssh
X01_SCS:
vars: node_tier : scs
-X01_ERS:
+L00_ERS:
hosts: vars: node_tier : ers
-X01_PAS:
+L00_PAS:
hosts:
- x01app00l538:
+ l00app00l538:
ansible_host : 10.110.32.24 ansible_user : azureadm ansible_connection : ssh
X01_PAS:
vars: node_tier : pas
-X01_APP:
+L00_APP:
hosts:
- x01app01l538:
+ l00app01l538:
ansible_host : 10.110.32.15 ansible_user : azureadm ansible_connection : ssh
X01_APP:
vars: node_tier : app
-X01_WEB:
+L00_WEB:
hosts: vars: node_tier : web
X01_WEB:
Make sure that you [download the SAP software](software.md) to your Azure environment before you run this step.
+One way to run the playbooks is to use the **Configuration** menu.
+
+Run the `configuration_menu.sh` script:
+
+```bash
+${HOME}/Azure_SAP_Automated_Deployment/sap-automation/deploy/ansible/configuration_menu.sh
+```
+++ To run a playbook or multiple playbooks, use the following `ansible-playbook` command. This example runs the operating system configuration playbook. ```bash
kv_name="$(awk '$1 == "kv_name:" {print $2}' ${sap_params_file})"
prefix="$(awk '$1 == "secret_prefix:" {print $2}' ${sap_params_file})" password_secret_name=$prefix-sid-password
-password_secret=$(az keyvault secret show --vault-name ${kv_name} --name ${password_secret_name} | jq -r .value)
+password_secret=$(az keyvault secret show --vault-name ${kv_name} --name ${password_secret_name} --query value --output tsv)
export ANSIBLE_PASSWORD=$password_secret export ANSIBLE_INVENTORY="${sap_sid}_hosts.yaml"
export ANSIBLE_PRIVATE_KEY_FILE=sshkey
export ANSIBLE_COLLECTIONS_PATHS=/opt/ansible/collections${ANSIBLE_COLLECTIONS_PATHS:+${ANSIBLE_COLLECTIONS_PATHS}} export ANSIBLE_REMOTE_USER=azureadm
-# Ref: https://docs.ansible.com/ansible/2.9/reference_appendices/interpreter_discovery.html
-# Silence warnings about Python interpreter discovery
export ANSIBLE_PYTHON_INTERPRETER=auto_silent # Set of options that will be passed to the ansible-playbook command
playbook_options=(
ansible-playbook "${playbook_options[@]}" ~/Azure_SAP_Automated_Deployment/sap-automation/deploy/ansible/playbook_01_os_base_config.yaml + ```
-### Operating system configuration
+## Operating system configuration
The operating system configuration playbook is used to configure the operating system of the SAP virtual machines. The playbook performs the following tasks.
+You can run the playbook in one of the following ways:
+- the DevOps pipeline 'Configuration and SAP installation', selecting 'Core Operating System Configuration'
+- the configuration menu script `configuration_menu.sh`
+- directly from the command line
++ # [Linux](#tab/linux) The following tasks are executed on Linux virtual machines:
The following tasks are executed on Linux virtual machines:
- Configures the banners displayed when signed in - Configures the services required +
+```bash
+
+cd ${HOME}/Azure_SAP_Automated_Deployment/WORKSPACES/SYSTEM/LAB-SECE-SAP04-L00/
+
+export sap_sid=L00
+export ANSIBLE_PRIVATE_KEY_FILE=sshkey
+
+playbook_options=(
+ --inventory-file="${sap_sid}_hosts.yaml"
+ --private-key=${ANSIBLE_PRIVATE_KEY_FILE}
+ --extra-vars="_workspace_directory=`pwd`"
+ --extra-vars="@sap-parameters.yaml"
+ "${@}"
+)
+
+# Run the playbook to retrieve the ssh key from the Azure key vault
+ansible-playbook "${playbook_options[@]}" ~/Azure_SAP_Automated_Deployment/sap-automation/deploy/ansible/pb_get-sshkey.yaml
+
+# Run the playbook to perform the Operating System configuration
+ansible-playbook "${playbook_options[@]}" ~/Azure_SAP_Automated_Deployment/sap-automation/deploy/ansible/playbook_01_os_base_config.yaml
+
+```
+ # [Windows](#tab/windows) - Ensures that all the components are installed:
The following tasks are executed on Linux virtual machines:
- Configures Windows Firewall - Joins the virtual machine to the specified domain
+```bash
+
+cd ${HOME}/Azure_SAP_Automated_Deployment/WORKSPACES/SYSTEM/LAB-SECE-SAP04-L00/
+
+export sap_sid=L00
+export workload_vault_name="LABSECESAP04user###"
+export ANSIBLE_PRIVATE_KEY_FILE=sshkey
+
+playbook_options=(
+ --inventory-file="${sap_sid}_hosts.yaml"
+ --private-key=${ANSIBLE_PRIVATE_KEY_FILE}
+ --extra-vars="_workspace_directory=`pwd`"
+ --extra-vars ansible_ssh_pass='{{ lookup("env", "ANSIBLE_PASSWORD") }}'
+ --extra-vars="@sap-parameters.yaml"
+ "${@}"
+)
+
+# Run the playbook to perform the Operating System configuration
+ansible-playbook "${playbook_options[@]}" ~/Azure_SAP_Automated_Deployment/sap-automation/deploy/ansible/playbook_01_os_base_config.yaml
+
+```
+
-### SAP-specific operating system configuration
+## SAP-specific operating system configuration
The SAP-specific operating system configuration playbook is used to configure the operating system of the SAP virtual machines. The playbook performs the following tasks.
The following tasks are executed on Linux virtual machines:
- Configures the SAP-specific services - Implements configurations defined in the relevant SAP Notes
+You can run the playbook in one of the following ways:
+- the DevOps pipeline 'Configuration and SAP installation', selecting 'SAP Operating System Configuration'
+- the configuration menu script `configuration_menu.sh`
+- directly from the command line
++
+```bash
+
+cd ${HOME}/Azure_SAP_Automated_Deployment/WORKSPACES/SYSTEM/LAB-SECE-SAP04-L00/
+
+export sap_sid=L00
+export ANSIBLE_PRIVATE_KEY_FILE=sshkey
+
+playbook_options=(
+ --inventory-file="${sap_sid}_hosts.yaml"
+ --private-key=${ANSIBLE_PRIVATE_KEY_FILE}
+ --extra-vars="_workspace_directory=`pwd`"
+ --extra-vars ansible_ssh_pass='{{ lookup("env", "ANSIBLE_PASSWORD") }}'
+ --extra-vars="@sap-parameters.yaml"
+ "${@}"
+)
+
+# Run the playbook to retrieve the ssh key from the Azure key vault
+ansible-playbook "${playbook_options[@]}" ~/Azure_SAP_Automated_Deployment/sap-automation/deploy/ansible/pb_get-sshkey.yaml
+
+# Run the playbook to perform the SAP Specific Operating System configuration
+ansible-playbook "${playbook_options[@]}" ~/Azure_SAP_Automated_Deployment/sap-automation/deploy/ansible/playbook_02_os_sap_specific_config.yaml
+
+```
+ # [Windows](#tab/windows) - Adds local groups and permissions - Connects to the Windows file shares +
+```bash
+
+cd ${HOME}/Azure_SAP_Automated_Deployment/WORKSPACES/SYSTEM/LAB-SECE-SAP04-L00/
+
+export sap_sid=L00
+export workload_vault_name="LABSECESAP04user###"
+export ANSIBLE_PRIVATE_KEY_FILE=sshkey
+ prefix="LAB-SECE-SAP04"
+
+password_secret_name=$prefix-sid-password
+
+password_secret=$(az keyvault secret show --vault-name ${workload_vault_name} --name ${password_secret_name} --query value --output tsv)
+export ANSIBLE_PASSWORD=$password_secret
+
+playbook_options=(
+ --inventory-file="${sap_sid}_hosts.yaml"
+ --private-key=${ANSIBLE_PRIVATE_KEY_FILE}
+ --extra-vars="_workspace_directory=`pwd`"
+ --extra-vars ansible_ssh_pass='{{ lookup("env", "ANSIBLE_PASSWORD") }}'
+ --extra-vars="@sap-parameters.yaml"
+ "${@}"
+)
+
+# Run the playbook to perform the SAP Specific Operating System configuration
+ansible-playbook "${playbook_options[@]}" ~/Azure_SAP_Automated_Deployment/sap-automation/deploy/ansible/playbook_02_os_sap_specific_config.yaml
+
+```
+
-### Local software download
+## Local software download
This playbook downloads the installation media from the control plane to the installation media source. The installation media can be shared out from the central services instance or from Azure Files or Azure NetApp Files.
+You can run the playbook in one of the following ways:
+- the DevOps pipeline 'Configuration and SAP installation', selecting 'Local software download'
+- the configuration menu script `configuration_menu.sh`
+- directly from the command line
++ # [Linux](#tab/linux) The following tasks are executed on the central services instance virtual machine: - Download the software from the storage account and make it available for the other virtual machines.
+```bash
+
+cd ${HOME}/Azure_SAP_Automated_Deployment/WORKSPACES/SYSTEM/LAB-SECE-SAP04-L00/
+
+export sap_sid=L00
+export ANSIBLE_PRIVATE_KEY_FILE=sshkey
+
+playbook_options=(
+ --inventory-file="${sap_sid}_hosts.yaml"
+ --private-key=${ANSIBLE_PRIVATE_KEY_FILE}
+ --extra-vars="_workspace_directory=`pwd`"
+ --extra-vars ansible_ssh_pass='{{ lookup("env", "ANSIBLE_PASSWORD") }}'
+ --extra-vars="@sap-parameters.yaml"
+ "${@}"
+)
+
+# Run the playbook to retrieve the ssh key from the Azure key vault
+ansible-playbook "${playbook_options[@]}" ~/Azure_SAP_Automated_Deployment/sap-automation/deploy/ansible/pb_get-sshkey.yaml
+
+# Run the playbook to download the software from the SAP Library
+ansible-playbook "${playbook_options[@]}" ~/Azure_SAP_Automated_Deployment/sap-automation/deploy/ansible/playbook_03_bom_processing.yaml
+
+```
++ # [Windows](#tab/windows) The following tasks are executed on the central services instance virtual machine: - Download the software from the storage account and make it available for the other virtual machines.
+```bash
+
+cd ${HOME}/Azure_SAP_Automated_Deployment/WORKSPACES/SYSTEM/LAB-SECE-SAP04-L00/
+
+export sap_sid=L00
+export workload_vault_name="LABSECESAP04user###"
+export ANSIBLE_PRIVATE_KEY_FILE=sshkey
+ prefix="LAB-SECE-SAP04"
+
+password_secret_name=$prefix-sid-password
+
+password_secret=$(az keyvault secret show --vault-name ${workload_vault_name} --name ${password_secret_name} --query value --output tsv)
+export ANSIBLE_PASSWORD=$password_secret
+
+playbook_options=(
+ --inventory-file="${sap_sid}_hosts.yaml"
+ --private-key=${ANSIBLE_PRIVATE_KEY_FILE}
+ --extra-vars="_workspace_directory=`pwd`"
+ --extra-vars ansible_ssh_pass='{{ lookup("env", "ANSIBLE_PASSWORD") }}'
+ --extra-vars="@sap-parameters.yaml"
+ "${@}"
+)
+
+# Run the playbook to download the software from the SAP Library
+ansible-playbook "${playbook_options[@]}" ~/Azure_SAP_Automated_Deployment/sap-automation/deploy/ansible/playbook_03_bom_processing.yaml
+
+```
++
+## SAP Central Services & High Availability Configuration
+
+This playbook performs the central services installation. For high-availability scenarios, it also configures the cluster needed by SAP Central Services: Pacemaker on Linux, or Windows Server Failover Clustering on Windows.
+
+You can run the playbook in one of the following ways:
+- the DevOps pipeline 'Configuration and SAP installation', selecting 'SCS Installation & High Availability Configuration'
+- the configuration menu script `configuration_menu.sh`
+- directly from the command line
++
+# [Linux](#tab/linux)
+
+The playbook performs the following tasks:
+
+- Central Services Installation.
+- Pacemaker cluster configuration.
+
+```bash
+
+cd ${HOME}/Azure_SAP_Automated_Deployment/WORKSPACES/SYSTEM/LAB-SECE-SAP04-L00/
+
+export sap_sid=L00
+export ANSIBLE_PRIVATE_KEY_FILE=sshkey
+
+playbook_options=(
+ --inventory-file="${sap_sid}_hosts.yaml"
+ --private-key=${ANSIBLE_PRIVATE_KEY_FILE}
+ --extra-vars="_workspace_directory=`pwd`"
+ --extra-vars ansible_ssh_pass='{{ lookup("env", "ANSIBLE_PASSWORD") }}'
+ --extra-vars="@sap-parameters.yaml"
+ "${@}"
+)
+
+# Run the playbook to retrieve the ssh key from the Azure key vault
+ansible-playbook "${playbook_options[@]}" ~/Azure_SAP_Automated_Deployment/sap-automation/deploy/ansible/pb_get-sshkey.yaml
+
+# Run the playbook to perform the Central Services installation and high-availability configuration
+ansible-playbook "${playbook_options[@]}" ~/Azure_SAP_Automated_Deployment/sap-automation/deploy/ansible/playbook_05_00_00_sap_scs_install.yaml
+
+```
++
+# [Windows](#tab/windows)
+
+The playbook performs the following tasks:
+
+- Central Services Installation.
+- Windows failover cluster configuration.
+
+```bash
+
+cd ${HOME}/Azure_SAP_Automated_Deployment/WORKSPACES/SYSTEM/LAB-SECE-SAP04-L00/
+
+export sap_sid=L00
+export workload_vault_name="LABSECESAP04user###"
+export ANSIBLE_PRIVATE_KEY_FILE=sshkey
+ prefix="LAB-SECE-SAP04"
+
+password_secret_name=$prefix-sid-password
+
+password_secret=$(az keyvault secret show --vault-name ${workload_vault_name} --name ${password_secret_name} --query value --output tsv)
+export ANSIBLE_PASSWORD=$password_secret
+
+playbook_options=(
+ --inventory-file="${sap_sid}_hosts.yaml"
+ --private-key=${ANSIBLE_PRIVATE_KEY_FILE}
+ --extra-vars="_workspace_directory=`pwd`"
+ --extra-vars ansible_ssh_pass='{{ lookup("env", "ANSIBLE_PASSWORD") }}'
+ --extra-vars="@sap-parameters.yaml"
+ "${@}"
+)
+
+# Run the playbook to perform the Central Services installation and high-availability configuration
+ansible-playbook "${playbook_options[@]}" ~/Azure_SAP_Automated_Deployment/sap-automation/deploy/ansible/playbook_05_00_00_sap_scs_install.yaml
+
+```
+++
+## Database Installation
+
+This playbook performs the Database server installation.
+
+You can run the playbook in one of the following ways:
+- the DevOps pipeline 'Configuration and SAP installation', selecting 'Database Installation'
+- the configuration menu script `configuration_menu.sh`
+- directly from the command line
++
+# [Linux](#tab/linux)
+
+The playbook performs the following tasks:
+
+- Database instance installation.
+
+```bash
+
+cd ${HOME}/Azure_SAP_Automated_Deployment/WORKSPACES/SYSTEM/LAB-SECE-SAP04-L00/
+
+export sap_sid=L00
+export ANSIBLE_PRIVATE_KEY_FILE=sshkey
+
+playbook_options=(
+ --inventory-file="${sap_sid}_hosts.yaml"
+ --private-key=${ANSIBLE_PRIVATE_KEY_FILE}
+ --extra-vars="_workspace_directory=`pwd`"
+ --extra-vars ansible_ssh_pass='{{ lookup("env", "ANSIBLE_PASSWORD") }}'
+ --extra-vars="@sap-parameters.yaml"
+ "${@}"
+)
+
+# Run the playbook to retrieve the ssh key from the Azure key vault
+ansible-playbook "${playbook_options[@]}" ~/Azure_SAP_Automated_Deployment/sap-automation/deploy/ansible/pb_get-sshkey.yaml
+
+# Run the playbook to perform the database instance installation
+ansible-playbook "${playbook_options[@]}" ~/Azure_SAP_Automated_Deployment/sap-automation/deploy/ansible/playbook_04_00_00_db_install.yaml
+
+```
++
+# [Windows](#tab/windows)
+
+The playbook performs the following tasks:
+
+- Database instance installation.
+
+```bash
+
+cd ${HOME}/Azure_SAP_Automated_Deployment/WORKSPACES/SYSTEM/LAB-SECE-SAP04-L00/
+
+export sap_sid=L00
+export workload_vault_name="LABSECESAP04user###"
+export ANSIBLE_PRIVATE_KEY_FILE=sshkey
+ prefix="LAB-SECE-SAP04"
+
+password_secret_name=$prefix-sid-password
+
+password_secret=$(az keyvault secret show --vault-name ${workload_vault_name} --name ${password_secret_name} --query value --output tsv)
+export ANSIBLE_PASSWORD=$password_secret
+
+playbook_options=(
+ --inventory-file="${sap_sid}_hosts.yaml"
+ --private-key=${ANSIBLE_PRIVATE_KEY_FILE}
+ --extra-vars="_workspace_directory=`pwd`"
+ --extra-vars ansible_ssh_pass='{{ lookup("env", "ANSIBLE_PASSWORD") }}'
+ --extra-vars="@sap-parameters.yaml"
+ "${@}"
+)
+
+# Run the playbook to perform the database instance installation
+ansible-playbook "${playbook_options[@]}" ~/Azure_SAP_Automated_Deployment/sap-automation/deploy/ansible/playbook_04_00_00_db_install.yaml
+
+```
+++
+## Database Load
+
+This playbook performs the Database load.
+
+You can run the playbook in one of the following ways:
+- the DevOps pipeline 'Configuration and SAP installation', selecting 'Database Load'
+- the configuration menu script `configuration_menu.sh`
+- directly from the command line
++
+# [Linux](#tab/linux)
+
+The playbook performs the following tasks:
+
+- Database load.
+
+```bash
+
+cd ${HOME}/Azure_SAP_Automated_Deployment/WORKSPACES/SYSTEM/LAB-SECE-SAP04-L00/
+
+export sap_sid=L00
+export ANSIBLE_PRIVATE_KEY_FILE=sshkey
+
+playbook_options=(
+ --inventory-file="${sap_sid}_hosts.yaml"
+ --private-key=${ANSIBLE_PRIVATE_KEY_FILE}
+ --extra-vars="_workspace_directory=`pwd`"
+ --extra-vars ansible_ssh_pass='{{ lookup("env", "ANSIBLE_PASSWORD") }}'
+ --extra-vars="@sap-parameters.yaml"
+ "${@}"
+)
+
+# Run the playbook to retrieve the ssh key from the Azure key vault
+ansible-playbook "${playbook_options[@]}" ~/Azure_SAP_Automated_Deployment/sap-automation/deploy/ansible/pb_get-sshkey.yaml
+
+# Run the playbook to perform the database load
+ansible-playbook "${playbook_options[@]}" ~/Azure_SAP_Automated_Deployment/sap-automation/deploy/ansible/playbook_05_01_sap_dbload.yaml
+
+```
++
+# [Windows](#tab/windows)
+
+The playbook performs the following tasks:
+
+- Database load.
+
+```bash
+
+cd ${HOME}/Azure_SAP_Automated_Deployment/WORKSPACES/SYSTEM/LAB-SECE-SAP04-L00/
+
+export sap_sid=L00
+export workload_vault_name="LABSECESAP04user###"
+export ANSIBLE_PRIVATE_KEY_FILE=sshkey
+ prefix="LAB-SECE-SAP04"
+
+password_secret_name=$prefix-sid-password
+
+password_secret=$(az keyvault secret show --vault-name ${workload_vault_name} --name ${password_secret_name} --query value --output tsv)
+export ANSIBLE_PASSWORD=$password_secret
+
+playbook_options=(
+ --inventory-file="${sap_sid}_hosts.yaml"
+ --private-key=${ANSIBLE_PRIVATE_KEY_FILE}
+ --extra-vars="_workspace_directory=`pwd`"
+ --extra-vars ansible_ssh_pass='{{ lookup("env", "ANSIBLE_PASSWORD") }}'
+ --extra-vars="@sap-parameters.yaml"
+ "${@}"
+)
+
+# Run the playbook to perform the database load
+ansible-playbook "${playbook_options[@]}" ~/Azure_SAP_Automated_Deployment/sap-automation/deploy/ansible/playbook_05_01_sap_dbload.yaml
+
+```
+++
+## Database High Availability Configuration
+
+This playbook performs the Database server high availability configuration.
++
+You can run the playbook in one of the following ways:
+- the DevOps pipeline 'Configuration and SAP installation', selecting 'Database High Availability Configuration'
+- the configuration menu script `configuration_menu.sh`
+- directly from the command line
++
+# [Linux](#tab/linux)
+
+The playbook performs the following tasks:
+
+- Database high availability configuration.
+- For HANA, the playbook also configures HANA system replication and the Pacemaker cluster needed for SAP HANA high availability on Linux.
+- For Oracle, the playbook also configures Oracle Data Guard.
+
+```bash
+
+cd ${HOME}/Azure_SAP_Automated_Deployment/WORKSPACES/SYSTEM/LAB-SECE-SAP04-L00/
+
+export sap_sid=L00
+export ANSIBLE_PRIVATE_KEY_FILE=sshkey
+
+playbook_options=(
+ --inventory-file="${sap_sid}_hosts.yaml"
+ --private-key=${ANSIBLE_PRIVATE_KEY_FILE}
+ --extra-vars="_workspace_directory=`pwd`"
+ --extra-vars ansible_ssh_pass='{{ lookup("env", "ANSIBLE_PASSWORD") }}'
+ --extra-vars="@sap-parameters.yaml"
+ "${@}"
+)
+
+# Run the playbook to retrieve the ssh key from the Azure key vault
+ansible-playbook "${playbook_options[@]}" ~/Azure_SAP_Automated_Deployment/sap-automation/deploy/ansible/pb_get-sshkey.yaml
+
+# Run the playbook to perform the database high-availability configuration
+ansible-playbook "${playbook_options[@]}" ~/Azure_SAP_Automated_Deployment/sap-automation/deploy/ansible/playbook_04_00_01_db_ha.yaml
+
+```
++
+# [Windows](#tab/windows)
+
+The playbook performs the following tasks:
+
+- Database high availability configuration.
+- SQL Server Always On Availability Group configuration.
+
+```bash
+
+cd ${HOME}/Azure_SAP_Automated_Deployment/WORKSPACES/SYSTEM/LAB-SECE-SAP04-L00/
+
+export sap_sid=L00
+export workload_vault_name="LABSECESAP04user###"
+export ANSIBLE_PRIVATE_KEY_FILE=sshkey
+ prefix="LAB-SECE-SAP04"
+
+password_secret_name=$prefix-sid-password
+
+password_secret=$(az keyvault secret show --vault-name ${workload_vault_name} --name ${password_secret_name} --query value --output tsv)
+export ANSIBLE_PASSWORD=$password_secret
+
+playbook_options=(
+ --inventory-file="${sap_sid}_hosts.yaml"
+ --private-key=${ANSIBLE_PRIVATE_KEY_FILE}
+ --extra-vars="_workspace_directory=`pwd`"
+ --extra-vars ansible_ssh_pass='{{ lookup("env", "ANSIBLE_PASSWORD") }}'
+ --extra-vars="@sap-parameters.yaml"
+ "${@}"
+)
+
+# Run the playbook to perform the database high-availability configuration
+ansible-playbook "${playbook_options[@]}" ~/Azure_SAP_Automated_Deployment/sap-automation/deploy/ansible/playbook_04_00_01_db_ha.yaml
+
+```
++++
+## Primary Application Server Installation
+
+This playbook performs the installation of the primary application server.
+
+You can run the playbook in one of the following ways:
+- the DevOps pipeline 'Configuration and SAP installation', selecting 'Primary Application Server Installation'
+- the configuration menu script `configuration_menu.sh`
+- directly from the command line
++
+# [Linux](#tab/linux)
+
+The playbook performs the following tasks:
+
+- Primary Application Server Installation.
+
+```bash
+
+cd ${HOME}/Azure_SAP_Automated_Deployment/WORKSPACES/SYSTEM/LAB-SECE-SAP04-L00/
+
+export sap_sid=L00
+export ANSIBLE_PRIVATE_KEY_FILE=sshkey
+
+playbook_options=(
+ --inventory-file="${sap_sid}_hosts.yaml"
+ --private-key=${ANSIBLE_PRIVATE_KEY_FILE}
+ --extra-vars="_workspace_directory=`pwd`"
+ --extra-vars ansible_ssh_pass='{{ lookup("env", "ANSIBLE_PASSWORD") }}'
+ --extra-vars="@sap-parameters.yaml"
+ "${@}"
+)
+
+# Run the playbook to retrieve the ssh key from the Azure key vault
+ansible-playbook "${playbook_options[@]}" ~/Azure_SAP_Automated_Deployment/sap-automation/deploy/ansible/pb_get-sshkey.yaml
+
+# Run the playbook to perform the primary application server installation
+ansible-playbook "${playbook_options[@]}" ~/Azure_SAP_Automated_Deployment/sap-automation/deploy/ansible/playbook_05_02_sap_pas_install.yaml
+
+```
++
+# [Windows](#tab/windows)
+
+The playbook performs the following tasks:
+
+- Primary Application Server Installation.
+
+```bash
+
+cd ${HOME}/Azure_SAP_Automated_Deployment/WORKSPACES/SYSTEM/LAB-SECE-SAP04-L00/
+
+export sap_sid=L00
+export workload_vault_name="LABSECESAP04user###"
+export ANSIBLE_PRIVATE_KEY_FILE=sshkey
+ prefix="LAB-SECE-SAP04"
+
+password_secret_name=$prefix-sid-password
+
+password_secret=$(az keyvault secret show --vault-name ${workload_vault_name} --name ${password_secret_name} --query value --output tsv)
+export ANSIBLE_PASSWORD=$password_secret
+
+playbook_options=(
+ --inventory-file="${sap_sid}_hosts.yaml"
+ --private-key=${ANSIBLE_PRIVATE_KEY_FILE}
+ --extra-vars="_workspace_directory=`pwd`"
+ --extra-vars ansible_ssh_pass='{{ lookup("env", "ANSIBLE_PASSWORD") }}'
+ --extra-vars="@sap-parameters.yaml"
+ "${@}"
+)
+
+# Run the playbook to perform the primary application server installation
+ansible-playbook "${playbook_options[@]}" ~/Azure_SAP_Automated_Deployment/sap-automation/deploy/ansible/playbook_05_02_sap_pas_install.yaml
+
+```
++++
+## Additional Application Server Installation
+
+This playbook performs the installation of the application servers.
+
+You can run the playbook in one of the following ways:
+- the DevOps pipeline 'Configuration and SAP installation', selecting 'Application Server Installation'
+- the configuration menu script `configuration_menu.sh`
+- directly from the command line
++
+# [Linux](#tab/linux)
+
+The playbook performs the following tasks:
+
+- Application Server Installation.
+
+```bash
+
+cd ${HOME}/Azure_SAP_Automated_Deployment/WORKSPACES/SYSTEM/LAB-SECE-SAP04-L00/
+
+export sap_sid=L00
+export ANSIBLE_PRIVATE_KEY_FILE=sshkey
+
+playbook_options=(
+ --inventory-file="${sap_sid}_hosts.yaml"
+ --private-key=${ANSIBLE_PRIVATE_KEY_FILE}
+ --extra-vars="_workspace_directory=`pwd`"
+ --extra-vars ansible_ssh_pass='{{ lookup("env", "ANSIBLE_PASSWORD") }}'
+ --extra-vars="@sap-parameters.yaml"
+ "${@}"
+)
+
+# Run the playbook to retrieve the ssh key from the Azure key vault
+ansible-playbook "${playbook_options[@]}" ~/Azure_SAP_Automated_Deployment/sap-automation/deploy/ansible/pb_get-sshkey.yaml
+
+# Run the playbook to perform the application server installation
+ansible-playbook "${playbook_options[@]}" ~/Azure_SAP_Automated_Deployment/sap-automation/deploy/ansible/playbook_05_03_sap_app_install.yaml
+
+```
++
+# [Windows](#tab/windows)
+
+The playbook performs the following tasks:
+
+- Application Server Installation.
+
+```bash
+
+cd ${HOME}/Azure_SAP_Automated_Deployment/WORKSPACES/SYSTEM/LAB-SECE-SAP04-L00/
+
+export sap_sid=L00
+export workload_vault_name="LABSECESAP04user###"
+export ANSIBLE_PRIVATE_KEY_FILE=sshkey
+prefix="LAB-SECE-SAP04"
+
+password_secret_name=$prefix-sid-password
+
+password_secret=$(az keyvault secret show --vault-name ${workload_vault_name} --name ${password_secret_name} --query value --output tsv)
+export ANSIBLE_PASSWORD=$password_secret
+
+playbook_options=(
+ --inventory-file="${sap_sid}_hosts.yaml"
+ --private-key=${ANSIBLE_PRIVATE_KEY_FILE}
+ --extra-vars="_workspace_directory=`pwd`"
+ --extra-vars ansible_ssh_pass='{{ lookup("env", "ANSIBLE_PASSWORD") }}'
+ --extra-vars="@sap-parameters.yaml"
+ "${@}"
+)
+
+# Run the playbook to install the additional application servers
+ansible-playbook "${playbook_options[@]}" ~/Azure_SAP_Automated_Deployment/sap-automation/deploy/ansible/playbook_05_02_sap_app_install.yaml
+
+```
+++
+## Web Dispatcher Installation
+
+This playbook performs the installation of the Web Dispatchers.
+
+You can run the playbook using:
+- the DevOps Pipeline 'Configuration and SAP installation', choosing 'Web Dispatcher Installation'.
+- the configuration menu script 'configuration_menu.sh'.
+- directly from the command line.
++
+# [Linux](#tab/linux)
+
+The playbook performs the following tasks:
+
+- Web Dispatcher Installation.
+
+```bash
+
+cd ${HOME}/Azure_SAP_Automated_Deployment/WORKSPACES/SYSTEM/LAB-SECE-SAP04-L00/
+
+export sap_sid=L00
+export ANSIBLE_PRIVATE_KEY_FILE=sshkey
+
+playbook_options=(
+ --inventory-file="${sap_sid}_hosts.yaml"
+ --private-key=${ANSIBLE_PRIVATE_KEY_FILE}
+ --extra-vars="_workspace_directory=`pwd`"
+ --extra-vars ansible_ssh_pass='{{ lookup("env", "ANSIBLE_PASSWORD") }}'
+ --extra-vars="@sap-parameters.yaml"
+ "${@}"
+)
+
+# Run the playbook to retrieve the ssh key from the Azure key vault
+ansible-playbook "${playbook_options[@]}" ~/Azure_SAP_Automated_Deployment/sap-automation/deploy/ansible/pb_get-sshkey.yaml
+
+# Run the playbook to install the web dispatchers
+ansible-playbook "${playbook_options[@]}" ~/Azure_SAP_Automated_Deployment/sap-automation/deploy/ansible/playbook_05_04_sap_web_install.yaml
+
+```
++
+# [Windows](#tab/windows)
+
+The playbook performs the following tasks:
+
+- Web Dispatcher Installation.
+
+```bash
+
+cd ${HOME}/Azure_SAP_Automated_Deployment/WORKSPACES/SYSTEM/LAB-SECE-SAP04-L00/
+
+export sap_sid=L00
+export workload_vault_name="LABSECESAP04user###"
+export ANSIBLE_PRIVATE_KEY_FILE=sshkey
+prefix="LAB-SECE-SAP04"
+
+password_secret_name=$prefix-sid-password
+
+password_secret=$(az keyvault secret show --vault-name ${workload_vault_name} --name ${password_secret_name} --query value --output tsv)
+export ANSIBLE_PASSWORD=$password_secret
+
+playbook_options=(
+ --inventory-file="${sap_sid}_hosts.yaml"
+ --private-key=${ANSIBLE_PRIVATE_KEY_FILE}
+ --extra-vars="_workspace_directory=`pwd`"
+ --extra-vars ansible_ssh_pass='{{ lookup("env", "ANSIBLE_PASSWORD") }}'
+ --extra-vars="@sap-parameters.yaml"
+ "${@}"
+)
+
+# Run the playbook to install the web dispatchers
+ansible-playbook "${playbook_options[@]}" ~/Azure_SAP_Automated_Deployment/sap-automation/deploy/ansible/playbook_05_04_sap_web_install.yaml
+
+```
+++
+## ACSS Registration
+
+This playbook performs the ACSS registration.
+
+You can run the playbook using:
+- the DevOps Pipeline 'Configuration and SAP installation', choosing 'Register System in ACSS'.
+- the configuration menu script 'configuration_menu.sh'.
+- directly from the command line.
++
+# [Linux](#tab/linux)
+
+The playbook performs the following tasks:
+
+- ACSS registration.
+
+```bash
+
+cd ${HOME}/Azure_SAP_Automated_Deployment/WORKSPACES/SYSTEM/LAB-SECE-SAP04-L00/
+
+export sap_sid=L00
+export ANSIBLE_PRIVATE_KEY_FILE=sshkey
+
+playbook_options=(
+ --inventory-file="${sap_sid}_hosts.yaml"
+ --private-key=${ANSIBLE_PRIVATE_KEY_FILE}
+ --extra-vars="_workspace_directory=`pwd`"
+ --extra-vars ansible_ssh_pass='{{ lookup("env", "ANSIBLE_PASSWORD") }}'
+ --extra-vars="@sap-parameters.yaml"
+ "${@}"
+)
+
+# Run the playbook to retrieve the ssh key from the Azure key vault
+ansible-playbook "${playbook_options[@]}" ~/Azure_SAP_Automated_Deployment/sap-automation/deploy/ansible/pb_get-sshkey.yaml
+
+# Run the playbook to register the SAP system in ACSS
+ansible-playbook "${playbook_options[@]}" ~/Azure_SAP_Automated_Deployment/sap-automation/deploy/ansible/playbook_06_00_acss_registration.yaml
+
+```
++
+# [Windows](#tab/windows)
+
+The playbook performs the following tasks:
+
+- ACSS registration.
+
+```bash
+
+cd ${HOME}/Azure_SAP_Automated_Deployment/WORKSPACES/SYSTEM/LAB-SECE-SAP04-L00/
+
+export sap_sid=L00
+export workload_vault_name="LABSECESAP04user###"
+export ANSIBLE_PRIVATE_KEY_FILE=sshkey
+prefix="LAB-SECE-SAP04"
+
+password_secret_name=$prefix-sid-password
+
+password_secret=$(az keyvault secret show --vault-name ${workload_vault_name} --name ${password_secret_name} --query value --output tsv)
+export ANSIBLE_PASSWORD=$password_secret
+
+playbook_options=(
+ --inventory-file="${sap_sid}_hosts.yaml"
+ --private-key=${ANSIBLE_PRIVATE_KEY_FILE}
+ --extra-vars="_workspace_directory=`pwd`"
+ --extra-vars ansible_ssh_pass='{{ lookup("env", "ANSIBLE_PASSWORD") }}'
+ --extra-vars="@sap-parameters.yaml"
+ "${@}"
+)
+
+# Run the playbook to register the SAP system in ACSS
+ansible-playbook "${playbook_options[@]}" ~/Azure_SAP_Automated_Deployment/sap-automation/deploy/ansible/playbook_06_00_acss_registration.yaml
+
+```
+
sap Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/automation/tutorial.md
code .
# The environment value is a mandatory field, it is used for partitioning the environments, for example, PROD and NP. environment = "LAB" # The location/region value is a mandatory field, it is used to control where the resources are deployed
- location = "westeurope"
+ location = "swedencentral"
# management_network_address_space is the address space for management virtual network management_network_address_space = "10.10.20.0/25"
code .
# The environment value is a mandatory field, it is used for partitioning the environments, for example, PROD and NP. environment = "LAB" # The location/region value is a mandatory field, it is used to control where the resources are deployed
- location = "westeurope"
+ location = "swedencentral"
#Defines the DNS suffix for the resources dns_label = "lab.sdaf.contoso.net"
Use the [deploy_controlplane.sh](bash/deploy-controlplane.md) script to deploy t
The deployment goes through cycles of deploying the infrastructure, refreshing the state, and uploading the Terraform state files to the library storage account. All of these steps are packaged into a single deployment script. The script needs the location of the configuration file for the deployer and library, and some other parameters.
-For example, choose **West Europe** as the deployment location, with the four-character name `WEEU`, as previously described. The sample deployer configuration file `LAB-WEEU-DEP05-INFRASTRUCTURE.tfvars` is in the `${HOME}/Azure_SAP_Automated_Deployment/WORKSPACES/DEPLOYER/LAB-WEEU-DEP05-INFRASTRUCTURE` folder.
+For example, choose **West Europe** as the deployment location, with the four-character name `SECE`, as previously described. The sample deployer configuration file `LAB-SECE-DEP05-INFRASTRUCTURE.tfvars` is in the `${HOME}/Azure_SAP_Automated_Deployment/WORKSPACES/DEPLOYER/LAB-SECE-DEP05-INFRASTRUCTURE` folder.
-The sample SAP library configuration file `LAB-WEEU-SAP_LIBRARY.tfvars` is in the `${HOME}/Azure_SAP_Automated_Deployment/WORKSPACES/LIBRARY/LAB-WEEU-SAP_LIBRARY` folder.
+The sample SAP library configuration file `LAB-SECE-SAP_LIBRARY.tfvars` is in the `${HOME}/Azure_SAP_Automated_Deployment/WORKSPACES/LIBRARY/LAB-SECE-SAP_LIBRARY` folder.
Set the environment variables for the service principal:
export TF_use_webapp=true
export env_code="LAB" export vnet_code="DEP05"
-export region_code="WEEU"
+export region_code="SECE"
export DEPLOYMENT_REPO_PATH="${HOME}/Azure_SAP_Automated_Deployment/sap-automation" export CONFIG_REPO_PATH="${HOME}/Azure_SAP_Automated_Deployment/WORKSPACES"
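With these variables set, the control plane deployment is a single call to the deployment script. The following is a minimal sketch based on the [deploy_controlplane.sh](bash/deploy-controlplane.md) reference; verify the parameter names against your copy of the script, and note that the `ARM_*` values are assumed to have been exported in the service principal step.
```bash
# A sketch: deploy the deployer infrastructure and the SAP library in one pass.
# Verify the parameter names against the deploy_controlplane.sh documentation before running.
cd ${HOME}/Azure_SAP_Automated_Deployment/WORKSPACES

${DEPLOYMENT_REPO_PATH}/deploy/scripts/deploy_controlplane.sh \
    --deployer_parameter_file "${CONFIG_REPO_PATH}/DEPLOYER/LAB-SECE-DEP05-INFRASTRUCTURE/LAB-SECE-DEP05-INFRASTRUCTURE.tfvars" \
    --library_parameter_file "${CONFIG_REPO_PATH}/LIBRARY/LAB-SECE-SAP_LIBRARY/LAB-SECE-SAP_LIBRARY.tfvars" \
    --subscription "${ARM_SUBSCRIPTION_ID}" \
    --spn_id "${ARM_CLIENT_ID}" \
    --spn_secret "${ARM_CLIENT_SECRET}" \
    --tenant_id "${ARM_TENANT_ID}"
```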
You need to note some values for upcoming steps. Look for this text block in the
######################################################################################### # # # Please save these values: #
-# - Key Vault: LABWEEUDEP05user39B #
+# - Key Vault: LABSECEDEP05user39B #
# - Deployer IP: x.x.x.x #
-# - Storage Account: mgmtnoeutfstate53e #
-# - Web Application Name: mgmt-noeu-sapdeployment39B #
+# - Storage Account: labsecetfstate53e #
+# - Web Application Name: lab-sece-sapdeployment39B #
# - App registration Id: xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx # # # #########################################################################################
To copy the control plane configuration files to the deployer VM, you can use th
```bash
-terraform_state_storage_account=labweeutfstate###
+terraform_state_storage_account=labsecetfstate###
cd ${HOME}/Azure_SAP_Automated_Deployment/WORKSPACES
export ARM_TENANT_ID="<tenantId>"
export env_code="LAB" export vnet_code="DEP05"
-export region_code="WEEU"
+export region_code="SECE"
-terraform_state_storage_account=labweeutfstate###
- vault_name="LABWEEUDEP05user###"
+terraform_state_storage_account=labsecetfstate###
+ vault_name="LABSECEDEP05user###"
export DEPLOYMENT_REPO_PATH="${HOME}/Azure_SAP_Automated_Deployment/sap-automation" export CONFIG_REPO_PATH="${HOME}/Azure_SAP_Automated_Deployment/WORKSPACES"
You can deploy the web application using the following script:
```bash export env_code="LAB" export vnet_code="DEP05"
-export region_code="WEEU"
+export region_code="SECE"
export webapp_name="<webAppName>" export app_id="<appRegistrationId>" export webapp_id="<webAppId>"
az webapp restart --resource-group ${env_code}-${region_code}-${vnet_code}-INFRA
1. Collect the following information in a text editor. This information was collected at the end of the "Deploy the control plane" phase. 1. The name of the Terraform state file storage account in the library resource group:
- - Following from the preceding example, the resource group is `LAB-WEEU-SAP_LIBRARY`.
- - The name of the storage account contains `mgmtnoeutfstate`.
+ - Following from the preceding example, the resource group is `LAB-SECE-SAP_LIBRARY`.
+ - The name of the storage account contains `labsecetfstate`.
1. The name of the key vault in the deployer resource group:
- - Following from the preceding example, the resource group is `LAB-WEEU-DEP05-INFRASTRUCTURE`.
- - The name of the key vault contains `LABWEEUDEP05user`.
+ - Following from the preceding example, the resource group is `LAB-SECE-DEP05-INFRASTRUCTURE`.
+ - The name of the key vault contains `LABSECEDEP05user`.
1. The public IP address of the deployer VM. Go to your deployer's resource group, open the deployer VM, and copy the public IP address.
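If you prefer to collect these values from the command line instead of the portal, the following sketch uses standard Azure CLI list commands. The resource group names assume the example naming used in this tutorial.
```bash
# Terraform state storage account in the library resource group
az storage account list --resource-group LAB-SECE-SAP_LIBRARY --query "[].name" --output tsv

# Key vault in the deployer resource group
az keyvault list --resource-group LAB-SECE-DEP05-INFRASTRUCTURE --query "[].name" --output tsv

# Public IP address of the deployer VM
az vm list-ip-addresses --resource-group LAB-SECE-DEP05-INFRASTRUCTURE --output table
```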
az webapp restart --resource-group ${env_code}-${region_code}-${vnet_code}-INFRA
1. The name of the deployer state file is found under the library resource group: - Select **Library resource group** > **State storage account** > **Containers** > `tfstate`. Copy the name of the deployer state file.
- - Following from the preceding example, the name of the blob is `LAB-WEEU-DEP05-INFRASTRUCTURE.terraform.tfstate`.
+ - Following from the preceding example, the name of the blob is `LAB-SECE-DEP05-INFRASTRUCTURE.terraform.tfstate`.
1. If necessary, register the service principal. For this tutorial, this step isn't needed.
az webapp restart --resource-group ${env_code}-${region_code}-${vnet_code}-INFRA
export ARM_TENANT_ID="<tenant>" export key_vault="<vaultName>" export env_code="LAB"
- export region_code="WEEU"
+ export region_code="SECE"
export SAP_AUTOMATION_REPO_PATH="${HOME}/Azure_SAP_Automated_Deployment/sap-automation" export CONFIG_REPO_PATH="${HOME}/Azure_SAP_Automated_Deployment/WORKSPACES"
Use the [install_workloadzone](bash/install-workloadzone.md) script to deploy th
1. On the deployer VM, go to the `Azure_SAP_Automated_Deployment` folder. ```bash
- cd ${HOME}/Azure_SAP_Automated_Deployment/WORKSPACES/LANDSCAPE/LAB-WEEU-SAP04-INFRASTRUCTURE
+ cd ${HOME}/Azure_SAP_Automated_Deployment/WORKSPACES/LANDSCAPE/LAB-SECE-SAP04-INFRASTRUCTURE
``` 1. Optionally, open the workload zone configuration file and, if needed, change the network logical name to match the network name.
export ARM_TENANT_ID="<tenantId>"
```bash export deployer_env_code="LAB" export sap_env_code="LAB"
-export region_code="WEEU"
+export region_code="SECE"
export deployer_vnet_code="DEP05" export vnet_code="SAP04"
Deploy the SAP system.
```bash export sap_env_code="LAB"
-export region_code="WEEU"
+export region_code="SECE"
export vnet_code="SAP04" export SID="L00"
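The deployment call itself mirrors the `remover.sh` invocation shown later in this article. The following is a sketch that assumes the automation framework's `installer.sh` script and its `--parameterfile`/`--type` flags; verify both against the framework documentation before running.
```bash
# A sketch: deploy the SAP system from its configuration folder.
# The script name and flags mirror the remover.sh call in the cleanup section; verify before use.
cd ${HOME}/Azure_SAP_Automated_Deployment/WORKSPACES/SYSTEM/${sap_env_code}-${region_code}-${vnet_code}-${SID}

${DEPLOYMENT_REPO_PATH}/deploy/scripts/installer.sh \
    --parameterfile "${sap_env_code}-${region_code}-${vnet_code}-${SID}.tfvars" \
    --type sap_system
```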
materials:
- name: "Kernel Part I ; OS: Linux on x86_64 64bit ; DB: Database independent" ```
-For this example configuration, the resource group is `LAB-WEEU-DEP05-INFRASTRUCTURE`. The deployer key vault name contains `LABWEEUDEP05user` in the name. You use this information to configure your deployer's key vault secrets.
+For this example configuration, the resource group is `LAB-SECE-DEP05-INFRASTRUCTURE`. The deployer key vault name contains `LABSECEDEP05user` in the name. You use this information to configure your deployer's key vault secrets.
1. Connect to your deployer VM for the following steps. A copy of the repo is now there.
The SAP application installation happens through Ansible playbooks.
Go to the system deployment folder. ```bash
-cd ${HOME}/Azure_SAP_Automated_Deployment/WORKSPACES/SYSTEM/LAB-WEEU-SAP04-L00/
+cd ${HOME}/Azure_SAP_Automated_Deployment/WORKSPACES/SYSTEM/LAB-SECE-SAP04-L00/
``` Make sure you have the following files in the current folder: `sap-parameters.yaml` and `L00_hosts.yaml`.
Run the `configuration_menu` script.
${HOME}/Azure_SAP_Automated_Deployment/sap-automation/deploy/ansible/configuration_menu.sh ``` ++ Choose the playbooks to run. ### Playbook: Base Operating System configuration This playbook performs the generic OS configuration setup on all the machines, which includes configuration of software repositories, packages, and services.
+You can either run the playbook using the configuration menu or directly from the command line.
+
+```bash
+
+cd ${HOME}/Azure_SAP_Automated_Deployment/WORKSPACES/SYSTEM/LAB-SECE-SAP04-L00/
+
+export sap_sid=L00
+export ANSIBLE_PRIVATE_KEY_FILE=sshkey
+
+playbook_options=(
+ --inventory-file="${sap_sid}_hosts.yaml"
+ --private-key=${ANSIBLE_PRIVATE_KEY_FILE}
+ --extra-vars="_workspace_directory=`pwd`"
+ --extra-vars="@sap-parameters.yaml"
+ "${@}"
+)
+
+# Run the playbook to retrieve the ssh key from the Azure key vault
+ansible-playbook "${playbook_options[@]}" ~/Azure_SAP_Automated_Deployment/sap-automation/deploy/ansible/pb_get-sshkey.yaml
+
+# Run the playbook to perform the Operating System configuration
+ansible-playbook "${playbook_options[@]}" ~/Azure_SAP_Automated_Deployment/sap-automation/deploy/ansible/playbook_01_os_base_config.yaml
+
+```
++ ### Playbook: SAP specific Operating System configuration This playbook performs the SAP OS configuration setup on all the machines. The steps include creation of volume groups and file systems and configuration of software repositories, packages, and services.
+You can either run the playbook using the configuration menu or directly from the command line.
+
+```bash
+
+cd ${HOME}/Azure_SAP_Automated_Deployment/WORKSPACES/SYSTEM/LAB-SECE-SAP04-L00/
+
+export sap_sid=L00
+export ANSIBLE_PRIVATE_KEY_FILE=sshkey
+
+playbook_options=(
+ --inventory-file="${sap_sid}_hosts.yaml"
+ --private-key=${ANSIBLE_PRIVATE_KEY_FILE}
+ --extra-vars="_workspace_directory=`pwd`"
+ --extra-vars="@sap-parameters.yaml"
+ "${@}"
+)
+
+# Run the playbook to retrieve the ssh key from the Azure key vault
+ansible-playbook "${playbook_options[@]}" ~/Azure_SAP_Automated_Deployment/sap-automation/deploy/ansible/pb_get-sshkey.yaml
+
+# Run the playbook to perform the SAP Specific Operating System configuration
+ansible-playbook "${playbook_options[@]}" ~/Azure_SAP_Automated_Deployment/sap-automation/deploy/ansible/playbook_02_os_sap_specific_config.yaml
+
+```
+ ### Playbook: BOM Processing This playbook downloads the SAP software to the SCS virtual machine.
+You can either run the playbook using the configuration menu or directly from the command line.
+
+```bash
+
+cd ${HOME}/Azure_SAP_Automated_Deployment/WORKSPACES/SYSTEM/LAB-SECE-SAP04-L00/
+
+export sap_sid=L00
+export ANSIBLE_PRIVATE_KEY_FILE=sshkey
+
+playbook_options=(
+ --inventory-file="${sap_sid}_hosts.yaml"
+ --private-key=${ANSIBLE_PRIVATE_KEY_FILE}
+ --extra-vars="_workspace_directory=`pwd`"
+ --extra-vars ansible_ssh_pass='{{ lookup("env", "ANSIBLE_PASSWORD") }}'
+ --extra-vars="@sap-parameters.yaml"
+ "${@}"
+)
+
+# Run the playbook to retrieve the ssh key from the Azure key vault
+ansible-playbook "${playbook_options[@]}" ~/Azure_SAP_Automated_Deployment/sap-automation/deploy/ansible/pb_get-sshkey.yaml
+
+# Run the playbook to download the software from the SAP Library
+ansible-playbook "${playbook_options[@]}" ~/Azure_SAP_Automated_Deployment/sap-automation/deploy/ansible/playbook_03_bom_processing.yaml
+
+```
++ ### Playbook: SCS Install This playbook installs SAP central services. For highly available configurations, the playbook also installs the SAP ERS instance and configures Pacemaker.
+You can either run the playbook using the configuration menu or directly from the command line.
+
+```bash
+
+cd ${HOME}/Azure_SAP_Automated_Deployment/WORKSPACES/SYSTEM/LAB-SECE-SAP04-L00/
+
+export sap_sid=L00
+export ANSIBLE_PRIVATE_KEY_FILE=sshkey
+
+playbook_options=(
+ --inventory-file="${sap_sid}_hosts.yaml"
+ --private-key=${ANSIBLE_PRIVATE_KEY_FILE}
+ --extra-vars="_workspace_directory=`pwd`"
+ --extra-vars ansible_ssh_pass='{{ lookup("env", "ANSIBLE_PASSWORD") }}'
+ --extra-vars="@sap-parameters.yaml"
+ "${@}"
+)
+
+# Run the playbook to retrieve the ssh key from the Azure key vault
+ansible-playbook "${playbook_options[@]}" ~/Azure_SAP_Automated_Deployment/sap-automation/deploy/ansible/pb_get-sshkey.yaml
+
+# Run the playbook to install SAP central services (SCS)
+ansible-playbook "${playbook_options[@]}" ~/Azure_SAP_Automated_Deployment/sap-automation/deploy/ansible/playbook_05_00_00_sap_scs_install.yaml
+
+```
++ ### Playbook: Database Instance installation This playbook installs the database instances.
+You can either run the playbook using the configuration menu or directly from the command line.
+
+```bash
+
+cd ${HOME}/Azure_SAP_Automated_Deployment/WORKSPACES/SYSTEM/LAB-SECE-SAP04-L00/
+
+export sap_sid=L00
+export ANSIBLE_PRIVATE_KEY_FILE=sshkey
+
+playbook_options=(
+ --inventory-file="${sap_sid}_hosts.yaml"
+ --private-key=${ANSIBLE_PRIVATE_KEY_FILE}
+ --extra-vars="_workspace_directory=`pwd`"
+ --extra-vars ansible_ssh_pass='{{ lookup("env", "ANSIBLE_PASSWORD") }}'
+ --extra-vars="@sap-parameters.yaml"
+ "${@}"
+)
+
+# Run the playbook to retrieve the ssh key from the Azure key vault
+ansible-playbook "${playbook_options[@]}" ~/Azure_SAP_Automated_Deployment/sap-automation/deploy/ansible/pb_get-sshkey.yaml
+
+# Run the playbook to install the database instance
+ansible-playbook "${playbook_options[@]}" ~/Azure_SAP_Automated_Deployment/sap-automation/deploy/ansible/playbook_04_00_00_db_install.yaml
+
+```
+ ### Playbook: Database Load This playbook invokes the database load task from the primary application server.
+You can either run the playbook using the configuration menu or directly from the command line.
+
+```bash
+
+cd ${HOME}/Azure_SAP_Automated_Deployment/WORKSPACES/SYSTEM/LAB-SECE-SAP04-L00/
+
+export sap_sid=L00
+export ANSIBLE_PRIVATE_KEY_FILE=sshkey
+
+playbook_options=(
+ --inventory-file="${sap_sid}_hosts.yaml"
+ --private-key=${ANSIBLE_PRIVATE_KEY_FILE}
+ --extra-vars="_workspace_directory=`pwd`"
+ --extra-vars ansible_ssh_pass='{{ lookup("env", "ANSIBLE_PASSWORD") }}'
+ --extra-vars="@sap-parameters.yaml"
+ "${@}"
+)
+
+# Run the playbook to retrieve the ssh key from the Azure key vault
+ansible-playbook "${playbook_options[@]}" ~/Azure_SAP_Automated_Deployment/sap-automation/deploy/ansible/pb_get-sshkey.yaml
+
+# Run the playbook to perform the database load
+ansible-playbook "${playbook_options[@]}" ~/Azure_SAP_Automated_Deployment/sap-automation/deploy/ansible/playbook_05_01_sap_dbload.yaml
+
+```
+ ### Playbook: Database High Availability Setup This playbook configures the Database High availability, for HANA it entails HANA system replication and Pacemaker for the HANA database.
+You can either run the playbook using the configuration menu or directly from the command line.
+
+```bash
+
+cd ${HOME}/Azure_SAP_Automated_Deployment/WORKSPACES/SYSTEM/LAB-SECE-SAP04-L00/
+
+export sap_sid=L00
+export ANSIBLE_PRIVATE_KEY_FILE=sshkey
+
+playbook_options=(
+ --inventory-file="${sap_sid}_hosts.yaml"
+ --private-key=${ANSIBLE_PRIVATE_KEY_FILE}
+ --extra-vars="_workspace_directory=`pwd`"
+ --extra-vars ansible_ssh_pass='{{ lookup("env", "ANSIBLE_PASSWORD") }}'
+ --extra-vars="@sap-parameters.yaml"
+ "${@}"
+)
+
+# Run the playbook to retrieve the ssh key from the Azure key vault
+ansible-playbook "${playbook_options[@]}" ~/Azure_SAP_Automated_Deployment/sap-automation/deploy/ansible/pb_get-sshkey.yaml
+
+# Run the playbook to configure database high availability
+ansible-playbook "${playbook_options[@]}" ~/Azure_SAP_Automated_Deployment/sap-automation/deploy/ansible/playbook_04_00_01_db_ha.yaml
+
+```
+ ### Playbook: Primary Application Server installation This playbook installs the primary application server.
+You can either run the playbook using the configuration menu or directly from the command line.
+
+```bash
+
+cd ${HOME}/Azure_SAP_Automated_Deployment/WORKSPACES/SYSTEM/LAB-SECE-SAP04-L00/
+
+export sap_sid=L00
+export ANSIBLE_PRIVATE_KEY_FILE=sshkey
+
+playbook_options=(
+ --inventory-file="${sap_sid}_hosts.yaml"
+ --private-key=${ANSIBLE_PRIVATE_KEY_FILE}
+ --extra-vars="_workspace_directory=`pwd`"
+ --extra-vars ansible_ssh_pass='{{ lookup("env", "ANSIBLE_PASSWORD") }}'
+ --extra-vars="@sap-parameters.yaml"
+ "${@}"
+)
+
+# Run the playbook to retrieve the ssh key from the Azure key vault
+ansible-playbook "${playbook_options[@]}" ~/Azure_SAP_Automated_Deployment/sap-automation/deploy/ansible/pb_get-sshkey.yaml
+
+# Run the playbook to install the primary application server
+ansible-playbook "${playbook_options[@]}" ~/Azure_SAP_Automated_Deployment/sap-automation/deploy/ansible/playbook_05_02_sap_pas_install.yaml
+
+```
### Playbook: Application Server installations This playbook installs the application servers.
+You can either run the playbook using the configuration menu or directly from the command line.
+
+```bash
+
+cd ${HOME}/Azure_SAP_Automated_Deployment/WORKSPACES/SYSTEM/LAB-SECE-SAP04-L00/
+
+export sap_sid=L00
+export ANSIBLE_PRIVATE_KEY_FILE=sshkey
+
+playbook_options=(
+ --inventory-file="${sap_sid}_hosts.yaml"
+ --private-key=${ANSIBLE_PRIVATE_KEY_FILE}
+ --extra-vars="_workspace_directory=`pwd`"
+ --extra-vars ansible_ssh_pass='{{ lookup("env", "ANSIBLE_PASSWORD") }}'
+ --extra-vars="@sap-parameters.yaml"
+ "${@}"
+)
+
+# Run the playbook to retrieve the ssh key from the Azure key vault
+ansible-playbook "${playbook_options[@]}" ~/Azure_SAP_Automated_Deployment/sap-automation/deploy/ansible/pb_get-sshkey.yaml
+
+# Run the playbook to install the application servers
+ansible-playbook "${playbook_options[@]}" ~/Azure_SAP_Automated_Deployment/sap-automation/deploy/ansible/playbook_05_02_sap_app_install.yaml
+
+```
### Playbook: Web Dispatcher installations This playbook installs the web dispatchers.
+You can either run the playbook using the configuration menu or directly from the command line.
You've now deployed and configured a standalone HANA system. If you need to configure a highly available (HA) SAP HANA database, run the HANA HA playbook.
+```bash
+
+cd ${HOME}/Azure_SAP_Automated_Deployment/WORKSPACES/SYSTEM/LAB-SECE-SAP04-L00/
+
+export sap_sid=L00
+export ANSIBLE_PRIVATE_KEY_FILE=sshkey
+
+playbook_options=(
+ --inventory-file="${sap_sid}_hosts.yaml"
+ --private-key=${ANSIBLE_PRIVATE_KEY_FILE}
+ --extra-vars="_workspace_directory=`pwd`"
+ --extra-vars ansible_ssh_pass='{{ lookup("env", "ANSIBLE_PASSWORD") }}'
+ --extra-vars="@sap-parameters.yaml"
+ "${@}"
+)
+
+# Run the playbook to retrieve the ssh key from the Azure key vault
+ansible-playbook "${playbook_options[@]}" ~/Azure_SAP_Automated_Deployment/sap-automation/deploy/ansible/pb_get-sshkey.yaml
+
+# Run the playbook to install the web dispatchers
+ansible-playbook "${playbook_options[@]}" ~/Azure_SAP_Automated_Deployment/sap-automation/deploy/ansible/playbook_05_04_sap_web_install.yaml
+
+```
## Clean up installation
Before you begin, sign in to your Azure account. Then, check that you're in the
### Remove the SAP infrastructure
-Go to the `DEV-WEEU-SAP01-X00` subfolder inside the `SYSTEM` folder. Then, run this command:
+Go to the `LAB-SECE-SAP01-L00` subfolder inside the `SYSTEM` folder. Then, run this command:
```bash
-export sap_env_code="DEV"
-export region_code="WEEU"
+export sap_env_code="LAB"
+export region_code="SECE"
export sap_vnet_code="SAP04"
-cd ${HOME}/Azure_SAP_Automated_Deployment/WORKSPACES/SYSTEM/${sap_env_code}-${region_code}-${sap_vnet_code}-X00
+cd ${HOME}/Azure_SAP_Automated_Deployment/WORKSPACES/SYSTEM/${sap_env_code}-${region_code}-${sap_vnet_code}-L00
${DEPLOYMENT_REPO_PATH}/deploy/scripts/remover.sh \
- --parameterfile "${sap_env_code}-${region_code}-${sap_vnet_code}-X00.tfvars" \
+ --parameterfile "${sap_env_code}-${region_code}-${sap_vnet_code}-L00.tfvars" \
--type sap_system ``` ### Remove the SAP workload zone
-Go to the `DEV-XXXX-SAP01-INFRASTRUCTURE` subfolder inside the `LANDSCAPE` folder. Then, run the following command:
+Go to the `LAB-XXXX-SAP01-INFRASTRUCTURE` subfolder inside the `LANDSCAPE` folder. Then, run the following command:
```bash
-export sap_env_code="DEV"
-export region_code="WEEU"
+export sap_env_code="LAB"
+export region_code="SECE"
export sap_vnet_code="SAP01" cd ${HOME}/Azure_SAP_Automated_Deployment/WORKSPACES/LANDSCAPE/${sap_env_code}-${region_code}-${sap_vnet_code}-INFRASTRUCTURE
export ARM_SUBSCRIPTION_ID="<subscriptionId>"
Run the following command: ```bash
-export region_code="WEEU"
+export region_code="SECE"
export env_code="LAB" export vnet_code="DEP05"
sap Ha Setup With Fencing Device https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/large-instances/ha-setup-with-fencing-device.md
Title: High availability setup with fencing device for SAP HANA on Azure (Large Instances)| Microsoft Docs description: Learn to establish high availability for SAP HANA on Azure (Large Instances) in SUSE by using the fencing device.
sap Hana Additional Network Requirements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/large-instances/hana-additional-network-requirements.md
Title: Other network requirements for SAP HANA on Azure (Large Instances) | Microsoft Docs description: Learn about added network requirements for SAP HANA on Azure (Large Instances) that you might have.
sap Hana Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/large-instances/hana-architecture.md
Title: Architecture of SAP HANA on Azure (Large Instances) | Microsoft Docs description: Learn the architecture for deploying SAP HANA on Azure (Large Instances).
sap Hana Available Skus https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/large-instances/hana-available-skus.md
Title: SKUs for SAP HANA on Azure (Large Instances) | Microsoft Docs description: Learn about the SKUs available for SAP HANA on Azure (Large Instances). keywords: 'HLI, HANA, SKUs, S896, S224, S448, S672, Optane, SAP'
sap Hana Backup Restore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/large-instances/hana-backup-restore.md
Title: HANA backup and restore on SAP HANA on Azure (Large Instances) | Microsoft Docs description: Learn how to back up and restore SAP HANA on HANA Large Instances.
sap Hana Certification https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/large-instances/hana-certification.md
Title: Certification of SAP HANA on Azure (Large Instances) | Microsoft Docs description: Learn about certification of SAP HANA on Azure (Large Instances).
sap Hana Concept Preparation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/large-instances/hana-concept-preparation.md
Title: Disaster recovery principles and preparation on SAP HANA on Azure (Large Instances) | Microsoft Docs description: Become familiar with disaster recovery principles and preparation on SAP HANA on Azure (Large Instances).
sap Hana Connect Azure Vm Large Instances https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/large-instances/hana-connect-azure-vm-large-instances.md
Title: Connectivity setup from virtual machines to SAP HANA on Azure (Large Instances) | Microsoft Docs description: Connectivity setup from virtual machines for using SAP HANA on Azure (Large Instances). tags: azure-resource-manager
-keywords: ''
sap Hana Connect Vnet Express Route https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/large-instances/hana-connect-vnet-express-route.md
Title: Connect a virtual network to SAP HANA on Azure (Large Instances) | Microsoft Docs description: Learn how to connect a virtual network to SAP HANA on Azure (Large Instances).
sap Hana Data Tiering Extension Nodes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/large-instances/hana-data-tiering-extension-nodes.md
Title: Data tiering and extension nodes for SAP HANA on Azure (Large Instances) | Microsoft Docs description: Learn about data tiering and extension nodes for SAP HANA on Azure (Large Instances).
sap Hana Example Installation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/large-instances/hana-example-installation.md
Title: Install HANA on SAP HANA on Azure (Large Instances) | Microsoft Docs description: Learn how to install HANA on SAP HANA on Azure (Large Instances).
sap Hana Failover Procedure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/large-instances/hana-failover-procedure.md
Title: HANA failover procedure to a disaster site for SAP HANA on Azure (Large Instances) | Microsoft Docs description: Learn how to fail over to a disaster recovery site for SAP HANA on Azure (Large Instances).
sap Hana Installation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/large-instances/hana-installation.md
Title: Install SAP HANA on Azure (Large Instances) | Microsoft Docs description: Learn how to install and configure SAP HANA on Azure (Large Instances).
sap Hana Know Terms https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/large-instances/hana-know-terms.md
Title: Know the terms of SAP HANA on Azure (Large Instances) | Microsoft Docs description: Know the terms of SAP HANA on Azure (Large Instances).
sap Hana Large Instance Enable Kdump https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/large-instances/hana-large-instance-enable-kdump.md
Title: Script to enable kdump in SAP HANA (Large Instances)| Microsoft Docs description: Learn how to enable the kdump service on Azure HANA Large Instances Type I and Type II.
sap Hana Large Instance Virtual Machine Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/large-instances/hana-large-instance-virtual-machine-migration.md
Title: Migrating SAP HANA on Azure (Large Instances) to Azure virtual machines| Microsoft Docs description: How to migrate SAP HANA on Azure (Large Instances) to Azure virtual machines
sap Hana Monitor Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/large-instances/hana-monitor-troubleshoot.md
Title: Monitoring and troubleshooting from HANA side on SAP HANA on Azure (Large Instances) | Microsoft Docs description: Learn how to monitor and troubleshoot your SAP HANA on Azure (Large Instances) using resources provided by SAP HANA.
sap Hana Network Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/large-instances/hana-network-architecture.md
Title: Network architecture of SAP HANA on Azure (Large Instances) | Microsoft Docs description: Learn about the network architecture for deploying SAP HANA on Azure (Large Instances).
sap Hana Onboarding Requirements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/large-instances/hana-onboarding-requirements.md
Title: Onboarding requirements for SAP HANA on Azure (Large Instances) | Microsoft Docs description: Learn about onboarding requirements for SAP HANA on Azure (Large Instances).
sap Hana Operations Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/large-instances/hana-operations-model.md
Title: Operations model of SAP HANA on Azure (Large Instances) | Microsoft Docs description: Learn about the SAP HANA (Large Instances) operations model and your responsibilities.
sap Hana Overview Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/large-instances/hana-overview-architecture.md
Title: Overview of SAP HANA on Azure (Large Instances) | Microsoft Docs description: Overview of how to deploy SAP HANA on Azure (Large Instances).
sap Hana Overview High Availability Disaster Recovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/large-instances/hana-overview-high-availability-disaster-recovery.md
Title: High availability and disaster recovery of SAP HANA on Azure (Large Instances) | Microsoft Docs description: Learn how to establish high availability and plan for disaster recovery of SAP HANA on Azure (Large Instances).
sap Hana Overview Infrastructure Connectivity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/large-instances/hana-overview-infrastructure-connectivity.md
Title: Infrastructure and connectivity to SAP HANA on Azure (large instances) | Microsoft Docs description: Configure required connectivity infrastructure to use SAP HANA on Azure (large instances).
sap Hana Setup Smt https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/large-instances/hana-setup-smt.md
Title: How to set up SMT server for SAP HANA on Azure (Large Instances) | Microsoft Docs description: Learn how to set up SMT server for SAP HANA on Azure (Large Instances).
sap Hana Sizing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/large-instances/hana-sizing.md
Title: Sizing of SAP HANA on Azure (Large Instances) | Microsoft Docs description: Learn about sizing of SAP HANA on Azure (Large Instances).
sap Hana Storage Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/large-instances/hana-storage-architecture.md
Title: Storage architecture of SAP HANA on Azure (Large Instances) | Microsoft Docs description: Learn about the storage architecture for SAP HANA on Azure (Large Instances).
sap Hana Supported Scenario https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/large-instances/hana-supported-scenario.md
Title: Supported scenarios for SAP HANA on Azure (Large Instances)| Microsoft Docs description: Learn about scenarios supported for SAP HANA on Azure (Large Instances) and their architectural details.
sap Large Instance Os Backup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/large-instances/large-instance-os-backup.md
Title: Operating system backup and restore of SAP HANA on Azure (Large Instances) | Microsoft Docs description: Learn how to do operating system backup and restore for SAP HANA on Azure (Large Instances).
sap Os Backup Hli Type Ii Skus https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/large-instances/os-backup-hli-type-ii-skus.md
Title: Operating system backup and restore of SAP HANA on Azure (Large Instances) type II SKUs| Microsoft Docs description: Perform Operating system backup and restore for SAP HANA on Azure (Large Instances) Type II SKUs
sap Os Compatibility Matrix Hana Large Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/large-instances/os-compatibility-matrix-hana-large-instance.md
Title: Operating system compatibility matrix for SAP HANA (Large Instances)| Microsoft Docs description: The compatibility matrix represents the compatibility of different versions of operating system with different hardware types (Large Instances).
sap Os Upgrade Hana Large Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/large-instances/os-upgrade-hana-large-instance.md
Title: Operating system upgrade for the SAP HANA on Azure (Large Instances)| Microsoft Docs description: Learn to do an operating system upgrade for SAP HANA on Azure (Large Instances).
sap Troubleshooting Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/large-instances/troubleshooting-monitoring.md
Title: Monitoring SAP HANA on Azure (Large Instances) | Microsoft Docs description: Learn about monitoring SAP HANA on an Azure (Large Instances).
sap Cal S4h https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/cal-s4h.md
Title: Deploy SAP S/4HANA or BW/4HANA on an Azure VM | Microsoft Docs description: Deploy SAP S/4HANA or BW/4HANA on an Azure VM tags: azure-resource-manager
-keywords: ''
- ms.assetid: 44bbd2b6-a376-4b5c-b824-e76917117fa9
sap Certifications https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/certifications.md
Title: Microsoft Azure certifications for SAP | Microsoft Docs description: Updated list of current configurations and certifications of SAP on the Azure platform. tags: azure-resource-manager
-keywords: ''
vm-linux
Last updated 01/25/2022 - # SAP certifications and configurations running on Microsoft Azure
sap Hana Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/hana-get-started.md
Title: Installation of SAP HANA on Azure virtual machines | Microsoft Docs' description: Guide for installation of SAP HANA on Azure VMs tags: azure-resource-manager
-keywords: ''
ms.assetid: c51a2a06-6e97-429b-a346-b433a785c9f0
sap High Availability Guide Standard Load Balancer Outbound Connections https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/high-availability-guide-standard-load-balancer-outbound-connections.md
documentationcenter: saponazure tags: azure-resource-manager
-keywords: ''
sap High Availability Guide Windows Azure Files Smb https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/high-availability-guide-windows-azure-files-smb.md
documentationcenter: saponazure tags: azure-resource-manager
-keywords: ''
ms.assetid: 5e514964-c907-4324-b659-16dd825f6f87
Here are prerequisites for the installation of SAP NetWeaver HA systems on Azure
The Active Directory administrator should create, in advance, three domain users with Local Administrator rights and one global group in the local Windows Server Active Directory instance.
-*SAPCONT_ADMIN@SAPCONTOSO.local* has Domain Administrator rights and is used to run *SAPinst*, *\<sid>adm*, and *SAPService\<SID>* as SAP system users and the *SAP_\<SAPSID>_GlobalAdmin* group. The SAP Installation Guide contains the specific details required for these accounts.
+`SAPCONT_ADMIN@SAPCONTOSO.local` has Domain Administrator rights and is used to run *SAPinst*, *\<sid>adm*, and *SAPService\<SID>* as SAP system users and the *SAP_\<SAPSID>_GlobalAdmin* group. The SAP Installation Guide contains the specific details required for these accounts.
> [!NOTE] > SAP user accounts should not be Domain Administrator. We generally recommend that you don't use *\<sid>adm* to run SAPinst.
The Azure administrator should complete the following tasks:
![Screenshot of the Azure portal that shows options for private endpoint definition.](media/virtual-machines-shared-sap-high-availability-guide/create-sa-3.png)
- 1. If necessary, add a DNS A record into Windows DNS for *<storage_account_name>.file.core.windows.net*. (This might need to be in a new DNS zone.) Discuss this topic with the DNS administrator. The new zone should not update outside an organization.
+ 1. If necessary, add a DNS A record into Windows DNS for `<storage_account_name>.file.core.windows.net`. (This might need to be in a new DNS zone.) Discuss this topic with the DNS administrator. The new zone should not update outside an organization.
![Screenshot of DNS Manager that shows private endpoint DNS definition.](media/virtual-machines-shared-sap-high-availability-guide/pe-dns-1.png)
The Azure administrator should complete the following tasks:
This script creates either a computer account or a service account in Active Directory. It has the following requirements:
- * The user who's running the script must have permission to create objects in the Active Directory domain that contains the SAP servers. Typically, an organization uses a Domain Administrator account such as *SAPCONT_ADMIN@SAPCONTOSO.local*.
- * Before the user runs the script, confirm that this Active Directory domain user account is synchronized with Microsoft Entra ID. An example of this would be to open the Azure portal and go to Microsoft Entra users, check that the user *SAPCONT_ADMIN@SAPCONTOSO.local* exists, and verify the Microsoft Entra user account.
- * Grant the Contributor role-based access control (RBAC) role to this Microsoft Entra user account for the resource group that contains the storage account that holds the file share. In this example, the user *SAPCONT_ADMIN@SAPCONTOSO.onmicrosoft.com* is granted the Contributor role to the respective resource group.
+ * The user who's running the script must have permission to create objects in the Active Directory domain that contains the SAP servers. Typically, an organization uses a Domain Administrator account such as `SAPCONT_ADMIN@SAPCONTOSO.local`.
+ * Before the user runs the script, confirm that this Active Directory domain user account is synchronized with Microsoft Entra ID. An example of this would be to open the Azure portal and go to Microsoft Entra users, check that the user `SAPCONT_ADMIN@SAPCONTOSO.local` exists, and verify the Microsoft Entra user account.
+ * Grant the Contributor role-based access control (RBAC) role to this Microsoft Entra user account for the resource group that contains the storage account that holds the file share. In this example, the user `SAPCONT_ADMIN@SAPCONTOSO.onmicrosoft.com` is granted the Contributor role to the respective resource group.
* The user should run the script while logged on to a Windows Server instance by using an Active Directory domain user account with the permission as specified earlier.
- In this example scenario, the Active Directory administrator would log on to the Windows Server instance as *SAPCONT_ADMIN@SAPCONTOSO.local*. When the administrator is using the PowerShell command `Connect-AzAccount`, the administrator connects as user *SAPCONT_ADMIN@SAPCONTOSO.onmicrosoft.com*. Ideally, the Active Directory administrator and the Azure administrator should work together on this task.
+ In this example scenario, the Active Directory administrator would log on to the Windows Server instance as `SAPCONT_ADMIN@SAPCONTOSO.local`. When the administrator is using the PowerShell command `Connect-AzAccount`, the administrator connects as user `SAPCONT_ADMIN@SAPCONTOSO.onmicrosoft.com`. Ideally, the Active Directory administrator and the Azure administrator should work together on this task.
![Screenshot of the PowerShell script that creates a local Active Directory account.](media/virtual-machines-shared-sap-high-availability-guide/ps-script-1.png)
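For the Contributor role assignment called out in the requirements above, the following Azure CLI sketch shows the shape of the command. The user principal name is the example from this article; the subscription ID and resource group are placeholders.
```bash
# A sketch: grant the Contributor role on the resource group that contains the
# storage account holding the file share. Placeholder values are assumptions.
az role assignment create \
  --assignee "SAPCONT_ADMIN@SAPCONTOSO.onmicrosoft.com" \
  --role "Contributor" \
  --scope "/subscriptions/<subscriptionId>/resourceGroups/<storageAccountResourceGroup>"
```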
The Azure administrator should complete the following tasks:
An SAP Basis administrator should complete these tasks:
-1. [Install the Windows cluster on ASCS/ERS nodes and add the cloud witness](sap-high-availability-infrastructure-wsfc-shared-disk.md#0d67f090-7928-43e0-8772-5ccbf8f59aab).
-2. The first cluster node installation asks for the Azure Files SMB storage account name. Enter the FQDN *<storage_account_name>.file.core.windows.net*. If SAPinst doesn't accept more than 13 characters, the SWPM version is too old.
+1. [Install the Windows cluster on ASCS/ERS nodes and add the cloud witness](sap-high-availability-infrastructure-wsfc-shared-disk.md#install-and-configure-windows-failover-cluster).
+2. The first cluster node installation asks for the Azure Files SMB storage account name. Enter the FQDN `<storage_account_name>.file.core.windows.net`. If SAPinst doesn't accept more than 13 characters, the SWPM version is too old.
3. [Modify the SAP profile of the ASCS/SCS instance](sap-high-availability-installation-wsfc-shared-disk.md#10822f4f-32e7-4871-b63a-9b86c76ce761). 4. [Update the probe port for the SAP \<SID> role in Windows Server Failover Cluster (WSFC)](sap-high-availability-installation-wsfc-shared-disk.md#10822f4f-32e7-4871-b63a-9b86c76ce761). 5. Continue with SWPM installation for the second ASCS/ERS node. SWPM requires only the path of the profile directory. Enter the full UNC path to the profile directory.
sap High Availability Guide Windows Dfs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/high-availability-guide-windows-dfs.md
documentationcenter: saponazure tags: azure-resource-manager
-keywords: ''
ms.assetid: 5e514964-c907-4324-b659-16dd825f6f87
sap High Availability Guide Windows Netapp Files Smb https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/high-availability-guide-windows-netapp-files-smb.md
documentationcenter: saponazure tags: azure-resource-manager
-keywords: ''
ms.assetid: 5e514964-c907-4324-b659-16dd825f6f87
[dfs-n-reference]:high-availability-guide-windows-dfs.md [anf-azure-doc]:../../azure-netapp-files/azure-netapp-files-introduction.md
-[anf-avail-matrix]:https://azure.microsoft.com/global-infrastructure/services/?products=storage&regions=all
[anf-sap-applications-azure]:https://www.netapp.com/us/media/tr-4746.pdf
-[2205917]:https://launchpad.support.sap.com/#/notes/2205917
-[1944799]:https://launchpad.support.sap.com/#/notes/1944799
[1928533]:https://launchpad.support.sap.com/#/notes/1928533 [2015553]:https://launchpad.support.sap.com/#/notes/2015553 [2178632]:https://launchpad.support.sap.com/#/notes/2178632
-[2191498]:https://launchpad.support.sap.com/#/notes/2191498
-[2243692]:https://launchpad.support.sap.com/#/notes/2243692
-[1984787]:https://launchpad.support.sap.com/#/notes/1984787
[1999351]:https://launchpad.support.sap.com/#/notes/1999351
-[1410736]:https://launchpad.support.sap.com/#/notes/1410736
-
-[sap-swcenter]:https://support.sap.com/en/my-support/software-downloads.html
-
-[suse-ha-guide]:https://www.suse.com/products/sles-for-sap/resource-library/sap-best-practices/
-[suse-drbd-guide]:https://www.suse.com/documentation/sle-ha-12/singlehtml/book_sleha_techguides/book_sleha_techguides.html
-[suse-ha-12sp3-relnotes]:https://www.suse.com/releasenotes/x86_64/SLE-HA/12-SP3/
-
-[template-multisid-xscs]:https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fapplication-workloads%2Fsap%2Fsap-3-tier-marketplace-image-multi-sid-xscs-md%2Fazuredeploy.json
-[template-converged]:https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fapplication-workloads%2Fsap%2Fsap-3-tier-marketplace-image-converged-md%2Fazuredeploy.json
-[template-file-server]:https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fapplication-workloads%2Fsap%2Fsap-file-server-md%2Fazuredeploy.json
[sap-hana-ha]:sap-hana-high-availability.md
-[nfs-ha]:high-availability-guide-suse-nfs.md
This article describes how to deploy, configure the virtual machines, install the cluster framework, and install a highly available SAP NetWeaver 7.50 system on Windows VMs, using [SMB](/windows/win32/fileio/microsoft-smb-protocol-and-cifs-protocol-overview) on [Azure NetApp Files](../../azure-netapp-files/azure-netapp-files-introduction.md).
When considering Azure NetApp Files for the SAP Netweaver architecture, be aware
## Prepare the infrastructure for SAP HA by using a Windows failover cluster
-1. [Set the ASCS/SCS load balancing rules for the Azure internal load balancer](./sap-high-availability-infrastructure-wsfc-shared-disk.md#fe0bd8b5-2b43-45e3-8295-80bee5415716).
-2. [Add Windows virtual machines to the domain](./sap-high-availability-infrastructure-wsfc-shared-disk.md#e69e9a34-4601-47a3-a41c-d2e11c626c0c).
-3. [Add registry entries on both cluster nodes of the SAP ASCS/SCS instance](./sap-high-availability-infrastructure-wsfc-shared-disk.md#661035b2-4d0f-4d31-86f8-dc0a50d78158)
-4. [Set up a Windows Server failover cluster for an SAP ASCS/SCS instance](./sap-high-availability-infrastructure-wsfc-shared-disk.md#0d67f090-7928-43e0-8772-5ccbf8f59aab)
+1. [Set the ASCS/SCS load balancing rules for the Azure internal load balancer](./sap-high-availability-infrastructure-wsfc-shared-disk.md#create-azure-internal-load-balancer).
+2. [Add Windows virtual machines to the domain](./sap-high-availability-infrastructure-wsfc-shared-disk.md#add-the-windows-vms-to-the-domain).
+3. [Add registry entries on both cluster nodes of the SAP ASCS/SCS instance](./sap-high-availability-infrastructure-wsfc-shared-disk.md#add-registry-entries-on-both-cluster-nodes-of-the-ascsscs-instance)
+4. [Set up a Windows Server failover cluster for an SAP ASCS/SCS instance](./sap-high-availability-infrastructure-wsfc-shared-disk.md#install-and-configure-windows-failover-cluster)
5. If you are using Windows Server 2016, we recommend that you configure [Azure Cloud Witness](/windows-server/failover-clustering/deploy-cloud-witness).
Update parameters in the SAP ASCS/SCS instance profile \<SID>_ASCS/SCS\<Nr>_\<Ho
Parameter `enque/encni/set_so_keepalive` is only needed if using ENSA1. Restart the SAP ASCS/SCS instance.
-Set `KeepAlive` parameters on both SAP ASCS/SCS cluster nodes follow the instructions to [Set registry entries on the cluster nodes of the SAP ASCS/SCS instance](./sap-high-availability-infrastructure-wsfc-shared-disk.md#661035b2-4d0f-4d31-86f8-dc0a50d78158).
+Set `KeepAlive` parameters on both SAP ASCS/SCS cluster nodes follow the instructions to [Set registry entries on the cluster nodes of the SAP ASCS/SCS instance](./sap-high-availability-infrastructure-wsfc-shared-disk.md#add-registry-entries-on-both-cluster-nodes-of-the-ascsscs-instance).
### Install a DBMS instance and SAP application servers
sap Lama Installation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/lama-installation.md
Title: SAP LaMa connector for Azure description: Learn how to manage SAP systems on Azure by using SAP LaMa. tags: azure-resource-manager
-keywords: ''
sap Rise Integration Network https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/rise-integration-network.md
Title: Network connectivity options in Azure with SAP RISE| Microsoft Docs description: Describes network connectivity between customer's own Azure environment and SAP RISE managed workloads tags: azure-resource-manager
-keywords: ''
sap Rise Integration Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/rise-integration-security.md
Title: Identity and security in Azure with SAP RISE| Microsoft Docs description: Describes integration scenarios of Azure security, identity and monitoring services with SAP RISE managed workloads tags: azure-resource-manager
-keywords: ''
sap Rise Integration Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/rise-integration-services.md
Title: Integrating Azure services with SAP RISE| Microsoft Docs description: Describes integration scenarios of Azure services with SAP RISE managed workloads tags: azure-resource-manager
-keywords: ''
sap Rise Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/rise-integration.md
Title: Integrating Azure with SAP RISE| Microsoft Docs description: Describes integrating SAP RISE managed virtual network with customer's own Azure environment tags: azure-resource-manager
-keywords: ''
sap Sap Ascs Ha Multi Sid Wsfc Azure Shared Disk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/sap-ascs-ha-multi-sid-wsfc-azure-shared-disk.md
Title: SAP ASCS/SCS multi-SID HA with WSFC and Azure shared disk | Microsoft Docs description: Learn about multi-SID high availability for an SAP ASCS/SCS instance with Windows Server Failover Clustering and an Azure shared disk.
-tags: azure-resource-manager
-keywords: ''
ms.assetid: cbf18abe-41cb-44f7-bdec-966f32c89325+ -- Previously updated : 12/16/2022 Last updated : 01/19/2024
This article focuses on how to move from a single SAP ASCS/SCS installation to c
You can use Azure Premium SSD disks as Azure shared disks for the SAP ASCS/SCS instance. The following limitations are currently in place: -- [Azure Ultra Disk Storage disks](../../virtual-machines/disks-types.md#ultra-disks) and [Azure Standard SSD disks](../../virtual-machines/disks-types.md#standard-ssds) are not supported as Azure shared disks for SAP workloads.
+- [Azure Ultra Disk Storage disks](../../virtual-machines/disks-types.md#ultra-disks) and [Azure Standard SSD disks](../../virtual-machines/disks-types.md#standard-ssds) aren't supported as Azure shared disks for SAP workloads.
- [Azure shared disks](../../virtual-machines/disks-shared.md) with [Premium SSD disks](../../virtual-machines/disks-types.md#premium-ssds) are supported for SAP deployment in availability sets and availability zones. - Azure shared disks with Premium SSD disks come with two storage options: - Locally redundant storage (LRS) for Premium SSD shared disks (`skuName` value of `Premium_LRS`) is supported with deployment in availability sets. - Zone-redundant storage (ZRS) for Premium SSD shared disks (`skuName` value of `Premium_ZRS`) is supported with deployment in availability zones. - The Azure shared disk value [maxShares](../../virtual-machines/disks-shared-enable.md?tabs=azure-cli#disk-sizes) determines how many cluster nodes can use the shared disk. For an SAP ASCS/SCS instance, you typically configure two nodes in WSFC. You then set the value for `maxShares` to `2`.-- An [Azure proximity placement group (PPG)](../../virtual-machines/windows/proximity-placement-groups.md) is not required for Azure shared disks. But for SAP deployment with PPGs, follow these guidelines:
+- An [Azure proximity placement group (PPG)](../../virtual-machines/windows/proximity-placement-groups.md) isn't required for Azure shared disks. But for SAP deployment with PPGs, follow these guidelines:
- If you're using PPGs for an SAP system deployed in a region, all virtual machines that share a disk must be part of the same PPG. - If you're using PPGs for an SAP system deployed across zones, as described in [Proximity placement groups with zonal deployments](proximity-placement-scenarios.md#proximity-placement-groups-with-zonal-deployments), you can attach `Premium_ZRS` storage to virtual machines that share a disk.
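To make the `maxShares` setting concrete, here's a minimal Azure CLI sketch that creates a Premium SSD shared disk for a two-node WSFC cluster. The resource group and disk names are placeholders; choose `Premium_LRS` for an availability set deployment or `Premium_ZRS` for a zonal deployment.
```bash
# A sketch: create a Premium SSD shared disk that both ASCS/SCS cluster nodes can attach.
# Names and size are placeholders; pick the SKU to match your deployment model.
az disk create \
  --resource-group <resourceGroupName> \
  --name <SID>-ascs-shared-disk \
  --size-gb 128 \
  --sku Premium_LRS \
  --max-shares 2
```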
We strongly recommend using at least Windows Server 2019 Datacenter, for these r
## Architecture
-Both ERS1 and ERS2 are supported in a multi-SID configuration. A mix of ERS1 and ERS2 is not supported in the same cluster.
+Both ERS1 and ERS2 are supported in a multi-SID configuration. A mix of ERS1 and ERS2 isn't supported in the same cluster.
The following example shows two SAP SIDs. Both have an ERS1 architecture where:
The next example also shows two SAP SIDs. Both have an ERS2 architecture where:
## Infrastructure preparation
-You'll install a new SAP SID PR2 instance, in addition to the existing clustered SAP PR1 ASCS/SCS instance.
+You install a new SAP SID PR2 instance, in addition to the existing clustered SAP PR1 ASCS/SCS instance.
### Host names and IP addresses
The steps in this article remain the same for both deployment types. But if your
### Create an Azure internal load balancer
-SAP ASCS, SAP SCS, and SAP ERS2 use virtual host names and virtual IP addresses. On Azure, a [load balancer](../../load-balancer/load-balancer-overview.md) is required to use a virtual IP address.
-We strongly recommend using a [standard load balancer](../../load-balancer/quickstart-load-balancer-standard-public-portal.md).
-
-You need to add configuration to the existing load balancer for the second SAP SID instance, PR2, for ASCS, SCS, or ERS. The configuration for the first SAP SID, PR1, should be already in place.
-
-#### (A)SCS PR2 [instance number 02]
--- Front-end configuration:
- - Static ASCS/SCS IP address 10.0.0.45.
-- Back-end configuration:
- - Already in place. The VMs were added to the back-end pool during configuration of SAP SID PR1.
-- Probe port:
- - Port 620*nr* [62002]. Leave the default options for protocol (TCP), interval (5), and unhealthy threshold (2).
-- Load-balancing rules:
- - If you're using a standard load balancer, select high-availability (HA) ports.
- - If you're using a basic load balancer, create load-balancing rules for the following ports:
- - 32*nr* TCP [3202]
- - 36*nr* TCP [3602]
- - 39*nr* TCP [3902]
- - 81*nr* TCP [8102]
- - 5*nr*13 TCP [50213]
- - 5*nr*14 TCP [50214]
- - 5*nr*16 TCP [50216]
-
- - Associate load-balancing rules with the PR2 ASCS front-end IP address, the health probe, and the existing back-end pool.
-
- - Make sure that idle timeout is set to the maximum value of 30 minutes, and that floating IP (direct server return) is enabled.
+For the multi-SID configuration of SAP SID PR2, you can use the same internal load balancer that you created for the SAP SID PR1 system. For the ENSA1 architecture on Windows, you need only one virtual IP address for SAP ASCS/SCS. The ENSA2 architecture requires two virtual IP addresses: one for SAP ASCS and another for ERS2.
+
+Configure an additional front-end IP address and load-balancing rule for the SAP SID PR2 system on the existing load balancer by using the following guidelines. This section assumes that the standard internal load balancer for SAP SID PR1 is already configured as described in [create load balancer](./sap-high-availability-infrastructure-wsfc-shared-disk.md#create-azure-internal-load-balancer). A PowerShell sketch after the note that follows this list illustrates these settings.
+
+1. Open the standard internal load balancer that you created for the SAP SID PR1 system.
+2. **Frontend IP Configuration:** Create a front-end IP address (for example, 10.0.0.45).
+3. **Backend Pool:** The back-end pool is the same as for the SAP SID PR1 system.
+4. **Inbound rules:** Create a load-balancing rule.
+   - Frontend IP address: Select the front-end IP address
+   - Backend pool: Select the back-end pool
+   - Check "High availability ports"
+   - Protocol: TCP
+   - Health Probe: Create a health probe with the following details
+     - Protocol: TCP
+     - Port: [for example: 620<Instance-no.> for SAP SID PR2 ASCS]
+     - Interval: 5
+     - Probe Threshold: 2
+   - Idle timeout (minutes): 30
+   - Check "Enable Floating IP"
+5. Applicable only to the ENSA2 architecture: Create an additional front-end IP address (10.0.0.44) and a load-balancing rule (use 621<Instance-no.> for the ERS2 health probe port) as described in steps 2 and 4.
+
+> [!NOTE]
> The health probe configuration property `numberOfProbes`, otherwise known as "Unhealthy threshold" in the Azure portal, isn't respected. To control the number of successful or failed consecutive probes, set the property `probeThreshold` to `2`. It's currently not possible to set this property by using the Azure portal, so use either the [Azure CLI](/cli/azure/network/lb/probe) or [PowerShell](/powershell/module/az.network/new-azloadbalancerprobeconfig) command.
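
The following PowerShell sketch illustrates the preceding settings end to end. It's a minimal example, not a definitive implementation: the load balancer, resource group, and configuration names are placeholders, the probe port assumes ASCS instance number 02, the `-ProbeThreshold` parameter is assumed to be available in your Az.Network module version, and `-ProbeCount` (the `numberOfProbes` value that isn't respected, per the preceding note) is kept only for compatibility.

```powershell
# Minimal sketch with placeholder names. Adjust the load balancer, resource group,
# and instance number (probe port 620<instance number>) to match your environment.
$rg  = "sap-ha-rg"
$ilb = Get-AzLoadBalancer -Name "pr1-ilb" -ResourceGroupName $rg

# Front-end IP address for the PR2 ASCS/SCS virtual host name
$ilb | Add-AzLoadBalancerFrontendIpConfig -Name "pr2-ascs-frontend" `
    -PrivateIpAddress "10.0.0.45" -SubnetId $ilb.FrontendIpConfigurations[0].Subnet.Id | Out-Null

# TCP health probe on port 62002 (620 + instance number 02).
# -ProbeThreshold is assumed to exist in your Az.Network version; see the note above.
$ilb | Add-AzLoadBalancerProbeConfig -Name "pr2-ascs-probe" `
    -Protocol Tcp -Port 62002 -IntervalInSeconds 5 -ProbeCount 2 -ProbeThreshold 2 | Out-Null

# HA-ports load-balancing rule (protocol All, ports 0) with floating IP enabled,
# reusing the existing PR1 back-end pool.
$ilb | Add-AzLoadBalancerRuleConfig -Name "pr2-ascs-lbrule" `
    -FrontendIpConfiguration (Get-AzLoadBalancerFrontendIpConfig -LoadBalancer $ilb -Name "pr2-ascs-frontend") `
    -BackendAddressPool $ilb.BackendAddressPools[0] `
    -Probe (Get-AzLoadBalancerProbeConfig -LoadBalancer $ilb -Name "pr2-ascs-probe") `
    -Protocol All -FrontendPort 0 -BackendPort 0 `
    -EnableFloatingIP -IdleTimeoutInMinutes 30 | Out-Null

# Commit the changes to the load balancer
$ilb | Set-AzLoadBalancer
```

For the ENSA2 architecture, repeat the front-end IP, probe, and rule steps for ERS2 by using the second front-end IP address (10.0.0.44) and probe port 621<Instance-no.>.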
-#### ERS2 PR2 [instance number 12]
-
-Because ERS2 is clustered, you must configure the ERS2 virtual IP address on an Azure internal load balancer in addition to the preceding SAP ASCS/SCS IP address. This section applies only if you're using the ERS2 architecture for PR2.
--- New front-end configuration:
- - Static SAP ERS2 IP address 10.0.0.46.
--- Back-end configuration:
- - The VMs were already added to the internal load balancer's back-end pool.
--- New probe port:
- - Port 621*nr* [62112]. Leave the default options for protocol (TCP), interval (5), and unhealthy threshold (2).
--- New load-balancing rules:
- - If you're using a standard load balancer, select HA ports.
- - If you're using a basic load balancer, create load-balancing rules for the following ports:
- - 32*nr* TCP [3212]
- - 33*nr* TCP [3312]
- - 5*nr*13 TCP [51212]
- - 5*nr*14 TCP [51212]
- - 5*nr*16 TCP [51212]
-
- - Associate load-balancing rules with the PR2 ERS2 front-end IP address, the health probe, and the existing back-end pool.
+> [!IMPORTANT]
+> A floating IP address isn't supported on a network interface card (NIC) secondary IP configuration in load-balancing scenarios. For details, see [Azure Load Balancer limitations](../../load-balancer/load-balancer-multivip-overview.md#limitations). If you need another IP address for the VM, deploy a second NIC.
- - Make sure that idle timeout is set to the maximum value of 30 minutes, and that floating IP (direct server return) is enabled.
+> [!NOTE]
+> When VMs without public IP addresses are placed in the back-end pool of an internal (no public IP address) Standard Azure load balancer, there will be no outbound internet connectivity unless you perform additional configuration to allow routing to public endpoints. For details on how to achieve outbound connectivity, see [Public endpoint connectivity for virtual machines using Azure Standard Load Balancer in SAP high-availability scenarios](./high-availability-guide-standard-load-balancer-outbound-connections.md).
### Create and attach a second Azure shared disk
Update-AzVm -VM $vm -ResourceGroupName $ResourceGroupName -Verbose
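
If you prefer to script the disk creation as well, the following hedged sketch shows one way to create a second Premium SSD shared disk with `maxShares` set to `2` and attach it to both cluster nodes. Disk, VM, location, and resource group names are placeholders; adjust the size and LUN to your environment.

```powershell
# Hedged sketch with placeholder names: create a second shared Premium SSD disk
# (maxShares = 2) for the new SAP ASCS/SCS instance and attach it to both cluster nodes.
$ResourceGroupName = "sap-ha-rg"
$Location          = "westeurope"

# Use Premium_ZRS instead of Premium_LRS for zonal deployments.
$diskConfig = New-AzDiskConfig -Location $Location -SkuName "Premium_LRS" `
    -CreateOption Empty -DiskSizeGB 128 -MaxSharesCount 2

$sharedDisk = New-AzDisk -ResourceGroupName $ResourceGroupName `
    -DiskName "pr2-ascs-shared-disk" -Disk $diskConfig

foreach ($vmName in @("pr1-ascs-0", "pr1-ascs-1")) {
    $vm = Get-AzVM -ResourceGroupName $ResourceGroupName -Name $vmName
    # Host caching is typically left disabled (None) for shared disks.
    $vm = Add-AzVMDataDisk -VM $vm -Name $sharedDisk.Name -CreateOption Attach `
        -ManagedDiskId $sharedDisk.Id -Lun 1 -Caching None
    Update-AzVM -VM $vm -ResourceGroupName $ResourceGroupName -Verbose
}
```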
1. Create a DNS entry for the virtual host name of the new SAP ASCS/SCS instance in Windows DNS Manager (a PowerShell sketch after these steps shows equivalent commands).
- The IP address that you assign to the virtual host name in DNS must be the same as the IP address that you assigned in Azure Load Balancer.
+ The IP address that you assigned to the virtual host name in DNS must be the same as the IP address that you assigned in Azure Load Balancer.
![Screenshot that shows options for defining a DNS entry for the SAP ASCS/SCS cluster virtual name and IP address.][sap-ha-guide-figure-6009] 2. If you're using a clustered instance of SAP ERS2, you need to reserve in DNS a virtual host name for ERS2.
- The IP address that you assign to the virtual host name for ERS2 in DNS must be the same as the IP address that you assigned in Azure Load Balancer.
+ The IP address that you assigned to the virtual host name for ERS2 in DNS must be the same as the IP address that you assigned in Azure Load Balancer.
![Screenshot that shows options for defining a DNS entry for the SAP ERS2 cluster virtual name and IP address.][sap-ha-guide-figure-6010]
-3. To define the IP address that's assigned to the virtual host name, select **DNS Manager** > **Domain**.
+3. To define the IP address assigned to the virtual host name, select **DNS Manager** > **Domain**.
![Screenshot that shows a new virtual name and IP address for SAP ASCS/SCS and ERS2 cluster configuration.][sap-ha-guide-figure-6011]
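
If you manage DNS with the Windows Server DnsServer PowerShell module, a sketch like the following creates the same records. The zone, host names, and DNS server name are placeholders; the IP addresses match the load balancer example in this article, and the second command applies only to the ENSA2 architecture.

```powershell
# Hedged sketch, placeholder zone, host, and server names. Run where the DnsServer
# module is available (a DNS server or a host with the RSAT DNS tools installed).
Add-DnsServerResourceRecordA -ZoneName "contoso.local" -Name "pr2-ascs-vhost" `
    -IPv4Address "10.0.0.45" -ComputerName "dns-server-1"

# Only for the ENSA2 architecture, which also needs a clustered ERS2 virtual host name:
Add-DnsServerResourceRecordA -ZoneName "contoso.local" -Name "pr2-ers2-vhost" `
    -IPv4Address "10.0.0.44" -ComputerName "dns-server-1"
```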
Follow the SAP-described installation procedure. Be sure to select **First Clust
### Modify the SAP profile of the ASCS/SCS instance
-If you're running ERS1, add the SAP profile parameter `enque/encni/set_so_keepalive`. The profile parameter prevents connections between SAP work processes and the enqueue server from closing when they're idle for too long. The SAP parameter is not required for ERS2.
+If you're running ERS1, add the SAP profile parameter `enque/encni/set_so_keepalive`. The profile parameter prevents connections between SAP work processes and the enqueue server from closing when they're idle for too long. The SAP parameter isn't required for ERS2.
1. Add this profile parameter to the SAP ASCS/SCS instance profile, if you're using ERS1:
The code for the function `Set-AzureLoadBalancerHealthCheckProbePortOnSAPCluster
### Continue with the SAP installation
-1. Install the database instance by following the process that's described in the SAP installation guide.
+1. Install the database instance by following the process described in the SAP installation guide.
2. Install SAP on the second cluster node by following the steps that are described in the SAP installation guide.
-3. Install the SAP Primary Application Server (PAS) instance on the virtual machine that you've designated to host the PAS.
+3. Install the SAP Primary Application Server (PAS) instance on the virtual machine that is designated to host the PAS.
Follow the process described in the SAP installation guide. There are no dependencies on Azure. 4. Install additional SAP application servers on the virtual machines that are designated to host SAP application server instances.
sap Sap Ascs Ha Multi Sid Wsfc Shared Disk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/sap-ascs-ha-multi-sid-wsfc-shared-disk.md
documentationcenter: saponazure tags: azure-resource-manager
-keywords: ''
ms.assetid: cbf18abe-41cb-44f7-bdec-966f32c89325
You have already configured a WSFC cluster to use for one SAP ASCS/SCS instance
> [!IMPORTANT] > The setup must meet the following conditions:
+>
> * The SAP ASCS/SCS instances must share the same WSFC cluster. > * Each database management system (DBMS) SID must have its own dedicated WSFC cluster. > * SAP application servers that belong to one SAP system SID must have their own dedicated VMs.
$ILB | Set-AzLoadBalancer
Write-Host "Successfully added new IP '$ILBIP' to the internal load balancer '$ILBName'!" -ForegroundColor Green ```+ After the script has run, the results are displayed in the Azure portal, as shown in the following screenshot: ![New front-end IP pool in the Azure portal][sap-ha-guide-figure-6005]
After the script has run, the results are displayed in the Azure portal, as show
You must add a new cluster-share disk for each additional SAP ASCS/SCS instance. For Windows Server 2012 R2, the WSFC cluster share disk currently in use is the SIOS DataKeeper software solution. Do the following:+ 1. Add an additional disk or disks of the same size (which you need to stripe) to each of the cluster nodes, and format them. 2. Configure storage replication with SIOS DataKeeper.
The high-level procedure is as follows:
Also open the Azure internal load balancer probe port, which is 62350 in our scenario. It's described [in this article][sap-high-availability-installation-wsfc-shared-disk-win-firewall-probe-port]; a PowerShell sketch after this list shows an example firewall rule.
-8. Install the SAP primary application server on the new dedicated VM, as described in the SAP installation guide.
+7. Install the SAP primary application server on the new dedicated VM, as described in the SAP installation guide.
-9. Install the SAP additional application server on the new dedicated VM, as described in the SAP installation guide.
+8. Install the SAP additional application server on the new dedicated VM, as described in the SAP installation guide.
-10. [Test the SAP ASCS/SCS instance failover and SIOS replication][sap-high-availability-installation-wsfc-shared-disk-test-ascs-failover-and-sios-repl].
+9. [Test the SAP ASCS/SCS instance failover and SIOS replication][sap-high-availability-installation-wsfc-shared-disk-test-ascs-failover-and-sios-repl].
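
As a hedged illustration of opening the probe port mentioned in the list above, a Windows firewall rule like the following can be created on both cluster nodes. The rule name is a placeholder; adjust the port to your probe port.

```powershell
# Hedged sketch: allow inbound TCP traffic on the load balancer probe port
# (62350 in this scenario) on both cluster nodes.
New-NetFirewallRule -DisplayName "Azure ILB probe port 62350" -Direction Inbound `
    -Protocol TCP -LocalPort 62350 -Action Allow
```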
## Next steps -- [Networking limits: Azure Resource Manager][networking-limits-azure-resource-manager]-- [Multiple VIPs for Azure Load Balancer][load-balancer-multivip-overview]-
-[1928533]:https://launchpad.support.sap.com/#/notes/1928533
-[1999351]:https://launchpad.support.sap.com/#/notes/1999351
-[2015553]:https://launchpad.support.sap.com/#/notes/2015553
-[2178632]:https://launchpad.support.sap.com/#/notes/2178632
-[2243692]:https://launchpad.support.sap.com/#/notes/2243692
-[1869038]:https://launchpad.support.sap.com/#/notes/1869038
-[2287140]:https://launchpad.support.sap.com/#/notes/2287140
-[2492395]:https://launchpad.support.sap.com/#/notes/2492395
-
-[sap-installation-guides]:http://service.sap.com/instguides
+* [Networking limits: Azure Resource Manager][networking-limits-azure-resource-manager]
+* [Multiple VIPs for Azure Load Balancer][load-balancer-multivip-overview]
-[azure-resource-manager/management/azure-subscription-service-limits]:../../azure-resource-manager/management/azure-subscription-service-limits.md
-[azure-resource-manager/management/azure-subscription-service-limits-subscription]:../../azure-resource-manager/management/azure-subscription-service-limits.md
[networking-limits-azure-resource-manager]:../../azure-resource-manager/management/azure-subscription-service-limits.md#azure-resource-manager-virtual-networking-limits [load-balancer-multivip-overview]:../../load-balancer/load-balancer-multivip-overview.md - [sap-net-weaver-ports]:https://help.sap.com/viewer/ports
-[sap-high-availability-architecture-scenarios]:sap-high-availability-architecture-scenarios.md
-[sap-high-availability-guide-wsfc-shared-disk]:sap-high-availability-guide-wsfc-shared-disk.md
-[sap-high-availability-guide-wsfc-file-share]:sap-high-availability-guide-wsfc-file-share.md
-[sap-ascs-high-availability-multi-sid-wsfc]:sap-ascs-high-availability-multi-sid-wsfc.md
-[sap-high-availability-infrastructure-wsfc-shared-disk]:sap-high-availability-infrastructure-wsfc-shared-disk.md
[sap-high-availability-installation-wsfc-shared-disk]:sap-high-availability-installation-wsfc-shared-disk.md
-[sap-hana-ha]:sap-hana-high-availability.md
-[sap-suse-ascs-ha]:high-availability-guide-suse.md
-[sap-net-weaver-ports-ascs-scs-ports]:sap-high-availability-infrastructure-wsfc-shared-disk.md#fe0bd8b5-2b43-45e3-8295-80bee5415716
-
-[dbms-guide]:../../virtual-machines-windows-sap-dbms-guide-general.md
-
-[deployment-guide]:deployment-guide.md
-
-[dr-guide-classic]:https://go.microsoft.com/fwlink/?LinkID=521971
-
-[getting-started]:get-started.md
+[sap-net-weaver-ports-ascs-scs-ports]:sap-high-availability-infrastructure-wsfc-shared-disk.md#create-azure-internal-load-balancer
-[planning-guide]:planning-guide.md
-[planning-guide-11]:planning-guide.md
-[planning-guide-2.1]:planning-guide.md#1625df66-4cc6-4d60-9202-de8a0b77f803
-[planning-guide-2.2]:planning-guide.md#f5b3b18c-302c-4bd8-9ab2-c388f1ab3d10
-
-[planning-guide-microsoft-azure-networking]:planning-guide.md#61678387-8868-435d-9f8c-450b2424f5bd
-[planning-guide-storage-microsoft-azure-storage-and-data-disks]:planning-guide.md#a72afa26-4bf4-4a25-8cf7-855d6032157f
-
-[sap-high-availability-architecture-scenarios]:sap-high-availability-architecture-scenarios.md
-[sap-high-availability-guide-wsfc-shared-disk]:sap-high-availability-guide-wsfc-shared-disk.md
-[sap-high-availability-guide-wsfc-file-share]:sap-high-availability-guide-wsfc-file-share.md
-[sap-ascs-high-availability-multi-sid-wsfc]:sap-ascs-high-availability-multi-sid-wsfc.md
-[sap-high-availability-infrastructure-wsfc-shared-disk]:sap-high-availability-infrastructure-wsfc-shared-disk.md
-[sap-high-availability-infrastructure-wsfc-file-share]:sap-high-availability-infrastructure-wsfc-file-share.md
-
-[sap-high-availability-installation-wsfc-shared-disk]:sap-high-availability-installation-wsfc-shared-disk.md
[sap-high-availability-installation-wsfc-shared-disk-install-ascs]:sap-high-availability-installation-wsfc-shared-disk.md#31c6bd4f-51df-4057-9fdf-3fcbc619c170 [sap-high-availability-installation-wsfc-shared-disk-modify-ascs-profile]:sap-high-availability-installation-wsfc-shared-disk.md#e4caaab2-e90f-4f2c-bc84-2cd2e12a9556 [sap-high-availability-installation-wsfc-shared-disk-add-probe-port]:sap-high-availability-installation-wsfc-shared-disk.md#10822f4f-32e7-4871-b63a-9b86c76ce761 [sap-high-availability-installation-wsfc-shared-disk-win-firewall-probe-port]:sap-high-availability-installation-wsfc-shared-disk.md#4498c707-86c0-4cde-9c69-058a7ab8c3ac
-[sap-high-availability-installation-wsfc-shared-disk-change-ers-service-startup-type]:sap-high-availability-installation-wsfc-shared-disk.md#094bc895-31d4-4471-91cc-1513b64e406a
[sap-high-availability-installation-wsfc-shared-disk-test-ascs-failover-and-sios-repl]:sap-high-availability-installation-wsfc-shared-disk.md#18aa2b9d-92d2-4c0e-8ddd-5acaabda99e9
+[sap-high-availability-infrastructure-wsfc-shared-disk-install-sios]:sap-high-availability-infrastructure-wsfc-shared-disk.md#sios-datakeeper-cluster-edition-for-the-sap-ascsscs-cluster-share-disk
-[sap-high-availability-installation-wsfc-file-share]:sap-high-availability-installation-wsfc-file-share.md
-[sap-high-availability-infrastructure-wsfc-shared-disk-install-sios]:sap-high-availability-infrastructure-wsfc-shared-disk.md#5c8e5482-841e-45e1-a89d-a05c0907c868
-
-[Logo_Linux]:media/virtual-machines-shared-sap-shared/Linux.png
[Logo_Windows]:media/virtual-machines-shared-sap-shared/Windows.png
-[sap-ha-guide-figure-1000]:./media/virtual-machines-shared-sap-high-availability-guide/1000-wsfc-for-sap-ascs-on-azure.png
-[sap-ha-guide-figure-1001]:./media/virtual-machines-shared-sap-high-availability-guide/1001-wsfc-on-azure-ilb.png
-[sap-ha-guide-figure-1002]:./media/virtual-machines-shared-sap-high-availability-guide/1002-wsfc-sios-on-azure-ilb.png
-[sap-ha-guide-figure-2000]:./media/virtual-machines-shared-sap-high-availability-guide/2000-wsfc-sap-as-ha-on-azure.png
-[sap-ha-guide-figure-2001]:./media/virtual-machines-shared-sap-high-availability-guide/2001-wsfc-sap-ascs-ha-on-azure.png
-[sap-ha-guide-figure-2003]:./media/virtual-machines-shared-sap-high-availability-guide/2003-wsfc-sap-dbms-ha-on-azure.png
-[sap-ha-guide-figure-2004]:./media/virtual-machines-shared-sap-high-availability-guide/2004-wsfc-sap-ha-e2e-archit-template1-on-azure.png
-[sap-ha-guide-figure-2005]:./media/virtual-machines-shared-sap-high-availability-guide/2005-wsfc-sap-ha-e2e-arch-template2-on-azure.png
-
-[sap-ha-guide-figure-3000]:./media/virtual-machines-shared-sap-high-availability-guide/3000-template-parameters-sap-ha-arm-on-azure.png
-[sap-ha-guide-figure-3001]:./media/virtual-machines-shared-sap-high-availability-guide/3001-configuring-dns-servers-for-Azure-vnet.png
-[sap-ha-guide-figure-3002]:./media/virtual-machines-shared-sap-high-availability-guide/3002-configuring-static-IP-address-for-network-card-of-each-vm.png
-[sap-ha-guide-figure-3003]:./media/virtual-machines-shared-sap-high-availability-guide/3003-setup-static-ip-address-ilb-for-ascs-instance.png
-[sap-ha-guide-figure-3004]:./media/virtual-machines-shared-sap-high-availability-guide/3004-default-ascs-scs-ilb-balancing-rules-for-azure-ilb.png
-[sap-ha-guide-figure-3005]:./media/virtual-machines-shared-sap-high-availability-guide/3005-changing-ascs-scs-default-ilb-rules-for-azure-ilb.png
-[sap-ha-guide-figure-3006]:./media/virtual-machines-shared-sap-high-availability-guide/3006-adding-vm-to-domain.png
-[sap-ha-guide-figure-3007]:./media/virtual-machines-shared-sap-high-availability-guide/3007-config-wsfc-1.png
-[sap-ha-guide-figure-3008]:./media/virtual-machines-shared-sap-high-availability-guide/3008-config-wsfc-2.png
-[sap-ha-guide-figure-3009]:./media/virtual-machines-shared-sap-high-availability-guide/3009-config-wsfc-3.png
-[sap-ha-guide-figure-3010]:./media/virtual-machines-shared-sap-high-availability-guide/3010-config-wsfc-4.png
-[sap-ha-guide-figure-3011]:./media/virtual-machines-shared-sap-high-availability-guide/3011-config-wsfc-5.png
-[sap-ha-guide-figure-3012]:./media/virtual-machines-shared-sap-high-availability-guide/3012-config-wsfc-6.png
-[sap-ha-guide-figure-3013]:./media/virtual-machines-shared-sap-high-availability-guide/3013-config-wsfc-7.png
-[sap-ha-guide-figure-3014]:./media/virtual-machines-shared-sap-high-availability-guide/3014-config-wsfc-8.png
-[sap-ha-guide-figure-3015]:./media/virtual-machines-shared-sap-high-availability-guide/3015-config-wsfc-9.png
-[sap-ha-guide-figure-3016]:./media/virtual-machines-shared-sap-high-availability-guide/3016-config-wsfc-10.png
-[sap-ha-guide-figure-3017]:./media/virtual-machines-shared-sap-high-availability-guide/3017-config-wsfc-11.png
-[sap-ha-guide-figure-3018]:./media/virtual-machines-shared-sap-high-availability-guide/3018-config-wsfc-12.png
-[sap-ha-guide-figure-3019]:./media/virtual-machines-shared-sap-high-availability-guide/3019-assign-permissions-on-share-for-cluster-name-object.png
-[sap-ha-guide-figure-3020]:./media/virtual-machines-shared-sap-high-availability-guide/3020-change-object-type-include-computer-objects.png
-[sap-ha-guide-figure-3021]:./media/virtual-machines-shared-sap-high-availability-guide/3021-check-box-for-computer-objects.png
-[sap-ha-guide-figure-3022]:./media/virtual-machines-shared-sap-high-availability-guide/3022-set-security-attributes-for-cluster-name-object-on-file-share-quorum.png
-[sap-ha-guide-figure-3023]:./media/virtual-machines-shared-sap-high-availability-guide/3023-call-configure-cluster-quorum-setting-wizard.png
-[sap-ha-guide-figure-3024]:./media/virtual-machines-shared-sap-high-availability-guide/3024-selection-screen-different-quorum-configurations.png
-[sap-ha-guide-figure-3025]:./media/virtual-machines-shared-sap-high-availability-guide/3025-selection-screen-file-share-witness.png
-[sap-ha-guide-figure-3026]:./media/virtual-machines-shared-sap-high-availability-guide/3026-define-file-share-location-for-witness-share.png
-[sap-ha-guide-figure-3027]:./media/virtual-machines-shared-sap-high-availability-guide/3027-successful-reconfiguration-cluster-file-share-witness.png
-[sap-ha-guide-figure-3028]:./media/virtual-machines-shared-sap-high-availability-guide/3028-install-dot-net-framework-35.png
-[sap-ha-guide-figure-3029]:./media/virtual-machines-shared-sap-high-availability-guide/3029-install-dot-net-framework-35-progress.png
-[sap-ha-guide-figure-3030]:./media/virtual-machines-shared-sap-high-availability-guide/3030-sios-installer.png
-[sap-ha-guide-figure-3031]:./media/virtual-machines-shared-sap-high-availability-guide/3031-first-screen-sios-data-keeper-installation.png
-[sap-ha-guide-figure-3032]:./media/virtual-machines-shared-sap-high-availability-guide/3032-data-keeper-informs-service-be-disabled.png
-[sap-ha-guide-figure-3033]:./media/virtual-machines-shared-sap-high-availability-guide/3033-user-selection-sios-data-keeper.png
-[sap-ha-guide-figure-3034]:./media/virtual-machines-shared-sap-high-availability-guide/3034-domain-user-sios-data-keeper.png
-[sap-ha-guide-figure-3035]:./media/virtual-machines-shared-sap-high-availability-guide/3035-provide-sios-data-keeper-license.png
-[sap-ha-guide-figure-3036]:./media/virtual-machines-shared-sap-high-availability-guide/3036-data-keeper-management-config-tool.png
-[sap-ha-guide-figure-3037]:./media/virtual-machines-shared-sap-high-availability-guide/3037-tcp-ip-address-first-node-data-keeper.png
-[sap-ha-guide-figure-3038]:./media/virtual-machines-shared-sap-high-availability-guide/3038-create-replication-sios-job.png
-[sap-ha-guide-figure-3039]:./media/virtual-machines-shared-sap-high-availability-guide/3039-define-sios-replication-job-name.png
-[sap-ha-guide-figure-3040]:./media/virtual-machines-shared-sap-high-availability-guide/3040-define-sios-source-node.png
-[sap-ha-guide-figure-3041]:./media/virtual-machines-shared-sap-high-availability-guide/3041-define-sios-target-node.png
-[sap-ha-guide-figure-3042]:./media/virtual-machines-shared-sap-high-availability-guide/3042-define-sios-synchronous-replication.png
-[sap-ha-guide-figure-3043]:./media/virtual-machines-shared-sap-high-availability-guide/3043-enable-sios-replicated-volume-as-cluster-volume.png
-[sap-ha-guide-figure-3044]:./media/virtual-machines-shared-sap-high-availability-guide/3044-data-keeper-synchronous-mirroring-for-SAP-gui.png
-[sap-ha-guide-figure-3045]:./media/virtual-machines-shared-sap-high-availability-guide/3045-replicated-disk-by-data-keeper-in-wsfc.png
-[sap-ha-guide-figure-3046]:./media/virtual-machines-shared-sap-high-availability-guide/3046-dns-entry-sap-ascs-virtual-name-ip.png
-[sap-ha-guide-figure-3047]:./media/virtual-machines-shared-sap-high-availability-guide/3047-dns-manager.png
-[sap-ha-guide-figure-3048]:./media/virtual-machines-shared-sap-high-availability-guide/3048-default-cluster-probe-port.png
-[sap-ha-guide-figure-3049]:./media/virtual-machines-shared-sap-high-availability-guide/3049-cluster-probe-port-after.png
-[sap-ha-guide-figure-3050]:./media/virtual-machines-shared-sap-high-availability-guide/3050-service-type-ers-delayed-automatic.png
-[sap-ha-guide-figure-5000]:./media/virtual-machines-shared-sap-high-availability-guide/5000-wsfc-sap-sid-node-a.png
-[sap-ha-guide-figure-5001]:./media/virtual-machines-shared-sap-high-availability-guide/5001-sios-replicating-local-volume.png
-[sap-ha-guide-figure-5002]:./media/virtual-machines-shared-sap-high-availability-guide/5002-wsfc-sap-sid-node-b.png
-[sap-ha-guide-figure-5003]:./media/virtual-machines-shared-sap-high-availability-guide/5003-sios-replicating-local-volume-b-to-a.png
- [sap-ha-guide-figure-6001]:media/virtual-machines-shared-sap-high-availability-guide/6001-sap-multi-sid-ascs-scs-sid1.png [sap-ha-guide-figure-6002]:media/virtual-machines-shared-sap-high-availability-guide/6002-sap-multi-sid-ascs-scs.png [sap-ha-guide-figure-6003]:media/virtual-machines-shared-sap-high-availability-guide/6003-sap-multi-sid-full-landscape.png [sap-ha-guide-figure-6004]:media/virtual-machines-shared-sap-high-availability-guide/6004-sap-multi-sid-dns.png [sap-ha-guide-figure-6005]:media/virtual-machines-shared-sap-high-availability-guide/6005-sap-multi-sid-azure-portal.png [sap-ha-guide-figure-6006]:media/virtual-machines-shared-sap-high-availability-guide/6006-sap-multi-sid-sios-replication.png---
-[sap-ha-guide-figure-8001]:./media/virtual-machines-shared-sap-high-availability-guide/8001.png
-[sap-ha-guide-figure-8002]:./media/virtual-machines-shared-sap-high-availability-guide/8002.png
-[sap-ha-guide-figure-8003]:./media/virtual-machines-shared-sap-high-availability-guide/8003.png
-[sap-ha-guide-figure-8004]:./media/virtual-machines-shared-sap-high-availability-guide/8004.png
-[sap-ha-guide-figure-8005]:./media/virtual-machines-shared-sap-high-availability-guide/8005.png
-[sap-ha-guide-figure-8006]:./media/virtual-machines-shared-sap-high-availability-guide/8006.png
-[sap-ha-guide-figure-8007]:./media/virtual-machines-shared-sap-high-availability-guide/8007.png
-[sap-ha-guide-figure-8008]:./media/virtual-machines-shared-sap-high-availability-guide/8008.png
-[sap-ha-guide-figure-8009]:./media/virtual-machines-shared-sap-high-availability-guide/8009.png
-[sap-ha-guide-figure-8010]:./media/virtual-machines-shared-sap-high-availability-guide/8010.png
-[sap-ha-guide-figure-8011]:./media/virtual-machines-shared-sap-high-availability-guide/8011.png
-[sap-ha-guide-figure-8012]:./media/virtual-machines-shared-sap-high-availability-guide/8012.png
-[sap-ha-guide-figure-8013]:./media/virtual-machines-shared-sap-high-availability-guide/8013.png
-[sap-ha-guide-figure-8014]:./media/virtual-machines-shared-sap-high-availability-guide/8014.png
-[sap-ha-guide-figure-8015]:./media/virtual-machines-shared-sap-high-availability-guide/8015.png
-[sap-ha-guide-figure-8016]:./media/virtual-machines-shared-sap-high-availability-guide/8016.png
-[sap-ha-guide-figure-8017]:./media/virtual-machines-shared-sap-high-availability-guide/8017.png
-[sap-ha-guide-figure-8018]:./media/virtual-machines-shared-sap-high-availability-guide/8018.png
-[sap-ha-guide-figure-8019]:./media/virtual-machines-shared-sap-high-availability-guide/8019.png
-[sap-ha-guide-figure-8020]:./media/virtual-machines-shared-sap-high-availability-guide/8020.png
-[sap-ha-guide-figure-8021]:./media/virtual-machines-shared-sap-high-availability-guide/8021.png
-[sap-ha-guide-figure-8022]:./media/virtual-machines-shared-sap-high-availability-guide/8022.png
-[sap-ha-guide-figure-8023]:./media/virtual-machines-shared-sap-high-availability-guide/8023.png
-[sap-ha-guide-figure-8024]:./media/virtual-machines-shared-sap-high-availability-guide/8024.png
-[sap-ha-guide-figure-8025]:./media/virtual-machines-shared-sap-high-availability-guide/8025.png
--
-[sap-templates-3-tier-multisid-xscs-marketplace-image]:https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fsap-3-tier-marketplace-image-multi-sid-xscs%2Fazuredeploy.json
-[sap-templates-3-tier-multisid-xscs-marketplace-image-md]:https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fapplication-workloads%2Fsap%2Fsap-3-tier-marketplace-image-multi-sid-xscs-md%2Fazuredeploy.json
-[sap-templates-3-tier-multisid-db-marketplace-image]:https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fsap-3-tier-marketplace-image-multi-sid-db%2Fazuredeploy.json
-[sap-templates-3-tier-multisid-db-marketplace-image-md]:https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fapplication-workloads%2Fsap%2Fsap-3-tier-marketplace-image-multi-sid-db-md%2Fazuredeploy.json
-[sap-templates-3-tier-multisid-apps-marketplace-image]:https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fsap-3-tier-marketplace-image-multi-sid-apps%2Fazuredeploy.json
-[sap-templates-3-tier-multisid-apps-marketplace-image-md]:https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fapplication-workloads%2Fsap%2Fsap-3-tier-marketplace-image-multi-sid-apps-md%2Fazuredeploy.json
-
-[virtual-machines-azure-resource-manager-architecture-benefits-arm]:../../azure-resource-manager/management/overview.md#the-benefits-of-using-resource-manager
-
-[virtual-machines-manage-availability]:../../virtual-machines/availability.md
sap Sap Hana High Availability Rhel https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/sap-hana-high-availability-rhel.md
Title: High availability of SAP HANA on Azure VMs on RHEL | Microsoft Docs description: Establish high availability of SAP HANA on Azure virtual machines (VMs).
sap Sap Hana High Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/sap-hana-high-availability.md
Title: High availability for SAP HANA on Azure VMs on SLES description: Learn how to set up and use high availability for SAP HANA on Azure VMs on SUSE Linux Enterprise Server.-
sap Sap High Availability Guide Start https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/sap-high-availability-guide-start.md
documentationcenter: saponazure tags: azure-resource-manager
-keywords: ''
ms.assetid: 1cfcc14a-6795-4cfd-a740-aa09d6d2b817
sap Sap High Availability Guide Wsfc File Share https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/sap-high-availability-guide-wsfc-file-share.md
documentationcenter: saponazure tags: azure-resource-manager
-keywords: ''
ms.assetid: 5e514964-c907-4324-b659-16dd825f6f87
sap Sap High Availability Guide Wsfc Shared Disk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/sap-high-availability-guide-wsfc-shared-disk.md
documentationcenter: saponazure tags: azure-resource-manager
-keywords: ''
ms.assetid: f6fb85f8-c77a-4af1-bde8-1de7e4425d2e
sap Sap High Availability Infrastructure Wsfc File Share https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/sap-high-availability-infrastructure-wsfc-file-share.md
# Prepare Azure infrastructure for SAP high availability by using a Windows failover cluster and file share for SAP ASCS/SCS instances
-[1928533]:https://launchpad.support.sap.com/#/notes/1928533
-[1999351]:https://launchpad.support.sap.com/#/notes/1999351
-[2015553]:https://launchpad.support.sap.com/#/notes/2015553
-[2178632]:https://launchpad.support.sap.com/#/notes/2178632
-[2243692]:https://launchpad.support.sap.com/#/notes/2243692
[ms-blog-s2d-in-azure]:https://blogs.technet.microsoft.com/filecab/2016/05/05/s2dazuretp5/ [arm-sofs-s2d-managed-disks]:https://github.com/robotechredmond/301-storage-spaces-direct-md
[deploy-cloud-witness]:/windows-server/failover-clustering/deploy-cloud-witness [tuning-failover-cluster-network-thresholds]:https://techcommunity.microsoft.com/t5/Failover-Clustering/Tuning-Failover-Cluster-Network-Thresholds/ba-p/371834
-[sap-installation-guides]:http://service.sap.com/instguides
-
-[azure-resource-manager/management/azure-subscription-service-limits]:../../azure-resource-manager/management/azure-subscription-service-limits.md
-[azure-resource-manager/management/azure-subscription-service-limits-subscription]:../../azure-resource-manager/management/azure-subscription-service-limits.md
-
-[dbms-guide]:../../virtual-machines-windows-sap-dbms-guide-general.md
-
-[deployment-guide]:deployment-guide.md
-
-[dr-guide-classic]:https://go.microsoft.com/fwlink/?LinkID=521971
-
-[getting-started]:get-started.md
-
-[sap-high-availability-architecture-scenarios]:sap-high-availability-architecture-scenarios.md
-[sap-high-availability-guide-wsfc-shared-disk]:sap-high-availability-guide-wsfc-shared-disk.md
[sap-high-availability-guide-wsfc-file-share]:sap-high-availability-guide-wsfc-file-share.md
-[sap-ascs-high-availability-multi-sid-wsfc]:sap-ascs-high-availability-multi-sid-wsfc.md
[sap-high-availability-infrastructure-wsfc-shared-disk]:sap-high-availability-infrastructure-wsfc-shared-disk.md
-[sap-high-availability-infrastructure-wsfc-shared-disk-default-ascs-ilb-rules]:sap-high-availability-infrastructure-wsfc-shared-disk.md#fe0bd8b5-2b43-45e3-8295-80bee5415716
-[sap-high-availability-infrastructure-wsfc-shared-disk-change-ascs-ilb-rules]:sap-high-availability-infrastructure-wsfc-shared-disk.md#fe0bd8b5-2b43-45e3-8295-80bee5415716
-[sap-high-availability-infrastructure-wsfc-shared-disk-add-win-domain]:sap-high-availability-infrastructure-wsfc-shared-disk.md#e69e9a34-4601-47a3-a41c-d2e11c626c0c
+[sap-high-availability-infrastructure-wsfc-shared-disk-default-ascs-ilb-rules]:sap-high-availability-infrastructure-wsfc-shared-disk.md#create-azure-internal-load-balancer
+[sap-high-availability-infrastructure-wsfc-shared-disk-add-win-domain]:sap-high-availability-infrastructure-wsfc-shared-disk.md#add-the-windows-vms-to-the-domain
[sap-high-availability-installation-wsfc-file-share]:sap-high-availability-installation-wsfc-file-share.md
-[sap-hana-ha]:sap-hana-high-availability.md
-[sap-suse-ascs-ha]:high-availability-guide-suse.md
-
-[planning-guide]:planning-guide.md
-[planning-guide-11]:planning-guide.md
-[planning-guide-2.1]:planning-guide.md#1625df66-4cc6-4d60-9202-de8a0b77f803
-[planning-guide-2.2]:planning-guide.md#f5b3b18c-302c-4bd8-9ab2-c388f1ab3d10
-
-[planning-guide-microsoft-azure-networking]:planning-guide.md#61678387-8868-435d-9f8c-450b2424f5bd
-[planning-guide-storage-microsoft-azure-storage-and-data-disks]:planning-guide.md#a72afa26-4bf4-4a25-8cf7-855d6032157f
---
-[sap-high-availability-infrastructure-wsfc-file-share]:sap-high-availability-infrastructure-wsfc-file-share.md
-[sap-high-availability-infrastructure-wsfc-shared-disk]:sap-high-availability-infrastructure-wsfc-shared-disk.md
-
-[sap-ha-guide-2]:#42b8f600-7ba3-4606-b8a5-53c4f026da08
-[sap-ha-guide-4]:#8ecf3ba0-67c0-4495-9c14-feec1a2255b7
-[sap-ha-guide-8]:#78092dbe-165b-454c-92f5-4972bdbef9bf
-[sap-ha-guide-8.1]:#c87a8d3f-b1dc-4d2f-b23c-da4b72977489
-[sap-ha-guide-8.9]:#fe0bd8b5-2b43-45e3-8295-80bee5415716
-[sap-ha-guide-8.11]:#661035b2-4d0f-4d31-86f8-dc0a50d78158
-[sap-ha-guide-8.12]:#0d67f090-7928-43e0-8772-5ccbf8f59aab
-[sap-ha-guide-8.12.1]:#5eecb071-c703-4ccc-ba6d-fe9c6ded9d79
-[sap-ha-guide-8.12.3]:#5c8e5482-841e-45e1-a89d-a05c0907c868
-[sap-ha-guide-8.12.3.1]:#1c2788c3-3648-4e82-9e0d-e058e475e2a3
-[sap-ha-guide-8.12.3.2]:#dd41d5a2-8083-415b-9878-839652812102
-[sap-ha-guide-8.12.3.3]:#d9c1fc8e-8710-4dff-bec2-1f535db7b006
-[sap-ha-guide-9]:#a06f0b49-8a7a-42bf-8b0d-c12026c5746b
-[sap-ha-guide-9.1]:#31c6bd4f-51df-4057-9fdf-3fcbc619c170
-[sap-ha-guide-9.1.1]:#a97ad604-9094-44fe-a364-f89cb39bf097
----
-[sap-ha-guide-figure-1000]:./media/virtual-machines-shared-sap-high-availability-guide/1000-wsfc-for-sap-ascs-on-azure.png
-[sap-ha-guide-figure-1001]:./media/virtual-machines-shared-sap-high-availability-guide/1001-wsfc-on-azure-ilb.png
-[sap-ha-guide-figure-1002]:./media/virtual-machines-shared-sap-high-availability-guide/1002-wsfc-sios-on-azure-ilb.png
-[sap-ha-guide-figure-2000]:./media/virtual-machines-shared-sap-high-availability-guide/2000-wsfc-sap-as-ha-on-azure.png
-[sap-ha-guide-figure-2001]:./media/virtual-machines-shared-sap-high-availability-guide/2001-wsfc-sap-ascs-ha-on-azure.png
-[sap-ha-guide-figure-2003]:./media/virtual-machines-shared-sap-high-availability-guide/2003-wsfc-sap-dbms-ha-on-azure.png
-[sap-ha-guide-figure-2004]:./media/virtual-machines-shared-sap-high-availability-guide/2004-wsfc-sap-ha-e2e-archit-template1-on-azure.png
-[sap-ha-guide-figure-2005]:./media/virtual-machines-shared-sap-high-availability-guide/2005-wsfc-sap-ha-e2e-arch-template2-on-azure.png
-
-[sap-ha-guide-figure-3000]:./media/virtual-machines-shared-sap-high-availability-guide/3000-template-parameters-sap-ha-arm-on-azure.png
-[sap-ha-guide-figure-3001]:./media/virtual-machines-shared-sap-high-availability-guide/3001-configuring-dns-servers-for-Azure-vnet.png
-[sap-ha-guide-figure-3002]:./media/virtual-machines-shared-sap-high-availability-guide/3002-configuring-static-IP-address-for-network-card-of-each-vm.png
-[sap-ha-guide-figure-3003]:./media/virtual-machines-shared-sap-high-availability-guide/3003-setup-static-ip-address-ilb-for-ascs-instance.png
-[sap-ha-guide-figure-3004]:./media/virtual-machines-shared-sap-high-availability-guide/3004-default-ascs-scs-ilb-balancing-rules-for-azure-ilb.png
-[sap-ha-guide-figure-3005]:./media/virtual-machines-shared-sap-high-availability-guide/3005-changing-ascs-scs-default-ilb-rules-for-azure-ilb.png
-[sap-ha-guide-figure-3006]:./media/virtual-machines-shared-sap-high-availability-guide/3006-adding-vm-to-domain.png
-[sap-ha-guide-figure-3007]:./media/virtual-machines-shared-sap-high-availability-guide/3007-config-wsfc-1.png
-[sap-ha-guide-figure-3008]:./media/virtual-machines-shared-sap-high-availability-guide/3008-config-wsfc-2.png
-[sap-ha-guide-figure-3009]:./media/virtual-machines-shared-sap-high-availability-guide/3009-config-wsfc-3.png
-[sap-ha-guide-figure-3010]:./media/virtual-machines-shared-sap-high-availability-guide/3010-config-wsfc-4.png
-[sap-ha-guide-figure-3011]:./media/virtual-machines-shared-sap-high-availability-guide/3011-config-wsfc-5.png
-[sap-ha-guide-figure-3012]:./media/virtual-machines-shared-sap-high-availability-guide/3012-config-wsfc-6.png
-[sap-ha-guide-figure-3013]:./media/virtual-machines-shared-sap-high-availability-guide/3013-config-wsfc-7.png
-[sap-ha-guide-figure-3014]:./media/virtual-machines-shared-sap-high-availability-guide/3014-config-wsfc-8.png
-[sap-ha-guide-figure-3015]:./media/virtual-machines-shared-sap-high-availability-guide/3015-config-wsfc-9.png
-[sap-ha-guide-figure-3016]:./media/virtual-machines-shared-sap-high-availability-guide/3016-config-wsfc-10.png
-[sap-ha-guide-figure-3017]:./media/virtual-machines-shared-sap-high-availability-guide/3017-config-wsfc-11.png
-[sap-ha-guide-figure-3018]:./media/virtual-machines-shared-sap-high-availability-guide/3018-config-wsfc-12.png
-[sap-ha-guide-figure-3019]:./media/virtual-machines-shared-sap-high-availability-guide/3019-assign-permissions-on-share-for-cluster-name-object.png
-[sap-ha-guide-figure-3020]:./media/virtual-machines-shared-sap-high-availability-guide/3020-change-object-type-include-computer-objects.png
-[sap-ha-guide-figure-3021]:./media/virtual-machines-shared-sap-high-availability-guide/3021-check-box-for-computer-objects.png
-[sap-ha-guide-figure-3022]:./media/virtual-machines-shared-sap-high-availability-guide/3022-set-security-attributes-for-cluster-name-object-on-file-share-quorum.png
-[sap-ha-guide-figure-3023]:./media/virtual-machines-shared-sap-high-availability-guide/3023-call-configure-cluster-quorum-setting-wizard.png
-[sap-ha-guide-figure-3024]:./media/virtual-machines-shared-sap-high-availability-guide/3024-selection-screen-different-quorum-configurations.png
-[sap-ha-guide-figure-3025]:./media/virtual-machines-shared-sap-high-availability-guide/3025-selection-screen-file-share-witness.png
-[sap-ha-guide-figure-3026]:./media/virtual-machines-shared-sap-high-availability-guide/3026-define-file-share-location-for-witness-share.png
-[sap-ha-guide-figure-3027]:./media/virtual-machines-shared-sap-high-availability-guide/3027-successful-reconfiguration-cluster-file-share-witness.png
-[sap-ha-guide-figure-3028]:./media/virtual-machines-shared-sap-high-availability-guide/3028-install-dot-net-framework-35.png
-[sap-ha-guide-figure-3029]:./media/virtual-machines-shared-sap-high-availability-guide/3029-install-dot-net-framework-35-progress.png
-[sap-ha-guide-figure-3030]:./media/virtual-machines-shared-sap-high-availability-guide/3030-sios-installer.png
-[sap-ha-guide-figure-3031]:./media/virtual-machines-shared-sap-high-availability-guide/3031-first-screen-sios-data-keeper-installation.png
-[sap-ha-guide-figure-3032]:./media/virtual-machines-shared-sap-high-availability-guide/3032-data-keeper-informs-service-be-disabled.png
-[sap-ha-guide-figure-3033]:./media/virtual-machines-shared-sap-high-availability-guide/3033-user-selection-sios-data-keeper.png
-[sap-ha-guide-figure-3034]:./media/virtual-machines-shared-sap-high-availability-guide/3034-domain-user-sios-data-keeper.png
-[sap-ha-guide-figure-3035]:./media/virtual-machines-shared-sap-high-availability-guide/3035-provide-sios-data-keeper-license.png
-[sap-ha-guide-figure-3036]:./media/virtual-machines-shared-sap-high-availability-guide/3036-data-keeper-management-config-tool.png
-[sap-ha-guide-figure-3037]:./media/virtual-machines-shared-sap-high-availability-guide/3037-tcp-ip-address-first-node-data-keeper.png
-[sap-ha-guide-figure-3038]:./media/virtual-machines-shared-sap-high-availability-guide/3038-create-replication-sios-job.png
-[sap-ha-guide-figure-3039]:./media/virtual-machines-shared-sap-high-availability-guide/3039-define-sios-replication-job-name.png
-[sap-ha-guide-figure-3040]:./media/virtual-machines-shared-sap-high-availability-guide/3040-define-sios-source-node.png
-[sap-ha-guide-figure-3041]:./media/virtual-machines-shared-sap-high-availability-guide/3041-define-sios-target-node.png
-[sap-ha-guide-figure-3042]:./media/virtual-machines-shared-sap-high-availability-guide/3042-define-sios-synchronous-replication.png
-[sap-ha-guide-figure-3043]:./media/virtual-machines-shared-sap-high-availability-guide/3043-enable-sios-replicated-volume-as-cluster-volume.png
-[sap-ha-guide-figure-3044]:./media/virtual-machines-shared-sap-high-availability-guide/3044-data-keeper-synchronous-mirroring-for-SAP-gui.png
-[sap-ha-guide-figure-3045]:./media/virtual-machines-shared-sap-high-availability-guide/3045-replicated-disk-by-data-keeper-in-wsfc.png
-[sap-ha-guide-figure-3046]:./media/virtual-machines-shared-sap-high-availability-guide/3046-dns-entry-sap-ascs-virtual-name-ip.png
-[sap-ha-guide-figure-3047]:./media/virtual-machines-shared-sap-high-availability-guide/3047-dns-manager.png
-[sap-ha-guide-figure-3048]:./media/virtual-machines-shared-sap-high-availability-guide/3048-default-cluster-probe-port.png
-[sap-ha-guide-figure-3049]:./media/virtual-machines-shared-sap-high-availability-guide/3049-cluster-probe-port-after.png
-[sap-ha-guide-figure-3050]:./media/virtual-machines-shared-sap-high-availability-guide/3050-service-type-ers-delayed-automatic.png
-[sap-ha-guide-figure-5000]:./media/virtual-machines-shared-sap-high-availability-guide/5000-wsfc-sap-sid-node-a.png
-[sap-ha-guide-figure-5001]:./media/virtual-machines-shared-sap-high-availability-guide/5001-sios-replicating-local-volume.png
-[sap-ha-guide-figure-5002]:./media/virtual-machines-shared-sap-high-availability-guide/5002-wsfc-sap-sid-node-b.png
-[sap-ha-guide-figure-5003]:./media/virtual-machines-shared-sap-high-availability-guide/5003-sios-replicating-local-volume-b-to-a.png
-
-[sap-ha-guide-figure-6003]:./media/virtual-machines-shared-sap-high-availability-guide/6003-sap-multi-sid-full-landscape.png
-
-[sap-ha-guide-figure-8001]:./media/virtual-machines-shared-sap-high-availability-guide/8001.png
-[sap-ha-guide-figure-8002]:./media/virtual-machines-shared-sap-high-availability-guide/8002.png
-[sap-ha-guide-figure-8003]:./media/virtual-machines-shared-sap-high-availability-guide/8003.png
-[sap-ha-guide-figure-8004]:./media/virtual-machines-shared-sap-high-availability-guide/8004.png
-[sap-ha-guide-figure-8005]:./media/virtual-machines-shared-sap-high-availability-guide/8005.png
-[sap-ha-guide-figure-8006]:./media/virtual-machines-shared-sap-high-availability-guide/8006.png
-[sap-ha-guide-figure-8007]:./media/virtual-machines-shared-sap-high-availability-guide/8007.png
-[sap-ha-guide-figure-8008]:./media/virtual-machines-shared-sap-high-availability-guide/8008.png
-[sap-ha-guide-figure-8009]:./media/virtual-machines-shared-sap-high-availability-guide/8009.png
[sap-ha-guide-figure-8010]:./media/virtual-machines-shared-sap-high-availability-guide/8010.png [sap-ha-guide-figure-8011]:./media/virtual-machines-shared-sap-high-availability-guide/8011.png
-[sap-ha-guide-figure-8012]:./media/virtual-machines-shared-sap-high-availability-guide/8012.png
-[sap-ha-guide-figure-8013]:./media/virtual-machines-shared-sap-high-availability-guide/8013.png
-[sap-ha-guide-figure-8014]:./media/virtual-machines-shared-sap-high-availability-guide/8014.png
-[sap-ha-guide-figure-8015]:./media/virtual-machines-shared-sap-high-availability-guide/8015.png
-[sap-ha-guide-figure-8016]:./media/virtual-machines-shared-sap-high-availability-guide/8016.png
-[sap-ha-guide-figure-8017]:./media/virtual-machines-shared-sap-high-availability-guide/8017.png
-[sap-ha-guide-figure-8018]:./media/virtual-machines-shared-sap-high-availability-guide/8018.png
-[sap-ha-guide-figure-8019]:./media/virtual-machines-shared-sap-high-availability-guide/8019.png
-[sap-ha-guide-figure-8020]:./media/virtual-machines-shared-sap-high-availability-guide/8020.png
-[sap-ha-guide-figure-8021]:./media/virtual-machines-shared-sap-high-availability-guide/8021.png
-[sap-ha-guide-figure-8022]:./media/virtual-machines-shared-sap-high-availability-guide/8022.png
-[sap-ha-guide-figure-8023]:./media/virtual-machines-shared-sap-high-availability-guide/8023.png
-[sap-ha-guide-figure-8024]:./media/virtual-machines-shared-sap-high-availability-guide/8024.png
-[sap-ha-guide-figure-8025]:./media/virtual-machines-shared-sap-high-availability-guide/8025.png
--
-[sap-templates-3-tier-multisid-xscs-marketplace-image]:https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fsap-3-tier-marketplace-image-multi-sid-xscs%2Fazuredeploy.json
-[sap-templates-3-tier-multisid-xscs-marketplace-image-md]:https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fapplication-workloads%2Fsap%2Fsap-3-tier-marketplace-image-multi-sid-xscs-md%2Fazuredeploy.json
-[sap-templates-3-tier-multisid-db-marketplace-image]:https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fsap-3-tier-marketplace-image-multi-sid-db%2Fazuredeploy.json
-[sap-templates-3-tier-multisid-db-marketplace-image-md]:https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fapplication-workloads%2Fsap%2Fsap-3-tier-marketplace-image-multi-sid-db-md%2Fazuredeploy.json
-[sap-templates-3-tier-multisid-apps-marketplace-image]:https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fsap-3-tier-marketplace-image-multi-sid-apps%2Fazuredeploy.json
-[sap-templates-3-tier-multisid-apps-marketplace-image-md]:https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fapplication-workloads%2Fsap%2Fsap-3-tier-marketplace-image-multi-sid-apps-md%2Fazuredeploy.json
-
-[virtual-machines-azure-resource-manager-architecture-benefits-arm]:../../azure-resource-manager/management/overview.md#the-benefits-of-using-resource-manager
-
-[virtual-machines-manage-availability]:../../virtual-machines-windows-manage-availability.md
This article describes the Azure infrastructure preparation steps that are needed to install and configure high-availability SAP systems on a Windows Server Failover Clustering cluster (WSFC), using scale-out file share as an option for clustering SAP ASCS/SCS instances.
sap Sap High Availability Infrastructure Wsfc Shared Disk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/sap-high-availability-infrastructure-wsfc-shared-disk.md
Title: Azure infrastructure for SAP ASCS/SCS with WSFC&shared disk | Microsoft Docs description: Learn how to prepare the Azure infrastructure for SAP HA by using a Windows failover cluster and shared disk for an SAP ASCS/SCS instance.
-tags: azure-resource-manager
-keywords: ''
vm-windows Previously updated : 12/16/2022 Last updated : 01/19/2024 # Prepare the Azure infrastructure for SAP HA by using a Windows failover cluster and shared disk for SAP ASCS/SCS
-[1928533]:https://launchpad.support.sap.com/#/notes/1928533
-[1999351]:https://launchpad.support.sap.com/#/notes/1999351
-[2015553]:https://launchpad.support.sap.com/#/notes/2015553
-[2178632]:https://launchpad.support.sap.com/#/notes/2178632
-[2243692]:https://launchpad.support.sap.com/#/notes/2243692
-
-[sap-installation-guides]:http://service.sap.com/instguides
-[tuning-failover-cluster-network-thresholds]:https://techcommunity.microsoft.com/t5/Failover-Clustering/Tuning-Failover-Cluster-Network-Thresholds/ba-p/371834
-
-[azure-resource-manager/management/azure-subscription-service-limits]:../../azure-resource-manager/management/azure-subscription-service-limits.md
-[azure-resource-manager/management/azure-subscription-service-limits-subscription]:../../azure-resource-manager/management/azure-subscription-service-limits.md
-
-[dbms-guide]:../../virtual-machines-windows-sap-dbms-guide-general.md
-
-[deployment-guide]:deployment-guide.md
-
-[dr-guide-classic]:https://go.microsoft.com/fwlink/?LinkID=521971
-
-[getting-started]:get-started.md
--
-[sap-high-availability-architecture-scenarios]:sap-high-availability-architecture-scenarios.md
[sap-high-availability-guide-wsfc-shared-disk]:sap-high-availability-guide-wsfc-shared-disk.md
-[sap-high-availability-guide-wsfc-file-share]:sap-high-availability-guide-wsfc-file-share.md
-[sap-ascs-high-availability-multi-sid-wsfc]:sap-ascs-high-availability-multi-sid-wsfc.md
-[sap-high-availability-infrastructure-wsfc-shared-disk]:sap-high-availability-infrastructure-wsfc-shared-disk.md
[sap-high-availability-installation-wsfc-shared-disk]:sap-high-availability-installation-wsfc-shared-disk.md
-[sap-hana-ha]:sap-hana-high-availability.md
-[sap-suse-ascs-ha]:high-availability-guide-suse.md
-
-[planning-guide]:planning-guide.md
-[planning-guide-11]:planning-guide.md
-[planning-guide-2.2]:planning-guide.md#f5b3b18c-302c-4bd8-9ab2-c388f1ab3d10
-
-[planning-guide-microsoft-azure-networking]:planning-guide.md#61678387-8868-435d-9f8c-450b2424f5bd
-[planning-guide-storage-microsoft-azure-storage-and-data-disks]:planning-guide.md#a72afa26-4bf4-4a25-8cf7-855d6032157f
--
-[sap-high-availability-infrastructure-wsfc-shared-disk]:sap-high-availability-infrastructure-wsfc-shared-disk.md
-[sap-high-availability-infrastructure-wsfc-shared-disk-vpn]:sap-high-availability-infrastructure-wsfc-shared-disk.md#c87a8d3f-b1dc-4d2f-b23c-da4b72977489
-[sap-high-availability-infrastructure-wsfc-shared-disk-change-def-ilb]:sap-high-availability-infrastructure-wsfc-shared-disk.md#fe0bd8b5-2b43-45e3-8295-80bee5415716
-[sap-high-availability-infrastructure-wsfc-shared-disk-setup-wsfc]:sap-high-availability-infrastructure-wsfc-shared-disk.md#0d67f090-7928-43e0-8772-5ccbf8f59aab
-[sap-high-availability-infrastructure-wsfc-shared-disk-collect-cluster-config]:sap-high-availability-infrastructure-wsfc-shared-disk.md#5eecb071-c703-4ccc-ba6d-fe9c6ded9d79
-[sap-high-availability-infrastructure-wsfc-shared-disk-install-sios]:sap-high-availability-infrastructure-wsfc-shared-disk.md#5c8e5482-841e-45e1-a89d-a05c0907c868
-[sap-high-availability-infrastructure-wsfc-shared-disk-add-dot-net]:sap-high-availability-infrastructure-wsfc-shared-disk.md#1c2788c3-3648-4e82-9e0d-e058e475e2a3
-[sap-high-availability-infrastructure-wsfc-shared-disk-install-sios-both-nodes]:sap-high-availability-infrastructure-wsfc-shared-disk.md#dd41d5a2-8083-415b-9878-839652812102
-[sap-high-availability-infrastructure-wsfc-shared-disk-setup-sios]:sap-high-availability-infrastructure-wsfc-shared-disk.md#d9c1fc8e-8710-4dff-bec2-1f535db7b006
--
-[Logo_Linux]:media/virtual-machines-shared-sap-shared/Linux.png
[Logo_Windows]:media/virtual-machines-shared-sap-shared/Windows.png -
-[sap-ha-guide-figure-1000]:./media/virtual-machines-shared-sap-high-availability-guide/1000-wsfc-for-sap-ascs-on-azure.png
-[sap-ha-guide-figure-1001]:./media/virtual-machines-shared-sap-high-availability-guide/1001-wsfc-on-azure-ilb.png
-[sap-ha-guide-figure-1002]:./media/virtual-machines-shared-sap-high-availability-guide/1002-wsfc-sios-on-azure-ilb.png
-[sap-ha-guide-figure-2000]:./media/virtual-machines-shared-sap-high-availability-guide/2000-wsfc-sap-as-ha-on-azure.png
-[sap-ha-guide-figure-2001]:./media/virtual-machines-shared-sap-high-availability-guide/2001-wsfc-sap-ascs-ha-on-azure.png
-[sap-ha-guide-figure-2003]:./media/virtual-machines-shared-sap-high-availability-guide/2003-wsfc-sap-dbms-ha-on-azure.png
-[sap-ha-guide-figure-2004]:./media/virtual-machines-shared-sap-high-availability-guide/2004-wsfc-sap-ha-e2e-archit-template1-on-azure.png
-[sap-ha-guide-figure-2005]:./media/virtual-machines-shared-sap-high-availability-guide/2005-wsfc-sap-ha-e2e-arch-template2-on-azure.png
-
-[sap-ha-guide-figure-3000]:./media/virtual-machines-shared-sap-high-availability-guide/3000-template-parameters-sap-ha-arm-on-azure.png
-[sap-ha-guide-figure-3001]:./media/virtual-machines-shared-sap-high-availability-guide/3001-configuring-dns-servers-for-Azure-vnet.png
-[sap-ha-guide-figure-3002]:./media/virtual-machines-shared-sap-high-availability-guide/3002-configuring-static-IP-address-for-network-card-of-each-vm.png
-[sap-ha-guide-figure-3003]:./media/virtual-machines-shared-sap-high-availability-guide/3003-setup-static-ip-address-ilb-for-ascs-instance.png
-[sap-ha-guide-figure-3004]:./media/virtual-machines-shared-sap-high-availability-guide/3004-default-ascs-scs-ilb-balancing-rules-for-azure-ilb.png
-[sap-ha-guide-figure-3005]:./media/virtual-machines-shared-sap-high-availability-guide/3005-changing-ascs-scs-default-ilb-rules-for-azure-ilb.png
-[sap-ha-guide-figure-3006]:./media/virtual-machines-shared-sap-high-availability-guide/3006-adding-vm-to-domain.png
-[sap-ha-guide-figure-3007]:./media/virtual-machines-shared-sap-high-availability-guide/3007-config-wsfc-1.png
-[sap-ha-guide-figure-3008]:./media/virtual-machines-shared-sap-high-availability-guide/3008-config-wsfc-2.png
-[sap-ha-guide-figure-3009]:./media/virtual-machines-shared-sap-high-availability-guide/3009-config-wsfc-3.png
-[sap-ha-guide-figure-3010]:./media/virtual-machines-shared-sap-high-availability-guide/3010-config-wsfc-4.png
-[sap-ha-guide-figure-3011]:./media/virtual-machines-shared-sap-high-availability-guide/3011-config-wsfc-5.png
-[sap-ha-guide-figure-3012]:./media/virtual-machines-shared-sap-high-availability-guide/3012-config-wsfc-6.png
-[sap-ha-guide-figure-3013]:./media/virtual-machines-shared-sap-high-availability-guide/3013-config-wsfc-7.png
-[sap-ha-guide-figure-3014]:./media/virtual-machines-shared-sap-high-availability-guide/3014-config-wsfc-8.png
-[sap-ha-guide-figure-3015]:./media/virtual-machines-shared-sap-high-availability-guide/3015-config-wsfc-9.png
-[sap-ha-guide-figure-3016]:./media/virtual-machines-shared-sap-high-availability-guide/3016-config-wsfc-10.png
-[sap-ha-guide-figure-3017]:./media/virtual-machines-shared-sap-high-availability-guide/3017-config-wsfc-11.png
-[sap-ha-guide-figure-3018]:./media/virtual-machines-shared-sap-high-availability-guide/3018-config-wsfc-12.png
-[sap-ha-guide-figure-3019]:./media/virtual-machines-shared-sap-high-availability-guide/3019-assign-permissions-on-share-for-cluster-name-object.png
-[sap-ha-guide-figure-3020]:./media/virtual-machines-shared-sap-high-availability-guide/3020-change-object-type-include-computer-objects.png
-[sap-ha-guide-figure-3021]:./media/virtual-machines-shared-sap-high-availability-guide/3021-check-box-for-computer-objects.png
-[sap-ha-guide-figure-3022]:./media/virtual-machines-shared-sap-high-availability-guide/3022-set-security-attributes-for-cluster-name-object-on-file-share-quorum.png
-[sap-ha-guide-figure-3023]:./media/virtual-machines-shared-sap-high-availability-guide/3023-call-configure-cluster-quorum-setting-wizard.png
-[sap-ha-guide-figure-3024]:./media/virtual-machines-shared-sap-high-availability-guide/3024-selection-screen-different-quorum-configurations.png
-[sap-ha-guide-figure-3025]:./media/virtual-machines-shared-sap-high-availability-guide/3025-selection-screen-file-share-witness.png
-[sap-ha-guide-figure-3026]:./media/virtual-machines-shared-sap-high-availability-guide/3026-define-file-share-location-for-witness-share.png
-[sap-ha-guide-figure-3027]:./media/virtual-machines-shared-sap-high-availability-guide/3027-successful-reconfiguration-cluster-file-share-witness.png
-[sap-ha-guide-figure-3028]:./media/virtual-machines-shared-sap-high-availability-guide/3028-install-dot-net-framework-35.png
-[sap-ha-guide-figure-3029]:./media/virtual-machines-shared-sap-high-availability-guide/3029-install-dot-net-framework-35-progress.png
[sap-ha-guide-figure-3030]:./media/virtual-machines-shared-sap-high-availability-guide/3030-sios-installer.png [sap-ha-guide-figure-3031]:./media/virtual-machines-shared-sap-high-availability-guide/3031-first-screen-sios-data-keeper-installation.png [sap-ha-guide-figure-3032]:./media/virtual-machines-shared-sap-high-availability-guide/3032-data-keeper-informs-service-be-disabled.png
[sap-ha-guide-figure-3043]:./media/virtual-machines-shared-sap-high-availability-guide/3043-enable-sios-replicated-volume-as-cluster-volume.png [sap-ha-guide-figure-3044]:./media/virtual-machines-shared-sap-high-availability-guide/3044-data-keeper-synchronous-mirroring-for-SAP-gui.png [sap-ha-guide-figure-3045]:./media/virtual-machines-shared-sap-high-availability-guide/3045-replicated-disk-by-data-keeper-in-wsfc.png
-[sap-ha-guide-figure-3046]:./media/virtual-machines-shared-sap-high-availability-guide/3046-dns-entry-sap-ascs-virtual-name-ip.png
-[sap-ha-guide-figure-3047]:./media/virtual-machines-shared-sap-high-availability-guide/3047-dns-manager.png
-[sap-ha-guide-figure-3048]:./media/virtual-machines-shared-sap-high-availability-guide/3048-default-cluster-probe-port.png
-[sap-ha-guide-figure-3049]:./media/virtual-machines-shared-sap-high-availability-guide/3049-cluster-probe-port-after.png
-[sap-ha-guide-figure-3050]:./media/virtual-machines-shared-sap-high-availability-guide/3050-service-type-ers-delayed-automatic.png
-[sap-ha-guide-figure-5000]:./media/virtual-machines-shared-sap-high-availability-guide/5000-wsfc-sap-sid-node-a.png
-[sap-ha-guide-figure-5001]:./media/virtual-machines-shared-sap-high-availability-guide/5001-sios-replicating-local-volume.png
-[sap-ha-guide-figure-5002]:./media/virtual-machines-shared-sap-high-availability-guide/5002-wsfc-sap-sid-node-b.png
-[sap-ha-guide-figure-5003]:./media/virtual-machines-shared-sap-high-availability-guide/5003-sios-replicating-local-volume-b-to-a.png
-
-[sap-ha-guide-figure-6003]:./media/virtual-machines-shared-sap-high-availability-guide/6003-sap-multi-sid-full-landscape.png
-
-[sap-templates-3-tier-multisid-xscs-marketplace-image]:https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fsap-3-tier-marketplace-image-multi-sid-xscs%2Fazuredeploy.json
-[sap-templates-3-tier-multisid-xscs-marketplace-image-md]:https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fapplication-workloads%2Fsap%2Fsap-3-tier-marketplace-image-multi-sid-xscs-md%2Fazuredeploy.json
-[sap-templates-3-tier-multisid-db-marketplace-image]:https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fsap-3-tier-marketplace-image-multi-sid-db%2Fazuredeploy.json
-[sap-templates-3-tier-multisid-db-marketplace-image-md]:https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fapplication-workloads%2Fsap%2Fsap-3-tier-marketplace-image-multi-sid-db-md%2Fazuredeploy.json
-[sap-templates-3-tier-multisid-apps-marketplace-image]:https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fsap-3-tier-marketplace-image-multi-sid-apps%2Fazuredeploy.json
-[sap-templates-3-tier-multisid-apps-marketplace-image-md]:https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fapplication-workloads%2Fsap%2Fsap-3-tier-marketplace-image-multi-sid-apps-md%2Fazuredeploy.json
-
-[virtual-machines-azure-resource-manager-architecture-benefits-arm]:../../azure-resource-manager/management/overview.md#the-benefits-of-using-resource-manager
-
-[virtual-machines-manage-availability]:../../virtual-machines-windows-manage-availability.md
> ![Windows OS][Logo_Windows] Windows

This article describes the steps you take to prepare the Azure infrastructure for installing and configuring a high-availability SAP ASCS/SCS instance on a Windows failover cluster by using a *cluster shared disk* as an option for clustering an SAP ASCS instance. Two alternatives for *cluster shared disk* are presented in the documentation:

- [Azure shared disks](../../virtual-machines/disks-shared.md)
-- Using [SIOS DataKeeper Cluster Edition](https://us.sios.com/products/sios-datakeeper/) to create mirrored storage, that will simulate clustered shared disk
+- Using [SIOS DataKeeper Cluster Edition](https://us.sios.com/products/sios-datakeeper/) to create mirrored storage that simulates a clustered shared disk
The documentation doesn't cover the database layer.

## Prerequisites
-Before you begin the installation, review this article:
+Before you begin the installation, review this article:
-* [Architecture guide: Cluster an SAP ASCS/SCS instance on a Windows failover cluster by using a cluster shared disk][sap-high-availability-guide-wsfc-shared-disk]
+- [Architecture guide: Cluster an SAP ASCS/SCS instance on a Windows failover cluster by using a cluster shared disk][sap-high-availability-guide-wsfc-shared-disk]
## Create the ASCS VMs
For SAP ASCS / SCS cluster deploy two VMs in Azure availability set or Azure ava
Based on your deployment type, the host names and the IP addresses of the scenario would be as follows:
-**SAP deployment in Azure availability set**
+SAP deployment in Azure availability set
| Host name role | Host name | Static IP address | Availability set | Disk SkuName |
| --- | --- | --- | --- | --- |
-| 1st cluster node ASCS/SCS cluster | pr1-ascs-10 | 10.0.0.4 | pr1-ascs-avset | Premium_LRS |
-| 2nd cluster node ASCS/SCS cluster | pr1-ascs-11 | 10.0.0.5 | pr1-ascs-avset | |
+| First cluster node ASCS/SCS cluster | pr1-ascs-10 | 10.0.0.4 | pr1-ascs-avset | Premium_LRS |
+| Second cluster node ASCS/SCS cluster | pr1-ascs-11 | 10.0.0.5 | pr1-ascs-avset | |
| Cluster Network Name | pr1clust | 10.0.0.42 (**only** for Win 2016 cluster) | n/a | |
| ASCS cluster network name | pr1-ascscl | 10.0.0.43 | n/a | |
| ERS cluster network name (**only** for ERS2) | pr1-erscl | 10.0.0.44 | n/a | |
-**SAP deployment in Azure availability zones**
+SAP deployment in Azure availability zones
| Host name role | Host name | Static IP address | Availability zone | Disk SkuName |
| --- | --- | --- | --- | --- |
-| 1st cluster node ASCS/SCS cluster | pr1-ascs-10 | 10.0.0.4 | AZ01 | Premium_ZRS |
-| 2nd cluster node ASCS/SCS cluster | pr1-ascs-11 | 10.0.0.5 | AZ02 | |
+| First cluster node ASCS/SCS cluster | pr1-ascs-10 | 10.0.0.4 | AZ01 | Premium_ZRS |
+| Second cluster node ASCS/SCS cluster | pr1-ascs-11 | 10.0.0.5 | AZ02 | |
| Cluster Network Name | pr1clust | 10.0.0.42 (**only** for Win 2016 cluster) | n/a | |
| ASCS cluster network name | pr1-ascscl | 10.0.0.43 | n/a | |
| ERS cluster network name (**only** for ERS2) | pr1-erscl | 10.0.0.44 | n/a | |

The steps mentioned in the document remain the same for both deployment types. But if your cluster is running in an availability set, you need to deploy LRS for the Azure premium shared disk (Premium_LRS), and if the cluster is running in an availability zone, deploy ZRS for the Azure premium shared disk (Premium_ZRS).
-> [!Note]
+> [!NOTE]
> [Azure proximity placement group](../../virtual-machines/windows/proximity-placement-groups.md) is not required for Azure shared disk. But for SAP deployment with PPG, follow the guidelines below:
+>
> - If you are using PPG for an SAP system deployed in a region, then all virtual machines sharing a disk must be part of the same PPG.
-> - If you are using PPG for SAP system deployed across zones like described in the document [Proximity placement groups with zonal deployments](proximity-placement-scenarios.md#proximity-placement-groups-with-zonal-deployments), you can attach Premium_ZRS storage to virtual machines sharing a disk.
-
-## <a name="fe0bd8b5-2b43-45e3-8295-80bee5415716"></a> Create Azure internal load balancer
+> - If you are using PPG for an SAP system deployed across zones, as described in the document [Proximity placement groups with zonal deployments](proximity-placement-scenarios.md#proximity-placement-groups-with-zonal-deployments), you can attach Premium_ZRS storage to virtual machines sharing a disk.
+
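The tables above assume static private IP addresses on the VM network interfaces. As a minimal sketch (not part of the original article), you can switch a NIC's primary IP configuration to a static address with PowerShell; the NIC and resource group names are placeholders:

```powershell
# Make the first IP configuration of a VM NIC static (names are placeholders).
$nic = Get-AzNetworkInterface -Name "pr1-ascs-10-nic" -ResourceGroupName "pr1-rg"
$nic.IpConfigurations[0].PrivateIpAllocationMethod = "Static"
$nic.IpConfigurations[0].PrivateIpAddress = "10.0.0.4"

# Persist the change on the network interface resource.
$nic | Set-AzNetworkInterface
```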
+## Create Azure internal load balancer
+
+During VM configuration, you can create or select an existing load balancer in the networking section. For the ENSA1 architecture on Windows, you need only one virtual IP address for SAP ASCS/SCS. On the other hand, the ENSA2 architecture requires two virtual IP addresses - one for SAP ASCS/SCS and another for ERS2. When configuring a [standard internal load balancer](../../load-balancer/quickstart-load-balancer-standard-internal-portal.md#create-load-balancer) for the HA setup of SAP ASCS/SCS on Windows, follow the guidelines below.
+
+1. **Frontend IP Configuration:** Create frontend IP (example: 10.0.0.43). Select the same virtual network and subnet as your ASCS/ERS virtual machines.
+2. **Backend Pool:** Create backend pool and add ASCS and ERS VMs. In this example, VMs are pr1-ascs-10 and pr1-ascs-11.
+3. **Inbound rules:** Create load balancing rule.
+ - Frontend IP address: Select frontend IP
+ - Backend pool: Select backend pool
+ - Check "High availability ports"
+ - Protocol: TCP
+ - Health Probe: Create health probe with below details
+ - Protocol: TCP
+ - Port: [for example: 620<Instance-no.> for ASCS]
+ - Interval: 5
+ - Probe Threshold: 2
+ - Idle timeout (minutes): 30
+ - Check "Enable Floating IP"
+4. Applicable only to the ENSA2 architecture: Create an additional frontend IP (10.0.0.44) and load balancing rule (use 621<Instance-no.> for the ERS2 health probe port) as described in points 1 and 3.
-SAP ASCS, SAP SCS, and the new SAP ERS2, use virtual hostname and virtual IP addresses. On Azure a [load balancer](../../load-balancer/load-balancer-overview.md) is required to use a virtual IP address.
-We strongly recommend using [Standard load balancer](../../load-balancer/quickstart-load-balancer-standard-public-portal.md).
+> [!NOTE]
+> The health probe configuration property numberOfProbes, otherwise known as "Unhealthy threshold" in the portal, isn't respected. To control the number of successful or failed consecutive probes, set the property "probeThreshold" to 2. It's currently not possible to set this property by using the Azure portal, so use either the [Azure CLI](/cli/azure/network/lb/probe) or [PowerShell](/powershell/module/az.network/new-azloadbalancerprobeconfig) command.
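As a hedged illustration (not taken from the original article), the probe threshold can be set when the probe is defined with PowerShell. The resource group, load balancer, and probe names below are placeholders, the port assumes ASCS instance number 00, and the `-ProbeThreshold` parameter requires a recent Az.Network version:

```powershell
# Minimal sketch: define a TCP health probe with probeThreshold = 2.
# Use port 620<Instance-no.> for ASCS; instance number 00 is assumed here.
$probe = New-AzLoadBalancerProbeConfig -Name "pr1-ascs-hp" -Protocol Tcp -Port 62000 -IntervalInSeconds 5 -ProbeCount 2 -ProbeThreshold 2

# Attach the probe to an existing internal Standard load balancer and persist the change.
$lb = Get-AzLoadBalancer -Name "pr1-lb-ascs" -ResourceGroupName "pr1-rg"
$lb.Probes.Add($probe)
$lb | Set-AzLoadBalancer
```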
> [!IMPORTANT]
-> Floating IP is not supported on a NIC secondary IP configuration in load-balancing scenarios. For details see [Azure Load balancer Limitations](../../load-balancer/load-balancer-multivip-overview.md#limitations). If you need additional IP address for the VM, deploy a second NIC.
-
-The following list shows the configuration of the (A)SCS/ERS load balancer. The configuration for both SAP ASCS and ERS2 in performed in the same Azure load balancer.
-
-**(A)SCS**
-- Frontend configuration
- - Static ASCS/SCS IP address **10.0.0.43**
-- Backend configuration
- Add all virtual machines that should be part of the (A)SCS/ERS cluster. In this example VMs **pr1-ascs-10** and **pr1-ascs-11**.
-- Probe Port
- - Port 620**nr**
- Leave the default option for Protocol (TCP), Interval (5), Unhealthy threshold (2)
-- Load-balancing rules
- - If using Standard Load Balancer, select HA ports
- - If using Basic Load Balancer, create Load-balancing rules for the following ports
- - 32**nr** TCP
- - 36**nr** TCP
- - 39**nr** TCP
- - 81**nr** TCP
- - 5**nr**13 TCP
- - 5**nr**14 TCP
- - 5**nr**16 TCP
-
- - Make sure that Idle timeout (minutes) is set to max value 30, and that Floating IP (direct server return) is Enabled.
-
-**ERS2**
-
-As Enqueue Replication Server 2 (ERS2) is also clustered, ERS2 virtual IP address must be also configured on Azure ILB in addition to above SAP ASCS/SCS IP. This section only applies, if using Enqueue replication server 2 architecture.
-- 2nd Frontend configuration
- - Static SAP ERS2 IP address **10.0.0.44**
--- Backend configuration
- The VMs were already added to the ILB backend pool.
--- 2nd Probe Port
- - Port 621**nr**
- Leave the default option for Protocol (TCP), Interval (5), Unhealthy threshold (2)
--- 2nd Load-balancing rules
- - If using Standard Load Balancer, select HA ports
- - If using Basic Load Balancer, create Load-balancing rules for the following ports
- - 32**nr** TCP
- - 33**nr** TCP
- - 5**nr**13 TCP
- - 5**nr**14 TCP
- - 5**nr**16 TCP
-
- - Make sure that Idle timeout (minutes) is set to max value 30, and that Floating IP (direct server return) is Enabled.
+> A floating IP address isn't supported on a network interface card (NIC) secondary IP configuration in load-balancing scenarios. For details, see [Azure Load Balancer limitations](../../load-balancer/load-balancer-multivip-overview.md#limitations). If you need another IP address for the VM, deploy a second NIC.
+
+> [!NOTE]
+> When VMs without public IP addresses are placed in the back-end pool of an internal (no public IP address) Standard Azure load balancer, there will be no outbound internet connectivity unless you perform additional configuration to allow routing to public endpoints. For details on how to achieve outbound connectivity, see [Public endpoint connectivity for virtual machines using Azure Standard Load Balancer in SAP high-availability scenarios](./high-availability-guide-standard-load-balancer-outbound-connections.md).
> [!TIP]
> With the [Azure Resource Manager Template for WSFC for SAP ASCS/SCS instance with Azure Shared Disk](https://github.com/robotechredmond/301-shared-disk-sap), you can automate the infrastructure preparation, using Azure Shared Disk for one SAP SID with ERS1.
-> The Azure ARM template will create two Windows 2019 or 2016 VMs, create Azure shared disk and attach to the VMs. Azure Internal Load Balancer will be created and configured as well.
-> For details - see the ARM template.
+> The Azure ARM template will create two Windows 2019 or 2016 VMs, create Azure shared disk and attach to the VMs. Azure Internal Load Balancer will be created and configured as well.
+> For details - see the ARM template.
-## <a name="661035b2-4d0f-4d31-86f8-dc0a50d78158"></a> Add registry entries on both cluster nodes of the ASCS/SCS instance
+## Add registry entries on both cluster nodes of the ASCS/SCS instance
-Azure Load Balancer may close connections, if the connections are idle for a period and exceed the idle timeout. The SAP work processes open connections to the SAP enqueue process as soon as the first enqueue/dequeue request needs to be sent. To avoid interrupting these connections, change the TCP/IP KeepAliveTime and KeepAliveInterval values on both cluster nodes. If using ERS1, it is also necessary to add SAP profile parameters, as described later in this article.
+Azure Load Balancer may close connections if they're idle for a period and exceed the idle timeout. The SAP work processes open connections to the SAP enqueue process as soon as the first enqueue/dequeue request needs to be sent. To avoid interrupting these connections, change the TCP/IP KeepAliveTime and KeepAliveInterval values on both cluster nodes. If using ERS1, it's also necessary to add SAP profile parameters, as described later in this article.
The following registry entries must be changed on both cluster nodes: - KeepAliveTime
The following registry entries must be changed on both cluster nodes:
| HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters | KeepAliveTime | REG_DWORD (Decimal) | 120000 | [KeepAliveTime](/previous-versions/windows/it-pro/windows-2000-server/cc957549(v=technet.10)) |
| HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters | KeepAliveInterval | REG_DWORD (Decimal) | 120000 | [KeepAliveInterval](/previous-versions/windows/it-pro/windows-2000-server/cc957548(v=technet.10)) |
-
To apply the changes, restart both cluster nodes.
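One way to apply these values is with PowerShell on each cluster node; this is a sketch based on the table above, not a script from the original article:

```powershell
# Set both TCP keep-alive values to decimal 120000, as listed in the table above.
$tcpipParams = "HKLM:\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters"
Set-ItemProperty -Path $tcpipParams -Name "KeepAliveTime" -Value 120000 -Type DWord
Set-ItemProperty -Path $tcpipParams -Name "KeepAliveInterval" -Value 120000 -Type DWord

# Restart the node so the TCP/IP stack picks up the new values.
Restart-Computer
```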
-## <a name="e69e9a34-4601-47a3-a41c-d2e11c626c0c"></a> Add the Windows VMs to the domain
-After you assign static IP addresses to the virtual machines, add the virtual machines to the domain.
+## Add the Windows VMs to the domain
+
+After you assign static IP addresses to the virtual machines, add the virtual machines to the domain.
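A minimal sketch of the domain join step, assuming a placeholder domain name and interactive credentials:

```powershell
# Join the VM to the Active Directory domain and restart it; run on each VM.
# "contoso.corp" is a placeholder for your domain.
Add-Computer -DomainName "contoso.corp" -Credential (Get-Credential) -Restart
```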
-## <a name="0d67f090-7928-43e0-8772-5ccbf8f59aab"></a> Install and configure Windows failover cluster
+## Install and configure Windows failover cluster
### Install the Windows failover cluster feature
Invoke-Command $ClusterNodes {Install-WindowsFeature Failover-Clustering, FS-Fil
Once the feature installation has completed, reboot both cluster nodes.
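A minimal sketch of the feature installation, assuming the two node names from the earlier tables (not the exact command from the article):

```powershell
# Install the failover clustering feature and the file server role on both nodes remotely.
$ClusterNodes = "pr1-ascs-10", "pr1-ascs-11"
Invoke-Command -ComputerName $ClusterNodes -ScriptBlock {
    Install-WindowsFeature -Name Failover-Clustering, FS-FileServer -IncludeManagementTools
}
```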
-### Test and configure Windows failover cluster
+### Test and configure Windows failover cluster
-On Windows 2019, the cluster will automatically recognize that it is running in Azure, and as a default option for cluster management IP, it will use Distributed Network name. Therefore, it will use any of the cluster nodes local IP addresses. As a result, there is no need for a dedicated (virtual) network name for the cluster, and there is no need to configure this IP address on Azure Internal Load Balancer.
+On Windows 2019, the cluster will automatically recognize that it's running in Azure, and as a default option for cluster management IP, it uses Distributed Network name. Therefore, it uses any of the cluster nodes local IP addresses. As a result, there's no need for a dedicated (virtual) network name for the cluster, and there's no need to configure this IP address on Azure Internal Load Balancer.
For more information, see, [Windows Server 2019 Failover Clustering New features](https://techcommunity.microsoft.com/t5/failover-clustering/windows-server-2019-failover-clustering-new-features/ba-p/544029) Run this command on one of the cluster nodes:
if($WindowsVersion -eq "Windows Server 2019 Datacenter"){
```

### Configure cluster cloud quorum
+
As you use Windows Server 2016 or 2019, we recommend configuring [Azure Cloud Witness](/windows-server/failover-clustering/deploy-cloud-witness) as the cluster quorum. Run this command on one of the cluster nodes:
Set-ClusterQuorum -CloudWitness -AccountName $AzureStorageAccountName -Acces
```

### Tuning the Windows failover cluster thresholds
-
+
After you successfully install the Windows failover cluster, you need to adjust some thresholds so that they're suitable for clusters deployed in Azure. The parameters to be changed are documented in [Tuning failover cluster network thresholds](https://techcommunity.microsoft.com/t5/Failover-Clustering/Tuning-Failover-Cluster-Network-Thresholds/ba-p/371834). Assuming that your two VMs that make up the Windows cluster configuration for ASCS/SCS are in the same subnet, change the following parameters to these values:
+
- SameSubNetDelay = 2000
- SameSubNetThreshold = 15
- RouteHistoryLength = 30
-These settings were tested with customers and offer a good compromise. They are resilient enough, but they also provide failover that is fast enough for real error conditions in SAP workloads or VM failure.
+These settings were tested with customers and offer a good compromise. They're resilient enough, but they also provide failover that is fast enough for real error conditions in SAP workloads or VM failure.
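As a sketch (not from the linked article), these values can be applied and verified on one of the cluster nodes with the FailoverClusters PowerShell module:

```powershell
# Apply the recommended values to the cluster common properties.
(Get-Cluster).SameSubnetDelay = 2000
(Get-Cluster).SameSubnetThreshold = 15
(Get-Cluster).RouteHistoryLength = 30

# Verify the resulting configuration.
Get-Cluster | Format-List SameSubnetDelay, SameSubnetThreshold, RouteHistoryLength
```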
## Configure Azure shared disk
-This section is only applicable, if you are using Azure shared disk.
+
+This section is only applicable, if you're using Azure shared disk.
### Create and attach Azure shared disk with PowerShell
-Run this command on one of the cluster nodes. You will need to adjust the values for your resource group, Azure region, SAPSID, and so on.
+
+Run this command on one of the cluster nodes. You'll need to adjust the values for your resource group, Azure region, SAPSID, and so on.
```powershell #############################
$NumberOfWindowsClusterNodes = 2
$SkuName = "Premium_LRS"
# For SAP deployment in availability zone, use below storage SkuName
# $SkuName = "Premium_ZRS"
-
+
$diskConfig = New-AzDiskConfig -Location $location -SkuName $SkuName -CreateOption Empty -DiskSizeGB $DiskSizeInGB -MaxSharesCount $NumberOfWindowsClusterNodes
$dataDisk = New-AzDisk -ResourceGroupName $ResourceGroupName -DiskName $DiskName -Disk $diskConfig
Update-AzVm -VM $vm -ResourceGroupName $ResourceGroupName -Verbose
```

### Format the shared disk with PowerShell
+
1. Get the disk number. Run these PowerShell commands on one of the cluster nodes:

    ```powershell
- Get-Disk | Where-Object PartitionStyle -Eq "RAW" | Format-Table -AutoSize
- # Example output
- # Number Friendly Name Serial Number HealthStatus OperationalStatus Total Size Partition Style
- # - - -- -
- # 2 Msft Virtual Disk Healthy Online 512 GB RAW
-
+ Get-Disk | Where-Object PartitionStyle -Eq "RAW" | Format-Table -AutoSize
+ # Example output
+ # Number Friendly Name Serial Number HealthStatus OperationalStatus Total Size Partition Style
+ # - - -- -
+ # 2 Msft Virtual Disk Healthy Online 512 GB RAW
```
-2. Format the disk. In this example, it is disk number 2.
+
+2. Format the disk. In this example, it's disk number 2.
```powershell
- # Format SAP ASCS Disk number '2', with drive letter 'S'
- $SAPSID = "PR1"
- $DiskNumber = 2
- $DriveLetter = "S"
- $DiskLabel = "$SAPSID" + "SAP"
-
- Get-Disk -Number $DiskNumber | Where-Object PartitionStyle -Eq "RAW" | Initialize-Disk -PartitionStyle GPT -PassThru | New-Partition -DriveLetter $DriveLetter -UseMaximumSize | Format-Volume -FileSystem ReFS -NewFileSystemLabel $DiskLabel -Force -Verbose
- # Example outout
- # DriveLetter FileSystemLabel FileSystem DriveType HealthStatus OperationalStatus SizeRemaining Size
- # -- - -- - -
- # S PR1SAP ReFS Fixed Healthy OK 504.98 GB 511.81 GB
+ # Format SAP ASCS Disk number '2', with drive letter 'S'
+ $SAPSID = "PR1"
+ $DiskNumber = 2
+ $DriveLetter = "S"
+ $DiskLabel = "$SAPSID" + "SAP"
+
+ Get-Disk -Number $DiskNumber | Where-Object PartitionStyle -Eq "RAW" | Initialize-Disk -PartitionStyle GPT -PassThru | New-Partition -DriveLetter $DriveLetter -UseMaximumSize | Format-Volume -FileSystem ReFS -NewFileSystemLabel $DiskLabel -Force -Verbose
+ # Example output
+ # DriveLetter FileSystemLabel FileSystem DriveType HealthStatus OperationalStatus SizeRemaining Size
+ # -- - -- - -
+ # S PR1SAP ReFS Fixed Healthy OK 504.98 GB 511.81 GB
    ```

3. Verify that the disk is now visible as a cluster disk.
+
    ```powershell
- # List all disks
- Get-ClusterAvailableDisk -All
- # Example output
- # Cluster : pr1clust
- # Id : 88ff1d94-0cf1-4c70-89ae-cbbb2826a484
- # Name : Cluster Disk 1
- # Number : 2
- # Size : 549755813888
- # Partitions : {\\?\GLOBALROOT\Device\Harddisk2\Partition2\}
+ # List all disks
+ Get-ClusterAvailableDisk -All
+ # Example output
+ # Cluster : pr1clust
+ # Id : 88ff1d94-0cf1-4c70-89ae-cbbb2826a484
+ # Name : Cluster Disk 1
+ # Number : 2
+ # Size : 549755813888
+ # Partitions : {\\?\GLOBALROOT\Device\Harddisk2\Partition2\}
    ```
+
4. Register the disk in the cluster.
+
    ```powershell
- # Add the disk to cluster
- Get-ClusterAvailableDisk -All | Add-ClusterDisk
- # Example output
- # Name State OwnerGroup ResourceType
- # - -- -
- # Cluster Disk 1 Online Available Storage Physical Disk
+ # Add the disk to cluster
+ Get-ClusterAvailableDisk -All | Add-ClusterDisk
+ # Example output
+ # Name State OwnerGroup ResourceType
+ # - -- -
+ # Cluster Disk 1 Online Available Storage Physical Disk
```
-## <a name="5c8e5482-841e-45e1-a89d-a05c0907c868"></a> SIOS DataKeeper Cluster Edition for the SAP ASCS/SCS cluster share disk
-This section is only applicable, if you are using the third-party software SIOS DataKeeper Cluster Edition to create a mirrored storage that simulates cluster shared disk.
+## SIOS DataKeeper Cluster Edition for the SAP ASCS/SCS cluster share disk
+
+This section is only applicable, if you're using the third-party software SIOS DataKeeper Cluster Edition to create a mirrored storage that simulates cluster shared disk.
Now, you have a working Windows Server failover clustering configuration in Azure. To install an SAP ASCS/SCS instance, you need a shared disk resource. One of the options is SIOS DataKeeper Cluster Edition, a third-party solution that you can use to create shared disk resources.

Installing SIOS DataKeeper Cluster Edition for the SAP ASCS/SCS cluster share disk involves these tasks:
-- Add Microsoft .NET Framework, if needed. See the [SIOS documentation](https://us.sios.com/products/sios-datakeeper/) for the most up-to-date .NET framework requirements
+
+- Add Microsoft .NET Framework, if needed. See the [SIOS documentation](https://us.sios.com/products/sios-datakeeper/) for the most up-to-date .NET framework requirements
- Install SIOS DataKeeper
- Configure SIOS DataKeeper

### Install SIOS DataKeeper
+
Install SIOS DataKeeper Cluster Edition on each node in the cluster. To create virtual shared storage with SIOS DataKeeper, create a synced mirror and then simulate cluster shared storage. Before you install the SIOS software, create the DataKeeperSvc domain user.
Before you install the SIOS software, create the DataKeeperSvc domain user.
![Figure 31: First page of the SIOS DataKeeper installation][sap-ha-guide-figure-3031]
- _First page of the SIOS DataKeeper installation_
+ *First page of the SIOS DataKeeper installation*
2. In the dialog box, select **Yes**. ![Figure 32: DataKeeper informs you that a service will be disabled][sap-ha-guide-figure-3032]
- _DataKeeper informs you that a service will be disabled_
+ *DataKeeper informs you that a service will be disabled*
3. In the dialog box, we recommend that you select **Domain or Server account**. ![Figure 33: User selection for SIOS DataKeeper][sap-ha-guide-figure-3033]
- _User selection for SIOS DataKeeper_
+ *User selection for SIOS DataKeeper*
4. Enter the domain account user name and password that you created for SIOS DataKeeper. ![Figure 34: Enter the domain user name and password for the SIOS DataKeeper installation][sap-ha-guide-figure-3034]
- _Enter the domain user name and password for the SIOS DataKeeper installation_
+ *Enter the domain user name and password for the SIOS DataKeeper installation*
5. Install the license key for your SIOS DataKeeper instance, as shown in Figure 35. ![Figure 35: Enter your SIOS DataKeeper license key][sap-ha-guide-figure-3035]
- _Enter your SIOS DataKeeper license key_
+ *Enter your SIOS DataKeeper license key*
6. When prompted, restart the virtual machine.

### Configure SIOS DataKeeper
+
After you install SIOS DataKeeper on both nodes, start the configuration. The goal of the configuration is to have synchronous data replication between the additional disks that are attached to each of the virtual machines.

1. Start the DataKeeper Management and Configuration tool, and then select **Connect Server**.

   ![Figure 36: SIOS DataKeeper Management and Configuration tool][sap-ha-guide-figure-3036]
- _SIOS DataKeeper Management and Configuration tool_
+ *SIOS DataKeeper Management and Configuration tool*
2. Enter the name or TCP/IP address of the first node the Management and Configuration tool should connect to, and, in a second step, the second node. ![Figure 37: Insert the name or TCP/IP address of the first node the Management and Configuration tool should connect to, and in a second step, the second node][sap-ha-guide-figure-3037]
- _Insert the name or TCP/IP address of the first node the Management and Configuration tool should connect to, and in a second step, the second node_
+ *Insert the name or TCP/IP address of the first node the Management and Configuration tool should connect to, and in a second step, the second node*
3. Create the replication job between the two nodes. ![Figure 38: Create a replication job][sap-ha-guide-figure-3038]
- _Create a replication job_
+ *Create a replication job*
A wizard guides you through the process of creating a replication job.
After you install SIOS DataKeeper on both nodes, start the configuration. The go
![Figure 39: Define the name of the replication job][sap-ha-guide-figure-3039]
- _Define the name of the replication job_
+ *Define the name of the replication job*
![Figure 40: Define the base data for the node, which should be the current source node][sap-ha-guide-figure-3040]
- _Define the base data for the node, which should be the current source node_
+ *Define the base data for the node, which should be the current source node*
5. Define the name, TCP/IP address, and disk volume of the target node. ![Figure 41: Define the name, TCP/IP address, and disk volume of the current target node][sap-ha-guide-figure-3041]
- _Define the name, TCP/IP address, and disk volume of the current target node_
+ *Define the name, TCP/IP address, and disk volume of the current target node*
6. Define the compression algorithms. In our example, we recommend that you compress the replication stream. Especially in resynchronization situations, the compression of the replication stream dramatically reduces resynchronization time. Compression uses the CPU and RAM resources of a virtual machine. As the compression rate increases, so does the volume of CPU resources that are used. You can adjust this setting later.
After you install SIOS DataKeeper on both nodes, start the configuration. The go
![Figure 42: Define replication details][sap-ha-guide-figure-3042]
- _Define replication details_
+ *Define replication details*
8. Define whether the volume that is replicated by the replication job should be represented to a Windows Server failover cluster configuration as a shared disk. For the SAP ASCS/SCS configuration, select **Yes** so that the Windows cluster sees the replicated volume as a shared disk that it can use as a cluster volume. ![Figure 43: Select Yes to set the replicated volume as a cluster volume][sap-ha-guide-figure-3043]
- _Select **Yes** to set the replicated volume as a cluster volume_
+ *Select **Yes** to set the replicated volume as a cluster volume*
After the volume is created, the DataKeeper Management and Configuration tool shows that the replication job is active. ![Figure 44: DataKeeper synchronous mirroring for the SAP ASCS/SCS share disk is active][sap-ha-guide-figure-3044]
- _DataKeeper synchronous mirroring for the SAP ASCS/SCS share disk is active_
+ *DataKeeper synchronous mirroring for the SAP ASCS/SCS share disk is active*
Failover Cluster Manager now shows the disk as a DataKeeper disk, as shown in Figure 45: ![Figure 45: Failover Cluster Manager shows the disk that DataKeeper replicated][sap-ha-guide-figure-3045]
- _Failover Cluster Manager shows the disk that DataKeeper replicated_
-
+ *Failover Cluster Manager shows the disk that DataKeeper replicated*
## Next steps
-* [Install SAP NetWeaver HA by using a Windows failover cluster and shared disk for an SAP ASCS/SCS instance][sap-high-availability-installation-wsfc-shared-disk]
+- [Install SAP NetWeaver HA by using a Windows failover cluster and shared disk for an SAP ASCS/SCS instance][sap-high-availability-installation-wsfc-shared-disk]
sap Sap High Availability Installation Wsfc File Share https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/sap-high-availability-installation-wsfc-file-share.md
Title: SAP NetWeaver high availability installation on a Windows failover cluste
description: SAP NetWeaver high availability installation on a Windows failover cluster and file share for SAP ASCS/SCS instances
-tags: azure-resource-manager
# Install SAP NetWeaver high availability on a Windows failover cluster and file share for SAP ASCS/SCS instances on Azure
-[1928533]:https://launchpad.support.sap.com/#/notes/1928533
-[1999351]:https://launchpad.support.sap.com/#/notes/1999351
-[2015553]:https://launchpad.support.sap.com/#/notes/2015553
-[2178632]:https://launchpad.support.sap.com/#/notes/2178632
-[2243692]:https://launchpad.support.sap.com/#/notes/2243692
-[1596496]:https://launchpad.support.sap.com/#/notes/1596496
-
-[sap-installation-guides]:http://service.sap.com/instguides
-
-[sap-powershell-scrips]:https://github.com/Azure-Samples/sap-powershell
-
-[azure-resource-manager/management/azure-subscription-service-limits]:../../azure-resource-manager/management/azure-subscription-service-limits.md
-[azure-resource-manager/management/azure-subscription-service-limits-subscription]:../../azure-resource-manager/management/azure-subscription-service-limits.md
- [s2d-in-win-2016]:/windows-server/storage/storage-spaces/storage-spaces-direct-overview [sofs-overview]:https://technet.microsoft.com/library/hh831349(v=ws.11).aspx [new-in-win-2016-storage]:/windows-server/storage/whats-new-in-storage
-[dbms-guide]:../../virtual-machines-windows-sap-dbms-guide-general.md
-
-[deployment-guide]:deployment-guide.md
-
-[dr-guide-classic]:https://go.microsoft.com/fwlink/?LinkID=521971
-
-[getting-started]:get-started.md
-
-[sap-blog-new-sap-cluster-resource-dll]:https://blogs.sap.com/2017/08/28/new-sap-cluster-resource-dll-is-available/
-[sap-high-availability-architecture-scenarios]:sap-high-availability-architecture-scenarios.md
-[sap-high-availability-guide-wsfc-shared-disk]:sap-high-availability-guide-wsfc-shared-disk.md
[sap-high-availability-guide-wsfc-file-share]:sap-high-availability-guide-wsfc-file-share.md
-[sap-ascs-high-availability-multi-sid-wsfc]:sap-ascs-high-availability-multi-sid-wsfc.md
-[sap-high-availability-infrastructure-wsfc-shared-disk]:sap-high-availability-infrastructure-wsfc-shared-disk.md
[sap-high-availability-infrastructure-wsfc-file-share]:sap-high-availability-infrastructure-wsfc-file-share.md
-[sap-high-availability-installation-wsfc-shared-disk]:sap-high-availability-installation-wsfc-shared-disk.md
-[sap-hana-ha]:sap-hana-high-availability.md
-[sap-suse-ascs-ha]:high-availability-guide-suse.md
-[sap-high-availability-installation-wsfc-shared-disk]:sap-high-availability-installation-wsfc-shared-disk.md
[sap-high-availability-installation-wsfc-shared-disk-create-ascs-virt-host]:sap-high-availability-installation-wsfc-shared-disk.md#a97ad604-9094-44fe-a364-f89cb39bf097 [sap-high-availability-installation-wsfc-shared-disk-add-probe-port]:sap-high-availability-installation-wsfc-shared-disk.md#10822f4f-32e7-4871-b63a-9b86c76ce761
-[planning-guide]:planning-guide.md
-[planning-guide-11]:planning-guide.md
-[planning-guide-2.1]:planning-guide.md#1625df66-4cc6-4d60-9202-de8a0b77f803
-[planning-guide-2.2]:planning-guide.md#f5b3b18c-302c-4bd8-9ab2-c388f1ab3d10
-
-[planning-guide-microsoft-azure-networking]:planning-guide.md#61678387-8868-435d-9f8c-450b2424f5bd
-[planning-guide-storage-microsoft-azure-storage-and-data-disks]:planning-guide.md#a72afa26-4bf4-4a25-8cf7-855d6032157f
-
-[sap-high-availability-infrastructure-wsfc-shared-disk]:sap-high-availability-infrastructure-wsfc-shared-disk.md
-
-[sap-high-availability-infrastructure-wsfc-shared-disk]:sap-high-availability-infrastructure-wsfc-shared-disk.md
-[sap-high-availability-infrastructure-wsfc-shared-disk-vpn]:sap-high-availability-infrastructure-wsfc-shared-disk.md#c87a8d3f-b1dc-4d2f-b23c-da4b72977489
-[sap-high-availability-infrastructure-wsfc-shared-disk-change-def-ilb]:sap-high-availability-infrastructure-wsfc-shared-disk.md#fe0bd8b5-2b43-45e3-8295-80bee5415716
-[sap-high-availability-infrastructure-wsfc-shared-disk-setup-wsfc]:sap-high-availability-infrastructure-wsfc-shared-disk.md#0d67f090-7928-43e0-8772-5ccbf8f59aab
-[sap-high-availability-infrastructure-wsfc-shared-disk-collect-cluster-config]:sap-high-availability-infrastructure-wsfc-shared-disk.md#5eecb071-c703-4ccc-ba6d-fe9c6ded9d79
-[sap-high-availability-infrastructure-wsfc-shared-disk-install-sios]:sap-high-availability-infrastructure-wsfc-shared-disk.md#5c8e5482-841e-45e1-a89d-a05c0907c868
-[sap-high-availability-infrastructure-wsfc-shared-disk-add-dot-net]:sap-high-availability-infrastructure-wsfc-shared-disk.md#1c2788c3-3648-4e82-9e0d-e058e475e2a3
-[sap-high-availability-infrastructure-wsfc-shared-disk-install-sios-both-nodes]:sap-high-availability-infrastructure-wsfc-shared-disk.md#dd41d5a2-8083-415b-9878-839652812102
-[sap-high-availability-infrastructure-wsfc-shared-disk-setup-sios]:sap-high-availability-infrastructure-wsfc-shared-disk.md#d9c1fc8e-8710-4dff-bec2-1f535db7b006
-
-[sap-ha-guide-figure-1000]:./media/virtual-machines-shared-sap-high-availability-guide/1000-wsfc-for-sap-ascs-on-azure.png
-[sap-ha-guide-figure-1001]:./media/virtual-machines-shared-sap-high-availability-guide/1001-wsfc-on-azure-ilb.png
-[sap-ha-guide-figure-1002]:./media/virtual-machines-shared-sap-high-availability-guide/1002-wsfc-sios-on-azure-ilb.png
-[sap-ha-guide-figure-2000]:./media/virtual-machines-shared-sap-high-availability-guide/2000-wsfc-sap-as-ha-on-azure.png
-[sap-ha-guide-figure-2001]:./media/virtual-machines-shared-sap-high-availability-guide/2001-wsfc-sap-ascs-ha-on-azure.png
-[sap-ha-guide-figure-2003]:./media/virtual-machines-shared-sap-high-availability-guide/2003-wsfc-sap-dbms-ha-on-azure.png
-[sap-ha-guide-figure-2004]:./media/virtual-machines-shared-sap-high-availability-guide/2004-wsfc-sap-ha-e2e-archit-template1-on-azure.png
-[sap-ha-guide-figure-2005]:./media/virtual-machines-shared-sap-high-availability-guide/2005-wsfc-sap-ha-e2e-arch-template2-on-azure.png
-
-[sap-ha-guide-figure-3000]:./media/virtual-machines-shared-sap-high-availability-guide/3000-template-parameters-sap-ha-arm-on-azure.png
-[sap-ha-guide-figure-3001]:./media/virtual-machines-shared-sap-high-availability-guide/3001-configuring-dns-servers-for-Azure-vnet.png
-[sap-ha-guide-figure-3002]:./media/virtual-machines-shared-sap-high-availability-guide/3002-configuring-static-IP-address-for-network-card-of-each-vm.png
-[sap-ha-guide-figure-3003]:./media/virtual-machines-shared-sap-high-availability-guide/3003-setup-static-ip-address-ilb-for-ascs-instance.png
-[sap-ha-guide-figure-3004]:./media/virtual-machines-shared-sap-high-availability-guide/3004-default-ascs-scs-ilb-balancing-rules-for-azure-ilb.png
-[sap-ha-guide-figure-3005]:./media/virtual-machines-shared-sap-high-availability-guide/3005-changing-ascs-scs-default-ilb-rules-for-azure-ilb.png
-[sap-ha-guide-figure-3006]:./media/virtual-machines-shared-sap-high-availability-guide/3006-adding-vm-to-domain.png
-[sap-ha-guide-figure-3007]:./media/virtual-machines-shared-sap-high-availability-guide/3007-config-wsfc-1.png
-[sap-ha-guide-figure-3008]:./media/virtual-machines-shared-sap-high-availability-guide/3008-config-wsfc-2.png
-[sap-ha-guide-figure-3009]:./media/virtual-machines-shared-sap-high-availability-guide/3009-config-wsfc-3.png
-[sap-ha-guide-figure-3010]:./media/virtual-machines-shared-sap-high-availability-guide/3010-config-wsfc-4.png
-[sap-ha-guide-figure-3011]:./media/virtual-machines-shared-sap-high-availability-guide/3011-config-wsfc-5.png
-[sap-ha-guide-figure-3012]:./media/virtual-machines-shared-sap-high-availability-guide/3012-config-wsfc-6.png
-[sap-ha-guide-figure-3013]:./media/virtual-machines-shared-sap-high-availability-guide/3013-config-wsfc-7.png
-[sap-ha-guide-figure-3014]:./media/virtual-machines-shared-sap-high-availability-guide/3014-config-wsfc-8.png
-[sap-ha-guide-figure-3015]:./media/virtual-machines-shared-sap-high-availability-guide/3015-config-wsfc-9.png
-[sap-ha-guide-figure-3016]:./media/virtual-machines-shared-sap-high-availability-guide/3016-config-wsfc-10.png
-[sap-ha-guide-figure-3017]:./media/virtual-machines-shared-sap-high-availability-guide/3017-config-wsfc-11.png
-[sap-ha-guide-figure-3018]:./media/virtual-machines-shared-sap-high-availability-guide/3018-config-wsfc-12.png
-[sap-ha-guide-figure-3019]:./media/virtual-machines-shared-sap-high-availability-guide/3019-assign-permissions-on-share-for-cluster-name-object.png
-[sap-ha-guide-figure-3020]:./media/virtual-machines-shared-sap-high-availability-guide/3020-change-object-type-include-computer-objects.png
-[sap-ha-guide-figure-3021]:./media/virtual-machines-shared-sap-high-availability-guide/3021-check-box-for-computer-objects.png
-[sap-ha-guide-figure-3022]:./media/virtual-machines-shared-sap-high-availability-guide/3022-set-security-attributes-for-cluster-name-object-on-file-share-quorum.png
-[sap-ha-guide-figure-3023]:./media/virtual-machines-shared-sap-high-availability-guide/3023-call-configure-cluster-quorum-setting-wizard.png
-[sap-ha-guide-figure-3024]:./media/virtual-machines-shared-sap-high-availability-guide/3024-selection-screen-different-quorum-configurations.png
-[sap-ha-guide-figure-3025]:./media/virtual-machines-shared-sap-high-availability-guide/3025-selection-screen-file-share-witness.png
-[sap-ha-guide-figure-3026]:./media/virtual-machines-shared-sap-high-availability-guide/3026-define-file-share-location-for-witness-share.png
-[sap-ha-guide-figure-3027]:./media/virtual-machines-shared-sap-high-availability-guide/3027-successful-reconfiguration-cluster-file-share-witness.png
-[sap-ha-guide-figure-3028]:./media/virtual-machines-shared-sap-high-availability-guide/3028-install-dot-net-framework-35.png
-[sap-ha-guide-figure-3029]:./media/virtual-machines-shared-sap-high-availability-guide/3029-install-dot-net-framework-35-progress.png
-[sap-ha-guide-figure-3030]:./media/virtual-machines-shared-sap-high-availability-guide/3030-sios-installer.png
-[sap-ha-guide-figure-3031]:./media/virtual-machines-shared-sap-high-availability-guide/3031-first-screen-sios-data-keeper-installation.png
-[sap-ha-guide-figure-3032]:./media/virtual-machines-shared-sap-high-availability-guide/3032-data-keeper-informs-service-be-disabled.png
-[sap-ha-guide-figure-3033]:./media/virtual-machines-shared-sap-high-availability-guide/3033-user-selection-sios-data-keeper.png
-[sap-ha-guide-figure-3034]:./media/virtual-machines-shared-sap-high-availability-guide/3034-domain-user-sios-data-keeper.png
-[sap-ha-guide-figure-3035]:./media/virtual-machines-shared-sap-high-availability-guide/3035-provide-sios-data-keeper-license.png
-[sap-ha-guide-figure-3036]:./media/virtual-machines-shared-sap-high-availability-guide/3036-data-keeper-management-config-tool.png
-[sap-ha-guide-figure-3037]:./media/virtual-machines-shared-sap-high-availability-guide/3037-tcp-ip-address-first-node-data-keeper.png
-[sap-ha-guide-figure-3038]:./media/virtual-machines-shared-sap-high-availability-guide/3038-create-replication-sios-job.png
-[sap-ha-guide-figure-3039]:./media/virtual-machines-shared-sap-high-availability-guide/3039-define-sios-replication-job-name.png
-[sap-ha-guide-figure-3040]:./media/virtual-machines-shared-sap-high-availability-guide/3040-define-sios-source-node.png
-[sap-ha-guide-figure-3041]:./media/virtual-machines-shared-sap-high-availability-guide/3041-define-sios-target-node.png
-[sap-ha-guide-figure-3042]:./media/virtual-machines-shared-sap-high-availability-guide/3042-define-sios-synchronous-replication.png
-[sap-ha-guide-figure-3043]:./media/virtual-machines-shared-sap-high-availability-guide/3043-enable-sios-replicated-volume-as-cluster-volume.png
-[sap-ha-guide-figure-3044]:./media/virtual-machines-shared-sap-high-availability-guide/3044-data-keeper-synchronous-mirroring-for-SAP-gui.png
-[sap-ha-guide-figure-3045]:./media/virtual-machines-shared-sap-high-availability-guide/3045-replicated-disk-by-data-keeper-in-wsfc.png
-[sap-ha-guide-figure-3046]:./media/virtual-machines-shared-sap-high-availability-guide/3046-dns-entry-sap-ascs-virtual-name-ip.png
-[sap-ha-guide-figure-3047]:./media/virtual-machines-shared-sap-high-availability-guide/3047-dns-manager.png
-[sap-ha-guide-figure-3048]:./media/virtual-machines-shared-sap-high-availability-guide/3048-default-cluster-probe-port.png
-[sap-ha-guide-figure-3049]:./media/virtual-machines-shared-sap-high-availability-guide/3049-cluster-probe-port-after.png
-[sap-ha-guide-figure-3050]:./media/virtual-machines-shared-sap-high-availability-guide/3050-service-type-ers-delayed-automatic.png
-[sap-ha-guide-figure-5000]:./media/virtual-machines-shared-sap-high-availability-guide/5000-wsfc-sap-sid-node-a.png
-[sap-ha-guide-figure-5001]:./media/virtual-machines-shared-sap-high-availability-guide/5001-sios-replicating-local-volume.png
-[sap-ha-guide-figure-5002]:./media/virtual-machines-shared-sap-high-availability-guide/5002-wsfc-sap-sid-node-b.png
-[sap-ha-guide-figure-5003]:./media/virtual-machines-shared-sap-high-availability-guide/5003-sios-replicating-local-volume-b-to-a.png
-
-[sap-ha-guide-figure-6003]:./media/virtual-machines-shared-sap-high-availability-guide/6003-sap-multi-sid-full-landscape.png
--
-[sap-ha-guide-figure-8001]:./media/virtual-machines-shared-sap-high-availability-guide/8001.png
-[sap-ha-guide-figure-8002]:./media/virtual-machines-shared-sap-high-availability-guide/8002.png
-[sap-ha-guide-figure-8003]:./media/virtual-machines-shared-sap-high-availability-guide/8003.png
-[sap-ha-guide-figure-8004]:./media/virtual-machines-shared-sap-high-availability-guide/8004.png
-[sap-ha-guide-figure-8005]:./media/virtual-machines-shared-sap-high-availability-guide/8005.png
-[sap-ha-guide-figure-8006]:./media/virtual-machines-shared-sap-high-availability-guide/8006.png
-[sap-ha-guide-figure-8007]:./media/virtual-machines-shared-sap-high-availability-guide/8007.png
-[sap-ha-guide-figure-8008]:./media/virtual-machines-shared-sap-high-availability-guide/8008.png
-[sap-ha-guide-figure-8009]:./media/virtual-machines-shared-sap-high-availability-guide/8009.png
-[sap-ha-guide-figure-8010]:./media/virtual-machines-shared-sap-high-availability-guide/8010.png
-[sap-ha-guide-figure-8011]:./media/virtual-machines-shared-sap-high-availability-guide/8011.png
-[sap-ha-guide-figure-8012]:./media/virtual-machines-shared-sap-high-availability-guide/8012.png
-[sap-ha-guide-figure-8013]:./media/virtual-machines-shared-sap-high-availability-guide/8013.png
-[sap-ha-guide-figure-8014]:./media/virtual-machines-shared-sap-high-availability-guide/8014.png
-[sap-ha-guide-figure-8015]:./media/virtual-machines-shared-sap-high-availability-guide/8015.png
-[sap-ha-guide-figure-8016]:./media/virtual-machines-shared-sap-high-availability-guide/8016.png
-[sap-ha-guide-figure-8017]:./media/virtual-machines-shared-sap-high-availability-guide/8017.png
-[sap-ha-guide-figure-8018]:./media/virtual-machines-shared-sap-high-availability-guide/8018.png
-[sap-ha-guide-figure-8019]:./media/virtual-machines-shared-sap-high-availability-guide/8019.png
-[sap-ha-guide-figure-8020]:./media/virtual-machines-shared-sap-high-availability-guide/8020.png
-[sap-ha-guide-figure-8021]:./media/virtual-machines-shared-sap-high-availability-guide/8021.png
-[sap-ha-guide-figure-8022]:./media/virtual-machines-shared-sap-high-availability-guide/8022.png
-[sap-ha-guide-figure-8023]:./media/virtual-machines-shared-sap-high-availability-guide/8023.png
-[sap-ha-guide-figure-8024]:./media/virtual-machines-shared-sap-high-availability-guide/8024.png
-[sap-ha-guide-figure-8025]:./media/virtual-machines-shared-sap-high-availability-guide/8025.png
-
-[sap-templates-3-tier-multisid-xscs-marketplace-image]:https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fsap-3-tier-marketplace-image-multi-sid-xscs%2Fazuredeploy.json
-[sap-templates-3-tier-multisid-xscs-marketplace-image-md]:https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fapplication-workloads%2Fsap%2Fsap-3-tier-marketplace-image-multi-sid-xscs-md%2Fazuredeploy.json
-[sap-templates-3-tier-multisid-db-marketplace-image]:https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fsap-3-tier-marketplace-image-multi-sid-db%2Fazuredeploy.json
-[sap-templates-3-tier-multisid-db-marketplace-image-md]:https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fapplication-workloads%2Fsap%2Fsap-3-tier-marketplace-image-multi-sid-db-md%2Fazuredeploy.json
-[sap-templates-3-tier-multisid-apps-marketplace-image]:https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fsap-3-tier-marketplace-image-multi-sid-apps%2Fazuredeploy.json
-[sap-templates-3-tier-multisid-apps-marketplace-image-md]:https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fapplication-workloads%2Fsap%2Fsap-3-tier-marketplace-image-multi-sid-apps-md%2Fazuredeploy.json
-
-[virtual-machines-azure-resource-manager-architecture-benefits-arm]:../../azure-resource-manager/management/overview.md#the-benefits-of-using-resource-manager
-
-[virtual-machines-manage-availability]:../../virtual-machines-windows-manage-availability.md
-
This article describes how to install and configure a high-availability SAP system on Azure, with Windows Server Failover Cluster (WSFC) and Scale-Out File Server as an option for clustering SAP ASCS/SCS instances.

## Prerequisites
Before you start the installation, review the following articles:
* [Prepare Azure infrastructure SAP high availability by using a Windows failover cluster and file share for SAP ASCS/SCS instances][sap-high-availability-infrastructure-wsfc-file-share]
-* [High availability for SAP NetWeaver on Azure VMs]
-*
+* [High availability for SAP NetWeaver on Azure VMs](./sap-high-availability-architecture-scenarios.md)
You need the following executables and DLLs from SAP:
+
* SAP Software Provisioning Manager (SWPM) installation tool version SPS25 or later.
* SAP Kernel 7.49 or later

> [!IMPORTANT]
-> Clustering SAP ASCS/SCS instances by using a file share is supported for SAP NetWeaver 7.40 (and later), with SAP Kernel 7.49 (and later).
->
-> [!IMPORTANT]
-> The setup must meet the following requirement: the SAP ASCS/SCS instances and the SOFS share must be deployed in separate clusters.
>
+> Clustering SAP ASCS/SCS instances by using a file share is supported for SAP NetWeaver 7.40 (and later), with SAP Kernel 7.49 (and later).
+> The setup must meet the following requirement: the SAP ASCS/SCS instances and the SOFS share must be deployed in separate clusters.
We do not describe the Database Management System (DBMS) setup because setups vary depending on the DBMS you use. However, we assume that high-availability concerns with the DBMS are addressed with the functionalities that various DBMS vendors support for Azure. Such functionalities include Always On or database mirroring for SQL Server, and Oracle Data Guard for Oracle databases. In the scenario we use in this article, we didn't add more protection to the DBMS.
There are no special considerations when various DBMS services interact with thi
> [!NOTE] > The installation procedures of SAP NetWeaver ABAP systems, Java systems, and ABAP+Java systems are almost identical. The most significant difference is that an SAP ABAP system has one ASCS instance. The SAP Java system has one SCS instance. The SAP ABAP+Java system has one ASCS instance and one SCS instance running in the same Microsoft failover cluster group. Any installation differences for each SAP NetWeaver installation stack are explicitly mentioned. You can assume that all other parts are the same.
->
->
## Prepare an SAP global host on the SOFS cluster
Create the following volume and file share on the SOFS cluster:
* SAPMNT file share
* Set security on the SAPMNT file share and folder with full control for:
- * The \<DOMAIN>\SAP_\<SID>_GlobalAdmin user group
- * The SAP ASCS/SCS cluster node computer objects \<DOMAIN>\ClusterNode1$ and \<DOMAIN>\ClusterNode2$
+ * The \<DOMAIN>\SAP_\<SID>_GlobalAdmin user group
+ * The SAP ASCS/SCS cluster node computer objects \<DOMAIN>\ClusterNode1$ and \<DOMAIN>\ClusterNode2$
To create a CSV volume with mirror resiliency, execute the following PowerShell cmdlet on one of the SOFS cluster nodes:
-
```powershell
New-Volume -StoragePoolFriendlyName S2D* -FriendlyName SAPPR1 -FileSystem CSVFS_ReFS -Size 5GB -ResiliencySettingName Mirror
```
+
To create SAPMNT and set folder and share security, execute the following PowerShell script on one of the SOFS cluster nodes:

```powershell
$Acl.SetAccessRule($Ar)
# Set security
Set-Acl $UsrSAPFolder $Acl -Verbose
- ```
+```
## Create a virtual host name for the clustered SAP ASCS/SCS instance

Create an SAP ASCS/SCS cluster network name (for example, **pr1-ascs [10.0.6.7]**), as described in [Create a virtual host name for the clustered SAP ASCS/SCS instance][sap-high-availability-installation-wsfc-shared-disk-create-ascs-virt-host].
-
## Install an ASCS/SCS and ERS instances in the cluster

### Install an ASCS/SCS instance on the first ASCS/SCS cluster node
Install an SAP ASCS/SCS instance on the second cluster node. To install the inst
**\<Product>** > **\<DBMS>** > **Installation** > **Application Server ABAP** (or **Java**) > **High-Availability System** > **ASCS/SCS instance** > **Additional cluster node**.
-
## Update the SAP ASCS/SCS instance profile

Update parameters in the SAP ASCS/SCS instance profile \<SID>_ASCS/SCS\<Nr>_\<Host>.
-
| Parameter name | Parameter value |
| --- | --- |
| gw/netstat_once | **0** |
Update parameters in the SAP ASCS/SCS instance profile \<SID>_ASCS/SCS\<Nr>_\<Ho
| service/ha_check_node | **1** |

Parameter `enque/encni/set_so_keepalive` is only needed if using ENSA1.
-Restart the SAP ASCS/SCS instance.
-Set `KeepAlive` parameters on both SAP ASCS/SCS cluster nodes follow the instructions to [Set registry entries on the cluster nodes of the SAP ASCS/SCS instance](./sap-high-availability-infrastructure-wsfc-shared-disk.md#661035b2-4d0f-4d31-86f8-dc0a50d78158).
-
+Restart the SAP ASCS/SCS instance.
+To set `KeepAlive` parameters on both SAP ASCS/SCS cluster nodes, follow the instructions in [Set registry entries on the cluster nodes of the SAP ASCS/SCS instance](./sap-high-availability-infrastructure-wsfc-shared-disk.md#add-registry-entries-on-both-cluster-nodes-of-the-ascsscs-instance).
## Install a DBMS instance and SAP application servers

Finalize your SAP system installation by installing:
+
* A DBMS instance.
* A primary SAP application server.
* An additional SAP application server.

## Next steps
-* [Install an ASCS/SCS instance on a failover cluster with no shared disks - Official SAP guidelines for high-availability file share][sap-official-ha-file-share-document]
-
-* [Storage Spaces Direct in Windows Server 2016][s2d-in-win-2016]
-
-* [Scale-Out File Server for application data overview][sofs-overview]
-
-* [What's new in storage in Windows Server 2016][new-in-win-2016-storage]
+* [Storage Spaces Direct in Windows Server 2016][s2d-in-win-2016].
+* [Scale-Out File Server for application data overview][sofs-overview].
+* [What's new in storage in Windows Server 2016][new-in-win-2016-storage].
sap Sap High Availability Installation Wsfc Shared Disk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/sap-high-availability-installation-wsfc-shared-disk.md
documentationcenter: saponazure tags: azure-resource-manager
-keywords: ''
ms.assetid: 6209bcb3-5b20-4845-aa10-1475c576659f
sap Sap Higher Availability Architecture Scenarios https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/sap-higher-availability-architecture-scenarios.md
documentationcenter: saponazure tags: azure-resource-manager
-keywords: ''
ms.assetid: f0b2f8f0-e798-4176-8217-017afe147917
sap Sap Information Lifecycle Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/sap-information-lifecycle-management.md
Title: SAP Information Lifecycle Management with Microsoft Azure Blob Storage | Microsoft Docs description: SAP Information Lifecycle Management with Microsoft Azure Blob Storage tags: azure-resource-manager
-keywords: ''
sap Vm Extension For Sap New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/vm-extension-for-sap-new.md
Title: New version of Azure VM extension for SAP solutions | Microsoft Docs description: Learn how to deploy the new VM Extension for SAP. tags: azure-resource-manager
-keywords: ''
ms.assetid: 1c4f1951-3613-4a5a-a0af-36b85750c84e
sap Vm Extension For Sap Standard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/vm-extension-for-sap-standard.md
Title: Standard version of Azure VM extension for SAP solutions | Microsoft Docs description: Learn how to deploy the Std VM Extension for SAP. tags: azure-resource-manager
-keywords: ''
ms.assetid: 1c4f1951-3613-4a5a-a0af-36b85750c84e
sap Vm Extension For Sap https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/vm-extension-for-sap.md
Title: Implement the Azure VM extension for SAP solutions | Microsoft Docs description: Learn how to deploy the VM Extension for SAP. tags: azure-resource-manager
-keywords: ''
ms.assetid: 1c4f1951-3613-4a5a-a0af-36b85750c84e
search Performance Benchmarks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/performance-benchmarks.md
- ignite-2023 Previously updated : 01/31/2023 Last updated : 01/19/2024 # Azure AI Search performance benchmarks
-Azure AI Search's performance depends on a [variety of factors](search-performance-tips.md) including the size of your search service and the types of queries you're sending. To help estimate the size of search service needed for your workload, we've run several benchmarks to document the performance for different search services and configurations. These benchmarks in no way guarantee a certain level of performance from your service but can give you an idea of the performance you can expect.
+Azure AI Search's performance depends on a [variety of factors](search-performance-tips.md) including the size of your search service and the types of queries you're sending. To help estimate the size of search service needed for your workload, we've run several benchmarks to document the performance for different search services and configurations. *These benchmarks in no way guarantee a certain level of performance from your service but can give you an idea of the performance you can expect*.
To cover a range of different use cases, we ran benchmarks for two main scenarios:
Each scenario used at least 10,000 unique queries to avoid tests being overly sk
- **Latency** - The server's latency for a query; these numbers don't include [round trip delay (RTT)](https://en.wikipedia.org/wiki/Round-trip_delay). Values are in milliseconds (ms).
-### Disclaimer
+## Testing disclaimer
The code we used to run these benchmarks is available on the [azure-search-performance-testing](https://github.com/Azure-Samples/azure-search-performance-testing/tree/main/other_tools) repository. It's worth noting that we observed slightly lower QPS levels with the [JMeter performance testing solution](https://github.com/Azure-Samples/azure-search-performance-testing) than in the benchmarks. The differences can be attributed to differences in the style of the tests. This speaks to the importance of making your performance tests as similar to your production workload as possible.
-These benchmarks in no way guarantee a certain level of performance from your service but can give you an idea of the performance you can expect based on your scenario.
+> [!IMPORTANT]
+> These benchmarks in no way guarantee a certain level of performance from your service but can give you an idea of the performance you can expect based on your scenario.
If you have any questions or concerns, reach out to us at azuresearch_contact@microsoft.com.
The following chart shows the highest query load a service could handle for an e
#### Query latency
-Query latency varies based on the load of the service and services under higher stress will have a higher average query latency. The following table shows the 25th, 50th, 75th, 90th, 95th, and 99th percentiles of query latency for three different usage levels.
+Query latency varies based on the load of the service, and services under higher stress have a higher average query latency. The following table shows the 25th, 50th, 75th, 90th, 95th, and 99th percentiles of query latency for three different usage levels.
| Percentage of max QPS | Average latency | 25% | 75% | 90% | 95% | 99%| ||||| | | |
Query latency varies based on the load of the service and services under higher
| 50% | 140 ms | 47 ms | 144 ms | 241 ms | 400 ms | 1175 ms | | 80% | 239 ms | 77 ms | 248 ms | 466 ms | 763 ms | 1752 ms | - ### S2 Performance #### Queries per second
The following chart shows the highest query load a service could handle for an e
#### Query latency
-Query latency varies based on the load of the service and services under higher stress will have a higher average query latency. The following table shows the 25th, 50th, 75th, 90th, 95th, and 99th percentiles of query latency for three different usage levels.
+Query latency varies based on the load of the service, and services under higher stress have a higher average query latency. The following table shows the 25th, 50th, 75th, 90th, 95th, and 99th percentiles of query latency for three different usage levels.
| Percentage of max QPS | Average latency | 25% | 75% | 90% | 95% | 99%| ||||| | | |
In this case, we see that adding a second partition significantly increases the
#### Query latency
-Query latency varies based on the load of the service and services under higher stress will have a higher average query latency. The following table shows the 25th, 50th, 75th, 90th, 95th, and 99th percentiles of query latency for three different usage levels.
+Query latency varies based on the load of the service, and services under higher stress have a higher average query latency. The following table shows the 25th, 50th, 75th, 90th, 95th, and 99th percentiles of query latency for three different usage levels.
| Percentage of max QPS | Average latency | 25% | 75% | 90% | 95% | 99%| ||||| | | |
The following chart shows the highest query load a service could handle for an e
#### Query latency
-Query latency varies based on the load of the service and services under higher stress will have a higher average query latency. The following table shows the 25th, 50th, 75th, 90th, 95th, and 99th percentiles of query latency for three different usage levels.
+Query latency varies based on the load of the service, and services under higher stress have a higher average query latency. The following table shows the 25th, 50th, 75th, 90th, 95th, and 99th percentiles of query latency for three different usage levels.
| Percentage of max QPS | Average latency | 25% | 75% | 90% | 95% | 99%| ||||| | | |
The following chart shows the highest query load a service could handle for an e
#### Query latency
-Query latency varies based on the load of the service and services under higher stress will have a higher average query latency. The following table shows the 25th, 50th, 75th, 90th, 95th, and 99th percentiles of query latency for three different usage levels.
+Query latency varies based on the load of the service, and services under higher stress have a higher average query latency. The following table shows the 25th, 50th, 75th, 90th, 95th, and 99th percentiles of query latency for three different usage levels.
| Percentage of max QPS | Average latency | 25% | 75% | 90% | 95% | 99%| ||||| | | |
The following chart shows the highest query load a service could handle for an e
#### Query latency
-Query latency varies based on the load of the service and services under higher stress will have a higher average query latency. The following table shows the 25th, 50th, 75th, 90th, 95th, and 99th percentiles of query latency for three different usage levels.
+Query latency varies based on the load of the service, and services under higher stress have a higher average query latency. The following table shows the 25th, 50th, 75th, 90th, 95th, and 99th percentiles of query latency for three different usage levels.
| Percentage of max QPS | Average latency | 25% | 75% | 90% | 95% | 99%| ||||| | | |
Through these benchmarks, you can get an idea of the performance Azure AI Search
Some key takeaways from these benchmarks are: * An S2 can typically handle at least four times the query volume of an S1
-* An S2 will typically have lower latency than an S1 at comparable query volumes
+* An S2 typically has lower latency than an S1 at comparable query volumes
* As you add replicas, the QPS a service can handle typically scales linearly (for example, if one replica can handle 10 QPS then five replicas can usually handle 50 QPS)
-* The higher the load on the service, the higher the average latency will be
+* The higher the load on the service, the higher the average latency
You can also see that performance can vary drastically between scenarios. If you're not getting the performance you expect, check out the [tips for better performance](search-performance-tips.md).
search Search Get Started Vector https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-get-started-vector.md
Last updated 01/19/2024
Get started with vector search in Azure AI Search using the **2023-11-01** REST APIs that create, load, and query a search index.
-Search indexes can have vector and non-vector fields. You can create pure vector queries, or hybrid queries targeting both vector *and* textual fields configured for filters, sorts, facets, and semantic reranking.
+Search indexes can have vector and nonvector fields. You can execute pure vector queries, or hybrid queries targeting both vector *and* textual fields configured for filters, sorts, facets, and semantic reranking.
> [!NOTE]
-> Looking for [built-in data chunking and vectorization public preview](vector-search-integrated-vectorization.md)? Try the [**Import and vectorize data** wizard](search-get-started-portal-import-vectors.md) instead.
+> The stable REST API version depends on external modules for data chunking and embedding. If you want to test-drive the [built-in data chunking and vectorization (public preview)](vector-search-integrated-vectorization.md) features, try the [**Import and vectorize data** wizard](search-get-started-portal-import-vectors.md) for an end-to-end walkthrough.
## Prerequisites + [Postman app](https://www.postman.com/downloads/) ++ [Sample Postman collection](https://github.com/Azure-Samples/azure-search-postman-samples/tree/main/Quickstart-vectors), with requests targeting the **2023-11-01** API version of Azure AI Search.+ + An Azure subscription. [Create one for free](https://azure.microsoft.com/free/). + Azure AI Search, in any region and on any tier. Most existing services support vector search. For a small subset of services created prior to January 2019, an index containing vector fields will fail on creation. In this situation, a new service must be created.
- For the optional [semantic ranking](semantic-search-overview.md) shown in the last example, your search service must be Basic tier or higher, with [semantic ranking enabled](semantic-how-to-enable-disable.md).
-
-+ [Sample Postman collection](https://github.com/Azure-Samples/azure-search-postman-samples/tree/main/Quickstart-vectors), with requests targeting the **2023-11-01** API version of Azure AI Search.
++ Optionally, for [semantic reranking](semantic-search-overview.md) shown in the last example, your search service must be Basic tier or higher, with [semantic ranking enabled](semantic-how-to-enable-disable.md). + Optionally, an [Azure OpenAI](https://aka.ms/oai/access) resource with a deployment of **text-embedding-ada-002**. The quickstart includes an optional step for generating new text embeddings, but we provide existing embeddings so that you can skip this step.
You should get a status HTTP 201 success.
**Key points:**
-+ The `"fields"` collection includes a required key field, text and vector fields (such as `"Description"`, `"DescriptionVector"`) for keyword and vector search. Colocating vector and non-vector fields in the same index enables hybrid queries. For instance, you can combine filters, keyword search with semantic ranking, and vectors into a single query operation.
++ The `"fields"` collection includes a required key field, text and vector fields (such as `"Description"`, `"DescriptionVector"`) for keyword and vector search. Colocating vector and nonvector fields in the same index enables hybrid queries. For instance, you can combine filters, keyword search with semantic ranking, and vectors into a single query operation. + Vector fields must be `"type": "Collection(Edm.Single)"` with `"dimensions"` and `"vectorSearchProfile"` properties. See [Create or Update Index](/rest/api/searchservice/indexes/create-or-update) for property descriptions.
The response for the vector equivalent of "classic lodging near running trails,
### Single vector search with filter
-You can add filters, but the filters are applied to the non-vector content in your index. In this example, the filter applies to the `"Tags"` field, filtering out any hotels that don't provide free WIFI.
+You can add filters, but the filters are applied to the nonvector content in your index. In this example, the filter applies to the `"Tags"` field, filtering out any hotels that don't provide free WIFI.
This example sets `vectorFilterMode` to pre-query filtering, which is the default, so you don't need to set it. It's listed here for awareness because it's a newer feature.
security Secure Deploy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/develop/secure-deploy.md
ms.assetid: 521180dc-2cc9-43f1-ae87-2701de7ca6b8- # Deploy secure applications on Azure
security Secure Dev Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/develop/secure-dev-overview.md
ms.assetid: 521180dc-2cc9-43f1-ae87-2701de7ca6b8- # Secure development best practices on Azure
security Secure Develop https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/develop/secure-develop.md
ms.assetid: 521180dc-2cc9-43f1-ae87-2701de7ca6b8- # Develop secure applications on Azure
security Threat Modeling Tool Auditing And Logging https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/develop/threat-modeling-tool-auditing-and-logging.md
Title: Auditing and Logging - Microsoft Threat Modeling Tool - Azure | Microsoft Docs description: Learn about auditing and logging mitigation in the Threat Modeling Tool. See mitigation information and view code examples. -- - Last updated 02/07/2017
security Threat Modeling Tool Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/develop/threat-modeling-tool-authentication.md
Title: Authentication - Microsoft Threat Modeling Tool - Azure | Microsoft Docs description: Learn about authentication mitigation in the Threat Modeling Tool. See mitigation information and view code examples. -- - Last updated 02/07/2017
security Threat Modeling Tool Authorization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/develop/threat-modeling-tool-authorization.md
Title: Authorization - Microsoft Threat Modeling Tool - Azure | Microsoft Docs description: Learn about authorization mitigation in the Threat Modeling Tool. See a list of potential threats and mitigation instructions. -- - Last updated 02/07/2017
security Threat Modeling Tool Communication Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/develop/threat-modeling-tool-communication-security.md
Title: Communication security for the Microsoft Threat Modeling Tool
description: Learn about mitigation for communication security threats exposed in the Threat Modeling Tool. See mitigation information and view code examples. -- - Last updated 02/07/2017
security Threat Modeling Tool Configuration Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/develop/threat-modeling-tool-configuration-management.md
Title: Configuration management for the Microsoft Threat Modeling Tool
description: Learn about configuration management for the Threat Modeling Tool. See mitigation information and view code examples. -- - Last updated 02/07/2017
security Threat Modeling Tool Cryptography https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/develop/threat-modeling-tool-cryptography.md
Title: Cryptography - Microsoft Threat Modeling Tool - Azure | Microsoft Docs description: Learn about cryptography mitigation for threats exposed in the Threat Modeling Tool. See mitigation information and view code examples. - editor: jegeib
security Threat Modeling Tool Exception Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/develop/threat-modeling-tool-exception-management.md
Title: Exception Management - Microsoft Threat Modeling Tool - Azure | Microsoft Docs description: Learn about exception management in the Threat Modeling Tool. See mitigation information and view code examples. -- - Last updated 02/07/2017
security Threat Modeling Tool Getting Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/develop/threat-modeling-tool-getting-started.md
Title: Getting Started - Microsoft Threat Modeling Tool - Azure | Microsoft Docs description: Learn how to get started using the Threat Modeling Tool. Create a diagram, identify threats, mitigate threats, and validate each mitigation. -- - Last updated 08/17/2017
security Threat Modeling Tool Input Validation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/develop/threat-modeling-tool-input-validation.md
Title: Input Validation - Microsoft Threat Modeling Tool - Azure | Microsoft Docs description: Learn about input validation in the Threat Modeling Tool. See mitigation information and view code examples. -- - Last updated 02/07/2017
security Threat Modeling Tool Mitigations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/develop/threat-modeling-tool-mitigations.md
Title: Mitigations - Microsoft Threat Modeling Tool - Azure | Microsoft Docs description: Mitigations page for the Microsoft Threat Modeling Tool highlighting possible solutions to the most exposed generated threats. -- - Last updated 08/17/2017
security Threat Modeling Tool Releases https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/develop/threat-modeling-tool-releases.md
Title: Microsoft Threat Modeling Tool release notes
description: Read the release notes for all updates of the Microsoft Threat Modeling Tool. See a download link and system requirements. -- - Last updated 03/22/2019
security Threat Modeling Tool Sensitive Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/develop/threat-modeling-tool-sensitive-data.md
Title: Sensitive Data - Microsoft Threat Modeling Tool - Azure | Microsoft Docs description: Learn about sensitive data mitigation in the Threat Modeling Tool. See mitigation information and view code examples. -- - Last updated 02/07/2017
security Threat Modeling Tool Session Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/develop/threat-modeling-tool-session-management.md
Title: Session Management - Microsoft Threat Modeling Tool - Azure | Microsoft Docs description: Learn about session management mitigation for threats exposed in the Threat Modeling Tool. See mitigation information and view code examples. -- - Last updated 02/07/2017
security Threat Modeling Tool Threats https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/develop/threat-modeling-tool-threats.md
Title: Threats - Microsoft Threat Modeling Tool - Azure | Microsoft Docs description: Threat category page for the Microsoft Threat Modeling Tool, containing categories for all exposed generated threats. -- - Last updated 08/17/2017
security Antimalware Code Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/antimalware-code-samples.md
Title: Microsoft Antimalware code samples for Azure | Microsoft Docs description: PowerShell code samples to enable and configure Microsoft Antimalware.
ms.assetid: 265683c8-30d7-4f2b-b66c-5082a18f7a8b
- Last updated 01/25/2023
security Antimalware https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/antimalware.md
Title: Microsoft Antimalware for Azure | Microsoft Docs description: Learn about Microsoft Antimalware for Azure Cloud Services and Virtual Machines. See information about topics like architecture and deployment scenarios.
ms.assetid: 265683c8-30d7-4f2b-b66c-5082a18f7a8b
- Last updated 04/27/2023
security Azure Domains https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/azure-domains.md
- Last updated 07/07/2020
security Azure Marketplace Images https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/azure-marketplace-images.md
Title: Security Recommendations for Azure Marketplace Images | Microsoft Docs description: This article provides recommendations for images included in the market place
security Cyber Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/cyber-services.md
Title: Microsoft Services in Cybersecurity | Microsoft Docs description: The article provides an introduction about Microsoft services related to cybersecurity and how to obtain more information about these services.
ms.assetid: 925ba3c6-fe35-413a-98ea-e1a1461f3022
- Last updated 04/03/2023
security Data Encryption Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/data-encryption-best-practices.md
Title: Data security and encryption best practices - Microsoft Azure description: This article provides a set of best practices for data security and encryption using built in Azure capabilities.
ms.assetid: 17ba67ad-e5cd-4a8f-b435-5218df753ca4
- Last updated 01/22/2023
security Database Security Checklist https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/database-security-checklist.md
Title: Azure database security checklist| Microsoft Docs description: Use the Azure database security checklist to make sure that you address important cloud computing security issues. - Last updated 01/29/2023
security Double Encryption https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/double-encryption.md
Title: Double Encryption in Microsoft Azure description: This article describes how Microsoft Azure provides double encryption for data at rest and data in transit.
ms.assetid: 9dcb190e-e534-4787-bf82-8ce73bf47dba
- Last updated 07/01/2022
security Encryption Atrest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/encryption-atrest.md
Title: Azure Data Encryption-at-Rest - Azure Security description: This article provides an overview of Azure Data Encryption at-rest, the overall capabilities, and general considerations.
ms.assetid: 9dcb190e-e534-4787-bf82-8ce73bf47dba
- Last updated 11/14/2022
security Encryption Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/encryption-models.md
Title: Data encryption models in Microsoft Azure description: This article provides an overview of data encryption models In Microsoft Azure. ms.assetid: 9dcb190e-e534-4787-bf82-8ce73bf47dba - Last updated 05/05/2023
security Encryption Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/encryption-overview.md
Title: Azure encryption overview | Microsoft Docs
description: Learn about encryption options in Azure. See information for encryption at rest, encryption in flight, and key management with Azure Key Vault. -
security End To End https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/end-to-end.md
Title: End-to-end security in Azure | Microsoft Docs description: The article provides a map of Azure services that help you secure and protect your cloud resources and detect and investigate threats.
ms.assetid: a5a7f60a-97e2-49b4-a8c5-7c010ff27ef8
- Last updated 01/29/2023
security Event Support Ticket https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/event-support-ticket.md
Title: How to Log an Azure security issue - Azure | Microsoft Docs description: As a seller on the Azure Marketplace, having identified a potential security event, I need to know how to log an appropriate ticket.
ms.assetid: f1ffde66-98f0-4c3e-ad94-fee1f97cae03
- Last updated 01/29/2023
security Iaas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/iaas.md
Title: Security best practices for IaaS workloads in Azure | Microsoft Docs description: " The migration of workloads to Azure IaaS brings opportunities to reevaluate our designs "
ms.assetid: 02c5b7d2-a77f-4e7f-9a1e-40247c57e7e2
- Last updated 08/29/2023
security Identity Management Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/identity-management-best-practices.md
Title: Azure identity & access security best practices | Microsoft Docs description: This article provides a set of best practices for identity management and access control using built in Azure capabilities.
ms.assetid: 07d8e8a8-47e8-447c-9c06-3a88d2713bc1
- Last updated 08/29/2023
security Identity Management Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/identity-management-overview.md
Title: Azure security features that help with identity management | Microsoft Docs description: Learn about the core Azure security features that help with identity management. See information about topics like single sign-on and reverse proxy.
ms.assetid: 5aa0a7ac-8f18-4ede-92a1-ae0dfe585e28
- Last updated 12/05/2022 # Customer intent: As an IT Pro or decision maker, I am trying to learn about identity management capabilities in Azure
security Infrastructure Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/infrastructure-availability.md
Title: Azure infrastructure availability - Azure security description: This article provides information about what Microsoft does to secure the Azure infrastructure and provide maximum availability of customers' data.
ms.assetid: 61e95a87-39c5-48f5-aee6-6f90ddcd336e
- Last updated 01/20/2023
security Infrastructure Components https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/infrastructure-components.md
Title: Azure information system components and boundaries description: This article provides a general description of the Microsoft Azure architecture and management.
ms.assetid: 61e95a87-39c5-48f5-aee6-6f90ddcd336e
- Last updated 02/09/2023
security Infrastructure Integrity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/infrastructure-integrity.md
Title: Azure infrastructure integrity description: Learn about Azure infrastructure integrity and the steps Microsoft takes to secure it, such as virus scans on software component builds.
ms.assetid: 61e95a87-39c5-48f5-aee6-6f90ddcd336e
- Last updated 01/30/2023
security Infrastructure Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/infrastructure-monitoring.md
Title: Azure infrastructure monitoring description: Learn about infrastructure monitoring aspects of the Azure production network, such as vulnerability scanning.
ms.assetid: 61e95a87-39c5-48f5-aee6-6f90ddcd336e
- Last updated 08/29/2023
security Infrastructure Network https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/infrastructure-network.md
Title: Azure network architecture description: This article provides a general description of the Microsoft Azure infrastructure network.
ms.assetid: 61e95a87-39c5-48f5-aee6-6f90ddcd336e
- Last updated 09/08/2020
security Infrastructure Operations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/infrastructure-operations.md
Title: Management of Azure production network - Microsoft Azure description: This article describes how Microsoft manages and operates the Azure production network to secure the Azure datacenters.- - ms.assetid: 61e95a87-39c5-48f5-aee6-6f90ddcd336e - Last updated 08/29/2023 - # Management and operation of the Azure production network
security Infrastructure Sql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/infrastructure-sql.md
Title: Azure SQL Database security features description: This article provides a general description of how Azure SQL Database protects customer data in Azure.
ms.assetid: 61e95a87-39c5-48f5-aee6-6f90ddcd336e
- Last updated 08/29/2023
security Infrastructure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/infrastructure.md
Title: Azure infrastructure security | Microsoft Docs description: Learn how Microsoft works to secure the Azure datacenters. The datacenters are managed, monitored, and administered by Microsoft operations staff.
ms.assetid: 61e95a87-39c5-48f5-aee6-6f90ddcd336e
- Last updated 01/31/2023
security Isolation Choices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/isolation-choices.md
Title: Isolation in the Azure Public Cloud | Microsoft Docs description: Learn how Azure provides isolation against both malicious and non-malicious users and offers various isolation choices to architects. - - Last updated 08/29/2023
security Key Management Choose https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/key-management-choose.md
description: This article provides a detailed explanation of how to choose the right Key Management solution in Azure. - Last updated 07/25/2023 - # How to choose the right key management solution
security Log Audit https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/log-audit.md
Title: Azure security logging and auditing | Microsoft Docs description: Learn about the logs available in Azure and the security insights you can gain. - - Last updated 08/29/2023
security Management Monitoring Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/management-monitoring-overview.md
Title: Management and monitoring security features - Microsoft Azure | Microsoft Docs description: This article provides an overview of the security features and services that Azure provides to aid in the management and monitoring of Azure cloud services and virtual machines.
ms.assetid: 5cf2827b-6cd3-434d-9100-d7411f7ed424
- Last updated 01/20/2023
security Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/management.md
Title: Enhance remote management security in Azure | Microsoft Docs description: "This article discusses steps for enhancing remote management security while administering Microsoft Azure environments, including cloud services, virtual machines, and custom applications."
ms.assetid: 2431feba-3364-4a63-8e66-858926061dd3
- Last updated 04/03/2023
security Network Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/network-best-practices.md
ms.assetid: 7f6aa45f-138f-4fde-a611-aaf7e8fe56d1
- Last updated 01/29/2023
security Network Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/network-overview.md
Title: Network security concepts and requirements in Azure | Microsoft Docs description: This article provides basic explanations about core network security concepts and requirements, and information on what Azure offers in each of these areas.
ms.assetid: bedf411a-0781-47b9-9742-d524cf3dbfc1
- Last updated 03/31/2023 #Customer intent: As an IT Pro or decision maker, I am looking for information on the network security controls available in Azure.
security Operational Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/operational-best-practices.md
Title: Security best practices for your Azure assets
description: This article provides a set of operational best practices for protecting your data, applications, and other assets in Azure. - - Last updated 04/18/2023
security Operational Checklist https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/operational-checklist.md
Title: Azure operational security checklist| Microsoft Docs description: Review this checklist to help your enterprise think through Azure operational security considerations. -- - Last updated 01/23/2023
security Operational Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/operational-overview.md
Title: Azure operational security overview| Microsoft Docs description: Learn about Azure operational security in this overview. Operational security refers to asset protection services, controls, and features. -- - Last updated 08/29/2023
security Operational Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/operational-security.md
Title: Azure Operational Security | Microsoft Docs description: Introduce yourself to Microsoft Azure Monitor logs, its services, and how it works by reading this overview. editor: TomSh - Last updated 11/21/2017
security Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/overview.md
Title: Introduction to Azure security | Microsoft Docs description: Introduce yourself to Azure Security, its various services, and how it works by reading this overview. - - Last updated 10/22/2023
security Paas Applications Using App Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/paas-applications-using-app-services.md
Title: Securing PaaS web & mobile applications
description: "Learn about Azure App Service security best practices for securing your PaaS web and mobile applications. " - - Last updated 08/29/2023
security Paas Applications Using Sql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/paas-applications-using-sql.md
Title: Securing PaaS Databases in Azure | Microsoft Docs description: "Learn about Azure SQL Database and Azure Synapse Analytics security best practices for securing your PaaS web and mobile applications. " - - Last updated 03/31/2023
security Paas Applications Using Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/paas-applications-using-storage.md
Title: Securing PaaS applications using Azure Storage | Microsoft Docs description: "Learn about Azure Storage security best practices for securing your PaaS web and mobile applications." - - Last updated 01/23/2023
security Paas Deployments https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/paas-deployments.md
Title: Best practices for secure PaaS deployments - Microsoft Azure description: "Learn best practices for designing, building, and managing secure cloud applications on Azure and understand the security advantages of PaaS versus other cloud service models." - - Last updated 03/31/2023
security Pen Testing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/pen-testing.md
Title: Penetration testing | Microsoft Docs description: The article provides an overview of the penetration testing process and how to perform a pen test against your app running in Azure infrastructure. ms.assetid: 695d918c-a9ac-4eba-8692-af4526734ccc - Last updated 03/23/2023
security Physical Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/physical-security.md
Title: Physical security of Azure datacenters - Microsoft Azure | Microsoft Docs description: The article describes what Microsoft does to secure the Azure datacenters, including physical infrastructure, security, and compliance offerings.
ms.assetid: 61e95a87-39c5-48f5-aee6-6f90ddcd336e
- Last updated 01/13/2023
security Production Network https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/production-network.md
Title: Azure production network description: Learn about the Azure production network. See security access methods and protection mechanisms for establishing a connection to the network.
ms.assetid: 61e95a87-39c5-48f5-aee6-6f90ddcd336e
- Last updated 03/31/2023
security Protection Customer Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/protection-customer-data.md
Title: Protection of customer data in Azure description: Learn how Azure protects customer data through data segregation, data redundancy, and data destruction.
ms.assetid: 61e95a87-39c5-48f5-aee6-6f90ddcd336e
- Last updated 08/29/2023
security Ransomware Protection With Azure Firewall https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/ransomware-protection-with-azure-firewall.md
Title: Improve your security defenses for ransomware attacks with Azure Firewall Premium description: In this article, you learn how Azure Firewall Premium can help you protect against ransomware.
ms.assetid: 9dcb190e-e534-4787-bf82-8ce73bf47dba
- Last updated 02/24/2022
security Recover From Identity Compromise https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/recover-from-identity-compromise.md
Title: Use Microsoft and Azure security resources to help recover from systemic identity compromise | Microsoft Docs description: Learn how to use Microsoft and Azure security resources, such as Microsoft Defender XDR, Microsoft Sentinel, Microsoft Entra ID, Microsoft Defender for Cloud, and Microsoft Defender for IoT and Microsoft recommendations to secure your system against systemic-identity compromises. - - Last updated 01/15/2023
security Secrets Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/secrets-best-practices.md
Title: Best practices for protecting secrets - Microsoft Azure | Microsoft Docs description: This article links you to security best practices for protecting secrets.
ms.assetid: 1cbbf8dc-ea94-4a7e-8fa0-c2cb198956c5
- Last updated 11/09/2023
security Services Technologies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/services-technologies.md
Title: Azure Security Services and Technologies | Microsoft Docs description: The article provides a curated list of Azure Security services and technologies.
ms.assetid: a5a7f60a-97e2-49b4-a8c5-7c010ff27ef8
- Last updated 01/16/2023
security Shared Responsibility Ai https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/shared-responsibility-ai.md
Title: AI shared responsibility model - Microsoft Azure description: "Understand the shared responsibility model and which tasks are handled by the AI platform or application provider, and which tasks are handled by you." - - Last updated 10/23/2023
security Shared Responsibility https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/shared-responsibility.md
Title: Shared responsibility in the cloud - Microsoft Azure description: "Understand the shared responsibility model and which security tasks are handled by the cloud provider and which tasks are handled by you." - - Last updated 09/28/2023
security Subdomain Takeover https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/subdomain-takeover.md
- Last updated 01/19/2023
security Technical Capabilities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/technical-capabilities.md
description: Introduction to security services in Azure that help you protect yo
-
security Threat Detection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/threat-detection.md
Title: Azure threat protection | Microsoft Docs description: Learn about built-in threat protection functionality for Azure, such as the Microsoft Entra ID Protection service. - - Last updated 01/20/2023
security Virtual Machines Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/virtual-machines-overview.md
Title: Security features used with Azure VMs
description: This article provides an overview of the core Azure security features that can be used with Azure Virtual Machines.
ms.assetid: 467b2c83-0352-4e9d-9788-c77fb400fe54
- Last updated 12/05/2022
sentinel Basic Logs Use Cases https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/basic-logs-use-cases.md
Last updated 01/05/2023- # Log sources to use for Basic Logs ingestion
sentinel False Positives https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/false-positives.md
Title: Handle false positives in Microsoft Sentinel description: Learn how to resolve false positives in Microsoft Sentinel by creating automation rules or modifying analytics rules to specify exceptions.--++ Previously updated : 01/09/2023 Last updated : 01/15/2024 # Handle false positives in Microsoft Sentinel
This article describes two methods for avoiding false positives:
The following table describes characteristics of each method: - |Method|Characteristic| |-|-| |**Automation rules**|<ul><li>Can apply to several analytics rules.</li><li>Keep an audit trail. Exceptions immediately and automatically close created incidents, recording the reason for the closure and comments.</li><li>Are often generated by analysts.</li><li>Allow applying exceptions for a limited time. For example, maintenance work might trigger false positives that outside the maintenance timeframe would be true incidents.</li></ul>|
You can also do subnet filtering by using a watchlist. For example, in the prece
let subnets = _GetWatchlist('subnetallowlist'); ```
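For orientation, here's a minimal sketch of the subnet allowlist pattern described above; it isn't the article's exact rule logic, and the `SecurityEvent` table and `IpAddress` column are assumptions to substitute with whatever your analytics rule actually queries.

```kusto
// Minimal sketch: drop events whose source IP falls inside any allow-listed subnet.
// 'SecurityEvent' and 'IpAddress' are illustrative placeholders; the watchlist's
// SearchKey column is assumed to hold CIDR ranges such as "10.0.6.0/24".
let subnetRanges = toscalar(
    _GetWatchlist('subnetallowlist')
    | summarize make_list(SearchKey));
SecurityEvent
| where isnotempty(IpAddress)
| where not(ipv4_is_in_any_range(IpAddress, subnetRanges))
```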
+## Example: Manage exceptions for the Microsoft Sentinel solution for SAP® applications
+
+The [Microsoft Sentinel solution for SAP® applications](sap/solution-overview.md) provides functions you can use to exclude users or systems from triggering alerts.
+
+- **Exclude users**. Use the [**SAPUsersGetVIP**](sap/sap-solution-log-reference.md#sapusersgetvip) function to:
+
+ - Call tags for users you want to exclude from triggering alerts. Tag users in the *SAP_User_Config* watchlist, using asterisks (*) as wildcards to tag all users with a specified naming syntax.
+ - List specific SAP roles and/or profiles you want to exclude from triggering alerts.
+
+- **Exclude systems**. Use functions that support the *SelectedSystemRoles* parameter to specify which types of systems trigger alerts: only *Production* systems, only *UAT* systems, or both.
+
+For more information, see [Microsoft Sentinel solution for SAP® applications data reference](sap/sap-solution-log-reference.md).
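As a quick illustration of this exclusion pattern, the following minimal sketch drops audit-log events from users that carry a hypothetical *MassChangesOK* tag in the *SAP_User_Config* watchlist (the tag name is a placeholder; substitute the tags your alert rules actually use):

```kusto
// Minimal sketch: exclude users tagged with a hypothetical 'MassChangesOK' tag
// in the SAP_User_Config watchlist from triggering this rule.
let excludedUsers = SAPUsersGetVIP(SearchForTags=dynamic(["MassChangesOK"]))
    | summarize by User2Exclude=SAPUser;
SAPAuditLog
| join kind=leftantisemi excludedUsers on $left.User == $right.User2Exclude
```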
+ ## Next steps For more information, see:
sentinel Normalization Schema Web https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/normalization-schema-web.md
Title: The Advanced Security Information Model (ASIM) Web Session normalization
description: This article displays the Microsoft Sentinel Web Session normalization schema. cloud: na Last updated 11/17/2021
sentinel Configure Audit Log Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sap/configure-audit-log-rules.md
Title: Configure SAP audit log monitoring rules with Microsoft Sentinel description: Monitor the SAP audit logs using Microsoft Sentinel built-in analytics rules, to easily manage your SAP logs, reducing noise with no compromise to security value. --++ Last updated 08/19/2022 #Customer.intent: As a security operator, I want to monitor the SAP audit logs and easily manage the logs, so I can reduce noise without compromising security value.
sentinel Configure Snc https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sap/configure-snc.md
Title: Deploy the Microsoft Sentinel for SAP data connector with Secure Network Communications (SNC) | Microsoft Docs description: This article shows you how to deploy the Microsoft Sentinel for SAP data connector to ingest NetWeaver/ABAP logs over a secure connection using Secure Network Communications.--++ Last updated 05/03/2022
sentinel Cross Workspace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sap/cross-workspace.md
Title: Working with the Microsoft Sentinel solution for SAP® applications across multiple workspaces description: This article discusses working with Microsoft Sentinel solution for SAP® applications across multiple workspaces in different scenarios.--++ Last updated 03/22/2023
sentinel Deploy Sap Btp Solution https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sap/deploy-sap-btp-solution.md
Title: Deploy Microsoft Sentinel Solution for SAP® BTP description: This article introduces you to the process of deploying the Microsoft Sentinel Solution for SAP® BTP.--++ Last updated 03/30/2023
sentinel Deploy Sap Security Content https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sap/deploy-sap-security-content.md
Title: Deploy the Microsoft Sentinel solution for SAP applications® from the content hub description: This article shows you how to deploy the Microsoft Sentinel solution for SAP applications® security content from the content hub into your Microsoft Sentinel workspace. This content makes up the remaining parts of the Microsoft Sentinel Solution for SAP.--++ Last updated 03/23/2023
sentinel Deployment Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sap/deployment-overview.md
Title: Deploy Microsoft Sentinel solution for SAP® applications in Microsoft Sentinel description: This article introduces you to the process of deploying the Microsoft Sentinel solution for SAP® applications.--++ Last updated 06/19/2023
Microsoft Sentinel solution for SAP® applications is certified for SAP S/4HANA
- Playbooks for automating responses to threats. > [!NOTE]
-> The Microsoft Sentinel for SAP solution is free to install, but there will be an [additional hourly charge](https://azure.microsoft.com/pricing/offers/microsoft-sentinel-sap-promo/) for activating and using the solution on production systems starting May 2023.
+> The Microsoft Sentinel for SAP solution is free to install, but there is an [additional hourly charge](https://azure.microsoft.com/pricing/offers/microsoft-sentinel-sap-promo/) for activating and using the solution on production systems.
>
-> - The additional hourly charge applies to connected production systems only.
-> - Microsoft Sentinel identifies a production system by looking at the configuration on the SAP system. To do this, Microsoft Sentinel searches for a production entry in the T000 table.
-> - [View the roles of your connected production systems](../monitor-sap-system-health.md).
+> - The additional hourly charge applies to connected production systems only.
+> - Microsoft Sentinel identifies a production system by looking at the configuration on the SAP system. To do this, Microsoft Sentinel searches for a production entry in the T000 table.
+>
+> For more information, see [View the roles of your connected production systems](../monitor-sap-system-health.md).
+The Microsoft Sentinel for SAP data connector is an agent that's installed on a VM, a physical server, or a Kubernetes cluster. The agent collects application logs from across the entire SAP system landscape, for all of your SAP SIDs, and sends those logs to your Log Analytics workspace in Microsoft Sentinel. Use the other content in the [Threat Monitoring for SAP solution](sap-solution-security-content.md) (the analytics rules, workbooks, and watchlists) to gain insight into your organization's SAP environment and to detect and respond to security threats.
-The Microsoft Sentinel for SAP data connector is an agent, installed on a VM, a physical server, or a Kubernetes cluster that collects application logs from across the entire SAP system landscape for all of your SAP SIDs. It then sends those logs to your Log Analytics workspace in Microsoft Sentinel. You can then use the other content in the Threat Monitoring for SAP solution (the analytics rules, workbooks, and watchlists) to gain insight into your organization's SAP environment and to detect and respond to security threats.
+For example, the following image shows a multi-SID SAP landscape with a split between productive and non-productive systems, including the SAP Business Technology Platform. All of the systems in this image are onboarded to Microsoft Sentinel for the SAP solution.
- This diagram shows a multi-SID SAP landscape with a split between productive and non-productive systems including the SAP Business Technology Platform. All of the systems and services are being onboarded to the Sentinel for SAP solution.
## Deployment milestones
sentinel Deployment Solution Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sap/deployment-solution-configuration.md
Title: Configure Microsoft Sentinel solution for SAP® applications description: This article shows you how to configure the deployed Microsoft Sentinel solution for SAP® applications--++ Last updated 03/10/2023
sentinel Preparing Sap https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sap/preparing-sap.md
Title: Deploy SAP Change Requests (CRs) and configure authorization description: This article shows you how to deploy the SAP Change Requests (CRs) necessary to prepare the environment for the installation of the SAP agent, so that it can properly connect to your SAP systems.--++ Last updated 03/10/2023
sentinel Prerequisites For Deploying Sap Continuous Threat Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sap/prerequisites-for-deploying-sap-continuous-threat-monitoring.md
Title: Prerequisites for deploying Microsoft Sentinel solution for SAP® applications description: This article lists the prerequisites required for deployment of the Microsoft Sentinel solution for SAP® applications.--++ Last updated 06/19/2023
sentinel Reference Systemconfig Json https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sap/reference-systemconfig-json.md
Title: Microsoft Sentinel solution for SAP® applications systemconfig.json container configuration file reference description: Description of settings available in systemconfig.json file--++ Last updated 06/03/2023
sentinel Reference Systemconfig https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sap/reference-systemconfig.md
Title: Microsoft Sentinel solution for SAP® applications systemconfig.ini container configuration file reference description: Description of settings available in systemconfig.ini file--++ Last updated 06/03/2023
sentinel Sap Audit Log Workbook https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sap/sap-audit-log-workbook.md
Title: Microsoft Sentinel solution for SAP® applications - SAP -Security Audit log and Initial Access workbook overview description: Learn about the SAP -Security Audit log and Initial Access workbook, used to monitor and track data across your SAP systems.--++ Last updated 01/23/2023
sentinel Sap Btp Security Content https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sap/sap-btp-security-content.md
Title: Microsoft Sentinel Solution for SAP® BTP - security content reference description: Learn about the built-in security content provided by the Microsoft Sentinel Solution for SAP® BTP.--++ Last updated 03/30/2023
sentinel Sap Btp Solution Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sap/sap-btp-solution-overview.md
Title: Microsoft Sentinel Solution for SAP® BTP overview description: This article introduces the Microsoft Sentinel Solution for SAP® BTP.--++ Last updated 03/22/2023
sentinel Sap Deploy Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sap/sap-deploy-troubleshoot.md
Title: Microsoft Sentinel solution for SAP® applications deployment troubleshooting description: Learn how to troubleshoot specific issues that may occur in your Microsoft Sentinel solution for SAP® applications deployment.--++ Last updated 01/09/2023
sentinel Sap Solution Deploy Alternate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sap/sap-solution-deploy-alternate.md
Title: Microsoft Sentinel for SAP data connector expert configuration options, on-premises deployment, and SAPControl log sources | Microsoft Docs description: Learn how to deploy Microsoft Sentinel for SAP data connector environments using expert configuration options and an on-premises machine. Also learn more about SAPControl log sources.--++ Last updated 06/19/2023
sentinel Sap Solution Log Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sap/sap-solution-log-reference.md
Title: Microsoft Sentinel solution for SAP® applications - data reference description: Learn about the SAP logs, tables, and functions available from the Microsoft Sentinel solution for SAP® applications.--++ Previously updated : 05/24/2023 Last updated : 01/15/2024 # Microsoft Sentinel solution for SAP® applications data reference
SAPAuditLogAnomalies(LearningTime = 14d, DetectingTime=0h, SelectedSystems= dyna
See [Built-in SAP analytics rules for monitoring the SAP audit log](sap-solution-security-content.md#monitoring-the-sap-audit-log) for more information. ### SAPAuditLogConfigRecommend+ The **SAPAuditLogConfigRecommend** is a helper function designed to offer recommendations for the configuration of the [SAP - Dynamic Anomaly based Audit Log Monitor Alerts (PREVIEW)](sap-solution-security-content.md#sapdynamic-anomaly-based-audit-log-monitor-alerts-preview) analytics rule. Learn how to [configure the rules](configure-audit-log-rules.md). ### SAPUsersGetVIP
-The Sentinel for SAP solution uses a concept of central user tagging, designed to allow for lower false positive rate with minimal effort on the customer end:
+The [Microsoft Sentinel solution for SAP® applications](solution-overview.md) uses a concept of central user tagging and explicit exclusions, designed to help you lower false positives with minimal effort. Use the *SAPUsersGetVIP* function to exclude users from triggering alerts by specifying SAP user roles, SAP user profiles, or tags that represent those users.
+
+Tags specified as input for the *SAPUsersGetVIP* function exclude all users with a tag listed in the *SAP_User_Config* watchlist. The same functionality is extended to work with wildcards, allowing you to assign a single tag to a group of users with the same naming syntax.
+
+1. Tag users in the *SAP_User_Config* watchlist as follows:
+
+ - Add multiple tags to each user in the *SAP_User_Config* watchlist, as needed to cover various scenarios. Each alert rule has its own relevant tags, if any, and you can add custom tags as needed.
-- Users can be tagged using the "SAP User Config" watchlist (for example DDIC is assigned with "RunObsoleteProgOK"). Multiple users can have multiple tags.-- An alert rule sends the relevant tags to the **SAPUsersGetVIP** function asking for a list of users to be excluded. The alert rule "SAP - Execution of an Obsolete or an Insecure Program" can ask for users bearing the tag "RunObsoleteProgOK".
+ - Use an asterisk (*) as a wildcard to include users with a specific naming syntax template.
-Here is a KQL query demonstrating the use case described below:
+1. Add the **SAPUsersGetVIP** function to your analytics rules to retrieve the list of users to exclude from alerts. In the function call, pass an array of the tags, SAP roles, and SAP profiles that you'd like to exclude.
+
+For example, use the following KQL query in your analytics rule to exclude any users configured with the *RunObsoleteProgOK* tag in the *SAP_User_Config* watchlist, or any users with the sample *SAP_BASIS_ADMIN_ROLE* role or the sample *SAP_ADMIN_PROFILE* profile.
+
+When copying this sample function call, replace the *SAP_BASIS_ADMIN_ROLE* role and the *SAP_ADMIN_PROFILE* profile with your own SAP roles or profiles as needed.
```kusto
// Execution of Obsolete/Insecure Program
let ObsoletePrograms = _GetWatchlist("SAP - Obsolete Programs");
// here you can exclude system users which are OK to run obsolete/sensitive programs
// by adding those users in the SAP_User_Config watchlist with a tag of 'RunObsoleteProgOK'
-let excludeUsersTags= dynamic(['RunObsoleteProgOK']);
-let excludedUsers= SAPUsersGetVIP(SearchForTags= dynamic(["RunObsoleteProgOK"]))| summarize by User2Exclude=SAPUser;
+// can also specify SAP roles or SAP profiles that group the users you would like to exclude
+let excludeUsersTagsRolesProfiles= dynamic(["RunObsoleteProgOK","SAP_BASIS_ADMIN_ROLE", "SAP_ADMIN_PROFILE"]);
+let excludedUsers= SAPUsersGetVIP(SearchForTags= excludeUsersTagsRolesProfiles)| summarize by User2Exclude=SAPUser;
// Query logic
-SAPAuditLog
+SAPAuditLog
| where MessageID == 'AUW'
| where ABAPProgramName in (ObsoletePrograms) // The program is obsolete
| join kind=leftantisemi excludedUsers on $left.User == $right.User2Exclude
```
-This functionality is heavily used in the Deterministic and Anomalous Audit Log Monitor Alerts, 'where tags can be associated with SAP audit log message ID, and can also be easily extended to custom alert rules.
+The **SAPUsersGetVIP** function is commonly used in *Deterministic and Anomalous Audit Log Monitor* alerts. Associate a tag with an SAP audit log message ID, or extend the rule template to a custom rule that matches your organization's needs.
+
+> [!TIP]
+> We recommend contacting your SAP system admin to understand which SAP users, roles, and profiles to include in your *SAP_User_Config* watchlist.
+>
+ **Parameters:** -- SearchForTags
- - Optional
- - Default value: dynamic('All Tags')
- - When SearchForTags equals 'All Tags', all users are returned along with their tags, else, only users bearing the tags specified in SearchForTags are returned. TagsIntersect will show which tags were found, and IntersectionSize will hold the count of those.
-- SpecialFocusTags
- - Optional
- - Default value: "Do not return any in-focus users"
- - The function returns all users bearing the tags specified in SpecialFocusTags, and marked those with specialFocusTagged = true.
+|Name |Description |Default value |
+||||
+|**SearchForTags** (Optional) | When `SearchForTags` equals `All Tags`, all users are returned along with their tags. <br><br>Otherwise, only users bearing the tags, SAP roles, or SAP profiles specified in `SearchForTags` are returned. `TagsIntersect` shows the tags that are found, and `IntersectionSize` holds the number of tags that are found. | `dynamic('All Tags')` |
+|**SpecialFocusTags** (Optional) | Returns all users bearing the tags specified in `SpecialFocusTags`, and marks them with `specialFocusTagged = true`. | `Do not return any in-focus users` |
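
For example, the following sketch shows one way to use the `SpecialFocusTags` parameter together with the output columns listed in the tables below. The *FF_EMERGENCY* tag, the seven-day window, and the `TimeGenerated` filter are hypothetical choices for illustration, and the `SpecialFocusTagged` column casing follows the field reference table (adjust it if your workspace returns `specialFocusTagged`).

```kusto
// Sketch only: list recent audit log activity by users carrying a hypothetical
// 'FF_EMERGENCY' focus tag in the SAP_User_Config watchlist.
let focusUsers = SAPUsersGetVIP(SpecialFocusTags= dynamic(["FF_EMERGENCY"]))
    // tostring() tolerates either a boolean or a string representation of the flag
    | where tostring(SpecialFocusTagged) =~ "true"
    | summarize by FocusUser=SAPUser;
SAPAuditLog
| where TimeGenerated > ago(7d)   // assumes the standard TimeGenerated column
| join kind=inner focusUsers on $left.User == $right.FocusUser
```
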
+
| Source | Field | Description | Notes |
| - | - | - | - |
-| The "SAP User Config" watchlist | SearchKey | Search Key |
-| The "SAP User Config" watchlist | SAPUser | The SAP User | OSS, DDIC
-| The "SAP User Config" watchlist | Tags | string of tags assigned to user | RunObsoleteProgOK
-| The "SAP User Config" watchlist | User's Microsoft Entra Object ID | Microsoft Entra Object ID |
-| The "SAP User Config" watchlist | User Identifier | AD User Identifier |
-| The "SAP User Config" watchlist | User on-premises Sid | |
-| The "SAP User Config" watchlist | User Principal Name | |
-| The "SAP User Config" watchlist | TagsList | A list of tags assigned to user | ChangeUserMasterDataOK;RunObsoleteProgOK
-| Logic | TagsIntersect | A set of tags that matched SearchForTags | ["ChangeUserMasterDataOK","RunObsoleteProgOK"]
+| The *SAP_User_Config* watchlist | SearchKey | Search Key | |
+| The *SAP_User_Config* watchlist | SAPUser | The SAP User | OSS, DDIC |
+| The *SAP_User_Config* watchlist | Tags | String of tags assigned to user | RunObsoleteProgOK |
+| The *SAP_User_Config* watchlist | User's Microsoft Entra Object ID | Microsoft Entra Object ID | |
+| The *SAP_User_Config* watchlist | User Identifier | AD User Identifier | |
+| The *SAP_User_Config* watchlist | User on-premises Sid | | |
+| The *SAP_User_Config* watchlist | User Principal Name | | |
+| The *SAP_User_Config* watchlist | TagsList | A list of tags assigned to user | ChangeUserMasterDataOK;RunObsoleteProgOK |
+| Logic | TagsIntersect | A set of tags that matched SearchForTags | ["ChangeUserMasterDataOK","RunObsoleteProgOK"] |
| Logic | SpecialFocusTagged | Special focus indication | True, False |
| Logic | IntersectionSize | The number of intersected Tags | |
SelectedSystemRoles:dynamic = dynamic(["All System Roles"]) SelectedSystems:dyna
- Accepts a single user only #### Additional notes+ For performance considerations, only a few days of audit activity are considered. For a full history of user activity, run a custom KQL query against the SAPAuditLog function.
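
For example, a minimal sketch of such a custom query might look like the following. The 30-day window and the *OSS_ADMIN* user name are hypothetical placeholders, and the projection sticks to columns already shown in this reference plus the standard `TimeGenerated` column.

```kusto
// Sketch only: pull the recent audit trail for a single SAP user.
SAPAuditLog
| where TimeGenerated > ago(30d)      // widen the window for a longer history
| where User == "OSS_ADMIN"           // hypothetical user name, replace as needed
| project TimeGenerated, User, MessageID, ABAPProgramName
| sort by TimeGenerated desc
```
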
sentinel Sap Solution Security Content https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sap/sap-solution-security-content.md
Title: Microsoft Sentinel solution for SAP® applications - security content reference description: Learn about the built-in security content provided by the Microsoft Sentinel solution for SAP® applications.--++ Last updated 03/26/2023
sentinel Sap Suspicious Configuration Security Parameters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sap/sap-suspicious-configuration-security-parameters.md
Title: SAP security parameters monitored by the Microsoft Sentinel solution for SAP® to detect suspicious configuration changes description: Learn about the security parameters in the SAP system that the Microsoft Sentinel solution for SAP® applications monitors for suspicious configuration changes.--++ Last updated 03/26/2023
sentinel Solution Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sap/solution-overview.md
Title: Microsoft Sentinel solution for SAP® applications overview description: This article introduces Microsoft Sentinel solution for SAP® applications--++ Last updated 03/22/2023
Security operations teams have traditionally had very little visibility into SAP
To help close this gap, Microsoft Sentinel offers the Microsoft Sentinel solution for SAP® applications. This comprehensive solution uses components at every level of Microsoft Sentinel to offer end-to-end detection, analysis, investigation, and response to threats in your SAP environment.
-## What Microsoft Sentinel solution for SAP® applications does
+## What the Microsoft Sentinel solution for SAP® applications does
-- The Microsoft Sentinel solution for SAP® applications continuously monitors SAP systems for threats at all layers - business logic, application, database, and OS.
+The Microsoft Sentinel solution for SAP® applications continuously monitors SAP systems for threats at all layers - business logic, application, database, and OS. It allows you to:
-- It allows you to correlate SAP monitoring with other signals across your organization, and to use detections provided by the solution&mdash;or build your own detections&mdash;to monitor sensitive transactions and other business risks such as privilege escalation, unapproved changes, and unauthorized access.
+- Correlate SAP monitoring with other signals across your organization, and use detections provided by the solution&mdash;or build your own detections&mdash;to monitor sensitive transactions and other business risks such as privilege escalation, unapproved changes, and unauthorized access.
-- It also allows you to build automated response processes to interact with your SAP systems to stop active security threats.
+- Build automated response processes to interact with your SAP systems to stop active security threats.
-- In addition to that it offers threat monitoring and detection for SAP Business Technology Platform.
+The Microsoft Sentinel solution for SAP® applications also offers threat monitoring and detection for SAP Business Technology Platform.
-## Solution details
+For example, the following image shows a multi-SID SAP landscape with a split between productive and non-productive systems, including the SAP Business Technology Platform. All of the systems in this image are onboarded to the Microsoft Sentinel solution for SAP® applications.
+
- This diagram shows a multi-SID SAP landscape with a split between productive and non-productive systems including the SAP Business Technology Platform. All of the systems and services are being onboarded to the Sentinel for SAP solution.
+## Solution details
### Log sources
sentinel Security Alert Schema https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/security-alert-schema.md
Title: Microsoft Sentinel security alert schema reference
description: This article displays the schema of security alerts in Microsoft Sentinel. cloud: na Last updated 01/11/2022
sentinel Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/whats-new.md
description: This article describes new features in Microsoft Sentinel from the
Previously updated : 10/25/2023 Last updated : 01/11/2024 # What's new in Microsoft Sentinel
The listed features were released in the last three months. For information abou
[!INCLUDE [reference-to-feature-availability](includes/reference-to-feature-availability.md)]
+## January 2024
+
+### Reduce false positives for SAP systems with analytics rules
+
+Use analytics rules together with the [Microsoft Sentinel solution for SAP® applications](sap/solution-overview.md) to lower the number of false positives triggered from your SAP® systems. The Microsoft Sentinel solution for SAP® applications now includes the following enhancements:
+
+- The [**SAPUsersGetVIP**](sap/sap-solution-log-reference.md#sapusersgetvip) function now supports excluding users according to their SAP roles or profiles.
+
+- The **SAP_User_Config** watchlist now supports using wildcards in the **SAPUser** field to exclude all users whose names match a specific naming pattern.
+
+For more information, see [Microsoft Sentinel solution for SAP® applications data reference](sap/sap-solution-log-reference.md) and [Handle false positives in Microsoft Sentinel](false-positives.md).
+ ## November 2023 - [Take advantage of Microsoft Defender for Cloud integration with Microsoft Defender XDR (Preview)](#take-advantage-of-microsoft-defender-for-cloud-integration-with-microsoft-defender-xdr-preview)
service-bus-messaging Migrate Jms Activemq To Servicebus https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/migrate-jms-activemq-to-servicebus.md
Title: Migrate Java Message Service (JMS) applications from Apache ActiveMQ to Azure Service Bus | Microsoft Docs description: This article explains how to migrate existing JMS applications that interact with Apache ActiveMQ to interact with Azure Service Bus. editor: spelluru - ms.devlang: java Last updated 09/27/2021
service-fabric Cli Create Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/scripts/cli-create-cluster.md
Title: Azure CLI Script Deploy Sample description: How to create a secure Service Fabric Linux cluster in Azure via the Azure CLI. tags: azure-service-management Last updated 01/18/2018
service-fabric Cli Deploy Application https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/scripts/cli-deploy-application.md
Title: Azure Service Fabric CLI (sfctl) Script Deploy Sample description: Deploy an application to an Azure Service Fabric cluster using the Azure Service Fabric CLI tags: azure-service-management
service-fabric Cli Remove Application https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/scripts/cli-remove-application.md
Title: Azure Service Fabric CLI (sfctl) Script Remove Sample description: Remove an application from an Azure Service Fabric cluster using the Azure Service Fabric CLI tags: azure-service-management
service-fabric Service Fabric Powershell Add Application Certificate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/scripts/service-fabric-powershell-add-application-certificate.md
Title: Add application cert to a cluster in PowerShell description: Azure PowerShell Script Sample - Add an application certificate to a Service Fabric cluster. tags: azure-service-management
service-fabric Service Fabric Powershell Add Nsg Rule https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/scripts/service-fabric-powershell-add-nsg-rule.md
Title: Add a network security group rule in PowerShell description: Azure PowerShell Script Sample - Adds a network security group to allow inbound traffic on a specific port. tags: azure-service-management
service-fabric Service Fabric Powershell Change Rdp User And Pw https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/scripts/service-fabric-powershell-change-rdp-user-and-pw.md
Title: Update the RDP username and password in PowerShell description: Azure PowerShell Script Sample - Update the RDP username and password for all Service Fabric cluster nodes of a specific node type. tags: azure-service-management
service-fabric Service Fabric Powershell Create Secure Cluster Cert https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/scripts/service-fabric-powershell-create-secure-cluster-cert.md
Title: Create a Service Fabric cluster in PowerShell description: Azure PowerShell Script Sample - Create a Service Fabric cluster secured with an X.509 certificate. tags: azure-service-management ms.assetid: 0f9c8bc5-3789-4eb3-8deb-ae6e2200795a
service-fabric Service Fabric Powershell Deploy Application https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/scripts/service-fabric-powershell-deploy-application.md
Title: Deploy application to a cluster in PowerShell description: Azure PowerShell Script Sample - Deploy an application to a Service Fabric cluster. tags: azure-service-management
service-fabric Service Fabric Powershell Open Port In Load Balancer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/scripts/service-fabric-powershell-open-port-in-load-balancer.md
Title: Open application port in load balancer in PowerShell description: Azure PowerShell Script Sample - Open a port in the Azure load balancer for a Service Fabric application. tags: azure-service-management
service-fabric Service Fabric Powershell Remove Application https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/scripts/service-fabric-powershell-remove-application.md
Title: Remove application from a cluster in PowerShell description: Azure PowerShell Script Sample - Remove an application from a Service Fabric cluster. tags: azure-service-management
service-fabric Service Fabric Powershell Upgrade Application https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/scripts/service-fabric-powershell-upgrade-application.md
Title: Upgrade a Service Fabric application in PowerShell description: Azure PowerShell Script Sample - Upgrade and monitor an Azure Service Fabric application using PowerShell. tags: azure-service-management
service-fabric Sfctl List Applications https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/scripts/sfctl-list-applications.md
Title: List applications on a cluster in sfctl description: Service Fabric CLI Script Sample - List the applications provisioned on a Service Fabric cluster.
-tags:
- Last updated 04/13/2018 - # List applications running in a Service Fabric cluster
service-fabric Sfctl Upgrade Application https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/scripts/sfctl-upgrade-application.md
Title: Update an application on a cluster in sfctl description: Service Fabric CLI Script Sample - Update an application with a new version. This example also upgrades a deployed application with the new bits.
-tags:
- Last updated 12/06/2017--+ # Update an application using the Service Fabric CLI
service-health Resource Health Checks Resource Types https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-health/resource-health-checks-resource-types.md
Below is a complete list of all the checks executed through resource health by r
|| | - Is the ExpressRoute circuit healthy?|
+## Microsoft.Network/expressRouteGateways (ExpressRoute Gateways in Virtual WAN)
+|Executed Checks|
+||
+| - Is the ExpressRoute Gateway up and running?|
+ ## Microsoft.network/frontdoors |Executed Checks| ||
Below is a complete list of all the checks executed through resource health by r
|| | - Are there any issues impacting the Traffic Manager profile?|
+## Microsoft.Network/virtualHubs
+|Executed Checks|
+||
+| - Is the virtual hub router up and running?|
+ ## Microsoft.network/virtualNetworkGateways |Executed Checks| ||
site-recovery Azure To Azure Enable Global Disaster Recovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/azure-to-azure-enable-global-disaster-recovery.md
Title: Enable disaster recovery across Azure regions across the globe description: This article describes the global disaster recovery feature in Azure Site Recovery.- Last updated 12/14/2023
site-recovery Azure Vm Disaster Recovery With Accelerated Networking https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/azure-vm-disaster-recovery-with-accelerated-networking.md
Title: Enable accelerated networking for Azure VM disaster recovery with Azure Site Recovery description: Describes how to enable Accelerated Networking with Azure Site Recovery for Azure virtual machine disaster recovery
site-recovery Site Recovery Deployment Planner History https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/site-recovery-deployment-planner-history.md
Title: Azure Site Recovery Deployment Planner Version History
description: Known different Site Recovery Deployment Planner Versions fixes and known limitations along with their release dates. -
spring-apps How To Configure Planned Maintenance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-configure-planned-maintenance.md
Last updated 11/07/2023- # How to configure planned maintenance (preview)
sql-server-stretch-database Sql Server Stretch Database Encryption Tde https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sql-server-stretch-database/sql-server-stretch-database-encryption-tde.md
Title: Enable Transparent Data Encryption for Stretch Database description: Enable Transparent Data Encryption (TDE) for SQL Server Stretch Database on Azure ms.assetid: a44ed8f5-b416-4c41-9b1e-b7271f10bdc3 Last updated 06/14/2016
sql-server-stretch-database Sql Server Stretch Database Index All Articles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sql-server-stretch-database/sql-server-stretch-database-index-all-articles.md
Title: All topics for SQL Server Stretch Database service | Microsoft Docs description: Table of all topics for the Azure service named SQL Server Stretch Database that exist on https://azure.microsoft.com/documentation/articles/, Title and description. ms.assetid: b1718024-84d6-4f5c-a912-3a99edb3f632 Last updated 10/05/2016
sql-server-stretch-database Sql Server Stretch Database Tde Tsql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sql-server-stretch-database/sql-server-stretch-database-tde-tsql.md
Title: Enable Transparent Data Encryption for Stretch Database (T-SQL) description: Enable Transparent Data Encryption (TDE) for SQL Server Stretch Database on Azure TSQL ms.assetid: 27753d91-9ca2-4d47-b34d-b5e2c2f029bb Last updated 01/23/2017
storage Storage Failover Customer Managed Unplanned https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-failover-customer-managed-unplanned.md
Last updated 09/22/2023 - # How customer-managed storage account failover works
storage Elastic San Connect Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/elastic-san/elastic-san-connect-linux.md
description: Learn how to connect to an Azure Elastic SAN Preview volume from a
Previously updated : 09/12/2023 Last updated : 01/19/2024 -+ # Connect to Elastic SAN Preview volumes - Linux
-This article explains how to connect to an Elastic storage area network (SAN) volume from a Linux client. For details on connecting from a Windows client, see [Connect to Elastic SAN Preview volumes - Windows](elastic-san-connect-windows.md).
+This article explains how to connect to an Elastic storage area network (SAN) volume from an individual Linux client. For details on connecting from a Windows client, see [Connect to Elastic SAN Preview volumes - Windows](elastic-san-connect-windows.md).
In this article, you'll add the Storage service endpoint to an Azure virtual network's subnet, then you'll configure your volume group to allow connections from your subnet. Finally, you'll configure your client environment to connect to an Elastic SAN volume and establish a connection.
+You must use a cluster manager when connecting an individual elastic SAN volume to multiple clients. For details, see [Use clustered applications on Azure Elastic SAN Preview](elastic-san-shared-volumes.md).
+ ## Prerequisites - Use either the [latest Azure CLI](/cli/azure/install-azure-cli) or install the [latest Azure PowerShell module](/powershell/azure/install-azure-powershell)
storage Elastic San Connect Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/elastic-san/elastic-san-connect-windows.md
description: Learn how to connect to an Azure Elastic SAN Preview volume from a
Previously updated : 09/12/2023 Last updated : 01/19/2024 -+ # Connect to Elastic SAN Preview volumes - Windows
-This article explains how to connect to an Elastic storage area network (SAN) volume from a Windows client. For details on connecting from a Linux client, see [Connect to Elastic SAN Preview volumes - Linux](elastic-san-connect-linux.md).
+This article explains how to connect to an Elastic storage area network (SAN) volume from an individual Windows client. For details on connecting from a Linux client, see [Connect to Elastic SAN Preview volumes - Linux](elastic-san-connect-linux.md).
In this article, you add the Storage service endpoint to an Azure virtual network's subnet, then you configure your volume group to allow connections from your subnet. Finally, you configure your client environment to connect to an Elastic SAN volume and establish a connection. For best performance, ensure that your VM and your Elastic SAN are in the same zone.
+You must use a cluster manager when connecting an individual elastic SAN volume to multiple clients. For details, see [Use clustered applications on Azure Elastic SAN Preview](elastic-san-shared-volumes.md).
+ ## Prerequisites - Use either the [latest Azure CLI](/cli/azure/install-azure-cli) or install the [latest Azure PowerShell module](/powershell/azure/install-azure-powershell)
storsimple Storsimple Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storsimple/storsimple-overview.md
Title: StorSimple 8000 series solution overview | Microsoft Docs description: Describes StorSimple data copy resources, data migration, device decommission operations, end of support, tiering, virtual device, and storage management.- - ms.assetid: 7144d218-db21-4495-88fb-e3b24bbe45d1 -- Last updated 07/10/2023
stream-analytics Automation Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/automation-powershell.md
Title: Auto-pause an Azure Stream Analytics with PowerShell description: This article describes how to auto-pause an Azure Stream Analytics job on a schedule with PowerShell-
stream-analytics Cicd Autoscale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/cicd-autoscale.md
Title: Configure autoscale settings for a Stream Analytics job by using the CI/CD tool description: This article shows how to configure autoscale settings for a Stream Analytics job by using the CI/CD tool.-
stream-analytics Cicd Github Actions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/cicd-github-actions.md
Title: Integrating with GitHub Actions description: This article gives an instruction on how to create a workflow using GitHub Actions to deploy a Stream Analytics job. -
stream-analytics Cicd Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/cicd-overview.md
Title: Continuous integration and continuous deployment of Azure Stream Analytics jobs description: This article gives an overview of setting up a continuous integration and deployment (CI/CD) pipeline for Azure Stream Analytics jobs.-
stream-analytics Cicd Tools https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/cicd-tools.md
Title: Automate builds, tests, and deployments of an Azure Stream Analytics job using CI/CD tools description: This article describes how to use Azure Stream Analytics CI/CD tools to auto build, test, and deploy an Azure Stream Analytics project.-
stream-analytics Input Validation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/input-validation.md
Title: Input validation for better Azure Stream Analytics job resiliency description: "This article describes how to improve the resiliency of Azure Stream Analytics jobs with input validation"--
stream-analytics No Code Build Power Bi Dashboard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/no-code-build-power-bi-dashboard.md
description: Learn how to use the no code editor to easily create a Stream Analy
- Last updated 2/17/2023
stream-analytics Quick Create Azure Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/quick-create-azure-cli.md
Title: Quickstart - Create an Azure Stream Analytics job using the Azure CLI description: This quickstart shows how to use the Azure CLI to create an Azure Stream Analytics job.- - Last updated 02/28/2023
stream-analytics Quick Create Azure Resource Manager https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/quick-create-azure-resource-manager.md
Title: Quickstart - Create an Azure Stream Analytics job by Azure Resource Manager template description: This quickstart shows how to use the Azure Resource Manager template to create an Azure Stream Analytics job.- - Last updated 08/07/2023
stream-analytics Quick Create Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/quick-create-bicep.md
Title: Quickstart - Create an Azure Stream Analytics job using Bicep description: This quickstart shows how to use Bicep to create an Azure Stream Analytics job. - Last updated 05/17/2022
stream-analytics Quick Create Terraform https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/quick-create-terraform.md
Title: 'Quickstart: Create an Azure Stream Analytics job using Terraform' description: In this article, you create an Azure Stream Analytics job using Terraform. -
stream-analytics Resource Manager Export https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/resource-manager-export.md
Title: Export an Azure Stream Analytics job Azure Resource Manager template description: This article describes how to export an Azure Resource Manager template for your Azure Stream Analytics job.-
stream-analytics Set Up Cicd Pipeline https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/set-up-cicd-pipeline.md
Title: Use Azure DevOps to create a CI/CD pipeline for a Stream Analytics job description: This article describes how to set up a continuous integration and deployment (CI/CD) pipeline for an Azure Stream Analytics job in Azure DevOps-
stream-analytics Stream Analytics Stream Analytics Query Patterns https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/stream-analytics-stream-analytics-query-patterns.md
Title: Common query patterns in Azure Stream Analytics description: This article describes several common query patterns and designs that are useful in Azure Stream Analytics jobs.- Last updated 08/29/2022
stream-analytics Stream Analytics Twitter Sentiment Analysis Trends https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/stream-analytics-twitter-sentiment-analysis-trends.md
Title: Social media analysis with Azure Stream Analytics description: This article describes how to use Stream Analytics for social media analysis using the twitter client API. Step-by-step guidance from event generation to data on a live dashboard.-
synapse-analytics Sqlpool Create Restore Point https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/backuprestore/sqlpool-create-restore-point.md
Title: Create a user defined restore point for a dedicated SQL pool description: Learn how to use the Azure portal to create a user-defined restore point for dedicated SQL pool in Azure Synapse Analytics. -
synapse-analytics Concepts Data Factory Differences https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/data-integration/concepts-data-factory-differences.md
Title: Differences from Azure Data Factory description: Learn how the data integration capabilities of Azure Synapse Analytics differ from those of Azure Data Factory-
synapse-analytics Data Integration Data Lake https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/data-integration/data-integration-data-lake.md
Title: Ingest into Azure Data Lake Storage Gen2 description: Learn how to ingest data into Azure Data Lake Storage Gen2 in Azure Synapse Analytics-
synapse-analytics Data Integration Sql Pool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/data-integration/data-integration-sql-pool.md
Title: Ingest data into a dedicated SQL pool description: Learn how to ingest data into a dedicated SQL pool in Azure Synapse Analytics-
synapse-analytics Linked Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/data-integration/linked-service.md
Title: Secure a linked service description: Learn how to provision and secure a linked service with Managed VNet -
synapse-analytics Get Started Analyze Sql On Demand https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/get-started-analyze-sql-on-demand.md
- Last updated 02/15/2023
synapse-analytics How To Analyze Complex Schema https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/how-to-analyze-complex-schema.md
Title: Analyze schema with arrays and nested structures description: How to analyze arrays and nested structures with Apache Spark and SQL-
synapse-analytics How To Move Workspace From One Region To Another https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/how-to-move-workspace-from-one-region-to-another.md
Title: Move an Azure Synapse Analytics workspace from region to another description: This article teaches you how to move an Azure Synapse Analytics workspace from one region to another. -
synapse-analytics How To Recover Workspace After Tenant Move https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/how-to-recover-workspace-after-tenant-move.md
Title: Recovering Synapse Analytics workspace after transferring a subscription to a different Microsoft Entra directory description: This article provides steps to recover the Synapse Analytics workspace after moving a subscription to a different Microsoft Entra directory (tenant)-
-#
Last updated 04/11/2022
synapse-analytics Quickstart Gallery Sample Notebook https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/machine-learning/quickstart-gallery-sample-notebook.md
description: Learn how to use a sample notebook from the Synapse Analytics galle
- Last updated 06/11/2021
synapse-analytics Tutorial Build Applications Use Mmlspark https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/machine-learning/tutorial-build-applications-use-mmlspark.md
description: Learn how to use Synapse Machine Learning to create machine learnin
-- Last updated 03/08/2021
synapse-analytics Tutorial Computer Vision Use Mmlspark https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/machine-learning/tutorial-computer-vision-use-mmlspark.md
description: Learn how to use Azure AI Vision in Azure Synapse Analytics.
- Last updated 11/02/2021
synapse-analytics Tutorial Form Recognizer Use Mmlspark https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/machine-learning/tutorial-form-recognizer-use-mmlspark.md
description: Learn how to use Azure AI Document Intelligence in Azure Synapse An
- Last updated 11/02/2021
synapse-analytics Tutorial Text Analytics Use Mmlspark https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/machine-learning/tutorial-text-analytics-use-mmlspark.md
description: Learn how to use text analytics in Azure Synapse Analytics.
- Last updated 11/02/2021
synapse-analytics Tutorial Translator Use Mmlspark https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/machine-learning/tutorial-translator-use-mmlspark.md
description: Learn how to use translator in Azure Synapse Analytics.
- Last updated 11/02/2021
synapse-analytics What Is Machine Learning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/machine-learning/what-is-machine-learning.md
Title: Machine Learning in Azure Synapse Analytics description: An Overview of machine learning capabilities in Azure Synapse Analytics.-- --+ Last updated 08/31/2022
synapse-analytics Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/metadata/database.md
Title: Lake database in serverless SQL pools description: Azure Synapse Analytics provides a shared metadata model where creating a lake database in an Apache Spark pool will make it accessible from its serverless SQL pool engine. -
+
synapse-analytics Apache Spark Advisor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/monitoring/apache-spark-advisor.md
Title: Apache Spark Advisor in Azure Synapse Analytics description: Spark Advisor is a system to automatically analyze commands/queries, and show the appropriate advice when a customer executes code or query.-
+
synapse-analytics Apache Spark Applications https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/monitoring/apache-spark-applications.md
Title: Monitor Apache Spark applications using Synapse Studio description: Use Synapse Studio to monitor your Apache Spark applications.-
+
synapse-analytics How To Monitor Pipeline Runs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/monitoring/how-to-monitor-pipeline-runs.md
Title: Monitor pipeline runs using Synapse Studio description: Use the Synapse Studio to monitor your workspace pipeline runs.-
+
synapse-analytics How To Monitor Spark Applications https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/monitoring/how-to-monitor-spark-applications.md
Title: How to monitor Apache Spark applications in Synapse Studio description: Learn how to monitor your Apache Spark applications by using Synapse Studio.-
+
synapse-analytics How To Monitor Spark Pools https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/monitoring/how-to-monitor-spark-pools.md
Title: How to monitor Apache Spark pools in Synapse Studio description: Learn how to monitor your Apache Spark pools by using Synapse Studio.-
synapse-analytics How To Monitor Sql Pools https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/monitoring/how-to-monitor-sql-pools.md
Title: How to monitor SQL pools in Synapse Studio description: Learn how to monitor your SQL pools by using Synapse Studio.-
synapse-analytics How To Monitor Sql Requests https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/monitoring/how-to-monitor-sql-requests.md
Title: How to monitor SQL requests in Synapse Studio description: Learn how to monitor your SQL requests by using Synapse Studio.-
synapse-analytics How To Set Up Access Control https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/security/how-to-set-up-access-control.md
Title: Access control in Synapse workspace how to description: Learn how to control access to Azure Synapse workspaces using Azure roles, Synapse roles, SQL permissions, and Git permissions.-
synapse-analytics Synapse Workspace Access Control Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/security/synapse-workspace-access-control-overview.md
Title: Azure Synapse workspace access control overview description: This article describes the mechanisms used to control access to an Azure Synapse workspace and the resources and code artifacts it contains.-
synapse-analytics Apache Spark Autoscale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/apache-spark-autoscale.md
Title: Automatically scale Apache Spark instances
description: Use the Azure Synapse autoscale feature to automatically scale Apache Spark Instances -
synapse-analytics Apache Spark Azure Create Spark Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/apache-spark-azure-create-spark-configuration.md
Title: Manage Apache Spark configuration description: Learn how to create an Apache Spark configuration for your synapse studio.-
synapse-analytics Apache Spark Azure Log Analytics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/apache-spark-azure-log-analytics.md
Title: Monitor Apache Spark applications with Azure Log Analytics description: Learn how to enable the Synapse Studio connector for collecting and sending the Apache Spark application metrics and logs to your Log Analytics workspace.-
synapse-analytics Apache Spark Data Visualization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/apache-spark-data-visualization.md
description: Use Python and Azure Synapse notebooks to visualize your data
-
synapse-analytics Apache Spark Development Using Notebooks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/apache-spark-development-using-notebooks.md
Last updated 05/08/2021 -
synapse-analytics Apache Spark History Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/apache-spark-history-server.md
Title: Use the extended Spark history server to debug apps description: Use the extended Spark history server to debug and diagnose Spark applications in Azure Synapse Analytics.-
synapse-analytics Apache Spark Intelligent Cache Concept https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/apache-spark-intelligent-cache-concept.md
Title: Intelligent Cache for Apache Spark 3.x in Azure Synapse Analytics description: This article provides an overview of the Intelligent Cache feature in Azure Synapse Analytics.-
synapse-analytics Apache Spark Machine Learning Training https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/apache-spark-machine-learning-training.md
description: Use Apache Spark in Azure Synapse Analytics to train machine learni
-
synapse-analytics Apache Spark Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/apache-spark-overview.md
Title: Apache Spark in Azure Synapse Analytics overview description: This article provides an introduction to Apache Spark in Azure Synapse Analytics and the different scenarios in which you can use Spark.-
synapse-analytics Apache Spark Performance Hyperspace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/apache-spark-performance-hyperspace.md
Title: Hyperspace indexes for Apache Spark description: Performance optimization for Apache Spark using Hyperspace indexes-
synapse-analytics Apache Spark Pool Configurations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/apache-spark-pool-configurations.md
Title: Apache Spark pool concepts description: Introduction to Apache Spark pool sizes and configurations in Azure Synapse Analytics. -
synapse-analytics Apache Spark R Language https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/apache-spark-r-language.md
Title: Use R for Apache Spark
description: Learn about using R and Apache Spark to do data preparation and machine learning in Azure Synapse Analytics notebooks. -
synapse-analytics Apache Spark Secure Credentials With Tokenlibrary https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/apache-spark-secure-credentials-with-tokenlibrary.md
Title: Secure access credentials with Linked Services in Apache Spark for Azure Synapse Analytics description: This article provides concepts on how to securely integrate Apache Spark for Azure Synapse Analytics with other services using linked services and token library-
synapse-analytics Apache Spark What Is Delta Lake https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/apache-spark-what-is-delta-lake.md
Title: What is Delta Lake? description: Overview of Delta Lake and how it works as part of Azure Synapse Analytics-
synapse-analytics Azure Synapse Diagnostic Emitters Azure Eventhub https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/azure-synapse-diagnostic-emitters-azure-eventhub.md
Title: Collect your Apache Spark applications logs and metrics using Azure Event Hubs description: In this tutorial, you learn how to use the Synapse Apache Spark diagnostic emitter extension to emit Apache Spark applications' logs, event logs and metrics to your Azure Event Hubs.-
synapse-analytics Azure Synapse Diagnostic Emitters Azure Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/azure-synapse-diagnostic-emitters-azure-storage.md
Title: Collect your Apache Spark applications logs and metrics using Azure Storage account description: This article shows how to use the Synapse Spark diagnostic emitter extension to collect logs, event logs, and metrics.--+ -+
synapse-analytics Connect Monitor Azure Synapse Spark Application Level Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/connect-monitor-azure-synapse-spark-application-level-metrics.md
Title: Collect Apache Spark applications metrics using APIs description: Tutorial - Learn how to integrate your existing on-premises Prometheus server with Azure Synapse workspace for near real-time Azure Spark application metrics using the Synapse Prometheus connector.-
synapse-analytics Apache Spark Cdm Connector https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/data-sources/apache-spark-cdm-connector.md
Title: Spark Common Data Model connector for Azure Synapse Analytics description: Learn how to use the Spark CDM connector in Azure Synapse Analytics to read and write Common Data Model entities in a Common Data Model folder on Azure Data Lake Storage.-
synapse-analytics Apache Spark Kusto Connector https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/data-sources/apache-spark-kusto-connector.md
Title: Azure Data Explorer (Kusto) description: This article provides information on how to use the connector for moving data between Azure Data Explorer (Kusto) and serverless Apache Spark pools.-
synapse-analytics Apache Spark Sql Connector https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/data-sources/apache-spark-sql-connector.md
Title: Azure SQL and SQL Server description: This article provides information on how to use the connector for moving data between Azure MS SQL and serverless Apache Spark pools.-
synapse-analytics Intellij Tool Synapse https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/intellij-tool-synapse.md
Title: Tutorial - Azure Toolkit for IntelliJ (Spark application) description: Tutorial - Use the Azure Toolkit for IntelliJ to develop Spark applications, which are written in Scala, and submit them to a serverless Apache Spark pool.-
synapse-analytics Microsoft Spark Utilities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/microsoft-spark-utilities.md
Title: Introduction to Microsoft Spark utilities description: "Tutorial: MSSparkutils in Azure Synapse Analytics notebooks" - Last updated 09/10/2020 - zone_pivot_groups: programming-languages-spark-all-minus-sql
synapse-analytics Spark Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/spark-dotnet.md
Title: Use .NET for Apache Spark
description: Learn about using .NET and Apache Spark to do batch processing, real-time streaming, machine learning, and write ad-hoc queries in Azure Synapse Analytics notebooks. -
synapse-analytics Synapse File Mount Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/synapse-file-mount-api.md
Title: Introduction to file APIs in Azure Synapse Analytics description: This tutorial describes how to use the file mount and file unmount APIs in Azure Synapse Analytics, for both Azure Data Lake Storage Gen2 and Azure Blob Storage. -
synapse-analytics Use Prometheus Grafana To Monitor Apache Spark Application Level Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/use-prometheus-grafana-to-monitor-apache-spark-application-level-metrics.md
Title: Tutorial - Monitor Apache Spark Applications metrics with Prometheus and Grafana description: Tutorial - Learn how to deploy the Apache Spark application metrics solution to an Azure Kubernetes Service (AKS) cluster and learn how to integrate the Grafana dashboards.-
synapse-analytics Quickstart Bulk Load Copy Tsql Examples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/quickstart-bulk-load-copy-tsql-examples.md
Title: Authentication mechanisms with the COPY statement description: Outlines the authentication mechanisms to bulk load data with the COPY statement in Synapse SQL.-
synapse-analytics Active Directory Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql/active-directory-authentication.md
Title: Microsoft Entra ID description: Learn about how to use Microsoft Entra ID for authentication with Azure SQL Database, Azure SQL Managed Instance, and Azure Synapse SQL.-
synapse-analytics Author Sql Script https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql/author-sql-script.md
Title: SQL scripts in Synapse Studio description: Introduction to Synapse Studio SQL scripts in Azure synapse Analytics. -
synapse-analytics Develop Storage Files Spark Tables https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql/develop-storage-files-spark-tables.md
Title: Synchronize Apache Spark for external table definitions in serverless SQL pool description: Overview of how to query Spark tables using serverless SQL pool-
synapse-analytics Develop User Defined Schemas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql/develop-user-defined-schemas.md
Title: User-defined schemas within Synapse SQL description: In the sections below, you'll find various tips for using T-SQL user-defined schemas to develop solutions with the Synapse SQL capability of Azure Synapse Analytics.-
synapse-analytics Sql Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql/sql-authentication.md
Title: SQL Authentication in Azure Synapse Analytics description: Learn about SQL authentication in Azure Synapse Analytics. Azure Synapse Analytics has two SQL form-factors to control your resource consumption. -
synapse-analytics Concept Synapse Link Cosmos Db Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/synapse-link/concept-synapse-link-cosmos-db-support.md
Title: Azure Synapse Link for Azure Cosmos DB supported features description: Understand the current list of actions supported by Azure Synapse Link for Azure Cosmos DB-
synapse-analytics Sql Database Synapse Link https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/synapse-link/sql-database-synapse-link.md
Title: Azure Synapse Link for Azure SQL Database description: Learn about Azure Synapse Link for Azure SQL Database, the link connection, and monitoring the Synapse Link. -
synapse-analytics Sql Server 2022 Synapse Link https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/synapse-link/sql-server-2022-synapse-link.md
Title: Azure Synapse Link for SQL Server 2022 description: Learn about Azure Synapse Link for SQL Server 2022, the link connection, landing zone, Self-hosted integration runtime, and monitoring the Azure Synapse Link for SQL.-
synapse-analytics Sql Synapse Link Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/synapse-link/sql-synapse-link-overview.md
Title: What is Azure Synapse Link for SQL? description: Learn about Azure Synapse Link for SQL, the benefits it offers, and price.-
synapse-analytics Synapse Notebook Activity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/synapse-notebook-activity.md
Last updated 05/19/2021 -
time-series-insights Breaking Changes Long Data Type https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/time-series-insights/breaking-changes-long-data-type.md
Title: 'Support for long data type in Azure Time Series Insights Gen2 | Microsoft Docs' description: Support for long data type in Azure Time Series Insights Gen2. - - Last updated 12/07/2020
time-series-insights Concepts Access Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/time-series-insights/concepts-access-policies.md
Title: 'Configure security to grant data access - Azure Time Series Insights | Microsoft Docs' description: Learn how to configure security, permissions, and manage data access policies in your Azure Time Series Insights environment. - - Last updated 12/02/2020
time-series-insights Concepts Ingestion Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/time-series-insights/concepts-ingestion-overview.md
- - Last updated 12/02/2020
time-series-insights Concepts Json Flattening Escaping Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/time-series-insights/concepts-json-flattening-escaping-rules.md
- - Last updated 01/21/2021
time-series-insights Concepts Model Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/time-series-insights/concepts-model-overview.md
- - Last updated 01/22/2021
time-series-insights Concepts Power Bi https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/time-series-insights/concepts-power-bi.md
- Last updated 09/28/2020
time-series-insights Concepts Private Links https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/time-series-insights/concepts-private-links.md
description: Private Links and Public Access Restriction overview in Azure Time
- - Last updated 09/01/2021
time-series-insights Concepts Query Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/time-series-insights/concepts-query-overview.md
- - Last updated 01/22/2021
time-series-insights Concepts Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/time-series-insights/concepts-storage.md
description: Learn about data storage in Azure Time Series Insights Gen2.
- - Last updated 01/21/2021
time-series-insights Concepts Streaming Ingestion Event Sources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/time-series-insights/concepts-streaming-ingestion-event-sources.md
- - Last updated 03/18/2021
time-series-insights Concepts Streaming Ingress Throughput Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/time-series-insights/concepts-streaming-ingress-throughput-limits.md
- - Last updated 01/21/2021
time-series-insights Concepts Supported Data Types https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/time-series-insights/concepts-supported-data-types.md
- - Last updated 01/19/2021
time-series-insights Concepts Ux Panels https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/time-series-insights/concepts-ux-panels.md
- - Last updated 01/22/2021
time-series-insights How To Api Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/time-series-insights/how-to-api-migration.md
Title: 'Migrating to new Azure Time Series Insights Gen2 API versions | Microsoft Docs' description: How to update Azure Time Series Insights Gen2 environments to use new generally available versions. - - Last updated 10/01/2020
time-series-insights How To Connect Power Bi https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/time-series-insights/how-to-connect-power-bi.md
- Last updated 12/14/2020
time-series-insights How To Create Environment Using Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/time-series-insights/how-to-create-environment-using-cli.md
- - Last updated 03/15/2021
time-series-insights How To Create Environment Using Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/time-series-insights/how-to-create-environment-using-portal.md
description: 'Learn how to set up an environment in Azure Time Series Insights G
- - Last updated 03/15/2021
time-series-insights How To Diagnose Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/time-series-insights/how-to-diagnose-troubleshoot.md
- - Last updated 10/01/2020
time-series-insights How To Edit Your Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/time-series-insights/how-to-edit-your-model.md
- - Last updated 10/02/2020
time-series-insights How To Ingest Data Event Hub https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/time-series-insights/how-to-ingest-data-event-hub.md
Title: 'Add an Event Hubs event source - Azure Time Series Insights | Microsoft Docs' description: Learn how to add an Azure Event Hubs event source to your Azure Time Series Insights environment. - - Last updated 01/21/2021
time-series-insights How To Ingest Data Iot Hub https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/time-series-insights/how-to-ingest-data-iot-hub.md
Title: 'How to add an IoT hub event source - Azure Time Series Insights | Microsoft Docs' description: Learn how to add an IoT hub event source to your Azure Time Series Insight environment. - - Last updated 01/21/2021
time-series-insights How To Monitor Tsi Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/time-series-insights/how-to-monitor-tsi-reference.md
- - Last updated 12/10/2020
time-series-insights How To Monitor Tsi https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/time-series-insights/how-to-monitor-tsi.md
- - Last updated 12/10/2020
time-series-insights How To Plan Your Environment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/time-series-insights/how-to-plan-your-environment.md
- - Last updated 09/30/2020
time-series-insights How To Private Links https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/time-series-insights/how-to-private-links.md
description: See how to enable private access for Azure Time Series Insights Gen
- - Last updated 09/01/2021
time-series-insights How To Provision Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/time-series-insights/how-to-provision-manage.md
description: Learn how to manage an Azure Time Series Insights Gen 2 environment
- - Last updated 03/15/2020
time-series-insights How To Select Tsid https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/time-series-insights/how-to-select-tsid.md
description: Learn about best practices when choosing a Time Series ID in Azure
- - Last updated 03/23/2021
time-series-insights How To Tsi Gen1 Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/time-series-insights/how-to-tsi-gen1-migration.md
Title: 'Time Series Insights Gen1 migration to Azure Data Explorer | Microsoft Docs' description: How to migrate Azure Time Series Insights Gen 1 environments to Azure Data Explorer. - -- Last updated 3/15/2022
time-series-insights How To Tsi Gen2 Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/time-series-insights/how-to-tsi-gen2-migration.md
description: How to migrate Azure Time Series Insights Gen 2 environments to Azu
- Last updated 3/15/2022
time-series-insights Ingestion Rules Update https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/time-series-insights/ingestion-rules-update.md
Title: 'Upcoming changes to the ingestion and flattening rules in Azure Time Series Insights Gen2 | Microsoft Docs' description: Ingestion rule changes - - Last updated 10/02/2020
time-series-insights Migration To Adx https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/time-series-insights/migration-to-adx.md
Title: 'Migrating to Azure Data Explorer | Microsoft Docs' description: How to migrate Azure Time Series Insights environments to Azure Data Explorer. - -- Last updated 3/15/2022
time-series-insights Overview Use Cases https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/time-series-insights/overview-use-cases.md
- - Last updated 12/16/2020
time-series-insights Overview What Is Tsi https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/time-series-insights/overview-what-is-tsi.md
Title: 'Overview: What is Azure Time Series Insights Gen2? - Azure Time Series Insights Gen2' description: Learn about changes, improvements, and features in Azure Time Series Insights Gen2. - - Last updated 12/16/2020
time-series-insights Quickstart Explore Tsi https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/time-series-insights/quickstart-explore-tsi.md
Title: 'Quickstart: Explore the Gen2 demo environment - Azure Time Series Insights Gen2 | Microsoft Docs' description: Explore key features of the Azure Time Series Insights Gen2 demo environment. - - Last updated 03/01/2021
time-series-insights Time Series Insights Add Reference Data Set https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/time-series-insights/time-series-insights-add-reference-data-set.md
Title: 'How to add reference data sets to your environment - Azure Time Series Insights | Microsoft Docs' description: This article describes how to add a reference data set to augment data in your Azure Time Series Insights environment. - - Last updated 09/30/2020
time-series-insights Time Series Insights Authentication And Authorization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/time-series-insights/time-series-insights-authentication-and-authorization.md
ms.devlang: csharp- Last updated 02/23/2021
time-series-insights Time Series Insights Concepts Retention https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/time-series-insights/time-series-insights-concepts-retention.md
Title: 'Understand data retention in your environment - Azure Time Series Insights | Microsoft Docs' description: This article describes two settings that control data retention in your Azure Time Series Insights environment. - - Last updated 09/29/2020
time-series-insights Time Series Insights Customer Data Requests https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/time-series-insights/time-series-insights-customer-data-requests.md
- Last updated 10/02/2020
time-series-insights Time Series Insights Diagnose And Solve Problems https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/time-series-insights/time-series-insights-diagnose-and-solve-problems.md
Title: 'Diagnose, troubleshoot, and solve issues - Azure Time Series Insights' description: This article describes how to diagnose, troubleshoot, and solve common issues in your Azure Time Series Insights environment. - - Last updated 09/29/2020
time-series-insights Time Series Insights Environment Mitigate Latency https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/time-series-insights/time-series-insights-environment-mitigate-latency.md
Title: 'How to monitor and reduce throttling - Azure Time Series Insights | Microsoft Docs' description: Learn how to monitor, diagnose, and mitigate performance issues that cause latency and throttling in Azure Time Series Insights. - ms.devlang: csharp- Last updated 09/29/2020
time-series-insights Time Series Insights Environment Planning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/time-series-insights/time-series-insights-environment-planning.md
Title: 'Plan your Gen1 environment - Azure Time Series Insights | Microsoft Docs' description: Best practices for preparing, configuring, and deploying your Azure Time Series Insights Gen1 environment.- ms.devlang: csharp- Last updated 09/29/2020
time-series-insights Time Series Insights Explorer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/time-series-insights/time-series-insights-explorer.md
Title: 'Explore data using the Explorer - Azure Time Series Insights | Microsoft Docs' description: Learn how to use the Azure Time Series Insights Explorer to view your IoT data. - ms.devlang: csharp- Last updated 09/29/2020
time-series-insights Time Series Insights Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/time-series-insights/time-series-insights-get-started.md
Title: 'Create an environment - Azure Time Series Insights | Microsoft Docs' description: Learn how to use the Azure portal to create a new Azure Time Series Insights environment. - - Last updated 09/29/2020
time-series-insights Time Series Insights How To Configure Retention https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/time-series-insights/time-series-insights-how-to-configure-retention.md
Title: 'How to configure retention in your environment - Azure Time Series Insights | Microsoft Docs' description: Learn how to configure retention in your Azure Time Series Insights environment. - - Last updated 09/29/2020
time-series-insights Time Series Insights How To Scale Your Environment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/time-series-insights/time-series-insights-how-to-scale-your-environment.md
Title: 'How to scale your environment - Azure Time Series Insights | Microsoft Docs' description: Learn how to scale your Azure Time Series Insights environment using the Azure portal. - ms.devlang: csharp- Last updated 09/29/2020
time-series-insights Time Series Insights Manage Reference Data Csharp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/time-series-insights/time-series-insights-manage-reference-data-csharp.md
Title: 'Manage reference data in GA environments using C# - Azure Time Series Insights' description: Learn how to manage reference data for your GA environment by creating a custom application written in C#. - ms.devlang: csharp- Last updated 09/29/2020
time-series-insights Time Series Insights Manage Resources Using Azure Resource Manager Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/time-series-insights/time-series-insights-manage-resources-using-azure-resource-manager-template.md
Title: 'Manage your environment using Azure Resource Manager templates - Azure Time Series Insights | Microsoft Docs' description: Learn how to manage your Azure Time Series Insights environment programmatically using Azure Resource Manager. - ms.devlang: csharp- Last updated 09/30/2020
time-series-insights Time Series Insights Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/time-series-insights/time-series-insights-overview.md
Title: 'Overview: What is Azure Time Series Insights? - Azure Time Series Insights | Microsoft Docs' description: Introduction to Azure Time Series Insights, a new service for time series data analytics and IoT solutions. - - Last updated 09/30/2020
time-series-insights Time Series Insights Parameterized Urls https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/time-series-insights/time-series-insights-parameterized-urls.md
Title: 'Share custom views with parameterized URLs - Azure Time Series Insights | Microsoft Docs' description: Learn how to create parameterized URLs to easily share customized Explorer views in Azure Time Series Insights. - - Last updated 10/02/2020
time-series-insights Time Series Insights Query Data Csharp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/time-series-insights/time-series-insights-query-data-csharp.md
Title: 'Query data from a Gen1 environment using C# code - Azure Time Series Insights Gen1 | Microsoft Docs' description: Learn how to query data from an Azure Time Series Insights Gen1 environment using a custom app written in C#. - ms.devlang: csharp- Last updated 09/30/2020
time-series-insights Time Series Insights Send Events https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/time-series-insights/time-series-insights-send-events.md
Title: 'Send events to an environment - Azure Time Series Insights | Microsoft Docs' description: Learn how to configure an event hub, run a sample application, and send events to your Azure Time Series Insights environment. - ms.devlang: csharp- Last updated 09/30/2020
time-series-insights Time Series Insights Update Query Data Csharp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/time-series-insights/time-series-insights-update-query-data-csharp.md
Title: 'Query data from a Gen2 environment using C# - Azure Time Series Insights | Microsoft Docs' description: Learn how to query data from an Azure Time Series Insights Gen2 environment by using an app written in C#. - ms.devlang: csharp- Last updated 10/02/2020
time-series-insights Time Series Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/time-series-insights/time-series-quickstart.md
Title: 'Quickstart: Azure Time Series Insights Explorer - Azure Time Series Insights | Microsoft Docs' description: Learn how to get started with the Azure Time Series Insights Explorer. Visualize large volumes of IoT data and tour key features of your environment. - - Last updated 09/30/2020 #Customer intent: As a data analyst or developer, I want to quickly learn about the Azure Time Series Insights visualization Explorer.
time-series-insights Tutorial Create Populate Tsi Environment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/time-series-insights/tutorial-create-populate-tsi-environment.md
Title: 'Tutorial: Create an environment - Azure Time Series Insights | Microsoft Docs' description: Learn how to create an Azure Time Series Insights environment that's populated with data from simulated devices.-
time-series-insights Tutorial Set Up Environment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/time-series-insights/tutorial-set-up-environment.md
Title: 'Tutorial: Set up a Gen2 environment - Azure Time Series Insights Gen2| M
description: 'Tutorial: Learn how to set up an environment in Azure Time Series Insights Gen2.' - Last updated 04/23/2021
time-series-insights Tutorials Model Sync https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/time-series-insights/tutorials-model-sync.md
Title: 'Model synchronization between Azure Digital Twins and Time Series Insights | Microsoft Docs' description: Best practices and tools used to translate asset model in ADT to asset model in Azure TSI - - Last updated 01/19/2021
traffic-manager Cli Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/traffic-manager/cli-samples.md
Title: Azure CLI Samples for Traffic Manager
description: Learn about an Azure CLI script you can use to direct traffic across multiple regions for high application availability. -
traffic-manager Quickstart Create Traffic Manager Profile https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/traffic-manager/quickstart-create-traffic-manager-profile.md
description: This quickstart article describes how to create a Traffic Manager p
-+ Last updated 02/18/2023
traffic-manager Traffic Manager Cli Websites High Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/traffic-manager/scripts/traffic-manager-cli-websites-high-availability.md
tags: azure-infrastructure ms.devlang: azurecli Last updated 04/27/2023
traffic-manager Traffic Manager Powershell Websites High Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/traffic-manager/scripts/traffic-manager-powershell-websites-high-availability.md
documentationcenter: traffic-manager tags: azure-infrastructure- ms.devlang: powershell Last updated 04/27/2023
traffic-manager Traffic Manager Configure Performance Routing Method https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/traffic-manager/traffic-manager-configure-performance-routing-method.md
Title: Configure performance traffic routing method using Azure Traffic Manager | Microsoft Docs description: This article explains how to configure Traffic Manager to route traffic to the endpoint with lowest latency -+
virtual-machine-scale-sets Azure Hybrid Benefit Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/azure-hybrid-benefit-linux.md
Title: Azure Hybrid Benefit for Linux Virtual Machine Scale Sets description: Learn how Azure Hybrid Benefit can apply to Virtual Machine Scale Sets and save you money on Linux virtual machines in Azure.
virtual-machines Co Location https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/co-location.md
Last updated 09/12/2022- # Proximity placement groups
virtual-machines Create Portal Availability Zone https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/create-portal-availability-zone.md
Last updated 03/17/2022 - # Create virtual machines in an availability zone using the Azure portal
virtual-machines Dsc Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/extensions/dsc-overview.md
Title: Desired State Configuration for Azure overview
description: Learn how to use the Microsoft Azure extension handler for PowerShell Desired State Configuration (DSC), including prerequisites, architecture, and cmdlets. - tags: azure-resource-manager keywords: 'dsc' ms.assetid: bbacbc93-1e7b-4611-a3ec-e3320641f9ba
vm-windows- Last updated 03/28/2023
virtual-machines Dsc Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/extensions/dsc-template.md
vm-windows- Last updated 03/13/2022
virtual-machines Hpc Compute Infiniband Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/extensions/hpc-compute-infiniband-linux.md
Title: InfiniBand driver extension - Azure Linux VMs description: Microsoft Azure Extension for installing InfiniBand Drivers on HB- and N-series compute VMs running Linux.
virtual-machines Hpc Compute Infiniband Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/extensions/hpc-compute-infiniband-windows.md
Title: InfiniBand driver extension - Azure Windows VMs description: Microsoft Azure Extension for installing InfiniBand Drivers on H- and N-series compute VMs running Windows.
virtual-machines Iaas Antimalware Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/extensions/iaas-antimalware-windows.md
Title: Microsoft Antimalware Extension for Windows VMs on Azure description: Deploy the Azure Antimalware extension on Windows virtual machine using an extension.
virtual-machines Vmsnapshot Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/extensions/vmsnapshot-linux.md
Title: VM Snapshot Linux extension for Azure Backup description: Take application consistent backup of the virtual machine from Azure Backup using VM snapshot Linux extension.
virtual-machines Vmsnapshot Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/extensions/vmsnapshot-windows.md
Title: VM Snapshot Windows extension for Azure Backup description: Take application consistent backup of the virtual machine from Azure Backup using VM snapshot extension
virtual-machines Hibernate Resume Troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/hibernate-resume-troubleshooting.md
Last updated 10/31/2023 - # Troubleshooting VM hibernation
virtual-machines Image Version https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/image-version.md
Last updated 09/20/2023 -
virtual-machines Create Managed Disk From Snapshot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/scripts/create-managed-disk-from-snapshot.md
documentationcenter: storage - tags: azure-service-management- ms.devlang: azurecli
virtual-machines Create Vm From Managed Os Disks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/scripts/create-vm-from-managed-os-disks.md
editor: ramankum tags: azure-service-management- ms.devlang: azurecli
virtual-machines Create Vm From Snapshot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/scripts/create-vm-from-snapshot.md
editor: ramankum tags: azure-service-management- ms.devlang: azurecli
virtual-machines Virtual Machines Powershell Sample Copy Managed Disks Vhd https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/scripts/virtual-machines-powershell-sample-copy-managed-disks-vhd.md
documentationcenter: storage - tags: azure-service-management - - vm-windows
virtual-machines Virtual Machines Powershell Sample Copy Snapshot To Same Or Different Subscription https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/scripts/virtual-machines-powershell-sample-copy-snapshot-to-same-or-different-subscription.md
documentationcenter: storage - tags: azure-service-management - - vm-windows
virtual-machines Sizes Memory https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/sizes-memory.md
Title: Azure VM sizes - Memory | Microsoft Docs description: Lists the different memory optimized sizes available for virtual machines in Azure. Lists information about the number of vCPUs, data disks, and NICs as well as storage throughput and network bandwidth for sizes in this series. tags: azure-resource-manager,azure-service-management keywords: VM isolation,isolated VM,isolation,isolated
virtual-machines Using Managed Disks Template Deployments https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/using-managed-disks-template-deployments.md
Title: Deploying disks with Azure Resource Manager templates description: Details how to use managed and unmanaged disks in Azure Resource Manager templates for Azure VMs.-
virtual-machines Vm Usage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/vm-usage.md
Title: Understanding Azure virtual machine usage description: Understand virtual machine usage details
virtual-machines Incremental Snapshots https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/incremental-snapshots.md
Title: Use incremental snapshots for backup and recovery of unmanaged Azure Windows VM disks description: Create a custom solution for backup and recovery of your Azure Windows virtual machine disks using incremental snapshots. -+
virtual-machines On Prem To Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/on-prem-to-azure.md
Title: Migrate from AWS and other platforms to managed disks in Azure description: Create VMs in Azure using VHDs uploaded from other clouds like AWS or other virtualization platforms and use Azure managed disks. -+ vm-windows
virtual-machines Proximity Placement Groups https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/proximity-placement-groups.md
Last updated 3/12/2023 -
virtual-machines Demo https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/mainframe-rehosting/ibm/demo.md
description: Run an IBM Z Development and Test Environment (zD&T) environment on
Last updated 02/22/2019
virtual-machines Deploy Ibm Db2 Purescale Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/mainframe-rehosting/ibm/deploy-ibm-db2-purescale-azure.md
Title: Deploy IBM DB2 pureScale on Azure
description: Learn how to deploy an example architecture used recently to migrate an enterprise from its IBM DB2 environment running on z/OS to IBM DB2 pureScale on Azure. -+
virtual-machines Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/mainframe-rehosting/ibm/get-started.md
description: Use a mainframe emulator and other services from Microsoft partners
Last updated 02/22/2019
-tags:
-keywords:
# IBM workloads on Azure
virtual-machines Ibm Db2 Purescale Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/mainframe-rehosting/ibm/ibm-db2-purescale-azure.md
Title: IBM DB2 pureScale on Azure
description: In this article, we show an architecture for running an IBM DB2 pureScale environment on Azure.
virtual-machines Install Ibm Z Environment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/mainframe-rehosting/ibm/install-ibm-z-environment.md
description: Deploy IBM Z Development and Test Environment (zD&T) on Azure Virtu
Last updated 04/02/2019
-tags:
-keywords:
# Install IBM zD&T dev/test environment on Azure
virtual-machines Mainframe White Papers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/mainframe-rehosting/mainframe-white-papers.md
description: Get resources about mainframe migration, rehosting, and moving IBM
Last updated 04/02/2019
virtual-machines Demo https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/mainframe-rehosting/microfocus/demo.md
Title: Set up Micro Focus CICS BankDemo for Micro Focus Enterprise Developer 4.0
description: Run the Micro Focus BankDemo application on Azure Virtual Machines (VMs) to learn to use Micro Focus Enterprise Server and Enterprise Developer. Last updated 03/30/2020
virtual-machines Deploy Micro Focus Enterprise Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/mainframe-rehosting/microfocus/deploy-micro-focus-enterprise-server.md
Title: Deploy Micro Focus Enterprise Server 5.0 to AKS | Microsoft Docs description: Rehost your IBM z/OS mainframe workloads using the Micro Focus development and test environment on Azure virtual machines (VMs). Last updated 06/29/2020
-tags:
-keywords:
virtual-machines Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/mainframe-rehosting/microfocus/get-started.md
Title: Micro Focus dev/test environments on Azure | Microsoft Docs
description: Rehost your IBM z/OS mainframe workloads using Micro Focus solutions on Azure virtual machines (VMs). Last updated 03/30/2020
virtual-machines Run Enterprise Server Container https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/mainframe-rehosting/microfocus/run-enterprise-server-container.md
Title: Run Micro Focus Enterprise Server 5.0 in a Docker container on Azure | Microsoft Docs description: In this article, learn how to run Micro Focus Enterprise Server 5.0 in a Docker container on Microsoft Azure. Last updated 06/29/2020
-tags:
-keywords:
virtual-machines Set Up Micro Focus Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/mainframe-rehosting/microfocus/set-up-micro-focus-azure.md
Title: Install Micro Focus Enterprise Server 5.0 and Enterprise Developer 5.0 on Azure | Microsoft Docs description: In this article, learn how to install Micro Focus Enterprise Server 5.0 and Enterprise Developer 5.0 on Microsoft Azure. Last updated 06/29/2020
-tags:
-keywords:
virtual-machines Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/mainframe-rehosting/overview.md
Title: Mainframe rehosting on Azure virtual machines
description: Rehost your mainframe workloads such as IBM Z-based systems using virtual machines (VMs) on Microsoft Azure. editor: edprice
virtual-machines Partner Workloads https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/mainframe-rehosting/partner-workloads.md
description: Use a mainframe emulator and other services from Microsoft partners
editor: edprice
virtual-machines Install Openframe Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/mainframe-rehosting/tmaxsoft/install-openframe-azure.md
Title: Install TmaxSoft OpenFrame on Azure Virtual Machines description: Learn how to set up an OpenFrame environment on Azure suitable for development, demos, testing, or production workloads. Last updated 04/19/2023
virtual-machines Oracle Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/oracle/oracle-overview.md
Title: Overview of Oracle Applications and solutions on Azure | Microsoft Docs description: Learn about deploying Oracle Applications and solutions on Azure. Run entirely on Azure infrastructure or use cross-cloud connectivity with OCI. tags: azure-resource-management
virtual-network Diagnose Network Routing Problem https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/diagnose-network-routing-problem.md
Title: Diagnose an Azure virtual machine routing problem
description: Learn how to diagnose a virtual machine routing problem by viewing the effective routes for a virtual machine. -+ tags: azure-resource-manager
virtual-network Diagnose Network Traffic Filter Problem https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/diagnose-network-traffic-filter-problem.md
Title: Diagnose a virtual machine network traffic filter problem
description: Learn how to diagnose a virtual machine network traffic filter problem by viewing the effective security rules for a virtual machine. -+ tags: azure-resource-manager ms.assetid: a54feccf-0123-4e49-a743-eb8d0bdd1ebc
virtual-network Quick Create Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/quick-create-portal.md
Last updated 06/06/2023 - #Customer intent: I want to use the Azure portal to create a virtual network so that virtual machines can communicate privately with each other and with the internet.
virtual-network Quick Create Terraform https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/quick-create-terraform.md
+
+ Title: 'Quickstart: Create virtual network and subnets using Terraform'
+
+description: In this quickstart, you create an Azure Virtual Network and Subnets using Terraform. You use Azure CLI to verify the resources.
+ Last updated : 1/19/2024++++
+content_well_notification:
+ - AI-contribution
+# Customer intent: As a Network Administrator, I want to create a virtual network and subnets using Terraform.
++
+# Quickstart: Create an Azure Virtual Network and subnets using Terraform
+
+In this quickstart, you learn about a Terraform script that creates an Azure resource group and a virtual network with two subnets. The names of the resource group and the virtual network are generated using a random pet name with a prefix. The script also outputs the names of the created resources.
+
+The script uses the Azure Resource Manager (azurerm) and Random (random) providers. The azurerm provider is used to interact with Azure resources, while the random provider is used to generate random pet names for the resources.
+
+The script creates the following resources:
+
+- A resource group: A container that holds related resources for an Azure solution.
+
+- A virtual network: A fundamental building block for your private network in Azure.
+
+- Two subnets: Segments of a virtual network's IP address range where you can place groups of isolated resources.
++
+## Prerequisites
+
+- An Azure account with an active subscription. You can [create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+
+- [Install and configure Terraform](/azure/developer/terraform/quickstart-configure)
+
+## Implement the Terraform code
+
+> [!NOTE]
+> The sample code for this article is located in the [Azure Terraform GitHub repo](https://github.com/Azure/terraform/tree/master/quickstart/101-virtual-network-create-two-subnets). You can view the log file containing the [test results from current and previous versions of Terraform](https://github.com/Azure/terraform/tree/master/quickstart/101-virtual-network-create-two-subnets/TestRecord.md).
+>
+> See more [articles and sample code showing how to use Terraform to manage Azure resources](/azure/terraform)
+
+1. Create a directory in which to test and run the sample Terraform code and make it the current directory.
+
+1. Create a file named `main.tf` and insert the following code:
+
+ :::code language="Terraform" source="~/terraform_samples/quickstart/101-virtual-network-create-two-subnets/main.tf":::
+
+1. Create a file named `outputs.tf` and insert the following code:
+
+ :::code language="Terraform" source="~/terraform_samples/quickstart/101-virtual-network-create-two-subnets/outputs.tf":::
+
+1. Create a file named `providers.tf` and insert the following code:
+
+ :::code language="Terraform" source="~/terraform_samples/quickstart/101-virtual-network-create-two-subnets/providers.tf":::
+
+1. Create a file named `variables.tf` and insert the following code:
+
+ :::code language="Terraform" source="~/terraform_samples/quickstart/101-virtual-network-create-two-subnets/variables.tf":::
++
+## Initialize Terraform
++
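+
+The published article fills this section from a shared include file. As a minimal sketch of the usual step, run `terraform init` from the directory that contains the `.tf` files to download the azurerm and random providers referenced above:
+
+```console
+# Initialize the working directory and download the required providers.
+# The -upgrade flag is optional; it upgrades providers to the latest versions allowed by the configuration.
+terraform init -upgrade
+```
+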
+## Create a Terraform execution plan
++
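+
+This section, too, comes from an include file in the published article. A typical plan step, with `main.tfplan` as an assumed file name for the saved plan, looks like this:
+
+```console
+# Create an execution plan and save it to a file so that apply runs exactly what was reviewed.
+terraform plan -out main.tfplan
+```
+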
+## Apply a Terraform execution plan
++
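+
+As with the previous sections, the published article uses a shared include here. Applying the saved plan (carrying over the assumed `main.tfplan` file name) is typically:
+
+```console
+# Apply the saved execution plan; a saved plan is applied without an additional confirmation prompt.
+terraform apply main.tfplan
+```
+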
+## Verify the results
+
+#### [Azure CLI](#tab/azure-cli)
+
+1. Get the Azure resource group name.
+
+ ```console
+ resource_group_name=$(terraform output -raw resource_group_name)
+ ```
+
+1. Get the virtual network name.
+
+ ```console
+ virtual_network_name=$(terraform output -raw virtual_network_name)
+ ```
+
+1. Use [`az network vnet show`](/cli/azure/network/vnet#az-network-vnet-show) to display the details of your newly created virtual network.
+
+ ```azurecli
+ az network vnet show \
+ --resource-group $resource_group_name \
+ --name $virtual_network_name
+ ```
+++
+## Clean up resources
++
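+
+This section also comes from a shared include. One common cleanup pattern, with the `main.destroy.tfplan` file name used purely for illustration, is to plan and apply a destroy operation; running `terraform destroy` directly and confirming the interactive prompt works as well:
+
+```console
+# Plan the destroy operation, save it, and then apply it to delete the resources created earlier.
+terraform plan -destroy -out main.destroy.tfplan
+terraform apply main.destroy.tfplan
+```
+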
+## Troubleshoot Terraform on Azure
+
+ For more information about troubleshooting Terraform, see [Troubleshoot common problems when using Terraform on Azure](/azure/developer/terraform/troubleshoot).
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Learn more about using Terraform in Azure](/azure/terraform)
virtual-network Virtual Network Cli Sample Peer Two Virtual Networks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/scripts/virtual-network-cli-sample-peer-two-virtual-networks.md
ms.devlang: azurecli Last updated 02/03/2022
virtual-network Tutorial Filter Network Traffic Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/tutorial-filter-network-traffic-cli.md
description: In this article, you learn how to filter network traffic to a subne
documentationcenter: virtual-network -+ tags: azure-resource-manager # Customer intent: I want to filter network traffic to virtual machines that perform similar functions, such as web servers.
virtual-network Tutorial Filter Network Traffic Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/tutorial-filter-network-traffic-powershell.md
description: In this article, you learn how to filter network traffic to a subne
documentationcenter: virtual-network -+ tags: azure-resource-manager # Customer intent: I want to filter network traffic to virtual machines that perform similar functions, such as web servers.
virtual-wan Monitoring Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/monitoring-best-practices.md
This section of the article focuses on metric-based alerts. Azure Firewall offer
|Create alert rule for risk of SNAT port exhaustion.|Azure Firewall provides 2,496 SNAT ports per public IP address configured per backend virtual machine scale instance. It's important to estimate in advance the number of SNAT ports that will fulfill your organizational requirements for outbound traffic to the Internet. Not doing so increases the risk of exhausting the number of available SNAT ports on the Azure Firewall, potentially causing outbound connectivity failures.<br><br>Use the **SNAT port utilization** metric to monitor the percentage of outbound SNAT ports currently in use. Create an alert rule for this metric to be triggered whenever this percentage surpasses **95%** (due to an unforeseen traffic increase, for example) so you can act accordingly by configuring an additional public IP address on the Azure Firewall, or by using an [Azure NAT Gateway](../nat-gateway/nat-overview.md) instead. Use the **Maximum** aggregation type when configuring the alert rule.<br><br>To learn more about how to interpret the **SNAT port utilization** metric, see [Overview of Azure Firewall logs and metrics](../firewall/logs-and-metrics.md#metrics). To learn more about how to scale SNAT ports in Azure Firewall, see [Scale SNAT ports with Azure NAT Gateway](../firewall/integrate-with-nat-gateway.md).| |Create alert rule for firewall overutilization.|Azure Firewall maximum throughput differs depending on the SKU and features enabled. To learn more about Azure Firewall performance, see [Azure Firewall performance](../firewall/firewall-performance.md).<br><br>You might want to be alerted if your firewall is nearing its maximum throughput so you can troubleshoot the underlying cause, because this can have an impact on the firewall's performance.<br><br> Create an alert rule to be triggered whenever the **Throughput** metric surpasses a value nearing the firewall's maximum throughput; for example, if the maximum throughput is 30 Gbps, configure 25 Gbps as the **Threshold** value. The **Throughput** metric unit is **bits/sec**. Choose the **Average** aggregation type when creating the alert rule.
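
To make the table above concrete, the following Azure CLI sketch creates a metric alert rule for SNAT port utilization on an Azure Firewall. The resource names, the action group, and the metric name in the condition are assumptions rather than values from this article; confirm the exact metric name for your firewall with `az monitor metrics list-definitions` first.

```azurecli
# Sketch only: resource names, the action group, and the metric name are placeholders/assumptions.
# Confirm the exact metric name (assumed here to be SNATPortUtilization) before creating the alert.
az monitor metrics list-definitions \
    --resource /subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.Network/azureFirewalls/<firewall-name>

# Create an alert that fires when SNAT port utilization exceeds 95%, using the Maximum aggregation.
az monitor metrics alert create \
    --name snat-port-utilization-alert \
    --resource-group <rg> \
    --scopes /subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.Network/azureFirewalls/<firewall-name> \
    --condition "max SNATPortUtilization > 95" \
    --window-size 5m \
    --evaluation-frequency 1m \
    --action <action-group-name-or-id>
```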
+## Resource Health Alerts
+
+You can also configure [Resource Health Alerts](../service-health/resource-health-alert-monitor-guide.md) through Service Health for the following resources. These alerts keep you informed about the availability of your Virtual WAN environment and help you determine whether networking issues stem from your Azure resources entering an unhealthy state rather than from your on-premises environment. We recommend configuring alerts for when the resource status becomes degraded or unavailable. If that happens, check for recent spikes in the traffic processed by these resources, the routes advertised to them, or the number of branch and VNet connections created. For the limits that Virtual WAN supports, see [Azure Virtual WAN limits](../azure-resource-manager/management/azure-subscription-service-limits.md#virtual-wan-limits). A minimal alert example follows the resource list below.
+
+* Microsoft.Network/vpnGateways
+* Microsoft.Network/expressRouteGateways
+* Microsoft.Network/azureFirewalls
+* Microsoft.Network/virtualHubs
+* Microsoft.Network/p2sVpnGateways
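+
+A minimal Azure CLI sketch for one of these resource types follows. The names are placeholders, and the exact condition and parameters for Resource Health activity log alerts are assumptions to verify against `az monitor activity-log alert create --help`:
+
+```azurecli
+# Sketch only: names are placeholders, and the Resource Health condition shown is an assumption to verify.
+az monitor activity-log alert create \
+    --name vpngw-resource-health-alert \
+    --resource-group <rg> \
+    --scope /subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.Network/vpnGateways/<gateway-name> \
+    --condition "category=ResourceHealth" \
+    --description "Fires on Resource Health events for this VPN gateway"
+```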
+ ## Next steps * See [Monitoring Virtual WAN data reference](monitor-virtual-wan-reference.md) for a data reference of the metrics, logs, and other important values created by Virtual WAN.