Updates from: 11/25/2023 02:08:45
Service Microsoft Docs article Related commit history on GitHub Change details
ai-services Luis Container Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/luis-container-configuration.md
Do not use the starter key or the authoring key.
## Billing setting
-The `Billing` setting specifies the endpoint URI of the _Azure AI services_ resource on Azure used to meter billing information for the container. You must specify a value for this configuration setting, and the value must be a valid endpoint URI for a _Azure AI services_ resource on Azure. The container reports usage about every 10 to 15 minutes.
+The `Billing` setting specifies the endpoint URI of the _Azure AI services_ resource on Azure used to meter billing information for the container. You must specify a value for this configuration setting, and the value must be a valid endpoint URI for an _Azure AI services_ resource on Azure. The container reports usage about every 10 to 15 minutes.
This setting can be found in the following places:
ai-services How To Pronunciation Assessment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/how-to-pronunciation-assessment.md
In the `SpeechRecognizer`, you can specify the language that you're learning or
> [!TIP]
> If you aren't sure which locale to set when a language has multiple locales (such as Spanish), try each locale (such as `es-ES` and `es-MX`) separately. Evaluate the results to determine which locale scores higher for your specific scenario.
-You must create a `PronunciationAssessmentConfig` object. You need to configure the `PronunciationAssessmentConfig` object to enable prosody assessment for your pronunciation evaluation. This feature assesses aspects like stress, intonation, speaking speed, and rhythm, providing insights into the naturalness and expressiveness of your speech. For a content assessment (part of the [unscripted assessment](#unscripted-assessment-results) for the speaking language learning scenario), you also need to configure the `PronunciationAssessmentConfig` object. By providing a topic description, you can enhance the assessment's understanding of the specific topic being spoken about, resulting in more precise content assessment scores.
+You must create a `PronunciationAssessmentConfig` object. Optionally you can set `EnableProsodyAssessment` and `EnableContentAssessmentWithTopic` to enable prosody and content assessment. For more information, see [configuration methods](#configuration-methods).
::: zone pivot="programming-language-csharp"
This table lists some of the key configuration parameters for pronunciation asse
| `EnableMiscue` | Enables miscue calculation when the pronounced words are compared to the reference text. Enabling miscue is optional. If this value is `True`, the `ErrorType` result value can be set to `Omission` or `Insertion` based on the comparison. Accepted values are `False` and `True`. Default: `False`. To enable miscue calculation, set `EnableMiscue` to `True`. You can refer to the code snippet below the table. |
| `ScenarioId` | A GUID indicating a customized point system. |
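To make the `Omission` and `Insertion` error types concrete, here's an illustrative Python sketch that classifies miscues by aligning recognized words against the reference text. This is only a sketch of the concept, not the Speech service's actual miscue algorithm:

```python
from difflib import SequenceMatcher

def classify_miscues(reference_words, recognized_words):
    """Label words as Omission (in the reference but not spoken) or
    Insertion (spoken but not in the reference). Illustrative only --
    not the Speech service's actual miscue algorithm."""
    errors = []
    matcher = SequenceMatcher(a=reference_words, b=recognized_words)
    for op, i1, i2, j1, j2 in matcher.get_opcodes():
        if op in ("delete", "replace"):
            errors += [(w, "Omission") for w in reference_words[i1:i2]]
        if op in ("insert", "replace"):
            errors += [(w, "Insertion") for w in recognized_words[j1:j2]]
    return errors

reference = "today was a beautiful day".split()
recognized = "today was beautiful day indeed".split()
print(classify_miscues(reference, recognized))
# → [('a', 'Omission'), ('indeed', 'Insertion')]
```

The skipped reference word maps to `Omission` and the extra recognized word to `Insertion`, mirroring how the `ErrorType` result is described in the table above.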
+### Configuration methods
+
+This table lists some of the optional methods you can set for the `PronunciationAssessmentConfig` object.
+
+> [!NOTE]
+> Content and prosody assessments are only available in the [en-US](./language-support.md?tabs=pronunciation-assessment) locale.
+
+| Method | Description |
+|--|-|
+| `EnableProsodyAssessment` | Enables prosody assessment for your pronunciation evaluation. This feature assesses aspects like stress, intonation, speaking speed, and rhythm, providing insights into the naturalness and expressiveness of your speech.<br/><br/>Enabling prosody assessment is optional. If this method is called, the `ProsodyScore` result value is returned. |
+| `EnableContentAssessmentWithTopic` | Enables content assessment. A content assessment is part of the [unscripted assessment](#unscripted-assessment-results) for the speaking language learning scenario. By providing a topic description via this method, you can enhance the assessment's understanding of the specific topic being spoken about. For example, in C#, call `pronunciationAssessmentConfig.EnableContentAssessmentWithTopic("greeting");` and replace 'greeting' with any text that describes your topic. The topic value has no length limit and currently supports only the `en-US` locale. |
+ ## Get pronunciation assessment results
+
+ When speech is recognized, you can request the pronunciation assessment results as SDK objects or a JSON string.
app-service Tutorial Dotnetcore Sqldb App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-dotnetcore-sqldb-app.md
In this step, you'll configure GitHub deployment using GitHub Actions. It's just
:::column span="2":::
**Step 7:**
1. Open *.github/workflows/main_msdocs-core-sql-XYZ* in the explorer. This file was created by the App Service create wizard.
- 1. Under the `dotnet publish` step, add a step to install the [Entity Framework Core tool](/ef/core/cli/dotnet) with the command `dotnet tool install -g dotnet-ef`.
+ 1. Under the `dotnet publish` step, add a step to install the [Entity Framework Core tool](/ef/core/cli/dotnet) with the command `dotnet tool install -g dotnet-ef --version 7.0.14`.
 1. Under the new step, add another step to generate a database [migration bundle](/ef/core/managing-schemas/migrations/applying?tabs=dotnet-core-cli#bundles) in the deployment package: `dotnet ef migrations bundle --runtime linux-x64 -p DotNetCoreSqlDb/DotNetCoreSqlDb.csproj -o ${{env.DOTNET_ROOT}}/myapp/migrate`. The migration bundle is a self-contained executable that you can run in the production environment without needing the .NET SDK. The App Service Linux container only has the .NET runtime and not the .NET SDK.
:::column-end:::
azure-web-pubsub Reference Functions Bindings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/reference-functions-bindings.md
Title: Reference - Azure Web PubSub trigger and bindings for Azure Functions
description: The reference describes Azure Web PubSub trigger and bindings for Azure Functions
Previously updated : 04/04/2023 Last updated : 11/24/2023
# Azure Web PubSub trigger and bindings for Azure Functions
Working with the trigger and bindings requires you reference the appropriate pac
| C# Script, JavaScript, Python, PowerShell | [Explicitly install extensions], [Use extension bundles] | The [Azure Tools extension] is recommended for use with Visual Studio Code. |
| C# Script (online-only in Azure portal) | Adding a binding | To update existing binding extensions without having to republish your function app, see [Update your extensions]. |
-> [!NOTE]
-> Install the client library from [NuGet](https://www.nuget.org/) with specified package and version.
->
-> ```bash
-> func extensions install --package Microsoft.Azure.WebJobs.Extensions.WebPubSub
-> ```
- [NuGet package]: https://www.nuget.org/packages/Microsoft.Azure.WebJobs.Extensions.WebPubSub [Use extension bundles]: ../azure-functions/functions-bindings-register.md#extension-bundles [Explicitly install extensions]: ../azure-functions/functions-bindings-register.md#explicitly-install-extensions
public static object Run(
[HttpTrigger(AuthorizationLevel.Anonymous, "get", "post")] HttpRequest req, [WebPubSubContext] WebPubSubContext wpsContext) {
- if (wpsContext.IsPreflight || !wpsContext.HasError)
+ // If the request is a preflight request or isn't valid, return the prebuilt response from the extension.
+ if (wpsContext.IsPreflight || wpsContext.HasError)
{ return wpsContext.Response; }
Define function in `index.js`.
```js
module.exports = async function (context, req, wpsContext) {
+ // If the request is a preflight request or isn't valid, return the prebuilt response from the extension.
 if (wpsContext.hasError || wpsContext.isPreflight) {
- console.log(`invalid request: ${wpsContext.response.message}.`);
return wpsContext.response; }
- console.log(`user: ${wpsContext.connectionContext.userId} is connecting.`);
+ // Return an HTTP response with the connect event response as the body.
+ return { body: {"userId": wpsContext.connectionContext.userId} };
};
```
container-registry Container Registry Tutorial Sign Build Push https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-tutorial-sign-build-push.md
In this tutorial:
## Install Notation CLI and AKV plugin
-1. Install Notation v1.0.0 on a Linux amd64 environment. You can also download the package for other environments by following the [Notation installation guide](https://notaryproject.dev/docs/user-guides/installation/).
+1. Install Notation v1.0.1 on a Linux amd64 environment. You can also download the package for other environments by following the [Notation installation guide](https://notaryproject.dev/docs/user-guides/installation/).
```bash
# Download, extract, and install
- curl -Lo notation.tar.gz https://github.com/notaryproject/notation/releases/download/v1.0.0/notation_1.0.0_linux_amd64.tar.gz
+ curl -Lo notation.tar.gz https://github.com/notaryproject/notation/releases/download/v1.0.1/notation_1.0.1_linux_amd64.tar.gz
tar xvzf notation.tar.gz

# Copy the Notation binary to the desired bin directory in your $PATH, for example
iot-operations Howto Configure Destination Data Explorer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/connect-to-cloud/howto-configure-destination-data-explorer.md
Title: Send data to Azure Data Explorer from a pipeline
description: Configure a pipeline destination stage to send the pipeline output to Azure Data Explorer for storage and analysis.
-#
+ - ignite-2023
iot-operations Howto Configure Destination Fabric https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/connect-to-cloud/howto-configure-destination-fabric.md
Title: Send data to Microsoft Fabric from a pipeline
description: Configure a pipeline destination stage to send the pipeline output to Microsoft Fabric for storage and analysis.
-#
+ - ignite-2023
iot-operations Quickstart Process Telemetry https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/get-started/quickstart-process-telemetry.md
description: "Quickstart: Use a Data Processor pipeline to process data from you
+ - ignite-2023
Last updated 10/11/2023
iot-operations Concept Configuration Patterns https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/process-data/concept-configuration-patterns.md
Title: Data Processor configuration patterns
description: Understand the common patterns such as path, batch, and duration that you use to configure pipeline stages.
-#
+ - ignite-2023
iot-operations Concept Contextualization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/process-data/concept-contextualization.md
Title: Understand message contextualization
description: Understand how you can use message contextualization in Azure IoT Data Processor to enrich messages in a pipeline.
-#
+ - ignite-2023
iot-operations Concept Jq Expression https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/process-data/concept-jq-expression.md
Title: Data Processor jq expressions
description: Understand the jq expressions used by Azure IoT Data Processor to operate on messages in the pipeline.
-#
+ - ignite-2023
iot-operations Concept Jq Path https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/process-data/concept-jq-path.md
Title: Data Processor jq path expressions
description: Understand the jq path expressions used by Azure IoT Data Processor to reference parts of a message.
-#
+ - ignite-2023
iot-operations Concept Jq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/process-data/concept-jq.md
Title: Data Processor jq usage
description: Overview of how the Azure IoT Data Processor uses jq expressions and paths to configure pipeline stages.
-#
+ - ignite-2023
iot-operations Concept Message Structure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/process-data/concept-message-structure.md
Title: Data Processor message structure overview
description: Understand the message structure used internally by Azure IoT Data Processor to represent messages as they move between pipeline stages.
-#
+ - ignite-2023
iot-operations Concept Partitioning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/process-data/concept-partitioning.md
Title: What is pipeline partitioning?
description: Understand how to use partitioning in pipelines to enable parallelism. Partitioning can improve throughput and reduce latency
-#
+ - ignite-2023
iot-operations Concept Supported Formats https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/process-data/concept-supported-formats.md
Title: Serialization and deserialization formats overview
description: Understand the data formats the Azure IoT Data Processor supports when it serializes or deserializes messages.
-#
+ - ignite-2023
iot-operations Howto Configure Aggregate Stage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/process-data/howto-configure-aggregate-stage.md
Title: Aggregate data in a pipeline
description: Configure an aggregate pipeline stage to aggregate data in a Data Processor pipeline to enable batching and down-sampling scenarios.
-#
+ - ignite-2023
iot-operations Howto Configure Datasource Http https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/process-data/howto-configure-datasource-http.md
Title: Configure a pipeline HTTP endpoint source stage
description: Configure a pipeline source stage to read data from an HTTP endpoint for processing. The source stage is the first stage in a Data Processor pipeline.
-#
+ - ignite-2023
iot-operations Howto Configure Datasource Mq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/process-data/howto-configure-datasource-mq.md
Title: Configure a pipeline MQ source stage
description: Configure a pipeline source stage to read messages from an Azure IoT MQ topic for processing. The source stage is the first stage in a Data Processor pipeline.
-#
+ - ignite-2023
iot-operations Howto Configure Datasource Sql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/process-data/howto-configure-datasource-sql.md
Title: Configure a pipeline SQL Server source stage
description: Configure a pipeline source stage to read data from Microsoft SQL Server for processing. The source stage is the first stage in a Data Processor pipeline.
-#
+ - ignite-2023
iot-operations Howto Configure Destination Grpc https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/process-data/howto-configure-destination-grpc.md
Title: Send data to a gRPC endpoint from a pipeline
description: Configure a pipeline destination stage to send the pipeline output to a gRPC endpoint for further processing.
-#
+ - ignite-2023
iot-operations Howto Configure Destination Mq Broker https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/process-data/howto-configure-destination-mq-broker.md
Title: Publish data to an MQTT broker from a pipeline
description: Configure a pipeline destination stage to publish the pipeline output to an MQTT broker and make it available to other subscribers.
-#
+ - ignite-2023
iot-operations Howto Configure Destination Reference Store https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/process-data/howto-configure-destination-reference-store.md
Title: Send data to the reference data store from a pipeline
description: Configure a pipeline destination stage to send the pipeline output to the reference data store to use to contextualize messages in other pipelines.
-#
+ - ignite-2023
iot-operations Howto Configure Enrich Stage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/process-data/howto-configure-enrich-stage.md
Title: Enrich data in a pipeline
description: Configure an enrich pipeline stage to enrich data in a Data Processor pipeline with contextual or reference data.
-#
+ - ignite-2023
iot-operations Howto Configure Filter Stage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/process-data/howto-configure-filter-stage.md
Title: Filter data in a pipeline
description: Configure a filter pipeline stage to remove messages that aren't needed for further processing and to avoid sending unnecessary data to cloud services.
-#
+ - ignite-2023
iot-operations Howto Configure Grpc Callout Stage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/process-data/howto-configure-grpc-callout-stage.md
Title: Call a gRPC endpoint from a pipeline
description: Configure a gRPC call out pipeline stage to make an HTTP request from a pipeline to incorporate custom processing logic.
-#
+ - ignite-2023
iot-operations Howto Configure Http Callout Stage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/process-data/howto-configure-http-callout-stage.md
Title: Call an HTTP endpoint from a pipeline
description: Configure an HTTP call out pipeline stage to make an HTTP request from a pipeline to incorporate custom processing logic.
-#
+ - ignite-2023
iot-operations Howto Configure Lkv Stage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/process-data/howto-configure-lkv-stage.md
Title: Track last known values in a pipeline
description: Configure a last known value pipeline stage to track and maintain up to date and complete data in a Data Processor pipeline.
-#
+ - ignite-2023
iot-operations Howto Configure Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/process-data/howto-configure-reference.md
Title: Configure a reference dataset
description: The reference datasets within the Data Processor store reference data that other pipelines can use for enrichment and contextualization.
-#
+
iot-operations Howto Configure Transform Stage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/process-data/howto-configure-transform-stage.md
Title: Use jq to transform data in a pipeline
description: Configure a transform pipeline stage to configure a data transformation with jq in a Data Processor pipeline.
-#
+ - ignite-2023
iot-operations Howto Edit Pipelines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/process-data/howto-edit-pipelines.md
Title: Edit and manage pipelines
description: Use the advanced features in the Digital Operations portal to edit pipelines and import and export pipelines.
-#
+ - ignite-2023
iot-operations Overview Data Processor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/process-data/overview-data-processor.md
Title: Process messages at the edge
description: Use the Azure IoT Data Processor to aggregate, enrich, normalize, and filter the data from your devices before you send it to the cloud.
-#
+ - ignite-2023
load-testing Concept Load Testing Concepts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-testing/concept-load-testing-concepts.md
Previously updated : 11/03/2022 Last updated : 11/24/2023
# Key concepts for new Azure Load Testing users
-Learn about the key concepts and components of Azure Load Testing. This can help you to more effectively set up a load test to identify performance issues in your application.
+Learn about the key concepts and components of Azure Load Testing. This information can help you to more effectively set up a load test to identify performance issues in your application.
## General concepts of load testing
You can achieve the target number of virtual users by [configuring the number of
### Ramp-up time
-The ramp-up time is the amount of time to get to the full number of [virtual users](#virtual-users) for the load test. If the number of virtual users is 20, and the ramp-up time is 120 seconds, then it will take 120 seconds to get to all 20 virtual users. Each virtual user will start 6 (120/20) seconds after the previous user was started.
+The ramp-up time is the amount of time to get to the full number of [virtual users](#virtual-users) for the load test. If the number of virtual users is 20, and the ramp-up time is 120 seconds, then it takes 120 seconds to get to all 20 virtual users. Each virtual user will start 6 (120/20) seconds after the previous user was started.
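The ramp-up arithmetic above can be checked with a quick calculation:

```python
# Ramp-up spacing: with 20 virtual users and a 120-second ramp-up,
# a new virtual user starts every 120 / 20 = 6 seconds.
virtual_users = 20
ramp_up_seconds = 120

interval = ramp_up_seconds / virtual_users
start_times = [round(i * interval) for i in range(virtual_users)]

print(interval)         # → 6.0 seconds between user starts
print(start_times[:4])  # → [0, 6, 12, 18]
```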
### Response time
-The response time of an individual request, or [elapsed time in JMeter](https://jmeter.apache.org/usermanual/glossary.html), is the total time from just before sending the request to just after the last response has been received. The response time doesn't include the time to render the response. Any client code, such as JavaScript, isn't processed during the load test.
+The response time of an individual request, or [elapsed time in JMeter](https://jmeter.apache.org/usermanual/glossary.html), is the total time from just before sending the request to just after the last response is received. The response time doesn't include the time to render the response. Any client code, such as JavaScript, isn't processed during the load test.
### Latency
-The latency of an individual request is the total time from just before sending the request to just after the first response has been received. Latency includes all the processing needed to assemble the request and assembling the first part of the response.
+The latency of an individual request is the total time from just before sending the request to just after the first response is received. Latency includes all the processing needed to assemble the request and to assemble the first part of the response.
### Requests per second (RPS)
Another way to calculate the RPS is based on the average application's [latency]
The formula is: Virtual users = (RPS) * (latency in seconds).
-For example, given an application latency of 20 milliseconds (0.02 second), to simulate 100,000 RPS, you should configure the load test with 2,000 virtual users (100,000 * 0.02).
+For example, given an application latency of 20 milliseconds (0.02 seconds), to simulate 100,000 RPS, you should configure the load test with 2,000 virtual users (100,000 * 0.02).
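The formula and example above work out as:

```python
# Virtual users needed = target RPS * average latency (in seconds).
target_rps = 100_000
latency_seconds = 0.02  # 20 milliseconds

virtual_users = target_rps * latency_seconds
print(virtual_users)  # → 2000.0
```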
## Azure Load Testing components
-Learn about the key concepts and components of Azure Load Testing.
+Learn about the key concepts and components of Azure Load Testing. The following diagram gives an overview of how the different concepts relate to one another.
+ ### Load testing resource
To run a load test for your application, you add a [test](#test) to your load te
You can use [Azure role-based access control](./how-to-assign-roles.md) to grant access to your load testing resource and related artifacts.
-Azure Load Testing lets you [use managed identities](./how-to-use-a-managed-identity.md) to access other Azure resources, such as Azure Key Vault for storing [load test secret parameters](./how-to-parameterize-load-tests.md). You can use either a user-assigned or system-assigned managed identity.
+Azure Load Testing lets you [use managed identities](./how-to-use-a-managed-identity.md) to access Azure Key Vault for storing [load test secret parameters or certificates](./how-to-parameterize-load-tests.md). You can use either a user-assigned or system-assigned managed identity.
### Test
-A test represents a load test for your application. A test is attached to an Azure load testing resource. You can create a test in either of two ways:
+A test describes the load test configuration for your application. You add a test to an existing Azure load testing resource.
+
+A test contains a test plan, which describes the steps to invoke the application endpoint. You can define the test plan in either of two ways:
-- Create a [test based on an existing JMeter script](./how-to-create-and-run-load-test-with-jmeter-script.md).
-- Create a [URL-based load test](./quickstart-create-and-run-load-test.md) (quick test). Azure Load Testing automatically generates the corresponding JMeter script, which you can modify at any time.
+- [Upload a JMeter test script](./how-to-create-and-run-load-test-with-jmeter-script.md).
+- [Specify the list of URL endpoints to test](./quickstart-create-and-run-load-test.md).
-A test contains a JMeter test script, or *test plan*, and related data and configuration files. Azure Load Testing supports all communication protocols that JMeter supports, not only HTTP-based endpoints. For example, you might want to read from or write to a database or message queue in the test script.
+Azure Load Testing supports all communication protocols that JMeter supports, not only HTTP-based endpoints. For example, you might want to read from or write to a database or message queue in the test script.
The test also specifies the configuration settings for running the load test:
The test also specifies the configuration settings for running the load test:
- [Fail criteria](./how-to-define-test-criteria.md) to determine when the test should pass or fail. - Monitoring settings to configure the list of [Azure app components and resource metrics to monitor](./how-to-monitor-server-side-metrics.md) during the test run.
-When you start a test, Azure Load Testing deploys the JMeter test script, related files, and configuration to the requested test engine instances. The test engine instances then initiate the JMeter test script to simulate the application load.
+In addition, you can upload CSV input data files and JMeter configuration files to the test.
+
+When you start a test, Azure Load Testing deploys the JMeter test script, related files, and configuration to the test engine instances. The test engine instances then initiate the JMeter test script to simulate the application load.
Each time you start a test, Azure Load Testing creates a [test run](#test-run) and attaches it to the test.
-### Test engine
+### Test run
-A test engine is computing infrastructure, managed by Microsoft that runs the Apache JMeter test script. The test engine instances run the JMeter script in parallel. You can [scale out your load test](./how-to-high-scale-load.md) by configuring the number of test engine instances. Learn how to configure the number of [virtual users](#virtual-users), or simulate a target number of [requests per second](#requests-per-second-rps).
+A test run represents one execution of a load test. When you run a test, the test run contains a copy of the configuration settings from the associated test.
-The test engines are hosted in the same location as your Azure Load Testing resource. You can configure the Azure region when you create the Azure load testing resource.
+After the test run completes, you can [view and analyze the load test results in the Azure Load Testing dashboard](./tutorial-identify-bottlenecks-azure-portal.md) in the Azure portal.
-While the test script runs, Azure Load Testing collects and aggregates the Apache JMeter worker logs from all test engine instances. You can [download the logs for analyzing errors during the load test](./how-to-troubleshoot-failing-test.md).
+Alternately, you can [download the test logs](./how-to-diagnose-failing-load-test.md#download-apache-jmeter-worker-logs-for-your-load-test) and [export the test results file](./how-to-export-test-results.md).
-### Test run
+> [!IMPORTANT]
+> When you update a test, the existing test runs don't automatically inherit the new settings from the test. The new settings are only used by new test runs when you run the *test*. If you rerun an existing *test run*, the original settings of the test run are used.
-A test run represents one execution of a load test. It collects the logs associated with running the Apache JMeter script, the [load test YAML configuration](./reference-test-config-yaml.md), the list of [app components to monitor](./how-to-monitor-server-side-metrics.md), and the [results of the test](./how-to-export-test-results.md).
+### Test engine
-You can [view and analyze the load test results in the Azure Load Testing dashboard](./tutorial-identify-bottlenecks-azure-portal.md) in the Azure portal.
+A test engine is computing infrastructure, managed by Microsoft, that runs the Apache JMeter test script. The test engine instances run the JMeter script in parallel. You can [scale out your load test](./how-to-high-scale-load.md) by configuring the number of test engine instances. Learn how to configure the number of [virtual users](#virtual-users), or simulate a target number of [requests per second](#requests-per-second-rps).
-> [!IMPORTANT]
-> Existing test runs don't use the new settings when you update a test. When you rerun an existing test run, the initial configuration of the test run is used. The new settings will only be used when you run the test.
+The test engines are hosted in the same location as your Azure Load Testing resource. You can configure the Azure region when you create the Azure load testing resource.
+
+While the test script runs, Azure Load Testing collects and aggregates the Apache JMeter worker logs from all test engine instances. You can [download the logs for analyzing errors during the load test](./how-to-diagnose-failing-load-test.md).
### App component
When you run a load test for an Azure-hosted application, you can monitor resour
When you create or update a load test, you can configure the list of app components that Azure Load Testing will monitor. You can modify the list of default resource metrics for each app component.
-Learn more about which [Azure resource types are supported by Azure Load Testing](./resource-supported-azure-resource-types.md).
+Learn more about the [Azure resource types that Azure Load Testing supports](./resource-supported-azure-resource-types.md).
### Metrics
During a load test, Azure Load Testing collects metrics about the test execution
- *Server-side metrics* are available for Azure-hosted applications and provide information about your Azure [application components](#app-component). Azure Load Testing integrates with Azure Monitor, including Application Insights and Container insights, to capture details from the Azure services. Depending on the type of service, different metrics are available. For example, metrics can be for the number of database reads, the type of HTTP responses, or container resource consumption.
-## Next steps
+## Related content
You now know the key concepts of Azure Load Testing to start creating a load test.
load-testing How To Assign Roles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-testing/how-to-assign-roles.md
Previously updated : 11/07/2022 Last updated : 11/24/2023
In Azure Load Testing, access is granted by assigning the appropriate Azure role
If you have the **Owner**, **Contributor**, or **Load Test Owner** role at the subscription level, you automatically have the same permissions as the **Load Test Owner** at the resource level.
-You'll encounter this message if your account doesn't have the necessary permissions to manage tests.
+You encounter this message if your account doesn't have the necessary permissions to manage tests.
:::image type="content" source="media/how-to-assign-roles/azure-load-testing-not-authorized.png" lightbox="media/how-to-assign-roles/azure-load-testing-not-authorized.png" alt-text="Screenshot that shows an error message in the Azure portal that you're not authorized to use the Azure Load Testing resource.":::
You'll encounter this message if your account doesn't have the necessary permiss
## Role permissions
-The following tables describe the specific permissions given to each role. This can include Actions, which give permissions, and Not Actions, which restrict them.
+The following tables describe the specific permissions given to each role. These permissions can include *Actions*, which give permissions, and *Not Actions*, which restrict them.
### Load Test Owner
You can also configure role-based access to a load testing resource using the fo
Get-AzRoleDefinition -Name 'Load Test Contributor' ```
- The following is the example output:
+ The following snippet shows example output:
```output Name : Load Test Contributor
You can also configure role-based access to a load testing resource using the fo
AssignableScopes : {/} ```
-* [Get-AzRoleAssignment](/powershell/module/az.resources/get-azroleassignment) lists Azure role assignments at the specified scope. Without any parameters, this cmdlet returns all the role assignments made under the subscription. Use the `ExpandPrincipalGroups` parameter to list access assignments for the specified user, as well as the groups that the user belongs to.
+* [Get-AzRoleAssignment](/powershell/module/az.resources/get-azroleassignment) lists Azure role assignments at the specified scope. Without any parameters, this cmdlet returns all the role assignments made under the subscription. Use the `ExpandPrincipalGroups` parameter to list access assignments for the specified user, and the groups that the user belongs to.
**Example**: Use the following cmdlet to list all the users and their roles within a load testing resource.
You can also configure role-based access to a load testing resource using the fo
Remove-AzRoleAssignment -SignInName <sign-in Id of a user you wish to remove> -RoleDefinitionName 'Load Test Reader' -Scope '/subscriptions/<SubscriptionID>/resourcegroups/<Resource Group Name>/Providers/Microsoft.LoadTestService/loadtests/<Load Testing resource name>' ```
-## Next steps
+## Related content
* Learn more about [Using managed identities](./how-to-use-a-managed-identity.md). * Learn more about [Identifying performance bottlenecks (tutorial)](./tutorial-identify-bottlenecks-azure-portal.md).
load-testing How To Compare Multiple Test Runs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-testing/how-to-compare-multiple-test-runs.md
Use the following steps to mark a test run as baseline:
## Next steps - Learn more about [exporting the load test results for reporting](./how-to-export-test-results.md).-- Learn more about [troubleshooting load test execution errors](./how-to-troubleshoot-failing-test.md).
+- Learn more about [diagnosing failing load tests](./how-to-diagnose-failing-load-test.md).
- Learn more about [configuring automated performance testing with CI/CD](./tutorial-identify-performance-regression-with-cicd.md).
load-testing How To Configure User Properties https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-testing/how-to-configure-user-properties.md
Alternately, you also specify properties in the JMeter user interface. The follo
:::image type="content" source="media/how-to-configure-user-properties/jmeter-user-properties.png" alt-text="Screenshot that shows how to reference user properties in the JMeter user interface.":::
-You can [download the JMeter errors logs](./how-to-troubleshoot-failing-test.md) to troubleshoot errors during the load test.
+You can [download the JMeter errors logs](./how-to-diagnose-failing-load-test.md) to troubleshoot errors during the load test.
## Next steps - Learn more about [JMeter properties that Azure Load Testing overrides](./resource-jmeter-property-overrides.md). - Learn more about [parameterizing a load test by using environment variables and secrets](./how-to-parameterize-load-tests.md).-- Learn more about [troubleshooting load test execution errors](./how-to-troubleshoot-failing-test.md).
+- Learn more about [diagnosing failing load tests](./how-to-diagnose-failing-load-test.md).
load-testing How To Diagnose Failing Load Test https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-testing/how-to-diagnose-failing-load-test.md
+
+ Title: Diagnose failing load tests
+
+description: Learn how you can diagnose and troubleshoot failing tests in Azure Load Testing. Download and analyze the Apache JMeter worker logs in the Azure portal.
++++ Last updated : 11/23/2023+++
+# Diagnose failing load tests in Azure Load Testing
+
+In this article, you learn how to diagnose and troubleshoot failing load tests in Azure Load Testing. Azure Load Testing provides several options to identify the root cause of a failing load test. For example, you can use the load test dashboard, or download the test results or test log files for an in-depth analysis. Alternately, configure server-side metrics to identify issues with the application endpoint.
+
+Azure Load Testing uses two indicators to determine the outcome of a load test:
+
+- **Test status**: indicates whether the load test was able to start successfully and run the test script until the end. For example, the test status is *Failed* if there's an error in the JMeter test script, or if the [autostop listener](./how-to-define-test-criteria.md#auto-stop-configuration) interrupted the load test because too many requests failed.
+
+- **Test result**: indicates the result of evaluating the [test fail criteria](./how-to-define-test-criteria.md). If at least one of the test fail criteria was met, the test result is set to *Failed*.
+
+Depending on the indicator, you can use a different approach to identify the root cause of a test failure.
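The two indicators can be combined to pick a diagnostic path programmatically. The following Python sketch is illustrative only; the status strings mirror the *Failed* and *Done* values described above and aren't a documented API shape:

```python
def triage(status: str, test_result: str) -> str:
    """Suggest a diagnostic approach from the two outcome indicators.

    Status and result values here are illustrative, not an official schema.
    """
    if status == "FAILED":
        # The script never ran to completion: inspect the JMeter worker logs.
        return "inspect-worker-logs"
    if status == "DONE" and test_result == "FAILED":
        # The script completed, but test fail criteria were met:
        # review the criteria and sampler statistics in the dashboard.
        return "review-fail-criteria"
    return "passed"
```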
+
+## Prerequisites
+
+- An Azure account with an active subscription. If you don't have an Azure subscription, [create a free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
+- An Azure load testing resource that has a completed test run. If you need to create an Azure load testing resource, see [Create and run a load test](./quickstart-create-and-run-load-test.md).
+
+## Determine the outcome of a load test
+
+Use the following steps to get the outcome of a load test:
+
+# [Azure portal](#tab/portal)
+
+1. In the [Azure portal](https://portal.azure.com), go to your load testing resource.
+
+1. Select **Tests** in the left pane to view the list of tests.
+
+1. Select a test from the list to view all test runs for that test.
+
+ The list of test runs shows the **Test result** and **Test status** fields.
+
+ :::image type="content" source="media/how-to-find-download-logs/load-testing-test-runs-list.png" alt-text="Screenshot that shows the list of test runs in the Azure portal, highlighting the test result and test status columns." lightbox="media/how-to-find-download-logs/load-testing-test-runs-list.png":::
+
+1. Alternately, select a test run to view the load test dashboard for the test run.
+
+ :::image type="content" source="media/how-to-find-download-logs/load-testing-dashboard-failed-test.png" alt-text="Screenshot that shows the load test dashboard, highlighting status information for a failed test." lightbox="media/how-to-find-download-logs/load-testing-dashboard-failed-test.png":::
+
+# [GitHub Actions](#tab/github)
+
+1. In [GitHub](https://github.com), browse to your repository.
+
+1. Select **Actions**, and then select your workflow run from the list.
+
+ On the **Summary** page, the status of the GitHub Actions workflow reflects the outcome of the load test action.
+
    :::image type="content" source="media/how-to-find-download-logs/github-actions-summary-failed-test.png" alt-text="Screenshot that shows the summary page for a GitHub Actions workflow run, highlighting the failed load test stage." lightbox="media/how-to-find-download-logs/github-actions-summary-failed-test.png":::
+
+1. Alternately, select the workflow job to view the GitHub Actions workflow log.
+
+ :::image type="content" source="media/how-to-find-download-logs/github-actions-load-testing-log.png" alt-text="Screenshot that shows the GitHub Actions workflow logs, highlighting the error statistics information for a load test run." lightbox="media/how-to-find-download-logs/github-actions-load-testing-log.png":::
+
+# [Azure Pipelines](#tab/pipelines)
+
+1. Sign in to your Azure DevOps organization (`https://dev.azure.com/<your-organization>`), and select your project.
+
    Replace the `<your-organization>` text placeholder with your organization name.
+
+1. Select **Pipelines** in the left navigation, and then select your CI/CD workflow.
+
+ You can view the test run status in Azure Pipelines on the pipeline run **Summary** page. The status of the pipeline reflects the status of the load test task.
+
+ :::image type="content" source="media/how-to-find-download-logs/azure-pipelines-summary-failed-test.png" alt-text="Screenshot that shows the summary page for an Azure Pipelines run, highlighting the failed load test stage.":::
+
+1. Alternately, drill down into the Azure Pipelines log.
+
+ :::image type="content" source="./media/how-to-find-download-logs/azure-pipelines-load-test-log.png" alt-text="Screenshot that shows the Azure Pipelines run log, displaying the load testing metrics and Azure portal link." lightbox="./media/how-to-find-download-logs/azure-pipelines-load-test-log.png":::
+++
+## Diagnose test failures
+
+You can use a different approach for diagnosing a load test failure based on whether Azure Load Testing was able to run the test script to completion.
+
+### Load test failed to complete
+
+When the load test fails to complete, the *test status* of the test run is set to *Failed*.
+
+A load test can fail to complete for multiple reasons. Examples of why a load test doesn't finish:
+
+- There are errors in the JMeter test script.
+- The test script uses JMeter features that Azure Load Testing doesn't support. Learn about the [supported JMeter features](./resource-jmeter-support.md).
+- The test script references a file or plugin that isn't available on the test engine instance.
+- The autostop functionality interrupted the load test because too many requests are failing and the error rate exceeds the threshold. Learn more about the [autostop functionality in Azure Load Testing](./how-to-define-test-criteria.md#auto-stop-configuration).
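Conceptually, the autostop check is a sliding-window error-rate calculation. The following sketch illustrates the idea; the 90% threshold and 60-second window are assumptions modeled on the default autostop configuration, and the actual service logic may differ:

```python
def should_autostop(results, error_threshold=0.9, window_seconds=60):
    """Return True when the error rate within the most recent time window
    exceeds the threshold.

    `results` is a list of (timestamp_seconds, succeeded) tuples, one per
    sampled request. The threshold and window defaults are assumptions.
    """
    if not results:
        return False
    latest = max(t for t, _ in results)
    # Keep only the samples inside the most recent window.
    window = [ok for t, ok in results if t > latest - window_seconds]
    error_rate = window.count(False) / len(window)
    return error_rate > error_threshold
```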
+
+Use the following steps to help diagnose a test not finishing:
+
+1. Verify the error details on the load test dashboard.
+1. [Download and analyze the test logs](#download-apache-jmeter-worker-logs-for-your-load-test) to identify issues in the JMeter test script.
+1. [Download the test results](./how-to-export-test-results.md) to identify issues with individual requests.
+
+### Load test completed
+
+A load test might run the test script until the end (test status equals *Done*), but might not pass all the [test fail criteria](./how-to-define-test-criteria.md). If at least one of the test criteria didn't pass, the *test result* of the test run is set to *Failed*.
+
+Use the following steps to help diagnose a test failing to meet the test criteria:
+
+1. Review the [test fail criteria](./how-to-define-test-criteria.md) in the load test dashboard.
+1. Review the sampler statistics in the load test dashboard to further identify which requests in the test script might cause an issue.
+1. Review the client-side metrics in the load test dashboard. Optionally, you can filter the charts for a specific request by using the filter controls.
+1. [Download the test results](./how-to-export-test-results.md) to get error information for individual requests.
+1. Verify the test [engine health metrics](./how-to-high-scale-load.md#monitor-engine-instance-metrics) to identify possible resource contention on the test engines.
+1. Optionally, [add app components and monitor server-side metrics](./how-to-monitor-server-side-metrics.md) to identify performance bottlenecks for the application endpoint.
+
+## Download Apache JMeter worker logs for your load test
+
+When you run a load test, the Azure Load Testing test engines execute your Apache JMeter test script. During the load test, Apache JMeter stores detailed logging in the worker node logs. You can download these JMeter worker logs for each test run in the Azure portal. Azure Load Testing generates a worker log for each [test engine instance](./concept-load-testing-concepts.md#test-engine).
+
+> [!NOTE]
+> Azure Load Testing only records log messages with `WARN` or `ERROR` level in the worker logs.
+
+For example, if there's a problem with your JMeter script, the load test status is **Failed**. In the worker logs, you might find more information about the cause of the problem.
+
+To download the worker logs for an Azure Load Testing test run, follow these steps:
+
+1. In the [Azure portal](https://portal.azure.com), go to your Azure Load Testing resource.
+
+1. Select **Tests** to view the list of tests, and then select your load test from the list.
+
+1. From the list of test runs, select a test run to view the load test dashboard.
+
+1. On the dashboard, select **Download**, and then select **Logs**.
+
+ The browser should now start downloading a zipped folder that contains the JMeter worker node log file for each [test engine instance](./concept-load-testing-concepts.md#test-engine).
+
+ :::image type="content" source="media/how-to-find-download-logs/logs.png" alt-text="Screenshot that shows how to download the test log files from the test run details page.":::
+
+1. You can use any zip tool to extract the folder and access the log files.
+
+ The *worker.log* file can help you diagnose the root cause of a failing load test. In the screenshot, you can see that the test failed because of a missing file.
+
+ :::image type="content" source="media/how-to-find-download-logs/jmeter-log.png" alt-text="Screenshot that shows the JMeter log file content.":::
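After you extract the folder, you can quickly surface the error lines across all engine logs. The following sketch assumes the extracted folder contains one `worker.log`-style file per engine instance; the exact layout of the downloaded zip may differ:

```python
from pathlib import Path

def collect_errors(log_dir: str) -> list[str]:
    """Return all ERROR-level lines from every worker log under log_dir.

    Only WARN and ERROR messages are recorded in the worker logs, so
    filtering on "ERROR" narrows the output to the most likely root causes.
    """
    errors = []
    for log_file in sorted(Path(log_dir).rglob("*.log")):
        for line in log_file.read_text().splitlines():
            if "ERROR" in line:
                errors.append(f"{log_file.name}: {line}")
    return errors
```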
+
+## Related content
+
+- Learn how to [export the load test result](./how-to-export-test-results.md).
+- Learn how to [monitor server-side application metrics](./how-to-monitor-server-side-metrics.md).
+- Learn how to [get detailed insights for Azure App Service based applications](./concept-load-test-app-service.md#monitor).
+- Learn how to [compare multiple load test runs](./how-to-compare-multiple-test-runs.md).
load-testing How To Export Test Results https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-testing/how-to-export-test-results.md
In this article, you learn how to download the test results from Azure Load Testing in the Azure portal. You might use these results for reporting in third-party tools or for diagnosing test failures. Azure Load Testing generates the test results in comma-separated values (CSV) file format, and provides details of each application request for the load test.
-You can also use the test results to diagnose errors during a load test. The `responseCode` and `responseMessage` fields give you more information about failed requests. For more information about investigating errors, see [Troubleshoot test execution errors](./how-to-troubleshoot-failing-test.md).
+You can also use the test results to diagnose errors during a load test. The `responseCode` and `responseMessage` fields give you more information about failed requests. For more information about investigating errors, see [Diagnose failing load tests](./how-to-diagnose-failing-load-test.md).
You can generate the Apache JMeter dashboard from the CSV log file following the steps mentioned [here](https://jmeter.apache.org/usermanual/generating-dashboard.html#report).
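To triage failures without opening a dashboard, you can aggregate the CSV rows by response code. The following sketch assumes the standard JMeter CSV result columns (`responseCode`, `success`); adjust the field names if your results file differs:

```python
import csv
from collections import Counter
from io import StringIO

def failures_by_response_code(csv_text: str) -> Counter:
    """Count failed samples per responseCode in JMeter CSV test results."""
    counts = Counter()
    for row in csv.DictReader(StringIO(csv_text)):
        # JMeter writes the success flag as the strings "true"/"false".
        if row["success"].lower() != "true":
            counts[row["responseCode"]] += 1
    return counts
```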
The following code snippet shows an example of a backend listener, for Azure App
## Next steps -- Learn more about [Troubleshooting test execution errors](./how-to-troubleshoot-failing-test.md).
+- Learn more about [Diagnosing failing load tests](./how-to-diagnose-failing-load-test.md).
- For information about comparing test results, see [Compare multiple test results](./how-to-compare-multiple-test-runs.md). - To learn about performance test automation, see [Configure automated performance testing](./tutorial-identify-performance-regression-with-cicd.md).
load-testing How To High Scale Load https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-testing/how-to-high-scale-load.md
For example, if application latency is 20 milliseconds (0.02 seconds), and you'r
To achieve a target number of requests per second, configure the total number of virtual users for your load test. > [!NOTE]
-> Apache JMeter only reports requests that made it to the server and back, either successful or not. If Apache JMeter is unable to connect to your application, the actual number of requests per second will be lower than the maximum value. Possible causes might be that the server is too busy to handle the request, or that a TLS/SSL certificate is missing. To diagnose connection problems, you can check the **Errors** chart in the load testing dashboard and [download the load test log files](./how-to-troubleshoot-failing-test.md).
+> Apache JMeter only reports requests that made it to the server and back, either successful or not. If Apache JMeter is unable to connect to your application, the actual number of requests per second will be lower than the maximum value. Possible causes might be that the server is too busy to handle the request, or that a TLS/SSL certificate is missing. To diagnose connection problems, you can check the **Errors** chart in the load testing dashboard and [download the load test log files](./how-to-diagnose-failing-load-test.md).
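The example above implies a simple relationship between latency, virtual users, and throughput. The following sketch captures it, assuming no think time between requests (timers in your script add to the effective per-request time):

```python
import math

def virtual_users_needed(target_rps: float, latency_seconds: float) -> int:
    """Estimate the virtual users required for a target request rate.

    Each virtual user issues roughly 1/latency requests per second, so the
    required count is the target rate divided by that per-user rate.
    """
    return math.ceil(target_rps * latency_seconds)

# With 20 ms latency, one virtual user yields about 50 requests per second,
# so reaching 1,000 requests per second needs about 20 virtual users.
```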
## Test engine instances and virtual users
load-testing How To Read Csv Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-testing/how-to-read-csv-data.md
To configure your load test to split input CSV files:
### Test status is failed and test log has `File {my-filename} must exist and be readable`
-When the load test completes with the Failed status, you can [download the test logs](./how-to-troubleshoot-failing-test.md#download-apache-jmeter-worker-logs).
+When the load test completes with the Failed status, you can [download the test logs](./how-to-diagnose-failing-load-test.md#download-apache-jmeter-worker-logs-for-your-load-test).
When you receive an error message `File {my-filename} must exist and be readable` in the test log, the input CSV file couldn't be found when running the JMeter script.
load-testing How To Troubleshoot Failing Test https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-testing/how-to-troubleshoot-failing-test.md
- Title: Diagnose load test errors-
-description: Learn how you can diagnose and troubleshoot errors in Azure Load Testing. Download and analyze the Apache JMeter worker logs in the Azure portal.
---- Previously updated : 02/15/2023---
-# Diagnose failing load tests in Azure Load Testing
-
-Learn how to diagnose and troubleshoot errors while running a load test with Azure Load Testing. Download the Apache JMeter worker logs or load test results for detailed logging information. Alternately, you can configure server-side metrics to identify issues in specific Azure application components.
-
-Azure Load Testing runs your Apache JMeter script on the [test engine instances](./concept-load-testing-concepts.md#test-engine). During a load test run, errors might occur at different stages. For example, the JMeter test script could have an error that prevents the test from starting. Or there might be a problem to connect to the application endpoint, which results in the load test to have a large number of failed requests.
-
-Azure Load Testing provides different sources of information to diagnose these errors:
--- [Download the Apache JMeter worker logs](#download-apache-jmeter-worker-logs) to investigate issues with JMeter and the test script execution.-- [Diagnose failing tests using test results](#diagnose-failing-tests-using-test-results) and analyze the response code and response message of each HTTP request.-- [Diagnose failing tests using server-side metrics](#diagnose-failing-tests-using-server-side-metrics) to identify issues with specific Azure application components.-
-## Prerequisites
--- An Azure account with an active subscription. If you don't have an Azure subscription, [create a free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin. -- An Azure load testing resource that has a completed test run. If you need to create an Azure load testing resource, see [Create and run a load test](./quickstart-create-and-run-load-test.md). -
-## Identify load test errors
-
-You can identify errors in your load test in the following ways:
-
-# [Azure portal](#tab/portal)
--- The test run status is failed.-
- You can view the test run status in list of test runs for your load test, or in **Test details** in the load test dashboard for your test run.
-
- :::image type="content" source="media/how-to-find-download-logs/dashboard-test-failed.png" alt-text="Screenshot that shows the load test dashboard, highlighting status information for a failed test." lightbox="media/how-to-find-download-logs/dashboard-test-failed.png":::
--- The test run has a non-zero error percentage value.-
- If the test error percentage is below the default threshold, your test run shows as succeeded, even though there are errors. You can add [test fail criteria](./how-to-define-test-criteria.md) based on the error percentage.
-
- You can view the error percentage in the **Statistics** in the load test dashboard for your test run.
--- The errors chart in the client-side metrics in the load test dashboard shows errors.-
- :::image type="content" source="media/how-to-find-download-logs/dashboard-errors.png" alt-text="Screenshot that shows the load test dashboard, highlighting the error information." lightbox="media/how-to-find-download-logs/dashboard-errors.png":::
-
-# [GitHub Actions](#tab/github)
--- The test run status is failed.-
- You can view the test run status in GitHub Actions for your repository, on the **Summary** page, or drill down into the workflow run details.
-
- :::image type="content" source="media/how-to-find-download-logs/github-actions-summary-failed-test.png" alt-text="Screenshot that shows the summary page for an Azure Pipelines run, highlighting the failed load test stage." lightbox="media/how-to-find-download-logs/github-actions-summary-failed-test.png":::
--- The test run has a non-zero error percentage value.-
- If the test error percentage is below the default threshold, your test run shows as succeeded, even though there are errors. You can add [test fail criteria](./how-to-define-test-criteria.md) based on the error percentage.
-
- You can view the error percentage in GitHub Actions, in the workflow run logging information.
-
- :::image type="content" source="media/how-to-find-download-logs/github-actions-log-error-percentage.png" alt-text="Screenshot that shows the GitHub Actions workflow logs, highlighting the error statistics information for a load test run." lightbox="media/how-to-find-download-logs/github-actions-log-error-percentage.png":::
--- The test run log contains errors.-
- When there's a problem running the load test, the test run log might contain details about the root cause.
-
- You can view the list of errors in GitHub Actions, on the workflow run **Summary** page, in the **Annotations** section. From this section, you can drill down into the workflow run details to view the error details.
-
-# [Azure Pipelines](#tab/pipelines)
--- The test run status is failed.-
- You can view the test run status in Azure Pipelines, on the pipeline run **Summary** page, or drill down into the pipeline run details.
-
- :::image type="content" source="media/how-to-find-download-logs/azure-pipelines-summary-failed-test.png" alt-text="Screenshot that shows the summary page for an Azure Pipelines run, highlighting the failed load test stage.":::
--- The test run has a non-zero error percentage value.-
- If the test error percentage is below the default threshold, your test run shows as succeeded, even though there are errors. You can add [test fail criteria](./how-to-define-test-criteria.md) based on the error percentage.
-
- You can view the error percentage in Azure Pipelines, in the pipeline run logging information.
-
- :::image type="content" source="media/how-to-find-download-logs/azure-pipelines-log-error-percentage.png" alt-text="Screenshot that shows the Azure Pipelines run logs, highlighting the error statistics information for a load test run." lightbox="media/how-to-find-download-logs/azure-pipelines-log-error-percentage.png":::
--- The test run log contains errors.-
- When there's a problem running the load test, the test run log might contain details about the root cause.
-
- You can view the list of errors in Azure Pipelines, on the pipeline run **Summary** page, in the **Errors** section. From this section, you can drill down into the pipeline run details to view the error details.
---
-## Download Apache JMeter worker logs
-
-When you run a load test, the Azure Load Testing test engines execute your Apache JMeter test script. During the load test, Apache JMeter stores detailed logging in the worker node logs. You can download these JMeter worker logs for each test run in the Azure portal. Azure Load Testing generates a worker log for each [test engine instance](./concept-load-testing-concepts.md#test-engine).
-
-> [!NOTE]
-> Azure Load Testing only records log messages with `WARN` or `ERROR` level in the worker logs.
-
-For example, if there's a problem with your JMeter script, the load test status is **Failed**. In the worker logs you might find additional information about the cause of the problem.
-
-To download the worker logs for an Azure Load Testing test run, follow these steps:
-
-1. In the [Azure portal](https://portal.azure.com), go to your Azure Load Testing resource.
-
-1. Select **Tests** to view the list of tests, and then select your load test from the list.
-
- :::image type="content" source="media/how-to-find-download-logs/test-list.png" alt-text="Screenshot that shows the list of load tests for an Azure Load Test resource.":::
-
-1. Select a test run from the list to view the test run dashboard.
-
-1. On the dashboard, select **Download**, and then select **Logs**.
-
- The browser should now start downloading a zipped folder that contains the JMeter worker node log file for each [test engine instance](./concept-load-testing-concepts.md#test-engine).
-
- :::image type="content" source="media/how-to-find-download-logs/logs.png" alt-text="Screenshot that shows how to download the test log files from the test run details page.":::
-
-1. You can use any zip tool to extract the folder and access the log files.
-
- The *worker.log* file can help you diagnose the root cause of a failing load test. In the screenshot, you can see that the test failed because of a missing file.
-
- :::image type="content" source="media/how-to-find-download-logs/jmeter-log.png" alt-text="Screenshot that shows the JMeter log file content.":::
-
-## Diagnose failing tests using test results
-
-To diagnose load tests that have failed requests, for example because the application endpoint is not available, the worker logs don't provide request details. You can use the test results to get detailed information about the individual application requests.
-
-1. Follow these steps to [download the test results for a load test run](./how-to-export-test-results.md).
-
-1. Open the test results `.csv` file in an editor of your choice.
-
-1. Use the information in the `responseCode` and `responseMessage` fields to determine the root cause of failing application requests.
-
- In the following example, the test run failed because the application endpoint was not available (`java.net.UnknownHostException`):
-
- ```output
- timeStamp,elapsed,label,responseCode,responseMessage,threadName,dataType,success,failureMessage,bytes,sentBytes,grpThreads,allThreads,URL,Latency,IdleTime,Connect
- 1676471293632,13,Home page,Non HTTP response code: java.net.UnknownHostException,Non HTTP response message: backend.contoso.com: Name does not resolve,172.18.44.4-Thread Group 1-1,text,false,,2470,0,1,1,https://backend.contoso.com/blabla,0,0,13
- 1676471294339,0,Home page,Non HTTP response code: java.net.UnknownHostException,Non HTTP response message: backend.contoso.com,172.18.44.4-Thread Group 1-1,text,false,,2201,0,1,1,https://backend.contoso.com/blabla,0,0,0
- 1676471294346,0,Home page,Non HTTP response code: java.net.UnknownHostException,Non HTTP response message: backend.contoso.com,172.18.44.4-Thread Group 1-1,text,false,,2201,0,1,1,https://backend.contoso.com/blabla,0,0,0
- 1676471294350,0,Home page,Non HTTP response code: java.net.UnknownHostException,Non HTTP response message: backend.contoso.com,172.18.44.4-Thread Group 1-1,text,false,,2201,0,1,1,https://backend.contoso.com/blabla,0,0,0
- 1676471294354,0,Home page,Non HTTP response code: java.net.UnknownHostException,Non HTTP response message: backend.contoso.com,172.18.44.4-Thread Group 1-1,text,false,,2201,0,1,1,https://backend.contoso.com/blabla,0,0,0
- ```
-
-## Diagnose failing tests using server-side metrics
-
-For Azure-hosted applications, you can configure your load test to monitor resource metrics for your Azure application components. For example, a load test run might produce failed requests because an application component, such as a database, is throttling requests.
-
-Learn how you can [monitor server-side application metrics in Azure Load Testing](./how-to-monitor-server-side-metrics.md).
-
-For application endpoints that you host on Azure App Service, you can [use App Service Insights to get additional insights](./concept-load-test-app-service.md#monitor) about the application behavior.
-
-## Next steps
--- Learn how to [Export the load test result](./how-to-export-test-results.md).-- Learn how to [Monitor server-side application metrics](./how-to-monitor-server-side-metrics.md).-- Learn how to [Get detailed insights for Azure App Service based applications](./concept-load-test-app-service.md#monitor).-- Learn how to [Compare multiple load test runs](./how-to-compare-multiple-test-runs.md).
load-testing Resource Limits Quotas Capacity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-testing/resource-limits-quotas-capacity.md
Azure Load Testing captures metrics, test results, and logs for each test run. T
| Server-side metrics | 90 days | Learn how to [configure server-side metrics](./how-to-monitor-server-side-metrics.md). | | Client-side metrics | 365 days | | | Test results | 6 months | Learn how to [export test results](./how-to-export-test-results.md). |
-| Test log files | 6 months | Learn how to [download the logs for troubleshooting tests](./how-to-troubleshoot-failing-test.md). |
+| Test log files | 6 months | Learn how to [download the logs for diagnosing failing load tests](./how-to-diagnose-failing-load-test.md). |
## Request quota increases
migrate Tutorial Modernize Asp Net Aks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/tutorial-modernize-asp-net-aks.md
Previously updated : 08/31/2023- Last updated : 11/23/2023+ # Modernize ASP.NET web apps to Azure Kubernetes Service (preview)
Once the web apps are assessed, you can migrate them using the integrated migrat
### Choose from discovered apps
-In **Replicate** > **Web apps**, a paged list of discovered ASP.NET apps discovered on your environment is shown.
+In **Replicate** > **Web apps**, you can see a list of ASP.NET apps discovered in your environment.
:::image type="content" source="./media/tutorial-modernize-asp-net-aks/replicate-web-apps-list.png" alt-text="Screenshot of the Web apps tab on the Replicate tab.":::
In **Replicate** > **Web apps**, a paged list of discovered ASP.NET apps discove
5. Select **Next**.
+> [!NOTE]
+> The source path and the attribute value of App configurations and App directories together must be under 3000 characters in length. This roughly translates to around 15 entries (configurations and directories combined) of about 200 characters each.
+ ### Configure target settings
-In **Replicate** > **Target settings**, settings are provided to configure the target where the applications will be migrated to.
+In **Replicate** > **Target settings**, you can configure the target to which the applications will be migrated.
:::image type="content" source="./media/tutorial-modernize-asp-net-aks/replicate-target-settings.png" alt-text="Screenshot of the Target settings tab on the Replicate tab.":::
-1. Choose the subscription, resource group, and container registry resource to which the app container images should be pushed to.
+1. Choose the subscription, resource group, and container registry resource to which the app container images should be pushed.
2. Choose the subscription, resource group, and AKS cluster resource on which the app should be deployed.
3. Select **Next**.

> [!NOTE]
-> Only AKS clusters with windows nodes are listed.
+> Only AKS clusters with Windows nodes are listed.
### Configure deployment settings
-In **Replicate** > **Deployment settings**, settings are provided to configure the application on the AKS cluster.
+In **Replicate** > **Deployment settings**, you can configure the application on the AKS cluster.
:::image type="content" source="./media/tutorial-modernize-asp-net-aks/replicate-deployment-settings.png" alt-text="Screenshot of the Deployment settings tab on the Replicate tab.":::
postgresql Concepts Read Replicas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-read-replicas.md
Because replicas are read-only, they don't directly reduce write-capacity burden
### Considerations
-The feature is meant for scenarios where the lag is acceptable and meant for offloading queries. It isn't meant for synchronous replication scenarios where the replica data is expected to be up-to-date. There will be a measurable delay between the primary and the replica. This delay can be in minutes or even hours, depending on the workload and the latency between the primary and the replica. Typically, read replicas in the same region as the primary has less lag than geo-replicas, as the latter often deals with geographical distance-induced latency. For more insights into the performance implications of geo-replication, refer to [Geo-replication](#geo-replication) section. The data on the replica eventually becomes consistent with the data on the primary. Use this feature for workloads that can accommodate this delay.
+Read replicas are primarily designed for scenarios where offloading queries is beneficial and a slight lag is manageable. They're optimized to provide near real-time updates from the primary for most workloads, making them an excellent solution for read-heavy scenarios. However, they aren't intended for synchronous replication scenarios that require up-to-the-minute data accuracy. While the data on the replica eventually becomes consistent with the data on the primary, there can be a delay, which typically ranges from a few seconds to minutes; in some heavy-workload or high-latency scenarios, it can extend to hours. Typically, read replicas in the same region as the primary have less lag than geo-replicas, as the latter often deal with latency induced by geographical distance. For more insights into the performance implications of geo-replication, refer to the [Geo-replication](#geo-replication) section. Use this feature for workloads that can accommodate this delay.
> [!NOTE]
> For most workloads, read replicas offer near-real-time updates from the primary. However, with persistent heavy write-intensive primary workloads, the replication lag could continue to grow and might never catch up with the primary. This might also increase storage usage at the primary, as WAL files are only deleted once received at the replica. If this situation persists, you can delete and recreate the read replica after the write-intensive workloads are completed to bring the replica back to a good state for lag.
postgresql How To Read Replicas Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-read-replicas-portal.md
Title: Manage read replicas - Azure portal - Azure Database for PostgreSQL - Flexible Server
-description: Learn how to manage read replicas Azure Database for PostgreSQL - Flexible Server from the Azure portal.
-
+ Title: Manage read replicas - Azure portal, REST API - Azure Database for PostgreSQL - Flexible Server
+description: Learn how to manage read replicas in Azure Database for PostgreSQL - Flexible Server from the Azure portal and REST API.
+ Last updated 11/06/2023
In this article, you learn how to create and manage read replicas in Azure Database for PostgreSQL from the Azure portal. To learn more about read replicas, see the [overview](concepts-read-replicas.md).
+
+> [!NOTE]
+> Azure Database for PostgreSQL - Flexible Server is currently supporting the following features in Preview:
+>
+> - Promote to primary server (to maintain backward compatibility, please use promote to independent server and remove from replication, which keeps the former behavior)
+> - Virtual endpoints
+>
+> For these features, remember to use the API version `2023-06-01-preview` in your requests. This version is necessary to access the latest, albeit preview, functionalities of these features.
## Prerequisites

An [Azure Database for PostgreSQL server](./quickstart-create-server-portal.md) to be the primary server.
Before setting up a read replica for Azure Database for PostgreSQL, ensure the p
**Private link**: Review the networking configuration of the primary server. For the read replica creation to be allowed, the primary server must be configured with either public access using allowed IP addresses or combined public and private access using virtual network integration.
+#### [Portal](#tab/portal)
1. In the [Azure portal](https://portal.azure.com/), choose the Azure Database for PostgreSQL - Flexible Server you want for the replica.

2. On the **Overview** dialog, note the PostgreSQL version (ex. `15.4`). Also, note the region your primary is deployed to (ex. `East US`).
Before setting up a read replica for Azure Database for PostgreSQL, ensure the p
:::image type="content" source="./media/how-to-read-replicas-portal/primary-compute.png" alt-text="Screenshot of server settings." lightbox="./media/how-to-read-replicas-portal/primary-compute.png":::
+#### [REST API](#tab/restapi)
+
+To obtain information about the configuration of a server in Azure Database for PostgreSQL - Flexible Server, especially to view settings for recently introduced features like storage auto-grow or private link, you should use the latest API version `2023-06-01-preview`. The `GET` request for this would be formatted as follows:
+
+```http
+GET https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.DBforPostgreSQL/flexibleServers/{serverName}?api-version=2023-06-01-preview
+```
+
+Replace `{subscriptionId}`, `{resourceGroupName}`, and `{serverName}` with your Azure subscription ID, the resource group name, and the name of the primary server you want to review, respectively. This request will give you access to the configuration details of your primary server, ensuring it is properly set up for creating a read replica.
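As an illustrative sketch (not part of the original article), the placeholder substitution can be expressed in Python; the identifiers below are hypothetical:

```python
def server_url(subscription_id, resource_group, server_name,
               api_version="2023-06-01-preview"):
    """Build the management URL for the GET request shown above."""
    return (
        "https://management.azure.com"
        f"/subscriptions/{subscription_id}"
        f"/resourceGroups/{resource_group}"
        "/providers/Microsoft.DBforPostgreSQL/flexibleServers"
        f"/{server_name}?api-version={api_version}"
    )

# Hypothetical identifiers for illustration only.
print(server_url("00000000-0000-0000-0000-000000000000", "my-rg", "my-primary"))
```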
+
+Review and note the following settings:
+
+ - Compute Tier, Processor, Size (ex `Standard_D8ads_v5`).
+ - Storage
+ - Type
+ - Storage size (ex `128`)
+ - autoGrow
+ - Network
+ - High Availability
+ - Enabled / Disabled
+ - Availability zone settings
+ - Backup settings
+ - Retention period
+ - Redundancy Options
++
+**Sample response**
+
+```json
+{
+ "sku": {
+ "name": "Standard_D8ads_v5",
+ "tier": "GeneralPurpose"
+ },
+ "systemData": {
+ "createdAt": "2023-11-22T16:11:42.2461489Z"
+ },
+ "properties": {
+ "replica": {
+ "role": "Primary",
+ "capacity": 5
+ },
+ "storage": {
+ "type": "",
+ "iops": 500,
+ "tier": "P10",
+ "storageSizeGB": 128,
+ "autoGrow": "Disabled"
+ },
+ "network": {
+ "publicNetworkAccess": "Enabled"
+ },
+ "dataEncryption": {
+ "type": "SystemManaged"
+ },
+ "authConfig": {
+ "activeDirectoryAuth": "Disabled",
+ "passwordAuth": "Enabled"
+ },
+ "fullyQualifiedDomainName": "{serverName}.postgres.database.azure.com",
+ "version": "15",
+ "minorVersion": "4",
+ "administratorLogin": "myadmin",
+ "state": "Ready",
+ "availabilityZone": "1",
+ "backup": {
+ "backupRetentionDays": 7,
+ "geoRedundantBackup": "Disabled",
+ "earliestRestoreDate": "2023-11-23T12:55:33.3443218+00:00"
+ },
+ "highAvailability": {
+ "mode": "Disabled",
+ "state": "NotEnabled"
+ },
+ "maintenanceWindow": {
+ "customWindow": "Disabled",
+ "dayOfWeek": 0,
+ "startHour": 0,
+ "startMinute": 0
+ },
+ "replicationRole": "Primary",
+ "replicaCapacity": 5
+ },
+ "location": "East US",
+ "tags": {},
+ "id": "/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.DBforPostgreSQL/flexibleServers/{serverName}",
+ "name": "{serverName}",
+ "type": "Microsoft.DBforPostgreSQL/flexibleServers"
+}
+```
+++

## Create a read replica

To create a read replica, follow these steps:
+#### [Portal](#tab/portal)
1. Select an existing Azure Database for PostgreSQL server to use as the primary server.

2. On the server sidebar, under **Settings**, select **Replication**.
To create a read replica, follow these steps:
:::image type="content" source="./media/how-to-read-replicas-portal/basics.png" alt-text="Screenshot showing entering the basics information." lightbox="./media/how-to-read-replicas-portal/basics.png":::
+5. Select **Review + create** to confirm the creation of the replica or **Next: Networking** if you want to add, delete or modify any firewall rules.
+
+ :::image type="content" source="./media/how-to-read-replicas-portal/networking.png" alt-text="Screenshot of modify firewall rules action." lightbox="./media/how-to-read-replicas-portal/networking.png":::
+
+6. Leave the remaining defaults and then select the **Review + create** button at the bottom of the page or proceed to the next forms to add tags or change data encryption method.
+
+7. Review the information in the final confirmation window. When you're ready, select **Create**. A new deployment will be created and executed.
+
+ :::image type="content" source="./media/how-to-read-replicas-portal/replica-review.png" alt-text="Screenshot of reviewing the information in the final confirmation window.":::
+
+8. During the deployment, you see the primary in `Updating` state.
+
+ :::image type="content" source="./media/how-to-read-replicas-portal/primary-updating.png" alt-text="Screenshot of primary entering into updating status." lightbox="./media/how-to-read-replicas-portal/primary-updating.png":::
+ After the read replica is created, it can be viewed from the **Replication** window.
+
+ :::image type="content" source="./media/how-to-read-replicas-portal/list-replica.png" alt-text="Screenshot of viewing the new replica in the replication window." lightbox="./media/how-to-read-replicas-portal/list-replica.png":::
+
+#### [REST API](#tab/restapi)
+
+Initiate an `HTTP PUT` request by using the [create API](/rest/api/postgresql/flexibleserver/servers/create):
+
+```http
+PUT https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.DBForPostgreSql/flexibleServers/{replicaserverName}?api-version=2022-12-01
+```
+
+Here, you need to replace `{subscriptionId}`, `{resourceGroupName}`, and `{replicaserverName}` with your specific Azure subscription ID, the name of your resource group, and the desired name for your read replica, respectively.
+
+```json
+{
+ "location": "eastus",
+ "properties": {
+ "createMode": "Replica",
+ "SourceServerResourceId": "/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.DBForPostgreSql/flexibleServers/{sourceserverName}"
+ }
+}
+```
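As a hedged sketch (not an official SDK call), the request body above can be assembled programmatically; the names are hypothetical:

```python
import json

def replica_create_body(location, subscription_id, resource_group, source_server):
    """Assemble the JSON body for the PUT request that creates a read replica."""
    source_id = (
        f"/subscriptions/{subscription_id}/resourceGroups/{resource_group}"
        f"/providers/Microsoft.DBForPostgreSql/flexibleServers/{source_server}"
    )
    return json.dumps(
        {
            "location": location,
            "properties": {
                "createMode": "Replica",
                "SourceServerResourceId": source_id,
            },
        },
        indent=2,
    )

# Hypothetical names for illustration only.
print(replica_create_body("eastus", "00000000-0000-0000-0000-000000000000",
                          "my-rg", "my-primary"))
```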
+++

- Set the replica server name.

> [!TIP]
To create a read replica, follow these steps:
:::image type="content" source="./media/how-to-read-replicas-portal/replica-compute.png" alt-text="Screenshot of chose the compute size.":::
-5. Select **Review + create** to confirm the creation of the replica or **Next: Networking** if you want to add, delete or modify any firewall rules.
-
- :::image type="content" source="./media/how-to-read-replicas-portal/networking.png" alt-text="Screenshot of modify firewall rules action." lightbox="./media/how-to-read-replicas-portal/networking.png":::
-
-6. Leave the remaining defaults and then select the **Review + create** button at the bottom of the page or proceed to the next forms to add tags or change data encryption method.
-
-7. Review the information in the final confirmation window. When you're ready, select **Create**. A new deployment will be created and executed.
-
- :::image type="content" source="./media/how-to-read-replicas-portal/replica-review.png" alt-text="Screenshot of reviewing the information in the final confirmation window.":::
-
-8. During the deployment, you see the primary in `Updating` state.
- :::image type="content" source="./media/how-to-read-replicas-portal/primary-updating.png" alt-text="Screenshot of primary entering into updating status." lightbox="./media/how-to-read-replicas-portal/primary-updating.png":::
-After the read replica is created, it can be viewed from the **Replication** window.
-
- :::image type="content" source="./media/how-to-read-replicas-portal/list-replica.png" alt-text="Screenshot of viewing the new replica in the replication window." lightbox="./media/how-to-read-replicas-portal/list-replica.png":::
> [!IMPORTANT]
> Review the [considerations section of the Read Replica overview](concepts-read-replicas.md#considerations).
>
-> To avoid issues during promotion of replicas constantly change the following server parameters on the replicas first, before applying them on the primary: max_connections, max_prepared_transactions, max_locks_per_transaction, max_wal_senders, max_worker_processes.
+> To avoid issues during promotion of replicas, always change the following server parameters on the replicas first, before applying them on the primary: `max_connections`, `max_prepared_transactions`, `max_locks_per_transaction`, `max_wal_senders`, `max_worker_processes`.
## Create virtual endpoints (preview)
-1. In the Azure portal, select the primary server.
+> [!NOTE]
+> All operations involving virtual endpoints - like adding, editing, or removing - are executed in the context of the primary server.
-2. On the server sidebar, under **Settings**, select **Replication**.
-3. Select **Create endpoint**.
+#### [Portal](#tab/portal)
+1. In the Azure portal, select the primary server.
-4. In the dialog, type a meaningful name for your endpoint. Notice the DNS endpoint that is being generated.
+2. On the server sidebar, under **Settings**, select **Replication**.
+
+3. Select **Create endpoint**.
+
+4. In the dialog, type a meaningful name for your endpoint. Notice the DNS endpoint that is being generated.
:::image type="content" source="./media/how-to-read-replicas-portal/add-virtual-endpoint.png" alt-text="Screenshot of creating a new virtual endpoint with custom name.":::
-5. Select **Create**.
+5. Select **Create**.
> [!NOTE]
> If you do not create a virtual endpoint, you will receive an error on the promote replica attempt.

:::image type="content" source="./media/how-to-read-replicas-portal/replica-promote-attempt.png" alt-text="Screenshot of promotion error when missing virtual endpoint.":::
+#### [REST API](#tab/restapi)
+
+To create a virtual endpoint in a preview environment using Azure's REST API, you would use an `HTTP PUT` request. The request would look like this:
+
+```http
+PUT https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.DBForPostgreSql/flexibleServers/{sourceserverName}/virtualendpoints/{virtualendpointName}?api-version=2023-06-01-preview
+```
+
+The accompanying JSON body for this request is as follows:
+
+```json
+{
+ "Properties": {
+ "EndpointType": "ReadWrite",
+ "Members": ["{replicaserverName}"]
+ }
+}
+```
+
+Here, `{replicaserverName}` should be replaced with the name of the replica server you're including as a reader endpoint target in this virtual endpoint.
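As a small illustrative sketch (the helper name and replica names are hypothetical, not part of the REST API), the request body above can be built like this:

```python
def virtual_endpoint_body(members, endpoint_type="ReadWrite"):
    """Assemble the PUT body for creating a virtual endpoint; `members`
    lists the replica servers used as reader endpoint targets."""
    return {"Properties": {"EndpointType": endpoint_type, "Members": list(members)}}

# Hypothetical replica name for illustration only.
print(virtual_endpoint_body(["my-replica"]))
```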
++++
+## List virtual endpoints (preview)
+
+To list virtual endpoints in the preview version of Azure Database for PostgreSQL - Flexible Server, use the following steps:
+
+#### [Portal](#tab/portal)
+
+1. In the Azure portal, select the **primary** server.
+
+2. On the server sidebar, under **Settings**, select **Replication**.
+
+3. At the top of the page, you will see both the reader and writer endpoints displayed, along with the names of the servers they are pointing to.
+
+ :::image type="content" source="./media/how-to-read-replicas-portal/virtual-endpoints-show.png" alt-text="Screenshot of virtual endpoints list." lightbox="./media/how-to-read-replicas-portal/virtual-endpoints-show.png":::
+
+#### [REST API](#tab/restapi)
+
+```http
+GET https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.DBForPostgreSql/flexibleServers/{sourceserverName}/virtualendpoints?api-version=2023-06-01-preview
+```
+
+Here, `{sourceserverName}` should be the name of the primary server from which you're managing the virtual endpoints.
++++

### Modify application(s) to point to virtual endpoint
-Modify any applications that are using your Azure Database for PostgreSQL to use the new virtual endpoints (ex: `corp-pg-001.writer.postgres.database.azure.com` and `corp-pg-001.reader.postgres.database.azure.com`)
+Modify any applications that are using your Azure Database for PostgreSQL to use the new virtual endpoints (ex: `corp-pg-001.writer.postgres.database.azure.com` and `corp-pg-001.reader.postgres.database.azure.com`).
## Promote replicas

With all the necessary components in place, you're ready to perform a promote replica to primary operation.
+#### [Portal](#tab/portal)
To promote a replica from the Azure portal, follow these steps:

1. In the [Azure portal](https://portal.azure.com/), select your primary Azure Database for PostgreSQL - Flexible Server.
To promote replica from the Azure portal, follow these steps:
6. Select **Promote** to begin the process. Once it's completed, the roles reverse: the replica becomes the primary, and the primary will assume the role of the replica.
+#### [REST API](#tab/restapi)
+
+When promoting a replica to a primary server, use an `HTTP PATCH` request with a specific `JSON` body to set the promotion options. This process is crucial when you need to elevate a replica server to act as the primary server.
+
+The `HTTP` request is structured as follows:
+
+```http
+PATCH https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.DBForPostgreSql/flexibleServers/{replicaserverName}?api-version=2023-06-01-preview
+```
+
+```json
+{
+ "Properties": {
+ "Replica": {
+ "PromoteMode": "switchover",
+ "PromoteOption": "planned"
+ }
+ }
+}
+```
+
+In this `JSON`, the promotion is set to occur in `switchover` mode with a `planned` promotion option. While there are two options for promotion, `planned` or `forced`, choose `planned` for this exercise.
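The switchover options above can be captured in a small sketch (the helper function is hypothetical, not part of the REST API):

```python
def promote_body(mode="switchover", option="planned"):
    """Assemble the PATCH body that promotes a replica.

    mode:   'switchover' swaps the replica and primary roles;
            'standalone' breaks replication instead.
    option: 'planned' syncs data before promoting; 'forced' promotes immediately.
    """
    if mode not in ("switchover", "standalone"):
        raise ValueError("unknown promote mode")
    if option not in ("planned", "forced"):
        raise ValueError("unknown promote option")
    return {"Properties": {"Replica": {"PromoteMode": mode, "PromoteOption": option}}}

print(promote_body())
```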
+++

> [!NOTE]
> The replica you are promoting must have the reader virtual endpoint assigned, or you will receive an error on promotion.
+
### Test applications
Restart your applications and attempt to perform some operations. Your applicati
### Failback to the original server and region
-Repeat the same operations to promote the original server to the primary:
+Repeat the same operations to promote the original server to the primary.
+
+#### [Portal](#tab/portal)
1. In the [Azure portal](https://portal.azure.com/), select the replica.
Repeat the same operations to promote the original server to the primary:
6. Select **Promote**, the process begins. Once it's completed, the roles reverse: the replica becomes the primary, and the primary will assume the role of the replica.
+#### [REST API](#tab/restapi)
+
+This time, change the `{replicaserverName}` in the API request to refer to your old primary server, which is currently acting as a replica, and execute the request again.
+
+```http
+PATCH https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.DBForPostgreSql/flexibleServers/{replicaserverName}?api-version=2023-06-01-preview
+```
+
+```json
+{
+ "Properties": {
+ "Replica": {
+ "PromoteMode": "switchover",
+ "PromoteOption": "planned"
+ }
+ }
+}
+```
+
+In this `JSON`, the promotion is set to occur in `switchover` mode with a `planned` promotion option. While there are two options for promotion, `planned` or `forced`, choose `planned` for this exercise.
+++

### Test applications

Again, switch to one of the consuming applications. Wait for the primary and replica status to change to `Updating`, and then attempt to perform some operations. During the replica promote, your application might encounter temporary connectivity issues to the endpoint:
Again, switch to one of the consuming applications. Wait for the primary and rep
Create a secondary read replica in a separate region to modify the reader virtual endpoint and to allow for creating an independent server from the first replica.
+#### [Portal](#tab/portal)
1. In the [Azure portal](https://portal.azure.com/), choose the primary Azure Database for PostgreSQL - Flexible Server.

2. On the server sidebar, under **Settings**, select **Replication**.
Create a secondary read replica in a separate region to modify the reader virtua
8. Review the information in the final confirmation window. When you're ready, select **Create**. A new deployment will be created and executed.
-9. During the deployment, you see the primary in `Updating` status:
+9. During the deployment, you see the primary in `Updating` state.
+
+ :::image type="content" source="./media/how-to-read-replicas-portal/primary-updating.png" alt-text="Screenshot of primary entering into updating status." lightbox="./media/how-to-read-replicas-portal/primary-updating.png":::
+
+#### [REST API](#tab/restapi)
+
+You can create a secondary read replica by using the [create API](/rest/api/postgresql/flexibleserver/servers/create):
+
+```http
+PUT https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.DBForPostgreSql/flexibleServers/{replicaserverName}?api-version=2022-12-01
+```
+
+Choose a distinct name for `{replicaserverName}` to differentiate it from the primary server and any other replicas.
+
+```json
+{
+ "location": "westus3",
+ "properties": {
+ "createMode": "Replica",
+ "SourceServerResourceId": "/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.DBForPostgreSql/flexibleServers/{sourceserverName}"
+ }
+}
+```
+
+The location is set to `westus3`, but you can adjust this based on your geographical and operational needs.
++

## Modify virtual endpoint
+#### [Portal](#tab/portal)
1. In the [Azure portal](https://portal.azure.com/), choose the primary Azure Database for PostgreSQL - Flexible Server.

2. On the server sidebar, under **Settings**, select **Replication**.
Create a secondary read replica in a separate region to modify the reader virtua
5. Select **Save**. The reader endpoint will now be pointed at the secondary replica, and the promote operation will now be tied to this replica.
+#### [REST API](#tab/restapi)
+
+You can now modify your reader endpoint to point to the newly created secondary replica by using a `PATCH` request. Remember to replace `{replicaserverName}` with the name of the newly created read replica.
+
+```http
+PATCH https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.DBForPostgreSql/flexibleServers/{sourceserverName}/virtualendpoints/{virtualendpointName}?api-version=2023-06-01-preview
+```
+
+```json
+{
+ "Properties": {
+ "EndpointType": "ReadWrite",
+ "Members": ["{replicaserverName}"]
+ }
+}
+```
+++

## Promote replica to independent server

Rather than switching over to a replica, it's also possible to break a replica's replication so that it becomes a standalone server.
+#### [Portal](#tab/portal)
1. In the [Azure portal](https://portal.azure.com/), choose the Azure Database for PostgreSQL - Flexible Server primary server.

2. On the server sidebar, on the server menu, under **Settings**, select **Replication**.
Rather than switchover to a replica, it's also possible to break the replication
4. In the dialog, ensure the action is **Promote to independent server and remove from replication. This won't impact the primary server**.
- > [!NOTE]
- > Once a replica is promoted to an independent server, it cannot be added back to the replication set.
5. For **Data sync**, ensure **Planned - sync data before promoting** is selected.

   :::image type="content" source="./media/how-to-read-replicas-portal/replica-promote-independent.png" alt-text="Screenshot of promoting the replica to independent server.":::

6. Select **Promote** to begin the process. Once completed, the server will no longer be a replica of the primary.
+#### [REST API](#tab/restapi)
+
+You can promote a replica to a standalone server using a `PATCH` request. To do this, send a `PATCH` request to the specified Azure Management REST API URL with the first `JSON` body, where `PromoteMode` is set to `standalone` and `PromoteOption` to `planned`. The second `JSON` body format, setting `ReplicationRole` to `None`, is deprecated but still mentioned here for backward compatibility.
+
+```http
+PATCH https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.DBForPostgreSql/flexibleServers/{replicaserverName}?api-version=2023-06-01-preview
+```
++
+```json
+{
+ "Properties": {
+ "Replica": {
+ "PromoteMode": "standalone",
+ "PromoteOption": "planned"
+ }
+ }
+}
+```
+
+```json
+{
+ "Properties": {
+ "ReplicationRole": "None"
+ }
+}
+```
+++
+ > [!NOTE]
+ > Once a replica is promoted to an independent server, it cannot be added back to the replication set.
+
+
+## Delete virtual endpoint (preview)
+
+#### [Portal](#tab/portal)
+
+1. In the Azure portal, select the **primary** server.
+
+2. On the server sidebar, under **Settings**, select **Replication**.
+
+3. At the top of the page, locate the `Virtual endpoints (Preview)` section. Navigate to the three dots (menu options) next to the endpoint name, expand it, and choose `Delete`.
+
+4. A delete confirmation dialog will appear. It will warn you: "This action will delete the virtual endpoint `virtualendpointName`. Any clients connected using these domains may lose access." Acknowledge the implications and confirm by clicking on **Delete**.
++
+#### [REST API](#tab/restapi)
+
+To delete a virtual endpoint in a preview environment using Azure's REST API, you would issue an `HTTP DELETE` request. The request URL would be structured as follows:
+
+```http
+DELETE https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.DBForPostgreSql/flexibleServers/{serverName}/virtualendpoints/{virtualendpointName}?api-version=2023-06-01-preview
+```
++++

## Delete a replica
+#### [Portal](#tab/portal)
You can delete a read replica similarly to how you delete a standalone Azure Database for PostgreSQL - Flexible Server.

1. In the Azure portal, open the **Overview** page for the read replica. Select **Delete**.
You can also delete the read replica from the **Replication** window by followin
5. Acknowledge **Delete** operation.
+#### [REST API](#tab/restapi)
+To delete a primary or replica server, use the [delete API](/rest/api/postgresql/flexibleserver/servers/delete). If the server has read replicas, delete the read replicas first before deleting the primary server.
+
+```http
+DELETE https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.DBForPostgreSql/flexibleServers/{replicaserverName}?api-version=2022-12-01
+```
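Since replicas must be removed before the primary, the required deletion order can be sketched as follows (hypothetical helper and server names):

```python
def deletion_order(primary, replicas):
    """Read replicas must be deleted before the primary server,
    so they come first in the returned sequence."""
    return list(replicas) + [primary]

print(deletion_order("my-primary", ["my-replica-1", "my-replica-2"]))
```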
+++

## Delete a primary server

You can only delete the primary server once all read replicas have been deleted. Follow the instructions in the [Delete a replica](#delete-a-replica) section to delete replicas, and then proceed with the steps below.
+#### [Portal](#tab/portal)
To delete a server from the Azure portal, follow these steps:

1. In the Azure portal, select your primary Azure Database for PostgreSQL server.
To delete a server from the Azure portal, follow these steps:
:::image type="content" source="./media/how-to-read-replicas-portal/delete-primary-confirm.png" alt-text="Screenshot of confirming to delete the primary server.":::
+#### [REST API](#tab/restapi)
+To delete a primary or replica server, use the [delete API](/rest/api/postgresql/flexibleserver/servers/delete). If the server has read replicas, delete the read replicas first before deleting the primary server.
+
+```http
+DELETE https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.DBForPostgreSql/flexibleServers/{sourceserverName}?api-version=2022-12-01
+```
++++

## Monitor a replica

Two metrics are available to monitor read replicas.
The **Read Replica Lag** metric shows the time since the last replayed transacti
## Related content

-- [read replicas in Azure Database for PostgreSQL](concepts-read-replicas.md)
+- [Read replicas in Azure Database for PostgreSQL](concepts-read-replicas.md)
postgresql How To Read Replicas Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-read-replicas-rest-api.md
- Title: Manage read replicas - Azure REST API - Azure Database for PostgreSQL - Flexible Server
-description: Learn how to manage read replicas in Azure Database for PostgreSQL - Flexible Server from the Azure REST API
----- Previously updated : 12/06/2022--
-# Create and manage read replicas from the Azure REST API
-
-In this article, you learn how to create and manage read replicas in Azure Database for PostgreSQL by using the REST API [Azure REST API](/rest/api/azure/). To learn more about read replicas, see the [overview](concepts-read-replicas.md).
-
-### Prerequisites
-An [Azure Database for PostgreSQL server](quickstart-create-server-portal.md) to be the primary server.
-
-### Create a read replica
-
-You can create a read replica by using the [create API](/rest/api/postgresql/flexibleserver/servers/create):
-
-```http
-PUT https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.DBForPostgreSql/flexibleServers/{replicaserverName}?api-version=2022-03-08-preview
-```
-
-```json
-{
- "location": "southeastasia",
- "properties": {
- "createMode": "Replica",
- "SourceServerResourceId": "/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.DBForPostgreSql/flexibleServers/{sourceserverName}"
- }
-}
-```
-
-> [!NOTE]
-> To learn more about which regions you can create a replica in, visit the [read replica concepts article](concepts-read-replicas.md).
-
-A replica is created by using the same compute and storage settings as the primary. After a replica is created, several settings can be changed independently of the primary server: compute generation, vCores, storage, or authentication method. The pricing tier can also be changed independently, except to the Burstable tier.
-
-> [!IMPORTANT]
-> Before a primary server setting is updated to a new value, update the replica setting to an equal or greater value. This action helps the replica keep up with any changes made to the primary.
-
-### List replicas
-
-You can view the list of replicas of a primary server by using the replica list API:
-
-```http
-GET https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.DBForPostgreSql/flexibleServers/{serverName}/replicas?api-version=2022-03-08-preview
-```
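The list call returns a standard Azure Resource Manager list payload with a `value` array of server resources. The sketch below parses such a response; the sample payload and its `replicationRole` field are illustrative, not captured output.

```python
import json

# Hypothetical payload shaped like an ARM list response: a "value"
# array containing one entry per replica server.
sample_response = json.dumps({
    "value": [
        {"name": "myreplica1", "properties": {"replicationRole": "AsyncReplica"}},
        {"name": "myreplica2", "properties": {"replicationRole": "AsyncReplica"}},
    ]
})

def replica_names(response_text):
    """Extract the replica server names from a list-replicas JSON body."""
    return [server["name"] for server in json.loads(response_text)["value"]]

print(replica_names(sample_response))  # ['myreplica1', 'myreplica2']
```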
-
-### Promote replica
-
-You can stop replication between a primary server and a read replica, and promote the replica, by using the [update API](/rest/api/postgresql/flexibleserver/servers/update).
-
-Promoting a read replica can't be undone. The read replica becomes a standalone server that supports both reads and writes. The standalone server can't be made into a replica again.
-
-```http
-PATCH https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.DBForPostgreSql/flexibleServers/{replicaserverName}?api-version=2022-03-08-preview
-```
-
-```json
-{
- "properties": {
- "ReplicationRole":"None"
- }
-}
-```
-
-### Delete a primary or replica server
-
-To delete a primary or replica server, use the [delete API](/rest/api/postgresql/flexibleserver/servers/delete). If a server has read replicas, delete the read replicas before you delete the primary server.
-
-```http
-DELETE https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.DBForPostgreSql/flexibleServers/{replicaserverName}?api-version=2022-03-08-preview
-```
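Because replicas must be deleted before the primary, a teardown script should order its DELETE calls accordingly. A minimal sketch, with hypothetical names and no actual HTTP calls:

```python
def delete_server_urls(subscription_id, resource_group, primary, replicas):
    """Return DELETE URLs in a safe order: all replicas first, then the primary."""
    base = (
        "https://management.azure.com"
        f"/subscriptions/{subscription_id}"
        f"/resourceGroups/{resource_group}"
        "/providers/Microsoft.DBForPostgreSql/flexibleServers/"
    )
    suffix = "?api-version=2022-03-08-preview"
    # Replicas go first: deleting the primary while replicas exist fails.
    return [base + name + suffix for name in list(replicas) + [primary]]

urls = delete_server_urls(
    "00000000-0000-0000-0000-000000000000",
    "myresourcegroup", "myprimary", ["myreplica1", "myreplica2"])
```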
-
-## Next steps
-
-* Learn more about [read replicas in Azure Database for PostgreSQL](concepts-read-replicas.md).
-* Learn how to [create and manage read replicas in the Azure portal](how-to-read-replicas-portal.md).
postgresql How To Stop Start Server Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-stop-start-server-cli.md
description: This article describes how to stop/start operations in Azure Databa
--++ Last updated 11/30/2021
This article shows you how to perform restart, start and stop flexible server us
- If you don't have an Azure subscription, create a [free](https://azure.microsoft.com/free/) account before you begin. - Install or upgrade Azure CLI to the latest version. See [Install Azure CLI](/cli/azure/install-azure-cli).-- Login to Azure account using [az login](/cli/azure/reference-index#az-login) command. Note the **id** property, which refers to **Subscription ID** for your Azure account.
+- Log in to Azure account using [az login](/cli/azure/reference-index#az-login) command. Note the **id** property, which refers to **Subscription ID** for your Azure account.
```azurecli-interactive az login
This article shows you how to perform restart, start and stop flexible server us
az account set --subscription <subscription id> ``` -- Create a PostgreSQL Flexible Server if you have not already created one using the ```az postgres flexible-server create``` command.
+- Create a PostgreSQL Flexible Server if you haven't already created one using the ```az postgres flexible-server create``` command.
```azurecli az postgres flexible-server create --resource-group myresourcegroup --name myservername ``` ## Stop a running server
-To stop a server, run ```az postgres flexible-server stop``` command. If you are using [local context](/cli/azure/config/param-persist), you don't need to provide any arguments.
+To stop a server, run ```az postgres flexible-server stop``` command. If you're using [local context](/cli/azure/config/param-persist), you don't need to provide any arguments.
**Usage:** ```azurecli
az postgres flexible-server stop
``` ## Start a stopped server
-To start a server, run ```az postgres flexible-server start``` command. If you are using [local context](/cli/azure/config/param-persist), you don't need to provide any arguments.
+To start a server, run ```az postgres flexible-server start``` command. If you're using [local context](/cli/azure/config/param-persist), you don't need to provide any arguments.
**Usage:** ```azurecli
postgresql Quickstart Create Server Python Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/quickstart-create-server-python-sdk.md
Title: 'Quickstart: Create an Azure Database for PostgreSQL Flexible Server - Azure libraries (SDK) for Python' description: In this Quickstart, learn how to create an Azure Database for PostgreSQL Flexible server using Azure libraries (SDK) for Python.-+
def create_postgres_flexible_server(subscription_id, resource_group, server_name
# Create PostgreSQL Flexible Server server_params = Server(
+ location='<location>',
sku=Sku(name='Standard_D4s_v3', tier='GeneralPurpose'), administrator_login='pgadmin', administrator_login_password='<mySecurePassword>', storage=Storage(storage_size_gb=32),
- version="14",
+ version="16",
create_mode="Create" )
if __name__ == '__main__':
subscription_id = '<subscription_id>' resource_group = '<resource_group>' server_name = '<servername>'
- location = 'eastus'
create_postgres_flexible_server(subscription_id, resource_group, server_name, location)
Replace the following parameters with your data:
- **subscription_id**: Your own [subscription ID](../../azure-portal/get-subscription-tenant-id.md#find-your-azure-subscription). - **resource_group**: The name of the resource group you want to use. The script will create a new resource group if it doesn't exist. -- **server_name**: A unique name that identifies your Azure Database for PostgreSQL server. The domain name `postgres.database.azure.com` is appended to the server name you provide. The server name must be at least 3 characters and at most 63 characters, and can only contain lowercase letters, numbers, and hyphens.
+- **server_name**: A unique name that identifies your Azure Database for PostgreSQL - Flexible Server. The domain name `postgres.database.azure.com` is appended to the server name you provide. The server name must be at least 3 characters and at most 63 characters, and can only contain lowercase letters, numbers, and hyphens.
+- **location**: The Azure region where you want to create your Azure Database for PostgreSQL - Flexible Server. It defines the geographical location where your server and its data reside. Choose a region close to your users for reduced latency. The location should be specified in the format of Azure region short names, like `westus2`, `eastus`, or `northeurope`.
- **administrator_login**: The primary administrator username for the server. You can create additional users after the server has been created. - **administrator_login_password**: A password for the primary administrator for the server. It must contain between 8 and 128 characters. Your password must contain characters from three of the following categories: English uppercase letters, English lowercase letters, numbers (0 through 9), and non-alphanumeric characters (!, $, #, %, etc.).
-You can also customize other parameters like location, storage size, engine version, etc.
+You can also customize other parameters like storage size, engine version, etc.
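The server-name rules above (3 to 63 characters; only lowercase letters, numbers, and hyphens) can be checked up front before calling the SDK. A minimal sketch; the function name is illustrative:

```python
import re

# Per the naming rules above: 3-63 characters drawn from
# lowercase letters, digits, and hyphens.
SERVER_NAME_RE = re.compile(r"[a-z0-9-]{3,63}")

def is_valid_server_name(name):
    """Return True if the name satisfies the documented server-name rules."""
    return SERVER_NAME_RE.fullmatch(name) is not None
```

Validating locally gives a faster, clearer failure than waiting for the service to reject the create request.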
> [!NOTE]
search Search What Is Azure Search https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-what-is-azure-search.md
Azure AI Search ([formerly known as "Azure Cognitive Search"](whats-new.md#new-service-name)) provides secure information retrieval at scale over user-owned content in traditional and conversational search applications.
-Information retrieval is foundational to any app that surfaces text and vectors. Common scenarios include catalog or document search, data exploration, and increasingly chat-style search modalities over proprietary grounding data. When you create a search service, you'll work with the following capabilities:
+Information retrieval is foundational to any app that surfaces text and vectors. Common scenarios include catalog or document search, data exploration, and increasingly chat-style copilot apps over proprietary grounding data. When you create a search service, you work with the following capabilities:
+ A search engine for [full text](search-lucene-query-architecture.md) and [vector search](vector-search-overview.md) over a search index
-+ Rich indexing, with [integrated data chunking and vectorization (preview)](vector-search-integrated-vectorization.md), [lexical analysis](search-analyzers.md) for text, and [optional AI enrichment](cognitive-search-concept-intro.md) for content extraction and transformation
++ Rich indexing with [integrated data chunking and vectorization (preview)](vector-search-integrated-vectorization.md), [lexical analysis](search-analyzers.md) for text, and [optional AI enrichment](cognitive-search-concept-intro.md) for content extraction and transformation + Rich query syntax for [vector queries](vector-search-how-to-query.md), text search, [hybrid search](hybrid-search-overview.md), fuzzy search, autocomplete, geo-search and others + Azure scale, security, and reach + Azure integration at the data layer, machine learning layer, Azure AI services and Azure OpenAI
On the search service itself, the two primary workloads are *indexing* and *quer
Azure AI Search is well suited for the following application scenarios:
-+ Search over your vector and text content, isolated from the internet.
++ Search over your vector and text content. You own or control what's searchable. + Consolidate heterogeneous content into a user-defined and populated search index composed of vectors and text.
search Vector Search Integrated Vectorization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/vector-search-integrated-vectorization.md
Here's a checklist of the components responsible for integrated vectorization:
+ A skillset providing a Text Split skill for data chunking, and a skill for vectorization (either the AzureOpenAiEmbedding skill or a custom skill pointing to an external embedding model). + Optionally, index projections (also defined in a skillset) to push chunked data to a secondary index + An embedding model, deployed on Azure OpenAI or available through an HTTP endpoint.
-+ An indexer for driving the process end-t-end. An indexer also specifies a schedule, field mappings, and properties for change detection.
++ An indexer for driving the process end-to-end. An indexer also specifies a schedule, field mappings, and properties for change detection. This checklist focuses on integrated vectorization, but your solution isn't limited to this list. You can add more skills for AI enrichment, create a knowledge store, add semantic ranking, add relevance tuning, and other query features.
service-bus-messaging Message Sequencing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/message-sequencing.md
You can schedule messages using any of our clients in two ways:
Scheduled messages and their sequence numbers can also be discovered using [message browsing](message-browsing.md).
-The **SequenceNumber** for a scheduled message is only valid while the message is in this state. As the message transitions to the active state, the message is appended to the queue as if had been enqueued at the current instant, which includes assigning a new **SequenceNumber**.
+The **SequenceNumber** for a scheduled message is only valid while the message is in this state. As the message transitions to the active state, the message is appended to the queue as if it had been enqueued at the current instant, which includes assigning a new **SequenceNumber**.
Because the feature is anchored on individual messages and messages can only be enqueued once, Service Bus doesn't support recurring schedules for messages.
virtual-machines Redhat Rhui https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/redhat/redhat-rhui.md
For more information on Red Hat support policies for all versions of RHEL, see [
## Image update behavior
-As of April 2019, Azure offers RHEL images that are connected to Extended Update Support (EUS) repositories by default and RHEL images that come connected to the regular (non-EUS) repositories by default. The default behavior of `sudo yum update` varies depending which RHEL image you provisioned from because different images are connected to different repositories. For more information on RHEL EUS, see [Red Hat Enterprise Linux Life Cycle](https://access.redhat.com/support/policy/updates/errata) and [Red Hat Enterprise Linux Extended Update Support Overview](https://access.redhat.com/articles/rhel-eus).
+The Red Hat images provided in Azure Marketplace are connected by default to one of two different types of life-cycle repositories:
+
+- Non-EUS: Contains the latest software published by Red Hat for the particular Red Hat Enterprise Linux (RHEL) repositories.
+- Extended Update Support (EUS): Updates don't go beyond a specific RHEL minor release.
+
+> [!NOTE]
+> For more information on RHEL EUS, see [Red Hat Enterprise Linux Life Cycle](https://access.redhat.com/support/policy/updates/errata) and [Red Hat Enterprise Linux Extended Update Support Overview](https://access.redhat.com/articles/rhel-eus).
+
+The packages contained in the Red Hat Update Infrastructure repositories are published and maintained exclusively by Red Hat. Extra packages that support custom Azure services are published in independent repositories maintained by Microsoft.
For a full image list, run `az vm image list --offer RHEL --all -p RedHat --output table` using the Azure CLI. ### Images connected to non-EUS repositories
-If you provision a VM from a RHEL image that is connected to non-EUS repositories, it's upgraded to the latest RHEL minor version when you run `sudo yum update`. For example, if you provision a VM from a RHEL 8.4 PAYG image and run `sudo yum update`, you end up with a RHEL 8.8 VM, the latest minor version in the RHEL8 family.
+If you provision a VM from a RHEL image that is connected to non-EUS repositories, it's upgraded to the latest RHEL minor version when you run `sudo yum update`. For example, if you provision a VM from a RHEL 8.4 PAYG image and run `sudo yum update`, you end up with a RHEL 8.9 VM, the latest minor version in the RHEL8 family.
Images that are connected to non-EUS repositories don't contain a minor version number in the SKU. The SKU is the third element in the image name. For example, all of the following images come attached to non-EUS repositories:
If you provision a VM from a RHEL image that is connected to EUS repositories, i
Images connected to EUS repositories contain a minor version number in the SKU. For example, all of the following images come attached to EUS repositories: ```output
-RedHat:RHEL:7_9:7.9.20230301107
-RedHat:RHEL:8_7:8.7.2023022801
-RedHat:RHEL:9_1:9.1.2022112113
+RedHat:RHEL:7.7:7.7.2022051301
+RedHat:RHEL:8_4:latest
+RedHat:RHEL:9_0:9.0.2023061412
```
+> [!NOTE]
+> Not all minor versions are valid EUS stops. For example, for RHEL8, only 8.1, 8.2, 8.4, 8.6, and 8.8 are valid EUS releases; 8.3, 8.5, and 8.7 are not.
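The valid EUS stops from the note above can be captured in a small lookup, for example when tooling needs to decide whether a given minor release can be version-locked. The stop list is copied from the note for RHEL 8 only; treat it as illustrative:

```python
# Valid RHEL 8 EUS minor releases, as listed in the note above.
RHEL8_EUS_STOPS = {"8.1", "8.2", "8.4", "8.6", "8.8"}

def is_eus_stop(minor_version):
    """Return True if the given RHEL 8 minor version is a valid EUS release."""
    return minor_version in RHEL8_EUS_STOPS
```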
+ ## RHEL EUS and version-locking RHEL VMs Extended Update Support (EUS) repositories are available to customers who might want to lock their RHEL VMs to a certain RHEL minor release after provisioning the VM. You can version-lock your RHEL VM to a specific minor version by updating the repositories to point to the Extended Update Support repositories. You can also undo the EUS version-locking operation.
To remove the version lock, use the following commands. Run the commands as `roo
``` ### Switch a RHEL 7.x VM back to non-EUS (remove a version lock)
-Run the following as root:
+Run the following commands as root:
1. Remove the `releasever` file: ```bash rm /etc/yum/vars/releasever