Updates from: 09/17/2024 01:09:18
Service Microsoft Docs article Related commit history on GitHub Change details
app-service App Service Web Tutorial Custom Domain https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/app-service-web-tutorial-custom-domain.md
Browse to the DNS names that you configured.
If you receive an HTTP 404 (Not Found) error when you browse to the URL of your custom domain, the two most likely causes are:
- The browser client has cached the old IP address of your domain. Clear the cache and test the DNS resolution again. On a Windows machine, you can clear the cache with `ipconfig /flushdns`.
-- You configured an IP-based certificate binding, and the app's IP address has changed because of it. [Remap the A record](configure-ssl-bindings.md#2-remap-records-for-ip-based-ssl) in your DNS entries to the new IP address.
+- You configured an IP-based certificate binding, and the app's IP address has changed because of it. [Remap the A record](configure-ssl-bindings.md#remap-records-for-ip-based-ssl) in your DNS entries to the new IP address.
If you receive a `Page not secure` warning or error, it's because your domain doesn't have a certificate binding yet. [Add a private certificate for the domain](configure-ssl-certificate.md) and [configure the binding](configure-ssl-bindings.md).
app-service Configure Ssl Bindings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-ssl-bindings.md
Title: Secure a custom DNS with a TLS/SSL binding
-description: Secure HTTPS access to your custom domain by creating a TLS/SSL binding with a certificate. Improve your website's security by enforcing HTTPS or TLS 1.2.
+description: Help secure HTTPS access to your custom domain by creating a TLS/SSL binding with a certificate. Improve your website's security by enforcing HTTPS or TLS 1.2.
tags: buy-ssl-certificates
- Previously updated : 04/20/2023
+ Last updated : 09/16/2024
-# Secure a custom DNS name with a TLS/SSL binding in Azure App Service
+# Provide security for a custom DNS name with a TLS/SSL binding in App Service
-This article shows you how to secure the [custom domain](app-service-web-tutorial-custom-domain.md) in your [App Service app](./index.yml) or [function app](../azure-functions/index.yml) by creating a certificate binding. When you're finished, you can access your App Service app at the `https://` endpoint for your custom DNS name (for example, `https://www.contoso.com`).
+This article shows you how to provide security for the [custom domain](app-service-web-tutorial-custom-domain.md) in your [App Service app](./index.yml) or [function app](../azure-functions/index.yml) by creating a certificate binding. When you're finished, you can access your App Service app at the `https://` endpoint for your custom DNS name (for example, `https://www.contoso.com`).
-![Web app with custom TLS/SSL certificate](./media/configure-ssl-bindings/app-with-custom-ssl.png)
+![Web app with custom TLS/SSL certificate.](./media/configure-ssl-bindings/app-with-custom-ssl.png)
## Prerequisites

-- [Scale up your App Service app](manage-scale-up.md) to one of the supported pricing tiers: **Basic**, **Standard**, **Premium**.
+- [Scale up your App Service app](manage-scale-up.md) to one of the supported pricing tiers: Basic, Standard, Premium.
- [Map a domain name to your app](app-service-web-tutorial-custom-domain.md) or [buy and configure it in Azure](manage-custom-dns-buy-domain.md).

<a name="upload"></a>
-## 1. Add the binding
+## Add the binding
In the <a href="https://portal.azure.com" target="_blank">Azure portal</a>:

1. From the left menu, select **App Services** > **\<app-name>**.
-1. From the left navigation of your app, select **Custom domains**
+1. From the left navigation of your app, select **Custom domains**.
-1. Next to the custom domain, select **Add binding**
+1. Next to the custom domain, select **Add binding**.
- :::image type="content" source="media/configure-ssl-bindings/secure-domain-launch.png" alt-text="A screenshot showing how to launch the Add TLS/SSL Binding dialog.":::
+ :::image type="content" source="media/configure-ssl-bindings/secure-domain-launch.png" alt-text="A screenshot showing how to launch the Add TLS/SSL Binding dialog." lightbox="media/configure-ssl-bindings/secure-domain-launch.png":::
1. If your app already has a certificate for the selected custom domain, you can select it in **Certificate**. If not, you must add a certificate using one of the selections in **Source**.
- - **Create App Service Managed Certificate** - Let App Service create a managed certificate for your selected domain. This option is the simplest. For more information, see [Create a free managed certificate](configure-ssl-certificate.md#create-a-free-managed-certificate).
- - **Import App Service Certificate** - In **App Service Certificate**, choose an [App Service certificate](configure-ssl-app-service-certificate.md) you've purchased for your selected domain.
+ - **Create App Service Managed Certificate** - Let App Service create a managed certificate for your selected domain. This option is the easiest. For more information, see [Create a free managed certificate](configure-ssl-certificate.md#create-a-free-managed-certificate).
+ - **Import App Service Certificate** - In **App Service Certificate**, select an [App Service certificate](configure-ssl-app-service-certificate.md) you've purchased for your selected domain.
- **Upload certificate (.pfx)** - Follow the workflow at [Upload a private certificate](configure-ssl-certificate.md#upload-a-private-certificate) to upload a PFX certificate from your local machine and specify the certificate password.
- **Import from Key Vault** - Select **Select key vault certificate** and select the certificate in the dialog.
-1. In **TLS/SSL type**, choose between **SNI SSL** and **IP based SSL**.
+1. In **TLS/SSL type**, select either **SNI SSL** or **IP based SSL**.
- - **[SNI SSL](https://en.wikipedia.org/wiki/Server_Name_Indication)**: Multiple SNI SSL bindings may be added. This option allows multiple TLS/SSL certificates to secure multiple domains on the same IP address. Most modern browsers (including Internet Explorer, Chrome, Firefox, and Opera) support SNI (for more information, see [Server Name Indication](https://wikipedia.org/wiki/Server_Name_Indication)).
- - **IP based SSL**: Only one IP SSL binding may be added. This option allows only one TLS/SSL certificate to secure a dedicated public IP address. After you configure the binding, follow the steps in [2. Remap records for IP based SSL](#2-remap-records-for-ip-based-ssl).<br/>IP SSL is supported only in **Basic** tier or higher.
+ - **[SNI SSL](https://en.wikipedia.org/wiki/Server_Name_Indication)**: Multiple SNI SSL bindings can be added. This option allows multiple TLS/SSL certificates to help secure multiple domains on the same IP address. Most modern browsers (including Microsoft Edge, Chrome, Firefox, and Opera) support SNI. (For more information, see [Server Name Indication](https://wikipedia.org/wiki/Server_Name_Indication).)
+ - **IP based SSL**: Only one IP SSL binding can be added. This option allows only one TLS/SSL certificate to help secure a dedicated public IP address. After you configure the binding, follow the steps in [Remap records for IP-based SSL](#remap-records-for-ip-based-ssl).<br/>IP-based SSL is supported only in Standard tier or higher.
1. When adding a new certificate, validate the new certificate by selecting **Validate**.
1. Select **Add**.
- Once the operation is complete, the custom domain's TLS/SSL state is changed to **Secure**.
+ Once the operation is complete, the custom domain's TLS/SSL state is changed to **Secured**.
:::image type="content" source="media/configure-ssl-bindings/secure-domain-finished.png" alt-text="A screenshot showing the custom domain secured by a certificate binding.":::

> [!NOTE]
-> A **Secure** state in the **Custom domains** means that it is secured with a certificate, but App Service doesn't check if the certificate is self-signed or expired, for example, which can also cause browsers to show an error or warning.
+> A **Secured** state in **Custom domains** means that a certificate is providing security, but App Service doesn't check if the certificate is self-signed or expired, for example, which can also cause browsers to show an error or warning.
-## 2. Remap records for IP based SSL
+## Remap records for IP-based SSL
-This step is needed only for IP based SSL. For an SNI SSL binding, skip to [Test HTTPS for your custom domain](#3-test-https).
+This step is needed only for IP-based SSL. For an SNI SSL binding, skip to [Test HTTPS](#test-https).
-There are two changes you need to make, potentially:
+There are potentially two changes you need to make:
- By default, your app uses a shared public IP address. When you bind a certificate with IP SSL, App Service creates a new, dedicated IP address for your app. If you mapped an A record to your app, update your domain registry with this new, dedicated IP address. Your app's **Custom domain** page is updated with the new, dedicated IP address. Copy this IP address, then [remap the A record](app-service-web-tutorial-custom-domain.md#create-the-dns-records) to this new IP address.

-- If you have an SNI SSL binding to `<app-name>.azurewebsites.net`, [remap any CNAME mapping](app-service-web-tutorial-custom-domain.md#create-the-dns-records) to point to `sni.<app-name>.azurewebsites.net` instead (add the `sni` prefix).
+- If you have an SNI SSL binding to `<app-name>.azurewebsites.net`, [remap any CNAME mapping](app-service-web-tutorial-custom-domain.md#create-the-dns-records) to point to `sni.<app-name>.azurewebsites.net` instead. (Add the `sni` prefix.)
-## 3. Test HTTPS
+## Test HTTPS
-In various browsers, browse to `https://<your.custom.domain>` to verify that it serves up your app.
+Browse to `https://<your.custom.domain>` in various browsers to verify that your app appears.
-Your application code can inspect the protocol via the "x-appservice-proto" header. The header has a value of `http` or `https`.
+Your application code can inspect the protocol via the `x-appservice-proto` header. The header has a value of `http` or `https`.
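If you prefer to verify from a terminal, a quick sketch (assuming `www.contoso.com` stands in for your custom domain):

```
# Check that the HTTPS endpoint responds.
curl -I https://www.contoso.com

# Inspect the served certificate's subject and validity dates.
openssl s_client -connect www.contoso.com:443 -servername www.contoso.com </dev/null 2>/dev/null | openssl x509 -noout -subject -dates
```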
> [!NOTE]
> If your app gives you certificate validation errors, you're probably using a self-signed certificate.
>
-> If that's not the case, you may have left out intermediate certificates when you export your certificate to the PFX file.
+> If that's not the case, you might have left out intermediate certificates when you exported your certificate to the PFX file.
## Frequently asked questions
Your application code can inspect the protocol via the "x-appservice-proto" head
#### How do I make sure that the app's IP address doesn't change when I make changes to the certificate binding?
-Your inbound IP address can change when you delete a binding, even if that binding is IP SSL. This is especially important when you renew a certificate that's already in an IP SSL binding. To avoid a change in your app's IP address, follow these steps in order:
+Your inbound IP address can change when you delete a binding, even if that binding is IP SSL. This is especially important when you renew a certificate that's already in an IP SSL binding. To avoid a change in your app's IP address, follow these steps, in order:
1. Upload the new certificate.
2. Bind the new certificate to the custom domain you want without deleting the old one. This action replaces the binding instead of removing the old one.
Your app allows [TLS](https://wikipedia.org/wiki/Transport_Layer_Security) 1.2 b
#### How do I handle TLS termination in App Service?
-In App Service, [TLS termination](https://wikipedia.org/wiki/TLS_termination_proxy) happens at the network load balancers, so all HTTPS requests reach your app as unencrypted HTTP requests. If your app logic needs to check if the user requests are encrypted or not, inspect the `X-Forwarded-Proto` header.
+In App Service, [TLS termination](https://wikipedia.org/wiki/TLS_termination_proxy) happens at the network load balancers, so all HTTPS requests reach your app as unencrypted HTTP requests. If your app logic needs to check if the user requests are encrypted, inspect the `X-Forwarded-Proto` header.
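For example, a minimal ASP.NET Core sketch (names are illustrative, not from the article) that inspects the forwarded scheme:

```csharp
var builder = WebApplication.CreateBuilder(args);
var app = builder.Build();

// TLS is terminated at the load balancer, so check the forwarded scheme
// rather than the connection itself.
app.MapGet("/", (HttpRequest request) =>
    request.Headers["X-Forwarded-Proto"] == "https"
        ? "Original request was encrypted (HTTPS)."
        : "Original request was unencrypted (HTTP).");

app.Run();
```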
-Language specific configuration guides, such as the [Linux Node.js configuration](configure-language-nodejs.md#detect-https-session) guide, shows you how to detect an HTTPS session in your application code.
+Language-specific configuration guides, such as the [Linux Node.js configuration](configure-language-nodejs.md#detect-https-session) guide, show how to detect an HTTPS session in your application code.
## Automate with scripts
-### Azure CLI
+#### Azure CLI
[Bind a custom TLS/SSL certificate to a web app](scripts/cli-configure-ssl-certificate.md)
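As a minimal sketch of what such a script does (placeholder names; the linked sample is the authoritative version):

```azurecli
# Upload a PFX and capture its thumbprint.
thumbprint=$(az webapp config ssl upload \
    --resource-group <group-name> \
    --name <app-name> \
    --certificate-file ./example.pfx \
    --certificate-password '<password>' \
    --query thumbprint --output tsv)

# Bind the certificate to the app's custom domain with SNI SSL.
az webapp config ssl bind \
    --resource-group <group-name> \
    --name <app-name> \
    --certificate-thumbprint "$thumbprint" \
    --ssl-type SNI
```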
-### PowerShell
+#### PowerShell
[!code-powershell[main](../../powershell_scripts/app-service/configure-ssl-certificate/configure-ssl-certificate.ps1?highlight=1-3 "Bind a custom TLS/SSL certificate to a web app")]
-## More resources
+## Related content
* [Use a TLS/SSL certificate in your code in Azure App Service](configure-ssl-certificate-in-code.md)
-* [FAQ : App Service Certificates](./faq-configuration-and-management.yml)
+* [Frequently asked questions about creating or deleting resources in Azure App Service](./faq-configuration-and-management.yml)
application-gateway Log Analytics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/log-analytics.md
Title: Examine WAF logs using Azure Log Analytics
+ Title: Examining logs using Azure Log Analytics
description: This article shows you how you can use Azure Log Analytics to examine Application Gateway Web Application Firewall (WAF) logs. Previously updated : 07/24/2023 Last updated : 09/16/2024
-# Use Log Analytics to examine Application Gateway Web Application Firewall (WAF) Logs
+# Use Log Analytics to examine Application Gateway Logs
-Once your Application Gateway WAF is operational, you can enable logs to inspect what is happening with each request. Firewall logs give insight to what the WAF is evaluating, matching, and blocking. With Log Analytics, you can examine the data inside the firewall logs to give even more insights. For more information about log queries, see [Overview of log queries in Azure Monitor](/azure/azure-monitor/logs/log-query-overview).
+Once your Application Gateway is operational, you can enable logs to inspect the events that occur for your resource. For example, the Application Gateway Firewall logs give insight into what the Web Application Firewall (WAF) is evaluating, matching, and blocking. With Log Analytics, you can examine the data inside the firewall logs to give even more insights. For more information about log queries, see [Overview of log queries in Azure Monitor](/azure/azure-monitor/logs/log-query-overview).
+
+In this article, we look at the Web Application Firewall (WAF) logs. You can set up [other Application Gateway logs](application-gateway-diagnostics.md) in a similar way.
## Prerequisites

* An Azure account with an active subscription is required. If you don't already have an account, you can [create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
-* An Azure Web Application Firewall with logs enabled. For more information, see [Azure Web Application Firewall on Azure Application Gateway](../web-application-firewall/ag/ag-overview.md).
+* An Azure Application Gateway WAF SKU. For more information, see [Azure Web Application Firewall on Azure Application Gateway](../web-application-firewall/ag/ag-overview.md).
* A Log Analytics workspace. For more information about creating a Log Analytics workspace, see [Create a Log Analytics workspace in the Azure portal](/azure/azure-monitor/logs/quick-create-workspace).
-## Import WAF logs
+## Sending logs
-To import your firewall logs into Log Analytics, see [Backend health, diagnostic logs, and metrics for Application Gateway](application-gateway-diagnostics.md#diagnostic-logging). When you have the firewall logs in your Log Analytics workspace, you can view data, write queries, create visualizations, and add them to your portal dashboard.
+To export your firewall logs into Log Analytics, see [Diagnostic logs for Application Gateway](application-gateway-diagnostics.md#firewall-log). When you have the firewall logs in your Log Analytics workspace, you can view data, write queries, create visualizations, and add them to your portal dashboard.
## Explore data with examples
-To view the raw data in the firewall log, you can run the following query:
+When using the **AzureDiagnostics** table, you can view the raw data in the firewall log by running the following query:
```
AzureDiagnostics
| where ResourceProvider == "MICROSOFT.NETWORK" and Category == "ApplicationGatewayFirewallLog"
+| limit 10
```

The output looks similar to the following example:

:::image type="content" source="media/log-analytics/log-query.png" alt-text="Screenshot of Log Analytics query." lightbox="media/log-analytics/log-query.png":::
-You can drill down into the data, and plot graphs or create visualizations from here. See the following queries as a starting point:
+When using a **resource-specific** table, you can view the raw data in the firewall log by running the following query. To learn more about resource-specific tables, see [Monitoring data reference](monitor-application-gateway-reference.md#supported-resource-log-categories-for-microsoftnetworkapplicationgateways).
+
+```
+AGWFirewallLogs
+| limit 10
+```
+
+You can drill down into the data and plot graphs or create visualizations from here. Here are some more examples of AzureDiagnostics queries that you can use.
### Matched/Blocked requests by IP
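For example, with the **AzureDiagnostics** table, a query along the following lines surfaces the top blocked client IPs (a sketch; `clientIp_s` and `action_s` are the AzureDiagnostics column names for the firewall log):

```
AzureDiagnostics
| where ResourceProvider == "MICROSOFT.NETWORK" and Category == "ApplicationGatewayFirewallLog"
| where action_s == "Blocked"
| summarize RequestCount = count() by clientIp_s
| order by RequestCount desc
```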
azure-arc Azure Rbac https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/azure-rbac.md
For a conceptual overview of this feature, see [Azure RBAC on Azure Arc-enabled
1. Add the following specification under `volumes`:

```yml
- - name: azure-rbac
- hostPath:
+ - hostPath:
path: /etc/guard type: Directory
+ name: azure-rbac
```

1. Add the following specification under `volumeMounts`:
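For illustration, a mount consistent with the `azure-rbac` volume above might look like this (a sketch; the article's exact spec may differ):

```yml
- mountPath: /etc/guard
  name: azure-rbac
  readOnly: true
```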
azure-arc Extensions Release https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/extensions-release.md
For more information, see [What is Edge Storage Accelerator?](../edge-storage-ac
## Connected registry on Arc-enabled Kubernetes

-- **Supported distributions**: Connected registry for Arc-enabled Kubernetes clusters.
-- **Supported Azure regions**: All regions where Azure Arc-enabled Kubernetes is available.
+- **Supported distributions**: AKS enabled by Azure Arc, Kubernetes using kind.
-The connected registry extension for Azure Arc enables you to sync container images between your Azure Container Registry (ACR) and your local on-prem Azure Arc-enabled Kubernetes cluster. The extension is deployed to the local or remote cluster and uses a synchronization schedule and window to sync images between the on-prem connected registry and the cloud ACR registry.
+The connected registry extension for Azure Arc allows you to synchronize container images between your Azure Container Registry (ACR) and your on-premises Azure Arc-enabled Kubernetes cluster. This extension can be deployed to either a local or remote cluster and utilizes a synchronization schedule and window to ensure seamless syncing of images between the on-premises connected registry and the cloud-based ACR.
For more information, see [Connected Registry for Arc-enabled Kubernetes clusters](../../container-registry/quickstart-connected-registry-arc-cli.md).
azure-arc Quickstart Connect Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/quickstart-connect-cluster.md
For a conceptual look at connecting clusters to Azure Arc, see [Azure Arc-enable
## Prerequisites
-In addition to these prerequisites, be sure to meet all [network requirements for Azure Arc-enabled Kubernetes](network-requirements.md).
-
### [Azure CLI](#tab/azure-cli)
+> [!IMPORTANT]
+> In addition to these prerequisites, be sure to meet all [network requirements for Azure Arc-enabled Kubernetes](network-requirements.md).
* An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
* A basic understanding of [Kubernetes core concepts](/azure/aks/concepts-clusters-workloads).
* An [identity (user or service principal)](system-requirements.md#azure-ad-identity-requirements) which can be used to [log in to Azure CLI](/cli/azure/authenticate-azure-cli) and connect your cluster to Azure Arc.
In addition to these prerequisites, be sure to meet all [network requirements fo
> The cluster needs to have at least one node of operating system and architecture type `linux/amd64` and/or `linux/arm64`. See [Cluster requirements](system-requirements.md#cluster-requirements) for more about ARM64 scenarios.

* At least 850 MB free for the Arc agents that will be deployed on the cluster, and capacity to use approximately 7% of a single CPU.
-* A [kubeconfig file](https://kubernetes.io/docs/concepts/configuration/organize-cluster-access-kubeconfig/) and context pointing to your cluster.
+* A [kubeconfig file](https://kubernetes.io/docs/concepts/configuration/organize-cluster-access-kubeconfig/) and context pointing to your cluster. For more information about kubeconfig files and how to set the current context, see [Configure access to multiple clusters](https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/).
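With these prerequisites in place, the eventual connect step is short. A sketch, assuming the `connectedk8s` CLI extension and placeholder names:

```azurecli
# Install the Azure Arc-enabled Kubernetes CLI extension.
az extension add --name connectedk8s

# Connect the cluster that the current kubeconfig context points to.
az connectedk8s connect --name <cluster-name> --resource-group <resource-group-name>
```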
### [Azure PowerShell](#tab/azure-powershell)
+> [!IMPORTANT]
+> In addition to these prerequisites, be sure to meet all [network requirements for Azure Arc-enabled Kubernetes](network-requirements.md).
* An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
* A basic understanding of [Kubernetes core concepts](/azure/aks/concepts-clusters-workloads).
* An [identity (user or service principal)](system-requirements.md#azure-ad-identity-requirements) which can be used to [log in to Azure PowerShell](/powershell/azure/authenticate-azureps) and connect your cluster to Azure Arc.
azure-functions Dotnet Isolated Process Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/dotnet-isolated-process-guide.md
Title: Guide for running C# Azure Functions in an isolated worker process
description: Learn how to use the .NET isolated worker model to run your C# functions in Azure, which lets you run your functions on currently supported versions of .NET and .NET Framework. Previously updated : 12/13/2023 Last updated : 09/05/2024 - template-concept - devx-track-dotnet
The following packages are required to run your .NET functions in an isolated wo
+ [Microsoft.Azure.Functions.Worker]
+ [Microsoft.Azure.Functions.Worker.Sdk]
+#### Version 2.x (Preview)
+
+The 2.x versions of the core packages change the supported framework assets and bring in support for new .NET APIs from these later versions. When using .NET 9 (Preview) or later, your app needs to reference version 2.0.0-preview1 or later of both packages.
+
+The 2.0.0-preview1 versions are compatible with code written against version 1.x. However, during the preview period, newer versions might introduce behavior changes that could influence the code you write.
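As a sketch, a project file that opts into the preview packages might look like the following (the exact version numbers are assumptions; check NuGet for the current previews):

```xml
<PropertyGroup>
  <TargetFramework>net9.0</TargetFramework>
  <AzureFunctionsVersion>v4</AzureFunctionsVersion>
</PropertyGroup>
<ItemGroup>
  <!-- 2.0.0-preview1 or later of both packages is required when targeting .NET 9. -->
  <PackageReference Include="Microsoft.Azure.Functions.Worker" Version="2.0.0-preview1" />
  <PackageReference Include="Microsoft.Azure.Functions.Worker.Sdk" Version="2.0.0-preview1" />
</ItemGroup>
```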
+
### Extension packages

Because .NET isolated worker process functions use different binding types, they require a unique set of binding extension packages.
Dependency injection is simplified when compared to .NET in-process functions, w
For a .NET isolated process app, you use the standard .NET pattern of calling [ConfigureServices] on the host builder and using the extension methods on [IServiceCollection] to inject specific services. The following example injects a singleton service dependency:
-
+
```csharp
.ConfigureServices(services => {
This code requires `using Microsoft.Extensions.DependencyInjection;`. To learn m
Dependency injection can be used to interact with other Azure services. You can inject clients from the [Azure SDK for .NET](/dotnet/azure/sdk/azure-sdk-for-dotnet) using the [Microsoft.Extensions.Azure](https://www.nuget.org/packages/Microsoft.Extensions.Azure) package. After installing the package, [register the clients](/dotnet/azure/sdk/dependency-injection#register-clients) by calling `AddAzureClients()` on the service collection in `Program.cs`. The following example configures a [named client](/dotnet/azure/sdk/dependency-injection#configure-multiple-service-clients-with-different-names) for Azure Blobs: ```csharp
+using Microsoft.Azure.Functions.Worker;
using Microsoft.Extensions.Azure;
+using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Hosting;

var host = new HostBuilder()
The [`ILogger<T>`][ILogger&lt;T&gt;] in this example was also obtained through d
### Middleware
-.NET isolated also supports middleware registration, again by using a model similar to what exists in ASP.NET. This model gives you the ability to inject logic into the invocation pipeline, and before and after functions execute.
+The isolated worker model also supports middleware registration, again by using a model similar to what exists in ASP.NET. This model gives you the ability to inject logic into the invocation pipeline, and before and after functions execute.
The [ConfigureFunctionsWorkerDefaults] extension method has an overload that lets you register your own middleware, as you can see in the following example.
The isolated worker model uses `System.Text.Json` by default. You can customize
The following example shows this using `ConfigureFunctionsWebApplication`, but it will also work for `ConfigureFunctionsWorkerDefaults`:

```csharp
+using Microsoft.Azure.Functions.Worker;
+using Microsoft.Extensions.DependencyInjection;
+using Microsoft.Extensions.Hosting;
+
var host = new HostBuilder()
    .ConfigureFunctionsWebApplication((IFunctionsWorkerApplicationBuilder builder) =>
    {
var host = new HostBuilder()
        });
    })
    .Build();
+
+host.Run();
```

You might wish to instead use JSON.NET (`Newtonsoft.Json`) for serialization. To do this, you would install the [`Microsoft.Azure.Core.NewtonsoftJson`](https://www.nuget.org/packages/Microsoft.Azure.Core.NewtonsoftJson) package. Then, in your service registration, you would reassign the `Serializer` property on the `WorkerOptions` configuration. The following example shows this using `ConfigureFunctionsWebApplication`, but it will also work for `ConfigureFunctionsWorkerDefaults`:

```csharp
+using Microsoft.Azure.Functions.Worker;
+using Microsoft.Extensions.DependencyInjection;
+using Microsoft.Extensions.Hosting;
+
var host = new HostBuilder()
    .ConfigureFunctionsWebApplication((IFunctionsWorkerApplicationBuilder builder) =>
    {
var host = new HostBuilder()
        });
    })
    .Build();
+
+host.Run();
```

## Methods recognized as functions
To enable ASP.NET Core integration for HTTP:
ASP.NET Core has its own serialization layer, and it is not affected by [customizing general serialization configuration](#customizing-json-serialization). To customize the serialization behavior used for your HTTP triggers, you need to include an `.AddMvc()` call as part of service registration. The returned `IMvcBuilder` can be used to modify ASP.NET Core's JSON serialization settings. The following example shows how to configure JSON.NET (`Newtonsoft.Json`) for serialization using this approach:

```csharp
+using Microsoft.Azure.Functions.Worker;
+using Microsoft.Extensions.DependencyInjection;
+using Microsoft.Extensions.Hosting;
+
var host = new HostBuilder()
    .ConfigureFunctionsWebApplication()
    .ConfigureServices(services =>
public class MyFunction {
The logger can also be obtained from a [FunctionContext] object passed to your function. Call the [GetLogger&lt;T&gt;] or [GetLogger] method, passing a string value that is the name for the category in which the logs are written. The category is usually the name of the specific function from which the logs are written. To learn more about categories, see the [monitoring article](functions-monitoring.md#log-levels-and-categories).
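A brief sketch (the queue trigger is illustrative, not from this passage):

```csharp
using Microsoft.Azure.Functions.Worker;
using Microsoft.Extensions.Logging;

public class MyFunction
{
    [Function(nameof(MyFunction))]
    public void Run([QueueTrigger("myqueue-items")] string message, FunctionContext context)
    {
        // The category matches the function name, so entries group with its logs.
        ILogger logger = context.GetLogger(nameof(MyFunction));
        logger.LogInformation("Processing message: {Message}", message);
    }
}
```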
-Use the methods of [`ILogger<T>`][ILogger&lt;T&gt;] and [`ILogger`][ILogger] to write various log levels, such as `LogWarning` or `LogError`. To learn more about log levels, see the [monitoring article](functions-monitoring.md#log-levels-and-categories). You can customize the log levels for components added to your code by registering filters as part of the `HostBuilder` configuration:
+Use the methods of [`ILogger<T>`][ILogger&lt;T&gt;] and [`ILogger`][ILogger] to write various log levels, such as `LogWarning` or `LogError`. To learn more about log levels, see the [monitoring article](functions-monitoring.md#log-levels-and-categories). You can customize the log levels for components added to your code by registering filters:
```csharp
using Microsoft.Azure.Functions.Worker;
var host = new HostBuilder()
As part of configuring your app in `Program.cs`, you can also define the behavior for how errors are surfaced to your logs. By default, exceptions thrown by your code can end up wrapped in an `RpcException`. To remove this extra layer, set the `EnableUserCodeException` property to "true" as part of configuring the builder:

```csharp
+using Microsoft.Azure.Functions.Worker;
+using Microsoft.Extensions.Hosting;
+
var host = new HostBuilder()
    .ConfigureFunctionsWorkerDefaults(builder => {}, options =>
    {
        options.EnableUserCodeException = true;
    })
    .Build();
+
+host.Run();
```

### Application Insights
The call to `ConfigureFunctionsApplicationInsights()` adds an `ITelemetryModule`
The rest of your application continues to work with `ILogger` and `ILogger<T>`. However, by default, the Application Insights SDK adds a logging filter that instructs the logger to capture only warnings and more severe logs. If you want to disable this behavior, remove the filter rule as part of service configuration:

```csharp
+using Microsoft.Azure.Functions.Worker;
+using Microsoft.Extensions.DependencyInjection;
+using Microsoft.Extensions.Hosting;
+using Microsoft.Extensions.Logging;
+
var host = new HostBuilder()
    .ConfigureFunctionsWorkerDefaults()
    .ConfigureServices(services => {
Placeholders are a platform capability that improves cold start for apps targeti
az functionapp config set -g <groupName> -n <appName> --net-framework-version <framework>
```
- In this example, also replace `<framework>` with the appropriate version string, such as `v8.0`, `v7.0`, or `v6.0`, according to your target .NET version.
+ In this example, also replace `<framework>` with the appropriate version string, such as `v8.0`, according to your target .NET version.
1. Make sure that your function app is configured to use a 64-bit process, which you can do by using this [az functionapp config set](/cli/azure/functionapp/config#az-functionapp-config-set) command:
Before a generally available release, a .NET version might be released in a _Pre
While it might be possible to target a given release from a local Functions project, function apps hosted in Azure might not have that release available. Azure Functions can only be used with Preview or Go-live releases noted in this section.
-Azure Functions doesn't currently work with any "Preview" or "Go-live" .NET releases. See [Supported versions][supported-versions] for a list of generally available releases that you can use.
+Azure Functions currently can be used with the following "Preview" or "Go-live" .NET releases:
+
+| Operating system | .NET preview version |
+| - | - |
+| Linux | .NET 9 Preview 7<sup>1</sup> |
+
+<sup>1</sup> To successfully target .NET 9, your project needs to reference the [2.x versions of the core packages](#version-2x-preview). If using Visual Studio, .NET 9 requires version 17.12 or later.
+
+See [Supported versions][supported-versions] for a list of generally available releases that you can use.
### Using a preview .NET SDK
To use Azure Functions with a preview version of .NET, you need to update your p
1. Installing the relevant .NET SDK version in your development environment
1. Changing the `TargetFramework` setting in your `.csproj` file
-When deploying to a function app in Azure, you also need to ensure that the framework is made available to the app. To do so on Windows, you can use the following CLI command. Replace `<groupName>` with the name of the resource group, and replace `<appName>` with the name of your function app. Replace `<framework>` with the appropriate version string, such as `v8.0`.
+When you deploy to your function app in Azure, you also need to ensure that the framework is made available to the app. During the preview period, some tools and experiences might not surface the new preview version as an option. If you don't see the preview version in the Azure portal, for example, you can use the REST API, Bicep templates, or the Azure CLI to configure the version manually.
+
+### [Windows](#tab/windows)
+
+For apps hosted on Windows, use the following Azure CLI command. Replace `<groupName>` with the name of the resource group, and replace `<appName>` with the name of your function app. Replace `<framework>` with the appropriate version string, such as `v8.0`.
```azurecli az functionapp config set -g <groupName> -n <appName> --net-framework-version <framework> ```
+### [Linux](#tab/linux)
+
+For apps hosted on Linux, use the following Azure CLI command. Replace `<groupName>` with the name of the resource group, and replace `<appName>` with the name of your function app. Replace `<version>` with the appropriate version string, such as `8.0`.
+
+```azurecli
+az functionapp config set -g <groupName> -n <appName> --linux-fx-version "dotnet-isolated|<version>"
+```
+++

### Considerations for using .NET preview versions

Keep these considerations in mind when using Functions with preview versions of .NET:
azure-functions Functions Core Tools Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-core-tools-reference.md
When you supply `<PROJECT_FOLDER>`, the project is created in a new folder with
| **`--model`** | Sets the desired programming model for a target language when more than one model is available. Supported options are `V1` and `V2` for Python and `V3` and `V4` for Node.js. For more information, see the [Python developer guide](functions-reference-python.md#programming-model) and the [Node.js developer guide](functions-reference-node.md), respectively. |
| **`--source-control`** | Controls whether a git repository is created. By default, a repository isn't created. When `true`, a repository is created. |
| **`--worker-runtime`** | Sets the language runtime for the project. Supported values are: `csharp`, `dotnet`, `dotnet-isolated`, `javascript`, `node` (JavaScript), `powershell`, `python`, and `typescript`. For Java, use [Maven](functions-reference-java.md#create-java-functions). To generate a language-agnostic project with just the project files, use `custom`. When not set, you're prompted to choose your runtime during initialization. |
-| **`--target-framework`** | Sets the target framework for the function app project. Valid only with `--worker-runtime dotnet-isolated`. Supported values are: `net6.0` (default), `net7.0`, `net8.0`, and `net48` (.NET Framework 4.8). |
+| **`--target-framework`** | Sets the target framework for the function app project. Valid only with `--worker-runtime dotnet-isolated`. Supported values are: `net9.0` (preview), `net8.0` (default), `net6.0`, and `net48` (.NET Framework 4.8). |
> [!NOTE]
azure-functions Functions Versions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-versions.md
zone_pivot_groups: programming-languages-set-functions
| Version | Support level | Description |
| --- | --- | --- |
| 4.x | GA | **_Recommended runtime version for functions in all languages._** Check out [Supported language versions](#languages). |
-| 1.x | GA ([support ends September 14, 2026](https://aka.ms/azure-functions-retirements/hostv1)) | Supported only for C# apps that must use .NET Framework. This version is in maintenance mode, with enhancements provided only in later versions. **Support will end for version 1.x on September 14, 2026.** We highly recommend you [migrate your apps to version 4.x](migrate-version-1-version-4.md?pivots=programming-language-csharp), which supports .NET Framework 4.8, .NET 6, and .NET 8.|
+| 1.x | GA ([support ends September 14, 2026](https://aka.ms/azure-functions-retirements/hostv1)) | Supported only for C# apps that must use .NET Framework. This version is in maintenance mode, with enhancements provided only in later versions. **Support will end for version 1.x on September 14, 2026.** We highly recommend you [migrate your apps to version 4.x](migrate-version-1-version-4.md?pivots=programming-language-csharp), which supports .NET Framework 4.8, .NET 6, .NET 8, and .NET 9.|
> [!IMPORTANT]
> As of December 13, 2022, function apps running on versions 2.x and 3.x of the Azure Functions runtime have reached the end of extended support. For more information, see [Retired versions](#retired-versions).
In Visual Studio, you select the runtime version when you create a project. Azur
<AzureFunctionsVersion>v4</AzureFunctionsVersion>
```
-You can choose `net8.0`, `net6.0`, or `net48` as the target framework if you are using the [isolated worker model](dotnet-isolated-process-guide.md). If you are using the [in-process model](./functions-dotnet-class-library.md), you can choose `net8.0` or `net6.0`, and you must include the `Microsoft.NET.Sdk.Functions` extension set to at least `4.4.0`.
+If you are using the [isolated worker model](dotnet-isolated-process-guide.md), you can choose `net8.0`, `net6.0`, or `net48` as the target framework. You can also choose to use [preview support](./dotnet-isolated-process-guide.md#preview-net-versions) for `net9.0`. If you are using the [in-process model](./functions-dotnet-class-library.md), you can choose `net8.0` or `net6.0`, and you must include the `Microsoft.NET.Sdk.Functions` extension set to at least `4.4.0`.
.NET 7 was previously supported on the isolated worker model but reached the end of official support on [May 14, 2024][dotnet-policy].
azure-functions Migrate Dotnet To Isolated Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/migrate-dotnet-to-isolated-model.md
On version 4.x of the Functions runtime, your .NET function app targets .NET 6 w
> [!TIP]
> **We recommend upgrading to .NET 8 on the isolated worker model.** This provides a quick migration path to the fully released version with the longest support window from .NET.
-This guide doesn't present specific examples for .NET 7 or .NET 6. If you need to target these versions, you can adapt the .NET 8 examples.
+This guide doesn't present specific examples for .NET 9 (Preview) or .NET 6. If you need to target these versions, you can adapt the .NET 8 examples.
## Prepare for migration
azure-functions Update Language Versions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/update-language-versions.md
Use these steps to update the project on your local computer:
1. Ensure you have [installed the target version of the .NET SDK](https://dotnet.microsoft.com/download/dotnet).
-1. Update your references to the latest stable versions of: [Microsoft.Azure.Functions.Worker](https://www.nuget.org/packages/Microsoft.Azure.Functions.Worker/) and [Microsoft.Azure.Functions.Worker.Sdk](https://www.nuget.org/packages/Microsoft.Azure.Functions.Worker.Sdk/).
+ If you are targeting a preview version, consult the [Functions guidance for preview .NET versions](./dotnet-isolated-process-guide.md#preview-net-versions) to ensure that the version is supported. Additional steps might be required for .NET previews.
+
+1. Update your references to the latest versions of: [Microsoft.Azure.Functions.Worker](https://www.nuget.org/packages/Microsoft.Azure.Functions.Worker/) and [Microsoft.Azure.Functions.Worker.Sdk](https://www.nuget.org/packages/Microsoft.Azure.Functions.Worker.Sdk/).
1. Update your project's target framework to the new version. For C# projects, you must update the `<TargetFramework>` element in the `.csproj` file. See [Target frameworks](/dotnet/standard/frameworks) for specifics related to the chosen version.
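Taken together, the changes land in the `.csproj` file. A sketch for a move to .NET 8 (the package versions shown are placeholders; use the latest stable releases from NuGet):

```xml
<PropertyGroup>
  <TargetFramework>net8.0</TargetFramework>
  <AzureFunctionsVersion>v4</AzureFunctionsVersion>
</PropertyGroup>
<ItemGroup>
  <PackageReference Include="Microsoft.Azure.Functions.Worker" Version="1.23.0" />
  <PackageReference Include="Microsoft.Azure.Functions.Worker.Sdk" Version="1.17.4" />
</ItemGroup>
```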
azure-maps About Azure Maps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/about-azure-maps.md
Azure Maps consists of the following services that can provide geographic contex
Data is imperative for maps. Use the Data registry service to access geospatial data that you previously uploaded to your [Azure Storage], for use with spatial operations or image composition. By bringing customer data closer to the Azure Maps service, you reduce latency and increase productivity. For more information, see [Data registry] in the Azure Maps REST API documentation.
-> [!NOTE]
->
-> **Azure Maps Data service retirement**
->
-> The Azure Maps Data service (both [v1] and [v2]) is now deprecated and will be retired on 9/16/24. To avoid service disruptions, all calls to the Data service will need to be updated to use the Azure Maps [Data registry] service by 9/16/24. For more information, see [How to create data registry].
-
### Geolocation service

Use the Geolocation service to retrieve the two-letter country/region code for an IP address. This service can help you enhance user experience by providing customized application content based on geographic location.
azure-maps Azure Maps Qps Rate Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/azure-maps-qps-rate-limits.md
The following list shows the QPS usage limits for each Azure Maps service by Pri
| Creator - Alias | 10 | Not Available | Not Available |
| Creator - Conversion, Dataset, Feature State, Features, Map Configuration, Style, Routeset, TilesetDetails, Wayfinding | 50 | Not Available | Not Available |
| Data registry service | 50 | 50 | Not Available |
-| Data service (Deprecated<sup>1</sup>) | 50 | 50 |  Not Available  |
| Geolocation service | 50 | 50 | 50 |
| Render service - Road tiles | 500 | 500 | 50 |
| Render service - Satellite tiles | 250 | 250 | Not Available |
The following list shows the QPS usage limits for each Azure Maps service by Pri
| Traffic service | 50 | 50 | 50 |
| Weather service | 50 | 50 | 50 |
-<sup>1</sup> The Azure Maps Data service (both [v1] and [v2]) is now deprecated and will be retired on 9/16/24. To avoid service disruptions, all calls to the Data service will need to be updated to use the Azure Maps [Data Registry] service by 9/16/24. For more information, see [How to create data registry].
When QPS limits are reached, an HTTP 429 error is returned. If you're using the Gen 2 or Gen 1 S1 pricing tiers, you can create an Azure Maps *Technical* Support Request in the [Azure portal] to increase a specific QPS limit if needed. QPS limits for the Gen 1 S0 pricing tier can't be increased.

[Azure portal]: https://portal.azure.com/
azure-maps Rest Api Azure Maps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/rest-api-azure-maps.md
The most recent stable release of the Azure Maps services.
| API | API version | Description |
|--|-|-|
-| [Data] | 2.0 | The Azure Maps Data v2 service is deprecated and will be retired on 9/16/24. To avoid service disruptions, all calls to the Data service need to be updated to use the Azure Maps [Data registry] service by 9/16/24. For more information, see [How to create data registry]. |
| [Data Registry] | 2023-06-01 | Programmatically store and update geospatial data to use in spatial operations. |
| [Geolocation] | 1.0 | Convert IP addresses to country/region ISO codes. |
| [Render] | 2024-04-01 | Get road, satellite/aerial, weather, traffic map tiles, and static map images. |
There are previous stable releases of an Azure Maps services that are still in u
| API | API version | Description |
|--|-|-|
-| [Data][Data-v1] | 1.0 | The Azure Maps Data v1 service is deprecated and will be retired on 9/16/24. To avoid service disruptions, all calls to the Data service need to be updated to use the Azure Maps [Data registry] service by 9/16/24. For more information, see [How to create data registry]. |
| [Render][Render v1] | 1.0 | Get road, satellite/aerial, weather, traffic map tiles and static map images.<BR>The Azure Maps [Render v1] service is now deprecated and will be retired on 9/17/26. To avoid service disruptions, all calls to the Render v1 API need to be updated to use the latest version of the [Render] API by 9/17/26. |
| [Search][Search-v1] | 1.0 | Geocode addresses and coordinates, search for business listings and places by name or category, and get administrative boundary polygons. This is version 1.0 of the Search service. For the latest version, see [Search]. |
azure-maps Understanding Azure Maps Transactions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/understanding-azure-maps-transactions.md
The following table summarizes the Azure Maps services that generate transaction
| Azure Maps Service | Billable | Transaction Calculation | Meter |
|--|-|-|-|
-| Data service (Deprecated<sup>1</sup>) | Yes, except for `MapDataStorageService.GetDataStatus` and `MapDataStorageService.GetUserData`, which are nonbillable| One request = 1 transaction| <ul><li>Location Insights Data (Gen2 pricing)</li></ul>|
| [Data registry] | Yes | One request = 1 transaction| <ul><li>Location Insights Data (Gen2 pricing)</li></ul>|
| [Geolocation]| Yes| One request = 1 transaction| <ul><li>Location Insights Geolocation (Gen2 pricing)</li><li>Standard S1 Geolocation Transactions (Gen1 S1 pricing)</li><li>Standard Geolocation Transactions (Gen1 S0 pricing)</li></ul>|
| [Render] | Yes, except Get Copyright API, Get Attribution API and Terra maps (`MapTile.GetTerraTile` and `layer=terra`) which are nonbillable.|<ul><li>15 tiles = 1 transaction</li><li>One request for Get Copyright = 1 transaction</li><li>One request for Get Map Attribution = 1 transaction</li><li>One request for Get Static Map = 1 transaction</li><li>One request for Get Map Tileset = 1 transaction</li></ul> <br> For Creator related usage, see the [Creator table]. |<ul><li>Maps Base Map Tiles (Gen2 pricing)</li><li>Maps Imagery Tiles (Gen2 pricing)</li><li>Maps Static Map Images (Gen2 pricing)</li><li>Maps Weather Tiles (Gen2 pricing)</li><li>Standard Hybrid Aerial Imagery Transactions (Gen1 S0 pricing)</li><li>Standard Aerial Imagery Transactions (Gen1 S0 pricing)</li><li>Standard S1 Aerial Imagery Transactions (Gen1 S1 pricing)</li><li>Standard S1 Hybrid Aerial Imagery Transactions (Gen1 S1 pricing)</li><li>Standard S1 Rendering Transactions (Gen1 S1 pricing)</li><li>Standard S1 Tile Transactions (Gen1 S1 pricing)</li><li>Standard S1 Weather Tile Transactions (Gen1 S1 pricing)</li><li>Standard Tile Transactions (Gen1 S0 pricing)</li><li>Standard Weather Tile Transactions (Gen1 S0 pricing)</li><li>Maps Copyright (Gen2 pricing, Gen1 S0 pricing and Gen1 S1 pricing)</li></ul>|
The following table summarizes the Azure Maps services that generate transaction
| [Traffic] | Yes | One request = 1 transaction (except tiles)<br>15 tiles = 1 transaction | <ul><li>Location Insights Traffic (Gen2 pricing)</li><li>Standard S1 Traffic Transactions (Gen1 S1 pricing)</li><li>Standard Traffic Transactions (Gen1 S0 pricing)</li><li>Maps Traffic Tiles (Gen2 pricing)</li><li>Standard S1 Tile Transactions (Gen1 S1 pricing)</li><li>Standard Tile Transactions (Gen1 S0 pricing)</li></ul> |
| [Weather] | Yes | One request = 1 transaction | <ul><li>Location Insights Weather (Gen2 pricing)</li><li>Standard S1 Weather Transactions (Gen1 S1 pricing)</li><li>Standard Weather Transactions (Gen1 S0 pricing)</li></ul> |
-<sup>1</sup> The Azure Maps Data service (both [v1] and [v2]) is now deprecated and will be retired on 9/16/24. To avoid service disruptions, all calls to the Data service need to be updated to use the Azure Maps [Data Registry] service by 9/16/24. For more information, see [How to create data registry].
> [!TIP]
>
> Unlike Bing Maps, Azure Maps doesn't use [session IDs]. Instead, Azure Maps offers a number of free transactions each month as shown in [Azure Maps pricing]. For example, you get 5,000 free *Base Map Tile* transactions per month. Each transaction can include up to 15 tiles for a total of 75,000 tiles rendered for free each month.
azure-netapp-files Reserved Capacity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/reserved-capacity.md
+
+ Title: Reserved capacity for Azure NetApp Files
+description: Learn how to optimize TCO with capacity reservations in Azure NetApp Files.
+
+ Last updated : 09/16/2024
+# Reserved capacity for Azure NetApp Files
+
+You can save money on storage costs for Azure NetApp Files with capacity reservations. Azure NetApp Files reserved capacity offers you a discount on storage costs when you commit to a reservation for one or three years, optimizing your TCO. A reservation provides a fixed amount of storage capacity for the term of the reservation.
+
+Azure NetApp Files reserved capacity can significantly reduce your capacity costs for storing data in your Azure NetApp Files volumes. How much you save depends on the total capacity you reserve and the [service level](azure-netapp-files-service-levels.md) you choose.
+
+For pricing information about reservation capacity for Azure NetApp Files, see [Azure NetApp Files pricing](https://azure.microsoft.com/pricing/details/netapp/).
+
+## Reservation terms for Azure NetApp Files
+
+This section describes the terms of an Azure NetApp Files capacity reservation.
+
+>[!NOTE]
+>Azure NetApp Files reserved capacity covers matching capacity pools in the selected service level and region. When using capacity pools configured with [Standard storage with cool access](manage-cool-access.md), only "hot" tier consumption is covered by the reserved capacity benefit.
+
+### Reservation capacity
+
+You can purchase Azure NetApp Files reserved capacity in units of 100 TiB and 1 PiB per month for a one- or three-year term for a particular service level within a region.
+
+### Reservation scope
+
+Azure NetApp Files reserved capacity is available for a single subscription and multiple subscriptions (shared scope). When scoped to a single subscription, the reservation discount is applied to the selected subscription only. When scoped to multiple subscriptions, the reservation discount is shared across those subscriptions within the customer's billing context.
+
+A reservation applies to your usage within the purchased scope and can't be limited to a specific NetApp account, capacity pool, container, or object within the subscription.
+
+Any capacity reservation for Azure NetApp Files covers only the capacity pools within the service level selected. Add-on features such as cross-region replication and backup are not included in the reservation. As soon as you buy a reservation, the capacity charges that match the reservation attributes are charged at the discount rates instead of the pay-as-you-go rates.
+
+For more information on Azure reservations, see [What are Azure Reservations](../cost-management-billing/reservations/save-compute-costs-reservations.md).
+
+### Supported service level options
+
+Azure NetApp Files reserved capacity is available for Standard, Premium, and Ultra service levels in units of 100 TiB and 1 PiB.
+
+### Requirements for purchase
+
+To purchase reserved capacity:
+* You must be in the **Owner** role for at least one Enterprise or individual subscription with pay-as-you-go rates.
+* For Enterprise subscriptions, **Add Reserved Instances** must be enabled in the EA portal. Or, if that setting is disabled, you must be an EA Admin on the subscription.
+* For the Cloud Solution Provider (CSP) program, only admin agents or sales agents can buy Azure NetApp Files reserved capacity.
+
+## Determine required capacity before purchase
+
+When you purchase an Azure NetApp Files reservation, you must choose the region and tier for the reservation. Your reservation is valid only for data stored in that region and tier. For example, suppose you purchase a reservation for Azure NetApp Files *Premium* service level in US East. That reservation applies to neither *Premium* capacity pools for that subscription in US West nor capacity pools for other service levels (for example, *Ultra* service level in US East). Additional reservations can be purchased.
+
+Reservations are available for 100-TiB or 1-PiB increments, with higher discounts for 1-PiB increments. When you purchase a reservation in the Azure portal, Microsoft might provide you with recommendations based on your previous usage to help determine which reservation you should purchase.
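For example, if your capacity pools for a given service level in a region total 350 TiB of sustained usage, three stacked 100-TiB reservations would cover 300 TiB at the discounted rate, and the remaining 50 TiB would be billed at pay-as-you-go rates.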
+
+Purchasing Azure NetApp Files reserved capacity does not automatically increase your regional capacity quota. Azure reservations for Azure NetApp Files are not an on-demand capacity guarantee. If your capacity reservation requires a quota increase, it's recommended that you complete it before making the reservation. For more information, see [Regional capacity in Azure NetApp Files](regional-capacity-quota.md).
+
+## Purchase Azure NetApp Files reserved capacity
+
+You can purchase Azure NetApp Files reserved capacity through the [Azure portal](https://portal.azure.com/). You can pay for the reservation up front or with monthly payments. For more information about purchasing with monthly payments, see [Purchase Azure reservations with up front or monthly payments](../cost-management-billing/reservations/prepare-buy-reservation.md).
+
+To purchase reserved capacity:
+
+1. Navigate to the [**Purchase reservations**](https://portal.azure.com/#blade/Microsoft_Azure_Reservations/CreateBlade/referrer/Browse_AddCommand) blade in the Azure portal.
+
+2. Select **Azure NetApp Files** to buy a new reservation.
+
+3. Fill in the required fields as described in the table that appears.
+
+4. After you select the parameters for your reservation, the Azure portal displays the cost. The portal also shows the discount percentage over pay-as-you-go billing.
+
+5. In the **Purchase reservations** blade, review the total cost of the reservation. You can also provide a name for the reservation.
+
+After you purchase a reservation, it is automatically applied to any existing [Azure NetApp Files capacity pools](azure-netapp-files-set-up-capacity-pool.md) that match the terms of the reservation. If you haven't created any Azure NetApp Files capacity pools, the reservation applies when you create a resource that matches the terms of the reservation. In either case, the term of the reservation begins immediately after a successful purchase.
+
+## Exchange or refund a reservation
+
+You can exchange or refund a reservation, with certain limitations. For more information about Azure Reservations policies, see [Self-service exchanges and refunds for Azure Reservations](../cost-management-billing/reservations/exchange-and-refund-azure-reservations.md).
+
+<!--
+### Exchange a reservation
+
+Exchanging a reservation enables you to receive a prorated refund based on the unused portion of the reservation. You can then apply the refund to the purchase price of a new Azure NetApp Files reservation.
+
+There's no limit on the number of exchanges you can make. Also, there's no fee associated with an exchange. The new reservation that you purchase must be of equal or greater value than the prorated credit from the original reservation. An Azure NetApp Files reservation can be exchanged only for another Azure NetApp Files reservation, and not for a reservation for any other Azure service.
+
+### Refund a reservation
+
+You can cancel an Azure NetApp Files reservation at any time. When you cancel, you'll receive a prorated refund based on the remaining term of the reservation, minus a 12% early termination fee. The maximum refund per year is $50,000.
+
+Cancelling a reservation immediately terminates the reservation and returns the remaining months to Microsoft. The remaining prorated balance, minus the fee, will be refunded to your original form of purchase. -->
+
+## Expiration of a reservation
+
+When a reservation expires, any Azure NetApp Files capacity that you are using under that reservation is billed at the pay-as-you-go rate. Reservations don't renew automatically.
+
+An email notification is sent 30 days prior to the expiration of the reservation, and again on the expiration date. To continue taking advantage of the cost savings that a reservation provides, renew it no later than the expiration date.
+
+## Need help? Contact us
+
+If you have questions or need help, [create a support request](https://go.microsoft.com/fwlink/?linkid=2083458).
+
+## Next steps
+
+* [What are Azure Reservations?](../cost-management-billing/reservations/save-compute-costs-reservations.md)
+* [Understand how reservation discounts are applied to Azure storage services](../cost-management-billing/reservations/understand-storage-charges.md)
azure-netapp-files Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/whats-new.md
Previously updated : 08/20/2024 Last updated : 09/16/2024
Azure NetApp Files is updated regularly. This article provides a summary about the latest new features and enhancements.
+## September 2024
+
+* [Reserved capacity](reserved-capacity.md) is now generally available (GA)
+
+ Pay-as-you-go pricing is the most convenient way to purchase cloud storage when your workloads are dynamic or changing over time. However, some workloads are more predictable with stable capacity usage over an extended period. These workloads can benefit from savings in exchange for a longer-term commitment. With a one-year or three-year commitment of Azure NetApp Files reserved capacity, you can save up to 34% on sustained usage of Azure NetApp Files. Reserved capacity is available in stackable increments of 100 TiB and 1 PiB on the Standard, Premium, and Ultra service levels in a given region. Reserved capacity can be used in a single subscription (single-subscription scope) or across multiple subscriptions (shared scope) in the same Azure tenant. Azure NetApp Files reserved capacity benefits are automatically applied to existing Azure NetApp Files capacity pools in the matching region and service level. Reserved capacity not only provides cost savings but also improves financial predictability and stability, allowing for more effective budgeting. Additional usage is conveniently billed at the regular pay-as-you-go rate.
+
+ For more information, see [Azure NetApp Files reserved capacity](reserved-capacity.md), or view your reservations in the Azure portal.
+
## August 2024 * [Azure NetApp Files storage with cool access](cool-access-introduction.md) is now generally available (GA) and supported with the Standard, Premium, and Ultra service levels. Cool access is also now supported for destination volumes in cross-region/cross-zone relationships.
azure-vmware Azure Vmware Solution Known Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/azure-vmware-solution-known-issues.md
Refer to the table to find details about resolution dates or possible workaround
| [VMSA-2023-023](https://www.vmware.com/security/advisories/VMSA-2023-0023.html) VMware vCenter Server Out-of-Bounds Write Vulnerability (CVE-2023-34048) publicized in October 2023 | October 2023 | A risk assessment of CVE-2023-34048 was conducted and it was determined that sufficient controls are in place within Azure VMware Solution to reduce the risk of CVE-2023-34048 from a CVSS Base Score of 9.8 to an adjusted Environmental Score of [6.8](https://www.first.org/cvss/calculator/3.1#CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H/MAC:L/MPR:H/MUI:R) or lower. Adjustments from the base score were possible due to the network isolation of the Azure VMware Solution vCenter Server (ports 2012, 2014, and 2020 aren't exposed via any interactive network path) and multiple levels of authentication and authorization necessary to gain interactive access to the vCenter Server network segment. Azure VMware Solution is currently rolling out [7.0U3o](https://docs.vmware.com/en/VMware-vSphere/7.0/rn/vsphere-vcenter-server-70u3o-release-notes/index.html) to address this issue. | March 2024 - Resolved in [ESXi 7.0U3o](https://docs.vmware.com/en/VMware-vSphere/7.0/rn/vsphere-vcenter-server-70u3o-release-notes/index.html) | | The AV64 SKU currently supports RAID-1 FTT1, RAID-5 FTT1, and RAID-1 FTT2 vSAN storage policies. For more information, see [AV64 supported RAID configuration](introduction.md#av64-supported-raid-configuration) | Nov 2023 | The AV64 SKU now supports 7 Fault Domains and all vSAN storage policies. For more information, see [AV64 supported Azure regions](architecture-private-clouds.md#azure-region-availability-zone-az-to-sku-mapping-table) | June 2024 | | VMware HCX version 4.8.0 Network Extension (NE) Appliance VMs running in High Availability (HA) mode may experience intermittent Standby to Active failover. For more information, see [HCX - NE appliances in HA mode experience intermittent failover (96352)](https://kb.vmware.com/s/article/96352) | Jan 2024 | Avoid upgrading to VMware HCX 4.8.0 if you're using NE appliances in a HA configuration. | Feb 2024 - Resolved in [VMware HCX 4.8.2](https://docs.vmware.com/en/VMware-HCX/4.8.2/rn/vmware-hcx-482-release-notes/index.html) |
-| [VMSA-2024-0006](https://www.vmware.com/security/advisories/VMSA-2024-0006.html) ESXi Use-after-free and Out-of-bounds write vulnerability | March 2024 | Microsoft has confirmed the applicability of the vulnerabilities and is rolling out the provided VMware updates. | Ongoing 2024 - Resolved in [vCenter Server 8.0 U2b & ESXi 8.0 U2b](architecture-private-clouds.md#vmware-software-versions) |
+| [VMSA-2024-0006](https://www.vmware.com/security/advisories/VMSA-2024-0006.html) ESXi Use-after-free and Out-of-bounds write vulnerability | March 2024 | For ESXi 7.0, Microsoft worked with Broadcom on an AVS-specific hotfix as part of the [ESXi 7.0U3o](https://docs.vmware.com/en/VMware-vSphere/7.0/rn/vsphere-vcenter-server-70u3o-release-notes/index.html) rollout. For the 8.0 rollout, Azure VMware Solution is deploying [vCenter Server 8.0 U2b & ESXi 8.0 U2b](architecture-private-clouds.md#vmware-software-versions), which isn't vulnerable. | August 2024 - Resolved in [ESXi 7.0U3o](https://docs.vmware.com/en/VMware-vSphere/7.0/rn/vsphere-vcenter-server-70u3o-release-notes/index.html) and [vCenter Server 8.0 U2b & ESXi 8.0 U2b](architecture-private-clouds.md#vmware-software-versions) |
| When I run the VMware HCX Service Mesh Diagnostic wizard, all diagnostic tests will be passed (green check mark), yet failed probes will be reported. See [HCX - Service Mesh diagnostics test returns 2 failed probes](https://knowledge.broadcom.com/external/article?legacyId=96708) | 2024 | None, this will be fixed in 4.9+. | N/A | | [VMSA-2024-0011](https://support.broadcom.com/web/ecx/support-content-notification/-/external/content/SecurityAdvisories/0/24308) Out-of-bounds read/write vulnerability (CVE-2024-22273) | June 2024 | Microsoft has confirmed the applicability of the CVE-2024-22273 vulnerability and it will be addressed in the upcoming 8.0u2b Update. | July 2024 | | [VMSA-2024-0012](https://support.broadcom.com/web/ecx/support-content-notification/-/external/content/SecurityAdvisories/0/24453) Multiple Vulnerabilities in the DCERPC Protocol and Local Privilege Escalations | June 2024 | Microsoft, working with Broadcom, adjudicated the risk of these vulnerabilities at an adjusted Environmental Score of [6.8](https://www.first.org/cvss/calculator/3.1#CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H/MAC:L/MPR:H/MUI:R) or lower. Adjustments from the base score were possible due to the network isolation of the Azure VMware Solution vCenter Server (ports 2012, 2014, and 2020 aren't exposed via any interactive network path) and multiple levels of authentication and authorization necessary to gain interactive access to the vCenter Server network segment. A plan is being put in place to address these vulnerabilities at a future date TBD. | N/A |
azure-vmware Configure Azure Elastic San https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/configure-azure-elastic-san.md
Azure Elastic storage area network (SAN) addresses the problem of workload optim
The following prerequisites are required to continue. -- Register for the preview by filling out the [form](https://forms.office.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbR8FVh9RJVPdOk_mdTpp--pZUN0RKUklROEc4UE1RRFpRMkhNVFAySTM1TC4u). - Verify you have a Dev/Test private cloud in a [region that Elastic SAN is available in](../storage/elastic-san/elastic-san-create.md). - Know the availability zone your private cloud is in. - In the UI, select an Azure VMware Solution host.
azure-vmware Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/introduction.md
Title: Introduction
description: Learn the features and benefits of Azure VMware Solution to deploy and manage VMware-based workloads in Azure. Previously updated : 6/3/2024 Last updated : 9/16/2024
Azure VMware Solution implements a shared responsibility model that defines dist
The shared responsibility matrix table outlines the main tasks that customers and Microsoft each handle in deploying and managing both the private cloud and customer application workloads. The following table provides a detailed list of roles and responsibilities between the customer and Microsoft, which encompasses the most frequent tasks and definitions. For further questions, contact Microsoft.
backup Backup Azure Troubleshoot Blob Backup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-troubleshoot-blob-backup.md
Title: Troubleshoot Blob backup and restore issues
-description: In this article, learn about symptoms, causes, and resolutions of Azure Backup failures related to Blob backup and restore.
+description: In this article, learn about symptoms, causes, and resolutions of Azure Backup failures related to Azure Blob backup and restore.
Previously updated : 07/24/2024 Last updated : 09/16/2024
This article provides troubleshooting information to address issues you encounte
**Error message**: Incorrect containers selected for operation.
-**Recommendation**: This error may occur if one or more containers included in the scope of protection no longer exist in the protected storage account. We recommend to re-trigger the operation after modifying the protected container list using the edit backup instance option.
+**Recommendation**: This error may occur if one or more containers included in the scope of protection no longer exist in the protected storage account. We recommend that you retrigger the operation after you modify the protected container list by using the edit backup instance option.
### UserErrorCrossTenantOrsPolicyDisabled
This article provides troubleshooting information to address issues you encounte
**Error message**: The operation can't be performed while a restore is in progress on the source account. **Recommendation**: You need to retrigger the operation once the in-progress restore completes. +
+## Common errors for Azure Blob vaulted backup
+
+### UserErrorInvalidParameterInRequest
+
+**Error code**: `UserErrorInvalidParameterInRequest`
+
+**Error message**: Request parameter is invalid.
+
+**Recommended action**: Retry the operation with valid inputs.
++
+### UserErrorRequestDisallowedByAzurePolicy
+
+**Error code**: `UserErrorRequestDisallowedByAzurePolicy`
+
+**Error message**: An Azure policy that's configured on the resource is preventing the operation.
+
+**Recommended action**: Correct the policy and retry the operation.
+
+### LongRunningRestoreTrackingFailure
+
+**Error code**: `LongRunningRestoreTrackingFailure`
++
+**Error message**: Failed to track the long-running restore operation. The operation is still running and is expected to complete the restore of data.
+
+**Recommended action**: Track further progress of this operation by using the storage account's activity log for the **Restore blob ranges** operation.
+
+### LongRunningBackupTrackingFailure
+
+**Error code**: `LongRunningBackupTrackingFailure`
+
+**Error message**: Failed to track the long-running backup operation. The operation is still running and is expected to complete the backup of data.
+
+**Recommended action**: Track further progress of this operation by using the storage account's activity log, or check the blob replication status.
+
+### LongRunningOperationTrackingFailure
+
+**Error code**: `LongRunningOperationTrackingFailure`
+
+**Error message**: Failed to track the long-running operation. The operation is still running and is expected to complete.
+
+**Recommended action**: Track further progress of the operation in the storage account's activity log, as shown in the following sketch.
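+
+For any of these tracking failures, one way to inspect recent control-plane operations is to query the storage account's activity log. A minimal Azure CLI sketch, assuming placeholder IDs:
+
+```azurecli
+# List activity-log entries for the storage account from the last day.
+# The resource ID is a placeholder; adjust the offset to your job's duration.
+az monitor activity-log list \
+    --resource-id "/subscriptions/<subscription-id>/resourceGroups/<rg>/providers/Microsoft.Storage/storageAccounts/<account>" \
+    --offset 1d \
+    --query "[].{operation:operationName.localizedValue, status:status.value, time:eventTimestamp}" \
+    --output table
+```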
+
+### UserErrorVaultedBackupFeatureNotEnabled
+
+**Error code**: `UserErrorVaultedBackupFeatureNotEnabled`
+
+**Error message**: The subscription must be registered for the required features to use vaulted backup for blobs.
+
+**Recommended action**: Register your subscription for the `Microsoft.Storage/HardenBackup` and `Microsoft.DataProtection/BlobVaultedBackup` features.
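+
+For example, you can register both features with the Azure CLI. This sketch uses the feature names from the preceding recommended action; registration can take several minutes to propagate:
+
+```azurecli
+# Register the features required for blob vaulted backup, then refresh the
+# resource provider registrations so the change takes effect.
+az feature register --namespace Microsoft.Storage --name HardenBackup
+az feature register --namespace Microsoft.DataProtection --name BlobVaultedBackup
+az provider register --namespace Microsoft.Storage
+az provider register --namespace Microsoft.DataProtection
+```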
+
+### ObjectReplicationPolicyCreationFailure
+
+**Error code**: `ObjectReplicationPolicyCreationFailure`
+
+**Error message**: Failed to create object replication policy on the storage account.
+
+**Recommended action**: Wait for a few minutes, and then try the operation again. If the issue persists, contact Microsoft Support.
+
+### UserErrorRequiredStorageFeaturesDisabled
+
+**Error code**: `UserErrorRequiredStorageFeaturesDisabled`
+
+**Error message**: The operation failed due to required storage feature(s) being disabled on the storage account.
+
+**Recommended action**: Enable the required features for Azure Backup on the source storage account, as in the sketch that follows.
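+
+Which features are required depends on the backup tier, so treat the following Azure CLI sketch as illustrative only. It enables blob features that Azure Backup commonly relies on (versioning, change feed, delete retention, and point-in-time restore), with placeholder names and retention values:
+
+```azurecli
+# Enable blob-service features on the source storage account (placeholders).
+# --restore-days must be less than --delete-retention-days.
+az storage account blob-service-properties update \
+    --resource-group myResourceGroup \
+    --account-name mystorageaccount \
+    --enable-versioning true \
+    --enable-change-feed true \
+    --enable-delete-retention true \
+    --delete-retention-days 7 \
+    --enable-restore-policy true \
+    --restore-days 6
+```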
+
+### UserErrorSelectedContainerPartOfAnotherORPolicy
+
+**Error code**: `UserErrorSelectedContainerPartOfAnotherORPolicy`
+
+**Error message**: The selected container is present in another Object replication policy. A given container can be part of only one OR policy at a time.
+
+**Recommended action**: Remove the container from the other object replication policy, or change the protection intent.
+
+### UserErrorTooManyRestoreCriteriaGivenForBlobRestore
+
+**Error code**: `UserErrorTooManyRestoreCriteriaGivenForBlobRestore`
+
+**Error message**: The count of containers passed in the restore request exceeds the supported limit.
+
+**Recommended action**: Reduce the number of containers in the item-level restore request to stay within the limit.
+
+### UserErrorTooManyPrefixesGivenForBlobRestore
+
+**Error code**: `UserErrorTooManyPrefixesGivenForBlobRestore`
+
+**Error message**: The restore operation failed because too many blob prefixes were given for a container.
+
+**Recommended action**: Reduce the number of blob prefixes specified for each container in the restore request.
+
+### UserErrorStopProtectionNotSupportedForBlobOperationalBackup
+
+**Error code**: `UserErrorStopProtectionNotSupportedForBlobOperationalBackup`
+
+**Error message**: Stop protection is not supported for operational tier blob backups.
+
+**Recommended action**: None. You can't stop protection for operational-tier blob backups.
+
+### UserErrorAzureStorageAccountManagementOperationLimitReached
+
+**Error code**: `UserErrorAzureStorageAccountManagementOperationLimitReached`
+
+**Error message**: Requested operation failed due to throttling by the Azure service. Azure Storage account management list operations limit reached.
+
+**Recommended action**: Wait for a few minutes, and then try the operation again.
+
+### UserErrorBlobVersionDeletedDuringBackup
+
+**Error code**: `UserErrorBlobVersionDeletedDuringBackup`
+
+**Error message**: The backup failed due to one or more blob versions getting deleted in the backup job duration.
+
+**Recommended action**: We recommend that you avoid modifying blob versions while a backup job is in progress. Ensure that the minimum retention configured for versions in the lifecycle management policy is 7 days.
+
+### UserErrorBlobVersionArchivedDuringBackup
+
+**Error code**: `UserErrorBlobVersionArchivedDuringBackup`
+
+**Error message**: The backup failed due to one or more blob versions moving to the archive tier in the backup job duration.
+
+**Recommended action**: We recommend that you avoid modifying blob versions while a backup job is in progress. Ensure that the minimum retention configured for versions in the lifecycle management policy is 7 days.
+
+### UserErrorBlobVersionArchivedAndDeletedDuringBackup
+
+**Error code**: `UserErrorBlobVersionArchivedAndDeletedDuringBackup`
+
+**Error message**: The backup failed due to one or more blob versions moving to the archive tier or getting deleted in the backup job duration.
+
+**Recommended action**: We recommend that you avoid modifying blob versions while a backup job is in progress. Ensure that the minimum retention configured for versions in the lifecycle management policy is 7 days, as in the sketch that follows.
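+
+A minimal lifecycle management rule that satisfies the 7-day version retention guidance might look like the following Azure CLI sketch (account and rule names are placeholders):
+
+```azurecli
+# Delete blob versions only after they're at least 7 days old, so versions
+# created during a backup job aren't removed while the job runs.
+az storage account management-policy create \
+    --resource-group myResourceGroup \
+    --account-name mystorageaccount \
+    --policy '{
+      "rules": [{
+        "enabled": true,
+        "name": "retain-versions-7-days",
+        "type": "Lifecycle",
+        "definition": {
+          "filters": { "blobTypes": [ "blockBlob" ] },
+          "actions": {
+            "version": { "delete": { "daysAfterCreationGreaterThan": 7 } }
+          }
+        }
+      }]
+    }'
+```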
+
+### UserErrorContainerHasImmutabilityPolicyDuringRestore
+
+**Error code**: `UserErrorContainerHasImmutabilityPolicyDuringRestore`
+
+**Error message**: One or more containers selected for the restore have an immutability policy.
+
+**Recommended action**: Remove the immutability policy from the container, and then retry the operation.
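+
+If the policy is an unlocked, time-based retention policy, you can remove it with the Azure CLI. This is a sketch with placeholder names; the exact parameters (such as the `--etag` value taken from the `show` output) should be verified with `az storage container immutability-policy --help`, and locked policies can't be deleted:
+
+```azurecli
+# Inspect the container's immutability policy and note its etag.
+az storage container immutability-policy show \
+    --resource-group myResourceGroup \
+    --account-name mystorageaccount \
+    --container-name mycontainer
+
+# Delete the unlocked policy by using the etag from the show output.
+az storage container immutability-policy delete \
+    --resource-group myResourceGroup \
+    --account-name mystorageaccount \
+    --container-name mycontainer \
+    --etag "<etag-from-show>"
+```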
+
+### UserErrorArchivedRecoveryPointsRequestedForRestore
+
+**Error code**: `UserErrorArchivedRecoveryPointsRequestedForRestore`
+
+**Error message**: One or more containers selected for the restore have archived recovery points.
+
+**Recommended action**: Rehydrate the archived recovery points, or remove containers that have archived recovery points from the request, and then retry the operation.
+
+### UserErrorObjectReplicationPolicyDeletionFailureOnRestoreTarget
+
+**Error code**: `UserErrorObjectReplicationPolicyDeletionFailureOnRestoreTarget`
+
+**Error message**: Failed to delete object replication policy on the restore target storage account.
+
+**Recommended action**: The restore succeeded, but the object replication policy created during the restore operation wasn't cleaned up after the restore finished. Check whether a resource lock is preventing deletion, and delete the policy to prevent issues in future restores.
+
+### UserErrorRestoreFailurePreviousObjectReplicationPolicyNotDeleted
+
+**Error code**: `UserErrorRestoreFailurePreviousObjectReplicationPolicyNotDeleted`
+
+**Error message**: Failed to restore the backup instance because an object replication policy from a previous restore is still on the restore target storage account.
+
+**Recommended action**: Remove the old object replication policy from the restore target, and then retry the operation.
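+
+As a hedged sketch (placeholder names, and assuming the policy was left behind by a previous restore), you can find and remove the stale policy with the Azure CLI:
+
+```azurecli
+# List object replication policies on the restore target account.
+az storage account or-policy list \
+    --resource-group myResourceGroup \
+    --account-name mytargetaccount
+
+# Delete the stale policy by its ID from the list output.
+az storage account or-policy delete \
+    --resource-group myResourceGroup \
+    --account-name mytargetaccount \
+    --policy-id "<policy-id-from-list>"
+```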
+
+### UserErrorKeyVaultKeyWasNotFound
+
+**Error code**: `UserErrorKeyVaultKeyWasNotFound`
+
+**Error message**: The operation failed because the key vault key that's needed to unwrap the encryption key wasn't found.
+
+**Recommended action**: Check the key vault settings.
+
+### UserErrorInvalidResourceUriInRequest
+
+**Error code**: `UserErrorInvalidResourceUriInRequest`
+
+**Error message**: The operation failed due to an invalid resource URI in the request.
+
+**Recommended action**: Fix the resource URI format in the request object, and then trigger the operation again.
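+
+To confirm the expected shape of the URI, you can print the storage account's full resource ID with the Azure CLI (names are placeholders) and compare it with the value in your request:
+
+```azurecli
+# Print the full resource URI of the storage account.
+az storage account show \
+    --name mystorageaccount \
+    --resource-group myResourceGroup \
+    --query id \
+    --output tsv
+# Expected shape:
+# /subscriptions/<subscription-id>/resourceGroups/<rg>/providers/Microsoft.Storage/storageAccounts/<account>
+```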
+
+### UserErrorDatasourceAndBackupVaultLocationMismatch
+
+**Error code**: `UserErrorDatasourceAndBackupVaultLocationMismatch`
+
+**Error message**: Operation failed because the datasource location is different from the Backup Vault location.
+
+**Recommended action**: Ensure that the datasource and the Backup Vault are in the same location.
+
+### LinkedAuthorizationFailed
+
+**Error code**: `LinkedAuthorizationFailed`
+
+**Error message**: The client [user name] with object ID has permissions to perform the required action [operation name] on scope [vault name]; however, it doesn't have the required permissions to perform the action(s) [operation name] on the linked scope [datasource name].
+
+**Recommended action**: Ensure that you have read access on the datasource associated with this backup instance so that you can trigger the restore operation. After the required permissions are in place, retry the operation.
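+
+For example, a role assignment that grants the restoring identity read access on the source storage account might look like the following Azure CLI sketch. The `Reader` role and all IDs are placeholders; your scenario might require a different built-in role:
+
+```azurecli
+# Grant read access on the datasource (storage account) to the identity
+# that triggers the restore. Role choice and IDs are illustrative.
+az role assignment create \
+    --assignee "<user-or-principal-object-id>" \
+    --role "Reader" \
+    --scope "/subscriptions/<subscription-id>/resourceGroups/<rg>/providers/Microsoft.Storage/storageAccounts/<account>"
+```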
baremetal-infrastructure About Nc2 On Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/baremetal-infrastructure/workloads/nc2-on-azure/about-nc2-on-azure.md
description: Learn about Nutanix Cloud Clusters on Azure and the benefits it off
Previously updated : 8/15/2024 Last updated : 9/16/2024
NC2 on Azure implements a shared responsibility model that defines distinct role
On-premises Nutanix environments require the Nutanix customer to support all the hardware and software for running the platform. For NC2 on Azure, Microsoft maintains the hardware for the customer. Microsoft manages the Azure BareMetal specialized compute hardware, along with the data plane and control plane for the underlay network. Microsoft also provides support when customers bring their existing Azure subscription, virtual network, Virtual WAN, and so on.
communication-services Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/whats-new.md
Title: What's new in Azure Communication Services
-description: All of the latest additions to Azure Communication Services
+description: Learn about the latest additions to Azure Communication Services.
# What's new in Azure Communication Services
-We created this page to keep you updated on new features, blog posts, and other useful information related to Azure Communication Services.
+Use this article to stay updated on new features and other useful information related to Azure Communication Services.
[!INCLUDE [Survey Request](./includes/survey-request.md)] ## May 2024
-### Data Retention with Chat threads
+### Data retention with chat threads
-Developers can now create chat threads with a retention policy between 30 and 90 days. This feature is in public preview.
+Developers can now create chat threads with a retention policy of 30 to 90 days. This feature is in preview.
-This policy is optional – developers can choose to create a chat thread with infinite retention (as always) or set a retention policy between 30 and 90 days. If the thread needs to be kept for longer than 90 days, you can extend the time using the update chat thread property API. The policy is geared for data management in organizations that need to move data into their archives for historical purposes or delete the data within a given period.
+Setting a retention policy is optional. Developers can choose to create a chat thread with infinite retention (the default) or set a retention policy of 30 to 90 days. If you need to keep the thread for longer than 90 days, you can extend the time by using the Update Chat Thread Properties API. The policy is geared toward data management in organizations that need to move data into their archives for historical purposes or delete the data within a particular period.
-Existing chat threads aren't affected by the policy.
+The policy doesn't affect existing chat threads.
For more information, see:+ - [Chat concepts](./concepts/chat/concepts.md#chat-data) - [Create Chat Thread - REST API](/rest/api/communication/chat/chat/create-chat-thread#noneretentionpolicy) - [Update Chat Thread Properties - REST API](/rest/api/communication/chat/chat-thread/update-chat-thread-properties#noneretentionpolicy) ### PowerPoint Live
-Now in general availability, PPT Live gives both the presenter and audience an inclusive and engaging experience. PPT Live combines the best parts of presenting in PowerPoint with the connection and collaboration of a Microsoft Teams meeting.
-
+Now in general availability, PowerPoint Live gives both the presenter and the audience an engaging experience. PowerPoint Live combines presenting in PowerPoint with the connection and collaboration of a Microsoft Teams meeting.
-Meeting participants can now view PowerPoint Live sessions initiated by a Teams client using the Azure Communication Services Web UI Library. Participants can follow along with a presentation and view presenter annotations. Developers can use this function via our composites including `CallComposite` and `CallWithChatComposite`, and through components such as `VideoGallery`.
-For more information, see [Introducing PowerPoint Live in Microsoft Teams](https://techcommunity.microsoft.com/t5/microsoft-365-blog/introducing-powerpoint-live-in-microsoft-teams/ba-p/2140980) and [Present from PowerPoint Live in Microsoft Teams](https://support.microsoft.com/en-us/office/present-from-powerpoint-live-in-microsoft-teams-28b20e74-7165-499c-9bd4-0ad975d448ad).
+Meeting participants can now view PowerPoint Live sessions initiated by a Teams client by using the Azure Communication Services Web UI Library. Participants can follow along with a presentation and view presenter annotations. Developers can use this function through composites such as `CallComposite` and `CallWithChatComposite`, and through components such as `VideoGallery`.
-### Live Reactions
+For more information, see [Introducing PowerPoint Live in Microsoft Teams (blog post)](https://techcommunity.microsoft.com/t5/microsoft-365-blog/introducing-powerpoint-live-in-microsoft-teams/ba-p/2140980) and [Present from PowerPoint Live in Microsoft Teams](https://support.microsoft.com/en-us/office/present-from-powerpoint-live-in-microsoft-teams-28b20e74-7165-499c-9bd4-0ad975d448ad).
-During live calls, participants can react with emojis: like, love, applause, laugh, and surprise.
+### Live reactions
+Now generally available, the updated UI library composites and components include reactions during live calls. The UI Library supports these reactions: &#128077; like, &#129505; love, &#128079; applause, &#128514; laugh, &#128558; surprise.
-Now generally available, the updated UI library composites and components include call reactions. The UI Library supports the following list of live call reactions: &#128077; like reaction, &#129505; heart reaction, &#128079; applause reaction, &#128514; laughter reaction, &#128558; surprise reaction.
-Call reactions are associated with the participant sending it and are visible to all types of participants (in-tenant, guest, federated, anonymous). Call reactions are supported in all types of calls such as Rooms, groups, and meetings (scheduled, private, channel) of all sizes (small, large, extra-large).
+Call reactions are associated with the participant who sends them and are visible to all types of participants (in-tenant, guest, federated, anonymous). Call reactions are supported in all types of calls, such as rooms, groups, and meetings (scheduled, private, channel) of all sizes (small, large, extra large).
-Adding this feature encourages greater engagement within calls, as people can now react in real time without needing to speak or interrupt.
+Adding this feature encourages greater engagement within calls, because people can react in real time without needing to speak or interrupt. Developers can use this feature by adding:
-- The ability to have live call reactions added to `CallComposite` and `CallwithChatComposite` on web.-- Call reactions added at the component level.
+- The ability to have live call reactions to `CallComposite` and `CallwithChatComposite` composites on the web.
+- Call reactions at the component level.
For more information, see [Reactions](./how-tos/calling-sdk/reactions.md).
-### Closed Captions
+### Closed captions
Promote accessibility by displaying text of the audio in video calls. Already available for app-to-Teams calls, this general availability release adds support for closed captions in all app-to-app calls.
-For more information, see [Closed Captions overview](./concepts/voice-video-calling/closed-captions.md).
+For more information, see [Closed captions overview](./concepts/voice-video-calling/closed-captions.md).
You can also learn more about [Azure Communication Services interoperability with Teams](./concepts/teams-interop.md). ### Copilot for Call Diagnostics
-AI can help app developers across every step of the development lifecycle: designing, building, and operating. Developers with [Microsoft Copilot for Azure (public preview)](/azure/copilot/overview) can use Copilot within Call Diagnostics to understand and resolve many calling issues. For example, developers can ask Copilot questions, such as:
+AI can help app developers across every step of the development lifecycle: designing, building, and operating. Developers can use [Microsoft Copilot in Azure (preview)](/azure/copilot/overview) within Call Diagnostics to understand and resolve many calling issues. For example, developers can ask Copilot these questions:
- How do I run network diagnostics in Azure Communication Services VoIP calls? - How can I optimize my calls for poor network conditions?-- How do I fix common causes of poor media streams in Azure Communication calls?
+- How do I fix common causes of poor media streams in Azure Communication Services calls?
- How can I fix the subcode 41048, which caused the video part of my call to fail?
-Developers can use Call Diagnostics to understand call quality and reliability across the organization to deliver a great customer calling experience. Many issues can affect the quality of your calls, such as poor internet connectivity, software compatibility issues, and technical difficulties with devices.
+Call Diagnostics can help developers understand call quality and reliability, so they can deliver a great calling experience to customers. Many issues can affect the quality of your calls, such as poor internet connectivity, software incompatibilities, and technical difficulties with devices.
-Getting to the root cause of these issues can alleviate potentially frustrating situations for all call participants, whether they're a patient checking in for a doctor's call, or a student taking a lesson with their teacher. Call Diagnostics enables developers to drill down into the data to identify root problems and find a solution. You can use the built-in visualizations in Azure portal or connect underlying usage and quality data to your own systems.
+Getting to the root cause of these issues can alleviate potentially frustrating situations for all call participants, whether they're patients checking in for a doctor's call or students taking a lesson with a teacher. Call Diagnostics enables developers to drill down into the data to identify root problems and find a solution. You can use the built-in visualizations in the Azure portal or connect underlying usage and quality data to your own systems.
For more information, see [Call Diagnostics](./concepts/voice-video-calling/call-diagnostics.md). ## April 2024
-### Business-to-consumer extensibility with Microsoft Teams for Calling
+### Business-to-consumer extensibility with Microsoft Teams for calling
-Now in general availability, developers can take advantage of calling interoperability for Microsoft Teams users in Azure Communication Services Calling workflows.
+Developers can take advantage of calling interoperability for Microsoft Teams users in Azure Communication Services calling workflows. This feature is now in general availability.
-Developers can use [Call Automation APIs](./concepts/call-automation/call-automation.md) to bring Teams users into business-to-consumer (B2C) calling workflows and interactions, helping you deliver advanced customer service solutions. This interoperability is offered over VoIP to reduce telephony infrastructure overhead. Developers can add Teams users to Azure Communication Services calls using the participant's Entra object ID (OID).
+Developers can use [Call Automation APIs](./concepts/call-automation/call-automation.md) to bring Teams users into business-to-consumer (B2C) calling workflows and interactions, which helps you deliver advanced customer service solutions. This interoperability is offered over VoIP to reduce telephony infrastructure overhead. Developers can add Teams users to Azure Communication Services calls by using the participants' Microsoft Entra object IDs (OIDs).
-#### Use Cases
+#### Use cases
-- **Teams as an extension of agent desktop**: Connect your CCaaS solution to Teams and enable your agents to handle customer calls on Teams. Having Teams as the single-pane-of-glass solution for both internal and B2C communication increases agent productivity and empowers them to deliver first-class service to customers.
+- **Teams as an extension of an agent desktop**: Connect your contact center as a service (CCaaS) solution to Teams and enable your agents to handle customer calls on Teams. Having Teams as the single-pane-of-glass solution for both internal and B2C communication can increase agents' productivity and empower them to deliver first-class service to customers.
-- **Expert Consultation**: Businesses can use Teams to invite subject matter experts into their customer service workflows for expedient issue resolution and improve first call resolution rate.
+- **Expert consultation**: Businesses can use Teams to invite subject matter experts into their customer service workflows for expedient issue resolution and to improve the rate of first-call resolution.
-Azure Communication Services B2C extensibility with Microsoft Teams makes it easy for customers to reach sales and support teams and for businesses to deliver effective customer experiences.
+Azure Communication Services B2C extensibility with Microsoft Teams helps customers reach sales and support teams and helps businesses deliver effective customer experiences.
-For more information, see [Call Automation workflows interop with Microsoft Teams](./concepts/call-automation/call-automation-teams-interop.md).
+For more information, see [Call Automation workflow interoperability with Microsoft Teams](./concepts/call-automation/call-automation-teams-interop.md).
-### Image Sharing in Microsoft Teams meetings
+### Image sharing in Microsoft Teams meetings
-Microsoft Teams users can now share images with Azure Communication Services users in the context of a Teams meeting. This feature is now generally available. Image sharing enhances collaboration in real time for meetings. Image overlay is also supported for users to look at it in detail.
+Microsoft Teams users can share images with Azure Communication Services users in the context of a Teams meeting. This feature is now generally available. Image sharing enhances collaboration in real time for meetings. Image overlay is also supported so that users can view a shared image in detail.
-Image sharing is helpful in many scenarios, such as a business sharing photos showcasing their work or doctors sharing images with patients for after care instructions.
+Image sharing is helpful in many scenarios, such as a business that shares photos to showcase its work or doctors who share images with patients for aftercare instructions.
-Try out this feature using either our UI Library or the Chat SDK. The SDK is available in C# (.NET), JavaScript, Python, and Java:
+Try out this feature by using either the UI Library or the Chat SDK. The SDK is available in C# (.NET), JavaScript, Python, and Java. For more information, see:
-- [Enable inline image using UI Library in Teams Meetings](./tutorials/inline-image-tutorial-interop-chat.md)-- [Sample: Image Sharing](https://azure.github.io/communication-ui-library/?path=/docs/composites-call-with-chat-jointeamsmeeting--join-teams-meeting#adding-image-sharing)
+- [Enable an inline image by using the UI Library in Teams meetings](./tutorials/inline-image-tutorial-interop-chat.md)
+- [GitHub sample: Adding image sharing](https://azure.github.io/communication-ui-library/?path=/docs/composites-call-with-chat-jointeamsmeeting--join-teams-meeting#adding-image-sharing)
-### Deep Noise Suppression for Desktop
+### Deep noise suppression
-Deep noise suppression is currently in public preview. Noise suppression improves VoIP and video calls by eliminating background noise, making it easier to talk and listen. For example, if you're taking an Azure Communication Services WebJS call in a coffee shop with considerable noise, turning on noise suppression can significantly improve the calling experience by eliminating the background noise from the shop.
+Deep noise suppression is currently in preview. Noise suppression improves VoIP and video calls by eliminating background noise, so it's easier to talk and listen. For example, if you're taking an Azure Communication Services WebJS call in a coffee shop, turning on noise suppression can improve the calling experience by eliminating background sounds from the shop.
For more information, see [Add audio quality enhancements to your audio calling experience](./tutorials/audio-quality-enhancements/add-noise-supression.md).
-### Calling native SDKs for Android, iOS, and Windows
+### Calling SDKs for Android, iOS, and Windows
-We updated the Calling native SDKs to improve the customer experience. This release includes:
+We updated the native Calling SDKs to improve the customer experience. This release includes:
- Custom background for video calls - Proxy configuration-- Android TelecomManager-- Unidirectional Data Channel-- Time To Live lifespan for push notifications
+- Android TelecomManager integration
+- Unidirectional communication in Data Channel
+- Time-to-live lifespan for push notifications
#### Custom background for video calls
-Custom background for video calls is generally available. This feature enables customers to remove distractions behind them. The custom image backgrounds feature enables customers to upload their own personalized images for use as background.
+Custom background for video calls is generally available. This feature enables customers to remove distractions behind them. Customers can upload their own personalized images for use as a background.
-For example, business owners can use the Calling SDK to show custom backgrounds in place of the actual background. You can, for example, upload an image of a modern and spacious office and set it as its background for video calls. Anyone who joins the call sees the customized background, which looks realistic and natural. You can also use custom branding images as background to show a fresh image to your customers.
+For example, business owners can use the Calling SDK to show custom backgrounds in place of the actual background. You can, for example, upload an image of a modern and spacious office and set it as the background for video calls. Anyone who joins the call sees the customized background, which looks realistic and natural. You can also use custom branding images as a background to show fresh images to your customers.
-For more information, see [QuickStart: Add video effects to your video calls](./quickstarts/voice-video-calling/get-started-video-effects.md).
+For more information, see [Quickstart: Add video effects to your video calls](./quickstarts/voice-video-calling/get-started-video-effects.md).
#### Proxy configuration
-Proxy configuration is now generally available. Some environments such as highly regulated industries or those dealing with confidential information require proxies to secure and control network traffic. You can use the Calling SDK to configure the HTTP and media proxies for your Azure Communication Services calls. This way, you can ensure that your communications are compliant with the network policies and regulations. You can use the native SDK methods to set the proxy configuration for your app.
+Proxy configuration is now generally available. Some environments, such as industries that are highly regulated or that deal with confidential information, require proxies to help secure and control network traffic. You can use the Calling SDK to configure the HTTP and media proxies for your Azure Communication Services calls. This way, you can ensure that your communications are compliant with network policies and regulations. You can use the native SDK methods to set the proxy configuration for your app.
+
+For more information, see [Proxy your calling traffic](./tutorials/proxy-calling-support-tutorial.md?pivots=platform-android).
-For more information, see [Tutorial: Proxy your calling traffic](./tutorials/proxy-calling-support-tutorial.md?pivots=platform-android).
+#### Android TelecomManager integration
-#### Android TelecomManager
+Android TelecomManager manages audio and video calls on Android devices. Use Android TelecomManager to provide a consistent user experience across various Android apps and devices, such as showing incoming and outgoing calls in the system UI, routing audio to devices, and handling call interruptions.
-Android TelecomManager manages audio and video calls on Android devices. Use Android TelecomManager to provide a consistent user experience across different Android apps and devices, such as showing incoming and outgoing calls in the system UI, routing audio to the device, and handling call interruptions. Now you can integrate your app with the Android TelecomManager to take advantage of its features for your custom calling scenarios.
+Now you can integrate your app with Android TelecomManager to take advantage of its features for your custom calling scenarios. For more information, see [Integrate with TelecomManager](./how-tos/calling-sdk/telecommanager-integration.md).
-For more information, see [Integrate with TelecomManager on Android](./how-tos/calling-sdk/telecommanager-integration.md).
+#### Unidirectional communication in Data Channel
-#### Unidirectional Data Channel
+The Data Channel API is generally available. Data Channel includes unidirectional communication, which enables real-time messaging during audio and video calls. By using this API, you can integrate data exchange functions into the applications to help provide a seamless communication experience for users.
-The Data Channel API is generally available. Data Channel includes unidirectional communication, which enables real-time messaging during audio and video calls. Using this API, you can integrate data exchange functions into the applications, providing a seamless communication experience for users. The Data Channel API enables users to instantly send and receive messages during an ongoing audio or video call, promoting smooth and efficient communication. In group call scenarios, a participant can send messages to a single participant, a specific set of participants, or all participants within the call. This flexibility enhances communication and collaboration among users during group interactions.
+The Data Channel API enables users to instantly send and receive messages during an ongoing audio or video call, promoting smooth and efficient communication. In a group call, a participant can send messages to a single participant, a specific set of participants, or all participants within the call. This flexibility enhances communication and collaboration among users during group interactions.
For more information, see [Data Channel](./concepts/voice-video-calling/data-channel.md).
-#### Time To Live lifespan for push notifications
+#### Time-to-live lifespan for push notifications
-The Time To Live (TTL) for push notifications is now generally available. TTL is the duration for which a push notification token is valid. Using a longer duration TTL can help your app reduce the number of new token requests from your users and improve the experience.
+The time to live (TTL) for push notifications is now generally available. TTL is the duration for which a push notification token is valid. Using a longer-duration TTL can help your app reduce the number of new token requests from your users and improve the experience.
-For example, suppose you created an app that enables patients to book virtual medical appointments. The app uses push notifications to display incoming call UI when the app isn't in the foreground. Previously, the app had to request a new push notification token from the user every 24 hours, which could be annoying and disruptive. With the extended TTL feature, you can now configure the push notification token to last for up to six months, depending on your business needs. This way, the app can avoid frequent token requests and provide a smoother calling experience for your customers.
+For example, suppose you created an app that enables patients to book virtual medical appointments. The app uses push notifications to display an incoming call UI when the app isn't in the foreground. Previously, the app had to request a new push notification token from the user every 24 hours, which could be annoying and disruptive. With the extended TTL feature, you can now configure the push notification token to last for up to six months, depending on your business needs. This way, the app can avoid frequent token requests and provide a smoother calling experience for your customers.
-For more information, see [TTL token in Enable push notifications for calls](./how-tos/calling-sdk/push-notifications.md).
+For more information, see [Enable push notifications for calls](./how-tos/calling-sdk/push-notifications.md#ttl-token).
### Calling SDK native UI Library updates
-This update includes Troubleshooting on the native UI Library for Android and iOS, and Audio only mode in the UI Library.
-
-Using the Azure Communication Services Calling SDK native UI Library, you can now generate encrypted logs for troubleshooting and provide your customers with an optional Audio only mode for joining calls.
+By using the Azure Communication Services Calling SDK native UI Library, you can now generate encrypted logs for troubleshooting and provide customers with an optional audio-only mode for joining calls.
#### Troubleshooting on the native UI Library for Android and iOS
-Now in general availability, you can encrypt logs when troubleshooting on the Calling SDK native UI Library for Android and iOS. You can easily generate encrypted logs to share with Azure support. While ideally calls just work, or developers self-remediate issues, customers always have Azure support as a last-line-of-defense. And we strive to make those engagements as easy and fast as possible.
+Now in general availability, you can encrypt logs when troubleshooting on the Calling SDK native UI Library for Android and iOS. You can easily generate encrypted logs to share with Azure support. Ideally, calls just work, or developers self-remediate issues. But customers always have Azure support as a last line of defense. And we strive to make those engagements as easy and fast as possible.
For more information, see [Troubleshoot the UI Library](./how-tos/ui-library-sdk/troubleshooting.md).
-#### Audio only mode in the UI Library
+#### Audio-only mode in the UI Library
-The Audio only mode in the Calling SDK UI Library is now generally available. It enables participants to join calls using only their audio, without sharing or receiving video. Participants can use this feature to conserve bandwidth and maximize privacy. When activated, the Audio only mode automatically disables the video function for both sending and receiving streams and adjusts the UI to reflect this change by removing video-related controls.
+The audio-only mode in the Calling SDK UI Library is now generally available. It enables participants to join calls by using only their audio, without sharing or receiving video. Participants can use this feature to conserve bandwidth and maximize privacy.
-For more information, see [Enable audio only mode in the UI Library](./how-tos/ui-library-sdk/audio-only-mode.md).
+When audio-only mode is activated, it automatically disables the video function for both sending and receiving streams. It adjusts the UI to reflect this change by removing video-related controls.
+For more information, see [Enable audio-only mode in the UI Library](./how-tos/ui-library-sdk/audio-only-mode.md).
## March 2024
-### Calling to Microsoft Teams Call Queues and Auto Attendants
+### Calling to Microsoft Teams call queues and auto attendants
-Azure Communication Services Calling to Teams call queues and auto attendants and click-to-call for Teams Phone are now generally available. Organizations can enable customers to easily reach their sales and support members on Microsoft Teams with just a single click. When you add a [click-to-call widget](./tutorials/calling-widget/calling-widget-tutorial.md) onto a website, such as a **Sales** button that points to a sales department, or a **Purchase** button that points to procurement, customers are just one click away from a direct connection into a Teams call queue or auto attendant.
+Calling to Teams call queues and auto attendants is now generally available in Azure Communication Services, along with click-to-call for Teams Phone.
-Learn more about joining your calling app to a Teams [call queue](./quickstarts/voice-video-calling/get-started-teams-call-queue.md) or [auto attendant](./quickstarts/voice-video-calling/get-started-teams-auto-attendant.md), and about [building contact center applications](./tutorials/contact-center.md).
+Organizations can enable customers to quickly reach their sales and support members on Microsoft Teams. When you add a [click-to-call widget](./tutorials/calling-widget/calling-widget-tutorial.md) onto a website, such as a **Sales** button that points to a sales department or a **Purchase** button that points to procurement, customers are just one click away from a direct connection to a Teams call queue or auto attendant.
-### Email Updates
+Learn more about joining your calling app to a Teams [call queue](./quickstarts/voice-video-calling/get-started-teams-call-queue.md) or [auto attendant](./quickstarts/voice-video-calling/get-started-teams-auto-attendant.md), and about [building contact center applications](./tutorials/contact-center.md).
-Updates to Azure Communication Services Email service:
+### Email updates
-- SMTP-- Opt-out management-- PowerShell cmdlets-- CLI extension
+Updates to the Azure Communication Services email service include SMTP support, opt-out management, Azure PowerShell cmdlets, and Azure CLI extensions.
#### SMTP
-SMTP as a Service for Email is now generally available. Developers can use the SMTP support in Azure Communication Services to easily send emails, improve security features, and have more control over outgoing communications.
+SMTP support in Azure Communication Services email is now generally available. Developers can use it to easily send emails, improve security features, and have more control over outgoing communications.
-The SMTP Relay Service acts as a link between email clients and mail servers and helps deliver emails more effectively. It sets up a specialized relay infrastructure that not only handles higher throughput needs and successful email delivery, but also improves authentication to protect communication. This service also offers businesses a centralized platform that lets them manage outgoing emails for all B2C communications and get insights into email traffic.
+The SMTP relay service acts as a link between email clients and mail servers to help deliver emails more effectively. It sets up a specialized relay infrastructure that not only handles higher throughput needs and successful email delivery, but also improves authentication to help protect communication. This service also offers businesses a centralized platform that lets them manage outgoing emails for all B2C communications and get insights into email traffic.
-With this capability, customers can switch from on-premises SMTP solutions or link their line of business applications to a cloud-based solution platform with Azure Communication Services Email. SMTP as a Service enables:
+With this capability, customers can switch from on-premises SMTP solutions or link their line-of-business applications to a cloud-based solution platform with Azure Communication Services email. SMTP support enables:
-- Secure and reliable SMTP endpoint with TLS 1.2 encryptions-- Access with Microsoft Entra Application ID to secure authentication for sending emails using SMTP.-- High volume sending support for B2C communications using SMTP and REST APIs.-- The security and compliance to honor and respect data handling and privacy requirements that Azure promises to our customers.
+- A reliable SMTP endpoint with TLS 1.2 encryption.
+- Authentication with a Microsoft Entra application ID for sending emails via SMTP.
+- High-volume sending support for B2C communications via SMTP and REST APIs.
+- Compliance with data-handling and privacy requirements for customers.
-Learn more about [SMTP as a Service](./concepts/email/email-smtp-overview.md).
+For more information, see [Email SMTP support](./concepts/email/email-smtp-overview.md).
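+
+As a quick smoke test of the relay, you can send a message over SMTP with `curl`. This sketch assumes the documented `smtp.azurecomm.net` endpoint on port 587; the username and password formats depend on your Entra application setup, so treat all values as placeholders:
+
+```bash
+# Send a test message through the Azure Communication Services SMTP relay.
+curl --ssl-reqd \
+    --url "smtp://smtp.azurecomm.net:587" \
+    --user "<smtp-username>:<smtp-password>" \
+    --mail-from "donotreply@<your-verified-domain>" \
+    --mail-rcpt "customer@contoso.com" \
+    --upload-file message.txt
+```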
-#### Opt-out Management
+#### Opt-out management
-Email opt-out management, now in public preview, offers a powerful platform with a centralized managed unsubscribe list and opt-out preferences saved to our data store. This feature helps developers meet guidelines of email providers who often require one-click list-unsubscribe implementation in the emails sent from their platforms. Opt-out Management helps you identify and avoid significant delivery problems. You can maintain compliance by adding suppression list features to help improve reputation and enable customers to easily manage opt-outs.
+Email opt-out management, now in preview, offers a centralized unsubscribe list and opt-out preferences saved to a data store. This feature helps developers meet guidelines of email providers who require one-click list-unsubscribe implementation in the emails sent from their platforms.
+Opt-out management helps you identify and avoid delivery problems. You can maintain compliance by adding suppression list features to help improve reputation and enable customers to easily manage opt-outs.
+ Get started with [Manage email opt-out capabilities](./concepts/email/email-optout-management.md).
-#### PowerShell Cmdlets & CLI extension
+#### Azure PowerShell cmdlets and Azure CLI extensions
-##### PowerShell Cmdlets
+To enhance the developer experience, Azure Communication Services is introducing more Azure PowerShell cmdlets and Azure CLI extensions for working with email.
-To enhance the developer experience, Azure Communication Services is introducing more PowerShell cmdlets and Azure CLI extensions for working with Azure Communication Service Email.
+##### Azure PowerShell cmdlets
-With the addition of these new cmdlets developers can now use Azure PowerShell cmdlets for all CRUD operations for Email Service including:
+With the addition of the new cmdlets, developers can use Azure PowerShell cmdlets for all CRUD (create, read, update, delete) operations for the email service, including:
-- Create Communication Service Resource (existing)-- Create Email Service Resource (new)-- Create Domain (Azure Managed or Custom Domain) Resource (new)-- Initiate/Cancel Custom Domain verification (new)
+- Create a communication service resource (existing)
+- Create an email service resource (new)
+- Create a resource for an Azure-managed or custom domain (new)
+- Initiate or cancel custom domain verification (new)
- Add a sender username to a domain (new)-- Link a Domain Resource to a Communication Service Resource (existing)
+- Link a domain resource to a communication service resource (existing)
-Learn more at [PowerShell cmdlets](/powershell/module/az.communication/).
+Learn more in the [Azure PowerShell reference](/powershell/module/az.communication/).
-##### Azure CLI extension for Email Service Resources management
+##### Azure CLI extensions
-Developers can use Azure CLI extensions for their end-to-end send email flow including:
+Developers can use Azure CLI extensions for their end-to-end flow for sending email, including:
-- Create Communication Service Resource (existing)-- Create Email Service Resource (new)-- Create Domain (Azure Managed or Custom Domain) Resource (new)
+- Create a communication service resource (existing)
+- Create an email service resource (new)
+- Create a resource for an Azure-managed or custom domain (new)
- Add a sender username to a domain (new)-- Link a Domain Resource to a Communication Service Resource (existing)-- Send an Email (existing)
+- Link a domain resource to a communication service resource (existing)
+- Send an email (existing)
-Learn more in [Extensions](/cli/azure/communication/email).
+Learn more in the [Azure CLI reference](/cli/azure/communication/email).
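+
+For example, after the domain is linked, a single send might look like the following sketch. The parameter names are assumptions to verify with `az communication email send --help`, and the addresses are placeholders:
+
+```azurecli
+# Send a test email with the communication extension. The connection string
+# can also be supplied via the AZURE_COMMUNICATION_CONNECTION_STRING variable.
+az communication email send \
+    --sender "donotreply@<your-verified-domain>" \
+    --to "customer@contoso.com" \
+    --subject "Order confirmation" \
+    --text "Thanks for your order."
+```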
## February 2024
-### Limited Access User Tokens
+### Limited-access user tokens
-New, limited access user tokens are now in general availability. Limited access user tokens enable customers to exercise finer grain control over user capabilities such as to start a new call/chat or participate in an ongoing call/chat.
+Limited-access user tokens are now in general availability. Limited-access user tokens enable customers to exercise finer control over user capabilities such as starting a new call/chat or participating in an ongoing call/chat.
-When a customer creates an Azure Communication Services user identity, the user is granted the capability to participate in chats or calls, using access tokens. For example, a user must have a chat-token to participate in chat threads. Similarly, a VoIP token is required to participate in VoIP call. A user can have multiple tokens simultaneously.
+When a customer creates an Azure Communication Services user identity, the user is granted the capability to participate in chats or calls through access tokens. For example, a user must have a chat token to participate in chat threads or a VoIP token to participate in VoIP calls. A user can have multiple tokens simultaneously.
-With the limited access tokens, Azure Communication Services supports controlling full access versus limited access within chat and calling. Customers can now control the user's ability to initiate a new call or chat as opposed to participating in existing calls or chats.
+With the limited-access tokens, Azure Communication Services supports controlling full access versus limited access within chats and calls. Customers can control users' ability to initiate a new call or chat, as opposed to participating in existing calls or chats.
-These tokens solve the cold-call or cold-chat issue. For example, without limited access tokens if a user has VoIP token, they can initiate calls and participate in calls. So theoretically, a defendant could call a judge directly or a patient could call a doctor directly. This is undesirable for most businesses. With new limited access tokens, developers are able to give a limited access token to a patient so they can join a call but can't initiate a direct call to anyone.
+These tokens solve the cold-call or cold-chat issue. For example, without limited-access tokens, a user who has a VoIP token can initiate calls and participate in calls. So theoretically, a defendant could call a judge directly or a patient could call a doctor directly. This situation is undesirable for most businesses. Developers can now give a limited-access token to a patient who then can join a call but can't initiate a direct call to anyone.
For more information, see [Identity model](./concepts/identity-model.md).

### Try Phone Calling
-Try Phone Calling, now in public preview, is a tool in Azure portal that helps customers confirm the setup of a telephony connection by making a phone call. It applies to both Voice Calling (PSTN) and direct routing. Try Phone Calling enables developers to quickly test Azure Communication Services calling capabilities, without an existing app or code on their end.
+Try Phone Calling, now in preview, is a tool in the Azure portal that helps customers confirm the setup of a telephony connection by making a phone call. It applies to both voice calling (PSTN) and direct routing. Try Phone Calling enables developers to quickly test Azure Communication Services calling capabilities, without an existing app or code on their end.
-Learn more about [Try Phone Calling](./concepts/telephony/try-phone-calling.md).
+For more information, see [Try Phone Calling](./concepts/telephony/try-phone-calling.md).
-
-### UI Native Library Updates
+### Native UI Library updates
-Updates to the UI Native Library including moving User facing diagnostics to general availability and releasing 1:1 Calling and an iOS CallKit integration.
+Updates to the native UI Library include moving User Facing Diagnostics to general availability and releasing one-to-one calling and iOS CallKit integration.
-### User Facing Diagnostics
+#### User Facing Diagnostics
-User Facing Diagnostics (UFD) is now available in general availability. User Facing Diagnostics enhance the user experience by providing a set of events that can be triggered when some signal of the call is triggered, for example, when some participant is talking but the microphone is muted, or if the device isn't connected to a network. Developers can subscribe to triggers such as weak network signals or muted microphones, ensuring that you're always aware of any factors impacting your calls.
+User Facing Diagnostics is now in general availability. This feature enhances the user experience by providing a set of events that fire when a call condition changes. For example, an event can be triggered when a participant is talking but the microphone is muted, or if the device isn't connected to a network. You can subscribe to triggers such as weak network signals or muted microphones, so you're always aware of any factors that affect calls.
-By bringing UFD into the UI Library, we help customers implement events. This provides a more fluid experience. Customers can use UFDs to notify end-users in real time if they face connectivity and quality issues during the call. Issues can include muted microphones, network issues, or other problems. Customers receive a toast notification during the call to indicate quality issues. This also helps by sending telemetry to help you track any event and review the call status.
+Bringing User Facing Diagnostics into the UI Library helps customers implement events for a more fluid experience. Customers can use User Facing Diagnostics to notify users in real time if they face connectivity and quality problems during the call, such as network problems. Users receive a pop-up notification about these problems during the call. This feature also sends telemetry to help you track any event and review the call status.
For more information, see [User Facing Diagnostics](./concepts/voice-video-calling/user-facing-diagnostics.md).
-### 1:1 Calling
+#### One-to-one calling
+
+One-to-one calling for Android and iOS is now available in preview version 1.6.0. With this latest preview release, starting a call is as simple as a tap. Recipients are promptly alerted with a push notification to answer or decline the call.
-One-on-one calling for Android and iOS is now available. With this latest public preview release, starting a call is as simple as a tap. Recipients are promptly alerted with a push notification to answer or decline the call. If the iOS native application requires direct calling between two entities, developers can use the 1:1 calling function to make it happen. For example, a client needing to make a call to their financial advisor to make account changes. This feature is currently in public preview version 1.6.0.
+If the iOS-native application requires direct calling between two entities, developers can use the one-to-one calling function to make it happen. An example scenario is a client who needs to call a financial advisor to make account changes.
For more information, see [Set up one-to-one calling and push notifications in the UI Library](./how-tos/ui-library-sdk/one-to-one-calling.md).
-### iOS CallKit Integrations
+#### iOS CallKit integration
-Azure Communication Services seamlessly integrates CallKit, in public preview, for a native iOS call experience. Now, calls made through the Native UI SDK have the same iOS calling features such as notification, call history, and call on hold. These iOS features blend perfectly with the existing native experience.
+Azure Communication Services integrates CallKit, in preview, for a native iOS call experience. Now, calls made through the Native UI SDK have the same iOS calling features, such as notification, call history, and call on hold. These iOS features blend seamlessly with the existing native experience.
-UI Library developers can use this integration to avoid spending time on integration. The iOS CallKit provides an out of the box experience, meaning that integrated apps use the same interfaces as regular cellular calls. For end-users, incoming VoIP calls display the familiar iOS call screen, providing a consistent and intuitive experience.
+This update enables UI Library developers to avoid spending time on integration. CallKit provides an out-of-the-box experience, meaning that integrated apps use the same interfaces as regular cellular calls. For users, incoming VoIP calls display the familiar iOS call screen for a consistent and intuitive experience.
For more information, see [Integrate CallKit into the UI Library](./how-tos/ui-library-sdk/callkit.md).

### PSTN Direct Offers
-Azure Communication Services has continued to expand Direct Offers to new geographies. We just launched PSTN Direct Offers in general availability for 42 countries.
-
-The full list of countries where we offer PSTN Direct Offers:
+Azure Communication Services continues to expand Direct Offers to new geographies. PSTN Direct Offers is in general availability for 42 countries:
-Argentina, Australia, Austria, Belgium, Brazil, Canada, Chile, China, Colombia, Denmark, Finland, France, Germany, Hong Kong, Indonesia, Ireland, Israel, Italy, Japan, Luxembourg, Malaysia, Mexico, Netherlands, New Zealand, Norway, Philippines, Poland, Portugal, Puerto Rico, Saudi Arabia, Singapore, Slovakia, South Africa, South Korea, Spain, Sweden, Switzerland, Taiwan, Thailand, UAE (United Arab Emirates), United Kingdom, and United States
+> Argentina, Australia, Austria, Belgium, Brazil, Canada, Chile, China, Colombia, Denmark, Finland, France, Germany, Hong Kong, Indonesia, Ireland, Israel, Italy, Japan, Luxembourg, Malaysia, Mexico, Netherlands, New Zealand, Norway, Philippines, Poland, Portugal, Puerto Rico, Saudi Arabia, Singapore, Slovakia, South Africa, South Korea, Spain, Sweden, Switzerland, Taiwan, Thailand, UAE (United Arab Emirates), United Kingdom, United States
-In addition to getting all current offers into general availability, we have introduced over 400 new cross-country offers.
+In addition to getting all current offers into general availability, we've introduced more than 400 new cross-country offers.
Check all the new countries, phone number types, and capabilities at [Country/regional availability of telephone numbers and subscription eligibility](./concepts/numbers/sub-eligibility-number-capability.md).

## January 2024
-### Dial out to a PSTN number
+### Dial-out to a PSTN number
-Virtual Rooms support VoIP audio and video calling. Now you can also dial out PSTN numbers and include the PSTN participants in an ongoing call. Virtual Rooms empowers developers to exercise control over PSTN dial out capability in two ways. Developers can not only enable/disable PSTN dial-out capability for specific Virtual Rooms but can also control which users in Virtual Rooms can initiate PSTN dial-out. Only the users with Presenter role can initiate a PSTN Dial-out ensuring secure and structured communication.
+Virtual Rooms support VoIP audio and video calling. Now you can also dial out PSTN numbers and include the PSTN participants in an ongoing call.
+
+Virtual Rooms empower developers to exercise control over PSTN dial-out capability in two ways. Developers can not only enable/disable PSTN dial-out capability for specific Virtual Rooms but also control which users in Virtual Rooms can initiate PSTN dial-out. Only users who have the Presenter role can initiate a PSTN dial-out, to help ensure secure and structured communication.
For more information, see [Quickstart: Create and manage a room resource](./quickstarts/rooms/get-started-rooms.md).
-### Remote mute call participants
+### Remote mute of call participants
-Participants can now mute other participants in Virtual Rooms calls. Previously, participants in Virtual Rooms calls could only mute/unmute themselves. There are times when you want to mute other participants due to background noise or if someone's microphone is left unmuted.
+Participants can now mute other participants in Virtual Rooms calls. Previously, participants in Virtual Rooms calls could only mute/unmute themselves. There are times when participants want to mute other people due to background noise or if someone's microphone is left unmuted.
Participants in the Presenter role can mute a participant, multiple participants, or all other participants. Users retain the ability to unmute themselves as needed. For privacy reasons, no one can unmute other participants. For more information, see [Mute other participants](./how-tos/calling-sdk/manage-calls.md#mute-other-participants).
-### Call Recording in Virtual Rooms
+### Call recording in Virtual Rooms
-Developers can now start, pause, and stop call recording in calls conducted in Virtual Rooms. Call Recording is a service-side capability, and developers start, pause, stop recording using server-side API calls. This feature enables invited participants who might not make the original session to view the recording and stay up-to-date asynchronously.
+Developers can now start, pause, and stop call recording in calls conducted in Virtual Rooms. Call recording is a service-side capability. Developers start, pause, and stop recording by using server-side API calls. This feature enables invited participants who might not make the original session to view the recording and stay up to date asynchronously.
For more information, see [Manage call recording on the client](./how-tos/calling-sdk/record-calls.md).
-### Closed Captions in Virtual Rooms
-
-Closed captions is the conversion of a voice or video call audio track into written words that appear in real time. Closed captions are also a useful tool for participants who prefer to read the audio text in order to engage more actively in conversations and meetings. Closed captions also help in scenarios where participants might be in noisy environments or have audio equipment problems.
+### Closed captions in Virtual Rooms
-Closed captions are never saved and are only visible to the user that enabled it.
+Closed captioning is the conversion of an audio track for a voice or video call into written words that appear in real time. Closed captions are a useful tool for participants who prefer to read the audio text in order to engage more actively in conversations and meetings. Closed captions also help in scenarios where participants might be in noisy environments or have audio equipment problems.
+Closed captions are never saved and are visible only to the user who enabled them.
-For more information, see [Closed Captions overview](./concepts/voice-video-calling/closed-captions.md).
+For more information, see [Closed captions overview](./concepts/voice-video-calling/closed-captions.md).
## December 2023

### Call Diagnostics
-Azure Communication Services Call Diagnostics (CD) is available in Public Preview. Call Diagnostics help developers troubleshoot and improve their voice and video calling applications.
+Azure Communication Services Call Diagnostics is available in preview. Call Diagnostics helps developers troubleshoot and improve their applications for voice and video calling.
-Call Diagnostics is an Azure Monitor experience that offers specialized telemetry and diagnostic pages in the Azure portal. With Call Diagnostics, you can access and analyze data, visualizations, and insights for each call. Then you can identify and resolve issues that affect the end-user experience.
+Call Diagnostics is an Azure Monitor experience that offers specialized telemetry and diagnostic pages in the Azure portal. With Call Diagnostics, you can access and analyze data, visualizations, and insights for each call. Then you can identify and resolve issues that affect the user experience.
-Call Diagnostics works with other ACS features, such as noise suppression and pre-call troubleshooting, to deliver beautiful, reliable video calling experiences that are easy to develop and operate. Call Diagnostics is now available in Public Preview. Try it today and see how Azure can help you make every call a success.
+Call Diagnostics works with other Azure Communication Services features, such as noise suppression and pre-call troubleshooting, to deliver reliable video-calling experiences that are easy to develop and operate.
For more information, see [Call Diagnostics](./concepts/voice-video-calling/call-diagnostics.md).
-### WebJS Calling Updates
+### WebJS Calling updates
-Several WebJS Calling features moved to general availability: Media Quality Statics, Video Constraints, and Data Channel.
+The following APIs for WebJS Calling features moved to general availability: Media Quality Statistics, Video Constraints, and Data Channel.
#### Media Quality Statistics
-Developers can leverage the Media Quality Statistics API to better understand their video calling quality and reliability experience real time from within the calling SDK. By giving developers the ability to understand from the client side what their end customers are experiencing they can delve deeper into understanding and mitigating any issues that arise for their end users.
+Developers can use the Media Quality Statistics API to better understand their video-calling quality and reliability experience in real time from within the Calling SDK. When developers understand from the client side what their customers are experiencing, they can delve deeper into understanding and mitigating any problems that arise for users.
- For more information, see [Media Quality Statistics](./concepts/voice-video-calling/media-quality-sdk.md).
+For more information, see [Media quality statistics](./concepts/voice-video-calling/media-quality-sdk.md).
#### Video Constraints
-Developers can use Video Constraints to better manage the overall quality of calls. For example, if a developer knows that a participant has a poor internet connection, the developer can limit video resolution size on the sender side to use less bandwidth. The result is an improved calling experience for the participant.
+Developers can use the Video Constraints API to better manage the overall quality of calls. For example, if a developer knows that a participant has a poor internet connection, the developer can limit video resolution size on the sender side to use less bandwidth. The result is an improved calling experience for the participant.
-Improve your calling experience as described in [Quickstart: Set video constraints in your calling app](./quickstarts/voice-video-calling/get-started-video-constraints.md).
+For more information about improving the calling experience by using the Video Constraints API, see [Quickstart: Set video constraints in your calling app](./quickstarts/voice-video-calling/get-started-video-constraints.md).
#### Data Channel
-The Data Channel API enables real-time messaging during audio and video calls. This function enables developers to manage their own data pipeline and send their own unique messages to remote participants on a call. The data channel enhances communication capabilities by enabling local participants to connect directly to remote participants when the scenario requires.
+The Data Channel API enables real-time messaging during audio and video calls. This function enables developers to manage their own data pipeline and send their own unique messages to remote participants on a call. A data channel enhances communication capabilities by enabling local participants to connect directly to remote participants when the scenario requires it.
Get started with [Quickstart: Add Data Channel messaging to your calling app](./quickstarts/voice-video-calling/get-started-data-channel.md).
+## Related content
-## Related articles
-
-For a complete list of new features and bug fixes, see the [releases page](https://github.com/Azure/Communication/releases) on GitHub. For more blog posts, see the [Azure Communication Services blog](https://techcommunity.microsoft.com/t5/azure-communication-services/bg-p/AzureCommunicationServicesBlog).
+- For a complete list of new features and bug fixes, see the [releases page](https://github.com/Azure/Communication/releases) on GitHub.
+- For more blog posts, see the [Azure Communication Services blog](https://techcommunity.microsoft.com/t5/azure-communication-services/bg-p/AzureCommunicationServicesBlog).
container-apps Java Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/java-metrics.md
Java Virtual Machine (JVM) metrics are critical for monitoring the health and pe
::: zone pivot="azure-portal"
-To make the collection of Java metrics available to your app, you have to create your container app with some specific settings.
+To make the collection of Java metrics available to your app, configure your container app with some specific settings.
+
+# [create](#tab/create)
In the *Create* window, if you select the **Container image** option for *Deployment source*, you have access to stack-specific features.
Under the *Development stack-specific features* and for the *Development stack*,
Once you select the Java development stack, the *Customize Java features for your app* window appears. Next to the *Java features* label, select **JVM core metrics**.
+# [update](#tab/update)
+
+1. Go to your container app in the [Azure portal](https://portal.azure.com).
+
+1. In the *Overview* section, under *Essentials*, find *Development stack* and select **manage**.
+
+1. In the *Development stack* drop-down list, select **Java**.
+
+1. Select **Apply**.
+ ::: zone-end ::: zone pivot="azure-cli"
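+
+For reference, here's a hedged CLI sketch of the equivalent update. The `--runtime` and `--enable-java-metrics` flags are assumptions; confirm them in the `az containerapp` reference for your CLI version:
+
+```bash
+# Hypothetical sketch: enable JVM core metrics on an existing container app.
+# Flag names are assumptions; verify them in the az containerapp reference.
+az containerapp update \
+  --name my-container-app \
+  --resource-group my-resource-group \
+  --runtime java \
+  --enable-java-metrics
+```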
container-apps Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/samples.md
Refer to the following samples to learn how to use Azure Container Apps in diffe
| [ASP.NET Core front-end with two back-end APIs on Azure Container Apps](https://github.com/Azure-Samples/dotNET-FrontEnd-to-BackEnd-on-Azure-Container-Apps)<br /> | This sample demonstrates ASP.NET Core 6.0 can be used to build a cloud-native application hosted in Azure Container Apps. | | [ASP.NET Core front-end with two back-end APIs on Azure Container Apps (with Dapr)](https://github.com/Azure-Samples/dotNET-FrontEnd-to-BackEnd-with-DAPR-on-Azure-Container-Apps)<br /> | Demonstrates how ASP.NET Core 6.0 is used to build a cloud-native application hosted in Azure Container Apps using Dapr. | | [Deploy Drupal on Azure Container Apps](https://github.com/Azure-Samples/drupal-on-azure-container-apps) | Demonstrates how to deploy a Drupal site to Azure Container Apps, with Azure Database for MariaDB, and Azure Files to store static assets.|
-| [Launch Your First Java app](https://github.com/spring-projects/spring-petclinic) |A monolithic Java application called PetClinic built with Spring Framework. PetClinic is a well-known sample application provided by the Spring Framework community. |
+| [Launch Your First Java app](java-get-started.md?pivots=war) |A monolithic Java application called PetClinic built with Spring Framework. PetClinic is a well-known sample application provided by the Spring Framework community. |
cost-management-billing Prepare Buy Reservation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/reservations/prepare-buy-reservation.md
You can purchase reservations from Azure portal, APIs, PowerShell, CLI. Read the
- [Defender for Cloud - Pre-Purchase](/azure/defender-for-cloud/prepurchase-plan?toc=/azure/cost-management-billing/reservations/toc.json) - [Disk Storage](/azure/virtual-machines/disks-reserved-capacity) - [Microsoft Fabric](fabric-capacity.md)
+- [Microsoft Sentinel - Pre-Purchase](../../sentinel/billing-pre-purchase-plan.md?toc=/azure/cost-management-billing/reservations/toc.json)
- [SAP HANA Large Instances](prepay-hana-large-instances-reserved-capacity.md) - [Software plans](/azure/virtual-machines/linux/prepay-suse-software-charges?toc=/azure/cost-management-billing/reservations/toc.json) - [SQL Database](/azure/azure-sql/database/reserved-capacity-overview?toc=/azure/cost-management-billing/reservations/toc.json)
cost-management-billing Save Compute Costs Reservations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/reservations/save-compute-costs-reservations.md
Previously updated : 08/28/2023 Last updated : 09/16/2024
For more information, see [Self-service exchanges and refunds for Azure Reservat
- **Azure Dedicated Host** - Only the compute costs are included with the Dedicated host. - **Azure Disk Storage reservations** - A reservation only covers premium SSDs of P30 size or greater. It doesn't cover any other disk types or sizes smaller than P30. - **Azure Backup Storage reserved capacity** - A capacity reservation lowers storage costs of backup data in a Recovery Services Vault.
+- **Azure NetApp Files** - A capacity reservation covers matching capacity pools in the selected service level and region. When using capacity pools configured with [Standard storage with cool access](../../azure-netapp-files/manage-cool-access.md), only "hot" tier consumption is covered by the reserved capacity benefit.
Software plans:
For Windows virtual machines and SQL Database, the reservation discount doesn't
## Need help? Contact us.
-If you have questions or need help, [create a support request](https://go.microsoft.com/fwlink/?linkid=2083458).
+If you have questions or need help, [create a support request](https://go.microsoft.com/fwlink/?linkid=2083458).
## Next steps
databox Data Box Deploy Export Ordered https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox/data-box-deploy-export-ordered.md
Complete the following configuration prerequisites for Data Box service and devi
* Make sure that you have an existing resource group that you can use with your Azure Data Box. * Make sure that your Azure Storage account that you want to export data from is one of the supported Storage account types as described [Supported storage accounts for Data Box](data-box-system-requirements.md#supported-storage-accounts).
+
+> [!NOTE]
+> The Export functionality doesn't include access control lists (ACLs) or metadata for the files and folders. If you're exporting Azure Files data, consider using a tool such as Robocopy to apply ACLs to the target folders before import.
### For device
defender-for-iot Cli Ot Sensor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/cli-ot-sensor.md
For example, for the *admin* user:
shell> system reboot ```
-### Shutdown an appliance
+### Shut down an appliance
Use the following commands to shut down the OT sensor appliance.
lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
shell> ```
+## Traffic capture filters
+
+To reduce alert fatigue and focus your network monitoring on high-priority traffic, you might decide to filter the traffic that streams into Defender for IoT at the source. Capture filters allow you to block high-bandwidth traffic at the hardware layer, optimizing both appliance performance and resource usage.
+
+Use include and/or exclude lists to create and configure capture filters on your OT network sensors, making sure that you don't block any of the traffic that you want to monitor.
+
+The basic use case for capture filters uses the same filter for all Defender for IoT components. However, for advanced use cases, you may want to configure separate filters for each of the following Defender for IoT components:
+
+- `horizon`: Captures deep packet inspection (DPI) data
+- `collector`: Captures PCAP data
+- `traffic-monitor`: Captures communication statistics
+
+> [!NOTE]
+> - Capture filters don't apply to [Defender for IoT malware alerts](../alert-engine-messages.md#malware-engine-alerts), which are triggered on all detected network traffic.
+>
+> - The capture filter command has a character length limit that's based on the complexity of the capture filter definition and the available network interface card capabilities. If your requested filter command fails, try grouping subnets into larger scopes and using a shorter capture filter command.
+
+### Create a basic filter for all components
+
+The method used to configure a basic capture filter differs, depending on the user performing the command:
+
+- **cyberx** user: Run the specified command with specific attributes to configure your capture filter.
+- **admin** user: Run the specified command, and then enter values as [prompted by the CLI](#create-a-basic-capture-filter-using-the-admin-user), editing your include and exclude lists in a nano editor.
+
+Use the following commands to create a new capture filter:
+
+|User |Command |Full command syntax |
+||||
+| **admin** | `network capture-filter` | No attributes.|
+| **cyberx**, or **admin** with [root access](references-work-with-defender-for-iot-cli-commands.md#access-the-system-root-as-an-admin-user) | `cyberx-xsense-capture-filter` | `cyberx-xsense-capture-filter [-h] [-i INCLUDE] [-x EXCLUDE] [-etp EXCLUDE_TCP_PORT] [-eup EXCLUDE_UDP_PORT] [-itp INCLUDE_TCP_PORT] [-iup INCLUDE_UDP_PORT] [-vlan INCLUDE_VLAN_IDS] -m MODE [-S]` |
+
+Supported attributes for the *cyberx* user are defined as follows:
+
+|Attribute |Description |
+|||
+|`-h`, `--help` | Shows the help message and exits. |
+|`-i <INCLUDE>`, `--include <INCLUDE>` | The path to a file that contains the devices and subnet masks you want to include, where `<INCLUDE>` is the path to the file. For example, see [Sample include or exclude file](#txt). |
+|`-x <EXCLUDE>`, `--exclude <EXCLUDE>` | The path to a file that contains the devices and subnet masks you want to exclude, where `<EXCLUDE>` is the path to the file. For example, see [Sample include or exclude file](#txt). |
+|`-etp <EXCLUDE_TCP_PORT>`, `--exclude-tcp-port <EXCLUDE_TCP_PORT>` | Excludes TCP traffic on any specified ports, where `<EXCLUDE_TCP_PORT>` defines the port or ports you want to exclude. Separate multiple ports with commas, with no spaces. |
+|`-eup <EXCLUDE_UDP_PORT>`, `--exclude-udp-port <EXCLUDE_UDP_PORT>` | Excludes UDP traffic on any specified ports, where `<EXCLUDE_UDP_PORT>` defines the port or ports you want to exclude. Separate multiple ports with commas, with no spaces. |
+|`-itp <INCLUDE_TCP_PORT>`, `--include-tcp-port <INCLUDE_TCP_PORT>` | Includes TCP traffic on any specified ports, where `<INCLUDE_TCP_PORT>` defines the port or ports you want to include. Separate multiple ports with commas, with no spaces. |
+|`-iup <INCLUDE_UDP_PORT>`, `--include-udp-port <INCLUDE_UDP_PORT>` | Includes UDP traffic on any specified ports, where `<INCLUDE_UDP_PORT>` defines the port or ports you want to include. Separate multiple ports with commas, with no spaces. |
+|`-vlan <INCLUDE_VLAN_IDS>`, `--include-vlan-ids <INCLUDE_VLAN_IDS>` | Includes VLAN traffic by specified VLAN IDs, where `<INCLUDE_VLAN_IDS>` defines the VLAN ID or IDs you want to include. Separate multiple VLAN IDs with commas, with no spaces. |
+|`-p <PROGRAM>`, `--program <PROGRAM>` | Defines the component for which you want to configure a capture filter. Use `all` for basic use cases, to create a single capture filter for all components. <br><br>For advanced use cases, create separate capture filters for each component. For more information, see [Create an advanced filter for specific components](#create-an-advanced-filter-for-specific-components).|
+|`-m <MODE>`, `--mode <MODE>` | Defines an include list mode, and is relevant only when an include list is used. Use one of the following values: <br><br>- `internal`: Includes all communication between the specified source and destination <br>- `all-connected`: Includes all communication between either of the specified endpoints and external endpoints. <br><br>For example, for endpoints A and B, if you use the `internal` mode, included traffic will only include communications between endpoints **A** and **B**. <br>However, if you use the `all-connected` mode, included traffic will include all communications between A *or* B and other, external endpoints. |
+
+<a name="txt"></a>**Sample include or exclude file**
+
+For example, an include or exclude **.txt** file might include the following entries:
+
+```txt
+192.168.50.10
+172.20.248.1
+```
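+
+For example, the following sketch assembles these attributes into a single *cyberx* command. The exclude-file path matches the default location shown in the example output later in this section; the port value is a placeholder:
+
+```bash
+# Sketch: exclude the devices listed in the exclude file, plus TCP and UDP
+# port 502, applying one capture filter to all components (placeholder values).
+cyberx-xsense-capture-filter \
+  --exclude /var/cyberx/media/capture-filter/exclude \
+  --exclude-tcp-port 502 \
+  --exclude-udp-port 502 \
+  --program all \
+  --mode internal
+```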
+
+#### Create a basic capture filter using the admin user
+
+If you're creating a basic capture filter as the *admin* user, no attributes are passed in the [original command](#create-a-basic-filter-for-all-components). Instead, a series of prompts is displayed to help you create the capture filter interactively.
+
+Reply to the prompts displayed as follows:
+
+1. `Would you like to supply devices and subnet masks you wish to include in the capture filter? [Y/N]:`
+
+ Select `Y` to open a new include file, where you can add a device, channel, and/or subnet that you want to include in monitored traffic. Any other traffic, not listed in your include file, isn't ingested to Defender for IoT.
+
+ The include file is opened in the [Nano](https://www.nano-editor.org/dist/latest/cheatsheet.html) text editor. In the include file, define devices, channels, and subnets as follows:
+
+ |Type |Description |Example |
+ ||||
+ |**Device** | Define a device by its IP address. | `1.1.1.1` includes all traffic for this device. |
+ |**Channel** | Define a channel by the IP addresses of its source and destination devices, separated by a comma. | `1.1.1.1,2.2.2.2` includes all of the traffic for this channel. |
+ |**Subnet** | Define a subnet by its network address. | `1.1.1` includes all traffic for this subnet. |
+
+ List multiple arguments in separate rows.
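+
+   For example, an include file that combines all three entry types might look like the following sketch (placeholder values):
+
+   ```txt
+   10.1.0.5
+   10.1.0.5,10.2.0.8
+   10.1.0
+   ```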
+
+1. `Would you like to supply devices and subnet masks you wish to exclude from the capture filter? [Y/N]:`
+
+ Select `Y` to open a new exclude file where you can add a device, channel, and/or subnet that you want to exclude from monitored traffic. Any other traffic, not listed in your exclude file, is ingested to Defender for IoT.
+
+ The exclude file is opened in the [Nano](https://www.nano-editor.org/dist/latest/cheatsheet.html) text editor. In the exclude file, define devices, channels, and subnets as follows:
+
+ |Type |Description |Example |
+ ||||
+ | **Device** | Define a device by its IP address. | `1.1.1.1` excludes all traffic for this device. |
+ | **Channel** | Define a channel by the IP addresses of its source and destination devices, separated by a comma. | `1.1.1.1,2.2.2.2` excludes all of the traffic between these devices. |
+ | **Channel by port** | Define a channel by the IP addresses of its source and destination devices, and the traffic port. | `1.1.1.1,2.2.2.2,443` excludes all of the traffic between these devices and using the specified port.|
+ | **Subnet** | Define a subnet by its network address. | `1.1.1` excludes all traffic for this subnet. |
+ | **Subnet channel** | Define subnet channel network addresses for the source and destination subnets. | `1.1.1,2.2.2` excludes all of the traffic between these subnets. |
+
+ List multiple arguments in separate rows.
+
+1. Reply to the following prompts to define any TCP or UDP ports, or VLAN IDs, to include or exclude. Separate multiple values with commas, and press ENTER to skip any specific prompt.
+
+ - `Enter tcp ports to include (delimited by comma or Enter to skip):`
+ - `Enter udp ports to include (delimited by comma or Enter to skip):`
+ - `Enter tcp ports to exclude (delimited by comma or Enter to skip):`
+ - `Enter udp ports to exclude (delimited by comma or Enter to skip):`
+ - `Enter VLAN ids to include (delimited by comma or Enter to skip):`
+
+ For example, enter multiple ports as follows: `502,443`
+
+1. `In which component do you wish to apply this capture filter?`
+
+ Enter `all` for a basic capture filter. For [advanced use cases](#create-an-advanced-capture-filter-using-the-admin-user), create capture filters for each Defender for IoT component separately.
+
+1. `Type Y for "internal" otherwise N for "all-connected" (custom operation mode enabled) [Y/N]:`
+
+ This prompt allows you to configure which traffic is in scope. Define whether you want to collect traffic where both endpoints are in scope, or only one of them is in the specified subnet. Supported values include:
+
+ - `internal`: Includes all communication between the specified source and destination
+ - `all-connected`: Includes all communication between either of the specified endpoints and external endpoints.
+
+ For example, for endpoints A and B, if you use the `internal` mode, included traffic will only include communications between endpoints **A** and **B**. <br>However, if you use the `all-connected` mode, included traffic will include all communications between A *or* B and other, external endpoints.
+
+ The default mode is `internal`. To use the `all-connected` mode, enter `N` at the prompt.
+
+The following example shows a series of prompts that creates a capture filter to exclude subnet `192.168.x.x` and port `9000`:
+
+```bash
+root@xsense: network capture-filter
+Would you like to supply devices and subnet masks you wish to include in the capture filter? [y/N]: n
+Would you like to supply devices and subnet masks you wish to exclude from the capture filter? [y/N]: y
+You've exited the editor. Would you like to apply your modifications? [y/N]: y
+Enter tcp ports to include (delimited by comma or Enter to skip):
+Enter udp ports to include (delimited by comma or Enter to skip):
+Enter tcp ports to exclude (delimited by comma or Enter to skip):9000
+Enter udp ports to exclude (delimited by comma or Enter to skip):9000
+Enter VLAN ids to include (delimited by comma or Enter to skip):
+In which component do you wish to apply this capture filter?all
+Would you like to supply a custom base capture filter for the collector component? [y/N]: n
+Would you like to supply a custom base capture filter for the traffic_monitor component? [y/N]: n
+Would you like to supply a custom base capture filter for the horizon component? [y/N]: n
+type Y for "internal" otherwise N for "all-connected" (custom operation mode enabled) [Y/n]: internal
+Please respond with 'yes' or 'no' (or 'y' or 'n').
+type Y for "internal" otherwise N for "all-connected" (custom operation mode enabled) [Y/n]: y
+starting "/usr/local/bin/cyberx-xsense-capture-filter --exclude /var/cyberx/media/capture-filter/exclude --exclude-tcp-port 9000 --exclude-udp-port 9000 --program all --mode internal --from-shell"
+No include file given
+Loaded 1 unique channels
+(000) ret #262144
+(000) ldh [12]
+......
+......
+......
+debug: set new filter for horizon '(((not (net 192.168))) and (not (tcp port 9000)) and (not (udp port 9000))) or (vlan and ((not (net 192.168))) and (not (tcp port 9000)) and (not (udp port 9000)))'
+root@xsense:
+```
+
+### Create an advanced filter for specific components
+
+When configuring advanced capture filters for specific components, you can use your initial include and exclude files as a base, or template, capture filter. Then, configure extra filters for each component on top of the base as needed.
+
+To create a capture filter for *each* component, make sure to repeat the entire process for each component.
+
+> [!NOTE]
+> If you've created different capture filters for different components, the mode selection is used for all components. Defining the capture filter for one component as `internal` and the capture filter for another component as `all-connected` isn't supported.
+
+|User |Command |Full command syntax |
+||||
+| **admin** | `network capture-filter` | No attributes.|
+| **cyberx**, or **admin** with [root access](references-work-with-defender-for-iot-cli-commands.md#access-the-system-root-as-an-admin-user) | `cyberx-xsense-capture-filter` | `cyberx-xsense-capture-filter [-h] [-i INCLUDE] [-x EXCLUDE] [-etp EXCLUDE_TCP_PORT] [-eup EXCLUDE_UDP_PORT] [-itp INCLUDE_TCP_PORT] [-iup INCLUDE_UDP_PORT] [-vlan INCLUDE_VLAN_IDS] -p PROGRAM [-o BASE_HORIZON] [-s BASE_TRAFFIC_MONITOR] [-c BASE_COLLECTOR] -m MODE [-S]` |
+
+The following extra attributes are used for the *cyberx* user to create capture filters for each component separately:
+
+|Attribute |Description |
+|||
+|`-p <PROGRAM>`, `--program <PROGRAM>` | Defines the component for which you want to configure a capture filter, where `<PROGRAM>` has the following supported values: <br>- `traffic-monitor` <br>- `collector` <br>- `horizon` <br>- `all`: Creates a single capture filter for all components. For more information, see [Create a basic filter for all components](#create-a-basic-filter-for-all-components).|
+|`-o <BASE_HORIZON>`, `--base-horizon <BASE_HORIZON>` | Defines a base capture filter for the `horizon` component, where `<BASE_HORIZON>` is the filter you want to use. <br> Default value = `""` |
+|`-s <BASE_TRAFFIC_MONITOR>`, `--base-traffic-monitor <BASE_TRAFFIC_MONITOR>` | Defines a base capture filter for the `traffic-monitor` component. <br> Default value = `""` |
+|`-c <BASE_COLLECTOR>`, `--base-collector <BASE_COLLECTOR>` | Defines a base capture filter for the `collector` component. <br> Default value = `""` |
+
+Other attribute values have the same descriptions as in the basic use case, described [earlier](#create-a-basic-filter-for-all-components).
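+
+For example, here's a hypothetical sketch of an advanced *cyberx* command that reuses the shared exclude file and layers an extra base filter onto the `horizon` (DPI) component only. The BPF-style base-filter expression is an illustrative assumption:
+
+```bash
+# Hypothetical sketch: keep the shared exclude list, and additionally drop
+# DNS traffic from deep packet inspection only (placeholder values).
+cyberx-xsense-capture-filter \
+  --exclude /var/cyberx/media/capture-filter/exclude \
+  --program horizon \
+  --base-horizon "not (udp port 53)" \
+  --mode internal
+```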
+
+#### Create an advanced capture filter using the admin user
+
+If you're creating a capture filter for each component separately as the *admin* user, no attributes are passed in the [original command](#create-an-advanced-filter-for-specific-components). Instead, a series of prompts is displayed to help you create the capture filter interactively.
+
+Most of the prompts are identical to the [basic use case](#create-a-basic-capture-filter-using-the-admin-user). Reply to the following extra prompts as follows:
+
+1. `In which component do you wish to apply this capture filter?`
+
+ Enter one of the following values, depending on the component you want to filter:
+
+ - `horizon`
+ - `traffic-monitor`
+ - `collector`
+
+1. You're prompted to configure a custom base capture filter for the selected component. This option uses the capture filter you configured in the previous steps as a base, or template, where you can add extra configurations on top of the base.
+
+ For example, if you'd selected to configure a capture filter for the `collector` component in the previous step, you're prompted: `Would you like to supply a custom base capture filter for the collector component? [Y/N]:`
+
+ Enter `Y` to customize the template for the specified component, or `N` to use the capture filter you'd configured earlier as it is.
+
+Continue with the remaining prompts as in the [basic use case](#create-a-basic-capture-filter-using-the-admin-user).
+
+### List current capture filters for specific components
+
+Use the following commands to show details about the current capture filters configured for your sensor.
+
+|User |Command |Full command syntax |
+||||
+| **admin** | Use the following commands to view the capture filters for each component: <br><br>- **horizon**: `edit-config horizon_parser/horizon.properties` <br>- **traffic-monitor**: `edit-config traffic_monitor/traffic-monitor` <br>- **collector**: `edit-config dumpark.properties` | No attributes |
+| **cyberx**, or **admin** with [root access](references-work-with-defender-for-iot-cli-commands.md#access-the-system-root-as-an-admin-user) | Use the following commands to view the capture filters for each component: <br><br>- **horizon**: `nano /var/cyberx/properties/horizon_parser/horizon.properties` <br>- **traffic-monitor**: `nano /var/cyberx/properties/traffic_monitor/traffic-monitor.properties` <br>- **collector**: `nano /var/cyberx/properties/dumpark.properties` | No attributes |
+
+These commands open the following files, which list the capture filters configured for each component:
+
+|Name |File |Property |
+||||
+|**horizon** | `/var/cyberx/properties/horizon.properties` | `horizon.processor.filter` |
+|**traffic-monitor** | `/var/cyberx/properties/traffic-monitor.properties` | `horizon.processor.filter` |
+|**collector** | `/var/cyberx/properties/dumpark.properties` | `dumpark.network.filter` |
+
+For example, here's the *collector* component configuration for the **admin** user, with a capture filter that excludes subnet 192.168.x.x and port 9000:
+
+```bash
+
+root@xsense: edit-config dumpark.properties
+ GNU nano 2.9.3 /tmp/tmpevt4igo7/tmpevt4igo7
+
+dumpark.network.filter=(((not (net 192.168))) and (not (tcp port 9000)) and (not
+dumpark.network.snaplen=4096
+dumpark.packet.filter.data.transfer=false
+dumpark.infinite=true
+dumpark.output.session=false
+dumpark.output.single=false
+dumpark.output.raw=true
+dumpark.output.rotate=true
+dumpark.output.rotate.history=300
+dumpark.output.size=20M
+dumpark.output.time=30S
+```
+
+### Reset all capture filters
+
+Use the following command to reset your sensor to the default capture configuration with the *cyberx* user, removing all capture filters.
+
+|User |Command |Full command syntax |
+||||
+| **cyberx**, or **admin** with [root access](references-work-with-defender-for-iot-cli-commands.md#access-the-system-root-as-an-admin-user) | `cyberx-xsense-capture-filter -p all -m all-connected` | No attributes |
+
+If you want to modify the existing capture filters, run the [earlier](#create-a-basic-filter-for-all-components) command again, with new attribute values.
+
+To reset all capture filters using the *admin* user, run the [earlier](#create-a-basic-filter-for-all-components) command again, and respond `N` to all [prompts](#create-a-basic-capture-filter-using-the-admin-user) to reset all capture filters.
+
+The following example shows the command syntax and response for the *cyberx* user:
+
+```bash
+root@xsense:/# cyberx-xsense-capture-filter -p all -m all-connected
+starting "/usr/local/bin/cyberx-xsense-capture-filter -p all -m all-connected"
+No include file given
+No exclude file given
+(000) ret #262144
+(000) ret #262144
+debug: set new filter for dumpark ''
+No include file given
+No exclude file given
+(000) ret #262144
+(000) ret #262144
+debug: set new filter for traffic-monitor ''
+No include file given
+No exclude file given
+(000) ret #262144
+(000) ret #262144
+debug: set new filter for horizon ''
+root@xsense:/#
+```
## Next steps

> [!div class="nextstepaction"]
defender-for-iot Getting Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/getting-started.md
For more information, see [Free trial](billing.md#free-trial).
Before you start, you need:

1. An email address to be used as the contact for your new Microsoft Tenant
-1. A Global Admin permissions (Entra ID role on the tenant)
+1. Billing Admin permissions (Entra ID role on the tenant)
1. Credit card details for your new Azure subscription, although you aren't charged until you switch from the **Free Trial** to the **Pay-As-You-Go** plan

## Add a trial license
defender-for-iot Manage Subscriptions Enterprise https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/manage-subscriptions-enterprise.md
Before performing the procedures in this article, make sure that you have:
For more information, see [Enterprise IoT security in Microsoft Defender XDR](concept-enterprise.md#enterprise-iot-security-in-microsoft-defender-xdr). -- Access to the Microsoft Defender Portal as a [Global administrator](../../active-directory/roles/permissions-reference.md#global-administrator)
+- Access to the Microsoft Defender Portal as a [Security administrator](../../active-directory/roles/permissions-reference.md#security-administrator)
## Obtain a standalone, Enterprise IoT trial license
defender-for-iot Manage Users Sensor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/manage-users-sensor.md
For more information, see [Defender for IoT CLI users and access](references-wor
1. Exit the file and run `sudo monit restart all` to apply your changes.
-## Control user session timeouts
-
-By default, on-premises users are signed out of their sessions after 30 minutes of inactivity. Admin users can use the local CLI access to either turn this feature on or off, or to adjust the inactivity thresholds. For more information, see [Defender for IoT CLI users and access](references-work-with-defender-for-iot-cli-commands.md) and [CLI command reference from OT network sensors](cli-ot-sensor.md).
-
-> [!NOTE]
-> Any changes made to user session timeouts are reset to defaults when you [update the OT monitoring software](update-ot-software.md).
-
-**Prerequisites**: This procedure is available for the *admin*, *cyberx*, and *cyberx_host* users only.
-
-**To control sensor user session timeouts**:
-
-1. Sign in to your sensor via a terminal and run:
-
- ```cli
- sudo nano /var/cyberx/properties/authentication.properties
- ```
-
- The following output appears:
-
- ```cli
- infinity_session_expiration=true
- session_expiration_default_seconds=0
- session_expiration_admin_seconds=1800
- session_expiration_security_analyst_seconds=1800
- session_expiration_read_only_users_seconds=1800
- certifcate_validation=false
- crl_timeout_secounds=3
- crl_retries=1
- cm_auth_token=
-
- ```
-
-1. Do one of the following:
-
- - **To turn off user session timeouts entirely**, change `infinity_session_expiration=true` to `infinity_session_expiration=false`. Change it back to turn it back on again.
-
- - **To adjust an inactivity timeout period**, adjust one of the following values to the required time, in seconds:
-
- - `session_expiration_default_seconds` for all users
- - `session_expiration_admin_seconds` for *Admin* users only
- - `session_expiration_security_analyst_seconds` for *Security Analyst* users only
- - `session_expiration_read_only_users_seconds` for *Read Only* users only
- ## Next steps For more information, see [Audit user activity](track-user-activity.md).
defender-for-iot References Work With Defender For Iot Cli Commands https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/references-work-with-defender-for-iot-cli-commands.md
The following tables list the activities available by CLI and the privileged use
|Service area |Users |Actions | |||| |Sensor health | *admin*, *cyberx* | [Check OT monitoring services health](cli-ot-sensor.md#check-ot-monitoring-services-health) |
-|Reboot and shutdown | *admin*, *cyberx*, *cyberx_host* | [Restart an appliance](cli-ot-sensor.md#restart-an-appliance)<br>[Shut down an appliance](cli-ot-sensor.md#shutdown-an-appliance) |
+|Reboot and shutdown | *admin*, *cyberx*, *cyberx_host* | [Restart an appliance](cli-ot-sensor.md#restart-an-appliance)<br>[Shut down an appliance](cli-ot-sensor.md#shut-down-an-appliance) |
|Software versions | *admin*, *cyberx* | [Show installed software version](cli-ot-sensor.md#show-installed-software-version) <br>[Update software version](update-ot-software.md) | |Date and time | *admin*, *cyberx*, *cyberx_host* | [Show current system date/time](cli-ot-sensor.md#show-current-system-datetime) | |NTP | *admin*, *cyberx* | [Turn on NTP time sync](cli-ot-sensor.md#turn-on-ntp-time-sync)<br>[Turn off NTP time sync](cli-ot-sensor.md#turn-off-ntp-time-sync) |
The following tables list the activities available by CLI and the privileged use
|Service area |Users |Actions | |||| |Password management | *cyberx*, *cyberx_host* | [Change local user passwords](cli-ot-sensor.md#change-local-user-passwords) |
-| Sign-in configuration| *admin*, *cyberx*, *cyberx_host* |[Control user session timeouts](manage-users-sensor.md#control-user-session-timeouts) |
| Sign-in configuration | *cyberx* | [Define maximum number of failed sign-ins](manage-users-sensor.md#define-maximum-number-of-failed-sign-ins) | ### Network configuration commands
The following tables list the activities available by CLI and the privileged use
|Physical interfaces management | *admin* | [Locate a physical port by blinking interface lights](cli-ot-sensor.md#locate-a-physical-port-by-blinking-interface-lights) | |Physical interfaces management | *admin*, *cyberx* | [List connected physical interfaces](cli-ot-sensor.md#list-connected-physical-interfaces) |
+### Traffic capture filter commands
+
+|Service area |Users |Actions |
+||||
+| Capture filter management | *admin*, *cyberx* | [Create a basic filter for all components](cli-ot-sensor.md#create-a-basic-filter-for-all-components)<br>[Create an advanced filter for specific components](cli-ot-sensor.md#create-an-advanced-filter-for-specific-components) <br>[List current capture filters for specific components](cli-ot-sensor.md#list-current-capture-filters-for-specific-components) <br> [Reset all capture filters](cli-ot-sensor.md#reset-all-capture-filters) |
+ ## Defender for IoT CLI access To access the Defender for IoT CLI, sign in to your OT or Enterprise IoT sensor or your on-premises management console using a terminal emulator and SSH.
firewall Protect Azure Kubernetes Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/protect-azure-kubernetes-service.md
az network firewall nat-rule create --collection-name exampleset --destination-a
Navigate to the Azure Firewall frontend IP address in a browser to validate connectivity.
-You should see the AKS voting app. In this example, the Firewall public IP was `52.253.228.132`.
+You should see the AKS voting app. In this example, the Firewall public IP was `203.0.113.32`.
-![Screenshot shows the A K S Voting App with buttons for Cats, Dogs, and Reset, and totals.](./media/aks-vote.png)
## Clean up resources
governance Supported Tables Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/resource-graph/reference/supported-tables-resources.md
For sample queries for this table, see [Resource Graph sample queries for securi
- microsoft.security/assessments - Sample query: [Count healthy, unhealthy, and not applicable resources per recommendation](../samples/samples-by-category.md#count-healthy-unhealthy-and-not-applicable-resources-per-recommendation)
- - Sample query: [List Azure Security Center recommendations](../samples/samples-by-category.md#list-azure-security-center-recommendations)
+ - Sample query: [List Azure Security Center recommendations](../samples/samples-by-category.md)
- Sample query: [List Container Registry vulnerability assessment results](../samples/samples-by-category.md#list-container-registry-vulnerability-assessment-results) - Sample query: [List Qualys vulnerability assessment results](../samples/samples-by-category.md#list-qualys-vulnerability-assessment-results) - microsoft.security/assessments/subassessments
For sample queries for this table, see [Resource Graph sample queries for securi
- Sample query: [Get specific IoT alert](../samples/samples-by-category.md#get-specific-iot-alert) - microsoft.security/locations/alerts (Security Alerts) - microsoft.security/pricings
- - Sample query: [Show Azure Defender pricing tier per subscription](../samples/samples-by-category.md#show-azure-defender-pricing-tier-per-subscription)
+ - Sample query: [Show Azure Defender pricing tier per subscription](../samples/samples-by-category.md)
- microsoft.security/regulatorycompliancestandards - Sample query: [Regulatory compliance state per compliance standard](../samples/samples-by-category.md#regulatory-compliance-state-per-compliance-standard) - microsoft.security/regulatorycompliancestandards/regulatorycompliancecontrols
governance Samples By Category https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/resource-graph/samples/samples-by-category.md
+
+ Title: List of sample Azure Resource Graph queries by category
+description: List sample queries for Azure Resource Graph. Categories include Tags, Azure Advisor, Key Vault, Kubernetes, Guest Configuration, and more.
Last updated : 06/05/2024+++++
+# Azure Resource Graph sample queries by category
+
+This page is a collection of Azure Resource Graph sample queries grouped by general and service
+categories. To jump to a specific **category**, use the links at the top of the page.
+Otherwise, use <kbd>Ctrl</kbd>-<kbd>F</kbd> to use your browser's search feature.
+
+## Azure Advisor
++
+## Azure App Service
++
+## Azure Arc
++
+## Azure Arc-enabled Kubernetes
++
+## Azure Arc-enabled servers
++
+## Azure Center for SAP solutions
+++
+## Azure Container Registry
++
+## Azure Cosmos DB
++
+## Azure Key Vault
++
+## Azure Monitor
+++++
+## Azure Orbital Ground Station
++
+## Azure Policy
+++
+## Azure Policy guest configuration
++
+## Azure RBAC
+++++++
+## Azure Service Health
++
+## Azure SQL
++
+## Azure Storage
++
+## Azure Virtual Machines
++
+## General
++
+## IoT Defender
++
+## Management groups
++
+## Microsoft Defender
++
+## Networking
+++
+## Resource health
++
+## Tags
++
+## Virtual Machine Scale Sets
+++
+## Next steps
+
+- Learn more about the [query language](../concepts/query-language.md).
+- Learn more about how to [explore resources](../concepts/explore-resources.md).
hdinsight-aks Hdinsight Aks Release Notes Archive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/release-notes/hdinsight-aks-release-notes-archive.md
Title: Archived release notes for Azure HDInsight on AKS
description: Archived release notes for Azure HDInsight on AKS. Get development tips and details for Trino, Flink, and Spark. Previously updated : 08/05/2024 Last updated : 09/05/2024 # Azure HDInsight on AKS archived release notes
Last updated 08/05/2024
Azure HDInsight on AKS is one of the most popular services among enterprise customers for open-source analytics on Azure. If you would like to subscribe on release notes, watch releases on this [GitHub repository](https://github.com/Azure/HDInsight-on-aks/releases).
+### Release date: Aug 05, 2024
+
+**This release applies to the following**
+
+- Cluster Pool Version: 1.2
+- Cluster Version: 1.2.1
+- AKS version: 1.27
+
+### New Features
+
+**MSI based SQL authentication**
+Users can now authenticate to an external Azure SQL DB Metastore with MSI instead of user ID and password authentication. This feature helps further secure the cluster's connection with the Metastore.
+
+**Configurable VM SKUs for Head node, SSH node**
+This functionality allows users to choose specific SKUs for head nodes, worker nodes, and SSH nodes, offering the flexibility to select according to the use case and the potential to lower total cost of ownership (TCO).
+
+**Multiple MSI in cluster**
+Users can configure multiple MSIs for cluster admin operations and for job-related resource access. This feature allows users to demarcate and control access to the cluster and to the data in the storage account.
+For example, one MSI for access to data in the storage account and a dedicated MSI for cluster operations.
+
+### Updated
+
+**Script action**
+Script actions can now be added with sudo user permission. Users can install multiple dependencies, including custom JARs, to customize clusters as required.
+
+**Library Management**
+This release adds a Maven repository shortcut feature to Library Management. Users can now install Maven dependencies directly from open-source repositories by specifying the standard `groupId:artifactId:version` coordinates.
+
+**Spark 3.4**
+The Spark 3.4 update brings a range of new features, including:
+* API enhancements
+* Structured streaming improvements
+* Improved usability and developer experience
+
+> [!IMPORTANT]
+> To benefit from all these **latest features**, you must create a new cluster pool with version 1.2 and cluster version 1.2.1.
+
+### Known issues
+
+- **Workload identity limitation:**
+ - There's a known [limitation](/azure/aks/workload-identity-overview#limitations) when transitioning to workload identity, due to the permission-sensitive nature of FIC operations. Users can't delete a cluster by deleting the resource group; cluster deletion requests must be triggered by an application/user/principal with FIC delete permissions. If the FIC deletion fails, the high-level cluster deletion also fails.
+ - **User Assigned Managed Identities (UAMI)** support - There's a limit of 20 FICs per UAMI; you can create only 20 federated identity credentials on an identity. In an HDInsight on AKS cluster, FICs (federated identity credentials) and service accounts (SAs) have a one-to-one mapping, and only 20 SAs can be created against an MSI. If you want to create more clusters, you must provide different MSIs to overcome the limitation.
+ - Creation of federated identity credentials is currently not supported on user-assigned managed identities created in [these regions](/entra/workload-id/workload-identity-federation-considerations#unsupported-regions-user-assigned-managed-identities).
+
+
+### Operating System version
+
+- Mariner OS 2.0
+
+**Workload versions**
+
+|Workload|Version|
+| -- | -- |
+|Trino | 440 |
+|Flink | 1.17.0 |
+|Apache Spark | 3.4 |
+
+**Supported Java and Scala versions**
+
+|Workload |Java|Scala|
+| -- | -- | -- |
+|Trino |Open JDK 21.0.2 |- |
+|Flink |Open JDK 11.0.21 |2.12.7 |
+|Spark |Open JDK 1.8.0_345 |2.12.15 |
+
+The preview is available in the following [regions](../overview.md#region-availability-public-preview).
+
+If you have any more questions, contact [Azure Support](https://ms.portal.azure.com/#view/Microsoft_Azure_Support/HelpAndSupportBlade/~/overview) or refer to the [Support options](../hdinsight-aks-support-help.md) page. If you have product-specific feedback, write to us at [aka.ms/askhdinsight](https://forms.office.com/pages/responsepage.aspx?id=v4j5cvGGr0GRqy180BHbR6HHTBN7UDpEhLm8BJmDhGJURDhLWEhBVE5QN0FQRUpHWDg4ODlZSDA4RCQlQCN0PWcu).
++ ### Release date: March 20, 2024 **This release applies to the following**
hdinsight-aks Hdinsight Aks Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/release-notes/hdinsight-aks-release-notes.md
Title: Release notes for Azure HDInsight on AKS
description: Latest release notes for Azure HDInsight on AKS. Get development tips and details for Trino, Flink, Spark, and more. Previously updated : 08/05/2024 Last updated : 09/16/2024 # Azure HDInsight on AKS release notes
Last updated 08/05/2024
[!INCLUDE [retirement-notice](../includes/retirement-notice.md)] [!INCLUDE [feature-in-preview](../includes/feature-in-preview.md)] -- This article provides information about the **most recent** Azure HDInsight on AKS release updates. For information on earlier releases, see [Azure HDInsight on AKS archived release notes](./hdinsight-aks-release-notes-archive.md). If you would like to subscribe to release notes, watch releases on this [GitHub repository](https://github.com/Azure/HDInsight-on-aks/releases). ## Summary
You can refer to [What's new](../whats-new.md) page for all the details of the f
## Release Information
-### Release date: Aug 05, 2024
+### Release date: Sep 05, 2024
**This release applies to the following**
You can refer to [What's new](../whats-new.md) page for all the details of the f
- Cluster Version: 1.2.1 - AKS version: 1.27
-### New Features
-
-**MSI based SQL authentication**
-Users can now authenticate external Azure SQL DB Metastore with MSI instead of User ID password authentication. This feature helps to further secure the cluster connection with Metastore.
-**Configurable VM SKUs for Head node, SSH node**
-This functionality allows users to choose specific SKUs for head nodes, worker nodes, and SSH nodes, offering the flexibility to select according to the use case and the potential to lower total cost of ownership (TCO).
-
-**Multiple MSI in cluster**
-Users can configure multiple MSI for cluster admins operations and for job related resource access. This feature allows users to demarcate and control the access to the cluster and data lying in the storage account.
-For example, one MSI for access to data in storage account and dedicated MSI for cluster operations.
### Updated
-**Script action**
-Script Action now can be added with Sudo user permission. Users can now install multiple dependencies including custom jars to customize the clusters as required.
-
-**Library Management**
-Maven repository shortcut feature added to the Library Management in this release. User can now install Maven dependencies directly from the open-source repositories.
+The latest API version release is as follows.
-**Spark 3.4**
-Spark 3.4 update brings a range of new features includes
-* API enhancements
-* Structured streaming improvements
-* Improved usability and developer experience
+https://github.com/Azure/azure-rest-api-specs/blob/main/specification/hdinsight/resource-manager/Microsoft.HDInsight/HDInsightOnAks/preview/2024-05-01-preview/hdinsight.json
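+
+For instance, you can exercise the new API version with `az rest`. A minimal sketch; the `clusterpools` resource type segment is an assumption here, so adjust it to the resource you're querying:
+
+```azurecli
+# List HDInsight on AKS cluster pools with the 2024-05-01-preview API version (resource type assumed).
+az rest --method get \
+  --url "https://management.azure.com/subscriptions/<subscription-id>/providers/Microsoft.HDInsight/clusterpools?api-version=2024-05-01-preview"
+```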
> [!IMPORTANT] > To benefit from all these **latest features**, you must create a new cluster pool with version 1.2 and cluster version 1.2.1
healthcare-apis Fhir Service Diagnostic Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/fhir-service-diagnostic-logs.md
Access to diagnostic logs is essential for any healthcare service. Compliance wi
3. Select **+ Add diagnostic settings**.
- [ ![Screenshot of the diagnostic settings page in the Azure portal.](media/diagnostic-logs/fhir-diagnostic-settings-screen.png) ](media/diagnostic-logs/fhir-diagnostic-settings-screen.png#lightbox)
+ [![Screenshot of the diagnostic settings page in the Azure portal.](media/diagnostic-logs/fhir-diagnostic-settings-screen.png) ](media/diagnostic-logs/fhir-diagnostic-settings-screen.png#lightbox)
4. Enter the **Diagnostic setting name**.
- [ ![Screenshot of the destination details and the checkbox for enabling or disabling audit logs.](media/diagnostic-logs/fhir-diagnostic-settings-add.png) ](media/diagnostic-logs/fhir-diagnostic-settings-add.png#lightbox)
+ [![Screenshot of the destination details and the checkbox for enabling or disabling audit logs.](media/diagnostic-logs/fhir-diagnostic-settings-add.png) ](media/diagnostic-logs/fhir-diagnostic-settings-add.png#lightbox)
5. Select the method that you want to use to access your logs:
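Alternatively, a diagnostic setting can be created with the Azure CLI instead of the portal. A minimal sketch, assuming a Log Analytics workspace as the destination; verify the log category name for your FHIR service before use:

```azurecli
# Send FHIR service audit logs to a Log Analytics workspace (category name assumed).
az monitor diagnostic-settings create \
  --name "fhir-audit-logs" \
  --resource "<FHIR service resource ID>" \
  --workspace "<Log Analytics workspace resource ID>" \
  --logs '[{"category": "AuditLogs", "enabled": true}]'
```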
healthcare-apis Using Rest Client https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/using-rest-client.md
grant_type=client_credentials
@token = {{getAADToken.response.body.access_token}} ```
-[ ![Get access token](media/rest-config.png) ](media/rest-config.png#lightbox)
+[![Get access token](media/rest-config.png)](media/rest-config.png#lightbox)
> [!NOTE] > When the FHIR service audience parameter is not mapped to the FHIR service endpoint url, the resource parameter value should be mapped to the Audience value under the FHIR Service Authentication blade.
GET {{fhirurl}}/Patient/<patientid>
Authorization: Bearer {{token}} ```
-[ ![GET Patient](media/rest-patient.png) ](media/rest-patient.png#lightbox)
+[![GET Patient](media/rest-patient.png)](media/rest-patient.png#lightbox)
## Run PowerShell or CLI You can run PowerShell or CLI scripts within Visual Studio Code. Press `CTRL` and the `~` key and select PowerShell or Bash. You can find more details on [Integrated Terminal](https://code.visualstudio.com/docs/editor/integrated-terminal). ### PowerShell in Visual Studio Code
-[ ![running PowerShell](media/rest-powershell.png) ](media/rest-powershell.png#lightbox)
+[![running PowerShell](media/rest-powershell.png)](media/rest-powershell.png#lightbox)
### CLI in Visual Studio Code
-[ ![running CLI](media/rest-cli.png) ](media/rest-cli.png#lightbox)
+[![running CLI](media/rest-cli.png)](media/rest-cli.png#lightbox)
## Troubleshooting
import-export Storage Import Export Data To Files https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/import-export/storage-import-export-data-to-files.md
Before you create an import job to transfer data into Azure Files, carefully rev
## Step 1: Prepare the drives
-This step generates a journal file. The journal file stores basic information such as drive serial number, encryption key, and storage account details.
+Attach the external disk and run the WAImportExport.exe tool. This step generates a journal file. The journal file stores basic information such as the drive serial number, encryption key, and storage account details.
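+
+A typical drive-preparation command looks something like the following. This is a sketch only: the flags follow the WAImportExport v1 syntax and should be verified against the tool's built-in help, and all values are placeholders.
+
+```
+rem Prepare drive D and generate the journal file (flag spellings per WAImportExport v1; verify locally).
+WAImportExport.exe PrepImport /j:JournalFile01.jrn /id:session#1 /t:D /sk:<storage-account-key> /srcdir:C:\Data /dstdir:myfileshare/
+```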
Do the following steps to prepare the drives.
iot-operations Howto Configure Dataflow Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/connect-to-cloud/howto-configure-dataflow-endpoint.md
spec:
authentication: method: serviceAccountToken serviceAccountTokenSettings:
- audience: aio-mqtt
+ audience: aio-mq
mqttSettings: {} ```
spec:
mqttSettings: host: example.mqttbroker.com:8883 tls:
- mode: enabled
+ mode: Enabled
trustedCaCertificateConfigMap: <your CA certificate config map> ```
spec:
endpointType: kafka authentication: method: systemAssignedManagedIdentity
- systemAssignedManagedIdentitySettings:
- audience: "https://eventgrid.azure.net"
+ systemAssignedManagedIdentitySettings:
+ audience: <your Event Hubs namespace>.servicebus.windows.net
kafkaSettings:
- host: <NAMESPACE>.servicebus.windows.net:9093
+ host: <your Event Hubs namespace>.servicebus.windows.net:9093
tls: mode: Enabled consumerGroupId: mqConnector
spec:
kafkaSettings: host: example.kafka.com:9093 tls:
- mode: enabled
+ mode: Enabled
consumerGroupId: mqConnector ```
load-balancer Load Balancer Ipv6 Internet Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/load-balancer-ipv6-internet-cli.md
- Title: Create a public load balancer with IPv6 - Azure CLI-
-description: With this learning path, get started creating a public load balancer with IPv6 using Azure CLI.
--
-keywords: ipv6, azure load balancer, dual stack, public ip, native ipv6, mobile, iot
-- Previously updated : 06/26/2024----
-# Create a public load balancer with IPv6 using Azure CLI
-
->[!NOTE]
->This article describes an introductory IPv6 feature to allow Basic Load Balancers to provide both IPv4 and IPv6 connectivity. Comprehensive IPv6 connectivity is now available with [IPv6 for Azure VNETs](../virtual-network/ip-services/ipv6-overview.md) which integrates IPv6 connectivity with your Virtual Networks and includes key features such as IPv6 Network Security Group rules, IPv6 User-defined routing, IPv6 Basic and Standard load balancing, and more. IPv6 for Azure VNETs is the recommended standard for IPv6 applications in Azure.
-See [IPv6 for Azure VNET PowerShell Deployment](./virtual-network-ipv4-ipv6-dual-stack-standard-load-balancer-powershell.md)
-
-An Azure load balancer is a Layer-4 (TCP, UDP) load balancer. Load balancers provide high availability by distributing incoming traffic among healthy service instances in cloud services or virtual machines in a load balancer set. Load balancers can also present these services on multiple ports or multiple IP addresses or both.
-
-## Example deployment scenario
-
-The following diagram illustrates the load balancing solution that's deployed by using the example template described in this article.
-
-![Load balancer scenario](./media/load-balancer-ipv6-internet-cli/lb-ipv6-scenario-cli.png)
-
-In this scenario, you create the following Azure resources:
-
-* Two virtual machines (VMs)
-* A virtual network interface for each VM with both IPv4 and IPv6 addresses assigned
-* A public load balancer with an IPv4 and an IPv6 public IP address
-* An availability set that contains the two VMs
-* Two load balancing rules to map the public VIPs to the private endpoints
-
-## Deploy the solution by using Azure CLI
-
-The following steps show how to create a public load balancer by using Azure CLI. Using CLI, you create and configure each object individually, and then put them together to create a resource.
-
-To deploy a load balancer, create and configure the following objects:
-
-* **Frontend IP configuration**: Contains public IP addresses for incoming network traffic.
-* **Backend address pool**: Contains network interfaces (NICs) for the virtual machines to receive network traffic from the load balancer.
-* **Load balancing rules**: Contains rules that map a public port on the load balancer to a port in the backend address pool.
-* **Inbound NAT rules**: Contains network address translation (NAT) rules that map a public port on the load balancer to a port for a specific virtual machine in the backend address pool.
-* **Probes**: Contains health probes that are used to check the availability of virtual machine instances in the backend address pool.
-
-## Set up Azure CLI
-
-In this example, you run the Azure CLI tools in a PowerShell command window. To improve readability and reuse, you use PowerShell's scripting capabilities, not the Azure PowerShell cmdlets.
-
-1. [Install and Configure the Azure CLI](/cli/azure/install-azure-cli) by following the steps in the linked article and sign in to your Azure account.
-
-2. Set up PowerShell variables for use with the Azure CLI commands:
-
- ```powershell
- $subscriptionid = "########-####-####-####-############" # enter subscription id
- $location = "southcentralus"
- $rgName = "pscontosorg1southctrlus09152016"
- $vnetName = "contosoIPv4Vnet"
- $vnetPrefix = "10.0.0.0/16"
- $subnet1Name = "clicontosoIPv4Subnet1"
- $subnet1Prefix = "10.0.0.0/24"
- $subnet2Name = "clicontosoIPv4Subnet2"
- $subnet2Prefix = "10.0.1.0/24"
- $dnsLabel = "contoso09152016"
- $lbName = "myIPv4IPv6Lb"
- ```
-
-## Create a resource group, a load balancer, a virtual network, and subnets
-
-1. Create a resource group:
-
- ```azurecli
- az group create --name $rgName --location $location
- ```
-
-2. Create a load balancer:
-
- ```azurecli
- $lb = az network lb create --resource-group $rgname --location $location --name $lbName
- ```
-
-3. Create a virtual network:
-
- ```azurecli
- $vnet = az network vnet create --resource-group $rgname --name $vnetName --location $location --address-prefixes $vnetPrefix
- ```
-
-4. In this virtual network, create two subnets:
-
- ```azurecli
- $subnet1 = az network vnet subnet create --resource-group $rgname --name $subnet1Name --address-prefix $subnet1Prefix --vnet-name $vnetName
- $subnet2 = az network vnet subnet create --resource-group $rgname --name $subnet2Name --address-prefix $subnet2Prefix --vnet-name $vnetName
- ```
-
-## Create public IP addresses for the frontend pool
-
-1. Set up the PowerShell variables:
-
- ```powershell
- $publicIpv4Name = "myIPv4Vip"
- $publicIpv6Name = "myIPv6Vip"
- ```
-
-2. Create a public IP address for the frontend IP pool:
-
- ```azurecli
- $publicipV4 = az network public-ip create --resource-group $rgname --name $publicIpv4Name --location $location --version IPv4 --allocation-method Dynamic --dns-name $dnsLabel
- $publicipV6 = az network public-ip create --resource-group $rgname --name $publicIpv6Name --location $location --version IPv6 --allocation-method Dynamic --dns-name $dnsLabel
- ```
-
- > [!IMPORTANT]
- > The load balancer uses the domain label of the public IP as its fully qualified domain name (FQDN). This is a change from classic deployment, which uses the cloud service name as the load balancer FQDN.
- >
- > In this example, the FQDN is *contoso09152016.southcentralus.cloudapp.azure.com*.
-
-## Create frontend and backend pools
-
-In this section, you create the following IP pools:
-* The frontend IP pool that receives the incoming network traffic on the load balancer.
-* The backend IP pool where the frontend pool sends the load-balanced network traffic.
-
-1. Set up the PowerShell variables:
-
- ```powershell
- $frontendV4Name = "FrontendVipIPv4"
- $frontendV6Name = "FrontendVipIPv6"
- $backendAddressPoolV4Name = "BackendPoolIPv4"
- $backendAddressPoolV6Name = "BackendPoolIPv6"
- ```
-
-2. Create a frontend IP pool, and associate it with the public IP that you created in the previous step and the load balancer.
-
- ```azurecli
- $frontendV4 = az network lb frontend-ip create --resource-group $rgname --name $frontendV4Name --public-ip-address $publicIpv4Name --lb-name $lbName
- $frontendV6 = az network lb frontend-ip create --resource-group $rgname --name $frontendV6Name --public-ip-address $publicIpv6Name --lb-name $lbName
- $backendAddressPoolV4 = az network lb address-pool create --resource-group $rgname --name $backendAddressPoolV4Name --lb-name $lbName
- $backendAddressPoolV6 = az network lb address-pool create --resource-group $rgname --name $backendAddressPoolV6Name --lb-name $lbName
- ```
-
-## Create the probe, NAT rules, and load balancer rules
-
-This example creates the following items:
-
-* A probe rule to check for connectivity to TCP port 80.
-* A NAT rule to translate all incoming traffic on port 3389 to port 3389 for RDP.\*
-* A NAT rule to translate all incoming traffic on port 3391 to port 3389 for remote desktop protocol (RDP).\*
-* A load balancer rule to balance all incoming traffic on port 80 to port 80 on the addresses in the backend pool.
-
-\* NAT rules are associated with a specific virtual-machine instance behind the load balancer. The network traffic that arrives on port 3389 is sent to the specific virtual machine and port that's associated with the NAT rule. You must specify a protocol (UDP or TCP) for a NAT rule. You can't assign both protocols to the same port.
-
-1. Set up the PowerShell variables:
-
- ```powershell
- $probeV4V6Name = "ProbeForIPv4AndIPv6"
- $natRule1V4Name = "NatRule-For-Rdp-VM1"
- $natRule2V4Name = "NatRule-For-Rdp-VM2"
- $lbRule1V4Name = "LBRuleForIPv4-Port80"
- $lbRule1V6Name = "LBRuleForIPv6-Port80"
- ```
-
-2. Create the probe.
-
- The following example creates a TCP probe that checks for connectivity to the backend TCP port 80 every 15 seconds. After two consecutive failures, it marks the backend resource as unavailable.
-
- ```azurecli
- $probeV4V6 = az network lb probe create --resource-group $rgname --name $probeV4V6Name --protocol tcp --port 80 --interval 15 --threshold 2 --lb-name $lbName
- ```
-
-3. Create inbound NAT rules that allow RDP connections to the backend resources:
-
- ```azurecli
- $inboundNatRuleRdp1 = az network lb inbound-nat-rule create --resource-group $rgname --name $natRule1V4Name --frontend-ip-name $frontendV4Name --protocol Tcp --frontend-port 3389 --backend-port 3389 --lb-name $lbName
- $inboundNatRuleRdp2 = az network lb inbound-nat-rule create --resource-group $rgname --name $natRule2V4Name --frontend-ip-name $frontendV4Name --protocol Tcp --frontend-port 3391 --backend-port 3389 --lb-name $lbName
- ```
-
-4. Create load balancer rules that send traffic to different backend ports, depending on the front end that received the request.
-
- ```azurecli
- $lbruleIPv4 = az network lb rule create --resource-group $rgname --name $lbRule1V4Name --frontend-ip-name $frontendV4Name --backend-pool-name $backendAddressPoolV4Name --probe-name $probeV4V6Name --protocol Tcp --frontend-port 80 --backend-port 80 --lb-name $lbName
- $lbruleIPv6 = az network lb rule create --resource-group $rgname --name $lbRule1V6Name --frontend-ip-name $frontendV6Name --backend-pool-name $backendAddressPoolV6Name --probe-name $probeV4V6Name --protocol Tcp --frontend-port 80 --backend-port 8080 --lb-name $lbName
- ```
-
-5. Check your settings:
-
- ```azurecli
- az network lb show --resource-group $rgName --name $lbName
- ```
-
- Expected output:
-
- ```output
- info: Executing command network lb show
- info: Looking up the load balancer "myIPv4IPv6Lb"
- data: Id : /subscriptions/########-####-####-####-############/resourceGroups/pscontosorg1southctrlus09152016/providers/Microsoft.Network/loadBalancers/myIPv4IPv6Lb
- data: Name : myIPv4IPv6Lb
- data: Type : Microsoft.Network/loadBalancers
- data: Location : southcentralus
- data: Provisioning state : Succeeded
- data:
- data: Frontend IP configurations:
- data: Name Provisioning state Private IP allocation Private IP Subnet Public IP
- data: --
- data: FrontendVipIPv4 Succeeded Dynamic myIPv4Vip
- data: FrontendVipIPv6 Succeeded Dynamic myIPv6Vip
- data:
- data: Probes:
- data: Name Provisioning state Protocol Port Path Interval Count
- data: - -- - - -- --
- data: ProbeForIPv4AndIPv6 Succeeded Tcp 80 15 2
- data:
- data: Backend Address Pools:
- data: Name Provisioning state
- data:
- data: BackendPoolIPv4 Succeeded
- data: BackendPoolIPv6 Succeeded
- data:
- data: Load Balancing Rules:
- data: Name Provisioning state Load distribution Protocol Frontend port Backend port Enable floating IP Idle timeout in minutes
- data: -- -- -- - --
- data: LBRuleForIPv4-Port80 Succeeded Default Tcp 80 80 false 4
- data: LBRuleForIPv6-Port80 Succeeded Default Tcp 80 8080 false 4
- data:
- data: Inbound NAT Rules:
- data: Name Provisioning state Protocol Frontend port Backend port Enable floating IP Idle timeout in minutes
- data: - -- - --
- data: NatRule-For-Rdp-VM1 Succeeded Tcp 3389 3389 false 4
- data: NatRule-For-Rdp-VM2 Succeeded Tcp 3391 3389 false 4
- info: network lb show
- ```
-
-## Create NICs
-
-Create NICs and associate them with NAT rules, load balancer rules, and probes.
-
-1. Set up the PowerShell variables:
-
- ```powershell
- $nic1Name = "myIPv4IPv6Nic1"
- $nic2Name = "myIPv4IPv6Nic2"
- $subnet1Id = "/subscriptions/$subscriptionid/resourceGroups/$rgName/providers/Microsoft.Network/VirtualNetworks/$vnetName/subnets/$subnet1Name"
- $subnet2Id = "/subscriptions/$subscriptionid/resourceGroups/$rgName/providers/Microsoft.Network/VirtualNetworks/$vnetName/subnets/$subnet2Name"
- $backendAddressPoolV4Id = "/subscriptions/$subscriptionid/resourceGroups/$rgname/providers/Microsoft.Network/loadbalancers/$lbName/backendAddressPools/$backendAddressPoolV4Name"
- $backendAddressPoolV6Id = "/subscriptions/$subscriptionid/resourceGroups/$rgname/providers/Microsoft.Network/loadbalancers/$lbName/backendAddressPools/$backendAddressPoolV6Name"
- $natRule1V4Id = "/subscriptions/$subscriptionid/resourceGroups/$rgname/providers/Microsoft.Network/loadbalancers/$lbName/inboundNatRules/$natRule1V4Name"
- $natRule2V4Id = "/subscriptions/$subscriptionid/resourceGroups/$rgname/providers/Microsoft.Network/loadbalancers/$lbName/inboundNatRules/$natRule2V4Name"
- ```
-
-2. Create a NIC for each back end, and add an IPv6 configuration:
-
- ```azurecli
- $nic1 = az network nic create --name $nic1Name --resource-group $rgname --location $location --private-ip-address-version "IPv4" --subnet $subnet1Id --lb-address-pools $backendAddressPoolV4Id --lb-inbound-nat-rules $natRule1V4Id
- $nic1IPv6 = az network nic ip-config create --resource-group $rgname --name "IPv6IPConfig" --private-ip-address-version "IPv6" --lb-address-pools $backendAddressPoolV6Id --nic-name $nic1Name
-
- $nic2 = az network nic create --name $nic2Name --resource-group $rgname --location $location --private-ip-address-version "IPv4" --subnet $subnet2Id --lb-address-pools $backendAddressPoolV4Id --lb-inbound-nat-rules $natRule2V4Id
- $nic2IPv6 = az network nic ip-config create --resource-group $rgname --name "IPv6IPConfig" --private-ip-address-version "IPv6" --lb-address-pools $backendAddressPoolV6Id --nic-name $nic2Name
- ```
-
-## Create the backend VM resources, and attach each NIC
-
-To create VMs, you must have a storage account. For load balancing, the VMs need to be members of an availability set. For more information about creating VMs, see [Create an Azure VM by using PowerShell](/azure/virtual-machines/windows/quick-create-powershell?toc=%2fazure%2fload-balancer%2ftoc.json).
-
-1. Set up the PowerShell variables:
-
- ```powershell
- $availabilitySetName = "myIPv4IPv6AvailabilitySet"
- $vm1Name = "myIPv4IPv6VM1"
- $vm2Name = "myIPv4IPv6VM2"
- $nic1Id = "/subscriptions/$subscriptionid/resourceGroups/$rgname/providers/Microsoft.Network/networkInterfaces/$nic1Name"
- $nic2Id = "/subscriptions/$subscriptionid/resourceGroups/$rgname/providers/Microsoft.Network/networkInterfaces/$nic2Name"
- $imageurn = "MicrosoftWindowsServer:WindowsServer:2012-R2-Datacenter:latest"
- $vmUserName = "vmUser"
- $mySecurePassword = "PlainTextPassword*1"
- ```
-
- > [!WARNING]
- > This example uses the username and password for the VMs in cleartext. Take appropriate care when you use these credentials in cleartext. For a more secure method of handling credentials in PowerShell, see the [`Get-Credential`](/powershell/module/microsoft.powershell.security/get-credential) cmdlet.
-
-2. Create the availability set:
-
- ```azurecli
- $availabilitySet = az vm availability-set create --name $availabilitySetName --resource-group $rgName --location $location
- ```
-
-3. Create the virtual machines with the associated NICs:
-
- ```azurecli
- az vm create --resource-group $rgname --name $vm1Name --image $imageurn --admin-username $vmUserName --admin-password $mySecurePassword --nics $nic1Id --location $location --availability-set $availabilitySetName --size "Standard_A1"
-
- az vm create --resource-group $rgname --name $vm2Name --image $imageurn --admin-username $vmUserName --admin-password $mySecurePassword --nics $nic2Id --location $location --availability-set $availabilitySetName --size "Standard_A1"
- ```
load-balancer Load Balancer Ipv6 Internet Ps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/load-balancer-ipv6-internet-ps.md
- Title: Create an Internet-facing load balancer with IPv6 - Azure PowerShell-
-description: Learn how to create an Internet facing load balancer with IPv6 using PowerShell for Resource Manager.
--
-keywords: ipv6, azure load balancer, dual stack, public ip, native ipv6, mobile, iot
-- Previously updated : 06/26/2024----
-# Get started creating an Internet-facing load balancer with IPv6 using PowerShell for Resource Manager
-
-> [!div class="op_single_selector"]
-> * [PowerShell](load-balancer-ipv6-internet-ps.md)
-> * [Azure CLI](load-balancer-ipv6-internet-cli.md)
-> * [Template](load-balancer-ipv6-internet-template.md)
-
->[!NOTE]
->This article describes an introductory IPv6 feature to allow Basic Load Balancers to provide both IPv4 and IPv6 connectivity. Comprehensive IPv6 connectivity is now available with [IPv6 for Azure VNETs](../virtual-network/ip-services/ipv6-overview.md) which integrates IPv6 connectivity with your Virtual Networks and includes key features such as IPv6 Network Security Group rules, IPv6 User-defined routing, IPv6 Basic and Standard load balancing, and more. IPv6 for Azure VNETs is the recommended standard for IPv6 applications in Azure.
-See [IPv6 for Azure VNET PowerShell Deployment](./virtual-network-ipv4-ipv6-dual-stack-standard-load-balancer-powershell.md)
-
-An Azure load balancer is a Layer-4 (TCP, UDP) load balancer. The load balancer provides high availability by distributing incoming traffic among healthy service instances in cloud services or virtual machines in a load balancer set. Azure Load Balancer can also present those services on multiple ports, multiple IP addresses, or both.
--
-## Example deployment scenario
-
-The following diagram illustrates the load balancing solution being deployed in this article.
-
-![Load balancer scenario](./media/load-balancer-ipv6-internet-ps/lb-ipv6-scenario.png)
-
-In this scenario you'll create the following Azure resources:
-
-* an Internet-facing Load Balancer with an IPv4 and an IPv6 Public IP address
-* two load balancing rules to map the public VIPs to the private endpoints
-* an Availability Set that contains the two VMs
-* two virtual machines (VMs)
-* a virtual network interface for each VM with both IPv4 and IPv6 addresses assigned
-
-## Deploying the solution using the Azure PowerShell
-
-The following steps show how to create an Internet-facing load balancer using Azure Resource Manager with PowerShell. With Azure Resource Manager, each resource is created and configured individually, then put together to create a resource.
-
-To deploy a load balancer, you create and configure the following objects:
-
-* Frontend IP configuration - contains public IP addresses for incoming network traffic.
-* Backend address pool - contains network interfaces (NICs) for the virtual machines to receive network traffic from the load balancer.
-* Load balancing rules - contains rules mapping a public port on the load balancer to port in the backend address pool.
-* Inbound NAT rules - contains rules mapping a public port on the load balancer to a port for a specific virtual machine in the backend address pool.
-* Probes - contains health probes used to check availability of virtual machines instances in the backend address pool.
-
-For more information, see [Azure Load Balancer components](./components.md).
-
-## Set up PowerShell to use Resource Manager
-
-Make sure you have the latest production version of the Azure Resource Manager module for PowerShell.
-
-1. Sign into Azure
-
- ```azurepowershell-interactive
- Connect-AzAccount
- ```
-
- Enter your credentials when prompted.
-
-2. Check the subscriptions for the account
-
- ```azurepowershell-interactive
- Get-AzSubscription
- ```
-
-3. Choose which of your Azure subscriptions to use.
-
- ```azurepowershell-interactive
- Select-AzSubscription -SubscriptionId 'GUID of subscription'
- ```
-
-4. Create a resource group (skip this step if using an existing resource group)
-
- ```azurepowershell-interactive
- New-AzResourceGroup -Name NRP-RG -location "West US"
- ```
-
-## Create a virtual network and a public IP address for the frontend IP pool
-
-1. Create a virtual network with a subnet.
-
- ```azurepowershell-interactive
- $backendSubnet = New-AzVirtualNetworkSubnetConfig -Name LB-Subnet-BE -AddressPrefix 10.0.2.0/24
- $vnet = New-AzvirtualNetwork -Name VNet -ResourceGroupName NRP-RG -Location 'West US' -AddressPrefix 10.0.0.0/16 -Subnet $backendSubnet
- ```
-
-2. Create Azure Public IP address (PIP) resources for the frontend IP address pool. Be sure to change the value for `-DomainNameLabel` before running the following commands. The value must be unique within the Azure region.
-
- ```azurepowershell-interactive
- $publicIPv4 = New-AzPublicIpAddress -Name 'pub-ipv4' -ResourceGroupName NRP-RG -Location 'West US' -AllocationMethod Static -IpAddressVersion IPv4 -DomainNameLabel lbnrpipv4
- $publicIPv6 = New-AzPublicIpAddress -Name 'pub-ipv6' -ResourceGroupName NRP-RG -Location 'West US' -AllocationMethod Dynamic -IpAddressVersion IPv6 -DomainNameLabel lbnrpipv6
- ```
-
- > [!IMPORTANT]
- > The load balancer uses the domain label of the public IP as prefix for its FQDN. In this example, the FQDNs are *lbnrpipv4.westus.cloudapp.azure.com* and *lbnrpipv6.westus.cloudapp.azure.com*.
-
-## Create Frontend IP configurations and Backend Address Pools
-
-1. Create frontend address configurations that use the Public IP addresses you created.
-
- ```azurepowershell-interactive
- $FEIPConfigv4 = New-AzLoadBalancerFrontendIpConfig -Name "LB-Frontendv4" -PublicIpAddress $publicIPv4
- $FEIPConfigv6 = New-AzLoadBalancerFrontendIpConfig -Name "LB-Frontendv6" -PublicIpAddress $publicIPv6
- ```
-
-2. Create backend address pools.
-
- ```azurepowershell-interactive
- $backendpoolipv4 = New-AzLoadBalancerBackendAddressPoolConfig -Name "BackendPoolIPv4"
- $backendpoolipv6 = New-AzLoadBalancerBackendAddressPoolConfig -Name "BackendPoolIPv6"
- ```
-
-## Create LB rules, NAT rules, a probe, and a load balancer
-
-This example creates the following items:
-
-* a NAT rule to translate all incoming traffic on port 443 to port 4443
-* a load balancer rule to balance all incoming traffic on port 80 to port 80 on the addresses in the backend pool.
-* a load balancer rule to allow RDP connection to the VMs on port 3389.
-* a probe rule to check the health status on a page named *HealthProbe.aspx* or a service on port 8080
-* a load balancer that uses all these objects
-
-1. Create the NAT rules.
-
- ```azurepowershell-interactive
- $inboundNATRule1v4 = New-AzLoadBalancerInboundNatRuleConfig -Name "NicNatRulev4" -FrontendIpConfiguration $FEIPConfigv4 -Protocol TCP -FrontendPort 443 -BackendPort 4443
- $inboundNATRule1v6 = New-AzLoadBalancerInboundNatRuleConfig -Name "NicNatRulev6" -FrontendIpConfiguration $FEIPConfigv6 -Protocol TCP -FrontendPort 443 -BackendPort 4443
- ```
-
-2. Create a health probe. There are two ways to configure a probe:
-
- HTTP probe
-
- ```azurepowershell-interactive
- $healthProbe = New-AzLoadBalancerProbeConfig -Name 'HealthProbe-v4v6' -RequestPath 'HealthProbe.aspx' -Protocol http -Port 80 -IntervalInSeconds 15 -ProbeCount 2
- ```
-
- or TCP probe
-
- ```azurepowershell-interactive
- $healthProbe = New-AzLoadBalancerProbeConfig -Name 'HealthProbe-v4v6' -Protocol Tcp -Port 8080 -IntervalInSeconds 15 -ProbeCount 2
- $RDPprobe = New-AzLoadBalancerProbeConfig -Name 'RDPprobe' -Protocol Tcp -Port 3389 -IntervalInSeconds 15 -ProbeCount 2
- ```
-
- For this example, we're going to use the TCP probes.
-
-3. Create a load balancer rule.
-
- ```azurepowershell-interactive
- $lbrule1v4 = New-AzLoadBalancerRuleConfig -Name "HTTPv4" -FrontendIpConfiguration $FEIPConfigv4 -BackendAddressPool $backendpoolipv4 -Probe $healthProbe -Protocol Tcp -FrontendPort 80 -BackendPort 8080
- $lbrule1v6 = New-AzLoadBalancerRuleConfig -Name "HTTPv6" -FrontendIpConfiguration $FEIPConfigv6 -BackendAddressPool $backendpoolipv6 -Probe $healthProbe -Protocol Tcp -FrontendPort 80 -BackendPort 8080
- $RDPrule = New-AzLoadBalancerRuleConfig -Name "RDPrule" -FrontendIpConfiguration $FEIPConfigv4 -BackendAddressPool $backendpoolipv4 -Probe $RDPprobe -Protocol Tcp -FrontendPort 3389 -BackendPort 3389
- ```
-
-4. Create the load balancer using the previously created objects.
-
- ```azurepowershell-interactive
- $NRPLB = New-AzLoadBalancer -ResourceGroupName NRP-RG -Name 'myNrpIPv6LB' -Location 'West US' -FrontendIpConfiguration $FEIPConfigv4,$FEIPConfigv6 -InboundNatRule $inboundNATRule1v6,$inboundNATRule1v4 -BackendAddressPool $backendpoolipv4,$backendpoolipv6 -Probe $healthProbe,$RDPprobe -LoadBalancingRule $lbrule1v4,$lbrule1v6,$RDPrule
- ```
-
-## Create NICs for the backend VMs
-
-1. Get the Virtual Network and Virtual Network Subnet, where the NICs need to be created.
-
- ```azurepowershell-interactive
- $vnet = Get-AzVirtualNetwork -Name VNet -ResourceGroupName NRP-RG
- $backendSubnet = Get-AzVirtualNetworkSubnetConfig -Name LB-Subnet-BE -VirtualNetwork $vnet
- ```
-
-2. Create IP configurations and NICs for the VMs.
-
- ```azurepowershell-interactive
- $nic1IPv4 = New-AzNetworkInterfaceIpConfig -Name "IPv4IPConfig" -PrivateIpAddressVersion "IPv4" -Subnet $backendSubnet -LoadBalancerBackendAddressPool $backendpoolipv4 -LoadBalancerInboundNatRule $inboundNATRule1v4
- $nic1IPv6 = New-AzNetworkInterfaceIpConfig -Name "IPv6IPConfig" -PrivateIpAddressVersion "IPv6" -LoadBalancerBackendAddressPool $backendpoolipv6 -LoadBalancerInboundNatRule $inboundNATRule1v6
- $nic1 = New-AzNetworkInterface -Name 'myNrpIPv6Nic0' -IpConfiguration $nic1IPv4,$nic1IPv6 -ResourceGroupName NRP-RG -Location 'West US'
-
- $nic2IPv4 = New-AzNetworkInterfaceIpConfig -Name "IPv4IPConfig" -PrivateIpAddressVersion "IPv4" -Subnet $backendSubnet -LoadBalancerBackendAddressPool $backendpoolipv4
- $nic2IPv6 = New-AzNetworkInterfaceIpConfig -Name "IPv6IPConfig" -PrivateIpAddressVersion "IPv6" -LoadBalancerBackendAddressPool $backendpoolipv6
- $nic2 = New-AzNetworkInterface -Name 'myNrpIPv6Nic1' -IpConfiguration $nic2IPv4,$nic2IPv6 -ResourceGroupName NRP-RG -Location 'West US'
- ```
-
-## Create virtual machines and assign the newly created NICs
-
-For more information about creating a VM, see [Create and preconfigure a Windows Virtual Machine with Resource Manager and Azure PowerShell](/azure/virtual-machines/windows/quick-create-powershell?toc=%2fazure%2fload-balancer%2ftoc.json)
-
-1. Create an Availability Set and Storage account
-
- ```azurepowershell-interactive
- New-AzAvailabilitySet -Name 'myNrpIPv6AvSet' -ResourceGroupName NRP-RG -location 'West US'
- $availabilitySet = Get-AzAvailabilitySet -Name 'myNrpIPv6AvSet' -ResourceGroupName NRP-RG
- New-AzStorageAccount -ResourceGroupName NRP-RG -Name 'mynrpipv6stacct' -Location 'West US' -SkuName "Standard_LRS"
- $CreatedStorageAccount = Get-AzStorageAccount -ResourceGroupName NRP-RG -Name 'mynrpipv6stacct'
- ```
-
-2. Create each VM and assign the previously created NICs
-
- ```azurepowershell-interactive
- $mySecureCredentials= Get-Credential -Message "Type the username and password of the local administrator account."
-
- $vm1 = New-AzVMConfig -VMName 'myNrpIPv6VM0' -VMSize 'Standard_G1' -AvailabilitySetId $availabilitySet.Id
- $vm1 = Set-AzVMOperatingSystem -VM $vm1 -Windows -ComputerName 'myNrpIPv6VM0' -Credential $mySecureCredentials -ProvisionVMAgent -EnableAutoUpdate
- $vm1 = Set-AzVMSourceImage -VM $vm1 -PublisherName MicrosoftWindowsServer -Offer WindowsServer -Skus 2012-R2-Datacenter -Version "latest"
- $vm1 = Add-AzVMNetworkInterface -VM $vm1 -Id $nic1.Id -Primary
- $osDisk1Uri = $CreatedStorageAccount.PrimaryEndpoints.Blob.ToString() + "vhds/myNrpIPv6VM0osdisk.vhd"
- $vm1 = Set-AzVMOSDisk -VM $vm1 -Name 'myNrpIPv6VM0osdisk' -VhdUri $osDisk1Uri -CreateOption FromImage
- New-AzVM -ResourceGroupName NRP-RG -Location 'West US' -VM $vm1
-
- $vm2 = New-AzVMConfig -VMName 'myNrpIPv6VM1' -VMSize 'Standard_G1' -AvailabilitySetId $availabilitySet.Id
- $vm2 = Set-AzVMOperatingSystem -VM $vm2 -Windows -ComputerName 'myNrpIPv6VM1' -Credential $mySecureCredentials -ProvisionVMAgent -EnableAutoUpdate
- $vm2 = Set-AzVMSourceImage -VM $vm2 -PublisherName MicrosoftWindowsServer -Offer WindowsServer -Skus 2012-R2-Datacenter -Version "latest"
- $vm2 = Add-AzVMNetworkInterface -VM $vm2 -Id $nic2.Id -Primary
- $osDisk2Uri = $CreatedStorageAccount.PrimaryEndpoints.Blob.ToString() + "vhds/myNrpIPv6VM1osdisk.vhd"
- $vm2 = Set-AzVMOSDisk -VM $vm2 -Name 'myNrpIPv6VM1osdisk' -VhdUri $osDisk2Uri -CreateOption FromImage
- New-AzVM -ResourceGroupName NRP-RG -Location 'West US' -VM $vm2
- ```
load-balancer Load Balancer Ipv6 Internet Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/load-balancer-ipv6-internet-template.md
- Title: Deploy an Internet-facing load-balancer with IPv6 - Azure template-
-description: Learn how to deploy IPv6 support for Azure Load Balancer and load-balanced VMs using an Azure template.
--
-keywords: ipv6, azure load balancer, dual stack, public ip, native ipv6, mobile, iot
-- Previously updated : 06/26/2024----
-# Deploy an Internet-facing load-balancer solution with IPv6 using a template
-
-> [!div class="op_single_selector"]
-> * [PowerShell](load-balancer-ipv6-internet-ps.md)
-> * [Azure CLI](load-balancer-ipv6-internet-cli.md)
-> * [Template](load-balancer-ipv6-internet-template.md)
--
->[!NOTE]
->This article describes an introductory IPv6 feature to allow Basic Load Balancers to provide both IPv4 and IPv6 connectivity. Comprehensive IPv6 connectivity is now available with [IPv6 for Azure VNETs](../virtual-network/ip-services/ipv6-overview.md) which integrates IPv6 connectivity with your Virtual Networks and includes key features such as IPv6 Network Security Group rules, IPv6 User-defined routing, IPv6 Basic and Standard load balancing, and more. IPv6 for Azure VNETs is the recommended standard for IPv6 applications in Azure.
-See [IPv6 for Azure VNET PowerShell Deployment](./virtual-network-ipv4-ipv6-dual-stack-standard-load-balancer-powershell.md)
-
-An Azure load balancer is a Layer-4 (TCP, UDP) load balancer. The load balancer provides high availability by distributing incoming traffic among healthy service instances in cloud services or virtual machines in a load balancer set. Azure Load Balancer can also present those services on multiple ports, multiple IP addresses, or both.
-
-## Example deployment scenario
-
-The following diagram illustrates the load balancing solution being deployed using the example template described in this article.
-
-![Diagram shows an example scenario used in this article, including a workstation client connected to an Azure Load Balancer over the Internet, connected in turn to two virtual machines.](./media/load-balancer-ipv6-internet-template/lb-ipv6-scenario.png)
-
-In this scenario you create the following Azure resources:
-
-* a virtual network interface for each VM with both IPv4 and IPv6 addresses assigned
-* an Internet-facing Load Balancer with an IPv4 and an IPv6 Public IP address
-* two load balancing rules to map the public VIPs to the private endpoints
-* an Availability Set that contains the two VMs
-* two virtual machines (VMs)
-
-## Deploying the template using the Azure portal
-
-This article references a template that is published in the [Azure Quickstart Templates](https://azure.microsoft.com/resources/templates/load-balancer-ipv6-create/) gallery. You can download the template from the gallery or launch the deployment in Azure directly from the gallery. This article assumes you have downloaded the template to your local computer.
-
-1. Open the Azure portal and sign in with an account that has permissions to create VMs and networking resources within an Azure subscription. Also, unless you're using existing resources, the account needs permission to create a resource group and a storage account.
-2. Select "+New" from the menu then type "template" in the search box. Select "Template deployment" from the search results.
-
- ![Screenshot shows the Azure portal with New and Template deployment selected.](./media/load-balancer-ipv6-internet-template/lb-ipv6-portal-step2.png)
-
-3. In the Everything blade, select "Template deployment."
-
- ![Screenshot shows Template deployment in the Marketplace.](./media/load-balancer-ipv6-internet-template/lb-ipv6-portal-step3.png)
-
-4. Select "Create."
-
- ![Screenshot shows the description of Template deployment in the Marketplace.](./media/load-balancer-ipv6-internet-template/lb-ipv6-portal-step4.png)
-
-5. Select "Edit template." Delete the existing contents and copy/paste in the entire contents of the template file (to include the start and end { }), then select "Save."
-
- > [!NOTE]
- > If you are using Microsoft Internet Explorer, when you paste you receive a dialog box asking you to allow access to the Windows clipboard. Click "Allow access."
-
- ![Screenshot shows the first step of a Custom deployment, which is Edit template.](./media/load-balancer-ipv6-internet-template/lb-ipv6-portal-step5.png)
-
-6. Select "Edit parameters." In the Parameters blade, specify the values per the guidance in the Template parameters section, then select "Save" to close the Parameters blade. In the Custom Deployment blade, select your subscription, an existing resource group or create one. If you're creating a resource group, then select a location for the resource group. Next, select **Legal terms**, then select **Purchase** for the legal terms. Azure begins deploying the resources. It takes several minutes to deploy all the resources.
-
- ![Screenshot shows the steps involved in the Custom deployment, starting with entering template parameter values.](./media/load-balancer-ipv6-internet-template/lb-ipv6-portal-step6.png)
-
- For more information about these parameters, see the [Template parameters and variables](#template-parameters-and-variables) section later in this article.
-
-7. To see the resources created by the template, select Browse, scroll down the list until you see "Resource groups," then select it.
-
- ![Screenshot shows the Azure portal with Browse and Resource groups selected.](./media/load-balancer-ipv6-internet-template/lb-ipv6-portal-step7.png)
-
-8. On the Resource groups blade, select the name of the resource group you specified in step 6. You see a list of all the resources that were deployed. If all went well, it should say "Succeeded" under "Last deployment." If not, ensure that the account you're using has permissions to create the necessary resources.
-
- ![Screenshot shows the status of the last deployment for a resource group, in this example, Succeeded.](./media/load-balancer-ipv6-internet-template/lb-ipv6-portal-step8.png)
-
- > [!NOTE]
- > If you browse your Resource Groups immediately after completing step 6, "Last deployment" will display the status of "Deploying" while the resources are being deployed.
-
-9. Select "myIPv6PublicIP" in the list of resources. You see that it has an IPv6 address under IP address, and that its DNS name is the value you specified for the dnsNameforIPv6LbIP parameter in step 6. This resource is the public IPv6 address and host name that is accessible to Internet-clients.
-
- ![Screenshot shows the IPv6 public address.](./media/load-balancer-ipv6-internet-template/lb-ipv6-portal-step9.png)
-
-## Validate connectivity
-
-When the template has deployed successfully, you can validate connectivity by completing the following tasks:
-
-1. Sign in to the Azure portal and connect to each of the VMs created by the template deployment. If you deployed a Windows Server VM, run ipconfig /all from a command prompt. You see that the VMs have both IPv4 and IPv6 addresses. If you deployed Linux VMs, you need to configure the Linux OS to receive dynamic IPv6 addresses using the instructions provided for your Linux distribution.
-2. From an IPv6 Internet-connected client, initiate a connection to the public IPv6 address of the load balancer. To confirm that the load balancer is balancing between the two VMs, you could install a web server like Microsoft Internet Information Services (IIS) on each of the VMs. The default web page on each server could contain the text "Server0" or "Server1" to uniquely identify it. Then, open an Internet browser on an IPv6 Internet-connected client and browse to the hostname you specified for the dnsNameforIPv6LbIP parameter of the load balancer to confirm end-to-end IPv6 connectivity to each VM. If you see the web page from only one server, you may need to clear your browser cache. Open multiple private browsing sessions. You should see a response from each server.
-3. From an IPv4 Internet-connected client, initiate a connection to the public IPv4 address of the load balancer. To confirm that the load balancer is load balancing the two VMs, you could test using IIS, as detailed in Step 2.
-4. From each VM, initiate an outbound connection to an IPv6 or IPv4-connected Internet device. In both cases, the source IP seen by the destination device is the public IPv4 or IPv6 address of the load balancer.
-
-> [!NOTE]
-> To test connectivity for both an IPv4 and an IPv6 frontend of a Load Balancer, an ICMP ping can be sent to the frontend of the Load Balancer. Note that the IP addresses shown in the diagram are examples of values that you might see. Since the IPv6 addresses are assigned dynamically, the addresses you receive will differ and can vary by region. Also, it is common for the public IPv6 address on the load balancer to start with a different prefix than the private IPv6 addresses in the backend pool.
-
-## Template parameters and variables
-
-An Azure Resource Manager template contains multiple variables and parameters that you can customize to your needs. Variables are used for fixed values that you don't want a user to change. Parameters are used for values that you want a user to provide when deploying the template. The example template is configured for the scenario described in this article. You can customize it to the needs of your environment.
-
-The example template used in this article includes the following variables and parameters:
-
-| Parameter / Variable | Notes |
-| | |
-| adminUsername |Specify the name of the admin account used to sign in to the virtual machines with. |
-| adminPassword |Specify the password for the admin account used to sign in to the virtual machines with. |
-| dnsNameforIPv4LbIP |Specify the DNS host name you want to assign as the public name of the load balancer. This name resolves to the load balancer's public IPv4 address. The name must be lowercase and match the regex: ^[a-z][a-z0-9-]{1,61}[a-z0-9]$. |
-| dnsNameforIPv6LbIP |Specify the DNS host name you want to assign as the public name of the load balancer. This name resolves to the load balancer's public IPv6 address. The name must be lowercase and match the regex: ^[a-z][a-z0-9-]{1,61}[a-z0-9]$. This can be the same name as the IPv4 address. When a client sends a DNS query for this name, Azure returns both the A and AAAA records when the name is shared. |
-| vmNamePrefix |Specify the VM name prefix. The template appends a number (0, 1, etc.) to the name when the VMs are created. |
-| nicNamePrefix |Specify the network interface name prefix. The template appends a number (0, 1, etc.) to the name when the network interfaces are created. |
-| storageAccountName |Enter the name of an existing storage account or specify the name of a new one to be created by the template. |
-| availabilitySetName |Enter the name of the availability set to be used with the VMs |
-| addressPrefix |The address prefix used to define the address range of the Virtual Network |
-| subnetName |The name of the subnet created for the VNet |
-| subnetPrefix |The address prefix used to define the address range of the subnet |
-| vnetName |Specify the name for the VNet used by the VMs. |
-| ipv4PrivateIPAddressType |The allocation method used for the private IP address (Static or Dynamic) |
-| ipv6PrivateIPAddressType |The allocation method used for the private IP address (Dynamic). IPv6 only supports Dynamic allocation. |
-| numberOfInstances |The number of load balanced instances deployed by the template |
-| ipv4PublicIPAddressName |Specify the DNS name you want to use to communicate with the public IPv4 address of the load balancer. |
-| ipv4PublicIPAddressType |The allocation method used for the public IP address (Static or Dynamic) |
-| Ipv6PublicIPAddressName |Specify the DNS name you want to use to communicate with the public IPv6 address of the load balancer. |
-| ipv6PublicIPAddressType |The allocation method used for the public IP address (Dynamic). IPv6 only supports Dynamic allocation. |
-| lbName |Specify the name of the load balancer. This name is displayed in the portal or used when referring to it with a CLI or PowerShell command. |
-
-The remaining variables in the template contain derived values that are assigned when Azure creates the resources. Don't change those variables.
-
-## Next steps
-
-For the JSON syntax and properties of a load balancer in a template, see [Microsoft.Network/loadBalancers](/azure/templates/microsoft.network/loadbalancers).
networking Network Monitoring Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/networking/network-monitoring-overview.md
Performance Monitor is part of Network Performance Monitor and is network monito
* Monitor the health of the network, without the need for SNMP - For more information, view the following articles: * [Configure a Network Performance Monitor Solution in Azure Monitor logs](/previous-versions/azure/azure-monitor/insights/network-performance-monitor)
openshift Azure Redhat Openshift Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/azure-redhat-openshift-release-notes.md
Previously updated : 08/08/2024 Last updated : 09/06/2024
Azure Red Hat OpenShift receives improvements on an ongoing basis. To stay up to date with the most recent developments, this article provides you with information about the latest releases.
+## Version 4.15 - September 2024
+
+We're pleased to announce the launch of OpenShift 4.15 for Azure Red Hat OpenShift. This release enables [OpenShift Container Platform 4.15](https://docs.openshift.com/container-platform/4.15/welcome/index.html) as an installable version. You can check the end of support date on the [support lifecycle page](/azure/openshift/support-lifecycle) for previous versions.
+
+In addition to making version 4.15 available as an installable version, this release also makes the following features generally available:
+
+- CLI for multiple public IP addresses for larger clusters up to 250 nodes
+ ## Updates - August 2024 - You can now create up to 20 IP addresses per Azure Red Hat OpenShift cluster load balancer. This feature was previously in preview but is now generally available. See [Configure multiple IP addresses per cluster load balancer](howto-multiple-ips.md) for details. Azure Red Hat OpenShift 4.x has a 250 pod-per-node limit and a 250 compute node limit. For instructions on adding large clusters, see [Deploy a large Azure Red Hat OpenShift cluster](howto-large-clusters.md).
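For example, updating the IP count on an existing cluster might look like the following. This is a sketch only: the `--lb-ip-count` flag name is an assumption here and should be confirmed in the linked how-to before use.

```azurecli
# Set the number of public IP addresses on the cluster load balancer, up to 20 (flag name assumed).
az aro update --resource-group <resource-group> --name <cluster-name> --lb-ip-count 20
```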
openshift Create Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/create-cluster.md
Previously updated : 06/12/2024 Last updated : 09/13/2024 #Customer intent: As a developer, I want learn how to create an Azure Red Hat OpenShift cluster, scale it, and then clean up resources so that I am not charged for what I'm not using.
If you provide a custom domain for your cluster, note the following points:
* The OpenShift console will be available at a URL such as `https://console-openshift-console.apps.example.com`, instead of the built-in domain `https://console-openshift-console.apps.<random>.<location>.aroapp.io`.
-* By default, OpenShift uses self-signed certificates for all of the routes created on custom domains `*.apps.example.com`. If you choose to use custom DNS after connecting to the cluster, you will need to follow the OpenShift documentation to [configure a custom CA for your ingress controller](https://docs.openshift.com/container-platform/4.6/security/certificates/replacing-default-ingress-certificate.html) and a [custom CA for your API server](https://docs.openshift.com/container-platform/4.6/security/certificates/api-server.html).
+* By default, OpenShift uses self-signed certificates for all of the routes created on custom domains `*.apps.example.com`. If you choose to use custom DNS after connecting to the cluster, you will need to follow the OpenShift documentation to [configure a custom CA for your ingress controller](https://docs.openshift.com/container-platform/latest/security/certificates/replacing-default-ingress-certificate.html) and a [custom CA for your API server](https://docs.openshift.com/container-platform/latest/security/certificates/api-server.html).
### Create a virtual network containing two empty subnets
Run the following command to create a cluster. If you choose to use either of th
* Optionally, you can [pass your Red Hat pull secret](#get-a-red-hat-pull-secret-optional), which enables your cluster to access Red Hat container registries along with other content. Add the `--pull-secret @pull-secret.txt` argument to your command. * Optionally, you can [use a custom domain](#prepare-a-custom-domain-for-your-cluster-optional). Add the `--domain foo.example.com` argument to your command, replacing `foo.example.com` with your own custom domain. +
+<!--
> [!NOTE] > If you're adding any optional arguments to your command, be sure to close the argument on the preceding line of the command with a trailing backslash.
+-->
+
+> [!NOTE]
+> The maximum number of worker nodes definable at creation time is 50. You can scale out up to 250 nodes after the cluster is created.
```azurecli-interactive az aro create \
az aro create \
--worker-subnet worker-subnet ```
-After executing the `az aro create` command, it normally takes about 35 minutes to create a cluster.
+After executing the `az aro create` command, it normally takes about 45 minutes to create a cluster.
+
+#### Large scale ARO clusters
+
+If you're looking to deploy an Azure Red Hat OpenShift cluster with more than 100 worker nodes, see [Deploy a large Azure Red Hat OpenShift cluster](howto-large-clusters.md).
#### Selecting a different ARO version
openshift Howto Large Clusters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/howto-large-clusters.md
Previously updated : 08/15/2024 Last updated : 09/12/2024 # Deploy a large Azure Red Hat OpenShift cluster
-This article provides the steps and best practices for deploying large scale Azure Red Hat OpenShift clusters up to 250 nodes. For clusters of that size, a combination of control plane nodes and infrastructure nodes is needed to ensure the cluster functions properly is recommended.
+This article provides steps and best practices for deploying large-scale Azure Red Hat OpenShift clusters of up to 250 worker nodes. For clusters of that size, there are some recommendations regarding control plane nodes and infrastructure nodes.
> [!CAUTION]
-> Before deleting a large cluster, descale the cluster to 120 nodes or below.
+> Before deleting a cluster with more than 120 nodes, scale down the cluster to 120 nodes or fewer.
>
-## Deploy a cluster
+## Recommendations
+
+### Control plane nodes
-For clusters with over 101 control plane nodes, use the following [virtual machine instance types](support-policies-v4.md#supported-virtual-machine-sizes) size recommendations (or similar, newer generation instance types):
+For clusters with over 100 worker nodes, the following [virtual machine instance types](support-policies-v4.md#supported-virtual-machine-sizes) (or similar, newer generation instance types) are recommended for control plane nodes:
- Standard_D32s_v3 - Standard_D32s_v4 - Standard_D32s_v5
-Following is a sample script using Azure CLI to deploy a cluster with Standard_D32s_v5 as the control plane node:
-
-```azurecli
-#az aro create \ --resource-group $RESOURCEGROUP \ --name $CLUSTER \ --vnet aro-vnet \ --master-subnet master-subnet \ --worker-subnet worker-subnet --master-vm-size Standard_D32s_v5
-```
-
-## Deploy infrastructure nodes for the cluster
+### Infrastructure nodes
-For clusters with over 101 nodes, infrastructure nodes are required to separate cluster workloads (such as prometheus) to minimize contention with other workloads.
-
-> [!NOTE]
-> It's recommended that you deploy three (3) infrastructure nodes per cluster for redundancy and scalability needs.
->
+For clusters with over 100 worker nodes, infrastructure nodes are required to separate cluster workloads (such as Prometheus) to minimize contention with other workloads. You should deploy three (3) infrastructure nodes per cluster for redundancy and scalability needs.
The following instance types are recommended for infrastructure nodes: - Standard_E16as_v5 - Standard_E16s_v5
-For instructions on configuring infrastructure nodes, see [Deploy infrastructure nodes in an Azure Red Hat OpenShift cluster](howto-infrastructure-nodes.md).
+For instructions on configuring infrastructure nodes, see [Deploy infrastructure nodes in an Azure Red Hat OpenShift cluster](howto-infrastructure-nodes.md). Infrastructure nodes are configured after cluster deployment.
+
+### Add IP addresses to the load balancer
+
+Azure Red Hat OpenShift public clusters are created with a public load balancer that's used for outbound connectivity from inside the cluster. By default, one public IP address is configured on that public load balancer, and that limits the maximum node count of your cluster to 62. To be able to scale your cluster to the maximum supported number of 250 nodes, you need to assign multiple additional public IP addresses to the load balancer. You can configure up to 20 IP addresses per cluster. The outbound rules and frontend IP configurations are adjusted to accommodate the number of IP addresses.
-## Add IP addresses to the cluster
+For example, a cluster with 180 worker nodes needs at least three (3) IP addresses (180 nodes / 62 nodes per IP, rounded up).
-A maximum of 20 IP addresses can be added to a load balancer. One (1) IP address is needed per 65 nodes, so a cluster with 250 nodes requires a minimum of four (4) IP addresses.
+This can be accomplished as part of the cluster creation process or later, after the cluster is created.
-To add IP addresses to the load balancer using Azure CLI, run the following command:
+## Deploy a cluster
+
+When deploying a large cluster, you must start with at most 50 worker nodes at creation time, then scale the cluster out to the desired number of worker nodes, up to 250 worker nodes.
+
+> [!NOTE]
+> While you can define up to 50 worker nodes at creation time, it's best to start with a small cluster (for example, three (3) worker nodes) and then scale out to the desired number of worker nodes after the cluster is installed.
+>
+
+Follow the steps provided in [Create an Azure Red Hat OpenShift cluster](https://learn.microsoft.com/azure/openshift/create-cluster?tabs=azure-cli) until the "Create the cluster" step, then continue as follows.
-`az aro update -n [clustername] -g [resourcegroup] --lb-ip-count 20`
+The following sample Azure CLI command deploys a cluster with Standard_D32s_v5 control plane nodes, three public IP addresses, and nine worker nodes:
-To add IP addresses through a rest API call:
+```azurecli
+az aro create \
+ --resource-group $RESOURCEGROUP \
+ --name $CLUSTER \
+ --vnet aro-vnet \
+ --master-subnet master-subnet \
+ --worker-subnet worker-subnet \
+ --master-vm-size Standard_D32s_v5 \
+ --worker-count 9 \
+ --lb-ip-count 3
+```
+
+To add IP addresses to the load balancer using the Azure CLI after the cluster is created, run the following command:
-```rest
-az rest --method patch --url https://management.azure.com/subscriptions/fe16a035-e540-4ab7-80d9-373fa9a3d6ae/resourceGroups/shared-cluster/providers/Microsoft.RedHatOpenShift/OpenShiftClusters/shared-cluster?api-version=2023-07-01-preview --body '{"properties": {"networkProfile": {"loadBalancerProfile": {"managedOutboundIps": {"count": 5}}}}}' --headers "Content-Type=application/json"
+```azurecli
+az aro update \
+  --name <CLUSTER_NAME> \
+  --resource-group <RESOURCE_GROUP> \
+  --lb-ip-count <PUBLIC_IP_COUNT>
```
+You can then scale the corresponding OpenShift machine sets to reach the desired number of worker nodes. For more details, see [Manually scaling a compute machine set](https://docs.openshift.com/container-platform/latest/machine_management/manually-scaling-machineset.html).
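
A minimal sketch of that scaling step with the OpenShift CLI, where the machine set name is a placeholder that varies per cluster:

```
# List the compute machine sets in the cluster.
oc get machinesets -n openshift-machine-api

# Scale one machine set to the desired replica count.
oc scale --replicas=50 machineset <machineset-name> -n openshift-machine-api
```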
+
+Once the cluster is successfully installed, proceed to deploying infrastructure nodes as defined in the [Infrastructure nodes](#infrastructure-nodes) section.
openshift Howto Multiple Ips https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/howto-multiple-ips.md
Title: Configure multiple IP addresses for ARO cluster load balancers
+ Title: Configure multiple IP addresses for Azure Red Hat OpenShift cluster load balancers
description: Discover how to configure multiple IP addresses for ARO cluster load balancers. Previously updated : 03/05/2024 Last updated : 09/11/2024+ #Customer intent: As an ARO SRE, I need to configure multiple outbound IP addresses per ARO cluster load balancers
-# Configure multiple IP addresses per ARO cluster load balancer
+# Configure multiple IP addresses per Azure Red Hat OpenShift cluster load balancer
-ARO public clusters are created with a public load balancer that's used for outbound connectivity from inside the cluster. By default, one public IP address is configured on that public load balancer, and that limits the maximum node count of your cluster to 65. To be able to scale your cluster to the maximum supported number of 250 nodes, you need to assign multiple additional public IP addresses to the load balancer.
+Azure Red Hat OpenShift public clusters are created with a public load balancer that's used for outbound connectivity from inside the cluster. By default, one public IP address is configured on that public load balancer, and that limits the maximum node count of your cluster to 62. To be able to scale your cluster to the maximum supported number of 250 nodes, you need to assign multiple additional public IP addresses to the load balancer.
You can configure up to 20 IP addresses per cluster. The outbound rules and frontend IP configurations are adjusted to accommodate the number of IP addresses. > [!CAUTION]
-> Before deleting a large cluster, descale the cluster to 120 nodes or below.
->
-
-> [!NOTE]
-> The [API](/rest/api/openshift/open-shift-clusters/update?view=rest-openshift-2023-11-22&tabs=HTTP) method for using this feature is generally available. General availability for using the CLI for this feature is coming soon. The [preview version](#download-aro-extension-wheel-file-preview-only) of this feature can still be used through the CLI.
+> Before deleting a cluster with more than 120 nodes, scale down the cluster to 120 nodes or fewer.
> ## Requirements The multiple public IPs feature is only available on the current network architecture used by ARO; older clusters don't support this feature. If your cluster was created before OpenShift Container Platform (OCP) version 4.5, this feature isn't available even if you upgraded your OCP version since then.
-If you're unsure if your cluster was created before the current version of OCP, use the following commands to check.
+If you're unsure if your cluster was created before OCP version 4.5, use the following commands to check.
-To get the cluster managed resource group:
+Get the cluster managed resource group:
``` RESOURCEGROUP=aro-rg # the name of the resource group your cluster is in
az network lb list -g $CLUSTER_RESOURCEGROUP -o table
If you have a loadbalancer named `$CLUSTER-public-lb`, the cluster has the older network architecture and can't use the multiple public IP feature.
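
Putting those checks together, a complete sketch looks like this (the cluster and resource group names are placeholders):

```azurecli
CLUSTER=aro-cluster    # the name of your cluster
RESOURCEGROUP=aro-rg   # the resource group your cluster is in

# Derive the managed resource group name from the cluster resource.
CLUSTER_RESOURCEGROUP=$(az aro show -n $CLUSTER -g $RESOURCEGROUP \
  --query "clusterProfile.resourceGroupId" -o tsv | awk -F/ '{print $NF}')

# List the load balancers in the managed resource group.
az network lb list -g $CLUSTER_RESOURCEGROUP -o table
```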
-### Download ARO extension wheel file (Preview only)
-
-In order to run the commands in this article, you must first download the ARO extension wheel file from [https://aka.ms/az-aroext-latest](https://aka.ms/az-aroext-latest). To install the extension, run the following command:
-
-`az extension add -s <path to downloaded whl file>`
- ## Create the cluster with multiple IP addresses
-To create a new ARO cluster with multiple managed IPs on the public load balancer, use the following command with the desired number of IPs in the `--load-balancer-managed-outbound-ip-count` parameter. In the example below, seven (7) IP addresses will be created:
+To create a new ARO cluster with multiple managed IPs on the public load balancer, use the following command with the desired number of IPs in the `--load-balancer-managed-outbound-ip-count` parameter. In the example below, seven (7) IP addresses are created:
```
-az aro create --resource-group aroResourceGroup --name aroCluster \
-
- --load-balancer-managed-outbound-ip-count 7 \
+az aro create \
+ --resource-group aroResourceGroup \
+ --name aroCluster \
+ --load-balancer-managed-outbound-ip-count 7
```
+See [Deploy a large Azure Red Hat OpenShift cluster](howto-large-clusters.md) for more information about deploying a large cluster.
+ ## Update the number of IP addresses on existing clusters To update the number of managed IPs on the public load balancer of an existing ARO cluster, use the following command with the desired number of IPs in the `--load-balancer-managed-outbound-ip-count` parameter. In the example below, the number of IPs for the cluster will be updated to four (4): ```
-az aro update --resource-group aroResourceGroup --name aroCluster \
-
- --load-balancer-managed-outbound-ip-count 4
+az aro update \
+ --resource-group aroResourceGroup \
+ --name aroCluster \
+ --load-balancer-managed-outbound-ip-count 4
```
-You can use this update method to either increase or decrease the number of IPs on a cluster to be between 1 and 20. Note that scaling down the number of clusters can interrupt the outbound network traffic from the cluster.
+You can use this update method to either increase or decrease the number of IPs on a cluster to a value between 1 and 20. Scaling down the number of IP addresses can interrupt outbound network traffic from the cluster.
openshift Support Lifecycle https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/support-lifecycle.md
Previously updated : 07/17/2024 Last updated : 09/10/2024 # Support lifecycle for Azure Red Hat OpenShift 4
See the following guide for the [past Red Hat OpenShift Container Platform (upst
|4.12|January 2023| August 19 2023|October 17 2024| |4.13|May 2023| December 15 2023|November 17 2024| |4.14|October 2023| April 25 2024|May 1 2025|
-|4.15|February 2024| Coming soon|June 27 2025|
+|4.15|February 2024| September 4 2024|June 27 2025|
+|4.16|June 2024| Coming soon|November 2 2025|
## FAQ
operator-service-manager Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-service-manager/release-notes.md
The following release notes are generally available (GA):
* Release Notes for Version 2.0.2777-132 * Release Notes for Version 2.0.2783-134 * Release Notes for Version 2.0.2788-135
+* Release Notes for Version 2.0.2804-137
### Release Attestation These releases are produced compliant with Microsoft's Secure Development Lifecycle. This lifecycle includes processes for authorizing software changes, antimalware scanning, and scanning and mitigating security bugs and vulnerabilities.
The following bug fixes, or other defect resolutions, are delivered with this re
#### Security Related Updates None+
+## Release 2.0.2804-137
+
+Document Revision 1.1
+
+### Release Summary
+Azure Operator Service Manager is a cloud orchestration service that enables automation of operator network-intensive workloads and mission-critical applications hosted on Azure Operator Nexus. Azure Operator Service Manager unifies infrastructure, software, and configuration management with a common model in a single interface, based on trusted Azure industry standards. This August 30, 2024 Azure Operator Service Manager release updates the NFO version to 2.0.2804-137, the details of which are outlined in the remainder of this document.
+
+### Release Details
+* Release Version: Version 2.0.2804-137
+* Release Date: August 30, 2024
+* Is NFO update required: YES, Update only
+* Dependency Versions: Go/1.22.4 - Helm/3.15.2
+
+### Release Installation
+This release can be installed as an update on top of release 2.0.2788-135.
+
+### Release Highlights
+#### High availability for cluster registry and webhook.
+This version restores the high availability features first introduced with release 2.0.2783-134. When enabled, the singleton pod used in earlier releases is replaced with a replica set, which optionally allows for horizontal autoscaling.
+
+#### Enhanced internal certificate management and rotation.
+This version implements internal certificate management using a new method that doesn't take a dependency on cert-manager. Instead, a private internal service handles certificate management and rotation within the AOSM namespace.
+
+#### Safe Upgrades NF Level Rollback
+This version introduces new user options to control behavior when a failure occurs during an upgrade. While pause on failure remains the default, a user can now optionally enable rollback on failure. If a failure occurs with rollback on failure enabled, any previously completed NfApps are reverted to their prior state by using the helm rollback command. See the [learn documentation](safe-upgrades-nf-level-rollback.md) for more details on usage.
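
For context, the Helm primitive the service invokes on your behalf has this shape; the release name, revision, and namespace are placeholders, and the service manages this step automatically:

```
helm rollback <release-name> <revision> --namespace <namespace>
```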
+
+### Issues Resolved in This Release
+
+#### Bugfix Related Updates
+The following bug fixes, or other defect resolutions, are delivered with this release, for either Network Function Operator (NFO) or resource provider (RP) components.
+
+* NFO - Enhance cluster registry performance by preventing unnecessary or repeated image downloads.
+
+#### Security Related Updates
+
+* CVE - A total of one CVE is addressed in this release.
playwright-testing How To Configure Visual Comparisons https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/playwright-testing/how-to-configure-visual-comparisons.md
Example service config that runs visual comparisons and configures the path for
```typeScript import { defineConfig } from '@playwright/test';
+import { getServiceConfig, ServiceOS } from '@azure/microsoft-playwright-testing';
import config from './playwright.config';
-import dotenv from 'dotenv';
-
-dotenv.config();
-
-// Name the test run if it's not named yet.
-process.env.PLAYWRIGHT_SERVICE_RUN_ID = process.env.PLAYWRIGHT_SERVICE_RUN_ID || new Date().toISOString();
-
-// Can be 'linux' or 'windows'.
-const os = process.env.PLAYWRIGHT_SERVICE_OS || 'linux';
-
-export default defineConfig(config, {
- workers: 20,
-
- // Enable screenshot testing and configure directory with expectations.
- ignoreSnapshots: false,
- snapshotPathTemplate: `{testDir}/__screenshots__/{testFilePath}/${os}/{arg}{ext}`,
-
- use: {
- // Specify the service endpoint.
- connectOptions: {
- wsEndpoint: `${process.env.PLAYWRIGHT_SERVICE_URL}?cap=${JSON.stringify({
- os,
- runId: process.env.PLAYWRIGHT_SERVICE_RUN_ID
- })}`,
- timeout: 30000,
- headers: {
- 'x-mpt-access-key': process.env.PLAYWRIGHT_SERVICE_ACCESS_TOKEN!
- },
- // Allow service to access the localhost.
- exposeNetwork: '<loopback>'
- }
+
+/* Learn more about service configuration at https://aka.ms/mpt/config */
+export default defineConfig(
+ config,
+ getServiceConfig(config, {
+ exposeNetwork: '<loopback>',
+ timeout: 30000,
+ os: ServiceOS.LINUX
+ }),
+ {
+ /*
+ Playwright Testing service reporter is added by default.
+ This will override any reporter options specified in the base playwright config.
+ If you are using more reporters, please update your configuration accordingly.
+ */
+ reporter: [["list"], ['@azure/microsoft-playwright-testing/reporter']],
+ ignoreSnapshots: false,
+    // Enable screenshot testing and configure the directory with expectations.
+ snapshotPathTemplate: `{testDir}/__screenshots__/{testFilePath}/${ServiceOS.LINUX}/{arg}{ext}`,
}
-});
+);
``` ## Related content
playwright-testing How To Manage Access Tokens https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/playwright-testing/how-to-manage-access-tokens.md
You can only delete access tokens that you created in a workspace. To create an
## Related content - Learn more about [managing access to a workspace](./how-to-manage-workspace-access.md).
+- Learn more about [managing authentication to the workspace](./how-to-manage-authentication.md).
playwright-testing How To Manage Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/playwright-testing/how-to-manage-authentication.md
+
+ Title: Microsoft Playwright Testing authentication
+description: Learn how to manage authentication and authorization for Microsoft Playwright Testing preview
+ Last updated : 09/07/2024+++
+# Manage authentication and authorization for Microsoft Playwright Testing preview
+
+In this article, you learn how to manage authentication and authorization for Microsoft Playwright Testing preview. Authentication is required to run Playwright tests on cloud-hosted browsers and to publish test results and artifacts to the service.
+
+By default, [Microsoft Entra ID](/entra/identity/) is used for authentication. This method is more secure and is the recommended authentication method. You can't disable authentication using Microsoft Entra ID. However, you can also use access tokens to authenticate and authorize.
++
+> [!IMPORTANT]
+> Microsoft Playwright Testing is currently in preview. For legal terms that apply to Azure features that are in beta, in preview, or otherwise not yet released into general availability, see the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+
+## Background
+
+Microsoft Playwright Testing Preview is built on the Playwright open-source framework. It runs Playwright tests on cloud-hosted browsers and publishes reports and artifacts back to the service.
+
+To use the service, the client must authenticate with the service to access the browsers. Similarly, publishing results and artifacts requires authenticated API interactions. The service offers two authentication methods: Microsoft Entra ID and access tokens.
+
+Microsoft Entra ID uses your Azure credentials, requiring a sign-in to your Azure account for secure access. Alternatively, you can generate an access token from your Playwright workspace and use it in your setup. However, we strongly recommend Microsoft Entra ID for authentication due to its enhanced security. Access tokens, while convenient, function like long-lived passwords and are more susceptible to being compromised.
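
For illustration, a minimal `playwright.service.config.ts` relying on Microsoft Entra ID might look like the following sketch. The explicit `AzureCliCredential` is an assumption; Entra ID is the default mode, and other `@azure/identity` credentials work too:

```typescript
import { defineConfig } from '@playwright/test';
import { getServiceConfig } from '@azure/microsoft-playwright-testing';
import { AzureCliCredential } from '@azure/identity';
import config from './playwright.config';

export default defineConfig(
  config,
  getServiceConfig(config, {
    // Entra ID is the default auth mode; this credential assumes you've run `az login`.
    credential: new AzureCliCredential(),
  }),
);
```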
+
+## Enable authentication using access tokens
+
+Microsoft Playwright Testing service also supports authentication using access tokens. This authentication method is less secure. We recommend using Microsoft Entra ID to authenticate to the service.
+
+> [!CAUTION]
+> Your workspace access tokens are similar to a password for your Microsoft Playwright Testing workspace. Always be careful to protect your access tokens. Avoid distributing access tokens to other users, hard-coding them, or saving them anywhere in plain text that is accessible to others.
+
+Revoke and recreate your tokens if you believe they are compromised.
+
+To enable authentication using access tokens:
+
+1. Sign in to the [Playwright portal](https://aka.ms/mpt/portal) with your Azure account and select your workspace.
+
+1. Select the settings icon on the home page to go to the workspace settings.
+
+1. Select the **Authentication** page and turn on **Enable authentication using Access tokens**.
++
+ :::image type="content" source="./media/how-to-manage-authentication/playwright-testing-enable-access-token.png" alt-text="Screenshot that shows the access tokens settings page in the Playwright portal." lightbox="./media/how-to-manage-authentication/playwright-testing-enable-access-token.png":::
+
+> [!CAUTION]
+> Authentication using access tokens is less secure. [Learn how to manage access tokens](./how-to-manage-access-tokens.md)
+
+## Set up authentication using access tokens
+
+1. While running the tests, enable access token auth in the `playwright.service.config.ts` file in your setup.
+
+ ```typescript
+ /* Learn more about service configuration at https://aka.ms/mpt/config */
+    export default defineConfig(config, getServiceConfig(config, {
+      serviceAuthType: 'ACCESS_TOKEN'
+ }));
+ ```
+
+1. Create access token
+
+ Follow the steps to [create an access token](./how-to-manage-access-tokens.md#generate-a-workspace-access-token)
+
+1. Set up your environment
+
+ To set up your environment, you have to configure the `PLAYWRIGHT_SERVICE_ACCESS_TOKEN` environment variable with the value you obtained in the previous steps.
+
+ We recommend that you use the `dotenv` module to manage your environment. With `dotenv`, you define your environment variables in the `.env` file.
+
+ 1. Add the `dotenv` module to your project:
+
+ ```shell
+ npm i --save-dev dotenv
+ ```
+
+ 1. Create a `.env` file alongside the `playwright.config.ts` file in your Playwright project:
+
+ ```
+ PLAYWRIGHT_SERVICE_ACCESS_TOKEN={MY-ACCESS-TOKEN}
+ ```
+
+ Make sure to replace the `{MY-ACCESS-TOKEN}` text placeholder with the value you copied earlier.
++
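
If your service configuration doesn't already load environment variables from the `.env` file, a minimal way to do so (assuming the `dotenv` module installed above) is an import at the top of `playwright.service.config.ts`:

```typescript
// Load .env so PLAYWRIGHT_SERVICE_ACCESS_TOKEN is set before the service config reads it.
import 'dotenv/config';
```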
+## Run tests on the service and publish results
+
+Run Playwright tests against cloud-hosted browsers and publish the results to the service using the configuration you created above.
+
+```shell
+npx playwright test --config=playwright.service.config.ts --workers=20
+```
+
+## Related content
+
+- Learn more about [managing access tokens](./how-to-manage-access-tokens.md).
playwright-testing How To Test Local Applications https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/playwright-testing/how-to-test-local-applications.md
You can specify one or multiple networks by using a list of rules. For example,
You can configure the `exposeNetwork` option in `playwright.service.config.ts`. The following example shows how to expose the `localhost` network by using the [`<loopback>`](https://en.wikipedia.org/wiki/Loopback) rule: ```typescript
-export default defineConfig(config, {
- workers: 20,
- use: {
- // Specify the service endpoint.
- connectOptions: {
- wsEndpoint: `${process.env.PLAYWRIGHT_SERVICE_URL}?cap=${JSON.stringify({
- // Can be 'linux' or 'windows'.
- os: process.env.PLAYWRIGHT_SERVICE_OS || 'linux',
- runId: process.env.PLAYWRIGHT_SERVICE_RUN_ID
- })}`,
- timeout: 30000,
- headers: {
- 'x-mpt-access-key': process.env.PLAYWRIGHT_SERVICE_ACCESS_TOKEN!
- },
- // Allow service to access the localhost.
- exposeNetwork: '<loopback>'
- }
- }
-});
+import { getServiceConfig, ServiceOS } from "@azure/microsoft-playwright-testing";
+import { defineConfig } from "@playwright/test";
+import { AzureCliCredential } from "@azure/identity";
+import config from "./playwright.config";
+
+export default defineConfig(
+ config,
+ getServiceConfig(config, {
+ exposeNetwork: '<loopback>', // Allow service to access the localhost.
+ }),
+);
+ ``` You can now reference `localhost` in the Playwright test code, and run the tests on cloud-hosted browsers with Microsoft Playwright Testing:
playwright-testing How To Try Playwright Testing Free https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/playwright-testing/how-to-try-playwright-testing-free.md
# Try Microsoft Playwright Testing Preview for free
-Microsoft Playwright Testing Preview is a fully managed service for end-to-end testing built on top of Playwright. With the free trial, you can try Microsoft Playwright Testing for free for 30 days and 100 test minutes. In this article, you learn about the limits of the free trial, how to get started, and how to track your free trial usage.
+Microsoft Playwright Testing Preview is a fully managed service for end-to-end testing built on top of Playwright. With the free trial, you can try Microsoft Playwright Testing for free for 30 days, 100 test minutes, and 1,000 test results. In this article, you learn about the limits of the free trial, how to get started, and how to track your free trial usage.
> [!IMPORTANT] > Microsoft Playwright Testing is currently in preview. For legal terms that apply to Azure features that are in beta, in preview, or otherwise not yet released into general availability, see the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
The following table lists the limits for the Microsoft Playwright Testing free t
|-|-| | Duration of trial | 30 days | | Total test minutes¹ | 100 minutes |
+| Total test results¹ | 1,000 results |
| Number of workspaces²³ | 1 |
-¹ If you run a test that exceeds the free trial test minute limit, only the overage test minutes account towards the pay-as-you-go billing model.
+¹ If your usage exceeds either the free test minute limit or the free test result limit, only the overage counts toward the pay-as-you-go billing model. The two features are billed separately. See [Microsoft Playwright Testing preview pricing](https://aka.ms/mpt/pricing).
² These limits only apply to the *first* workspace you create in your Azure subscription. Any subsequent workspaces you create in the subscription automatically use the pay-as-you-go billing model.
In the list of all workspaces, you can view a banner message that indicates if a
When you exceed any of the limits of the free trial, your workspace is automatically converted to the pay-as-you-go billing model.
-All test runs, access tokens, and other artifacts linked to your workspace remain available.
+All test runs, test results, and other artifacts linked to your workspace remain available.
## Next step
playwright-testing How To Use Service Config File https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/playwright-testing/how-to-use-service-config-file.md
+
+ Title: Microsoft Playwright Testing service configuration file options
+description: Learn how to use options available in configuration file with Microsoft Playwright Testing preview
+ Last updated : 09/07/2024++
+# Use options available in the configuration file with Microsoft Playwright Testing preview
+
+This article shows you how to use the options available in the `playwright.service.config.ts` file that was generated for you.
+If you don't have this file in your code, follow the quickstart guide: [Quickstart: Run end-to-end tests at scale with Microsoft Playwright Testing Preview](./quickstart-run-end-to-end-tests.md).
+
+> [!IMPORTANT]
+> Microsoft Playwright Testing is currently in preview. For legal terms that apply to Azure features that are in beta, in preview, or otherwise not yet released into general availability, see the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+
+## Prerequisites
+
+* Follow the quickstart guide and set up a project to run with the Microsoft Playwright Testing service. See [Quickstart: Run end-to-end tests at scale with Microsoft Playwright Testing Preview](./quickstart-run-end-to-end-tests.md).
+
+Here's a version of the `playwright.service.config.ts` file with all the available options:
+
+```typescript
+import { getServiceConfig, ServiceOS } from "@azure/microsoft-playwright-testing";
+import { defineConfig } from "@playwright/test";
+import { AzureCliCredential } from "@azure/identity";
+import config from "./playwright.config";
+
+export default defineConfig(
+ config,
+ getServiceConfig(config, {
+    serviceAuthType: 'ACCESS_TOKEN', // Use this option when you want to authenticate using access tokens. This mode of auth should be enabled for the workspace.
+ os: ServiceOS.WINDOWS, // Select the operating system where you want to run tests.
+ runId: new Date().toISOString(), // Set a unique ID for every test run to distinguish them in the service portal.
+ credential: new AzureCliCredential(), // Select the authentication method you want to use with Entra.
+ useCloudHostedBrowsers: true, // Select if you want to use cloud-hosted browsers to run your Playwright tests.
+ exposeNetwork: '<loopback>', // Use this option to connect to local resources from your Playwright test code without having to configure additional firewall settings.
+ timeout: 30000 // Set the timeout for your tests.
+ }),
+ {
+ reporter: [
+ ["list"],
+ [
+ "@azure/microsoft-playwright-testing/reporter",
+ {
+ enableGitHubSummary: true, // Enable/disable GitHub summary in GitHub Actions workflow.
+ },
+ ],
+ ],
+ },
+);
+
+```
+
+## Settings in `playwright.service.config.ts` file
+
+* **`serviceAuthType`**:
+ - **Description**: This setting allows you to choose the authentication method you want to use for your test run.
+ - **Available Options**:
+    - `ACCESS_TOKEN` to use access tokens. You need to enable authentication using access tokens if you want to use this option; see [manage authentication](./how-to-manage-authentication.md).
+ - `ENTRA_ID` to use Microsoft Entra ID for authentication. It's the default mode.
+ - **Default Value**: `ENTRA_ID`
+ - **Example**:
+ ```typescript
+ serviceAuthType:'ENTRA_ID'
+ ```
++
+* **`os`**:
+ - **Description**: This setting allows you to choose the operating system where the browsers running Playwright tests are hosted.
+ - **Available Options**:
+ - `ServiceOS.WINDOWS` for Windows OS.
+ - `ServiceOS.LINUX` for Linux OS.
+ - **Default Value**: `ServiceOS.LINUX`
+ - **Example**:
+ ```typescript
+ os: ServiceOS.WINDOWS
+ ```
+
+* **`runId`**:
+ - **Description**: This setting allows you to set a unique ID for every test run to distinguish them in the service portal.
+ - **Example**:
+ ```typescript
+ runId: new Date().toISOString()
+ ```
+
+* **`credential`**:
+ - **Description**: This setting allows you to select the authentication method you want to use with Microsoft Entra ID.
+ - **Example**:
+ ```typescript
+ credential: new AzureCliCredential()
+ ```
+
+* **`useCloudHostedBrowsers`**
+ - **Description**: This setting allows you to choose whether to use cloud-hosted browsers or the browsers on your client machine to run your Playwright tests. If you disable this option, your tests run on the browsers of your client machine instead of cloud-hosted browsers, and you don't incur any charges.
+ - **Default Value**: true
+ - **Example**:
+ ```typescript
+ useCloudHostedBrowsers: true
+ ```
+
+* **`exposeNetwork`**
+  - **Description**: This setting allows you to connect to local resources from your Playwright test code without having to configure additional firewall settings. To learn more, see [how to test local applications](./how-to-test-local-applications.md).
+ - **Example**:
+ ```typescript
+ exposeNetwork: '<loopback>'
+ ```
+
+* **`timeout`**
+  - **Description**: This setting allows you to set the timeout for your tests connecting to the cloud-hosted browsers.
+ - **Example**:
+ ```typescript
+ timeout: 30000,
+ ```
+
+* **`reporter`**
+  - **Description**: The `playwright.service.config.ts` file extends the Playwright config file of your setup. This option overrides the existing reporters and sets the Microsoft Playwright Testing reporter. You can add or modify this list to include the reporters that you want to use. You're billed for Microsoft Playwright Testing reporting if you add `@azure/microsoft-playwright-testing/reporter`.
+ - **Default Value**: ["@azure/microsoft-playwright-testing/reporter"]
+ - **Example**:
+ ```typescript
+ reporter: [
+ ["list"],
+ ["@azure/microsoft-playwright-testing/reporter"],
+ ```
+* **`enableGitHubSummary`**:
+ - **Description**: This setting allows you to configure the Microsoft Playwright Testing service reporter. You can choose whether to include the test run summary in the GitHub summary when running in GitHub Actions.
+ - **Default Value**: true
+ - **Example**:
+ ```typescript
+ reporter: [
+ ["list"],
+ [
+ "@azure/microsoft-playwright-testing/reporter",
+ {
+ enableGitHubSummary: true,
+ },
+ ],
+ ]
+ ```
+
playwright-testing How To Use Service Features https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/playwright-testing/how-to-use-service-features.md
+
+ Title: Microsoft Playwright Testing features
+description: Learn how to use different features offered by Microsoft Playwright Testing service
+ Last updated : 09/07/2024+++
+# Use features of Microsoft Playwright Testing preview
+
+In this article, you learn how to use the features provided by Microsoft Playwright Testing preview.
+
+> [!IMPORTANT]
+> Microsoft Playwright Testing is currently in preview. For legal terms that apply to Azure features that are in beta, in preview, or otherwise not yet released into general availability, see the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+
+## Prerequisites
+
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+- A Microsoft Playwright Testing workspace. To create a workspace, see [Quickstart: Run Playwright tests at scale](./quickstart-run-end-to-end-tests.md).
+- To manage features, your Azure account needs to have the [Contributor](/azure/role-based-access-control/built-in-roles#contributor) or [Owner](/azure/role-based-access-control/built-in-roles#owner) role at the workspace level. Learn more about [managing access to a workspace](./how-to-manage-workspace-access.md).
+
+## Background
+
+Microsoft Playwright Testing preview allows you to:
+- Run your Playwright tests on cloud-hosted browsers.
+- Publish test reports and artifacts to the service and view them in the service portal.
+
+These features have their own pricing plans and are billed separately. You can choose to use either feature or both. These features can be enabled or disabled for the workspace or for any specific run. To learn more about pricing, see [Microsoft Playwright Testing preview pricing](https://aka.ms/mpt/pricing).
+
+## Manage feature for the workspace
+
+1. Sign in to the [Playwright portal](https://aka.ms/mpt/portal) with your Azure account.
+
+1. Select the workspace settings icon, and then go to the **General** page to view the workspace settings.
+
+1. Navigate to the **Feature management** section.
+
+ :::image type="content" source="./media/how-to-use-service-features/playwright-testing-enable-reporting-for-workspace.png" alt-text="Screenshot that shows the workspace settings page in the Playwright Testing portal for Feature Management." lightbox="./media/how-to-use-service-features/playwright-testing-enable-reporting-for-workspace.png":::
++
+1. Choose the features you want to enable for your workspace.
+
+    Currently, you can choose to enable or disable only the reporting feature of the service. By default, reporting is enabled for the workspace.
+
+## Manage features while running tests
+
+You can also choose to use either feature or both for a test run.
+
+> [!IMPORTANT]
+> You can only use a feature in a test run if it is enabled for the workspace.
+
+1. In your Playwright setup, go to `playwright.service.config.ts` file and use these settings for feature management.
+
+```typescript
+import { getServiceConfig, ServiceOS } from "@azure/microsoft-playwright-testing";
+import { defineConfig } from "@playwright/test";
+import { AzureCliCredential } from "@azure/identity";
+import config from "./playwright.config";
+
+export default defineConfig(
+ config,
+ getServiceConfig(config, {
+ useCloudHostedBrowsers: true, // Select if you want to use cloud-hosted browsers to run your Playwright tests.
+ }),
+ {
+ reporter: [
+ ["list"],
+ ["@azure/microsoft-playwright-testing/reporter"], //Microsoft Playwright Testing reporter
+ ],
+ },
+);
+```
+- **`useCloudHostedBrowsers`**:
+ - **Description**: This setting allows you to choose whether to use cloud-hosted browsers or the browsers on your client machine to run your Playwright tests. If you disable this option, your tests run on the browsers of your client machine instead of cloud-hosted browsers, and you do not incur any charges. You can still configure reporting options.
+ - **Default Value**: true
+ - **Example**:
+ ```typescript
+ useCloudHostedBrowsers: true
+ ```
+- **`reporter`**
+  - **Description**: The `playwright.service.config.ts` file extends the Playwright configuration file of your setup. This option overrides the existing reporters and sets the Microsoft Playwright Testing reporter. You can add or modify this list to include the reporters you want to use. You are billed for Microsoft Playwright Testing reporting if you add `@azure/microsoft-playwright-testing/reporter`. This feature can be used independently of cloud-hosted browsers, meaning you don't have to run tests on service-managed browsers to get reports and artifacts on the Playwright portal.
+ - **Default Value**: ["@azure/microsoft-playwright-testing/reporter"]
+ - **Example**:
+ ```typescript
+ reporter: [
+ ["list"],
+ ["@azure/microsoft-playwright-testing/reporter"]],
+ ```
++
+## Related content
+
+- Learn more about [Microsoft Playwright Testing preview pricing](https://aka.ms/mpt/pricing).
playwright-testing Overview What Is Microsoft Playwright Testing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/playwright-testing/overview-what-is-microsoft-playwright-testing.md
# What is Microsoft Playwright Testing Preview?
-Microsoft Playwright Testing Preview is a fully managed service for end-to-end testing built on top of Playwright. With Playwright, you can automate end-to-end tests to ensure your web applications work the way you expect it to, across different web browsers and operating systems. The service abstracts the complexity and infrastructure for running Playwright tests with high parallelization.
+Microsoft Playwright Testing Preview is a fully managed service for end-to-end testing built on top of Playwright. With Playwright, you can automate end-to-end tests to ensure your web applications work the way you expect them to, across different web browsers and operating systems. The service abstracts the complexity and infrastructure for running Playwright tests and managing results and artifacts. The service runs tests with high parallelization and stores test results and artifacts to help you ship features faster and troubleshoot easily.
Run your Playwright test suite in the cloud, without changes to your test code or modifications to your tooling setup. Use the [Playwright Test Visual Studio Code extension](https://marketplace.visualstudio.com/items?itemName=ms-playwright.playwright) for a rich editor experience, or use the Playwright CLI to add automation within your continuous integration (CI) workflow.
Modern web apps need to work flawlessly across numerous browsers, operating syst
- Microsoft Playwright Testing supports all [browsers supported by Playwright](https://playwright.dev/docs/release-notes).
+## Troubleshoot tests easily using reporting and artifacts
+
+As applications grow, maintaining quality is crucial. Use the reporting feature of the service to troubleshoot test results with rich artifacts.
+
+- Publish test results and artifacts to the service and view them in the service portal for faster troubleshooting.
+- Integrate reporting with CI pipelines to get rich, consolidated reports.
+ ## Endpoint testing Use cloud-hosted remote browsers to test web applications regardless of where they're hosted, without having to allow inbound connections on your firewall.
Microsoft Playwright Testing instantiates cloud-hosted browsers across different
:::image type="content" source="./media/overview-what-is-microsoft-playwright-testing/playwright-testing-architecture-overview.png" alt-text="Diagram that shows an architecture overview of Microsoft Playwright Testing." lightbox="./media/overview-what-is-microsoft-playwright-testing/playwright-testing-architecture-overview.png":::
-After a test run completes, Playwright sends the test run metadata to the service. The test results, trace files, and other test run files are available on the client machine.
+After a test run completes, the test results, trace files, and other test run files are available on the client machine. These artifacts are then published to the service from the client machine and can be viewed in the service portal.
-To run existing tests with Microsoft Playwright Testing requires no changes to your test code. Add a service configuration file to your test project, and specify your workspace settings, such as the access token and the service endpoint.
+Running existing tests with Microsoft Playwright Testing requires no changes to your test code. Install the Microsoft Playwright Testing service package and specify the service endpoint for your workspace.
Learn more about how to [determine the optimal configuration for optimizing test suite completion](./concept-determine-optimal-configuration.md).
Learn more about how to [determine the optimal configuration for optimizing test
Microsoft Playwright Testing doesn't store or process customer data outside the region you deploy the workspace in. When you use the regional affinity feature, the metadata is transferred from the cloud hosted browser region to the workspace region in a secure and compliant manner.
-Microsoft Playwright Testing automatically encrypts all data stored in your workspace with keys managed by Microsoft (service-managed keys). For example, this data includes workspace details and Playwright test run meta data like test start and end time, test minutes, and who ran the test.
+Microsoft Playwright Testing automatically encrypts all data stored in your workspace with keys managed by Microsoft (service-managed keys). For example, this data includes workspace details, Playwright test run metadata like test start and end time, test minutes, and who ran the test, and the test results and artifacts generated by Playwright that are published to the service.
## Next step
playwright-testing Playwright Testing Reporting With Sharding https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/playwright-testing/playwright-testing-reporting-with-sharding.md
Playwright's sharding enables you to split your test suite to run across multipl
You can use Playwright Testing's reporting feature to get a consolidated report of a test run with sharding. You need to make sure you set the variable `PLAYWRIGHT_SERVICE_RUN_ID` so that it remains the same across all shards. > [!IMPORTANT]
-> Microsoft Playwright Testing's reporting feature is currently in invite-only preview. If you want to try it out, [submit a request for access to the preview](https://aka.ms/mpt/reporting-signup).
+> Microsoft Playwright Testing is currently in preview. For legal terms that apply to Azure features that are in beta, in preview, or otherwise not yet released into general availability, see the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+
+## Prerequisites
+
+* An Azure account with an active subscription. If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
+
+* Set up continuous end-to-end testing. Complete the [Quickstart: Set up continuous end-to-end testing with Microsoft Playwright Testing Preview](./quickstart-automate-end-to-end-testing.md) to set up a continuous integration (CI) pipeline.
## Set up variables
Here's an example of how you can set it in your pipeline via GitHub Actions.
```yml name: Playwright Tests on:
- push:
+ push:
branches: [ main, master ]
- pull_request:
+ pull_request:
branches: [ main, master ]
-jobs:
- playwright-tests:
- timeout-minutes: 60
- runs-on: ubuntu-latest
+ workflow_dispatch:
strategy: fail-fast: false matrix: shardIndex: [1, 2, 3, 4] shardTotal: [4]
+permissions:
+ id-token: write
+ contents: read
+jobs:
+ test:
+ timeout-minutes: 60
+ runs-on: ubuntu-latest
steps:
- - uses: actions/checkout@v4
- - uses: actions/setup-node@v4
- with:
+ - uses: actions/checkout@v3
+ - uses: actions/setup-node@v3
+ with:
node-version: 18
+      # This step signs in to Azure so that tests can run from the GitHub Actions workflow.
+      # You can choose how to set up authentication to Azure from GitHub Actions; this is one example.
+ - name: Login to Azure with AzPowershell (enableAzPSSession true)
+ uses: azure/login@v2
+ with:
+ client-id: ${{ secrets.AZURE_CLIENT_ID }}
+ tenant-id: ${{ secrets.AZURE_TENANT_ID }}
+ subscription-id: ${{ secrets.AZURE_SUBSCRIPTION_ID }}
+ enable-AzPSSession: true
+ - name: Install dependencies working-directory: path/to/playwright/folder # update accordingly
- run: npm ci
-
- - name: Install Playwright browsers # Add this step if not using cloud-hosted browsers
- working-directory: path/to/playwright/folder # update accordingly
- run: npx playwright install --with-deps
-
- - name: Install reporting package
- working-directory: path/to/playwright/folder # update accordingly
- run: | # Use your GitHub PAT to install reporting package.
- npm config set @microsoft:registry=https://npm.pkg.github.com
- npm set //npm.pkg.github.com/:_authToken ${{secrets.PAT_TOKEN_PACKAGE}}
- npm install
-
+ run: npm ci
+ - name: Run Playwright tests working-directory: path/to/playwright/folder # update accordingly
- env:
- # Access token and regional endpoint for Microsoft Playwright Testing
- PLAYWRIGHT_SERVICE_ACCESS_TOKEN: ${{ secrets.PLAYWRIGHT_SERVICE_ACCESS_TOKEN }}
+ env:
+ # Regional endpoint for Microsoft Playwright Testing
PLAYWRIGHT_SERVICE_URL: ${{ secrets.PLAYWRIGHT_SERVICE_URL }}
- PLAYWRIGHT_SERVICE_RUN_ID: ${{ github.run_id }}-${{ github.run_attempt }}-${{ github.sha }} #This Run_ID will be unique and will remain same across all shards
- run: npx playwright test --shard=${{ matrix.shardIndex }}/${{ matrix.shardTotal }}
+      PLAYWRIGHT_SERVICE_RUN_ID: ${{ github.run_id }}-${{ github.run_attempt }}-${{ github.sha }} # This run ID is unique and remains the same across all shards
+ run: npx playwright test --shard=${{ matrix.shardIndex }}/${{ matrix.shardTotal }}
```
playwright-testing Quickstart Automate End To End Testing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/playwright-testing/quickstart-automate-end-to-end-testing.md
# Quickstart: Set up continuous end-to-end testing with Microsoft Playwright Testing Preview
-In this quickstart, you set up continuous end-to-end testing with Microsoft Playwright Testing Preview to validate that your web app runs correctly across different browsers and operating systems with every code commit. Learn how to add your Playwright tests to a continuous integration (CI) workflow, such as GitHub Actions, Azure Pipelines, or other CI platforms.
+In this quickstart, you set up continuous end-to-end testing with Microsoft Playwright Testing Preview to validate that your web app runs correctly across different browsers and operating systems with every code commit, and troubleshoot tests easily by using the service dashboard. Learn how to add your Playwright tests to a continuous integration (CI) workflow, such as GitHub Actions, Azure Pipelines, or other CI platforms.
-After you complete this quickstart, you have a CI workflow that runs your Playwright test suite at scale with Microsoft Playwright Testing.
+After you complete this quickstart, you have a CI workflow that runs your Playwright test suite at scale and helps you troubleshoot tests easily with Microsoft Playwright Testing.
> [!IMPORTANT] > Microsoft Playwright Testing is currently in preview. For legal terms that apply to Azure features that are in beta, in preview, or otherwise not yet released into general availability, see the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
After you complete this quickstart, you have a CI workflow that runs your Playwr
- A GitHub account. If you don't have a GitHub account, you can [create one for free](https://github.com/). - A GitHub repository that contains your Playwright test specifications and GitHub Actions workflow. To create a repository, see [Creating a new repository](https://docs.github.com/github/creating-cloning-and-archiving-repositories/creating-a-new-repository). - A GitHub Actions workflow. If you need help with getting started with GitHub Actions, see [create your first workflow](https://docs.github.com/en/actions/quickstart)
+- Set up authentication from GitHub Actions to Azure. See [Use GitHub Actions to connect to Azure](/azure/developer/github/connect-from-azure)
# [Azure Pipelines](#tab/pipelines) - An Azure DevOps organization and project. If you don't have an Azure DevOps organization, you can [create one for free](/azure/devops/organizations/projects/create-project). - A pipeline definition. If you need help with getting started with Azure Pipelines, see [create your first pipeline](/azure/devops/pipelines/create-first-pipeline).
+- Azure Resource Manager Service connection to securely authenticate to the service from Azure Pipelines, see [Azure Resource Manager service connection](/azure/devops/pipelines/library/connect-to-azure)
-## Configure a service access token
-
-Microsoft Playwright Testing uses access tokens to authorize users to run Playwright tests with the service. You can generate a service access token in the Playwright portal, and then specify the access token in the service configuration file.
-
-To generate an access token and store it as a CI workflow secret, perform the following steps:
-
-1. Sign in to the [Playwright portal](https://aka.ms/mpt/portal) with your Azure account.
-
-1. Select the workspace settings icon, and then go to the **Access tokens** page.
-
- :::image type="content" source="./media/quickstart-automate-end-to-end-testing/playwright-testing-generate-new-access-token.png" alt-text="Screenshot that shows the access tokens settings page in the Playwright Testing portal." lightbox="./media/quickstart-automate-end-to-end-testing/playwright-testing-generate-new-access-token.png":::
-
-1. Select **Generate new token** to create a new access token for your CI workflow.
-
-1. Enter the access token details, and then select **Generate token**.
-
- :::image type="content" source="./media/quickstart-automate-end-to-end-testing/playwright-testing-generate-token.png" alt-text="Screenshot that shows setup guide in the Playwright Testing portal, highlighting the 'Generate token' button." lightbox="./media/quickstart-automate-end-to-end-testing/playwright-testing-generate-token.png":::
-
- :::image type="content" source="./media/quickstart-automate-end-to-end-testing/playwright-testing-copy-access-token.png" alt-text="Screenshot that shows how to copy the generated access token in the Playwright Testing portal." lightbox="./media/quickstart-automate-end-to-end-testing/playwright-testing-copy-access-token.png":::
-
-1. Store the access token in a CI workflow secret to avoid specifying the token in clear text in the workflow definition:
-
- # [GitHub Actions](#tab/github)
-
- 1. Go to your GitHub repository, and select **Settings** > **Secrets and variables** > **Actions**.
- 1. Select **New repository secret**.
- 1. Enter the secret details, and then select **Add secret** to create the CI/CD secret.
-
- | Parameter | Value |
- | -- | |
- | **Name** | *PLAYWRIGHT_SERVICE_ACCESS_TOKEN* |
- | **Value** | Paste the workspace access token you copied previously. |
-
- 1. Select **OK** to create the workflow secret.
-
- # [Azure Pipelines](#tab/pipelines)
-
- 1. Go to your Azure DevOps project.
- 1. Go to the **Pipelines** page, select the appropriate pipeline, and then select **Edit**.
- 1. Locate the **Variables** for this pipeline.
- 1. Add a new variable.
- 1. Enter the variable details, and then select **Add secret** to create the CI/CD secret.
-
- | Parameter | Value |
- | -- | |
- | **Name** | *PLAYWRIGHT_SERVICE_ACCESS_TOKEN* |
- | **Value** | Paste the workspace access token you copied previously. |
- | **Keep this value secret** | Check this value |
-
- 1. Select **OK**, and then **Save** to create the workflow secret.
-
-
- ## Get the service region endpoint URL In the service configuration, you have to provide the region-specific service endpoint. The endpoint depends on the Azure region you selected when creating the workspace.
If you haven't configured your Playwright tests yet for running them on cloud-ho
1. Save and commit the file to your source code repository.
+## Update package.json file
+
+Update the `package.json` file in your repository to add the Microsoft Playwright Testing service package to the `devDependencies` section.
+
+```json
+"devDependencies": {
+ "@azure/microsoft-playwright-testing": "^1.0.0-beta.3"
+}
+```
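
Equivalently, you can add the package from the command line, which updates `package.json` for you; pinning to the beta version shown above is an assumption:

```shell
npm install --save-dev @azure/microsoft-playwright-testing@1.0.0-beta.3
```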
## Update the workflow definition Update the CI workflow definition to run your Playwright tests with the Playwright CLI. Pass the [service configuration file](#add-service-configuration-file) as an input parameter for the Playwright CLI. You configure your environment by specifying environment variables.
Update the CI workflow definition to run your Playwright tests with the Playwrig
# [GitHub Actions](#tab/github) ```yml
+
+  # This step signs in to Azure so that tests can run from the GitHub Actions workflow.
+  # You can choose how to set up authentication to Azure from GitHub Actions; this is one example.
+ - name: Login to Azure with AzPowershell (enableAzPSSession true)
+ uses: azure/login@v2
+ with:
+ client-id: ${{ secrets.AZURE_CLIENT_ID }}
+ tenant-id: ${{ secrets.AZURE_TENANT_ID }}
+ subscription-id: ${{ secrets.AZURE_SUBSCRIPTION_ID }}
+ enable-AzPSSession: true
+ - name: Install dependencies
- working-directory: path/to/playwright/folder # update accordingly
+ working-directory: path/to/playwright/folder # update accordingly
run: npm ci - name: Run Playwright tests
- working-directory: path/to/playwright/folder # update accordingly
+ working-directory: path/to/playwright/folder # update accordingly
env:
- # Access token and regional endpoint for Microsoft Playwright Testing
- PLAYWRIGHT_SERVICE_ACCESS_TOKEN: ${{ secrets.PLAYWRIGHT_SERVICE_ACCESS_TOKEN }}
+ # Regional endpoint for Microsoft Playwright Testing
          PLAYWRIGHT_SERVICE_URL: ${{ secrets.PLAYWRIGHT_SERVICE_URL }}
          PLAYWRIGHT_SERVICE_RUN_ID: ${{ github.run_id }}-${{ github.run_attempt }}-${{ github.sha }}
        run: npx playwright test -c playwright.service.config.ts --workers=20
Update the CI workflow definition to run your Playwright tests with the Playwrig
      targetType: 'inline'
      script: 'npm ci'
      workingDirectory: path/to/playwright/folder # update accordingly
-
- - task: PowerShell@2
- enabled: true
- displayName: "Run Playwright tests"
- env:
- PLAYWRIGHT_SERVICE_ACCESS_TOKEN: $(PLAYWRIGHT_SERVICE_ACCESS_TOKEN)
+
+ - task: AzureCLI@2
+ displayName: Run Playwright Test
+ env:
PLAYWRIGHT_SERVICE_URL: $(PLAYWRIGHT_SERVICE_URL)
+ PLAYWRIGHT_SERVICE_RUN_ID: ${{ parameters.runIdPrefix }}$(Build.DefinitionName) - $(Build.BuildNumber) - $(System.JobAttempt)
inputs:
- targetType: 'inline'
- script: 'npx playwright test -c playwright.service.config.ts --workers=20'
- workingDirectory: path/to/playwright/folder # update accordingly
+ azureSubscription: My_Service_Connection # Service connection used to authenticate this pipeline with Azure to use the service
+ scriptType: 'pscore'
+ scriptLocation: 'inlineScript'
+ inlineScript: |
+ npx playwright test -c playwright.service.config.ts --workers=20
+ addSpnToEnvironment: true
+ workingDirectory: path/to/playwright/folder # update accordingly
  - task: PublishPipelineArtifact@1
    displayName: Upload Playwright report
Update the CI workflow definition to run your Playwright tests with the Playwrig
1. Save and commit your changes.
- When the CI workflow is triggered, your Playwright tests will run in your Microsoft Playwright Testing workspace on cloud-hosted browsers, across 20 parallel workers.
+ When the CI workflow is triggered, your Playwright tests run in your Microsoft Playwright Testing workspace on cloud-hosted browsers, across 20 parallel workers.
+
+> [!NOTE]
+> The reporting feature is enabled by default for existing workspaces. The rollout is happening in stages and takes a few days. To avoid failures, confirm that the `Rich diagnostics using reporting` setting is turned on for your workspace before proceeding. See [Enable reporting for a workspace](./how-to-use-service-features.md#manage-feature-for-the-workspace).
> [!CAUTION]
-> With Microsoft Playwright Testing, you get charged based on the number of total test minutes. If you're a first-time user or [getting started with a free trial](./how-to-try-playwright-testing-free.md), you might start with running a single test at scale instead of your full test suite to avoid exhausting your free test minutes.
+> With Microsoft Playwright Testing, you get charged based on the total number of test minutes and the number of test results published. If you're a first-time user or [getting started with a free trial](./how-to-try-playwright-testing-free.md), you might start by running a single test at scale instead of your full test suite to avoid exhausting your free test minutes and test results.
>
> After you validate that the test runs successfully, you can gradually increase the test load by running more tests with the service.
>
Update the CI workflow definition to run your Playwright tests with the Playwrig
>
> ```npx playwright test {name-of-file.spec.ts} --config=playwright.service.config.ts```
-## Enable test results reporting
-
-Microsoft Playwright Testing now supports viewing test results in the Playwright Portal. During preview access is only available by [invitation only](https://aka.ms/mpt/reporting-signup).
-
-> [!Important]
-> The reporting feature of Microsoft Playwright Testing service is free of charge during the invite-only preview. However, existing functionality of cloud-hosted browsers continues to bill per the [Azure pricing plan](https://aka.ms/mpt/pricing).
-Once you have access to the reporting tool, use the following steps to set up your tests.
-
-1. From the workspace home page, navigate to *Settings*.
-
- :::image type="content" source="./media/quickstart-automate-end-to-end-testing/playwright-testing-select-settings.png" alt-text="Screenshot that shows settings selection for a workspace in the Playwright Testing portal." lightbox="./media/quickstart-automate-end-to-end-testing/playwright-testing-select-settings.png":::
-
-1. From *Settings*, select **General** and make sure reporting is **Enabled**.
-
- :::image type="content" source="./media/quickstart-automate-end-to-end-testing/playwright-testing-enable-reporting.png" alt-text="Screenshot that shows how to enable reporting for a workspace in the Playwright Testing portal." lightbox="./media/quickstart-automate-end-to-end-testing/playwright-testing-enable-reporting.png":::
-
-1. Create a GitHub Personal Access Token by following these [steps](https://docs.github.com/en/authentication/keeping-your-account-and-data-secure/managing-your-personal-access-tokens#creating-a-personal-access-token-classic).
-
- You need to provide `read:packages` permissions to the token. This token is referred to as `PAT_TOKEN_PACKAGE` for the rest of this article.
-
-1. Store the GitHub token in a CI workflow secret to avoid specifying the token in clear text in the workflow definition:
-
- # [GitHub Actions](#tab/github)
-
- 1. Go to your GitHub repository, and select **Settings** > **Secrets and variables** > **Actions**.
- 1. Select **New repository secret**.
- 1. Enter the secret details, and then select **Add secret** to create the CI/CD secret.
-
- | Parameter | Value |
- | -- | |
- | **Name** | *PAT_TOKEN_PACKAGE* |
- | **Value** | Paste the GitHub personal access token you copied previously. |
-
- 1. Select **OK** to create the workflow secret.
-
- # [Azure Pipelines](#tab/pipelines)
-
- 1. Go to your Azure DevOps project.
- 1. Go to the **Pipelines** page, select the appropriate pipeline, and then select **Edit**.
- 1. Locate the **Variables** for this pipeline.
- 1. Add a new variable.
- 1. Enter the variable details, and then select **Add secret** to create the CI/CD secret.
-
- | Parameter | Value |
- | -- | |
- | **Name** | *PAT_TOKEN_PACKAGE* |
- | **Value** | Paste the GitHub personal access token you copied previously. |
- | **Keep this value secret** | Check this value |
-
- 1. Select **OK**, and then **Save** to create the workflow secret.
-
-
-1. Update package.json file with the package.
-
- ```json
- "dependencies": {
- "@microsoft/mpt-reporter": "0.1.0-19072024-private-preview"
- }
- ```
-5. Update the Playwright config file.
-
- Add Playwright Testing reporter to `Playwright.config.ts` in the same way you use other reporters.
-
- ```typescript
- import { defineConfig } from '@playwright/test';
-
- export default defineConfig({
- reporter: [
- ['list'],
- ['json', { outputFile: 'test-results.json' }],
- ['@microsoft/mpt-reporter'] // Microsoft Playwright Testing reporter
- ],
- });
- ```
- Make sure that the artifacts are enabled in the config for better troubleshooting.
-
- ```typescript
- use: {
- // ...
- trace: 'on-first-retry',
- video:'retain-on-failure',
- screenshot:'only-on-failure',
- }
- ```
-
-3. Update the CI workflow definition to install the reporting package before running the tests to publish the report of your Playwright tests in Microsoft Playwright Testing.
--
- # [GitHub Actions](#tab/github)
-
- ```yml
- - name: Install reporting package
- working-directory: path/to/playwright/folder # update accordingly
- run: |
- npm config set @microsoft:registry=https://npm.pkg.github.com
- npm set //npm.pkg.github.com/:_authToken ${{secrets.PAT_TOKEN_PACKAGE}}
- npm install
-
- - name: Run Playwright tests
- working-directory: path/to/playwright/folder # update accordingly
- env:
- # Access token and regional endpoint for Microsoft Playwright Testing
- PLAYWRIGHT_SERVICE_ACCESS_TOKEN: ${{ secrets.PLAYWRIGHT_SERVICE_ACCESS_TOKEN }}
- PLAYWRIGHT_SERVICE_URL: ${{ secrets.PLAYWRIGHT_SERVICE_URL }}
- PLAYWRIGHT_SERVICE_RUN_ID: ${{ github.run_id }}-${{ github.run_attempt }}-${{ github.sha }}
- run: npx playwright test
- ```
-
- # [Azure Pipelines](#tab/pipelines)
-
- ```yml
- - task: PowerShell@2
- enabled: true
- displayName: "Install reporting package"
- inputs:
- targetType: 'inline'
- script: |
- 'npm config set @microsoft:registry=https://npm.pkg.github.com'
- 'npm set //npm.pkg.github.com/:_authToken ${PAT_TOKEN_PACKAGE}'
- 'npm install'
- workingDirectory: path/to/playwright/folder # update accordingly
-
- - task: PowerShell@2
- enabled: true
- displayName: "Run Playwright tests"
- env:
- PLAYWRIGHT_SERVICE_ACCESS_TOKEN: $(PLAYWRIGHT_SERVICE_ACCESS_TOKEN)
- PLAYWRIGHT_SERVICE_URL: $(PLAYWRIGHT_SERVICE_URL)
- PLAYWRIGHT_SERVICE_RUN_ID: $(Build.DefinitionName) - $(Build.BuildNumber) - $(System.JobAttempt)
- inputs:
- targetType: 'inline'
- script: 'npx playwright test -c playwright.service.config.ts --workers=20'
- workingDirectory: path/to/playwright/folder # update accordingly
- ```
-
-
- > [!TIP]
- > You can use Microsoft Playwright Testing service to publish test results to the portal independent of the cloud-hosted browsers feature.
+> [!TIP]
+> You can use Microsoft Playwright Testing service features independently. You can publish test results to the portal without using the cloud-hosted browsers feature, and you can also use only cloud-hosted browsers to expedite your test suite without publishing test results.
## Related content
playwright-testing Quickstart Run End To End Tests https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/playwright-testing/quickstart-run-end-to-end-tests.md
# Quickstart: Run end-to-end tests at scale with Microsoft Playwright Testing Preview
-In this quickstart, you learn how to run your Playwright tests with highly parallel cloud browsers using Microsoft Playwright Testing Preview. Use cloud infrastructure to validate your application across multiple browsers, devices, and operating systems.
+In this quickstart, you learn how to run your Playwright tests with highly parallel cloud browsers and troubleshoot failed tests easily using Microsoft Playwright Testing Preview. Use cloud infrastructure to validate your application across multiple browsers, devices, and operating systems. Publish the results and artifacts generated by Playwright to the service and view them in the service portal.
-After you complete this quickstart, you have a Microsoft Playwright Testing workspace to run your Playwright tests at scale.
+After you complete this quickstart, you have a Microsoft Playwright Testing workspace to run your Playwright tests at scale and view test results and artifacts in the service portal.
> [!IMPORTANT]
> Microsoft Playwright Testing is currently in preview. For legal terms that apply to Azure features that are in beta, in preview, or otherwise not yet released into general availability, see the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
After you complete this quickstart, you have a Microsoft Playwright Testing work
* An Azure account with an active subscription. If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
* Your Azure account needs the [Owner](/azure/role-based-access-control/built-in-roles#owner), [Contributor](/azure/role-based-access-control/built-in-roles#contributor), or one of the [classic administrator roles](/azure/role-based-access-control/rbac-and-directory-admin-roles#classic-subscription-administrator-roles).
* A Playwright project. If you don't have a project, create one by using the [Playwright getting started documentation](https://playwright.dev/docs/intro) or use our [Microsoft Playwright Testing sample project](https://github.com/microsoft/playwright-testing-service/tree/main/samples/get-started).
+* The Azure CLI. If you don't have the Azure CLI installed, see [Install Azure CLI](/cli/azure/install-azure-cli).
## Create a workspace
To get started with running your Playwright tests at scale on cloud browsers, yo
When the workspace creation finishes, you're redirected to the setup guide.
-## Create an access token for service authentication
+## Install Microsoft Playwright Testing package
-Microsoft Playwright Testing uses access tokens to authorize users to run Playwright tests with the service. You first generate a service access token in the Playwright portal, and then store the value in an environment variable.
+To use the service, install the Microsoft Playwright Testing package:
-To generate the access token, perform the following steps:
+```npm
+npm init @azure/microsoft-playwright-testing
+```
-1. In the workspace setup guide, in **Create an access token**, select **Generate token**.
+This command generates a `playwright.service.config.ts` file, which serves to:
- :::image type="content" source="./media/quickstart-run-end-to-end-tests/playwright-testing-generate-token.png" alt-text="Screenshot that shows setup guide in the Playwright Testing portal, highlighting the 'Generate token' button." lightbox="./media/quickstart-run-end-to-end-tests/playwright-testing-generate-token.png":::
+- Direct and authenticate Playwright to the Microsoft Playwright Testing service.
+- Add a reporter that publishes test results and artifacts.
-1. Copy the access token for the workspace.
-
- You need the access token value for configuring your environment in a later step.
-
- :::image type="content" source="./media/quickstart-run-end-to-end-tests/playwright-testing-copy-access-token.png" alt-text="Screenshot that shows how to copy the generated access token in the Playwright Testing portal." lightbox="./media/quickstart-run-end-to-end-tests/playwright-testing-copy-access-token.png":::
+If you already have this file, the package asks whether you want to overwrite it.
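+
+For reference, the following is a minimal sketch of what the generated `playwright.service.config.ts` typically looks like. The exact contents can vary by package version; `getServiceConfig`, `ServiceOS`, and the reporter entry come from the `@azure/microsoft-playwright-testing` package:
+
+```typescript
+import { getServiceConfig, ServiceOS } from '@azure/microsoft-playwright-testing';
+import { defineConfig } from '@playwright/test';
+import config from './playwright.config';
+
+// Extends your existing Playwright config with the service connection,
+// and registers the service reporter that publishes results and artifacts.
+export default defineConfig(
+  config,
+  getServiceConfig(config, {
+    os: ServiceOS.LINUX, // service-hosted browsers run on Linux by default
+  }),
+  {
+    reporter: [['list'], ['@azure/microsoft-playwright-testing/reporter']],
+  }
+);
+```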
## Configure the service region endpoint
-In the service configuration, you have to provide the region-specific service endpoint. The endpoint depends on the Azure region you selected when creating the workspace.
+In your setup, you have to provide the region-specific service endpoint. The endpoint depends on the Azure region you selected when creating the workspace.
To get the service endpoint URL, perform the following steps:
To get the service endpoint URL, perform the following steps:
## Set up your environment
-To set up your environment, you have to configure the `PLAYWRIGHT_SERVICE_ACCESS_TOKEN` and `PLAYWRIGHT_SERVICE_URL` environment variables with the values you obtained in the previous steps.
+To set up your environment, you have to configure the `PLAYWRIGHT_SERVICE_URL` environment variable with the value you obtained in the previous steps.
We recommend that you use the `dotenv` module to manage your environment. With `dotenv`, you define your environment variables in the `.env` file.
We recommend that you use the `dotenv` module to manage your environment. With `
1. Create a `.env` file alongside the `playwright.config.ts` file in your Playwright project:

   ```
- PLAYWRIGHT_SERVICE_ACCESS_TOKEN={MY-ACCESS-TOKEN}
   PLAYWRIGHT_SERVICE_URL={MY-REGION-ENDPOINT}
   ```
- Make sure to replace the `{MY-ACCESS-TOKEN}` and `{MY-REGION-ENDPOINT}` text placeholders with the values you copied earlier.
+ Make sure to replace the `{MY-REGION-ENDPOINT}` text placeholder with the value you copied earlier.
-> [!CAUTION]
-> Make sure that you don't add the `.env` file to your source code repository to avoid leaking your access token value.
-## Add a service configuration file
+## Set up authentication
+
+To run your Playwright tests in your Microsoft Playwright Testing workspace, you need to authenticate the Playwright client that runs the tests with the service. The client could be your local dev machine or a CI machine.
+
+The service offers two authentication methods: Microsoft Entra ID and access tokens.
-To run your Playwright tests in your Microsoft Playwright Testing workspace, you need to add a service configuration file alongside your Playwright configuration file. The service configuration file references the environment variables to get the workspace endpoint and your access token.
+Microsoft Entra ID uses your Azure credentials and requires you to sign in to your Azure account for secure access. Alternatively, you can generate an access token from your Playwright Testing workspace and use it in your setup.
-To add the service configuration to your project:
+##### Set up authentication using Microsoft Entra ID
-1. Create a new file `playwright.service.config.ts` alongside the `playwright.config.ts` file.
+Microsoft Entra ID is the default and recommended authentication method for the service. From your local dev machine, you can use the [Azure CLI](/cli/azure/install-azure-cli) to sign in:
- Optionally, use the `playwright.service.config.ts` file in the [sample repository](https://github.com/microsoft/playwright-testing-service/blob/main/samples/get-started/playwright.service.config.ts).
+```azurecli
+az login
+```
+> [!NOTE]
+> If you're part of multiple Microsoft Entra tenants, make sure you sign in to the tenant where your workspace belongs. You can get the tenant ID from the Azure portal. See [Find your Microsoft Entra tenant](/azure/azure-portal/get-subscription-tenant-id#find-your-microsoft-entra-tenant). After you get the ID, sign in by using the command `az login --tenant <TenantID>`.
-1. Add the following content to it:
+##### Set up authentication using access tokens
- :::code language="typescript" source="~/playwright-testing-service/samples/get-started/playwright.service.config.ts":::
+You can generate an access token from your Playwright Testing workspace and use it in your setup. However, we strongly recommend Microsoft Entra ID for authentication due to its enhanced security. Access tokens, while convenient, function like long-lived passwords and are more susceptible to being compromised.
-1. Save the file.
+1. Authentication using access tokens is disabled by default. To use it, [enable access token-based authentication](./how-to-manage-authentication.md#enable-authentication-using-access-tokens).
+
+2. [Set up authentication using access tokens](./how-to-manage-authentication.md#set-up-authentication-using-access-tokens)
+
+> [!CAUTION]
+> We strongly recommend using Microsoft Entra ID for authentication to the service. If you're using access tokens, see [How to manage access tokens](./how-to-manage-access-tokens.md).
## Run your tests at scale with Microsoft Playwright Testing
You've now prepared the configuration for running your Playwright tests in the c
### Run a single test at scale
-With Microsoft Playwright Testing, you get charged based on the number of total test minutes. If you're a first-time user or [getting started with a free trial](./how-to-try-playwright-testing-free.md), you might start with running a single test at scale instead of your full test suite to avoid exhausting your free test minutes.
+With Microsoft Playwright Testing, you get charged based on the total number of test minutes and the number of test results published. If you're a first-time user or [getting started with a free trial](./how-to-try-playwright-testing-free.md), you might start by running a single test at scale instead of your full test suite to avoid exhausting your free trial limits.
+
+> [!NOTE]
+> The reporting feature is enabled by default for existing workspaces. The rollout is happening in stages and takes a few days. To avoid failures, confirm that the `Rich diagnostics using reporting` setting is turned on for your workspace before proceeding. See [Enable reporting for a workspace](./how-to-use-service-features.md#manage-feature-for-the-workspace).
After you validate that the test runs successfully, you can gradually increase the test load by running more tests with the service.
To run a single Playwright test in Visual Studio Code with Microsoft Playwright
You can now run multiple tests with the service, or run your entire test suite on remote browsers.

> [!CAUTION]
-> Depending on the size of your test suite, you might incur additional charges for the test minutes beyond your allotted free test minutes.
+> Depending on the size of your test suite, you might incur additional charges for test minutes and test results beyond your free allotment.
### Run a full test suite at scale
To run your Playwright test suite in Visual Studio Code with Microsoft Playwrigh
-## View test runs in the Playwright portal
-
-Go to the [Playwright portal](https://aka.ms/mpt/portal) to view the test run metadata and activity log for your workspace.
--
-The activity log lists for each test run the following details: the total test completion time, the number of parallel workers, and the number of test minutes.
--
-## View test results in the Playwright portal
-
-Microsoft Playwright Testing now supports viewing test results in the Playwright Portal. This feature is only available as an [invite only feature](https://aka.ms/mpt/reporting-signup).
-
-> [!Important]
-> The reporting feature of Microsoft Playwright Testing service is free of charge during the invite-only preview. However, existing functionality of any cloud-hosted browsers continues to bill per the Azure pricing plan.
-
-Once you have access to the reporting tool, use the following steps to set up your tests.
-
-1. From the workspace home page, navigate to *Settings*.
-
- :::image type="content" source="./media/quickstart-run-end-to-end-tests/playwright-testing-select-settings.png" alt-text="Screenshot that shows settings selection for a workspace in the Playwright Testing portal." lightbox="./media/quickstart-run-end-to-end-tests/playwright-testing-select-settings.png":::
+## View test runs and results in the Playwright portal
-3. From *Settings*, select **General** and make sure reporting is **Enabled**.
-
- :::image type="content" source="./media/quickstart-run-end-to-end-tests/playwright-testing-enable-reporting.png" alt-text="Screenshot that shows how to enable reporting for a workspace in the Playwright Testing portal." lightbox="./media/quickstart-run-end-to-end-tests/playwright-testing-enable-reporting.png":::
+Go to the [Playwright portal](https://aka.ms/mpt/portal) to view the test runs and test results for your workspace.
-4. Make sure the environment is set up correctly as mentioned in the section **Set up your environment**.
+ :::image type="content" source="./media/quickstart-run-end-to-end-tests/playwright-testing-test-run-page.png" alt-text="Screenshot that shows the test runs for a workspace in the Playwright Testing portal." lightbox="./media/quickstart-run-end-to-end-tests/playwright-testing-test-run-page.png":::
-5. Install reporting package
-
- Since the feature is currently not public, you need to perform a few extra steps to install the package. These steps won't be needed once the feature becomes public.
-
- 1. Create a file with name `.npmrc` at the same location as your Playwright config file.
-
- 1. Add the following content to the file and save.
- ```bash
- @microsoft:registry=https://npm.pkg.github.com
- ```
- 1. Create a GitHub Personal Access Token by following these [steps](https://docs.github.com/en/authentication/keeping-your-account-and-data-secure/managing-your-personal-access-tokens#creating-a-personal-access-token-classic).
-
- You need to provide `read:packages` permissions to the token. This token is referred to as `PAT_TOKEN_PACKAGE` for the rest of this article.
-
- 1. Run the following command in your terminal, at the location of your Playwright config file. Replace `PAT_TOKEN_PACKAGE` with the token generated in the previous step.
- ```bash
- npm set //npm.pkg.github.com/:_authToken PAT_TOKEN_PACKAGE
- ```
-
- 1. Update package.json file with the package.
-
- ```json
- "dependencies": {
- "@microsoft/mpt-reporter": "0.1.0-19072024-private-preview"
- }
- ```
-
-
- 1. Run `npm install` to install the package.
-
-6. Update Playwright.config file
-
- Add Playwright Testing reporter to `Playwright.config.ts` in the same way you use other reporters.
-
- ```typescript
- import { defineConfig } from '@playwright/test';
-
- export default defineConfig({
- reporter: [
- ['list'],
- ['json', { outputFile: 'test-results.json' }],
- ['@microsoft/mpt-reporter'] // Microsoft Playwright Testing reporter
- ],
- });
- ```
- Make sure that the artifacts are enabled in the config for better troubleshooting.
-
- ```typescript
- use: {
- // ...
- trace: 'on-first-retry',
- video:'retain-on-failure',
- screenshot:'only-on-failure',
- }
- ```
-
-7. Run Playwright tests
-
- You can run `npx playwright test` command and view the results and artifacts on Playwright Testing portal.
-
- :::image type="content" source="./media/quickstart-run-end-to-end-tests/playwright-testing-test-run-page.png" alt-text="Screenshot that shows the test runs for a workspace in the Playwright Testing portal." lightbox="./media/quickstart-run-end-to-end-tests/playwright-testing-test-run-page.png":::
+The test run contains the CI information, test run status, workers used, duration, and billable minutes. If you open a test run, you can see the results and artifacts for each test along with other information.
> [!TIP]
-> You can use Microsoft Playwright Testing service to publish test results to the portal independent of the cloud-hosted browsers feature.
+> You can use Microsoft Playwright Testing service features independently. You can publish test results to the portal without using the cloud-hosted browsers feature, and you can also use only cloud-hosted browsers to expedite your test suite without publishing test results.
+
+> [!NOTE]
+> The test results and artifacts that you publish are retained on the service for 90 days. After that, they are automatically deleted.
## Optimize parallel worker configuration
playwright-testing Resource Limits Quotas Capacity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/playwright-testing/resource-limits-quotas-capacity.md
While the service is in preview, the following limits apply on a per-subscriptio
## Test code limitations

-- Only tests Playwright version 1.37 and higher is supported.
+- Only Playwright version 1.47 and higher is supported with the Microsoft Playwright Testing service package.
- Only the Playwright runner and test code written in JavaScript or TypeScript are supported.

## Supported operating systems and browsers
route-server Configure Route Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/route-server/configure-route-server.md
+
+ Title: Configure and manage Azure Route Server
+description: Learn how to configure and manage Azure Route Server using the Azure portal, PowerShell, or Azure CLI.
++++ Last updated : 09/16/2024+++
+# Configure Azure Route Server
+
+In this article, you learn how to configure and manage Azure Route Server using the Azure portal, PowerShell, or Azure CLI.
+
+## Prerequisites
+
+# [**Portal**](#tab/portal)
+
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+
+- A route server.
++
+# [**PowerShell**](#tab/powershell)
+
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+
+- A route server.
+
+- Azure Cloud Shell or Azure PowerShell.
+
+ The steps in this article run the Azure PowerShell cmdlets interactively in [Azure Cloud Shell](/azure/cloud-shell/overview). To run the cmdlets in the Cloud Shell, select **Open Cloud Shell** at the upper-right corner of a code block. Select **Copy** to copy the code and then paste it into Cloud Shell to run it. You can also run the Cloud Shell from within the Azure portal.
+
+ You can also [install Azure PowerShell locally](/powershell/azure/install-azure-powershell) to run the cmdlets. If you run PowerShell locally, sign in to Azure using the [Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount) cmdlet.
++
+# [**Azure CLI**](#tab/cli)
+
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+
+- A route server.
+
+- Azure Cloud Shell or Azure CLI.
+
+ The steps in this article run the Azure CLI commands interactively in [Azure Cloud Shell](/azure/cloud-shell/overview). To run the commands in the Cloud Shell, select **Open Cloud Shell** at the upper-right corner of a code block. Select **Copy** to copy the code, and paste it into Cloud Shell to run it. You can also run the Cloud Shell from within the Azure portal.
+
+ You can also [install Azure CLI locally](/cli/azure/install-azure-cli) to run the commands. If you run Azure CLI locally, sign in to Azure using the [az login](/cli/azure/reference-index#az-login) command.
+++
+## Add a peer
+
+In this section, you learn how to add a BGP peering to your route server to peer with a network virtual appliance (NVA).
+
+# [**Portal**](#tab/portal)
+
+1. Go to the route server that you want to peer with an NVA.
+
+1. Under **Settings**, select **Peers**.
+
+1. Select **+ Add** to add a new peer.
+
+1. On the **Add Peer** page, enter the following information:
+
+ | Setting | Value |
+ | - | -- |
+    | Name | A name to identify the peer. It doesn't have to be the same as the NVA's name. |
+ | ASN | The Autonomous System Number (ASN) of the NVA. For more information, see [What Autonomous System Numbers (ASNs) can I use?](route-server-faq.md#what-autonomous-system-numbers-asns-can-i-use) |
+ | IPv4 Address | The private IP address of the NVA. |
+
+1. Select **Add** to save the configuration.
+
+ :::image type="content" source="./media/configure-route-server/add-peer.png" alt-text="Screenshot that shows how to add the NVA to the route server as a peer." lightbox="./media/configure-route-server/add-peer.png":::
+
+ Once the peer NVA is successfully added, you can see it in the list of peers with a **Succeeded** provisioning state.
+
+ :::image type="content" source="./media/configure-route-server/peer-list.png" alt-text="Screenshot that shows the route server's peers." lightbox="./media/configure-route-server/peer-list.png":::
+
+ To complete the peering setup, you must configure the NVA to establish a BGP session with the route server's peer IPs and ASN. You can find the route server's Peer IPs and ASN in the **Overview** page:
+
+ :::image type="content" source="./media/configure-route-server/route-server-overview.png" alt-text="Screenshot that shows the Overview page of a route server. " lightbox="./media/configure-route-server/route-server-overview.png":::
+
+ [!INCLUDE [NVA peering note](../../includes/route-server-note-nva-peering.md)]
+
+# [**PowerShell**](#tab/powershell)
+
+Use the [Add-AzRouteServerPeer](/powershell/module/az.network/add-azrouteserverpeer) cmdlet to add a new peer to the route server.
+
+```azurepowershell-interactive
+Add-AzRouteServerPeer -PeerName 'myNVA' -PeerAsn '65001' -PeerIp '10.0.0.4' -ResourceGroupName 'myResourceGroup' -RouteServerName 'myRouteServer'
+```
+
+| Parameter | Value |
+| -- | -- |
+| `-PeerName` | A name to identify the peer. It doesn't have to be the same as the NVA's name. |
+| `-PeerAsn` | The Autonomous System Number (ASN) of the NVA. For more information, see [What Autonomous System Numbers (ASNs) can I use?](route-server-faq.md#what-autonomous-system-numbers-asns-can-i-use) |
+| `-PeerIp` | The private IP address of the NVA. |
+| `-ResourceGroupName` | The resource group name of your route server. |
+| `-RouteServerName` | The route server name. This parameter is required when there's more than one route server in the same resource group. |
+
+After you successfully add the peer NVA, you must configure the NVA to establish a BGP session with the route server's peer IPs and ASN. Use the [Get-AzRouteServer](/powershell/module/az.network/get-azrouteserver) cmdlet to find the route server's peer IPs and ASN:
+
+```azurepowershell-interactive
+Get-AzRouteServer -ResourceGroupName 'myResourceGroup' -RouteServerName 'myRouteServer'
+```
+
+| Parameter | Value |
+| -- | -- |
+| `-ResourceGroupName` | The resource group name of your route server. |
+| `-RouteServerName` | The route server name. You need this parameter when there's more than one route server in the same resource group. |
++
+# [**Azure CLI**](#tab/cli)
+
+Use the [az network routeserver peering create](/cli/azure/network/routeserver/peering#az-network-routeserver-peering-create) command to add a new peer to the route server.
+
+```azurecli-interactive
+az network routeserver peering create --name 'myNVA' --peer-asn '65001' --peer-ip '10.0.0.4' --resource-group 'myResourceGroup' --routeserver 'myRouteServer'
+```
+
+| Parameter | Value |
+| -- | -- |
+| `--name` | A name to identify the peer. It doesn't have to be the same as the NVA's name. |
+| `--peer-asn` | The Autonomous System Number (ASN) of the NVA. For more information, see [What Autonomous System Numbers (ASNs) can I use?](route-server-faq.md#what-autonomous-system-numbers-asns-can-i-use) |
+| `--peer-ip` | The private IP address of the NVA. |
+| `--resource-group` | The resource group name of your route server. |
+| `--routeserver` | The route server name. |
+
+After you successfully add the peer NVA, you must configure the NVA to establish a BGP session with the route server's peer IPs and ASN. Use the [az network routeserver show](/cli/azure/network/routeserver#az-network-routeserver-show) command to find the route server's peer IPs and ASN:
+
+```azurecli-interactive
+az network routeserver show --name 'myRouteServer' --resource-group 'myResourceGroup'
+```
+
+| Parameter | Value |
+| -- | -- |
+| `--name` | The route server name. |
+| `--resource-group` | The resource group name of your route server. |
++++
+## Configure route exchange
+
+In this section, you learn how to enable exchanging routes between your route server and the virtual network gateway (ExpressRoute or VPN) that exists in the same virtual network.
+++
+# [**Portal**](#tab/portal)
+
+1. Go to the route server that you want to configure.
+
+1. Under **Settings**, select **Configuration**.
+
+1. Select **Enabled** for the **Branch-to-branch** setting and then select **Save**.
+
+ :::image type="content" source="./media/configure-route-server/enable-route-exchange.png" alt-text="Screenshot that shows how to enable route exchange in a route server." lightbox="./media/configure-route-server/enable-route-exchange.png":::
+
+# [**PowerShell**](#tab/powershell)
+
+Use the [Update-AzRouteServer](/powershell/module/az.network/update-azrouteserver) cmdlet to enable or disable route exchange between the route server and the virtual network gateway.
+
+```azurepowershell-interactive
+Update-AzRouteServer -RouteServerName 'myRouteServer' -ResourceGroupName 'myResourceGroup' -AllowBranchToBranchTraffic 1
+```
+
+| Parameter | Value |
+| -- | -- |
+| `-RouteServerName` | The route server name. |
+| `-ResourceGroupName` | The resource group name of your route server. |
+| `-AllowBranchToBranchTraffic` | The route exchange parameter. Accepted values: `1` and `0`. |
+
+To disable route exchange, set the `-AllowBranchToBranchTraffic` parameter to `0`.
+
+Use the [Get-AzRouteServer](/powershell/module/az.network/get-azrouteserver) cmdlet to verify the configuration.
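+
+For example, using the same example names as earlier in this article:
+
+```azurepowershell-interactive
+Get-AzRouteServer -ResourceGroupName 'myResourceGroup' -RouteServerName 'myRouteServer'
+```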
+
+# [**Azure CLI**](#tab/cli)
+
+Use the [az network routeserver update](/cli/azure/network/routeserver#az-network-routeserver-update) command to enable or disable route exchange between the route server and the virtual network gateway.
+
+```azurecli-interactive
+az network routeserver update --name 'myRouteServer' --resource-group 'myResourceGroup' --allow-b2b-traffic true
+```
+
+| Parameter | Value |
+| -- | -- |
+| `--name` | The route server name. |
+| `--resource-group` | The resource group name of your route server. |
+| `--allow-b2b-traffic` | The route exchange parameter. Accepted values: `true` and `false`. |
+
+To disable route exchange, set the `--allow-b2b-traffic` parameter to `false`.
+
+Use the [az network routeserver show](/cli/azure/network/routeserver#az-network-routeserver-show) command to verify the configuration.
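+
+For example, using the same example names as earlier in this article:
+
+```azurecli-interactive
+az network routeserver show --name 'myRouteServer' --resource-group 'myResourceGroup'
+```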
+++
+## Configure routing preference
+
+In this section, you learn how to configure route preference to influence the route learning and selection of your route server.
+
+# [**Portal**](#tab/portal)
+
+1. Go to the route server that you want to configure.
+
+1. Under **Settings**, select **Configuration**.
+
+1. Select the routing preference that you want. Available options: **ExpressRoute** (default), **VPN**, and **ASPath**.
+
+1. Select **Save**.
+
+ :::image type="content" source="./media/configure-route-server/configure-routing-preference.png" alt-text="Screenshot that shows how to configure routing preference in a route server." lightbox="./media/configure-route-server/configure-routing-preference.png":::
+
+# [**PowerShell**](#tab/powershell)
+
+Use the [Update-AzRouteServer](/powershell/module/az.network/update-azrouteserver) cmdlet to configure the routing preference setting of your route server.
+
+```azurepowershell-interactive
+Update-AzRouteServer -RouteServerName 'myRouteServer' -ResourceGroupName 'myResourceGroup' -HubRoutingPreference 'ASPath'
+```
+
+| Parameter | Value |
+| -- | -- |
+| `-RouteServerName` | The route server name. |
+| `-ResourceGroupName` | The resource group name of your route server. |
+| `-HubRoutingPreference` | The routing preference. Accepted values: `ExpressRoute` (default), `VpnGateway`, and `ASPath`. |
+
+Use the [Get-AzRouteServer](/powershell/module/az.network/get-azrouteserver) cmdlet to verify the configuration.
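+
+For example, the following call returns the route server, including its current routing preference:
+
+```azurepowershell-interactive
+Get-AzRouteServer -ResourceGroupName 'myResourceGroup' -RouteServerName 'myRouteServer'
+```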
+
+# [**Azure CLI**](#tab/cli)
+
+Use the [az network routeserver update](/cli/azure/network/routeserver#az-network-routeserver-update) command to configure the routing preference setting of your route server.
+
+```azurecli-interactive
+az network routeserver update --name 'myRouteServer' --resource-group 'myResourceGroup' --hub-routing-preference 'ASPath'
+```
+
+| Parameter | Value |
+| -- | -- |
+| `--name` | The route server name. |
+| `--resource-group` | The resource group name of your route server. |
+| `--hub-routing-preference` | The routing preference. Accepted values: `ExpressRoute` (default), `VpnGateway`, and `ASPath`. |
+
+Use the [az network routeserver show](/cli/azure/network/routeserver#az-network-routeserver-show) command to verify the configuration.
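+
+For example, the following command returns just the routing preference setting (`--query` is the standard Azure CLI output filter, and `hubRoutingPreference` is the property name in the command's JSON output):
+
+```azurecli-interactive
+az network routeserver show --name 'myRouteServer' --resource-group 'myResourceGroup' --query hubRoutingPreference
+```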
++++
+## View a peer
+
+In this section, you learn how to view the details of a peer.
+
+# [**Portal**](#tab/portal)
+
+1. Go to the route server whose peers you want to view.
+
+1. Under **Settings**, select **Peers**.
+
+1. In the list of peers, you can see the name, ASN, IP address, and provisioning state of any of the configured peers.
+
+ :::image type="content" source="./media/configure-route-server/peer-list.png" alt-text="Screenshot that shows the configuration of a route server's peer." lightbox="./media/configure-route-server/peer-list.png":::
++
+# [**PowerShell**](#tab/powershell)
+
+Use the [Get-AzRouteServerPeer](/powershell/module/az.network/get-azrouteserverpeer) cmdlet to view a route server peering.
+
+```azurepowershell-interactive
+Get-AzRouteServerPeer -PeerName 'myNVA' -ResourceGroupName 'myResourceGroup' -RouteServerName 'myRouteServer'
+```
+
+| Parameter | Value |
+| -- | -- |
+| `-PeerName` | The peer name. |
+| `-ResourceGroupName` | The resource group name of your route server. |
+| `-RouteServerName` | The route server name. |
++
+# [**Azure CLI**](#tab/cli)
+
+Use the [az network routeserver peering show](/cli/azure/network/routeserver/peering#az-network-routeserver-peering-show) command to view a route server peering.
+
+```azurecli-interactive
+az network routeserver peering show --name 'myNVA' --resource-group 'myResourceGroup' --routeserver 'myRouteServer'
+```
+
+| Parameter | Value |
+| -- | -- |
+| `--name` | The peer name. |
+| `--resource-group` | The resource group name of your route server. |
+| `--routeserver` | The route server name. |
+++++
+## View advertised and learned routes
+
+In this section, you learn how to view the route server's advertised and learned routes.
+
+# [**Portal**](#tab/portal)
+
+Use [PowerShell](?tabs=powershell#view-advertised-and-learned-routes) or [Azure CLI](?tabs=cli#view-advertised-and-learned-routes) to view the advertised and learned routes.
+
+# [**PowerShell**](#tab/powershell)
+
+Use the [Get-AzRouteServerPeerAdvertisedRoute](/powershell/module/az.network/get-azrouteserverpeeradvertisedroute) cmdlet to view routes advertised by a route server.
+
+```azurepowershell-interactive
+Get-AzRouteServerPeerAdvertisedRoute -PeerName 'myNVA' -ResourceGroupName 'myResourceGroup' -RouteServerName 'myRouteServer'
+```
+
+Use the [Get-AzRouteServerPeerLearnedRoute](/powershell/module/az.network/get-azrouteserverpeerlearnedroute) cmdlet to view routes learned by a route server.
+
+```azurepowershell-interactive
+Get-AzRouteServerPeerLearnedRoute -PeerName 'myNVA' -ResourceGroupName 'myResourceGroup' -RouteServerName 'myRouteServer'
+```
+
+| Parameter | Value |
+| -- | -- |
+| `-PeerName` | The peer name. |
+| `-ResourceGroupName` | The resource group name of your route server. |
+| `-RouteServerName` | The route server name. |
++
+# [**Azure CLI**](#tab/cli)
+
+Use the [az network routeserver peering list-advertised-routes](/cli/azure/network/routeserver/peering#az-network-routeserver-peering-list-advertised-routes) command to view routes advertised by a route server.
++
+```azurecli-interactive
+az network routeserver peering list-advertised-routes --name 'myNVA' --resource-group 'myResourceGroup' --routeserver 'myRouteServer'
+```
+
+Use the [az network routeserver peering list-learned-routes](/cli/azure/network/routeserver/peering#az-network-routeserver-peering-list-learned-routes) command to view routes learned by a route server.
+
+```azurecli-interactive
+az network routeserver peering list-learned-routes --name 'myNVA' --resource-group 'myResourceGroup' --routeserver 'myRouteServer'
+```
+
+| Parameter | Value |
+| -- | -- |
+| `--name` | The peer name. |
+| `--resource-group` | The resource group name of your route server. |
+| `--routeserver` | The route server name. |
+++
+## Delete a peer
+
+In this section, you learn how to delete an existing peering with a network virtual appliance (NVA).
+
+# [**Portal**](#tab/portal)
+
+1. Go to the route server whose NVA peering you want to delete.
+
+1. Under **Settings**, select **Peers**.
+
+1. Select the ellipses **...** next to the peer that you want to delete, and then select **Delete**.
+
+ :::image type="content" source="./media/configure-route-server/delete-peer.png" alt-text="Screenshot that shows how to delete a route server's peer." lightbox="./media/configure-route-server/delete-peer.png":::
+
+# [**PowerShell**](#tab/powershell)
+
+Use the [Remove-AzRouteServerPeer](/powershell/module/az.network/remove-azrouteserverpeer) cmdlet to delete a route server peering.
+
+```azurepowershell-interactive
+Remove-AzRouteServerPeer -PeerName 'myNVA' -ResourceGroupName 'myResourceGroup' -RouteServerName 'myRouteServer'
+```
+
+| Parameter | Value |
+| -- | -- |
+| `-PeerName` | The peer name. |
+| `-ResourceGroupName` | The resource group name of your route server. |
+| `-RouteServerName` | The route server name. |
+
+# [**Azure CLI**](#tab/cli)
+
+Use the [az network routeserver peering delete](/cli/azure/network/routeserver/peering#az-network-routeserver-peering-delete) command to delete a route server peering.
+
+```azurecli-interactive
+az network routeserver peering delete --name 'myNVA' --resource-group 'myResourceGroup' --routeserver 'myRouteServer'
+```
+
+| Parameter | Value |
+| -- | -- |
+| `--name` | The peer name. |
+| `--resource-group` | The resource group name of your route server. |
+| `--routeserver` | The route server name. |
+++
+## Delete a route server
+
+In this section, you learn how to delete an existing route server.
+
+# [**Portal**](#tab/portal)
+
+1. Go to the route server that you want to delete.
+
+1. Select **Delete** from the **Overview** page.
+
+1. Select **Confirm** to delete the route server.
+
+ :::image type="content" source="./media/configure-route-server/delete-route-server.png" alt-text="Screenshot that shows how to delete a route server." lightbox="./media/configure-route-server/delete-route-server.png":::
+
+# [**PowerShell**](#tab/powershell)
+
+Use the [Remove-AzRouteServer](/powershell/module/az.network/remove-azrouteserver) cmdlet to delete a route server.
+
+```azurepowershell-interactive
+Remove-AzRouteServer -RouteServerName 'myRouteServer' -ResourceGroupName 'myResourceGroup'
+```
+
+| Parameter | Value |
+| -- | -- |
+| `-RouteServerName` | The route server name. |
+| `-ResourceGroupName` | The resource group name of your route server. |
+
+# [**Azure CLI**](#tab/cli)
+
+Use the [az network routeserver delete](/cli/azure/network/routeserver#az-network-routeserver-delete) command to delete a route server.
+
+```azurecli-interactive
+az network routeserver delete --name 'myRouteServer' --resource-group 'myResourceGroup'
+```
+
+| Parameter | Value |
+| -- | -- |
+| `--name` | The route server name. |
+| `--resource-group` | The resource group name of your route server. |
+++
+## Related content
+
+- [Create a route server using the Azure portal](quickstart-configure-route-server-portal.md)
+- [Configure BGP peering between a route server and a network virtual appliance (NVA)](peer-route-server-with-virtual-appliance.md)
+- [Monitor Azure Route Server](monitor-route-server.md)
route-server Hub Routing Preference Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/route-server/hub-routing-preference-cli.md
- Title: Configure routing preference - Azure CLI-
-description: Learn how to configure routing preference in Azure Route Server using the Azure CLI to influence its route selection.
---- Previously updated : 11/15/2023--
-#CustomerIntent: As an Azure administrator, I want learn how to use routing preference setting so that I can influence route selection in Azure Route Server using the Azure CLI.
--
-# Configure routing preference to influence route selection using the Azure CLI
-
-Learn how to use routing preference setting in Azure Route Server to influence its route learning and selection. For more information, see [Routing preference](hub-routing-preference.md).
-
-## Prerequisites
--- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).-- An Azure route server. If you need to create a Route Server, see [Create and configure Azure Route Server](quickstart-configure-route-server-cli.md).-- Azure Cloud Shell or Azure CLI installed locally.-
-## View routing preference configuration
-
-Use [az network routeserver show](/cli/azure/network/routeserver#az-network-routeserver-show()) to view the current route server configuration including its routing preference setting.
-
-```azurecli-interactive
-# Show the Route Server configuration.
-az network routeserver show --resource-group 'myResourceGroup' --name 'myRouteServer'
-```
-
-In the output, you can see the current routing preference setting in front of **"hubRoutingPreference"**:
-
-```output
-{
- "allowBranchToBranchTraffic": false,
- "etag": "W/\"00000000-1111-2222-3333-444444444444\"",
- "hubRoutingPreference": "ExpressRoute",
- "id": "/subscriptions/abcdef01-2345-6789-0abc-def012345678/resourceGroups/myResourceGroup/providers/Microsoft.Network/virtualHubs/myRouteServer",
- "kind": "RouteServer",
- "location": "eastus",
- "name": "myRouteServer",
- "provisioningState": "Succeeded",
- "resourceGroup": "myResourceGroup",
- "routeTable": {
- "routes": []
- },
- "routingState": "Provisioned",
- "sku": "Standard",
- "tags": {},
- "type": "Microsoft.Network/virtualHubs",
- "virtualHubRouteTableV2s": [],
- "virtualRouterAsn": 65515,
- "virtualRouterAutoScaleConfiguration": {
- "minCapacity": 2
- },
- "virtualRouterIps": [
- "10.1.1.5",
- "10.1.1.4"
- ]
-}
-```
-
-> [!NOTE]
-> The default routing preference setting is **ExpressRoute**.
-
-## Configure routing preference
-
-Use [az network routeserver update](/cli/azure/network/routeserver#az-network-routeserver-update()) to update routing preference setting.
-
-```azurecli-interactive
-# Change the routing preference to AS Path.
-az network routeserver update --name 'myRouteServer' --hub-routing-preference 'ASPath' --resource-group 'myResourceGroup'
-```
-
-```azurecli-interactive
-# Change the routing preference to VPN Gateway.
-az network routeserver update --name 'myRouteServer' --hub-routing-preference 'VpnGateway' --resource-group 'myResourceGroup'
-```
-
-```azurecli-interactive
-# Change the routing preference to ExpressRoute.
-az network routeserver update --name 'myRouteServer' --hub-routing-preference 'ExpressRoute' --resource-group 'myResourceGroup'
-```
-
-## Related content
--- [Create and configure Route Server](quickstart-configure-route-server-cli.md)-- [Monitor Azure Route Server](monitor-route-server.md)-- [Azure Route Server FAQ](route-server-faq.md)
route-server Hub Routing Preference Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/route-server/hub-routing-preference-portal.md
- Title: Configure routing preference - Azure portal-
-description: Learn how to configure routing preference in Azure Route Server using the Azure portal to influence its route selection.
---- Previously updated : 11/15/2023-
-#CustomerIntent: As an Azure administrator, I want learn how to use routing preference setting so that I can influence route selection in Azure Route Server using the Azure portal.
--
-# Configure routing preference to influence route selection using the Azure portal
-
-Learn how to use routing preference setting in Azure Route Server to influence its route learning and selection. For more information, see [Routing preference](hub-routing-preference.md).
-
-## Prerequisites
--- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).-- An Azure route server. If you need to create a Route Server, see [Create and configure Azure Route Server](quickstart-configure-route-server-portal.md).-
-## Configure routing preference
-
-1. Sign in to [Azure portal](https://portal.azure.com).
-
-1. In the search box at the top of the portal, enter ***route server***. Select **Route Servers** from the search results.
-
- :::image type="content" source="./media/hub-routing-preference-portal/portal.png" alt-text="Screenshot of searching for Azure Route Server in the Azure portal." lightbox="./media/hub-routing-preference-portal/portal.png":::
-
-1. Select the Route Server that you want to configure.
-
-1. Select **Configuration**.
-
-1. In the **Configuration** page, select **VPN**, **ASPath** or **ExpressRoute**.
-
- :::image type="content" source="./media/hub-routing-preference-portal/routing-preference-configuration.png" alt-text="Screenshot of configuring routing preference of a Route Server in the Azure portal.":::
-
- > [!NOTE]
- > The default routing preference setting is **ExpressRoute**.
-
-1. Select **Save**.
-
-## Related content
--- [Create and configure Route Server](quickstart-configure-route-server-portal.md)-- [Monitor Azure Route Server](monitor-route-server.md)-- [Azure Route Server FAQ](route-server-faq.md)
route-server Hub Routing Preference Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/route-server/hub-routing-preference-powershell.md
- Title: Configure routing preference - PowerShell-
-description: Learn how to configure routing preference in Azure Route Server using Azure PowerShell to influence its route selection.
---- Previously updated : 11/15/2023--
-#CustomerIntent: As an Azure administrator, I want learn how to use routing preference setting so that I can influence route selection in Azure Route Server using Azure PowerShell.
--
-# Configure routing preference to influence route selection using PowerShell
-
-Learn how to use routing preference setting in Azure Route Server to influence its route learning and selection. For more information, see [Routing preference](hub-routing-preference.md).
--
-## Prerequisites
--- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).-- An Azure route server. If you need to create a Route Server, see [Create and configure Azure Route Server](quickstart-configure-route-server-powershell.md).-- Azure Cloud Shell or Azure PowerShell installed locally.-
-## View routing preference configuration
-
-Use [Get-AzRouteServer](/powershell/module/az.network/get-azrouteserver) to view the current routing preference configuration.
-
-```azurepowershell-interactive
-# Get the Route Server.
-Get-AzRouteServer -ResourceGroupName 'myResourceGroup'
-```
-
-In the output, you can see the current routing preference setting under **HubRoutingPreference**:
-
-```output
-ResourceGroupName Name Location RouteServerAsn RouteServerIps ProvisioningState HubRoutingPreference
- -- -- -- -- --
-myResourceGroup myRouteServer eastus 65515 {10.1.1.5, 10.1.1.4} Succeeded ExpressRoute
-```
-
-> [!NOTE]
-> The default routing preference setting is **ExpressRoute**.
-
-## Configure routing preference
-
-Use [Update-AzRouteServer](/powershell/module/az.network/update-azrouteserver) to configure routing preference.
-
-```azurepowershell-interactive
-# Change the routing preference to AS Path.
-Update-AzRouteServer -RouteServerName 'myRouteServer' -HubRoutingPreference 'ASPath' -ResourceGroupName 'myResourceGroup'
-```
-
-```azurepowershell-interactive
-# Change the routing preference to VPN Gateway.
-Update-AzRouteServer -RouteServerName 'myRouteServer' -HubRoutingPreference 'VpnGateway' -ResourceGroupName 'myResourceGroup'
-```
-
-```azurepowershell-interactive
-# Change the routing preference to ExpressRoute.
-Update-AzRouteServer -RouteServerName 'myRouteServer' -HubRoutingPreference 'ExpressRoute' -ResourceGroupName 'myResourceGroup'
-```
-
-> [!IMPORTANT]
-> Include ***-AllowBranchToBranchTraffic*** parameter to enable **route exchange (branch-to-branch)** even if it was enabled before running the **Update-AzRouteServer** cmdlet. For more information, see [Configure route exchange](quickstart-configure-route-server-powershell.md#configure-route-exchange).
-
-## Related content
--- [Create and configure Route Server](quickstart-configure-route-server-powershell.md)-- [Monitor Azure Route Server](monitor-route-server.md)-- [Azure Route Server FAQ](route-server-faq.md)
route-server Hub Routing Preference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/route-server/hub-routing-preference.md
When Route Server has multiple routes to an on-premises destination prefix, Rout
## Next step

> [!div class="nextstepaction"]
-> [Configure routing preference](hub-routing-preference-portal.md)
+> [Configure routing preference](configure-route-server.md#configure-routing-preference)
sap Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/get-started.md
Previously updated : 09/03/2024 Last updated : 09/16/2024
In the SAP workload documentation space, you can find the following areas:
## Change Log
+- September 16, 2024: Included section on supported clock sources in Azure VMs in [SAP HANA infrastructure configurations and operations on Azure](./hana-vm-operations.md)
- September 03, 2024: Included Mv3 High Memory and Very High Memory in HANA storage configuration in [SAP HANA Azure virtual machine Premium SSD storage configurations](./hana-vm-premium-ssd-v1.md), [SAP HANA Azure virtual machine Premium SSD v2 storage configurations](./hana-vm-premium-ssd-v2.md), and [SAP HANA Azure virtual machine Ultra Disk storage configurations](./hana-vm-ultra-disk.md)
- August 22, 2024: Added documentation option for SAPHanaSR-angi as separate tab in [High availability for SAP HANA on Azure VMs on SLES](./sap-hana-high-availability.md) and [High availability of SAP HANA scale-up with Azure NetApp Files on SLES](./sap-hana-high-availability-netapp-files-suse.md).
- July 29, 2024: Changes in [Azure VMs high availability for SAP NetWeaver on SLES for SAP Applications with simple mount and NFS](./high-availability-guide-suse-nfs-simple-mount.md), [Azure VMs high availability for SAP NW on SLES with NFS on Azure Files](./high-availability-guide-suse-nfs-azure-files.md), [Azure VMs high availability for SAP NW on SLES with Azure NetApp Files](./high-availability-guide-suse-netapp-files.md), [Azure VMs high availability for SAP NetWeaver on SLES](./high-availability-guide-suse.md), and [Azure VMs high availability for SAP NetWeaver on SLES multi-SID guide](./high-availability-guide-suse-multi-sid.md) with instructions for managing SAP ASCS and ERS instances with the SAP startup framework when configured with systemd.
sap Hana Vm Operations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/hana-vm-operations.md
Previously updated : 11/09/2023 Last updated : 09/16/2024
Deploy the VMs in Azure by using:
- Azure PowerShell cmdlets.
- The Azure CLI.
-You also can deploy a complete installed SAP HANA platform on the Azure VM services through the [SAP Cloud platform](https://cal.sap.com/). The installation process is described in [Deploy SAP S/4HANA or BW/4HANA on Azure](./cal-s4h.md).
+You also can deploy a complete installed SAP HANA platform on the Azure VM services through the [SAP Cloud platform](https://cal.sap.com). The installation process is described in [Deploy SAP S/4HANA or BW/4HANA on Azure](./cal-s4h.md).
>[!IMPORTANT]
> In order to use M208xx_v2 VMs, you need to be careful when selecting your Linux image. For more information, see [Memory optimized virtual machine sizes](/azure/virtual-machines/mv2-series).
To deploy SAP HANA in Azure without a site-to-site connection, you still want to
Another description on how to use Azure NVAs to control and monitor access from Internet without the hub and spoke VNet architecture can be found in the article [Deploy highly available network virtual appliances](/azure/architecture/reference-architectures/dmz/nva-ha).
+### Clock source options in Azure VMs
+SAP HANA requires reliable and accurate timing information to perform optimally. Traditionally, Azure VMs running on the Azure hypervisor used only the Hyper-V TSC page as the default clock source. Technology advancements in hardware, the host OS, and Linux guest OS kernels have made it possible to provide "Invariant TSC" as a clock source option on some Azure VM SKUs.
+
+The Hyper-V TSC page (`hyperv_clocksource_tsc_page`) is supported on all Azure VMs as a clock source.
+If the underlying hardware, hypervisor, and guest OS Linux kernel support Invariant TSC, `tsc` is offered as an available and supported clock source in the guest OS on Azure VMs.
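+
+To verify which clock source the guest OS is using, you can read the standard Linux sysfs entries. Here's a minimal sketch (resource names are placeholders) that runs the check through the Run Command feature; it assumes a recent Az.Compute module that supports the `-ScriptString` parameter:
+
+```azurepowershell-interactive
+# Read the current and the available clock sources inside a Linux VM.
+Invoke-AzVMRunCommand -ResourceGroupName 'myResourceGroup' -VMName 'myHanaVM' `
+    -CommandId 'RunShellScript' `
+    -ScriptString 'cat /sys/devices/system/clocksource/clocksource0/current_clocksource /sys/devices/system/clocksource/clocksource0/available_clocksource'
+```
+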
## Configuring Azure infrastructure for SAP HANA scale-out

In order to find out the Azure VM types that are certified for either OLAP scale-out or S/4HANA scale-out, check the [SAP HANA hardware directory](https://www.sap.com/dmc/exp/2014-09-02-hana-hardware/enEN/#/solutions?filters=v:deCertified;iaas;ve:24). A checkmark in the column 'Clustering' indicates scale-out support. Application type indicates whether OLAP scale-out or S/4HANA scale-out is supported. For details on nodes certified in scale-out, review the entry for a specific VM SKU listed in the SAP HANA hardware directory.
sentinel Billing Pre Purchase Plan https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/billing-pre-purchase-plan.md
+
+ Title: Optimize costs with a pre-purchase plan
+
+description: Learn how to save costs and buy a Microsoft Sentinel pre-purchase plan
++ Last updated : 07/10/2024++
+#customerintent: As a SOC administrator or a billing specialist, I want to know how to buy a pre-purchase plan and whether commit units will benefit us financially.
++
+# Optimize Microsoft Sentinel costs with a pre-purchase plan
+
+Save on your Microsoft Sentinel costs when you buy a pre-purchase plan. Pre-purchase plans are commit units (CUs) bought at discounted tiers in your purchasing currency for a specific product. The more you buy, the greater the discount. Purchased CUs pay down qualifying costs in US dollars (USD). So, if Microsoft Sentinel generates a retail cost of $100, then 100 Sentinel CUs (SCUs) are consumed.
+
+Any eligible Microsoft Sentinel retail costs are automatically deducted first from your SCUs over the plan's one-year term, or until the SCUs are depleted. Your pre-purchase plan SCUs start paying for your Microsoft Sentinel workspace costs without your needing to redeploy or reassign the plan, and by default the plan automatically renews to ensure that you continue saving.
+
+## Prerequisites
+
+To buy a pre-purchase plan, you must have one of the following Azure subscriptions and roles:
+- For an Azure subscription, you need the Owner role or the Reservation Purchaser role.
+- For an Enterprise Agreement (EA) subscription, the [**Reserved Instances** policy option](../cost-management-billing/manage/direct-ea-administration.md#view-and-manage-enrollment-policies) must be enabled. To enable that policy option, you must be an EA administrator of the subscription.
+- For a Cloud Solution Provider (CSP) subscription, follow one of these articles:
+ - [Buy Azure reservations on behalf of a customer](/partner-center/customers/azure-reservations-buying)
+ - [Allow the customer to buy their own reservations](/partner-center/customers/give-customers-permission)
+
+>[!NOTE]
+> Microsoft Sentinel Commit Units are different from Security Compute Units in Copilot for Security. Customers cannot use Sentinel Commit Units to run Copilot workloads and vice versa.
+
+## Determine the right size to buy
+
+Pre-purchase plans pair nicely with Microsoft Sentinel commitment tiers. Once you plan your Microsoft Sentinel ingestion volume, choose an appropriate commitment tier. Then it's easier to decide on the size of a pre-purchase plan to buy. Microsoft Sentinel pre-purchase plans have a term agreement of one year.
+
+Here's an example of the decision making and cost savings for a pre-purchase plan. If you have a commitment tier of 200 GB/day, there's an associated monthly estimated cost for both the ingestion to the workspace and the analysis for Microsoft Sentinel. For example purposes, let's say that monthly cost is $20,000 USD with simplified pricing and provides a 39% savings over the pay-as-you-go tier with the same 200 GB/day.
+
+A $100,000 USD pre-purchase plan covers five months of that commitment tier but is valid for paying Microsoft Sentinel costs for 12 months. That pre-purchase plan is bought at a 22% discount for $78,000 USD.
+
+The savings for the commitment tier and the pre-purchase plan combine. The original pay-as-you-go price for five months of 200 GB/day ingestion and analysis costs is about $160,000 USD. With an accurate commitment tier and a pre-purchase plan, the cost is reduced to $78,000 USD for a combined savings of over 51%.
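+
+As a quick sketch of the arithmetic (the figures are the example values above, not current prices):
+
+```azurepowershell-interactive
+# Illustrative arithmetic only, based on the example figures in this article.
+$commitmentMonthly = 20000                            # 200 GB/day commitment tier, USD per month
+$payAsYouGoMonthly = $commitmentMonthly / (1 - 0.39)  # the tier saves ~39% over pay-as-you-go
+$prePurchaseCost   = 100000 * (1 - 0.22)              # $100,000 USD of SCUs at a 22% discount
+$combinedSavings   = 1 - ($prePurchaseCost / (5 * $payAsYouGoMonthly))
+'{0:P1}' -f $combinedSavings                          # ~52%, the combined savings of over 51%
+```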
+
+For more information, see the following articles:
+- [Switch to simplified pricing](enroll-simplified-pricing-tier.md)
+- [Set or change commitment tier](billing-reduce-costs.md#set-or-change-pricing-tier)
+
+>[!IMPORTANT]
+> The prices mentioned are for example purposes only. To determine the latest commitment tier prices, see [Microsoft Sentinel pricing](https://azure.microsoft.com/pricing/details/microsoft-sentinel/).
+
+All Microsoft Sentinel pricing tiers qualify for Microsoft Sentinel pre-purchase plans. From your Microsoft Sentinel bill, these costs are the entries with the **Sentinel** service name in the invoice details. These costs don't include Azure Monitor tiers, retention, restore and search costs. Eligible Microsoft Sentinel usage is deducted from the pre-purchased Microsoft Sentinel CUs automatically.
+
+For more information on how to view Microsoft Sentinel simplified or classic pricing tiers in your invoice details, see [Understand your Microsoft Sentinel bill](billing.md#understand-your-microsoft-sentinel-bill).
+
+Keep in mind, Microsoft Sentinel integrates with many other Azure services that have separate costs not eligible to use with the pre-purchase SCUs. For more information, see [Costs and pricing for other services](billing.md#costs-and-pricing-for-other-services).
+
+## Purchase Microsoft Sentinel commit units
+
+Purchase Microsoft Sentinel pre-purchase plans in the [Azure portal reservations](https://portal.azure.com/#view/Microsoft_Azure_Reservations/ReservationsBrowseBlade/productType/Reservations).
+
+1. Go to the [Azure portal](https://portal.azure.com).
+1. Navigate to the **Reservations** service.
+1. On the **Purchase reservations** page, select **Microsoft Sentinel Pre-Purchase Plan**.
+1. On the **Select the product you want to purchase** page, select a subscription. Use the **Subscription** list to select the subscription used to pay for the reserved capacity. The payment method of the subscription is charged the upfront costs for the reserved capacity. Charges are deducted from the enrollment's Azure Prepayment (previously called monetary commitment) balance or charged as overage.
+1. Select a scope.
+ - **Single resource group scope** - Applies the reservation discount to the matching resources in the selected resource group only.
+ - **Single subscription scope** - Applies the reservation discount to the matching resources in the selected subscription.
+ - **Shared scope** - Applies the reservation discount to matching resources in eligible subscriptions that are in the billing context. For Enterprise Agreement customers, the billing context is the enrollment.
+ - **Management group** - Applies the reservation discount to the matching resource in the list of subscriptions that are a part of both the management group and billing scope.
+1. Select how many Microsoft Sentinel commit units you want to purchase.
+
+ :::image type="content" source="media/sentinel-pre-purchase-plan.png" alt-text="Screenshot showing Microsoft Sentinel pre-purchase plan discount tiers and their term lengths." lightbox="media/sentinel-pre-purchase-plan.png":::
+
+1. Choose to automatically renew the pre-purchase reservation. *The setting is configured to renew automatically by default*. For more information, see [Renew a reservation](../cost-management-billing/reservations/reservation-renew.md).
+
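+After the purchase completes, you can confirm it from PowerShell. Here's a minimal sketch, assuming the Az.Reservations module is installed:
+
+```azurepowershell-interactive
+# List reservation orders visible to your account, including their term and state.
+Get-AzReservationOrder | Format-Table DisplayName, Term, ProvisioningState
+```
+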
+## Change scope and ownership
+
+You can make the following types of changes to a reservation after purchase:
+
+- Update reservation scope
+- Update who can view or manage the reservation. For more information, see [Who can manage a reservation by default](../cost-management-billing/reservations/manage-reserved-vm-instance.md#who-can-manage-a-reservation-by-default).
+
+You can't split or merge a **Microsoft Sentinel Pre-Purchase Plan**. For more information about managing reservations, see [Manage reservations after purchase](../cost-management-billing/reservations/manage-reserved-vm-instance.md).
+
+## Cancellations and exchanges
+
+Cancellations and exchanges aren't supported for **Microsoft Sentinel Pre-Purchase Plans**. All purchases are final.
+
+## Related content
+
+To learn more about Azure Reservations, see the following articles:
+- [What are Azure Reservations?](../cost-management-billing/reservations/save-compute-costs-reservations.md)
+- [Manage Azure Reservations](../cost-management-billing/reservations/manage-reserved-vm-instance.md)
+
+To learn more about Microsoft Sentinel costs, see [Plan costs and understand Microsoft Sentinel pricing and billing](billing.md).
sentinel Billing Reduce Costs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/billing-reduce-costs.md
Title: Reduce costs for Microsoft Sentinel description: Learn how to reduce costs for Microsoft Sentinel by using different methods in the Azure portal.--++ Previously updated : 03/07/2024 Last updated : 07/09/2024 appliesto: - Microsoft Sentinel in the Azure portal - Microsoft Sentinel in the Microsoft Defender portal
To learn more about how to monitor your costs, see [Manage and monitor costs for
For workspaces still using classic pricing tiers, the Microsoft Sentinel pricing tiers don't include Log Analytics charges. For more information, see [Simplified pricing tiers](billing.md#simplified-pricing-tiers).
+## Buy a pre-purchase plan
+
+Save on your Microsoft Sentinel costs when you pre-purchase Microsoft Sentinel commit units (CUs). Use the pre-purchased CUs at any time during the one-year purchase term.
+
+Any eligible Microsoft Sentinel costs are deducted first from the pre-purchased CUs automatically. You don't need to redeploy or assign a pre-purchase plan to your Microsoft Sentinel workspaces for the CU usage to get the pre-purchase discounts.
+
+For more information, see [Optimize Microsoft Sentinel costs with a pre-purchase plan](billing-pre-purchase-plan.md).
+
## Separate non-security data in a different workspace

Microsoft Sentinel analyzes all the data ingested into Microsoft Sentinel-enabled Log Analytics workspaces. It's best to have a separate workspace for non-security operations data, to ensure it doesn't incur Microsoft Sentinel costs.
sentinel Data Connectors Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors-reference.md
For more information about the codeless connector platform, see [Create a codele
- [[Recommended] Forcepoint NGFW via AMA](data-connectors/recommended-forcepoint-ngfw-via-ama.md)
- [Barracuda CloudGen Firewall](data-connectors/barracuda-cloudgen-firewall.md)
- [Exchange Security Insights Online Collector (using Azure Functions)](data-connectors/exchange-security-insights-online-collector.md)
-- [Exchange Security Insights On-Premise Collector](data-connectors/exchange-security-insights-on-premise-collector.md)
+- [Exchange Security Insights On-Premises Collector](data-connectors/exchange-security-insights-on-premises-collector.md)
- [Microsoft Exchange Logs and Events](data-connectors/microsoft-exchange-logs-and-events.md)
- [Forcepoint DLP](data-connectors/forcepoint-dlp.md)
- [MISP2Sentinel](data-connectors/misp2sentinel.md)
sentinel Cyber Blind Spot Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/cyber-blind-spot-integration.md
+
+ Title: "Cyber Blind Spot Integration (using Azure Functions) connector for Microsoft Sentinel"
+description: "Learn how to install the connector Cyber Blind Spot Integration (using Azure Functions) to connect your data source to Microsoft Sentinel."
++ Last updated : 04/26/2024+++++
+# Cyber Blind Spot Integration (using Azure Functions) connector for Microsoft Sentinel
+
+Through the API integration, you can retrieve all the issues related to your CBS organizations via a RESTful interface.
+
+This is autogenerated content. For changes, contact the solution provider.
+
+## Connector attributes
+
+| Connector attribute | Description |
+| | |
+| **Azure function app code** | https://raw.githubusercontent.com/CTM360-Integrations/Azure-Sentinel/ctm360-HV-CBS-azurefunctionapp/Solutions/CTM360/Data%20Connectors/CBS/AzureFunctionCTM360_CBS.zip |
+| **Log Analytics table(s)** | CBSLog_Azure_1_CL<br/> |
+| **Data collection rules support** | Not currently supported |
+| **Supported by** | [Cyber Threat Management 360](https://www.ctm360.com/) |
+
+## Query samples
+
+**All logs**
+
+ ```kusto
+CBSLog_Azure_1_CL
+ | take 10
+ ```
+++
+## Prerequisites
+
+To integrate with Cyber Blind Spot Integration (using Azure Functions) make sure you have:
+
+- **Microsoft.Web/sites permissions**: Read and write permissions to Azure Functions are required to create a Function App. [See the documentation to learn more about Azure Functions](/azure/azure-functions/).
++
+## Vendor installation instructions
++
+> [!NOTE]
+ > This connector uses Azure Functions to connect to a 'CyberBlindSpot' to pull its logs into Microsoft Sentinel. This might result in additional data ingestion costs. Check the [Azure Functions pricing page](https://azure.microsoft.com/pricing/details/functions/) for details.
++
+>**(Optional Step)** Securely store workspace and API authorization key(s) or token(s) in Azure Key Vault. Azure Key Vault provides a secure mechanism to store and retrieve key values. [Follow these instructions](/azure/app-service/app-service-key-vault-references) to use Azure Key Vault with an Azure Function App.
++
+**STEP 1 - Configuration steps for the 'CyberBlindSpot' API**
+
+The provider should provide or link to detailed steps to configure the 'CyberBlindSpot' API endpoint so that the Azure Function can authenticate to it successfully, get its authorization key or token, and pull the appliance's logs into Microsoft Sentinel.
++
+**STEP 2 - Choose ONE from the following two deployment options to deploy the connector and the associated Azure Function**
+
+>**IMPORTANT:** Before deploying the 'CyberBlindSpot' connector, have the Workspace ID and Workspace Primary Key (can be copied from the following), as well as the 'CyberBlindSpot' API authorization key(s) readily available.
++++
+**Option 1 - Azure Resource Manager (ARM) Template**
+
+Use this method for automated deployment of the 'CyberBlindSpot' connector.
+
+1. Click the **Deploy to Azure** button below.
+
+ [![Deploy To Azure](https://aka.ms/deploytoazurebutton)](https://aka.ms/sentinel-CTM360-CBS-azuredeploy) [![Deploy to Azure Gov](https://aka.ms/deploytoazuregovbutton)](https://aka.ms/sentinel-CTM360-CBS-azuredeploy-gov)
+2. Select the preferred **Subscription**, **Resource Group** and **Location**.
+3. Enter the **Workspace ID**, **Workspace Key**, the **API authorization key(s)**, and any other required fields.
+>Note: If using Azure Key Vault secrets for any of the values above, use the `@Microsoft.KeyVault(SecretUri={Security Identifier})` schema in place of the string values. Refer to [Key Vault references documentation](/azure/app-service/app-service-key-vault-references) for further details.
+4. Mark the checkbox labeled **I agree to the terms and conditions stated above**.
+5. Click **Purchase** to deploy.
+
+**Option 2 - Manual Deployment of Azure Functions**
+
+Use the following step-by-step instructions to deploy the CTM360 CBS data connector manually with Azure Functions (Deployment via Visual Studio Code).
++
+**1. Deploy a Function App**
+
+> **NOTE:** You will need to [prepare VS code](/azure/azure-functions/create-first-function-vs-code-python) for Azure function development.
+
+1. Download the [Azure Function App](https://raw.githubusercontent.com/CTM360-Integrations/Azure-Sentinel/ctm360-HV-CBS-azurefunctionapp/Solutions/CTM360/Data%20Connectors/CBS/AzureFunctionCTM360_CBS.zip) file. Extract archive to your local development computer.
+2. Start VS Code. Choose File in the main menu and select Open Folder.
+3. Select the top level folder from extracted files.
+4. Choose the Azure icon in the Activity bar, then in the **Azure: Functions** area, choose the **Deploy to function app** button.
+If you aren't already signed in, choose the Azure icon in the Activity bar, then in the **Azure: Functions** area, choose **Sign in to Azure**.
+If you're already signed in, go to the next step.
+5. Provide the following information at the prompts:
+
+ a. **Select folder:** Choose a folder from your workspace or browse to one that contains your function app.
+
+ b. **Select Subscription:** Choose the subscription to use.
+
+ c. Select **Create new Function App in Azure** (Don't choose the Advanced option)
+
+ d. **Enter a globally unique name for the function app:** Type a name that is valid in a URL path. The name you type is validated to make sure that it's unique in Azure Functions. (e.g. CTIXYZ).
+
+ e. **Select a runtime:** Choose Python 3.8.
+
+ f. Select a location for new resources. For better performance and lower costs choose the same [region](https://azure.microsoft.com/regions/) where Microsoft Sentinel is located.
+
+6. Deployment will begin. A notification is displayed after your function app is created and the deployment package is applied.
+7. Go to the Azure portal for the Function App configuration.
++
+**2. Configure the Function App**
+
+1. In the Function App, select the Function App Name and select **Configuration**.
+2. In the **Application settings** tab, select **+ New application setting**.
+3. Add each of the following application settings individually, with their respective string values (case-sensitive):
+ CTM360AccountID
+ WorkspaceID
+ WorkspaceKey
+ CTM360Key
+ FUNCTION_NAME
+ logAnalyticsUri - Use logAnalyticsUri to override the log analytics API endpoint for dedicated cloud. For example, for public cloud, leave the value empty; for Azure GovUS cloud environment, specify the value in the following format: `https://<CustomerId>.ods.opinsights.azure.us`.
+4. Once all application settings have been entered, click **Save**.
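+
+If you prefer scripting to the portal, here's a minimal sketch of the same configuration (the app and resource group names are placeholders), assuming the Az.Functions module:
+
+```azurepowershell-interactive
+# Add the same application settings to the Function App in one call.
+Update-AzFunctionAppSetting -Name 'ctm360-cbs-func' -ResourceGroupName 'myResourceGroup' -AppSetting @{
+    CTM360AccountID = '<account-id>'
+    WorkspaceID     = '<workspace-id>'
+    WorkspaceKey    = '<workspace-primary-key>'
+    CTM360Key       = '<ctm360-api-key>'
+    FUNCTION_NAME   = '<function-name>'
+}
+```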
+++
+## Next steps
+
+For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/ctm360wll1698919697848.ctm360_microsoft_sentinel_solution?tab=Overview) in the Azure Marketplace.
sentinel Exchange Security Insights On Premises Collector https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/exchange-security-insights-on-premises-collector.md
+
+ Title: "Exchange Security Insights On-Premises Collector connector for Microsoft Sentinel"
+description: "Learn how to install the connector Exchange Security Insights On-Premises Collector to connect your data source to Microsoft Sentinel."
++ Last updated : 04/26/2024+++++
+# Exchange Security Insights On-Premises Collector connector for Microsoft Sentinel
+
+Connector used to push the Exchange on-premises security configuration to Microsoft Sentinel for analysis
+
+This is autogenerated content. For changes, contact the solution provider.
+
+## Connector attributes
+
+| Connector attribute | Description |
+| | |
+| **Log Analytics table(s)** | ESIExchangeConfig_CL<br/> |
+| **Data collection rules support** | Not currently supported |
+| **Supported by** | [Community](https://github.com/Azure/Azure-Sentinel/issues) |
+
+## Query samples
+
+**View how many Configuration entries exist on the table**
+
+ ```kusto
+ESIExchangeConfig_CL
+ | summarize by GenerationInstanceID_g, EntryDate_s, ESIEnvironment_s
+ ```
+++
+## Prerequisites
+
+To integrate with Exchange Security Insights On-Premises Collector make sure you have:
+
+- **Service Account with Organization Management role**: The service account that launches the script as a scheduled task needs to be a member of Organization Management to be able to retrieve all the needed security information.
++
+## Vendor installation instructions
+
+Parser deployment **(When using Microsoft Exchange Security Solution, Parsers are automatically deployed)**
+
+> [!NOTE]
+ > This data connector depends on a parser based on a Kusto Function to work as expected. Follow the steps for each parser to create the Kusto Function aliases: [**ExchangeConfiguration**](https://aka.ms/sentinel-ESI-ExchangeConfiguration-OnPrem-parser) and [**ExchangeEnvironmentList**](https://aka.ms/sentinel-ESI-ExchangeEnvironmentList-OnPrem-parser)
++
+1. Install the ESI Collector script on a server with the Exchange Admin PowerShell console
+
+This is the script that collects Exchange information and pushes the content to Microsoft Sentinel.
+
++
+2. Configure the ESI Collector Script
+
+Be sure that you're a local administrator of the server.
+In 'Run as Administrator' mode, launch the 'setup.ps1' script to configure the collector.
+ Fill in the Log Analytics (Microsoft Sentinel) workspace information.
+ Fill in the environment name, or leave it empty. By default, choose 'Def' as the default analysis. The other choices are for specific usage.
+++
+3. Schedule the ESI Collector script (if the install script didn't do this because of a lack of permission, or if it was skipped during installation)
+
+The script needs to be scheduled to send the Exchange configuration to Microsoft Sentinel.
+ We recommend scheduling the script to run once a day.
+ The account used to launch the script needs to be a member of the Organization Management group.
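+
+For example, here's a minimal sketch of a daily schedule (the task name, script path, and service account are placeholders):
+
+```azurepowershell-interactive
+# Register a daily scheduled task that runs the ESI Collector script under a
+# service account that's a member of Organization Management.
+$action  = New-ScheduledTaskAction -Execute 'powershell.exe' -Argument '-File "C:\ESI\ESICollector.ps1"'
+$trigger = New-ScheduledTaskTrigger -Daily -At 3am
+Register-ScheduledTask -TaskName 'ESI Collector' -Action $action -Trigger $trigger -User 'CONTOSO\svc-esi' -Password '<password>'
+```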
+++
+## Next steps
+
+For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/microsoftsentinelcommunity.azure-sentinel-solution-exchangesecurityinsights?tab=Overview) in the Azure Marketplace.
sentinel Enroll Simplified Pricing Tier https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/enroll-simplified-pricing-tier.md
Keep in mind, the simplified effective per GB price for a Microsoft Sentinel ena
A Log Analytics workspace automatically configures its pricing tier to match the simplified pricing tier if Microsoft Sentinel is removed from a workspace while simplified pricing is enabled. For example, if the simplified pricing was configured for 100 GB/day Commitment tier in Microsoft Sentinel, the pricing tier of the Log Analytics workspace changes to 100 GB/day Commitment tier once Microsoft Sentinel is removed from the workspace. ### Will switching reduce my costs?
-Though the goal of the experience is to merely simplify the pricing and cost management experience without impacting actual costs, two primary scenarios exist for a cost reduction when switching to a simplified pricing tier.
+Though the goal of the experience is to merely simplify the pricing and cost management experience without impacting actual costs, three primary scenarios exist for a cost reduction when switching to a simplified pricing tier.
+- Reduce Microsoft Sentinel costs with a [pre-purchase plan](billing-pre-purchase-plan.md). Commit units of a pre-purchase plan don't apply to Log Analytics costs in the classic pricing tier. Since the entire simplified pricing tier is categorized as a Microsoft Sentinel cost, your effective spend with the simplified pricing tier is reduced with a pre-purchase plan that approaches your commitment tier.
- The combined [Defender for Servers](/azure/defender-for-cloud/faq-defender-for-servers#is-the-500-mb-of-free-data-ingestion-allowance-applied-per-workspace-or-per-machine-) benefit results in a total cost savings if utilized by the workspace.
- If one of the separate pricing tiers for Log Analytics or Microsoft Sentinel was inappropriately mismatched, the simplified pricing tier could result in cost saving.
sentinel Summary Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/summary-rules.md
This section reviews common scenarios for creating summary rules in Microsoft Se
```kusto
let csl_columnmatch=(column_name: string) {
    CommonSecurityLog
-    | where TimeGenerated > startofday(ago(1d))
    | where isnotempty(column_name)
    | extend Date = format_datetime(TimeGenerated, "yyyy-MM-dd"),
Most of the data sources are raw logs that are noisy and have high volume, but h
## Use summary rules with auxiliary logs (sample process)
-This procedure describes a sample process for using summary rules with [auxiliary logs](basic-logs-use-cases.md), using a custom connection created via an AMR template to ingest CEF data from Logstash.
+This procedure describes a sample process for using summary rules with [auxiliary logs](basic-logs-use-cases.md), using a custom connection created via an ARM template to ingest CEF data from Logstash.
1. Set up your custom CEF connector from Logstash:
This procedure describes a sample process for using summary rules with [auxiliar
// Daily Network traffic trend Per Destination IP along with Data transfer stats
// Frequency - Daily - Maintain 30 day or 60 Day History.
Custom_CommonSecurityLog
-  | where TimeGenerated > ago(1d)
  | extend Day = format_datetime(TimeGenerated, "yyyy-MM-dd")
  | summarize Count= count(), DistinctSourceIps = dcount(SourceIP), NoofByesTransferred = sum(SentBytes), NoofBytesReceived = sum(ReceivedBytes)
  by Day, DestinationIp, DeviceVendor
sentinel Tutorial Enrich Ip Information https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/tutorial-enrich-ip-information.md
To complete this tutorial, make sure you have:
- [**Microsoft Sentinel Contributor**](../role-based-access-control/built-in-roles.md#microsoft-sentinel-contributor) on the Log Analytics workspace where Microsoft Sentinel is deployed. - [**Logic App Contributor**](../role-based-access-control/built-in-roles.md#logic-app-contributor), and **Owner** or equivalent, on whichever resource group will contain the playbook created in this tutorial.
+- The [VirusTotal solution](https://azuremarketplace.microsoft.com/en-gb/marketplace/apps/azuresentinel.azure-sentinel-solution-virustotal?tab=Overview) installed from the Content Hub.
+ - A (free) [VirusTotal account](https://www.virustotal.com/gui/my-apikey) will suffice for this tutorial. A production implementation requires a VirusTotal Premium account.

## Create a playbook from a template
sentinel Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/whats-new.md
The listed features were released in the last three months. For information abou
## September 2024 +
+- [Azure reservations now have pre-purchase plans available for Microsoft Sentinel](#pre-purchase-plans-now-available-for-microsoft-sentinel)
- [Import/export of automation rules now generally available (GA)](#importexport-of-automation-rules-now-generally-available-ga)
- [Google Cloud Platform data connectors are now generally available (GA)](#google-cloud-platform-data-connectors-are-now-generally-available-ga)
- [Microsoft Sentinel now generally available (GA) in Azure Israel Central](#microsoft-sentinel-now-generally-available-ga-in-azure-israel-central)
+### Pre-purchase plans now available for Microsoft Sentinel
+
+Pre-purchase plans are a type of Azure reservation. When you buy a pre-purchase plan, you get commit units (CUs) at discounted tiers for a specific product. Microsoft Sentinel commit units (SCUs) apply towards eligible costs in your workspace. When you have predictable costs, choosing the right pre-purchase plan saves you money!
+
+For more information, see [Optimize costs with a pre-purchase plan](billing-pre-purchase-plan.md).
+ ### Import/export of automation rules now generally available (GA) The ability to export automation rules to Azure Resource Manager (ARM) templates in JSON format, and to import them from ARM templates, is now generally available after a [short preview period](#export-and-import-automation-rules-preview).
service-bus-messaging Message Counters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/message-counters.md
This article shows you different ways of getting message counts for a queue or s
If an application wants to scale resources based on the length of the queue, it should do so with a measured pace. The acquisition of the message counters is an expensive operation inside the message broker, and executing it frequently directly and adversely impacts the entity performance.
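
For an occasional check, here's a minimal sketch with the Az.ServiceBus module (resource names are placeholders):

```azurepowershell-interactive
# Read the message count for a queue at a measured pace, not in a tight loop.
$queue = Get-AzServiceBusQueue -ResourceGroupName 'myResourceGroup' -NamespaceName 'myNamespace' -Name 'myQueue'
$queue.MessageCount
```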
+Another useful metric to consider for scaling is the time between when the latest message was sent and when it was processed, also known as "critical time". This is helpful for scenarios where a queue may have thousands of messages in it, but the processing is fast enough to keep up, giving a "critical time" of only a couple of seconds, which may be more than enough for something like an email sending endpoint. Third-party libraries like [NServiceBus](https://docs.particular.net/nservicebus/operations/opentelemetry#meters-emitted-meters) emit this and other useful metrics via OpenTelemetry.
> [!NOTE]
> The messages that are sent to a Service Bus topic are forwarded to subscriptions for that topic. So, the active message count on the topic itself is 0, as those messages have been successfully forwarded to the subscription. Get the message count at the subscription and verify that it's greater than 0. Even though you see messages at the subscription, they're actually stored in storage owned by the topic. If you look at the subscriptions, they would have a non-zero message count (which adds up to 323 MB of space for this entire entity).
service-bus-messaging Service Bus End To End Tracing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/service-bus-end-to-end-tracing.md
One of the common problems in micro services development is the ability to trace operation from a client through all the services that are involved in processing. It's useful for debugging, performance analysis, A/B testing, and other typical diagnostics scenarios. One part of this problem is tracking logical pieces of work. It includes message processing result and latency and external dependency calls. Another part is correlation of these diagnostics events beyond process boundaries.
-When a producer sends a message through a queue, it typically happens in the scope of some other logical operation, initiated by some other client or service. The same operation is continued by consumer once it receives a message. Both producer and consumer (and other services that process the operation), presumably emit telemetry events to trace the operation flow and result. In order to correlate such events and trace operation end-to-end, each service that reports telemetry has to stamp every event with a trace context.
+When a producer sends a message through a queue, it typically happens in the scope of some other logical operation, initiated by some other client or service. The same operation is continued by the consumer once it receives a message. Both the producer and the consumer (and other services that process the operation) presumably emit telemetry events to trace the operation flow and result. In order to correlate such events and trace an operation end-to-end, each service that reports telemetry has to stamp every event with a trace context. One library that can help developers have all of this telemetry emitted by default is [NServiceBus](https://docs.particular.net/nservicebus/operations/opentelemetry).
Microsoft Azure Service Bus messaging has defined payload properties that producers and consumers should use to pass such trace context. The protocol is based on the [W3C Trace-Context](https://www.w3.org/TR/trace-context/).
service-bus-messaging Service Bus Integrate With Rabbitmq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/service-bus-integrate-with-rabbitmq.md
Here are a few scenarios in which we can make use of these capabilities:

- **Hybrid Cloud**: Your company just acquired a third party that uses RabbitMQ for their messaging needs. They're on a different cloud. While they transition to Azure, you can already start sharing data by bridging RabbitMQ with Azure Service Bus.
- **Third-Party Integration**: A third party uses RabbitMQ as a broker and wants to send their data to us, but they're outside our organization. We can provide them with a SAS key that gives them access to a limited set of Azure Service Bus queues where they can forward their messages.
-The list goes on, but we can solve most of these use cases by bridging RabbitMQ to Azure.
+The list goes on, but we can solve most of these use cases by [bridging](/azure/architecture/patterns/messaging-bridge) RabbitMQ to Azure.
First you need to create a free Azure account by signing up [here](https://azure.microsoft.com/free/)
Once the policy has been created click on it to see the **Primary Connection Str
:::image type="content" source="./media/service-bus-integrate-with-rabbitmq/sas-policy-key.png" alt-text="Get SAS Policy":::
-Before you can use that connection string, you'll need to convert it to RabbitMQ's AMQP connection format. So go to the [connection string converter tool](https://red-mushroom-0f7446a0f.azurestaticapps.net/) and paste your connection string in the form, click convert. You'll get a connection string that's RabbitMQ ready. (That website runs everything local in your browser so your data isn't sent over the wire). You can access its source code on [GitHub](https://github.com/videlalvaro/connstring_to_amqp).
+Before you can use that connection string, you'll need to convert it to RabbitMQ's AMQP connection format. So go to the [connection string converter tool](https://amqpconnconverter.github.io/) and paste your connection string in the form, click convert. You'll get a connection string that's RabbitMQ ready. (That website runs everything local in your browser so your data isn't sent over the wire). You can access its source code on [GitHub](https://github.com/amqpconnconverter/amqpconnconverter.github.io).
:::image type="content" source="./media/service-bus-integrate-with-rabbitmq/converter.png" alt-text="Convert connection string":::
storage Storage Files Quick Create Use Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-quick-create-use-linux.md
Now you're ready to create an NFS file share and provide network-level security
1. Select **+ File Share**.
-1. Name the new file share *qsfileshare* and enter "100" for the minimum **Provisioned capacity**, or provision more capacity (up to 102,400 GiB) to get more performance. Select **NFS** protocol, leave **No Root Squash** selected, and select **Create**.
+1. Name the new file share *qsfileshare* and enter "100" for the minimum **Provisioned capacity**, or provision more capacity (up to 102,400 GiB) to get more performance. Select **NFS** protocol, choose a **Root Squash** setting, and select **Create**. To learn more about root squash and its security benefits for NFS file shares, see [Configure root squash for Azure Files](nfs-root-squash.md).
:::image type="content" source="media/storage-files-quick-create-use-linux/create-nfs-share.png" alt-text="Screenshot showing how to name the file share and provision capacity to create a new N F S file share." lightbox="media/storage-files-quick-create-use-linux/create-nfs-share.png" border="true":::
synapse-analytics Apache Spark 32 Runtime https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/apache-spark-32-runtime.md
Azure Synapse Analytics supports multiple runtimes for Apache Spark. This docume
> [!CAUTION] > Deprecation and disablement notification for Azure Synapse Runtime for Apache Spark 3.2
-> * End of Support was announced for Azure Synapse Runtime for Apache Spark 3.2 July 8, 2023.
-> * Effective July 8, 2024, Azure Synapse discontinued official support for Spark 3.2 Runtimes. The Synapse Spark Team is moving forward with the 3.2 __job disablement__ process September 12, 2024, beginning with partial pools and jobs disablement. We will continue with further full disablement on October 31st, 2024.
-* In accordance with the Synapse runtime for Apache Spark lifecycle policy, Azure Synapse runtime for Apache Spark 3.2 will be retired as of July 8, 2024. Existing workflows will continue to run but security updates and bug fixes will no longer be available. Metadata will temporarily remain in the Synapse workspace.
-* **We strongly recommend that you upgrade your Apache Spark 3.2 workloads to [Azure Synapse Runtime for Apache Spark 3.4 (GA)](./apache-spark-34-runtime.md) before July 8, 2024.**
-
+> * End of Support was announced for Azure Synapse Runtime for Apache Spark 3.2 on July 8, 2023. In accordance with the Synapse runtime for Apache Spark lifecycle policy, Azure Synapse Runtime for Apache Spark 3.2 retired as of July 8, 2024. Existing workflows will continue to run, but security updates and bug fixes will no longer be available. Metadata will temporarily remain in the Synapse workspace.
+> * Effective July 8, 2024, Azure Synapse discontinued official support for Spark 3.2 Runtimes. However, based on requests received from multiple customers, we have extended the usage of this runtime until **October 31st, 2024,** and **job disablement** will start soon after that.
+> * **We strongly recommend that you upgrade your Apache Spark 3.2 workloads to** **[Azure Synapse Runtime for Apache Spark 3.4](./apache-spark-34-runtime.md).**
+> * For up-to-date information, a detailed list of changes, and specific release notes for Spark runtimes, check and subscribe to **[Spark Runtimes Releases and Updates](https://github.com/microsoft/synapse-spark-runtime)**.
## Component versions

| Component | Version |
To check the libraries included in Azure Synapse Runtime for Apache Spark 3.2 fo
- [Azure Synapse Analytics](../overview-what-is.md)
- [Apache Spark Documentation](https://spark.apache.org/docs/3.2.1/)
+- [Apache Spark Concepts](apache-spark-concepts.md)

## Migration between Apache Spark versions - support
virtual-network Virtual Network Tap Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/virtual-network-tap-overview.md
# Virtual network TAP > [!IMPORTANT]
-> Virtual network TAP Preview is currently on hold in all Azure regions. You can email us at <azurevnettap@microsoft.com> with your subscription ID and we will notify you with future updates about the preview. In the interim, you can use agent based or NVA solutions that provide TAP/Network Visibility functionality through our [Packet Broker partner solutions](#virtual-network-tap-partner-solutions) available in [Azure Marketplace Offerings](https://azuremarketplace.microsoft.com/marketplace/apps/category/networking?page=1&subcategories=appliances%3Ball&search=Network%20Traffic&filters=partners).
+> Virtual network TAP Preview is currently in private preview in select Azure regions. You can sign up for the preview by using the [sign-up form](https://forms.office.com/r/EWqbgLGNcV), and we'll notify you when you're selected. In the interim, you can use agent-based or NVA solutions that provide TAP/network visibility functionality through our [Packet Broker partner solutions](#virtual-network-tap-partner-solutions) available in [Azure Marketplace Offerings](https://azuremarketplace.microsoft.com/marketplace/apps/category/networking?page=1&subcategories=appliances%3Ball&search=Network%20Traffic&filters=partners).
Azure virtual network TAP (Terminal Access Point) allows you to continuously stream your virtual machine network traffic to a network packet collector or analytics tool. The collector or analytics tool is provided by a [network virtual appliance](https://azure.microsoft.com/solutions/network-appliances/) partner. For a list of partner solutions that are validated to work with virtual network TAP, see [partner solutions](#virtual-network-tap-partner-solutions).