Service | Microsoft Docs article | Related commit history on GitHub | Change details |
---|---|---|---|
app-service | Configure Custom Container | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-custom-container.md | This guide provides key concepts and instructions for containerization of Window ::: zone pivot="container-linux" -This guide provides key concepts and instructions for containerization of Linux apps in App Service. If you're new to Azure App Service, follow the [custom container quickstart](quickstart-custom-container.md) and [tutorial](tutorial-custom-container.md) first. There's also a [multi-container app quickstart](quickstart-multi-container.md) and [tutorial](tutorial-multi-container-app.md). For sidecar containers (preview), see [Tutorial: Configure a sidecar container for custom container in Azure App Service (preview)](tutorial-custom-container-sidecar.md). +This guide provides key concepts and instructions for containerization of Linux apps in App Service. If you're new to Azure App Service, follow the [custom container quickstart](quickstart-custom-container.md) and [tutorial](tutorial-custom-container.md) first. For sidecar containers (preview), see [Tutorial: Configure a sidecar container for custom container in Azure App Service (preview)](tutorial-custom-container-sidecar.md). ::: zone-end Further troubleshooting information is available at the Azure App Service blog: ## Configure multi-container apps +> [!NOTE] +> Sidecar containers (preview) will supersede multi-container apps in App Service. To get started, see [Tutorial: Configure a sidecar container for custom container in Azure App Service (preview)](tutorial-custom-container-sidecar.md). + - [Use persistent storage in Docker Compose](#use-persistent-storage-in-docker-compose) - [Preview limitations](#preview-limitations) - [Docker Compose options](#docker-compose-options) The following lists show supported and unsupported Docker Compose configuration ::: zone pivot="container-linux" > [!div class="nextstepaction"]-> [Tutorial: Multi-container WordPress app](tutorial-multi-container-app.md) +> [Tutorial: Configure a sidecar container for custom container in Azure App Service (preview)](tutorial-custom-container-sidecar.md) ::: zone-end |
app-service | Getting Started | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/getting-started.md | Title: Getting started with Azure App Service -description: Take the first steps toward working with Azure App Service. This is a longer description that meets the length requirement. +description: Take the first steps toward working with Azure App Service. Decide on a stack and choose from various actions to get your app running. Previously updated : 8/31/2023 Last updated : 8/27/2024 zone_pivot_groups: app-service-getting-started-stacks Use the following resources to get started with .NET. | **Create your first .NET app** | Using one of the following tools:<br><br>- [Visual Studio](./quickstart-dotnetcore.md?tabs=net60&pivots=development-environment-vs)<br>- [Visual Studio Code](./quickstart-dotnetcore.md?tabs=net60&pivots=development-environment-vscode)<br>- [Command line](./quickstart-dotnetcore.md?tabs=net60&pivots=development-environment-cli)<br>- [Azure PowerShell](./quickstart-dotnetcore.md?tabs=net60&pivots=development-environment-ps)<br>- [Azure portal](./quickstart-dotnetcore.md?tabs=net60&pivots=development-environment-azure-portal) | | **Deploy your app** | - [Configure ASP.NET](./configure-language-dotnet-framework.md)<br>- [Configure ASP.NET core](./configure-language-dotnetcore.md?pivots=platform-linux)<br>- [GitHub actions](./deploy-github-actions.md) | | **Monitor your app**| - [Log stream](./troubleshoot-diagnostic-logs.md#stream-logs)<br>- [Diagnose and solve tool](./overview-diagnostics.md)|-| **Add domains & certificates** |- [Map a custom domain](./app-service-web-tutorial-custom-domain.md?tabs=root%2Cazurecli)<br>- [Add SSL certificate](./configure-ssl-certificate.md)| -| **Connect to a database** | - [.NET with Azure SQL Database](./app-service-web-tutorial-dotnet-sqldatabase.md)<br>- [.NET Core with Azure SQL DB](./tutorial-dotnetcore-sqldb-app.md)| +| **Add domains & certificates** |- [Map a custom domain](./app-service-web-tutorial-custom-domain.md?tabs=root%2Cazurecli)<br>- [Add TLS/SSL certificate](./configure-ssl-certificate.md)| +| **Connect to a database** | - [.NET with Azure SQL Database](./app-service-web-tutorial-dotnet-sqldatabase.md)<br>- [.NET Core with Azure SQL Database](./tutorial-dotnetcore-sqldb-app.md)| | **Custom containers** |- [Linux - Visual Studio Code](./quickstart-custom-container.md?tabs=dotnet&pivots=container-linux-vscode)<br>- [Windows - Visual Studio](./quickstart-custom-container.md?tabs=dotnet&pivots=container-windows-vs)| | **Review best practices** | - [Scale your app](./manage-scale-up.md)<br>- [Deployment](./deploy-best-practices.md)<br>- [Security](/security/benchmark/azure/baselines/app-service-security-baseline?toc=/azure/app-service/toc.json)<br>- [Virtual Network](./configure-vnet-integration-enable.md)| Use the following resources to get started with Node.js. 
| **Create your first Node app** | Using one of the following tools:<br><br>- [Visual Studio Code](./quickstart-nodejs.md?tabs=linux&pivots=development-environment-vscode)<br>- [CLI](./quickstart-nodejs.md?tabs=linux&pivots=development-environment-cli)<br>- [Azure portal](./quickstart-nodejs.md?tabs=linux&pivots=development-environment-azure-portal) | | **Deploy your app** | - [Configure Node](./configure-language-nodejs.md?pivots=platform-linux)<br>- [GitHub actions](./deploy-github-actions.md) | | **Monitor your app**| - [Log stream](./troubleshoot-diagnostic-logs.md#stream-logs)<br>- [Diagnose and solve tool](./overview-diagnostics.md)|-| **Add domains & certificates** |- [Map a custom domain](./app-service-web-tutorial-custom-domain.md?tabs=root%2Cazurecli)<br>- [Add SSL certificate](./configure-ssl-certificate.md)| +| **Add domains & certificates** |- [Map a custom domain](./app-service-web-tutorial-custom-domain.md?tabs=root%2Cazurecli)<br>- [Add TLS/SSL certificate](./configure-ssl-certificate.md)| | **Connect to a database** | - [MongoDB](./tutorial-nodejs-mongodb-app.md)| | **Custom containers** |- [Linux - Visual Studio Code](./quickstart-custom-container.md?tabs=node&pivots=container-linux-vscode)| | **Review best practices** | - [Scale your app](./manage-scale-up.md)<br>- [Deployment](./deploy-best-practices.md)<br>- [Security](/security/benchmark/azure/baselines/app-service-security-baseline?toc=/azure/app-service/toc.json)<br>- [Virtual Network](./configure-vnet-integration-enable.md)| Use the following resources to get started with Java. | **Create your first Java app** | Using one of the following tools:<br><br>- [Maven deploy with an embedded web server](./quickstart-java.md?pivots=java-javase)<br>- [Maven deploy to a Tomcat server](./quickstart-java.md?pivots=java-tomcat)<br>- [Maven deploy to a JBoss server](./quickstart-java.md?pivots=java-jboss) | | **Deploy your app** | - [With Maven](configure-language-java-deploy-run.md?pivots=platform-linux#maven)<br>- [With Gradle](configure-language-java-deploy-run.md?pivots=platform-linux#gradle)<br>- [Deploy War](./deploy-zip.md?tabs=cli#deploy-warjarear-packages)<br>- [With popular IDEs (VS Code, IntelliJ, and Eclipse)](configure-language-java-deploy-run.md?pivots=platform-linux#ides)<br>- [Deploy WAR or JAR packages directly](./deploy-zip.md?tabs=cli#deploy-warjarear-packages)<br>- [With GitHub Actions](./deploy-github-actions.md) | | **Monitor your app**| - [Log stream](./troubleshoot-diagnostic-logs.md#stream-logs)<br>- [Diagnose and solve tool](./overview-diagnostics.md)|-| **Add domains & certificates** |- [Map a custom domain](./app-service-web-tutorial-custom-domain.md?tabs=root%2Cazurecli)<br>- [Add SSL certificate](./configure-ssl-certificate.md)| +| **Add domains & certificates** |- [Map a custom domain](./app-service-web-tutorial-custom-domain.md?tabs=root%2Cazurecli)<br>- [Add TLS/SSL certificate](./configure-ssl-certificate.md)| | **Connect to a database** |- [Java Spring with Cosmos DB](./tutorial-java-spring-cosmosdb.md)| | **Custom containers** |- [Linux - Visual Studio Code](./quickstart-custom-container.md?tabs=python&pivots=container-linux-vscode)| | **Review best practices** | - [Scale your app](./manage-scale-up.md)<br>- [Deployment](./deploy-best-practices.md)<br>- [Security](/security/benchmark/azure/baselines/app-service-security-baseline?toc=/azure/app-service/toc.json)<br>- [Virtual Network](./configure-vnet-integration-enable.md)| Use the following resources to get started with PHP. 
| **Monitor your app**|- [Troubleshoot with Azure Monitor](./tutorial-troubleshoot-monitor.md)<br>- [Log stream](./troubleshoot-diagnostic-logs.md#stream-logs)<br>- [Diagnose and solve tool](./overview-diagnostics.md)| | **Add domains & certificates** |- [Map a custom domain](./app-service-web-tutorial-custom-domain.md?tabs=root%2Cazurecli)<br>- [Add SSL certificate](./configure-ssl-certificate.md)| | **Connect to a database** | - [MySQL with PHP](./tutorial-php-mysql-app.md)|-| **Custom containers** |- [Multi-container](./quickstart-multi-container.md)<br>- [Sidecar containers](tutorial-custom-container-sidecar.md)| +| **Custom containers** |- [Sidecar containers](tutorial-custom-container-sidecar.md)| | **Review best practices** | - [Scale your app](./manage-scale-up.md)<br>- [Deployment](./deploy-best-practices.md)<br>- [Security](/security/benchmark/azure/baselines/app-service-security-baseline?toc=/azure/app-service/toc.json)<br>- [Virtual Network](./configure-vnet-integration-enable.md)| ::: zone-end |
app-service | Quickstart Multi Container | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/quickstart-multi-container.md | - Title: 'Quickstart: Create a multi-container app' -description: Get started with multi-container apps on Azure App Service by deploying your first multi-container app. -keywords: azure app service, web app, linux, docker, compose, multicontainer, multi-container, web app for containers, multiple containers, container, wordpress, azure db for mysql, production database with containers -- Previously updated : 11/18/2022-----# Create a multi-container (preview) app using a Docker Compose configuration --> [!NOTE] -> Sidecar containers (preview) will succeed multi-container apps in App Service. To get started, see [Tutorial: Configure a sidecar container for custom container in Azure App Service (preview)](tutorial-custom-container-sidecar.md). --[Web App for Containers](overview.md#app-service-on-linux) provides a flexible way to use Docker images. This quickstart shows how to deploy a multi-container app (preview) to Web App for Containers in the [Cloud Shell](../cloud-shell/overview.md) using a Docker Compose configuration. --![Sample multi-container app on Web App for Containers][1] ----This article requires version 2.0.32 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed. --## Download the sample --For this quickstart, you use the compose file from [Docker](https://docs.docker.com/samples/wordpress/). The configuration file can be found at [Azure Samples](https://github.com/Azure-Samples/multicontainerwordpress). --[!code-yml[Main](../../azure-app-service-multi-container/docker-compose-wordpress.yml)] --In the Cloud Shell, create a quickstart directory and then change to it. --```bash -mkdir quickstart --cd $HOME/quickstart -``` --Next, run the following command to clone the sample app repository to your quickstart directory. Then change to the `multicontainerwordpress` directory. --```bash -git clone https://github.com/Azure-Samples/multicontainerwordpress --cd multicontainerwordpress -``` --## Create a resource group ---In the Cloud Shell, create a resource group with the [`az group create`](/cli/azure/group#az-group-create) command. The following example creates a resource group named *myResourceGroup* in the *South Central US* location. To see all supported locations for App Service on Linux in **Standard** tier, run the [`az appservice list-locations --sku S1 --linux-workers-enabled`](/cli/azure/appservice#az-appservice-list-locations) command. --```azurecli-interactive -az group create --name myResourceGroup --location "South Central US" -``` --You generally create your resource group and the resources in a region near you. --When the command finishes, a JSON output shows you the resource group properties. --## Create an Azure App Service plan --In the Cloud Shell, create an App Service plan in the resource group with the [`az appservice plan create`](/cli/azure/appservice/plan#az-appservice-plan-create) command. --The following example creates an App Service plan named `myAppServicePlan` in the **Standard** pricing tier (`--sku S1`) and in a Linux container (`--is-linux`). 
--```azurecli-interactive -az appservice plan create --name myAppServicePlan --resource-group myResourceGroup --sku S1 --is-linux -``` --When the App Service plan has been created, the Azure CLI shows information similar to the following example: --<pre> -{ - "adminSiteName": null, - "appServicePlanName": "myAppServicePlan", - "geoRegion": "South Central US", - "hostingEnvironmentProfile": null, - "id": "/subscriptions/0000-0000/resourceGroups/myResourceGroup/providers/Microsoft.Web/serverfarms/myAppServicePlan", - "kind": "linux", - "location": "South Central US", - "maximumNumberOfWorkers": 1, - "name": "myAppServicePlan", - < JSON data removed for brevity. > - "targetWorkerSizeId": 0, - "type": "Microsoft.Web/serverfarms", - "workerTierName": null -} -</pre> --## Create a Docker Compose app --> [!NOTE] -> Docker Compose on Azure App Services currently has a limit of 4,000 characters when converted to Base64 at this time. --In your Cloud Shell terminal, create a multi-container [web app](overview.md#app-service-on-linux) in the `myAppServicePlan` App Service plan with the [az webapp create](/cli/azure/webapp#az-webapp-create) command. Don't forget to replace _\<app_name>_ with a unique app name (valid characters are `a-z`, `0-9`, and `-`). --```azurecli -az webapp create --resource-group myResourceGroup --plan myAppServicePlan --name <app_name> --multicontainer-config-type compose --multicontainer-config-file compose-wordpress.yml -``` --When the web app has been created, the Azure CLI shows output similar to the following example: --<pre> -{ - "additionalProperties": {}, - "availabilityState": "Normal", - "clientAffinityEnabled": true, - "clientCertEnabled": false, - "cloningInfo": null, - "containerSize": 0, - "dailyMemoryTimeQuota": 0, - "defaultHostName": "<app_name>.azurewebsites.net", - "enabled": true, - < JSON data removed for brevity. > -} -</pre> --### Browse to the app --Browse to the deployed app at (`http://<app_name>.azurewebsites.net`). The app may take a few minutes to load. If you receive an error, allow a few more minutes then refresh the browser. --![Sample multi-container app on Web App for Containers][1] --**Congratulations**, you've created a multi-container app in Web App for Containers. ---## Next steps --> [!div class="nextstepaction"] -> [Tutorial: Multi-container WordPress app](tutorial-multi-container-app.md) --> [!div class="nextstepaction"] -> [Configure a custom container](configure-custom-container.md) --> [!div class="nextstepaction"] -> [Secure with custom domain and certificate](tutorial-secure-domain-certificate.md) --<!--Image references--> -[1]: media/tutorial-multi-container-app/azure-multi-container-wordpress-install.png |
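The removed quickstart notes a 4,000-character Base64 limit for Docker Compose configurations on App Service, which is easy to trip over with larger Compose files. As a quick sanity check before running `az webapp create`, you can measure the encoded size yourself. A minimal sketch, assuming a local `docker-compose-wordpress.yml` in the current directory:

```bash
# Encode the Compose file the same way App Service stores it (Base64),
# then count the characters to stay under the documented 4,000 limit.
encoded_length=$(base64 < docker-compose-wordpress.yml | tr -d '\n' | wc -c)
echo "Base64-encoded length: ${encoded_length} characters"
if [ "${encoded_length}" -gt 4000 ]; then
  echo "Warning: exceeds the 4,000-character limit for multi-container configs."
fi
```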
app-service | Tutorial Multi Container App | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-multi-container-app.md | - Title: 'Tutorial: Create a multi-container app' -description: Learn how to use build a multi-container app on Azure App Service that contains a WordPress app and a MySQL container, and configure the WordPress app. -keywords: azure app service, web app, linux, docker, compose, multicontainer, multi-container, web app for containers, multiple containers, container, wordpress, azure db for mysql, production database with containers --- Previously updated : 11/18/2022----# Tutorial: Create a multi-container (preview) app in Web App for Containers --> [!NOTE] -> Sidecar containers (preview) will succeed multi-container apps in App Service. To get started, see [Tutorial: Configure a sidecar container for custom container in Azure App Service (preview)](tutorial-custom-container-sidecar.md). --[Web App for Containers](overview.md#app-service-on-linux) provides a flexible way to use Docker images. In this tutorial, you'll learn how to create a multi-container app using WordPress and MySQL. You'll complete this tutorial in Cloud Shell, but you can also run these commands locally with the [Azure CLI](/cli/azure/install-azure-cli) command-line tool (2.0.32 or later). --In this tutorial, you learn how to: --> [!div class="checklist"] -> * Convert a Docker Compose configuration to work with Web App for Containers -> * Deploy a multi-container app to Azure -> * Add application settings -> * Use persistent storage for your containers -> * Connect to Azure Database for MySQL -> * Troubleshoot errors ---## Prerequisites --To complete this tutorial, you need experience with [Docker Compose](https://docs.docker.com/compose/). --## Download the sample --For this tutorial, you use the compose file from [Docker](https://docs.docker.com/samples/wordpress/), but you'll modify it to include Azure Database for MySQL, persistent storage, and Redis. The configuration file can be found at [Azure Samples](https://github.com/Azure-Samples/multicontainerwordpress). In the sample below, note that `depends_on` is an **unsupported option** and is ignored. For supported configuration options, see [Docker Compose options](configure-custom-container.md#docker-compose-options). --[!code-yml[Main](../../azure-app-service-multi-container/docker-compose-wordpress.yml)] --In Cloud Shell, create a tutorial directory and then change to it. --```bash -mkdir tutorial --cd tutorial -``` --Next, run the following command to clone the sample app repository to your tutorial directory. Then change to the `multicontainerwordpress` directory. --```bash -git clone https://github.com/Azure-Samples/multicontainerwordpress --cd multicontainerwordpress -``` --## Create a resource group ---In Cloud Shell, create a resource group with the [`az group create`](/cli/azure/group#az-group-create) command. The following example creates a resource group named *myResourceGroup* in the *South Central US* location. To see all supported locations for App Service on Linux in **Standard** tier, run the [`az appservice list-locations --sku S1 --linux-workers-enabled`](/cli/azure/appservice#az-appservice-list-locations) command. --```azurecli-interactive -az group create --name myResourceGroup --location "South Central US" -``` --You generally create your resource group and the resources in a region near you. --When the command finishes, a JSON output shows you the resource group properties. 
--## Create an Azure App Service plan --In Cloud Shell, create an App Service plan in the resource group with the [`az appservice plan create`](/cli/azure/appservice/plan#az-appservice-plan-create) command. --<!-- [!INCLUDE [app-service-plan](app-service-plan-linux.md)] --> --The following example creates an App Service plan named `myAppServicePlan` in the **Standard** pricing tier (`--sku S1`) and in a Linux container (`--is-linux`). --```azurecli-interactive -az appservice plan create --name myAppServicePlan --resource-group myResourceGroup --sku S1 --is-linux -``` --When the App Service plan has been created, Cloud Shell shows information similar to the following example: --<pre> -{ - "adminSiteName": null, - "appServicePlanName": "myAppServicePlan", - "geoRegion": "South Central US", - "hostingEnvironmentProfile": null, - "id": "/subscriptions/0000-0000/resourceGroups/myResourceGroup/providers/Microsoft.Web/serverfarms/myAppServicePlan", - "kind": "linux", - "location": "South Central US", - "maximumNumberOfWorkers": 1, - "name": "myAppServicePlan", - < JSON data removed for brevity. > - "targetWorkerSizeId": 0, - "type": "Microsoft.Web/serverfarms", - "workerTierName": null -} -</pre> --### Docker Compose with WordPress and MySQL containers --## Create a Docker Compose app --In your Cloud Shell, create a multi-container [web app](overview.md) in the `myAppServicePlan` App Service plan with the [az webapp create](/cli/azure/webapp#az-webapp-create) command. Don't forget to replace _\<app-name>_ with a unique app name. --```azurecli-interactive -az webapp create --resource-group myResourceGroup --plan myAppServicePlan --name <app-name> --multicontainer-config-type compose --multicontainer-config-file docker-compose-wordpress.yml -``` --When the web app has been created, Cloud Shell shows output similar to the following example: --<pre> -{ - "additionalProperties": {}, - "availabilityState": "Normal", - "clientAffinityEnabled": true, - "clientCertEnabled": false, - "cloningInfo": null, - "containerSize": 0, - "dailyMemoryTimeQuota": 0, - "defaultHostName": "<app-name>.azurewebsites.net", - "enabled": true, - < JSON data removed for brevity. > -} -</pre> --### Browse to the app --Browse to the deployed app at (`http://<app-name>.azurewebsites.net`). The app may take a few minutes to load. If you receive an error, allow a few more minutes then refresh the browser. If you're having trouble and would like to troubleshoot, review [container logs](#find-docker-container-logs). --![Sample multi-container app on Web App for Containers][1] --**Congratulations**, you've created a multi-container app in Web App for Containers. Next you'll configure your app to use Azure Database for MySQL. Don't install WordPress at this time. --## Connect to production database --It's not recommended to use database containers in a production environment. The local containers aren't scalable. Instead, you'll use Azure Database for MySQL which can be scaled. --### Create an Azure Database for MySQL server --Create an Azure Database for MySQL server with the [`az mysql server create`](/cli/azure/mysql/server#az-mysql-server-create) command. --In the following command, substitute your MySQL server name where you see the _<mysql-server-name>_ placeholder (valid characters are `a-z`, `0-9`, and `-`). This name is part of the MySQL server's hostname (`<mysql-server-name>.database.windows.net`), it needs to be globally unique. 
--```azurecli-interactive -az mysql server create --resource-group myResourceGroup --name <mysql-server-name> --location "South Central US" --admin-user adminuser --admin-password My5up3rStr0ngPaSw0rd! --sku-name B_Gen5_1 --version 5.7 -``` --Creating the server may take a few minutes to complete. When the MySQL server is created, Cloud Shell shows information similar to the following example: --<pre> -{ - "administratorLogin": "adminuser", - "administratorLoginPassword": null, - "fullyQualifiedDomainName": "<mysql-server-name>.database.windows.net", - "id": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/myResourceGroup/providers/Microsoft.DBforMySQL/servers/<mysql-server-name>", - "location": "southcentralus", - "name": "<mysql-server-name>", - "resourceGroup": "myResourceGroup", - ... -} -</pre> --### Configure server firewall --Create a firewall rule for your MySQL server to allow client connections by using the [`az mysql server firewall-rule create`](/cli/azure/mysql/server/firewall-rule#az-mysql-server-firewall-rule-create) command. When both starting IP and end IP are set to 0.0.0.0, the firewall is only opened for other Azure resources. --```azurecli-interactive -az mysql server firewall-rule create --name allAzureIPs --server <mysql-server-name> --resource-group myResourceGroup --start-ip-address 0.0.0.0 --end-ip-address 0.0.0.0 -``` --> [!TIP] -> You can be even more restrictive in your firewall rule by [using only the outbound IP addresses your app uses](overview-inbound-outbound-ips.md#find-outbound-ips). -> --### Create the WordPress database --```azurecli-interactive -az mysql db create --resource-group myResourceGroup --server-name <mysql-server-name> --name wordpress -``` --When the database has been created, Cloud Shell shows information similar to the following example: --<pre> -{ - "additionalProperties": {}, - "charset": "latin1", - "collation": "latin1_swedish_ci", - "id": "/subscriptions/12db1644-4b12-4cab-ba54-8ba2f2822c1f/resourceGroups/myResourceGroup/providers/Microsoft.DBforMySQL/servers/<mysql-server-name>/databases/wordpress", - "name": "wordpress", - "resourceGroup": "myResourceGroup", - "type": "Microsoft.DBforMySQL/servers/databases" -} -</pre> --### Configure database variables in WordPress --To connect the WordPress app to this new MySQL server, you'll configure a few WordPress-specific environment variables, including the SSL CA path defined by `MYSQL_SSL_CA`. The [Baltimore CyberTrust Root](https://www.digicert.com/digicert-root-certificates.htm) from [DigiCert](https://www.digicert.com/) is provided in the [custom image](#use-a-custom-image-for-mysql-tlsssl-and-other-configurations) below. --To make these changes, use the [az webapp config appsettings set](/cli/azure/webapp/config/appsettings#az-webapp-config-appsettings-set) command in Cloud Shell. App settings are case-sensitive and space-separated. --```azurecli-interactive -az webapp config appsettings set --resource-group myResourceGroup --name <app-name> --settings WORDPRESS_DB_HOST="<mysql-server-name>.mysql.database.azure.com" WORDPRESS_DB_USER="adminuser" WORDPRESS_DB_PASSWORD="My5up3rStr0ngPaSw0rd!" 
WORDPRESS_DB_NAME="wordpress" MYSQL_SSL_CA="BaltimoreCyberTrustroot.crt.pem" -``` --When the app setting has been created, Cloud Shell shows information similar to the following example: --<pre> -[ - { - "name": "WORDPRESS_DB_HOST", - "slotSetting": false, - "value": "<mysql-server-name>.mysql.database.azure.com" - }, - { - "name": "WORDPRESS_DB_USER", - "slotSetting": false, - "value": "adminuser" - }, - { - "name": "WORDPRESS_DB_NAME", - "slotSetting": false, - "value": "wordpress" - }, - { - "name": "WORDPRESS_DB_PASSWORD", - "slotSetting": false, - "value": "My5up3rStr0ngPaSw0rd!" - }, - { - "name": "MYSQL_SSL_CA", - "slotSetting": false, - "value": "BaltimoreCyberTrustroot.crt.pem" - } -] -</pre> --For more information on environment variables, see [Configure environment variables](configure-custom-container.md#configure-environment-variables). --### Use a custom image for MySQL TLS/SSL and other configurations --By default, TLS/SSL is used by Azure Database for MySQL. WordPress requires additional configuration to use TLS/SSL with MySQL. The WordPress 'official image' doesn't provide the additional configuration, but a [custom image](https://github.com/Azure-Samples/multicontainerwordpress) has been prepared for your convenience. In practice, you would add desired changes to your own image. --The custom image is based on the 'official image' of [WordPress from Docker Hub](https://hub.docker.com/_/wordpress/). The following changes have been made in this custom image for Azure Database for MySQL: --* [Adds Baltimore Cyber Trust Root Certificate file for SSL to MySQL.](https://github.com/Azure-Samples/multicontainerwordpress/blob/5669a89e0ee8599285f0e2e6f7e935c16e539b92/docker-entrypoint.sh#L61) -* [Uses App Setting for MySQL SSL Certificate Authority certificate in WordPress wp-config.php.](https://github.com/Azure-Samples/multicontainerwordpress/blob/5669a89e0ee8599285f0e2e6f7e935c16e539b92/docker-entrypoint.sh#L163) -* [Adds WordPress define for MYSQL_CLIENT_FLAGS needed for MySQL SSL.](https://github.com/Azure-Samples/multicontainerwordpress/blob/5669a89e0ee8599285f0e2e6f7e935c16e539b92/docker-entrypoint.sh#L164) --The following changes have been made for Redis (to be used in a later section): --* [Adds PHP extension for Redis v4.0.2.](https://github.com/Azure-Samples/multicontainerwordpress/blob/5669a89e0ee8599285f0e2e6f7e935c16e539b92/Dockerfile#L35) -* [Adds unzip needed for file extraction.](https://github.com/Azure-Samples/multicontainerwordpress/blob/5669a89e0ee8599285f0e2e6f7e935c16e539b92/docker-entrypoint.sh#L71) -* [Adds Redis Object Cache 1.3.8 WordPress plugin.](https://github.com/Azure-Samples/multicontainerwordpress/blob/5669a89e0ee8599285f0e2e6f7e935c16e539b92/docker-entrypoint.sh#L74) -* [Uses App Setting for Redis host name in WordPress wp-config.php.](https://github.com/Azure-Samples/multicontainerwordpress/blob/5669a89e0ee8599285f0e2e6f7e935c16e539b92/docker-entrypoint.sh#L162) --To use the custom image, you'll update your docker-compose-wordpress.yml file. In Cloud Shell, open a text editor and change the `image: wordpress` to use `image: mcr.microsoft.com/azuredocs/multicontainerwordpress`. You no longer need the database container. Remove the `db`, `environment`, `depends_on`, and `volumes` section from the configuration file. 
Your file should look like the following code: --```yaml -version: '3.3' --- wordpress: - image: mcr.microsoft.com/azuredocs/multicontainerwordpress - ports: - - "8000:80" - restart: always -``` --### Update app with new configuration --In Cloud Shell, reconfigure your multi-container [web app](overview.md) with the [az webapp config container set](/cli/azure/webapp/config/container#az-webapp-config-container-set) command. Don't forget to replace _\<app-name>_ with the name of the web app you created earlier. --```azurecli-interactive -az webapp config container set --resource-group myResourceGroup --name <app-name> --multicontainer-config-type compose --multicontainer-config-file docker-compose-wordpress.yml -``` --When the app has been reconfigured, Cloud Shell shows information similar to the following example: --<pre> -[ - { - "name": "DOCKER_CUSTOM_IMAGE_NAME", - "value": "COMPOSE|dmVyc2lvbjogJzMuMycKCnNlcnZpY2VzOgogICB3b3JkcHJlc3M6CiAgICAgaW1hZ2U6IG1zYW5nYXB1L3dvcmRwcmVzcwogICAgIHBvcnRzOgogICAgICAgLSAiODAwMDo4MCIKICAgICByZXN0YXJ0OiBhbHdheXM=" - } -] -</pre> --### Browse to the app --Browse to the deployed app at (`http://<app-name>.azurewebsites.net`). The app is now using Azure Database for MySQL. --![Sample multicontainer app on Web App for Containers][1] --## Add persistent storage --Your multi-container is now running in Web App for Containers. However, if you install WordPress now and restart your app later, you'll find that your WordPress installation is gone. This happens because your Docker Compose configuration currently points to a storage location inside your container. The files installed into your container don't persist beyond app restart. In this section, you'll [add persistent storage](configure-custom-container.md#use-persistent-shared-storage) to your WordPress container. --### Configure environment variables --To use persistent storage, you'll enable this setting within App Service. To make this change, use the [az webapp config appsettings set](/cli/azure/webapp/config/appsettings#az-webapp-config-appsettings-set) command in Cloud Shell. App settings are case-sensitive and space-separated. --```azurecli-interactive -az webapp config appsettings set --resource-group myResourceGroup --name <app-name> --settings WEBSITES_ENABLE_APP_SERVICE_STORAGE=TRUE -``` --When the app setting has been created, Cloud Shell shows information similar to the following example: --<pre> -[ - < JSON data removed for brevity. > - { - "name": "WORDPRESS_DB_NAME", - "slotSetting": false, - "value": "wordpress" - }, - { - "name": "WEBSITES_ENABLE_APP_SERVICE_STORAGE", - "slotSetting": false, - "value": "TRUE" - } -] -</pre> --### Modify configuration file --In the Cloud Shell, open the file `docker-compose-wordpress.yml` in a text editor. --The `volumes` option maps the file system to a directory within the container. `${WEBAPP_STORAGE_HOME}` is an environment variable in App Service that is mapped to persistent storage for your app. You'll use this environment variable in the volumes option so that the WordPress files are installed into persistent storage instead of the container. 
Make the following modifications to the file: --In the `wordpress` section, add a `volumes` option so it looks like the following code: --```yaml -version: '3.3' --- wordpress: - image: mcr.microsoft.com/azuredocs/multicontainerwordpress - volumes: - - ${WEBAPP_STORAGE_HOME}/site/wwwroot:/var/www/html - ports: - - "8000:80" - restart: always -``` --### Update app with new configuration --In Cloud Shell, reconfigure your multi-container [web app](overview.md) with the [az webapp config container set](/cli/azure/webapp/config/container#az-webapp-config-container-set) command. Don't forget to replace _\<app-name>_ with a unique app name. --```azurecli-interactive -az webapp config container set --resource-group myResourceGroup --name <app-name> --multicontainer-config-type compose --multicontainer-config-file docker-compose-wordpress.yml -``` --After your command runs, it shows output similar to the following example: --<pre> -[ - { - "name": "WEBSITES_ENABLE_APP_SERVICE_STORAGE", - "slotSetting": false, - "value": "TRUE" - }, - { - "name": "DOCKER_CUSTOM_IMAGE_NAME", - "value": "COMPOSE|dmVyc2lvbjogJzMuMycKCnNlcnZpY2VzOgogICBteXNxbDoKICAgICBpbWFnZTogbXlzcWw6NS43CiAgICAgdm9sdW1lczoKICAgICAgIC0gZGJfZGF0YTovdmFyL2xpYi9teXNxbAogICAgIHJlc3RhcnQ6IGFsd2F5cwogICAgIGVudmlyb25tZW50OgogICAgICAgTVlTUUxfUk9PVF9QQVNTV09SRDogZXhhbXBsZXBhc3MKCiAgIHdvcmRwcmVzczoKICAgICBkZXBlbmRzX29uOgogICAgICAgLSBteXNxbAogICAgIGltYWdlOiB3b3JkcHJlc3M6bGF0ZXN0CiAgICAgcG9ydHM6CiAgICAgICAtICI4MDAwOjgwIgogICAgIHJlc3RhcnQ6IGFsd2F5cwogICAgIGVudmlyb25tZW50OgogICAgICAgV09SRFBSRVNTX0RCX1BBU1NXT1JEOiBleGFtcGxlcGFzcwp2b2x1bWVzOgogICAgZGJfZGF0YTo=" - } -] -</pre> --### Browse to the app --Browse to the deployed app at (`http://<app-name>.azurewebsites.net`). --The WordPress container is now using Azure Database for MySQL and persistent storage. --## Add Redis container -- The WordPress 'official image' does not include the dependencies for Redis. These dependencies and additional configuration needed to use Redis with WordPress have been prepared for you in this [custom image](https://github.com/Azure-Samples/multicontainerwordpress). In practice, you would add desired changes to your own image. --The custom image is based on the 'official image' of [WordPress from Docker Hub](https://hub.docker.com/_/wordpress/). 
The following changes have been made in this custom image for Redis: --* [Adds PHP extension for Redis v4.0.2.](https://github.com/Azure-Samples/multicontainerwordpress/blob/5669a89e0ee8599285f0e2e6f7e935c16e539b92/Dockerfile#L35) -* [Adds unzip needed for file extraction.](https://github.com/Azure-Samples/multicontainerwordpress/blob/5669a89e0ee8599285f0e2e6f7e935c16e539b92/docker-entrypoint.sh#L71) -* [Adds Redis Object Cache 1.3.8 WordPress plugin.](https://github.com/Azure-Samples/multicontainerwordpress/blob/5669a89e0ee8599285f0e2e6f7e935c16e539b92/docker-entrypoint.sh#L74) -* [Uses App Setting for Redis host name in WordPress wp-config.php.](https://github.com/Azure-Samples/multicontainerwordpress/blob/5669a89e0ee8599285f0e2e6f7e935c16e539b92/docker-entrypoint.sh#L162) --Add the redis container to the bottom of the configuration file so it looks like the following example: --```yaml -version: '3.3' --- wordpress: - image: mcr.microsoft.com/azuredocs/multicontainerwordpress - ports: - - "8000:80" - restart: always -- redis: - image: mcr.microsoft.com/oss/bitnami/redis:6.0.8 - environment: - - ALLOW_EMPTY_PASSWORD=yes - restart: always -``` --### Configure environment variables --To use Redis, you'll enable this setting, `WP_REDIS_HOST`, within App Service. This is a *required setting* for WordPress to communicate with the Redis host. To make this change, use the [az webapp config appsettings set](/cli/azure/webapp/config/appsettings#az-webapp-config-appsettings-set) command in Cloud Shell. App settings are case-sensitive and space-separated. --```azurecli-interactive -az webapp config appsettings set --resource-group myResourceGroup --name <app-name> --settings WP_REDIS_HOST="redis" -``` --When the app setting has been created, Cloud Shell shows information similar to the following example: --<pre> -[ - < JSON data removed for brevity. > - { - "name": "WORDPRESS_DB_USER", - "slotSetting": false, - "value": "adminuser" - }, - { - "name": "WP_REDIS_HOST", - "slotSetting": false, - "value": "redis" - } -] -</pre> --### Update app with new configuration --In Cloud Shell, reconfigure your multi-container [web app](overview.md) with the [az webapp config container set](/cli/azure/webapp/config/container#az-webapp-config-container-set) command. Don't forget to replace _\<app-name>_ with a unique app name. --```azurecli-interactive -az webapp config container set --resource-group myResourceGroup --name <app-name> --multicontainer-config-type compose --multicontainer-config-file compose-wordpress.yml -``` --After your command runs, it shows output similar to the following example: --<pre> -[ - { - "name": "DOCKER_CUSTOM_IMAGE_NAME", - "value": "COMPOSE|dmVyc2lvbjogJzMuMycKCnNlcnZpY2VzOgogICBteXNxbDoKICAgICBpbWFnZTogbXlzcWw6NS43CiAgICAgdm9sdW1lczoKICAgICAgIC0gZGJfZGF0YTovdmFyL2xpYi9teXNxbAogICAgIHJlc3RhcnQ6IGFsd2F5cwogICAgIGVudmlyb25tZW50OgogICAgICAgTVlTUUxfUk9PVF9QQVNTV09SRDogZXhhbXBsZXBhc3MKCiAgIHdvcmRwcmVzczoKICAgICBkZXBlbmRzX29uOgogICAgICAgLSBteXNxbAogICAgIGltYWdlOiB3b3JkcHJlc3M6bGF0ZXN0CiAgICAgcG9ydHM6CiAgICAgICAtICI4MDAwOjgwIgogICAgIHJlc3RhcnQ6IGFsd2F5cwogICAgIGVudmlyb25tZW50OgogICAgICAgV09SRFBSRVNTX0RCX1BBU1NXT1JEOiBleGFtcGxlcGFzcwp2b2x1bWVzOgogICAgZGJfZGF0YTo=" - } -] -</pre> --### Browse to the app --Browse to the deployed app at (`http://<app-name>.azurewebsites.net`). --Complete the steps and install WordPress. --### Connect WordPress to Redis --Sign in to WordPress admin. In the left navigation, select **Plugins**, and then select **Installed Plugins**. 
--![Select WordPress Plugins][2] --Show all plugins here --In the plugins page, find **Redis Object Cache** and click **Activate**. --![Activate Redis][3] --Click on **Settings**. --![Click on Settings][4] --Click the **Enable Object Cache** button. --![Click the 'Enable Object Cache' button][5] --WordPress connects to the Redis server. The connection **status** appears on the same page. --![WordPress connects to the Redis server. The connection **status** appears on the same page.][6] --**Congratulations**, you've connected WordPress to Redis. The production-ready app is now using **Azure Database for MySQL, persistent storage, and Redis**. You can now scale out your App Service Plan to multiple instances. --## Find Docker Container logs --If you run into issues using multiple containers, you can access the container logs by browsing to: `https://<app-name>.scm.azurewebsites.net/api/logs/docker`. --You'll see output similar to the following example: --<pre> -[ - { - "machineName":"RD00XYZYZE567A", - "lastUpdated":"2018-05-10T04:11:45Z", - "size":25125, - "href":"https://<app-name>.scm.azurewebsites.net/api/vfs/LogFiles/2018_05_10_RD00XYZYZE567A_docker.log", - "path":"/home/LogFiles/2018_05_10_RD00XYZYZE567A_docker.log" - } -] -</pre> --You see a log for each container and an additional log for the parent process. Copy the respective `href` value into the browser to view the log. ---## Next steps --In this tutorial, you learned how to: -> [!div class="checklist"] -> * Convert a Docker Compose configuration to work with Web App for Containers -> * Deploy a multi-container app to Azure -> * Add application settings -> * Use persistent storage for your containers -> * Connect to Azure Database for MySQL -> * Troubleshoot errors --Advance to the next tutorial to learn how to secure your app with a custom domain and certificate. --> [!div class="nextstepaction"] -> [Secure with custom domain and certificate](tutorial-secure-domain-certificate.md) --Or, check out other resources: --- [Configure custom container](configure-custom-container.md)-- [Environment variables and app settings reference](reference-app-settings.md)--<!--Image references--> -[1]: ./media/tutorial-multi-container-app/azure-multi-container-wordpress-install.png -[2]: ./media/tutorial-multi-container-app/wordpress-plugins.png -[3]: ./media/tutorial-multi-container-app/activate-redis.png -[4]: ./media/tutorial-multi-container-app/redis-settings.png -[5]: ./media/tutorial-multi-container-app/enable-object-cache.png -[6]: ./media/tutorial-multi-container-app/redis-connected.png |
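The removed tutorial's log-retrieval step (`https://<app-name>.scm.azurewebsites.net/api/logs/docker`) can also be scripted rather than done in a browser. A hedged sketch using `curl` against the Kudu (SCM) endpoint; `<app-name>`, `<deployment-user>`, and `<password>` are placeholders for your app's basic-auth deployment credentials, not values from the article:

```bash
# List available Docker log files through the Kudu (SCM) endpoint.
curl -u '<deployment-user>:<password>' \
  "https://<app-name>.scm.azurewebsites.net/api/logs/docker"

# Then fetch a specific log using the "href" value from the JSON response.
curl -u '<deployment-user>:<password>' \
  "https://<app-name>.scm.azurewebsites.net/api/vfs/LogFiles/<log-file-name>"
```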
application-gateway | Create Url Route Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/create-url-route-portal.md | Review the settings on the **Review + create** tab, and then select **Create** t ![Record application gateway public IP address](./media/application-gateway-create-url-route-portal/application-gateway-record-ag-address.png) -2. Copy the public IP address, and then paste it into the address bar of your browser. Such as, http:\//52.188.72.175:8080. +2. Copy the public IP address, and then paste it into the address bar of your browser. Such as, http:\//203.0.113.10:8080. ![Test base URL in application gateway](./media/application-gateway-create-url-route-portal/application-gateway-iistest.png) |
application-gateway | Ingress Controller Troubleshoot | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/ingress-controller-troubleshoot.md | metadata: annotations: kubernetes.io/ingress.class: azure/application-gateway spec:- ingressClassName: azure-application-gateway + #ingressClassName: azure-application-gateway # according to the AGIC setup guide, annotations are the approach to set the class rules: - host: test.agic.contoso.com http: paths: - path: /+ pathType: Prefix backend:- serviceName: test-agic-app-service - servicePort: 80 + name: test-agic-app-service + port: + number: 80 EOF ``` |
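The diff above shows the fix only in fragments. Assembled, the corrected manifest looks roughly like the following sketch, applied with the same heredoc style the article uses. The host and service name come from the diff itself; the metadata name is a placeholder, and the `service.name`/`port.number` nesting is the `networking.k8s.io/v1` schema:

```bash
kubectl apply -f - <<EOF
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: test-agic-app-ingress
  annotations:
    kubernetes.io/ingress.class: azure/application-gateway
spec:
  rules:
  - host: test.agic.contoso.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: test-agic-app-service
            port:
              number: 80
EOF
```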
azure-arc | Quick Enable Hybrid Vm | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/learn/quick-enable-hybrid-vm.md | After you install the agent and configure it to connect to Azure Arc-enabled ser Now that you've enabled your Linux or Windows hybrid machine and successfully connected to the service, you are ready to enable Azure Policy to understand compliance in Azure. -To learn how to identify Azure Arc-enabled servers enabled machine that doesn't have the Log Analytics agent installed, continue to the tutorial: - > [!div class="nextstepaction"] > [Create a policy assignment to identify non-compliant resources](tutorial-assign-policy-portal.md) |
azure-arc | Manage Automatic Vm Extension Upgrade | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/manage-automatic-vm-extension-upgrade.md | Title: Automatic extension upgrade for Azure Arc-enabled servers description: Learn how to enable automatic extension upgrades for your Azure Arc-enabled servers. Previously updated : 11/03/2023 Last updated : 09/03/2024 # Automatic extension upgrade for Azure Arc-enabled servers Extension versions fixing critical security vulnerabilities are rolled out much Automatic extension upgrade supports the following extensions: - Azure Monitor agent - Linux and Windows-- Log Analytics agent (OMS agent) - Linux only - Dependency agent ΓÇô Linux and Windows - Azure Security agent - Linux and Windows - Key Vault Extension - Linux only |
azure-arc | Manage Vm Extensions Ansible | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/manage-vm-extensions-ansible.md | Title: Enable VM extension using Red Hat Ansible description: This article describes how to deploy virtual machine extensions to Azure Arc-enabled servers running in hybrid cloud environments using Red Hat Ansible Automation. Previously updated : 05/15/2023 Last updated : 09/03/2024 The project you created from the Azure Infrastructure Configuration Demo collect ||| |Microsoft Defender for Cloud integrated vulnerability scanner |microsoft_defender | |Custom Script extension |custom_script |-|Log Analytics Agent |log_analytics_agent | |Azure Monitor for VMs (insights) |azure_monitor_for-vms | |Azure Key Vault Certificate Sync |azure_key_vault | |Azure Monitor Agent |azure_monitor_agent | |
azure-arc | Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/overview.md | Azure Arc-enabled servers lets you manage Windows and Linux physical servers and When a hybrid machine is connected to Azure, it becomes a connected machine and is treated as a resource in Azure. Each connected machine has a Resource ID enabling the machine to be included in a resource group. -To connect hybrid machines to Azure, you install the [Azure Connected Machine agent](agent-overview.md) on each machine. This agent doesn't replace the Azure [Log Analytics agent](../../azure-monitor/agents/log-analytics-agent.md) / [Azure Monitor Agent](../../azure-monitor/agents/azure-monitor-agent-overview.md). The Log Analytics agent or Azure Monitor Agent for Windows and Linux is required in order to: +To connect hybrid machines to Azure, you install the [Azure Connected Machine agent](agent-overview.md) on each machine. This agent doesn't replace the [Azure Monitor Agent](../../azure-monitor/agents/azure-monitor-agent-overview.md). The Azure Monitor Agent for Windows and Linux is required in order to: * Proactively monitor the OS and workloads running on the machine * Manage it using Automation runbooks or solutions like Update Management When you connect your machine to Azure Arc-enabled servers, you can perform many * Perform post-deployment configuration and automation tasks using supported [Arc-enabled servers VM extensions](manage-vm-extensions.md) for your non-Azure Windows or Linux machine. * **Monitor**: * Monitor operating system performance and discover application components to monitor processes and dependencies with other resources using [VM insights](../../azure-monitor/vm/vminsights-overview.md).- * Collect other log data, such as performance data and events, from the operating system or workloads running on the machine with the [Log Analytics agent](../../azure-monitor/agents/log-analytics-agent.md). This data is stored in a [Log Analytics workspace](../../azure-monitor/logs/log-analytics-workspace-overview.md). + * Collect other log data, such as performance data and events, from the operating system or workloads running on the machine with the [Azure Monitor Agent](../../azure-monitor/agents/azure-monitor-agent-overview.md). This data is stored in a [Log Analytics workspace](../../azure-monitor/logs/log-analytics-workspace-overview.md). > [!NOTE] > At this time, enabling Azure Automation Update Management directly from an Azure Arc-enabled server is not supported. See [Enable Update Management from your Automation account](../../automation/update-management/enable-from-automation-account.md) to understand requirements and [how to enable Update Management for non-Azure VMs](../../automation/update-management/enable-from-automation-account.md#enable-non-azure-vms). |
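The rows above replace references to the retired Log Analytics agent with the Azure Monitor Agent. On an Arc-enabled server, that agent is deployed as a VM extension; a minimal sketch with the Azure CLI, assuming the `connectedmachine` CLI extension is installed and the placeholders are filled in for your environment:

```azurecli
# Deploy the Azure Monitor agent to a connected machine as a VM extension.
# <machine-name>, <resource-group>, and <region> are placeholders for your
# Arc-enabled server; use AzureMonitorWindowsAgent for Windows machines.
az connectedmachine extension create \
  --name AzureMonitorLinuxAgent \
  --publisher Microsoft.Azure.Monitor \
  --type AzureMonitorLinuxAgent \
  --machine-name <machine-name> \
  --resource-group <resource-group> \
  --location <region> \
  --enable-auto-upgrade true
```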
azure-arc | Plan Evaluate On Azure Virtual Machine | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/plan-evaluate-on-azure-virtual-machine.md | While you cannot install Azure Arc-enabled servers on an Azure VM for production To start managing your Azure VM as an Azure Arc-enabled server, you need to make the following changes to the Azure VM before you can install and configure Azure Arc-enabled servers. -1. Remove any VM extensions deployed to the Azure VM, such as the Log Analytics agent. While Azure Arc-enabled servers support many of the same extensions as Azure VMs, the Azure Connected Machine agent can't manage VM extensions already deployed to the VM. +1. Remove any VM extensions deployed to the Azure VM, such as the Azure Monitor agent. While Azure Arc-enabled servers support many of the same extensions as Azure VMs, the Azure Connected Machine agent can't manage VM extensions already deployed to the VM. 2. Disable the Azure Windows or Linux Guest Agent. The Azure VM guest agent serves a similar purpose to the Azure Connected Machine agent. To avoid conflicts between the two, the Azure VM Agent needs to be disabled. Once it is disabled, you cannot use VM extensions or some Azure services. |
azure-arc | Private Link Security | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/private-link-security.md | For more information, see [Key Benefits of Private Link](../../private-link/pri Azure Arc Private Link Scope connects private endpoints (and the virtual networks they're contained in) to an Azure resource, in this case Azure Arc-enabled servers. When you enable any one of the Azure Arc-enabled servers supported VM extensions, such as Azure Monitor, those resources connect other Azure resources. Such as: -- Log Analytics workspace, required for Azure Automation Change Tracking and Inventory, Azure Monitor VM insights, and Azure Monitor log collection with Log Analytics agent.+- Log Analytics workspace, required for Azure Automation Change Tracking and Inventory, Azure Monitor VM insights, and Azure Monitor log collection with Azure Monitor agent. - Azure Automation account, required for Update Management and Change Tracking and Inventory. - Azure Key Vault - Azure Blob storage, required for Custom Script Extension. |
azure-functions | Functions Bindings Signalr Service Trigger | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-signalr-service-trigger.md | Title: Azure Functions SignalR Service trigger binding -description: Learn to send SignalR Service messages from Azure Functions. +description: Learn to handle SignalR Service messages from Azure Functions. ms.devlang: csharp |
azure-functions | Functions Bindings Web Pubsub Input | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-web-pubsub-input.md | + + Title: Azure Functions Web PubSub input binding +description: Learn to handle client events from Web PubSub with an HTTP trigger, and return the client access URL and token in Azure Functions. +++ms.devlang: csharp +# ms.devlang: csharp, java, javascript, python + Last updated : 09/02/2024++zone_pivot_groups: programming-languages-set-functions-lang-workers +++# Azure Web PubSub input bindings for Azure Functions ++Our extension provides two input bindings targeting different needs. ++- [`WebPubSubConnection`](#webpubsubconnection) ++ To let a client connect to Azure Web PubSub Service, it must know the service endpoint URL and a valid access token. The `WebPubSubConnection` input binding produces the required information, so the client doesn't need to handle token generation itself. The token is time-limited and can authenticate a specific user to a connection. Therefore, don't cache the token or share it between clients. An HTTP trigger working with this input binding can be used for clients to retrieve the connection information. ++- [`WebPubSubContext`](#webpubsubcontext) ++ When using Static Web Apps, `HttpTrigger` is the only supported trigger. For Web PubSub scenarios, the `WebPubSubContext` input binding helps you deserialize upstream HTTP requests from the service side under Web PubSub protocols, so you can get results similar to `WebPubSubTrigger` and handle them easily in functions. + When used with `HttpTrigger`, you need to configure the URL exposed by the HTTP trigger as the event handler URL accordingly. ++## `WebPubSubConnection` ++### Example ++The following example shows an HTTP trigger function that acquires Web PubSub connection information using the input binding and returns it over HTTP. In the following example, the `UserId` is passed in through the client request query string, like `?userid={User-A}`. +++# [Isolated worker model](#tab/isolated-process) ++```csharp +[Function("WebPubSubConnectionInputBinding")] +public static async Task<HttpResponseData> Run([HttpTrigger(AuthorizationLevel.Anonymous)] HttpRequestData req, +[WebPubSubConnectionInput(Hub = "<hub>", UserId = "{query.userid}", Connection = "<web_pubsub_connection_name>")] WebPubSubConnection connectionInfo) +{ + var response = req.CreateResponse(HttpStatusCode.OK); + await response.WriteAsJsonAsync(connectionInfo); + return response; +} +``` ++# [In-process model](#tab/in-process) ++```csharp +[FunctionName("WebPubSubConnectionInputBinding")] +public static WebPubSubConnection Run( + [HttpTrigger(AuthorizationLevel.Anonymous, "get", "post")] HttpRequest req, + [WebPubSubConnection(Hub = "<hub>", UserId = "{query.userid}")] WebPubSubConnection connection) +{ + return connection; +} +``` +++++# [Model v4](#tab/nodejs-v4) ++```js +const { app, input } = require('@azure/functions'); ++const connection = input.generic({ + type: 'webPubSubConnection', + name: 'connection', + userId: '{query.userId}', + hub: '<hub>' +}); ++app.http('negotiate', { + methods: ['GET', 'POST'], + authLevel: 'anonymous', + extraInputs: [connection], + handler: async (request, context) => { + return { body: JSON.stringify(context.extraInputs.get('connection')) }; + }, +}); +``` ++# [Model v3](#tab/nodejs-v3) ++Define input bindings in `function.json`. 
++```json +{ + "disabled": false, + "bindings": [ + { + "authLevel": "anonymous", + "type": "httpTrigger", + "direction": "in", + "name": "req" + }, + { + "type": "http", + "direction": "out", + "name": "res" + }, + { + "type": "webPubSubConnection", + "name": "connection", + "userId": "{query.userid}", + "hub": "<hub>", + "direction": "in" + } + ] +} +``` ++Define function in `index.js`. ++```js +module.exports = function (context, req, connection) { + context.res = { body: connection }; + context.done(); +}; +``` +++++Create a folder *negotiate* and update *negotiate/function.json* and copy the following JSON code. ++```json +{ + "scriptFile": "__init__.py", + "bindings": [ + { + "authLevel": "anonymous", + "type": "httpTrigger", + "direction": "in", + "name": "req" + }, + { + "type": "http", + "direction": "out", + "name": "$return" + }, + { + "type": "webPubSubConnection", + "name": "connection", + "userId": "{query.userid}", + "hub": "<hub>", + "direction": "in" + } + ] +} +``` ++Define function in *negotiate/__init__.py*. ++```python +import logging ++import azure.functions as func ++def main(req: func.HttpRequest, connection) -> func.HttpResponse: + return func.HttpResponse(connection) +``` ++++> [!NOTE] +> Complete samples for this language are pending ++++> [!NOTE] +> The Web PubSub extension for Java isn't supported yet. +++### Get authenticated user ID ++If the function is triggered by an authenticated client, you can add a user ID claim to the generated token. You can easily add authentication to a function app using App Service Authentication. ++App Service Authentication sets HTTP headers named `x-ms-client-principal-id` and `x-ms-client-principal-name` that contain the authenticated user's client principal ID and name, respectively. ++You can set the `UserId` property of the binding to the value from either header using a binding expression: `{headers.x-ms-client-principal-id}` or `{headers.x-ms-client-principal-name}`. +++# [Isolated worker model](#tab/isolated-process) ++```csharp +[Function("WebPubSubConnectionInputBinding")] +public static async Task<HttpResponseData> Run([HttpTrigger(AuthorizationLevel.Anonymous)] HttpRequestData req, +[WebPubSubConnectionInput(Hub = "<hub>", UserId = "{headers.x-ms-client-principal-id}", Connection = "<web_pubsub_connection_name>")] WebPubSubConnection connectionInfo) +{ + var response = req.CreateResponse(HttpStatusCode.OK); + await response.WriteAsJsonAsync(connectionInfo); + return response; +} +``` ++# [In-process model](#tab/in-process) ++```csharp +[FunctionName("WebPubSubConnectionInputBinding")] +public static WebPubSubConnection Run( + [HttpTrigger(AuthorizationLevel.Anonymous, "get", "post")] HttpRequest req, + [WebPubSubConnection(Hub = "<hub>", UserId = "{headers.x-ms-client-principal-id}")] WebPubSubConnection connection) +{ + return connection; +} +``` +++++# [Model v4](#tab/nodejs-v4) ++```js +const { app, input } = require('@azure/functions'); ++const connection = input.generic({ + type: 'webPubSubConnection', + name: 'connection', + userId: '{headers.x-ms-client-principal-id}', + hub: '<hub>' +}); ++app.http('negotiate', { + methods: ['GET', 'POST'], + authLevel: 'anonymous', + extraInputs: [connection], + handler: async (request, context) => { + return { body: JSON.stringify(context.extraInputs.get('connection')) }; + }, +}); +``` ++# [Model v3](#tab/nodejs-v3) ++Define input bindings in `function.json`. 
+
+```json
+{
+  "disabled": false,
+  "bindings": [
+    {
+      "authLevel": "anonymous",
+      "type": "httpTrigger",
+      "direction": "in",
+      "name": "req"
+    },
+    {
+      "type": "http",
+      "direction": "out",
+      "name": "res"
+    },
+    {
+      "type": "webPubSubConnection",
+      "name": "connection",
+      "userId": "{headers.x-ms-client-principal-id}",
+      "hub": "<hub>",
+      "direction": "in"
+    }
+  ]
+}
+```
+
+Define the function in `index.js`.
+
+```js
+module.exports = function (context, req, connection) {
+  context.res = { body: connection };
+  context.done();
+};
+```
+
+
+
+
+Create a folder *negotiate*, update *negotiate/function.json*, and copy in the following JSON code.
+
+```json
+{
+  "scriptFile": "__init__.py",
+  "bindings": [
+    {
+      "authLevel": "anonymous",
+      "type": "httpTrigger",
+      "direction": "in",
+      "name": "req"
+    },
+    {
+      "type": "http",
+      "direction": "out",
+      "name": "$return"
+    },
+    {
+      "type": "webPubSubConnection",
+      "name": "connection",
+      "userId": "{headers.x-ms-client-principal-id}",
+      "hub": "<hub>",
+      "direction": "in"
+    }
+  ]
+}
+```
+
+Define the function in *negotiate/__init__.py*.
+
+```python
+import logging
+
+import azure.functions as func
+
+def main(req: func.HttpRequest, connection) -> func.HttpResponse:
+    return func.HttpResponse(connection)
+```
+
+
+
+> [!NOTE]
+> Complete samples for this language are pending.
+
+
+
+> [!NOTE]
+> The Web PubSub extension for Java isn't supported yet.
+
+
+### Configuration
+
+The following table explains the binding configuration properties that you set in the function.json file and the `WebPubSubConnection` attribute.
+
+| function.json property | Attribute property | Description |
+||||
+| **type** | n/a | Must be set to `webPubSubConnection` |
+| **direction** | n/a | Must be set to `in` |
+| **name** | n/a | Variable name used in function code for the input connection binding object. |
+| **hub** | Hub | Required - The value must be set to the name of the Web PubSub hub for the function to be triggered. The value set in the attribute takes priority; otherwise, it can be set in app settings as a global value. |
+| **userId** | UserId | Optional - the value of the user identifier claim to be set in the access token. |
+| **clientProtocol** | ClientProtocol | Optional - The client protocol type. Valid values include `default` and `mqtt`. <br> For MQTT clients, you must set it to `mqtt`. <br> For other clients, you can omit the property or set it to `default`. |
+| **connection** | Connection | Required - The name of the app setting that contains the Web PubSub Service connection string (defaults to "WebPubSubConnectionString"). |
+
+### Usage
+
+
+`WebPubSubConnection` provides the following properties.
+
+| Binding Name | Binding Type | Description |
+||||
+| BaseUri | Uri | Web PubSub client connection URI. |
+| Uri | Uri | Absolute URI of the Web PubSub connection; contains the `AccessToken` generated based on the request. |
+| AccessToken | string | The `AccessToken` generated based on the request `UserId` and service information. |
+
+
+
+
+`WebPubSubConnection` provides the following properties.
+
+| Binding Name | Description |
+|||
+| baseUrl | Web PubSub client connection URI. |
+| url | Absolute URI of the Web PubSub connection; contains the `accessToken` generated based on the request. |
+| accessToken | The `accessToken` generated based on the request `UserId` and service information. |
+
+
+> [!NOTE]
+> The Web PubSub extension for Java isn't supported yet.
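To show how a client might consume the connection information returned by the negotiate function above, here's a minimal browser-side sketch. It assumes the function is deployed as an anonymous endpoint named `negotiate` and that the client connects with a plain WebSocket; the endpoint path and query parameter are illustrative.

```js
// Minimal client-side sketch: retrieve the connection info produced by the
// WebPubSubConnection input binding, then connect with the returned URL.
// The `url` property already embeds the time-limited access token, so the
// client never builds or stores credentials itself.
async function connect() {
  const res = await fetch('/api/negotiate?userid=User-A'); // illustrative path
  const connection = await res.json();

  const ws = new WebSocket(connection.url);
  ws.onopen = () => console.log('Connected to Web PubSub.');
  ws.onmessage = (event) => console.log('Received:', event.data);
}

connect();
```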
+
+
+### More customization of the generated token
+
+Because the binding parameter types don't support passing a list or an array, the `WebPubSubConnection` binding doesn't support all the parameters the server SDK has, especially `roles`, and also `groups` and `expiresAfter`.
+
+
+When you need to add roles or delay building the access token in the function, we suggest working with the [server SDK for C#](/dotnet/api/overview/azure/messaging.webpubsub-readme).
+
+# [Isolated worker model](#tab/isolated-process)
+
+```csharp
+[Function("WebPubSubConnectionCustomRoles")]
+public static async Task<HttpResponseData> Run([HttpTrigger(AuthorizationLevel.Anonymous)] HttpRequestData req)
+{
+    var serviceClient = new WebPubSubServiceClient("<web-pubsub-connection-string>", "<hub>");
+    var userId = req.Query["userid"];
+    // Your method to get custom roles.
+    var roles = GetRoles(userId);
+    var url = await serviceClient.GetClientAccessUriAsync(TimeSpan.FromMinutes(5), userId, roles);
+    var response = req.CreateResponse(HttpStatusCode.OK);
+    response.WriteString(url.ToString());
+    return response;
+}
+```
+
+# [In-process model](#tab/in-process)
+
+```csharp
+[FunctionName("WebPubSubConnectionCustomRoles")]
+public static async Task<Uri> Run(
+    [HttpTrigger(AuthorizationLevel.Anonymous, "get", "post")] HttpRequest req)
+{
+    var serviceClient = new WebPubSubServiceClient("<web-pubsub-connection-string>", "<hub>");
+    var userId = req.Query["userid"].FirstOrDefault();
+    // Your method to get custom roles.
+    var roles = GetRoles(userId);
+    return await serviceClient.GetClientAccessUriAsync(TimeSpan.FromMinutes(5), userId, roles);
+}
+```
+
+
+
+
+
+When you need to add roles or delay building the access token in the function, we suggest working with the [server SDK for JavaScript](/javascript/api/overview/azure/web-pubsub).
+
+# [Model v4](#tab/nodejs-v4)
+
+```js
+const { app } = require('@azure/functions');
+const { WebPubSubServiceClient } = require('@azure/web-pubsub');
+
+app.http('negotiate', {
+    methods: ['GET', 'POST'],
+    authLevel: 'anonymous',
+    handler: async (request, context) => {
+        const serviceClient = new WebPubSubServiceClient(process.env.WebPubSubConnectionString, "<hub>");
+        let token = await serviceClient.getAuthenticationToken({ userId: request.query.get('userid'), roles: ["webpubsub.joinLeaveGroup", "webpubsub.sendToGroup"] });
+        return { body: token.url };
+    },
+});
+```
+
+# [Model v3](#tab/nodejs-v3)
+
+Define input bindings in `function.json`.
+
+```json
+{
+  "disabled": false,
+  "bindings": [
+    {
+      "authLevel": "anonymous",
+      "type": "httpTrigger",
+      "direction": "in",
+      "name": "req"
+    },
+    {
+      "type": "http",
+      "direction": "out",
+      "name": "res"
+    }
+  ]
+}
+```
+
+Define the function in `index.js`.
+
+```js
+const { WebPubSubServiceClient } = require('@azure/web-pubsub');
+
+module.exports = async function (context, req) {
+  const serviceClient = new WebPubSubServiceClient(process.env.WebPubSubConnectionString, "<hub>");
+  let token = await serviceClient.getAuthenticationToken({ userId: req.query.userid, roles: ["webpubsub.joinLeaveGroup", "webpubsub.sendToGroup"] });
+  context.res = { body: token.url };
+  context.done();
+};
+```
+
+
+
+
+> [!NOTE]
+> Complete samples for this language are pending.
+
+
+> [!NOTE]
+> The Web PubSub extension for Java isn't supported yet.
+
+
+
+
+## `WebPubSubContext`
+
+### Example
+
+
+# [Isolated worker model](#tab/isolated-process)
+
+```csharp
+// validate method when upstream set as http://<func-host>/api/{event}
+[Function("validate")]
+public static HttpResponseData Validate(
+    [HttpTrigger(AuthorizationLevel.Anonymous, "options")] HttpRequestData req,
+    [WebPubSubContextInput] WebPubSubContext wpsReq)
+{
+    return BuildHttpResponseData(req, wpsReq.Response);
+}
+
+// Respond to AbuseProtection by copying the status, body, and headers
+// from the response built by the extension.
+private static HttpResponseData BuildHttpResponseData(HttpRequestData request, SimpleResponse wpsResponse)
+{
+    var response = request.CreateResponse();
+    response.StatusCode = (HttpStatusCode)wpsResponse.Status;
+    response.WriteString(wpsResponse.Body);
+    foreach (var header in wpsResponse.Headers)
+    {
+        response.Headers.Add(header.Key, header.Value);
+    }
+    return response;
+}
+```
+
+# [In-process model](#tab/in-process)
+
+```csharp
+[FunctionName("WebPubSubContextInputBinding")]
+public static object Run(
+    [HttpTrigger(AuthorizationLevel.Anonymous, "get", "post")] HttpRequest req,
+    [WebPubSubContext] WebPubSubContext wpsContext)
+{
+    // In case the request is a preflight or invalid, directly return the prebuilt response from the extension.
+    if (wpsContext.IsPreflight || wpsContext.HasError)
+    {
+        return wpsContext.Response;
+    }
+    var request = wpsContext.Request as ConnectEventRequest;
+    var response = new ConnectEventResponse
+    {
+        UserId = wpsContext.Request.ConnectionContext.UserId
+    };
+    return response;
+}
+```
+
+
+
+
+# [Model v4](#tab/nodejs-v4)
+
+```js
+const { app, input } = require('@azure/functions');
+
+const wpsContext = input.generic({
+    type: 'webPubSubContext',
+    name: 'wpsContext'
+});
+
+app.http('connect', {
+    methods: ['GET', 'POST'],
+    authLevel: 'anonymous',
+    extraInputs: [wpsContext],
+    handler: async (request, context) => {
+        var wpsRequest = context.extraInputs.get('wpsContext');
+
+        return { "userId": wpsRequest.request.connectionContext.userId };
+    }
+});
+```
+
+# [Model v3](#tab/nodejs-v3)
+
+Define input bindings in *function.json*.
+
+```json
+{
+  "disabled": false,
+  "bindings": [
+    {
+      "authLevel": "anonymous",
+      "type": "httpTrigger",
+      "direction": "in",
+      "name": "req",
+      "methods": ["get", "post"]
+    },
+    {
+      "type": "http",
+      "direction": "out",
+      "name": "$return"
+    },
+    {
+      "type": "webPubSubContext",
+      "name": "wpsContext",
+      "direction": "in"
+    }
+  ]
+}
+```
+
+Define the function in *index.js*.
+
+```js
+module.exports = async function (context, req, wpsContext) {
+  // In case the request is a preflight or invalid, directly return the prebuilt response from the extension.
+  if (wpsContext.hasError || wpsContext.isPreflight)
+  {
+    return wpsContext.response;
+  }
+  // Return an HTTP response with the connect event response as the body.
+  return { body: {"userId": wpsContext.connectionContext.userId} };
+};
+```
+
+
+
+
+> [!NOTE]
+> Complete samples for this language are pending.
+
+
+> [!NOTE]
+> The Web PubSub extension for Java isn't supported yet.
+
+
+### Configuration
+
+The following table explains the binding configuration properties that you set in the function.json file and the `WebPubSubContext` attribute.
+
+| function.json property | Attribute property | Description |
+||||
+| **type** | n/a | Must be set to `webPubSubContext`. |
+| **direction** | n/a | Must be set to `in`. |
+| **name** | n/a | Variable name used in function code for the input Web PubSub request. |
+| **connection** | Connection | Optional - the name of an app setting or setting collection that specifies the upstream Azure Web PubSub service.
The value is used for [Abuse Protection](https://github.com/cloudevents/spec/blob/v1.0.1/http-webhook.md#4-abuse-protection) and signature validation. The value resolves to "WebPubSubConnectionString" by default, and `null` means the validation isn't needed and always succeeds. |
+
+
+### Usage
+
+`WebPubSubContext` provides the following properties.
+
+| Binding Name | Binding Type | Description | Properties |
+|||||
+| request | `WebPubSubEventRequest` | Request from the client; see the following table for details. | `WebPubSubConnectionContext` from the request header, plus other properties deserialized from the request body that describe the request, for example, `Reason` for `DisconnectedEventRequest`. |
+| response | `HttpResponseMessage` | The response built by the extension, mainly for `AbuseProtection` and error cases. | - |
+| errorMessage | string | Describes the error details when processing the upstream request. | - |
+| hasError | bool | Flag to indicate whether it's a valid Web PubSub upstream request. | - |
+| isPreflight | bool | Flag to indicate whether it's a preflight request of `AbuseProtection`. | - |
+
+`WebPubSubEventRequest` is deserialized to different classes that provide different information about the request scenario. For `PreflightRequest` or invalid cases, you can check the `IsPreflight` and `HasError` flags. We suggest returning the system-built response `WebPubSubContext.Response` directly, or you can log errors on demand. In different scenarios, you can read the request properties as follows.
+
+| Derived Class | Description | Properties |
+| -- | -- | -- |
+| `PreflightRequest` | Used in `AbuseProtection` when `IsPreflight` is **true** | - |
+| `ConnectEventRequest` | Used in system `Connect` event type | Claims, Query, Subprotocols, ClientCertificates |
+| `ConnectedEventRequest` | Used in system `Connected` event type | - |
+| `UserEventRequest` | Used in user event type | Data, DataType |
+| `DisconnectedEventRequest` | Used in system `Disconnected` event type | Reason |
+
+> [!NOTE]
+> Though the `WebPubSubContext` input binding provides a similar way to deserialize the request under `HttpTrigger` compared to `WebPubSubTrigger`, there are limitations; for example, merging connection state after the response isn't supported. The returned response is still respected by the service side, but you have to build the response yourself. If you need to set the event response, you should return an `HttpResponseMessage` that contains a `ConnectEventResponse` or messages for the user event as the **response body**, and put the connection state with the key `ce-connectionstate` in the **response header**. |
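As a sketch of the note above, a Model v3 JavaScript function using `WebPubSubContext` could build the event response itself: the user-event message goes in the response body, and the connection state goes in the `ce-connectionstate` header. The binding setup matches the `WebPubSubContext` example earlier; the base64-of-JSON state encoding shown is an assumption, and the state payload is illustrative.

```js
// Sketch: respond to a user event and set connection state manually,
// per the note above. Assumes the same function.json as the
// WebPubSubContext example (HTTP trigger + webPubSubContext input).
module.exports = async function (context, req, wpsContext) {
  // Return the prebuilt response for preflight or invalid requests.
  if (wpsContext.hasError || wpsContext.isPreflight) {
    return wpsContext.response;
  }
  return {
    status: 200,
    // Message for the calling client goes in the response body.
    body: '[SYSTEM ACK] Received.',
    headers: {
      // Connection state goes in the ce-connectionstate header.
      // Base64-encoded JSON is an assumption about the expected encoding.
      'ce-connectionstate': Buffer.from(JSON.stringify({ counter: 1 })).toString('base64')
    }
  };
};
```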
azure-functions | Functions Bindings Web Pubsub Output | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-web-pubsub-output.md | +
+ Title: Azure Functions Web PubSub output binding
+description: Learn about the Web PubSub output binding for Azure Functions.
+
+
+ms.devlang: csharp
+# ms.devlang: csharp, java, javascript, python
+ Last updated : 09/02/2024
+
+zone_pivot_groups: programming-languages-set-functions-lang-workers
+
+
+# Azure Web PubSub output binding for Azure Functions
+
+Use the *Web PubSub* output binding to invoke the Azure Web PubSub service. You can send a message to:
+
+* All connected clients
+* Connected clients authenticated to a specific user
+* Connected clients joined in a specific group
+* A specific client connection
+
+The output binding also allows you to manage clients and groups, and grant or revoke permissions for a specific connection ID within a group:
+
+* Add a connection to a group
+* Add a user to a group
+* Remove a connection from a group
+* Remove a user from a group
+* Remove a user from all groups
+* Close all client connections
+* Close a specific client connection
+* Close connections in a group
+* Grant permissions to a connection
+* Revoke permissions from a connection
+
+For information on setup and configuration details, see the [overview](functions-bindings-web-pubsub.md).
+
+## Example
+
+# [Isolated worker model](#tab/isolated-process)
+
+```csharp
+[Function("WebPubSubOutputBinding")]
+[WebPubSubOutput(Hub = "<hub>", Connection = "<web_pubsub_connection_name>")]
+public static WebPubSubAction Run([HttpTrigger(AuthorizationLevel.Function, "get", "post")] HttpRequestData req)
+{
+    return new SendToAllAction
+    {
+        Data = BinaryData.FromString("Hello Web PubSub!"),
+        DataType = WebPubSubDataType.Text
+    };
+}
+```
+
+# [In-process model](#tab/in-process)
+
+```csharp
+[FunctionName("WebPubSubOutputBinding")]
+public static async Task RunAsync(
+    [HttpTrigger(AuthorizationLevel.Anonymous, "get", "post")] HttpRequest req,
+    [WebPubSub(Hub = "<hub>")] IAsyncCollector<WebPubSubAction> actions)
+{
+    await actions.AddAsync(WebPubSubAction.CreateSendToAllAction("Hello Web PubSub!", WebPubSubDataType.Text));
+}
+```
+
+
+
+
+# [Model v4](#tab/nodejs-v4)
+
+```js
+const { app, output } = require('@azure/functions');
+const wpsMsg = output.generic({
+    type: 'webPubSub',
+    name: 'actions',
+    hub: '<hub>',
+});
+
+app.http('message', {
+    methods: ['GET', 'POST'],
+    authLevel: 'anonymous',
+    extraOutputs: [wpsMsg],
+    handler: async (request, context) => {
+        context.extraOutputs.set(wpsMsg, [{
+            "actionName": "sendToAll",
+            "data": `Hello world`,
+            "dataType": `text`
+        }]);
+    }
+});
+```
+
+# [Model v3](#tab/nodejs-v3)
+
+Define bindings in *functions.json*.
+
+```json
+{
+  "disabled": false,
+  "bindings": [
+    {
+      "type": "webPubSub",
+      "name": "actions",
+      "hub": "<hub>",
+      "direction": "out"
+    }
+  ]
+}
+```
+
+Define the function in *index.js*.
+
+```js
+module.exports = async function (context) {
+  context.bindings.actions = {
+    "actionName": "sendToAll",
+    "data": "Hello world",
+    "dataType": "text"
+  };
+  context.done();
+}
+```
+
+
+
+
+> [!NOTE]
+> Complete samples for this language are pending.
+
+
+> [!NOTE]
+> The Web PubSub extension for Java isn't supported yet.
+
+
+### WebPubSubAction
+
+`WebPubSubAction` is the base abstract type of the output bindings. The derived types represent the actions that you want the service to invoke.
+
+
+
+In C#, we provide a few static methods under `WebPubSubAction` to help discover the available actions.
For example, you can create a `SendToAllAction` by calling `WebPubSubAction.CreateSendToAllAction()`.
+
+| Derived Class | Properties |
+| -- | -- |
+| `SendToAllAction`|Data, DataType, Excluded |
+| `SendToGroupAction`|Group, Data, DataType, Excluded |
+| `SendToUserAction`|UserId, Data, DataType |
+| `SendToConnectionAction`|ConnectionId, Data, DataType |
+| `AddUserToGroupAction`|UserId, Group |
+| `RemoveUserFromGroupAction`|UserId, Group |
+| `RemoveUserFromAllGroupsAction`|UserId |
+| `AddConnectionToGroupAction`|ConnectionId, Group |
+| `RemoveConnectionFromGroupAction`|ConnectionId, Group |
+| `CloseAllConnectionsAction`|Excluded, Reason |
+| `CloseClientConnectionAction`|ConnectionId, Reason |
+| `CloseGroupConnectionsAction`|Group, Excluded, Reason |
+| `GrantPermissionAction`|ConnectionId, Permission, TargetName |
+| `RevokePermissionAction`|ConnectionId, Permission, TargetName |
+
+
+
+**`actionName`** is the key parameter used to resolve the type. The available actions are listed as follows.
+
+| ActionName | Properties |
+| -- | -- |
+| `sendToAll`|Data, DataType, Excluded |
+| `sendToGroup`|Group, Data, DataType, Excluded |
+| `sendToUser`|UserId, Data, DataType |
+| `sendToConnection`|ConnectionId, Data, DataType |
+| `addUserToGroup`|UserId, Group |
+| `removeUserFromGroup`|UserId, Group |
+| `removeUserFromAllGroups`|UserId |
+| `addConnectionToGroup`|ConnectionId, Group |
+| `removeConnectionFromGroup`|ConnectionId, Group |
+| `closeAllConnections`|Excluded, Reason |
+| `closeClientConnection`|ConnectionId, Reason |
+| `closeGroupConnections`|Group, Excluded, Reason |
+| `grantPermission`|ConnectionId, Permission, TargetName |
+| `revokePermission`|ConnectionId, Permission, TargetName |
+
+> [!IMPORTANT]
+> The message data property in the message-related actions must be a `string` if the data type is set to `json` or `text`, to avoid data conversion ambiguity. Use `JSON.stringify()` to convert JSON objects as needed. This applies to any place that uses the message property, for example, `UserEventResponse.Data` when working with `WebPubSubTrigger`.
+>
+> When the data type is set to `binary`, you can leverage the binding-supported `dataType` set to `binary` in *function.json*; see [Trigger and binding definitions](../azure-functions/functions-triggers-bindings.md?tabs=csharp#trigger-and-binding-definitions) for details.
+
+
+### Configuration
+
+The following table explains the binding configuration properties that you set in the function.json file and the `WebPubSub` attribute.
+
+| function.json property | Attribute property | Description |
+||||
+| **type** | n/a | Must be set to `webPubSub` |
+| **direction** | n/a | Must be set to `out` |
+| **name** | n/a | Variable name used in function code for the output binding object. |
+| **hub** | Hub | The value must be set to the name of the Web PubSub hub for the function to be triggered. The value set in the attribute takes priority; otherwise, it can be set in app settings as a global value. |
+| **connection** | Connection | The name of the app setting that contains the Web PubSub Service connection string (defaults to "WebPubSubConnectionString"). |
+
+## Troubleshooting
+
+### Setting up console logging
+You can also easily [enable console logging](https://github.com/Azure/azure-sdk-for-net/blob/master/sdk/core/Azure.Core/samples/Diagnostics.md#logging) if you want to dig deeper into the requests you're making against the service.
++[azure_sub]: https://azure.microsoft.com/free/ +[samples_ref]: https://github.com/Azure/azure-webpubsub/tree/main/samples/functions |
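To illustrate the `JSON.stringify()` guidance in the important note above, here's a minimal Model v4 sketch that serializes an object before sending it with the `sendToAll` action; the function name and payload are illustrative.

```js
const { app, output } = require('@azure/functions');

const wpsMsg = output.generic({ type: 'webPubSub', name: 'actions', hub: '<hub>' });

app.http('broadcastJson', {
  methods: ['POST'],
  authLevel: 'anonymous',
  extraOutputs: [wpsMsg],
  handler: async (request, context) => {
    const payload = { user: 'User-A', text: 'Hello Web PubSub!' };
    context.extraOutputs.set(wpsMsg, [{
      actionName: 'sendToAll',
      // dataType json/text requires string data, so serialize explicitly.
      data: JSON.stringify(payload),
      dataType: 'json'
    }]);
    return { status: 202 };
  }
});
```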
azure-functions | Functions Bindings Web Pubsub Trigger | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-web-pubsub-trigger.md | +
+ Title: Azure Web PubSub trigger for Azure Functions
+description: Learn how to handle Azure Web PubSub client events from Azure Functions
+ Last updated : 09/02/2024
+ms.devlang: csharp
+# ms.devlang: csharp, java, javascript, powershell, python
+
+zone_pivot_groups: programming-languages-set-functions-lang-workers
+
+
+# Azure Web PubSub trigger binding for Azure Functions
+
+Use the Azure Web PubSub trigger to handle client events from the Azure Web PubSub service.
+
+The trigger endpoint pattern is as follows, and it should be set on the Web PubSub service side (Portal: settings -> event handler -> URL Template). In the endpoint pattern, the query part `code=<API_KEY>` is **REQUIRED** when you're using Azure Function App, for [security](../azure-functions/function-keys-how-to.md#understand-keys) reasons. The key can be found in the **Azure portal**. Find your function app resource and navigate to **Functions** -> **App keys** -> **System keys** -> **webpubsub_extension** after you deploy the function app to Azure. This key isn't needed when you're working with local functions.
+
+```
+<Function_App_Url>/runtime/webhooks/webpubsub?code=<API_KEY>
+```
+
+
+
+## Example
+
+The following sample shows how to handle user events from clients.
+
+# [Isolated worker model](#tab/isolated-process)
+
+```csharp
+[Function("Broadcast")]
+public static void Run(
+[WebPubSubTrigger("<hub>", WebPubSubEventType.User, "message")] UserEventRequest request, ILogger log)
+{
+    log.LogInformation($"Request from: {request.ConnectionContext.UserId}");
+    log.LogInformation($"Request message data: {request.Data}");
+    log.LogInformation($"Request message dataType: {request.DataType}");
+}
+```
+
+
+# [In-process model](#tab/in-process)
+
+```csharp
+[FunctionName("WebPubSubTrigger")]
+public static void Run(
+    [WebPubSubTrigger("<hub>", WebPubSubEventType.User, "message")] UserEventRequest request, ILogger log)
+{
+    log.LogInformation($"Request from: {request.ConnectionContext.UserId}");
+    log.LogInformation($"Request message data: {request.Data}");
+    log.LogInformation($"Request message dataType: {request.DataType}");
+}
+```
+
+
+
+The `WebPubSubTrigger` binding also supports a return value in synchronous scenarios, for example, the system `Connect` event and user events, where the server can check and deny the client request, or send messages to the caller directly. The `Connect` event respects `ConnectEventResponse` and `EventErrorResponse`, and user events respect `UserEventResponse` and `EventErrorResponse`; returned types that don't match the current scenario are ignored.
+
+# [Isolated worker model](#tab/isolated-process)
+
+```csharp
+[Function("Broadcast")]
+public static UserEventResponse Run(
+[WebPubSubTrigger("<hub>", WebPubSubEventType.User, "message")] UserEventRequest request)
+{
+    return new UserEventResponse("[SYSTEM ACK] Received.");
+}
+```
+
+# [In-process model](#tab/in-process)
+
+```csharp
+[FunctionName("WebPubSubTriggerReturnValueFunction")]
+public static UserEventResponse Run(
+    [WebPubSubTrigger("<hub>", WebPubSubEventType.User, "message")] UserEventRequest request)
+{
+    return request.CreateResponse(BinaryData.FromString("ack"), WebPubSubDataType.Text);
+}
+```
+
+
+
+
+# [Model v4](#tab/nodejs-v4)
+
+```js
+const { app, trigger } = require('@azure/functions');
+
+const wpsTrigger = trigger.generic({
+    type: 'webPubSubTrigger',
+    name: 'request',
+    hub: '<hub>',
+    eventName: 'message',
+    eventType: 'user'
+});
+
+app.generic('message', {
+    trigger: wpsTrigger,
+    handler: async (request, context) => {
+        context.log('Request from: ', request.connectionContext.userId);
+        context.log('Request message data: ', request.data);
+        context.log('Request message dataType: ', request.dataType);
+    }
+});
+```
+
+# [Model v3](#tab/nodejs-v3)
+
+Define the trigger binding in *function.json*.
+
+```json
+{
+  "disabled": false,
+  "bindings": [
+    {
+      "type": "webPubSubTrigger",
+      "direction": "in",
+      "name": "data",
+      "hub": "<hub>",
+      "eventName": "message",
+      "eventType": "user"
+    }
+  ]
+}
+```
+
+Define the function in *index.js*.
+
+```js
+module.exports = function (context, data) {
+  context.log('Request from: ', context.bindingData.request.connectionContext.userId);
+  context.log('Request message data: ', data);
+  context.log('Request message dataType: ', context.bindingData.request.dataType);
+}
+```
+
+
+
+The `WebPubSubTrigger` binding also supports a return value in synchronous scenarios, for example, the system `Connect` event and user events, where the server can check and deny the client request, or send a message to the request client directly. In a weakly typed language like JavaScript, the return value is deserialized based on the object keys. `EventErrorResponse` has the highest priority compared to the other objects: if `code` is present in the return value, it's parsed as an `EventErrorResponse`.
+
+
+# [Model v4](#tab/nodejs-v4)
+
+```js
+app.generic('message', {
+    trigger: wpsTrigger,
+    handler: async (request, context) => {
+        return {
+            "data": "ack",
+            "dataType": "text"
+        };
+    }
+});
+```
+
+# [Model v3](#tab/nodejs-v3)
+
+```js
+module.exports = async function (context) {
+  return {
+    "data": "ack",
+    "dataType": "text"
+  };
+}
+```
+
+
+
+> [!NOTE]
+> Complete samples for this language are pending.
+
+> [!NOTE]
+> The Web PubSub extension for Java isn't supported yet.
+
+
+## Configuration
+
+The following table explains the binding configuration properties that you set in the *function.json* file.
+
+| function.json property | Attribute property | Description |
+||||
+| **type** | n/a | Required - must be set to `webPubSubTrigger`. |
+| **direction** | n/a | Required - must be set to `in`. |
+| **name** | n/a | Required - the variable name used in function code for the parameter that receives the event data. |
+| **hub** | Hub | Required - the value must be set to the name of the Web PubSub hub for the function to be triggered. The value set in the attribute takes priority; otherwise, it can be set in app settings as a global value. |
+| **eventType** | WebPubSubEventType | Required - the value must be set as the event type of messages for the function to be triggered. The value must be either `user` or `system`. |
+| **eventName** | EventName | Required - the value must be set as the event name of messages for the function to be triggered. </br> </br> For the `system` event type, the event name should be one of `connect`, `connected`, or `disconnected`. </br> </br> For user-defined subprotocols, the event name is `message`. </br> </br> For the system-supported subprotocol `json.webpubsub.azure.v1`, the event name is the user-defined event name. |
+| **clientProtocols** | ClientProtocols | Optional - specifies which client protocol can trigger the Web PubSub trigger functions. </br> </br> The following case-insensitive values are valid: </br> `all`: Accepts all client protocols. Default value. </br>`webPubSub`: Accepts only Web PubSub protocols. </br>`mqtt`: Accepts only MQTT protocols. |
+| **connection** | Connection | Optional - the name of an app setting or setting collection that specifies the upstream Azure Web PubSub service. The value is used for signature validation, and it resolves to the app setting `WebPubSubConnectionString` by default. A `null` value means the validation isn't needed and always succeeds. |
+
+## Usages
+
+In C#, `WebPubSubEventRequest` is a type-recognized binding parameter; the rest of the parameters are bound by parameter name. Check the following table for the available parameters and types.
+
+In a weakly typed language like JavaScript, `name` in `function.json` is used to bind the trigger object according to the following mapping table. The `dataType` setting in `function.json` is respected to convert the message accordingly when `name` is set to `data` as the binding object for the trigger input. All the parameters can be read from `context.bindingData.<BindingName>` and are converted to `JObject`.
+
+| Binding Name | Binding Type | Description | Properties |
+|||||
+|request|`WebPubSubEventRequest`|Describes the upstream request|Properties differ by event type, including the derived classes `ConnectEventRequest`, `MqttConnectEventRequest`, `ConnectedEventRequest`, `MqttConnectedEventRequest`, `UserEventRequest`, `DisconnectedEventRequest`, and `MqttDisconnectedEventRequest`. |
+|connectionContext|`WebPubSubConnectionContext`|Common request information| EventType, EventName, Hub, ConnectionId, UserId, Headers, Origin, Signature, States |
+|data|`BinaryData`,`string`,`Stream`,`byte[]`| Request message data from client in user `message` event | -|
+|dataType|`WebPubSubDataType`| Request message dataType, which supports `binary`, `text`, `json` | -|
+|claims|`IDictionary<string, string[]>`|User claims in system `connect` request | -|
+|query|`IDictionary<string, string[]>`|User query in system `connect` request | -|
+|subprotocols|`IList<string>`|Available subprotocols in system `connect` request | -|
+|clientCertificates|`IList<ClientCertificate>`|A list of certificate thumbprints from clients in system `connect` request|-|
+|reason|`string`|Reason in system `disconnected` request|-|
+
+> [!IMPORTANT]
+> In C#, the parameter that supports multiple types (that is, `request` or `data` when it uses a type other than the default `BinaryData`) __MUST__ be placed first for the function to bind correctly.
+
+### Return response
+
+`WebPubSubTrigger` respects the customer-returned response for the synchronous `connect` and user events. Only a matched response is sent back to the service; otherwise, it's ignored. Additionally, the `WebPubSubTrigger` return object supports `SetState()` and `ClearStates()` to manage the metadata for the connection.
The extension merges the results from the return value with the original ones from the request's `WebPubSubConnectionContext.States`. Values for existing keys are overwritten, and values for new keys are added.
+
+| Return Type | Description | Properties |
+||||
+|[`ConnectEventResponse`](/dotnet/api/microsoft.azure.webpubsub.common.connecteventresponse)| Response for `connect` event | Groups, Roles, UserId, Subprotocol |
+|[`UserEventResponse`](/dotnet/api/microsoft.azure.webpubsub.common.usereventresponse)| Response for user event | DataType, Data |
+|[`EventErrorResponse`](/dotnet/api/microsoft.azure.webpubsub.common.eventerrorresponse)| Error response for the sync event | Code, ErrorMessage |
+|[`WebPubSubEventResponse`](/dotnet/api/microsoft.azure.webpubsub.common.webpubsubeventresponse)| Base response type of the supported ones, used for uncertain return cases | - |
+
+ |
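As a sketch of the synchronous responses described above, a Model v4 `connect` handler could either deny the request or accept it. Per the trigger docs earlier, a returned object containing `code` is parsed as an `EventErrorResponse`; the specific `code` value shown here is an assumption.

```js
const { app, trigger } = require('@azure/functions');

const wpsConnect = trigger.generic({
  type: 'webPubSubTrigger',
  name: 'request',
  hub: '<hub>',
  eventName: 'connect',
  eventType: 'system'
});

app.generic('connect', {
  trigger: wpsConnect,
  handler: async (request, context) => {
    if (!request.connectionContext.userId) {
      // Contains `code`, so it's parsed as an EventErrorResponse;
      // the code value is illustrative.
      return { code: 'unauthorized', errorMessage: 'A user ID is required.' };
    }
    // Parsed as a ConnectEventResponse.
    return { userId: request.connectionContext.userId };
  }
});
```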
azure-functions | Functions Bindings Web Pubsub | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-web-pubsub.md | +
+ Title: Azure Functions Web PubSub bindings
+description: Understand how to use Web PubSub bindings with Azure Functions.
+
+ Last updated : 09/02/2024
+zone_pivot_groups: programming-languages-set-functions-lang-workers
+
+
+# Web PubSub bindings for Azure Functions
+
+This set of articles explains how to authenticate and send real-time messages to clients connected to [Azure Web PubSub](https://azure.microsoft.com/products/web-pubsub/) by using Azure Web PubSub bindings in Azure Functions.
+
+| Action | Type |
+|||
+| Handle client events from Web PubSub | [Trigger binding](./functions-bindings-web-pubsub-trigger.md) |
+| Handle client events from Web PubSub with HTTP trigger, or return client access URL and token | [Input binding](./functions-bindings-web-pubsub-input.md) |
+| Invoke service APIs | [Output binding](./functions-bindings-web-pubsub-output.md) |
+
+[Samples](https://github.com/Azure/azure-webpubsub/tree/main/samples/functions)
+
+
+## Install extension
+
+The extension NuGet package you install depends on the C# mode you're using in your function app:
+
+# [Isolated worker model](#tab/isolated-process)
+
+Functions execute in an isolated C# worker process. To learn more, see [Guide for running C# Azure Functions in an isolated worker process](dotnet-isolated-process-guide.md).
+
+Add the extension to your project by installing this [NuGet package](https://www.nuget.org/packages/Microsoft.Azure.Functions.Worker.Extensions.WebPubSub/).
+
+# [In-process model](#tab/in-process)
+
+
+Functions execute in the same process as the Functions host. To learn more, see [Develop C# class library functions using Azure Functions](functions-dotnet-class-library.md).
+
+Add the extension to your project by installing this [NuGet package].
+
+
+
+
+## Install bundle
+
+The Web PubSub extension is part of an [extension bundle], which is specified in your host.json project file. When you create a project that targets version 3.x or later, you should already have this bundle installed. To learn more, see [extension bundle].
+
+
+> [!NOTE]
+> The Web PubSub extension for Java isn't supported yet.
+
+
+## Key concepts
+
+![Diagram showing the workflow of Azure Web PubSub service working with Function Apps.](../azure-web-pubsub/media/reference-functions-bindings/functions-workflow.png)
+
+(1)-(2) `WebPubSubConnection` input binding with HttpTrigger to generate the client connection.
+
+(3)-(4) `WebPubSubTrigger` trigger binding or `WebPubSubContext` input binding with HttpTrigger to handle the service request.
+
+(5)-(6) `WebPubSub` output binding to request that the service do something.
+
+## Connection string settings
+
+Add a `WebPubSubConnectionString` application setting that contains your Web PubSub service connection string. For local development, this value may exist in the _local.settings.json_ file.
+
+For details on how to configure and use Web PubSub and Azure Functions together, refer to [Tutorial: Create a serverless notification app with Azure Functions and Azure Web PubSub service](../azure-web-pubsub/tutorial-serverless-notification.md).
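For local development, a minimal _local.settings.json_ carrying this setting might look like the following sketch; the connection string value and worker runtime are placeholders.

```json
{
  "IsEncrypted": false,
  "Values": {
    "AzureWebJobsStorage": "UseDevelopmentStorage=true",
    "FUNCTIONS_WORKER_RUNTIME": "node",
    "WebPubSubConnectionString": "Endpoint=https://<your-instance>.webpubsub.azure.com;AccessKey=<access-key>;Version=1.0;"
  }
}
```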
++## Next steps ++- [Handle client events from Web PubSub (Trigger binding)](./functions-bindings-web-pubsub-trigger.md) +- [Handle client events from Web PubSub with HTTP trigger, or return client access URL and token (Input binding)](./functions-bindings-web-pubsub-input.md) +- [Invoke service APIs (Output binding)](./functions-bindings-web-pubsub-output.md) ++[NuGet package]: https://www.nuget.org/packages/Microsoft.Azure.WebJobs.Extensions.WebPubSub +[core tools]: ./functions-run-local.md +[extension bundle]: ./functions-bindings-register.md#extension-bundles +[Update your extensions]: ./functions-bindings-register.md +[Azure Tools extension]: https://marketplace.visualstudio.com/items?itemName=ms-vscode.vscode-node-azure-pack |
azure-functions | Functions Create Maven Intellij | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-create-maven-intellij.md | To run the project locally, follow these steps: > [!IMPORTANT] > You must have the JAVA_HOME environment variable set correctly to the JDK directory that is used during code compiling using Maven. Make sure that the version of the JDK is at least as high as the `Java.version` setting. -1. Navigate to *src/main/java/org/example/functions/HttpTriggerFunction.java* to see the code generated. Beside line 24, you should see a green **Run** button. Select it and then select **Run 'Functions-azur...'**. You should see your function app running locally with a few logs. +1. Navigate to *src/main/java/org/example/functions/HttpTriggerFunction.java* to see the code generated. Beside line 17, you should see a green **Run** button. Select it and then select **Run 'Functions-azur...'**. You should see your function app running locally with a few logs. :::image type="content" source="media/functions-create-first-java-intellij/local-run-functions-project.png" alt-text="Local run project." lightbox="media/functions-create-first-java-intellij/local-run-functions-project.png"::: |
azure-government | Compare Azure Government Global Azure | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/compare-azure-government-global-azure.md | For feature variations and limitations, including API endpoints, see [Speech ser
### [Azure AI
-The following features of Azure OpenAI are available in Azure Government:
-
-|Feature|Azure OpenAI|
-|--|--|
-|Models available|US Gov Arizona:<br> GPT-4o (2024-05-13) GPT-4 (1106-Preview)<br> GPT-3.5-Turbo (0125) GPT-3.5-Turbo (1106)<br> text-embedding-ada-002 (version 2)<br><br>US Gov Virginia:<br> GPT-4o (2024-05-13) GPT-4 (1106-Preview)<br> GPT-3.5-Turbo (0125)<br> text-embedding-ada-002 (version 2)<br><br>Learn more about the different capabilities of each model in [Azure OpenAI Service models](/azure/ai-services/openai/concepts/models)|
-|Virtual network support & private link support| Yes. |
-| Connect your data | Available in US Gov Virginia and Arizona. Virtual network and private links are supported. Deployment to a web app or a copilot in Copilot Studio is not supported. |
-|Managed Identity|Yes, via Microsoft Entra ID|
-|UI experience|**Azure portal** for account & resource management<br>**Azure OpenAI Studio** for model exploration|
-|Abuse Monitoring|Not all features of Abuse Monitoring are enabled for AOAI in Azure Government. You will be responsible for implementing reasonable technical and operational measures to detect and mitigate any use of the service in violation of the Product Terms. [Automated Content Classification and Filtering](/azure/ai-services/openai/concepts/content-filter) remains enabled by default for Azure Government.|
-|Data Storage|In AOAI, customer data is only stored at rest as part of our Finetuning solution. Since Finetuning is not enabled within Azure Gov, there is no customer data stored at rest in Azure Gov associated with AOAI. However, Customer Managed Keys (CMK) can still be enabled in Azure Gov to support use of the same policies in Azure Gov as in Public cloud. Note also that if Finetuning is enabled in Azure Gov in the future, any existing CMK deployment would be applied to that data at that time.|
-
-**Next steps**
-* To request quota increases for the pay-as-you-go consumption model, apply at [https://aka.ms/AOAIGovQuota](https://aka.ms/AOAIGovQuota)
-* If modified content filters are required, apply at [https://aka.ms/AOAIGovModifyContentFilter](https://aka.ms/AOAIGovModifyContentFilter)
-
+For feature variations and limitations, see [Azure OpenAI in Azure Gov](/azure/ai-services/openai/azure-government).
### [Azure AI |
azure-maps | Map Add Drawing Toolbar | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/map-add-drawing-toolbar.md | Title: Add drawing tools toolbar to map | Microsoft Azure Maps description: How to add a drawing toolbar to a map using Azure Maps Web SDK Previously updated : 06/05/2023 Last updated : 08/30/2024 drawingManager = new atlas.drawing.DrawingManager(map, { For a complete working sample that demonstrates how to add a drawing toolbar to your map, see [Add drawing toolbar to map] in the [Azure Maps Samples]. For the source code for this sample, see [Add drawing toolbar to map source code]. <! > [!VIDEO //codepen.io/azuremaps/embed/ZEzLeRg/?height=265&theme-id=0&default-tab=js,result&editable=true] drawingManager = new atlas.drawing.DrawingManager(map, { The following screenshot shows a sample of an instance of the drawing manager that displays the toolbar with just a single drawing tool on the map: <! > [!VIDEO //codepen.io/azuremaps/embed/OJLWWMy/?height=265&theme-id=0&default-tab=js,result&editable=true] drawingManager.setOptions({ For a complete working sample that demonstrates how to customize the rendering of the drawing shapes in the drawing manager by accessing the rendering layers, see [Change drawing rendering style] in the [Azure Maps Samples]. For the source code for this sample, see [Change drawing rendering style source code]. <! > [!VIDEO //codepen.io/azuremaps/embed/OJLWpyj/?height=265&theme-id=0&default-tab=js,result&editable=true] |
azure-monitor | Azure Monitor Agent Manage | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-manage.md | This article details the different methods to install, uninstall, update, and co See the following articles for prerequisites and other requirements for Azure Monitor Agent: -* [Azure Monitor Agent supported operating systems and environments](./azure-monitor-agent-requirements.md) +* [Azure Monitor Agent supported operating systems and environments](./azure-monitor-agent-supported-operating-systems.md) * [Azure Monitor Agent requirements](./azure-monitor-agent-requirements.md) * [Azure Monitor Agent network configuration](./azure-monitor-agent-network-configuration.md) |
azure-monitor | Azure Monitor Agent Migration Helper Workbook | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-migration-helper-workbook.md | Azure Monitor Agent Migration Helper workbook is a workbook-based Azure Monitor
## Using the AMA workbook
To open the workbook:
-1. Navigate to the **Azure Monitor** page in the Azure portal, and select **Workbooks**.
+1. Navigate to the **Monitor** page in the Azure portal, and select **Workbooks**.
 1. In the **Workbooks** pane, scroll down to the **AMA Migration Helper** workbook, and select it.
    :::image type="content" source="./media/azure-monitor-agent-migration-helper-workbook/select-monitor-workbook.png" lightbox="./media/azure-monitor-agent-migration-helper-workbook/select-monitor-workbook.png" alt-text="A screenshot showing the AMA Migration helper tile in the list of workbooks."::: |
azure-monitor | Best Practices Logs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/best-practices-logs.md | This article provides architectural best practices for Azure Monitor Logs. The g ## Reliability [Reliability](/azure/well-architected/resiliency/overview) refers to the ability of a system to recover from failures and continue to function. The goal is to minimize the effects of a single failing component. Use the following information to minimize failure of your Log Analytics workspaces and to protect the data they collect. -This video provides an overview of reliability and resilience options available for Log Analytics workspaces: --> [!VIDEO https://www.youtube.com/embed/CYspm1Yevx8?cc_load_policy=1&cc_lang_pref=auto] - [!INCLUDE [waf-logs-reliability](includes/waf-logs-reliability.md)] |
azure-monitor | Create Custom Table Auxiliary | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/create-custom-table-auxiliary.md | Title: Set up a table with the Auxiliary plan for low-cost data ingestion and retention in your Log Analytics workspace + Title: Set up a table with the Auxiliary plan for low-cost data ingestion and retention in your Log Analytics workspace (Preview) description: Create a custom table with the Auxiliary table plan in your Log Analytics workspace for low-cost ingestion and retention of log data. |
azure-monitor | Query Audit | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/query-audit.md | An audit record is created each time a query is run. If you send the data to a L |:|:| |AAPBI|[Log Analytics integration with Power BI](../logs/log-powerbi.md).| |AppAnalytics|Experiences of Log Analytics in the Azure portal.|-|AppInsightsPortalExtension|[Workbooks](../visualize/workbooks-data-sources.md#logs) or [Application insights](../app/app-insights-overview.md).| +|AppInsightsPortalExtension|[Workbooks](../visualize/workbooks-data-sources.md#logs-analytics-tables-application-insights) or [Application insights](../app/app-insights-overview.md).| |ASC_Portal|Microsoft Defender for Cloud.| |ASI_Portal|Sentinel.| |AzureAutomation|[Azure Automation.](../../automation/overview.md)| |
azure-monitor | Workbooks Data Sources | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/visualize/workbooks-data-sources.md | + - [Logs (Analytics Tables, Application Insights)](#logs-analytics-tables-application-insights)
+ - [Logs (Basic and Auxiliary Tables)](#logs-basic-and-auxiliary-tables)
 - [Metrics](#metrics)
 - [Azure Resource Graph](#azure-resource-graph)
 - [Azure Resource Manager](#azure-resource-manager)
Workbooks can extract data from these data sources:
 - [Change Analysis](#change-analysis)
 - [Prometheus](#prometheus)

-## Logs
+## Logs (Analytics Tables, Application Insights)

-With workbooks, you can query logs from the following sources:
+With workbooks, you can use the `Logs (Analytics)` data source to query logs from the following sources:

-* Azure Monitor Logs (Application Insights resources and Log Analytics workspaces)
+* Azure Monitor Logs (Application Insights resources and Log Analytics workspace analytics tables)
 * Resource-centric data (activity logs)

You can use Kusto query language (KQL) queries that transform the underlying resource data to select a result set that can be visualized as text, charts, or grids.

See also: [Workbooks best practices and hints for logs queries](workbooks-create
Tutorial: [Making resource centric log queries in workbooks](workbooks-create-workbook.md#tutorialresource-centric-logs-queries-in-workbooks)

+## Logs (Basic and Auxiliary Tables)
+
+Workbooks also support querying Log Analytics Basic and Auxiliary tables through a separate `Logs (Basic)` data source. Basic and Auxiliary log tables reduce the cost of ingesting high-volume verbose logs and let you query the data they store, with some limitations.
+
+
+> [!NOTE]
+> Basic and Auxiliary logs and the workbook `Logs (Basic)` data source have limitations compared to the `Logs (Analytics)` data source, most notably:
+> * *Extra cost*, including per-query costs. See [Azure Monitor pricing](https://azure.microsoft.com/pricing/details/monitor/) for details.
+> * Basic logs don't support the full KQL language.
+> * Basic logs only operate on a single Log Analytics workspace; they don't have cross-resource or resource-centric query support.
+> * Basic logs don't support "set in query" style time ranges; an explicit time range (or parameter) must be specified.
+
+For a full list of details and limitations, see [Query data in a Basic and Auxiliary table in Azure Monitor Logs](../logs/basic-logs-query.md).
+
+See also: [Log Analytics query optimization tips](../logs/query-optimization.md)
+
## Metrics

Azure resources emit [metrics](../essentials/data-platform-metrics.md) that can be accessed via workbooks. Metrics can be accessed in workbooks through a specialized control that allows you to specify the target resources, the desired metrics, and their aggregation. You can then plot this data in charts or grids.

Azure resources emit [metrics](../essentials/data-platform-metrics.md) that can
Workbooks support querying for resources and their metadata by using Azure Resource Graph. This functionality is primarily used to build custom query scopes for reports. The resource scope is expressed via a KQL subset that Resource Graph supports, which is often sufficient for common use cases.

-To make a query control that uses this data source, use the **Query type** dropdown and select **Azure Resource Graph**. Then select the subscriptions to target. Use **Query control** to add the Resource Graph KQL subset that selects an interesting resource subset.
+To make a query control that uses this data source, use the **Query type** dropdown and select **Azure Resource Graph**. Then choose the level of data to target: Subscriptions, Management groups, or the entire Tenant/Directory. Then select the subscriptions to target. Use **Query control** to add the Resource Graph KQL query that selects an interesting resource subset.
<!-- convertborder later -->
:::image type="content" source="./media/workbooks-data-sources/azure-resource-graph.png" lightbox="./media/workbooks-data-sources/azure-resource-graph.png" alt-text="Screenshot that shows an Azure Resource Graph KQL query." border="false":::

To make a query control that uses this data sourd
## Azure Resource Manager

Azure Workbooks supports Azure Resource Manager REST operations so that you can query the management.azure.com endpoint without providing your own authorization header token.

-To make a query control that uses this data source, use the **Data source** dropdown and select **Azure Resource Manager**. Provide the appropriate parameters, such as **Http method**, **url path**, **headers**, **url parameters**, and **body**. Azure Resource Manager data source is intended to be used as a data source to power data *visualizations*; as such, it does not support `PUT` or `PATCH` operations. The data source supports the following HTTP methods, with these expecations and limitations:
+To make a query control that uses this data source, use the **Data source** dropdown and select **Azure Resource Manager**. Provide the appropriate parameters, such as **Http method**, **url path**, **headers**, **url parameters**, and **body**. The Azure Resource Manager data source is intended to power data *visualizations*; as such, it does not support `PUT` or `PATCH` operations. The data source supports the following HTTP methods, with these expectations and limitations:

* `GET` - the most common operation for visualization, execute a query and parse the `JSON` result using settings in the "Result Settings" tab.
-* `GETARRAY` - for ARM APIs that may return multiple "pages" of results using the ARM standard `nextLink` or `@odata.nextLink` style response (See [Async operations, throttling, and paging](/rest/api/azure/#async-operations-throttling-and-paging), this method will make followup calls to the API for each `nextLink`, and merge those results into an array of results.
+* `GETARRAY` - for ARM APIs that may return multiple "pages" of results using the ARM standard `nextLink` or `@odata.nextLink` style response (see [Async operations, throttling, and paging](/rest/api/azure/#async-operations-throttling-and-paging)), this method makes follow-up calls to the API for each `nextLink` result and merges those results into an array of results.
* `POST` - This method is used for APIs that pass information in a POST body.

> [!NOTE]
-> The Azure Resource Manager data source only supports results that return a 200 `OK` response, indicating the result is synchronous. APIs returning asynchronous results with 202 `ACCEPTED` asynchronous result and a header with a result URL are not supported.
+> The Azure Resource Manager data source only supports results that return a 200 `OK` response, indicating the result is synchronous. APIs that return a 202 `ACCEPTED` asynchronous result and a header with a result URL aren't supported.
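As an illustrative sketch of an Azure Resource Manager `GET` query, the settings below describe the control's fields informally rather than an exact workbook schema; the API version shown is an assumption.

```json
{
  "httpMethod": "GET",
  "urlPath": "/subscriptions/{Subscription}/resourceGroups",
  "urlParameters": [
    { "name": "api-version", "value": "2021-04-01" }
  ]
}
```

The `{Subscription}` token would come from a subscription parameter, and the JSONPath settings on the Result Settings tab could then select `$.value` to turn each resource group into a grid row.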
## Azure Data Explorer See also: [Azure Data Explorer query best practices](/azure/data-explorer/kusto/ ## JSON -The JSON provider allows you to create a query result from static JSON content. It's most commonly used in parameters to create dropdown parameters of static values. Simple JSON arrays or objects will automatically be converted into grid rows and columns. For more specific behaviors, you can use the **Results** tab and JSONPath settings to configure columns. +The JSON provider allows you to create a query result from static JSON content. It's most commonly used in parameters to create dropdown parameters of static values. Simple JSON arrays or objects are converted into grid rows and columns. For more specific behaviors, you can use the **Results** tab and JSONPath settings to configure columns. > [!NOTE] > Do *not* include sensitive information in fields like headers, parameters, body, and URL, because they'll be visible to all the workbook users. This provider supports [JSONPath](workbooks-jsonpath.md). Merging data from different sources can enhance the insights experience. An example is augmenting active alert information with related metric data. Merging data allows users to see not just the effect (an active alert) but also potential causes, for example, high CPU usage. The monitoring domain has numerous such correlatable data sources that are often critical to the triage and diagnostic workflow. -With workbooks, you can query different data sources. Workbooks also provide simple controls that you can use to merge or join data to provide rich insights. The *merge* control is the way to achieve it. A single merge data source can do many merges in one step. For example, a *single* merge data source can merge results from a step using Azure Resource Graph with Azure Metrics, and then merge that result with another step using the Azure Resource Manager data source in one query item. +With workbooks, you can query different data sources. Workbooks also provide simple controls that you can use to merge or join data to provide rich insights. The *merge* control is the way to achieve it. A single merge data source can do many merges in one step. For example, a *single* merge data source can merge results from a step using Azure Resource Graph with Azure Metrics, and then merge that result with another step using the Azure Resource Manager data source in one query item. > [!NOTE] > Although hidden query and metrics steps run if they're referenced by a merge step, hidden query items that use the merge data source don't run while hidden. > A step that uses merge and attempts to reference a hidden step by using merge data source won't run until that hidden step becomes visible.-> A single merge step can merge many data sources at once. There's rarely a case where a merge data source will reference another merge data source. +> A single merge step can merge many data sources at once. There's rarely a case where a merge data source will reference another merge data source. -### Combine alerting data with Log Analytics VM performance data +### Combine alerting data with Log Analytics Virtual Machine (VM) performance data The following example combines alerting data with Log Analytics VM performance data to get a rich insights grid. <!-- convertborder later --> Workbooks support getting data from any external source. If your data lives outs To make a query control that uses this data source, use the **Data source** dropdown and select **Custom Endpoint**. 
Provide the appropriate parameters, such as **Http method**, **url**, **headers**, **url parameters**, and **body**. Make sure your data source supports [CORS](https://developer.mozilla.org/en-US/docs/Web/HTTP/CORS). Otherwise, the request will fail.

-To avoid automatically making calls to untrusted hosts when you use templates, you need to mark the used hosts as trusted. You can either select **Add as trusted** or add it as a trusted host in workbook settings. These settings will be saved in [browsers that support IndexDb with web workers](https://caniuse.com/#feat=indexeddb).
+To avoid automatically making calls to untrusted hosts when you use templates, you need to mark the used hosts as trusted. You can either select **Add as trusted** or add it as a trusted host in workbook settings. These settings are saved locally in [browsers that support IndexedDB with web workers](https://caniuse.com/#feat=indexeddb).

This provider supports [JSONPath](workbooks-jsonpath.md).

## Workload health

-Azure Monitor has functionality that proactively monitors the availability and performance of Windows or Linux guest operating systems. Azure Monitor models key components and their relationships, criteria for how to measure the health of those components, and which components alert you when an unhealthy condition is detected. With workbooks, you can use this information to create rich interactive reports.
+Azure Monitor has functionality that proactively monitors the availability and performance of Windows or Linux guest operating systems. Azure Monitor models key components and their relationships, criteria for how to measure the health of those components, and can alert you when an unhealthy condition is detected. With workbooks, you can use this information to create rich interactive reports.

To make a query control that uses this data source, use the **Query type** dropdown to select **Workload Health**. Then select subscription, resource group, or VM resources to target. Use the health filter dropdowns to select an interesting subset of health incidents for your analytic needs.
<!-- convertborder later -->

To make a query control that uses this data sourd
## Azure RBAC

-The Azure role-based access control (RBAC) provider allows you to check permissions on resources. It's most commonly used in parameters to check if the correct RBACs are set up. A use case would be to create a parameter to check deployment permission and then notify the user if they don't have deployment permission.
+The Azure role-based access control (RBAC) provider allows you to check permissions on resources. It can be used in parameters to check whether the correct RBAC permissions are set up. A use case would be to create a parameter to check deployment permission and then notify the user if they don't have deployment permission.

-Simple JSON arrays or objects will automatically be converted into grid rows and columns or text with a `hasPermission` column with either true or false. The permission is checked on each resource and then either `or` or `and` to get the result. The [operations or actions](../../role-based-access-control/resource-provider-operations.md) can be a string or an array.
+Simple JSON arrays or objects are converted into grid rows and columns or text with a `hasPermission` column with either true or false. The permission is checked on each resource, and the results are then combined by using either `or` or `and`.
The [operations or actions](../../role-based-access-control/resource-provider-operations.md) can be a string or an array. **String:** ``` |
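For illustration only (the article's own examples are truncated in this excerpt), the checked operation can be written as a single action string; the action names below are hypothetical:

```json
"microsoft.resources/deployments/write"
```

Or as an array of actions, each of which is checked and combined with `or` or `and` as described above:

```json
["microsoft.resources/deployments/read", "microsoft.resources/deployments/write"]
```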
azure-monitor | Workbooks Dropdowns | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/visualize/workbooks-dropdowns.md | description: Use dropdown parameters to simplify complex reporting with prebuilt ibiza Previously updated : 06/21/2023 Last updated : 08/14/2024 # Workbook dropdown parameters By using dropdown parameters, you can collect one or more input values from a kn The easiest way to specify a dropdown parameter is by providing a static list in the parameter setting. A more interesting way is to get the list dynamically via a KQL query. You can also specify whether it's single or multi-select by using parameter settings. If it's multi-select, you can specify how the result set should be formatted, for example, as delimiter or quotation. +## Dropdown parameter components +When using either static JSON content or getting dynamic values from queries, dropdown parameters allow up to four fields of information, in this specific order: ++1. `value` (required): the first column/field in the data is used as the literal value of the parameter. In the case of simple static JSON parameters, it can be as simple as the JSON content `["dev", "test", "prod"]`, which would create a dropdown of three items with those values as both the value and the label in the dropdown. The name of this field doesn't need to be `value`; the dropdown uses the first field in the data regardless of its name. +1. `label` (optional): the second column/field in the data is used as the display name/label of the parameter in the dropdown. If not specified, the value is used as the label. The name of this field doesn't need to be `label`; the dropdown uses the second field in the data regardless of its name. +1. `selected` (optional): the third column/field in the data is used to specify which value should be selected by default. If not specified, no items are selected by default. The selection behavior is based on the JavaScript "falsy" concept, so values like `0`, `false`, `null`, or empty strings are treated as not selected. The name of this field doesn't need to be `selected`; the dropdown uses the third field in the data regardless of its name. ++ > [!NOTE] + > This only controls the *default* selection; once a user has selected values in the dropdown, those user-selected values are used, even if a subsequent query for the parameter runs and returns new default values. To return to the default selection, the user can use the "Default Items" option in the dropdown, which re-queries the default values and applies them. + > + > Default values are only applied if no items have been selected by the user. + > + > If a subsequent query returns items that do *not* include previously selected values, the missing values are removed from the selection. The selected items in the dropdown become the intersection of the items returned by the query and the items that were previously selected. ++1. `group` (optional): unlike the other fields, the grouping column *must* be named `group` and appear after `value`, `label`, and `selected`. This field in the data is used to group the items in the dropdown. If not specified, no grouping is used. If default selection isn't needed, the data/query must still return a `selected` field in at least one object/row, even if all the values are `false`. ++> [!NOTE] +> Any other fields in the data are ignored by the dropdown parameter.
It's suggested to limit the content to just those fields used by the dropdown, to avoid complicated queries returning data that is ignored. + ## Create a static dropdown parameter 1. Start with an empty workbook in edit mode. The easiest way to specify a dropdown parameter is by providing a static list in 1. **Parameter type**: `Drop down` 1. **Required**: `checked` 1. **Allow multiple selections**: `unchecked`- 1. **Get data from**: `JSON` + 1. **Get data from**: `JSON`, or select `Query` and select the `JSON` data source. + + The JSON data source allows the JSON content to reference any existing parameters. 1. In the **JSON Input** text block, insert this JSON snippet: ```json If your query result/JSON contains a `group` field, the dropdown list displays g <!-- convertborder later --> :::image type="content" source="./media/workbooks-dropdowns/grouped-dropDown.png" lightbox="./media/workbooks-dropdowns/grouped-dropDown.png" alt-text="Screenshot that shows an example of a grouped dropdown list." border="false"::: +> [!NOTE] +> When using a `group` field in your query, you must also supply a value for the `label` and `selected` fields. + ## Create a dynamic dropdown parameter 1. Start with an empty workbook in edit mode. If your query result/JSON contains a `group` field, the dropdown list displays g <!-- convertborder later --> :::image type="content" source="./media/workbooks-dropdowns/dropdown-dynamic.png" lightbox="./media/workbooks-dropdowns/dropdown-dynamic.png" alt-text="Screenshot that shows the creation of a dynamic dropdown parameter." border="false"::: +## Example: Custom labels, selecting the first item by default, and grouping by operation name +The query used in the preceding dynamic dropdown parameter returns a list of values that are rendered in the dropdown list. +If you want a different display name, or to allow the user to select the display name, use the value, label, selection, and group columns. ++The following sample shows how to get a list of distinct Application Insights dependencies. The display names are styled with an emoji, the first item is selected by default, and the items are grouped by operation names: ++```kusto +dependencies +| summarize by operation_Name, name +| where name !contains ('.') +| order by name asc +| serialize Rank = row_number() +| project value = name, label = strcat('🌐 ', name), selected = iff(Rank == 1, true, false), group = operation_Name +``` +<!-- convertborder later --> + ## Reference a dropdown parameter -You can reference dropdown parameters. +You can reference dropdown parameters anywhere that parameters can be used, including substituting the parameter value into queries, visualization settings, Markdown text content, or other places where you can select a parameter as an option. ### In KQL You can reference dropdown parameters. | summarize Requests = count() by bin(timestamp, 1h) ``` -1. Run the query to see the results. Optionally, render it as a chart. +1. Select **Run query** to see the results. Optionally, render it as a chart. <!-- convertborder later --> :::image type="content" source="./media/workbooks-dropdowns/dropdown-reference.png" lightbox="./media/workbooks-dropdowns/dropdown-reference.png" alt-text="Screenshot that shows a dropdown parameter referenced in KQL." border="false"::: -## Parameter value, label, selection, and group --The query used in the preceding dynamic dropdown parameter returns a list of values that are rendered faithfully in the dropdown list.
But what if you wanted a different display name or one of the names to be selected? Dropdown parameters use value, label, selection, and group columns for this functionality. --The following sample shows how to get a list of Application Insights dependencies whose display names are styled with an emoji, has the first one selected, and is grouped by operation names: --```kusto -dependencies -| summarize by operation_Name, name -| where name !contains ('.') -| order by name asc -| serialize Rank = row_number() -| project value = name, label = strcat('🌐 ', name), selected = iff(Rank == 1, true, false), group = operation_Name -``` -<!-- convertborder later --> - ## Dropdown parameter options | Parameter | Description | Example | | - |:-|:-| | `{DependencyName}` | The selected value | GET fabrikamaccount |+| `{DependencyName:value}` | The selected value (same as above) | GET fabrikamaccount | | `{DependencyName:label}` | The selected label | 🌐 GET fabrikamaccount |-| `{DependencyName:value}` | The selected value | GET fabrikamaccount | +| `{DependencyName:escape}` | The selected value, with any common quote characters replaced when formatted into queries | GET fabrikamaccount | ## Multiple selection The examples so far explicitly set the parameter to select only one value in the dropdown list. Dropdown parameters also support *multiple selection*. To enable this option, select the **Allow multiple selections** checkbox. -You can specify the format of the result set via the **Delimiter** and **Quote with** settings. The default returns the values as a collection in the form of **a**, **b**, **c**. You can also limit the number of selections. +You can specify the format of the result set via the **Delimiter** and **Quote with** settings. By default, `,` (comma) is used as the delimiter, and `'` (single quote) is used as the quote character. The default returns the values as a collection in the form of `'a', 'b', 'c'` when formatted into the query. You can also limit the maximum number of selections. ++When using a multiple select parameter in a query, make sure that the KQL referencing the parameter works with the format of the result. For example: +- A single-value parameter doesn't include any quotes when formatted into a query, so make sure to include the quotes in the query itself, for example: `where name == '{parameter}'`. +- Quotes are included in the formatted parameter when using a multiple-select parameter, so make sure that the query doesn't include quotes. For example, `where name in ({parameter})`. -The KQL referencing the parameter needs to change to work with the format of the result. The most common way to enable it is via the `in` operator. +Note how this example also switched from `name ==` to `name in`. The `==` operator only allows a single value, while the `in` operator allows multiple values; a short sketch contrasting the two forms appears at the end of this entry. ```kusto dependencies This example shows the multi-select dropdown parameter at work: ## Dropdown special selections -Dropdown parameters also allow you to specify special values that will also appear in the dropdown: +Dropdown parameters also allow you to specify special values that also appear in the dropdown: * Any one * Any three * ... Dropdown parameters also allow you to specify special values that will also appe When these special items are selected, the parameter value is automatically set to the specific number of items, or all values.
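To make the single- versus multiple-selection formats concrete, here are two hypothetical query variants (the `requests` table and the parameter names are illustrative, not taken from the article):

```kusto
// Single-select: the formatted value has no quotes, so quote it in the query.
// {Environment} formats to: production
requests
| where cloud_RoleName == '{Environment}'
```

```kusto
// Multi-select with the default delimiter and quote settings: the formatted
// value already includes quotes and delimiters, such as 'dev', 'test', 'production'
requests
| where cloud_RoleName in ({Environments})
```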
-### Special casing All +### Special casing All, and allowing an empty selection to be treated as All -When you select the **All** option, an extra field appears, which allows you to specify that a special value will be used for the parameter if the **All** option is selected. This special value is useful for cases where "All" could be a large number of items and could generate a very large query. +When you select **All**, an additional field appears, which allows you to specify the special value to use for the parameter when the **All** option is selected. This is useful when "All" could be a large number of items and could generate a very large query. :::image type="content" source="./media/workbooks-dropdowns/dropdown-all.png" alt-text="Screenshot of the New Parameter window in the Azure portal. The All option is selected and the All option and Select All value field are highlighted." lightbox="./media/workbooks-dropdowns/dropdown-all.png"::: SomeQuery | where array_length(selection) == 0 or SomeField in (selection) ``` -If all items are selected, the value of `Selection` is `[]`, producing an empty array for the `selection` variable in the query. If no values are selected, the value of `Selection` will be an empty string, also resulting in an empty array. If any values are selected, they are formatted inside the dynamic part of the query, causing the array to have those values. You can then test for `array_length` of 0 to have the filter not apply or use the `in` operator to filter on the values in the array. +If all items are selected, the value of `Selection` is `[]`, producing an empty array for the `selection` variable in the query. If no values are selected, the value of `Selection` is formatted as an empty string, also resulting in an empty array. If any values are selected, they're formatted inside the dynamic part of the query, causing the array to have those values. You can then test for `array_length` of 0 to have the filter not apply or use the `in` operator to filter on the values in the array. Other common examples use '*' as the special marker value when a parameter is required, and then test with: |
azure-resource-manager | Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/overview.md | Bicep provides the following advantages: You can also create Bicep files in Visual Studio with the [Bicep extension for Visual Studio](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.visualstudiobicep). -- **Repeatable results**: Repeatedly deploy your infrastructure throughout the development lifecycle and have confidence your resources are deployed in a consistent manner. Bicep files are idempotent, which means you can deploy the same file many times and get the same resource types in the same state. You can develop one file that represents the desired state, rather than developing lots of separate files to represent updates. For example, the following file creates a storage account. If you deploy this template and the storage account with the specified properties already exists , no changes is made.+- **Repeatable results**: Repeatedly deploy your infrastructure throughout the development lifecycle and have confidence your resources are deployed in a consistent manner. Bicep files are idempotent, which means you can deploy the same file many times and get the same resource types in the same state. You can develop one file that represents the desired state, rather than developing lots of separate files to represent updates. For example, the following file creates a storage account. If you deploy this template and the storage account with the specified properties already exists, no changes are made. # [Bicep](#tab/bicep) |
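The storage account file itself is elided from this excerpt; a minimal sketch of such an idempotent Bicep file might look like the following (the resource name, API version, and SKU are illustrative assumptions, not the article's exact example):

```bicep
// Deploying this file repeatedly yields the same storage account in the same
// state; if the account already exists with these properties, nothing changes.
param location string = resourceGroup().location

resource storageAccount 'Microsoft.Storage/storageAccounts@2023-01-01' = {
  name: 'examplestorage001'
  location: location
  sku: {
    name: 'Standard_LRS'
  }
  kind: 'StorageV2'
}
```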
azure-resource-manager | Azure Services Resource Providers | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/azure-services-resource-providers.md | The resource providers for DevOps services are: | | - | | microsoft.visualstudio | [Azure DevOps](/azure/devops/) | | Microsoft.VSOnline | [Azure DevOps](/azure/devops/) |+| Microsoft.DevOpsInfrastructure | [Managed DevOps Pools](/azure/devops/managed-devops-pools/) | ## Hybrid resource providers |
azure-web-pubsub | Howto Client Certificate | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/howto-client-certificate.md | module.exports = async function (context, req) { var certThumbprint = null; if (req.body.clientCertificates) { certThumbprint = req.body.clientCertificates[0].thumbprint;+ // Certificate content in PEM + var certContent = req.body.clientCertificates[0].content; + var cert = new crypto.X509Certificate(certContent); + console.log('Client cert:', cert); } if (certThumbprint != validCertThumbprint) { context.log('Expect client cert:', validCertThumbprint, 'but got:', certThumbprint); |
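As a hedged follow-up to the snippet above: rather than trusting the `thumbprint` field alone, the thumbprint can also be derived from the PEM content itself, since Node.js (15.6+) exposes the certificate's SHA-1 fingerprint. The helper name and the normalization below are illustrative assumptions, not part of the article:

```javascript
const crypto = require('crypto');

// Derive the SHA-1 thumbprint from the PEM content of a client certificate.
// cert.fingerprint is colon-separated hex (for example 'AB:CD:...'), so strip
// the colons to compare it against a bare thumbprint string.
function thumbprintFromPem(pem) {
    const cert = new crypto.X509Certificate(pem);
    return cert.fingerprint.replace(/:/g, '');
}
```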
azure-web-pubsub | Reference Cloud Events | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/reference-cloud-events.md | ce-eventName: connect "subprotocols": [], "clientCertificates": [ {- "thumbprint": "ABC" + "thumbprint": "<certificate SHA-1 thumbprint>", + "content": "--BEGIN CERTIFICATE--\r\n...\r\n--END CERTIFICATE--" } ] } |
azure-web-pubsub | Tutorial Pub Sub Messages | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/tutorial-pub-sub-messages.md | -The Azure Web PubSub service helps you to easily build real-time web messaging applications. In this tutorial, you'll learn how to subscribe to the service using WebSocket API and publish messages using the Web PubSub service SDK. +The Azure Web PubSub service helps you easily build real-time web messaging applications. In this tutorial, you learn how to subscribe to the service using the WebSocket API and publish messages using the Web PubSub service SDK. In this tutorial, you learn how to: > [!div class="checklist"] > * Create a Web PubSub service instance > * Generate the full URL to establish the WebSocket connection-> * Create a Web PubSub subscriber client to receive messages using standard WebSocket protocol -> * Create a Web PubSub publisher client to publish messages using Web PubSub service SDK +> * Create a Web PubSub subscriber client to receive messages using the standard WebSocket protocol +> * Create a Web PubSub publisher client to publish messages using the Web PubSub service SDK [!INCLUDE [azure-web-pubsub-tutorial-prerequisites](includes/cli-awps-prerequisites.md)] You can use the Windows cmd.exe command shell instead of a Bash shell to run the commands in this tutorial. -If creating the project on a local machine, you'll need to install the dependencies for the language you're using: +If you're creating the project on a local machine, you need to install the dependencies for the language you're using: # [C#](#tab/csharp) If creating the project on a local machine, you'll need to install the dependenc ### Create a Web PubSub instance -Use the Azure CLI [az webpubsub create](/cli/azure/webpubsub#az-webpubsub-create) command to create a Web PubSub in the resource group you've created. The following command creates a _Free_ Web PubSub resource under resource group `myResourceGroup` in `EastUS`: +To create a Web PubSub instance in the resource group you created, use the Azure CLI [az webpubsub create](/cli/azure/webpubsub#az-webpubsub-create) command. The following command creates a _Free_ Web PubSub resource under resource group `myResourceGroup` in `EastUS`: Each Web PubSub resource must have a unique name. Replace <your-unique-resource-name> with the name of your Web PubSub instance in the following command. Each Web PubSub resource must have a unique name. Replace <your-unique-resour az webpubsub create --resource-group myResourceGroup --name <your-unique-resource-name> --location EastUS --sku Free_F1 ``` -The output of this command shows properties of the newly created resource. Take note of the two properties listed below: +The output of this command shows properties of the newly created resource. Take note of the following properties: * **name**: The Web PubSub name you provided in the `--name` parameter above. * **hostName**: In the example, the host name is `<your-unique-resource-name>.webpubsub.azure.com/`. Clients connect to the Azure Web PubSub service through the standard WebSocket p dotnet add package Azure.Messaging.WebPubSub --version 1.0.0 ``` -1. Replace the code in the `Program.cs` with the following code that will connect to the service: +1.
Replace the code in the `Program.cs` with the following code that connects to the service: ```csharp using System; Clients connect to the Azure Web PubSub service through the standard WebSocket p After the connection is established, your client receives messages through the WebSocket connection. The client uses `client.MessageReceived.Subscribe(msg => ...));` to listen for incoming messages. -1. Run the following command replacing `<Web-PubSub-connection-string>` with the connection string you copied earlier. If you are using Windows command shell, you can use `set` instead of `export`. +1. Run the following command, replacing `<Web-PubSub-connection-string>` with the connection string you copied earlier. If you're using Windows command shell, you can use `set` instead of `export`. ```bash export WebPubSubConnectionString=<Web-PubSub-connection-string> Clients connect to the Azure Web PubSub service through the standard WebSocket p ## 2. Publish messages using service SDK -Create a publisher using the Azure Web PubSub SDK to publish a message to the connected client. For this project, you'll need to open another command shell. +Create a publisher using the Azure Web PubSub SDK to publish a message to the connected client. For this project, you need to open another command shell. # [C#](#tab/csharp) Create a publisher using the Azure Web PubSub SDK to publish a message to the co dotnet run <Web-PubSub-connection-string> "myHub1" "Hello World" ``` -1. Check the command shell of the subscriber to see that it received the message: +1. Verify that the subscriber's command shell receives the message: ```console Message received: Hello World Create a publisher using the Azure Web PubSub SDK to publish a message to the co ``` -1. Use Azure Web PubSub SDK to publish a message to the service. Create a `publish.js` file with the below code: +1. Use the Azure Web PubSub SDK to publish a message to the service. Create a `publish.js` file with the following code: ```javascript const { WebPubSubServiceClient } = require('@azure/web-pubsub'); Create a publisher using the Azure Web PubSub SDK to publish a message to the co The `service.sendToAll()` call simply sends a message to all connected clients in a hub. -1. To send a message, run the following command replacing `<Web-PubSub-connection-string>` with the connection string you copied earlier. If you are using Windows command shell, you can use `set` instead of `export`. +1. To send a message, run the following command, replacing `<Web-PubSub-connection-string>` with the connection string you copied earlier. If you're using the Windows command shell, you can use `set` instead of `export`. ```bash export WebPubSubConnectionString=<Web-PubSub-connection-string> Create a publisher using the Azure Web PubSub SDK to publish a message to the co ``` -1. Use the Azure Web PubSub SDK to publish a message to the service. Create a `publish.py` file with the below code: +1. Use the Azure Web PubSub SDK to publish a message to the service. Create a `publish.py` file with the following code: ```python import sys Create a publisher using the Azure Web PubSub SDK to publish a message to the co # [Java](#tab/java) -1. Go to the `pubsub` directory. Use Maven to create a publisher console app `webpubsub-quickstart-publisher` and go to the *webpubsub-quickstart-publisher* directory: +1. Go to the `pubsub` directory.
Use Maven to create a publisher console app `webpubsub-quickstart-publisher` and go to the *webpubsub-quickstart-publisher* directory: ```bash mvn archetype:generate --define interactiveMode=n --define groupId=com.webpubsub.quickstart --define artifactId=webpubsub-quickstart-publisher --define archetypeArtifactId=maven-archetype-quickstart --define archetypeVersion=1.4 Create a publisher using the Azure Web PubSub SDK to publish a message to the co The `sendToAll()` call sends a message to all connected clients in a hub. -1. To send a message, go to the *webpubsub-quickstart-publisher* directory and run the project using the following command. Replace the `<Web-PubSub-connection-string>` with the connection string you copied earlier. +1. To send a message, go to the *webpubsub-quickstart-publisher* directory and run the project using the following command. Replace the `<Web-PubSub-connection-string>` with the connection string you copied earlier. ```bash mvn compile & mvn package & mvn exec:java -Dexec.mainClass="com.webpubsub.quickstart.App" -Dexec.cleanupDaemonThreads=false -Dexec.args="<Web-PubSub-connection-string> 'myHub1' 'Hello World'" You can delete the resources that you created in this quickstart by deleting the az group delete --name myResourceGroup --yes ``` -If you aren't planning to continue using Azure Cloud Shell, you can avoid accumulating costs by deleting the resource group that contains the associated the storage account. The resource group is named `cloud-shell-storage-<your-region>`. Run the following command, replacing `<CloudShellResourceGroup>` with the Cloud Shell group name. +If you aren't planning to continue using Azure Cloud Shell, you can avoid accumulating costs by deleting the resource group that contains the associated storage account. The resource group is named `cloud-shell-storage-<your-region>`. Run the following command, replacing `<CloudShellResourceGroup>` with the Cloud Shell group name. ```azurecli |
cloud-services | Cloud Services Guestos Msrc Releases | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-guestos-msrc-releases.md | Title: List of updates applied to the Azure Guest OS | Microsoft Docs description: This article lists the Microsoft Security Response Center updates applied to different Azure Guest OS. See if an update applies to your Guest OS. -+ ms.assetid: d0a272a9-ed01-4f4c-a0b3-bd5e841bdd77 Previously updated : 07/31/2024- Last updated : 09/03/2024+ # Azure Guest OS The following tables show the Microsoft Security Response Center (MSRC) updates applied to the Azure Guest OS. Search this article to determine if a particular update applies to your Guest OS. Updates always carry forward for the particular [family][family-explain] they were introduced in. +## August 2024 Guest OS +| Product Category | Parent KB Article | Vulnerability Description | Guest OS | Date First Introduced | +| | | | | | +| Rel 24-08 | 5041160 | Latest Cumulative Update(LCU) | [7.44] | Aug 13, 2024 +| Rel 24-08 | 5041578 | Latest Cumulative Update(LCU) | [6.74] | Aug 13, 2024 +| Rel 24-08 | 5041773 | Latest Cumulative Update(LCU) | [5.98] | Aug 13, 2024 +| Rel 24-08 | 5041942 | .NET Framework 3.5 Security and Quality Rollup | [2.154] | May 14, 2024 +| Rel 24-08 | 5041926 | .NET Framework 4.7.2 Cumulative Update LKG | [2.154] | Apr 9, 2024 +| Rel 24-08 | 5041936 | .NET Framework 3.5 Security and Quality Rollup LKG | [3.142] | Aug 13, 2024 +| Rel 24-08 | 5041919 | .NET Framework 4.7.2 Cumulative Update LKG | [3.142] | Aug 13, 2024 +| Rel 24-08 | 5041945 | .NET Framework 3.5 Security and Quality Rollup LKG | [4.134] | Aug 13, 2024 +| Rel 24-08 | 5041923 | .NET Framework 4.7.2 Cumulative Update LKG | [4.134] | Aug 13, 2024 +| Rel 24-08 | 5041913 | .NET Framework Dot Net | [6.74] | Aug 13, 2024 +| Rel 24-08 | 5041948 | .NET Framework 4.8 Security and Quality Rollup LKG | [7.44] | Aug 13, 2024 +| Rel 24-08 | 5041838 | Monthly Rollup | [2.154] | Aug 13, 2024 +| Rel 24-08 | 5041851 | Monthly Rollup | [3.142] | Aug 13, 2024 +| Rel 24-08 | 5041828 | Monthly Rollup | [4.134] | Aug 13, 2024 +| Rel 24-08 | 5041589 | Servicing Stack Update | [3.142] | Aug 13, 2024 +| Rel 24-08 | 5041588 | Servicing Stack Update | [4.134] | Aug 13, 2024 +| Rel 24-08 | 5041576 | Servicing Stack Update | [5.98] | Aug 13, 2024 +| Rel 24-08 | 5039339 | Servicing Stack Update LKG | [2.154] | Jun 11, 2024 +| Rel 24-06 | 5041577 | Servicing Stack Update | [6.74] | Aug 13, 2024 +| Rel 24-06 | 5041590 | Servicing Stack Update | [7.44] | Aug 13, 2024 +| Rel 24-08 | 4494175 | January '20 Microcode | [5.98] | Sep 1, 2020 +| Rel 24-08 | 4494175 | January '20 Microcode | [6.74] | Sep 1, 2020 +++[5041160]: https://support.microsoft.com/kb/5041160 +[5041578]: https://support.microsoft.com/kb/5041578 +[5041773]: https://support.microsoft.com/kb/5041773 +[5041942]: https://support.microsoft.com/kb/5041942 +[5041926]: https://support.microsoft.com/kb/5041926 +[5041936]: https://support.microsoft.com/kb/5041936 +[5041919]: https://support.microsoft.com/kb/5041919 +[5041945]: https://support.microsoft.com/kb/5041945 +[5041923]: https://support.microsoft.com/kb/5041923 +[5041913]: https://support.microsoft.com/kb/5041913 +[5041948]: https://support.microsoft.com/kb/5041948 +[5041838]: https://support.microsoft.com/kb/5041838 +[5041851]: https://support.microsoft.com/kb/5041851 +[5041828]: https://support.microsoft.com/kb/5041828 +[5041589]: https://support.microsoft.com/kb/5041589 +[5041588]: 
https://support.microsoft.com/kb/5041588 +[5041576]: https://support.microsoft.com/kb/5041576 +[5039339]: https://support.microsoft.com/kb/5039339 +[5041577]: https://support.microsoft.com/kb/5041577 +[5041590]: https://support.microsoft.com/kb/5041590 +[4494175]: https://support.microsoft.com/kb/4494175 ++[2.154]: ./cloud-services-guestos-update-matrix.md#family-2-releases +[3.142]: ./cloud-services-guestos-update-matrix.md#family-3-releases +[4.134]: ./cloud-services-guestos-update-matrix.md#family-4-releases +[5.98]: ./cloud-services-guestos-update-matrix.md#family-5-releases +[6.74]: ./cloud-services-guestos-update-matrix.md#family-6-releases +[7.44]: ./cloud-services-guestos-update-matrix.md#family-7-releases + ## July 2024 Guest OS | Product Category | Parent KB Article | Vulnerability Description | Guest OS | Date First Introduced | |
cloud-services | Cloud Services Guestos Update Matrix | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-guestos-update-matrix.md | Title: Learn about the latest Azure Guest OS Releases | Microsoft Docs description: The latest release news and SDK compatibility for Azure Cloud Services Guest OS. -+ ms.assetid: 6306cafe-1153-44c7-8554-623b03d59a34 Previously updated : 07/31/2024- Last updated : 09/03/2024+ # Azure Guest OS releases and SDK compatibility matrix Unsure about how to update your Guest OS? Check [this][cloud updates] out. ## News updates +###### **Aug 27, 2024** +The August 2024 Guest OS released. + ###### **July 31, 2024** The July Guest OS released. The September Guest OS released. | Configuration string | Release date | Disable date | | | | |+| WA-GUEST-OS-7.44_202408-01 | August 27, 2024 | Post 7.47 | | WA-GUEST-OS-7.43_202407-01 | July 31, 2024 | Post 7.46 | | WA-GUEST-OS-7.42_202406-01 | June 27, 2024 | Post 7.45 |-| WA-GUEST-OS-7.41_202405-01 | June 1, 2024 | Post 7.44 | +|~~WA-GUEST-OS-7.41_202405-01~~| June 1, 2024 | August 27, 2024 | |~~WA-GUEST-OS-7.40_202404-01~~| April 19, 2024 | July 31, 2024 | |~~WA-GUEST-OS-7.39_202403-02~~| April 9, 2024 | June 27, 2024 | |~~WA-GUEST-OS-7.38_202402-01~~| February 24, 2024 | June 1, 2024 | The September Guest OS released. | Configuration string | Release date | Disable date | | | | |+| WA-GUEST-OS-6.74_202408-01 | August 27, 2024 | Post 6.77 | | WA-GUEST-OS-6.73_202407-01 | July 31, 2024 | Post 6.76 | | WA-GUEST-OS-6.72_202406-01 | June 27, 2024 | Post 6.75 |-| WA-GUEST-OS-6.71_202405-01 | June 1, 2024 | Post 6.74 | +|~~WA-GUEST-OS-6.71_202405-01~~| June 1, 2024 | August 27, 2024 | |~~WA-GUEST-OS-6.70_202404-01~~| April 19, 2024 | July 31, 2024 | |~~WA-GUEST-OS-6.69_202403-02~~| April 9, 2024 | June 27, 2024 | |~~WA-GUEST-OS-6.68_202402-01~~| February 24, 2024 | June 1, 2024 | The September Guest OS released. | Configuration string | Release date | Disable date | | | | |+| WA-GUEST-OS-5.98_202408-01 | August 27, 2024 | Post 5.101 | | WA-GUEST-OS-5.97_202407-01 | July 31, 2024 | Post 5.100 | | WA-GUEST-OS-5.96_202406-01 | June 27, 2024 | Post 5.99 |-| WA-GUEST-OS-5.95_202405-01 | June 1, 2024 | Post 5.98 | +|~~WA-GUEST-OS-5.95_202405-01~~| June 1, 2024 | August 27, 2024 | |~~WA-GUEST-OS-5.94_202404-01~~| April 19, 2024 | July 31, 2024 | |~~WA-GUEST-OS-5.93_202403-02~~| April 9, 2024 | June 27, 2024 | |~~WA-GUEST-OS-5.92_202402-01~~| February 24, 2024 | June 1, 2024 | The September Guest OS released. | Configuration string | Release date | Disable date | | | | |+| WA-GUEST-OS-4.134_202408-01 | August 27, 2024 | Post 4.137 | | WA-GUEST-OS-4.133_202407-01 | July 31, 2024 | Post 4.136 | | WA-GUEST-OS-4.132_202406-01 | June 27, 2024 | Post 4.135 |-| WA-GUEST-OS-4.131_202405-01 | June 1, 2024 | Post 4.134 | +|~~WA-GUEST-OS-4.131_202405-01~~| June 1, 2024 | August 27, 2024 | |~~WA-GUEST-OS-4.130_202404-01~~| April 19, 2024 | July 31, 2024 | |~~WA-GUEST-OS-4.129_202403-02~~| April 9, 2024 | June 27, 2024 | |~~WA-GUEST-OS-4.128_202402-01~~| February 24, 2024 | June 1, 2024 | The September Guest OS released. 
| Configuration string | Release date | Disable date | | | | |+| WA-GUEST-OS-3.142_202408-01 | August 27, 2024 | Post 3.145 | | WA-GUEST-OS-3.141_202407-01 | July 31, 2024 | Post 3.144 | | WA-GUEST-OS-3.140_202406-01 | June 27, 2024 | Post 3.143 |-| WA-GUEST-OS-3.139_202405-01 | June 1, 2024 | Post 3.142 | +|~~WA-GUEST-OS-3.139_202405-01~~| June 1, 2024 | August 27, 2024 | |~~WA-GUEST-OS-3.138_202404-01~~| April 19, 2024 | Post 3.141 | |~~WA-GUEST-OS-3.137_202403-02~~| April 9, 2024 | June 27, 2024 | |~~WA-GUEST-OS-3.136_202402-01~~| February 24, 2024 | June 1, 2024 | The September Guest OS released. | Configuration string | Release date | Disable date | | | | |+| WA-GUEST-OS-2.154_202408-01 | August 27, 2024 | Post 2.156 | | WA-GUEST-OS-2.153_202407-01 | July 31, 2024 | Post 2.156 | | WA-GUEST-OS-2.152_202406-01 | June 27, 2024 | Post 2.155 |-| WA-GUEST-OS-2.151_202405-01 | June 1, 2024 | Post 2.154 | +|~~WA-GUEST-OS-2.151_202405-01~~| June 1, 2024 | August 27, 2024 | |~~WA-GUEST-OS-2.150_202404-01~~| April 19, 2024 | July 31, 2024 | |~~WA-GUEST-OS-2.149_202403-02~~| April 9, 2024 | June 27, 2024 | |~~WA-GUEST-OS-2.148_202402-01~~| February 24, 2024 | June 1, 2024 | |
data-factory | Concepts Pipeline Execution Triggers | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/concepts-pipeline-execution-triggers.md | To have your schedule trigger kick off a pipeline run, include a pipeline refere "type": "ScheduleTrigger", "typeProperties": { "recurrence": {- "frequency": <<Minute, Hour, Day, Week, Year>>, + "frequency": <<Minute, Hour, Day, Week>>, "interval": <<int>>, // How often to fire "startTime": <<datetime>>, "endTime": <<datetime>>, |
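For reference, a complete trigger definition using one of the supported frequencies might look like the following sketch (the trigger name, pipeline name, and times are illustrative, not from the article):

```json
{
    "name": "HourlyTrigger",
    "properties": {
        "type": "ScheduleTrigger",
        "typeProperties": {
            "recurrence": {
                "frequency": "Hour",
                "interval": 1,
                "startTime": "2024-09-01T00:00:00Z",
                "timeZone": "UTC"
            }
        },
        "pipelines": [
            {
                "pipelineReference": {
                    "type": "PipelineReference",
                    "referenceName": "ExamplePipeline"
                }
            }
        ]
    }
}
```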
ddos-protection | Ddos Protection Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ddos-protection/ddos-protection-overview.md | Get detailed reports in five-minute increments during an attack, and a complete ](alerts.md) to learn more. - **Azure DDoS Rapid Response:**- During an active attack, customers have access to the DDoS Rapid Response (DRR) team, who can help with attack investigation during an attack and post-attack analysis. For more information, see [Azure DDoS Rapid Response](ddos-rapid-response.md). + During an active attack, customers with Azure DDoS Network Protection enabled have access to the DDoS Rapid Response (DRR) team, who can help with attack investigation during an attack and post-attack analysis. For more information, see [Azure DDoS Rapid Response](ddos-rapid-response.md). - **Native platform integration:** Natively integrated into Azure. Includes configuration through the Azure portal. Azure DDoS Protection understands your resources and resource configuration. |
event-hubs | Event Hubs Data Explorer | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/event-hubs-data-explorer.md | + + Title: Overview of the Event Hubs Data Explorer +description: This article provides an overview of the Event Hubs Data Explorer, which provides an easy way to send data to and receive data from Azure Event Hubs. + Last updated : 08/22/2024+++# Use Event Hubs Data Explorer to run data operations on Event Hubs ++Azure Event Hubs is a scalable event processing service that ingests and processes large volumes of events and data, with low latency and high reliability. For a high-level overview of the service, see [What is Event Hubs?](event-hubs-about.md). ++Developers and operators are often looking for an easy tool to send sample data to their event hub to test the end-to-end flow, or view events at a specific offset (or point in time) for light debugging, often after the fact. The Event Hubs Data Explorer makes these common workflows simple by eliminating the need to write bespoke client applications to test and inspect the data on the event hub. ++This article highlights the functionality of the Azure Event Hubs Data Explorer that is available in the Azure portal. ++Operations run on an Azure Event Hubs namespace are of two kinds. ++ * Management Operations - Create, update, and delete of the Event Hubs namespace and event hubs. + * Data Operations - Send events to and view events from an event hub. ++> [!IMPORTANT] +> * The Event Hubs Data Explorer doesn't support **management operations**. The event hub must be created before the data explorer can send or view events from that event hub. +> * While event payloads (known as **values** in Kafka) sent using the **Kafka protocol** will be visible via the data explorer, the **key** for the specific event will not be visible. +> * We advise against using the Event Hubs Data Explorer for larger messages, as this might result in timeouts, depending on the message size, network latency between the client and the Event Hubs service, and so on. Instead, we recommend that you use your own client to work with larger messages, where you can specify your own timeout values. +> ++## Prerequisites ++To use the Event Hubs Data Explorer tool, [create an Azure Event Hubs namespace and an event hub](event-hubs-create.md). ++## Use the Event Hubs Data Explorer ++To use the Event Hubs data explorer, navigate to the Event Hubs namespace on which you want to perform the data operations. ++Either navigate to the `Data Explorer` directly, where you can pick the event hub, or pick the event hub from the `entities` list and then select `Data Explorer` from the navigation menu. +++### Send Events ++You can send either custom payloads or precanned datasets to the selected event hub using the `Send events` experience. ++To do so, select the `send events` button, which enables the right pane. ++++#### Sending custom payload ++To send a custom payload: +1. **Select Dataset** - Pick `Custom payload`. +2. Select the **Content-Type**, from either `Text/Plain`, `JSON`, or `XML`. +3. Either upload a JSON file, or type out the payload in the **Enter payload** box. +4. **[Optional]** Specify system properties. +5. **[Optional]** Specify custom properties - available as key-value pairs. +6. **[Optional]** If you wish to send multiple payloads, check the **Repeat send** box, and specify the **Repeat send count** (that is, the number of payloads to send) and the **Interval between repeat send in ms**.
Once the payload details are defined, select **Send** to send the event payload as defined. ++++#### Sending precanned dataset ++To send event payloads from a precanned dataset: +1. **Select Dataset** - Pick an option from the **Pre canned datasets**, for example, Yellow taxi, Weather data, and others. +2. **[Optional]** Specify system properties. +3. **[Optional]** Specify custom properties - available as key-value pairs. +4. **[Optional]** If you wish to send multiple payloads, check the **Repeat send** box, and specify the **Repeat send count** (that is, the number of payloads to send) and the **Interval between repeat send in ms**. ++Once the payload details are defined, select **Send** to send the event payload as defined. ++++### View Events ++The Event Hubs Data Explorer enables you to view events and inspect the data that fits your criteria. ++To view events, you can define the following properties, or rely on the defaults: ++++1. **PartitionID** - Pick either a specific partition or select *All partition IDs*. +2. **Consumer Group** - Pick the *$Default* or another consumer group, or create one on the fly. +3. **Event position** - Pick the *Oldest position* (that is, the start of the event hub), the *Newest position* (that is, the latest), or a *Custom position* (for a specific offset, sequence number, or timestamp). +4. **Advanced properties** - Specify the *maximum batch size* and *maximum wait time in seconds*. ++Once the above options are set, select **View events** to pull the events and render them in the data explorer. ++++Once the events are loaded, you can select **View next events** to pull events using the same query again, or **Clear all** to refresh the grid. ++### Download event payload ++When viewing events on a given event hub, you can download the event payload for further review. ++To download the event payload, select the specific event and select the **download** button displayed above the event payload body. ++++## Next steps ++ * Learn more about [Event Hubs](event-hubs-about.md). + * Check out [Event Hubs features and terminology](event-hubs-features.md) |
firewall | Firewall Known Issues | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/firewall-known-issues.md | -This article lists the known issues for [Azure Firewall](overview.md). It is updated as issues are resolved. +This article lists the known issues for [Azure Firewall](overview.md). It's updated as issues are resolved. For Azure Firewall limitations, see [Azure subscription and service limits, quotas, and constraints](../azure-resource-manager/management/azure-subscription-service-limits.md#azure-firewall-limits). Azure Firewall Standard has the following known issues: |Issue |Description |Mitigation | ||||+|DNAT support for private IP addresses limited to Standard and Premium versions|Support for DNAT on Azure Firewall private IP address is intended for enterprises, so it's limited to the Standard and Premium Firewall versions.| None| |Network filtering rules for non-TCP/UDP protocols (for example ICMP) don't work for Internet bound traffic|Network filtering rules for non-TCP/UDP protocols don't work with SNAT to your public IP address. Non-TCP/UDP protocols are supported between spoke subnets and VNets.|Azure Firewall uses the Standard Load Balancer, [which doesn't support SNAT for IP protocols today](../load-balancer/outbound-rules.md#limitations). We're exploring options to support this scenario in a future release.| |Missing PowerShell and CLI support for ICMP|Azure PowerShell and CLI don't support ICMP as a valid protocol in network rules.|It's still possible to use ICMP as a protocol via the portal and the REST API. We're working to add ICMP in PowerShell and CLI soon.| |FQDN tags require a protocol: port to be set|Application rules with FQDN tags require port: protocol definition.|You can use **https** as the port: protocol value. We're working to make this field optional when FQDN tags are used.| |Moving a firewall to a different resource group or subscription isn't supported|Moving a firewall to a different resource group or subscription isn't supported.|Supporting this functionality is on our road map. To move a firewall to a different resource group or subscription, you must delete the current instance and recreate it in the new resource group or subscription.|-|Threat intelligence alerts may get masked|Network rules with destination 80/443 for outbound filtering masks threat intelligence alerts when configured to alert only mode.|Create outbound filtering for 80/443 using application rules. Or, change the threat intelligence mode to **Alert and Deny**.| -|Azure Firewall DNAT doesn't work for private IP destinations|Azure Firewall DNAT support is limited to Internet egress/ingress. DNAT doesn't currently work for private IP destinations. For example, spoke to spoke.|A fix is being investigated.<br><br>Private DNAT is currently in private preview. Watch the [Azure Firewall preview features](firewall-preview.md) article for the public preview announcement.| -|With secured virtual hubs, availability zones can only be configured during deployment.| You can't configure Availability Zones after a firewall with secured virtual hubs has been deployed.|This is by design.| +|Threat intelligence alerts might get masked|Network rules with destination 80/443 for outbound filtering mask threat intelligence alerts when configured to alert only mode.|Create outbound filtering for 80/443 using application rules.
Or, change the threat intelligence mode to **Alert and Deny**.| +|With secured virtual hubs, availability zones can only be configured during deployment.| You can't configure Availability Zones after a firewall with secured virtual hubs is deployed.|This is by design.| |SNAT on inbound connections|In addition to DNAT, connections via the firewall public IP address (inbound) are SNATed to one of the firewall private IPs. This is a requirement today (also for Active/Active NVAs) to ensure symmetric routing.|To preserve the original source for HTTP/S, consider using [XFF](https://en.wikipedia.org/wiki/X-Forwarded-For) headers. For example, use a service such as [Azure Front Door](../frontdoor/front-door-http-headers-protocol.md#from-the-front-door-to-the-backend) or [Azure Application Gateway](../application-gateway/rewrite-http-headers-url.md) in front of the firewall. You can also add WAF as part of Azure Front Door and chain to the firewall. |SQL FQDN filtering support only in proxy mode (port 1433)|For Azure SQL Database, Azure Synapse Analytics, and Azure SQL Managed Instance:<br><br>SQL FQDN filtering is supported in proxy-mode only (port 1433).<br><br>For Azure SQL IaaS:<br><br>If you're using nonstandard ports, you can specify those ports in the application rules.|For SQL in redirect mode (the default if connecting from within Azure), you can instead filter access using the SQL service tag as part of Azure Firewall network rules.-|Outbound SMTP traffic on TCP port 25 is blocked|Outbound email messages that are sent directly to external domains (like `outlook.com` and `gmail.com`) on TCP port 25 can be blocked by the Azure platform. This is the default platform behavior in Azure, Azure Firewall doesn't introduce any more specific restriction. |Use authenticated SMTP relay services, which typically connect through TCP port 587, but also supports other ports. For more information, see [Troubleshoot outbound SMTP connectivity problems in Azure](../virtual-network/troubleshoot-outbound-smtp-connectivity.md).<br><br>Another option is to deploy Azure Firewall in a standard Enterprise Agreement (EA) subscription. Azure Firewall in an EA subscription can communicate with public IP addresses using outbound TCP port 25. Currently, it may also work in other subscription types, but it's not guaranteed to work. For private IP addresses like virtual networks, VPNs, and Azure ExpressRoute, Azure Firewall supports an outbound connection on TCP port 25. -|SNAT port exhaustion|Azure Firewall currently supports 2496 ports per Public IP address per backend Virtual Machine Scale Set instance. By default, there are two Virtual Machine Scale Set instances. So, there are 4992 ports per flow (destination IP, destination port and protocol (TCP or UDP). The firewall scales up to a maximum of 20 instances. |This is a platform limitation. You can work around the limits by configuring Azure Firewall deployments with a minimum of five public IP addresses for deployments susceptible to SNAT exhaustion. This increases the SNAT ports available by five times. Allocate from an IP address prefix to simplify downstream permissions. For a more permanent solution, you can deploy a NAT gateway to overcome the SNAT port limits. This approach is supported for virtual network deployments.
<br /><br /> For more information, see [Scale SNAT ports with Azure Virtual Network NAT](integrate-with-nat-gateway.md).| +|Outbound SMTP traffic on TCP port 25 is blocked|Outbound email messages that are sent directly to external domains (like `outlook.com` and `gmail.com`) on TCP port 25 are blocked by the Azure platform. This is the default platform behavior in Azure. Azure Firewall doesn't introduce any more specific restriction. |Use authenticated SMTP relay services, which typically connect through TCP port 587, but also support other ports. For more information, see [Troubleshoot outbound SMTP connectivity problems in Azure](../virtual-network/troubleshoot-outbound-smtp-connectivity.md).<br><br>Another option is to deploy Azure Firewall in a standard Enterprise Agreement (EA) subscription. Azure Firewall in an EA subscription can communicate with public IP addresses using outbound TCP port 25. Currently, it might also work in other subscription types, but it's not guaranteed to work. For private IP addresses like virtual networks, VPNs, and Azure ExpressRoute, Azure Firewall supports an outbound connection on TCP port 25. +|SNAT port exhaustion|Azure Firewall currently supports 2,496 ports per Public IP address per backend Virtual Machine Scale Set instance. By default, there are two Virtual Machine Scale Set instances. So, there are 4,992 ports per flow (destination IP, destination port, and protocol (TCP or UDP)). The firewall scales up to a maximum of 20 instances. |This is a platform limitation. You can work around the limits by configuring Azure Firewall deployments with a minimum of five public IP addresses for deployments susceptible to SNAT exhaustion. This increases the SNAT ports available by five times. Allocate from an IP address prefix to simplify downstream permissions. For a more permanent solution, you can deploy a NAT gateway to overcome the SNAT port limits. This approach is supported for virtual network deployments. <br /><br /> For more information, see [Scale SNAT ports with Azure Virtual Network NAT](integrate-with-nat-gateway.md).| |DNAT isn't supported with Forced Tunneling enabled|Firewalls deployed with Forced Tunneling enabled can't support inbound access from the Internet because of asymmetric routing.|This is by design because of asymmetric routing. The return path for inbound connections goes via the on-premises firewall, which hasn't seen the connection established.-|Outbound Passive FTP may not work for Firewalls with multiple public IP addresses, depending on your FTP server configuration.|Passive FTP establishes different connections for control and data channels. When a Firewall with multiple public IP addresses sends data outbound, it randomly selects one of its public IP addresses for the source IP address. FTP may fail when data and control channels use different source IP addresses, depending on your FTP server configuration.|An explicit SNAT configuration is planned. In the meantime, you can configure your FTP server to accept data and control channels from different source IP addresses (see [an example for IIS](/iis/configuration/system.applicationhost/sites/sitedefaults/ftpserver/security/datachannelsecurity)). Alternatively, consider using a single IP address in this situation.| -|Inbound Passive FTP may not work depending on your FTP server configuration |Passive FTP establishes different connections for control and data channels.
Inbound connections on Azure Firewall are SNATed to one of the firewall private IP addresses to ensure symmetric routing. FTP may fail when data and control channels use different source IP addresses, depending on your FTP server configuration.|Preserving the original source IP address is being investigated. In the meantime, you can configure your FTP server to accept data and control channels from different source IP addresses.| +|Outbound Passive FTP might not work for Firewalls with multiple public IP addresses, depending on your FTP server configuration.|Passive FTP establishes different connections for control and data channels. When a Firewall with multiple public IP addresses sends data outbound, it randomly selects one of its public IP addresses for the source IP address. FTP might fail when data and control channels use different source IP addresses, depending on your FTP server configuration.|An explicit SNAT configuration is planned. In the meantime, you can configure your FTP server to accept data and control channels from different source IP addresses (see [an example for IIS](/iis/configuration/system.applicationhost/sites/sitedefaults/ftpserver/security/datachannelsecurity)). Alternatively, consider using a single IP address in this situation.| +|Inbound Passive FTP might not work depending on your FTP server configuration |Passive FTP establishes different connections for control and data channels. Inbound connections on Azure Firewall are SNATed to one of the firewall private IP addresses to ensure symmetric routing. FTP might fail when data and control channels use different source IP addresses, depending on your FTP server configuration.|Preserving the original source IP address is being investigated. In the meantime, you can configure your FTP server to accept data and control channels from different source IP addresses.| |Active FTP doesn't work when the FTP client must reach an FTP server across the internet.|Active FTP utilizes a PORT command from the FTP client that directs the FTP server what IP and port to use for the data channel. This PORT command utilizes the private IP of the client that can't be changed. Client-side traffic traversing the Azure Firewall is NATed for Internet-based communications, making the PORT command seen as invalid by the FTP server.|This is a general limitation of Active FTP when used with client-side NAT.| |NetworkRuleHit metric is missing a protocol dimension|The ApplicationRuleHit metric allows filtering based protocol, but this capability is missing in the corresponding NetworkRuleHit metric.|A fix is being investigated.| |NAT rules with ports between 64000 and 65535 are unsupported|Azure Firewall allows any port in the 1-65535 range in network and application rules, however NAT rules only support ports in the 1-63999 range.|This is a current limitation.-|Configuration updates may take five minutes on average|An Azure Firewall configuration update can take three to five minutes on average, and parallel updates aren't supported.|A fix is being investigated.| -|Azure Firewall uses SNI TLS headers to filter HTTPS and MSSQL traffic|If browser or server software doesn't support the Server Name Indicator (SNI) extension, you can't connect through Azure Firewall.|If browser or server software doesn't support SNI, then you may be able to control the connection using a network rule instead of an application rule. 
See [Server Name Indication](https://wikipedia.org/wiki/Server_Name_Indication) for software that supports SNI.| +|Configuration updates might take five minutes on average|An Azure Firewall configuration update can take three to five minutes on average, and parallel updates aren't supported.|A fix is being investigated.| +|Azure Firewall uses SNI TLS headers to filter HTTPS and MSSQL traffic|If browser or server software doesn't support the Server Name Indicator (SNI) extension, you can't connect through Azure Firewall.|If browser or server software doesn't support SNI, then you might be able to control the connection using a network rule instead of an application rule. See [Server Name Indication](https://wikipedia.org/wiki/Server_Name_Indication) for software that supports SNI.| |Can't add firewall policy tags using the portal or Azure Resource Manager (ARM) templates|Azure Firewall Policy has a patch support limitation that prevents you from adding a tag using the Azure portal or ARM templates. The following error is generated: *Couldn't save the tags for the resource*.|A fix is being investigated. Or, you can use the Azure PowerShell cmdlet `Set-AzFirewallPolicy` to update tags.| |IPv6 not currently supported|If you add an IPv6 address to a rule, the firewall fails.|Use only IPv4 addresses. IPv6 support is under investigation.| |Updating multiple IP Groups fails with conflict error.|When you update two or more IP Groups attached to the same firewall, one of the resources goes into a failed state.|This is a known issue/limitation. <br><br>When you update an IP Group, it triggers an update on all firewalls that the IPGroup is attached to. If an update to a second IP Group is started while the firewall is still in the *Updating* state, then the IPGroup update fails.<br><br>To avoid the failure, IP Groups attached to the same firewall must be updated one at a time. Allow enough time between updates to allow the firewall to get out of the *Updating* state.| |Removing RuleCollectionGroups using ARM templates not supported.|Removing a RuleCollectionGroup using ARM templates isn't supported and results in failure.|This isn't a supported operation.| |DNAT rule for allow *any* (*) will SNAT traffic.|If a DNAT rule allows *any* (*) as the Source IP address, then an implicit Network rule matches VNet-VNet traffic and will always SNAT the traffic.|This is a current limitation.| |Adding a DNAT rule to a secured virtual hub with a security provider isn't supported.|This results in an asynchronous route for the returning DNAT traffic, which goes to the security provider.|Not supported.|-| Error encountered when creating more than 2000 rule collections. | The maximal number of NAT/Application or Network rule collections is 2000 (Resource Manager limit). | This is a current limitation. | +| Error encountered when creating more than 2,000 rule collections. | The maximal number of NAT/Application or Network rule collections is 2,000 (Resource Manager limit). | This is a current limitation. | |XFF header in HTTP/S|XFF headers are overwritten with the original source IP address as seen by the firewall.
This is applicable for the following use cases:<br>- HTTP requests<br>- HTTPS requests with TLS termination|A fix is being investigated.| |CanΓÇÖt deploy Firewall with Availability Zones with a newly created Public IP address|When you deploy a Firewall with Availability Zones, you canΓÇÖt use a newly created Public IP address.|First create a new zone redundant Public IP address, then assign this previously created IP address during the Firewall deployment.| |Azure private DNS zone isn't supported with Azure Firewall|Azure private DNS zone doesn't work with Azure Firewall regardless of Azure Firewall DNS settings.|To achieve the desire state of using a private DNS server, use Azure Firewall DNS proxy instead of an Azure private DNS zone.|-|Physical zone 2 in Japan East is unavailable for firewall deployments.|You canΓÇÖt deploy a new firewall with physical zone 2. Additionally, if you stop an existing firewall which is deployed in physical zone 2, it cannot be restarted. For more information, see [Physical and logical availability zones](../reliability/availability-zones-overview.md#physical-and-logical-availability-zones).|For new firewalls, deploy with the remaining availability zones or use a different region. To configure an existing firewall, see [How can I configure availability zones after deployment?](firewall-faq.yml#how-can-i-configure-availability-zones-after-deployment). +|Physical zone 2 in Japan East is unavailable for firewall deployments.|You canΓÇÖt deploy a new firewall with physical zone 2. Additionally, if you stop an existing firewall that is deployed in physical zone 2, it can't be restarted. For more information, see [Physical and logical availability zones](../reliability/availability-zones-overview.md#physical-and-logical-availability-zones).|For new firewalls, deploy with the remaining availability zones or use a different region. To configure an existing firewall, see [How can I configure availability zones after deployment?](firewall-faq.yml#how-can-i-configure-availability-zones-after-deployment). ## Azure Firewall Premium Azure Firewall Premium has the following known issues: |||| |ESNI support for FQDN resolution in HTTPS|Encrypted SNI isn't supported in HTTPS handshake.|Today only Firefox supports ESNI through custom configuration. Suggested workaround is to disable this feature.| |Client Certification Authentication isn't supported|Client certificates are used to build a mutual identity trust between the client and the server. Client certificates are used during a TLS negotiation. Azure firewall renegotiates a connection with the server and has no access to the private key of the client certificates.|None|-|QUIC/HTTP3|QUIC is the new major version of HTTP. It's a UDP-based protocol over 80 (PLAN) and 443 (SSL). FQDN/URL/TLS inspection won't be supported.|Configure passing UDP 80/443 as network rules.| +|QUIC/HTTP3|QUIC is the new major version of HTTP. It's a UDP-based protocol over 80 (PLAN) and 443 (SSL). FQDN/URL/TLS inspection isn't supported.|Configure passing UDP 80/443 as network rules.| Untrusted customer signed certificates|Customer signed certificates aren't trusted by the firewall once received from an intranet-based web server.|A fix is being investigated. 
|Wrong source IP address in Alerts with IDPS for HTTP (without TLS inspection).|When plain text HTTP traffic is in use, and IDPS issues a new alert, and the destination is a public IP address, the displayed source IP address is wrong (the internal IP address is displayed instead of the original IP address).|A fix is being investigated.|-|Certificate Propagation|After a CA certificate is applied on the firewall, it may take between 5-10 minutes for the certificate to take effect.|A fix is being investigated.| +|Certificate Propagation|After a CA certificate is applied on the firewall, it might take between 5-10 minutes for the certificate to take effect.|A fix is being investigated.| |TLS 1.3 support|TLS 1.3 is partially supported. The TLS tunnel from client to the firewall is based on TLS 1.2, and from the firewall to the external Web server is based on TLS 1.3.|Updates are being investigated.| |TLSi intermediate CA certificate expiration|In some unique cases, the intermediate CA certificate can expire two months before the original expiration date.|Renew the intermediate CA certificate two months before the original expiration date. A fix is being investigated.| |
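The tag limitation above points to `Set-AzFirewallPolicy` as the workaround. The following is a minimal PowerShell sketch of that pattern; the policy name, resource group, and tag values are placeholders, and the exact parameter set can vary across Az.Network versions, so verify against the cmdlet reference before relying on it.

```powershell
# Fetch the existing firewall policy (placeholder names).
$policy = Get-AzFirewallPolicy -Name "fw-policy" -ResourceGroupName "fw-rg"

# Update the tags locally, then push the change with Set-AzFirewallPolicy,
# working around the portal/ARM template tagging limitation noted above.
$policy.Tag = @{ environment = "production"; costCenter = "1234" }
Set-AzFirewallPolicy -InputObject $policy
```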
firewall | Firewall Preview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/firewall-preview.md | Title: Azure Firewall preview features -description: Learn about Azure Firewall preview features that are currently publicly available. +description: Learn about Azure Firewall preview features that are publicly available now. With the Azure Firewall Resource Health check, you can now diagnose and get supp Starting in August 2023, this preview is automatically enabled on all firewalls and no action is required to enable this functionality. For more information, see [Resource Health overview](../service-health/resource-health-overview.md). -### Auto-learn SNAT routes (preview) +### Autolearn SNAT routes (preview) -You can configure Azure Firewall to auto-learn both registered and private ranges every 30 minutes. For information, see [Azure Firewall SNAT private IP address ranges](snat-private-range.md#auto-learn-snat-routes-preview). +You can configure Azure Firewall to autolearn both registered and private ranges every 30 minutes. For information, see [Azure Firewall SNAT private IP address ranges](snat-private-range.md#auto-learn-snat-routes-preview). ### Parallel IP Group updates (preview) You can now update multiple IP Groups in parallel at the same time. This is usef For more information, see [IP Groups in Azure Firewall](ip-groups.md#parallel-ip-group-updates-preview). +### Private IP address DNAT rules (preview) ++You can now configure a DNAT rule on Azure Firewall Policy with the private IP address of the Azure Firewall as the destination. Previously, DNAT rules only worked with Azure Firewall Public IP addresses. +This capability helps with connectivity between overlapped IP networks, which is a common scenario for enterprises when onboarding new partners to their network or merging with new acquisitions. +This is also relevant for hybrid scenarios, connecting on-premises datacenters to Azure, where DNAT bridges the gap, enabling communication between private resources over nonroutable IP addresses. ++For more information, see [Filter inbound Internet or intranet traffic with Azure Firewall DNAT using the Azure portal](tutorial-firewall-dnat.md). ++ ## Next steps To learn more about Azure Firewall, see [What is Azure Firewall?](overview.md). |
firewall | Rule Processing | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/rule-processing.md | If still no match is found within application rules, then the packet is evaluate ### DNAT rules and Network rules -Inbound Internet connectivity can be enabled by configuring Destination Network Address Translation (DNAT) as described in [Filter inbound traffic with Azure Firewall DNAT using the Azure portal](../firewall/tutorial-firewall-dnat.md). NAT rules are applied in priority before network rules. If a match is found, the traffic is translated according to the DNAT rule and allowed by the firewall. So the traffic isn't subject to any further processing by other network rules. For security reasons, the recommended approach is to add a specific Internet source to allow DNAT access to the network and avoid using wildcards. +Inbound Internet or intranet (preview) connectivity can be enabled by configuring Destination Network Address Translation (DNAT) as described in [Filter inbound Internet or intranet traffic with Azure Firewall DNAT using the Azure portal](../firewall/tutorial-firewall-dnat.md). NAT rules are applied in priority before network rules. If a match is found, the traffic is translated according to the DNAT rule and allowed by the firewall. So the traffic isn't subject to any further processing by other network rules. For security reasons, the recommended approach is to add a specific Internet source to allow DNAT access to the network and avoid using wildcards. Application rules aren't applied for inbound connections. So, if you want to filter inbound HTTP/S traffic, you should use Web Application Firewall (WAF). For more information, see [What is Azure Web Application Firewall](../web-application-firewall/overview.md)? |
firewall | Tutorial Firewall Dnat Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/tutorial-firewall-dnat-policy.md | Title: 'Tutorial: Filter inbound Internet traffic with Azure Firewall DNAT policy using the portal' + Title: 'Tutorial: Filter inbound Internet or intranet traffic with Azure Firewall DNAT policy using the portal' description: In this tutorial, you learn how to deploy and configure Azure Firewall policy DNAT using the Azure portal. -# Tutorial: Filter inbound Internet traffic with Azure Firewall policy DNAT using the Azure portal +# Tutorial: Filter inbound Internet or intranet traffic with Azure Firewall policy DNAT using the Azure portal -You can configure Azure Firewall policy Destination Network Address Translation (DNAT) to translate and filter inbound Internet traffic to your subnets. When you configure DNAT, the *rule collection action* is set to **DNAT**. Each rule in the NAT rule collection can then be used to translate your firewall public IP address and port to a private IP address and port. DNAT rules implicitly add a corresponding network rule to allow the translated traffic. For security reasons, the recommended approach is to add a specific Internet source to allow DNAT access to the network and avoid using wildcards. To learn more about Azure Firewall rule processing logic, see [Azure Firewall rule processing logic](rule-processing.md). +You can configure Azure Firewall policy Destination Network Address Translation (DNAT) to translate and filter inbound Internet or intranet (preview) traffic to your subnets. When you configure DNAT, the *rule collection action* is set to **DNAT**. Each rule in the NAT rule collection can then be used to translate your firewall public or private IP address and port to a private IP address and port. DNAT rules implicitly add a corresponding network rule to allow the translated traffic. For security reasons, the recommended approach is to add a specific source to allow DNAT access to the network and avoid using wildcards. To learn more about Azure Firewall rule processing logic, see [Azure Firewall rule processing logic](rule-processing.md). In this tutorial, you learn how to: This rule allows you to connect a remote desktop to the Srv-Workload virtual mac 1. For **Protocol**, select **TCP**. 1. For **Destination Ports**, type **3389**. 1. For **Destination Type**, select **IP Address**.-1. For **Destination**, type the firewall public IP address. +1. For **Destination**, type the firewall public or private IP address. 1. For **Translated address**, type the **Srv-Workload** private IP address. 1. For **Translated port**, type **3389**. 1. Select **Add**. |
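For readers who prefer scripting the rule the policy tutorial builds in the portal, here's a hedged PowerShell sketch using the Az.Network firewall policy cmdlets. All names, IP addresses, and the rule collection group name are placeholders, and it assumes the policy already exists; treat it as an illustration of the call shape rather than the tutorial's own steps.

```powershell
# Placeholder names and addresses throughout.
$policy = Get-AzFirewallPolicy -Name "fw-policy" -ResourceGroupName "fw-rg"

# DNAT rule: translate the firewall IP on port 3389 to the workload VM.
$natRule = New-AzFirewallPolicyNatRule -Name "rdp-to-workload" -Protocol "TCP" `
    -SourceAddress "203.0.113.0/24" -DestinationAddress "20.0.0.4" `
    -DestinationPort "3389" -TranslatedAddress "10.0.2.4" -TranslatedPort "3389"

# NAT rule collections carry the DNAT action.
$natCollection = New-AzFirewallPolicyNatRuleCollection -Name "rdp-dnat" -Priority 200 `
    -ActionType "Dnat" -Rule $natRule

# Write the collection into a rule collection group on the policy.
Set-AzFirewallPolicyRuleCollectionGroup -Name "DefaultDnatRuleCollectionGroup" -Priority 100 `
    -RuleCollection $natCollection -FirewallPolicyObject $policy
```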
firewall | Tutorial Firewall Dnat | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/tutorial-firewall-dnat.md | Title: Filter inbound Internet traffic with Azure Firewall DNAT using the portal + Title: Filter inbound Internet or intranet traffic with Azure Firewall DNAT using the portal description: In this article, you learn how to deploy and configure Azure Firewall DNAT using the Azure portal. -# Filter inbound Internet traffic with Azure Firewall DNAT using the Azure portal +# Filter inbound Internet or intranet traffic with Azure Firewall DNAT using the Azure portal -You can configure Azure Firewall Destination Network Address Translation (DNAT) to translate and filter inbound Internet traffic to your subnets. When you configure DNAT, the NAT rule collection action is set to **Dnat**. Each rule in the NAT rule collection can then be used to translate your firewall public IP address and port to a private/public IP address and port. DNAT rules implicitly add a corresponding network rule to allow the translated traffic. For security reasons, the recommended approach is to add a specific Internet source to allow DNAT access to the network and avoid using wildcards. To learn more about Azure Firewall rule processing logic, see [Azure Firewall rule processing logic](rule-processing.md). +You can configure Azure Firewall Destination Network Address Translation (DNAT) to translate and filter inbound Internet traffic to your subnets or intranet traffic between private networks (preview). When you configure DNAT, the NAT rule collection action is set to **Dnat**. Each rule in the NAT rule collection can then be used to translate your firewall public or private IP address and port to a private IP address and port. DNAT rules implicitly add a corresponding network rule to allow the translated traffic. For security reasons, the recommended approach is to add a specific source to allow DNAT access to the network and avoid using wildcards. To learn more about Azure Firewall rule processing logic, see [Azure Firewall rule processing logic](rule-processing.md). > [!NOTE] > This article uses classic Firewall rules to manage the firewall. The preferred method is to use [Firewall Policy](../firewall-manager/policy-overview.md). To complete this procedure using Firewall Policy, see [Tutorial: Filter inbound Internet traffic with Azure Firewall policy DNAT using the Azure portal](tutorial-firewall-dnat-policy.md) For the **SN-Workload** subnet, you configure the outbound default route to go t 7. For **Protocol**, select **TCP**. 1. For **Source type**, select **IP address**. 1. For **Source**, type *. -1. For **Destination Addresses**, type the firewall's public IP address. +1. For **Destination Addresses**, type the firewall's public or private IP address. 1. For **Destination ports**, type **3389**. 1. For **Translated Address** type the private IP address for the Srv-Workload virtual machine. 1. For **Translated port**, type **3389**. |
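Because this article uses classic firewall rules rather than Firewall Policy, a scripted equivalent looks slightly different: the classic cmdlets attach the NAT rule collection directly to the firewall object. Names and addresses below are placeholders; check the cmdlet surface against your Az.Network version.

```powershell
# Placeholder names; classic (non-policy) rules are attached to the firewall itself.
$fw = Get-AzFirewall -Name "Test-FW01" -ResourceGroupName "Test-FW-RG"

# Use a specific source rather than a wildcard, per the security guidance above.
$natRule = New-AzFirewallNatRule -Name "rdp-nat" -Protocol "TCP" `
    -SourceAddress "203.0.113.5" -DestinationAddress "20.0.0.4" -DestinationPort "3389" `
    -TranslatedAddress "10.0.2.4" -TranslatedPort "3389"

$natCollection = New-AzFirewallNatRuleCollection -Name "nat-rc" -Priority 200 -Rule $natRule

# Attach the collection and push the updated configuration.
$fw.AddNatRuleCollection($natCollection)
Set-AzFirewall -AzureFirewall $fw
```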
governance | Create Management Group Azure Cli | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/management-groups/create-management-group-azure-cli.md | directory. You receive a notification when the process is complete. For more inf [hierarchy protection](./how-to/protect-resource-hierarchy.md#setting-require-authorization) isn't enabled. This new management group becomes a child of the Root Management Group or the [default management group](./how-to/protect-resource-hierarchy.md#setting-define-the-default-management-group)- and the creator is given an "Owner" role assignment. Management group service allows this ability - so that role assignments aren't needed at the root level. No users have access to the Root - Management Group when it's created. To avoid the hurdle of finding the Microsoft Entra ID Global Admins to - start using management groups, we allow the creation of the initial management groups at the root - level. + and the creator is given an Owner role assignment. The management group service allows this ability + so that role assignments aren't needed at the root level. When the Root + Management Group is created, users don't have access to it. To start using management groups, the service allows the creation of the initial management groups at the root level. For more information, see [Root management group for each directory](./overview.md#root-management-group-for-each-directory). [!INCLUDE [cloud-shell-try-it.md](~/reusable-content/ce-skilling/azure/includes/cloud-shell-try-it.md)] |
governance | Create Management Group Dotnet | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/management-groups/create-management-group-dotnet.md | directory. You receive a notification when the process is complete. For more inf [hierarchy protection](./how-to/protect-resource-hierarchy.md#setting-require-authorization) isn't enabled. This new management group becomes a child of the Root Management Group or the [default management group](./how-to/protect-resource-hierarchy.md#setting-define-the-default-management-group)- and the creator is given an "Owner" role assignment. Management group service allows this ability - so that role assignments aren't needed at the root level. No users have access to the Root - Management Group when it's created. To avoid the hurdle of finding the Microsoft Entra ID Global Admins to - start using management groups, we allow the creation of the initial management groups at the root - level. + and the creator is given an Owner role assignment. The management group service allows this ability + so that role assignments aren't needed at the root level. When the Root + Management Group is created, users don't have access to it. To start using management groups, the service allows the creation of the initial management groups at the root level. For more information, see [Root management group for each directory](./overview.md#root-management-group-for-each-directory). [!INCLUDE [cloud-shell-try-it.md](~/reusable-content/ce-skilling/azure/includes/cloud-shell-try-it.md)] |
governance | Create Management Group Go | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/management-groups/create-management-group-go.md | directory. You receive a notification when the process is complete. For more inf [hierarchy protection](./how-to/protect-resource-hierarchy.md#setting-require-authorization) isn't enabled. This new management group becomes a child of the Root Management Group or the [default management group](./how-to/protect-resource-hierarchy.md#setting-define-the-default-management-group)- and the creator is given an "Owner" role assignment. Management group service allows this ability - so that role assignments aren't needed at the root level. No users have access to the Root - Management Group when it's created. To avoid the hurdle of finding the Microsoft Entra ID Global Admins to - start using management groups, we allow the creation of the initial management groups at the root - level. + and the creator is given an Owner role assignment. The management group service allows this ability + so that role assignments aren't needed at the root level. When the Root + Management Group is created, users don't have access to it. To start using management groups, the service allows the creation of the initial management groups at the root level. For more information, see [Root management group for each directory](./overview.md#root-management-group-for-each-directory). [!INCLUDE [cloud-shell-try-it.md](~/reusable-content/ce-skilling/azure/includes/cloud-shell-try-it.md)] |
governance | Create Management Group Javascript | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/management-groups/create-management-group-javascript.md | directory. You receive a notification when the process is complete. For more inf [hierarchy protection](./how-to/protect-resource-hierarchy.md#setting-require-authorization) isn't enabled. This new management group becomes a child of the Root Management Group or the [default management group](./how-to/protect-resource-hierarchy.md#setting-define-the-default-management-group)- and the creator is given an "Owner" role assignment. Management group service allows this ability - so that role assignments aren't needed at the root level. No users have access to the Root - Management Group when it's created. To avoid the hurdle of finding the Microsoft Entra ID Global Admins to - start using management groups, we allow the creation of the initial management groups at the root - level. + and the creator is given an Owner role assignment. The management group service allows this ability + so that role assignments aren't needed at the root level. When the Root + Management Group is created, users don't have access to it. To start using management groups, the service allows the creation of the initial management groups at the root level. For more information, see [Root management group for each directory](./overview.md#root-management-group-for-each-directory). [!INCLUDE [cloud-shell-try-it.md](~/reusable-content/ce-skilling/azure/includes/cloud-shell-try-it.md)] |
governance | Create Management Group Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/management-groups/create-management-group-portal.md | directory. You receive a notification when the process is complete. For more inf [hierarchy protection](./how-to/protect-resource-hierarchy.md#setting-require-authorization) isn't enabled. This new management group becomes a child of the Root Management Group or the [default management group](./how-to/protect-resource-hierarchy.md#setting-define-the-default-management-group)- and the creator is given an "Owner" role assignment. Management group service allows this ability - so that role assignments aren't needed at the root level. No users have access to the Root - Management Group when it's created. To avoid the hurdle of finding the Microsoft Entra ID Global Admins to - start using management groups, we allow the creation of the initial management groups at the root - level. + and the creator is given an Owner role assignment. The management group service allows this ability + so that role assignments aren't needed at the root level. When the Root + Management Group is created, users don't have access to it. To start using management groups, the service allows the creation of the initial management groups at the root level. For more information, see [Root management group for each directory](./overview.md#root-management-group-for-each-directory). ### Create in portal |
governance | Create Management Group Powershell | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/management-groups/create-management-group-powershell.md | directory. You receive a notification when the process is complete. For more inf [hierarchy protection](./how-to/protect-resource-hierarchy.md#setting-require-authorization) isn't enabled. This new management group becomes a child of the Root Management Group or the [default management group](./how-to/protect-resource-hierarchy.md#setting-define-the-default-management-group)- and the creator is given an "Owner" role assignment. Management group service allows this ability - so that role assignments aren't needed at the root level. No users have access to the Root - Management Group when it's created. To avoid the hurdle of finding the Microsoft Entra ID Global Admins to - start using management groups, we allow the creation of the initial management groups at the root - level. + and the creator is given an Owner role assignment. The management group service allows this ability + so that role assignments aren't needed at the root level. When the Root + Management Group is created, users don't have access to it. To start using management groups, the service allows the creation of the initial management groups at the root level. For more information, see [Root management group for each directory](./overview.md#root-management-group-for-each-directory). [!INCLUDE [cloud-shell-try-it.md](~/reusable-content/ce-skilling/azure/includes/cloud-shell-try-it.md)] |
governance | Create Management Group Python | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/management-groups/create-management-group-python.md | directory. You receive a notification when the process is complete. For more inf [hierarchy protection](./how-to/protect-resource-hierarchy.md#setting-require-authorization) isn't enabled. This new management group becomes a child of the Root Management Group or the [default management group](./how-to/protect-resource-hierarchy.md#setting-define-the-default-management-group)- and the creator is given an "Owner" role assignment. Management group service allows this ability - so that role assignments aren't needed at the root level. No users have access to the Root - Management Group when it's created. To avoid the hurdle of finding the Microsoft Entra ID Global Admins to - start using management groups, we allow the creation of the initial management groups at the root - level. + and the creator is given an Owner role assignment. The management group service allows this ability + so that role assignments aren't needed at the root level. When the Root + Management Group is created, users don't have access to it. To start using management groups, the service allows the creation of the initial management groups at the root level. For more information, see [Root management group for each directory](./overview.md#root-management-group-for-each-directory). [!INCLUDE [cloud-shell-try-it.md](~/reusable-content/ce-skilling/azure/includes/cloud-shell-try-it.md)] |
governance | Create Management Group Rest Api | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/management-groups/create-management-group-rest-api.md | directory. You receive a notification when the process is complete. For more inf [hierarchy protection](./how-to/protect-resource-hierarchy.md#setting-require-authorization) isn't enabled. This new management group becomes a child of the Root Management Group or the [default management group](./how-to/protect-resource-hierarchy.md#setting-define-the-default-management-group)- and the creator is given an "Owner" role assignment. Management group service allows this ability - so that role assignments aren't needed at the root level. No users have access to the Root - Management Group when it's created. To avoid the hurdle of finding the Microsoft Entra ID Global Admins to - start using management groups, we allow the creation of the initial management groups at the root - level. + and the creator is given an Owner role assignment. The management group service allows this ability + so that role assignments aren't needed at the root level. When the Root + Management Group is created, users don't have access to it. To start using management groups, the service allows the creation of the initial management groups at the root level. For more information, see [Root management group for each directory](./overview.md#root-management-group-for-each-directory). [!INCLUDE [cloud-shell-try-it.md](~/reusable-content/ce-skilling/azure/includes/cloud-shell-try-it.md)] |
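All eight quickstarts above describe the same root-group behavior for a newly created management group. As a point of reference, here's a minimal PowerShell sketch of the creation call the PowerShell quickstart covers; the group names are placeholders, and parameter names can differ across Az.Resources versions.

```powershell
# Create a management group. It becomes a child of the Root Management Group
# (or the default management group), and the creator receives an Owner role assignment.
New-AzManagementGroup -GroupName "Contoso" -DisplayName "Contoso Group"

# Optionally nest a child group under the one just created.
$parent = Get-AzManagementGroup -GroupName "Contoso"
New-AzManagementGroup -GroupName "ContosoSub" -DisplayName "Contoso Subgroup" -ParentId $parent.Id
```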
governance | Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/management-groups/overview.md | root management group is built into the hierarchy to have all management groups fold up to it. The root management group allows for the application of global policies and Azure role assignments-at the directory level. Initially, the [Microsoft Entra Global Administrator needs to elevate -themselves](../../role-based-access-control/elevate-access-global-admin.md) to the User Access +at the directory level. Initially, the Microsoft Entra Global Administrator needs to [elevate their access to manage all Azure subscriptions and management groups](../../role-based-access-control/elevate-access-global-admin.md) to the User Access Administrator role of this root group. After elevating access, the administrator can assign any Azure role to other directory users or groups to manage the hierarchy. As an administrator, you can assign your account as the owner of the root management group. |
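The elevation step referenced above is a one-time, directory-level action that is exposed as a REST call rather than a dedicated cmdlet, as far as I'm aware. A hedged PowerShell sketch using `Invoke-AzRestMethod`:

```powershell
# Must be run while signed in as a Microsoft Entra Global Administrator.
# Grants the caller the User Access Administrator role at root scope (/).
Invoke-AzRestMethod -Method POST -Path "/providers/Microsoft.Authorization/elevateAccess?api-version=2016-07-01"
```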
hdinsight | Azure Monitor Agent | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/azure-monitor-agent.md | Title: Azure Monitor Agent (AMA) migration guide for Azure HDInsight clusters description: Learn how to migrate to Azure Monitor Agent (AMA) in Azure HDInsight clusters. Previously updated : 08/29/2024 Last updated : 09/03/2024 # Azure Monitor Agent (AMA) migration guide for Azure HDInsight clusters The following sections describe how customers can use the new Azure Monitor Agen > > For more information about how to create a Log Analytics workspace, see [Create a Log Analytics workspace in the Azure portal](/azure/azure-monitor/logs/quick-create-workspace). -### Enable Azure monitor agent using Portal +### Approach 1: Enable Azure Monitor Agent using the portal Activate the new integration by going to your cluster's portal page and scrolling down the menu on the left until you reach the Monitoring section. Activate the new integration by going to your cluster's portal page and scrollin 1. Select Save once precondition steps are complete. +### Approach 2: Enable Azure Monitor Agent using Azure PowerShell ++1. Enable the system-assigned MSI. ++ 1. First, get the cluster information to check the cluster's MSI. + + + `Get-AzHDInsightCluster -ResourceGroupName $resourceGroup -ClusterName $cluster` + + ++ 1. If the cluster has no MSI, enable the system-assigned MSI directly. + + `Update-AzHDInsightCluster -ResourceGroupName $resourceGroup -ClusterName $cluster -IdentityType "SystemAssigned"` + + + 1. If the cluster only has a user-assigned MSI, add the system-assigned MSI to its identity. + + + `Update-AzHDInsightCluster -ResourceGroupName $resourceGroup -ClusterName $cluster -IdentityType "SystemAssigned,UserAssigned" -IdentityId "$userAssignedIdentityResourceId"` + + ++1. If the cluster already has a system-assigned MSI, no action is needed. +++1. Create the DCR. ++ For more information, see [Create and edit data collection rules (DCRs)](/azure/azure-monitor/essentials/data-collection-rule-create-edit?tabs=powershell#create-a-dcr). ++ ``` + # The URL of the DCR template file, change {HDIClusterType} to your cluster type. + + # The valid types are: hadoop, hbase, interactivehive, kafka, llap, spark + + $dcrTemplatejsonUrl = "https://hdiconfigactions.blob.core.windows.net/azuremonitoriningagent/DCR/{HDIClusterType}_dcr_template.json" + + $dcrJsonContent = Invoke-RestMethod -Uri $dcrTemplatejsonUrl + + + + # Get details of your Log Analytics workspace. If your workspace is in another subscription, you need to change context to that subscription + + $workspaceResourceGroupName = "{yourWorkspaceResourceGroup}" + + $workspaceName = "{yourWorkspaceName}" + + $workspace = Get-AzOperationalInsightsWorkspace -ResourceGroupName $workspaceResourceGroupName -Name $workspaceName + + + + # Customize the DCR content + + $dcrJsonContent.properties.destinations.logAnalytics[0].workspaceResourceId = $workspace.ResourceId + + $dcrJsonContent.properties.destinations.logAnalytics[0].workspaceId = $workspace.CustomerId + + $dcrJsonContent.location = $workspace.Location + + + + # Create the DCR using the customized JSON (the DCR needs to be in the same location as the Log Analytics workspace). 
+ + # If your HDInsight cluster is in another subscription, you need to change context to your cluster’s subscription + + $dcrName = "{yourDcrName}" + + $resourceGroupName = "{YourDcrResourceGroup}" + + $dcrStr = $dcrJsonContent | ConvertTo-Json -Depth 10 + + $dcr = New-AzDataCollectionRule -Name $dcrName -ResourceGroupName $resourceGroupName -JsonString $dcrStr + ``` + ++1. Associate the DCR. ++ For more information, see [Set up the Azure Monitor agent on Windows client devices](/azure/azure-monitor/agents/azure-monitor-agent-windows-client#create-and-associate-a-monitored-object). + + + ``` + # Associate the DCR with the HDInsight cluster + + $hdinsightClusterResourceId = "/subscriptions/{subscription}/resourceGroups/{resourceGroup}/providers/Microsoft.HDInsight/clusters/{clusterName}" + + $dcrAssociationName = "{yourDcrAssociation}" + + New-AzDataCollectionRuleAssociation -AssociationName $dcrAssociationName -ResourceUri $hdinsightClusterResourceId -DataCollectionRuleId $dcr.Id + ``` ++1. Enable Azure Monitor Agent. ++ ``` + # Enter user information + + $resourceGroup = "<your-resource-group>" + + $cluster = "<your-cluster>" + + $LAW = "<your-Log-Analytics-workspace>" + + # End of user input + + + # Obtain the workspace ID for the defined Log Analytics workspace + + $WorkspaceId = (Get-AzOperationalInsightsWorkspace -ResourceGroupName $resourceGroup -Name $LAW).CustomerId + + + + # Obtain the primary key for the defined Log Analytics workspace + + $PrimaryKey = (Get-AzOperationalInsightsWorkspace -ResourceGroupName $resourceGroup -Name $LAW | Get-AzOperationalInsightsWorkspaceSharedKeys).PrimarySharedKey + + + + # Enables monitoring; relevant logs are sent to the specified workspace. + + Enable-AzHDInsightAzureMonitorAgent -ResourceGroupName $resourceGroup -ClusterName $cluster -WorkspaceId $WorkspaceId -PrimaryKey $PrimaryKey + + + + # Gets the status of monitoring installation on the cluster. + + Get-AzHDInsightAzureMonitorAgent -ResourceGroupName $resourceGroup -ClusterName $cluster + ``` + ++1. (Optional) Disable Azure Monitor Agent. ++ ``` + Disable-AzHDInsightAzureMonitorAgent -ResourceGroupName $resourceGroup -ClusterName $cluster + ``` + ++### Approach 3: Enable Azure Monitor Agent using Azure CLI ++1. Enable the system-assigned MSI. ++ 1. First, get the cluster information to check the cluster's MSI. ++ + ``` + az hdinsight show --resource-group $resourceGroup --name $cluster + + # Get an access token if needed + + accessToken=$(az account get-access-token --query accessToken -o tsv) + + url="https://management.azure.com/subscriptions/${subscriptionId}/resourcegroups/${resourceGroupName}/providers/Microsoft.HDInsight/clusters/${clusterName}?api-version=2024-08-01-preview" + ``` + + 1. If the cluster has no MSI, enable the system-assigned MSI directly via the REST API. + + ``` + body="{\"identity\": {\"type\": \"SystemAssigned\"}}" + + az rest --method patch --url "$url" --body "$body" --headers "Authorization=Bearer $accessToken" + ``` + 1. If the cluster only has a user-assigned MSI, add the system-assigned MSI to its identity. + ``` + body="{\"identity\": {\"type\": \"SystemAssigned,UserAssigned\", \"userAssignedIdentities\": {\"$userAssignedIdentityResourceId\":{}}}}" + + az rest --method patch --url "$url" --body "$body" --headers "Authorization=Bearer $accessToken" + ``` + ++ 1. If the cluster already has a system-assigned MSI, no action is needed. + ++1. Create the DCR. 
+
+ For more information, see [Create and edit data collection rules (DCRs)](/azure/azure-monitor/essentials/data-collection-rule-create-edit?tabs=CLI#create-a-dcr)
+ 
+ ```
+ # The URL of the DCR template file, change {HDIClusterType} to your cluster type.
+ # The valid types are: hadoop, hbase, interactivehive, kafka, llap, spark
+ dcrTemplateJsonUrl="https://hdiconfigactions.blob.core.windows.net/azuremonitoriningagent/DCR/{HDIClusterType}_dcr_template.json?api-version=2020-08-01"
+ 
+ # Download the DCR template to a local file
+ dcrTemplateLocalFile="dcrTemplateFileName.json"
+ azcopy copy "$dcrTemplateJsonUrl" "$dcrTemplateLocalFile"
+ 
+ # Set the subscription
+ az account set --subscription "{yourSubscription}"
+ 
+ # Get details of your Log Analytics workspace
+ workspaceResourceGroupName="{yourWorkspaceResourceGroup}"
+ workspaceName="{yourWorkspaceName}"
+ workspace=$(az monitor log-analytics workspace show --resource-group "$workspaceResourceGroupName" --workspace-name "$workspaceName")
+ 
+ # Customize the DCR content. The script below depends on jq; install it if it isn't available in your environment.
+ workspaceResourceId=$(echo "$workspace" | jq -r '.id')
+ workspaceId=$(echo "$workspace" | jq -r '.customerId')
+ location=$(echo "$workspace" | jq -r '.location')
+ 
+ # Read the JSON file
+ templateJsonData=$(cat "$dcrTemplateLocalFile")
+ 
+ # Update the JSON fields using jq
+ templateJsonData=$(echo "$templateJsonData" | jq --arg workspaceResourceId "$workspaceResourceId" '.properties.destinations.logAnalytics[0].workspaceResourceId = $workspaceResourceId')
+ templateJsonData=$(echo "$templateJsonData" | jq --arg workspaceId "$workspaceId" '.properties.destinations.logAnalytics[0].workspaceId = $workspaceId')
+ templateJsonData=$(echo "$templateJsonData" | jq --arg location "$location" '.location = $location')
+ 
+ # Save the updated JSON back to the file
+ echo "$templateJsonData" > "$dcrTemplateLocalFile"
+ 
+ # Print the updated JSON
+ cat "$dcrTemplateLocalFile"
+ 
+ # Create the DCR using the customized JSON (the DCR needs to be in the same location as the Log Analytics workspace)
+ # If your HDInsight cluster is in another subscription, set the subscription to your cluster's subscription
+ dcrName="{yourDcrName}"
+ resourceGroupName="{YourDcrResourceGroup}" # Suggest putting the DCR in the same resource group as your HDInsight cluster
+ dcr=$(az monitor data-collection rule create --name "$dcrName" --location "$location" --resource-group "$resourceGroupName" --rule-file "$dcrTemplateLocalFile")
+ ```
+ 
++1. Associate the DCR.
++
+ ```
+ # Associate the DCR with the HDInsight cluster
+ hdinsightClusterResourceId="{YourHDInsightClusterResourceId}"
+ dcrAssociationName="{yourDcrAssociation}"
+ dcrId=$(echo "$dcr" | jq -r '.id')
+ az monitor data-collection rule association create --association-name "$dcrAssociationName" --resource "$hdinsightClusterResourceId" --data-collection-rule-id "$dcrId"
+ ```
+ 
++1. Enable Azure Monitor Agent.
++
+ ```
+ # Set variables
+ export resourceGroup=RESOURCEGROUPNAME
+ export cluster=CLUSTERNAME
+ export LAW=LOGANALYTICSWORKSPACENAME
+
+ # Enable the Azure Monitor Agent logs integration on an HDInsight cluster.
+ az hdinsight azure-monitor-agent enable --name $cluster --resource-group $resourceGroup --workspace $LAW
+ 
+ # Get the status of Azure Monitor Agent logs integration on an HDInsight cluster.
+ az hdinsight azure-monitor-agent show --name $cluster --resource-group $resourceGroup
+ ```
+
++1. 
(Optional) Disable Azure Monitor Agent. ++ ``` + az hdinsight azure-monitor-agent disable --name $cluster --resource-group $resourceGroup + ``` + ### Enable Azure Monitor Agent logging for Spark cluster Azure HDInsight Spark clusters control AMA integration using the Spark configuration `spark.hdi.ama.enabled`; by default, the value is set to false. This configuration controls whether the Spark-specific logs come up in the Log Analytics workspace. If you want to enable AMA in your Spark clusters and retrieve the Spark event logs in the Log Analytics workspace, you need to perform an additional step to enable AMA for Spark-specific logs. The following steps describe how customers can enable the new Azure Monitor Agen There are two ways you can access the new tables. -#### Approach 1: +#### Approach 1 1. The first way to access the new tables is through the Log Analytics workspace. There are two ways you can access the new tables. > [!NOTE] >This process describes how the logs were accessed in the old integration. This requires the user to have access to the workspace. -#### Approach 2: +#### Approach 2 The second way to access the new tables is through cluster portal access. We provide a [mapping table](./log-analytics-migration.md#appendix-table-mappi ### Update dashboards for HDInsight clusters -If you build multiple dashboards to monitor your HDInsight clusters, you need to adjust the query behind the table once you enable the new Azure Monitor integration. The table name or the field name might change in the new integration, but all the information you have in old integration is included. +If you build multiple dashboards to monitor your HDInsight clusters, you need to adjust the query behind the table once you enable the new Azure Monitor integration. The table name or the field name might change in the new integration, but all the information you have in the old integration is included. Refer to the [mapping table](log-analytics-migration.md#appendix-table-mapping) between the old table/schema and the new table/schema to update the query behind the dashboards |
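Once the agent is enabled and dashboards are being repointed, a quick way to confirm data is flowing into the new tables is to run a query against the workspace. The sketch below uses placeholder resource names and assumes the `HDInsightSparkLogs` table name; substitute the table your cluster type writes to per the mapping table.

```powershell
# Requires the Az.OperationalInsights module; placeholder resource names.
$workspace = Get-AzOperationalInsightsWorkspace -ResourceGroupName "my-rg" -Name "my-law"

# Count records written in the last hour (table name is an assumption; see the mapping table).
$query = "HDInsightSparkLogs | where TimeGenerated > ago(1h) | summarize count()"
$result = Invoke-AzOperationalInsightsQuery -WorkspaceId $workspace.CustomerId -Query $query
$result.Results
```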
hdinsight | Hdinsight Hadoop Oms Log Analytics Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-hadoop-oms-log-analytics-tutorial.md | description: Learn how to use Azure Monitor logs to monitor jobs running in an H Previously updated : 05/10/2024 Last updated : 09/03/2024 # Use Azure Monitor logs to monitor HDInsight clusters If you don't have an Azure subscription, [create a free account](https://azure.m #### [New Azure monitor experience](#tab/new) > [!Important]-> New Azure Monitor experience is available in all the regions as a preview feature. -> +> Azure Monitor experience (preview) in HDInsight is retiring by February 1, 2025. For more information, see [Retirement: Azure Monitor experience (preview) in HDInsight is retiring by February 1, 2025](https://azure.microsoft.com/updates/v2/HDInsight-Azure-Monitor-experience-retirement). ## Prerequisites |
healthcare-apis | Api Versioning Dicom Service | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/dicom/api-versioning-dicom-service.md | Title: API versioning for DICOM service - Azure Health Data Services description: This guide gives an overview of the API version policies for the DICOM service. -+ Last updated 10/13/2023-+ # API versioning for DICOM service |
healthcare-apis | Change Feed Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/dicom/change-feed-overview.md | Title: Change feed overview for the DICOM service in Azure Health Data Services description: Learn how to use the change feed in the DICOM service to access the logs of all the changes that occur in your organization's medical imaging data. The change feed allows you to query, process, and act upon the change events in a scalable and efficient way.-+ Last updated 1/18/2024-+ # Change feed overview |
healthcare-apis | Configure Cross Origin Resource Sharing | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/dicom/configure-cross-origin-resource-sharing.md | Title: Configure cross-origin resource sharing in DICOM service in Azure Health Data Services description: This article describes how to configure cross-origin resource sharing in DICOM service in Azure Health Data Services--++ Last updated 10/09/2023 |
healthcare-apis | Configure Customer Managed Keys | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/dicom/configure-customer-managed-keys.md | Title: Configure customer-managed keys (CMK) for the DICOM service in Azure Health Data Services description: Use customer-managed keys (CMK) to encrypt data in the DICOM service. Create and manage CMK in Azure Key Vault and update the encryption key with a managed identity.-+ Last updated 11/20/2023-+ # Configure customer-managed keys for the DICOM service |
healthcare-apis | Customer Managed Keys | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/dicom/customer-managed-keys.md | Title: Best practices for customer-managed keys for the DICOM service in Azure Health Data Services description: Encrypt your data with customer-managed keys (CMK) in the DICOM service in Azure Health Data Services. Get tips on requirements, best practices, limitations, and troubleshooting.-+ Last updated 11/20/2023-+ # Best practices for using customer-managed keys for the DICOM service |
healthcare-apis | Data Partitions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/dicom/data-partitions.md | Title: Enable data partitioning for the DICOM service in Azure Health Data Services description: Learn how to enable data partitioning for efficient storage and management of medical images for the DICOM service in Azure Health Data Services.-+ Last updated 03/26/2024-+ # Enable data partitioning |
healthcare-apis | Deploy Dicom Services In Azure Data Lake | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/dicom/deploy-dicom-services-in-azure-data-lake.md | Title: Deploy the DICOM service with Azure Data Lake Storage description: Learn how to deploy the DICOM service and store all your DICOM data in its native format with a data lake in Azure Health Data Services.-+ Last updated 11/21/2023-+ |
healthcare-apis | Deploy Dicom Services In Azure | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/dicom/deploy-dicom-services-in-azure.md | Title: Deploy the DICOM service by using the Azure portal - Azure Health Data Services description: This article describes how to deploy the DICOM service in the Azure portal.-+ Last updated 03/11/2024-+ |
healthcare-apis | Dicom Configure Azure Rbac | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/dicom/dicom-configure-azure-rbac.md | Title: Configure Azure RBAC for the DICOM service - Azure Health Data Services description: This article describes how to configure Azure RBAC for the DICOM service-+ Last updated 10/09/2023-+ # Configure Azure RBAC for the DICOM service |
healthcare-apis | Dicom Data Lake | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/dicom/dicom-data-lake.md | Title: Manage medical imaging data with the DICOM service and Azure Data Lake Storage description: Learn how to use the DICOM service in Azure Health Data Services to store, access, and analyze medical imaging data in the cloud. Explore the benefits, architecture, and data contracts of the integration of the DICOM service with Azure Data Lake Storage.-+ Last updated 03/11/2024-+ |
healthcare-apis | Dicom Digital Pathology | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/dicom/dicom-digital-pathology.md | Title: Digital pathology in the DICOM service in Azure Health Data Services description: Explore digital pathology with the DICOM service in Azure Health Data Services. Share slide images, train AI models, and store digitized slides securely. -+ Last updated 10/9/2023-+ # Digital pathology using the DICOM service |
healthcare-apis | Dicom Extended Query Tags Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/dicom/dicom-extended-query-tags-overview.md | Title: DICOM extended query tags overview - Azure Health Data Services description: In this article, you'll learn the concepts of Extended Query Tags.-+ Last updated 10/9/2023-+ # Extended query tags |
healthcare-apis | Dicom Register Application | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/dicom/dicom-register-application.md | Title: Register a client application for the DICOM service in Microsoft Entra ID description: Learn how to register a client application for the DICOM service in Microsoft Entra ID.-+ Last updated 09/02/2022-+ # Register a client application for the DICOM service |
healthcare-apis | Dicom Service V2 Api Changes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/dicom/dicom-service-v2-api-changes.md | Title: DICOM Service API v2 Changes - Azure Health Data Services description: This guide gives an overview of the changes in the v2 API for the DICOM service. -+ Last updated 10/13/2023-+ # DICOM Service API v2 Changes |
healthcare-apis | Dicom Services Conformance Statement V2 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/dicom/dicom-services-conformance-statement-v2.md | Title: DICOM Conformance Statement version 2 for Azure Health Data Services description: Read about the features and specifications of the DICOM service v2 API, which supports a subset of the DICOMweb Standard for medical imaging data. A DICOM Conformance Statement is a technical document that describes how a device or software implements the DICOM standard. -+ Last updated 1/18/2024-+ # DICOM Conformance Statement v2 |
healthcare-apis | Configure Identity Providers | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/configure-identity-providers.md | https://yourIdentityProvider.com/authority/v2.0/.well-known/openid-configuration #### Configure the `applications` array -You must include at least one application configuration and at most two in the `applications` array. Each application configuration has values that validate access token claims and an array that defines the permissions for the application to access FHIR resources. +You must include at least one application configuration and can add up to 25 applications in the `applications` array. Each application configuration has values that validate access token claims and an array that defines the permissions for the application to access FHIR resources. #### Identify the application with the `clientId` string |
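To make the `applications` array shape concrete, here's a hedged PowerShell sketch that assembles two entries and renders them as JSON. Only `clientId` is named by this article; the `audience` and `allowedDataActions` fields are assumptions drawn from the surrounding identity-provider configuration, so confirm the exact schema before use.

```powershell
# Two application entries (all values are placeholders; audience and
# allowedDataActions are assumed field names, not confirmed by this article).
$applications = @(
    @{
        clientId           = "11111111-1111-1111-1111-111111111111"
        audience           = "https://my-fhir-service.azurehealthcareapis.com"
        allowedDataActions = @("Read")
    },
    @{
        clientId           = "22222222-2222-2222-2222-222222222222"
        audience           = "https://my-fhir-service.azurehealthcareapis.com"
        allowedDataActions = @("Read")
    }
)
$applications | ConvertTo-Json -Depth 4
```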
iot-central | Howto Integrate With Devops | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-integrate-with-devops.md | You need the following prerequisites to complete the steps in this guide: ## Download the sample code -To get started, fork the IoT Central CI/CD GitHub repository and then clone your fork to your local machine: +To get started, fork the IoT Central CI/CD GitHub repository and then clone your fork to your local machine: 1. To fork the GitHub repository, open the [IoT Central CI/CD GitHub repository](https://github.com/Azure/iot-central-CICD-sample) and select **Fork**. Now that you have a configuration file that represents the settings for your dev "displayName": "Blob destination", "type": "blobstorage@v1", "authorization": {- "type": "connectionString", - "connectionString": "DefaultEndpointsProtocol=https;AccountName=yourexportaccount;AccountKey=*****;EndpointSuffix=core.windows.net", + "type": "systemAssignedManagedIdentity", + "endpointUri": "https://yourstorageaccount.blob.core.windows.net/", "containerName": "dataexport" }, "status": "waiting" Now that you have a configuration file that represents the settings for your dev az keyvault secret set --name FileUpload --vault-name {your production key vault name} --value '{your production storage account connection string}' ``` -1. If your application uses data exports, add secrets for the destinations to the production key vault. The config file doesn't contain any actual secrets for your destination, the secrets are stored in your key vault. +1. If your application uses managed identities for data export destinations, there are no secrets for you to manage. However, you do need to enable the system-assigned managed identity for your production IoT Central application and give it the necessary permissions to write to the destination. ++1. If your application uses connection strings for data export destinations, add secrets for the destinations to the production key vault. The config file doesn't contain any actual secrets for your destination; the secrets are stored in your key vault. 1. Update the secrets in the config file with the name of the secret in your key vault. | Destination type | Property to change | |
iot-central | Howto Manage Data Export With Rest Api | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-manage-data-export-with-rest-api.md | The following example shows a request body that creates a blob storage destinati ```json {- "displayName": "Blob Storage Destination", - "type": "blobstorage@v1", - "connectionString": "DefaultEndpointsProtocol=https;AccountName=yourAccountName;AccountKey=********;EndpointSuffix=core.windows.net", - "containerName": "central-data" + "displayName": "Blob Storage", + "type": "blobstorage@v1", + "authorization": { + "type": "systemAssignedManagedIdentity", + "endpointUri": "https://yourapplication.blob.core.windows.net/", + "containerName": "central-data" + } } ``` The request body has some required fields: * `displayName`: Display name of the destination. * `type`: Type of destination object. One of: `blobstorage@v1`, `dataexplorer@v1`, `eventhubs@v1`, `servicebusqueue@v1`, `servicebustopic@v1`, `webhook@v1`.-* `connectionString`: The connection string for accessing the destination resource. -* `containerName`: For a blob storage destination, the name of the container where data should be written. +* `authorization`: The authorization details for the destination. The supported authorization types are `systemAssignedManagedIdentity` and `connectionString`. The response to this request looks like the following example: The response to this request looks like the following example: "displayName": "Blob Storage", "type": "blobstorage@v1", "authorization": {- "type": "connectionString", - "connectionString": "DefaultEndpointsProtocol=https;AccountName=yourAccountName;AccountKey=*****;EndpointSuffix=core.windows.net", + "type": "systemAssignedManagedIdentity", + "endpointUri": "https://yourapplication.blob.core.windows.net/", "containerName": "central-data" }, "status": "waiting" The response to this request looks like the following example: "displayName": "Blob Storage", "type": "blobstorage@v1", "authorization": {- "type": "connectionString", - "connectionString": "DefaultEndpointsProtocol=https;AccountName=yourAccountName;AccountKey=*****;EndpointSuffix=core.windows.net", + "type": "systemAssignedManagedIdentity", + "endpointUri": "https://yourapplication.blob.core.windows.net/", "containerName": "central-data" }, "status": "waiting" The response to this request looks like the following example: "displayName": "Blob Storage Destination", "type": "blobstorage@v1", "authorization": {- "type": "connectionString", - "connectionString": "DefaultEndpointsProtocol=https;AccountName=yourAccountName;AccountKey=********;EndpointSuffix=core.windows.net", - "containerName": "central-data" + "type": "systemAssignedManagedIdentity", + "endpointUri": "https://yourapplication.blob.core.windows.net/", + "containerName": "central-data" }, "status": "waiting" }, The response to this request looks like the following example: PATCH https://{your app subdomain}/api/dataExport/destinations/{destinationId}?api-version=2022-10-31-preview ``` -You can use this call to perform an incremental update to an export. The sample request body looks like the following example that updates the `connectionString` of a destination: +You can use this call to perform an incremental update to an export. 
The sample request body looks like the following example that updates the `containerName` of a destination: ```json {- "connectionString": "DefaultEndpointsProtocol=https;AccountName=yourAccountName;AccountKey=********;EndpointSuffix=core.windows.net" + "containerName": "central-data-analysis" } ``` The response to this request looks like the following example: "displayName": "Blob Storage", "type": "blobstorage@v1", "authorization": {- "type": "connectionString", - "connectionString": "DefaultEndpointsProtocol=https;AccountName=yourAccountName;AccountKey=*****;EndpointSuffix=core.windows.net", - "containerName": "central-data" + "type": "systemAssignedManagedIdentity", + "endpointUri": "https://yourapplication.blob.core.windows.net/", + "containerName": "central-data-analysis" }, "status": "waiting" } The response to this request looks like the following example: ```json {- "id": "8dbcdb53-c6a7-498a-a976-a824b694c150", - "displayName": "Blob Storage Destination", - "type": "blobstorage@v1", - "connectionString": "DefaultEndpointsProtocol=https;AccountName=yourAccountName;AccountKey=********;EndpointSuffix=core.windows.net", - "containerName": "central-data", - "status": "waiting" + "id": "802894c4-33bc-4f1e-ad64-e886f315cece", + "displayName": "Enriched Export", + "enabled": true, + "source": "telemetry", + "enrichments": { + "Custom data": { + "value": "My value" + } + }, + "destinations": [ + { + "id": "9742a8d9-c3ca-4d8d-8bc7-357bdc7f39d9" + } + ], + "status": "healthy" } ``` |
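The examples above show only request and response bodies; for completeness, here's a hedged PowerShell sketch that sends the create-destination request with `Invoke-RestMethod`. The app subdomain, destination ID, and API token are placeholders, and the token format should match what IoT Central issues for your application.

```powershell
# Placeholders throughout.
$appSubdomain  = "myapp.azureiotcentral.com"
$destinationId = "blob-storage-1"
$apiToken      = "<IoT Central API token>"

$body = @{
    displayName   = "Blob Storage"
    type          = "blobstorage@v1"
    authorization = @{
        type          = "systemAssignedManagedIdentity"
        endpointUri   = "https://yourapplication.blob.core.windows.net/"
        containerName = "central-data"
    }
} | ConvertTo-Json -Depth 4

# PUT creates (or replaces) the destination with the given ID.
Invoke-RestMethod -Method Put `
    -Uri "https://$appSubdomain/api/dataExport/destinations/${destinationId}?api-version=2022-10-31-preview" `
    -Headers @{ Authorization = $apiToken } `
    -ContentType "application/json" -Body $body
```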
load-balancer | Upgrade Basic Standard With Powershell | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/upgrade-basic-standard-with-powershell.md | The PowerShell module performs the following functions: ### Prerequisites - **PowerShell**: PowerShell version 7 or higher is recommended for use with the AzureBasicLoadBalancerUpgrade module on all platforms including Windows, Linux, and macOS. However, PowerShell 5.1 on Windows is also supported. -- **Az PowerShell Module**: Determine whether you have the latest Az PowerShell module installed- - Install the latest [Az PowerShell module](/powershell/azure/install-azure-powershell) -- **Az.ResourceGraph PowerShell Module**: The Az.ResourceGraph PowerShell module is used to query resource configuration during upgrade and is a separate install from the Az PowerShell module. It is automatically added if you install the `AzureBasicLoadBalancerUpgrade` module using the `Install-Module` command. ### Module Installation -Install the module from [PowerShell gallery](https://www.powershellgallery.com/packages/AzureBasicLoadBalancerUpgrade) +Install the module from [PowerShell Gallery](https://www.powershellgallery.com/packages/AzureBasicLoadBalancerUpgrade) ```powershell-PS C:\> Install-Module -Name AzureBasicLoadBalancerUpgrade -Scope CurrentUser -Repository PSGallery -Force +Install-Module -Name AzureBasicLoadBalancerUpgrade -Scope CurrentUser -Repository PSGallery -Force ``` ## Pre- and Post-migration Steps PS C:\> Install-Module -Name AzureBasicLoadBalancerUpgrade -Scope CurrentUser -R ## Use the module -1. Use `Connect-AzAccount` to connect to Azure, specifying the Basic Load Balancer's subscription ID if you have more than one subscription. +1. Ensure you have selected the Basic Load Balancer's subscription by running `Select-AzSubscription`. ```powershell- PS C:\> Connect-AzAccount -Subscription <SubscriptionId> + Select-AzSubscription -Subscription <SubscriptionId> ``` 2. Find the Load Balancer you wish to upgrade. Record its name and resource group name. PS C:\> Install-Module -Name AzureBasicLoadBalancerUpgrade -Scope CurrentUser -R >[!TIP] >Additional parameters for advanced and recovery scenarios can be viewed by running `Get-Help Start-AzBasicLoadBalancerUpgrade -Detailed` -4. Run the Upgrade command. +4. Run the `Start-AzBasicLoadBalancerUpgrade` command, using the following examples for guidance. 
### Example: validate a scenario Validate that a Basic Load Balancer is supported for upgrade ```powershell-PS C:\> Start-AzBasicLoadBalancerUpgrade -ResourceGroupName <loadBalancerRGName> -BasicLoadBalancerName <basicLBName> -validateScenarioOnly +Start-AzBasicLoadBalancerUpgrade -ResourceGroupName <loadBalancerRGName> -BasicLoadBalancerName <basicLBName> -validateScenarioOnly ``` ### Example: upgrade by name PS C:\> Start-AzBasicLoadBalancerUpgrade -ResourceGroupName <loadBalancerRGName> Upgrade a Basic Load Balancer to a Standard Load Balancer with the same name, providing the Basic Load Balancer name and resource group name ```powershell-PS C:\> Start-AzBasicLoadBalancerUpgrade -ResourceGroupName <loadBalancerRGName> -BasicLoadBalancerName <basicLBName> +Start-AzBasicLoadBalancerUpgrade -ResourceGroupName <loadBalancerRGName> -BasicLoadBalancerName <basicLBName> ``` ### Example: upgrade, change name, and show logs PS C:\> Start-AzBasicLoadBalancerUpgrade -ResourceGroupName <loadBalancerRGName> Upgrade a Basic Load Balancer to a Standard Load Balancer with the specified name, displaying logged output on screen ```powershell-PS C:\> Start-AzBasicLoadBalancerUpgrade -ResourceGroupName <loadBalancerRGName> -BasicLoadBalancerName <basicLBName> -StandardLoadBalancerName <newStandardLBName> -FollowLog +Start-AzBasicLoadBalancerUpgrade -ResourceGroupName <loadBalancerRGName> -BasicLoadBalancerName <basicLBName> -StandardLoadBalancerName <newStandardLBName> -FollowLog ``` ### Example: upgrade with alternate backup path PS C:\> Start-AzBasicLoadBalancerUpgrade -ResourceGroupName <loadBalancerRGName> Upgrade a Basic Load Balancer to a Standard Load Balancer with the specified name and store the Basic Load Balancer backup file at the specified path ```powershell-PS C:\> Start-AzBasicLoadBalancerUpgrade -ResourceGroupName <loadBalancerRGName> -BasicLoadBalancerName <basicLBName> -StandardLoadBalancerName <newStandardLBName> -RecoveryBackupPath C:\BasicLBRecovery +Start-AzBasicLoadBalancerUpgrade -ResourceGroupName <loadBalancerRGName> -BasicLoadBalancerName <basicLBName> -StandardLoadBalancerName <newStandardLBName> -RecoveryBackupPath C:\BasicLBRecovery ``` ### Example: validate completed migration PS C:\> Start-AzBasicLoadBalancerUpgrade -ResourceGroupName <loadBalancerRGName> Validate a completed migration by passing the Basic Load Balancer state file backup and the Standard Load Balancer name ```powershell-PS C:\> Start-AzBasicLoadBalancerUpgrade -validateCompletedMigration -StandardLoadBalancerName <newStandardLBName> -basicLoadBalancerStatePath C:\RecoveryBackups\State_mybasiclb_rg-basiclbrg_20220912T1740032148.json +Start-AzBasicLoadBalancerUpgrade -validateCompletedMigration -StandardLoadBalancerName <newStandardLBName> -basicLoadBalancerStatePath C:\RecoveryBackups\State_mybasiclb_rg-basiclbrg_20220912T1740032148.json ``` ### Example: migrate multiple, related Load Balancers Migrate multiple Load Balancers with shared backend members at the same time, us ```powershell # build array of multiple basic load balancers-PS C:\> $multiLBConfig = @( +$multiLBConfig = @( @{ 'standardLoadBalancerName' = 'myStandardInternalLB01' # specifying the standard load balancer name is optional 'basicLoadBalancer' = (Get-AzLoadBalancer -ResourceGroupName myRG -Name myBasicInternalLB01) PS C:\> $multiLBConfig = @( } ) # pass the array of load balancer configurations to the -MultiLBConfig parameter-PS C:\> Start-AzBasicLoadBalancerUpgrade -MultiLBConfig $multiLBConfig +Start-AzBasicLoadBalancerUpgrade 
-MultiLBConfig $multiLBConfig ``` ### Example: retry failed virtual machine scale set migration PS C:\> Start-AzBasicLoadBalancerUpgrade -MultiLBConfig $multiLBConfig Retry a failed upgrade for a virtual machine scale set's load balancer (due to error or script termination) by providing the Basic Load Balancer and Virtual Machine Scale Set backup state file ```powershell-PS C:\> Start-AzBasicLoadBalancerUpgrade -FailedMigrationRetryFilePathLB C:\RecoveryBackups\State_mybasiclb_rg-basiclbrg_20220912T1740032148.json -FailedMigrationRetryFilePathVMSS C:\RecoveryBackups\VMSS_myVMSS_rg-basiclbrg_20220912T1740032148.json +Start-AzBasicLoadBalancerUpgrade -FailedMigrationRetryFilePathLB C:\RecoveryBackups\State_mybasiclb_rg-basiclbrg_20220912T1740032148.json -FailedMigrationRetryFilePathVMSS C:\RecoveryBackups\VMSS_myVMSS_rg-basiclbrg_20220912T1740032148.json ``` ### Example: retry failed virtual machine migration PS C:\> Start-AzBasicLoadBalancerUpgrade -FailedMigrationRetryFilePathLB C:\Reco Retry a failed upgrade for a VM load balancer (due to error or script termination) by providing the Basic Load Balancer backup state file ```powershell-PS C:\> Start-AzBasicLoadBalancerUpgrade -FailedMigrationRetryFilePathLB C:\RecoveryBackups\State_mybasiclb_rg-basiclbrg_20220912T1740032148.json +Start-AzBasicLoadBalancerUpgrade -FailedMigrationRetryFilePathLB C:\RecoveryBackups\State_mybasiclb_rg-basiclbrg_20220912T1740032148.json ``` ## Common Questions |
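Putting the validation and upgrade examples together, a cautious run validates first and upgrades only on success; a minimal sketch using only the parameters shown in the examples above, assuming validation failures surface as errors (resource names are placeholders):

```powershell
# Sketch: validate the scenario first; -ErrorAction Stop assumes a failed
# validation raises an error, which prevents reaching the upgrade call.
$rg = '<loadBalancerRGName>'
$lb = '<basicLBName>'
Start-AzBasicLoadBalancerUpgrade -ResourceGroupName $rg -BasicLoadBalancerName $lb -validateScenarioOnly -ErrorAction Stop
Start-AzBasicLoadBalancerUpgrade -ResourceGroupName $rg -BasicLoadBalancerName $lb -FollowLog
```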
logic-apps | Edit App Settings Host Settings | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/edit-app-settings-host-settings.md | App settings in Azure Logic Apps work similarly to app settings in Azure Functio | Setting | Default value | Description | |||-|-| `APP_KIND` | `workflowApp` | Sets the app type for the Azure resource. | +| `APP_KIND` | `workflowApp` | Required for setting the app type for the Azure resource. | | `AzureWebJobsStorage` | None | Sets the connection string for an Azure storage account. For more information, see [AzureWebJobsStorage](../azure-functions/functions-app-settings.md#azurewebjobsstorage) | | `FUNCTIONS_WORKER_RUNTIME` | `dotnet` | Sets the language worker runtime to use with your logic app resource and workflows. However, this setting is no longer necessary due to automatically enabled multi-language support. <br><br>**Note**: Previously, this setting's default value was **`node`**. Now, **`dotnet`** is the default value for all new and existing deployed Standard logic apps, even for apps that had a different value. This change shouldn't affect your workflow's runtime, and everything should work the same way as before.<br><br>For more information, see [FUNCTIONS_WORKER_RUNTIME](../azure-functions/functions-app-settings.md#functions_worker_runtime). | | `ServiceProviders.Sftp.FileUploadBufferTimeForTrigger` | `00:00:20` <br>(20 seconds) | Sets the buffer time to ignore files that have a last modified timestamp that's greater than the current time. This setting is useful when large file writes take a long time and avoids fetching data for a partially written file. | |
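For local development of a Standard logic app, settings like these typically live in the `Values` object of the project's `local.settings.json` file; a minimal sketch with illustrative values, where the development-storage connection string assumes a local storage emulator:

```json
{
  "IsEncrypted": false,
  "Values": {
    "APP_KIND": "workflowApp",
    "AzureWebJobsStorage": "UseDevelopmentStorage=true",
    "FUNCTIONS_WORKER_RUNTIME": "dotnet",
    "ServiceProviders.Sftp.FileUploadBufferTimeForTrigger": "00:00:30"
  }
}
```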
operational-excellence | Relocation Netapp | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operational-excellence/relocation-netapp.md | |
private-link | Private Endpoint Dns | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-link/private-endpoint-dns.md | For Azure services, use the recommended zone names as described in the following >| Azure Event Grid (Microsoft.EventGrid/namespaces) | topic | privatelink.eventgrid.azure.net | eventgrid.azure.net | >| Azure Event Grid (Microsoft.EventGrid/namespaces/topicSpace) | topicSpace | privatelink.ts.eventgrid.azure.net | eventgrid.azure.net | >| Azure Event Grid (Microsoft.EventGrid/partnerNamespaces) | partnernamespace | privatelink.eventgrid.azure.net | eventgrid.azure.net |->| Azure API Management (Microsoft.ApiManagement/service) | gateway | privatelink.azure-api.net | azure-api.net | +>| Azure API Management (Microsoft.ApiManagement/service) | Gateway | privatelink.azure-api.net | azure-api.net | >| Azure Health Data Services (Microsoft.HealthcareApis/workspaces) | healthcareworkspace | privatelink.workspace.azurehealthcareapis.com </br> privatelink.fhir.azurehealthcareapis.com </br> privatelink.dicom.azurehealthcareapis.com | workspace.azurehealthcareapis.com </br> fhir.azurehealthcareapis.com </br> dicom.azurehealthcareapis.com | ### Internet of Things (IoT) |
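As a sketch, creating the recommended private DNS zone for the API Management gateway subresource and linking it to the virtual network that hosts the private endpoint could look like the following with Az PowerShell (the resource group, link name, and network ID are placeholders):

```powershell
# Requires the Az.PrivateDns module; angle-bracket values are placeholders.
New-AzPrivateDnsZone -ResourceGroupName '<resourceGroup>' -Name 'privatelink.azure-api.net'

# Link the zone to the virtual network so private-endpoint name resolution works from it.
New-AzPrivateDnsVirtualNetworkLink -ResourceGroupName '<resourceGroup>' `
    -ZoneName 'privatelink.azure-api.net' -Name '<linkName>' `
    -VirtualNetworkId '<vnetResourceId>'
```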
quotas | How To Guide Monitoring Alerting | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/quotas/how-to-guide-monitoring-alerting.md | Title: Create alerts for quotas description: Learn how to create alerts for quotas Previously updated : 05/09/2024 Last updated : 09/03/2024 The simplest way to create a quota alert is to use the Azure portal. Follow thes | Severity | Select the **severity** of the alert when the **rule's condition** is met.| | [Frequency of evaluation](../azure-monitor/alerts/alerts-overview.md#stateful-alerts) | Choose how **often** the alert rule should **run** by selecting 5, 10, or 15 minutes. If the frequency is smaller than the aggregation granularity, frequency of evaluation results in sliding window evaluation. | | [Resource Group](../azure-resource-manager/management/manage-resource-groups-portal.md) | Resource Group is a collection of resources that share the same lifecycles, permissions, and policies. Select a resource group similar to other quotas in your subscription, or create a new resource group. |- | [Log Analytics workspace](../azure-monitor/logs/quick-create-workspace.md?tabs=azure-portal) | A workspace within the subscription that is being **monitored** and is used as the **scope for rule execution**. Select from the dropdown or create a new workspace. If you create a new workspace, use it for all alerts in your subscription. | - | [Managed identity](../active-directory/managed-identities-azure-resources/how-manage-user-assigned-managed-identities.md?pivots=identity-mi-methods-azp) | Select from the dropdown, or **Create New**. Managed Identity should have **read permissions** to the Subscription (to read Usage data from ARG) and Log Analytics workspace that is chosen(to read the log alerts). | + | [Managed identity](../active-directory/managed-identities-azure-resources/how-manage-user-assigned-managed-identities.md?pivots=identity-mi-methods-azp) | Select from the dropdown, or **Create New**. Managed Identity should have **read permissions** for the selected Subscription (to read Usage data from ARG). | | Notify me by | There are three notification methods, and you can check one or all three check boxes, depending on your notification preference. | | [Use an existing action group](../azure-monitor/alerts/action-groups.md) | Check the box to use an existing action group. An action group **invokes** a defined set of **notifications** and actions when an alert is triggered. You can create an action group to automatically increase the quota whenever possible. | | [Dimensions](../azure-monitor/alerts/alerts-types.md#monitor-the-same-condition-on-multiple-resources-using-splitting-by-dimensions-1) | Here are the options for selecting **multiple Quotas** and **regions** within a single alert rule. Adding dimensions is a cost-effective approach compared to creating a new alert for each quota or region.|- | [Estimated cost](https://azure.microsoft.com/pricing/details/monitor/) |Estimated cost is automatically calculated cost associated with running this **new alert rule** against your quota. See [Azure Monitor cost and usage](../azure-monitor/cost-usage.md) for more information. | > [!TIP] > Within the same subscription, we advise using the same **Resource group**, **Log Analytics workspace,** and **Managed identity** values for all alert rules. |
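Because the managed identity reads usage data from Azure Resource Graph (ARG), you can preview the data the alert rule will evaluate; a minimal sketch with the Az.ResourceGraph module, where the `QuotaResources` table, the provider type, and the property names are assumptions to adjust for your resource provider:

```powershell
# Sketch: spot-check quota usage in one region before building the alert rule.
# The table, type, and property names here are assumptions -- adjust as needed.
$query = @"
QuotaResources
| where type =~ 'microsoft.compute/locations/usages'
| where location =~ '<region>'
| project name, currentValue = properties.currentValue, limit = properties.limit
"@
Search-AzGraph -Query $query -First 20
```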
reliability | Availability Zones Service Support | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/availability-zones-service-support.md | The following regions currently support availability zones: | South Central US | UK South | | | East Asia | | US Gov Virginia | West Europe | | | China North 3 | | West US 2 | Sweden Central | | |Korea Central | -| West US 3 | Switzerland North | | | | +| West US 3 | Switzerland North | | | New Zealand North | | Mexico Central | Poland Central |||| ||Spain Central |||| |
reliability | Cross Region Replication Azure | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/cross-region-replication-azure.md | The table below lists Azure regions without a region pair: | Geography | Region | |--|-|-| Qatar | Qatar Central | -| Mexico | Mexico Central | -| Poland | Poland Central | +| Austria | Austria East (coming soon) | | Israel | Israel Central| | Italy | Italy North|-| Austria | Austria East (Coming soon) | +| Mexico | Mexico Central | +| New Zealand | New Zealand North (coming soon) | +| Poland | Poland Central | +| Qatar | Qatar Central | | Spain | Spain Central| ## Next steps |
sentinel | Connect Mdti Data Connector | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/connect-mdti-data-connector.md | Title: Enable data connector for Microsoft's threat intelligence keywords: premium, TI, STIX objects, relationships, threat actor, watchlist, license -description: Learn how to ingest Microsoft's threat intelligence into your Sentinel workspace to generate high fidelity alerts and incidents. +description: Learn how to ingest Microsoft's threat intelligence into your Microsoft Sentinel workspace to generate high-fidelity alerts and incidents. Last updated 8/16/2024 appliesto: - Microsoft Sentinel in the Azure portal - Microsoft Sentinel in the Microsoft Defender portal -#customer intent: As a SOC admin, I want to utilize the best threat intelligence from Microsoft, so I can generate high fidelity alerts and incidents. +#customer intent: As an SOC admin, I want to use the best threat intelligence from Microsoft so that I can generate high-fidelity alerts and incidents. # Enable data connector for Microsoft Defender Threat Intelligence-Bring public, open source and high fidelity indicators of compromise (IOC) generated by Microsoft Defender Threat Intelligence (MDTI) into your Microsoft Sentinel workspace with the MDTI data connectors. With a simple one-click setup, use the TI from the standard and premium MDTI data connectors to monitor, alert and hunt. ++Bring public, open-source and high-fidelity indicators of compromise (IOCs) generated by Microsoft Defender Threat Intelligence into your Microsoft Sentinel workspace with the Defender Threat Intelligence data connectors. With a simple one-click setup, use the threat intelligence from the standard and premium Defender Threat Intelligence data connectors to monitor, alert, and hunt. > [!IMPORTANT]-> The Microsoft Defender Threat Intelligence data connector and the Premium Microsoft Defender Threat Intelligence data connector are currently in PREVIEW. See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability. +> The Defender Threat Intelligence data connector and the premium Defender Threat Intelligence data connector are currently in preview. See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for more legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability. > [!INCLUDE [unified-soc-preview-without-alert](includes/unified-soc-preview-without-alert.md)] -For more information about the benefits of the standard and premium MDTI data connectors, see [Understand threat intelligence](understand-threat-intelligence.md#add-threat-indicators-to-microsoft-sentinel-with-the-microsoft-defender-threat-intelligence-data-connector). +For more information about the benefits of the standard and premium Defender Threat Intelligence data connectors, see [Understand threat intelligence](understand-threat-intelligence.md#add-threat-indicators-to-microsoft-sentinel-with-the-defender-threat-intelligence-data-connector). 
## Prerequisites-- In order to install, update and delete standalone content or solutions in content hub, you need the **Microsoft Sentinel Contributor** role at the resource group level.++- To install, update, and delete standalone content or solutions in the **Content hub**, you need the Microsoft Sentinel Contributor role at the resource group level. - To configure these data connectors, you must have read and write permissions to the Microsoft Sentinel workspace. -## Install the Threat Intelligence solution in Microsoft Sentinel +## Install the threat intelligence solution in Microsoft Sentinel -To import threat indicators into Microsoft Sentinel from standard and premium MDTI, follow these steps: +To import threat indicators into Microsoft Sentinel from standard and premium Defender Threat Intelligence, follow these steps: -1. For Microsoft Sentinel in the [Azure portal](https://portal.azure.com), under **Content management**, select **Content hub**. <br>For Microsoft Sentinel in the [Defender portal](https://security.microsoft.com/), select **Microsoft Sentinel** > **Content management** > **Content hub**. +1. For Microsoft Sentinel in the [Azure portal](https://portal.azure.com), under **Content management**, select **Content hub**. ++ For Microsoft Sentinel in the [Defender portal](https://security.microsoft.com/), select **Microsoft Sentinel** > **Content management** > **Content hub**. 1. Find and select the **Threat Intelligence** solution. To import threat indicators into Microsoft Sentinel from standard and premium MD For more information about how to manage the solution components, see [Discover and deploy out-of-the-box content](sentinel-solutions-deploy.md). -## Enable the Microsoft Defender Threat Intelligence data connector +## Enable the Defender Threat Intelligence data connector ++1. For Microsoft Sentinel in the [Azure portal](https://portal.azure.com), under **Configuration**, select **Data connectors**. -1. For Microsoft Sentinel in the [Azure portal](https://portal.azure.com), under **Configuration**, select **Data connectors**.<br> For Microsoft Sentinel in the [Defender portal](https://security.microsoft.com/), select **Microsoft Sentinel** > **Configuration** > **Data connectors**. + For Microsoft Sentinel in the [Defender portal](https://security.microsoft.com/), select **Microsoft Sentinel** > **Configuration** > **Data connectors**. -1. Find and select the Microsoft Defender Threat Intelligence data connector > **Open connector page** button. +1. Find and select the Defender Threat Intelligence data connector **Open connector page** button. - :::image type="content" source="mediti-data-connector/premium-microsoft-defender-threat-intelligence-data-connector-config.png"::: + :::image type="content" source="mediti-data-connector/premium-microsoft-defender-threat-intelligence-data-connector-config.png"::: -1. Enable the feed by selecting the **Connect** button +1. Enable the feed by selecting **Connect**. - :::image type="content" source="mediti-data-connector/microsoft-defender-threat-intelligence-data-connector-connect.png"::: + :::image type="content" source="mediti-data-connector/microsoft-defender-threat-intelligence-data-connector-connect.png"::: -1. When MDTI indicators start populating the Microsoft Sentinel workspace, the connector status displays **Connected**. +1. When Defender Threat Intelligence indicators start populating the Microsoft Sentinel workspace, the connector status displays **Connected**. 
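Once the connector shows **Connected**, you can spot-check that indicators are landing in the workspace from PowerShell; a minimal sketch, where the `SourceSystem` filter value is an assumption and the workspace ID is a placeholder:

```powershell
# Requires the Az.OperationalInsights module; '<workspaceId>' is a placeholder.
$kql = 'ThreatIntelligenceIndicator | where SourceSystem has "Defender Threat Intelligence" | take 10'
(Invoke-AzOperationalInsightsQuery -WorkspaceId '<workspaceId>' -Query $kql).Results
```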
-At this point, the ingested indicators are now available for use in the *TI map...* analytics rules. For more information, see [Use threat indicators in analytics rules](use-threat-indicators-in-analytics-rules.md). +At this point, the ingested indicators are now available for use in the `TI map...` analytics rules. For more information, see [Use threat indicators in analytics rules](use-threat-indicators-in-analytics-rules.md). -Find the new indicators in the **Threat intelligence** blade or directly in **Logs** by querying the **ThreatIntelligenceIndicator** table. For more information, see [Work with threat indicators](work-with-threat-indicators.md). +Find the new indicators on the **Threat intelligence** pane or directly in **Logs** by querying the `ThreatIntelligenceIndicator` table. For more information, see [Work with threat indicators](work-with-threat-indicators.md). ## Related content -In this document, you learned how to connect Microsoft Sentinel to Microsoft's threat intelligence feed with the MDTI data connector. To learn more about Microsoft Defender for Threat Intelligence see the following articles. +In this article, you learned how to connect Microsoft Sentinel to the Microsoft threat intelligence feed with the Defender Threat Intelligence data connector. To learn more about Defender Threat Intelligence, see the following articles: -- Learn about [What is Microsoft Defender Threat Intelligence?](/defender/threat-intelligence/what-is-microsoft-defender-threat-intelligence-defender-ti).-- Get started with the MDTI portal [MDTI portal](/defender/threat-intelligence/learn-how-to-access-microsoft-defender-threat-intelligence-and-make-customizations-in-your-portal).-- Use MDTI in analytics [Use matching analytics to detect threats](use-matching-analytics-to-detect-threats.md).+- Learn about [What is Defender Threat Intelligence?](/defender/threat-intelligence/what-is-microsoft-defender-threat-intelligence-defender-ti). +- Get started with the [Defender Threat Intelligence portal](/defender/threat-intelligence/learn-how-to-access-microsoft-defender-threat-intelligence-and-make-customizations-in-your-portal). +- Use Defender Threat Intelligence in analytics [by using matching analytics to detect threats](use-matching-analytics-to-detect-threats.md). |
sentinel | Connect Threat Intelligence Tip | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/connect-threat-intelligence-tip.md | appliesto: - Microsoft Sentinel in the Azure portal - Microsoft Sentinel in the Microsoft Defender portal -#customer intent: As a SOC admin, I want to use a Threat Intelligence Platform solution to ingest threat intelligence, so I can generate alerts incidents. +#customer intent: As an SOC admin, I want to use a threat intelligence platform solution to ingest threat intelligence so that I can generate alerts and incidents. # Connect your threat intelligence platform to Microsoft Sentinel ->[!NOTE] -> This data connector is on a path for deprecation. More details will be published on the precise timeline. Use the new threat intelligence upload indicators API data connector for new solutions going forward. -> For more information, see [Connect your threat intelligence platform to Microsoft Sentinel with the upload indicators API](connect-threat-intelligence-upload-api.md). +> [!NOTE] +> This data connector is on a path for deprecation. More information will be published on the precise timeline. Use the new Threat Intelligence Upload Indicators API data connector for new solutions going forward. +> For more information, see [Connect your threat intelligence platform to Microsoft Sentinel with the Upload Indicators API](connect-threat-intelligence-upload-api.md). -Many organizations use threat intelligence platform (TIP) solutions to aggregate threat indicator feeds from various sources. From the aggregated feed, the data is curated to apply to security solutions such as network devices, EDR/XDR solutions, or SIEMs such as Microsoft Sentinel. The **Threat Intelligence Platforms data connector** allows you to use these solutions to import threat indicators into Microsoft Sentinel. +Many organizations use threat intelligence platform (TIP) solutions to aggregate threat indicator feeds from various sources. From the aggregated feed, the data is curated to apply to security solutions such as network devices, EDR/XDR solutions, or security information and event management (SIEM) solutions such as Microsoft Sentinel. By using the TIP data connector, you can use these solutions to import threat indicators into Microsoft Sentinel. -Because the TIP data connector works with the [Microsoft Graph Security tiIndicators API](/graph/api/resources/tiindicator) to accomplish this, you can use the connector to send indicators to Microsoft Sentinel (and to other Microsoft security solutions like Microsoft Defender XDR) from any other custom threat intelligence platform that can communicate with that API. +Because the TIP data connector works with the [Microsoft Graph Security tiIndicators API](/graph/api/resources/tiindicator) to accomplish this process, you can use the connector to send indicators to Microsoft Sentinel (and to other Microsoft security solutions like Defender XDR) from any other custom TIP that can communicate with that API. --Learn more about [Threat Intelligence](understand-threat-intelligence.md) in Microsoft Sentinel, and specifically about the [threat intelligence platform products](threat-intelligence-integration.md#integrated-threat-intelligence-platform-products) that can be integrated with Microsoft Sentinel. 
[!INCLUDE [reference-to-feature-availability](includes/reference-to-feature-availability.md)] +Learn more about [threat intelligence](understand-threat-intelligence.md) in Microsoft Sentinel, and specifically about the [TIP products](threat-intelligence-integration.md#integrated-threat-intelligence-platform-products) that you can integrate with Microsoft Sentinel. + [!INCLUDE [unified-soc-preview](includes/unified-soc-preview.md)] -## Prerequisites +## Prerequisites -- In order to install, update and delete standalone content or solutions in content hub, you need the **Microsoft Sentinel Contributor** role at the resource group level.-- To grant permissions to your TIP product or any other custom application that uses direct integration with the Microsoft Graph TI Indicators API, you must have the **Security administrator** Microsoft Entra role, or the equivalent permissions.-- You must have read and write permissions to the Microsoft Sentinel workspace to store your threat indicators.+- To install, update, and delete standalone content or solutions in the **Content hub**, you need the Microsoft Sentinel Contributor role at the resource group level. +- To grant permissions to your TIP product or any other custom application that uses direct integration with the Microsoft Graph TI Indicators API, you must have the Security Administrator Microsoft Entra role or the equivalent permissions. +- To store your threat indicators, you must have read and write permissions to the Microsoft Sentinel workspace. ## Instructions -Follow these steps to import threat indicators to Microsoft Sentinel from your integrated TIP or custom threat intelligence solution: -1. Obtain an Application ID and Client Secret from your Microsoft Entra ID -2. Input this information into your TIP solution or custom application -3. Enable the Threat Intelligence Platforms data connector in Microsoft Sentinel +To import threat indicators to Microsoft Sentinel from your integrated TIP or custom threat intelligence solution, follow these steps: ++1. Obtain an application ID and client secret from Microsoft Entra ID. +1. Input this information into your TIP solution or custom application. +1. Enable the TIP data connector in Microsoft Sentinel. <a name='sign-up-for-an-application-id-and-client-secret-from-your-azure-active-directory'></a> -## Sign up for an Application ID and Client secret from your Microsoft Entra ID +## Sign up for an application ID and client secret from Microsoft Entra ID -Whether you are working with a TIP or with a custom solution, the tiIndicators API requires some basic information to allow you to connect your feed to it and send it threat indicators. The three pieces of information you need are: +Whether you're working with a TIP or a custom solution, the tiIndicators API requires some basic information to allow you to connect your feed to it and send it threat indicators. The three pieces of information you need are: - Application (client) ID - Directory (tenant) ID - Client secret -You can get this information from your Microsoft Entra ID through a process called **App Registration** which includes the following three steps: +You can get this information from Microsoft Entra ID through app registration, which includes the following three steps: -- Register an app with Microsoft Entra ID-- Specify the permissions required by the app to connect to the Microsoft Graph tiIndicators API and send threat indicators+- Register an app with Microsoft Entra ID. 
+- Specify the permissions required by the app to connect to the Microsoft Graph tiIndicators API and send threat indicators. - Get consent from your organization to grant these permissions to this application. <a name='register-an-application-with-azure-active-directory'></a> #### Register an application with Microsoft Entra ID -1. From the Azure portal, navigate to the **Microsoft Entra ID** service. -1. Select **App Registrations** from the menu and select **New registration**. -1. Choose a name for your application registration, select the **Single tenant** radio button, and select **Register**. +1. In the Azure portal, go to **Microsoft Entra ID**. +1. On the menu, select **App Registrations**, and then select **New registration**. +1. Choose a name for your application registration, select **Single tenant**, and then select **Register**. - :::image type="content" source="media/connect-threat-intelligence-tip/threat-intel-register-application.png" alt-text="Register an application"::: + :::image type="content" source="media/connect-threat-intelligence-tip/threat-intel-register-application.png" alt-text="Screenshot that shows registering an application."::: -1. From the resulting screen, copy the **Application (client) ID** and **Directory (tenant) ID** values. These are the first two pieces of information you'll need later to configure your TIP or custom solution to send threat indicators to Microsoft Sentinel. The third, the **Client secret**, comes later. +1. On the screen that opens, copy the **Application (client) ID** and **Directory (tenant) ID** values. You need these two pieces of information later to configure your TIP or custom solution to send threat indicators to Microsoft Sentinel. The third piece of information you need, the client secret, comes later. #### Specify the permissions required by the application -1. Go back to the main page of the **Microsoft Entra ID** service. +1. Go back to the main page of **Microsoft Entra ID**. -1. Select **App Registrations** from the menu and select your newly registered app. +1. On the menu, select **App Registrations**, and then select your newly registered app. -1. Select **API Permissions** from the menu and select the **Add a permission** button. +1. On the menu, select **API Permissions** > **Add a permission**. -1. On the **Select an API** page, select the **Microsoft Graph** API and then choose from a list of Microsoft Graph permissions. +1. On the **Select an API** page, select the **Microsoft Graph** API. Then choose from a list of Microsoft Graph permissions. -1. At the prompt "What type of permissions does your application require?" select **Application permissions**. This is the type of permissions used by applications authenticating with App ID and App Secrets (API Keys). +1. At the prompt **What type of permissions does your application require?**, select **Application permissions**. This permission is the type used by applications that authenticate with app ID and app secrets (API keys). -1. Select **ThreatIndicators.ReadWrite.OwnedBy** and select **Add permissions** to add this permission to your app's list of permissions. +1. Select **ThreatIndicators.ReadWrite.OwnedBy**, and then select **Add permissions** to add this permission to your app's list of permissions. 
- :::image type="content" source="media/connect-threat-intelligence-tip/threat-intel-api-permissions-1.png" alt-text="Specify permissions"::: + :::image type="content" source="media/connect-threat-intelligence-tip/threat-intel-api-permissions-1.png" alt-text="Screenshot that shows specifying permissions."::: #### Get consent from your organization to grant these permissions 1. To grant consent, a privileged role is required. For more information, see [Grant tenant-wide admin consent to an application](/entra/identity/enterprise-apps/grant-admin-consent?pivots=portal). - :::image type="content" source="media/connect-threat-intelligence-tip/threat-intel-api-permissions-2.png" alt-text="Grant consent"::: + :::image type="content" source="media/connect-threat-intelligence-tip/threat-intel-api-permissions-2.png" alt-text="Screenshot that shows granting consent."::: -1. Once consent has been granted to your app, you should see a green check mark under **Status**. +1. After consent is granted to your app, you should see a green check mark under **Status**. -Now that your app has been registered and permissions have been granted, you can get the last thing on your list - a client secret for your app. +After your app is registered and permissions are granted, you need to get a client secret for your app. -1. Go back to the main page of the **Microsoft Entra ID** service. +1. Go back to the main page of **Microsoft Entra ID**. -1. Select **App Registrations** from the menu and select your newly registered app. +1. On the menu, select **App Registrations**, and then select your newly registered app. -1. Select **Certificates & secrets** from the menu and select the **New client secret** button to receive a secret (API key) for your app. +1. On the menu, select **Certificates & secrets**. Then select **New client secret** to receive a secret (API key) for your app. - :::image type="content" source="media/connect-threat-intelligence-tip/threat-intel-client-secret.png" alt-text="Get client secret"::: + :::image type="content" source="media/connect-threat-intelligence-tip/threat-intel-client-secret.png" alt-text="Screenshot that shows getting a client secret."::: -1. Select the **Add** button and **copy the client secret**. +1. Select **Add**, and then copy the client secret. > [!IMPORTANT]- > You must copy the **client secret** before leaving this screen. You cannot retrieve this secret again if you navigate away from this page. You will need this value when you configure your TIP or custom solution. + > You must copy the client secret before you leave this screen. You can't retrieve this secret again if you navigate away from this page. You need this value when you configure your TIP or custom solution. ## Input this information into your TIP solution or custom application -You now have all three pieces of information you need to configure your TIP or custom solution to send threat indicators to Microsoft Sentinel. +You now have all three pieces of information you need to configure your TIP or custom solution to send threat indicators to Microsoft Sentinel: - Application (client) ID - Directory (tenant) ID - Client secret -1. Enter these values in the configuration of your integrated TIP or custom solution where required. +Enter these values in the configuration of your integrated TIP or custom solution where required. -1. For the target product, specify **Azure Sentinel**. (Specifying "Microsoft Sentinel" will result in an error.) +1. For the target product, specify **Azure Sentinel**. 
(Specifying **Microsoft Sentinel** results in an error.) 1. For the action, specify **alert**. -Once this configuration is complete, threat indicators will be sent from your TIP or custom solution, through the **Microsoft Graph tiIndicators API**, targeted at Microsoft Sentinel. +After the configuration is finished, threat indicators are sent from your TIP or custom solution, through the Microsoft Graph tiIndicators API, targeted at Microsoft Sentinel. -## Enable the Threat Intelligence Platforms data connector in Microsoft Sentinel +## Enable the TIP data connector in Microsoft Sentinel -The last step in the integration process is to enable the **Threat Intelligence Platforms data connector** in Microsoft Sentinel. Enabling the connector is what allows Microsoft Sentinel to receive the threat indicators sent from your TIP or custom solution. These indicators will be available to all Microsoft Sentinel workspaces for your organization. Follow these steps to enable the Threat Intelligence Platforms data connector for each workspace: +The last step in the integration process is to enable the TIP data connector in Microsoft Sentinel. Enabling the connector is what allows Microsoft Sentinel to receive the threat indicators sent from your TIP or custom solution. These indicators are available to all Microsoft Sentinel workspaces for your organization. To enable the TIP data connector for each workspace, follow these steps: 1. For Microsoft Sentinel in the [Azure portal](https://portal.azure.com), under **Content management**, select **Content hub**. <br>For Microsoft Sentinel in the [Defender portal](https://security.microsoft.com/), select **Microsoft Sentinel** > **Content management** > **Content hub**. The last step in the integration process is to enable the **Threat Intelligence 1. Select the :::image type="icon" source="mediti-data-connector/install-update-button.png"::: **Install/Update** button. -For more information about how to manage the solution components, see [Discover and deploy out-of-the-box content](sentinel-solutions-deploy.md). + For more information about how to manage the solution components, see [Discover and deploy out-of-the-box content](sentinel-solutions-deploy.md). -1. To configure the TIP data connector, select **Configuration** > **Data connectors**. +1. To configure the TIP data connector, select **Configuration** > **Data connectors**. -1. Find and select the **Threat Intelligence Platforms** data connector > **Open connector page** button. +1. Find and select the **Threat Intelligence Platforms** data connector, and then select **Open connector page**. - :::image type="content" source="media/connect-threat-intelligence-tip/tip-data-connector-config.png" alt-text="Screenshot displaying the data connectors page with the TIP data connector listed." lightbox="media/connect-threat-intelligence-tip/tip-data-connector-config.png"::: + :::image type="content" source="media/connect-threat-intelligence-tip/tip-data-connector-config.png" alt-text="Screenshot that shows the Data connectors page with the Threat Intelligence Platforms data connector listed." lightbox="media/connect-threat-intelligence-tip/tip-data-connector-config.png"::: -1. As you've already completed the app registration and configured your TIP or custom solution to send threat indicators, the only step left is to select the **Connect** button. +1. 
Because you already finished the app registration and configured your TIP or custom solution to send threat indicators, the only step left is to select **Connect**. -Within a few minutes, threat indicators should begin flowing into this Microsoft Sentinel workspace. You can find the new indicators in the **Threat intelligence** blade, accessible from the Microsoft Sentinel navigation menu. +Within a few minutes, threat indicators should begin flowing into this Microsoft Sentinel workspace. You can find the new indicators on the **Threat intelligence** pane, which you can access from the Microsoft Sentinel menu. ## Related content -In this document, you learned how to connect your threat intelligence platform to Microsoft Sentinel. To learn more about Microsoft Sentinel, see the following articles. +In this article, you learned how to connect your TIP to Microsoft Sentinel. To learn more about Microsoft Sentinel, see the following articles: - Learn how to [get visibility into your data and potential threats](get-visibility.md). - Get started [detecting threats with Microsoft Sentinel](./detect-threats-built-in.md). |
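The app registration walkthrough above can also be scripted; a minimal sketch with the Az.Resources module, where the display name is a placeholder and the Graph permission grant and admin consent still follow the portal steps described earlier:

```powershell
# Sketch: create the app registration and a client secret; admin consent for
# the tiIndicators permission still happens separately, as described above.
$app    = New-AzADApplication -DisplayName '<tip-connector-app>' -SignInAudience 'AzureADMyOrg'
$secret = New-AzADAppCredential -ApplicationId $app.AppId   # copy $secret.SecretText now
$tenant = (Get-AzContext).Tenant.Id
"Client ID: $($app.AppId)`nTenant ID: $tenant"
```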
sentinel | Connect Threat Intelligence Upload Api | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/connect-threat-intelligence-upload-api.md | Title: Connect your TIP with upload indicators API -description: Learn how to connect your threat intelligence platform (TIP) or custom feed using the upload indicators API to Microsoft Sentinel. +description: Learn how to connect your threat intelligence platform or custom feed by using the Upload Indicators API to Microsoft Sentinel. Last updated 3/14/2024 appliesto: - Microsoft Sentinel in the Azure portal - Microsoft Sentinel in the Microsoft Defender portal -#customer intent: As a SOC admin, I want to connect my Threat Intelligence Platform with the upload indicators API to ingest threat intelligence, so I can utilize the benefits of this updated API. +#customer intent: As an SOC admin, I want to connect my threat intelligence platform with the Upload Indicators API to ingest threat intelligence so that I can use the benefits of this updated API. -# Connect your threat intelligence platform to Microsoft Sentinel with the upload indicators API +# Connect your threat intelligence platform to Microsoft Sentinel with the Upload Indicators API -Many organizations use threat intelligence platform (TIP) solutions to aggregate threat indicator feeds from various sources. From the aggregated feed, the data is curated to apply to security solutions such as network devices, EDR/XDR solutions, or SIEMs such as Microsoft Sentinel. The **Threat Intelligence Upload Indicators API** allows you to use these solutions to import threat indicators into Microsoft Sentinel. The upload indicators API ingests threat intelligence indicators into Microsoft Sentinel without the need of the data connector. The data connector only mirrors the instructions for connecting to the API endpoint detailed in this article and the supplemental API reference document [Microsoft Sentinel upload indicators API](upload-indicators-api.md). +Many organizations use threat intelligence platform (TIP) solutions to aggregate threat indicator feeds from various sources. From the aggregated feed, the data is curated to apply to security solutions such as network devices, EDR/XDR solutions, or security information and event management (SIEM) solutions such as Microsoft Sentinel. By using the Threat Intelligence Upload Indicators API, you can use these solutions to import threat indicators into Microsoft Sentinel. +The Upload Indicators API ingests threat intelligence indicators into Microsoft Sentinel without the need for the data connector. The data connector only mirrors the instructions for connecting to the API endpoint described in this article and the API reference document [Microsoft Sentinel Upload Indicators API](upload-indicators-api.md). -For more information about threat intelligence, see [Threat Intelligence](understand-threat-intelligence.md). ++For more information about threat intelligence, see [Threat intelligence](understand-threat-intelligence.md). > [!IMPORTANT]-> The Microsoft Sentinel **Threat Intelligence Upload Indicators API** is in **PREVIEW**. See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability. +> The Microsoft Sentinel Threat Intelligence Upload Indicators API is in preview. 
See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for more legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability. > > [!INCLUDE [unified-soc-preview-without-alert](includes/unified-soc-preview-without-alert.md)] +For more information, see [Connect Microsoft Sentinel to STIX/TAXII threat intelligence feeds](connect-threat-intelligence-taxii.md). + [!INCLUDE [reference-to-feature-availability](includes/reference-to-feature-availability.md)] -**See also**: [Connect Microsoft Sentinel to STIX/TAXII threat intelligence feeds](connect-threat-intelligence-taxii.md) +## Prerequisites -## Prerequisites -- In order to install, update and delete standalone content or solutions in content hub, you need the **Microsoft Sentinel Contributor** role at the resource group level. Keep in mind, you don't need to install the data connector to use the API endpoint.+- To install, update, and delete standalone content or solutions in the **Content hub**, you need the Microsoft Sentinel Contributor role at the resource group level. You don't need to install the data connector to use the API endpoint. - You must have read and write permissions to the Microsoft Sentinel workspace to store your threat indicators.-- You must be able to register a Microsoft Entra application. -- The Microsoft Entra application must be granted the Microsoft Sentinel contributor role at the workspace level.+- You must be able to register a Microsoft Entra application. +- Your Microsoft Entra application must be granted the Microsoft Sentinel Contributor role at the workspace level. ## Instructions Follow these steps to import threat indicators to Microsoft Sentinel from your integrated TIP or custom threat intelligence solution: -1. Register a Microsoft Entra application and record its application ID. +1. Register a Microsoft Entra application, and then record its application ID. 1. Generate and record a client secret for your Microsoft Entra application.-1. Assign your Microsoft Entra application the Microsoft Sentinel contributor role or equivalent. +1. Assign your Microsoft Entra application the Microsoft Sentinel Contributor role or the equivalent. 1. Configure your TIP solution or custom application. <a name='register-an-azure-ad-application'></a> ## Register a Microsoft Entra application -The [default user role permissions](../active-directory/fundamentals/users-default-permissions.md#restrict-member-users-default-permissions) allow users to create application registrations. If this setting has been switched to **No**, you'll need permission to manage applications in Microsoft Entra ID. Any of the following Microsoft Entra roles include the required permissions: +The [default user role permissions](../active-directory/fundamentals/users-default-permissions.md#restrict-member-users-default-permissions) allow users to create application registrations. If this setting was switched to **No**, you need permission to manage applications in Microsoft Entra. Any of the following Microsoft Entra roles include the required permissions: + - Application administrator - Application developer - Cloud application administrator For more information on registering your Microsoft Entra application, see [Register an application](../active-directory/develop/quickstart-register-app.md#register-an-application). 
-Once you've registered your application, record its Application (client) ID from the application's **Overview** tab. +After you register your application, record its application (client) ID from the application's **Overview** tab. -## Generate and record client secret +## Generate and record a client secret -Now that your application has been registered, generate and record a client secret. +Now that your application is registered, generate and record a client secret. For more information on generating a client secret, see [Add a client secret](../active-directory/develop/quickstart-register-app.md#add-a-client-secret). ## Assign a role to the application -The upload indicators API ingests threat indicators at the workspace level and allows a least privilege role of Microsoft Sentinel contributor. +The Upload Indicators API ingests threat indicators at the workspace level and allows a least-privilege role of Microsoft Sentinel Contributor. -1. From the Azure portal, go to Log Analytics workspaces. +1. From the Azure portal, go to **Log Analytics workspaces**. 1. Select **Access control (IAM)**. 1. Select **Add** > **Add role assignment**.-1. In the **Role** tab, select the **Microsoft Sentinel Contributor** role > **Next**. +1. On the **Role** tab, select the **Microsoft Sentinel Contributor** role, and then select **Next**. 1. On the **Members** tab, select **Assign access to** > **User, group, or service principal**.-1. **Select members**. By default, Microsoft Entra applications aren't displayed in the available options. To find your application, search for it by name. - :::image type="content" source="media/connect-threat-intelligence-upload-api/assign-role.png" alt-text="Screenshot showing the Microsoft Sentinel contributor role assigned to the application at the workspace level."::: +1. Select members. By default, Microsoft Entra applications aren't displayed in the available options. To find your application, search for it by name. -1. **Select** > **Review + assign**. + :::image type="content" source="media/connect-threat-intelligence-upload-api/assign-role.png" alt-text="Screenshot that shows the Microsoft Sentinel Contributor role assigned to the application at the workspace level."::: ++1. Select **Review + assign**. For more information on assigning roles to applications, see [Assign a role to the application](../active-directory/develop/howto-create-service-principal-portal.md#assign-a-role-to-the-application). -## Install the Threat Intelligence upload indicators API data connector in Microsoft Sentinel (optional) +## Install the Threat Intelligence Upload Indicators API data connector in Microsoft Sentinel (optional) -Install the **Threat Intelligence Upload Indicators API** data connector to see the API connection instructions from your Microsoft Sentinel workspace. +Install the Threat Intelligence Upload Indicators API data connector to see the API connection instructions from your Microsoft Sentinel workspace. 1. For Microsoft Sentinel in the [Azure portal](https://portal.azure.com), under **Content management**, select **Content hub**. <br>For Microsoft Sentinel in the [Defender portal](https://security.microsoft.com/), select **Microsoft Sentinel** > **Content management** > **Content hub**. Install the **Threat Intelligence Upload Indicators API** data connector to see 1. Select the :::image type="icon" source="mediti-data-connector/install-update-button.png"::: **Install/Update** button. 
-For more information about how to manage the solution components, see [Discover and deploy out-of-the-box content](sentinel-solutions-deploy.md). + For more information about how to manage the solution components, see [Discover and deploy out-of-the-box content](sentinel-solutions-deploy.md). ++1. The data connector is now visible in **Configuration** > **Data connectors**. Open the **Data connectors** page to find more information on how to configure your application with this API. -1. The data connector is now visible in **Configuration** > **Data Connectors**. Open the data connector page to find more information on configuring your application with this API. + :::image type="content" source="media/connect-threat-intelligence-upload-api/upload-api-data-connector.png" alt-text="Screenshot that shows the Data connectors page with the Upload Indicators API data connector listed." lightbox="media/connect-threat-intelligence-upload-api/upload-api-data-connector.png"::: - :::image type="content" source="media/connect-threat-intelligence-upload-api/upload-api-data-connector.png" alt-text="Screenshot displaying the data connectors page with the upload API data connector listed." lightbox="media/connect-threat-intelligence-upload-api/upload-api-data-connector.png"::: +## Configure your threat intelligence platform solution or custom application -## Configure your TIP solution or custom application +The following configuration information is required by the Upload Indicators API: -The following configuration information required by the upload indicators API: - Application (client) ID - Client secret - Microsoft Sentinel workspace ID Enter these values in the configuration of your integrated TIP or custom solution where required. -1. Submit the indicators to the Microsoft Sentinel upload API. To learn more about the upload indicators API, see the reference document [Microsoft Sentinel upload indicators API](upload-indicators-api.md). -1. Within a few minutes, threat indicators should begin flowing into your Microsoft Sentinel workspace. Find the new indicators in the **Threat intelligence** blade, accessible from the Microsoft Sentinel navigation menu. -1. The data connector status reflects the **Connected** status and the **Data received** graph is updated once indicators are submitted successfully. +1. Submit the indicators to the Microsoft Sentinel Upload Indicators API. To learn more about the Upload Indicators API, see [Microsoft Sentinel Upload Indicators API](upload-indicators-api.md). +1. Within a few minutes, threat indicators should begin flowing into your Microsoft Sentinel workspace. Find the new indicators on the **Threat intelligence** pane, which is accessible from the Microsoft Sentinel menu. +1. The data connector status reflects the **Connected** status. The **Data received** graph is updated after indicators are submitted successfully. - :::image type="content" source="media/connect-threat-intelligence-upload-api/upload-api-data-connector-connected.png" alt-text="Screenshot showing upload indicators API data connector in the connected state." lightbox="media/connect-threat-intelligence-upload-api/upload-api-data-connector-connected.png"::: + :::image type="content" source="media/connect-threat-intelligence-upload-api/upload-api-data-connector-connected.png" alt-text="Screenshot that shows the Upload Indicators API data connector in the Connected state." 
lightbox="media/connect-threat-intelligence-upload-api/upload-api-data-connector-connected.png"::: ## Related content -In this document, you learned how to connect your threat intelligence platform to Microsoft Sentinel. To learn more about using threat indicators in Microsoft Sentinel, see the following articles. +In this article, you learned how to connect your TIP to Microsoft Sentinel. To learn more about using threat indicators in Microsoft Sentinel, see the following articles: - [Understand threat intelligence](understand-threat-intelligence.md). - [Work with threat indicators](work-with-threat-indicators.md) throughout the Microsoft Sentinel experience. |
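As a rough sketch of what a custom client does with those three values: request a client-credentials token, then POST the STIX payload to the workspace endpoint. The token scope and endpoint URI shown here are assumptions; take the exact values from the Upload Indicators API reference linked above.

```powershell
# Sketch only: angle-bracket values are placeholders, and the scope and endpoint
# URI are assumptions -- confirm both against the Upload Indicators API reference.
$tokenResponse = Invoke-RestMethod -Method Post `
    -Uri 'https://login.microsoftonline.com/<tenantId>/oauth2/v2.0/token' `
    -Body @{
        client_id     = '<applicationId>'
        client_secret = '<clientSecret>'
        scope         = 'https://management.azure.com/.default'
        grant_type    = 'client_credentials'
    }
$headers = @{ Authorization = "Bearer $($tokenResponse.access_token)" }
Invoke-RestMethod -Method Post -Uri '<upload-indicators endpoint from the API reference>' `
    -Headers $headers -ContentType 'application/json' `
    -Body (Get-Content '.\stix-indicators.json' -Raw)
```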
sentinel | Threat Intelligence Integration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/threat-intelligence-integration.md | -Microsoft Sentinel gives you a few different ways to [use threat intelligence feeds](work-with-threat-indicators.md) to enhance your security analysts' ability to detect and prioritize known threats. +Microsoft Sentinel gives you a few ways to [use threat intelligence feeds](work-with-threat-indicators.md) to enhance your security analysts' ability to detect and prioritize known threats: - Use one of many available integrated [threat intelligence platform (TIP) products](connect-threat-intelligence-tip.md).-- [Connect to TAXII servers](connect-threat-intelligence-taxii.md) to take advantage of any STIX-compatible threat intelligence source.+- Connect to [TAXII servers](connect-threat-intelligence-taxii.md) to take advantage of any STIX-compatible threat intelligence source. - Connect directly to the [Microsoft Defender Threat Intelligence](connect-mdti-data-connector.md) feed. - Make use of any custom solutions that can communicate directly with the [Threat Intelligence Upload Indicators API](connect-threat-intelligence-upload-api.md). -- You can also connect to threat intelligence sources from playbooks, in order to enrich incidents with TI information that can help direct investigation and response actions.+- Connect to threat intelligence sources from playbooks to enrich incidents with threat intelligence information that can help direct investigation and response actions. > [!TIP]-> If you have multiple workspaces in the same tenant, such as for [Managed Security Service Providers (MSSPs)](mssp-protect-intellectual-property.md), it may be more cost effective to connect threat indicators only to the centralized workspace. +> If you have multiple workspaces in the same tenant, such as for [Managed Security Service Providers (MSSPs)](mssp-protect-intellectual-property.md), it might be more cost effective to connect threat indicators only to the centralized workspace. > > When you have the same set of threat indicators imported into each separate workspace, you can run cross-workspace queries to aggregate threat indicators across your workspaces. Correlate them within your MSSP incident detection, investigation, and hunting experience.-> ## TAXII threat intelligence feeds -To connect to TAXII threat intelligence feeds, follow the instructions to [connect Microsoft Sentinel to STIX/TAXII threat intelligence feeds](connect-threat-intelligence-taxii.md), together with the data supplied by each vendor. You may need to contact the vendor directly to obtain the necessary data to use with the connector. +To connect to TAXII threat intelligence feeds, follow the instructions to [connect Microsoft Sentinel to STIX/TAXII threat intelligence feeds](connect-threat-intelligence-taxii.md), together with the data supplied by each vendor. You might need to contact the vendor directly to obtain the necessary data to use with the connector. -### Accenture Cyber Threat Intelligence +### Accenture cyber threat intelligence -- [Learn about Accenture Cyber Threat Intelligence (CTI) integration with Microsoft Sentinel](https://www.accenture.com/us-en/services/security/cyber-resilience).+- Learn about [Accenture cyber threat intelligence (CTI) integration with Microsoft Sentinel](https://www.accenture.com/us-en/services/security/cyber-resilience). 
### Cybersixgill Darkfeed -- [Learn about Cybersixgill integration with Microsoft Sentinel](https://www.cybersixgill.com/partners/azure-sentinel/).-- To connect Microsoft Sentinel to Cybersixgill TAXII Server and get access to Darkfeed, [contact azuresentinel@cybersixgill.com](mailto://azuresentinel@cybersixgill.com) to obtain the API Root, Collection ID, Username, and Password.+- Learn about [Cybersixgill integration with Microsoft Sentinel](https://www.cybersixgill.com/partners/azure-sentinel/). +- Connect Microsoft Sentinel to the Cybersixgill TAXII server and get access to Darkfeed. [Contact azuresentinel@cybersixgill.com](mailto://azuresentinel@cybersixgill.com) to obtain the API root, collection ID, username, and password. ++### Cyware threat intelligence exchange (CTIX) -### Cyware Threat Intelligence eXchange (CTIX) +One component of Cyware's TIP, CTIX, makes intel actionable with a TAXII feed for your security information and event management (SIEM) solution. For Microsoft Sentinel, follow the instructions here: -One component of Cyware's threat intelligence platform, CTIX, is actioning intel with a TAXII feed for your SIEM. In the case of Microsoft Sentinel, follow the instructions here: + - Learn how to [integrate with Microsoft Sentinel](https://techdocs.cyware.com/en/299670-419978-configure-subscribers-to-receive-ctix-threat-intel-over-taxii.html#299670-13832-integrate-with-microsoft-sentinel). ### ESET -- [Learn about ESET's threat intelligence offering](https://www.eset.com/int/business/services/threat-intelligence/).-- To connect Microsoft Sentinel to the ESET TAXII server, obtain the API root URL, Collection ID, Username, and Password from your ESET account. Then follow the [general instructions](connect-threat-intelligence-taxii.md) and [ESET's knowledge base article](https://support.eset.com/en/kb8314-eset-threat-intelligence-with-ms-azure-sentinel).+- Learn about [ESET's threat intelligence offering](https://www.eset.com/int/business/services/threat-intelligence/). +- Connect Microsoft Sentinel to the ESET TAXII server. Obtain the API root URL, collection ID, username, and password from your ESET account. Then follow the [general instructions](connect-threat-intelligence-taxii.md) and [ESET's knowledge base article](https://support.eset.com/en/kb8314-eset-threat-intelligence-with-ms-azure-sentinel). ### Financial Services Information Sharing and Analysis Center (FS-ISAC) One component of Cyware's threat intelligence platform, CTIX, is actioning intel ### Health intelligence sharing community (H-ISAC) -- [Join the H-ISAC](https://h-isac.org/) to get the credentials to access this feed.+- Join the [H-ISAC](https://h-isac.org/) to get the credentials to access this feed. ### IBM X-Force -- [Learn more about IBM X-Force integration](https://www.ibm.com/security/xforce).+- Learn more about [IBM X-Force integration](https://www.ibm.com/security/xforce). ### IntSights -- [Learn more about the IntSights integration with Microsoft Sentinel @IntSights](https://intsights.com/resources/intsights-microsoft-azure-sentinel).-- To connect Microsoft Sentinel to the IntSights TAXII Server, obtain the API Root, Collection ID, Username and Password from the IntSights portal after you configure a policy of the data you wish to send to Microsoft Sentinel.+- Learn more about the [IntSights integration with Microsoft Sentinel @IntSights](https://intsights.com/resources/intsights-microsoft-azure-sentinel). +- Connect Microsoft Sentinel to the IntSights TAXII server.
Obtain the API root, collection ID, username, and password from the IntSights portal after you configure a policy of the data that you want to send to Microsoft Sentinel. ### Kaspersky -- [Learn about Kaspersky integration with Microsoft Sentinel](https://support.kaspersky.com/15908).+- Learn about [Kaspersky integration with Microsoft Sentinel](https://support.kaspersky.com/15908). ### Pulsedive -- [Learn about Pulsedive integration with Microsoft Sentinel](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/import-pulsedive-feed-into-microsoft-sentinel/ba-p/3478953).+- Learn about [Pulsedive integration with Microsoft Sentinel](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/import-pulsedive-feed-into-microsoft-sentinel/ba-p/3478953). ### ReversingLabs -- [Learn about ReversingLabs TAXII integration with Microsoft Sentinel](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/import-reversinglab-s-ransomware-feed-into-microsoft-sentinel/ba-p/3423937).+- Learn about [ReversingLabs TAXII integration with Microsoft Sentinel](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/import-reversinglab-s-ransomware-feed-into-microsoft-sentinel/ba-p/3423937). ### Sectrio -- [Learn more about Sectrio integration](https://sectrio.com/threat-intelligence/).-- [Step by step process for integrating Sectrio's TI feed into Microsoft Sentinel](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/microsoft-sentinel-bring-threat-intelligence-from-sectrio-using/ba-p/2964648).+- Learn more about [Sectrio integration](https://sectrio.com/threat-intelligence/). +- Learn about the [step-by-step process for integrating Sectrio's threat intelligence feed into Microsoft Sentinel](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/microsoft-sentinel-bring-threat-intelligence-from-sectrio-using/ba-p/2964648). ### SEKOIA.IO -- [Learn about SEKOIA.IO integration with Microsoft Sentinel](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/bring-threat-intelligence-from-sekoia-io-using-taxii-data/ba-p/3302497).+- Learn about [SEKOIA.IO integration with Microsoft Sentinel](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/bring-threat-intelligence-from-sekoia-io-using-taxii-data/ba-p/3302497). ### ThreatConnect -- [Learn more about STIX and TAXII at ThreatConnect](https://threatconnect.com/stix-taxii/).-- [See TAXII Services documentation at ThreatConnect](https://docs.threatconnect.com/en/latest/rest_api/taxii/taxii_2.1.html)+- Learn more about [STIX and TAXII at ThreatConnect](https://threatconnect.com/stix-taxii/). +- See the [TAXII services documentation at ThreatConnect](https://docs.threatconnect.com/en/latest/rest_api/taxii/taxii_2.1.html). ## Integrated threat intelligence platform products -To connect to Threat Intelligence Platform (TIP) feeds, see [connect Threat Intelligence platforms to Microsoft Sentinel](connect-threat-intelligence-tip.md). See the following solutions to learn what additional information is needed. +To connect to TIP feeds, see [Connect threat intelligence platforms to Microsoft Sentinel](connect-threat-intelligence-tip.md). See the following solutions to learn what other information is needed. 
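Several of the TIP products that follow push indicators through the Threat Intelligence Upload Indicators API rather than the legacy tiIndicators API. As a rough, hedged sketch of what that upload looks like on the wire: the endpoint shape and `api-version` below reflect the Upload Indicators API documentation at the time of writing and should be verified against [Connect your threat intelligence platform using upload indicators API](connect-threat-intelligence-upload-api.md); the tenant, app, workspace, and indicator values are all placeholders.

```python
# pip install requests azure-identity
# Hedged sketch of the Upload Indicators API call a TIP makes on your
# behalf; verify the endpoint and api-version against the current docs.
from azure.identity import ClientSecretCredential
import requests

# Placeholder Microsoft Entra app registration; the app needs the
# Microsoft Sentinel Contributor role on the target workspace.
cred = ClientSecretCredential(tenant_id="<tenant-id>",
                              client_id="<app-id>",
                              client_secret="<secret>")
token = cred.get_token("https://management.azure.com/.default").token

workspace_id = "<workspace-id>"  # placeholder workspace GUID
url = (f"https://sentinelus.azure-api.net/workspaces/{workspace_id}"
       "/threatintelligenceindicators:upload?api-version=2022-07-01")

body = {
    "sourcesystem": "MyTIP",  # free-text name of the sending platform
    "indicators": [{          # one minimal STIX 2.1 indicator
        "type": "indicator",
        "spec_version": "2.1",
        "id": "indicator--0f9e2d4a-6b1c-4c3e-9a8d-1e2f3a4b5c6d",
        "created": "2024-01-01T00:00:00.000Z",
        "modified": "2024-01-01T00:00:00.000Z",
        "pattern": "[ipv4-addr:value = '192.0.2.1']",
        "pattern_type": "stix",
        "valid_from": "2024-01-01T00:00:00Z",
    }],
}
resp = requests.post(url, json=body,
                     headers={"Authorization": f"Bearer {token}"})
resp.raise_for_status()
print(resp.status_code)
```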
### Agari Phishing Defense and Brand Protection To connect to Threat Intelligence Platform (TIP) feeds, see [connect Threat Inte ### AlienVault Open Threat Exchange (OTX) from AT&T Cybersecurity -- [AlienVault OTX](https://otx.alienvault.com/) makes use of Azure Logic Apps (playbooks) to connect to Microsoft Sentinel. See the [specialized instructions](https://techcommunity.microsoft.com/t5/azure-sentinel/ingesting-alien-vault-otx-threat-indicators-into-azure-sentinel/ba-p/1086566) necessary to take full advantage of the complete offering.+- Learn how [AlienVault OTX](https://otx.alienvault.com/) makes use of Azure Logic Apps (playbooks) to connect to Microsoft Sentinel. See the [specialized instructions](https://techcommunity.microsoft.com/t5/azure-sentinel/ingesting-alien-vault-otx-threat-indicators-into-azure-sentinel/ba-p/1086566) necessary to take full advantage of the complete offering. ### EclecticIQ Platform -- EclecticIQ Platform integrates with Microsoft Sentinel to enhance threat detection, hunting and response. Learn more about the [benefits and use cases](https://www.eclecticiq.com/resources/microsoft-sentinel-and-eclecticiq-intelligence-center) of this two-way integration.+- EclecticIQ Platform integrates with Microsoft Sentinel to enhance threat detection, hunting, and response. Learn more about the [benefits and use cases](https://www.eclecticiq.com/resources/microsoft-sentinel-and-eclecticiq-intelligence-center) of this two-way integration. ### GroupIB Threat Intelligence and Attribution -- To connect [GroupIB Threat Intelligence and Attribution](https://www.group-ib.com/products/threat-intelligence/) to Microsoft Sentinel, GroupIB makes use of Azure Logic Apps. See the [specialized instructions](https://techcommunity.microsoft.com/t5/azure-sentinel/group-ib-threat-intelligence-and-attribution-connector-azure/ba-p/2252904) necessary to take full advantage of the complete offering.+- To connect [GroupIB Threat Intelligence and Attribution](https://www.group-ib.com/products/threat-intelligence/) to Microsoft Sentinel, GroupIB makes use of Logic Apps. See the [specialized instructions](https://techcommunity.microsoft.com/t5/azure-sentinel/group-ib-threat-intelligence-and-attribution-connector-azure/ba-p/2252904) that are necessary to take full advantage of the complete offering. -### MISP Open Source Threat Intelligence Platform +### MISP open-source threat intelligence platform -- Push threat indicators from MISP to Microsoft Sentinel using the TI upload indicators API with [MISP2Sentinel](https://www.misp-project.org/2023/08/26/MISP-Sentinel-UploadIndicatorsAPI.html/).-- Here is the Azure Marketplace link for [MISP2Sentinel](https://azuremarketplace.microsoft.com/marketplace/apps/microsoftsentinelcommunity.azure-sentinel-solution-misp2sentinel?tab=Overview).-- [Learn more about the MISP Project](https://www.misp-project.org/).+- Push threat indicators from MISP to Microsoft Sentinel by using the Threat Intelligence Upload Indicators API with [MISP2Sentinel](https://www.misp-project.org/2023/08/26/MISP-Sentinel-UploadIndicatorsAPI.html/). +- See [MISP2Sentinel](https://azuremarketplace.microsoft.com/marketplace/apps/microsoftsentinelcommunity.azure-sentinel-solution-misp2sentinel?tab=Overview) in Azure Marketplace. +- Learn more about the [MISP Project](https://www.misp-project.org/). 
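MISP2Sentinel handles this pipeline end to end, but the MISP side of the flow is easy to picture: pull recently published attributes with the `PyMISP` library, then convert each one to a STIX pattern for the upload payload. A rough sketch, with the MISP URL and authentication key as placeholders:

```python
# pip install pymisp
# Rough sketch of the MISP side of a MISP-to-Sentinel pipeline: pull
# recent network indicators with PyMISP. URL and key are placeholders.
from pymisp import PyMISP

misp = PyMISP("https://misp.example.com", "<auth-key>", ssl=True)

# Attributes published in the last day that map cleanly to IP patterns.
attributes = misp.search(controller="attributes",
                         type_attribute=["ip-dst", "ip-src"],
                         publish_timestamp="1d",
                         pythonify=True)

for attr in attributes:
    # Each attribute becomes one STIX pattern for the upload payload.
    print(f"[ipv4-addr:value = '{attr.value}']")
```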
### Palo Alto Networks MineMeld -- To configure [Palo Alto MineMeld](https://www.paloaltonetworks.com/products/secure-the-network/subscriptions/minemeld) with the connection information to Microsoft Sentinel, see [Sending IOCs to the Microsoft Graph Security API using MineMeld](https://live.paloaltonetworks.com/t5/MineMeld-Articles/Sending-IOCs-to-the-Microsoft-Graph-Security-API-using-MineMeld/ta-p/258540) and skip to the **MineMeld Configuration** heading.+- To configure [Palo Alto MineMeld](https://www.paloaltonetworks.com/products/secure-the-network/subscriptions/minemeld) with the connection information to Microsoft Sentinel, see [Sending IOCs to the Microsoft Graph Security API using MineMeld](https://live.paloaltonetworks.com/t5/MineMeld-Articles/Sending-IOCs-to-the-Microsoft-Graph-Security-API-using-MineMeld/ta-p/258540). Go to the "MineMeld Configuration" heading. -### Recorded Future Security Intelligence Platform +### Recorded Future security intelligence platform -- [Recorded Future](https://www.recordedfuture.com/integrations/microsoft-azure/) makes use of Azure Logic Apps (playbooks) to connect to Microsoft Sentinel. See the [specialized instructions](https://www.recordedfuture.com/integrations/microsoft) necessary to take full advantage of the complete offering.+- Learn how [Recorded Future](https://www.recordedfuture.com/integrations/microsoft-azure/) makes use of Logic Apps (playbooks) to connect to Microsoft Sentinel. See the [specialized instructions](https://www.recordedfuture.com/integrations/microsoft) necessary to take full advantage of the complete offering. ### ThreatConnect Platform - See the [Microsoft Graph Security Threat Indicators Integration Configuration Guide](https://training.threatconnect.com/learn/article/microsoft-graph-security-threat-indicators-integration-configuration-guide-kb-article) for instructions to connect [ThreatConnect](https://threatconnect.com/solution/) to Microsoft Sentinel. -### ThreatQuotient Threat Intelligence Platform +### ThreatQuotient threat intelligence platform - See [Microsoft Sentinel Connector for ThreatQ integration](https://azuremarketplace.microsoft.com/marketplace/apps/threatquotientinc1595345895602.microsoft-sentinel-connector-threatq?tab=overview) for support information and instructions to connect [ThreatQuotient TIP](https://www.threatq.com/) to Microsoft Sentinel. ## Incident enrichment sources -Besides being used to import threat indicators, threat intelligence feeds can also serve as a source to enrich the information in your incidents and provide more context to your investigations. The following feeds serve this purpose, and provide Logic App playbooks to use in your [automated incident response](automate-responses-with-playbooks.md). Find these enrichment sources in the **Content hub**. +Besides being used to import threat indicators, threat intelligence feeds can also serve as a source to enrich the information in your incidents and provide more context to your investigations. The following feeds serve this purpose and provide Logic Apps playbooks to use in your [automated incident response](automate-responses-with-playbooks.md). Find these enrichment sources in the **Content hub**. For more information about how to find and manage the solutions, see [Discover and deploy out-of-the-box content](sentinel-solutions-deploy.md). 
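Each of the playbooks below boils down to the same move: take an entity from the incident, call the provider's API, and write the verdict back to the incident. The playbooks themselves are Logic Apps workflows, not scripts, but as a conceptual illustration, here's the kind of raw lookup the VirusTotal playbooks perform against VirusTotal's public v3 file-report endpoint (the API key is a placeholder, and the hash is the well-known EICAR test file):

```python
# pip install requests
# Conceptual illustration of the enrichment lookup a playbook performs:
# fetch a VirusTotal v3 file report for a hash taken from an incident.
import requests

VT_API_KEY = "<virustotal-api-key>"  # placeholder
file_hash = "275a021bbfb6489e54d471899f7db9d1663fc695ec2fe2a2c4538aabf651fd0f"

resp = requests.get(
    f"https://www.virustotal.com/api/v3/files/{file_hash}",
    headers={"x-apikey": VT_API_KEY},
)
resp.raise_for_status()

stats = resp.json()["data"]["attributes"]["last_analysis_stats"]
print(f"malicious: {stats['malicious']}, harmless: {stats['harmless']}")
```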
### HYAS Insight - Find and enable incident enrichment playbooks for [HYAS Insight](https://www.hyas.com/hyas-insight) in the [Microsoft Sentinel GitHub repository](https://github.com/Azure/Azure-Sentinel/tree/master/Solutions/HYAS/Playbooks). Search for subfolders beginning with `Enrich-Sentinel-Incident-HYAS-Insight-`.-- See the HYAS Insight Logic App [connector documentation](/connectors/hyasinsight/).+- See the HYAS Insight Logic Apps [connector documentation](/connectors/hyasinsight/). ### Microsoft Defender Threat Intelligence - Find and enable incident enrichment playbooks for [Microsoft Defender Threat Intelligence](/defender/threat-intelligence/what-is-microsoft-defender-threat-intelligence-defender-ti) in the [Microsoft Sentinel GitHub repository](https://github.com/Azure/Azure-Sentinel/tree/master/Solutions/Microsoft%20Defender%20Threat%20Intelligence/Playbooks).-- See the [MDTI Tech Community blog post](https://aka.ms/sentinel-playbooks) for more information. +- See the [Defender Threat Intelligence Tech Community blog post](https://aka.ms/sentinel-playbooks) for more information. ### Recorded Future Security Intelligence Platform - Find and enable incident enrichment playbooks for [Recorded Future](https://www.recordedfuture.com/integrations/microsoft-azure/) in the [Microsoft Sentinel GitHub repository](https://github.com/Azure/Azure-Sentinel/tree/master/Playbooks). Search for subfolders beginning with `RecordedFuture_`.-- See the Recorded Future Logic App [connector documentation](/connectors/recordedfuturev2/).+- See the Recorded Future Logic Apps [connector documentation](/connectors/recordedfuturev2/). ### ReversingLabs TitaniumCloud - Find and enable incident enrichment playbooks for [ReversingLabs](https://www.reversinglabs.com/products/file-reputation-service) in the [Microsoft Sentinel GitHub repository](https://github.com/Azure/Azure-Sentinel/tree/master/Solutions/ReversingLabs/Playbooks/ReversingLabs-EnrichFileHash).-- See the ReversingLabs TitaniumCloud Logic App [connector documentation](/connectors/reversinglabstitaniu/).+- See the ReversingLabs TitaniumCloud Logic Apps [connector documentation](/connectors/reversinglabstitaniu/). -### RiskIQ Passive Total +### RiskIQ PassiveTotal -- Find and enable incident enrichment playbooks for [RiskIQ Passive Total](https://www.riskiq.com/products/passivetotal/) in the [Microsoft Sentinel GitHub repository](https://github.com/Azure/Azure-Sentinel/tree/master/Solutions/RiskIQ/Playbooks).+- Find and enable the incident enrichment playbooks for [RiskIQ PassiveTotal](https://www.riskiq.com/products/passivetotal/) in the [Microsoft Sentinel GitHub repository](https://github.com/Azure/Azure-Sentinel/tree/master/Solutions/RiskIQ/Playbooks). - See [more information](https://techcommunity.microsoft.com/t5/azure-sentinel/enrich-azure-sentinel-security-incidents-with-the-riskiq/ba-p/1534412) on working with RiskIQ playbooks.-- See the RiskIQ PassiveTotal Logic App [connector documentation](/connectors/riskiqpassivetotal/).+- See the RiskIQ PassiveTotal Logic Apps [connector documentation](/connectors/riskiqpassivetotal/). -### Virus Total +### VirusTotal -- Find and enable incident enrichment playbooks for [Virus Total](https://developers.virustotal.com/v3.0/reference) in the [Microsoft Sentinel GitHub repository](https://github.com/Azure/Azure-Sentinel/tree/master/Solutions/VirusTotal/Playbooks).
Search for subfolders beginning with `Get-VTURL`.-- See the Virus Total Logic App [connector documentation](/connectors/virustotal/).+- Find and enable incident enrichment playbooks for [VirusTotal](https://developers.virustotal.com/v3.0/reference) in the [Microsoft Sentinel GitHub repository](https://github.com/Azure/Azure-Sentinel/tree/master/Solutions/VirusTotal/Playbooks). Search for subfolders beginning with `Get-VTURL`. +- See the VirusTotal Logic Apps [connector documentation](/connectors/virustotal/). -## Next steps +## Related content -In this document, you learned how to connect your threat intelligence provider to Microsoft Sentinel. To learn more about Microsoft Sentinel, see the following articles. +In this article, you learned how to connect your threat intelligence provider to Microsoft Sentinel. To learn more about Microsoft Sentinel, see the following articles: - Learn how to [get visibility into your data and potential threats](get-visibility.md). - Get started [detecting threats with Microsoft Sentinel](./detect-threats-built-in.md). |
sentinel | Understand Threat Intelligence | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/understand-threat-intelligence.md | -Microsoft Sentinel is a cloud native Security Information and Event Management (SIEM) solution with the ability to quickly pull threat intelligence from numerous sources. +Microsoft Sentinel is a cloud-native security information and event management (SIEM) solution with the ability to quickly pull threat intelligence from numerous sources. [!INCLUDE [unified-soc-preview](includes/unified-soc-preview.md)] ## Introduction to threat intelligence -Cyber threat intelligence (CTI) is information describing existing or potential threats to systems and users. This intelligence takes many forms, from written reports detailing a particular threat actor's motivations, infrastructure, and techniques, to specific observations of IP addresses, domains, file hashes, and other artifacts associated with known cyber threats. CTI is used by organizations to provide essential context to unusual activity, so security personnel can quickly take action to protect their people, information, and assets. CTI can be sourced from many places, such as open-source data feeds, threat intelligence-sharing communities, commercial intelligence feeds, and local intelligence gathered in the course of security investigations within an organization. +Cyber threat intelligence (CTI) is information that describes existing or potential threats to systems and users. This intelligence takes many forms like written reports that detail a particular threat actor's motivations, infrastructure, and techniques. It can also be specific observations of IP addresses, domains, file hashes, and other artifacts associated with known cyber threats. -For SIEM solutions like Microsoft Sentinel, the most common forms of CTI are threat indicators, also known as Indicators of Compromise (IoC) or Indicators of Attack (IoA). Threat indicators are data that associate observed artifacts such as URLs, file hashes, or IP addresses with known threat activity such as phishing, botnets, or malware. This form of threat intelligence is often called *tactical threat intelligence* because it's applied to security products and automation in large scale to detect potential threats to an organization and protect against them. Use threat indicators in Microsoft Sentinel, to detect malicious activity observed in your environment and provide context to security investigators to inform response decisions. +Organizations use CTI to provide essential context to unusual activity so that security personnel can quickly take action to protect their people, information, and assets. You can source CTI from many places, such as: -Integrate threat intelligence (TI) into Microsoft Sentinel through the following activities: +- Open-source data feeds. +- Threat intelligence-sharing communities. +- Commercial intelligence feeds. +- Local intelligence gathered in the course of security investigations within an organization. -- **Import threat intelligence** into Microsoft Sentinel by enabling **data connectors** to various TI [platforms](connect-threat-intelligence-tip.md) and [feeds](connect-threat-intelligence-taxii.md).+For SIEM solutions like Microsoft Sentinel, the most common forms of CTI are threat indicators, which are also known as indicators of compromise (IOCs) or indicators of attack. 
Threat indicators are data that associate observed artifacts such as URLs, file hashes, or IP addresses with known threat activity such as phishing, botnets, or malware. This form of threat intelligence is often called *tactical threat intelligence*. It's applied to security products and automation at scale to detect potential threats to an organization and protect against them. -- **View and manage** the imported threat intelligence in **Logs** and in the **Threat Intelligence** blade of Microsoft Sentinel.+Use threat indicators in Microsoft Sentinel to detect malicious activity observed in your environment and provide context to security investigators to inform response decisions. -- **Detect threats** and generate security alerts and incidents using the built-in **Analytics** rule templates based on your imported threat intelligence.+You can integrate threat intelligence into Microsoft Sentinel through the following activities: -- **Visualize key information** about your imported threat intelligence in Microsoft Sentinel with the **Threat Intelligence workbook**.+- **Import threat intelligence** into Microsoft Sentinel by enabling *data connectors* to various threat intelligence [platforms](connect-threat-intelligence-tip.md) and [feeds](connect-threat-intelligence-taxii.md). +- **View and manage** the imported threat intelligence in **Logs** and on the **Threat Intelligence** pane of Microsoft Sentinel. +- **Detect threats** and generate security alerts and incidents by using the built-in **Analytics** rule templates based on your imported threat intelligence. +- **Visualize key information** about your imported threat intelligence in Microsoft Sentinel with the **Threat Intelligence** workbook. -Microsoft enriches all imported threat intelligence indicators with [GeoLocation and WhoIs data](#view-your-geolocation-and-whois-data-enrichments-public-preview), which is displayed together with other indicator details. +Microsoft enriches all imported threat intelligence indicators with [GeoLocation and WhoIs data](#view-your-geolocation-and-whois-data-enrichments-public-preview), which is displayed together with other indicator information. -Threat Intelligence also provides useful context within other Microsoft Sentinel experiences such as **Hunting** and **Notebooks**. For more information, see [Jupyter Notebooks in Microsoft Sentinel](https://techcommunity.microsoft.com/t5/azure-sentinel/using-threat-intelligence-in-your-jupyter-notebooks/ba-p/860239) and [Tutorial: Get started with Jupyter notebooks and MSTICPy in Microsoft Sentinel](notebook-get-started.md). +Threat intelligence also provides useful context within other Microsoft Sentinel experiences, such as hunting and notebooks. For more information, see [Jupyter notebooks in Microsoft Sentinel](https://techcommunity.microsoft.com/t5/azure-sentinel/using-threat-intelligence-in-your-jupyter-notebooks/ba-p/860239) and [Tutorial: Get started with Jupyter notebooks and MSTICPy in Microsoft Sentinel](notebook-get-started.md). [!INCLUDE [reference-to-feature-availability](includes/reference-to-feature-availability.md)] ## Import threat intelligence with data connectors -Just like all the other event data in Microsoft Sentinel, threat indicators are imported using data connectors. Here are the data connectors in Microsoft Sentinel provided specifically for threat indicators. +Threat indicators are imported by using data connectors, just like all the other event data in Microsoft Sentinel.
Here are the data connectors in Microsoft Sentinel provided specifically for threat indicators: -- **Microsoft Defender Threat Intelligence data connector** to ingest Microsoft's threat indicators-- **Premium Microsoft Defender Threat Intelligence data connector** to ingest MDTI's premium intelligence feed-- **Threat Intelligence - TAXII** for industry-standard STIX/TAXII feeds-- **Threat Intelligence upload indicators API** for integrated and curated TI feeds using a REST API to connect -- **Threat Intelligence Platform data connector** also connects TI feeds using a REST API, but is on the path for deprecation- -Use any of these data connectors in any combination together, depending on where your organization sources threat indicators. All three of these are available in **Content hub** as part of the **Threat Intelligence** solution. For more information about this solution, see the Azure Marketplace entry [Threat Intelligence](https://azuremarketplace.microsoft.com/marketplace/apps/azuresentinel.azure-sentinel-solution-threatintelligence-taxii?tab=Overview). +- **Microsoft Defender Threat Intelligence data connector**: Used to ingest Microsoft threat indicators. +- **Premium Defender Threat Intelligence data connector**: Used to ingest the Defender Threat Intelligence premium intelligence feed. +- **Threat Intelligence - TAXII**: Used for industry-standard STIX/TAXII feeds. +- **Threat Intelligence Upload Indicators API**: Used for integrated and curated threat intelligence feeds by using a REST API to connect. +- **Threat Intelligence Platform (TIP) data connector**: Used to connect threat intelligence feeds by using a REST API, but it's on the path for deprecation. -Also, see this catalog of [threat intelligence integrations](threat-intelligence-integration.md) available with Microsoft Sentinel. +Use any of these data connectors in any combination, depending on where your organization sources threat indicators. All of these connectors are available in the **Content hub** as part of the Threat Intelligence solution. For more information about this solution, see the Azure Marketplace entry [Threat Intelligence](https://azuremarketplace.microsoft.com/marketplace/apps/azuresentinel.azure-sentinel-solution-threatintelligence-taxii?tab=Overview). -### Add threat indicators to Microsoft Sentinel with the Microsoft Defender Threat Intelligence data connector +Also, see [this catalog of threat intelligence integrations](threat-intelligence-integration.md) that's available with Microsoft Sentinel. -Bring public, open source and high fidelity indicators of compromise (IOC) generated by Microsoft Defender Threat Intelligence (MDTI) into your Microsoft Sentinel workspace with the MDTI data connectors. With a simple one-click setup, use the TI from the standard and premium MDTI data connectors to monitor, alert and hunt. +### Add threat indicators to Microsoft Sentinel with the Defender Threat Intelligence data connector -The freely available MDTI threat analytics rule gives you a taste of what the premium MDTI data connector provides. However with matching analytics, only indicators that match the rule are actually ingested into your environment. The premium MDTI data connector brings the premium TI and allows analytics for more data sources with greater flexibility and understanding of that threat intelligence. Here's a table showing what to expect when you license and enable the premium MDTI data connector.
+Bring public, open-source, and high-fidelity IOCs generated by Defender Threat Intelligence into your Microsoft Sentinel workspace with the Defender Threat Intelligence data connectors. With a simple one-click setup, use the threat intelligence from the standard and premium Defender Threat Intelligence data connectors to monitor, alert, and hunt. ++The freely available Defender Threat Intelligence threat analytics rule gives you a sample of what the premium Defender Threat Intelligence data connector provides. However, with matching analytics, only indicators that match the rule are ingested into your environment. The premium Defender Threat Intelligence data connector brings the premium threat intelligence and allows analytics for more data sources with greater flexibility and understanding of that threat intelligence. Here's a table that shows what to expect when you license and enable the premium Defender Threat Intelligence data connector.

| Free | Premium |
|-|-|
-| Public indicators of compromise (IOCs) | |
+| Public IOCs | |
| Open-source intelligence (OSINT) | |
| | Microsoft IOCs |
| | Microsoft-enriched OSINT |

-For more information see the following articles: +For more information, see the following articles: + - To learn how to get a premium license and explore all the differences between the standard and premium versions, see the [Microsoft Defender Threat Intelligence product page](https://www.microsoft.com/security/business/siem-and-xdr/microsoft-defender-threat-intelligence).- To learn more about the free MDTI experience, see [Introducing MDTI free experience for Microsoft Defender XDR](https://techcommunity.microsoft.com/t5/microsoft-defender-threat/introducing-mdti-free-experience-for-microsoft-defender-xdr/ba-p/3976635).-- To learn how to enable the MDTI and the PMDTI data connectors, see [Enable MDTI data connector](connect-mdti-data-connector.md).+- To learn more about the free Defender Threat Intelligence experience, see [Introducing Defender Threat Intelligence free experience for Microsoft Defender XDR](https://techcommunity.microsoft.com/t5/microsoft-defender-threat/introducing-mdti-free-experience-for-microsoft-defender-xdr/ba-p/3976635). +- To learn how to enable the Defender Threat Intelligence and the premium Defender Threat Intelligence data connectors, see [Enable the Defender Threat Intelligence data connector](connect-mdti-data-connector.md). - To learn about matching analytics, see [Use matching analytics to detect threats](use-matching-analytics-to-detect-threats.md). ### Add threat indicators to Microsoft Sentinel with the Threat Intelligence Upload Indicators API data connector -Many organizations use threat intelligence platform (TIP) solutions to aggregate threat indicator feeds from various sources. From the aggregated feed, the data is curated to apply to security solutions such as network devices, EDR/XDR solutions, or SIEMs such as Microsoft Sentinel. The **Threat Intelligence Upload Indicators API** data connector allows you to use these solutions to import threat indicators into Microsoft Sentinel. +Many organizations use threat intelligence platform (TIP) solutions to aggregate threat indicator feeds from various sources. From the aggregated feed, the data is curated to apply to security solutions such as network devices, EDR/XDR solutions, or SIEMs such as Microsoft Sentinel. By using the Threat Intelligence Upload Indicators API data connector, you can use these solutions to import threat indicators into Microsoft Sentinel.
+ +This data connector uses a new API and offers the following improvements: -This data connector utilizes a new API and offers the following improvements: - The threat indicator fields are based on the STIX standardized format.-- The Microsoft Entra application only requires Microsoft Sentinel Contributor role.-- The API request endpoint is scoped at the workspace level and the Microsoft Entra application permissions required allow granular assignment at the workspace level.+- The Microsoft Entra application only requires the Microsoft Sentinel Contributor role. +- The API request endpoint is scoped at the workspace level. The required Microsoft Entra application permissions allow granular assignment at the workspace level. -For more information, see [Connect your threat intelligence platform using upload indicators API](connect-threat-intelligence-upload-api.md) +For more information, see [Connect your threat intelligence platform using the Upload Indicators API](connect-threat-intelligence-upload-api.md). -### Add threat indicators to Microsoft Sentinel with the Threat Intelligence Platforms data connector +### Add threat indicators to Microsoft Sentinel with the Threat Intelligence Platform data connector -Much like the existing upload indicators API data connector, the **Threat Intelligence Platform data connector** uses an API allowing your TIP or custom solution to send indicators into Microsoft Sentinel. However, this data connector is now on a path for deprecation. We recommend new solutions to take advantage of the optimizations the upload indicators API has to offer. +Much like the existing Upload Indicators API data connector, the Threat Intelligence Platform data connector uses an API that allows your TIP or custom solution to send indicators into Microsoft Sentinel. However, this data connector is now on a path for deprecation. We recommend that you take advantage of the optimizations that the Upload Indicators API offers. -The TIP data connector works with the [Microsoft Graph Security tiIndicators API](/graph/api/resources/tiindicator). It can also be used by any custom threat intelligence platform that communicates with the tiIndicators API to send indicators to Microsoft Sentinel (and to other Microsoft security solutions like Microsoft Defender XDR). +The TIP data connector works with the [Microsoft Graph Security tiIndicators API](/graph/api/resources/tiindicator). You can also use it with any custom TIP that communicates with the tiIndicators API to send indicators to Microsoft Sentinel (and to other Microsoft security solutions like Defender XDR). For more information on the TIP solutions integrated with Microsoft Sentinel, see [Integrated threat intelligence platform products](threat-intelligence-integration.md#integrated-threat-intelligence-platform-products). For more information, see [Connect your threat intelligence platform to Microsoft Sentinel](connect-threat-intelligence-tip.md). ### Add threat indicators to Microsoft Sentinel with the Threat Intelligence - TAXII data connector -The most widely adopted industry standard for the transmission of threat intelligence is a [combination of the STIX data format and the TAXII protocol](https://oasis-open.github.io/cti-documentation/). If your organization obtains threat indicators from solutions that support the current STIX/TAXII version (2.0 or 2.1), use the **Threat Intelligence - TAXII** data connector to bring your threat indicators into Microsoft Sentinel.
The Threat Intelligence - TAXII data connector enables a built-in TAXII client in Microsoft Sentinel to import threat intelligence from TAXII 2.x servers. +The most widely adopted industry standard for the transmission of threat intelligence is a [combination of the STIX data format and the TAXII protocol](https://oasis-open.github.io/cti-documentation/). If your organization obtains threat indicators from solutions that support the current STIX/TAXII version (2.0 or 2.1), use the Threat Intelligence - TAXII data connector to bring your threat indicators into Microsoft Sentinel. The Threat Intelligence - TAXII data connector enables a built-in TAXII client in Microsoft Sentinel to import threat intelligence from TAXII 2.x servers. -**To import STIX-formatted threat indicators to Microsoft Sentinel from a TAXII server**: +To import STIX-formatted threat indicators to Microsoft Sentinel from a TAXII server: -1. Obtain the TAXII server API Root and Collection ID --1. Enable the Threat Intelligence - TAXII data connector in Microsoft Sentinel +1. Obtain the TAXII server API root and collection ID. +1. Enable the Threat Intelligence - TAXII data connector in Microsoft Sentinel. For more information, see [Connect Microsoft Sentinel to STIX/TAXII threat intelligence feeds](connect-threat-intelligence-taxii.md). ## View and manage your threat indicators -View and manage your indicators in the **Threat Intelligence** page. Sort, filter, and search your imported threat indicators without even writing a Log Analytics query. +View and manage your indicators on the **Threat Intelligence** page. Sort, filter, and search your imported threat indicators without even writing a Log Analytics query. -Perform two of the most common threat intelligence tasks: indicator tagging and creating new indicators related to security investigations. Create or edit the threat indicators directly within the Threat Intelligence page when you only need to quickly manage a few. +Two of the most common threat intelligence tasks are indicator tagging and creating new indicators related to security investigations. Create or edit the threat indicators directly on the **Threat Intelligence** page when you only need to quickly manage a few. -Tagging threat indicators is an easy way to group them together to make them easier to find. Typically, you might apply tags to an indicator related to a particular incident, or representing threats from a particular known actor or well-known attack campaign. Once you search for the indicators you want to work with, tag them individually, or multi-select indicators and tag them all at once with one or more tags. Since tagging is free-form, a recommended practice is to create standard naming conventions for threat indicator tags. +Tagging threat indicators is an easy way to group them together to make them easier to find. Typically, you might apply tags to an indicator related to a particular incident, or if the indicator represents threats from a particular known actor or well-known attack campaign. After you search for the indicators that you want to work with, you can tag them individually. Multiselect indicators and tag them all at once with one or more tags. Because tagging is free-form, we recommend that you create standard naming conventions for threat indicator tags. -Validate your indicators and view your successfully imported threat indicators from the Microsoft Sentinel enabled log analytics workspace. 
The **ThreatIntelligenceIndicator** table under the **Microsoft Sentinel** schema is where all your Microsoft Sentinel threat indicators are stored. This table is the basis for threat intelligence queries performed by other Microsoft Sentinel features such as **Analytics** and **Workbooks**. +Validate your indicators and view your successfully imported threat indicators from the Microsoft Sentinel-enabled Log Analytics workspace. The `ThreatIntelligenceIndicator` table under the **Microsoft Sentinel** schema is where all your Microsoft Sentinel threat indicators are stored. This table is the basis for threat intelligence queries performed by other Microsoft Sentinel features, such as analytics and workbooks. -Here is an example view of a basic query for threat indicators. +Here's an example view of a basic query for threat indicators. -TI indicators are ingested into the **ThreatIntelligenceIndicator** table of your log analytics workspace as read-only. Anytime an indicator is updated, a new entry in the **ThreatIntelligenceIndicator** table is created. Only the most current indicator is displayed in the **Threat Intelligence** page however. Microsoft Sentinel de-duplicates indicators based on the **IndicatorId** and **SourceSystem** properties and chooses the indicator with the newest **TimeGenerated[UTC]**. +Threat intelligence indicators are ingested into the `ThreatIntelligenceIndicator` table of your Log Analytics workspace as read-only. Whenever an indicator is updated, a new entry in the `ThreatIntelligenceIndicator` table is created. Only the most current indicator appears on the **Threat Intelligence** page. Microsoft Sentinel deduplicates indicators based on the `IndicatorId` and `SourceSystem` properties and chooses the indicator with the newest `TimeGenerated[UTC]`. -The **IndicatorId** property is generated using the STIX indicator ID. When indicators are imported or created from non-STIX sources, the **IndicatorId** is generated by the *Source* and *Pattern* of the indicator. +The `IndicatorId` property is generated by using the STIX indicator ID. When indicators are imported or created from non-STIX sources, `IndicatorId` is generated by the source and pattern of the indicator. -For more details on viewing and managing your threat indicators, see [Work with threat indicators in Microsoft Sentinel](work-with-threat-indicators.md#view-your-threat-indicators-in-microsoft-sentinel). +For more information on viewing and managing your threat indicators, see [Work with threat indicators in Microsoft Sentinel](work-with-threat-indicators.md#view-your-threat-indicators-in-microsoft-sentinel). -### View your GeoLocation and WhoIs data enrichments (Public preview) +### View your GeoLocation and WhoIs data enrichments (public preview) -Microsoft enriches IP and domain indicators with extra GeoLocation and WhoIs data, providing more context for investigations where the selected indicator of compromise (IOC) is found. +Microsoft enriches IP and domain indicators with extra `GeoLocation` and `WhoIs` data to provide more context for investigations where the selected IOC is found. -View GeoLocation and WhoIs data on the **Threat Intelligence** pane for those types of threat indicators imported into Microsoft Sentinel. +View `GeoLocation` and `WhoIs` data on the **Threat Intelligence** pane for those types of threat indicators imported into Microsoft Sentinel. 
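The deduplication rule just described maps directly onto the query pattern most indicator queries start from: keep the newest copy of each indicator, then filter to active, unexpired ones. Here's a sketch that runs that pattern from Python with the `azure-monitor-query` library; the workspace ID is a placeholder:

```python
# pip install azure-monitor-query azure-identity
# Sketch of the basic indicator query: keep the newest copy of each
# indicator (deduplicated on IndicatorId and SourceSystem), then filter
# to active, unexpired ones.
from datetime import timedelta
from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

QUERY = r"""
ThreatIntelligenceIndicator
| summarize arg_max(TimeGenerated, *) by IndicatorId, SourceSystem
| where Active == true and ExpirationDateTime > now()
| project TimeGenerated, Description, NetworkIP, Url, SourceSystem
"""

client = LogsQueryClient(DefaultAzureCredential())
result = client.query_workspace("<workspace-id>",  # placeholder
                                QUERY,
                                timespan=timedelta(days=14))
for table in result.tables:
    for row in table.rows:
        print(row)
```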
-For example, use GeoLocation data to find details like *Organization* or *Country* for an IP indicator, and WhoIs data to find data like *Registrar* and *Record creation* data from a domain indicator. +For example, use `GeoLocation` data to find information like the organization or country for an IP indicator. Use `WhoIs` data to find data like registrar and record creation data from a domain indicator. ## Detect threats with threat indicator analytics -The most important use case for threat indicators in SIEM solutions like Microsoft Sentinel is to power analytics rules for threat detection. These indicator-based rules compare raw events from your data sources against your threat indicators to detect security threats in your organization. In Microsoft Sentinel **Analytics**, you create analytics rules that run on a schedule and generate security alerts. The rules are driven by queries, along with configurations that determine how often the rule should run, what kind of query results should generate security alerts and incidents, and optionally trigger an automated response. +The most important use case for threat indicators in SIEM solutions like Microsoft Sentinel is to power analytics rules for threat detection. These indicator-based rules compare raw events from your data sources against your threat indicators to detect security threats in your organization. In Microsoft Sentinel Analytics, you create analytics rules that run on a schedule and generate security alerts. The rules are driven by queries. Along with configurations, they determine how often the rule should run, what kind of query results should generate security alerts and incidents, and, optionally, when to trigger an automated response. -While you can always create new analytics rules from scratch, Microsoft Sentinel provides a set of built-in rule templates, created by Microsoft security engineers, to leverage your threat indicators. These built-in rule templates are based on the type of threat indicators (domain, email, file hash, IP address, or URL) and data source events you want to match. Each template lists the required sources needed for the rule to function. This makes it easy to determine if the necessary events are already imported in Microsoft Sentinel. +Although you can always create new analytics rules from scratch, Microsoft Sentinel provides a set of built-in rule templates, created by Microsoft security engineers, to take advantage of your threat indicators. These templates are based on the type of threat indicators (domain, email, file hash, IP address, or URL) and data source events that you want to match. Each template lists the required sources that are needed for the rule to function. This information makes it easy to determine if the necessary events are already imported in Microsoft Sentinel. -By default, when these built-in rules are triggered, an alert will be created. In Microsoft Sentinel, the alerts generated from analytics rules also generate security incidents which can be found in **Incidents** under **Threat Management** on the Microsoft Sentinel menu. Incidents are what your security operations teams will triage and investigate to determine the appropriate response actions. Find detailed information in this [Tutorial: Investigate incidents with Microsoft Sentinel](./investigate-cases.md). +By default, when these built-in rules are triggered, an alert is created. In Microsoft Sentinel, the alerts generated from analytics rules also generate security incidents. 
On the Microsoft Sentinel menu, under **Threat management**, select **Incidents**. Incidents are what your security operations teams triage and investigate to determine the appropriate response actions. For more information, see [Tutorial: Investigate incidents with Microsoft Sentinel](./investigate-cases.md). -For more details on using threat indicators in your analytics rules, see [Use threat intelligence to detect threats](use-threat-indicators-in-analytics-rules.md). +For more information on using threat indicators in your analytics rules, see [Use threat intelligence to detect threats](use-threat-indicators-in-analytics-rules.md). -Microsoft provides access to its threat intelligence through the **Microsoft Defender Threat Intelligence Analytics** rule. For more information on how to take advantage of this rule which generates high fidelity alerts and incidents, see [Use matching analytics to detect threats](use-matching-analytics-to-detect-threats.md) +Microsoft provides access to its threat intelligence through the Defender Threat Intelligence analytics rule. For more information on how to take advantage of this rule, which generates high-fidelity alerts and incidents, see [Use matching analytics to detect threats](use-matching-analytics-to-detect-threats.md). ## Workbooks provide insights about your threat intelligence -Workbooks provide powerful interactive dashboards that give you insights into all aspects of Microsoft Sentinel, and threat intelligence is no exception. Use the built-in **Threat Intelligence workbook** to visualize key information about your threat intelligence, and easily customize the workbook according to your business needs. Create new dashboards combining many different data sources so to visualize your data in unique ways. Since Microsoft Sentinel workbooks are based on Azure Monitor workbooks, there is already extensive documentation available, and many more templates. A great place to start is this article on how to [Create interactive reports with Azure Monitor workbooks](../azure-monitor/visualize/workbooks-overview.md). +Workbooks provide powerful interactive dashboards that give you insights into all aspects of Microsoft Sentinel, and threat intelligence is no exception. Use the built-in **Threat Intelligence** workbook to visualize key information about your threat intelligence. You can easily customize the workbook according to your business needs. Create new dashboards by combining many data sources to help you visualize your data in unique ways. ++Because Microsoft Sentinel workbooks are based on Azure Monitor workbooks, extensive documentation and many more templates are already available. For more information, see [Create interactive reports with Azure Monitor workbooks](../azure-monitor/visualize/workbooks-overview.md). -There is also a rich community of [Azure Monitor workbooks on GitHub](https://github.com/microsoft/Application-Insights-Workbooks) to download additional templates and contribute your own templates. +There's also a rich resource for [Azure Monitor workbooks on GitHub](https://github.com/microsoft/Application-Insights-Workbooks), where you can download more templates and contribute your own templates. -For more details on using and customizing the Threat Intelligence workbook, see [Work with threat indicators in Microsoft Sentinel](work-with-threat-indicators.md#workbooks-provide-insights-about-your-threat-intelligence). 
+For more information on using and customizing the **Threat Intelligence** workbook, see [Work with threat indicators in Microsoft Sentinel](work-with-threat-indicators.md#workbooks-provide-insights-about-your-threat-intelligence). -## Next steps +## Related content -In this document, you learned about the threat intelligence capabilities of Microsoft Sentinel, including the Threat Intelligence blade. For practical guidance on using Microsoft Sentinel's threat intelligence capabilities, see the following articles: +In this article, you learned about the threat intelligence capabilities of Microsoft Sentinel, including the **Threat Intelligence** pane. For practical guidance on using Microsoft Sentinel threat intelligence capabilities, see the following articles: - Connect Microsoft Sentinel to [STIX/TAXII threat intelligence feeds](./connect-threat-intelligence-taxii.md). - [Connect threat intelligence platforms](./connect-threat-intelligence-tip.md) to Microsoft Sentinel.-- See which [TIP platforms, TAXII feeds, and enrichments](threat-intelligence-integration.md) can be readily integrated with Microsoft Sentinel.+- See which [TIP platforms, TAXII feeds, and enrichments](threat-intelligence-integration.md) are readily integrated with Microsoft Sentinel. - [Work with threat indicators](work-with-threat-indicators.md) throughout the Microsoft Sentinel experience.-- Detect threats with [built-in](./detect-threats-built-in.md) or [custom](./detect-threats-custom.md) analytics rules in Microsoft Sentinel+- Detect threats with [built-in](./detect-threats-built-in.md) or [custom](./detect-threats-custom.md) analytics rules in Microsoft Sentinel. - [Investigate incidents](./investigate-cases.md) in Microsoft Sentinel. |
sentinel | Use Threat Indicators In Analytics Rules | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/use-threat-indicators-in-analytics-rules.md | appliesto: - Microsoft Sentinel in the Azure portal - Microsoft Sentinel in the Microsoft Defender portal -#Customer intent: As a SOC analyst, I want to connect the threat intelligence available to analytics rules so I can generate alerts and incidents. +#Customer intent: As an SOC analyst, I want to connect the threat intelligence available to analytics rules so that I can generate alerts and incidents. # Use threat indicators in analytics rules -Power your analytics rules with your threat indicators to automatically generate alerts based on the threat intelligence you've integrated. +Power your analytics rules with your threat indicators to automatically generate alerts based on the threat intelligence that you integrated. ## Prerequisites -- Threat indicators. These can be from threat intelligence feeds, threat intelligence platforms, bulk import from a flat file, or manual input.--- Data sources. Events from your data connectors must be flowing to your Sentinel workspace.--- An analytics rule of the format, "*TI map*..." that can map the threat indicators you have with the events you've ingested.-+- Threat indicators. These indicators can be from threat intelligence feeds, threat intelligence platforms, bulk import from a flat file, or manual input. +- Data sources. Events from your data connectors must be flowing to your Microsoft Sentinel workspace. +- An analytics rule of the format `TI map...`. It must use this format so that it can map the threat indicators you have with the events you ingested. ## Configure a rule to generate security alerts -Below is an example of how to enable and configure a rule to generate security alerts using the threat indicators you've imported into Microsoft Sentinel. For this example, use the rule template called **TI map IP entity to AzureActivity**. This rule will match any IP address-type threat indicator with all your Azure Activity events. When a match is found, an **alert** will be generated along with a corresponding **incident** for investigation by your security operations team. This particular analytics rule requires the **Azure Activity** data connector (to import your Azure subscription-level events), and one or both of the **Threat Intelligence** data connectors (to import threat indicators). This rule will also trigger from imported indicators or manually created ones. +The following example shows how to enable and configure a rule to generate security alerts by using the threat indicators that you imported into Microsoft Sentinel. For this example, use the rule template called **TI map IP entity to AzureActivity**. This rule matches any IP address-type threat indicator with all your Azure Activity events. When a match is found, an alert is generated along with a corresponding incident for investigation by your security operations team. -1. From the [Azure portal](https://portal.azure.com/), navigate to the **Microsoft Sentinel** service. +This particular analytics rule requires the Azure Activity data connector (to import your Azure subscription-level events). It also requires one or both of the Threat Intelligence data connectors (to import threat indicators). This rule also triggers from imported indicators or manually created ones. -1. 
Choose the **workspace** to which you imported threat indicators using the **Threat Intelligence** data connectors and Azure activity data using the **Azure Activity** data connector. +1. In the [Azure portal](https://portal.azure.com/), go to **Microsoft Sentinel**. -1. Select **Analytics** from the **Configuration** section of the Microsoft Sentinel menu. --1. Select the **Rule templates** tab to see the list of available analytics rule templates. +1. Choose the workspace to which you imported threat indicators by using the Threat Intelligence data connectors and Azure Activity data by using the Azure Activity data connector. -1. Find the rule titled **TI map IP entity to AzureActivity** and ensure you have connected all the required data sources as shown below. +1. On the Microsoft Sentinel menu, under the **Configuration** section, select **Analytics**. - :::image type="content" source="media/work-with-threat-indicators/threat-intel-required-data-sources.png" alt-text="Screenshot of required data sources for the TI map IP entity to AzureActivity analytics rule."::: +1. Select the **Rule templates** tab to see the list of available analytics rule templates. -1. Select the **TI map IP entity to AzureActivity** rule and then select **Create rule** to open a rule configuration wizard. Configure the settings in the wizard and then select **Next: Set rule logic >**. +1. Find the rule titled **TI map IP entity to AzureActivity**, and ensure that you connected all the required data sources. - :::image type="content" source="media/work-with-threat-indicators/threat-intel-create-analytics-rule.png" alt-text="Screenshot of the create analytics rule configuration wizard."::: + :::image type="content" source="media/work-with-threat-indicators/threat-intel-required-data-sources.png" alt-text="Screenshot that shows required data sources for the TI map IP entity to AzureActivity analytics rule."::: -1. The rule logic portion of the wizard has been pre-populated with the following items: +1. Select the **TI map IP entity to AzureActivity** rule. Then select **Create rule** to open a rule configuration wizard. Configure the settings in the wizard, and then select **Next: Set rule logic >**. - - The query that will be used in the rule. + :::image type="content" source="media/work-with-threat-indicators/threat-intel-create-analytics-rule.png" alt-text="Screenshot that shows the Create analytics rule configuration wizard."::: - - Entity mappings, which tell Microsoft Sentinel how to recognize entities like Accounts, IP addresses, and URLs, so that **incidents** and **investigations** understand how to work with the data in any security alerts generated by this rule. +1. The rule logic portion of the wizard is prepopulated with the following items: + - The query that's used in the rule. + - Entity mappings, which tell Microsoft Sentinel how to recognize entities like accounts, IP addresses, and URLs. Incidents and investigations can then understand how to work with the data in any security alerts that were generated by this rule. - The schedule to run this rule.- - The number of query results needed before a security alert is generated. The default settings in the template are: - Run once an hour.+ - Match any IP address threat indicators from the `ThreatIntelligenceIndicator` table with any IP address found in the last one hour of events from the `AzureActivity` table. + - Generate a security alert if the query results are greater than zero to indicate that matches were found. 
+ - Ensure that the rule is enabled. - - Match any IP address threat indicators from the **ThreatIntelligenceIndicator** table with any IP address found in the last one hour of events from the **AzureActivity** table. -- - Generate a security alert if the query results are greater than zero, meaning if any matches are found. - - - The rule is enabled. -- You can leave the default settings or change them to meet your requirements, and you can define incident-generation settings on the **Incident settings** tab. For more information, see [Create custom analytics rules to detect threats](detect-threats-custom.md). When you are finished, select the **Automated response** tab. + You can leave the default settings or change them to meet your requirements. You can define incident-generation settings on the **Incident settings** tab. For more information, see [Create custom analytics rules to detect threats](detect-threats-custom.md). When you're finished, select the **Automated response** tab. -1. Configure any automation you'd like to trigger when a security alert is generated from this analytics rule. Automation in Microsoft Sentinel is done using combinations of **automation rules** and **playbooks** powered by Azure Logic Apps. To learn more, see this [Tutorial: Use playbooks with automation rules in Microsoft Sentinel](./tutorial-respond-threats-playbook.md). When finished, select the **Next: Review >** button to continue. +1. Configure any automation you want to trigger when a security alert is generated from this analytics rule. Automation in Microsoft Sentinel uses combinations of automation rules and playbooks powered by Azure Logic Apps. To learn more, see [Tutorial: Use playbooks with automation rules in Microsoft Sentinel](./tutorial-respond-threats-playbook.md). When you're finished, select **Next: Review >** to continue. -1. When you see the message that the rule validation has passed, select the **Create** button and you are finished. +1. When you see a message stating that the rule validation passed, select **Create**. ## Review your rules -Find your enabled rules in the **Active rules** tab of the **Analytics** section of Microsoft Sentinel. Edit, enable, disable, duplicate, or delete the active rule from there. The new rule runs immediately upon activation, and then runs on its defined schedule. +Find your enabled rules on the **Active rules** tab of the **Analytics** section of Microsoft Sentinel. Edit, enable, disable, duplicate, or delete the active rule from there. The new rule runs immediately upon activation and then runs on its defined schedule. -According to the default settings, each time the rule runs on its schedule, any results found will generate a security alert. Security alerts in Microsoft Sentinel can be viewed in the **Logs** section of Microsoft Sentinel, in the **SecurityAlert** table under the **Microsoft Sentinel** group. +According to the default settings, each time the rule runs on its schedule, any results that are found generate a security alert. To see security alerts in Microsoft Sentinel in the **Logs** section of Microsoft Sentinel, under the **Microsoft Sentinel** group, see the `SecurityAlert` table. -In Microsoft Sentinel, the alerts generated from analytics rules also generate security incidents, which can be found in **Incidents** under **Threat Management** on the Microsoft Sentinel menu. Incidents are what your security operations teams will triage and investigate to determine the appropriate response actions. 
You can find detailed information in this [Tutorial: Investigate incidents with Microsoft Sentinel](./investigate-cases.md). +In Microsoft Sentinel, the alerts generated from analytics rules also generate security incidents. On the Microsoft Sentinel menu, under **Threat Management**, select **Incidents**. Incidents are what your security operations teams triage and investigate to determine the appropriate response actions. For more information, see [Tutorial: Investigate incidents with Microsoft Sentinel](./investigate-cases.md). > [!NOTE]-> Since analytic rules constrain lookups beyond 14 days, Microsoft Sentinel refreshes indicators every 12 days to make sure they are available for matching purposes through the analytic rules. +> Because analytic rules constrain lookups beyond 14 days, Microsoft Sentinel refreshes indicators every 12 days to make sure they're available for matching purposes through the analytic rules. ## Related content |
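To preview the kind of matching that the **TI map IP entity to AzureActivity** template performs before you enable the rule, you can run an approximation of its join directly against the workspace. The following sketch assumes the Az.OperationalInsights PowerShell module and illustrative resource names; the exact query text shipped in the template may differ from this approximation.

```powershell
# Hedged sketch: approximate the TI map IP entity to AzureActivity matching logic.
# Assumes Az.OperationalInsights is installed and you're signed in with Connect-AzAccount.
$workspace = Get-AzOperationalInsightsWorkspace -ResourceGroupName "my-rg" -Name "my-sentinel-workspace"

$kql = @'
ThreatIntelligenceIndicator
| where TimeGenerated >= ago(14d)
| where Active == true and isnotempty(NetworkIP)
| join kind=innerunique (
    AzureActivity
    | where TimeGenerated >= ago(1h)
    | where isnotempty(CallerIpAddress)
) on $left.NetworkIP == $right.CallerIpAddress
| project TimeGenerated, Description, NetworkIP, OperationNameValue, Caller
'@

# Any rows returned correspond to matches that would generate an alert
# (query results greater than zero).
(Invoke-AzOperationalInsightsQuery -WorkspaceId $workspace.CustomerId -Query $kql).Results
```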
sentinel | Whats New | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/whats-new.md | If you've onboarded Microsoft Sentinel to the Microsoft unified security operati Your premium license for Microsoft Defender Threat Intelligence (MDTI) now unlocks the ability to ingest all premium indicators directly into your workspace. The premium MDTI data connector adds more to your hunting and research capabilities within Microsoft Sentinel. -For more information, see [Understand threat intelligence](understand-threat-intelligence.md#add-threat-indicators-to-microsoft-sentinel-with-the-microsoft-defender-threat-intelligence-data-connector). +For more information, see [Understand threat intelligence](understand-threat-intelligence.md#add-threat-indicators-to-microsoft-sentinel-with-the-defender-threat-intelligence-data-connector). ### Unified AMA-based connectors for syslog ingestion |
storage | Access Tiers Online Manage | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/access-tiers-online-manage.md | +- By setting the default online access tier for the storage account. Blobs in the account inherit this access tier unless you explicitly override the setting for an individual blob. - By explicitly setting a blob's tier on upload. You can create a blob in the hot, cool, cold, or archive tier. - By changing an existing blob's tier with a Set Blob Tier operation. Typically, you would use this operation to move from a hotter tier to a cooler one. - By copying a blob with a Copy Blob operation. Typically, you would use this operation to move from a cooler tier to a hotter one. To set the default access tier for a storage account at create time in the Azure 2. Fill out the **Basics** tab. -3. On the **Advanced** tab, under **Blob storage**, set the **Access tier** to either *Hot* or *Cool*. The default setting is *Hot*. +3. On the **Advanced** tab, under **Blob storage**, set the **Access tier** to either *Hot*, *Cool*, or *Cold*. The default setting is *Hot*. 4. Select **Review + Create** to validate your settings and create your storage account. To update the default access tier for an existing storage account in the Azure p 2. Under **Settings**, select **Configuration**. -3. Locate the **Blob access tier (default)** setting, and select either *Hot* or *Cool*. The default setting is *Hot*, if you have not previously set this property. +3. Locate the **Blob access tier (default)** setting, and select either *Hot*, *Cool*, or *Cold*. The default setting is *Hot*, if you have not previously set this property. 4. Save your changes. A blob that doesn't have an explicitly assigned tier infers its tier from the de #### [Portal](#tab/azure-portal) -If a blob's access tier is inferred from the default account access tier setting, then the Azure portal displays the access tier as **Hot (inferred)** or **Cool (inferred)**. +If a blob's access tier is inferred from the default account access tier setting, then the Azure portal displays the access tier as **Hot (inferred)**, **Cool (inferred)**, or **Cold (inferred)**. :::image type="content" source="media/access-tiers-online-manage/default-access-tier-portal.png" alt-text="Screenshot showing blobs with the default access tier in the Azure portal."::: |
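The tier operations this change documents can also be scripted. A minimal sketch using the Az.Storage module, with illustrative account and container names; the `SetAccessTier` call issues the Set Blob Tier operation described above:

```powershell
# Assumes Az.Storage is installed and the signed-in identity has blob data access.
$ctx = New-AzStorageContext -StorageAccountName "mystorageacct" -UseConnectedAccount

# Explicitly set a blob's tier on upload (cool, in this example).
Set-AzStorageBlobContent -Container "sample-container" -File "C:\data\report.csv" `
    -Blob "report.csv" -StandardBlobTier Cool -Context $ctx

# Change an existing blob's tier with a Set Blob Tier operation (cool to cold).
$blob = Get-AzStorageBlob -Container "sample-container" -Blob "report.csv" -Context $ctx
$blob.BlobClient.SetAccessTier("Cold")
```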
storage | Access Tiers Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/access-tiers-overview.md | description: Azure storage offers different access tiers so that you can store y Previously updated : 05/01/2024 Last updated : 09/03/2024 Storage accounts have a default access tier setting that indicates the online ti The default access tier for a new general-purpose v2 storage account is set to the hot tier by default. You can change the default access tier setting when you create a storage account or after it's created. If you don't change this setting on the storage account or explicitly set the tier when uploading a blob, then a new blob is uploaded to the hot tier by default. -A blob that doesn't have an explicitly assigned tier infers its tier from the default account access tier setting. If a blob's access tier is inferred from the default account access tier setting, then the Azure portal displays the access tier as **Hot (inferred)** or **Cool (inferred)**. +A blob that doesn't have an explicitly assigned tier infers its tier from the default account access tier setting. If a blob's access tier is inferred from the default account access tier setting, then the Azure portal displays the access tier as **Hot (inferred)**, **Cool (inferred)**, or **Cold (inferred)**. Changing the default access tier setting for a storage account applies to all blobs in the account for which an access tier hasn't been explicitly set. If you toggle the default access tier setting to a cooler tier in a general-purpose v2 account, then you're charged for write operations (per 10,000) for all blobs for which the access tier is inferred. You're charged for both read operations (per 10,000) and data retrieval (per GB) if you toggle to a warmer tier in a general-purpose v2 account. When you create a legacy Blob Storage account, you must specify the default access tier setting as hot or cool at create time. There's no charge for changing the default account access tier setting to a cooler tier in a legacy Blob Storage account. You're charged for both read operations (per 10,000) and data retrieval (per GB) if you toggle to a warmer tier in a Blob Storage account. Microsoft recommends using general-purpose v2 storage accounts rather than Blob Storage accounts when possible. > [!NOTE]-> The cold tier and the archive tier are not supported as the default access tier for a storage account. +> The archive tier is not supported as the default access tier for a storage account. ## Setting or changing a blob's tier Changing the access tier for a blob when versioning is enabled, or if the blob h ## Cold tier -### Limitations and known issues --- The default access tier setting of the account can't be set to cold tier.--### Required versions of REST, SDKs, and tools +The cold tier requires the following minimum versions of REST, SDKs, and tools: | Environment | Minimum version | ||| |
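As a sketch of the account-level setting this article describes, the default access tier can be set when the account is created or toggled afterward. Resource names are illustrative, and setting *Cold* as the default requires tooling that has picked up the change this commit documents, so the example uses *Cool*:

```powershell
# Assumes the Az.Storage module; resource names are placeholders.
# Set the default online access tier when creating a general-purpose v2 account.
New-AzStorageAccount -ResourceGroupName "my-rg" -Name "mystorageacct" -Location "eastus" `
    -SkuName Standard_LRS -Kind StorageV2 -AccessTier Cool

# Toggle the default tier later. Blobs with an inferred tier follow the new default,
# and the read/write operation charges described above apply.
Set-AzStorageAccount -ResourceGroupName "my-rg" -Name "mystorageacct" -AccessTier Hot
```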
storage | Secure File Transfer Protocol Known Issues | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/secure-file-transfer-protocol-known-issues.md | The following clients are known to be incompatible with SFTP for Azure Blob Stor - Five9 - Kemp-- Mule - paramiko 1.16.0 - SSH.NET 2016.1.0 |
storage | Secure File Transfer Protocol Support | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/secure-file-transfer-protocol-support.md | Different protocols are supported by the hierarchical namespace. SFTP is one of SFTP clients can't be authorized by using Microsoft Entra identities. Instead, SFTP utilizes a new form of identity management called _local users_. -Local users must use either a password or a Secure Shell (SSH) private key credential for authentication. You can have a maximum of 2,000 local users for a storage account. +Local users must use either a password or a Secure Shell (SSH) private key credential for authentication. You can have a maximum of 8,000 local users for a storage account. To set up access permissions, you create a local user, and choose authentication methods. Then, for each container in your account, you can specify the level of access you want to give that user. The following clients have compatible algorithm support with SFTP for Azure Blob - libssh 0.9.5+ - Maverick Legacy 1.7.15+ - Moveit 12.7+- Mule 2.1.2+ - OpenSSH 7.4+ - paramiko 2.8.1+ - phpseclib 1.0.13+ |
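To illustrate the local users model described in this change, the following sketch provisions a single local user with an SSH public key and read/write/delete access to one container. It assumes a recent Az.Storage module that includes the local user cmdlets and that SFTP support is already enabled on the account; the names and key value are placeholders.

```powershell
# Permission scope: read (r), write (w), delete (d) on one blob container.
$scope = New-AzStorageLocalUserPermissionScope -Permission rwd -Service blob -ResourceName "sample-container"

# SSH public key credential for the local user (placeholder key material).
$sshKey = New-AzStorageLocalUserSshPublicKey -Key "ssh-rsa AAAAB3NzaC1yc2E... user@example.com" -Description "contosouser key"

# Create or update the local user with a home directory and the scoped permissions.
Set-AzStorageLocalUser -ResourceGroupName "my-rg" -StorageAccountName "mystorageacct" `
    -UserName "contosouser" -HomeDirectory "sample-container" `
    -SshAuthorizedKey $sshKey -PermissionScope $scope
```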
storage | Storage Introduction | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-introduction.md | |
storage | File Sync Release Notes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/file-sync/file-sync-release-notes.md | The following Azure File Sync agent versions are supported: | Milestone | Agent version number | Release date | Status | |-|-|--||+| V19 Release - [KB5040924](https://support.microsoft.com/topic/e44fc142-8a24-4dea-9bf9-6e884b4b342e)| 19.1.0.0 | September 3, 2024 | Supported - Flighting | | V18.2 Release - [KB5023059](https://support.microsoft.com/topic/613d00dc-998b-4885-86b9-73750195baf5)| 18.2.0.0 | July 9, 2024 | Supported | | V18.1 Release - [KB5023057](https://support.microsoft.com/topic/961af341-40f2-4e95-94c4-f2854add60a5)| 18.1.0.0 | June 11, 2024 | Supported - Security Update | | V17.3 Release - [KB5039814](https://support.microsoft.com/topic/97bd6ab9-fa4c-42c0-a510-cdb1d23825bf)| 17.3.0.0 | June 11, 2024 | Supported - Security Update | Windows Server 2012 R2 reached [end of support](/lifecycle/announcements/windows Perform one of the following options for your Windows Server 2012 R2 servers prior to v17 agent expiration on March 4, 2025: -- Option #1: Perform an [in-place upgrade](/windows-server/get-started/perform-in-place-upgrade) to a [supported operation system version](file-sync-planning.md#operating-system-requirements). Once the in-place upgrade completes, uninstall the Azure File Sync agent for Windows Server 2012 R2, restart the server, and then install the agent for the new server operating system (Windows Server 2016, Windows Server 2019, or Windows Server 2022).+- Option #1: Perform an [in-place upgrade](/windows-server/get-started/perform-in-place-upgrade) to a [supported operating system version](file-sync-planning.md#operating-system-requirements). Once the in-place upgrade completes, uninstall the Azure File Sync agent for Windows Server 2012 R2, restart the server, and then install the agent for the new server operating system (Windows Server 2016, Windows Server 2019, Windows Server 2022, or Windows Server 2025). - Option #2: Deploy a new Azure File Sync server that's running a [supported operating system version](file-sync-planning.md#operating-system-requirements) to replace your Windows 2012 R2 servers. For guidance, see [Replace an Azure File Sync server](file-sync-replace-server.md). >[!NOTE] >Azure File Sync agent v17.3 is the last agent release currently planned for Windows Server 2012 R2. To continue to receive product improvements and bug fixes, upgrade your servers to Windows Server 2016 or later. -## Version 18.2.0.0 +## Version 19.1.0.0 +The following release notes are for Azure File Sync version 19.1.0.0 (released September 3, 2024). This release contains improvements for the Azure File Sync service and agent. ++### Improvements and issues that are fixed +**Faster server provisioning and improved disaster recovery for Azure File Sync server endpoints** ++We have reduced the time it takes for the new server endpoint to be ready to use. Prior to the v19 release, when a new server endpoint was provisioned, it could take hours and sometimes days for the server to be ready to use. With our latest improvements, we've substantially shortened this duration, ensuring a faster setup process. ++The improvement applies to the following scenarios, when the server endpoint location is empty (no files or directories): +- Creating the first server endpoint of a new sync topology after data is copied to the Azure file share. +- Adding a new empty server endpoint to an existing sync topology. 
++This improvement will be gradually enabled in all regions within the next month. Once the improvement is enabled in your region, you will see a **Provisioning steps** tab in the portal after server endpoint creation, which allows you to easily determine when the server endpoint is ready for use. For more information, see the [Create an Azure File Sync server endpoint](file-sync-server-endpoint-create.md#provisioning-steps) documentation. ++**Preview: Managed Identity support for Azure File Sync service and servers** +Azure File Sync support for managed identities eliminates the need for shared keys as a method of authentication by utilizing a system-assigned managed identity provided by Microsoft Entra ID. ++When you enable this configuration, the system-assigned managed identities will be used for the following scenarios: +- Storage Sync Service authentication to Azure file share +- Registered server authentication to Azure file share +- Registered server authentication to Storage Sync Service ++Azure File Sync support for system-assigned managed identities will be in preview soon. More details will be provided once this feature is enabled in all regions. ++**Sync performance improvements** +Sync performance has significantly improved for file share migrations and for changes that affect only metadata (for example, ACL changes). Performance numbers will be posted when they are available. ++**Miscellaneous reliability and telemetry improvements for cloud tiering and sync** ++### Evaluation Tool +Before deploying Azure File Sync, you should evaluate whether it's compatible with your system using the Azure File Sync evaluation tool. This tool is an Azure PowerShell cmdlet that checks for potential issues with your file system and dataset, such as an unsupported OS version. For installation and usage instructions, see the [Evaluation Tool](file-sync-planning.md#evaluation-cmdlet) section in the planning guide. ++### Agent installation and server configuration +For more information on how to install and configure the Azure File Sync agent with Windows Server, see [Planning for an Azure File Sync deployment](file-sync-planning.md) and [How to deploy Azure File Sync](file-sync-deployment-guide.md). ++- The agent installation requires a restart for servers that have an existing Azure File Sync agent installation if the agent version is older than 18.2.0.0. +- The agent installation package must be installed with elevated (admin) permissions. +- The agent isn't supported on the Nano Server deployment option. +- The agent is supported only on Windows Server 2016, Windows Server 2019, Windows Server 2022, and Windows Server 2025. +- The agent installation package is for a specific operating system version. If a server with an Azure File Sync agent installed is upgraded to a newer operating system version, the existing agent must be uninstalled. Restart the server and then install the agent for the new server operating system (Windows Server 2016, Windows Server 2019, Windows Server 2022, or Windows Server 2025). +- The agent requires at least 2 GiB of memory. If the server is running in a virtual machine with dynamic memory enabled, the VM should be configured with a minimum 2048 MiB of memory. See [Recommended system resources](file-sync-planning.md#recommended-system-resources) for more information. +- The agent uses TLS 1.2 or 1.3 (Windows Server 2022 or newer) by default and TLS 1.0 and 1.1 are not supported. 
+- The Storage Sync Agent (FileSyncSvc) service doesn't support server endpoints located on a volume that has the system volume information (SVI) directory compressed. If the SVI directory is compressed, the Storage Sync Agent (FileSyncSvc) service will fail to start. ++### Interoperability +- Antivirus, backup, and other applications that access tiered files can cause undesirable recall unless they respect the offline attribute and skip reading the content of those files. For more information, see [Troubleshoot Azure File Sync](/troubleshoot/azure/azure-storage/file-sync-troubleshoot?toc=/azure/storage/file-sync/toc.json). +- File Server Resource Manager (FSRM) file screens can cause endless sync failures when files are blocked because of the file screen. +- Running sysprep on a server that has the Azure File Sync agent installed isn't supported and can lead to unexpected results. The Azure File Sync agent should be installed after deploying the server image and completing sysprep mini-setup. ++### Sync limitations +The following items don't sync, but the rest of the system continues to operate normally: +- Azure File Sync supports all characters that are supported by the [NTFS file system](/windows/win32/fileio/naming-a-file) except invalid surrogate pairs. See [Troubleshooting guide](/troubleshoot/azure/azure-storage/file-sync-troubleshoot-sync-errors?toc=/azure/storage/file-sync/toc.json#handling-unsupported-characters) for more information. +- Paths that are longer than 2,048 characters. +- The system access control list (SACL) portion of a security descriptor that's used for auditing. +- Extended attributes. +- Alternate data streams. +- Reparse points. +- Hard links. +- Compression (if it's set on a server file) isn't preserved when changes sync to that file from other endpoints. +- Any file that's encrypted with EFS (or other user mode encryption) that prevents the service from reading the data. ++> [!NOTE] +> Azure File Sync always encrypts data in transit. Data is always encrypted at rest in Azure. ++### Server endpoint +- A server endpoint can be created only on an NTFS volume. ReFS, FAT, FAT32, and other file systems aren't currently supported by Azure File Sync. +- Cloud tiering isn't supported on the system volume. To create a server endpoint on the system volume, disable cloud tiering when creating the server endpoint. +- Failover Clustering is supported only with clustered disks, but not with Cluster Shared Volumes (CSVs). +- A server endpoint can't be nested. It can coexist on the same volume in parallel with another endpoint. +- Don't store an OS or application paging file within a server endpoint location. ++### Cloud endpoint +- Azure File Sync supports making changes to the Azure file share directly. However, any changes made on the Azure file share first need to be discovered by an Azure File Sync change detection job. A change detection job is initiated for a cloud endpoint once every 24 hours. To immediately sync files that are changed in the Azure file share, use the [Invoke-AzStorageSyncChangeDetection](/powershell/module/az.storagesync/invoke-azstoragesyncchangedetection) PowerShell cmdlet to manually initiate the detection of changes in the Azure file share. +- The storage sync service and/or storage account can be moved to a different resource group, subscription, or Microsoft Entra (formerly Azure AD) tenant. 
After moving the storage sync service or storage account, you need to give the Microsoft.StorageSync application access to the storage account (see [Ensure Azure File Sync has access to the storage account](/troubleshoot/azure/azure-storage/file-sync-troubleshoot-sync-errors?toc=/azure/storage/file-sync/toc.json#troubleshoot-rbac)). ++> [!NOTE] +> When creating the cloud endpoint, the storage sync service and storage account must be in the same Microsoft Entra ID tenant. After you create the cloud endpoint, you can move the storage sync service and storage account to different Microsoft Entra ID tenants. ++### Cloud tiering +- If a tiered file is copied to another location by using Robocopy, the resulting file isn't tiered. The offline attribute might be set because Robocopy incorrectly includes that attribute in copy operations. +- When copying files using Robocopy, use the /MIR option to preserve file timestamps. This will ensure older files are tiered sooner than recently accessed files. ++## Version 18.2.0.0 The following release notes are for Azure File Sync version 18.2.0.0 (released July 9, 2024). This release contains improvements for the Azure File Sync agent. These notes are in addition to the release notes listed for version 18.0.0.0 and 18.1.0.0. ### Improvements and issues that are fixed- - Rollup update for Azure File Sync agent [v18](#version-18000) and [v18.1](#version-18100-security-update) releases. - This release also includes sync reliability improvements. ## Version 18.1.0.0 (Security Update)- The following release notes are for Azure File Sync version 18.1.0.0 (released June 11, 2024). This release contains a security update for servers that have v18 agent version installed. These notes are in addition to the release notes listed for version 18.0.0.0. ### Improvements and issues that are fixed--- Fixes an issue that might allow unauthorized users to delete files in locations they don’t have access. This is a security-only update. For more information about this vulnerability, see [CVE-2024-35253](https://msrc.microsoft.com/update-guide/en-US/advisory/CVE-2024-35253).+Fixes an issue that might allow unauthorized users to delete files in locations they don’t have access. This is a security-only update. For more information about this vulnerability, see [CVE-2024-35253](https://msrc.microsoft.com/update-guide/en-US/advisory/CVE-2024-35253). ## Version 17.3.0.0 (Security Update)- The following release notes are for Azure File Sync version 17.3.0.0 (released June 11, 2024). This release contains a security update for servers that have v16.x or v17.x agent versions installed. These notes are in addition to the release notes listed for version 17.0.0.0. ### Improvements and issues that are fixed--- Fixes an issue that might allow unauthorized users to delete files in locations they don’t have access. This is a security-only update. For more information about this vulnerability, see [CVE-2024-35253](https://msrc.microsoft.com/update-guide/en-US/advisory/CVE-2024-35253).+Fixes an issue that might allow unauthorized users to delete files in locations they don’t have access. This is a security-only update. For more information about this vulnerability, see [CVE-2024-35253](https://msrc.microsoft.com/update-guide/en-US/advisory/CVE-2024-35253). ## Version 18.0.0.0- The following release notes are for Azure File Sync version 18.0.0.0 (released May 8, 2024). This release contains improvements for the Azure File Sync service and agent. 
### Improvements and issues that are fixed+**Faster server provisioning and improved disaster recovery for Azure File Sync server endpoints** +We're reducing the time it takes for the new server endpoint to be ready to use. When a new server endpoint is provisioned, it could take hours and sometimes days for the server to be ready to use. With our latest improvements, we've substantially shortened this duration for a more efficient setup process. -- Faster server provisioning and improved disaster recovery for Azure File Sync server endpoints.- - We're reducing the time it takes for the new server endpoint to be ready to use. When a new server endpoint is provisioned, it could take hours and sometime days for the server to be ready to use. With our latest improvements, we've substantially shortened this duration for a more efficient setup process. - - The improvement applies to the following scenarios, when the server endpoint location is empty (no files or directories): - - Creating the first server endpoint of new sync topology after data is copied to the Azure File Share. - - Adding a new empty server endpoint to an existing sync topology. - - How to get started: Sign up for the public preview [here](https://forms.office.com/r/gCLr1PDZKL). -- Sync performance improvements- - Sync upload performance has improved, and performance numbers will be posted when they are available. This improvement will mainly benefit file share migrations (initial upload) and high churn events on the server in which a large number of files need to be uploaded, for example ACL changes. -- Miscellaneous reliability and telemetry improvements for cloud tiering and sync+The improvement applies to the following scenarios, when the server endpoint location is empty (no files or directories): +- Creating the first server endpoint of a new sync topology after data is copied to the Azure File Share. +- Adding a new empty server endpoint to an existing sync topology. How to get started: Sign up for the public preview [here](https://forms.office.com/r/gCLr1PDZKL). ++**Sync performance improvements** +Sync upload performance has improved, and performance numbers will be posted when they are available. This improvement will mainly benefit file share migrations (initial upload) and high churn events on the server in which a large number of files need to be uploaded, for example ACL changes. ++**Miscellaneous reliability and telemetry improvements for cloud tiering and sync** +### Evaluation Tool Before deploying Azure File Sync, you should evaluate whether it's compatible with your system using the Azure File Sync evaluation tool. This tool is an Azure PowerShell cmdlet that checks for potential issues with your file system and dataset, such as unsupported OS version. For installation and usage instructions, see [Evaluation Tool](file-sync-planning.md#evaluation-cmdlet) section in the planning guide. ### Agent installation and server configuration- For more information on how to install and configure the Azure File Sync agent with Windows Server, see [Planning for an Azure File Sync deployment](file-sync-planning.md) and [How to deploy Azure File Sync](file-sync-deployment-guide.md). - The agent installation package must be installed with elevated (admin) permissions. For more information on how to install and configure the Azure File Sync agent w - All supported Azure File Sync agent versions use TLS 1.2 by default and TLS 1.0 and 1.1 are not supported. 
Starting with v18 agent version TLS 1.3 will be supported for Windows Server 2022. ### Interoperability- - Antivirus, backup, and other applications that access tiered files can cause undesirable recall unless they respect the offline attribute and skip reading the content of those files. For more information, see [Troubleshoot Azure File Sync](/troubleshoot/azure/azure-storage/file-sync-troubleshoot?toc=/azure/storage/file-sync/toc.json). - File Server Resource Manager (FSRM) file screens can cause endless sync failures when files are blocked because of the file screen. - Running sysprep on a server that has the Azure File Sync agent installed isn't supported and can lead to unexpected results. The Azure File Sync agent should be installed after deploying the server image and completing sysprep mini-setup. ### Sync limitations- The following items don't sync, but the rest of the system continues to operate normally: - Azure File Sync v17 agent and later supports all characters that are supported by the [NTFS file system](/windows/win32/fileio/naming-a-file) except invalid surrogate pairs. See [Troubleshooting guide](/troubleshoot/azure/azure-storage/file-sync-troubleshoot-sync-errors?toc=/azure/storage/file-sync/toc.json#handling-unsupported-characters) for more information. The following items don't sync, but the rest of the system continues to operate > Azure File Sync always encrypts data in transit. Data is always encrypted at rest in Azure. ### Server endpoint- - A server endpoint can be created only on an NTFS volume. ReFS, FAT, FAT32, and other file systems aren't currently supported by Azure File Sync. - Cloud tiering isn't supported on the system volume. To create a server endpoint on the system volume, disable cloud tiering when creating the server endpoint. - Failover Clustering is supported only with clustered disks, but not with Cluster Shared Volumes (CSVs). The following items don't sync, but the rest of the system continues to operate - Don't store an OS or application paging file within a server endpoint location. ### Cloud endpoint- - Azure File Sync supports making changes to the Azure file share directly. However, any changes made on the Azure file share first need to be discovered by an Azure File Sync change detection job. A change detection job is initiated for a cloud endpoint once every 24 hours. To immediately sync files that are changed in the Azure file share, use the [Invoke-AzStorageSyncChangeDetection](/powershell/module/az.storagesync/invoke-azstoragesyncchangedetection) PowerShell cmdlet to manually initiate the detection of changes in the Azure file share. - The storage sync service and/or storage account can be moved to a different resource group, subscription, or Microsoft Entra (formerly Azure AD) tenant. After moving the storage sync service or storage account, you need to give the Microsoft.StorageSync application access to the storage account (see [Ensure Azure File Sync has access to the storage account](/troubleshoot/azure/azure-storage/file-sync-troubleshoot-sync-errors?toc=/azure/storage/file-sync/toc.json#troubleshoot-rbac)). The following items don't sync, but the rest of the system continues to operate > When creating the cloud endpoint, the storage sync service and storage account must be in the same Microsoft Entra tenant. After you create the cloud endpoint, you can move the storage sync service and storage account to different Microsoft Entra tenants. 
### Cloud tiering- - If a tiered file is copied to another location by using Robocopy, the resulting file isn't tiered. The offline attribute might be set because Robocopy incorrectly includes that attribute in copy operations. - When copying files using Robocopy, use the /MIR option to preserve file timestamps. This will ensure older files are tiered sooner than recently accessed files. ## Version 17.2.0.0- The following release notes are for Azure File Sync version 17.2.0.0 (released February 28, 2024). This release contains improvements for the Azure File Sync service and agent. ### Improvements and issues that are fixed- The Azure File Sync v17.2 release is a rollup update for the v17.0 and v17.1 releases: - [Azure File Sync Agent v17 Release - December 2023](https://support.microsoft.com/topic/azure-file-sync-agent-v17-release-december-2023-flighting-2d8cba16-c035-4c54-b35d-1bd8fd795ba9) - [Azure File Sync Agent v17.1 Release - February 2024](https://support.microsoft.com/topic/azure-file-sync-agent-v17-1-release-february-2024-security-only-update-bd1ce41c-27f4-4e3d-a80f-92f74817c55b) ### Evaluation tool- Before deploying Azure File Sync, you should evaluate whether it's compatible with your system using the Azure File Sync evaluation tool. This tool is an Azure PowerShell cmdlet that checks for potential issues with your file system and dataset, such as unsupported characters or an unsupported OS version. For installation and usage instructions, see [Evaluation Tool](file-sync-planning.md#evaluation-cmdlet) section in the planning guide. ### Agent installation and server configuration- For more information on how to install and configure the Azure File Sync agent with Windows Server, see [Planning for an Azure File Sync deployment](file-sync-planning.md) and [How to deploy Azure File Sync](file-sync-deployment-guide.md). - The agent installation package must be installed with elevated (admin) permissions. For more information on how to install and configure the Azure File Sync agent w - The Storage Sync Agent (FileSyncSvc) service doesn't support server endpoints located on a volume that has the system volume information (SVI) directory compressed. This configuration will lead to unexpected results. ### Interoperability- - Antivirus, backup, and other applications that access tiered files can cause undesirable recall unless they respect the offline attribute and skip reading the content of those files. For more information, see [Troubleshoot Azure File Sync](/troubleshoot/azure/azure-storage/file-sync-troubleshoot?toc=/azure/storage/file-sync/toc.json). - File Server Resource Manager (FSRM) file screens can cause endless sync failures when files are blocked because of the file screen. - Running sysprep on a server that has the Azure File Sync agent installed isn't supported and can lead to unexpected results. The Azure File Sync agent should be installed after deploying the server image and completing sysprep mini-setup. ### Sync limitations- The following items don't sync, but the rest of the system continues to operate normally: - Files with unsupported characters. See [Troubleshooting guide](/troubleshoot/azure/azure-storage/file-sync-troubleshoot-sync-errors?toc=/azure/storage/file-sync/toc.json#handling-unsupported-characters) for a list of unsupported characters. The following items don't sync, but the rest of the system continues to operate > Azure File Sync always encrypts data in transit. Data is always encrypted at rest in Azure. 
### Server endpoint- - A server endpoint can be created only on an NTFS volume. ReFS, FAT, FAT32, and other file systems aren't currently supported by Azure File Sync. - Cloud tiering isn't supported on the system volume. To create a server endpoint on the system volume, disable cloud tiering when creating the server endpoint. - Failover Clustering is supported only with clustered disks, but not with Cluster Shared Volumes (CSVs). The following items don't sync, but the rest of the system continues to operate - Don't store an OS or application paging file within a server endpoint location. ### Cloud endpoint- - Azure File Sync supports making changes to the Azure file share directly. However, any changes made on the Azure file share first need to be discovered by an Azure File Sync change detection job. A change detection job is initiated for a cloud endpoint once every 24 hours. To immediately sync files that are changed in the Azure file share, you can use the [Invoke-AzStorageSyncChangeDetection](/powershell/module/az.storagesync/invoke-azstoragesyncchangedetection) PowerShell cmdlet to manually initiate the detection of changes in the Azure file share. - The storage sync service and/or storage account can be moved to a different resource group, subscription, or Azure AD tenant. After the storage sync service or storage account is moved, you need to give the Microsoft.StorageSync application access to the storage account (see [Ensure Azure File Sync has access to the storage account](/troubleshoot/azure/azure-storage/file-sync-troubleshoot-sync-errors?toc=/azure/storage/file-sync/toc.json#troubleshoot-rbac)). The following items don't sync, but the rest of the system continues to operate > When creating the cloud endpoint, the storage sync service and storage account must be in the same Azure AD tenant. Once the cloud endpoint is created, the storage sync service and storage account can be moved to different Azure AD tenants. ### Cloud tiering- - If a tiered file is copied to another location by using Robocopy, the resulting file isn't tiered. The offline attribute might be set because Robocopy incorrectly includes that attribute in copy operations. - When copying files using Robocopy, use the /MIR option to preserve file timestamps. This will ensure that older files are tiered sooner than recently accessed files. ## Version 17.1.0.0 (Security Update)- The following release notes are for Azure File Sync version 17.1.0.0 (released February 13, 2024). This release contains a security update for the Azure File Sync agent. These notes are in addition to the release notes listed for version 17.0.0.0. ### Improvements and issues that are fixed--- Fixes an issue that might allow unauthorized users to create new files in locations they aren't allowed to. This is a security-only update. For more information about this vulnerability, see [CVE-2024-21397](https://msrc.microsoft.com/update-guide/en-US/advisory/CVE-2024-21397).+Fixes an issue that might allow unauthorized users to create new files in locations they aren't allowed to. This is a security-only update. For more information about this vulnerability, see [CVE-2024-21397](https://msrc.microsoft.com/update-guide/en-US/advisory/CVE-2024-21397). ## Version 16.2.0.0 (Security Update)- The following release notes are for Azure File Sync version 16.2.0.0 (released February 13, 2024). This release contains security updates for the Azure File Sync agent. These notes are in addition to the release notes listed for version 16.0.0.0. 
### Improvements and issues that are fixed--- Fixes an issue that might allow unauthorized users to create new files in locations they aren't allowed to. This is a security-only update. For more information about this vulnerability, see [CVE-2024-21397](https://msrc.microsoft.com/update-guide/en-US/advisory/CVE-2024-21397).+Fixes an issue that might allow unauthorized users to create new files in locations they aren't allowed to. This is a security-only update. For more information about this vulnerability, see [CVE-2024-21397](https://msrc.microsoft.com/update-guide/en-US/advisory/CVE-2024-21397). ## Version 17.0.0.0 - The following release notes are for Azure File Sync version 17.0.0.0 (released December 6, 2023). This release contains improvements for the Azure File Sync service and agent. ### Improvements and issues that are fixed+**Sync upload performance improvements** +Sync upload performance has improved (performance numbers to be posted in the near future). This improvement will mainly benefit file share migrations (initial upload) and high churn events on the server in which a large number of files need to be uploaded. -- Sync upload performance improvements- - Sync upload performance has improved (performance numbers to be posted in the near future). This improvement will mainly benefit file share migrations (initial upload) and high churn events on the server in which a large number of files need to be uploaded. -- Expanded character support for file and directory names- - Azure File Sync now supports an expanded list of characters. This expansion allows for users to create and sync SMB file shares with file and directory names on par with NTFS file system, for valid Unicode characters. For more information on unsupported characters, refer to the documentation [here](/troubleshoot/azure/azure-storage/file-sync-troubleshoot-sync-errors?toc=%2Fazure%2Fstorage%2Ffile-sync%2Ftoc.json&tabs=portal1%2Cazure-portal#handling-unsupported-characters). -- New cloud tiering low disk space mode metric- - You can now configure an alert if a server is in low disk space mode. To learn more, see [Monitor Azure File Sync](file-sync-monitoring.md). -- Fixed an issue that caused the agent upgrade to hang-- Fixed a bug that caused the ESE database engine (also known as JET) to generate logs under C:\Windows\System32 directory-- Miscellaneous reliability and telemetry improvements for cloud tiering and sync+**Expanded character support for file and directory names** +Azure File Sync now supports an expanded list of characters. This expansion allows for users to create and sync SMB file shares with file and directory names on par with NTFS file system, for valid Unicode characters. For more information on unsupported characters, refer to the documentation [here](/troubleshoot/azure/azure-storage/file-sync-troubleshoot-sync-errors?toc=%2Fazure%2Fstorage%2Ffile-sync%2Ftoc.json&tabs=portal1%2Cazure-portal#handling-unsupported-characters). -### Evaluation Tool +**New cloud tiering low disk space mode metric** +You can now configure an alert if a server is in low disk space mode. To learn more, see [Monitor Azure File Sync](file-sync-monitoring.md). 
++**Fixed an issue that caused the agent upgrade to hang** ++**Fixed a bug that caused the ESE database engine (also known as JET) to generate logs under C:\Windows\System32 directory** +**Miscellaneous reliability and telemetry improvements for cloud tiering and sync** ++### Evaluation Tool Before deploying Azure File Sync, you should evaluate whether it's compatible with your system using the Azure File Sync evaluation tool. This tool is an Azure PowerShell cmdlet that checks for potential issues with your file system and dataset, such as unsupported characters or an unsupported OS version. For installation and usage instructions, see [Evaluation Tool](file-sync-planning.md#evaluation-cmdlet) section in the planning guide. ### Agent installation and server configuration- For more information on how to install and configure the Azure File Sync agent with Windows Server, see [Planning for an Azure File Sync deployment](file-sync-planning.md) and [How to deploy Azure File Sync](file-sync-deployment-guide.md). - The agent installation package must be installed with elevated (admin) permissions. For more information on how to install and configure the Azure File Sync agent w - The Storage Sync Agent (FileSyncSvc) service doesn't support server endpoints located on a volume that has the system volume information (SVI) directory compressed. This configuration will lead to unexpected results. ### Interoperability- - Antivirus, backup, and other applications that access tiered files can cause undesirable recall unless they respect the offline attribute and skip reading the content of those files. For more information, see [Troubleshoot Azure File Sync](/troubleshoot/azure/azure-storage/file-sync-troubleshoot?toc=/azure/storage/file-sync/toc.json). - File Server Resource Manager (FSRM) file screens can cause endless sync failures when files are blocked because of the file screen. - Running sysprep on a server that has the Azure File Sync agent installed isn't supported and can lead to unexpected results. The Azure File Sync agent should be installed after deploying the server image and completing sysprep mini-setup. ### Sync limitations- The following items don't sync, but the rest of the system continues to operate normally: - Azure File Sync v17 agent supports all characters that are supported by the [NTFS file system](/windows/win32/fileio/naming-a-file) except invalid surrogate pairs. See [Troubleshooting guide](/troubleshoot/azure/azure-storage/file-sync-troubleshoot-sync-errors?toc=/azure/storage/file-sync/toc.json#handling-unsupported-characters) for more information. The following items don't sync, but the rest of the system continues to operate > Azure File Sync always encrypts data in transit. Data is always encrypted at rest in Azure. ### Server endpoint- - A server endpoint can be created only on an NTFS volume. ReFS, FAT, FAT32, and other file systems aren't currently supported by Azure File Sync. - Cloud tiering isn't supported on the system volume. To create a server endpoint on the system volume, disable cloud tiering when creating the server endpoint. - Failover Clustering is supported only with clustered disks, but not with Cluster Shared Volumes (CSVs). The following items don't sync, but the rest of the system continues to operate - Don't store an OS or application paging file within a server endpoint location. ### Cloud endpoint- - Azure File Sync supports making changes to the Azure file share directly. 
However, any changes made on the Azure file share first need to be discovered by an Azure File Sync change detection job. A change detection job is initiated for a cloud endpoint once every 24 hours. To immediately sync files that are changed in the Azure file share, use the [Invoke-AzStorageSyncChangeDetection](/powershell/module/az.storagesync/invoke-azstoragesyncchangedetection) PowerShell cmdlet to manually initiate the detection of changes in the Azure file share. - The storage sync service and/or storage account can be moved to a different resource group, subscription, or Microsoft Entra (formerly Azure AD) tenant. After moving the storage sync service or storage account, you need to give the Microsoft.StorageSync application access to the storage account (see [Ensure Azure File Sync has access to the storage account](/troubleshoot/azure/azure-storage/file-sync-troubleshoot-sync-errors?toc=/azure/storage/file-sync/toc.json#troubleshoot-rbac)). The following items don't sync, but the rest of the system continues to operate > When creating the cloud endpoint, the storage sync service and storage account must be in the same Microsoft Entra tenant. After you create the cloud endpoint, you can move the storage sync service and storage account to different Microsoft Entra tenants. ### Cloud tiering- - If a tiered file is copied to another location by using Robocopy, the resulting file isn't tiered. The offline attribute might be set because Robocopy incorrectly includes that attribute in copy operations. - When copying files using Robocopy, use the /MIR option to preserve file timestamps. This will ensure older files are tiered sooner than recently accessed files. ## Version 16.0.0.0- The following release notes are for Azure File Sync version 16.0.0.0 (released January 30, 2023). This release contains improvements for the Azure File Sync service and agent. ### Improvements and issues that are fixed+**Improved Azure File Sync service availability** +Azure File Sync is now a zone-redundant service, which means an outage in a zone has limited impact while improving the service resiliency to minimize customer impact. To fully use this improvement, configure your storage accounts to use zone-redundant storage (ZRS) or Geo-zone redundant storage (GZRS) replication. To learn more about different redundancy options for your storage accounts, see [Azure Files redundancy](../files/files-redundancy.md). -- Improved Azure File Sync service availability- - Azure File Sync is now a zone-redundant service, which means an outage in a zone has limited impact while improving the service resiliency to minimize customer impact. To fully use this improvement, configure your storage accounts to use zone-redundant storage (ZRS) or Geo-zone redundant storage (GZRS) replication. To learn more about different redundancy options for your storage accounts, see [Azure Files redundancy](../files/files-redundancy.md). -- Immediately run server change enumeration to detect files changes that were missed on the server- - Azure File Sync uses the [Windows USN journal](/windows/win32/fileio/change-journals) feature on Windows Server to immediately detect files that were changed and upload them to the Azure file share. If files changed are missed due to journal wrap or other issues, the files won't sync to the Azure file share until the changes are detected. Azure File Sync has a server change enumeration job that runs every 24 hours on the server endpoint path to detect changes that were missed by the USN journal. 
If you don't want to wait until the next server change enumeration job runs, you can now use the `Invoke-StorageSyncServerChangeDetection` PowerShell cmdlet to immediately run server change enumeration on a server endpoint path. +**Immediately run server change enumeration to detect file changes that were missed on the server** +Azure File Sync uses the [Windows USN journal](/windows/win32/fileio/change-journals) feature on Windows Server to immediately detect files that were changed and upload them to the Azure file share. If files changed are missed due to journal wrap or other issues, the files won't sync to the Azure file share until the changes are detected. Azure File Sync has a server change enumeration job that runs every 24 hours on the server endpoint path to detect changes that were missed by the USN journal. If you don't want to wait until the next server change enumeration job runs, you can now use the `Invoke-StorageSyncServerChangeDetection` PowerShell cmdlet to immediately run server change enumeration on a server endpoint path. - To immediately run server change enumeration on a server endpoint path, run the following PowerShell commands: -- ```powershell - Import-Module "C:\Program Files\Azure\StorageSyncAgent\StorageSync.Management.ServerCmdlets.dll" - Invoke-StorageSyncServerChangeDetection -ServerEndpointPath <path> - ``` -- > [!NOTE] - > By default, the server change enumeration scan will only check the modified timestamp. To perform a deeper check, use the -DeepScan parameter. +To immediately run server change enumeration on a server endpoint path, run the following PowerShell commands: +```powershell +Import-Module "C:\Program Files\Azure\StorageSyncAgent\StorageSync.Management.ServerCmdlets.dll" +Invoke-StorageSyncServerChangeDetection -ServerEndpointPath <path> +``` +> [!NOTE] +> By default, the server change enumeration scan will only check the modified timestamp. To perform a deeper check, use the `-DeepScan` parameter. -- Bug fix for the PowerShell script FileSyncErrorsReport.ps1+**Bug fix for the PowerShell script FileSyncErrorsReport.ps1** -- Miscellaneous reliability and telemetry improvements for cloud tiering and sync+**Miscellaneous reliability and telemetry improvements for cloud tiering and sync** ### Evaluation Tool- Before deploying Azure File Sync, you should evaluate whether it's compatible with your system using the Azure File Sync evaluation tool. This tool is an Azure PowerShell cmdlet that checks for potential issues with your file system and dataset, such as unsupported characters or an unsupported OS version. For installation and usage instructions, see [Evaluation Tool](file-sync-planning.md#evaluation-cmdlet) section in the planning guide. ### Agent installation and server configuration- For more information on how to install and configure the Azure File Sync agent with Windows Server, see [Planning for an Azure File Sync deployment](file-sync-planning.md) and [How to deploy Azure File Sync](file-sync-deployment-guide.md). - The agent installation package must be installed with elevated (admin) permissions. For more information on how to install and configure the Azure File Sync agent w - The Storage Sync Agent (FileSyncSvc) service doesn't support server endpoints located on a volume that has the system volume information (SVI) directory compressed. This configuration will lead to unexpected results. 
### Interoperability- - Antivirus, backup, and other applications that access tiered files can cause undesirable recall unless they respect the offline attribute and skip reading the content of those files. For more information, see [Troubleshoot Azure File Sync](/troubleshoot/azure/azure-storage/file-sync-troubleshoot?toc=/azure/storage/file-sync/toc.json). - File Server Resource Manager (FSRM) file screens can cause endless sync failures when files are blocked because of the file screen. - Running sysprep on a server that has the Azure File Sync agent installed isn't supported and can lead to unexpected results. The Azure File Sync agent should be installed after deploying the server image and completing sysprep mini-setup. ### Sync limitations- The following items don't sync, but the rest of the system continues to operate normally: - Files with unsupported characters. See [Troubleshooting guide](/troubleshoot/azure/azure-storage/file-sync-troubleshoot-sync-errors?toc=/azure/storage/file-sync/toc.json#handling-unsupported-characters) for a list of unsupported characters. The following items don't sync, but the rest of the system continues to operate > [!NOTE] > Azure File Sync always encrypts data in transit. Data is always encrypted at rest in Azure.-### Server endpoint +### Server endpoint - A server endpoint can be created only on an NTFS volume. ReFS, FAT, FAT32, and other file systems aren't currently supported by Azure File Sync. - Cloud tiering isn't supported on the system volume. To create a server endpoint on the system volume, disable cloud tiering when creating the server endpoint. - Failover Clustering is supported only with clustered disks, but not with Cluster Shared Volumes (CSVs). The following items don't sync, but the rest of the system continues to operate - Don't store an OS or application paging file within a server endpoint location. ### Cloud endpoint- - Azure File Sync supports making changes to the Azure file share directly. However, any changes made on the Azure file share first need to be discovered by an Azure File Sync change detection job. A change detection job is initiated for a cloud endpoint once every 24 hours. To immediately sync files that are changed in the Azure file share, the [Invoke-AzStorageSyncChangeDetection](/powershell/module/az.storagesync/invoke-azstoragesyncchangedetection) PowerShell cmdlet can be used to manually initiate the detection of changes in the Azure file share. - The storage sync service and/or storage account can be moved to a different resource group, subscription, or Azure AD tenant. After the storage sync service or storage account is moved, you need to give the Microsoft.StorageSync application access to the storage account (see [Ensure Azure File Sync has access to the storage account](/troubleshoot/azure/azure-storage/file-sync-troubleshoot-sync-errors?toc=/azure/storage/file-sync/toc.json#troubleshoot-rbac)). > [!NOTE] > When creating the cloud endpoint, the storage sync service and storage account must be in the same Azure AD tenant. Once the cloud endpoint is created, the storage sync service and storage account can be moved to different Azure AD tenants.-### Cloud tiering +### Cloud tiering - If a tiered file is copied to another location by using Robocopy, the resulting file isn't tiered. The offline attribute might be set because Robocopy incorrectly includes that attribute in copy operations. - When copying files using Robocopy, use the /MIR option to preserve file timestamps. 
This ensures that older files are tiered sooner than recently accessed files. |
virtual-desktop | Configure Session Lock Behavior | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/configure-session-lock-behavior.md | + + Title: Configure the session lock behavior for Azure Virtual Desktop +description: Learn how to configure session lock behavior for Azure Virtual Desktop. +++ Last updated : 09/02/2024+++# Configure the session lock behavior for Azure Virtual Desktop ++You can choose whether the session is disconnected or the remote lock screen is shown when a remote session is locked, either by the user or by policy. When the session lock behavior is set to disconnect, a dialog is shown to let users know they were disconnected. Users can choose the **Reconnect** option from the dialog when they're ready to connect again. ++When used with single sign-on using Microsoft Entra ID, disconnecting the session provides the following benefits: ++- A consistent sign-in experience through Microsoft Entra ID when needed. ++- A single sign-on experience and reconnection without an authentication prompt, when allowed by conditional access policies. ++- Support for passwordless authentication like passkeys and FIDO2 devices, unlike the remote lock screen. Disconnecting the session is necessary to ensure full support of passwordless authentication. ++- Conditional access policies, including multifactor authentication and sign-in frequency, are reevaluated when the user reconnects to their session. ++- You can require multifactor authentication to return to the session and prevent users from unlocking with a simple username and password. ++For scenarios that rely on legacy authentication, including NTLM, CredSSP, RDSTLS, TLS, and RDP basic authentication protocols, users are prompted to re-enter their credentials. ++The default session lock behavior is different depending on whether you're using single sign-on with Microsoft Entra ID or legacy authentication. The following table shows the default configuration for each scenario: ++| Scenario | Default configuration | +|--|--| +| Single sign-on using Microsoft Entra ID | Disconnect the session | +| Legacy authentication protocols | Show the remote lock screen | ++This article shows you how to change the session lock behavior from its default configuration using Microsoft Intune or Group Policy. ++## Prerequisites ++Select the relevant tab for your configuration method. ++# [Intune](#tab/intune) ++Before you can configure the session lock behavior, you need to meet the following prerequisites: ++- An existing host pool with session hosts. ++- Your session hosts must be running one of the following operating systems with the relevant cumulative update installed: ++ - Windows 11 single or multi-session with the [2024-05 Cumulative Updates for Windows 11 (KB5037770)](https://support.microsoft.com/kb/KB5037770) or later installed. + - Windows 10 single or multi-session, versions 21H2 or later with the [2024-06 Cumulative Updates for Windows 10 (KB5039211)](https://support.microsoft.com/kb/KB5039211) or later installed. + - Windows Server 2022 with the [2024-05 Cumulative Update for Microsoft server operating system (KB5037782)](https://support.microsoft.com/kb/KB5037782) or later installed. ++- To configure Intune, you need: ++ - A Microsoft Entra ID account that is assigned the [Policy and Profile manager](/mem/intune/fundamentals/role-based-access-control-reference#policy-and-profile-manager) built-in RBAC role. + - A group containing the devices you want to configure.
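Before assigning policy, it can help to verify that the required cumulative update from the list above is present on a session host. A minimal sketch, using the Windows 11 KB as an example; note that `Get-HotFix` only reports updates recorded in WMI, so treat an empty result as a prompt to check Windows Update history rather than proof the fix is missing:

```powershell
# Returns the update record if this exact KB is installed; empty output means it's
# absent (although a later cumulative update may supersede it).
Get-HotFix -Id KB5037770 -ErrorAction SilentlyContinue
```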
++# [Group Policy](#tab/group-policy) ++Before you can configure the session lock behavior, you need to meet the following prerequisites: ++- An existing host pool with session hosts. ++- Your session hosts must be running one of the following operating systems with the relevant cumulative update installed: ++ - Windows 11 single or multi-session with the [2024-05 Cumulative Updates for Windows 11 (KB5037770)](https://support.microsoft.com/kb/KB5037770) or later installed. + - Windows 10 single or multi-session, versions 21H2 or later with the [2024-06 Cumulative Updates for Windows 10 (KB5039211)](https://support.microsoft.com/kb/KB5039211) or later installed. + - Windows Server 2022 with the [2024-05 Cumulative Update for Microsoft server operating system (KB5037782)](https://support.microsoft.com/kb/KB5037782) or later installed. ++- To configure Group Policy, you need: ++ - A domain account that has permission to create or edit Group Policy objects. + - A security group or organizational unit (OU) containing the devices you want to configure. ++++## Configure the session lock behavior ++Select the relevant tab for your configuration method. ++# [Intune](#tab/intune) ++To configure the session lock experience using Intune: ++1. Sign in to the [Microsoft Intune admin center](https://endpoint.microsoft.com/). ++1. [Create or edit a configuration profile](/mem/intune/configuration/administrative-templates-windows) for **Windows 10 and later** devices, with the **Settings catalog** profile type. ++1. In the settings picker, browse to **Administrative templates** > **Windows Components** > **Remote Desktop Services** > **Remote Desktop Session Host** > **Security**. ++ :::image type="content" source="media/configure-session-lock-behavior/remote-desktop-session-host-security-intune.png" alt-text="A screenshot showing the Remote Desktop Session Host security options in the Microsoft Intune portal." lightbox="media/configure-session-lock-behavior/remote-desktop-session-host-security-intune.png"::: ++1. Check the box for one of the following settings, depending on your requirements: ++ - For single sign-on using Microsoft Entra ID: ++ 1. Check the box for **Disconnect remote session on lock for Microsoft identity platform authentication**, then close the settings picker. ++ 1. Expand the **Administrative templates** category, then toggle the switch for **Disconnect remote session on lock for Microsoft identity platform authentication** to **Enabled** or **Disabled**: ++ - To disconnect the remote session when the session locks, toggle the switch to **Enabled**, then select **OK**. ++ - To show the remote lock screen when the session locks, toggle the switch to **Disabled**, then select **OK**. ++ - For legacy authentication protocols: ++ 1. Check the box for **Disconnect remote session on lock for legacy authentication**, then close the settings picker. ++ 1. Expand the **Administrative templates** category, then toggle the switch for **Disconnect remote session on lock for legacy authentication** to **Enabled** or **Disabled**: ++ - To disconnect the remote session when the session locks, toggle the switch to **Enabled**, then select **OK**. ++ - To show the remote lock screen when the session locks, toggle the switch to **Disabled**, then select **OK**. ++1. Select **Next**. ++1. *Optional*: On the **Scope tags** tab, select a scope tag to filter the profile. 
For more information about scope tags, see [Use role-based access control (RBAC) and scope tags for distributed IT](/mem/intune/fundamentals/scope-tags). ++1. On the **Assignments** tab, select the group containing the computers providing a remote session you want to configure, then select **Next**. ++1. On the **Review + create** tab, review the settings, then select **Create**. ++1. Once the policy applies to the session hosts, restart them for the settings to take effect. ++1. To test the configuration, connect to a remote session, then lock the remote session. Verify that the session either disconnects or the remote lock screen is shown, depending on your configuration. ++# [Group Policy](#tab/group-policy) ++To configure the session lock experience using Group Policy, follow these steps. ++1. The Group Policy settings are only available on the operating systems listed in [Prerequisites](#prerequisites). To make them available on other versions of Windows Server, you need to copy the administrative template files `C:\Windows\PolicyDefinitions\terminalserver.admx` and `C:\Windows\PolicyDefinitions\en-US\terminalserver.adml` from a session host to the same location on your domain controllers or the [Group Policy Central Store](/troubleshoot/windows-client/group-policy/create-and-manage-central-store), depending on your environment. In the file path for `terminalserver.adml`, replace `en-US` with the appropriate language code if you're using a different language. ++1. Open the **Group Policy Management** console on the device you use to manage the Active Directory domain. ++1. Create or edit a policy that targets the computers providing a remote session you want to configure. ++1. Navigate to **Computer Configuration** > **Policies** > **Administrative Templates** > **Windows Components** > **Remote Desktop Services** > **Remote Desktop Session Host** > **Security**. ++ :::image type="content" source="media/configure-session-lock-behavior/remote-desktop-session-host-security-group-policy.png" alt-text="A screenshot showing the Remote Desktop Session Host security options in the Group Policy editor." lightbox="media/configure-session-lock-behavior/remote-desktop-session-host-security-group-policy.png"::: ++1. Double-click one of the following policy settings, depending on your requirements: ++ - For single sign-on using Microsoft Entra ID: + + 1. Double-click **Disconnect remote session on lock for Microsoft identity platform authentication** to open it. ++ - To disconnect the remote session when the session locks, select **Enabled** or **Not configured**. ++ - To show the remote lock screen when the session locks, select **Disabled**. ++ 1. Select **OK**. ++ - For legacy authentication protocols: ++ 1. Double-click **Disconnect remote session on lock for legacy authentication** to open it. ++ - To disconnect the remote session when the session locks, select **Enabled** or **Not configured**. ++ - To show the remote lock screen when the session locks, select **Disabled**. ++ 1. Select **OK**. ++1. Ensure the policy is applied to the session hosts, then restart them for the settings to take effect. ++1. To test the configuration, connect to a remote session, then lock the remote session. Verify that the session either disconnects or the remote lock screen is shown, depending on your configuration. ++++## Related content ++- Learn how to [Configure single sign-on for Azure Virtual Desktop using Microsoft Entra ID](configure-single-sign-on.md).
++- Check out [In-session passwordless authentication](authentication.md#in-session-passwordless-authentication) to learn how to enable passwordless authentication. ++- For more information about Microsoft Entra Kerberos, see [Deep dive: How Microsoft Entra Kerberos works](https://techcommunity.microsoft.com/t5/itops-talk-blog/deep-dive-how-azure-ad-kerberos-works/ba-p/3070889). |
virtual-desktop | Configure Single Sign On | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/configure-single-sign-on.md | Title: Configure single sign-on for Azure Virtual Desktop using Microsoft Entra ID authentication -description: Learn how to configure single sign-on for an Azure Virtual Desktop environment using Microsoft Entra ID authentication. + Title: Configure single sign-on for Azure Virtual Desktop using Microsoft Entra ID +description: Learn how to configure single sign-on for an Azure Virtual Desktop environment using Microsoft Entra ID. Previously updated : 08/28/2024 Last updated : 09/02/2024 -# Configure single sign-on for Azure Virtual Desktop using Microsoft Entra ID authentication +# Configure single sign-on for Azure Virtual Desktop using Microsoft Entra ID -This article walks you through the process of configuring single sign-on (SSO) for Azure Virtual Desktop using Microsoft Entra ID authentication. When you enable single sign-on, users authenticate to Windows using a Microsoft Entra ID token. This token enables the use of passwordless authentication and third-party identity providers that federate with Microsoft Entra ID when connecting to a session host, making the sign-in experience seamless. +Single sign-on (SSO) for Azure Virtual Desktop using Microsoft Entra ID provides a seamless sign-in experience for users connecting to session hosts. When you enable single sign-on, users authenticate to Windows using a Microsoft Entra ID token. This token enables the use of passwordless authentication and third-party identity providers that federate with Microsoft Entra ID when connecting to a session host. -Single sign-on using Microsoft Entra ID authentication also provides a seamless experience for Microsoft Entra ID-based resources inside the session. For more information on using passwordless authentication within a session, see [In-session passwordless authentication](authentication.md#in-session-passwordless-authentication). +Single sign-on using Microsoft Entra ID also provides a seamless experience for Microsoft Entra ID-based resources within the session. For more information on using passwordless authentication within a session, see [In-session passwordless authentication](authentication.md#in-session-passwordless-authentication). To enable single sign-on using Microsoft Entra ID authentication, there are five tasks you must complete: To enable single sign-on using Microsoft Entra ID authentication, there are five Before you enable single sign-on, review the following information for using it in your environment. -### Disconnection when the session is locked +### Session lock behavior -When single sign-on is enabled and the remote session is locked, either by the user or by policy, the session is instead disconnected and a dialog is shown to let users know they were disconnected. Users can choose the **Reconnect** option from the dialog when they are ready to connect again. This is done for security reasons and to ensure full support of passwordless authentication. Disconnecting the session provides the following benefits: +When single sign-on using Microsoft Entra ID is enabled and the remote session is locked, either by the user or by policy, you can choose whether the session is disconnected or the remote lock screen is shown. The default behavior is to disconnect the session when it locks.
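For admins who script session hosts directly, the Registry tab removed later in this diff documents the value that backs this behavior. A minimal sketch that applies it, assuming the key path and value name from that removed content still hold and that you run it elevated on a session host:

```powershell
# 1 = disconnect the session on lock (default with Entra ID SSO); 0 = show the remote lock screen.
$key = 'HKLM:\SOFTWARE\Policies\Microsoft\Windows NT\Terminal Services'
New-Item -Path $key -Force | Out-Null    # create the key if it doesn't already exist
New-ItemProperty -Path $key -Name 'fdisconnectonlockmicrosoftidentity' -PropertyType DWord -Value 0 -Force
```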
++When the session lock behavior is set to disconnect, a dialog is shown to let users know they were disconnected. Users can choose the **Reconnect** option from the dialog when they're ready to connect again. This behavior is done for security reasons and to ensure full support of passwordless authentication. Disconnecting the session provides the following benefits: - Consistent sign-in experience through Microsoft Entra ID when needed. When single sign-on is enabled and the remote session is locked, either by the u - Supports passwordless authentication like passkeys and FIDO2 devices, contrary to the remote lock screen. -- Conditional access policies, including multifactor authentication and sign-in frequency, are re-evaluated when the user reconnects to their session.--- Can require multi-factor authentication to return to the session and prevent users from unlocking with a simple username and password.--If you prefer to show the remote lock screen instead of disconnecting the session, your session hosts must use the following operating systems: --- Windows 11 single or multi-session with the [2024-05 Cumulative Updates for Windows 11 (KB5037770)](https://support.microsoft.com/kb/KB5037770) or later installed.--- Windows 10 single or multi-session, versions 21H2 or later with the [2024-06 Cumulative Updates for Windows 10 (KB5039211)](https://support.microsoft.com/kb/KB5039211) or later installed.--- Windows Server 2022 with the [2024-05 Cumulative Update for Microsoft server operating system (KB5037782)](https://support.microsoft.com/kb/KB5037782) or later installed.--You can configure the session lock behavior of your session hosts by using Intune, Group Policy, or the registry. --# [Intune](#tab/intune) --To configure the session lock experience using Intune, follow these steps. This process creates an Intune [settings catalog](/mem/intune/configuration/settings-catalog) policy. --1. Sign in to the [Microsoft Intune admin center](https://intune.microsoft.com/). --1. Select **Devices** > **Manage devices** > **Configuration** > **Create** > **New policy**. --1. Enter the following properties: -- - **Platform**: Select **Windows 10 and later**. -- - **Profile type**: Select **Settings catalog**. --1. Select **Create**. --1. In **Basics**, enter the following properties: -- - **Name**: Enter a descriptive name for the profile. Name your profile so you can easily identify it later. -- - **Description**: Enter a description for the profile. This setting is optional, but recommended. --1. Select **Next**. --1. In **Configuration settings**, select **Add settings**. Then: -- 1. In the settings picker, expand **Administrative Templates > Windows Components > Remote Desktop Services > Remote Desktop Session Host > Security**. -- 1. Select the **Disconnect remote session on lock for Microsoft identity platform authentication** setting. -- 1. Close the settings picker. --1. Configure the setting to "Disabled" to show the remote lock screen when the session locks. --1. Select **Next**. +- Conditional access policies, including multifactor authentication and sign-in frequency, are reevaluated when the user reconnects to their session. -1. (Optional) Add the **Scope tags**. For more information about scope tags in Intune, see [Use RBAC roles and scope tags for distributed IT](/mem/intune/fundamentals/scope-tags). +- Can require multifactor authentication to return to the session and prevent users from unlocking with a simple username and password. -1. Select **Next**. --1.
For the **Assignments** tab, select the devices, or groups to receive the profile, then select **Next**. For more information on assigning profiles, see [Assign user and device profiles](/mem/intune/configuration/device-profile-assign). --1. On the **Review + create** tab, review the configuration information, then select **Create**. --1. Once the policy configuration is created, the setting will take effect after the session hosts sync with Intune and users initiate a new session. --# [Group Policy](#tab/group-policy) --To configure the session lock experience using Group Policy, follow these steps. --1. Open **Local Group Policy Editor** from the Start menu or by running `gpedit.msc`. --1. Browse to the following policy section: -- - `Computer Configuration\Administrative Templates\Windows Components\Remote Desktop Services\Remote Desktop Session Host\Security` --1. Select the **Disconnect remote session on lock for Microsoft identity platform authentication** policy. --1. Set the policy to **Disabled** to show the remote lock screen when the session locks. --1. Select **OK** to save your changes. --1. Once the policy is configured, it will take effect after the user initiates a new session. --> [!TIP] -> To configure the Group Policy centrally on Active Directory Domain Controllers using Windows Server 2019 or Windows Server 2016, copy the `terminalserver.admx` and `terminalserver.adml` administrative template files from a session host to the [Group Policy Central Store](/troubleshoot/windows-client/group-policy/create-and-manage-central-store) on the domain controller. --# [Registry](#tab/registry) --To configure the session lock experience using the registry on a session host, follow these steps. --1. Open **Registry Editor** from the Start menu or by running `regedit.exe`. --1. Set the following registry key and its value. -- - **Key**: `HKLM\Software\Policies\Microsoft\Windows NT\Terminal Services` -- - **Type**: `REG_DWORD` -- - **Value name**: `fdisconnectonlockmicrosoftidentity` -- - **Value data**: Enter a value from the following table: -- | Value Data | Description | - |--|--| - | `0` | Show the remote lock screen. | - | `1` | Disconnect the session. | +If you want to configure the session lock behavior to show the remote lock screen instead of disconnecting the session, see [Configure the session lock behavior](configure-session-lock-behavior.md). ### Active Directory domain administrator accounts with single sign-on -In environments with an Active Directory Domain Services (AD DS) and hybrid user accounts, the default *Password Replication Policy* on read-only domain controllers denies password replication for members of *Domain Admins* and *Administrators* security groups. This policy prevents these administrator accounts from signing in to Microsoft Entra hybrid joined hosts and might keep prompting them to enter their credentials. It also prevents administrator accounts from accessing on-premises resources that use Kerberos authentication from Microsoft Entra joined hosts. We don't recommend connecting to a remote session using an account that is a domain administrator. +In environments with an Active Directory Domain Services (AD DS) domain and hybrid user accounts, the default *Password Replication Policy* on read-only domain controllers denies password replication for members of *Domain Admins* and *Administrators* security groups.
This policy prevents these administrator accounts from signing in to Microsoft Entra hybrid joined hosts and might keep prompting them to enter their credentials. It also prevents administrator accounts from accessing on-premises resources that use Kerberos authentication from Microsoft Entra joined hosts. We don't recommend connecting to a remote session using an account that is a domain administrator for security reasons. If you need to make changes to a session host as an administrator, sign in to the session host using a non-administrator account, then use the *Run as administrator* option or the [runas](/previous-versions/windows/it-pro/windows-server-2012-r2-and-2012/cc771525(v=ws.11)) tool from a command prompt to change to an administrator. If you need to make changes to a session host as an administrator, sign in to th Before you can enable single sign-on, you must meet the following prerequisites: -- To configure your Microsoft Entra tenant, you must be assigned one of the following [Microsoft Entra built-in roles](/entra/identity/role-based-access-control/manage-roles-portal):+- To configure your Microsoft Entra tenant, you must be assigned one of the following [Microsoft Entra built-in roles](/entra/identity/role-based-access-control/manage-roles-portal) or equivalent: - [Application Administrator](/entra/identity/role-based-access-control/permissions-reference#application-administrator) - [Cloud Application Administrator](/entra/identity/role-based-access-control/permissions-reference#cloud-application-administrator) - - [Global Administrator](/entra/identity/role-based-access-control/permissions-reference#global-administrator) - - Your session hosts must be running one of the following operating systems with the relevant cumulative update installed: - Windows 11 Enterprise single or multi-session with the [2022-10 Cumulative Updates for Windows 11 (KB5018418)](https://support.microsoft.com/kb/KB5018418) or later installed. Before you can enable single sign-on, you must meet the following prerequisites: - Your session hosts must be [Microsoft Entra joined](/entra/identity/devices/concept-directory-join) or [Microsoft Entra hybrid joined](/entra/identity/devices/concept-hybrid-join). Session hosts joined only to Microsoft Entra Domain Services or to Active Directory Domain Services aren't supported. - If your Microsoft Entra hybrid joined session hosts are in a different Active Directory domain than your user accounts, there must be a two-way trust between the two domains. Without the two-way trust, connections will fall back to older authentication protocols. + If your Microsoft Entra hybrid joined session hosts are in a different Active Directory domain than your user accounts, there must be a two-way trust between the two domains. Without the two-way trust, connections fall back to older authentication protocols. - [Install the Microsoft Graph PowerShell SDK](/powershell/microsoftgraph/installation) version 2.9.0 or later on your local device or in [Azure Cloud Shell](../cloud-shell/overview.md).
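A minimal sketch for that last prerequisite; the 2.9.0 version floor comes from the bullet above, while the `Application.ReadWrite.All` scope is an assumption about what the service principal configuration later in the article requires:

```powershell
# Install the Microsoft Graph PowerShell SDK for the current user, then sign in.
Install-Module Microsoft.Graph -MinimumVersion 2.9.0 -Scope CurrentUser
Connect-MgGraph -Scopes 'Application.ReadWrite.All'   # scope is an assumption, not stated in the article
```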
You must first allow Microsoft Entra authentication for Windows in your Microsof | Application Name | Application ID | |--|--|-| Microsoft Remote Desktop | a4a365df-50f1-4397-bc59-1a1564b8bb9c | -| Windows Cloud Login | 270efc09-cd0d-444b-a71f-39af4910ec45 | +| Microsoft Remote Desktop | `a4a365df-50f1-4397-bc59-1a1564b8bb9c` | +| Windows Cloud Login | `270efc09-cd0d-444b-a71f-39af4910ec45` | > [!IMPORTANT] > As part of an upcoming change, we're transitioning from Microsoft Remote Desktop to Windows Cloud Login, beginning in 2024. Configuring both applications now ensures you're ready for the change. To configure the service principal, use the [Microsoft Graph PowerShell SDK](/po ## Hide the consent prompt dialog -By default when single sign-on is enabled, users will see a dialog to allow the Remote Desktop connection when connecting to a new session host. Microsoft Entra remembers up to 15 hosts for 30 days before prompting again. If users see this dialogue to allow the Remote Desktop connection, they can select **Yes** to connect. +By default, when single sign-on is enabled, users see a dialog to allow the Remote Desktop connection when connecting to a new session host. Microsoft Entra remembers up to 15 hosts for 30 days before prompting again. If users see this dialog to allow the Remote Desktop connection, they can select **Yes** to connect. You can hide this dialog by configuring a list of trusted devices. To configure the list of devices, create one or more groups in Microsoft Entra ID that contain your session hosts, then add the group IDs to a property on the SSO service principals, *Microsoft Remote Desktop* and *Windows Cloud Login*. To enable single sign-on on your host pool, you must configure the following RDP - Check out [In-session passwordless authentication](authentication.md#in-session-passwordless-authentication) to learn how to enable passwordless authentication. -- For more information about Microsoft Entra Kerberos, see [Deep dive: How Microsoft Entra Kerberos works](https://techcommunity.microsoft.com/t5/itops-talk-blog/deep-dive-how-azure-ad-kerberos-works/ba-p/3070889)--- If you're accessing Azure Virtual Desktop from our Windows Desktop client, see [Connect with the Windows Desktop client](./users/connect-windows.md).+- Learn how to [Configure the session lock behavior for Azure Virtual Desktop](configure-session-lock-behavior.md). -- If you're accessing Azure Virtual Desktop from our web client, see [Connect with the web client](./users/connect-web.md).+- For more information about Microsoft Entra Kerberos, see [Deep dive: How Microsoft Entra Kerberos works](https://techcommunity.microsoft.com/t5/itops-talk-blog/deep-dive-how-azure-ad-kerberos-works/ba-p/3070889). - If you encounter any issues, go to [Troubleshoot connections to Microsoft Entra joined VMs](troubleshoot-azure-ad-connections.md). |
virtual-desktop | Publish Applications Stream Remoteapp | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/publish-applications-stream-remoteapp.md | Applications aren't assigned individually to users unless you're using app attac ## Publish Microsoft Store applications -Applications in the Microsoft Store are updated frequently and often install automatically. The directory path for an application installed from the Microsoft Store includes the version number, which changes each time an application is updated. If an update happens automatically, the path changes and the application is no longer available to users. You can publish applications using the Windows `shell:appsFolder` location in the format `shell:AppsFolder\<PackageFamilyName>!<AppId>`, which doesn't use the `.exe` file or the directory path with the version number. This method ensures that the application location is always correct. +Applications in the Microsoft Store are updated frequently and often install automatically. The directory path for an application installed from the Microsoft Store includes the version number, which changes each time an application is updated. If an update happens automatically, the path changes and the application is no longer available to users. You can publish applications using the Windows `shell:appsFolder` location as the path in the format `shell:AppsFolder\<PackageFamilyName>!<AppId>`, which doesn't use the `.exe` file or the directory path with the version number. This method ensures that the application location is always correct. Using `shell:appsFolder` means the application icon isn't picked up automatically from the application. You should provide an icon file on a local drive on each session host in a path that doesn't change, unlike the application installation directory. |
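To assemble the `shell:AppsFolder\<PackageFamilyName>!<AppId>` string for a given app, a hedged PowerShell sketch; the Photos package name is only an illustration, and `Get-AppxPackage` must run as a user who has the app installed:

```powershell
# Look up the package family name and application ID for an installed Store app.
$pkg = Get-AppxPackage -Name Microsoft.Windows.Photos            # example package only
$appId = ($pkg | Get-AppxPackageManifest).Package.Applications.Application.Id
"shell:AppsFolder\$($pkg.PackageFamilyName)!$appId"              # the publishable path
```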
virtual-desktop | Teams On Avd | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/teams-on-avd.md | After installing the WebRTC Redirector Service and the Teams desktop app, follow ## Publish Teams as a RemoteApp -New Teams is installed as an `MSIX` package, which is a format used for applications from the Microsoft Store. The directory path for an application installed from the Microsoft Store includes the version number, which changes each time an application is updated. To publish new Teams as a RemoteApp, follow the steps in [Publish Microsoft Store applications](publish-applications-stream-remoteapp.md#publish-microsoft-store-applications). +New Teams is installed as an `MSIX` package, which is a format used for applications from the Microsoft Store. The directory path for an application installed from the Microsoft Store includes the version number, which changes each time an application is updated. To publish new Teams as a RemoteApp, follow the steps in [Publish Microsoft Store applications](publish-applications-stream-remoteapp.md#publish-microsoft-store-applications), and for the path enter `shell:appsFolder\MSTeams_8wekyb3d8bbwe!MSTeams`. ## Enable registry keys for optional features |
virtual-wan | Virtual Wan Site To Site Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/virtual-wan-site-to-site-portal.md | The device configuration file contains the settings to use when configuring your }, "gatewayConfiguration":{ "IpAddresses":{ - "Instance0":"104.45.18.186", - "Instance1":"104.45.13.195" + "Instance0":"203.0.113.186", + "Instance1":"203.0.113.195" } }, "connectionConfiguration":{ The device configuration file contains the settings to use when configuring your }, "vpnSiteConfiguration":{ "Name":" testsite2",- "IPAddress":"66.193.205.122" + "IPAddress":"198.51.100.122" }, "vpnSiteConnections":[ { The device configuration file contains the settings to use when configuring your }, "gatewayConfiguration":{ "IpAddresses":{ - "Instance0":"104.45.18.187", - "Instance1":"104.45.13.195" + "Instance0":"203.0.113.186", + "Instance1":"203.0.113.195" } }, "connectionConfiguration":{ The device configuration file contains the settings to use when configuring your }, "vpnSiteConfiguration":{ "Name":" testsite3",- "IPAddress":"182.71.123.228" + "IPAddress":"192.0.2.228" }, "vpnSiteConnections":[ { The device configuration file contains the settings to use when configuring your }, "gatewayConfiguration":{ "IpAddresses":{ - "Instance0":"104.45.18.187", - "Instance1":"104.45.13.195" + "Instance0":"203.0.113.186", + "Instance1":"203.0.113.195" } }, "connectionConfiguration":{ |
vpn-gateway | Vpn Gateway Troubleshoot Vpn Point To Site Connection Problems | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/vpn-gateway-troubleshoot-vpn-point-to-site-connection-problems.md | Title: 'Troubleshoot Azure point-to-site connection problems' + Title: Troubleshoot Azure point-to-site connection problems description: Learn to troubleshoot and solve common point-to-site connection problems and other virtual private network errors and issues. Previously updated : 04/03/2024 Last updated : 09/03/2024 # Troubleshooting: Azure point-to-site connection problems This problem might occur if the root certificate public key that you uploaded co ### Solution -Make sure that the data in the certificate doesn't contain invalid characters, such as line breaks (carriage returns). The entire value should be one long line. The following text is a sample of the certificate: --```text BEGIN CERTIFICATE---MIIC5zCCAc+gAwIBAgIQFSwsLuUrCIdHwI3hzJbdBjANBgkqhkiG9w0BAQsFADAW -MRQwEgYDVQQDDAtQMlNSb290Q2VydDAeFw0xNzA2MTUwMjU4NDZaFw0xODA2MTUw -MzE4NDZaMBYxFDASBgNVBAMMC1AyU1Jvb3RDZXJ0MIIBIjANBgkqhkiG9w0BAQEF -AAOCAQ8AMIIBCgKCAQEAz8QUCWCxxxTrxF5yc5uUpL/bzwC5zZ804ltB1NpPa/PI -sa5uwLw/YFb8XG/JCWxUJpUzS/kHUKFluqkY80U+fAmRmTEMq5wcaMhp3wRfeq+1 -G9OPBNTyqpnHe+i54QAnj1DjsHXXNL4AL1N8/TSzYTm7dkiq+EAIyRRMrZlYwije -407ChxIp0stB84MtMShhyoSm2hgl+3zfwuaGXoJQwWiXh715kMHVTSj9zFechYd7 -5OLltoRRDyyxsf0qweTFKIgFj13Hn/bq/UJG3AcyQNvlCv1HwQnXO+hckVBB29wE -sF8QSYk2MMGimPDYYt4ZM5tmYLxxxvGmrGhc+HWXzMeQIDAQABozEwLzAOBgNVHQ8B -Af8EBAMCAgQwHQYDVR0OBBYEFBE9zZWhQftVLBQNATC/LHLvMb0OMA0GCSqGSIb3 -DQEBCwUAA4IBAQB7k0ySFUQu72sfj3BdNxrXSyOT4L2rADLhxxxiK0U6gHUF6eWz -/0h6y4mNkg3NgLT3j/WclqzHXZruhWAXSF+VbAGkwcKA99xGWOcUJ+vKVYL/kDja -gaZrxHlhTYVVmwn4F7DWhteFqhzZ89/W9Mv6p180AimF96qDU8Ez8t860HQaFkU6 -2Nw9ZMsGkvLePZZi78yVBDCWMogBMhrRVXG/xQkBajgvL5syLwFBo2kWGdC+wyWY -U/Z+EK9UuHnn3Hkq/vXEzRVsYuaxchta0X2UNRzRq+o706l+iyLTpe6fnvW6ilOi -e8Jcej7mzunzyjz4chN0/WVF94MtxbUkLkqP END CERTIFICATE---``` +Make sure that the data in the certificate doesn't contain invalid characters, such as line breaks (carriage returns). The entire value should be one long line. The following example shows the area to copy within the certificate: ++ :::image type="content" source="./media/vpn-gateway-howto-point-to-site-resource-manager-portal/certificate.png" alt-text="Screenshot of data in the certificate." lightbox="./media/vpn-gateway-howto-point-to-site-resource-manager-portal/certificate-expand.png"::: ## Azure portal error: Failed to save the VPN gateway, and the resource name is invalid |