Updates from: 07/28/2022 01:10:26
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory Concept Continuous Access Evaluation Workload https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/concept-continuous-access-evaluation-workload.md
+
+ Title: Continuous access evaluation for workload identities in Azure AD
+description: Respond to changes to applications with continuous access evaluation for workload identities in Azure AD
+++++ Last updated : 07/22/2022++++++++
+# Continuous access evaluation for workload identities (preview)
+
+Continuous access evaluation (CAE) for [workload identities](../develop/workload-identities-overview.md) provides security benefits to your organization. It enables real-time enforcement of Conditional Access location and risk policies along with instant enforcement of token revocation events for workload identities.
+
+Continuous access evaluation doesn't currently support managed identities.
+
+## Scope of preview
+
+The public preview of continuous access evaluation for workload identities supports Microsoft Graph as a resource provider.
+
+The preview targets service principals for line of business (LOB) applications.
+
+We support the following revocation events:
+
+- Service principal disable
+- Service principal delete
+- High service principal risk as detected by Azure AD Identity Protection
+
+Continuous access evaluation for workload identities supports [Conditional Access policies that target location and risk](workload-identity.md#implementation).
+
+## Enable your application
+
+Developers can opt in to continuous access evaluation for workload identities when their API requests `xms_cc` as an optional claim. The `xms_cc` claim with a value of `cp1` in the access token is the authoritative way to identify that a client application is capable of handling a claims challenge. For more information about how to make this work in your application, see the article, [Claims challenges, claims requests, and client capabilities](../develop/claims-challenge.md).
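As a hedged illustration of the opt-in, a client can declare the `cp1` capability through the `claims` parameter on the token request (client credentials flow). The tenant and client IDs below are placeholders, and the guard keeps the request from running until you supply a real secret:

```shell
# Sketch of declaring the cp1 client capability (all IDs are placeholders).
TENANT_ID="<tenant-id>"
CLIENT_ID="<client-id>"
# Sent with the token request, this surfaces as xms_cc=cp1 in the access token.
CLAIMS='{"access_token":{"xms_cc":{"values":["cp1"]}}}'

# Only request a real token when a client secret is provided.
if [ -n "${CLIENT_SECRET:-}" ]; then
  curl -s "https://login.microsoftonline.com/${TENANT_ID}/oauth2/v2.0/token" \
    --data-urlencode "grant_type=client_credentials" \
    --data-urlencode "client_id=${CLIENT_ID}" \
    --data-urlencode "client_secret=${CLIENT_SECRET}" \
    --data-urlencode "scope=https://graph.microsoft.com/.default" \
    --data-urlencode "claims=${CLAIMS}"
fi
```

In practice most apps set this through their auth library's client-capabilities option rather than raw HTTP; the claims JSON shape is the same either way.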
+
+### Disable
+
+To opt out, don't send the `xms_cc` claim with a value of `cp1`.
+
+Organizations that have Azure AD Premium can create a [Conditional Access policy to disable continuous access evaluation](concept-conditional-access-session.md#customize-continuous-access-evaluation) that applies to specific workload identities as an immediate stop-gap measure.
+
+## Troubleshooting
+
+When a client's access to a resource is blocked due to CAE being triggered, the client's session will be revoked, and the client will need to reauthenticate. This behavior can be verified in the sign-in logs.
+
+The following steps detail how an admin can verify sign-in activity in the sign-in logs:
+
+1. Sign in to the Azure portal as a Conditional Access Administrator, Security Administrator, or Global Administrator.
+1. Browse to **Azure Active Directory** > **Sign-in logs** > **Service Principal Sign-ins**. You can use filters to ease the debugging process.
+1. Select an entry to see activity details. The **Continuous access evaluation** field indicates whether a CAE token was issued in a particular sign-in attempt.
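The same service principal sign-in events can also be pulled from Microsoft Graph on the command line. This is a sketch, not the documented path: the beta `auditLogs/signIns` endpoint and the `signInEventTypes` filter are assumptions to verify for your tenant, and the call needs `az login` plus audit-log read permission:

```shell
# Hedged sketch: list recent service principal sign-ins via Microsoft Graph.
# The beta endpoint and signInEventTypes filter are assumptions to confirm
# against the Graph docs for your tenant.
URL="https://graph.microsoft.com/beta/auditLogs/signIns?\$filter=signInEventTypes/any(t: t eq 'servicePrincipal')&\$top=10"

# Only call Graph when the Azure CLI is present and signed in.
if command -v az >/dev/null 2>&1 && az account show >/dev/null 2>&1; then
  az rest --method GET --url "$URL"
fi
```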
+
+## Next steps
+
+- [Register an application with Azure AD and create a service principal](../develop/howto-create-service-principal-portal.md#register-an-application-with-azure-ad-and-create-a-service-principal)
+- [How to use Continuous Access Evaluation enabled APIs in your applications](../develop/app-resilience-continuous-access-evaluation.md)
+- [Sample application using continuous access evaluation](https://github.com/Azure-Samples/ms-identity-dotnetcore-daemon-graph-cae)
+- [What is continuous access evaluation?](../conditional-access/concept-continuous-access-evaluation.md)
active-directory Security Operations Privileged Accounts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/security-operations-privileged-accounts.md
The log files you use for investigation and monitoring are:
* [Azure AD Audit logs](../reports-monitoring/concept-audit-logs.md) * [Microsoft 365 Audit logs](/microsoft-365/compliance/auditing-solutions-overview)
-* [Azure Key Vault insights](../../azure-monitor/insights/key-vault-insights-overview.md)
+* [Azure Key Vault insights](../../key-vault/key-vault-insights-overview.md)
From the Azure portal, you can view the Azure AD Audit logs and download as comma-separated value (CSV) or JavaScript Object Notation (JSON) files. The Azure portal has several ways to integrate Azure AD logs with other tools that allow for greater automation of monitoring and alerting:
active-directory Security Emergency Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/security-emergency-access.md
Some organizations use AD Domain Services and AD FS or similar identity provider
## Store account credentials safely
-Organizations need to ensure that the credentials for emergency access accounts are kept secure and known only to individuals who are authorized to use them. Some customers use a smartcard and others use passwords. A password for an emergency access account is usually separated into two or three parts, written on separate pieces of paper, and stored in secure, fireproof safes that are in secure, separate locations.
+Organizations need to ensure that the credentials for emergency access accounts are kept secure and known only to individuals who are authorized to use them. Some customers use a smartcard for Windows Server AD, a [FIDO2 security key](../authentication/howto-authentication-passwordless-security-key.md) for Azure AD, and others use passwords. A password for an emergency access account is usually separated into two or three parts, written on separate pieces of paper, and stored in secure, fireproof safes that are in secure, separate locations.
If you use passwords, make sure the accounts have strong passwords that don't expire. Ideally, the passwords should be at least 16 characters long and randomly generated.
aks Automated Deployments https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/automated-deployments.md
+
+ Title: Automated deployments for Azure Kubernetes Service (Preview)
+description: Learn how to use automated deployments to simplify the process of adding GitHub Actions to your Azure Kubernetes Service (AKS) project
++ Last updated : 7/21/2022+++
+# Automated Deployments for Azure Kubernetes Service (Preview)
+
+Automated deployments simplify the process of setting up a GitHub Action and creating an automated pipeline for your code releases to your Azure Kubernetes Service (AKS) cluster. Once connected, every new commit will kick off the pipeline, resulting in your application being updated.
++
+> [!NOTE]
+> This feature is not yet available in all regions.
+
+## Prerequisites
+
+* A GitHub account.
+* An AKS cluster.
+* An Azure Container Registry (ACR).
+
+## Deploy an application to your AKS cluster
+
+1. In the Azure portal, navigate to the resource group containing the AKS cluster you want to deploy the application to.
+
+1. Select your AKS cluster, and then select **Automated deployments (preview)** on the left blade. Select **Create an automated deployment**.
+
+ :::image type="content" source="media/automated-deployments/ad-homescreen.png" alt-text="The automated deployments screen in the Azure portal." lightbox="media/automated-deployments/ad-homescreen-expanded.png":::
+
+1. Name your workflow and select **Authorize** to connect your Azure account with your GitHub account. After your accounts are linked, choose which repository and branch you would like to create the GitHub Action for.
+
+ - **GitHub**: Authorize and select the repository for your GitHub account.
+
+ :::image type="content" source="media/automated-deployments/ad-ghactivate-repo.png" alt-text="The authorize and repository selection screen." lightbox="media/automated-deployments/ad-ghactivate-repo-expanded.png":::
+
+1. Pick your Dockerfile, your ACR, and your image.
+
+ :::image type="content" source="media/automated-deployments/ad-image.png" alt-text="The image selection screen." lightbox="media/automated-deployments/ad-image-expanded.png":::
+
+1. Determine whether you'll deploy with Helm or regular Kubernetes manifests. Once decided, pick the appropriate deployment files from your repository and decide which namespace you want to deploy into.
+
+ :::image type="content" source="media/automated-deployments/ad-deployment-details.png" alt-text="The deployment details screen." lightbox="media/automated-deployments/ad-deployment-details-expanded.png":::
+
+1. Review your deployment before creating the pull request.
+
+1. Select **View pull request** to see your GitHub Action.
+
+ :::image type="content" source="media/automated-deployments/ad-view-pr.png" alt-text="The final screen of the deployment process. The view pull request button is highlighted." lightbox="media/automated-deployments/ad-view-pr-expanded.png" :::
+
+1. Merge the pull request to kick off the GitHub Action and deploy your application.
+
+ :::image type="content" source="media/automated-deployments/ad-accept-pr.png" alt-text="The pull request page in GitHub. The merge pull request button is highlighted." lightbox="media/automated-deployments/ad-accept-pr-expanded.png" :::
+
+1. Once your application is deployed, go back to automated deployments to see your history.
+
+ :::image type="content" source="media/automated-deployments/ad-view-history.png" alt-text="The history screen in Azure portal, showing all the previous automated deployments." lightbox="media/automated-deployments/ad-view-history-expanded.png" :::
+
+## Clean up resources
+
+When you no longer need them, you can remove the related resources individually or delete the resource group to which they belong. To delete your automated deployment, navigate to the automated deployment dashboard, select **...**, then select **Delete** and confirm your action.
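For example, deleting the resource group from the CLI removes the cluster, the registry, and everything else in it; `myResourceGroup` below is a placeholder name:

```shell
# Sketch: delete the resource group and every resource it contains.
# Guarded so it only runs when the Azure CLI is present and signed in.
RESOURCE_GROUP="myResourceGroup"   # placeholder; substitute your group name
if command -v az >/dev/null 2>&1 && az account show >/dev/null 2>&1; then
  az group delete --name "$RESOURCE_GROUP" --yes --no-wait
fi
```

`--no-wait` returns immediately while the deletion continues in the background; drop it if you want the command to block until cleanup finishes.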
+
+## Next steps
+
+You can modify these GitHub Actions to meet the needs of your team by opening them up in an editor like Visual Studio Code and changing them as you see fit.
+
+Learn more about [GitHub Actions for Kubernetes][kubernetes-action].
+
+<!-- LINKS -->
+[kubernetes-action]: kubernetes-action.md
aks Quick Kubernetes Deploy Rm Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/learn/quick-kubernetes-deploy-rm-template.md
For more information about creating SSH keys, see [Create and manage SSH keys fo
The template used in this quickstart is from [Azure Quickstart templates](https://azure.microsoft.com/resources/templates/aks/). - For more AKS samples, see the [AKS quickstart templates][aks-quickstart-templates] site. ## Deploy the template
aks Limit Egress Traffic https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/limit-egress-traffic.md
You'll define the outbound type to use the UDR that already exists on the subnet
> [!NOTE] > AKS will create a system-assigned kubelet identity in the Node resource group if you do not [specify your own kubelet managed identity][Use a pre-created kubelet managed identity].
+>
+> For user-defined routing (UDR), a system-assigned identity supports only the CNI network plugin. With the kubenet network plugin, the AKS cluster needs permission on the route table, because the Kubernetes cloud provider manages the rules there.
-You can create an AKS cluster using a system-assigned managed identity by running the following CLI command.
+You can create an AKS cluster using a system-assigned managed identity with the CNI network plugin by running the following CLI command.
```azurecli az aks create -g $RG -n $AKSNAME -l $LOC \ --node-count 3 \
- --network-plugin $PLUGIN \
+ --network-plugin azure \
--outbound-type userDefinedRouting \ --vnet-subnet-id $SUBNETID \ --api-server-authorized-ip-ranges $FWPUBLIC_IP ```
-> [!NOTE]
-> For creating and using your own VNet and route table where the resources are outside of the worker node resource group, the CLI will add the role assignment automatically. If you are using an ARM template or other client, you need to use the Principal ID of the cluster managed identity to perform a [role assignment.][add role to identity]
->
-> If you are not using the CLI but using your own VNet or route table which are outside of the worker node resource group, it's recommended to use [user-assigned control plane identity][Create an AKS cluster with user-assigned identities]. For system-assigned control plane identity, we cannot get the identity ID before creating cluster, which causes delay for role assignment to take effect.
- #### Create an AKS cluster with user-assigned identities ##### Create user-assigned managed identities
The output should resemble the following:
} ```
+> [!NOTE]
+> For creating and using your own VNet and route table where the resources are outside of the worker node resource group, the CLI will add the role assignment automatically. If you are using an ARM template or other client, you need to use the Principal ID of the cluster managed identity to perform a [role assignment.][add role to identity]
+ ##### Create an AKS cluster with user-assigned identities Now you can use the following command to create your AKS cluster with your existing identities in the subnet. Provide the control plane identity resource ID via `assign-identity` and the kubelet managed identity via `assign-kubelet-identity`:
Now you can use the following command to create your AKS cluster with your exist
```azurecli az aks create -g $RG -n $AKSNAME -l $LOC \ --node-count 3 \
- --network-plugin $PLUGIN \
+ --network-plugin kubenet \
--outbound-type userDefinedRouting \ --vnet-subnet-id $SUBNETID \ --api-server-authorized-ip-ranges $FWPUBLIC_IP
az aks create -g $RG -n $AKSNAME -l $LOC \
--assign-kubelet-identity <kubelet-identity-resource-id> ```
-> [!NOTE]
-> For creating and using your own VNet and route table where the resources are outside of the worker node resource group, the CLI will add the role assignment automatically. If you are using an ARM template or other client, you need to use the Principal ID of the cluster managed identity to perform a [role assignment.][add role to identity]
### Enable developer access to the API server
app-service Tutorial Php Mysql App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-php-mysql-app.md
description: Learn how to get a PHP app working in Azure, with connection to a M
ms.assetid: 14feb4f3-5095-496e-9a40-690e1414bd73 ms.devlang: php Previously updated : 06/13/2022 Last updated : 07/22/2022
-zone_pivot_groups: app-service-platform-windows-linux
# Tutorial: Build a PHP and MySQL app in Azure App Service
+[Azure App Service](overview.md) provides a highly scalable, self-patching web hosting service using the Linux operating system. This tutorial shows how to create a secure PHP app in Azure App Service that's connected to a MySQL database (using Azure Database for MySQL flexible server). When you're finished, you'll have a [Laravel](https://laravel.com/) app running on Azure App Service on Linux.
-[Azure App Service](overview.md) provides a highly scalable, self-patching web hosting service using the Windows operating system. This tutorial shows how to create a PHP app in Azure and connect it to a MySQL database. When you're finished, you'll have a [Laravel](https://laravel.com/) app running on Azure App Service on Windows.
---
-[Azure App Service](overview.md) provides a highly scalable, self-patching web hosting service using the Linux operating system. This tutorial shows how to create a PHP app in Azure and connect it to a MySQL database. When you're finished, you'll have a [Laravel](https://laravel.com/) app running on Azure App Service on Linux.
-- In this tutorial, you learn how to: > [!div class="checklist"]
-> * Create a MySQL database in Azure
-> * Connect a PHP app to MySQL
-> * Deploy the app to Azure
-> * Update the data model and redeploy the app
+> * Create a secure-by-default PHP and MySQL app in Azure
+> * Configure connection secrets to MySQL using app settings
+> * Deploy application code using GitHub Actions
+> * Update and redeploy the app
+> * Run database migrations securely
> * Stream diagnostic logs from Azure > * Manage the app in the Azure portal [!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)]
-## Prerequisites
-
-To complete this tutorial:
--- [Install Git](https://git-scm.com/)-- [Install PHP 7.4](https://php.net/downloads.php)-- [Install Composer](https://getcomposer.org/doc/00-intro.md)-- [Install and start MySQL](https://dev.mysql.com/doc/refman/5.7/en/installing.html)-- <a href="/cli/azure/install-azure-cli" target="_blank">Azure CLI</a> to run commands in any shell to provision and configure Azure resources.-
-## Prepare local MySQL
+## Sample application
-In this step, you create a database in your local MySQL server for your use in this tutorial.
-
-### Connect to local MySQL server
-
-In a local terminal window, connect to your local MySQL server. You can use this terminal window to run all the commands in this tutorial.
+To follow along with this tutorial, clone or download the sample application from the repository:
```terminal
-mysql -u root -p
+git clone https://github.com/Azure-Samples/laravel-tasks.git
```
-If you're prompted for a password, enter the password for the `root` account. If you don't remember your root account password, see [MySQL: How to Reset the Root Password](https://dev.mysql.com/doc/refman/5.7/en/resetting-permissions.html).
-
-If your command runs successfully, then your MySQL server is running. If not, ensure that your local MySQL server is started by following the [MySQL post-installation steps](https://dev.mysql.com/doc/refman/5.7/en/postinstallation.html).
-
-### Create a database locally
-
-1. At the `mysql` prompt, create a database.
-
- ```sql
- CREATE DATABASE sampledb;
- ```
-
-1. Exit your server connection by typing `quit`.
-
- ```sql
- quit
- ```
-
-<a name="step2"></a>
+If you want to run the application locally, do the following:
-## Create a PHP app locally
-In this step, you get a Laravel sample application, configure its database connection, and run it locally.
-
-### Clone the sample
-
-1. `cd` to a working directory.
-
-1. Clone the sample repository and change to the repository root.
-
- ```terminal
- git clone https://github.com/Azure-Samples/laravel-tasks
- cd laravel-tasks
- ```
-
-1. Install the required packages.
+- In **.env**, configure the database settings (`DB_DATABASE`, `DB_USERNAME`, and `DB_PASSWORD`) with the values for your local MySQL database. You need a local MySQL server to run this sample.
+- From the root of the repository, start Laravel with the following commands:
```terminal composer install
- ```
-
-### Configure MySQL connection
-
-In the repository root, create a file named *.env*. Copy the following variables into the *.env* file. Replace the _&lt;root_password>_ placeholder with the MySQL root user's password.
-
-```txt
-APP_ENV=local
-APP_DEBUG=true
-APP_KEY=
-
-DB_CONNECTION=mysql
-DB_HOST=127.0.0.1
-DB_DATABASE=sampledb
-DB_USERNAME=root
-DB_PASSWORD=<root_password>
-```
-
-For information on how Laravel uses the _.env_ file, see [Laravel Environment Configuration](https://laravel.com/docs/8.x#environment-based-configuration).
-
-### Run the sample locally
-
-1. Run [Laravel database migrations](https://laravel.com/docs/8.x/migrations) to create the tables the application needs. To see which tables are created in the migrations, look in the _database/migrations_ directory in the Git repository.
-
- ```terminal
php artisan migrate
- ```
-
-1. Generate a new Laravel application key.
-
- ```terminal
php artisan key:generate
- ```
-
-1. Run the application.
-
- ```terminal
php artisan serve ```
-1. Go to `http://localhost:8000` in a browser. Add a few tasks in the page.
-
- ![PHP connects successfully to MySQL](./media/tutorial-php-mysql-app/mysql-connect-success.png)
-
-1. To stop PHP, enter `Ctrl + C` in the terminal.
-
-## Deploy Laravel sample to App Service
-
-### Deploy sample code
--
-1. In the root directory of the repository, add a file called *.deployment*. This file tells App Service to run a custom deployment script during build automation. Copy the following text into it as its content:
-
- ```
- [config]
- command = bash deploy.sh
- ```
-
- > [!NOTE]
- > The deployment process installs [Composer](https://getcomposer.org/) packages at the end. App Service on Windows does not run these automations during default deployment, so this sample repository has two additional files in its root directory to enable it:
- >
- > - `deploy.sh` - The custom deployment script. If you review the file, you see that it runs `php composer.phar install`.
- > - `composer.phar` - The Composer package manager.
- >
- > You can use this approach to add any step to your [Git-based](deploy-local-git.md) or [ZIP](deploy-zip.md) deployment to App Service. For more information, see [Custom Deployment Script](https://github.com/projectkudu/kudu/wiki/Custom-Deployment-Script).
- >
-
--
-1. From the command line, sign in to Azure using the [`az login`](/cli/azure#az_login) command.
-
- ```azurecli
- az login
- ```
-
-1. Deploy the code in your local folder using the [`az webapp up`](/cli/azure/webapp#az_webapp_up) command. Replace *\<app-name>* with a unique name for your app.
+## 1 - Create App Service and MySQL resources
- ::: zone pivot="platform-windows"
-
- ```azurecli
- az webapp up --resource-group myResourceGroup --name <app-name> --location "West Europe" --sku FREE --runtime "php|7.4" --os-type=windows
- ```
-
- ::: zone-end
-
- ::: zone pivot="platform-linux"
-
- ```azurecli
- az webapp up --resource-group myResourceGroup --name <app-name> --location "West Europe" --sku FREE --runtime "php|7.4" --os-type=linux
- ```
-
- ::: zone-end
-
- [!include [az webapp up command note](../../includes/app-service-web-az-webapp-up-note.md)]
--
-### Configure Laravel environment variables
-
-Laravel needs an application key in App Service. You can configure it with app settings.
-
-1. Use `php artisan` to generate a new application key without saving it to _.env_.
-
- ```terminal
- php artisan key:generate --show
- ```
-
-1. Set the application key in the App Service app by using the [`az webapp config appsettings set`](/cli/azure/webapp/config/appsettings#az-webapp-config-appsettings-set) command. Replace the placeholders _&lt;app-name>_ and _&lt;outputofphpartisankey:generate>_.
-
- ```azurecli-interactive
- az webapp config appsettings set --settings APP_KEY="<output_of_php_artisan_key:generate>" APP_DEBUG="true"
- ```
-
- `APP_DEBUG="true"` tells Laravel to return debugging information when the deployed app encounters errors. When running a production application, set it to `false`, which is more secure.
-
-### Set the virtual application path
--
-Set the virtual application path for the app. This step is required because the [Laravel application lifecycle](https://laravel.com/docs/8.x/lifecycle#lifecycle-overview) begins in the _public_ directory instead of the application's root directory. Other PHP frameworks whose lifecycle start in the root directory can work without manual configuration of the virtual application path.
-
-Set the virtual application path by using the [`az resource update`](/cli/azure/resource#az-resource-update) command. Replace the _&lt;app-name>_ placeholder.
-
-```azurecli-interactive
-az resource update --name web --resource-group myResourceGroup --namespace Microsoft.Web --resource-type config --parent sites/<app_name> --set properties.virtualApplications[0].physicalPath="site\wwwroot\public" --api-version 2015-06-01
-```
-
-By default, Azure App Service points the root virtual application path (_/_) to the root directory of the deployed application files (_sites\wwwroot_).
---
-[Laravel application lifecycle](https://laravel.com/docs/8.x/lifecycle#lifecycle-overview) begins in the _public_ directory instead of the application's root directory. The default PHP Docker image for App Service uses Apache, and it doesn't let you customize the `DocumentRoot` for Laravel. But you can use `.htaccess` to rewrite all requests to point to _/public_ instead of the root directory. In the repository root, an `.htaccess` is added already for this purpose. With it, your Laravel application is ready to be deployed.
-
-For more information, see [Change site root](configure-language-php.md#change-site-root).
--
-If you browse to `https://<app-name>.azurewebsites.net` now and see a `Whoops, looks like something went wrong` message, then you have configured your App Service app properly and it's running in Azure. It just doesn't have database connectivity yet. In the next step, you create a MySQL database in [Azure Database for MySQL](../mysql/index.yml).
-
-## Create MySQL in Azure
-
-1. Create a MySQL server in Azure with the [`az mysql server create`](/cli/azure/mysql/server#az-mysql-server-create) command.
-
- In the following command, substitute a unique server name for the *\<mysql-server-name>* placeholder, a user name for the *\<admin-user>*, and a password for the *\<admin-password>* placeholder. The server name is used as part of your MySQL endpoint (`https://<mysql-server-name>.mysql.database.azure.com`), so the name needs to be unique across all servers in Azure. For details on selecting MySQL DB SKU, see [Create an Azure Database for MySQL server](../mysql/quickstart-create-mysql-server-database-using-azure-cli.md#create-an-azure-database-for-mysql-server).
+In this step, you create the Azure resources. The steps used in this tutorial create an App Service and Azure Database for MySQL configuration that's secure by default. For the creation process, you'll specify:
- ```azurecli-interactive
- az mysql server create --resource-group myResourceGroup --name <mysql-server-name> --location "West Europe" --admin-user <admin-user> --admin-password <admin-password> --sku-name B_Gen5_1
- ```
-
-1. Create a database called `sampledb` by using the [`az mysql db create`](/cli/azure/mysql/db#az-mysql-db-create) command.
-
- ```azurecli-interactive
- az mysql db create --resource-group myResourceGroup --server-name <mysql-server-name> --name sampledb
- ```
-
-## Connect the app to the database
-
-Configure the connection between your app and the SQL database by using the [az webapp connection create mysql](/cli/azure/webapp/connection/create#az-webapp-connection-create-mysql) command. `--target-resource-group` is the resource group that contains the MySQL database.
--
-```azurecli-interactive
-az webapp connection create mysql --resource-group myResourceGroup --name <app-name> --target-resource-group myResourceGroup --server <mysql-server-name> --database sampledb --connection my_laravel_db --client-type php
-```
---
-```azurecli-interactive
-az webapp connection create mysql --resource-group myResourceGroup --name <app-name> --target-resource-group myResourceGroup --server <mysql-server-name> --database sampledb --connection my_laravel_db
-```
--
-When prompted, provide the administrator username and password for the MySQL database.
+* The **Name** for the web app. It's the name used as part of the DNS name for your web app, in the form `https://<app-name>.azurewebsites.net`.
+* The **Runtime** for the app. It's where you select the version of PHP to use for your app.
+* The **Resource Group** for the app. A resource group lets you group (in a logical container) all the Azure resources needed for the application.
-> [!NOTE]
-> The CLI command does everything the app needs to successfully connect to the database, including:
->
-> - In your App Service app, adds [six app settings](../service-connector/how-to-integrate-mysql.md#php-mysqli-secret--connection-string) with the names `AZURE_MYSQL_<setting>`, which your code can use for its database connection. If the app setting names are already in use, the `AZURE_MYSQL_<connection-name>_<setting>` format is used instead.
-> - In your MySQL database server, allows Azure services to access the MySQL database server.
+Sign in to the [Azure portal](https://portal.azure.com/) and follow these steps to create your Azure App Service resources.
-## Generate the database schema
+| Instructions | Screenshot |
+|:-|--:|
+| [!INCLUDE [Create app service step 1](./includes/tutorial-php-mysql-app/azure-portal-create-app-mysql-1.md)] | :::image type="content" source="./media/tutorial-php-mysql-app/azure-portal-create-app-mysql-1-240px.png" alt-text="A screenshot showing how to use the search box in the top tool bar to find the Web App + Database creation wizard." lightbox="./media/tutorial-php-mysql-app/azure-portal-create-app-mysql-1.png"::: |
+| [!INCLUDE [Create app service step 2](./includes/tutorial-php-mysql-app/azure-portal-create-app-mysql-2.md)] | :::image type="content" source="./media/tutorial-php-mysql-app/azure-portal-create-app-mysql-2-240px.png" alt-text="A screenshot showing how to configure a new app and database in the Web App + Database wizard." lightbox="./media/tutorial-php-mysql-app/azure-portal-create-app-mysql-2.png"::: |
+| [!INCLUDE [Create app service step 3](./includes/tutorial-php-mysql-app/azure-portal-create-app-mysql-3.md)] | :::image type="content" source="./media/tutorial-php-mysql-app/azure-portal-create-app-mysql-3-240px.png" alt-text="A screenshot showing the form to fill out to create a web app in Azure." lightbox="./media/tutorial-php-mysql-app/azure-portal-create-app-mysql-3.png"::: |
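If you'd rather script this step, the CLI flow from an earlier version of this tutorial can approximate the web app portion of the wizard with `az webapp up`. Note this sketch creates only the App Service app, not the MySQL flexible server, and every name here is a placeholder:

```shell
# Hedged sketch: create the App Service app from the CLI (web app only;
# the database still comes from the portal's Web App + Database wizard).
RESOURCE_GROUP="myResourceGroup"   # placeholder
APP_NAME="<app-name>"              # placeholder; must be globally unique
if command -v az >/dev/null 2>&1 && az account show >/dev/null 2>&1; then
  az webapp up --resource-group "$RESOURCE_GROUP" --name "$APP_NAME" \
    --location "West Europe" --sku FREE --runtime "php|7.4" --os-type=linux
fi
```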
-<a name="devconfig"></a>
+## 2 - Set up database connectivity
-1. Allow access to the Azure database from your local computer by using the [az mysql server firewall-rule create](/cli/azure/mysql/server/firewall-rule#az-mysql-server-firewall-rule-create) and replacing *\<your-ip-address>* with [your local IPv4 IP address](https://www.whatsmyip.org/).
+The creation wizard generated a connection string to the database for you, but not in a format that's usable for your code yet. In this step, you create [app settings](configure-common.md#configure-app-settings) with the format that your app needs.
- ```azurecli-interactive
- az mysql server firewall-rule create --name AllowLocalClient --server <mysql-server-name> --resource-group myResourceGroup --start-ip-address=<your-ip-address> --end-ip-address=<your-ip-address>
- ```
+| Instructions | Screenshot |
+|:-|--:|
+| [!INCLUDE [Get connection string step 1](./includes/tutorial-php-mysql-app/azure-portal-get-connection-string-1.md)] | :::image type="content" source="./media/tutorial-php-mysql-app/azure-portal-get-connection-string-1-240px.png" alt-text="A screenshot showing how to open the configuration page in App Service." lightbox="./media/tutorial-php-mysql-app/azure-portal-get-connection-string-1.png"::: |
+| [!INCLUDE [Get connection string step 2](./includes/tutorial-php-mysql-app/azure-portal-get-connection-string-2.md)] | :::image type="content" source="./media/tutorial-php-mysql-app/azure-portal-get-connection-string-2-240px.png" alt-text="A screenshot showing how to see the autogenerated connection string." lightbox="./media/tutorial-php-mysql-app/azure-portal-get-connection-string-2.png"::: |
+| [!INCLUDE [Get connection string step 3](./includes/tutorial-php-mysql-app/azure-portal-get-connection-string-3.md)] | :::image type="content" source="./media/tutorial-php-mysql-app/azure-portal-get-connection-string-3-240px.png" alt-text="A screenshot showing how to create an app setting." lightbox="./media/tutorial-php-mysql-app/azure-portal-get-connection-string-3.png"::: |
+| [!INCLUDE [Get connection string step 4](./includes/tutorial-php-mysql-app/azure-portal-get-connection-string-4.md)] | :::image type="content" source="./media/tutorial-php-mysql-app/azure-portal-get-connection-string-4-240px.png" alt-text="A screenshot showing all the required app settings in the configuration page." lightbox="./media/tutorial-php-mysql-app/azure-portal-get-connection-string-4.png"::: |
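As an alternative to clicking through the portal, the same app settings can be created with `az webapp config appsettings set`, which this tutorial series uses elsewhere. The setting names mirror the Laravel `.env` keys, and all values below are placeholders:

```shell
# Hedged sketch: set the database connection app settings from the CLI.
RESOURCE_GROUP="myResourceGroup"                    # placeholder
APP_NAME="<app-name>"                               # placeholder
DB_HOST="<server-name>.mysql.database.azure.com"    # placeholder host
if command -v az >/dev/null 2>&1 && az account show >/dev/null 2>&1; then
  az webapp config appsettings set \
    --resource-group "$RESOURCE_GROUP" --name "$APP_NAME" \
    --settings DB_HOST="$DB_HOST" DB_DATABASE=sampledb \
               DB_USERNAME="<admin-user>" DB_PASSWORD="<admin-password>"
fi
```

For production, consider storing the password in Key Vault rather than a plain app setting.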
- > [!TIP]
- > Once the firewall rule for your local computer is enabled, you can connect to the server like any MySQL server with the `mysql` client. For example:
- > ```terminal
- > mysql -u <admin-user>@<mysql-server-name> -h <mysql-server-name>.mysql.database.azure.com -P 3306 -p
- > ```
+## 3 - Deploy sample code
-1. Generate the environment variables from the [service connector you created earlier](#connect-the-app-to-the-database) by running the [`az webapp connection list-configuration`](/cli/azure/webapp/connection/create#az-webapp-connection-create-mysql) command.
+In this step, you'll configure GitHub deployment using GitHub Actions. It's just one of many ways to deploy to App Service, but also a great way to have continuous integration in your deployment process. By default, every `git push` to your GitHub repository will kick off the build and deploy action. You'll make some changes to your codebase with Visual Studio Code directly in the browser, then let GitHub Actions deploy automatically for you.
- ::: zone pivot="platform-windows"
+| Instructions | Screenshot |
+|:-|--:|
+| [!INCLUDE [Deploy sample code step 1](./includes/tutorial-php-mysql-app/azure-portal-deploy-sample-code-1.md)] | :::image type="content" source="./media/tutorial-php-mysql-app/azure-portal-deploy-sample-code-1-240px.png" alt-text="A screenshot showing how to create a fork of the sample GitHub repository." lightbox="./media/tutorial-php-mysql-app/azure-portal-deploy-sample-code-1.png"::: |
+| [!INCLUDE [Deploy sample code step 2](./includes/tutorial-php-mysql-app/azure-portal-deploy-sample-code-2.md)] | :::image type="content" source="./media/tutorial-php-mysql-app/azure-portal-deploy-sample-code-2-240px.png" alt-text="A screenshot showing how to open the Visual Studio Code browser experience in GitHub." lightbox="./media/tutorial-php-mysql-app/azure-portal-deploy-sample-code-2.png"::: |
+| [!INCLUDE [Deploy sample code step 3](./includes/tutorial-php-mysql-app/azure-portal-deploy-sample-code-3.md)] | :::image type="content" source="./media/tutorial-php-mysql-app/azure-portal-deploy-sample-code-3-240px.png" alt-text="A screenshot showing Visual Studio Code in the browser and an opened file." lightbox="./media/tutorial-php-mysql-app/azure-portal-deploy-sample-code-3.png"::: |
+| [!INCLUDE [Deploy sample code step 4](./includes/tutorial-php-mysql-app/azure-portal-deploy-sample-code-4.md)] | :::image type="content" source="./media/tutorial-php-mysql-app/azure-portal-deploy-sample-code-4-240px.png" alt-text="A screenshot showing how to open the deployment center in App Service." lightbox="./media/tutorial-php-mysql-app/azure-portal-deploy-sample-code-4.png"::: |
+| [!INCLUDE [Deploy sample code step 5](./includes/tutorial-php-mysql-app/azure-portal-deploy-sample-code-5.md)] | :::image type="content" source="./media/tutorial-php-mysql-app/azure-portal-deploy-sample-code-5-240px.png" alt-text="A screenshot showing how to configure CI/CD using GitHub Actions." lightbox="./media/tutorial-php-mysql-app/azure-portal-deploy-sample-code-5.png"::: |
+| [!INCLUDE [Deploy sample code step 6](./includes/tutorial-php-mysql-app/azure-portal-deploy-sample-code-6.md)] | :::image type="content" source="./media/tutorial-php-mysql-app/azure-portal-deploy-sample-code-6-240px.png" alt-text="A screenshot showing how to open deployment logs in the deployment center." lightbox="./media/tutorial-php-mysql-app/azure-portal-deploy-sample-code-6.png"::: |
+| [!INCLUDE [Deploy sample code step 7](./includes/tutorial-php-mysql-app/azure-portal-deploy-sample-code-7.md)] | :::image type="content" source="./media/tutorial-php-mysql-app/azure-portal-deploy-sample-code-7-240px.png" alt-text="A screenshot showing how to commit your changes in the Visual Studio Code browser experience." lightbox="./media/tutorial-php-mysql-app/azure-portal-deploy-sample-code-7.png"::: |
- ```powershell
- $Settings = az webapp connection list-configuration --resource-group myResourceGroup --name <app-name> --connection my_laravel_db --query configurations | ConvertFrom-Json
- foreach ($s in $Settings) { New-Item -Path Env:$($s.name) -Value $s.value}
- ```
-
- > [!TIP]
- > These commands are equivalent to setting the database variables manually like this:
- >
- > ```powershell
- > New-Item -Path Env:AZURE_MYSQL_DBNAME -Value ...
- > New-Item -Path Env:AZURE_MYSQL_HOST -Value ...
- > New-Item -Path Env:AZURE_MYSQL_PORT -Value ...
- > New-Item -Path Env:AZURE_MYSQL_FLAG -Value ...
- > New-Item -Path Env:AZURE_MYSQL_USERNAME -Value ...
- > New-Item -Path Env:AZURE_MYSQL_PASSWORD -Value ...
- > ```
- ::: zone-end
-
- ::: zone pivot="platform-linux"
-
- ```bash
- export $(az webapp connection list-configuration --resource-group myResourceGroup --name <app-name> --connection my_laravel_db --query "configurations[].[name,value] | [*].join('=',@)" --output tsv)
- ```
+## 4 - Generate database schema
- > [!TIP]
- > The [JMESPath query](https://jmespath.org/) in `--query` and the `--output tsv` formatting let you feed the output directly into the `export` command. It's equivalent to setting the database variables manually like this:
- >
- > ```powershell
- > export AZURE_MYSQL_DBNAME=...
- > export AZURE_MYSQL_HOST=...
- > export AZURE_MYSQL_PORT=...
- > export AZURE_MYSQL_FLAG=...
- > export AZURE_MYSQL_USERNAME=...
- > export AZURE_MYSQL_PASSWORD=...
- > ```
-
- <!-- export $(az webapp connection list-configuration -g myResourceGroup -n <app-name> --connection my-laravel-db | jq -r '.configurations[] | "\(.name)=\(.value)"')
- -->
- ::: zone-end
-
-1. Open _config/database.php_ and find the `mysql` section. It's already set up to retrieve connection secrets from environment variables.
-
- ```php
- 'mysql' => [
- 'driver' => 'mysql',
- 'url' => env('DATABASE_URL'),
- 'host' => env('DB_HOST', '127.0.0.1'),
- 'port' => env('DB_PORT', '3306'),
- 'database' => env('DB_DATABASE', 'forge'),
- 'username' => env('DB_USERNAME', 'forge'),
- 'password' => env('DB_PASSWORD', ''),
- ...
- ],
- ```
+The creation wizard puts the MySQL database server behind a private endpoint, so it's accessible only from the virtual network. Because the App Service app is already integrated with the virtual network, the easiest way to run database migrations against your database is directly from within the App Service container.
- Change the default environment variables to the ones that the service connector created:
-
- ```php
- 'mysql' => [
- 'driver' => 'mysql',
- 'url' => env('DATABASE_URL'),
- 'host' => env('AZURE_MYSQL_HOST', '127.0.0.1'),
- 'port' => env('AZURE_MYSQL_PORT', '3306'),
- 'database' => env('AZURE_MYSQL_DBNAME', 'forge'),
- 'username' => env('AZURE_MYSQL_USERNAME', 'forge'),
- 'password' => env('AZURE_MYSQL_PASSWORD', ''),
- ...
- ],
- ```
+| Instructions | Screenshot |
+|:-|--:|
+| [!INCLUDE [Generate database schema step 1](./includes/tutorial-php-mysql-app/azure-portal-generate-db-schema-1.md)] | :::image type="content" source="./media/tutorial-php-mysql-app/azure-portal-generate-db-schema-1-240px.png" alt-text="A screenshot showing how to open the SSH shell for your app from the Azure portal." lightbox="./media/tutorial-php-mysql-app/azure-portal-generate-db-schema-1.png"::: |
+| [!INCLUDE [Generate database schema step 2](./includes/tutorial-php-mysql-app/azure-portal-generate-db-schema-2.md)] | :::image type="content" source="./media/tutorial-php-mysql-app/azure-portal-generate-db-schema-2-240px.png" alt-text="A screenshot showing the commands to run in the SSH shell and their output." lightbox="./media/tutorial-php-mysql-app/azure-portal-generate-db-schema-2.png"::: |
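The SSH steps above reduce to a short command sequence. This sketch assumes the sample's code is deployed to the default */home/site/wwwroot* path and shows the commands as a string so you can review them before running them inside the App Service SSH session.

```shell
# Commands to run inside the App Service SSH session, assembled
# here for review. --force skips artisan's interactive confirmation
# that appears when the app runs in production mode.
APP_ROOT="/home/site/wwwroot"
MIGRATE_CMD="cd $APP_ROOT && php artisan migrate --force"

echo "$MIGRATE_CMD"
```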
- > [!TIP]
- > PHP uses the [getenv](https://www.php.net/manual/en/function.getenv.php) method to access the settings. the Laravel code uses an [env](https://laravel.com/docs/8.x/helpers#method-env) wrapper over the PHP `getenv`.
+## 5 - Change site root
-1. By default, Azure Database for MySQL enforces TLS connections from clients. To connect to your MySQL database in Azure, you must use the [_.pem_ certificate supplied by Azure Database for MySQL](../mysql/single-server/how-to-configure-ssl.md). The certificate `BaltimoreCyberTrustRoot.crt.pem` is provided in the sample repository for convenience in this tutorial. At the bottom of the `mysql` section in _config/database.php_, change the `options` parameter to the following code:
+The [Laravel application lifecycle](https://laravel.com/docs/8.x/lifecycle#lifecycle-overview) begins in the **/public** directory, but the default PHP 8.0 container for App Service uses Nginx, which serves the app from the application's root directory. To change the site root, you need to change the Nginx configuration file in the PHP 8.0 container (*/etc/nginx/sites-available/default*). For your convenience, the sample repository contains a custom configuration file called *default*. As noted previously, you don't want to replace this file using the SSH shell, because your changes will be lost after an app restart.
- ```php
- 'options' => extension_loaded('pdo_mysql') ? array_filter([
- PDO::MYSQL_ATTR_SSL_KEY => '/ssl/BaltimoreCyberTrustRoot.crt.pem',
- ]) : [],
- ```
+| Instructions | Screenshot |
+|:-|--:|
+| [!INCLUDE [Change site root step 1](./includes/tutorial-php-mysql-app/azure-portal-change-site-root-1.md)] | :::image type="content" source="./media/tutorial-php-mysql-app/azure-portal-change-site-root-1-240px.png" alt-text="A screenshot showing how to open the general settings tab in the configuration page of App Service." lightbox="./media/tutorial-php-mysql-app/azure-portal-change-site-root-1.png"::: |
+| [!INCLUDE [Change site root step 2](./includes/tutorial-php-mysql-app/azure-portal-change-site-root-2.md)] | :::image type="content" source="./media/tutorial-php-mysql-app/azure-portal-change-site-root-2-240px.png" alt-text="A screenshot showing how to configure a startup command in App Service." lightbox="./media/tutorial-php-mysql-app/azure-portal-change-site-root-2.png"::: |
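The startup command you set in the portal has to replace the Nginx site configuration and reload Nginx on every start. A sketch of what that command might look like, assuming the custom *default* file sits at the root of the deployed app; it's shown here as a string for review rather than executed.

```shell
# Candidate value for the App Service startup command
# (Configuration > General settings). The paths assume the default
# PHP 8.0 Nginx container layout described above.
STARTUP_CMD="cp /home/site/wwwroot/default /etc/nginx/sites-available/default && service nginx reload"

echo "$STARTUP_CMD"
```

Because the command runs at startup, the copy is reapplied after every app restart, which is why it survives restarts while a one-off edit in the SSH shell doesn't.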
-1. Your sample app is now configured to connect to the Azure MySQL database. Run Laravel database migrations again to create the tables and run the sample app.
+## 6 - Browse to the app
- ```bash
- php artisan migrate
- php artisan serve
- ```
+| Instructions | Screenshot |
+|:-|--:|
+| [!INCLUDE [Browse to app step 1](./includes/tutorial-php-mysql-app/azure-portal-browse-app-1.md)] | :::image type="content" source="./media/tutorial-php-mysql-app/azure-portal-browse-app-1-240px.png" alt-text="A screenshot showing how to launch an App Service from the Azure portal." lightbox="./media/tutorial-php-mysql-app/azure-portal-browse-app-1.png"::: |
+| [!INCLUDE [Browse to app step 2](./includes/tutorial-php-mysql-app/azure-portal-browse-app-2.md)] | :::image type="content" source="./media/tutorial-php-mysql-app/azure-portal-browse-app-2-240px.png" alt-text="A screenshot of the Laravel app running in App Service." lightbox="./media/tutorial-php-mysql-app/azure-portal-browse-app-2.png"::: |
-1. Go to `http://localhost:8000`. If the page loads without errors, the PHP application is connecting to the MySQL database in Azure.
+## 7 - Stream diagnostic logs
-1. Add a few tasks in the page.
+| Instructions | Screenshot |
+|:-|--:|
+| [!INCLUDE [Stream diagnostic logs step 1](./includes/tutorial-php-mysql-app/azure-portal-stream-diagnostic-logs-1.md)] | :::image type="content" source="./media/tutorial-php-mysql-app/azure-portal-stream-diagnostic-logs-1-240px.png" alt-text="A screenshot showing how to enable native logs in App Service in the Azure portal." lightbox="./media/tutorial-php-mysql-app/azure-portal-stream-diagnostic-logs-1.png"::: |
+| [!INCLUDE [Stream diagnostic logs step 2](./includes/tutorial-php-mysql-app/azure-portal-stream-diagnostic-logs-2.md)] | :::image type="content" source="./media/tutorial-php-mysql-app/azure-portal-stream-diagnostic-logs-2-240px.png" alt-text="A screenshot showing how to view the log stream in the Azure portal." lightbox="./media/tutorial-php-mysql-app/azure-portal-stream-diagnostic-logs-2.png"::: |
- ![PHP connects successfully to Azure Database for MySQL](./media/tutorial-php-mysql-app/mysql-connect-success.png)
+## Clean up resources
-1. To stop PHP, enter `Ctrl + C` in the terminal.
+When you're finished, you can delete all of the resources from your Azure subscription by deleting the resource group.
-## Deploy changes to Azure
+| Instructions | Screenshot |
+|:-|--:|
+| [!INCLUDE [Remove resource group Azure portal 1](./includes/tutorial-php-mysql-app/azure-portal-clean-up-resources-1.md)] | :::image type="content" source="./media/tutorial-php-mysql-app/azure-portal-clean-up-resources-1-240px.png" alt-text="A screenshot showing how to search for and navigate to a resource group in the Azure portal." lightbox="./media/tutorial-php-mysql-app/azure-portal-clean-up-resources-1.png"::: |
+| [!INCLUDE [Remove resource group Azure portal 2](./includes/tutorial-php-mysql-app/azure-portal-clean-up-resources-2.md)] | :::image type="content" source="./media/tutorial-php-mysql-app/azure-portal-clean-up-resources-2-240px.png" alt-text="A screenshot showing the location of the Delete Resource Group button in the Azure portal." lightbox="./media/tutorial-php-mysql-app/azure-portal-clean-up-resources-2.png"::: |
+| [!INCLUDE [Remove resource group Azure portal 3](./includes/tutorial-php-mysql-app/azure-portal-clean-up-resources-3.md)] | :::image type="content" source="./media/tutorial-php-mysql-app/azure-portal-clean-up-resources-3-240px.png" alt-text="A screenshot of the confirmation dialog for deleting a resource group in the Azure portal." lightbox="./media/tutorial-php-mysql-app/azure-portal-clean-up-resources-3.png"::: |
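If you prefer the CLI, the same clean-up is a single `az group delete` command. A sketch, assuming the hypothetical resource group name used elsewhere in this tutorial; the command is echoed for review rather than executed.

```shell
# Hypothetical resource group name; replace with yours.
RESOURCE_GROUP="myResourceGroup"

# --yes skips the confirmation prompt; --no-wait returns immediately
# instead of blocking until the deletion finishes.
DELETE_CMD="az group delete --name $RESOURCE_GROUP --yes --no-wait"

echo "$DELETE_CMD"
```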
-1. Deploy your code changes by running `az webapp up` again.
+## Frequently asked questions
- ::: zone pivot="platform-windows"
-
- ```azurecli
- az webapp up --os-type=windows
- ```
-
- ::: zone-end
-
- ::: zone pivot="platform-linux"
-
- ```azurecli
- az webapp up --runtime "php|7.4" --os-type=linux
- ```
-
- > [!NOTE]
- > `--runtime` is still needed for deployment with `az webapp up`. Otherwise, the runtime is detected to be Node.js due to the presence of *package.json*.
+- [How much does this setup cost?](#how-much-does-this-setup-cost)
+- [How do I connect to the MySQL database that's secured behind the virtual network with other tools?](#how-do-i-connect-to-the-mysql-database-thats-secured-behind-the-virtual-network-with-other-tools)
+- [How does local app development work with GitHub Actions?](#how-does-local-app-development-work-with-github-actions)
+- [Why is the GitHub Actions deployment so slow?](#why-is-the-github-actions-deployment-so-slow)
- ::: zone-end
+#### How much does this setup cost?
-1. Browse to `http://<app-name>.azurewebsites.net` and add a few tasks to the list.
+Pricing for the created resources is as follows:
- :::image type="content" source="./media/tutorial-php-mysql-app/php-mysql-in-azure.png" alt-text="Screenshot of the Azure app example titled Task List showing new tasks added.":::
+- The App Service plan is created in **Premium V2** tier and can be scaled up or down. See [App Service pricing](https://azure.microsoft.com/pricing/details/app-service/linux/).
+- The MySQL flexible server is created in **B1ms** tier and can be scaled up or down. With an Azure free account, **B1ms** tier is free for 12 months, up to the monthly limits. See [Azure Database for MySQL pricing](https://azure.microsoft.com/pricing/details/mysql/flexible-server/).
+- The virtual network doesn't incur a charge unless you configure extra functionality, such as peering. See [Azure Virtual Network pricing](https://azure.microsoft.com/pricing/details/virtual-network/).
+- The private DNS zone incurs a small charge. See [Azure DNS pricing](https://azure.microsoft.com/pricing/details/dns/).
-Congratulations, you're running a data-driven PHP app in Azure App Service.
+#### How do I connect to the MySQL database that's secured behind the virtual network with other tools?
-## Stream diagnostic logs
+- For basic access from a command-line tool, you can run `mysql` from the app's SSH terminal.
+- To connect from a desktop tool like MySQL Workbench, your machine must be within the virtual network. For example, it could be an Azure VM that's connected to one of the subnets, or a machine in an on-premises network that has a [site-to-site VPN](../vpn-gateway/vpn-gateway-about-vpngateways.md) connection with the Azure virtual network.
+- You can also [integrate Azure Cloud Shell](../cloud-shell/private-vnet.md) with the virtual network.
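For the first option, the `mysql` invocation from the app's SSH terminal might look like the following sketch. It assumes the connection settings are exposed as environment variables named `DB_HOST`, `DB_USERNAME`, and `DB_DATABASE` (adjust these to whatever app setting names you created); the command is built as a string for review, since the variables are only defined inside the SSH session.

```shell
# Assembled for review; run the echoed command inside the app's
# SSH session, where the DB_* variables are actually defined.
# --password with no value makes the mysql client prompt for it.
MYSQL_CMD='mysql --host="$DB_HOST" --user="$DB_USERNAME" --password "$DB_DATABASE"'

echo "$MYSQL_CMD"
```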
-While the PHP application runs in Azure App Service, you can get the console logs piped to your terminal. That way, you can get the same diagnostic messages to help you debug application errors.
+#### How does local app development work with GitHub Actions?
-To start log streaming, use the [`az webapp log tail`](/cli/azure/webapp/log#az-webapp-log-tail) command.
+Using the autogenerated workflow file from App Service as an example, each `git push` kicks off a new build and deployment run. From a local clone of the GitHub repository, you make the desired updates and push them to GitHub. For example:
-```azurecli-interactive
-az webapp log tail
+```terminal
+git add .
+git commit -m "<some-message>"
+git push origin main
```
-Once log streaming has started, refresh the Azure app in the browser to get some web traffic. You can now see console logs piped to the terminal. If you don't see console logs immediately, check again in 30 seconds.
-
-To stop log streaming at any time, enter `Ctrl`+`C`.
--
-> [!NOTE]
-> You can also inspect the log files from the browser at `https://<app-name>.scm.azurewebsites.net/api/logs/docker`.
+#### Why is the GitHub Actions deployment so slow?
+The autogenerated workflow file from App Service defines a two-job, build-then-deploy run. Because each job runs in its own clean environment, the workflow file ensures that the `deploy` job has access to the files from the `build` job:
-> [!TIP]
-> A PHP application can use the standard [error_log()](https://php.net/manual/function.error-log.php) to output to the console. The sample application uses this approach in _app/Http/routes.php_.
->
-> As a web framework, [Laravel uses Monolog](https://laravel.com/docs/8.x/logging) as the logging provider. To see how to get Monolog to output messages to the console, see [PHP: How to use monolog to log to console (php://out)](https://stackoverflow.com/questions/25787258/php-how-to-use-monolog-to-log-to-console-php-out).
->
+- At the end of the `build` job, it [uploads the files as artifacts](https://docs.github.com/actions/using-workflows/storing-workflow-data-as-artifacts).
+- At the beginning of the `deploy` job, it downloads the artifacts.
+Most of the time taken by the two-job process is spent uploading and downloading artifacts. If you want, you can simplify the workflow file by combining the two jobs into one, which eliminates the need for the upload and download steps.
<a name="next"></a>
In this tutorial, you learned how to:

> [!div class="checklist"]
-> * Create a MySQL database in Azure
-> * Connect a PHP app to MySQL
-> * Deploy the app to Azure
-> * Update the data model and redeploy the app
+> * Create a secure-by-default PHP and MySQL app in Azure
+> * Configure connection secrets to MySQL using app settings
+> * Deploy application code using GitHub Actions
+> * Update and redeploy the app
+> * Run database migrations securely
> * Stream diagnostic logs from Azure
> * Manage the app in the Azure portal
application-gateway Tutorial Ssl Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/tutorial-ssl-powershell.md
$frontendRule = New-AzApplicationGatewayRequestRoutingRule `
-RuleType Basic `
-HttpListener $defaultlistener `
-BackendAddressPool $defaultPool `
- -BackendHttpSettings $poolSettings
+ -BackendHttpSettings $poolSettings `
+ -priority 100
```

### Create the application gateway with the certificate
applied-ai-services Compose Custom Models Preview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/compose-custom-models-preview.md
If you want to use manually labeled data, you'll also have to upload the *.label
When you [train your model](https://formrecognizer.appliedai.azure.com/studio/custommodel/projects) with labeled data, the model uses supervised learning to extract values of interest, using the labeled forms you provide. Labeled data results in better-performing models and can produce models that work with complex forms or forms containing values without keys.
-Form Recognizer uses the [prebuilt-layout model](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v3-0-preview-2/operations/AnalyzeDocument) API to learn the expected sizes and positions of typeface and handwritten text elements and extract tables. Then it uses user-specified labels to learn the key/value associations and tables in the documents. We recommend that you use five manually labeled forms of the same type (same structure) to get started with training a new model. Then, add more labeled data, as needed, to improve the model accuracy. Form Recognizer enables training a model to extract key-value pairs and tables using supervised learning capabilities.
+Form Recognizer uses the [prebuilt-layout model](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-06-30-preview/operations/AnalyzeDocument) API to learn the expected sizes and positions of typeface and handwritten text elements and extract tables. Then it uses user-specified labels to learn the key/value associations and tables in the documents. We recommend that you use five manually labeled forms of the same type (same structure) to get started with training a new model. Then, add more labeled data, as needed, to improve the model accuracy. Form Recognizer enables training a model to extract key-value pairs and tables using supervised learning capabilities.
### [Form Recognizer Studio](#tab/studio)
The [compose model API](https://westus.dev.cognitive.microsoft.com/docs/services
#### Analyze documents
-To make an [**Analyze document**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v3-0-preview-2/operations/AnalyzeDocument) request, use a unique model name in the request parameters.
+To make an [**Analyze document**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-06-30-preview/operations/AnalyzeDocument) request, use a unique model name in the request parameters.
:::image type="content" source="media/custom-model-analyze-request.png" alt-text="Screenshot of a custom model request URL.":::
applied-ai-services Concept Business Card https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-business-card.md
You'll need a business card document. You can use our [sample business card docu
* Follow our [**Form Recognizer v3.0 migration guide**](v3-migration-guide.md) to learn how to use the preview version in your applications and workflows.
-* Explore our [**REST API (preview)**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v3-0-preview-2/operations/AnalyzeDocument) to learn more about the preview version and new capabilities.
+* Explore our [**REST API (preview)**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-06-30-preview/operations/AnalyzeDocument) to learn more about the preview version and new capabilities.
## Next steps
applied-ai-services Concept Custom Neural https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-custom-neural.md
https://{endpoint}/formrecognizer/documentModels:build?api-version=2022-06-30
* View the REST API:

> [!div class="nextstepaction"]
- > [Form Recognizer API v3.0](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v3-0-preview-2/operations/AnalyzeDocument)
+ > [Form Recognizer API v3.0](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-06-30-preview/operations/AnalyzeDocument)
applied-ai-services Concept Custom https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-custom.md
The following tools are supported by Form Recognizer v3.0:
| Feature | Resources | Model ID| |||:|
-|Custom model| <ul><li>[Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio/customform/projects)</li><li>[REST API](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v3-0-preview-2/operations/AnalyzeDocument)</li><li>[C# SDK](quickstarts/try-v3-csharp-sdk.md)</li><li>[Python SDK](quickstarts/try-v3-python-sdk.md)</li></ul>|***custom-model-id***|
+|Custom model| <ul><li>[Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio/customform/projects)</li><li>[REST API](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-06-30-preview/operations/AnalyzeDocument)</li><li>[C# SDK](quickstarts/try-v3-csharp-sdk.md)</li><li>[Python SDK](quickstarts/try-v3-python-sdk.md)</li></ul>|***custom-model-id***|
### Try Form Recognizer
Explore Form Recognizer quickstarts and REST APIs:
| Quickstart | REST API| |--|--|
-|[v3.0 Studio quickstart](quickstarts/try-v3-form-recognizer-studio.md) |[Form Recognizer v3.0 API 2022-06-30](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v3-0-preview-2/operations/AnalyzeDocument)|
+|[v3.0 Studio quickstart](quickstarts/try-v3-form-recognizer-studio.md) |[Form Recognizer v3.0 API 2022-06-30](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-06-30-preview/operations/AnalyzeDocument)|
| [v2.1 quickstart](quickstarts/get-started-sdk-rest-api.md) | [Form Recognizer API v2.1](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v3-0-preview-2/operations/BuildDocumentModel) |
applied-ai-services Concept General Document https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-general-document.md
Keys can also exist in isolation when the model detects that a key exists, with
## Input requirements
-* For best results, provide one clear photo or high-quality scan per document.
-* Supported file formats: JPEG/JPG, PNG, BMP, TIFF, and PDF (text-embedded or scanned). Text-embedded PDFs are best to eliminate the possibility of error in character extraction and location.
-* For PDF and TIFF, up to 2000 pages can be processed (with a free tier subscription, only the first two pages are processed).
-* The file size must be less than 500 MB for paid (S0) tier and 4 MB for free (F0) tier.
-* Image dimensions must be between 50 x 50 pixels and 10,000 x 10,000 pixels.
-* PDF dimensions are up to 17 x 17 inches, corresponding to Legal or A3 paper size, or smaller.
-* The total size of the training data is 500 pages or less.
-* If your PDFs are password-locked, you must remove the lock before submission.
## Supported languages and locales
Keys can also exist in isolation when the model detects that a key exists, with
* Follow our [**Form Recognizer v3.0 migration guide**](v3-migration-guide.md) to learn how to use the preview version in your applications and workflows.
-* Explore our [**REST API (preview)**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v3-0-preview-2/operations/AnalyzeDocument) to learn more about the preview version and new capabilities.
+* Explore our [**REST API (preview)**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-06-30-preview/operations/AnalyzeDocument) to learn more about the preview version and new capabilities.
> [!div class="nextstepaction"]
> [Try the Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio)
applied-ai-services Concept Id Document https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-id-document.md
You'll need an ID document. You can use our [sample ID document](https://raw.git
## Input requirements
-* For best results, provide one clear photo or high-quality scan per document.
-* Supported file formats: JPEG/JPG, PNG, BMP, TIFF, and PDF (text-embedded or scanned). Text-embedded PDFs are best to eliminate the possibility of error in character extraction and location.
-* For PDF and TIFF, up to 2000 pages can be processed (with a free tier subscription, only the first two pages are processed).
-* The file size must be less than 500 MB for paid (S0) tier and 4 MB for free (F0) tier.
-* Image dimensions must be between 50 x 50 pixels and 10,000 x 10,000 pixels.
-* PDF dimensions are up to 17 x 17 inches, corresponding to Legal or A3 paper size, or smaller.
-* The total size of the training data is 500 pages or less.
-* If your PDFs are password-locked, you must remove the lock before submission.
> [!NOTE]
> The [Sample Labeling tool](https://fott-2-1.azurewebsites.net/) does not support the BMP file format. This is a limitation of the tool, not the Form Recognizer Service.
You'll need an ID document. You can use our [sample ID document](https://raw.git
* Follow our [**Form Recognizer v3.0 migration guide**](v3-migration-guide.md) to learn how to use the preview version in your applications and workflows.
-* Explore our [**REST API (preview)**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v3-0-preview-2/operations/AnalyzeDocument) to learn more about the preview version and new capabilities.
+* Explore our [**REST API (preview)**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-06-30-preview/operations/AnalyzeDocument) to learn more about the preview version and new capabilities.
## Next steps
applied-ai-services Concept Invoice https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-invoice.md
You'll need an invoice document. You can use our [sample invoice document](https
## Input requirements
-* For best results, provide one clear photo or high-quality scan per document.
-* Supported file formats: JPEG/JPG, PNG, BMP, TIFF, and PDF (text-embedded or scanned). Text-embedded PDFs are best to eliminate the possibility of error in character extraction and location.
-* For PDF and TIFF, up to 2000 pages can be processed (with a free tier subscription, only the first two pages are processed).
-* The file size must be less than 500 MB for paid (S0) tier and 4 MB for free (F0) tier.
-* Image dimensions must be between 50 x 50 pixels and 10,000 x 10,000 pixels.
-* PDF dimensions are up to 17 x 17 inches, corresponding to Legal or A3 paper size, or smaller.
-* The total size of the training data is 500 pages or less.
-* If your PDFs are password-locked, you must remove the lock before submission.
> [!NOTE]
> The [Sample Labeling tool](https://fott-2-1.azurewebsites.net/) does not support the BMP file format. This is a limitation of the tool, not the Form Recognizer Service.
applied-ai-services Concept Layout https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-layout.md
The Form Recognizer Layout API extracts text, tables, selection marks, and struc
| Layout | ✓ | ✓ | ✓ | ✓ | ✓ |

**Supported paragraph roles**:
-The paragraph roles are best used with unstructured documents. PAragraph roles help analyze the structure of the extracted content for better semantic search and analysis.
+The paragraph roles are best used with unstructured documents. Paragraph roles help analyze the structure of the extracted content for better semantic search and analysis.
* title * sectionHeading
Try extracting data from forms and documents using the Form Recognizer Studio. Y
## Input requirements
-* For best results, provide one clear photo or high-quality scan per document.
-* Supported file formats: JPEG/JPG, PNG, BMP, TIFF, and PDF (text-embedded or scanned).
-* For PDF and TIFF, up to 2000 pages can be processed (with a free tier subscription, only the first two pages are processed).
-* The file size must be less than 500 MB for paid (S0) tier and 4 MB for free (F0) tier.
-* Image dimensions must be between 50 x 50 pixels and 10,000 x 10,000 pixels.
-* The minimum height of the text to be extracted is 12 pixels for a 1024 X 768 image. This dimension corresponds to about eight font point text at 150 DPI.
## Supported languages and locales
applied-ai-services Concept Model Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-model-overview.md
A composed model is created by taking a collection of custom models and assignin
## Input requirements
-* For best results, provide one clear photo or high-quality scan per document.
-* Supported file formats: JPEG/JPG, PNG, BMP, TIFF, and PDF (text-embedded or scanned). Additionally, the Read API supports Microsoft Word (DOCX), Excel (XLS), PowerPoint (PPT), and HTML files.
-* For PDF and TIFF, up to 2000 pages can be processed (with a free tier subscription, only the first two pages are processed).
-* The file size must be less than 500 MB for paid (S0) tier and 4 MB for free (F0) tier.
-* Image dimensions must be between 50 x 50 pixels and 10,000 x 10,000 pixels.
-* The total size of the training data is 500 pages or less.
-* If your PDFs are password-locked, you must remove the lock before submission.
> [!NOTE] > The [Sample Labeling tool](https://fott-2-1.azurewebsites.net/) does not support the BMP file format. This is a limitation of the tool, not the Form Recognizer service.
applied-ai-services Concept Read https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-read.md
Try extracting text from forms and documents using the Form Recognizer Studio. Y
## Input requirements
-* Supported file formats: These include JPEG/JPG, PNG, BMP, TIFF, PDF (text-embedded or scanned). Additionally, the newest API version `2022-06-30-preview` supports Microsoft Word (DOCX), Excel (XLS), PowerPoint (PPT), and HTML files.
-* For PDF and TIFF, up to 2000 pages can be processed (with a free tier subscription, only the first two pages are processed).
-* The file size must be less than 500 MB for paid (S0) tier and 4 MB for free (F0) tier.
-* Image dimensions must be between 50 x 50 pixels and 10,000 x 10,000 pixels.
-* The minimum height of the text to be extracted is 12 pixels for a 1024X768 image. This dimension corresponds to about eight font point text at 150 DPI.
## Supported languages and locales
Complete a Form Recognizer quickstart:
Explore our REST API: > [!div class="nextstepaction"]
-> [Form Recognizer API v3.0](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v3-0-preview-2/operations/AnalyzeDocument)
+> [Form Recognizer API v3.0](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-06-30-preview/operations/AnalyzeDocument)
applied-ai-services Concept Receipt https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-receipt.md
You'll need a receipt document. You can use our [sample receipt document](https:
## Input requirements
-* For best results, provide one clear photo or high-quality scan per document.
-* Supported file formats: JPEG/JPG, PNG, BMP, TIFF, and PDF (text-embedded or scanned). Text-embedded PDFs are best to eliminate the possibility of error in character extraction and location.
-* For PDF and TIFF, up to 2000 pages can be processed (with a free tier subscription, only the first two pages are processed).
-* The file size must be less than 500 MB for paid (S0) tier and 4 MB for free (F0) tier.
-* Image dimensions must be between 50 x 50 pixels and 10,000 x 10,000 pixels.
-* PDF dimensions are up to 17 x 17 inches, corresponding to Legal or A3 paper size, or smaller.
-* The total size of the training data is 500 pages or less.
-* If your PDFs are password-locked, you must remove the lock before submission.
## Supported languages and locales v2.1
You'll need a receipt document. You can use our [sample receipt document](https:
* Follow our [**Form Recognizer v3.0 migration guide**](v3-migration-guide.md) to learn how to use the preview version in your applications and workflows.
-* Explore our [**REST API (preview)**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v3-0-preview-2/operations/AnalyzeDocument) to learn more about the preview version and new capabilities.
+* Explore our [**REST API (preview)**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-06-30-preview/operations/AnalyzeDocument) to learn more about the preview version and new capabilities.
## Next steps
applied-ai-services Concept W2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-w2.md
The prebuilt W-2 model is supported by Form Recognizer v3.0 with the following t
| Feature | Resources | Model ID | |-|-|--|
-|**W-2 model**|<ul><li> [**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com)</li><li>[**REST API**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v3-0-preview-2/operations/AnalyzeDocument)</li><li>[**C# SDK**](quickstarts/try-v3-csharp-sdk.md#prebuilt-model)</li><li>[**Python SDK**](quickstarts/try-v3-python-sdk.md#prebuilt-model)</li><li>[**Java SDK**](quickstarts/try-v3-java-sdk.md#prebuilt-model)</li><li>[**JavaScript SDK**](quickstarts/try-v3-javascript-sdk.md#prebuilt-model)</li></ul>|**prebuilt-tax.us.w2**|
+|**W-2 model**|<ul><li> [**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com)</li><li>[**REST API**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-06-30-preview/operations/AnalyzeDocument)</li><li>[**C# SDK**](quickstarts/try-v3-csharp-sdk.md#prebuilt-model)</li><li>[**Python SDK**](quickstarts/try-v3-python-sdk.md#prebuilt-model)</li><li>[**Java SDK**](quickstarts/try-v3-java-sdk.md#prebuilt-model)</li><li>[**JavaScript SDK**](quickstarts/try-v3-javascript-sdk.md#prebuilt-model)</li></ul>|**prebuilt-tax.us.w2**|
### Try Form Recognizer
Try extracting data from W-2 forms using the Form Recognizer Studio. You'll need
## Input requirements
-* For best results, provide one clear photo or high-quality scan per document.
-* Supported file formats: JPEG/JPG, PNG, BMP, TIFF, and PDF (text-embedded or scanned). Text-embedded PDFs are best to eliminate the possibility of error in character extraction and location.
-* For PDF and TIFF, up to 2000 pages can be processed (with a free tier subscription, only the first two pages are processed).
-* The file size must be less than 500 MB for paid (S0) tier and 4 MB for free (F0) tier.
-* Image dimensions must be between 50 x 50 pixels and 10,000 x 10,000 pixels.
-* PDF dimensions are up to 17 x 17 inches, corresponding to Legal or A3 paper size, or smaller.
-* The total size of the training data is 500 pages or less.
-* If your PDFs are password-locked, you must remove the lock before submission.
## Supported languages and locales
Try extracting data from W-2 forms using the Form Recognizer Studio. You'll need
* Follow our [**Form Recognizer v3.0 migration guide**](v3-migration-guide.md) to learn how to use the preview version in your applications and workflows.
-* Explore our [**REST API (preview)**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v3-0-preview-2/operations/AnalyzeDocument) to learn more about the preview version and new capabilities.
+* Explore our [**REST API (preview)**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-06-30-preview/operations/AnalyzeDocument) to learn more about the preview version and new capabilities.
## Next steps
applied-ai-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/overview.md
The following features and development options are supported by the Form Recogn
| Feature | Description | Development options | |-|--|-| |[🆕 **Read**](concept-read.md)|Extract text lines, words, detected languages, and handwritten style if detected.|<ul ><li>[**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com/studio/read)</li><li>[**REST API**](how-to-guides/use-prebuilt-read.md?pivots=programming-language-rest-api)</li><li>[**C# SDK**](how-to-guides/use-prebuilt-read.md?pivots=programming-language-csharp)</li><li>[**Python SDK**](how-to-guides/use-prebuilt-read.md?pivots=programming-language-python)</li><li>[**Java SDK**](how-to-guides/use-prebuilt-read.md?pivots=programming-language-java)</li><li>[**JavaScript**](how-to-guides/use-prebuilt-read.md?pivots=programming-language-javascript)</li></ul> |
-|[🆕 **W-2 Form**](concept-w2.md) | Extract information reported in each box on a W-2 form.|<ul ><li>[**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=tax.us.w2)<li>[**REST API**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v3-0-preview-2/operations/AnalyzeDocument)</li><li>[**C# SDK**](quickstarts/try-v3-csharp-sdk.md#prebuilt-model)</li><li>[**Python SDK**](quickstarts/try-v3-python-sdk.md#prebuilt-model)</li><li>[**Java SDK**](quickstarts/try-v3-java-sdk.md#prebuilt-model)</li><li>[**JavaScript**](quickstarts/try-v3-javascript-sdk.md#prebuilt-model)</li></ul> |
+|[🆕 **W-2 Form**](concept-w2.md) | Extract information reported in each box on a W-2 form.|<ul ><li>[**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=tax.us.w2)<li>[**REST API**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-06-30-preview/operations/AnalyzeDocument)</li><li>[**C# SDK**](quickstarts/try-v3-csharp-sdk.md#prebuilt-model)</li><li>[**Python SDK**](quickstarts/try-v3-python-sdk.md#prebuilt-model)</li><li>[**Java SDK**](quickstarts/try-v3-java-sdk.md#prebuilt-model)</li><li>[**JavaScript**](quickstarts/try-v3-javascript-sdk.md#prebuilt-model)</li></ul> |
|[🆕 **General document model**](concept-general-document.md)|Extract text, tables, structure, key-value pairs and, named entities.|<ul ><li>[**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com/studio/document)</li><li>[**REST API**](quickstarts/try-v3-rest-api.md#reference-table)</li><li>[**C# SDK**](quickstarts/try-v3-csharp-sdk.md#general-document-model)</li><li>[**Python SDK**](quickstarts/try-v3-python-sdk.md#general-document-model)</li><li>[**Java SDK**](quickstarts/try-v3-java-sdk.md#general-document-model)</li><li>[**JavaScript**](quickstarts/try-v3-javascript-sdk.md#general-document-model)</li></ul> | |[**Layout model**](concept-layout.md) | Extract text, selection marks, and tables structures, along with their bounding box coordinates, from forms and documents.</br></br> Layout API has been updated to a prebuilt model. | <ul><li>[**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com/studio/layout)</li><li>[**REST API**](quickstarts/try-v3-rest-api.md#reference-table)</li><li>[**C# SDK**](quickstarts/try-v3-csharp-sdk.md#layout-model)</li><li>[**Python SDK**](quickstarts/try-v3-python-sdk.md#layout-model)</li><li>[**Java SDK**](quickstarts/try-v3-java-sdk.md#layout-model)</li><li>[**JavaScript**](quickstarts/try-v3-javascript-sdk.md#layout-model)</li></ul>| |[**Custom model (updated)**](concept-custom.md) | Extraction and analysis of data from forms and documents specific to distinct business data and use cases.<ul><li>Custom model API v3.0 supports **signature detection for custom template (custom form) models**.</br></br></li><li>Custom model API v3.0 offers a new model type **Custom Neural** or custom document to analyze unstructured documents.</li></ul>| [**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com/studio/custommodel/projects)</li><li>[**REST API**](quickstarts/try-v3-rest-api.md)</li><li>[**C# SDK**](quickstarts/try-v3-csharp-sdk.md)</li><li>[**Python 
SDK**](quickstarts/try-v3-python-sdk.md)</li><li>[**Java SDK**](quickstarts/try-v3-java-sdk.md)</li><li>[**JavaScript**](quickstarts/try-v3-javascript-sdk.md)</li></ul>|
applied-ai-services Try V3 Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/quickstarts/try-v3-rest-api.md
In this quickstart, you used the Form Recognizer REST API preview (v3.0) to anal
> [**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com/studio) > [!div class="nextstepaction"]
-> [REST API preview (v3.0) reference documentation](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v3-0-preview-2/operations/AnalyzeDocument)
+> [REST API preview (v3.0) reference documentation](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-06-30-preview/operations/AnalyzeDocument)
availability-zones Az Region https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/availability-zones/az-region.md
In the Product Catalog, always-available services are listed as "non-regional" s
| Azure Logic Apps | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) | | [Azure Monitor](../azure-monitor/logs/availability-zones.md) | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) | | [Azure Monitor: Application Insights](../azure-monitor/logs/availability-zones.md) | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) |
-| [Azure Monitor: Log Analytics](../azure-monitor/logs/availability-zones.md) | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) |
+| [Azure Monitor: Log Analytics](migrate-monitor-log-analytics.md) | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) |
| [Azure Network Watcher](../network-watcher/frequently-asked-questions.yml) | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) | | Azure Network Watcher: [Traffic Analytics](../network-watcher/frequently-asked-questions.yml) | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) | | Azure Notification Hubs | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) |
availability-zones Migrate Monitor Log Analytics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/availability-zones/migrate-monitor-log-analytics.md
+
+ Title: Migrate Log Analytics workspaces to availability zone support
+description: Learn how to migrate Log Analytics workspaces to availability zone support.
+++ Last updated : 07/21/2022+++++
+# Migrate Log Analytics workspaces to availability zone support
+
+This guide describes how to migrate Log Analytics workspaces from non-availability zone support to availability zone support. We'll take you through the different options for migration.
+
+> [!NOTE]
+> Application Insights resources can also use availability zones, but only if they are workspace-based and the workspace uses a dedicated cluster as explained below. Classic (non-workspace-based) Application Insights resources cannot use availability zones.
++
+## Prerequisites
+
+For availability zone support, your workspace must be located in one of the following supported regions:
+
+- East US 2
+- West US 2
+
+## Dedicated clusters
+
+Azure Monitor support for availability zones requires a Log Analytics workspace linked to an [Azure Monitor dedicated cluster](../azure-monitor/logs/logs-dedicated-clusters.md). Dedicated clusters are a deployment option that enables advanced capabilities for Azure Monitor Logs including availability zones.
+
+Not all dedicated clusters can use availability zones. Dedicated clusters created after mid-October 2020 can be enabled for availability zones at creation time, and new clusters created after that date are enabled by default in regions where Azure Monitor supports availability zones.
+
+## Downtime requirements
+
+There are no downtime requirements.
+
+## Migration process: Moving to a dedicated cluster
+
+### Step 1: Determine the current cluster for your workspace
+
+To determine the current workspace link status for your workspace, use [CLI, PowerShell or REST](../azure-monitor/logs/logs-dedicated-clusters.md#check-workspace-link-status) to retrieve the [cluster details](../azure-monitor/logs/logs-dedicated-clusters.md#check-cluster-provisioning-status). If the cluster uses an availability zone, then it will have a property called `isAvailabilityZonesEnabled` with a value of `true`. Once a cluster is created, this property cannot be altered.
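As a language-agnostic illustration of that check, the cluster details retrieved via CLI, PowerShell, or REST can be inspected for the flag named above. The Python sketch below assumes an ARM-style response where the flag sits in the resource's `properties` bag; treat the exact response shape as an assumption and rely on the linked documentation for the authoritative format:

```python
import json

def cluster_supports_availability_zones(cluster_json):
    """Return True if a dedicated cluster reports isAvailabilityZonesEnabled."""
    cluster = json.loads(cluster_json)
    # In ARM responses the flag lives under the resource's "properties" bag (assumed shape).
    return bool(cluster.get("properties", {}).get("isAvailabilityZonesEnabled", False))
```

If the function returns `False` for your cluster, the cluster cannot be changed in place; per the steps below, you create a new cluster and link your workspace to it.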
+
+### Step 2: Create a dedicated cluster with availability zone support
+
+Move your workspace to an availability zone by [creating a new dedicated cluster](../azure-monitor/logs/logs-dedicated-clusters.md#create-a-dedicated-cluster) in a region that supports availability zones. The cluster will automatically be enabled for availability zones. Then [link your workspace to the new cluster](../azure-monitor/logs/logs-dedicated-clusters.md#link-a-workspace-to-a-cluster).
+
+> [!IMPORTANT]
+> Availability zone support is defined on the cluster at creation time and can't be modified.
+
+Transitioning to a new cluster can be a gradual process. Don't remove the previous cluster until it has been purged of any data. For example, if your workspace retention is set to 60 days, you may want to keep your old cluster running for that period before removing it.
+
+Any queries against your workspace will query both clusters as required to provide you with a single, unified result set. That means that all Azure Monitor features relying on the workspace such as workbooks and dashboards will keep getting the full, unified result set based on data from both clusters.
+
+## Billing
+There is a [cost for using a dedicated cluster](../azure-monitor/logs/logs-dedicated-clusters.md#create-a-dedicated-cluster). It requires a daily capacity reservation of 500 GB.
+
+If you already have a dedicated cluster and choose to retain it to access its data, you'll be charged for both dedicated clusters. Starting August 4, 2021, the minimum required capacity reservation for dedicated clusters was reduced from 1000 GB/day to 500 GB/day, so we recommend applying that minimum to your old cluster to reduce charges.
+
+The new cluster isn't billed during its first day to avoid double billing during configuration. Only data ingested before the migration completes is billed on the date of migration.
++
+## Next steps
+
+Learn more about:
+
+> [!div class="nextstepaction"]
+> [Azure Monitor Logs Dedicated Clusters](../azure-monitor/logs/logs-dedicated-clusters.md)
+
+> [!div class="nextstepaction"]
+> [Azure Services that support Availability Zones](az-region.md)
azure-app-configuration Enable Dynamic Configuration Azure Functions Csharp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/enable-dynamic-configuration-azure-functions-csharp.md
Title: Tutorial for using Azure App Configuration dynamic configuration in an Azure Functions app | Microsoft Docs
+ Title: Tutorial for using Azure App Configuration dynamic configuration in an Azure Functions app
description: In this tutorial, you learn how to dynamically update the configuration data for Azure Functions apps documentationcenter: ''
In this tutorial, you learn how to:
## Prerequisites - Azure subscription - [create one for free](https://azure.microsoft.com/free/)-- [Visual Studio 2019](https://visualstudio.microsoft.com/vs) with the **Azure development** workload-- [Azure Functions tools](../azure-functions/functions-develop-vs.md#check-your-tools-version)
+- [Visual Studio](https://visualstudio.microsoft.com/vs) with the **Azure development** workload
+- [Azure Functions tools](../azure-functions/functions-develop-vs.md), if they aren't already installed with Visual Studio.
- Finish quickstart [Create an Azure functions app with Azure App Configuration](./quickstart-azure-functions-csharp.md) ## Reload data from App Configuration
-1. Open *Startup.cs*, and update the `ConfigureAppConfiguration` method.
+Azure Functions supports running [in-process](/azure/azure-functions/functions-dotnet-class-library) or [isolated-process](/azure/azure-functions/dotnet-isolated-process-guide). The main difference in App Configuration usage between the two modes is how the configuration is refreshed. In the in-process mode, you must make a call in each function to refresh the configuration. In the isolated-process mode, middleware is supported: the App Configuration middleware, `Microsoft.Azure.AppConfiguration.Functions.Worker`, refreshes the configuration automatically before each function is executed.
- The `ConfigureRefresh` method registers a setting to be checked for changes whenever a refresh is triggered within the application, which you will do in the later step when adding `_configurationRefresher.TryRefreshAsync()`. The `refreshAll` parameter instructs the App Configuration provider to reload the entire configuration whenever a change is detected in the registered setting.
+1. Update the code that connects to App Configuration and add the data refreshing conditions.
+
+ ### [In-process](#tab/in-process)
+
+ Open *Startup.cs*, and update the `ConfigureAppConfiguration` method.
- All settings registered for refresh have a default cache expiration of 30 seconds. It can be updated by calling the `AzureAppConfigurationRefreshOptions.SetCacheExpiration` method.
```csharp public override void ConfigureAppConfiguration(IFunctionsConfigurationBuilder builder)
In this tutorial, you learn how to:
builder.ConfigurationBuilder.AddAzureAppConfiguration(options => { options.Connect(Environment.GetEnvironmentVariable("ConnectionString"))
- // Load all keys that start with `TestApp:`
- .Select("TestApp:*")
- // Configure to reload configuration if the registered sentinel key is modified
- .ConfigureRefresh(refreshOptions =>
- refreshOptions.Register("TestApp:Settings:Sentinel", refreshAll: true));
+ // Load all keys that start with `TestApp:` and have no label
+ .Select("TestApp:*")
+ // Configure to reload configuration if the registered sentinel key is modified
+ .ConfigureRefresh(refreshOptions =>
+ refreshOptions.Register("TestApp:Settings:Sentinel", refreshAll: true));
}); } ```
- > [!TIP]
- > When you are updating multiple key-values in App Configuration, you would normally don't want your application to reload configuration before all changes are made. You can register a *sentinel key* and only update it when all other configuration changes are completed. This helps to ensure the consistency of configuration in your application.
+ ### [Isolated process](#tab/isolated-process)
+
+ Open *Program.cs*, and update the `Main` method.
+
+ ```csharp
+ public static void Main()
+ {
+ var host = new HostBuilder()
+ .ConfigureAppConfiguration(builder =>
+ {
+ builder.AddAzureAppConfiguration(options =>
+ {
+ options.Connect(Environment.GetEnvironmentVariable("ConnectionString"))
+ // Load all keys that start with `TestApp:` and have no label
+ .Select("TestApp:*")
+ // Configure to reload configuration if the registered sentinel key is modified
+ .ConfigureRefresh(refreshOptions =>
+ refreshOptions.Register("TestApp:Settings:Sentinel", refreshAll: true));
+ });
+ })
+ .ConfigureFunctionsWorkerDefaults()
+ .Build();
+
+ host.Run();
+ }
+ ```
+
+
+ The `ConfigureRefresh` method registers a setting to be checked for changes whenever a refresh is triggered within the application. The `refreshAll` parameter instructs the App Configuration provider to reload the entire configuration whenever a change is detected in the registered setting.
+
+ All settings registered for refresh have a default cache expiration of 30 seconds before a new refresh is attempted. It can be updated by calling the `AzureAppConfigurationRefreshOptions.SetCacheExpiration` method.
+
+ > [!TIP]
+ > When you are updating multiple key-values in App Configuration, you normally don't want your application to reload configuration before all changes are made. You can register a *sentinel key* and update it only when all other configuration changes are completed. This helps to ensure the consistency of configuration in your application.
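The sentinel-key pattern in the tip above is independent of the provider library. A minimal Python sketch of the idea (all names are illustrative, not part of the App Configuration client), showing why the full configuration is reloaded only after the sentinel key changes and why a cache expiration limits how often the store is polled:

```python
import time

class SentinelRefresher:
    """Reload the whole configuration only when the registered sentinel key changes."""

    def __init__(self, fetch_all, sentinel_key, cache_expiration=30.0):
        self._fetch_all = fetch_all          # callable returning the full key-value dict
        self._sentinel_key = sentinel_key
        self._cache_expiration = cache_expiration  # default 30 s, as described above
        self._settings = fetch_all()
        self._last_check = time.monotonic()

    def try_refresh(self):
        # Honor the cache window: check the store at most once per expiration period.
        if time.monotonic() - self._last_check < self._cache_expiration:
            return False
        self._last_check = time.monotonic()
        fresh = self._fetch_all()
        # refreshAll semantics: replace the entire configuration only if the sentinel moved.
        if fresh.get(self._sentinel_key) != self._settings.get(self._sentinel_key):
            self._settings = fresh
            return True
        return False

    def __getitem__(self, key):
        return self._settings[key]
```

Because consumers keep reading the old snapshot until the sentinel is bumped, partially-applied key-value updates in the store never become visible, which is the consistency guarantee the tip describes.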
+
+### [In-process](#tab/in-process)
2. Update the `Configure` method to make Azure App Configuration services available through dependency injection.
In this tutorial, you learn how to:
using Microsoft.Extensions.Configuration.AzureAppConfiguration; ```
- Update the constructor to obtain the instance of `IConfigurationRefresherProvider` through dependency injection, from which you can obtain the instance of `IConfigurationRefresher`.
+ Update the constructor to obtain the instance of `IConfigurationRefresherProvider` through dependency injection, from which you can obtain the instance of `IConfigurationRefresher`.
```csharp private readonly IConfiguration _configuration;
In this tutorial, you learn how to:
} ```
+### [Isolated process](#tab/isolated-process)
+2. Add a `ConfigureServices` call to the `HostBuilder` to make Azure App Configuration services available through dependency injection. Then update the `ConfigureFunctionsWorkerDefaults` to use App Configuration middleware for configuration data refresh.
+
+ ```csharp
+ public static void Main()
+ {
+ var host = new HostBuilder()
+ .ConfigureAppConfiguration(builder =>
+ {
+ // Omitted the code added in the previous step.
+ // ... ...
+ })
+ .ConfigureServices(services =>
+ {
+ // Make Azure App Configuration services available through dependency injection.
+ services.AddAzureAppConfiguration();
+ })
+ .ConfigureFunctionsWorkerDefaults(app =>
+ {
+ // Use Azure App Configuration middleware for data refresh.
+ app.UseAzureAppConfiguration();
+ })
+ .Build();
+
+ host.Run();
+ }
+ ```
++ ## Test the function locally 1. Set an environment variable named **ConnectionString**, and set it to the access key to your app configuration store. If you use the Windows command prompt, run the following command and restart the command prompt to allow the change to take effect:
In this tutorial, you learn how to:
In this tutorial, you enabled your Azure Functions app to dynamically refresh configuration settings from App Configuration. To learn how to use an Azure managed identity to streamline the access to App Configuration, continue to the next tutorial. > [!div class="nextstepaction"]
-> [Managed identity integration](./howto-integrate-azure-managed-service-identity.md)
+> [Access App Configuration using managed identity](./howto-integrate-azure-managed-service-identity.md)
azure-app-configuration Quickstart Azure Functions Csharp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/quickstart-azure-functions-csharp.md
Title: Quickstart for Azure App Configuration with Azure Functions | Microsoft Docs description: "In this quickstart, make an Azure Functions app with Azure App Configuration and C#. Create and connect to an App Configuration store. Test the function locally." -+ ms.devlang: csharp Last updated 06/02/2021-+ #Customer intent: As an Azure Functions developer, I want to manage all my app settings in one place using Azure App Configuration. # Quickstart: Create an Azure Functions app with Azure App Configuration
In this quickstart, you incorporate the Azure App Configuration service into an
## Prerequisites -- Azure subscription - [create one for free](https://azure.microsoft.com/free/dotnet)-- [Visual Studio 2019](https://visualstudio.microsoft.com/vs) with the **Azure development** workload.-- [Azure Functions tools](../azure-functions/functions-develop-vs.md#check-your-tools-version)
+- Azure subscription - [create one for free](https://azure.microsoft.com/free/dotnet).
+- [Visual Studio](https://visualstudio.microsoft.com/vs) with the **Azure development** workload.
+- [Azure Functions tools](../azure-functions/functions-develop-vs.md), if they aren't already installed with Visual Studio.
## Create an App Configuration store
In this quickstart, you incorporate the Azure App Configuration service into an
[!INCLUDE [Create a project using the Azure Functions template](../../includes/functions-vstools-create.md)] ## Connect to an App Configuration store
-This project will use [dependency injection in .NET Azure Functions](../azure-functions/functions-dotnet-dependency-injection.md) and add Azure App Configuration as an extra configuration source.
+This project will use [dependency injection in .NET Azure Functions](/azure/azure-functions/functions-dotnet-dependency-injection) and add Azure App Configuration as an extra configuration source. Azure Functions supports running [in-process](/azure/azure-functions/functions-dotnet-class-library) or [isolated-process](/azure/azure-functions/dotnet-isolated-process-guide). Pick the mode that matches your requirements.
1. Right-click your project, and select **Manage NuGet Packages**. On the **Browse** tab, search for and add following NuGet packages to your project.
- - [Microsoft.Extensions.Configuration.AzureAppConfiguration](https://www.nuget.org/packages/Microsoft.Extensions.Configuration.AzureAppConfiguration/) version 4.1.0 or later
- - [Microsoft.Azure.Functions.Extensions](https://www.nuget.org/packages/Microsoft.Azure.Functions.Extensions/) version 1.1.0 or later
+ ### [In-process](#tab/in-process)
-2. Add a new file, *Startup.cs*, with the following code. It defines a class named `Startup` that implements the `FunctionsStartup` abstract class. An assembly attribute is used to specify the type name used during Azure Functions startup.
+ - [Microsoft.Extensions.Configuration.AzureAppConfiguration](https://www.nuget.org/packages/Microsoft.Extensions.Configuration.AzureAppConfiguration/) version 4.1.0 or later
+ - [Microsoft.Azure.Functions.Extensions](https://www.nuget.org/packages/Microsoft.Azure.Functions.Extensions/) version 1.1.0 or later
+
+ ### [Isolated process](#tab/isolated-process)
+
+ - [Microsoft.Azure.AppConfiguration.Functions.Worker](https://www.nuget.org/packages/Microsoft.Azure.AppConfiguration.Functions.Worker)
+
+
+
+2. Add code to connect to Azure App Configuration.
+ ### [In-process](#tab/in-process)
+
+ Add a new file, *Startup.cs*, with the following code. It defines a class named `Startup` that implements the `FunctionsStartup` abstract class. An assembly attribute is used to specify the type name used during Azure Functions startup.
The `ConfigureAppConfiguration` method is overridden and Azure App Configuration provider is added as an extra configuration source by calling `AddAzureAppConfiguration()`. The `Configure` method is left empty as you don't need to register any services at this point.
This project will use [dependency injection in .NET Azure Functions](../azure-fu
} ```
-3. Open *Function1.cs*, and add the following namespace.
+ ### [Isolated process](#tab/isolated-process)
+
+ Open *Program.cs* and update the `Main()` method as follows. You add the Azure App Configuration provider as an extra configuration source by calling `AddAzureAppConfiguration()`.
+
+ ```csharp
+ public static void Main()
+ {
+ var host = new HostBuilder()
+ .ConfigureAppConfiguration(builder =>
+ {
+ string cs = Environment.GetEnvironmentVariable("ConnectionString");
+ builder.AddAzureAppConfiguration(cs);
+ })
+ .ConfigureFunctionsWorkerDefaults()
+ .Build();
+
+ host.Run();
+ }
+ ```
+
+
+3. Open *Function1.cs*, and add the following namespace if it's not present already.
```csharp using Microsoft.Extensions.Configuration; ```
- Add a constructor used to obtain an instance of `IConfiguration` through dependency injection.
+ Add or update the constructor used to obtain an instance of `IConfiguration` through dependency injection.
+ ### [In-process](#tab/in-process)
```csharp private readonly IConfiguration _configuration;
This project will use [dependency injection in .NET Azure Functions](../azure-fu
} ```
+ ### [Isolated process](#tab/isolated-process)
+ ```csharp
+ private readonly IConfiguration _configuration;
+
+ public Function1(ILoggerFactory loggerFactory, IConfiguration configuration)
+ {
+ _logger = loggerFactory.CreateLogger<Function1>();
+ _configuration = configuration;
+ }
+ ```
+
+ 4. Update the `Run` method to read values from the configuration.
+ ### [In-process](#tab/in-process)
```csharp
+ [FunctionName("Function1")]
public async Task<IActionResult> Run( [HttpTrigger(AuthorizationLevel.Anonymous, "get", "post", Route = null)] HttpRequest req, ILogger log) { log.LogInformation("C# HTTP trigger function processed a request.");
+ // Read configuration data
string keyName = "TestApp:Settings:Message"; string message = _configuration[keyName];
This project will use [dependency injection in .NET Azure Functions](../azure-fu
> [!NOTE] > The `Function1` class and the `Run` method should not be static. Remove the `static` modifier if it was autogenerated.
+ ### [Isolated process](#tab/isolated-process)
+
+ ```csharp
+ [Function("Function1")]
+ public HttpResponseData Run([HttpTrigger(AuthorizationLevel.Anonymous, "get", "post")] HttpRequestData req)
+ {
+ _logger.LogInformation("C# HTTP trigger function processed a request.");
+
+ var response = req.CreateResponse(HttpStatusCode.OK);
+ response.Headers.Add("Content-Type", "text/plain; charset=utf-8");
+
+ // Read configuration data
+ string keyName = "TestApp:Settings:Message";
+ string message = _configuration[keyName];
+
+ response.WriteString(message ?? $"Please create a key-value with the key '{keyName}' in Azure App Configuration.");
+
+ return response;
+ }
+ ```
+
+ ## Test the function locally
+
+ 1. Set an environment variable named **ConnectionString**, and set it to the access key of your App Configuration store. If you use the Windows command prompt, run the following command and restart the command prompt to allow the change to take effect:
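On Linux or macOS shells, a hypothetical equivalent is to export the variable before starting the Functions host; the connection string below is a placeholder, not a real access key:

```bash
# Placeholder connection string; substitute the access key copied from your
# App Configuration store in the Azure portal.
export ConnectionString="Endpoint=https://example.azconfig.io;Id=example;Secret=example"

# Confirm the variable is visible to child processes such as the Functions host.
echo "$ConnectionString"
```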
In this quickstart, you created a new App Configuration store and used it with a
> [!div class="nextstepaction"] > [Enable dynamic configuration in Azure Functions](./enable-dynamic-configuration-azure-functions-csharp.md)+
+To learn how to use an Azure managed identity to streamline the access to App Configuration, continue to the next tutorial.
+
+> [!div class="nextstepaction"]
+> [Access App Configuration using managed identity](./howto-integrate-azure-managed-service-identity.md)
azure-arc Agent Upgrade https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/agent-upgrade.md
Title: "Upgrading Azure Arc-enabled Kubernetes agents"
Last updated 03/03/2021-+ description: "Control agent upgrades for Azure Arc-enabled Kubernetes" keywords: "Kubernetes, Arc, Azure, K8s, containers, agent, upgrade"
azure-arc Azure Rbac https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/azure-rbac.md
Title: "Azure RBAC for Azure Arc-enabled Kubernetes clusters"
Last updated 04/05/2021-+ description: "Use Azure RBAC for authorization checks on Azure Arc-enabled Kubernetes clusters."
azure-arc Cluster Connect https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/cluster-connect.md
A conceptual overview of this feature is available in [Cluster connect - Azure A
``` ```console
- $TOKEN=(kubectl get secret demo-user-secret -o jsonpath='{$.data.token}' | base64 -d | sed $'s/$/\\\n/g')
+ TOKEN=$(kubectl get secret demo-user-secret -o jsonpath='{$.data.token}' | base64 -d | sed $'s/$/\\\n/g')
```
+1. Print the token to the console:
+
+ ```console
+ echo $TOKEN
+ ```
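The `base64 -d` stage in the command above decodes the secret data, because Kubernetes stores secret values base64-encoded. A self-contained illustration with a fabricated token value:

```console
# Kubernetes secret data is base64-encoded; decoding recovers the raw token.
echo 'bXktZGVtby10b2tlbg==' | base64 -d
# prints: my-demo-token
```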
### [Azure PowerShell](#tab/azure-powershell)
azure-arc Custom Locations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/custom-locations.md
Title: "Create and manage custom locations on Azure Arc-enabled Kubernetes" Previously updated : 07/21/2022 Last updated : 07/27/2022 description: "Use custom locations to deploy Azure PaaS services on Azure Arc-enabled Kubernetes clusters"
description: "Use custom locations to deploy Azure PaaS services on Azure Arc-en
# Create and manage custom locations on Azure Arc-enabled Kubernetes
- The *Custom locations* feature provides a way for tenant or cluster administrators to configure their Azure Arc-enabled Kubernetes clusters as target locations for deploying instances of Azure offerings. Examples of Azure offerings that can be deployed on top of custom locations include databases, such as Azure Arc-enabled SQL Managed Instance and Azure Arc-enabled PostgreSQL Hyperscale, or application instances, such as App Services, Functions, Event Grid, Logic Apps, and API Management.
+ The *custom locations* feature provides a way for tenant or cluster administrators to configure their Azure Arc-enabled Kubernetes clusters as target locations for deploying instances of Azure offerings. Examples of Azure offerings that can be deployed on top of custom locations include databases, such as Azure Arc-enabled SQL Managed Instance and Azure Arc-enabled PostgreSQL Hyperscale, or application instances, such as App Services, Functions, Event Grid, Logic Apps, and API Management.
-A custom location has a one-to-one mapping to a namespace within the Azure Arc-enabled Kubernetes cluster. The custom location Azure resource combined with Azure RBAC can be used to grant granular permissions to application developers or database admins, enabling them to deploy resources such as databases or application instances on top of Arc-enabled Kubernetes clusters in a multi-tenant manner.
+A custom location has a one-to-one mapping to a namespace within the Azure Arc-enabled Kubernetes cluster. The custom location Azure resource combined with Azure role-based access control (Azure RBAC) can be used to grant granular permissions to application developers or database admins, enabling them to deploy resources such as databases or application instances on top of Arc-enabled Kubernetes clusters in a multi-tenant manner.
A conceptual overview of this feature is available in [Custom locations - Azure Arc-enabled Kubernetes](conceptual-custom-locations.md).
In this article, you learn how to:
- Install the following Azure CLI extensions: - `connectedk8s` (version 1.2.0 or later) - `k8s-extension` (version 1.0.0 or later)
- - `customlocation` (version 0.1.3 or later)
+ - `customlocation` (version 0.1.3 or later)
```azurecli az extension add --name connectedk8s
In this article, you learn how to:
az extension add --name customlocation ```
- If you have already installed the `connectedk8s`, `k8s-extension`, and `customlocation` extensions, update to the **latest version** using the following command:
+ If you have already installed the `connectedk8s`, `k8s-extension`, and `customlocation` extensions, update to the **latest version** by using the following command:
```azurecli az extension update --name connectedk8s
In this article, you learn how to:
## Enable custom locations on your cluster
-If you are signed in to Azure CLI as an Azure AD user, to enable this feature on your cluster, execute the following command:
+If you are signed in to Azure CLI as an Azure Active Directory (Azure AD) user, to enable this feature on your cluster, execute the following command:
```azurecli az connectedk8s enable-features -n <clusterName> -g <resourceGroupName> --features cluster-connect custom-locations
If you run the above command while signed in to Azure CLI using a service princi
Unable to fetch oid of 'custom-locations' app. Proceeding without enabling the feature. Insufficient privileges to complete the operation. ```
-This is because a service principal doesn't have permissions to get information of the application used by the Azure Arc service. To avoid this error, execute the following steps:
+This is because a service principal doesn't have permissions to get information about the application used by the Azure Arc service. To avoid this error, complete the following steps:
-1. Sign in to Azure CLI using your user account. Fetch the Object ID of the Azure AD application used by Azure Arc service:
+1. Sign in to Azure CLI using your user account. Fetch the `objectId` or `id` of the Azure AD application used by Azure Arc service. The command you use depends on your version of Azure CLI.
- ```azurecli
- az ad sp show --id bc313c14-388c-4e7d-a58e-70017303ee3b --query objectId -o tsv
- ```
+ If you're using an Azure CLI version lower than 2.37.0, use the following command:
+
+ ```azurecli
+ az ad sp show --id bc313c14-388c-4e7d-a58e-70017303ee3b --query objectId -o tsv
+ ```
+
+ If you're using Azure CLI version 2.37.0 or higher, use the following command instead:
+
+ ```azurecli
+ az ad sp show --id bc313c14-388c-4e7d-a58e-70017303ee3b --query id -o tsv
+ ```
-1. Sign in to Azure CLI using the service principal. Use the `<objectId>` value from above step to enable custom locations feature on the cluster:
+1. Sign in to Azure CLI using the service principal. Use the `<objectId>` or `id` value from the previous step to enable custom locations on the cluster:
```azurecli
- az connectedk8s enable-features -n <cluster-name> -g <resource-group-name> --custom-locations-oid <objectId> --features cluster-connect custom-locations
+ az connectedk8s enable-features -n <cluster-name> -g <resource-group-name> --custom-locations-oid <objectId/id> --features cluster-connect custom-locations
``` > [!NOTE]
azure-arc Extensions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/extensions.md
Last updated 07/12/2022-+ description: "Deploy and manage lifecycle of extensions on Azure Arc-enabled Kubernetes"
azure-arc Troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/troubleshooting.md
# Last updated 06/13/2022-+ description: "Learn how to resolve common issues with Azure Arc-enabled Kubernetes clusters and GitOps." keywords: "Kubernetes, Arc, Azure, containers, GitOps, Flux"
azure-arc Tutorial Akv Secrets Provider https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/tutorial-akv-secrets-provider.md
description: Learn how to set up the Azure Key Vault Provider for Secrets Store
Last updated 5/26/2022-+
azure-arc Tutorial Arc Enabled Open Service Mesh https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/tutorial-arc-enabled-open-service-mesh.md
Title: Azure Arc-enabled Open Service Mesh
description: Open Service Mesh (OSM) extension on Azure Arc-enabled Kubernetes cluster Last updated 05/25/2022-+
azure-arc Use Azure Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/use-azure-policy.md
# Last updated 11/23/2021-+ description: "Apply configurations at-scale using Azure Policy" keywords: "Kubernetes, Arc, Azure, K8s, containers"
azure-arc Use Gitops With Helm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/use-gitops-with-helm.md
# Last updated 05/24/2022-+ description: "Use GitOps with Helm for an Azure Arc-enabled cluster configuration" keywords: "GitOps, Kubernetes, K8s, Azure, Helm, Arc, AKS, Azure Kubernetes Service, containers"
azure-arc Validation Program https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/validation-program.md
Title: "Azure Arc-enabled Kubernetes validation"
Last updated 03/03/2021-+ description: "Describes Arc validation program for Kubernetes distributions" keywords: "Kubernetes, Arc, Azure, K8s, validation"
azure-arc Private Link Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/private-link-security.md
There are two ways you can achieve this:
|Priority |150 (must be lower than any rules that block internet access) |151 (must be lower than any rules that block internet access) |
|Name |AllowAADOutboundAccess |AllowAzOutboundAccess |
-- Configure the firewall on your local network to allow outbound TCP 443 (HTTPS) access to Azure AD and Azure using the downloadable service tag files. The JSON file contains all the public IP address ranges used by Azure AD and Azure and is updated monthly to reflect any changes. Azure ADs service tag is `AzureActiveDirectory` and Azure's service tag is `AzureResourceManager`. Consult with your network administrator and network firewall vendor to learn how to configure your firewall rules.
+- Configure the firewall on your local network to allow outbound TCP 443 (HTTPS) access to Azure AD and Azure using the downloadable service tag files. The [JSON file](https://www.microsoft.com/en-us/download/details.aspx?id=56519) contains all the public IP address ranges used by Azure AD and Azure and is updated monthly to reflect any changes. Azure AD's service tag is `AzureActiveDirectory` and Azure's service tag is `AzureResourceManager`. Consult with your network administrator and network firewall vendor to learn how to configure your firewall rules.
See the visual diagram under the section [How it works](#how-it-works) for the network traffic flows.
See the visual diagram under the section [How it works](#how-it-works) for the n
1. Go to **Create a resource** in the Azure portal and search for **Azure Arc Private Link Scope**. Or you can use the following link to open the [Azure Arc Private Link Scope](https://portal.azure.com/#blade/HubsExtension/BrowseResource/resourceType/Microsoft.HybridCompute%2FprivateLinkScopes) page in the portal.
- :::image type="content" source="./media/private-link-security/find-scope.png" alt-text="Find Private Link Scope" border="true":::
+ :::image type="content" source="./media/private-link-security/private-scope-home.png" lightbox="./media/private-link-security/private-scope-home.png" alt-text="Screenshot of private scope home page with Create button." border="true":::
1. Select **Create**.
-1. Pick a Subscription and Resource Group.
+1. In the **Basics** tab, select a Subscription and Resource Group.
-1. Give the Azure Arc Private Link Scope a name. It's best to use a meaningful and clear name.
+1. Enter a name for the Azure Arc Private Link Scope. It's best to use a meaningful and clear name.
- You can optionally require every Azure Arc-enabled machine or server associated with this Azure Arc Private Link Scope to send data to the service through the private endpoint. If you select **Enable public network access**, machines or servers associated with this Azure Arc Private Link Scope can communicate with the service over both private or public networks. You can change this setting after creating the scope if you change your mind.
+ Optionally, you can require every Azure Arc-enabled machine or server associated with this Azure Arc Private Link Scope to send data to the service through the private endpoint. Alternatively, check the box for **Allow public network access** so that machines or servers associated with this Azure Arc Private Link Scope can communicate with the service over both private and public networks. You can change this setting after creating the scope if you change your mind.
+
+1. Select the **Private endpoint** tab, then select **Create**.
+1. In the **Create private endpoint** window:
+ 1. Enter a **Name** for the endpoint.
+
+ 1. Choose **Yes** for **Integrate with private DNS zone**, and let it automatically create a new Private DNS Zone.
+
+ > [!NOTE]
+ > If you choose **No** and prefer to manage DNS records manually, first complete setting up your Private Link - including this Private Endpoint and the Private Scope configuration. Then, configure your DNS according to the instructions in [Azure Private Endpoint DNS configuration](../../private-link/private-endpoint-dns.md). Make sure not to create empty records as preparation for your Private Link setup. The DNS records you create can override existing settings and impact your connectivity with Azure Arc-enabled servers.
+
+ 1. Select **OK**.
1. Select **Review + Create**.
- :::image type="content" source="./media/private-link-security/create-private-link-scope.png" alt-text="Create Private Link Scope" border="true":::
+ :::image type="content" source="./media/private-link-security/create-private-link-scope.png" alt-text="Screenshot showing the Create Private Link Scope window" border="true":::
1. Let the validation pass, and then select **Create**.
-## Create a private endpoint
+<!--## Create a private endpoint
Once your Azure Arc Private Link Scope is created, you need to connect it with one or more virtual networks using a private endpoint. The private endpoint exposes access to the Azure Arc services on a private IP in your virtual network address space.
Once your Azure Arc Private Link Scope is created, you need to connect it with o
d. Let validation pass.
- e. Select **Create**.
+ e. Select **Create**.-->
## Configure on-premises DNS forwarding
If you opted out of using Azure private DNS zones during private endpoint creati
1. From the left-hand pane, select **DNS configuration** to see a list of the DNS records and corresponding IP addresses you'll need to set up on your DNS server. The FQDNs and IP addresses will change based on the region you selected for your private endpoint and the available IP addresses in your subnet.
- :::image type="content" source="./media/private-link-security/dns-configuration.png" alt-text="DNS configuration details" border="true":::
+ :::image type="content" source="./media/private-link-security/dns-configuration.png" lightbox="./media/private-link-security/dns-configuration.png" alt-text="DNS configuration details" border="true":::
1. Follow the guidance from your DNS server vendor to add the necessary DNS zones and A records to match the table in the portal. Ensure that you select a DNS server that is appropriately scoped for your network. Every machine or server that uses this DNS server now resolves the private endpoint IP addresses and must be associated with the Azure Arc Private Link Scope, or the connection will be refused.
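As a sketch, a BIND-style zone entry for one of these A records might look like the following; both the FQDN and the IP address here are hypothetical placeholders, and the actual values must come from the **DNS configuration** pane shown in the portal:

```
; Hypothetical A record for a private endpoint (values vary by region and subnet)
gbl.his.arc.azure.com.    IN    A    10.0.0.5
```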
For Azure Arc-enabled servers that were set up prior to your private link scope,
> [!NOTE] > Only Azure Arc-enabled servers in the same subscription and region as your Private Link Scope are shown.
- :::image type="content" source="./media/private-link-security/select-servers-private-link-scope.png" alt-text="Selecting Azure Arc resources" border="true":::
+ :::image type="content" source="./media/private-link-security/select-servers-private-link-scope.png" lightbox="./media/private-link-security/select-servers-private-link-scope.png" alt-text="Selecting Azure Arc resources" border="true":::
It may take up to 15 minutes for the Private Link Scope to accept connections from the recently associated server(s).
azure-government Azure Secure Isolation Guidance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/azure-secure-isolation-guidance.md
You control access permissions and can extract detailed activity logs from the A
> - **[Enable logging for Azure Key Vault](../key-vault/general/logging.md)** > - **[How to secure storage account for Azure Key Vault logs](../storage/blobs/security-recommendations.md)**
-You can also use the [Azure Key Vault solution in Azure Monitor](../azure-monitor/insights/key-vault-insights-overview.md) to review Key Vault logs. To use this solution, you need to enable logging of Key Vault diagnostics and direct the diagnostics to a Log Analytics workspace. With this solution, it isn't necessary to write logs to Azure Blob storage.
+You can also use the [Azure Key Vault solution in Azure Monitor](../key-vault/key-vault-insights-overview.md) to review Key Vault logs. To use this solution, you need to enable logging of Key Vault diagnostics and direct the diagnostics to a Log Analytics workspace. With this solution, it isn't necessary to write logs to Azure Blob storage.
> [!NOTE] > For a comprehensive list of Azure Key Vault security recommendations, see **[Azure security baseline for Key Vault](/security/benchmark/azure/baselines/key-vault-security-baseline)**.
azure-monitor Agent Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/agent-linux.md
# Install Log Analytics agent on Linux computers
-This article provides details on installing the Log Analytics agent on Linux computers using the following methods:
+This article provides details on installing the Log Analytics agent on Linux computers hosted in other clouds or on-premises.
-* [Install the agent for Linux using a wrapper-script](#install-the-agent-using-wrapper-script) hosted on GitHub. This is the recommended method to install and upgrade the agent when the computer has connectivity with the Internet, directly or through a proxy server.
-* [Manually download and install](#install-the-agent-manually) the agent. This is required when the Linux computer doesn't have access to the Internet and will be communicating with Azure Monitor or Azure Automation through the [Log Analytics gateway](./gateway.md).
-The installation methods described in this article are typically used for virtual machines on-premises or in other clouds. See [Installation options](./log-analytics-agent.md#installation-options) for more efficient options you can use for Azure virtual machines.
+The [installation methods described in this article](#install-the-agent) are:
+* Install the agent for Linux using a wrapper script hosted on GitHub. This is the recommended method to install and upgrade the agent when the computer has connectivity with the Internet, directly or through a proxy server.
+* Manually download and install the agent. This is required when the Linux computer doesn't have access to the Internet and will be communicating with Azure Monitor or Azure Automation through the [Log Analytics gateway](./gateway.md).
+
+See [Installation options](./log-analytics-agent.md#installation-options) for more efficient options you can use for Azure virtual machines.
-## Supported operating systems
+## Requirements
+
+### Supported operating systems
See [Overview of Azure Monitor agents](agents-overview.md#supported-operating-systems) for a list of Linux distributions supported by the Log Analytics agent.
See [Overview of Azure Monitor agents](agents-overview.md#supported-operating-sy
>OpenSSL 1.1.0 is only supported on x86_x64 platforms (64-bit) and OpenSSL earlier than 1.x is not supported on any platform. >[!NOTE]
->Running the Log Analytics Linux Agent in containers is not supported. If you need to monitor containers, please leverage the [Container Monitoring solution](../containers/containers.md) for Docker hosts or [Container insights](../containers/container-insights-overview.md) for Kubernetes.
+>The Log Analytics Linux Agent does not run in containers. To monitor containers, use the [Container Monitoring solution](../containers/containers.md) for Docker hosts or [Container insights](../containers/container-insights-overview.md) for Kubernetes.
Starting with versions released after August 2018, we're making the following changes to our support model:
Again, only if you're using an older version of the agent, the python2 executabl
sudo update-alternatives --install /usr/bin/python python /usr/bin/python2 1 ```
-## Supported Linux hardening
+### Supported Linux hardening
The OMS Agent has limited customization and hardening support for Linux. The following are currently supported:
The following aren't supported:
CIS and SELinux hardening support is planned for [Azure Monitoring Agent](./azure-monitor-agent-overview.md). Further hardening and customization methods aren't supported or planned for the OMS Agent. For instance, OS images like GitHub Enterprise Server, which include customizations such as limitations to user account privileges, aren't supported.
-## Agent prerequisites
+### Agent prerequisites
The following table highlights the packages required for [supported Linux distros](#supported-operating-systems) that the agent will be installed on.
The following table highlights the packages required for [supported Linux distro
>[!NOTE] >Either rsyslog or syslog-ng are required to collect syslog messages. The default syslog daemon on version 5 of Red Hat Enterprise Linux, CentOS, and Oracle Linux version (sysklog) is not supported for syslog event collection. To collect syslog data from this version of these distributions, the rsyslog daemon should be installed and configured to replace sysklog.
-## Network requirements
+### Network requirements
See [Log Analytics agent overview](./log-analytics-agent.md#network-requirements) for the network requirements for the Linux agent.
-## Workspace ID and key
+### Workspace ID and key
Regardless of the installation method used, you'll require the workspace ID and key for the Log Analytics workspace that the agent will connect to. Select the workspace from the **Log Analytics workspaces** menu in the Azure portal. Then select **Agents management** in the **Settings** section.
docker-cimprov | 1.0.0 | Docker provider for OMI. Only installed if Docker is de
### Agent installation details
-After installing the Log Analytics agent for Linux packages, the following system-wide configuration changes are also applied. These artifacts are removed when the omsagent package is uninstalled.
+
+Installing the Log Analytics agent for Linux packages also applies the system-wide configuration changes below. Uninstalling the omsagent package removes these artifacts.
* A non-privileged user named: `omsagent` is created. The daemon runs under this credential. * A sudoers *include* file is created in `/etc/sudoers.d/omsagent`. This authorizes `omsagent` to restart the syslog and omsagent daemons. If sudo *include* directives aren't supported in the installed version of sudo, these entries will be written to `/etc/sudoers`.
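For illustration, the sudoers include might contain an entry along these lines; this is a hypothetical sketch of the file's intent, not its literal contents:

```
# Hypothetical /etc/sudoers.d/omsagent entry: allow omsagent to restart its daemons
omsagent ALL=(ALL) NOPASSWD: /opt/microsoft/omsagent/bin/service_control restart
```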
After installing the Log Analytics agent for Linux packages, the following syste
On a monitored Linux computer, the agent is listed as `omsagent`. `omsconfig` is the Log Analytics agent for Linux configuration agent that looks for new portal side configuration every 5 minutes. The new and updated configuration is applied to the agent configuration files located at `/etc/opt/microsoft/omsagent/conf/omsagent.conf`.
-## Install the agent using wrapper script
+## Install the agent
++
+### [Wrapper script](#tab/wrapper-script)
The following steps configure setup of the agent for Log Analytics in Azure and Azure Government cloud using the wrapper script for Linux computers that can communicate directly or through a proxy server to download the agent hosted on GitHub and install the agent.
If authentication is required in either case, you need to specify the username a
wget https://raw.githubusercontent.com/Microsoft/OMS-Agent-for-Linux/master/installer/scripts/onboard_agent.sh && sh onboard_agent.sh -p [protocol://]<proxy user>:<proxy password>@<proxyhost>[:port] -w <YOUR WORKSPACE ID> -s <YOUR WORKSPACE PRIMARY KEY> ```
-2. To configure the Linux computer to connect to Log Analytics workspace in Azure Government cloud, run the following command providing the workspace ID and primary key copied earlier. The following command downloads the agent, validates its checksum, and installs it.
+1. To configure the Linux computer to connect to Log Analytics workspace in Azure Government cloud, run the following command providing the workspace ID and primary key copied earlier. The following command downloads the agent, validates its checksum, and installs it.
``` wget https://raw.githubusercontent.com/Microsoft/OMS-Agent-for-Linux/master/installer/scripts/onboard_agent.sh && sh onboard_agent.sh -w <YOUR WORKSPACE ID> -s <YOUR WORKSPACE PRIMARY KEY> -d opinsights.azure.us
If authentication is required in either case, you need to specify the username a
``` wget https://raw.githubusercontent.com/Microsoft/OMS-Agent-for-Linux/master/installer/scripts/onboard_agent.sh && sh onboard_agent.sh -p [protocol://]<proxy user>:<proxy password>@<proxyhost>[:port] -w <YOUR WORKSPACE ID> -s <YOUR WORKSPACE PRIMARY KEY> -d opinsights.azure.us ```
-2. Restart the agent by running the following command:
+1. Restart the agent by running the following command:
``` sudo /opt/microsoft/omsagent/bin/service_control restart [<workspace id>] ``` -
-## Install the agent manually
+### [Shell](#tab/shell)
The Log Analytics agent for Linux is provided in a self-extracting and installable shell script bundle. This bundle contains Debian and RPM packages for each of the agent components and can be installed directly or extracted to retrieve the individual packages. One bundle is provided for x64 and one for x86 architectures.
The Log Analytics agent for Linux is provided in a self-extracting and installab
> For Azure VMs, we recommend you install the agent on them using the [Azure Log Analytics VM extension](../../virtual-machines/extensions/oms-linux.md) for Linux. - 1. [Download](https://github.com/microsoft/OMS-Agent-for-Linux#azure-install-guide) and transfer the appropriate bundle (x64 or x86) to your Linux VM or physical computer, using scp/sftp.
-2. Install the bundle by using the `--install` argument. To onboard to a Log Analytics workspace during installation, provide the `-w <WorkspaceID>` and `-s <workspaceKey>` parameters copied earlier.
+1. Install the bundle by using the `--install` argument. To onboard to a Log Analytics workspace during installation, provide the `-w <WorkspaceID>` and `-s <workspaceKey>` parameters copied earlier.
>[!NOTE]
- >You need to use the `--upgrade` argument if any dependent packages such as omi, scx, omsconfig or their older versions are installed, as would be the case if the system Center Operations Manager agent for Linux is already installed.
+    > Use the `--upgrade` argument if any dependent packages such as omi, scx, omsconfig, or their older versions are installed, as would be the case if the System Center Operations Manager agent for Linux is already installed.
+
+ ```
+ sudo sh ./omsagent-*.universal.x64.sh --install -w <workspace id> -s <shared key> --skip-docker-provider-install
+ ```
-> [!NOTE]
-> Because the [Container Monitoring solution](../containers/containers.md) is being retired, the following documentation uses the optional setting --skip-docker-provider-install to disable the Container Monitoring data collection.
+ > [!NOTE]
+ > The command above uses the optional `--skip-docker-provider-install` flag to disable the Container Monitoring data collection because the [Container Monitoring solution](../containers/containers.md) is being retired.
- ```
- sudo sh ./omsagent-*.universal.x64.sh --install -w <workspace id> -s <shared key> --skip-docker-provider-install
- ```
-
-3. To configure the Linux agent to install and connect to a Log Analytics workspace through a Log Analytics gateway, run the following command providing the proxy, workspace ID, and workspace key parameters. This configuration can be specified on the command line by including `-p [protocol://][user:password@]proxyhost[:port]`. The *proxyhost* property accepts a fully qualified domain name or IP address of the Log Analytics gateway server.
+1. To configure the Linux agent to install and connect to a Log Analytics workspace through a Log Analytics gateway, run the following command providing the proxy, workspace ID, and workspace key parameters. This configuration can be specified on the command line by including `-p [protocol://][user:password@]proxyhost[:port]`. The *proxyhost* property accepts a fully qualified domain name or IP address of the Log Analytics gateway server.
   ```
   sudo sh ./omsagent-*.universal.x64.sh --upgrade -p https://<proxy address>:<proxy port> -w <workspace id> -s <shared key>

   sudo sh ./omsagent-*.universal.x64.sh --upgrade -p https://<proxy user>:<proxy password>@<proxy address>:<proxy port> -w <workspace id> -s <shared key>
   ```
-4. To configure the Linux computer to connect to a Log Analytics workspace in Azure Government cloud, run the following command providing the workspace ID and primary key copied earlier.
+1. To configure the Linux computer to connect to a Log Analytics workspace in Azure Government cloud, run the following command providing the workspace ID and primary key copied earlier.
   ```
   sudo sh ./omsagent-*.universal.x64.sh --upgrade -w <workspace id> -s <shared key> -d opinsights.azure.us
   ```
-If you want to install the agent packages and configure it to report to a specific Log Analytics workspace at a later time, run the following command:
+To install the agent packages and configure the agent to report to a specific Log Analytics workspace at a later time, run:
```
sudo sh ./omsagent-*.universal.x64.sh --upgrade
```
-If you want to extract the agent packages from the bundle without installing the agent, run the following command:
+To extract the agent packages from the bundle without installing the agent, run:
```
sudo sh ./omsagent-*.universal.x64.sh --extract
```

## Upgrade from a previous release

Upgrading from a previous version, starting with version 1.0.0-47, is supported in each release. Perform the installation with the `--upgrade` parameter to upgrade all components of the agent to the latest version.
azure-monitor Agent Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/agent-windows.md
This article provides information on how to install the Log Analytics agent on Windows computers by using the following methods:
-* Manual installation using the [setup wizard](#install-agent-using-setup-wizard) or [command line](#install-agent-using-command-line)
-* [Azure Automation Desired State Configuration (DSC)](#install-agent-using-dsc-in-azure-automation)
+* Manual installation using the setup wizard or command line.
+* Azure Automation Desired State Configuration (DSC).
The installation methods described in this article are typically used for virtual machines on-premises or in other clouds. For more efficient options that you can use for Azure virtual machines, see [Installation options](./log-analytics-agent.md#installation-options).
> [!NOTE]
> Installing the Log Analytics agent typically won't require you to restart the machine.
-## Supported operating systems
+## Requirements
+
+### Supported operating systems
For a list of Windows versions supported by the Log Analytics agent, see [Overview of Azure Monitor agents](agents-overview.md#supported-operating-systems).
1. Update to the latest version of the Windows agent (version 10.20.18029).
1. We recommend that you configure the agent to [use TLS 1.2](agent-windows.md#configure-agent-to-use-tls-12).
-## Network requirements
-
-For the network requirements for the Windows agent, see [Log Analytics agent overview](./log-analytics-agent.md#network-requirements).
-
-## Configure agent to use TLS 1.2
-
-[TLS 1.2](/windows-server/security/tls/tls-registry-settings#tls-12) protocol ensures the security of data in transit for communication between the Windows agent and the Log Analytics service. If you're installing on an [operating system without TLS 1.2 enabled by default](../logs/data-security.md#sending-data-securely-using-tls-12), configure TLS 1.2 by following these steps:
+### Network requirements
+See [Log Analytics agent overview](./log-analytics-agent.md#network-requirements) for the network requirements for the Windows agent.
+
+### Configure agent to use TLS 1.2
+
+[TLS 1.2](/windows-server/security/tls/tls-registry-settings#tls-12) protocol ensures the security of data in transit for communication between the Windows agent and the Log Analytics service. If you're installing on an [operating system without TLS 1.2 enabled by default](../logs/data-security.md#sending-data-securely-using-tls-12), configure TLS 1.2 by following these steps:
1. Locate the following registry subkey: **HKEY_LOCAL_MACHINE\System\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols**.
1. Create a subkey under **Protocols** for TLS 1.2: **HKLM\System\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.2**.
Configure .NET Framework 4.6 or later to support secure cryptography, because by default it's disabled:
1. Create the DWORD value **SchUseStrongCrypto** under this subkey with a value of **1**.
1. Restart the system for the settings to take effect.
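The registry steps above can be sketched in PowerShell, run from an elevated session. This is a sketch, not the official procedure: the `Client`/`Server` subkey names and the .NET `v4.0.30319` path are assumptions based on the standard TLS registry layout.

```powershell
# Sketch: enable TLS 1.2 for SCHANNEL and strong .NET cryptography (restart afterward).
$proto = 'HKLM:\System\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.2'
foreach ($role in 'Client', 'Server') {
    New-Item -Path "$proto\$role" -Force | Out-Null
    New-ItemProperty -Path "$proto\$role" -Name 'Enabled' -Value 1 -PropertyType DWORD -Force | Out-Null
    New-ItemProperty -Path "$proto\$role" -Name 'DisabledByDefault' -Value 0 -PropertyType DWORD -Force | Out-Null
}
# .NET Framework 4.6 or later: require strong cryptography (SchUseStrongCrypto = 1).
New-ItemProperty -Path 'HKLM:\SOFTWARE\Microsoft\.NETFramework\v4.0.30319' -Name 'SchUseStrongCrypto' -Value 1 -PropertyType DWORD -Force | Out-Null
```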
-## Workspace ID and key
+### Workspace ID and key
Regardless of the installation method used, you'll require the workspace ID and key for the Log Analytics workspace that the agent will connect to. Select the workspace from the **Log Analytics workspaces** menu in the Azure portal. Then in the **Settings** section, select **Agents management**.
> [!NOTE]
> You can't configure the agent to report to more than one workspace during initial setup. [Add or remove a workspace](agent-manage.md#adding-or-removing-a-workspace) after installation by updating the settings from Control Panel or PowerShell.
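For the PowerShell route, the agent exposes a COM configuration interface that can be scripted. A minimal sketch, assuming the agent is already installed on the machine; the workspace ID and key are placeholders:

```powershell
# Sketch: attach an additional workspace to an installed Windows agent.
$mma = New-Object -ComObject 'AgentConfigManager.MgmtSvcCfg'
$mma.AddCloudWorkspace('<workspace id>', '<workspace key>')
$mma.ReloadConfiguration()
```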
-## Install agent using setup wizard
+## Install the agent
++
+### [Setup wizard](#tab/setup-wizard)
-The following steps install and configure the Log Analytics agent in Azure and Azure Government cloud by using the setup wizard for the agent on your computer. If you want to learn how to configure the agent to also report to a System Center Operations Manager management group, see [Deploy the Operations Manager agent with the Agent Setup Wizard](/system-center/scom/manage-deploy-windows-agent-manually#to-deploy-the-operations-manager-agent-with-the-agent-setup-wizard).
+The following steps install and configure the Log Analytics agent in Azure and Azure Government cloud by using the setup wizard for the agent on your computer. If you want to learn how to configure the agent to also report to a System Center Operations Manager management group, see [Deploy the Operations Manager agent with the Agent Setup Wizard](/system-center/scom/manage-deploy-windows-agent-manually#to-deploy-the-operations-manager-agent-with-the-agent-setup-wizard).
-1. In your Log Analytics workspace, from the **Windows Servers** page you navigated to earlier, select the appropriate **Download Windows Agent** version to download depending on the processor architecture of the Windows operating system.
-1. Run setup to install the agent on your computer.
-1. On the **Welcome** page, select **Next**.
-1. On the **License Terms** page, read the license and then select **I Agree**.
-1. On the **Destination Folder** page, change or keep the default installation folder and then select **Next**.
-1. On the **Agent Setup Options** page, choose to connect the agent to Azure Log Analytics and then select **Next**.
-1. On the **Azure Log Analytics** page:
- 1. Paste the **Workspace ID** and **Workspace Key (Primary Key)** that you copied earlier. If the computer should report to a Log Analytics workspace in Azure Government cloud, select **Azure US Government** from the **Azure Cloud** dropdown list.
- 1. If the computer needs to communicate through a proxy server to the Log Analytics service, select **Advanced** and provide the URL and port number of the proxy server. If your proxy server requires authentication, enter the username and password to authenticate with the proxy server and then select **Next**.
-1. Select **Next** after you've finished providing the necessary configuration settings.<br><br> ![Screenshot that shows pasting Workspace ID and Primary Key.](media/agent-windows/log-analytics-mma-setup-laworkspace.png)<br><br>
-1. On the **Ready to Install** page, review your choices and then select **Install**.
-1. On the **Configuration completed successfully** page, select **Finish**.
+1. In your Log Analytics workspace, from the **Windows Servers** page you navigated to earlier, select the appropriate **Download Windows Agent** version to download depending on the processor architecture of the Windows operating system.
+2. Run Setup to install the agent on your computer.
+3. On the **Welcome** page, click **Next**.
+4. On the **License Terms** page, read the license and then click **I Agree**.
+5. On the **Destination Folder** page, change or keep the default installation folder and then click **Next**.
+6. On the **Agent Setup Options** page, choose to connect the agent to Azure Log Analytics and then click **Next**.
+7. On the **Azure Log Analytics** page, perform the following:
+   1. Paste the **Workspace ID** and **Workspace Key (Primary Key)** that you copied earlier. If the computer should report to a Log Analytics workspace in Azure Government cloud, select **Azure US Government** from the **Azure Cloud** drop-down list.
+   2. If the computer needs to communicate through a proxy server to the Log Analytics service, click **Advanced** and provide the URL and port number of the proxy server. If your proxy server requires authentication, type the username and password to authenticate with the proxy server and then click **Next**.
+8. Click **Next** once you have completed providing the necessary configuration settings.<br><br> ![paste Workspace ID and Primary Key](media/agent-windows/log-analytics-mma-setup-laworkspace.png)<br><br>
+9. On the **Ready to Install** page, review your choices and then click **Install**.
+10. On the **Configuration completed successfully** page, click **Finish**.
-When setup is finished, the **Microsoft Monitoring Agent** appears in **Control Panel**. To confirm it's reporting to Log Analytics, review [Verify agent connectivity to Log Analytics](#verify-agent-connectivity-to-azure-monitor).
+When complete, the **Microsoft Monitoring Agent** appears in **Control Panel**. To confirm it is reporting to Log Analytics, review [Verify agent connectivity to Log Analytics](#verify-agent-connectivity-to-azure-monitor).
-## Install agent using command line
+### [Command Line](#tab/command-line)
-The downloaded file for the agent is a self-contained installation package. The setup program for the agent and supporting files are contained in the package and need to be extracted to properly install by using the command line shown in the following examples.
+The downloaded file for the agent is a self-contained installation package. The setup program for the agent and supporting files are contained in the package and need to be extracted in order to properly install using the command line shown in the following examples.
>[!NOTE]
>If you want to upgrade an agent, you need to use the Log Analytics scripting API. For more information, see [Managing and maintaining the Log Analytics agent for Windows and Linux](agent-manage.md).
The following table highlights the specific parameters supported by setup for the agent.
>[!NOTE]
>The string values for the parameters *OPINSIGHTS_WORKSPACE_ID* and *OPINSIGHTS_WORKSPACE_KEY* need to be enclosed in double quotation marks to instruct Windows Installer to interpret as valid options for the package.
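As an illustration of that quoting requirement, a hedged silent-install command line using the extracted `setup.exe` might look like the following. The property names match those used in the DSC script later in this article; the workspace values are placeholders, not real values:

```cmd
setup.exe /qn NOAPM=1 ADD_OPINSIGHTS_WORKSPACE=1 OPINSIGHTS_WORKSPACE_ID="<workspace id>" OPINSIGHTS_WORKSPACE_KEY="<workspace key>" AcceptEndUserLicenseAgreement=1
```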
-## Install agent using DSC in Azure Automation
+### [Azure Automation](#tab/azure-automation)
-You can use the following script example to install the agent by using Azure Automation DSC. If you don't have an Automation account, see [Get started with Azure Automation](../../automation/index.yml) to understand requirements and steps for creating an Automation account required before you use Automation DSC. If you aren't familiar with Automation DSC, see [Getting started with Automation DSC](../../automation/automation-dsc-getting-started.md).
+You can use the following script example to install the agent using Azure Automation DSC. If you do not have an Automation account, see [Get started with Azure Automation](../../automation/index.yml) to understand requirements and steps for creating an Automation account required before using Automation DSC. If you are not familiar with Automation DSC, review [Getting started with Automation DSC](../../automation/automation-dsc-getting-started.md).
The following example installs the 64-bit agent, identified by the `URI` value. You can also use the 32-bit version by replacing the URI value. The URIs for both versions are:
- **Windows 32-bit agent:** https://go.microsoft.com/fwlink/?LinkId=828604

>[!NOTE]
->This procedure and script example doesn't support upgrading the agent already deployed to a Windows computer.
-
-The 32-bit and 64-bit versions of the agent package have different product codes, and new versions released also have a unique value. The product code is a GUID that's the principal identification of an application or product and is represented by the Windows Installer **ProductCode** property. The `ProductId` value in the **MMAgent.ps1** script has to match the product code from the 32-bit or 64-bit agent installer package.
-
-To retrieve the product code from the agent installer package directly, you can use Orca.exe from the [Windows SDK Components for Windows Installer Developers](/windows/win32/msi/platform-sdk-components-for-windows-installer-developers) that's a component of the Windows Software Development Kit. Or you can use PowerShell by following an [example script](https://www.scconfigmgr.com/2014/08/22/how-to-get-msi-file-information-with-powershell/) written by a Microsoft Valuable Professional. For either approach, you first need to extract the **MOMagent.msi** file from the MMASetup installation package. For instructions, see the first step in the section [Install agent using command line](#install-agent-using-command-line).
-
-1. Import the xPSDesiredStateConfiguration DSC Module from [https://www.powershellgallery.com/packages/xPSDesiredStateConfiguration](https://www.powershellgallery.com/packages/xPSDesiredStateConfiguration) into Azure Automation.
-1. Create Azure Automation variable assets for *OPSINSIGHTS_WS_ID* and *OPSINSIGHTS_WS_KEY*. Set *OPSINSIGHTS_WS_ID* to your Log Analytics workspace ID. Set *OPSINSIGHTS_WS_KEY* to the primary key of your workspace.
-1. Copy the script and save it as **MMAgent.ps1**.
-
- ```powershell
- Configuration MMAgent
- {
- $OIPackageLocalPath = "C:\Deploy\MMASetup-AMD64.exe"
- $OPSINSIGHTS_WS_ID = Get-AutomationVariable -Name "OPSINSIGHTS_WS_ID"
- $OPSINSIGHTS_WS_KEY = Get-AutomationVariable -Name "OPSINSIGHTS_WS_KEY"
-
- Import-DscResource -ModuleName xPSDesiredStateConfiguration
- Import-DscResource -ModuleName PSDesiredStateConfiguration
-
- Node OMSnode {
- Service OIService
- {
- Name = "HealthService"
- State = "Running"
- DependsOn = "[Package]OI"
- }
-
- xRemoteFile OIPackage {
- Uri = "https://go.microsoft.com/fwlink/?LinkId=828603"
- DestinationPath = $OIPackageLocalPath
- }
-
- Package OI {
- Ensure = "Present"
- Path = $OIPackageLocalPath
- Name = "Microsoft Monitoring Agent"
- ProductId = "8A7F2C51-4C7D-4BFD-9014-91D11F24AAE2"
- Arguments = '/C:"setup.exe /qn NOAPM=1 ADD_OPINSIGHTS_WORKSPACE=1 OPINSIGHTS_WORKSPACE_ID=' + $OPSINSIGHTS_WS_ID + ' OPINSIGHTS_WORKSPACE_KEY=' + $OPSINSIGHTS_WS_KEY + ' AcceptEndUserLicenseAgreement=1"'
- DependsOn = "[xRemoteFile]OIPackage"
+>This procedure and script example don't support upgrading the agent already deployed to a Windows computer.
+
+The 32-bit and 64-bit versions of the agent package have different product codes, and new versions released also have a unique value. The product code is a GUID that is the principal identification of an application or product and is represented by the Windows Installer **ProductCode** property. The `ProductId` value in the **MMAgent.ps1** script has to match the product code from the 32-bit or 64-bit agent installer package.
+
+To retrieve the product code from the agent install package directly, you can use Orca.exe from the [Windows SDK Components for Windows Installer Developers](/windows/win32/msi/platform-sdk-components-for-windows-installer-developers), a component of the Windows Software Development Kit, or use PowerShell with an [example script](https://www.scconfigmgr.com/2014/08/22/how-to-get-msi-file-information-with-powershell/) written by a Microsoft Most Valuable Professional (MVP). For either approach, you first need to extract the **MOMagent.msi** file from the MMASetup installation package, as explained in the first step of the instructions for installing the agent using the command line.
+
+1. Import the xPSDesiredStateConfiguration DSC Module from [https://www.powershellgallery.com/packages/xPSDesiredStateConfiguration](https://www.powershellgallery.com/packages/xPSDesiredStateConfiguration) into Azure Automation.
+2. Create Azure Automation variable assets for *OPSINSIGHTS_WS_ID* and *OPSINSIGHTS_WS_KEY*. Set *OPSINSIGHTS_WS_ID* to your Log Analytics workspace ID and set *OPSINSIGHTS_WS_KEY* to the primary key of your workspace.
+3. Copy the script and save it as MMAgent.ps1.
+
+ ```powershell
+ Configuration MMAgent
+ {
+ $OIPackageLocalPath = "C:\Deploy\MMASetup-AMD64.exe"
+ $OPSINSIGHTS_WS_ID = Get-AutomationVariable -Name "OPSINSIGHTS_WS_ID"
+ $OPSINSIGHTS_WS_KEY = Get-AutomationVariable -Name "OPSINSIGHTS_WS_KEY"
+
+ Import-DscResource -ModuleName xPSDesiredStateConfiguration
+ Import-DscResource -ModuleName PSDesiredStateConfiguration
+
+ Node OMSnode {
+ Service OIService
+ {
+ Name = "HealthService"
+ State = "Running"
+ DependsOn = "[Package]OI"
+ }
+
+ xRemoteFile OIPackage {
+ Uri = "https://go.microsoft.com/fwlink/?LinkId=828603"
+ DestinationPath = $OIPackageLocalPath
+ }
+
+ Package OI {
+ Ensure = "Present"
+ Path = $OIPackageLocalPath
+ Name = "Microsoft Monitoring Agent"
+ ProductId = "8A7F2C51-4C7D-4BFD-9014-91D11F24AAE2"
+ Arguments = '/C:"setup.exe /qn NOAPM=1 ADD_OPINSIGHTS_WORKSPACE=1 OPINSIGHTS_WORKSPACE_ID=' + $OPSINSIGHTS_WS_ID + ' OPINSIGHTS_WORKSPACE_KEY=' + $OPSINSIGHTS_WS_KEY + ' AcceptEndUserLicenseAgreement=1"'
+ DependsOn = "[xRemoteFile]OIPackage"
+ }
    }
}
- }
-
- ```
+
+ ```
1. Update the `ProductId` value in the script with the product code extracted from the latest version of the agent installation package by using the methods recommended earlier.
1. [Import the MMAgent.ps1 configuration script](../../automation/automation-dsc-getting-started.md#import-a-configuration-into-azure-automation) into your Automation account.
1. [Assign a Windows computer or node](../../automation/automation-dsc-getting-started.md#enable-an-azure-resource-manager-vm-for-management-with-state-configuration) to the configuration. Within 15 minutes, the node checks its configuration and the agent is pushed to the node.

## Verify agent connectivity to Azure Monitor

After installation of the agent is finished, you can verify that it's successfully connected and reporting in two ways.
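One of those checks can be scripted as a log query against the workspace. A sketch, assuming the workspace's standard `Heartbeat` table and its `Computer` and `OSType` columns:

```kusto
Heartbeat
| where OSType == "Windows"
| summarize LastHeartbeat = max(TimeGenerated) by Computer
| where LastHeartbeat > ago(30m)
```

Computers that appear in the result have reported a heartbeat within the last 30 minutes; a machine missing from the list is not connected.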
azure-monitor Gateway https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/gateway.md
A gateway can be multihomed to up to ten workspaces using the Azure Monitor Agent and [data collection rules](./data-collection-rule-azure-monitor-agent.md). Using the legacy Microsoft Monitoring Agent, you can only multihome up to four workspaces, as that is the total number of workspaces the legacy Windows agent supports.
-Each agent must have network connectivity to the gateway so that agents can automatically transfer data to and from the gateway. Avoid installing the gateway on a domain controller. Linux computers that are behind a gateway server cannot use the [wrapper script installation](../agents/agent-linux.md#install-the-agent-using-wrapper-script) method to install the Log Analytics agent for Linux. The agent must be downloaded manually, copied to the computer, and installed manually because the gateway only supports communicating with the Azure services mentioned earlier.
+Each agent must have network connectivity to the gateway so that agents can automatically transfer data to and from the gateway. Avoid installing the gateway on a domain controller. Linux computers that are behind a gateway server cannot use the [wrapper script installation](../agents/agent-linux.md#install-the-agent) method to install the Log Analytics agent for Linux. The agent must be downloaded manually, copied to the computer, and installed manually because the gateway only supports communicating with the Azure services mentioned earlier.
The following diagram shows data flowing from direct agents, through the gateway, to Azure Automation and Log Analytics. The agent proxy configuration must match the port that the Log Analytics gateway is configured with.
1. Sign onto the Windows server that is a member of the NLB cluster with an administrative account.
2. Open Network Load Balancing Manager in Server Manager, click **Tools**, and then click **Network Load Balancing Manager**.
-3. To connect an Log Analytics gateway server with the Microsoft Monitoring Agent installed, right-click the cluster's IP address, and then click **Add Host to Cluster**.
+3. To connect a Log Analytics gateway server with the Microsoft Monitoring Agent installed, right-click the cluster's IP address, and then click **Add Host to Cluster**.
![Network Load Balancing Manager - Add Host To Cluster](./media/gateway/nlb02.png)
azure-monitor Log Analytics Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/log-analytics-agent.md
## Installation options
-There are multiple methods to install the Log Analytics agent and connect your machine to Azure Monitor depending on your requirements. The following sections list the possible methods for different types of virtual machine.
+This section explains how to install the Log Analytics agent on different types of virtual machines and connect the machines to Azure Monitor.
+
> [!NOTE]
> Cloning a machine with the Log Analytics Agent already configured is *not* supported. If the agent is already associated with a workspace, cloning won't work for "golden images."
### Azure virtual machine

- Use [VM insights](../vm/vminsights-enable-overview.md) to install the agent for a [single machine using the Azure portal](../vm/vminsights-enable-portal.md) or for [multiple machines at scale](../vm/vminsights-enable-policy.md). This installs the Log Analytics agent and [Dependency agent](../vm/vminsights-dependency-agent-maintenance.md).
-- Log Analytics VM extension for [Windows](../../virtual-machines/extensions/oms-windows.md) or [Linux](../../virtual-machines/extensions/oms-linux.md) can be installed with the Azure portal, Azure CLI, Azure PowerShell, or a Azure Resource Manager template.
+- Log Analytics VM extension for [Windows](../../virtual-machines/extensions/oms-windows.md) or [Linux](../../virtual-machines/extensions/oms-linux.md) can be installed with the Azure portal, Azure CLI, Azure PowerShell, or an Azure Resource Manager template.
- [Microsoft Defender for Cloud can provision the Log Analytics agent](../../security-center/security-center-enable-data-collection.md) on all supported Azure VMs and any new ones that are created if you enable it to monitor for security vulnerabilities and threats.
- Install for individual Azure virtual machines [manually from the Azure portal](../vm/monitor-virtual-machine.md?toc=%2fazure%2fazure-monitor%2ftoc.json).
- Connect the machine to a workspace from the **Virtual machines** option in the **Log Analytics workspaces** menu in the Azure portal.
- Use [Azure Arc-enabled servers](../../azure-arc/servers/overview.md) to deploy and manage the Log Analytics VM extension. Review the [deployment options](../../azure-arc/servers/concept-log-analytics-extension-deployment.md) to understand the different deployment methods available for the extension on machines registered with Azure Arc-enabled servers.
- [Manually install](../agents/agent-windows.md) the agent from the command line.
-- Automate the installation with [Azure Automation DSC](../agents/agent-windows.md#install-agent-using-dsc-in-azure-automation).
+- Automate the installation with [Azure Automation DSC](../agents/agent-windows.md#install-the-agent).
- Use a [Resource Manager template with Azure Stack](https://github.com/Azure/AzureStack-QuickStart-Templates/tree/master/MicrosoftMonitoringAgent-ext-win).

### Linux virtual machine on-premises or in another cloud
azure-monitor Container Insights Metric Alerts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-metric-alerts.md
Before you start, confirm the following:
The value shown for AKS should be version **ciprod05262020** or later. The value shown for Azure Arc-enabled Kubernetes cluster should be version **ciprod09252020** or later. If your cluster has an older version, see [How to upgrade the Container insights agent](container-insights-manage-agent.md#upgrade-agent-on-aks-cluster) for steps to get the latest version. For more information related to the agent release, see [agent release history](https://github.com/microsoft/docker-provider/tree/ci_feature_prod).

To verify metrics are being collected, you can use Azure Monitor metrics explorer and verify from the **Metric namespace** that **insights** is listed. If it is, you can go ahead and start setting up the alerts. If you don't see any metrics collected, the cluster Service Principal or MSI is missing the necessary permissions. To verify the SPN or MSI is a member of the **Monitoring Metrics Publisher** role, follow the steps described in the section [Upgrade per cluster using Azure CLI](container-insights-update-metrics.md#update-one-cluster-by-using-the-azure-cli) to confirm and set role assignment.
+
+> [!TIP]
+> Download the new ConfigMap from [here](https://raw.githubusercontent.com/microsoft/Docker-Provider/ci_prod/kubernetes/container-azm-ms-agentconfig.yaml).
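Fetching and applying the downloaded ConfigMap can be sketched as follows, assuming `kubectl` is configured against the target cluster:

```bash
# Download the ConfigMap from the URL in the tip above, then apply it to the cluster.
curl -LO https://raw.githubusercontent.com/microsoft/Docker-Provider/ci_prod/kubernetes/container-azm-ms-agentconfig.yaml
kubectl apply -f container-azm-ms-agentconfig.yaml
```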
## Alert rules overview
azure-monitor Azure Key Vault Deprecated https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/insights/azure-key-vault-deprecated.md
# Azure Key Vault Analytics solution in Azure Monitor

> [!NOTE]
-> This solution is deprecated. [We now recommend using Key Vault insights](./key-vault-insights-overview.md).
+> This solution is deprecated. [We now recommend using Key Vault insights](../../key-vault/key-vault-insights-overview.md).
![Key Vault symbol](media/azure-key-vault/key-vault-analytics-symbol.png)
azure-monitor Resource Manager Sql Insights https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/insights/resource-manager-sql-insights.md
Last updated 03/25/2021
# Resource Manager template samples for SQL Insights (preview)
-This article includes sample [Azure Resource Manager templates](../../azure-resource-manager/templates/syntax.md) to enable SQL Insights (preview) for monitoring SQL running in Azure. See the [SQL Insights (preview) documentation](sql-insights-overview.md) for details on the offering and versions of SQL we support. Each sample includes a template file and a parameters file with sample values to provide to the template.
+This article includes sample [Azure Resource Manager templates](../../azure-resource-manager/templates/syntax.md) to enable SQL Insights (preview) for monitoring SQL running in Azure. See the [SQL Insights (preview) documentation](/azure/azure-sql/database/sql-insights-overview) for details on the offering and versions of SQL we support. Each sample includes a template file and a parameters file with sample values to provide to the template.
[!INCLUDE [azure-monitor-samples](../../../includes/azure-monitor-resource-manager-samples.md)]
## Next steps

* [Get other sample templates for Azure Monitor](../resource-manager-samples.md).
-* [Learn more about SQL Insights (preview)](sql-insights-overview.md).
+* [Learn more about SQL Insights (preview)](/azure/azure-sql/database/sql-insights-overview).
azure-monitor Customer Managed Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/customer-managed-keys.md
> [!IMPORTANT]
> Customer-managed key capability is regional. Your Azure Key Vault, cluster and linked workspaces must be in the same region, but they can be in different subscriptions.
-![Customer-managed key overview](media/customer-managed-keys/cmk-overview.png)
+[![Customer-managed key overview](media/customer-managed-keys/cmk-overview.png "Screenshot of Customer-managed key diagram.")](media/customer-managed-keys/cmk-overview.png#lightbox)
1. Key Vault
2. Log Analytics cluster resource having managed identity with permissions to Key Vault; the identity is propagated to the underlay dedicated cluster storage
Create or use an existing Azure Key Vault in the region where the cluster is planned, and generate or import a key to be used for logs encryption. The Azure Key Vault must be configured as recoverable, to protect your key and the access to your data in Azure Monitor. You can verify this configuration under properties in your Key Vault; both *Soft delete* and *Purge protection* should be enabled.
-![Soft delete and purge protection settings](media/customer-managed-keys/soft-purge-protection.png)
+[![Soft delete and purge protection settings](media/customer-managed-keys/soft-purge-protection.png "Screenshot of Key Vault soft delete and purge protection properties")](media/customer-managed-keys/soft-purge-protection.png#lightbox)
These settings can be updated in Key Vault via CLI and PowerShell:
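As an Azure CLI sketch (the vault name is a placeholder; soft delete is enabled by default for new vaults, and purge protection can be enabled but never disabled once set):

```azurecli
az keyvault update --name <vault-name> --enable-purge-protection true
```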
## Grant Key Vault permissions
-Create Access Policy in Key Vault to grants permissions to your cluster. These permissions are used by the underlay cluster storage. Open your Key Vault in Azure portal and click *Access Policies* then *+ Add Access Policy* to create a policy with these settings:
+There are two permission models in Key Vault to grant permissions to your cluster and its underlying storage: Vault access policy and Azure role-based access control.
-- Key permissions: select *Get*, *Wrap Key* and *Unwrap Key*.
-- Select principal: depending on the identity type used in the cluster (system or user assigned managed identity)
- - System assigned managed identity - enter the cluster name or cluster principal ID
- - User assigned managed identity - enter the identity name
+1. Vault access policy
-![grant Key Vault permissions](media/customer-managed-keys/grant-key-vault-permissions-8bit.png)
+ Open your Key Vault in Azure portal and click *Access Policies*, select *Vault access policy*, then click *+ Add Access Policy* to create a policy with these settings:
-The *Get* permission is required to verify that your Key Vault is configured as recoverable to protect your key and the access to your Azure Monitor data.
+   - Key permissions: select *Get*, *Wrap Key* and *Unwrap Key*.
+   - Select principal: depending on the identity type used in the cluster (system or user assigned managed identity)
+ - System assigned managed identity - enter the cluster name or cluster principal ID
+ - User assigned managed identity - enter the identity name
+
+ [![grant Key Vault permissions](media/customer-managed-keys/grant-key-vault-permissions-8bit.png "Screenshot of Key Vault access policy permissions")](media/customer-managed-keys/grant-key-vault-permissions-8bit.png#lightbox)
+
+ The *Get* permission is required to verify that your Key Vault is configured as recoverable to protect your key and the access to your Azure Monitor data.
+
+2. Azure role-based access control
+ Open your Key Vault in the Azure portal and click *Access Policies*, select *Azure role-based access control*, then open *Access control (IAM)* and add the *Key Vault Crypto Service Encryption User* role assignment.
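Either model can also be configured from the Azure CLI; a hedged sketch (the vault name, scope, and principal ID values are placeholders):

```azurecli
# Vault access policy model: grant Get, Wrap Key, and Unwrap Key to the cluster's managed identity
az keyvault set-policy --name <your-key-vault> \
    --object-id <cluster-principal-id> \
    --key-permissions get wrapKey unwrapKey

# Azure RBAC model: assign the Key Vault Crypto Service Encryption User role instead
az role assignment create --assignee <cluster-principal-id> \
    --role "Key Vault Crypto Service Encryption User" \
    --scope <key-vault-resource-id>
```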
## Update cluster with key identifier details
This step updates dedicated cluster storage with the key and version to use for
>- Key rotation can be automatic or require an explicit key update; see [Key rotation](#key-rotation) to determine the approach that is suitable for you before updating the key identifier details in the cluster.
>- A cluster update should not include both identity and key identifier details in the same operation. If you need to update both, the update should be in two consecutive operations.
-![Grant Key Vault permissions](media/customer-managed-keys/key-identifier-8bit.png)
+[![Grant Key Vault permissions](media/customer-managed-keys/key-identifier-8bit.png "Screenshot of Key Vault key identifier details")](media/customer-managed-keys/key-identifier-8bit.png#lightbox)
Update *KeyVaultProperties* in the cluster with the key identifier details.
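As a sketch (the cluster, resource group, and key values are placeholders), the key identifier details can be set with the Azure CLI:

```azurecli
az monitor log-analytics cluster update --name <cluster-name> \
    --resource-group <resource-group> \
    --key-name <key-name> \
    --key-vault-uri https://<your-key-vault>.vault.azure.net/ \
    --key-version <key-version>
```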
azure-monitor Monitor Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/monitor-reference.md
The table below lists the available curated visualizations and more detailed inf
| [Azure Data Explorer insights](/azure/data-explorer/data-explorer-insights) | GA | [Yes](https://portal.azure.com/#blade/Microsoft_Azure_Monitoring/AzureMonitoringBrowseBlade/adxClusterInsights) | Azure Data Explorer Insights provides comprehensive monitoring of your clusters by delivering a unified view of your cluster performance, operations, usage, and failures. | | [Azure HDInsight (preview)](../hdinsight/log-analytics-migration.md#insights) | Preview | No | An Azure Monitor workbook that collects important performance metrics from your HDInsight cluster and provides the visualizations and dashboards for most common scenarios. Gives a complete view of a single HDInsight cluster including resource utilization and application status| | [Azure IoT Edge](../iot-edge/how-to-explore-curated-visualizations.md) | GA | No | Visualize and explore metrics collected from the IoT Edge device right in the Azure portal using Azure Monitor Workbooks based public templates. The curated workbooks use built-in metrics from the IoT Edge runtime. These views don't need any metrics instrumentation from the workload modules. |
- | [Azure Key Vault Insights (preview)](./insights/key-vault-insights-overview.md) | GA | [Yes](https://portal.azure.com/#blade/Microsoft_Azure_Monitoring/AzureMonitoringBrowseBlade/keyvaultsInsights) | Provides comprehensive monitoring of your key vaults by delivering a unified view of your Key Vault requests, performance, failures, and latency. |
+ | [Azure Key Vault Insights (preview)](../key-vault/key-vault-insights-overview.md) | GA | [Yes](https://portal.azure.com/#blade/Microsoft_Azure_Monitoring/AzureMonitoringBrowseBlade/keyvaultsInsights) | Provides comprehensive monitoring of your key vaults by delivering a unified view of your Key Vault requests, performance, failures, and latency. |
| [Azure Monitor Application Insights](./app/app-insights-overview.md) | GA | [Yes](https://portal.azure.com/#blade/Microsoft_Azure_Monitoring/AzureMonitoringBrowseBlade/applicationsInsights) | Extensible Application Performance Management (APM) service which monitors the availability, performance, and usage of your web applications whether they're hosted in the cloud or on-premises. It leverages the powerful data analysis platform in Azure Monitor to provide you with deep insights into your application's operations. It enables you to diagnose errors without waiting for a user to report them. Application Insights includes connection points to a variety of development tools and integrates with Visual Studio to support your DevOps processes. | | [Azure Monitor Log Analytics Workspace](./logs/log-analytics-workspace-insights-overview.md) | Preview | [Yes](https://portal.azure.com/#blade/Microsoft_Azure_Monitoring/AzureMonitoringBrowseBlade/lawsInsights) | Log Analytics Workspace Insights (preview) provides comprehensive monitoring of your workspaces through a unified view of your workspace usage, performance, health, agent, queries, and change log. This article will help you understand how to onboard and use Log Analytics Workspace Insights (preview). | | [Azure Service Bus Insights](../service-bus-messaging/service-bus-insights.md) | Preview | [Yes](https://portal.azure.com/#blade/Microsoft_Azure_Monitoring/AzureMonitoringBrowseBlade/serviceBusInsights) | Azure Service Bus insights provide a view of the overall performance, failures, capacity, and operational health of all your Service Bus resources in a unified interactive experience. |
- | [Azure SQL insights (preview)](./insights/sql-insights-overview.md) | Preview | [Yes](https://portal.azure.com/#blade/Microsoft_Azure_Monitoring/AzureMonitoringBrowseBlade/sqlWorkloadInsights) | A comprehensive interface for monitoring any product in the Azure SQL family. SQL insights uses dynamic management views to expose the data you need to monitor health, diagnose problems, and tune performance. Note: If you are just setting up SQL monitoring, use this instead of the SQL Analytics solution. |
+ | [Azure SQL insights (preview)](/azure/azure-sql/database/sql-insights-overview) | Preview | [Yes](https://portal.azure.com/#blade/Microsoft_Azure_Monitoring/AzureMonitoringBrowseBlade/sqlWorkloadInsights) | A comprehensive interface for monitoring any product in the Azure SQL family. SQL insights uses dynamic management views to expose the data you need to monitor health, diagnose problems, and tune performance. Note: If you are just setting up SQL monitoring, use this instead of the SQL Analytics solution. |
| [Azure Storage Insights](/azure/azure-monitor/insights/storage-insights-overview) | GA | [Yes](https://portal.azure.com/#blade/Microsoft_Azure_Monitoring/AzureMonitoringBrowseBlade/storageInsights) | Provides comprehensive monitoring of your Azure Storage accounts by delivering a unified view of your Azure Storage services performance, capacity, and availability. | | [Azure Network Insights](../network-watcher/network-insights-overview.md) | GA | [Yes](https://portal.azure.com/#blade/Microsoft_Azure_Monitoring/AzureMonitoringBrowseBlade/networkInsights) | Provides a comprehensive view of health and metrics for all your network resources. The advanced search capability helps you identify resource dependencies, enabling scenarios like identifying resources that are hosting your website, by simply searching for your website name. | | [Azure Monitor for Resource Groups](./insights/resource-group-insights.md) | GA | No | Triage and diagnose any problems your individual resources encounter, while offering context as to the health and performance of the resource group as a whole. |
The following table lists Azure services and the data they collect into Azure Mo
| [Azure Monitor](./index.yml) | microsoft.insights/autoscalesettings | [**Yes**](./essentials/metrics-supported.md#microsoftinsightsautoscalesettings) | [**Yes**](./essentials/resource-logs-categories.md#microsoftinsightsautoscalesettings) | | | | [Azure Monitor](./index.yml) | microsoft.insights/components | [**Yes**](./essentials/metrics-supported.md#microsoftinsightscomponents) | [**Yes**](./essentials/resource-logs-categories.md#microsoftinsightscomponents) | [Azure Monitor Application Insights](./app/app-insights-overview.md) | | | [Azure IoT Central](../iot-central/index.yml) | Microsoft.IoTCentral/IoTApps | [**Yes**](./essentials/metrics-supported.md#microsoftiotcentraliotapps) | No | | |
- | [Azure Key Vault](../key-vault/index.yml) | Microsoft.KeyVault/managedHSMs | [**Yes**](./essentials/metrics-supported.md#microsoftkeyvaultmanagedhsms) | [**Yes**](./essentials/resource-logs-categories.md#microsoftkeyvaultmanagedhsms) | [Azure Key Vault Insights (preview)](./insights/key-vault-insights-overview.md) | |
- | [Azure Key Vault](../key-vault/index.yml) | Microsoft.KeyVault/vaults | [**Yes**](./essentials/metrics-supported.md#microsoftkeyvaultvaults) | [**Yes**](./essentials/resource-logs-categories.md#microsoftkeyvaultvaults) | [Azure Key Vault Insights (preview)](./insights/key-vault-insights-overview.md) | |
+ | [Azure Key Vault](../key-vault/index.yml) | Microsoft.KeyVault/managedHSMs | [**Yes**](./essentials/metrics-supported.md#microsoftkeyvaultmanagedhsms) | [**Yes**](./essentials/resource-logs-categories.md#microsoftkeyvaultmanagedhsms) | [Azure Key Vault Insights (preview)](../key-vault/key-vault-insights-overview.md) | |
+ | [Azure Key Vault](../key-vault/index.yml) | Microsoft.KeyVault/vaults | [**Yes**](./essentials/metrics-supported.md#microsoftkeyvaultvaults) | [**Yes**](./essentials/resource-logs-categories.md#microsoftkeyvaultvaults) | [Azure Key Vault Insights (preview)](../key-vault/key-vault-insights-overview.md) | |
| [Azure Kubernetes Service (AKS)](../aks/index.yml) | Microsoft.Kubernetes/connectedClusters | [**Yes**](./essentials/metrics-supported.md#microsoftkubernetesconnectedclusters) | No | | | | [Azure Data Explorer](/azure/data-explorer/) | Microsoft.Kusto/clusters | [**Yes**](./essentials/metrics-supported.md#microsoftkustoclusters) | [**Yes**](./essentials/resource-logs-categories.md#microsoftkustoclusters) | | | | [Azure Logic Apps](../logic-apps/index.yml) | Microsoft.Logic/integrationAccounts | No | [**Yes**](./essentials/resource-logs-categories.md#microsoftlogicintegrationaccounts) | | |
The following table lists Azure services and the data they collect into Azure Mo
| [Service Fabric](../service-fabric/index.yml) | Microsoft.ServiceFabric | No | No | [Service Fabric](../service-fabric/index.yml) | Agent required to monitor guest operating system and workflows.| | [Azure SignalR Service](../azure-signalr/index.yml) | Microsoft.SignalRService/SignalR | [**Yes**](./essentials/metrics-supported.md#microsoftsignalrservicesignalr) | [**Yes**](./essentials/resource-logs-categories.md#microsoftsignalrservicesignalr) | | | | [Azure SignalR Service](../azure-signalr/index.yml) | Microsoft.SignalRService/WebPubSub | [**Yes**](./essentials/metrics-supported.md#microsoftsignalrservicewebpubsub) | [**Yes**](./essentials/resource-logs-categories.md#microsoftsignalrservicewebpubsub) | | |
- | [Azure SQL Managed Instance](/azure/azure-sql/database/monitoring-tuning-index) | Microsoft.Sql/managedInstances | [**Yes**](./essentials/metrics-supported.md#microsoftsqlmanagedinstances) | [**Yes**](./essentials/resource-logs-categories.md#microsoftsqlmanagedinstances) | [Azure SQL Insights (preview)](./insights/sql-insights-overview.md) | |
- | [Azure SQL Database](/azure/azure-sql/database/index) | Microsoft.Sql/servers/databases | [**Yes**](./essentials/metrics-supported.md#microsoftsqlserversdatabases) | No | [Azure SQL Insights (preview)](./insights/sql-insights-overview.md) | |
- | [Azure SQL Database](/azure/azure-sql/database/index) | Microsoft.Sql/servers/elasticpools | [**Yes**](./essentials/metrics-supported.md#microsoftsqlserverselasticpools) | No | [Azure SQL Insights (preview)](./insights/sql-insights-overview.md) | |
+ | [Azure SQL Managed Instance](/azure/azure-sql/database/monitoring-tuning-index) | Microsoft.Sql/managedInstances | [**Yes**](./essentials/metrics-supported.md#microsoftsqlmanagedinstances) | [**Yes**](./essentials/resource-logs-categories.md#microsoftsqlmanagedinstances) | [Azure SQL Insights (preview)](/azure/azure-sql/database/sql-insights-overview) | |
+ | [Azure SQL Database](/azure/azure-sql/database/index) | Microsoft.Sql/servers/databases | [**Yes**](./essentials/metrics-supported.md#microsoftsqlserversdatabases) | No | [Azure SQL Insights (preview)](/azure/azure-sql/database/sql-insights-overview) | |
+ | [Azure SQL Database](/azure/azure-sql/database/index) | Microsoft.Sql/servers/elasticpools | [**Yes**](./essentials/metrics-supported.md#microsoftsqlserverselasticpools) | No | [Azure SQL Insights (preview)](/azure/azure-sql/database/sql-insights-overview) | |
| [Azure Storage](../storage/index.yml) | Microsoft.Storage/storageAccounts | [**Yes**](./essentials/metrics-supported.md#microsoftstoragestorageaccounts) | No | [Azure Storage Insights](/azure/azure-monitor/insights/storage-insights-overview) | | | [Azure Storage Blobs](../storage/blobs/index.yml) | Microsoft.Storage/storageAccounts/blobServices | [**Yes**](./essentials/metrics-supported.md#microsoftstoragestorageaccountsblobservices) | [**Yes**](./essentials/resource-logs-categories.md#microsoftstoragestorageaccountsblobservices) | [Azure Storage Insights](/azure/azure-monitor/insights/storage-insights-overview) | | | [Azure Storage Files](../storage/files/index.yml) | Microsoft.Storage/storageAccounts/fileServices | [**Yes**](./essentials/metrics-supported.md#microsoftstoragestorageaccountsfileservices) | [**Yes**](./essentials/resource-logs-categories.md#microsoftstoragestorageaccountsfileservices) | [Azure Storage Insights](/azure/azure-monitor/insights/storage-insights-overview) | |
azure-monitor Partners https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/partners.md
LogRhythm, a leader in next-generation security information and event management
If you're a LogRhythm customer and are ready to start your Azure journey, you'll need to install and configure the LogRhythm Open Collector and Azure Event Hubs integration. For more information, see the [documentation on configuring Azure Monitor and the Open Collector](https://logrhythm.com/six-tips-for-securing-your-azure-cloud-environment/).
+## Logz.io
+
+![Logz.io logo](./media/partners/logzio.png)
+
+Logz.io delivers the observability that today's developers need to continuously innovate and optimize their modern applications. As a massively scalable, analytics-driven cloud native platform, Logz.io specifically provides DevOps teams with the visibility and data needed to address their most complex, microservices-driven Azure applications.
+
+As modern cloud environments generate overwhelming data volumes, Logz.io makes it easy to organize observability data into dedicated environments for every team, while identifying and eliminating noisy data that clutters the critical data. The result is a more secure, cost efficient, and productive way to implement cross-organizational observability.
+
+Logz.io provides a seamless experience for provisioning Logz.io accounts and configuring Azure resources to send logs to Logz.io from the Azure portal, through its direct integration with Azure.
+
+With the integration, you can:
+- Provision a new Logz.io account from Azure client interfaces like the Azure portal, Azure PowerShell, and the SDK
+- Configure your Azure resources to send logs to Logz.io, a fully managed setup with no infrastructure for customers to set up and operate
+- Seamlessly send logs and metrics to Logz.io. Without the integration, you had to set up event hubs and write Azure Functions to receive logs from Azure Monitor and send them to Logz.io.
+- Easily install the Logz.io agent on virtual machine hosts with a single click
+- Streamline single sign-on (SSO) to Logz.io. Previously, a separate sign-on to Logz.io was required.
+- Get unified billing of the Logz.io SaaS through Azure subscription invoicing
+
+The Logz.io integration with Azure is available in the Azure Marketplace.
+ ## Microfocus ![Microfocus logo.](./media/partners/microfocus.png)
azure-monitor Workbooks Commonly Used Components https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/visualize/workbooks-commonly-used-components.md
You may want to summarize status using a simple visual indication instead of pre
The example below shows how to set up a traffic light icon per computer based on the CPU utilization metric. 1. [Create a new empty workbook](workbooks-create-workbook.md).
-1. [Add a parameters](workbooks-create-workbook.md#add-a-parameter-to-a-workbook), make it a [time range parameter](workbooks-time.md), and name it **TimeRange**.
+1. [Add a parameters](workbooks-create-workbook.md#add-a-parameter-to-an-azure-workbook), make it a [time range parameter](workbooks-time.md), and name it **TimeRange**.
1. Select **Add query** to add a log query control to the workbook. 1. Select the `log` query type, a `Log Analytics` resource type, and a Log Analytics workspace in your subscription that has VM performance data as a resource. 1. In the Query editor, enter:
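As an illustrative sketch only (the `Perf` counter names, the traffic-light thresholds, and the use of the `TimeRange` parameter are assumptions, not the article's actual query), a per-computer CPU query for a traffic light could look like:

```
Perf
| where TimeGenerated {TimeRange}
| where ObjectName == 'Processor' and CounterName == '% Processor Time'
| summarize CPU = avg(CounterValue) by Computer
| extend Status = case(CPU > 80, 'critical', CPU > 60, 'warning', 'normal')
```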
The following example shows how to enable this scenario: Let's say you want the
### Setup parameters
-1. [Create a new empty workbook](workbooks-create-workbook.md) and [add a parameter component](workbooks-create-workbook.md#add-a-parameter-to-a-workbook).
+1. [Create a new empty workbook](workbooks-create-workbook.md) and [add a parameter component](workbooks-create-workbook.md#add-a-parameter-to-an-azure-workbook).
1. Select **Add parameter** to create a new parameter. Use the following settings: - Parameter name: `OsFilter` - Display name: `Operating system`
azure-monitor Workbooks Create Workbook https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/visualize/workbooks-create-workbook.md
Title: Create an Azure workbook
-description: Learn how to create a workbook in Azure Workbooks.
+ Title: Creating an Azure Workbook
+description: Learn how to create an Azure Workbook.
Last updated 05/30/2022
-# Create an Azure workbook
-
-This article describes how to create a new workbook and how to add elements to your Azure workbook.
+# Creating an Azure Workbook
+This article describes how to create a new workbook and how to add elements to your Azure Workbook.
This video walks you through creating workbooks. > [!VIDEO https://www.microsoft.com/en-us/videoplayer/embed/RE4B4Ap]
-## Create a new workbook
-
-To create a new workbook:
+## Create a new Azure Workbook
-1. On the **Azure Workbooks** page, select an empty template or select **New**.
+To create a new Azure workbook:
+1. From the Azure Workbooks page, select an empty template or select **New** in the top toolbar.
1. Combine any of these elements to add to your workbook:
- - [Text](#add-text)
- - [Queries](#add-queries)
- - [Parameters](#add-parameters)
- - [Metric charts](#add-metric-charts)
- - [Links](#add-links)
- - [Groups](#add-groups)
+ - [Text](#adding-text)
+ - [Parameters](#adding-parameters)
+ - [Queries](#adding-queries)
+ - [Metric charts](#adding-metric-charts)
+ - [Links](#adding-links)
+ - [Groups](#adding-groups)
- Configuration options
-## Add text
-
-You can include text blocks in your workbooks. For example, the text can be human analysis of the telemetry, information to help users interpret the data, and section headings.
+## Adding text
- :::image type="content" source="media/workbooks-create-workbook/workbooks-text-example.png" alt-text="Screenshot that shows adding text to a workbook.":::
+Workbooks allow authors to include text blocks in their workbooks. The text can be human analysis of the telemetry, information to help users interpret the data, section headings, etc.
-Text is added through a Markdown control that you use to add your content. You can use the full formatting capabilities of Markdown like different heading and font styles, hyperlinks, and tables. By using Markdown, you can create rich Word- or portal-like reports or analytic narratives. Text can contain parameter values in the Markdown text. Those parameter references are updated as the parameters change.
+ :::image type="content" source="media/workbooks-create-workbook/workbooks-text-example.png" alt-text="Screenshot of adding text to a workbook.":::
-Edit mode:
- :::image type="content" source="media/workbooks-create-workbook/workbooks-text-control-edit-mode.png" alt-text="Screenshot that shows adding text to a workbook in edit mode.":::
+Text is added through a markdown control into which an author can add their content. An author can use the full formatting capabilities of markdown. These include different heading and font styles, hyperlinks, tables, etc. Markdown allows authors to create rich Word- or Portal-like reports or analytic narratives. Text can contain parameter values in the markdown text, and those parameter references will be updated as the parameters change.
-Preview mode:
- :::image type="content" source="media/workbooks-create-workbook/workbooks-text-control-edit-mode-preview.png" alt-text="Screenshot that shows adding text to a workbook in preview mode.":::
+**Edit mode**:
+ :::image type="content" source="media/workbooks-create-workbook/workbooks-text-control-edit-mode.png" alt-text="Screenshot showing adding text to a workbook in edit mode.":::
-### Add text to a workbook
+**Preview mode**:
+ :::image type="content" source="media/workbooks-create-workbook/workbooks-text-control-edit-mode-preview.png" alt-text="Screenshot showing adding text to a workbook in preview mode.":::
-1. Make sure you're in edit mode by selecting **Edit**.
-1. Add text by doing one of these steps:
+### Add text to an Azure workbook
- * Select **Add** > **Add text** below an existing element or at the bottom of the workbook.
- * Select the ellipsis (...) to the right of the **Edit** button next to one of the elements in the workbook. Then select **Add** > **Add text**.
-
-1. Enter Markdown text in the editor field.
-1. Use the **Text Style** option to switch between plain Markdown and Markdown wrapped with the Azure portal's standard info, warning, success, and error styling.
+1. Make sure you are in **Edit** mode by selecting **Edit** in the toolbar. Add text by doing either of these steps:
+    - Select **Add**, and **Add text** below an existing element, or at the bottom of the workbook.
+    - Select the ellipses (...) to the right of the **Edit** button next to one of the elements in the workbook, then select **Add** and then **Add text**.
+1. Enter markdown text into the editor field.
+1. Use the **Text Style** option to switch between plain markdown, and markdown wrapped with the Azure portal's standard info/warning/success/error styling.
> [!TIP]
- > Use this [Markdown cheat sheet](https://github.com/adam-p/markdown-here/wiki/Markdown-Cheatsheet) to see the different formatting options.
+ > Use this [markdown cheat sheet](https://github.com/adam-p/markdown-here/wiki/Markdown-Cheatsheet) to see the different formatting options.
-1. Use the **Preview** tab to see how your content will look. The preview shows the content inside a scrollable area to limit its size. At runtime, the Markdown content expands to fill whatever space it needs, without a scrollbar.
+1. Use the **Preview** tab to see how your content will look. The preview shows the content inside a scrollable area to limit its size, but when displayed at runtime, the markdown content will expand to fill whatever space it needs, without a scrollbar.
1. Select **Done Editing**.

### Text styles
-These text styles are available.
+These text styles are available:
| Style | Description | | | |
-|plain| No formatting is applied. |
-|info| The portal's info style, with an `ℹ` or similar icon and blue background. |
-|error| The portal's error style, with an `❌` or similar icon and red background. |
-|success| The portal's success style, with a `✔` or similar icon and green background. |
-|upsell| The portal's upsell style, with a `🚀` or similar icon and purple background. |
-|warning| The portal's warning style, with a `⚠` or similar icon and blue background. |
+| plain| No formatting is applied |
+|info| The portal's "info" style, with a `ℹ` or similar icon and blue background |
+|error| The portal's "error" style, with a `❌` or similar icon and red background |
+|success| The portal's "success" style, with a `✔` or similar icon and green background |
+|upsell| The portal's "upsell" style, with a `🚀` or similar icon and purple background |
+|warning| The portal's "warning" style, with a `⚠` or similar icon and blue background |
+
-You can also choose a text parameter as the source of the style. The parameter value must be one of the preceding text values. The absence of a value or any unrecognized value is treated as plain style.
+You can also choose a text parameter as the source of the style. The parameter value must be one of the above text values. The absence of a value, or any unrecognized value, will be treated as `plain` style.
### Text style examples
-Info style example:
- :::image type="content" source="media/workbooks-create-workbook/workbooks-text-control-edit-mode-preview.png" alt-text="Screenshot that shows adding text to a workbook in preview mode showing info style.":::
+**Info style example**:
+ :::image type="content" source="media/workbooks-create-workbook/workbooks-text-control-edit-mode-preview.png" alt-text="Screenshot of adding text to a workbook in preview mode showing info style.":::
-Warning style example:
- :::image type="content" source="media/workbooks-create-workbook/workbooks-text-example-warning.png" alt-text="Screenshot that shows a text visualization in warning style.":::
+**Warning style example**:
+ :::image type="content" source="media/workbooks-create-workbook/workbooks-text-example-warning.png" alt-text="Screenshot of a text visualization in warning style.":::
-## Add queries
+## Adding queries
-You can query any of the supported workbook [data sources](workbooks-data-sources.md).
+Azure Workbooks allow you to query any of the supported workbook [data sources](workbooks-data-sources.md).
For example, you can query Azure Resource Health to help you view any service problems affecting your resources. You can also query Azure Monitor metrics, which is numeric data collected at regular intervals. Azure Monitor metrics provide information about an aspect of a system at a particular time.
-### Add a query to a workbook
+### Add a query to an Azure Workbook
-1. Make sure you're in edit mode by selecting **Edit**.
-1. Add a query by doing one of these steps:
- - Select **Add** > **Add query** below an existing element or at the bottom of the workbook.
- - Select the ellipsis (...) to the right of the **Edit** button next to one of the elements in the workbook. Then select **Add** > **Add query**.
+1. Make sure you are in **Edit** mode by selecting **Edit** in the toolbar. Add a query by doing either of these steps:
+ - Select **Add**, and **Add query** below an existing element, or at the bottom of the workbook.
+ - Select the ellipses (...) to the right of the **Edit** button next to one of the elements in the workbook, then select **Add** and then **Add query**.
1. Select the [data source](workbooks-data-sources.md) for your query. The other fields are determined based on the data source you choose. 1. Select any other values that are required based on the data source you selected. 1. Select the [visualization](workbooks-visualizations.md) for your workbook.
-1. In the query section, enter your query, or select from a list of sample queries by selecting **Samples**. Then edit the query to your liking.
+1. In the query section, enter your query, or select from a list of sample queries by selecting **Samples**, and then edit the query to your liking.
1. Select **Run Query**.
-1. When you're sure you have the query you want in your workbook, select **Done Editing**.
-
-## Add parameters
-
-This section discusses how to add parameters.
-
-### Best practices for using resource-centric log queries
-
-This video shows you how to use resource-level logs queries in Azure Workbooks. It also has tips and tricks on how to enable advanced scenarios and improve performance.
-
-> [!VIDEO https://www.youtube.com/embed/8CvjM0VvOA80]
-
-#### Use a dynamic resource type parameter
+1. When you're sure you have the query you want in your workbook, select **Done editing**.
-Dynamic resource type parameters use dynamic scopes for more efficient querying. The following snippet uses this heuristic:
-1. **Individual resources**: If the count of selected resource is less than or equal to 5
-1. **Resource groups**: If the number of resources is over 5 but the number of resource groups the resources belong to is less than or equal to 3
-1. **Subscriptions**: Otherwise
+### Best practices for querying logs
+ - **Use the smallest possible time ranges.** The longer the time ranges, the slower the queries, and the more data returned. For longer time ranges, the query might have to go to slower "cold" storage, making the query even slower. Default to the shortest useful time range, but allow the user to pick a larger time range that may be slower.
+ - **Use the "All" special value in dropdowns.** You can add an **All** special item in the dropdown parameter settings. Using the **All** item correctly can dramatically simplify queries.
+ - **Protect against missing columns.** If you're using a custom table or custom columns, design your template so that it will work if the column is missing in a workspace. See the [column_ifexists](/azure/kusto/query/columnifexists) function.
+ - **Protect against a missing table.** If your template is installed as part of a solution, or in other cases where the tables are guaranteed to exist, checking for missing columns is unnecessary. If you're creating generic templates that could be visible on any resource or workspace, it's a good idea to protect against tables that don't exist.
+ The Log Analytics query language doesn't have a **table_ifexists** function like the function for testing for columns. However, there are some ways to check if a table exists. For example, you can use a [fuzzy union](/azure/kusto/query/unionoperator?pivots=azuredataexplorer). When doing a union, you can use the **isfuzzy=true** setting to let the union continue if some of the tables don't exist. You can add a parameter query in your workbook that checks for the existence of the table, and hides some content if it doesn't exist. Items that aren't visible aren't run, so you can design your template so that other queries in the workbook that would fail if the table doesn't exist don't run until after the test verifies that the table exists.
- ```
- Resources
- | take 1
- | project x = dynamic(["microsoft.compute/virtualmachines", "microsoft.compute/virtualmachinescalesets", "microsoft.resources/resourcegroups", "microsoft.resources/subscriptions"])
- | mvexpand x to typeof(string)
- | extend jkey = 1
- | join kind = inner (Resources
- | where id in~ ({VirtualMachines})
- | summarize Subs = dcount(subscriptionId), resourceGroups = dcount(resourceGroup), resourceCount = count()
- | extend jkey = 1) on jkey
- | project x, label = 'x',
- selected = case(
- x in ('microsoft.compute/virtualmachinescalesets', 'microsoft.compute/virtualmachines') and resourceCount <= 5, true,
- x == 'microsoft.resources/resourcegroups' and resourceGroups <= 3 and resourceCount > 5, true,
- x == 'microsoft.resources/subscriptions' and resourceGroups > 3 and resourceCount > 5, true,
- false)
- ```
+ For example:
-#### Use a static resource scope for querying multiple resource types
+ ```
+ let MissingTable = view () { print isMissing=1 };
+ union isfuzzy=true MissingTable, (AzureDiagnostics | getschema | summarize c=count() | project isMissing=iff(c > 0, 0, 1))
+ | top 1 by isMissing asc
+ ```
-```json
-[
- { "value":"microsoft.compute/virtualmachines", "label":"Virtual machine", "selected":true },
- { "value":"microsoft.compute/virtualmachinescaleset", "label":"Virtual machine scale set", "selected":true }
-]
-```
+ This query returns a **1** if the **AzureDiagnostics** table doesn't exist in the workspace. If the real table doesn't exist, the fake row of the **MissingTable** will be returned. If any columns exist in the schema for the **AzureDiagnostics** table, a **0** is returned. You could use this as a parameter value, and conditionally hide your query steps unless the parameter value is 0. You could also use conditional visibility to show text that tells the user the current workspace doesn't have that table, and send the user to documentation on how to onboard.
-#### Use resource parameters grouped by resource type
+ Instead of hiding steps, you may just want to have no rows as a result. You can change the **MissingTable** to be an empty data table with the appropriate matching schema:
+
+ ```
+ let MissingTable = datatable(ResourceId: string) [];
+ union isfuzzy=true MissingTable, (AzureDiagnostics
+ | extend ResourceId = column_ifexists('ResourceId', ''))
+ ```
-```
-Resources
-| where type =~ 'microsoft.compute/virtualmachines' or type =~ 'microsoft.compute/virtualmachinescalesets'
-| where resourceGroup in~({ResourceGroups})
-| project value = id, label = id, selected = false,
- group = iff(type =~ 'microsoft.compute/virtualmachines', 'Virtual machines', 'Virtual machine scale sets')
-```
+ In this case, the query returns no rows if the **AzureDiagnostics** table is missing, or if the **ResourceId** column is missing from the table.
+## Adding parameters
-## Add a parameter
+You can collect input from consumers and reference it in other parts of the workbook using parameters. Often, you would use parameters to scope the result set or to set the right visual. Parameters help you build interactive reports and experiences.
-You can control how your parameter controls are presented to consumers with workbooks. Examples include text box versus dropdown list, single- versus multi-select, or values from text, JSON, KQL, or Azure Resource Graph.
+Workbooks allow you to control how your parameter controls are presented to consumers: text box vs. drop-down, single- vs. multi-select, and values from text, JSON, KQL, or Azure Resource Graph.
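+For illustration, a drop-down parameter backed by a static JSON list might look roughly like this in the workbook's underlying JSON. The `type` code and field values shown here are illustrative; open the **Advanced Editor** in the portal to see the exact schema your workbook generates:
+
+ ```json
+ {
+   "version": "KqlParameterItem/1.0",
+   "name": "Environment",
+   "type": 2,
+   "isRequired": true,
+   "multiSelect": false,
+   "jsonData": "[{ \"value\": \"prod\", \"label\": \"Production\", \"selected\": true }, { \"value\": \"dev\", \"label\": \"Development\" }]"
+ }
+ ```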
-### Add a parameter to a workbook
+### Add a parameter to an Azure Workbook
-1. Make sure you're in edit mode by selecting **Edit**.
-1. Add a parameter by doing one of these steps:
- - Select **Add** > **Add parameter** below an existing element or at the bottom of the workbook.
- - Select the ellipsis (...) to the right of the **Edit** button next to one of the elements in the workbook. Then select **Add** > **Add parameter**.
-1. In the new parameter pane that appears, enter values for these fields:
+1. Make sure you are in **Edit** mode by selecting **Edit** in the toolbar. Add a parameter by doing either of these steps:
+ - Select **Add**, and **Add parameter** below an existing element, or at the bottom of the workbook.
+ - Select the ellipsis (...) to the right of the **Edit** button next to one of the elements in the workbook, then select **Add** and then **Add parameter**.
+1. In the new parameter pane that pops up, enter values for these fields:
- - **Parameter name**: Parameter names can't include spaces or special characters.
- - **Display name**: Display names can include spaces, special characters, and emojis.
- - **Parameter type**:
- - **Required**:
-
-1. Select **Done Editing**.
+ - Parameter name: Parameter names can't include spaces or special characters
+ - Display name: Display names can include spaces, special characters, emoji, etc.
+ - Parameter type:
+ - Required:
+
+1. Select **Done editing**.
- :::image type="content" source="media/workbooks-parameters/workbooks-time-settings.png" alt-text="Screenshot that shows the creation of a time range parameter.":::
+ :::image type="content" source="media/workbooks-parameters/workbooks-time-settings.png" alt-text="Screenshot showing the creation of a time range parameter.":::
-## Add metric charts
+## Adding metric charts
-Most Azure resources emit metric data about state and health, such as CPU utilization, storage availability, count of database transactions, and failing app requests. You can create visualizations of this metric data as time-series charts in workbooks.
+Most Azure resources emit metric data about state and health such as CPU utilization, storage availability, count of database transactions, failing app requests, etc. Using workbooks, you can create visualizations of the metric data as time-series charts.
-The following example shows the number of transactions in a storage account over the prior hour. This information allows you to see the transaction trend and look for anomalies in behavior.
+The example below shows the number of transactions in a storage account over the prior hour. This allows the storage owner to see the transaction trend and look for anomalies in behavior.
- :::image type="content" source="media/workbooks-create-workbook/workbooks-metric-chart-storage-area.png" alt-text="Screenshot that shows a metric area chart for storage transactions in a workbook.":::
+ :::image type="content" source="media/workbooks-create-workbook/workbooks-metric-chart-storage-area.png" alt-text="Screenshot showing a metric area chart for storage transactions in a workbook.":::
-### Add a metric chart to a workbook
+### Add a metric chart to an Azure Workbook
-1. Make sure you're in edit mode by selecting **Edit**.
-1. Add a metric chart by doing one of these steps:
- - Select **Add** > **Add metric** below an existing element or at the bottom of the workbook.
- - Select the ellipsis (...) to the right of the **Edit** button next to one of the elements in the workbook. Then select **Add** > **Add metric**.
+1. Make sure you are in **Edit** mode by selecting **Edit** in the toolbar. Add a metric chart by doing either of these steps:
+ - Select **Add**, and **Add metric** below an existing element, or at the bottom of the workbook.
+ - Select the ellipsis (...) to the right of the **Edit** button next to one of the elements in the workbook, then select **Add** and then **Add metric**.
1. Select a **resource type**, the resources to target, the metric namespace and name, and the aggregation to use.
-1. Set parameters such as time range, split by, visualization, size, and color palette, if needed.
+1. Set other parameters if needed, such as time range, split-by, visualization, size, and color palette.
1. Select **Done Editing**.
-Example of a metric chart in edit mode:
+This is a metric chart in edit mode:
### Metric chart parameters
-| Parameter | Description | Examples |
+| Parameter | Explanation | Example |
| - |:-|:-|
-| Resource type| The resource type to target. | Storage or Virtual Machine |
-| Resources| A set of resources to get the metrics value from. | MyStorage1 |
-| Namespace | The namespace with the metric. | Storage > Blob |
-| Metric| The metric to visualize. | Storage > Blob > Transactions |
-| Aggregation | The aggregation function to apply to the metric. | Sum, count, average |
-| Time range | The time window to view the metric in. | Last hour, last 24 hours |
-| Visualization | The visualization to use. | Area, bar, line, scatter, grid |
-| Split by| Optionally split the metric on a dimension. | Transactions by geo type |
-| Size | The vertical size of the control. | Small, medium, or large |
-| Color palette | The color palette to use in the chart. It's ignored if the **Split by** parameter is used. | Blue, green, red |
+| Resource Type| The resource type to target | Storage or Virtual Machine. |
+| Resources| A set of resources to get the metrics value from | MyStorage1 |
+| Namespace | The namespace with the metric | Storage > Blob |
+| Metric| The metric to visualize | Storage > Blob > Transactions |
+| Aggregation | The aggregation function to apply to the metric | Sum, Count, Average, etc. |
+| Time Range | The time window to view the metric in | Last hour, Last 24 hours, etc. |
+| Visualization | The visualization to use | Area, Bar, Line, Scatter, Grid |
+| Split By| Optionally split the metric on a dimension | Transactions by Geo type |
+| Size | The vertical size of the control | Small, medium or large |
+| Color palette | The color palette to use in the chart. Ignored if the `Split by` parameter is used | Blue, green, red, etc. |
### Metric chart examples
-Examples of metric charts are shown.
+**Transactions split by API name as a line chart**
-#### Transactions split by API name as a line chart
+ :::image type="content" source="media/workbooks-create-workbook/workbooks-metric-chart-storage-split-line.png" alt-text="Screenshot showing a metric line chart for Storage transactions split by API name.":::
- :::image type="content" source="media/workbooks-create-workbook/workbooks-metric-chart-storage-split-line.png" alt-text="Screenshot that shows a metric line chart for storage transactions split by API name.":::
-#### Transactions split by response type as a large bar chart
+**Transactions split by response type as a large bar chart**
- :::image type="content" source="media/workbooks-create-workbook/workbooks-metric-chart-storage-bar-large.png" alt-text="Screenshot that shows a large metric bar chart for storage transactions split by response type.":::
+ :::image type="content" source="media/workbooks-create-workbook/workbooks-metric-chart-storage-bar-large.png" alt-text="Screenshot showing a large metric bar chart for Storage transactions split by response type.":::
-#### Average latency as a scatter chart
+**Average latency as a scatter chart**
- :::image type="content" source="media/workbooks-create-workbook/workbooks-metric-chart-storage-scatter.png" alt-text="Screenshot that shows a metric scatter chart for storage latency.":::
+ :::image type="content" source="media/workbooks-create-workbook/workbooks-metric-chart-storage-scatter.png" alt-text="Screenshot showing a metric scatter chart for storage latency.":::
-## Add links
+## Adding links
-You can use links to create links to other views, workbooks, and other components inside a workbook, or to create tabbed views within a workbook. The links can be styled as hyperlinks, buttons, and tabs.
+You can use links to create links to other views, workbooks, other items inside a workbook, or to create tabbed views within a workbook. The links can be styled as hyperlinks, buttons, and tabs.
+ :::image type="content" source="media/workbooks-create-workbook/workbooks-empty-links.png" alt-text="Screenshot of adding a link to a workbook.":::
### Link styles

You can apply styles to the link element itself and to individual links.
-#### Link element styles
+**Link element styles**
+ |Style |Sample |Notes |
+ |---|---|---|
-|Bullet List | :::image type="content" source="media/workbooks-create-workbook/workbooks-link-style-bullet.png" alt-text="Screenshot that shows a bullet-style workbook link."::: | The default, links, appears as a bulleted list of links, one on each line. The **Text before link** and **Text after link** fields can be used to add more text before or after the link components. |
-|List |:::image type="content" source="media/workbooks-create-workbook/workbooks-link-style-list.png" alt-text="Screenshot that shows a list-style workbook link."::: | Links appear as a list of links, with no bullets. |
-|Paragraph | :::image type="content" source="media/workbooks-create-workbook/workbooks-link-style-paragraph.png" alt-text="Screenshot that shows a paragraph-style workbook link."::: |Links appear as a paragraph of links, wrapped like a paragraph of text. |
-|Navigation | :::image type="content" source="media/workbooks-create-workbook/workbooks-link-style-navigation.png" alt-text="Screenshot that shows a navigation-style workbook link."::: | Links appear as links with vertical dividers, or pipes, between each link. |
-|Tabs | :::image type="content" source="media/workbooks-create-workbook/workbooks-link-style-tabs.png" alt-text="Screenshot that shows a tabs-style workbook link."::: |Links appear as tabs. Each link appears as a tab. No link styling options apply to individual links. To configure tabs, see the [Use tabs](#use-tabs) section. |
-|Toolbar | :::image type="content" source="media/workbooks-create-workbook/workbooks-link-style-toolbar.png" alt-text="Screenshot that shows a toolbar-style workbook link."::: | Links appear as an Azure portal-styled toolbar, with icons and text. Each link appears as a toolbar button. To configure toolbars, see the [Use toolbars](#use-toolbars) section. |
+|Bullet List | :::image type="content" source="media/workbooks-create-workbook/workbooks-link-style-bullet.png" alt-text="Screenshot of bullet style workbook link."::: | The default, links, appears as a bulleted list of links, one on each line. The **Text before link** and **Text after link** fields can be used to add more text before or after the link items. |
+|List |:::image type="content" source="media/workbooks-create-workbook/workbooks-link-style-list.png" alt-text="Screenshot of list style workbook link."::: | Links appear as a list of links, with no bullets. |
+|Paragraph | :::image type="content" source="media/workbooks-create-workbook/workbooks-link-style-paragraph.png" alt-text="Screenshot of paragraph style workbook link."::: |Links appear as a paragraph of links, wrapped like a paragraph of text. |
+|Navigation | :::image type="content" source="media/workbooks-create-workbook/workbooks-link-style-navigation.png" alt-text="Screenshot of navigation style workbook link."::: | Links appear as links, with vertical dividers, or pipes (`\|`) between each link. |
+|Tabs | :::image type="content" source="media/workbooks-create-workbook/workbooks-link-style-tabs.png" alt-text="Screenshot of tabs style workbook link."::: |Links appear as tabs. Each link appears as a tab; no link styling options apply to individual links. See the [tabs](#using-tabs) section below for how to configure tabs. |
+|Toolbar | :::image type="content" source="media/workbooks-create-workbook/workbooks-link-style-toolbar.png" alt-text="Screenshot of toolbar style workbook link."::: | Links appear as an Azure portal styled toolbar, with icons and text. Each link appears as a toolbar button. See the [toolbar](#using-toolbars) section below for how to configure toolbars. |
+
-#### Link styles
+**Link styles**
| Style | Description |
|:- |:-|
-| Link | By default, links appear as a hyperlink. URL links can only be link style. |
-| Button (primary) | The link appears as a "primary" button in the portal, usually a blue color. |
-| Button (secondary) | The links appear as a "secondary" button in the portal, usually a "transparent" color, a white button in light themes, and a dark gray button in dark themes. |
+| Link | By default, links appear as a hyperlink. URL links can only be link style. |
+| Button (Primary) | The link appears as a "primary" button in the portal, usually a blue color. |
+| Button (Secondary) | The link appears as a "secondary" button in the portal, usually a "transparent" color, a white button in light themes and a dark gray button in dark themes. |
-If required parameters are used in button text, tooltip text, or value fields, and the required parameter is unset when you use buttons, the button is disabled. You can use this capability, for example, to disable buttons when no value is selected in another parameter or control.
+If required parameters are used in button text, tooltip text, or value fields, and the required parameter is unset when using buttons, the button is disabled. You can use this capability, for example, to disable buttons when no value is selected in another parameter or control.
### Link actions
-Links can use all the link actions available in [link actions](workbooks-link-actions.md), and they have two more available actions.
+Links can use all of the link actions available in [link actions](workbooks-link-actions.md), and have two more available actions:
| Action | Description |
|:- |:-|
-|Set a parameter value | A parameter can be set to a value when you select a link, button, or tab. Tabs are often configured to set a parameter to a value, which hides and shows other parts of the workbook based on that value.|
-|Scroll to a step| When you select a link, the workbook moves focus and scrolls to make another component visible. This action can be used to create a "table of contents" or a "go back to the top"-style experience. |
+|Set a parameter value | A parameter can be set to a value when selecting a link, button, or tab. Tabs are often configured to set a parameter to a value, which hides and shows other parts of the workbook based on that value.|
+|Scroll to a step| When selecting a link, the workbook will move focus and scroll to make another step visible. This action can be used to create a "table of contents", or a "go back to the top" style experience. |
-### Use tabs
+### Using tabs
-Most of the time, tab links are combined with the **Set a parameter value** action. This example shows the links step configured to create two tabs, where selecting either tab sets a **selectedTab** parameter to a different value. The example also shows a third tab being edited to show the parameter name and parameter value placeholders.
+Most of the time, tab links are combined with the **Set a parameter value** action. Here's an example showing the links step configured to create 2 tabs, where selecting either tab will set a **selectedTab** parameter to a different value (the example shows a third tab being edited to show the parameter name and parameter value placeholders):
- :::image type="content" source="media/workbooks-create-workbook/workbooks-creating-tabs.png" alt-text="Screenshot that shows creating tabs in workbooks.":::
+ :::image type="content" source="media/workbooks-create-workbook/workbooks-creating-tabs.png" alt-text="Screenshot of creating tabs in workbooks.":::
-You can then add other components in the workbook that are conditionally visible if the **selectedTab** parameter value is **1** by using the advanced settings.
- :::image type="content" source="media/workbooks-create-workbook/workbooks-selected-tab.png" alt-text="Screenshot that shows conditionally visible tab in workbooks.":::
+You can then add other items in the workbook that are conditionally visible if the **selectedTab** parameter value is "1" by using the advanced settings:
-The first tab is selected by default, initially setting **selectedTab** to **1** and making that component visible. Selecting the second tab changes the value of the parameter to **2**, and different content is displayed.
+ :::image type="content" source="media/workbooks-create-workbook/workbooks-selected-tab.png" alt-text="Screenshot of conditionally visible tab in workbooks.":::
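+In the workbook's underlying JSON, a conditional visibility rule like this is stored on the item roughly as follows. The field names shown are illustrative; check the **Advanced Editor** in the portal for the exact schema your workbook produces:
+
+ ```json
+ "conditionalVisibility": {
+   "parameterName": "selectedTab",
+   "comparison": "isEqualTo",
+   "value": "1"
+ }
+ ```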
- :::image type="content" source="media/workbooks-create-workbook/workbooks-selected-tab2.png" alt-text="Screenshot that shows workbooks with content displayed when the selected tab is 2.":::
+The first tab is selected by default, initially setting **selectedTab** to 1, and making that step visible. Selecting the second tab will change the value of the parameter to "2", and different content will be displayed:
-A sample workbook with the preceding tabs is available in [sample Azure workbooks with links](workbooks-sample-links.md#sample-workbook-with-links).
+ :::image type="content" source="media/workbooks-create-workbook/workbooks-selected-tab2.png" alt-text="Screenshot of workbooks with content displayed when selected tab is 2.":::
+A sample workbook with the above tabs is available in [sample Azure Workbooks with links](workbooks-sample-links.md#sample-workbook-with-links).
### Tabs limitations

- URL links aren't supported in tabs. A URL link in a tab appears as a disabled tab.
+ - No item styling is supported in tabs. Items are displayed as tabs, and only the tab name (link text) field is displayed. Fields that aren't used in tab style are hidden while in edit mode.
- The first tab is selected by default, invoking whatever action that tab has specified. If the first tab's action opens another view, as soon as the tabs are created, a view appears.
+ - You can use tabs to open other views, but this functionality should be used sparingly, since most users won't expect to navigate by selecting a tab. If other tabs are setting a parameter to a specific value, a tab that opens a view wouldn't change that value, so the rest of the workbook content will continue to show the view or data for the previous tab.
-### Use toolbars
+### Using toolbars
-Use the toolbar style to have your links appear styled as a toolbar. In toolbar style, you must fill in fields for:
+Use the Toolbar style to have your links appear styled as a toolbar. In toolbar style, the author must fill in fields for:
+ - Button text, the text to display on the toolbar. Parameters may be used in this field.
+ - Icon, the icon to display in the toolbar.
+ - Tooltip Text, the text to display in the toolbar button's tooltip. Parameters may be used in this field.
:::image type="content" source="media/workbooks-create-workbook/workbooks-links-create-toolbar.png" alt-text="Screenshot of creating links styled as a toolbar in workbooks.":::
-If any required parameters are used in button text, tooltip text, or value fields, and the required parameter is unset, the toolbar button will be disabled. For example, this functionality can be used to disable toolbar buttons when no value is selected in another parameter/control.
+If any required parameters are used in button text, tooltip text, or value fields, and the required parameter is unset, the toolbar button will be disabled. For example, this can be used to disable toolbar buttons when no value is selected in another parameter/control.
-A sample workbook with toolbars, global parameters, and Azure Resource Manager actions is available in [sample workbooks with links](workbooks-sample-links.md#sample-workbook-with-toolbar-links).
+A sample workbook with toolbars, global parameters, and ARM actions is available in [sample Azure Workbooks with links](workbooks-sample-links.md#sample-workbook-with-toolbar-links).
-## Add groups
+## Adding groups
-You can logically group a set of components by using a group component in a workbook.
+A group item allows you to logically group a set of steps in a workbook.
Groups in workbooks are useful for several things:
- - **Layout**: When you want components to be organized vertically, you can create a group of components that will all stack up and set the styling of the group to be a percentage width instead of setting percentage width on all the individual components.
- - **Visibility**: When you want several components to hide or show together, you can set the visibility of the entire group of components, instead of setting visibility settings on each individual component. This functionality can be useful in templates that use tabs. You can use a group as the content of the tab, and the entire group can be hidden or shown based on a parameter set by the selected tab.
- - **Performance**: When you have a large template with many sections or tabs, you can convert each section into its own subtemplate. You can use groups to load all the subtemplates within the top-level template. The content of the subtemplates won't load or run until a user makes those groups visible. Learn more about [how to split a large template into many templates](#split-a-large-template-into-many-templates).
+ - **Layout**: When you want items to be organized vertically, you can create a group of items that will all stack up and set the styling of the group to be a percentage width instead of setting percentage width on all the individual items.
+ - **Visibility**: When you want several items to hide or show together, you can set the visibility of the entire group of items, instead of setting visibility settings on each individual item. This can be useful in templates that use tabs, as you can use a group as the content of the tab, and the entire group can be hidden/shown based on a parameter set by the selected tab.
+ - **Performance**: When you have a large template with many sections or tabs, you can convert each section into its own subtemplate, and use groups to load all the subtemplates within the top-level template. The content of the subtemplates won't load or run until a user makes those groups visible. Learn more about [how to split a large template into many templates](#splitting-a-large-template-into-many-templates).
### Add a group to your workbook
-1. Make sure you're in edit mode by selecting **Edit**.
-1. Add a group by doing one of these steps:
- - Select **Add** > **Add group** below an existing element or at the bottom of the workbook.
- - Select the ellipsis (...) to the right of the **Edit** button next to one of the elements in the workbook. Then select **Add** > **Add group**.
-
- :::image type="content" source="media/workbooks-create-workbook/workbooks-add-group.png" alt-text="Screenshot that shows adding a group to a workbook. ":::
+1. Make sure you are in **Edit** mode by selecting **Edit** in the toolbar. Add a group by doing either of these steps:
+ - Select **Add**, and **Add group** below an existing element, or at the bottom of the workbook.
+ - Select the ellipsis (...) to the right of the **Edit** button next to one of the elements in the workbook, then select **Add** and then **Add group**.
-1. Select components for your group.
-1. Select **Done Editing.**
+ :::image type="content" source="media/workbooks-create-workbook/workbooks-add-group.png" alt-text="Screenshot showing adding a group to a workbook.":::
+1. Select items for your group.
+1. Select **Done editing.**
- This group is in read mode with two components inside: a text component and a query component.
+ This is a group in read mode with two items inside: a text item and a query item.
- :::image type="content" source="media/workbooks-create-workbook/workbooks-groups-view.png" alt-text="Screenshot that shows a group in read mode in a workbook.":::
+ :::image type="content" source="media/workbooks-create-workbook/workbooks-groups-view.png" alt-text="Screenshot showing a group in read mode in a workbook.":::
- In edit mode, you can see those two components are actually inside a group component. In the following screenshot, the group is in edit mode. The group contains two components inside the dashed area. Each component can be in edit or read mode, independent of each other. For example, the text step is in edit mode while the query step is in read mode.
+ In edit mode, you can see those two items are actually inside a group item. In the screenshot below, the group is in edit mode. The group contains two items inside the dashed area. Each item can be in edit or read mode, independent of each other. For example, the text step is in edit mode while the query step is in read mode.
 :::image type="content" source="media/workbooks-create-workbook/workbooks-groups-edit.png" alt-text="Screenshot of a group in edit mode in a workbook.":::

### Scoping a group
-A group is treated as a new scope in the workbook. Any parameters created in the group are only visible inside the group. This is also true for merge. You can only see data inside the group or at the parent level.
+A group is treated as a new scope in the workbook. Any parameters created in the group are only visible inside the group. This is also true for merge: you can only see data inside the group or at the parent level.
### Group types

You can specify which type of group to add to your workbook. There are two types of groups:
+ - **Editable**: The group in the workbook allows you to add, remove, or edit the contents of the items in the group. This is most commonly used for layout and visibility purposes.
+ - **From a template**: The group in the workbook loads from the contents of another workbook by its ID. The content of that workbook is loaded and merged into the workbook at runtime. In edit mode, you can't modify any of the contents of the group, as they will just load again from the template next time the item loads. When loading a group from a template, use the full Azure Resource ID of an existing workbook.
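+The full Azure Resource ID of a workbook generally follows this shape (every segment here is a placeholder to be replaced with your own values):
+
+ ```
+ /subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/microsoft.insights/workbooks/<workbook-resource-id>
+ ```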
### Load types
You can specify how and when the contents of a group are loaded.
#### Lazy loading
-Lazy loading is the default. In lazy loading, the group is only loaded when the component is visible. This functionality allows a group to be used by tab components. If the tab is never selected, the group never becomes visible, so the content isn't loaded.
+Lazy loading is the default. In lazy loading, the group is only loaded when the item is visible. This allows a group to be used by tab items. If the tab is never selected, the group never becomes visible and therefore the content isn't loaded.
-For groups created from a template, the content of the template isn't retrieved and the components in the group aren't created until the group becomes visible. Users see progress spinners for the whole group while the content is retrieved.
+For groups created from a template, the content of the template isn't retrieved and the items in the group aren't created until the group becomes visible. Users see progress spinners for the whole group while the content is retrieved.
#### Explicit loading
-In this mode, a button is displayed where the group would be. No content is retrieved or created until the user explicitly selects the button to load the content. This functionality is useful in scenarios where the content might be expensive to compute or rarely used. You can specify the text to appear on the button.
+In this mode, a button is displayed where the group would be, and no content is retrieved or created until the user explicitly clicks the button to load the content. This is useful in scenarios where the content might be expensive to compute or rarely used. The author can specify the text to appear on the button.
+
+This screenshot shows explicit load settings with a configured "Load more" button.
-This screenshot shows explicit load settings with a configured **Load More** button:
+ :::image type="content" source="media/workbooks-create-workbook/workbooks-groups-explicitly-loaded.png" alt-text="Screenshot of explicit load settings for a group in workbooks.":::
- :::image type="content" source="media/workbooks-create-workbook/workbooks-groups-explicitly-loaded.png" alt-text="Screenshot that shows explicit load settings for a group in the workbook.":::
+This is the group before being loaded in the workbook:
-This screenshot shows the group before being loaded in the workbook:
+ :::image type="content" source="media/workbooks-create-workbook/workbooks-groups-explicitly-loaded-before.png" alt-text="Screenshot showing an explicit group before being loaded in the workbook.":::
- :::image type="content" source="media/workbooks-create-workbook/workbooks-groups-explicitly-loaded-before.png" alt-text="Screenshot that shows an explicit group before being loaded in the workbook.":::
-This screenshot shows the group after being loaded in the workbook:
+The group after being loaded in the workbook:
- :::image type="content" source="media/workbooks-create-workbook/workbooks-groups-explicitly-loaded-after.png" alt-text="Screenshot that shows an explicit group after being loaded in the workbook.":::
+ :::image type="content" source="media/workbooks-create-workbook/workbooks-groups-explicitly-loaded-after.png" alt-text="Screenshot showing an explicit group after being loaded in the workbook.":::
#### Always mode
-In **Always** mode, the content of the group is always loaded and created as soon as the workbook loads. This functionality is most frequently used when you're using a group only for layout purposes, where the content is always visible.
+In **Always** mode, the content of the group is always loaded and created as soon as the workbook loads. This is most frequently used when you're using a group only for layout purposes, where the content will always be visible.
-### Use templates inside a group
+### Using templates inside a group
-When a group is configured to load from a template, by default, that content is loaded in lazy mode. It only loads when the group is visible.
+When a group is configured to load from a template, by default, that content will be loaded in lazy mode, and it will only load when the group is visible.
-When a template is loaded into a group, the workbook attempts to merge any parameters declared in the template with parameters that already exist in the group. Any parameters that already exist in the workbook with identical names are merged out of the template being loaded. If all parameters in a parameter component are merged out, the entire parameters component disappears.
+When a template is loaded into a group, the workbook attempts to merge any parameters declared in the template with parameters that already exist in the group. Any parameters that already exist in the workbook with identical names will be merged out of the template being loaded. If all parameters in a parameter step are merged out, the entire parameters step will disappear.
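The merge-out behavior just described can be sketched as follows. This is a minimal illustration only, not the actual workbook implementation, and the function name `merge_template_parameters` is hypothetical:

```python
# Sketch of the parameter merge-out behavior described above.
# Illustration only; not the actual workbook implementation.

def merge_template_parameters(workbook_params, template_params):
    """Drop template parameters whose names already exist in the workbook.

    Returns the template's surviving parameters. If every parameter is
    merged out, the caller hides the whole parameters component.
    """
    existing = set(workbook_params)
    return [p for p in template_params if p not in existing]

outer = ["TimeRange", "Filter"]

# All names collide: the inner parameters component disappears entirely.
assert merge_template_parameters(outer, ["TimeRange", "Filter"]) == []

# Only one name collides: "FilterB" survives inside the group.
assert merge_template_parameters(outer, ["TimeRange", "FilterB"]) == ["FilterB"]
```

The two assertions correspond to Example 1 and Example 2 in this section.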
#### Example 1: All parameters have identical names
-Suppose you have a template that has two parameters at the top, a time range parameter and a text parameter named **Filter**:
+Suppose you have a template that has two parameters at the top, a time range parameter and a text parameter named "**Filter**":
- :::image type="content" source="media/workbooks-create-workbook/workbooks-groups-top-level-params.png" alt-text="Screenshot that shows top-level parameters in a workbook.":::
+ :::image type="content" source="media/workbooks-create-workbook/workbooks-groups-top-level-params.png" alt-text="Screenshot showing top level parameters in a workbook.":::
-Then a group component loads a second template that has its own two parameters and a text component, where the parameters are named the same:
+Then a group item loads a second template that has its own two parameters and a text step, where the parameters are named the same:
- :::image type="content" source="media/workbooks-create-workbook/workbooks-groups-merged-away.png" alt-text="Screenshot that shows a workbook template with top-level parameters.":::
+ :::image type="content" source="media/workbooks-create-workbook/workbooks-groups-merged-away.png" alt-text="Screenshot of a workbook template with top level parameters.":::
-When the second template is loaded into the group, the duplicate parameters are merged out. Because all the parameters are merged away, the inner parameters component is also merged out. The result is that the group contains only the text component.
+When the second template is loaded into the group, the duplicate parameters are merged out. Since all of the parameters are merged away, the inner parameters step is also merged out, resulting in the group containing only the text step.
### Example 2: One parameter has an identical name
-Suppose you have a template that has two parameters at the top, a time range parameter and a text parameter named **FilterB** ():
+Suppose you have a template that has two parameters at the top, a **time range** parameter and a text parameter named "**FilterB**":
- :::image type="content" source="media/workbooks-create-workbook/workbooks-groups-wont-merge-away.png" alt-text="Screenshot that shows a group component with the result of parameters merged away.":::
+ :::image type="content" source="media/workbooks-create-workbook/workbooks-groups-wont-merge-away.png" alt-text="Screenshot of a group item with the result of parameters merged away.":::
-When the group's component's template is loaded, the **TimeRange** parameter is merged out of the group. The workbook contains the initial parameters component with **TimeRange** and **Filter**, and the group's parameter only includes **FilterB**.
+When the group's item's template is loaded, the **TimeRange** parameter is merged out of the group. The workbook contains the initial parameters step with **TimeRange** and **Filter**, and the group's parameter only includes **FilterB**.
- :::image type="content" source="media/workbooks-create-workbook/workbooks-groups-wont-merge-away-result.png" alt-text="Screenshot that shows a workbook group where parameters won't merge away.":::
+ :::image type="content" source="media/workbooks-create-workbook/workbooks-groups-wont-merge-away-result.png" alt-text="Screenshot of workbook group where parameters won't merge away.":::
-If the loaded template had contained **TimeRange** and **Filter** (instead of **FilterB**), the resulting workbook would have a parameters component and a group with only the text component remaining.
+If the loaded template had contained **TimeRange** and **Filter** (instead of **FilterB**), then the resulting workbook would have a parameters step and a group with only the text step remaining.
-### Split a large template into many templates
+### Splitting a large template into many templates
-To improve performance, it's helpful to break up a large template into multiple smaller templates that load some content in lazy mode or on demand by the user. This arrangement makes the initial load faster because the top-level template can be much smaller.
+To improve performance, it's helpful to break up a large template into multiple smaller templates that load some content in lazy mode or on demand by the user. This makes the initial load faster since the top-level template can be much smaller.
-When you split a template into parts, you need to split the template into many templates (subtemplates) that all work individually. If the top-level template has a **TimeRange** parameter that other components use, the subtemplate also needs to have a parameters component that defines a parameter with the same exact name. The subtemplates work independently and can load inside larger templates in groups.
+When splitting a template into parts, you'll need to split the template into many templates (subtemplates) that all work individually. If the top-level template has a **TimeRange** parameter that other items use, the subtemplate will need to also have a parameters item that defines a parameter with the same exact name. The subtemplates will work independently and can load inside larger templates in groups.
To turn a larger template into multiple subtemplates:
-1. Create a new empty group near the top of the workbook, after the shared parameters. This new group eventually becomes a subtemplate.
-1. Create a copy of the shared parameters component. Then use **move into group** to move the copy into the group created in step 1. This parameter allows the subtemplate to work independently of the outer template and is merged out when it's loaded inside the outer template.
+1. Create a new empty group near the top of the workbook, after the shared parameters. This new group will eventually become a subtemplate.
+1. Create a copy of the shared parameters step, and then use **move into group** to move the copy into the group created in step 1. This parameter allows the subtemplate to work independently of the outer template, and will get merged out when loaded inside the outer template.
> [!NOTE]
- > Subtemplates don't technically need to have the parameters that get merged out if you never plan on the subtemplates being visible by themselves. If the subtemplates don't have the parameters, they'll be hard to edit or debug if you need to do so later.
-
-1. Move each component in the workbook you want to be in the subtemplate into the group created in step 1.
-1. If the individual components moved in step 3 had conditional visibilities, that will become the visibility of the outer group (like used in tabs). Remove them from the components inside the group and add that visibility setting to the group itself. Save here to avoid losing changes. You can also export and save a copy of the JSON content.
-1. If you want that group to be loaded from a template, you can use **Edit** in the group. This action opens only the content of that group as a workbook in a new window. You can then save it as appropriate and close this workbook view. Don't close the browser. Only close that view to go back to the previous workbook where you were editing.
-1. You can then change the group component to load from a template and set the template ID field to the workbook/template you created in step 5. To work with workbook IDs, the source needs to be the full Azure Resource ID of a shared workbook. Select **Load** and the content of that group is now loaded from that subtemplate instead of being saved inside this outer workbook.
-
-## Next steps
+ > Subtemplates don't technically need to have the parameters that get merged out if you never plan on the subtemplates being visible by themselves. However, if the subtemplates do not have the parameters, it will make them very hard to edit or debug if you need to do so later.
-[Common Azure Workbooks use cases](workbooks-commonly-used-components.md)
+1. Move each item in the workbook you want to be in the subtemplate into the group created in step 1.
+1. If the individual steps moved in step 3 had conditional visibilities, that will become the visibility of the outer group (as used in tabs). Remove them from the items inside the group and add that visibility setting to the group itself. Save here to avoid losing changes, and/or export and save a copy of the JSON content.
+1. If you want that group to be loaded from a template, you can use the **Edit** toolbar button in the group. This will open just the content of that group as a workbook in a new window. You can then save it as appropriate and close this workbook view (don't close the browser, just that view to go back to the previous workbook you were editing).
+1. You can then change the group step to load from a template and set the template ID field to the workbook/template you created in step 5. To work with workbook IDs, the source needs to be the full Azure Resource ID of a shared workbook. Press **Load** and the content of that group will now be loaded from that subtemplate instead of saved inside this outer workbook.
azure-monitor Workbooks Data Sources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/visualize/workbooks-data-sources.md
Title: Azure Workbooks data sources | Microsoft docs
-description: Simplify complex reporting with prebuilt and custom parameterized Azure Workbooks built from multiple data sources.
+description: Simplify complex reporting with prebuilt and custom parameterized workbooks built from multiple data sources.
Workbooks can extract data from these data sources:
- [Azure resource health](#azure-resource-health) - [Azure RBAC](#azure-rbac) - [Change Analysis (preview)](#change-analysis-preview)
+
## Logs
-Workbooks allow querying logs from the following sources:
+With workbooks, you can query logs from the following sources:
-* Azure Monitor Logs (Application Insights Resources and Log Analytics Workspaces.)
-* Resource-centric data (Activity logs)
+* Azure Monitor Logs (Application Insights resources and Log Analytics workspaces)
+* Resource-centric data (activity logs)
-Workbook authors can use KQL queries that transform the underlying resource data to select a result set that can visualized as text, charts, or grids.
+You can use Kusto query language (KQL) queries that transform the underlying resource data to select a result set that can be visualized as text, charts, or grids.
-![Screenshot of workbooks logs report interface.](./media/workbooks-data-sources/logs.png)
+![Screenshot that shows a workbook logs report interface.](./media/workbooks-data-sources/logs.png)
-Workbook authors can easily query across multiple resources creating a truly unified rich reporting experience.
+You can easily query across multiple resources to create a unified rich reporting experience.
## Metrics
-Azure resources emit [metrics](../essentials/data-platform-metrics.md) that can be accessed via workbooks. Metrics can be accessed in workbooks through a specialized control that allows you to specify the target resources, the desired metrics, and their aggregation. This data can then be plotted in charts or grids.
+Azure resources emit [metrics](../essentials/data-platform-metrics.md) that can be accessed via workbooks. Metrics can be accessed in workbooks through a specialized control that allows you to specify the target resources, the desired metrics, and their aggregation. You can then plot this data in charts or grids.
-![Screenshot of workbook metrics charts of cpu utilization.](./media/workbooks-data-sources/metrics-graph.png)
+![Screenshot that shows workbook metrics charts of CPU utilization.](./media/workbooks-data-sources/metrics-graph.png)
-![Screenshot of workbook metrics interface.](./media/workbooks-data-sources/metrics.png)
+![Screenshot that shows a workbook metrics interface.](./media/workbooks-data-sources/metrics.png)
## Azure Resource Graph
-Workbooks support querying for resources and their metadata using Azure Resource Graph (ARG). This functionality is primarily used to build custom query scopes for reports. The resource scope is expressed via a KQL-subset that ARG supports, which is often sufficient for common use cases.
+Workbooks support querying for resources and their metadata by using Azure Resource Graph. This functionality is primarily used to build custom query scopes for reports. The resource scope is expressed via a KQL subset that Resource Graph supports, which is often sufficient for common use cases.
-To make a query control use this data source, use the Query type drop-down to choose Azure Resource Graph and select the subscriptions to target. Use the Query control to add the ARG KQL-subset that selects an interesting resource subset.
+To make a query control that uses this data source, use the **Query type** dropdown and select **Azure Resource Graph**. Then select the subscriptions to target. Use **Query control** to add the Resource Graph KQL subset that selects an interesting resource subset.
-![Screenshot of Azure Resource Graph KQL query.](./media/workbooks-data-sources/azure-resource-graph.png)
+![Screenshot that shows an Azure Resource Graph KQL query.](./media/workbooks-data-sources/azure-resource-graph.png)
## Azure Resource Manager
-Workbook supports Azure Resource Manager REST operations. This allows the ability to query management.azure.com endpoint without the need to provide your own authorization header token.
+Azure Workbooks supports Azure Resource Manager REST operations so that you can query the management.azure.com endpoint without providing your own authorization header token.
-To make a query control use this data source, use the Data source drop-down to choose Azure Resource Manager. Provide the appropriate parameters such as Http method, url path, headers, url parameters and/or body.
+To make a query control that uses this data source, use the **Data source** dropdown and select **Azure Resource Manager**. Provide the appropriate parameters, such as **Http method**, **url path**, **headers**, **url parameters**, and **body**.
> [!NOTE]
> Only GET, POST, and HEAD operations are currently supported.
## Azure Data Explorer

Workbooks now have support for querying from [Azure Data Explorer](/azure/data-explorer/) clusters with the powerful [Kusto](/azure/kusto/query/index) query language.
-For the **Cluster Name** field, you should add the region name following the cluster name. For example: *mycluster.westeurope*.
+For the **Cluster Name** field, add the region name following the cluster name. An example is *mycluster.westeurope*.
-![Screenshot of Kusto query window.](./media/workbooks-data-sources/data-explorer.png)
+![Screenshot that shows Kusto query window.](./media/workbooks-data-sources/data-explorer.png)
## JSON
-The JSON provider allows you to create a query result from static JSON content. It is most commonly used in Parameters to create dropdown parameters of static values. Simple JSON arrays or objects will automatically be converted into grid rows and columns. For more specific behaviors, you can use the Results tab and JSONPath settings to configure columns.
+The JSON provider allows you to create a query result from static JSON content. It's most commonly used in parameters to create dropdown parameters of static values. Simple JSON arrays or objects will automatically be converted into grid rows and columns. For more specific behaviors, you can use the **Results** tab and JSONPath settings to configure columns.
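As a rough illustration of how a simple JSON array of objects maps to grid columns and rows (a sketch only; the JSON provider's real conversion logic is internal to workbooks):

```python
import json

# Sketch: converting a simple JSON array of objects into grid columns
# and rows, as the JSON provider does automatically. Illustration only.
content = '[{"region": "westus", "count": 3}, {"region": "eastus", "count": 5}]'

rows = json.loads(content)
columns = sorted({key for row in rows for key in row})      # union of all keys
grid = [[row.get(col) for col in columns] for row in rows]  # one list per row

print(columns)  # ['count', 'region']
print(grid)     # [[3, 'westus'], [5, 'eastus']]
```

For shapes this simple no configuration is needed; the **Results** tab and JSONPath settings come into play for nested or irregular JSON.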
> [!NOTE]
-> Do not include any sensitive information in any fields (headers, parameters, body, url), since they will be visible to all of the Workbook users.
+> Do *not* include sensitive information in fields like headers, parameters, body, and URL, because they'll be visible to all the workbook users.
This provider supports [JSONPath](workbooks-jsonpath.md).

## Merge
-Merging data from different sources can enhance the insights experience. An example is augmenting active alert information with related metric data. This allows users to see not just the effect (an active alert), but also potential causes (for example, high CPU usage). The monitoring domain has numerous such correlatable data sources that are often critical to the triage and diagnostic workflow.
+Merging data from different sources can enhance the insights experience. An example is augmenting active alert information with related metric data. Merging data allows users to see not just the effect (an active alert) but also potential causes, for example, high CPU usage. The monitoring domain has numerous such correlatable data sources that are often critical to the triage and diagnostic workflow.
-Workbooks allow not just the querying of different data sources, but also provides simple controls that allow you to merge or join the data to provide rich insights. The **merge** control is the way to achieve it.
+With workbooks, you can query different data sources. Workbooks also provide simple controls that you can use to merge or join data to provide rich insights. The *merge* control is the way to achieve it.
-### Combining alerting data with Log Analytics VM performance data
+### Combine alerting data with Log Analytics VM performance data
-The example below combines alerting data with Log Analytics VM performance data to get a rich insights grid.
+The following example combines alerting data with Log Analytics VM performance data to get a rich insights grid.
-![Screenshot of a workbook with a merge control that combines alert and log analytics data.](./media/workbooks-data-sources/merge-control.png)
+![Screenshot that shows a workbook with a merge control that combines alert and Log Analytics data.](./media/workbooks-data-sources/merge-control.png)
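Conceptually, the merge control performs a join like the following sketch. The field names here are hypothetical, and the real control is configured in the workbook UI rather than in code:

```python
# Conceptual sketch of what the merge control does: join two result sets
# on a shared key column. Field names are hypothetical. Illustration only.
alerts = [
    {"computer": "vm-1", "alert": "High CPU", "severity": "Sev2"},
    {"computer": "vm-2", "alert": "Low disk", "severity": "Sev3"},
]
perf = [
    {"computer": "vm-1", "avg_cpu": 92.5},
    {"computer": "vm-2", "avg_cpu": 41.0},
]

# Index the performance rows by the join key, then combine each alert
# row with its matching performance row (a left join on "computer").
perf_by_computer = {row["computer"]: row for row in perf}
merged = [{**a, **perf_by_computer.get(a["computer"], {})} for a in alerts]

print(merged[0])
# {'computer': 'vm-1', 'alert': 'High CPU', 'severity': 'Sev2', 'avg_cpu': 92.5}
```

The merged rows show both the effect (the alert) and a potential cause (the CPU figure) side by side, which is the point of the insights grid above.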
-### Using merge control to combine Azure Resource Graph and Log Analytics data
+### Use merge control to combine Resource Graph and Log Analytics data
-Here is a tutorial on using the merge control to combine Azure Resource Graph and Log Analytics data:
+Watch this tutorial on using the merge control to combine Resource Graph and Log Analytics data:
[![Combining data from different sources in workbooks](https://img.youtube.com/vi/7nWP_YRzxHg/0.jpg)](https://www.youtube.com/watch?v=7nWP_YRzxHg "Video showing how to combine data from different sources in workbooks.")
Workbooks support these merges:
## Custom endpoint
-Workbooks support getting data from any external source. If your data lives outside Azure you can bring it to Workbooks by using this data source type.
+Workbooks support getting data from any external source. If your data lives outside Azure, you can bring it to workbooks by using this data source type.
-To make a query control use this data source, use the **Data source** drop-down to choose **Custom Endpoint**. Provide the appropriate parameters such as **Http method**, **url**, **headers**, **url parameters**, and/or **body**. Make sure your data source supports [CORS](https://developer.mozilla.org/en-US/docs/Web/HTTP/CORS) otherwise the request will fail.
+To make a query control that uses this data source, use the **Data source** dropdown and select **Custom Endpoint**. Provide the appropriate parameters, such as **Http method**, **url**, **headers**, **url parameters**, and **body**. Make sure your data source supports [CORS](https://developer.mozilla.org/en-US/docs/Web/HTTP/CORS). Otherwise, the request will fail.
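To illustrate the CORS requirement, the following sketch stands up a minimal local endpoint that returns the `Access-Control-Allow-Origin` header a browser needs before a workbook query against it could succeed. This is a toy illustration, not production guidance (a real endpoint should echo specific trusted origins rather than `*`):

```python
# Sketch: a minimal JSON endpoint that satisfies CORS by returning an
# Access-Control-Allow-Origin header. Toy illustration only.
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = json.dumps([{"status": "ok"}]).encode()
        self.send_response(200)
        # Without this header, a browser-based workbook request fails CORS.
        self.send_header("Access-Control-Allow-Origin", "*")
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep output quiet
        pass

server = HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

resp = urllib.request.urlopen(f"http://127.0.0.1:{server.server_port}/")
data = resp.read()
print(resp.headers["Access-Control-Allow-Origin"])  # *
server.shutdown()
```

If the endpoint omits that header, the browser blocks the response and the workbook query control reports a failure.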
-To avoid automatically making calls to untrusted hosts when using templates, the user needs to mark the used hosts as trusted. This can be done by either selecting the **Add as trusted** button, or by adding it as a trusted host in Workbook settings. These settings will be saved in [browsers that support IndexDb with web workers](https://caniuse.com/#feat=indexeddb).
+To avoid automatically making calls to untrusted hosts when you use templates, you need to mark the used hosts as trusted. You can either select **Add as trusted** or add it as a trusted host in workbook settings. These settings will be saved in [browsers that support IndexDb with web workers](https://caniuse.com/#feat=indexeddb).
This provider supports [JSONPath](workbooks-jsonpath.md).
+
## Workload health
-Azure Monitor has functionality that proactively monitors the availability and performance of Windows or Linux guest operating systems. Azure Monitor models key components and their relationships, criteria for how to measure the health of those components, and which components alert you when an unhealthy condition is detected. Workbooks allow users to use this information to create rich interactive reports.
+Azure Monitor has functionality that proactively monitors the availability and performance of Windows or Linux guest operating systems. Azure Monitor models key components and their relationships, criteria for how to measure the health of those components, and which components alert you when an unhealthy condition is detected. With workbooks, you can use this information to create rich interactive reports.
-To make a query control use this data source, use the **Query type** drop-down to choose Workload Health and select subscription, resource group or VM resources to target. Use the health filter drop downs to select an interesting subset of health incidents for your analytic needs.
+To make a query control that uses this data source, use the **Query type** dropdown to select **Workload Health**. Then select subscription, resource group, or VM resources to target. Use the health filter dropdowns to select an interesting subset of health incidents for your analytic needs.
-![Screenshot of alerts query.](./media/workbooks-data-sources/workload-health.png)
+![Screenshot that shows an alerts query.](./media/workbooks-data-sources/workload-health.png)
## Azure resource health
-Workbooks support getting Azure resource health and combining it with other data sources to create rich, interactive health reports
+Workbooks support getting Azure resource health and combining it with other data sources to create rich, interactive health reports.
-To make a query control use this data source, use the **Query type** drop-down to choose Azure health and select the resources to target. Use the health filter drop downs to select an interesting subset of resource issues for your analytic needs.
+To make a query control that uses this data source, use the **Query type** dropdown and select **Azure health**. Then select the resources to target. Use the health filter dropdowns to select an interesting subset of resource issues for your analytic needs.
-![Screenshot of alerts query that shows the health filter lists.](./media/workbooks-data-sources/resource-health.png)
+![Screenshot that shows an alerts query that shows the health filter lists.](./media/workbooks-data-sources/resource-health.png)
## Azure RBAC
-The Azure RBAC provider allows you to check permissions on resources. It is most commonly used in parameter to check if the correct RBAC are set up. A use case would be to create a parameter to check deployment permission and then notify the user if they don't have deployment permission. Simple JSON arrays or objects will automatically be converted into grid rows and columns or text with a 'hasPermission' column with either true or false. The permission is checked on each resource and then either 'or' or 'and' to get the result. The [operations or actions](../../role-based-access-control/resource-provider-operations.md) can be a string or an array.
+
+The Azure role-based access control (RBAC) provider allows you to check permissions on resources. It's most commonly used in parameters to check if the correct RBAC assignments are set up. A use case would be to create a parameter to check deployment permission and then notify the user if they don't have deployment permission.
+
+Simple JSON arrays or objects will automatically be converted into grid rows and columns, or text with a `hasPermission` column of either true or false. The permission is checked on each resource, and the per-resource results are then combined with either `or` or `and` to get the final result. The [operations or actions](../../role-based-access-control/resource-provider-operations.md) can be a string or an array.
**String:** ```
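The per-resource check and the `or`/`and` combination can be sketched like this. The helper names are hypothetical and the permission lookup is faked; the real provider evaluates actual Azure RBAC actions against Azure:

```python
# Sketch of combining per-resource permission checks with 'or' / 'and',
# as the RBAC provider does. The permission lookup is faked for illustration.
def has_permission(resource, action, granted):
    return action in granted.get(resource, set())

def check_resources(resources, action, granted, mode="and"):
    results = [has_permission(r, action, granted) for r in resources]
    return all(results) if mode == "and" else any(results)

# Hypothetical grants: one resource group allows deployments, one doesn't.
granted = {
    "rg-prod": {"Microsoft.Resources/deployments/write"},
    "rg-test": set(),
}
action = "Microsoft.Resources/deployments/write"

assert check_resources(["rg-prod", "rg-test"], action, granted, "or") is True
assert check_resources(["rg-prod", "rg-test"], action, granted, "and") is False
```

With `or`, permission on any one resource yields true; with `and`, every resource must grant the action.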
## Change Analysis (preview)
-To make a query control using [Application Change Analysis](../app/change-analysis.md) as the data source, use the **Data source** drop-down and choose *Change Analysis (preview)* and select a single resource. Changes for up to the last 14 days can be shown. The *Level* drop-down can be used to filter between "Important", "Normal", and "Noisy" changes, and this drop down supports workbook parameters of type [drop down](workbooks-dropdowns.md).
+To make a query control that uses [Application Change Analysis](../app/change-analysis.md) as the data source, use the **Data source** dropdown and select **Change Analysis (preview)**. Then select a single resource. Changes for up to the last 14 days can be shown. Use the **Level** dropdown to filter between **Important**, **Normal**, and **Noisy** changes. This dropdown supports workbook parameters of the type [drop down](workbooks-dropdowns.md).
> [!div class="mx-imgBorder"]
-> ![A screenshot of a workbook with Change Analysis.](./media/workbooks-data-sources/change-analysis-data-source.png)
+> ![A screenshot that shows a workbook with Change Analysis.](./media/workbooks-data-sources/change-analysis-data-source.png)
## Next steps
+ - [Get started with Azure Workbooks](workbooks-getting-started.md)
+ - [Create an Azure workbook](workbooks-create-workbook.md)
azure-monitor Workbooks Getting Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/visualize/workbooks-getting-started.md
Title: Common Workbooks tasks
-description: Learn how to perform the commonly used tasks in Workbooks.
+ Title: Common Azure Workbooks tasks
+description: Learn how to perform the commonly used tasks in workbooks.
# Get started with Azure Workbooks
-This article describes how to access Azure Workbooks and the common tasks used to work with Workbooks.
+This article describes how to access Azure Workbooks and the common tasks used to work with workbooks.
-You can access Workbooks in a few ways:
-- In the [Azure portal](https://portal.azure.com), click on **Monitor**, and then select **Workbooks** from the menu bar on the left.
+You can access Azure Workbooks in a few ways:
- :::image type="content" source="./media/workbooks-overview/workbooks-menu.png" alt-text="Screenshot of Workbooks icon in the menu.":::
+- In the [Azure portal](https://portal.azure.com), select **Monitor** > **Workbooks** from the menu bars on the left.
-- In a **Log Analytics workspace** page, select the **Workbooks** icon at the top of the page.
+ :::image type="content" source="./media/workbooks-overview/workbooks-menu.png" alt-text="Screenshot that shows Workbooks in the menu.":::
- :::image type="content" source="media/workbooks-overview/workbooks-log-analytics-icon.png" alt-text="Screenshot of Workbooks icon on Log analytics workspace page.":::
+- On a **Log Analytics workspaces** page, select **Workbooks** at the top of the page.
-The gallery opens. Select a saved workbook or a template from the gallery, or search for the name in the search bar.
+ :::image type="content" source="media/workbooks-overview/workbooks-log-analytics-icon.png" alt-text="Screenshot of Workbooks on the Log Analytics workspaces page.":::
+
+When the gallery opens, select a saved workbook or a template. You can also search for a name in the search box.
## Save a workbook
+
To save a workbook, save the report with a specific title, subscription, resource group, and location.
-The workbook will autofill to the same settings as the LA workspace, with the same subscription, resource group, however, users may change these report settings. Workbooks are shared resources that require write access to the parent resource group to be saved.
+
+The workbook is auto-filled with the same settings as the Log Analytics workspace, including the same subscription and resource group. You can change these report settings if you want. Workbooks are saved to **My Reports** by default and are accessible only to the individual user, but they can be saved directly to shared reports or shared later on. Workbooks are shared resources that require write access to the parent resource group to be saved.
## Share a workbook template
-Once you start creating your own workbook template, you may want to share it with the wider community. To learn more, and to explore other templates that aren't part of the default Azure Monitor gallery, visit our [GitHub repository](https://github.com/Microsoft/Application-Insights-Workbooks/blob/master/README.md). To browse existing workbooks, visit the [Workbook library](https://github.com/microsoft/Application-Insights-Workbooks/tree/master/Workbooks) on GitHub.
+After you start creating your own workbook template, you might want to share it with the wider community. To learn more, and to explore other templates that aren't part of the default Azure Monitor gallery, see the [GitHub repository](https://github.com/Microsoft/Application-Insights-Workbooks/blob/master/README.md). To browse existing workbooks, see the [Workbook library](https://github.com/microsoft/Application-Insights-Workbooks/tree/master/Workbooks) on GitHub.
## Pin a visualization
-Use the pin button next to a text, query, or metrics components in a workbook can be pinned by using the pin button on those items while the workbook is in pin mode, or if the workbook author has enabled settings for that element to make the pin icon visible.
+You can pin text, query, or metrics components in a workbook by using the **Pin** button on those items while the workbook is in pin mode. Or you can use the **Pin** button if the workbook author has enabled settings for that element to make it visible.
-To access pin mode, select **Edit** to enter editing mode, and select the blue pin icon in the top bar. An individual pin icon will then appear above each corresponding workbook part's *Edit* box on the right-hand side of your screen.
+To access pin mode, select **Edit** to enter editing mode. Select **Pin** on the top bar. An individual **Pin** then appears above each corresponding workbook part's **Edit** button on the right side of the screen.
> [!NOTE]
-> The state of the workbook is saved at the time of the pin, and pinned workbooks on a dashboard will not update if the underlying workbook is modified. In order to update a pinned workbook part, you will need to delete and re-pin that part.
+> The state of the workbook is saved at the time of the pin. Pinned workbooks on a dashboard won't update if the underlying workbook is modified. To update a pinned workbook part, you must delete and re-pin that part.
### Time ranges for pinned queries
-Pinned workbook query parts will respect the dashboard's time range if the pinned item is configured to use a *Time Range* parameter. The dashboard's time range value will be used as the time range parameter's value, and any change of the dashboard time range will cause the pinned item to update. If a pinned part is using the dashboard's time range, you will see the subtitle of the pinned part update to show the dashboard's time range whenever the time range changes.
+Pinned workbook query parts respect the dashboard's time range if the pinned item is configured to use a *TimeRange* parameter. The dashboard's time range value is used as the time range parameter's value, and any change of the dashboard time range causes the pinned item to update. If a pinned part uses the dashboard's time range, the subtitle of the pinned part updates to show the dashboard's time range whenever the time range changes.
-Additionally, pinned workbook parts using a time range parameter will auto refresh at a rate determined by the dashboard's time range. The last time the query ran will appear in the subtitle of the pinned part.
+Pinned workbook parts that use a time range parameter auto-refresh at a rate determined by the dashboard's time range. The last time the query ran appears in the subtitle of the pinned part.
-If a pinned component has an explicitly set time range (does not use a time range parameter), that time range will always be used for the dashboard, regardless of the dashboard's settings. The subtitle of the pinned part will not show the dashboard's time range, and the query will not auto-refresh on the dashboard. The subtitle will show the last time the query executed.
+If a pinned component has an explicitly set time range and doesn't use a time range parameter, that time range is always used for the dashboard, regardless of the dashboard's settings. The subtitle of the pinned part doesn't show the dashboard's time range, and the query doesn't auto-refresh on the dashboard. The subtitle shows the last time the query executed.
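The precedence rules above can be sketched as a tiny model (illustrative only, not Azure code; the function name and parameters are made up for this example):

```python
# Illustrative model of the documented precedence: a part bound to a time range
# parameter follows the dashboard and auto-refreshes; an explicitly set range
# wins and disables auto-refresh.

def effective_time_range(uses_time_range_parameter, explicit_range, dashboard_range):
    """Return (range used on the dashboard, whether the part auto-refreshes)."""
    if uses_time_range_parameter:
        # The dashboard's time range drives the part, and it auto-refreshes.
        return dashboard_range, True
    # The explicit range always wins; the query doesn't auto-refresh.
    return explicit_range, False
```

For example, `effective_time_range(False, "Last 7 days", "Last 24 hours")` yields `("Last 7 days", False)`: the explicit range is used and the part doesn't auto-refresh.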
> [!NOTE]
-> Queries using the *merge* data source are not currently supported when pinning to dashboards.
+> Queries that use the *merge* data source aren't currently supported when pinning to dashboards.
+
+## Auto refresh
+
+Select **Auto refresh** to open a list of refresh intervals, and then select one. The workbook keeps refreshing at the selected interval.
+
+* **Auto refresh** only refreshes when the workbook is in read mode. If a user sets an interval of 5 minutes and after 4 minutes switches to edit mode, refreshing doesn't occur if the user is still in edit mode. But if the user returns to read mode, the interval of 5 minutes resets and the workbook will be refreshed after 5 minutes.
+* Selecting **Auto refresh** in read mode also resets the interval. If a user sets the interval to 5 minutes and after 3 minutes the user selects **Auto refresh** to manually refresh the workbook, the **Auto refresh** interval resets and the workbook will be auto-refreshed after 5 minutes.
+* This setting isn't saved with the workbook. Every time a user opens a workbook, **Auto refresh** is **Off** and needs to be set again.
+* Switching workbooks and going out of the gallery clears the **Auto refresh** interval.
-## Auto-Refresh
-Clicking on the Auto-Refresh button opens a list of intervals to let the user pick up the interval. The Workbook will keep refreshing after the selected time interval.
-* Auto-Refresh only refreshes when the Workbook is in read mode. If a user sets an interval of say 5 minutes and after 4 minutes switches to edit mode then there is no refreshing when the user is still in edit mode. But if the user comes back to read mode, the interval of 5 minutes resets and the Workbook will be refreshed after 5 minutes.
-* Clicking on the Refresh button on Read mode also reset the interval. Say a user sets the interval to 5 minutes and after 3 minutes, the user clicks on the refresh button to manually refresh the Workbook, then the Auto-refresh interval resets and the Workbook will be auto refreshed after 5 minutes.
-* This setting is not saved with Workbook. Every time a user opens a Workbook, the Auto-refresh is Off initially and needs to be set again.
-* Switching Workbooks, going out of gallery will clear the Auto refresh interval.
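The refresh and reset rules above can be summarized in a small state model (a hypothetical sketch, not Workbooks source code; the class and method names are invented for illustration):

```python
# Hypothetical model of the documented auto-refresh rules:
# - refreshes only in read mode
# - returning to read mode resets the countdown
# - the setting isn't saved with the workbook (off by default)

class AutoRefresh:
    def __init__(self):
        self.interval = None      # not persisted: off every time a workbook opens
        self.elapsed = 0
        self.mode = "read"

    def set_interval(self, minutes):
        self.interval = minutes
        self.elapsed = 0

    def switch_mode(self, mode):
        if mode == "read" and self.mode == "edit":
            self.elapsed = 0      # returning to read mode resets the interval
        self.mode = mode

    def tick(self, minutes):
        """Advance the clock; return True if a refresh fires."""
        if self.mode != "read" or not self.interval:
            return False          # no refreshing while in edit mode
        self.elapsed += minutes
        if self.elapsed >= self.interval:
            self.elapsed = 0
            return True
        return False
```

For example, with a 5-minute interval, switching to edit mode after 4 minutes suppresses the refresh; returning to read mode restarts the full 5-minute wait.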
+## Next steps
-## Next Steps
+[Azure Workbooks data sources](workbooks-data-sources.md)
azure-monitor Workbooks Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/visualize/workbooks-overview.md
Title: Azure Workbooks Overview
+ Title: Azure Workbooks overview
description: Learn how workbooks provide a flexible canvas for data analysis and the creation of rich visual reports within the Azure portal.
# Azure Workbooks
-Workbooks provide a flexible canvas for data analysis and the creation of rich visual reports within the Azure portal. They allow you to tap into multiple data sources from across Azure, and combine them into unified interactive experiences. Workbooks let you combine multiple kinds of visualizations and analyses, making them great for free-form exploration.
+Workbooks provide a flexible canvas for data analysis and the creation of rich visual reports within the Azure portal. They allow you to tap into multiple data sources from across Azure and combine them into unified interactive experiences. Workbooks let you combine multiple kinds of visualizations and analyses, making them great for freeform exploration.
-Workbooks combine text, [log queries](/azure/data-explorer/kusto/query/), metrics, and parameters into rich interactive reports.
+Workbooks combine text, [log queries](/azure/data-explorer/kusto/query/), metrics, and parameters into rich interactive reports.
Workbooks are helpful for scenarios such as:
-
-- Exploring the usage of your virtual machine when you don't know the metrics of interest in advance: CPU utilization, disk space, memory, network dependencies, etc.
-- Explaining to your team how a recently provisioned VM is performing, by showing metrics for key counters and other log events.
-- Sharing the results of a resizing experiment of your VM with other members of your team. You can explain the goals for the experiment with text, then show each usage metric and analytics queries used to evaluate the experiment, along with clear call-outs for whether each metric was above or below target.
-- Reporting the impact of an outage on the usage of your VM, combining data, text explanation, and a discussion of next steps to prevent outages in the future.
+- Exploring the usage of your virtual machine when you don't know the metrics of interest in advance. You can discover metrics for CPU utilization, disk space, memory, and network dependencies.
+- Explaining to your team how a recently provisioned VM is performing. You can show metrics for key counters and other log events.
+- Sharing the results of a resizing experiment of your VM with other members of your team. You can explain the goals for the experiment with text. Then you can show each usage metric and the analytics queries used to evaluate the experiment, along with clear call-outs for whether each metric was above or below target.
+- Reporting the impact of an outage on the usage of your VM. You can combine data, text explanation, and a discussion of next steps to prevent outages in the future.
-## The Gallery
-The gallery opens listing all the saved workbooks and templates for your workspace, allowing you to easily organize, sort, and manage workbooks of all types.
+## The gallery
+
+The gallery lists all the saved workbooks and templates for your workspace. You can easily organize, sort, and manage workbooks of all types.
+
#### Gallery tabs

There are four tabs in the gallery to help organize workbook types.

| Tab | Description |
|---|---|
-| All | Shows the top four items for each type - workbooks, public templates, and my templates. Workbooks are sorted by modified date so you will see the most recent eight modified workbooks.|
+| All | Shows the top four items for workbooks, public templates, and my templates. Workbooks are sorted by modified date, so you'll see the most recent eight modified workbooks.|
| Workbooks | Shows the list of all the available workbooks that you created or are shared with you. |
| Public Templates | Shows the list of all the available ready to use, get started functional workbook templates published by Microsoft. Grouped by category. |
| My Templates | Shows the list of all the available deployed workbook templates that you created or are shared with you. Grouped by category. |

## Data sources
-Workbooks can query data from multiple Azure sources. You can transform this data to provide insights into the availability, performance, usage, and overall health of the underlying components. For example:
-- You can analyze performance logs from virtual machines to identify high CPU or low memory instances and display the results as a grid in an interactive report.
-- You can combine data from several different sources within a single report. This allows you to create composite resource views or joins across resources enabling richer data and insights that would otherwise be impossible.
+Workbooks can query data from multiple Azure sources. You can transform this data to provide insights into the availability, performance, usage, and overall health of the underlying components. For example, you can:
+
+- Analyze performance logs from virtual machines to identify high CPU or low memory instances and display the results as a grid in an interactive report.
+- Combine data from several different sources within a single report. You can create composite resource views or joins across resources to gain richer data and insights that would otherwise be impossible.
+
+For more information about the supported data sources, see [Azure Workbooks data sources](workbooks-data-sources.md).
-See [this article](workbooks-data-sources.md) for detailed information about the supported data sources.
## Visualizations
-Workbooks provide a rich set of capabilities for visualizing your data. Each data source and result set support visualizations that are most useful for that data. See [this article](workbooks-visualizations.md) for detailed information about the visualizations.
+Workbooks provide a rich set of capabilities for visualizing your data. Each data source and result set support visualizations that are most useful for that data. For more information about the visualizations, see [Workbook visualizations](workbooks-visualizations.md).
:::image type="content" source="./media/workbooks-overview/visualizations.png" alt-text="Screenshot that shows an example of workbook visualizations." border="false" lightbox="./media/workbooks-overview/visualizations.png":::

## Access control
-Users must have the appropriate permissions to access to view or edit a workbook. Workbook permissions are based on the permissions the user has for the resources included in the workbooks.
+Users must have the appropriate permissions to view or edit a workbook. Workbook permissions are based on the permissions the user has for the resources included in the workbooks.
-Standard Azure roles that provide the access to workbooks are:
-
-- [Monitoring Reader](../../role-based-access-control/built-in-roles.md#monitoring-reader) includes standard /read privileges that would be used by monitoring tools (including workbooks) to read data from resources.
+Standard Azure roles that provide access to workbooks:
-"Workbooks Contributor" adds "workbooks/write" privileges to an object to save shared workbooks.
+- [Monitoring Reader](../../role-based-access-control/built-in-roles.md#monitoring-reader) includes standard `/read` privileges that would be used by monitoring tools (including workbooks) to read data from resources.
+ - [Monitoring Contributor](../../role-based-access-control/built-in-roles.md#monitoring-contributor) includes general `/write` privileges used by various monitoring tools for saving items (including `workbooks/write` privilege to save shared workbooks). Workbooks Contributor adds `workbooks/write` privileges to an object to save shared workbooks.
-For custom roles, you must add `microsoft.insights/workbooks/write` to the user's permissions in order to be able to edit and save a workbook. For more details, see the [Workbook Contributor](../../role-based-access-control/built-in-roles.md#monitoring-contributor) role.
+For custom roles, you must add `microsoft.insights/workbooks/write` to the user's permissions to edit and save a workbook. For more information, see the [Workbook Contributor](../../role-based-access-control/built-in-roles.md#monitoring-contributor) role.
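As a sketch, a custom role that can read monitoring data and save shared workbooks might look like the following. The property names follow the standard Azure custom-role JSON format; the role name, description, and scope are placeholders for illustration:

```python
import json

# Minimal sketch of an Azure custom role definition that includes the
# permission required to edit and save workbooks. The name and scope
# below are placeholders, not real values.
custom_role = {
    "Name": "Workbook Editor (example)",
    "IsCustom": True,
    "Description": "Can read monitoring data and save shared workbooks.",
    "Actions": [
        "*/read",
        "microsoft.insights/workbooks/write",  # required to edit and save a workbook
    ],
    "NotActions": [],
    "AssignableScopes": ["/subscriptions/<subscription-id>"],
}

print(json.dumps(custom_role, indent=2))
```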
## Next steps
+[Get started with Azure Workbooks](workbooks-getting-started.md)
azure-monitor Workbooks Renderers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/visualize/workbooks-renderers.md
Title: Azure Workbook rendering options
+ Title: Azure Workbooks rendering options
description: Learn about all the Azure Monitor workbook rendering options.
# Rendering options
-These rendering options can be used with grids, tiles, and graphs to produce the visualizations in optimal format.
+
+This article describes the Azure Workbooks rendering options you can use with grids, tiles, and graphs to produce visualizations in optimal format.
+ ## Column renderers
-| Column Renderer | Explanation | More Options |
+| Column renderer | Description | More options |
|:- |:-|:-|
-| Automatic | The default - uses the most appropriate renderer based on the column type. | |
+| Automatic | The default. Uses the most appropriate renderer based on the column type. | |
| Text| Renders the column values as text. | |
-| Right Aligned| Renders the column values as right-aligned text. | |
-| Date/Time| Renders a readable date time string. | |
+| Right aligned| Renders the column values as right-aligned text. | |
+| Date/Time| Renders a readable date/time string. | |
| Heatmap| Colors the grid cells based on the value of the cell. | Color palette and min/max value used for scaling. |
| Bar| Renders a bar next to the cell based on the value of the cell. | Color palette and min/max value used for scaling. |
| Bar underneath | Renders a bar near the bottom of the cell based on the value of the cell. | Color palette and min/max value used for scaling. |
-| Composite bar| Renders a composite bar using the specified columns in that row. Refer [Composite Bar](workbooks-composite-bar.md) for details. | Columns with corresponding colors to render the bar and a label to display at the top of the bar. |
-|Spark bars| Renders a spark bar in the cell based on the values of a dynamic array in the cell. For example, the Trend column from the screenshot at the top. | Color palette and min/max value used for scaling. |
+| Composite bar| Renders a composite bar by using the specified columns in that row. For more information, see [Composite bar](workbooks-composite-bar.md). | Columns with corresponding colors to render the bar and a label to display at the top of the bar. |
+|Spark bars| Renders a spark bar in the cell based on the values of a dynamic array in the cell. An example is the Trend column from the screenshot at the top. | Color palette and min/max value used for scaling. |
|Spark lines| Renders a spark line in the cell based on the values of a dynamic array in the cell. | Color palette and min/max value used for scaling. |
|Icon| Renders icons based on the text values in the cell. Supported values include:<br><ul><li>canceled</li><li>critical</li><li>disabled</li><li>error</li><li>failed</li> <li>info</li><li>none</li><li>pending</li><li>stopped</li><li>question</li><li>success</li><li>unknown</li><li>warning</li><li>uninitialized</li><li>resource</li><li>up</li> <li>down</li><li>left</li><li>right</li><li>trendup</li><li>trenddown</li><li>4</li><li>3</li><li>2</li><li>1</li><li>Sev0</li><li>Sev1</li><li>Sev2</li><li>Sev3</li><li>Sev4</li><li>Fired</li><li>Resolved</li><li>Available</li><li>Unavailable</li><li>Degraded</li><li>Unknown</li><li>Blank</li></ul>| |
-| Link | Renders a link that when clicked or performs a configurable action. Use this setting if you **only** want the item to be a link. Any of the other types of renderers can also be a link by using the **Make this item a link** setting. For more information, see [Link Actions](#link-actions). | |
+| Link | Renders a link when selected or performs a configurable action. Use this setting if you *only* want the item to be a link. Any of the other types of renderers can also be a link by using the **Make this item a link** setting. For more information, see [Link actions](#link-actions). | |
| Location | Renders a friendly Azure region name based on a region ID. | |
| Resource type | Renders a friendly resource type string based on a resource type ID. | |
-| Resource| Renders a friendly resource name and link based on a resource ID. | Option to show the resource type icon |
-| Resource group | Renders a friendly resource group name and link based on a resource group ID. If the value of the cell is not a resource group, it will be converted to one. | Option to show the resource group icon |
-|Subscription| Renders a friendly subscription name and link based on a subscription ID. if the value of the cell is not a subscription, it will be converted to one. | Option to show the subscription icon. |
-|Hidden| Hides the column in the grid. Useful when the default query returns more columns than needed but a project-away is not desired | |
+| Resource| Renders a friendly resource name and link based on a resource ID. | Option to show the resource type icon. |
+| Resource group | Renders a friendly resource group name and link based on a resource group ID. If the value of the cell isn't a resource group, it will be converted to one. | Option to show the resource group icon. |
+|Subscription| Renders a friendly subscription name and link based on a subscription ID. If the value of the cell isn't a subscription, it will be converted to one. | Option to show the subscription icon. |
+|Hidden| Hides the column in the grid. Useful when the default query returns more columns than needed but a project-away isn't desired. | |
## Link actions
-If the **Link** renderer is selected or the **Make this item a link** checkbox is selected, the author can configure a link action to occur when the user selects the cell to taking the user to another view with context coming from the cell, or to open up a url. See link renderer actions for more details.
+If the **Link** renderer is selected or the **Make this item a link** checkbox is selected, you can configure a link action. The action can occur when the user selects the cell to take the user to another view with context coming from the cell or to open a URL. For more information, see link renderer actions.
-## Using thresholds with links
+## Use thresholds with links
-The instructions below will show you how to use thresholds with links to assign icons and open different workbooks. Each link in the grid will open up a different workbook template for that Application Insights resource.
+The following instructions show you how to use thresholds with links to assign icons and open different workbooks. Each link in the grid opens a different workbook template for that Application Insights resource.
-1. Switch the workbook to edit mode by selecting **Edit** toolbar item.
-1. Select **Add** then **Add query**.
-1. Change the **Data source** to "JSON" and **Visualization** to "Grid".
-1. Enter this query.
+1. Switch the workbook to edit mode by selecting **Edit**.
+1. Select **Add** > **Add query**.
+1. Change **Data source** to **JSON** and change **Visualization** to **Grid**.
+1. Enter this query:
```json
[
]
```
-1. Run query.
+1. Run the query.
1. Select **Column Settings** to open the settings.
-1. Select "name" from **Columns**.
-1. Under **Column renderer**, choose "Thresholds".
-1. Enter and choose the following **Threshold Settings**.
+1. Under **Columns**, select **name**.
+1. Under **Column renderer**, select **Thresholds**.
+1. Enter and choose the following **Threshold Settings**:
- Keep the default row as is. You may enter whatever text you like. The Text column takes a String format as an input and populates it with the column value and unit if specified. For example, if warning is the column value the text can be "{0} {1} link!", it will be displayed as "warning link!".
+ 1. Keep the **Default** row as is.
+ 1. Enter whatever text you like.
+ 1. The **Text** column takes a string format as an input and populates it with the column value and unit if specified. For example, if **warning** is the column value, the text can be `{0} {1} link!`. It will be displayed as `warning link!`.
    | Operator | Value | Icons |
    |-|-|-|
    | == | warning | Warning |
    | == | error | Failed |
- ![Screenshot of Edit column settings tab with the above settings.](./media/workbooks-grid-visualizations/column-settings.png)
+ ![Screenshot that shows the Edit column settings tab with the preceding settings.](./media/workbooks-grid-visualizations/column-settings.png)
-1. Select the **Make this item a link** box.
- - Under **View to open**, choose **Workbook (Template)**.
- - Under **Link value comes from**, choose **link**.
- - Select the **Open link in Context Blade** box.
- - Choose the following settings in **Workbook Link Settings**
- - Under **Template Id comes from**, choose **Column**.
- - Under **Column** choose **link**.
-
- ![Screenshot of link settings with the above settings.](./media/workbooks-grid-visualizations/make-this-item-a-link.png)
-
-1. Select **link** from **Columns**. Under **Settings**, next to **Column renderer**, select **(Hide column)**.
-1. To change the display name of the **name** column, select the **Labels** tab. On the row with **name** as its **Column ID**, under **Column Label** enter the name you want displayed.
+1. Select the **Make this item a link** checkbox.
+ - Under **View to open**, select **Workbook (Template)**.
+ - Under **Link value comes from**, select **link**.
+ - Select the **Open link in Context Blade** checkbox.
+ - Choose the following settings in **Workbook Link Settings**:
+ - Under **Template Id comes from**, select **Column**.
+ - Under **Column**, select **link**.
+
+ ![Screenshot that shows link settings with the preceding settings.](./media/workbooks-grid-visualizations/make-this-item-a-link.png)
+
+1. Under **Columns**, select **link**. Under **Settings**, next to **Column renderer**, select **(Hide column)**.
+1. To change the display name of the **name** column, select the **Labels** tab. On the row with **name** as its **Column ID**, under **Column Label**, enter the name you want displayed.
1. Select **Apply**.
- ![Screenshot of a thresholds in grid with the above settings.](./media/workbooks-grid-visualizations/thresholds-workbooks-links.png)
+ ![Screenshot that shows thresholds in a grid with the preceding settings.](./media/workbooks-grid-visualizations/thresholds-workbooks-links.png)
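The threshold **Text** format used in the procedure above (for example, `{0} {1} link!`, where `{0}` is the column value and `{1}` is the unit) behaves roughly like this hypothetical helper (not Workbooks source code):

```python
# Hypothetical helper: fills "{0}" with the column value and "{1}" with the
# unit (if any), then collapses the extra space left when no unit is specified.

def render_threshold_text(template, value, unit=""):
    return " ".join(template.format(value, unit).split())

print(render_threshold_text("{0} {1} link!", "warning"))  # warning link!
```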
azure-monitor Workbooks View Designer Conversion Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/visualize/workbooks-view-designer-conversion-overview.md
+
+ Title: Transition from View Designer to workbooks
+description: Transition from View Designer to workbooks.
++++ Last updated : 07/22/2022++++
+# Transition from View Designer to workbooks
+[View Designer](view-designer.md) is a feature of Azure Monitor that allows you to create custom views to help you visualize data in your Log Analytics workspace, with charts, lists, and timelines. View Designer has been transitioned to workbooks, which provide a flexible canvas for data analysis and the creation of rich visual reports within the Azure portal. This article helps you make the transition from View Designer to workbooks. It describes simple steps to re-create some commonly used View Designer views, but workbooks also let you create and design your own custom visualizations and metrics.
+
+[Workbooks](../vm/vminsights-workbooks.md) combine text, [log queries](/azure/data-explorer/kusto/query/), metrics, and parameters into rich interactive reports. Team members with the same access to Azure resources are also able to edit workbooks.
+
+You can use workbooks to:
+
+ - Explore the usage of your virtual machine when you don't know the metrics of interest in advance: CPU utilization, disk space, memory, network dependencies, etc. Unlike other usage analytics tools, workbooks let you combine multiple kinds of visualizations and analyses, making them great for this kind of free-form exploration.
+ - Explain to your team how a recently provisioned VM is performing, by showing metrics for key counters and other log events.
+ - Share the results of a resizing experiment of your VM with other members of your team. You can explain the goals for the experiment with text, then show each usage metric and analytics queries used to evaluate the experiment, along with clear call-outs for whether each metric was above or below target.
+ - Report the impact of an outage on the usage of your VM, combining data, text explanation, and a discussion of next steps to prevent outages in the future.
+
+See [Get started with Azure Workbooks](workbooks-getting-started.md) for common workbook tasks such as creating, opening, or saving a workbook.
+## Why move from View Designer dashboards to workbooks?
+
+With View designer, you can generate different query-based views and visualizations, but many high-level customizations remain limited, such as formatting the grids and tile layouts or selecting alternative graphics to represent your data. View designer is restricted to a total of nine distinct tiles to represent your data.
+
+View Designer has a fixed, static style of representation. Workbooks give you the freedom to include and modify how the data is represented.
+
+The workbooks platform unlocks the full potential of your data. Workbooks not only retain all the capabilities of View Designer, but also support more functionality through text, metrics, parameters, and much more. For example, workbooks allow users to consolidate dense grids and add search bars to easily filter and analyze the data.
+
+Workbooks provide functionality that wasn't available in View Designer:
+
+- Support for both logs and metrics.
+- Personal views for individual access control, as well as shared workbook views.
+- Custom layout options with tabs, sizing, and scaling controls.
+- Support for querying across multiple Log Analytics workspaces, Application Insights applications, and subscriptions.
+- Custom parameters that dynamically update associated charts and visualizations.
+- Template gallery support from public GitHub.
+
+This screenshot is from the [Workspace usage template](https://go.microsoft.com/fwlink/?linkid=874159&resourceId=Azure%20Monitor&featureName=Workbooks&itemId=community-Workbooks%2FAzure%20Monitor%20-%20Workspaces%2FWorkspace%20Usage&workbookTemplateName=Workspace%20Usage&func=NavigateToPortalFeature&type=workbook) and shows an example of what you can create with workbooks:
+
+## Replicate common View Designer views
+
+While View Designer manages views through the workspace summary, workbooks have a gallery that displays saved workbooks and templates for your workspace. You can use the gallery to access, modify, and create views.
++
+The following examples show commonly used View Designer styles and how to convert them to workbooks.
+### Vertical workspace
+
+Use the [sample JSON](workbooks-view-designer-conversions.md#vertical-workspace) to create a workbook that looks similar to a View Designer vertical workspace.
++
+### Tabbed workspace
+
+Use the [sample JSON](workbooks-view-designer-conversions.md#tabbed-workspace) to create a workbook that looks similar to a View Designer tabbed workspace.
+
+This is a workbook with a data type distribution tab:
++
+This is a workbook with a data types over time tab:
++
+## Replicate the View Designer overview tile
+
+In View Designer, you can use the overview tile to represent and summarize the overall state. The overview is presented in seven tiles, ranging from numbers to charts. In workbooks, you can create similar visualizations and pin them to your [Azure portal dashboard](/azure/azure-portal/azure-portal-dashboards). Just like the overview tiles in the workspace summary, pinned workbook items link directly to the workbook view.
+
+You can also take advantage of the customization features of Azure dashboards, such as auto-refresh, moving, sizing, and more filtering for your pinned items and visualizations.
++
+## Pin a workbook item
+
+1. Create a new Azure Dashboard or select an existing Azure Dashboard.
+1. Follow the instructions to [pin a visualization](workbooks-getting-started.md#pin-a-visualization).
+1. Select the option **Always show the pin icon on this step**. A pin icon appears in the upper-right corner of your workbook item. This pin enables you to pin specific visualizations to your dashboard, just like the overview tiles.
++
+You can also pin multiple visualizations from the workbook, or the entire workbook content, to a dashboard.
+
+## Pin an entire workbook
+1. Enter Edit mode by selecting **Edit** in the top toolbar.
+1. Use the pin icon to pin the entire workbook item or any of the individual elements and visualizations within the workbook.
++
+## Replicate the View Designer 'Donut & List' tile
+
+View Designer tiles typically consist of two sections: a visualization and a list that matches the data from the visualization. An example is the **Donut & List** tile.
++
+With workbooks, you can choose to query one or both sections of the view. Formulating queries in workbooks is a simple two-step process. First, the data is generated from the query. Second, the data is rendered as a visualization. Here's an example of how this view can be re-created in workbooks:
++
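The two-step pattern (generate data with a query, then render it) maps directly onto the workbook JSON. Here's a trimmed sketch of a query item, modeled on the "Data Type Distribution" sample shown later on this page, expressed as a Python dict so it can be printed as JSON:

```python
import json

# Trimmed sketch of a workbook query step: the KQL query generates the data,
# and "render piechart" (plus the item's settings) controls the visualization.
query_item = {
    "type": 3,  # a query step in the workbook
    "content": {
        "version": "KqlItem/1.0",
        "query": "search * | summarize AggregatedValue = count() by Type | render piechart",
        "title": "Data Type Distribution",
    },
}

print(json.dumps(query_item, indent=2))
```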
+## Next steps
+
+- [Sample conversions](workbooks-view-designer-conversions.md)
azure-monitor Workbooks View Designer Conversions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/visualize/workbooks-view-designer-conversions.md
+
+ Title: Sample JSON to convert a View Designer workspace
+description: Sample JSON to create a workbook that looks like a View Designer workspace.
++++ Last updated : 07/22/2022+++
+# Sample JSON to convert a View Designer workspace
+
+To convert a View Designer workspace, copy and paste the following JSON into the Advanced Editor by using the **</>** button on the toolbar.
+
+## Vertical workspace
+
+```json
+{
+ "version": "Notebook/1.0",
+ "items": [
+ {
+ "type": 9,
+ "content": {
+ "version": "KqlParameterItem/1.0",
+ "crossComponentResources": [
+ "/subscriptions/1f3fa6d2-851c-4a91-9087-1a050f3a9c38/resourcegroups/defaultresourcegroup-eus/providers/microsoft.operationalinsights/workspaces/defaultworkspace-1f3fa6d2-851c-4a91-9087-1a050f3a9c38-eus"
+ ],
+ "parameters": [
+ {
+ "id": "f90c348b-4933-4b02-8959-1246d4ceb19c",
+ "version": "KqlParameterItem/1.0",
+ "name": "Subscription",
+ "type": 6,
+ "isRequired": true,
+ "value": "/subscriptions/5c038d14-3833-463f-a492-de956f63f12a",
+ "typeSettings": {
+ "additionalResourceOptions": [
+ "value::1"
+ ],
+ "includeAll": false
+ }
+ },
+ {
+ "id": "98860972-bc1f-4305-b15e-7c529c8def06",
+ "version": "KqlParameterItem/1.0",
+ "name": "TimeRange",
+ "type": 4,
+ "isRequired": true,
+ "value": {
+ "durationMs": 86400000
+ },
+ "typeSettings": {
+ "selectableValues": [
+ {
+ "durationMs": 300000
+ },
+ {
+ "durationMs": 900000
+ },
+ {
+ "durationMs": 1800000
+ },
+ {
+ "durationMs": 3600000
+ },
+ {
+ "durationMs": 14400000
+ },
+ {
+ "durationMs": 43200000
+ },
+ {
+ "durationMs": 86400000
+ },
+ {
+ "durationMs": 172800000
+ },
+ {
+ "durationMs": 259200000
+ },
+ {
+ "durationMs": 604800000
+ },
+ {
+ "durationMs": 1209600000
+ },
+ {
+ "durationMs": 2419200000
+ },
+ {
+ "durationMs": 2592000000
+ },
+ {
+ "durationMs": 5184000000
+ },
+ {
+ "durationMs": 7776000000
+ }
+ ]
+ }
+ }
+ ],
+ "style": "pills",
+ "queryType": 0,
+ "resourceType": "microsoft.operationalinsights/workspaces"
+ },
+ "name": "parameters - 5"
+ },
+ {
+ "type": 3,
+ "content": {
+ "version": "KqlItem/1.0",
+ "query": "search *\r\n| where TimeGenerated {TimeRange}\r\n | summarize AggregatedValue = count() by Type | order by AggregatedValue desc\r\n| render piechart ",
+ "size": 1,
+ "showAnalytics": true,
+ "title": "Data Type Distribution",
+ "exportToExcelOptions": "visible",
+ "queryType": 0,
+ "resourceType": "microsoft.operationalinsights/workspaces",
+ "crossComponentResources": [
+ "/subscriptions/5c038d14-3833-463f-a492-de956f63f12a/resourceGroups/Aul-RG/providers/Microsoft.OperationalInsights/workspaces/AUL-Test"
+ ]
+ },
+ "customWidth": "50",
+ "showPin": true,
+ "name": "query - 0",
+ "styleSettings": {
+ "showBorder": true
+ }
+ },
+ {
+ "type": 3,
+ "content": {
+ "version": "KqlItem/1.0",
+ "query": "search * | summarize Count = count() by Type",
+ "size": 1,
+ "showAnalytics": true,
+ "timeContext": {
+ "durationMs": 0
+ },
+ "timeContextFromParameter": "TimeRange",
+ "exportToExcelOptions": "visible",
+ "queryType": 0,
+ "resourceType": "microsoft.operationalinsights/workspaces",
+ "crossComponentResources": [
+ "/subscriptions/5c038d14-3833-463f-a492-de956f63f12a/resourceGroups/Aul-RG/providers/Microsoft.OperationalInsights/workspaces/AUL-Test"
+ ],
+ "gridSettings": {
+ "formatters": [
+ {
+ "columnMatch": "Type",
+ "formatter": 0,
+ "formatOptions": {
+ "showIcon": true
+ }
+ },
+ {
+ "columnMatch": "Count",
+ "formatter": 4,
+ "formatOptions": {
+ "showIcon": true,
+ "aggregation": "Count"
+ },
+ "numberFormat": {
+ "unit": 17,
+ "options": {
+ "style": "decimal"
+ }
+ }
+ }
+ ],
+ "labelSettings": [
+ {
+ "columnId": "Type",
+ "label": "Type"
+ },
+ {
+ "columnId": "Count",
+ "label": "Count"
+ }
+ ]
+ }
+ },
+ "customWidth": "50",
+ "name": "query - 1",
+ "styleSettings": {
+ "showBorder": true
+ }
+ },
+ {
+ "type": 3,
+ "content": {
+ "version": "KqlItem/1.0",
+ "query": "search *\r\n| summarize AggregatedValue = count() by Type, bin(TimeGenerated, 1h)\r\n| sort by TimeGenerated desc\r\n| render linechart\r\n",
+ "size": 1,
+ "showAnalytics": true,
+ "title": "Data Types Over Time",
+ "timeContext": {
+ "durationMs": 0
+ },
+ "timeContextFromParameter": "TimeRange",
+ "exportToExcelOptions": "visible",
+ "queryType": 0,
+ "resourceType": "microsoft.operationalinsights/workspaces",
+ "crossComponentResources": [
+ "/subscriptions/5c038d14-3833-463f-a492-de956f63f12a/resourceGroups/Aul-RG/providers/Microsoft.OperationalInsights/workspaces/AUL-Test"
+ ]
+ },
+ "customWidth": "50",
+ "showPin": true,
+ "name": "query - 2",
+ "styleSettings": {
+ "showBorder": true
+ }
+ },
+ {
+ "type": 3,
+ "content": {
+ "version": "KqlItem/1.0",
+ "query": "search * | summarize Count = count() by Type",
+ "size": 1,
+ "showAnalytics": true,
+ "timeContext": {
+ "durationMs": 0
+ },
+ "timeContextFromParameter": "TimeRange",
+ "exportToExcelOptions": "visible",
+ "queryType": 0,
+ "resourceType": "microsoft.operationalinsights/workspaces",
+ "crossComponentResources": [
+ "/subscriptions/5c038d14-3833-463f-a492-de956f63f12a/resourceGroups/Aul-RG/providers/Microsoft.OperationalInsights/workspaces/AUL-Test"
+ ],
+ "gridSettings": {
+ "formatters": [
+ {
+ "columnMatch": "Type",
+ "formatter": 0,
+ "formatOptions": {
+ "showIcon": true
+ }
+ },
+ {
+ "columnMatch": "Count",
+ "formatter": 4,
+ "formatOptions": {
+ "showIcon": true
+ },
+ "numberFormat": {
+ "unit": 17,
+ "options": {
+ "style": "decimal"
+ }
+ }
+ }
+ ],
+ "labelSettings": [
+ {
+ "columnId": "Type",
+ "label": "Type"
+ },
+ {
+ "columnId": "Count",
+ "label": "Count"
+ }
+ ]
+ }
+ },
+ "customWidth": "50",
+ "name": "query - 3",
+ "styleSettings": {
+ "showBorder": true
+ }
+ },
+ {
+ "type": 3,
+ "content": {
+ "version": "KqlItem/1.0",
+ "query": "search *\r\n| summarize AggregatedValue = count() by Computer | summarize Count = count()",
+ "size": 1,
+ "showAnalytics": true,
+ "title": "Computers sending data",
+ "timeContext": {
+ "durationMs": 0
+ },
+ "timeContextFromParameter": "TimeRange",
+ "exportToExcelOptions": "visible",
+ "queryType": 0,
+ "resourceType": "microsoft.operationalinsights/workspaces",
+ "crossComponentResources": [
+ "/subscriptions/5c038d14-3833-463f-a492-de956f63f12a/resourceGroups/Aul-RG/providers/Microsoft.OperationalInsights/workspaces/AUL-Test"
+ ],
+ "visualization": "tiles",
+ "tileSettings": {
+ "titleContent": {
+ "formatter": 1,
+ "formatOptions": {
+ "showIcon": true
+ }
+ },
+ "leftContent": {
+ "columnMatch": "Count",
+ "formatter": 12,
+ "formatOptions": {
+ "showIcon": true
+ }
+ },
+ "showBorder": false
+ }
+ },
+ "customWidth": "50",
+ "showPin": true,
+ "name": "query - 5",
+ "styleSettings": {
+ "showBorder": true
+ }
+ }
+ ],
+ "defaultResourceIds": [
+ "/subscriptions/1f3fa6d2-851c-4a91-9087-1a050f3a9c38/resourcegroups/defaultresourcegroup-eus/providers/microsoft.operationalinsights/workspaces/defaultworkspace-1f3fa6d2-851c-4a91-9087-1a050f3a9c38-eus",
+ "/subscriptions/1f3fa6d2-851c-4a91-9087-1a050f3a9c38/resourcegroups/defaultresourcegroup-eus/providers/microsoft.operationalinsights/workspaces/defaultworkspace-1f3fa6d2-851c-4a91-9087-1a050f3a9c38-eus"
+ ],
+ "fallbackResourceIds": [
+ "/subscriptions/1f3fa6d2-851c-4a91-9087-1a050f3a9c38/resourcegroups/defaultresourcegroup-eus/providers/microsoft.operationalinsights/workspaces/defaultworkspace-1f3fa6d2-851c-4a91-9087-1a050f3a9c38-eus",
+ "/subscriptions/1f3fa6d2-851c-4a91-9087-1a050f3a9c38/resourcegroups/defaultresourcegroup-eus/providers/microsoft.operationalinsights/workspaces/defaultworkspace-1f3fa6d2-851c-4a91-9087-1a050f3a9c38-eus"
+ ],
+ "styleSettings": {},
+ "$schema": "https://github.com/Microsoft/Application-Insights-Workbooks/blob/master/schema/workbook.json"
+}
+```
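+Note how this sample applies the **TimeRange** parameter in two ways: the pie-chart query references it inline in KQL (`| where TimeGenerated {TimeRange}`), while the other items set `timeContextFromParameter`, which scopes the entire query to the parameter's value without editing the KQL. Reduced to the essentials, the second form looks like this (the query text is illustrative):
+
+```json
+{
+ "query": "search * | summarize Count = count() by Type",
+ "timeContext": {
+  "durationMs": 0
+ },
+ "timeContextFromParameter": "TimeRange"
+}
+```
+
+The inline `{TimeRange}` form can be useful when only part of a query should be time-scoped; `timeContextFromParameter` is simpler when the whole query should follow the parameter.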
+
+## Tabbed workspace
+
+```json
+{
+ "version": "Notebook/1.0",
+ "items": [
+ {
+ "type": 9,
+ "content": {
+ "version": "KqlParameterItem/1.0",
+ "crossComponentResources": [],
+ "parameters": [
+ {
+ "id": "81018bf4-b214-4d2f-bfac-9efb30ea7afb",
+ "version": "KqlParameterItem/1.0",
+ "name": "Subscription",
+ "type": 6,
+ "isRequired": true,
+ "value": "",
+ "typeSettings": {
+ "additionalResourceOptions": [],
+ "includeAll": false
+ }
+ },
+ {
+ "id": "12e24ac4-d5f3-42ec-9c32-118fd5438150",
+ "version": "KqlParameterItem/1.0",
+ "name": "TimeRange",
+ "type": 4,
+ "isRequired": true,
+ "value": {
+ "durationMs": 86400000
+ },
+ "typeSettings": {
+ "selectableValues": [
+ {
+ "durationMs": 300000
+ },
+ {
+ "durationMs": 900000
+ },
+ {
+ "durationMs": 1800000
+ },
+ {
+ "durationMs": 3600000
+ },
+ {
+ "durationMs": 14400000
+ },
+ {
+ "durationMs": 43200000
+ },
+ {
+ "durationMs": 86400000
+ },
+ {
+ "durationMs": 172800000
+ },
+ {
+ "durationMs": 259200000
+ },
+ {
+ "durationMs": 604800000
+ },
+ {
+ "durationMs": 1209600000
+ },
+ {
+ "durationMs": 2419200000
+ },
+ {
+ "durationMs": 2592000000
+ },
+ {
+ "durationMs": 5184000000
+ },
+ {
+ "durationMs": 7776000000
+ }
+ ]
+ }
+ }
+ ],
+ "style": "pills",
+ "queryType": 0,
+ "resourceType": "microsoft.operationalinsights/workspaces"
+ },
+ "name": "parameters - 6"
+ },
+ {
+ "type": 11,
+ "content": {
+ "version": "LinkItem/1.0",
+ "style": "tabs",
+ "links": [
+ {
+ "cellValue": "selectedTab",
+ "linkTarget": "parameter",
+ "linkLabel": "Data Type Distribution",
+ "subTarget": "DataType",
+ "style": "link"
+ },
+ {
+ "cellValue": "selectedTab",
+ "linkTarget": "parameter",
+ "linkLabel": "Data Types Over Time",
+ "subTarget": "OverTime",
+ "style": "link"
+ },
+ {
+ "cellValue": "selectedTab",
+ "linkTarget": "parameter",
+ "linkLabel": "Computers Sending Data",
+ "subTarget": "Computers",
+ "style": "link"
+ }
+ ]
+ },
+ "name": "links - 5"
+ },
+ {
+ "type": 3,
+ "content": {
+ "version": "KqlItem/1.0",
+ "query": "search * | summarize AggregatedValue = count() by Type | order by AggregatedValue desc\r\n| render piechart ",
+ "size": 1,
+ "showAnalytics": true,
+ "title": "Data Type Distribution",
+ "timeContext": {
+ "durationMs": 0
+ },
+ "timeContextFromParameter": "TimeRange",
+ "exportToExcelOptions": "visible",
+ "queryType": 0,
+ "resourceType": "microsoft.operationalinsights/workspaces",
+ "crossComponentResources": []
+ },
+ "conditionalVisibility": {
+ "parameterName": "selectedTab",
+ "comparison": "isEqualTo",
+ "value": "DataType"
+ },
+ "customWidth": "50",
+ "showPin": true,
+ "name": "query - 0",
+ "styleSettings": {
+ "showBorder": true
+ }
+ },
+ {
+ "type": 3,
+ "content": {
+ "version": "KqlItem/1.0",
+ "query": "search * | summarize Count = count() by Type",
+ "size": 1,
+ "showAnalytics": true,
+ "timeContext": {
+ "durationMs": 0
+ },
+ "timeContextFromParameter": "TimeRange",
+ "exportToExcelOptions": "visible",
+ "queryType": 0,
+ "resourceType": "microsoft.operationalinsights/workspaces",
+ "crossComponentResources": [
+ ],
+ "gridSettings": {
+ "formatters": [
+ {
+ "columnMatch": "Type",
+ "formatter": 0,
+ "formatOptions": {
+ "showIcon": true
+ }
+ },
+ {
+ "columnMatch": "Count",
+ "formatter": 4,
+ "formatOptions": {
+ "showIcon": true,
+ "aggregation": "Count"
+ },
+ "numberFormat": {
+ "unit": 17,
+ "options": {
+ "style": "decimal"
+ }
+ }
+ }
+ ],
+ "labelSettings": [
+ {
+ "columnId": "Type",
+ "label": "Type"
+ },
+ {
+ "columnId": "Count",
+ "label": "Count"
+ }
+ ]
+ }
+ },
+ "conditionalVisibility": {
+ "parameterName": "selectedTab",
+ "comparison": "isEqualTo",
+ "value": "DataType"
+ },
+ "customWidth": "50",
+ "name": "query - 1",
+ "styleSettings": {
+ "showBorder": true
+ }
+ },
+ {
+ "type": 3,
+ "content": {
+ "version": "KqlItem/1.0",
+ "query": "search *\r\n| summarize AggregatedValue = count() by Type, bin(TimeGenerated, 1h)\r\n| sort by TimeGenerated desc\r\n| render linechart\r\n",
+ "size": 1,
+ "showAnalytics": true,
+ "title": "Data Types Over Time",
+ "timeContext": {
+ "durationMs": 0
+ },
+ "timeContextFromParameter": "TimeRange",
+ "exportToExcelOptions": "visible",
+ "queryType": 0,
+ "resourceType": "microsoft.operationalinsights/workspaces",
+ "crossComponentResources": [
+ ]
+ },
+ "conditionalVisibility": {
+ "parameterName": "selectedTab",
+ "comparison": "isEqualTo",
+ "value": "OverTime"
+ },
+ "customWidth": "50",
+ "showPin": true,
+ "name": "query - 2",
+ "styleSettings": {
+ "showBorder": true
+ }
+ },
+ {
+ "type": 3,
+ "content": {
+ "version": "KqlItem/1.0",
+ "query": "search * | summarize Count = count() by Type",
+ "size": 1,
+ "showAnalytics": true,
+ "timeContext": {
+ "durationMs": 0
+ },
+ "timeContextFromParameter": "TimeRange",
+ "exportToExcelOptions": "visible",
+ "queryType": 0,
+ "resourceType": "microsoft.operationalinsights/workspaces",
+ "crossComponentResources": [
+ ],
+ "gridSettings": {
+ "formatters": [
+ {
+ "columnMatch": "Type",
+ "formatter": 0,
+ "formatOptions": {
+ "showIcon": true
+ }
+ },
+ {
+ "columnMatch": "Count",
+ "formatter": 4,
+ "formatOptions": {
+ "showIcon": true
+ },
+ "numberFormat": {
+ "unit": 17,
+ "options": {
+ "style": "decimal"
+ }
+ }
+ }
+ ],
+ "labelSettings": [
+ {
+ "columnId": "Type",
+ "label": "Type"
+ },
+ {
+ "columnId": "Count",
+ "label": "Count"
+ }
+ ]
+ }
+ },
+ "conditionalVisibility": {
+ "parameterName": "selectedTab",
+ "comparison": "isEqualTo",
+ "value": "OverTime"
+ },
+ "customWidth": "50",
+ "name": "query - 3",
+ "styleSettings": {
+ "showBorder": true
+ }
+ },
+ {
+ "type": 3,
+ "content": {
+ "version": "KqlItem/1.0",
+ "query": "search *\r\n| summarize AggregatedValue = count() by Computer | summarize Count = count()",
+ "size": 1,
+ "showAnalytics": true,
+ "title": "Computers sending data",
+ "timeContext": {
+ "durationMs": 0
+ },
+ "timeContextFromParameter": "TimeRange",
+ "exportToExcelOptions": "visible",
+ "queryType": 0,
+ "resourceType": "microsoft.operationalinsights/workspaces",
+ "crossComponentResources": [
+ ],
+ "visualization": "tiles",
+ "tileSettings": {
+ "titleContent": {
+ "formatter": 1,
+ "formatOptions": {
+ "showIcon": true
+ }
+ },
+ "leftContent": {
+ "columnMatch": "Count",
+ "formatter": 12,
+ "formatOptions": {
+ "showIcon": true
+ }
+ },
+ "showBorder": false
+ }
+ },
+ "conditionalVisibility": {
+ "parameterName": "selectedTab",
+ "comparison": "isEqualTo",
+ "value": "Computers"
+ },
+ "customWidth": "50",
+ "showPin": true,
+ "name": "query - 5",
+ "styleSettings": {
+ "showBorder": true
+ }
+ }
+ ],
+ "defaultResourceIds": [
+ ],
+ "fallbackResourceIds": [
+ ],
+ "styleSettings": {},
+ "$schema": "https://github.com/Microsoft/Application-Insights-Workbooks/blob/master/schema/workbook.json"
+}
+```
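+The tabbed layout above is driven by two cooperating pieces of JSON: the `LinkItem` writes the selected tab's `subTarget` into the `selectedTab` parameter, and each query item declares a `conditionalVisibility` rule that shows it only when `selectedTab` matches. A single tab link from the sample, reduced to its essentials:
+
+```json
+{
+ "cellValue": "selectedTab",
+ "linkTarget": "parameter",
+ "linkLabel": "Data Type Distribution",
+ "subTarget": "DataType",
+ "style": "link"
+}
+```
+
+Selecting this tab sets `selectedTab` to `DataType`, so the items whose `conditionalVisibility` compares `selectedTab` to `DataType` become visible, while the items belonging to the other tabs stay hidden.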
+
azure-monitor Workbooks Visualizations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/visualize/workbooks-visualizations.md
Title: Workbook visualizations
-description: Learn about the types of visualizations you can use to create rich visual reports with Azure workbooks.
+description: Learn about the types of visualizations you can use to create rich visual reports with Azure Workbooks.
# Workbook visualizations
-Workbooks provide a rich set of capabilities for visualizing Azure Monitor data. The exact set of capabilities depends on the data sources and result sets, but authors can expect them to converge over time. These controls allow authors to present their analysis in rich, interactive reports.
+Workbooks provide a rich set of capabilities for visualizing Azure Monitor data. The exact set of capabilities depends on the data sources and result sets, but you can expect them to converge over time. These controls allow you to present your analysis in rich interactive reports.
Workbooks support these kinds of visual components:

* [Text parameters](#text-parameters)
* Using queries:
* [Charts](#charts)
* [Grids](#grids)
* [Tiles](#tiles)
* [Trees](#trees)
- * [Honey comb](#honey-comb)
+ * [Honeycomb](#honeycomb)
* [Graphs](#graphs)
* [Maps](#maps)
* [Text visualization](#text-visualizations)

> [!NOTE]
-> Each visualization and data source may have its own [limits](workbooks-limits.md).
+> Each visualization and data source might have its own [limits](workbooks-limits.md).
## Examples

### [Text parameters](workbooks-text.md)
### [Charts](workbooks-chart-visualizations.md)
### [Grids](workbooks-grid-visualizations.md)
### [Tiles](workbooks-tile-visualizations.md)
### [Trees](workbooks-tree-visualizations.md)
-### [Honey comb](workbooks-honey-comb.md)
+### [Honeycomb](workbooks-honey-comb.md)
### [Graphs](workbooks-graph-visualizations.md)
### [Maps](workbooks-map-visualizations.md)
### [Text visualizations](workbooks-text-visualizations.md)

## Next steps
+[Get started with Azure Workbooks](workbooks-getting-started.md)
azure-monitor Monitor Virtual Machine Workloads https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/monitor-virtual-machine-workloads.md
A synthetic transaction connects to an application or service running on a machine.
## SQL Server
-Use [SQL Insights (preview)](../insights/sql-insights-overview.md) to monitor SQL Server running on your virtual machines.
+Use [SQL Insights (preview)](/azure/azure-sql/database/sql-insights-overview) to monitor SQL Server running on your virtual machines.
## Next steps
azure-monitor Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/whats-new.md
All references to unsupported versions of .NET and .NET Core have been scrubbed.
| Article | Description |
|:|:|
-| [Troubleshoot SQL Insights (preview)](insights/sql-insights-troubleshoot.md) | Added known issue for OS computer name. |
+| [Troubleshoot SQL Insights (preview)](/azure/azure-sql/database/sql-insights-troubleshoot) | Added known issue for OS computer name. |
### Logs

| Article | Description |
|:|:|
azure-netapp-files Azure Netapp Files Performance Considerations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-performance-considerations.md
na Previously updated : 02/19/2021 Last updated : 07/26/2021

# Performance considerations for Azure NetApp Files
The [throughput limit](azure-netapp-files-service-levels.md) for a volume with a
The throughput limit is only one determinant of the actual performance that will be realized.
-Typical storage performance considerations, including read and write mix, the transfer size, random or sequential patterns, and many other factors will contribute to the total performance delivered.
+Typical storage performance considerations, including read and write mix, the transfer size, random or sequential patterns, and many other factors will contribute to the total performance delivered.
-The maximum empirical throughput that has been observed in testing is 4,500 MiB/s. At the Premium storage tier, an automatic QoS volume quota of 70.31 TiB will provision a throughput limit that is high enough to achieve this level of performance.
+Metrics are reported as aggregates of multiple data points collected during a five-minute interval. For more information about metrics aggregation, see [Azure Monitor Metrics aggregation and display explained](../azure-monitor/essentials/metrics-aggregation-explained.md).
+
+The maximum empirical throughput that has been observed in testing is 4,500 MiB/s. At the Premium storage tier, an automatic QoS volume quota of 70.31 TiB will provision a throughput limit that is high enough to achieve this level of performance.
In the case of automatic QoS volumes, if you are considering assigning volume quota amounts beyond 70.31 TiB, additional quota may be assigned to a volume for storing additional data. However, the added quota will not result in a further increase in actual throughput.
azure-netapp-files Create Active Directory Connections https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/create-active-directory-connections.md
na Previously updated : 05/24/2022 Last updated : 07/26/2022

# Create and manage Active Directory connections for Azure NetApp Files

Several features of Azure NetApp Files require that you have an Active Directory connection. For example, you need an Active Directory connection before you can create an [SMB volume](azure-netapp-files-create-volumes-smb.md), an [NFSv4.1 Kerberos volume](configure-kerberos-encryption.md), or a [dual-protocol volume](create-volumes-dual-protocol.md). This article shows you how to create and manage Active Directory connections for Azure NetApp Files.
-## Before you begin
-
-* You must have already set up a capacity pool. See [Create a capacity pool](azure-netapp-files-set-up-capacity-pool.md).
-* A subnet must be delegated to Azure NetApp Files. See [Delegate a subnet to Azure NetApp Files](azure-netapp-files-delegate-subnet.md).
- ## <a name="requirements-for-active-directory-connections"></a>Requirements and considerations for Active Directory connections
-* You can configure only one Active Directory (AD) connection per subscription and per region.
-
- Azure NetApp Files doesn't support multiple AD connections in a single *region*, even if the AD connections are in different NetApp accounts. However, you can have multiple AD connections in a single subscription if the AD connections are in different regions. If you need multiple AD connections in a single region, you can use separate subscriptions to do so.
+> [!IMPORTANT]
+> You must follow guidelines described in [Understand guidelines for Active Directory Domain Services site design and planning for Azure NetApp Files](understand-guidelines-active-directory-domain-service-site.md) for Active Directory Domain Services (AD DS) or Azure Active Directory Domain Services (AAD DS) used with Azure NetApp Files.
+> In addition, before creating the AD connection, review [Modify Active Directory connections for Azure NetApp Files](modify-active-directory-connections.md) to understand the impact of making changes to the AD connection configuration options after the AD connection has been created. Changes to the AD connection configuration options are disruptive to client access and some options cannot be changed at all.
- The AD connection is visible only through the NetApp account it's created in. However, you can enable the Shared AD feature to allow NetApp accounts that are under the same subscription and same region to use an AD server created in one of the NetApp accounts. See [Map multiple NetApp accounts in the same subscription and region to an AD connection](#shared_ad). When you enable this feature, the AD connection becomes visible in all NetApp accounts that are under the same subscription and same region.
+* An Azure NetApp Files account must be created in the region where the Azure NetApp Files volumes are deployed.
-* The admin account you use must have the capability to create machine accounts in the organizational unit (OU) path that you'll specify.
+* You can configure only one Active Directory (AD) connection per subscription per region.
-* The admin account you use must have the capability to create machine accounts in the organizational unit (OU) path that you'll specify. In some cases, `msDS-SupportedEncryptionTypes` write permission is required to set account attributes within AD.
+ Azure NetApp Files doesn't support multiple AD connections in a single region, even if the AD connections are created in different NetApp accounts. However, you can have multiple AD connections in a single subscription if the AD connections are in different regions. If you need multiple AD connections in a single region, you can use separate subscriptions to do so.
+ The AD connection is visible only through the NetApp account it's created in. However, you can enable the Shared AD feature to allow NetApp accounts that are under the same subscription and same region to use the same AD connection. See [Map multiple NetApp accounts in the same subscription and region to an AD connection](#shared_ad).
-* Group Managed Service Accounts (GMSA) can't be used with the Active Directory connection user account.
+* The Azure NetApp Files AD connection admin account must have the following properties:
+ * It must be an AD DS domain user account in the same domain where the Azure NetApp Files machine accounts are created.
+ * It must have the permission to create machine accounts (for example, AD domain join) in the AD DS organizational unit path specified in the **Organizational unit path option** of the AD connection.
+ * It cannot be a [Group Managed Service Account](/windows-server/security/group-managed-service-accounts/group-managed-service-accounts-overview).
-* If you change the password of the Active Directory user account that is used in Azure NetApp Files, be sure to update the password configured in the [Active Directory Connections](#create-an-active-directory-connection). Otherwise, you won't be able to create new volumes, and your access to existing volumes might also be affected depending on the setup.
+* The AD connection admin account supports DES, Kerberos AES-128, and Kerberos AES-256 encryption types for authentication with AD DS for Azure NetApp Files machine account creation (for example, AD domain join operations).
-* Before you can remove an Active Directory connection from your NetApp account, you need to first remove all volumes associated with it.
+* To enable the AES encryption on the Azure NetApp Files AD connection admin account, you must use an AD domain user account that is a member of one of the following AD DS groups:
-* Proper ports must be open on the applicable Windows Active Directory (AD) server.
- The required ports are as follows:
+ * Domain Admins
+ * Enterprise Admins
+ * Administrators
+ * Account Operators
+ * Azure AD DC Administrators _(Azure AD DS Only)_
- | Service | Port | Protocol |
- |--|--||
- | AD Web Services | 9389 | TCP |
- | DNS | 53 | TCP |
- | DNS | 53 | UDP |
- | ICMPv4 | N/A | Echo Reply |
- | Kerberos | 464 | TCP |
- | Kerberos | 464 | UDP |
- | Kerberos | 88 | TCP |
- | Kerberos | 88 | UDP |
- | LDAP | 389 | TCP |
- | LDAP | 389 | UDP |
- | LDAP | 3268 | TCP |
- | NetBIOS name | 138 | UDP |
- | SAM/LSA | 445 | TCP |
- | SAM/LSA | 445 | UDP |
- | w32time | 123 | UDP |
+ Alternatively, an AD domain user account with `msDS-SupportedEncryptionTypes` write permission on the AD connection admin account can also be used to set the Kerberos encryption type property on the AD connection admin account.
-* The site topology for the targeted Active Directory Domain Services must adhere to the guidelines, in particular the Azure VNet where Azure NetApp Files is deployed.
+ >[!NOTE]
+ >It's not recommended or required to add the Azure NetApp Files AD admin account to the AD domain groups listed above. Nor is it recommended or required to grant `msDS-SupportedEncryptionTypes` write permission to the AD admin account.
- The address space for the virtual network where Azure NetApp Files is deployed must be added to a new or existing Active Directory site (where a domain controller reachable by Azure NetApp Files is).
+ If you set both AES-128 and AES-256 Kerberos encryption on the admin account of the AD connection, the highest level of encryption supported by your AD DS will be used. If AES encryption is not set, DES encryption will be used by default.
-* The specified DNS servers must be reachable from the [delegated subnet](./azure-netapp-files-delegate-subnet.md) of Azure NetApp Files.
+* To enable AES encryption support for the admin account in the AD connection, run the following Active Directory PowerShell commands:
- See [Guidelines for Azure NetApp Files network planning](./azure-netapp-files-network-topologies.md) for supported network topologies.
-
- The Network Security Groups (NSGs) and firewalls must have appropriately configured rules to allow for Active Directory and DNS traffic requests.
+ ```powershell
+ Get-ADUser -Identity <ANF AD connection account username> -Properties KerberosEncryptionType
+ Set-ADUser -Identity <ANF AD connection account username> -KerberosEncryptionType <encryption_type>
+ ```
-* The Azure NetApp Files delegated subnet must be able to reach all Active Directory Domain Services (AD DS) domain controllers in the domain, including all local and remote domain controllers. Otherwise, service interruption can occur.
+ `KerberosEncryptionType` is a multivalued parameter that accepts the `AES128` and `AES256` values.
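+ As an illustrative example (the account name `anf-admin` is hypothetical), enabling both AES types at once looks like this:
+
+ ```powershell
+ # Enable AES-128 and AES-256 Kerberos encryption on the AD connection admin account
+ Set-ADUser -Identity anf-admin -KerberosEncryptionType AES128,AES256
+ ```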
- If you have domain controllers that are unreachable by the Azure NetApp Files delegated subnet, you can specify an Active Directory site during creation of the Active Directory connection. Azure NetApp Files needs to communicate only with domain controllers in the site where the Azure NetApp Files delegated subnet address space is.
+ For more information, see the [Set-ADUser documentation](/powershell/module/activedirectory/set-aduser).
- See [Designing the site topology](/windows-server/identity/ad-ds/plan/designing-the-site-topology) about AD sites and services.
-* Avoid configuring overlapping subnets in the AD machine. Even if the site name is defined in the Active Directory connections, overlapping subnets might result in the wrong site being discovered, thus affecting the service. It might also affect new volume creation or AD modification.
-
-* You can enable AES encryption for AD Authentication by checking the **AES Encryption** box in the [Join Active Directory](#create-an-active-directory-connection) window. Azure NetApp Files supports DES, Kerberos AES 128, and Kerberos AES 256 encryption types (from the least secure to the most secure). If you enable AES encryption, the user credentials used to join Active Directory must have the highest corresponding account option enabled that matches the capabilities enabled for your Active Directory.
+## Create an Active Directory connection
- For example, if your Active Directory has only the AES-128 capability, you must enable the AES-128 account option for the user credentials. If your Active Directory has the AES-256 capability, you must enable the AES-256 account option (which also supports AES-128). If your Active Directory doesn't have any Kerberos encryption capability, Azure NetApp Files uses DES by default.
+1. From your NetApp account, select **Active Directory connections**, then select **Join**.
- You can enable the account options in the properties of the Active Directory Users and Computers Microsoft Management Console (MMC):
+ ![Screenshot showing the Active Directory connections menu. The join button is highlighted.](../media/azure-netapp-files/azure-netapp-files-active-directory-connections.png)
- ![Active Directory Users and Computers MMC](../media/azure-netapp-files/ad-users-computers-mmc.png)
+ >[!NOTE]
+ >Azure NetApp Files supports only one Active Directory connection within the same region and the same subscription.
-* Azure NetApp Files supports [LDAP signing](/troubleshoot/windows-server/identity/enable-ldap-signing-in-windows-server), which enables secure transmission of LDAP traffic between the Azure NetApp Files service and the targeted [Active Directory domain controllers](/windows-server/identity/ad-ds/get-started/virtual-dc/active-directory-domain-services-overview). If you're following the guidance of Microsoft Advisory [ADV190023](https://portal.msrc.microsoft.com/en-us/security-guidance/advisory/ADV190023) for LDAP signing, then you should enable the LDAP signing feature in Azure NetApp Files by checking the **LDAP Signing** box in the [Join Active Directory](#create-an-active-directory-connection) window.
+2. In the Join Active Directory window, provide the following information, based on the Domain Services you want to use:
- [LDAP channel binding](https://support.microsoft.com/help/4034879/how-to-add-the-ldapenforcechannelbinding-registry-entry) configuration alone has no effect on the Azure NetApp Files service. However, if you use both LDAP channel binding and secure LDAP (for example, LDAPS or `start_tls`), then the SMB volume creation will fail.
+ * **Primary DNS (required)**
+ This is the IP address of the primary DNS server that is required for Active Directory domain join operations, SMB authentication, Kerberos, and LDAP operations.
-* Azure NetApp Files will attempt to add an A/PTR record in DNS for AD integrated DNS servers. Add a reverse lookup zone if one is missing under Reverse Lookup Zones on AD server. For non-AD integrated DNS, you should add a DNS A/PTR record to enable Azure NetApp Files to function by using a "friendly name".
+ * **Secondary DNS**
+ This is the IP address of the secondary DNS server that is required for Active Directory domain join operations, SMB authentication, Kerberos, and LDAP operations.
+
+ >[!NOTE]
+ >It is recommended that you configure a Secondary DNS server. See [Understand guidelines for Active Directory Domain Services site design and planning for Azure NetApp Files](understand-guidelines-active-directory-domain-service-site.md). Ensure that your DNS server configuration meets the requirements for Azure NetApp Files. Otherwise, Azure NetApp Files service operations, SMB authentication, Kerberos, or LDAP operations might fail.
-* The following table describes the Time to Live (TTL) settings for the LDAP cache. You need to wait until the cache is refreshed before trying to access a file or directory through a client. Otherwise, an access or permission denied message appears on the client.
+ If you use Azure AD DS (AAD DS), you should use the IP addresses of the AAD DS domain controllers for Primary DNS and Secondary DNS respectively.
+ * **AD DNS Domain Name (required)**
+ This is the fully qualified domain name of the AD DS that will be used with Azure NetApp Files (for example, `contoso.com`).
+ * **AD Site Name**
+ This is the AD DS site name that will be used by Azure NetApp Files for domain controller discovery.
+
+ >[!NOTE]
+ > See [Understand guidelines for Active Directory Domain Services site design and planning for Azure NetApp Files](understand-guidelines-active-directory-domain-service-site.md). Ensure that your AD DS site design and configuration meets the requirements for Azure NetApp Files. Otherwise, Azure NetApp Files service operations, SMB authentication, Kerberos, or LDAP operations might fail.
- | Cache | Default Timeout |
- |-|-|
- | Group membership list | 24-hour TTL |
- | Unix groups | 24-hour TTL, 1-minute negative TTL |
- | Unix users | 24-hour TTL, 1-minute negative TTL |
+ * **SMB server (computer account) prefix (required)**
+ This is the naming prefix for new machine accounts created in AD DS for Azure NetApp Files SMB, dual protocol, and NFSv4.1 Kerberos volumes.
- Caches have a specific timeout period called *Time to Live*. After the timeout period, entries age out so that stale entries don't linger. The *negative TTL* value is where a lookup that has failed resides to help avoid performance issues due to LDAP queries for objects that might not exist.
+ For example, if the naming standard that your organization uses for file services is `NAS-01`, `NAS-02`, and so on, then you would use `NAS` for the prefix.
-* Azure NetApp Files doesn't support the use of Active Directory Domain Services Read-Only Domain Controllers (RODC). To ensure that Azure NetApp Files doesn't try to use an RODC domain controller, configure the **AD Site** field of the Azure NetApp Files Active Directory connection with an Active Directory site that doesn't contain any RODC domain controllers.
-
-## Decide which Domain Services to use
-
-Azure NetApp Files supports both [Active Directory Domain Services](/windows-server/identity/ad-ds/plan/understanding-active-directory-site-topology) (AD DS) and Azure Active Directory Domain Services (AADDS) for AD connections. Before you create an AD connection, you need to decide whether to use AD DS or AADDS.
-
-For more information, see [Compare self-managed Active Directory Domain Services, Azure Active Directory, and managed Azure Active Directory Domain Services](../active-directory-domain-services/compare-identity-solutions.md).
-
-### Active Directory Domain Services
-
-You can use your preferred [Active Directory Sites and Services](/windows-server/identity/ad-ds/plan/understanding-active-directory-site-topology) scope for Azure NetApp Files. This option enables reads and writes to Active Directory Domain Services (AD DS) domain controllers that are [accessible by Azure NetApp Files](azure-netapp-files-network-topologies.md). It also prevents the service from communicating with domain controllers that aren't in the specified Active Directory Sites and Services site.
-
-To find your site name when you use AD DS, you can contact the administrative group in your organization that is responsible for Active Directory Domain Services. The example below shows the Active Directory Sites and Services plugin where the site name is displayed:
-
-![Active Directory Sites and Services](../media/azure-netapp-files/azure-netapp-files-active-directory-sites-services.png)
-
-When you configure an AD connection for Azure NetApp Files, you specify the site name in scope for the **AD Site Name** field.
-
-### Azure Active Directory Domain Services
-
-For Azure Active Directory Domain Services (AADDS) configuration and guidance, see [Azure AD Domain Services documentation](../active-directory-domain-services/index.yml).
-
-Additional AADDS considerations apply for Azure NetApp Files:
-
-* Ensure the VNet or subnet where AADDS is deployed is in the same Azure region as the Azure NetApp Files deployment.
-* If you use another VNet in the region where Azure NetApp Files is deployed, you should create a peering between the two VNets.
-* Azure NetApp Files supports `user` and `resource forest` types.
-* For synchronization type, you can select `All` or `Scoped`.
- If you select `Scoped`, ensure the correct Azure AD group is selected for accessing SMB shares. If you're uncertain, you can use the `All` synchronization type.
-* If you use AADDS with a dual-protocol volume, you must be in a custom OU in order to apply POSIX attributes. See [Manage LDAP POSIX Attributes](create-volumes-dual-protocol.md#manage-ldap-posix-attributes) for details.
-
-When you create an Active Directory connection, note the following specifics for AADDS:
+ Azure NetApp Files will create additional machine accounts in AD DS as needed.
+
+ >[!IMPORTANT]
+ >Renaming the SMB server prefix after you create the Active Directory connection is disruptive. You will need to re-mount existing SMB shares after renaming the SMB server prefix.
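The prefix-to-name derivation described above can be sketched as follows. The two-digit suffix scheme and helper name are assumptions for illustration — the actual machine account naming is chosen internally by the Azure NetApp Files service — but the 15-character NetBIOS computer-name limit is a standard Windows constraint worth checking your prefix against.

```python
# Sketch: derive candidate machine account names from an SMB server prefix.
# The numeric-suffix scheme shown here is an assumption for illustration only;
# the actual suffix format is determined by the Azure NetApp Files service.

NETBIOS_NAME_LIMIT = 15  # NetBIOS computer names are limited to 15 characters

def machine_account_name(prefix: str, index: int) -> str:
    """Build a candidate machine account name such as NAS-01, NAS-02, ..."""
    name = f"{prefix}-{index:02d}"
    if len(name) > NETBIOS_NAME_LIMIT:
        raise ValueError(f"{name!r} exceeds the {NETBIOS_NAME_LIMIT}-character NetBIOS limit")
    return name

print(machine_account_name("NAS", 1))  # NAS-01
```

Keeping the prefix short leaves room for the suffix that the service appends while staying under the NetBIOS limit.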
-* You can find information for **Primary DNS**, **Secondary DNS**, and **AD DNS Domain Name** in the AADDS menu.
-For DNS servers, two IP addresses will be used for configuring the Active Directory connection.
-* The **organizational unit path** is `OU=AADDC Computers`.
-This setting is configured in the **Active Directory Connections** under **NetApp Account**:
+ * **Organizational unit path**
+ This is the LDAP path for the organizational unit (OU) where SMB server machine accounts will be created. That is, `OU=second level, OU=first level`. For example, if you want to use an OU called `ANF` created at the root of the domain, the value would be `OU=ANF`.
- ![Organizational unit path](../media/azure-netapp-files/azure-netapp-files-org-unit-path.png)
+ If no value is provided, Azure NetApp Files will use the `CN=Computers` container.
-* **Username** credentials can be any user that is a member of the Azure AD group **Azure AD DC Administrators**.
+ If you're using Azure NetApp Files with Azure Active Directory Domain Services (AAD DS), the organizational unit path is `OU=AADDC Computers`.
+ ![Screenshot of the Join Active Directory input fields.](../media/azure-netapp-files/azure-netapp-files-join-active-directory.png)
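As a rough sketch, the organizational unit path value can be composed from the OU hierarchy, deepest OU first. The helper below is hypothetical; a space after each comma is optional in LDAP distinguished names.

```python
# Sketch: build the organizational unit path string for the AD connection,
# given OU names ordered from the domain root outward. Helper name is hypothetical.

def ou_path(*ou_names_from_root: str) -> str:
    """Return an OU path such as 'OU=second level,OU=first level' (deepest OU first)."""
    if not ou_names_from_root:
        return ""  # empty value -> Azure NetApp Files falls back to CN=Computers
    return ",".join(f"OU={name}" for name in reversed(ou_names_from_root))

print(ou_path("ANF"))                          # OU=ANF
print(ou_path("first level", "second level"))  # OU=second level,OU=first level
```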
-## Create an Active Directory connection
+ * <a name="aes-encryption"></a>**AES Encryption**
+ This option enables AES encryption authentication support for the admin account of the AD connection.
-1. From your NetApp account, select **Active Directory connections**, then select **Join**.
+ ![Screenshot of the AES description field which is a checkbox.](../media/azure-netapp-files/active-directory-aes-encryption.png)
+
+ See [Requirements for Active Directory connections](#requirements-for-active-directory-connections) for requirements.
+
+ * <a name="ldap-signing"></a>**LDAP Signing**
- Azure NetApp Files supports only one Active Directory connection within the same region and the same subscription. If Active Directory is already configured by another NetApp account in the same subscription and region, you can't configure and join a different Active Directory from your NetApp account. However, you can enable the Shared AD feature to allow an Active Directory configuration to be shared by multiple NetApp accounts within the same subscription and the same region. See [Map multiple NetApp accounts in the same subscription and region to an AD connection](#shared_ad).
+ This option enables LDAP signing. This functionality enables integrity verification for Simple Authentication and Security Layer (SASL) LDAP binds from Azure NetApp Files and the user-specified [Active Directory Domain Services domain controllers](/windows/win32/ad/active-directory-domain-services).
+
+ Azure NetApp Files supports LDAP Channel Binding if both LDAP Signing and LDAP over TLS settings options are enabled in the Active Directory Connection. For more information, see [ADV190023 | Microsoft Guidance for Enabling LDAP Channel Binding and LDAP Signing](https://portal.msrc.microsoft.com/en-us/security-guidance/advisory/ADV190023).
- ![Active Directory Connections](../media/azure-netapp-files/azure-netapp-files-active-directory-connections.png)
+ ![Screenshot of the LDAP signing checkbox.](../media/azure-netapp-files/active-directory-ldap-signing.png)
-2. In the Join Active Directory window, provide the following information, based on the Domain Services you want to use:
+ * **Allow local NFS users with LDAP**
+ This option enables local NFS client users to access NFS volumes. Setting this option disables extended groups for NFS volumes. It also limits the number of groups to 16. For more information, see [Allow local NFS users with LDAP to access a dual-protocol volume](create-volumes-dual-protocol.md#allow-local-nfs-users-with-ldap-to-access-a-dual-protocol-volume).
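The 16-group cap noted above mirrors the classic AUTH_SYS limit on supplemental groups in NFS requests. The sketch below is purely illustrative of that cap; the keep-the-first-16 trimming policy shown is an assumption, not the documented service behavior.

```python
# Sketch: illustrate the 16-group limit that applies when
# "Allow local NFS users with LDAP" is enabled (extended groups are disabled).
# The trimming policy shown (keep the first 16) is an assumption for illustration.

NFS_AUTH_SYS_GROUP_LIMIT = 16

def effective_groups(group_ids: list) -> list:
    """Return the group IDs actually usable under the 16-group limit."""
    return group_ids[:NFS_AUTH_SYS_GROUP_LIMIT]

gids = list(range(1000, 1020))      # a user who belongs to 20 groups
print(len(effective_groups(gids)))  # 16
```

Users who belong to more than 16 groups may therefore see permission denials on files owned by groups beyond the cap.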
- For information specific to the Domain Services you use, see [Decide which Domain Services to use](#decide-which-domain-services-to-use).
+ * **LDAP over TLS**
- * **Primary DNS**
- This is the DNS that is required for the Active Directory domain join and SMB authentication operations.
- * **Secondary DNS**
- This is the secondary DNS server for ensuring redundant name services.
- * **AD DNS Domain Name**
- This is the domain name of your Active Directory Domain Services that you want to join.
- * **AD Site Name**
- This is the site name that the domain controller discovery will be limited to. This should match the site name in Active Directory Sites and Services.
+ This option enables LDAP over TLS for secure communication between an Azure NetApp Files volume and the Active Directory LDAP server. You can enable LDAP over TLS for NFS, SMB, and dual-protocol volumes of Azure NetApp Files.
- > [!IMPORTANT]
- > Without an AD Site Name specified, service disruption may occur. Without an AD Site Name specified, the Azure NetApp Files service may attempt to authenticate with a domain controller beyond what your network topology allows and result in a service disruption. See [Understanding Active Directory Site Topology | Microsoft Docs](/windows-server/identity/ad-ds/plan/understanding-active-directory-site-topology) for more information.
-
- * **SMB server (computer account) prefix**
- This is the naming prefix for the machine account in Active Directory that Azure NetApp Files will use for creation of new accounts.
-
- For example, if the naming standard that your organization uses for file servers is NAS-01, NAS-02..., NAS-045, then you would enter "NAS" for the prefix.
-
- The service will create additional machine accounts in Active Directory as needed.
+ >[!NOTE]
+ >LDAP over TLS must not be enabled if you're using Azure Active Directory Domain Services (AAD DS). AAD DS uses LDAPS (port 636) to secure LDAP traffic instead of LDAP over TLS (port 389).
+
+ For more information, see [Enable Active Directory Domain Services (AD DS) LDAP authentication for NFS volumes](configure-ldap-over-tls.md).
- > [!IMPORTANT]
- > Renaming the SMB server prefix after you create the Active Directory connection is disruptive. You will need to re-mount existing SMB shares after renaming the SMB server prefix.
+ * **Server root CA Certificate**
+
+ This option uploads the CA certificate used with LDAP over TLS.
+
+ For more information, see [Enable Active Directory Domain Services (AD DS) LDAP authentication for NFS volumes](configure-ldap-over-tls.md).
- * **Organizational unit path**
- This is the LDAP path for the organizational unit (OU) where SMB server machine accounts will be created. That is, OU=second level, OU=first level.
+ * **LDAP Search Scope**, **User DN**, **Group DN**, and **Group Membership Filter**
- If you're using Azure NetApp Files with Azure Active Directory Domain Services, the organizational unit path is `OU=AADDC Computers` when you configure Active Directory for your NetApp account.
+ The **LDAP search scope** option optimizes Azure NetApp Files storage LDAP queries for use with large AD DS topologies and LDAP with extended groups or Unix security style with an Azure NetApp Files dual-protocol volume.
+
+ The **User DN** and **Group DN** options allow you to set the search base in AD DS LDAP.
+
+ The **Group Membership Filter** option allows you to create a custom search filter for users who are members of specific AD DS groups.
- ![Join Active Directory](../media/azure-netapp-files/azure-netapp-files-join-active-directory.png)
+ ![Screenshot of the LDAP search scope field, showing a checked box.](../media/azure-netapp-files/ldap-search-scope-checked.png)
- * <a name="aes-encryption"></a>**AES Encryption**
- Select this checkbox if you want to enable AES encryption for AD authentication or if you require [encryption for SMB volumes](azure-netapp-files-create-volumes-smb.md#add-an-smb-volume).
-
- See [Requirements for Active Directory connections](#requirements-for-active-directory-connections) for requirements.
-
- ![Active Directory AES encryption](../media/azure-netapp-files/active-directory-aes-encryption.png)
+ See [Configure AD DS LDAP with extended groups for NFS volume access](configure-ldap-extended-groups.md#ldap-search-scope) for information about these options.
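A custom group membership filter like the one configured above typically takes standard LDAP filter form. The exact filter syntax that Azure NetApp Files submits internally isn't documented here; this sketch only shows the conventional AD form such a membership filter takes, with a hypothetical helper and example DN.

```python
# Sketch: compose an LDAP search filter that matches user objects which are
# members of a specific AD DS group. objectClass and memberOf are standard
# AD attributes; the helper name and example DN are illustrative.

def group_membership_filter(group_dn: str) -> str:
    """Return an LDAP filter matching users that are members of group_dn."""
    return f"(&(objectClass=user)(memberOf={group_dn}))"

print(group_membership_filter("CN=NFSUsers,OU=ANF,DC=contoso,DC=com"))
```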
- * <a name="ldap-signing"></a>**LDAP Signing**
- Select this checkbox to enable LDAP signing. This functionality enables secure LDAP lookups between the Azure NetApp Files service and the user-specified [Active Directory Domain Services domain controllers](/windows/win32/ad/active-directory-domain-services). For more information, see [ADV190023 | Microsoft Guidance for Enabling LDAP Channel Binding and LDAP Signing](https://portal.msrc.microsoft.com/en-us/security-guidance/advisory/ADV190023).
+ * <a name="backup-policy-users"></a> **Backup policy users**
+ This option grants additional security privileges to AD DS domain users or groups that require elevated backup privileges to support backup, restore, and migration workflows in Azure NetApp Files. The specified AD DS user accounts or groups will have elevated NTFS permissions at the file or folder level.
- ![Active Directory LDAP signing](../media/azure-netapp-files/active-directory-ldap-signing.png)
+ ![Screenshot of the Backup policy users field showing an empty text input field.](../media/azure-netapp-files/active-directory-backup-policy-users.png)
- * **LDAP over TLS**
- See [Enable Active Directory Domain Services (AD DS) LDAP authentication for NFS volumes](configure-ldap-over-tls.md) for information about this option.
+ The following privileges apply when you use the **Backup policy users** setting:
- * **LDAP Search Scope**, **User DN**, **Group DN**, and **Group Membership Filter**
- See [Configure AD DS LDAP with extended groups for NFS volume access](configure-ldap-extended-groups.md#ldap-search-scope) for information about these options.
+ | Privilege | Description |
+ |||
+ | `SeBackupPrivilege` | Back up files and directories, overriding any ACLs. |
+ | `SeRestorePrivilege` | Restore files and directories, overriding any ACLs. <br> Set any valid user or group SID as the file owner. |
+ | `SeChangeNotifyPrivilege` | Bypass traverse checking. <br> Users with this privilege aren't required to have traverse (`x`) permissions to traverse folders or symlinks. |
* **Security privilege users** <!-- SMB CA share feature -->
- You can grant security privilege (`SeSecurityPrivilege`) to AD users or groups that require elevated privilege to access the Azure NetApp Files volumes. The specified AD users or groups will be allowed to perform certain actions on Azure NetApp Files SMB shares that require security privilege not assigned by default to domain users.
+ This option grants security privilege (`SeSecurityPrivilege`) to AD DS domain users or groups that require elevated privileges to access Azure NetApp Files volumes. The specified AD DS users or groups will be allowed to perform certain actions on SMB shares that require security privilege not assigned by default to domain users.
+
+ ![Screenshot showing the Security privilege users box of Active Directory connections window.](../media/azure-netapp-files/security-privilege-users.png)
The following privilege applies when you use the **Security privilege users** setting:
| Privilege | Description |
|||
| `SeSecurityPrivilege` | Manage log operations. |
- For example, user accounts used for installing SQL Server in certain scenarios must (temporarily) be granted elevated security privilege. If you're using a non-administrator (domain) account to install SQL Server and the account doesn't have the security privilege assigned, you should add security privilege to the account.
+ This feature is used for installing SQL Server in certain scenarios where a non-administrator AD DS domain account must temporarily be granted elevated security privilege.
- > [!IMPORTANT]
- > Using the **Security privilege users** feature requires that you submit a waitlist request through the **[Azure NetApp Files SMB Continuous Availability Shares Public Preview waitlist submission page](https://aka.ms/anfsmbcasharespreviewsignup)**. Wait for an official confirmation email from the Azure NetApp Files team before using this feature.
- >
- > Using this feature is optional and supported only for SQL Server. The domain account used for installing SQL Server must already exist before you add it to the **Security privilege users** field. When you add the SQL Server installer's account to **Security privilege users**, the Azure NetApp Files service might validate the account by contacting the domain controller. The command might fail if it cannot contact the domain controller.
+ > [!IMPORTANT]
+ > Using the **Security privilege users** feature requires that you submit a waitlist request through the **[Azure NetApp Files SMB Continuous Availability Shares Public Preview waitlist submission page](https://aka.ms/anfsmbcasharespreviewsignup)**. Wait for an official confirmation email from the Azure NetApp Files team before using this feature.
+ >This feature is optional and supported only with SQL Server. The AD DS domain account used for installing SQL Server must already exist before you add it to the **Security privilege users** option. When you add the SQL Server installer account to the **Security privilege users** option, the Azure NetApp Files service might validate the account by contacting an AD DS domain controller. This action might fail if Azure NetApp Files cannot contact the AD DS domain controller.
+
For more information about `SeSecurityPrivilege` and SQL Server, see [SQL Server installation fails if the Setup account doesn't have certain user rights](/troubleshoot/sql/install/installation-fails-if-remove-user-right).
- ![Screenshot showing the Security privilege users box of Active Directory connections window.](../media/azure-netapp-files/security-privilege-users.png)
-
- * <a name="backup-policy-users"></a>**Backup policy users**
- You can grant additional security privileges to AD users or groups that require elevated backup privileges to access the Azure NetApp Files volumes. The specified AD user accounts or groups will have elevated NTFS permissions at the file or folder level. For example, you can specify a non-privileged service account used for backing up, restoring, or migrating data to an SMB file share in Azure NetApp Files.
-
- The following privileges apply when you use the **Backup policy users** setting:
-
- | Privilege | Description |
- |||
- | `SeBackupPrivilege` | Back up files and directories, overriding any ACLs. |
- | `SeRestorePrivilege` | Restore files and directories, overriding any ACLs. <br> Set any valid user or group SID as the file owner. |
- | `SeChangeNotifyPrivilege` | Bypass traverse checking. <br> Users with this privilege aren't required to have traverse (`x`) permissions to traverse folders or symlinks. |
-
- ![Active Directory backup policy users](../media/azure-netapp-files/active-directory-backup-policy-users.png)
- * <a name="administrators-privilege-users"></a>**Administrators privilege users**
- You can grant additional security privileges to AD users or groups that require even more elevated privileges to access the Azure NetApp Files volumes. The specified accounts will have further elevated permissions at the file or folder level.
+ This option grants additional security privileges to AD DS domain users or groups that require elevated privileges to access the Azure NetApp Files volumes. The specified accounts will have elevated permissions at the file or folder level.
+
+ ![Screenshot that shows the Administrators box of Active Directory connections window.](../media/azure-netapp-files/active-directory-administrators.png)
The following privileges apply when you use the **Administrators privilege users** setting:
| Privilege | Description |
|||
| `SeChangeNotifyPrivilege` | Bypass traverse checking. <br> Users with this privilege aren't required to have traverse (`x`) permissions to traverse folders or symlinks. |
| `SeTakeOwnershipPrivilege` | Take ownership of files or other objects. |
| `SeSecurityPrivilege` | Manage log operations. |
- | `SeChangeNotifyPrivilege` | Bypass traverse checking. <br> Users with this privilege aren't required to have traverse (`x`) permissions to traverse folders or symlinks. |
-
- ![Screenshot that shows the Administrators box of Active Directory connections window.](../media/azure-netapp-files/active-directory-administrators.png)
+ | `SeChangeNotifyPrivilege` | Bypass traverse checking. <br> Users with this privilege aren't required to have traverse (`x`) permissions to traverse folders or symlinks. | <!-- THIS option IS REMOVED -->
* Credentials, including your **username** and **password**
- ![Active Directory credentials](../media/azure-netapp-files/active-directory-credentials.png)
+ ![Screenshot that shows Active Directory credentials fields showing username, password and confirm password fields.](../media/azure-netapp-files/active-directory-credentials.png)
3. Select **Join**. The Active Directory connection you created appears.
- ![Created Active Directory connections](../media/azure-netapp-files/azure-netapp-files-active-directory-connections-created.png)
+ ![Screenshot of the Active Directory connections menu showing a successfully created connection.](../media/azure-netapp-files/azure-netapp-files-active-directory-connections-created.png)
## <a name="shared_ad"></a>Map multiple NetApp accounts in the same subscription and region to an AD connection
You can also use [Azure CLI commands](/cli/azure/feature) `az feature register`
## Next steps
+* [Understand guidelines for Active Directory Domain Services site design and planning for Azure NetApp Files](understand-guidelines-active-directory-domain-service-site.md)
* [Modify Active Directory connections](modify-active-directory-connections.md)
* [Create an SMB volume](azure-netapp-files-create-volumes-smb.md)
* [Create a dual-protocol volume](create-volumes-dual-protocol.md)
azure-netapp-files Faq Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/faq-integration.md
Previously updated : 06/02/2022
Last updated : 07/27/2022

# Integration FAQs for Azure NetApp Files
This article answers frequently asked questions (FAQs) about using other product
## Can I use Azure NetApp Files NFS or SMB volumes with Azure VMware Solution (AVS)?
-You can mount Azure NetApp Files NFS volumes on AVS Windows VMs or Linux VMs. You can map Azure NetApp Files SMB shares on AVS Windows VMs. For more information, see [Azure NetApp Files with Azure VMware Solution]( ../azure-vmware/netapp-files-with-azure-vmware-solution.md).
+Yes, Azure NetApp Files can be used to expand your AVS private cloud storage via [Azure NetApp Files datastores](../azure-vmware/attach-azure-netapp-files-to-azure-vmware-solution-hosts.md). In addition, you can mount Azure NetApp Files NFS volumes on AVS Windows VMs or Linux VMs. You can map Azure NetApp Files SMB shares on AVS Windows VMs. For more information, see [Azure NetApp Files with Azure VMware Solution]( ../azure-vmware/netapp-files-with-azure-vmware-solution.md).
## What regions are supported for using Azure NetApp Files NFS or SMB volumes with Azure VMware Solution (AVS)?
azure-netapp-files Modify Active Directory Connections https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/modify-active-directory-connections.md
Previously updated : 03/15/2022
Last updated : 07/22/2022
Once you have [created an Active Directory connection](create-active-directory-c
**\*There is no impact on a modified entry only if the modifications are entered correctly. If you enter data incorrectly, users and applications will lose access.**

## Next Steps
+* [Understand guidelines for Active Directory Domain Services site design and planning for Azure NetApp Files](understand-guidelines-active-directory-domain-service-site.md)
* [Configure ADDS LDAP with extended groups for NFS](configure-ldap-extended-groups.md)
* [Configure ADDS LDAP over TLS](configure-ldap-over-tls.md)
-* [Create and manage Active Directory connections](create-active-directory-connections.md)
+* [Create and manage Active Directory connections](create-active-directory-connections.md)
azure-netapp-files Understand Guidelines Active Directory Domain Service Site https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/understand-guidelines-active-directory-domain-service-site.md
+
+ Title: Understand guidelines for Active Directory Domain Services site design and planning
+description: Proper Active Directory Domain Services (AD DS) design and planning are key to solution architectures that use Azure NetApp Files volumes.
+
+documentationcenter: ''
++
+editor: ''
+
+ms.assetid:
++
+ na
+ Last updated : 07/26/2022++
+# Understand guidelines for Active Directory Domain Services site design and planning for Azure NetApp Files
+
+Proper Active Directory Domain Services (AD DS) design and planning are key to solution architectures that use Azure NetApp Files volumes. Azure NetApp Files features such as [SMB volumes](azure-netapp-files-create-volumes-smb.md), [dual-protocol volumes](create-volumes-dual-protocol.md), and [NFSv4.1 Kerberos volumes](configure-kerberos-encryption.md) are designed to be used with AD DS.
+
+This article provides recommendations to help you develop an AD DS deployment strategy for Azure NetApp Files. Before reading this article, you need to have a good understanding about how AD DS works on a functional level.
+
+## <a name="ad-ds-requirements"></a> Identify AD DS requirements for Azure NetApp Files
+
+Before you deploy Azure NetApp Files volumes, you must identify the AD DS integration requirements for Azure NetApp Files to ensure that Azure NetApp Files is well connected to AD DS. _Incorrect or incomplete AD DS integration with Azure NetApp Files might cause client access interruptions or outages for SMB, dual-protocol, or Kerberos NFSv4.1 volumes_.
+
+### <a name="network-requirements"></a>Network requirements
+
+Azure NetApp Files SMB, dual-protocol, and Kerberos NFSv4.1 volumes require reliable and low-latency network connectivity (< 10ms RTT) to AD DS domain controllers. Poor network connectivity or high network latency between Azure NetApp Files and AD DS domain controllers can cause client access interruptions or client timeouts.
+
+Ensure that you meet the following requirements about network topology and configurations:
+
+* Ensure that a [supported network topology for Azure NetApp Files](azure-netapp-files-network-topologies.md) is used.
+* Ensure that AD DS domain controllers have network connectivity from the Azure NetApp Files delegated subnet hosting the Azure NetApp Files volumes.
+* Network Security Groups (NSGs) and AD DS domain controller firewalls must have appropriately configured rules to support Azure NetApp Files connectivity to AD DS and DNS.
+* Ensure that the latency is less than 10ms RTT between Azure NetApp Files and AD DS domain controllers.
+
+The required network ports are as follows:
+
+| Service | Port | Protocol |
+| -- | - | - |
+| AD Web Services | 9389 | TCP |
+| DNS* | 53 | TCP |
+| DNS* | 53 | UDP |
+| ICMPv4 | N/A | Echo Reply |
+| Kerberos | 464 | TCP |
+| Kerberos | 464 | UDP |
+| Kerberos | 88 | TCP |
+| Kerberos | 88 | UDP |
+| LDAP | 389 | TCP |
+| LDAP | 389 | UDP |
+| LDAP | 3268 | TCP |
+| NetBIOS name | 138 | UDP |
+| SAM/LSA | 445 | TCP |
+| SAM/LSA | 445 | UDP |
+| w32time | 123 | UDP |
+
+*DNS running on AD DS domain controller
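One way to sanity-check NSG and firewall rules against the TCP ports in the table above is a simple connect probe run from a VM with the same network line of sight as the delegated subnet. This is a generic sketch, not an Azure NetApp Files tool; the domain controller hostname shown is a placeholder.

```python
# Sketch: TCP connect probe for the TCP ports listed above.
# Run from a VM that shares network line of sight with the delegated subnet.
# The domain controller address in the commented loop is a placeholder.
import socket

TCP_PORTS = [9389, 53, 464, 88, 389, 3268, 445]  # ADWS, DNS, Kerberos, LDAP, GC, SAM/LSA

def port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# for port in TCP_PORTS:
#     print(port, port_open("dc1.contoso.com", port))
```

A connect probe can't validate the UDP entries in the table; use your firewall logs or a packet capture for those.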
+
+### DNS requirements
+
+Azure NetApp Files SMB, dual-protocol, and Kerberos NFSv4.1 volumes require reliable access to Domain Name System (DNS) services and up-to-date DNS records. Poor network connectivity between Azure NetApp Files and DNS servers can cause client access interruptions or client timeouts. Incomplete or incorrect DNS records for AD DS or Azure NetApp Files can cause client access interruptions or client timeouts.
+
+Azure NetApp Files supports the use of [Active Directory integrated DNS](/windows-server/identity/ad-ds/plan/active-directory-integrated-dns-zones) or standalone DNS servers.
+
+Ensure that you meet the following requirements about the DNS configurations:
+* If you're using standalone DNS servers:
+  * Ensure that DNS servers have network connectivity to the Azure NetApp Files delegated subnet hosting the Azure NetApp Files volumes.
+  * Ensure that network ports UDP 53 and TCP 53 are not blocked by firewalls or NSGs.
+* Ensure that [the SRV records registered by the AD DS Net Logon service](https://social.technet.microsoft.com/wiki/contents/articles/7608.srv-records-registered-by-net-logon.aspx) have been created on the DNS servers.
+* Ensure that the PTR records for the SRV records registered by the AD DS Net Logon service have been created on the DNS servers.
+* Azure NetApp Files supports standard and secure dynamic DNS updates. If you require secure dynamic DNS updates, ensure that secure updates are configured on the DNS servers.
+* If dynamic DNS updates are not used, you need to manually create A record and PTR records for Azure NetApp Files SMB volumes.
+* For complex or large AD DS topologies, [DNS Policies or DNS subnet prioritization may be required to support LDAP-enabled NFS volumes](#ad-ds-ldap-discover).
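The Net Logon SRV records referenced above follow a predictable naming pattern, so you can enumerate the names and verify each one with `nslookup` or `dig`. The sketch below lists the core record names for a domain; the list is illustrative, not exhaustive.

```python
# Sketch: enumerate core DNS SRV record names registered by the AD DS
# Net Logon service for a domain, for verification with nslookup/dig.
# Illustrative subset only; Net Logon registers additional site-specific records.

def netlogon_srv_records(domain: str) -> list:
    return [
        f"_ldap._tcp.{domain}",
        f"_ldap._tcp.dc._msdcs.{domain}",
        f"_kerberos._tcp.{domain}",
        f"_kerberos._udp.{domain}",
        f"_kpasswd._tcp.{domain}",
        f"_gc._tcp.{domain}",
    ]

for name in netlogon_srv_records("contoso.com"):
    print(name)
```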
+
+### Time source requirements
+
+Azure NetApp Files uses **time.windows.com** as the time source. Ensure that the domain controllers used by Azure NetApp Files are configured to use time.windows.com or another accurate, stable root (stratum 1) time source. If there is more than a five-minute skew between Azure NetApp Files and the client or AD DS domain controllers, authentication will fail, and access to Azure NetApp Files volumes might also fail.
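The five-minute tolerance above mirrors Kerberos's default maximum clock skew. A minimal sketch of the check, with a hypothetical helper name:

```python
# Sketch: check whether two clocks are within the five-minute skew that
# Kerberos authentication tolerates by default. Helper name is hypothetical.
from datetime import datetime, timedelta, timezone

MAX_SKEW = timedelta(minutes=5)

def within_kerberos_skew(t1: datetime, t2: datetime, max_skew: timedelta = MAX_SKEW) -> bool:
    """Return True if the absolute difference between t1 and t2 is within max_skew."""
    return abs(t1 - t2) <= max_skew

now = datetime.now(timezone.utc)
print(within_kerberos_skew(now, now + timedelta(minutes=4)))  # True
print(within_kerberos_skew(now, now + timedelta(minutes=6)))  # False
```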
+
+## Decide which AD DS to use with Azure NetApp Files
+
+Azure NetApp Files supports both Active Directory Domain Services (AD DS) and Azure Active Directory Domain Services (AAD DS) for AD connections. Before you create an AD connection, you need to decide whether to use AD DS or AAD DS.
+
+For more information, see [Compare self-managed Active Directory Domain Services, Azure Active Directory, and managed Azure Active Directory Domain Services](../active-directory-domain-services/compare-identity-solutions.md).
+
+### Active Directory Domain Services considerations
+
+You should use Active Directory Domain Services (AD DS) in the following scenarios:
+
+* You have AD DS users hosted in an on-premises AD DS domain that need access to Azure NetApp Files resources.
+* You have applications hosted partially on-premises and partially in Azure that need access to Azure NetApp Files resources.
+* You don't need AAD DS integration with an Azure AD tenant in your subscription, or AAD DS is incompatible with your technical requirements.
+
+> [!NOTE]
+> Azure NetApp Files doesn't support the use of AD DS Read-only Domain Controllers (RODC).
+
+If you choose to use AD DS with Azure NetApp Files, follow the guidance in [Extend AD DS into Azure Architecture Guide](https://docs.microsoft.com/azure/architecture/reference-architectures/identity/adds-extend-domain) and ensure that you meet the Azure NetApp Files [network](#network-requirements) and [DNS requirements](#ad-ds-requirements) for AD DS.
+
+### Azure Active Directory Domain Services considerations
+
+[Azure Active Directory Domain Services (AAD DS)](../active-directory-domain-services/overview.md) is a managed AD DS domain that is synchronized with your Azure AD tenant. The main benefits to using Azure AD DS are as follows:
+
+* AAD DS is a standalone domain, so there is no need to set up network connectivity between on-premises and Azure.
+* AAD DS provides a simplified deployment and management experience.
+
+You should use AAD DS in the following scenarios:
+
+* There's no need to extend AD DS from on-premises into Azure to provide access to Azure NetApp Files resources.
+* Your security policies don't allow the extension of on-premises AD DS into Azure.
+* You don't have deep AD DS expertise. AAD DS can improve the likelihood of good outcomes with Azure NetApp Files.
+
+If you choose to use AAD DS with Azure NetApp Files, see [Azure AD DS documentation](../active-directory-domain-services/overview.md) for [architecture](../active-directory-domain-services/scenarios.md), deployment, and management guidance. Ensure that you also meet the Azure NetApp Files [Network](#network-requirements) and [DNS requirements](#ad-ds-requirements).
+
+## Design AD DS site topology for use with Azure NetApp Files
+
+A proper design for the AD DS site topology is critical for any solution architecture that involves Azure NetApp Files SMB, dual-protocol, or NFSv4.1 Kerberos volumes.
+
+Incorrect AD DS site topology or configuration can result in the following behaviors:
+* Failure to create Azure NetApp Files [SMB](azure-netapp-files-create-volumes-smb.md), [dual-protocol](create-volumes-dual-protocol.md), or [NFSv4.1 Kerberos](configure-kerberos-encryption.md) volumes
+* Failure to [modify ANF AD connection configuration](modify-active-directory-connections.md)
+* Poor LDAP client query performance
+* Authentication problems
+
+An AD DS site topology for Azure NetApp Files is a logical representation of the [Azure NetApp Files network](#network-requirements). Designing an AD DS site topology for Azure NetApp Files involves planning for domain controller placement, designing sites, DNS infrastructure, and network subnets to ensure good connectivity among the Azure NetApp Files service, Azure NetApp Files storage clients, and AD DS domain controllers.
+
+### How Azure NetApp Files uses AD DS site information
+
+Azure NetApp Files uses the **AD Site Name** configured in the [Active Directory connections](create-active-directory-connections.md#create-an-active-directory-connection) to discover which domain controllers are present to support authentication, domain join, LDAP queries, and Kerberos ticket operations.
+
+#### AD DS domain controller discovery
+
+Azure NetApp Files initiates domain controller discovery every four hours. Azure NetApp Files queries the site-specific service (SRV) resource record to determine which domain controllers are in the AD DS site specified in the **AD Site Name** field of the Azure NetApp Files AD connection. Server discovery then checks the status of the associated services hosted on the domain controllers (such as Kerberos, LDAP, Net Logon, and LSA) and selects the optimal domain controller for authentication requests.
+
+> [!NOTE]
+> If you make changes to the domain controllers in the AD DS site that is used by Azure NetApp Files, wait at least four hours between deploying new AD DS domain controllers and retiring existing AD DS domain controllers. This wait time enables Azure NetApp Files to discover the new AD DS domain controllers.
+
+Ensure that stale DNS records associated with the retired AD DS domain controller are removed from DNS. Doing so ensures that Azure NetApp Files will not attempt to communicate with the retired domain controller.
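A small sketch of the stale-record cleanup described above (the record shapes and host names are hypothetical examples, not an Azure NetApp Files or DNS API):

```python
# Sketch: identify DNS records that still point at retired domain
# controllers so they can be removed from DNS. Record dicts and host
# names below are illustrative only.

def stale_records(dns_records, retired_dcs):
    """Return records whose target host is a retired domain controller."""
    retired = {h.lower() for h in retired_dcs}
    return [r for r in dns_records if r["host"].lower() in retired]

records = [
    {"name": "_ldap._tcp.anf.local", "type": "SRV", "host": "dc1.anf.local"},
    {"name": "_ldap._tcp.anf.local", "type": "SRV", "host": "dc3.anf.local"},
]
# dc1 has been retired; its SRV registration should be deleted.
print(stale_records(records, ["DC1.anf.local"]))
```

The case-insensitive comparison matters because DNS host names are case-insensitive, while Python string membership is not.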
+
+#### <a name="ad-ds-ldap-discover"></a> AD DS LDAP server discovery
+
+A separate discovery process for AD DS LDAP servers occurs when LDAP is enabled for an Azure NetApp Files NFS volume. When the LDAP client is created on Azure NetApp Files, Azure NetApp Files queries the AD DS domain service (SRV) resource record for a list of all AD DS LDAP servers in the domain, not just the LDAP servers assigned to the AD DS site specified in the AD connection.
+
+> [!IMPORTANT]
+> If Azure NetApp Files cannot reach a discovered AD DS LDAP server during the creation of the Azure NetApp Files LDAP client, the creation of the LDAP enabled volume will fail. In large or complex AD DS topologies, you might need to implement [DNS Policies](/windows-server/networking/dns/dns-top) or [DNS subnet prioritization](/previous-versions/windows/it-pro/windows-2000-server/cc961422(v=technet.10)?redirectedfrom=MSDN) to ensure that the AD DS LDAP servers assigned to the AD DS site specified in the AD connection are returned. Contact your Microsoft CSA for guidance on how to best configure your DNS to support LDAP-enabled NFS volumes.
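What DNS subnet prioritization achieves can be sketched client-side as a reordering: servers whose addresses fall inside the site's subnets come first. This is an illustration only (all addresses and subnets are made-up examples; real subnet prioritization is configured on the DNS server, not in client code):

```python
import ipaddress

# Sketch: order LDAP server addresses so those inside the AD DS site's
# subnets come first, mimicking the effect of DNS subnet prioritization.

def prioritize_by_site_subnet(server_ips, site_subnets):
    nets = [ipaddress.ip_network(s) for s in site_subnets]

    def in_site(ip):
        addr = ipaddress.ip_address(ip)
        return any(addr in net for net in nets)

    # sorted() is stable: in-site servers keep their relative order and
    # out-of-site servers are pushed to the back.
    return sorted(server_ips, key=lambda ip: not in_site(ip))

print(prioritize_by_site_subnet(
    ["10.1.0.5", "10.0.0.4", "10.0.0.7"],
    ["10.0.0.0/24"],
))  # ['10.0.0.4', '10.0.0.7', '10.1.0.5']
```

The goal in either case is the same: the LDAP servers assigned to the configured AD DS site are returned (or tried) before servers elsewhere in the domain.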
+
+### Consequences of incorrect or incomplete AD Site Name configuration
+
+Incorrect or incomplete AD DS site topology or configuration can result in volume creation failures, problems with client queries, authentication failures, and failures to modify Azure NetApp Files AD connections.
+
+If the **AD Site Name** field is not specified in the Azure NetApp Files AD connection, Azure NetApp Files domain controller discovery will attempt to discover all domain controllers in the AD DS domain. Enumerating all domain controllers and the associated services hosted on them can be a slow process. In this scenario, Azure NetApp Files might select a domain controller that is not in an optimal network location for good communication with Azure NetApp Files, or that might even be unreachable. This behavior can result in slow share enumeration and in inconsistent or no access to Azure NetApp Files volumes that rely on AD DS domain controller communication.
+
+> [!NOTE]
+> Azure NetApp Files doesn't support the use of AD DS Read-only Domain Controllers (RODC). To prevent Azure NetApp Files from using an RODC, don't configure the **AD Site Name** field of the AD connection with a site that contains an RODC.
+
+### Sample AD DS site topology configuration for Azure NetApp Files
+
+An AD DS site topology is a logical representation of the network where Azure NetApp Files is deployed. This section presents a sample configuration that illustrates a _basic_ AD DS site design for Azure NetApp Files. It isn't the only way to design a network or AD DS site topology for Azure NetApp Files.
+
+> [!IMPORTANT]
+> For scenarios that involve complex AD DS or complex network topologies, you should have a Microsoft Azure CSA review the Azure NetApp Files networking and AD Site design.
+
+The following diagram shows a sample network topology (`sample-network-topology.png`):
+
+In the sample network topology, an on-premises AD DS domain (`anf.local`) is extended into an Azure virtual network. The on-premises network is connected to the Azure virtual network using an Azure ExpressRoute circuit.
+
+The Azure virtual network has four subnets: Gateway Subnet, Azure Bastion Subnet, AD DS Subnet, and an Azure NetApp Files Delegated Subnet. Redundant AD DS domain controllers joined to the `anf.local` domain are deployed into the AD DS subnet. The AD DS subnet is assigned the IP address range 10.0.0.0/24.
+
+Azure NetApp Files can only use one AD DS site to determine which domain controllers will be used for authentication, LDAP queries, and Kerberos. In the sample scenario, two subnet objects are created and assigned to a site called `ANF` using the Active Directory Sites and Services utility. One subnet object is mapped to the AD DS subnet, 10.0.0.0/24, and the other subnet object is mapped to the ANF delegated subnet, 10.0.2.0/24.
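The subnet-to-site mapping AD DS performs can be sketched as a longest-prefix match over the subnet objects. The mapping below mirrors the sample topology; the values are illustrative only:

```python
import ipaddress

# Sketch: how AD DS resolves an IP address to a site via subnet objects,
# using longest-prefix match. Values mirror the sample topology above.
SUBNET_TO_SITE = {
    "10.0.0.0/24": "ANF",   # AD DS subnet
    "10.0.2.0/24": "ANF",   # Azure NetApp Files delegated subnet
}

def site_for_ip(ip):
    """Return the AD DS site for an IP, or None if no subnet object matches."""
    addr = ipaddress.ip_address(ip)
    best = None
    for prefix, site in SUBNET_TO_SITE.items():
        net = ipaddress.ip_network(prefix)
        if addr in net and (best is None or net.prefixlen > best[0]):
            best = (net.prefixlen, site)
    return best[1] if best else None

print(site_for_ip("10.0.2.10"))  # ANF
print(site_for_ip("10.0.1.5"))   # None (no matching subnet object)
```

An address with no matching subnet object falls outside the site design, which is exactly the situation that leads to suboptimal domain controller selection.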
+
+In the Active Directory Sites and Services tool, verify that the AD DS domain controllers deployed into the AD DS subnet are assigned to the `ANF` site:
++
+To create the subnet object that maps to the AD DS subnet in the Azure virtual network, right-click the **Subnets** container in the **Active Directory Sites and Services** utility and select **New Subnet...**.
+
+In the **New Object - Subnet** dialog, the 10.0.0.0/24 IP address range for the AD DS Subnet is entered in the **Prefix** field. Select `ANF` as the site object for the subnet. Select **OK** to create the subnet object and assign it to the `ANF` site.
++
+To verify that the new subnet object is assigned to the correct site, right-click the 10.0.0.0/24 subnet object and select **Properties**. The **Site** field should show the `ANF` site object:
++
+To create the subnet object that maps to the Azure NetApp Files delegated subnet in the Azure virtual network, right-click the **Subnets** container in the **Active Directory Sites and Services** utility and select **New Subnet...**.
+
+### Cross-region replication considerations
+
+[Azure NetApp Files cross-region replication](cross-region-replication-introduction.md) enables you to replicate Azure NetApp Files volumes from one region to another region to support business continuance and disaster recovery (BC/DR) requirements.
+
+Azure NetApp Files SMB, dual-protocol, and NFSv4.1 Kerberos volumes support cross-region replication. Replication of these volumes requires the following:
+
+* A NetApp account created in both the source and destination regions.
+* An Azure NetApp Files Active Directory connection in the NetApp account created in the source and destination regions.
+* AD DS domain controllers deployed and running in the destination region.
+* Proper Azure NetApp Files network, DNS, and AD DS site design deployed in the destination region to enable good network communication between Azure NetApp Files and the AD DS domain controllers in the destination region.
+* The Active Directory connection in the destination region must be configured to use the DNS and AD Site resources in the destination region.
+
+## Next steps
+* [Create and manage Active Directory connections](create-active-directory-connections.md)
+* [Modify Active Directory connections](modify-active-directory-connections.md)
+* [Enable AD DS LDAP authentication for NFS volumes](configure-ldap-over-tls.md)
+* [Create an SMB volume](azure-netapp-files-create-volumes-smb.md)
+* [Create a dual-protocol volume](create-volumes-dual-protocol.md)
+* [Errors for SMB and dual-protocol volumes](troubleshoot-volumes.md#errors-for-smb-and-dual-protocol-volumes)
azure-resource-manager Bicep Config Linter https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/bicep-config-linter.md
The following example shows the rules that are available for configuration.
"use-protectedsettings-for-commandtoexecute-secrets": { "level": "warning" },
- "use-stable-resource-identifier": {
+ "use-stable-resource-identifiers": {
"level": "warning"
- }
+ },
"use-stable-vm-image": { "level": "warning" }
azure-resource-manager Linter https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/linter.md
The default set of linter rules is minimal and taken from [arm-ttk test cases](.
- [secure-parameter-default](./linter-rule-secure-parameter-default.md) - [simplify-interpolation](./linter-rule-simplify-interpolation.md) - [use-protectedsettings-for-commandtoexecute-secrets](./linter-rule-use-protectedsettings-for-commandtoexecute-secrets.md)-- [use-stable-resource-identifier](./linter-rule-use-stable-resource-identifier.md)
+- [use-stable-resource-identifiers](./linter-rule-use-stable-resource-identifier.md)
- [use-stable-vm-image](./linter-rule-use-stable-vm-image.md) You can customize how the linter rules are applied. To overwrite the default settings, add a **bicepconfig.json** file and apply custom settings. For more information about applying those settings, see [Add custom settings in the Bicep config file](bicep-config-linter.md).
azure-resource-manager Azure Subscription Service Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/azure-subscription-service-limits.md
Title: Azure subscription limits and quotas description: Provides a list of common Azure subscription and service limits, quotas, and constraints. This article includes information on how to increase limits along with maximum values. Previously updated : 04/27/2022 Last updated : 07/27/2022 # Azure subscription and service limits, quotas, and constraints
For limits specific to Media Services v2 (legacy), see [Media Services v2 (legac
[!INCLUDE [azure-virtual-network-limits](../../../includes/azure-virtual-network-limits.md)]
-### ExpressRoute limits
+### Application Gateway limits
+The following table applies to v1, v2, Standard, and WAF SKUs unless otherwise stated.
-### Virtual Network Gateway limits
+### Azure Bastion limits
-### NAT Gateway limits
+### Azure DNS limits
-### Virtual WAN limits
+### Azure Firewall limits
-### Application Gateway limits
+### Azure Front Door (classic) limits
-The following table applies to v1, v2, Standard, and WAF SKUs unless otherwise stated.
+
+### Azure Route Server limits
++
+### ExpressRoute limits
++
+### NAT Gateway limits
+ ### Network Watcher limits
The following table applies to v1, v2, Standard, and WAF SKUs unless otherwise s
[!INCLUDE [traffic-manager-limits](../../../includes/traffic-manager-limits.md)]
-### Azure Bastion limits
--
-### Azure DNS limits
--
-### Azure Firewall limits
+### Virtual Network Gateway limits
-### Azure Front Door (classic) limits
+### Virtual WAN limits
## Notification Hubs limits
azure-resource-manager Networking Move Limitations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/move-limitations/networking-move-limitations.md
If you want to move networking resources to a new region, see [Tutorial: Move Az
> [!NOTE] > Please note that VPN Gateways associated with Public IP Standard SKU addresses are not currently able to move between resource groups or subscriptions.
+> [!NOTE]
+> Please note that any resources associated with Public IP Standard SKU addresses are not currently able to move across subscriptions.
+ When moving a resource, you must also move its dependent resources (for example - public IP addresses, virtual network gateways, all associated connection resources). Local network gateways can be in a different resource group. To move a virtual machine with a network interface card to a new subscription, you must move all dependent resources. Move the virtual network for the network interface card, all other network interface cards for the virtual network, and the VPN gateways.
azure-video-indexer Concepts Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/concepts-overview.md
Title: Azure Video Indexer concepts - Azure
+ Title: Azure Video Indexer terminology & concepts overview
description: This article gives a brief overview of Azure Video Indexer terminology and concepts. Last updated 01/19/2021 -
-# Azure Video Indexer concepts
+# Azure Video Indexer terminology & concepts
This article gives a brief overview of Azure Video Indexer terminology and concepts.
-## Audio/video/combined insights
-
-When you upload your videos to Azure Video Indexer, it analyses both visuals and audio by running different AI models. As Azure Video Indexer analyzes your video, the insights that are extracted by the AI models. For more information, see [overview](video-indexer-overview.md).
- ## Confidence scores
-The confidence score indicates the confidence in an insight. It is a number between 0.0 and 1.0. The higher the score- the greater the confidence in the answer. For example,
+The confidence score indicates the confidence in an insight. It is a number between 0.0 and 1.0. The higher the score the greater the confidence in the answer. For example:
```json "transcript":[
The confidence score indicates the confidence in an insight. It is a number betw
Use textual and visual content moderation models to keep your users safe from inappropriate content and validate that the content you publish matches your organization's values. You can automatically block certain videos or alert your users about the content. For more information, see [Insights: visual and textual content moderation](video-indexer-output-json-v2.md#visualcontentmoderation).
-## Project and editor
-
-The [Azure Video Indexer](https://www.videoindexer.ai/) website enables you to use your video's deep insights to: find the right media content, locate the parts that you're interested in, and use the results to create an entirely new project. Once created, the project can be rendered and downloaded from Azure Video Indexer and be used in your own editing applications or downstream workflows.
+## Insights
-Some scenarios where you may find this feature useful are:
+Insights contain an aggregated view of the data: faces, topics, emotions. Azure Video Indexer analyzes the video and audio content by running 30+ AI models, generating rich insights. Below is an illustration of the audio and video analysis performed by Azure Video Indexer in the background.
-* Creating movie highlights for trailers.
-* Using old clips of videos in news casts.
-* Creating shorter content for social media.
+> [!div class="mx-imgBorder"]
+> :::image type="content" source="./media/video-indexer-overview/model-chart.png" alt-text="Diagram of Azure Video Indexer flow.":::
+
-For more information, see [Use editor to create projects](use-editor-create-project.md).
+The [Azure Video Indexer](https://www.videoindexer.ai/) website enables you to use your video's deep insights to: find the right media content, locate the parts that you're interested in, and use the results to create an entirely new project. Once created, the project can be rendered and downloaded from Azure Video Indexer and be used in your own editing applications or downstream workflows. For more information, see [Use editor to create projects](use-editor-create-project.md).
## Keyframes Azure Video Indexer selects the frame(s) that best represent each shot. Keyframes are the representative frames selected from the entire video based on aesthetic properties (for example, contrast and stableness). For more information, see [Scenes, shots, and keyframes](scenes-shots-keyframes.md).
-## time range vs. adjusted time range
+## Time range vs. adjusted time range
-TimeRange is the time range in the original video. AdjustedTimeRange is the time range relative to the current playlist. Since you can create a playlist from different lines of different videos, you can take a 1-hour video and use just 1 line from it, for example, 10:00-10:15. In that case, you will have a playlist with 1 line, where the time range is 10:00-10:15 but the adjustedTimeRange is 00:00-00:15.
+Time range is the time period in the original video. Adjusted time range is the time range relative to the current playlist. Since you can create a playlist from different lines of different videos, you can take a 1-hour video and use just 1 line from it, for example, 10:00-10:15. In that case, you will have a playlist with 1 line, where the time range is 10:00-10:15 but the adjusted time range is 00:00-00:15.
## Widgets Azure Video Indexer supports embedding widgets in your apps. For more information, see [Embed Azure Video Indexer widgets in your apps](video-indexer-embed-widgets.md).
-## Insights
-
-Insights contain an aggregated view of the data: faces, topics, emotions. Azure Video Indexer analyzes the video and audio content by running 30+ AI models, generating rich insights. Below is an illustration of the audio and video analysis performed by Azure Video Indexer in the background.
-
-> [!div class="mx-imgBorder"]
-> :::image type="content" source="./media/video-indexer-overview/model-chart.png" alt-text="Diagram of Azure Video Indexer flow.":::
-
- ## Next steps - [overview](video-indexer-overview.md)
azure-video-indexer Video Indexer Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/video-indexer-overview.md
To learn about compliance, privacy and security in Azure Video Indexer please vi
Azure Video Indexer's insights can be applied to many scenarios, among them are:
-* *Deep search*: Use the insights extracted from the video to enhance the search experience across a video library. For example, indexing spoken words and faces can enable the search experience of finding moments in a video where a person spoke certain words or when two people were seen together. Search based on such insights from videos is applicable to news agencies, educational institutes, broadcasters, entertainment content owners, enterprise LOB apps, and in general to any industry that has a video library that users need to search against.
-* *Content creation*: Create trailers, highlight reels, social media content, or news clips based on the insights Azure Video Indexer extracts from your content. Keyframes, scenes markers, and timestamps of the people and label appearances make the creation process smoother and easier, enabling you to easily get to the parts of the video you need when creating content.
-* *Accessibility*: Whether you want to make your content available for people with disabilities or if you want your content to be distributed to different regions using different languages, you can use the transcription and translation provided by Azure Video Indexer in multiple languages.
-* *Monetization*: Azure Video Indexer can help increase the value of videos. For example, industries that rely on ad revenue (news media, social media, and so on) can deliver relevant ads by using the extracted insights as additional signals to the ad server.
-* *Content moderation*: Use textual and visual content moderation models to keep your users safe from inappropriate content and validate that the content you publish matches your organization's values. You can automatically block certain videos or alert your users about the content.
-* *Recommendations*: Video insights can be used to improve user engagement by highlighting the relevant video moments to users. By tagging each video with additional metadata, you can recommend to users the most relevant videos and highlight the parts of the video that will match their needs.
+* Deep search: Use the insights extracted from the video to enhance the search experience across a video library. For example, indexing spoken words and faces can enable the search experience of finding moments in a video where a person spoke certain words or when two people were seen together. Search based on such insights from videos is applicable to news agencies, educational institutes, broadcasters, entertainment content owners, enterprise LOB apps, and in general to any industry that has a video library that users need to search against.
+* Content creation: Create trailers, highlight reels, social media content, or news clips based on the insights Azure Video Indexer extracts from your content. Keyframes, scene markers, and timestamps of the people and label appearances make the creation process smoother and easier, enabling you to easily get to the parts of the video you need when creating content.
+* Accessibility: Whether you want to make your content available for people with disabilities or if you want your content to be distributed to different regions using different languages, you can use the transcription and translation provided by Azure Video Indexer in multiple languages.
+* Monetization: Azure Video Indexer can help increase the value of videos. For example, industries that rely on ad revenue (news media, social media, and so on) can deliver relevant ads by using the extracted insights as additional signals to the ad server.
+* Content moderation: Use textual and visual content moderation models to keep your users safe from inappropriate content and validate that the content you publish matches your organization's values. You can automatically block certain videos or alert your users about the content.
+* Recommendations: Video insights can be used to improve user engagement by highlighting the relevant video moments to users. By tagging each video with additional metadata, you can recommend to users the most relevant videos and highlight the parts of the video that will match their needs.
## Features
The following list shows the insights you can retrieve from your videos using Az
* **Rolling credits**: Identifies the beginning and end of the rolling credits in the end of TV shows and movies. * **Animated characters detection** (preview): Detection, grouping, and recognition of characters in animated content via integration with [Cognitive Services custom vision](https://azure.microsoft.com/services/cognitive-services/custom-vision-service/). For more information, see [Animated character detection](animated-characters-recognition.md). * **Editorial shot type detection**: Tagging shots based on their type (like wide shot, medium shot, close up, extreme close up, two shot, multiple people, outdoor and indoor, and so on). For more information, see [Editorial shot type detection](scenes-shots-keyframes.md#editorial-shot-type-detection).
-* **Observed People Tracking** (preview): detects observed people in videos and provides information such as the location of the person in the video frame (using bounding boxes) and the exact timestamp (start, end) and confidence when a person appears. For more information, see [Trace observed people in a video](observed-people-tracing.md).
+* **Observed people tracking** (preview): detects observed people in videos and provides information such as the location of the person in the video frame (using bounding boxes) and the exact timestamp (start, end) and confidence when a person appears. For more information, see [Trace observed people in a video](observed-people-tracing.md).
* **People's detected clothing**: detects the clothing types of people appearing in the video and provides information such as long or short sleeves, long or short pants and skirt or dress. The detected clothing is associated with the people wearing it and the exact timestamp (start,end) along with a confidence level for the detection are provided. * **Matched person**: matches between people that were observed in the video with the corresponding faces detected. The matching between the observed people and the faces contain a confidence level.
You can access Azure Video Indexer capabilities in three ways:
* Azure Video Indexer portal: An easy-to-use solution that lets you evaluate the product, manage the account, and customize models. For more information about the portal, see [Get started with the Azure Video Indexer website](video-indexer-get-started.md). - * API integration: All of Azure Video Indexer's capabilities are available through a REST API, which lets you integrate the solution into your apps and infrastructure. To get started as a developer, see [Use Azure Video Indexer REST API](video-indexer-use-apis.md).- * Embeddable widget: Lets you embed the Azure Video Indexer insights, player, and editor experiences into your app. For more information, see [Embed visual widgets in your application](video-indexer-embed-widgets.md).- If you're using the website, the insights are added as metadata and are visible in the portal. If you're using APIs, the insights are available as a JSON file. ## Supported browsers
azure-vmware Concepts Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/concepts-storage.md
description: Learn about storage capacity, storage policies, fault tolerance, an
Previously updated : 05/02/2022 Last updated : 07/27/2022 # Azure VMware Solution storage concepts
vSAN datastores use data-at-rest encryption by default using keys stored in Azur
## Azure storage integration
-You can use Azure storage services in workloads running in your private cloud. The Azure storage services include Storage Accounts, Table Storage, and Blob Storage. The connection of workloads to Azure storage services doesn't traverse the internet. This connectivity provides more security and enables you to use SLA-based Azure storage services in your private cloud workloads. You can also connect Azure disk pools or Azure NetApp Files to expand the storage capacity. This functionality is in preview.
+You can use Azure storage services in workloads running in your private cloud. The Azure storage services include Storage Accounts, Table Storage, and Blob Storage. The connection of workloads to Azure storage services doesn't traverse the internet. This connectivity provides more security and enables you to use SLA-based Azure storage services in your private cloud workloads. You can also connect Azure disk pools or [Azure NetApp Files datastores](attach-azure-netapp-files-to-azure-vmware-solution-hosts.md) to expand the storage capacity. This functionality is in preview.
## Alerts and monitoring
azure-vmware Netapp Files With Azure Vmware Solution https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/netapp-files-with-azure-vmware-solution.md
Azure NetApp Files and Azure VMware Solution are created in the same Azure regio
Services where Azure NetApp Files are used: -- **Active Directory connections**: Azure NetApp Files supports [Active Directory Domain Services and Azure Active Directory Domain Services](../azure-netapp-files/create-active-directory-connections.md#decide-which-domain-services-to-use).
+- **Active Directory connections**: Azure NetApp Files supports both Active Directory Domain Services and Azure Active Directory Domain Services. For more information, see [Understand guidelines for Active Directory Domain Services site design and planning for Azure NetApp Files](../azure-netapp-files/understand-guidelines-active-directory-domain-service-site.md).
- **Share Protocol**: Azure NetApp Files supports the Server Message Block (SMB) and Network File System (NFS) protocols. This support means the volumes can be mounted on Linux clients and mapped on Windows clients.
chaos-studio Chaos Studio Fault Providers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/chaos-studio/chaos-studio-fault-providers.md
The following are the supported resource types for faults, the target types, and suggested roles to use when giving an experiment permission to a resource of that type.
-| Resource Type | Target name | Suggested role assignment |
+| Resource Type | Target name/type | Suggested role assignment |
| - | - | - | | Microsoft.Cache/Redis (service-direct) | Microsoft-AzureCacheForRedis | Redis Cache Contributor | | Microsoft.ClassicCompute/domainNames (service-direct) | Microsoft-DomainNames | Classic Virtual Machine Contributor |
chaos-studio Chaos Studio Samples Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/chaos-studio/chaos-studio-samples-rest-api.md
az rest --method delete --url "https://management.azure.com/{experimentId}?api-v
#### Start an experiment ```azurecli
-az rest --method get --url "https://management.azure.com/{experimentId}/start?api-version={apiVersion}" --resource "https://management.azure.com"
+az rest --method post --url "https://management.azure.com/{experimentId}/start?api-version={apiVersion}"
``` #### Get statuses (History) of an experiment
az rest --method get --url "https://management.azure.com/{experimentId}/executio
| {experimentName.json} | JSON containing the configuration of the chaos experiment | Generated by the user | | {subscriptionId} | Subscription Id where the target resource is located | Can be found in the [Subscriptions Portal Blade](https://portal.azure.com/#blade/Microsoft_Azure_Billing/SubscriptionsBlade) | | {resourceGroupName} | Name of the resource group where the target resource is located | Can be found in the [Resource Groups Portal Blade](https://portal.azure.com/#blade/HubsExtension/BrowseResourceGroups) |
-| {executionDetailsId} | Execution Id of an experiment execution | Can be found in the [Chaos Studio Experiment Portal Blade](https://portal.azure.com/#blade/HubsExtension/BrowseResource/resourceType/Microsoft.chaos%2Fchaosexperiments) |
+| {executionDetailsId} | Execution Id of an experiment execution | Can be found in the [Chaos Studio Experiment Portal Blade](https://portal.azure.com/#blade/HubsExtension/BrowseResource/resourceType/Microsoft.chaos%2Fchaosexperiments) |
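The `{experimentId}` placeholder in the `az rest` examples above stands for the experiment's full ARM resource path, built from the values in the table. As a hedged illustration (the helper function and sample values below are hypothetical, not real resources), the path can be assembled like this:

```python
# Illustrative only: assemble the ARM resource path that the {experimentId}
# placeholder in the az rest examples above expands to.
def experiment_id(subscription_id, resource_group, experiment_name):
    return (
        f"/subscriptions/{subscription_id}"
        f"/resourceGroups/{resource_group}"
        f"/providers/Microsoft.Chaos/experiments/{experiment_name}"
    )

# Hypothetical placeholder values, as in the table above.
path = experiment_id("00000000-0000-0000-0000-000000000000", "my-rg", "my-experiment")
print(path)
```

The same path prefix is used for the delete, start, and execution-status calls shown above; only the trailing action segment (for example `/start`) differs.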
chaos-studio Chaos Studio Tutorial Aks Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/chaos-studio/chaos-studio-tutorial-aks-portal.md
Azure Chaos Studio uses [Chaos Mesh](https://chaos-mesh.org/), a free, open-sour
> [!WARNING] > AKS Chaos Mesh faults are only supported on Linux node pools.
+## Limitations
+- At present, Chaos Mesh faults don't work with private clusters.
+ ## Set up Chaos Mesh on your AKS cluster Before you can run Chaos Mesh faults in Chaos Studio, you need to install Chaos Mesh on your AKS cluster.
chaos-studio Sample Template Experiment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/chaos-studio/sample-template-experiment.md
In this sample, we create a chaos experiment with a single target resource and a
"value": "eastus" }, "chaosTargetResourceId": {
- "value": "eastus"
+ "value": "/subscriptions/<subscription-id>/resourceGroups/<rg-name>/providers/Microsoft.DocumentDB/databaseAccounts/<account-name>"
} } }
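The corrected `chaosTargetResourceId` value above is a full ARM resource ID rather than a region name. A minimal sketch (assuming the standard `/subscriptions/.../resourceGroups/.../providers/...` segment layout) of splitting such an ID into its components:

```python
# Minimal sketch: split an ARM resource ID, like the chaosTargetResourceId
# value above, into its main components. Assumes the standard
# /subscriptions/.../resourceGroups/.../providers/... layout.
def parse_resource_id(resource_id):
    parts = resource_id.strip("/").split("/")
    # Pair up segment names with the values that follow them.
    fields = dict(zip(parts[0::2], parts[1::2]))
    return {
        "subscription": fields.get("subscriptions"),
        "resource_group": fields.get("resourceGroups"),
        "provider": fields.get("providers"),
    }

rid = ("/subscriptions/1111/resourceGroups/my-rg"
       "/providers/Microsoft.DocumentDB/databaseAccounts/my-account")
print(parse_resource_id(rid))
```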
cognitive-services Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/QnAMaker/whats-new.md
Learn what's new in the service. These items may be release notes, videos, blog pos
Learn what's new with QnA Maker. ### November 2021
-* [Question answering](../language-service/question-answering/overview.md) is now [generally available](https://techcommunity.microsoft.com/t5/ai-cognitive-services-blog/question-answering-feature-is-generally-available/ba-p/2899497) as a feature within [Azure Cognitive Service for Language](https://ms.portal.azure.com/?quickstart=true#create/Microsoft.CognitiveServicesTextAnalytics).
+* [Question answering](../language-service/question-answering/overview.md) is now [generally available](https://techcommunity.microsoft.com/t5/ai-cognitive-services-blog/question-answering-feature-is-generally-available/ba-p/2899497) as a feature within [Azure Cognitive Service for Language](https://portal.azure.com/?quickstart=true#create/Microsoft.CognitiveServicesTextAnalytics).
* Question answering is powered by **state-of-the-art transformer models** and [Turing](https://turing.microsoft.com/) Natural Language models. * The erstwhile QnA Maker product will be [retired](https://azure.microsoft.com/updates/azure-qna-maker-will-be-retired-on-31-march-2025/) on 31st March 2025, and no new QnA Maker resources will be created beginning 1st October 2022. * All existing QnA Maker customers are strongly advised to [migrate](../language-service/question-answering/how-to/migrate-qnamaker.md) their QnA Maker knowledge bases to Question answering as soon as possible to continue experiencing the best of QnA capabilities offered by Azure.
cognitive-services Custom Speech Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/custom-speech-overview.md
With Custom Speech, you can upload your own data, test and train a custom model,
Here's more information about the sequence of steps shown in the previous diagram:
-1. [Create a project](how-to-custom-speech-create-project.md) and choose a model. Use a <a href="https://ms.portal.azure.com/#create/Microsoft.CognitiveServicesSpeechServices" title="Create a Speech resource" target="_blank">Speech resource</a> that you create in the Azure portal.
+1. [Create a project](how-to-custom-speech-create-project.md) and choose a model. Use a <a href="https://portal.azure.com/#create/Microsoft.CognitiveServicesSpeechServices" title="Create a Speech resource" target="_blank">Speech resource</a> that you create in the Azure portal.
1. [Upload test data](./how-to-custom-speech-upload-data.md). Upload test data to evaluate the Microsoft speech-to-text offering for your applications, tools, and products. 1. [Test recognition quality](how-to-custom-speech-inspect-data.md). Use the [Speech Studio](https://aka.ms/speechstudio/customspeech) to play back uploaded audio and inspect the speech recognition quality of your test data. 1. [Test model quantitatively](how-to-custom-speech-evaluate-data.md). Evaluate and improve the accuracy of the speech-to-text model. The Speech service provides a quantitative word error rate (WER), which you can use to determine if additional training is required.
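The quantitative test step above reports a word error rate (WER). As a rough, self-contained sketch (not the Speech service's actual scoring code), WER is the word-level edit distance between the reference transcript and the recognition hypothesis, divided by the number of reference words:

```python
# Word error rate: (substitutions + deletions + insertions) / reference words.
# A simplified sketch, not the Speech service's implementation.
def wer(reference, hypothesis):
    ref, hyp = reference.split(), hypothesis.split()
    # Classic dynamic-programming edit distance over words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution or match
    return d[len(ref)][len(hyp)] / len(ref)

print(wer("the quick brown fox", "the quick brown dog"))  # 0.25
```

One substituted word out of four reference words gives a WER of 0.25; a lower WER indicates a more accurate model.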
cognitive-services How To Custom Speech Create Project https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-custom-speech-create-project.md
To create a Custom Speech project, follow these steps:
Select the new project by name or select **Go to project**. You will see these menu items in the left panel: **Speech datasets**, **Train custom models**, **Test models**, and **Deploy models**.
+> [!div class="nextstepaction"]
+> <a href="https://microsoft.qualtrics.com/jfe/form/SV_0Cl5zkG3CnDjq6O?PLanguage=Speech-studio&Pillar=Speech&Product=Custom-speech&Page=Create-a-project&Section=Create-a-project" target="_target">I ran into an issue</a>
+ ::: zone-end ::: zone pivot="speech-cli"
To create a project, use the `spx csr project create` command. Construct the req
Here's an example Speech CLI command that creates a project:
-```azurecli
+```azurecli-interactive
spx csr project create --name "My Project" --description "My Project Description" --language "en-US" ```
+> [!div class="nextstepaction"]
+> <a href="https://microsoft.qualtrics.com/jfe/form/SV_0Cl5zkG3CnDjq6O?PLanguage=CLI&Pillar=Speech&Product=Custom-speech&Page=Create-a-project&Section=Create-a-project" target="_target">I ran into an issue</a>
+ You should receive a response body in the following format: ```json
The top-level `self` property in the response body is the project's URI. Use thi
For Speech CLI help with projects, run the following command:
-```azurecli
+```azurecli-interactive
spx help csr project ```
curl -v -X POST -H "Ocp-Apim-Subscription-Key: YourSubscriptionKey" -H "Content-
} ' "https://YourServiceRegion.api.cognitive.microsoft.com/speechtotext/v3.0/projects" ```
+> [!div class="nextstepaction"]
+> <a href="https://microsoft.qualtrics.com/jfe/form/SV_0Cl5zkG3CnDjq6O?PLanguage=REST&Pillar=Speech&Product=Custom-speech&Page=Create-a-project&Section=Create-a-project" target="_target">I ran into an issue</a>
+ You should receive a response body in the following format: ```json
cognitive-services How To Custom Speech Deploy Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-custom-speech-deploy-model.md
To create an endpoint and deploy a model, use the `spx csr endpoint create` comm
Here's an example Speech CLI command to create an endpoint and deploy a model:
-```azurecli
+```azurecli-interactive
spx csr endpoint create --project YourProjectId --model YourModelId --name "My Endpoint" --description "My Endpoint Description" --language "en-US" ```
+> [!div class="nextstepaction"]
+> <a href="https://microsoft.qualtrics.com/jfe/form/SV_0Cl5zkG3CnDjq6O?PLanguage=Speech-studio&Pillar=Speech&Product=Custom-speech&Page=Deploy-a-model&Section=Add-a-deployment-endpoint" target="_target">I ran into an issue</a>
+ You should receive a response body in the following format: ```json
The top-level `self` property in the response body is the endpoint's URI. Use th
For Speech CLI help with endpoints, run the following command:
-```azurecli
+```azurecli-interactive
spx help csr endpoint ```
curl -v -X POST -H "Ocp-Apim-Subscription-Key: YourSubscriptionKey" -H "Content-
}' "https://YourServiceRegion.api.cognitive.microsoft.com/speechtotext/v3.0/endpoints" ```
+> [!div class="nextstepaction"]
+> <a href="https://microsoft.qualtrics.com/jfe/form/SV_0Cl5zkG3CnDjq6O?PLanguage=CLI&Pillar=Speech&Product=Custom-speech&Page=Deploy-a-model&Section=Add-a-deployment-endpoint" target="_target">I ran into an issue</a>
+ You should receive a response body in the following format: ```json
To redeploy the custom endpoint with a new model, use the `spx csr endpoint update`
Here's an example Speech CLI command that redeploys the custom endpoint with a new model:
-```azurecli
+```azurecli-interactive
spx csr endpoint update --endpoint YourEndpointId --model YourModelId ```
+> [!div class="nextstepaction"]
+> <a href="https://microsoft.qualtrics.com/jfe/form/SV_0Cl5zkG3CnDjq6O?PLanguage=REST&Pillar=Speech&Product=Custom-speech&Page=Deploy-a-model&Section=Add-a-deployment-endpoint" target="_target">I ran into an issue</a>
+ You should receive a response body in the following format: ```json
You should receive a response body in the following format:
For Speech CLI help with endpoints, run the following command:
-```azurecli
+```azurecli-interactive
spx help csr endpoint ```
curl -v -X PATCH -H "Ocp-Apim-Subscription-Key: YourSubscriptionKey" -H "Content
}' "https://YourServiceRegion.api.cognitive.microsoft.com/speechtotext/v3.0/endpoints/YourEndpointId" ```
+> [!div class="nextstepaction"]
+> <a href="https://microsoft.qualtrics.com/jfe/form/SV_0Cl5zkG3CnDjq6O?PLanguage=Speech-studio&Pillar=Speech&Product=Custom-speech&Page=Deploy-a-model&Section=Change-model-and-redeploy-endpoint" target="_target">I ran into an issue</a>
+ You should receive a response body in the following format: ```json
To get logs for an endpoint, use the `spx csr endpoint list` command. Construct
Here's an example Speech CLI command that gets logs for an endpoint:
-```azurecli
+```azurecli-interactive
spx csr endpoint list --endpoint YourEndpointId ```
+> [!div class="nextstepaction"]
+> <a href="https://microsoft.qualtrics.com/jfe/form/SV_0Cl5zkG3CnDjq6O?PLanguage=CLI&Pillar=Speech&Product=Custom-speech&Page=Deploy-a-model&Section=Change-model-and-redeploy-endpoint" target="_target">I ran into an issue</a>
+ The location of each log file, along with more details, is returned in the response body. ::: zone-end
Make an HTTP GET request using the URI as shown in the following example. Replac
curl -v -X GET "https://YourServiceRegion.api.cognitive.microsoft.com/speechtotext/v3.0/endpoints/YourEndpointId" -H "Ocp-Apim-Subscription-Key: YourSubscriptionKey" ```
+> [!div class="nextstepaction"]
+> <a href="https://microsoft.qualtrics.com/jfe/form/SV_0Cl5zkG3CnDjq6O?PLanguage=REST&Pillar=Speech&Product=Custom-speech&Page=Deploy-a-model&Section=Change-model-and-redeploy-endpoint" target="_target">I ran into an issue</a>
+ You should receive a response body in the following format: ```json
cognitive-services How To Custom Speech Evaluate Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-custom-speech-evaluate-data.md
Follow these steps to create a test:
1. Enter the test name and description, and then select **Next**. 1. Review the test details, and then select **Save and close**.
+> [!div class="nextstepaction"]
+> <a href="https://microsoft.qualtrics.com/jfe/form/SV_0Cl5zkG3CnDjq6O?PLanguage=Speech-studio&Pillar=Speech&Product=Custom-speech&Page=Test-model-quantitatively&Section=Create-a-test" target="_target">I ran into an issue</a>
::: zone-end
To create a test, use the `spx csr evaluation create` command. Construct the req
Here's an example Speech CLI command that creates a test:
-```azurecli
+```azurecli-interactive
spx csr evaluation create --project 9f8c4cbb-f9a5-4ec1-8bb0-53cfa9221226 --dataset be378d9d-a9d7-4d4a-820a-e0432e8678c7 --model1 ff43e922-e3e6-4bf0-8473-55c08fd68048 --model2 1aae1070-7972-47e9-a977-87e3b05c457d --name "My Evaluation" --description "My Evaluation Description" ```
+> [!div class="nextstepaction"]
+> <a href="https://microsoft.qualtrics.com/jfe/form/SV_0Cl5zkG3CnDjq6O?PLanguage=CLI&Pillar=Speech&Product=Custom-speech&Page=Test-model-quantitatively&Section=Create-a-test" target="_target">I ran into an issue</a>
+ You should receive a response body in the following format: ```json
The top-level `self` property in the response body is the evaluation's URI. Use
For Speech CLI help with evaluations, run the following command:
-```azurecli
+```azurecli-interactive
spx help csr evaluation ```
curl -v -X POST -H "Ocp-Apim-Subscription-Key: YourSubscriptionKey" -H "Content-
}' "https://YourServiceRegion.api.cognitive.microsoft.com/speechtotext/v3.0/evaluations" ```
+> [!div class="nextstepaction"]
+> <a href="https://microsoft.qualtrics.com/jfe/form/SV_0Cl5zkG3CnDjq6O?PLanguage=REST&Pillar=Speech&Product=Custom-speech&Page=Test-model-quantitatively&Section=Create-a-test" target="_target">I ran into an issue</a>
+ You should receive a response body in the following format: ```json
Follow these steps to get test results:
This page lists all the utterances in your dataset and the recognition results, alongside the transcription from the submitted dataset. You can toggle various error types, including insertion, deletion, and substitution. By listening to the audio and comparing recognition results in each column, you can decide which model meets your needs and determine where additional training and improvements are required.
+> [!div class="nextstepaction"]
+> <a href="https://microsoft.qualtrics.com/jfe/form/SV_0Cl5zkG3CnDjq6O?PLanguage=Speech-studio&Pillar=Speech&Product=Custom-speech&Page=Test-model-quantitatively&Section=Get-test-results" target="_target">I ran into an issue</a>
+ ::: zone-end ::: zone pivot="speech-cli"
To get test results, use the `spx csr evaluation status` command. Construct the
Here's an example Speech CLI command that gets test results:
-```azurecli
+```azurecli-interactive
spx csr evaluation status --evaluation 8bfe6b05-f093-4ab4-be7d-180374b751ca ```
+> [!div class="nextstepaction"]
+> <a href="https://microsoft.qualtrics.com/jfe/form/SV_0Cl5zkG3CnDjq6O?PLanguage=CLI&Pillar=Speech&Product=Custom-speech&Page=Test-model-quantitatively&Section=Get-test-results" target="_target">I ran into an issue</a>
+ The word error rates and more details are returned in the response body. You should receive a response body in the following format:
You should receive a response body in the following format:
For Speech CLI help with evaluations, run the following command:
-```azurecli
+```azurecli-interactive
spx help csr evaluation ```
Make an HTTP GET request using the URI as shown in the following example. Replac
curl -v -X GET "https://YourServiceRegion.api.cognitive.microsoft.com/speechtotext/v3.0/evaluations/YourEvaluationId" -H "Ocp-Apim-Subscription-Key: YourSubscriptionKey" ```
+> [!div class="nextstepaction"]
+> <a href="https://microsoft.qualtrics.com/jfe/form/SV_0Cl5zkG3CnDjq6O?PLanguage=REST&Pillar=Speech&Product=Custom-speech&Page=Test-model-quantitatively&Section=Get-test-results" target="_target">I ran into an issue</a>
+ The word error rates and more details are returned in the response body. You should receive a response body in the following format:
cognitive-services How To Custom Speech Inspect Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-custom-speech-inspect-data.md
Follow these instructions to create a test:
1. Enter the test name and description, and then select **Next**. 1. Review your settings, and then select **Save and close**.
+> [!div class="nextstepaction"]
+> <a href="https://microsoft.qualtrics.com/jfe/form/SV_0Cl5zkG3CnDjq6O?PLanguage=Speech-studio&Pillar=Speech&Product=Custom-speech&Page=Test-recognition-quality&Section=Create-a-test" target="_target">I ran into an issue</a>
+ ::: zone-end ::: zone pivot="speech-cli"
To create a test, use the `spx csr evaluation create` command. Construct the req
Here's an example Speech CLI command that creates a test:
-```azurecli
+```azurecli-interactive
spx csr evaluation create --project 9f8c4cbb-f9a5-4ec1-8bb0-53cfa9221226 --dataset be378d9d-a9d7-4d4a-820a-e0432e8678c7 --model1 ff43e922-e3e6-4bf0-8473-55c08fd68048 --model2 1aae1070-7972-47e9-a977-87e3b05c457d --name "My Inspection" --description "My Inspection Description" ```
+> [!div class="nextstepaction"]
+> <a href="https://microsoft.qualtrics.com/jfe/form/SV_0Cl5zkG3CnDjq6O?PLanguage=CLI&Pillar=Speech&Product=Custom-speech&Page=Test-recognition-quality&Section=Create-a-test" target="_target">I ran into an issue</a>
+ You should receive a response body in the following format: ```json
The top-level `self` property in the response body is the evaluation's URI. Use
For Speech CLI help with evaluations, run the following command:
-```azurecli
+```azurecli-interactive
spx help csr evaluation ```
curl -v -X POST -H "Ocp-Apim-Subscription-Key: YourSubscriptionKey" -H "Content-
}' "https://YourServiceRegion.api.cognitive.microsoft.com/speechtotext/v3.0/evaluations" ```
+> [!div class="nextstepaction"]
+> <a href="https://microsoft.qualtrics.com/jfe/form/SV_0Cl5zkG3CnDjq6O?PLanguage=REST&Pillar=Speech&Product=Custom-speech&Page=Test-recognition-quality&Section=Create-a-test" target="_target">I ran into an issue</a>
+ You should receive a response body in the following format: ```json
Follow these steps to get test results:
This page lists all the utterances in your dataset and the recognition results, alongside the transcription from the submitted dataset. You can toggle various error types, including insertion, deletion, and substitution. By listening to the audio and comparing recognition results in each column, you can decide which model meets your needs and determine where additional training and improvements are required.
+> [!div class="nextstepaction"]
+> <a href="https://microsoft.qualtrics.com/jfe/form/SV_0Cl5zkG3CnDjq6O?PLanguage=Speech-studio&Pillar=Speech&Product=Custom-speech&Page=Test-recognition-quality&Section=Get-test-results" target="_target">I ran into an issue</a>
+ ::: zone-end ::: zone pivot="speech-cli"
To get test results, use the `spx csr evaluation status` command. Construct the
Here's an example Speech CLI command that gets test results:
-```azurecli
+```azurecli-interactive
spx csr evaluation status --evaluation 8bfe6b05-f093-4ab4-be7d-180374b751ca ```
+> [!div class="nextstepaction"]
+> <a href="https://microsoft.qualtrics.com/jfe/form/SV_0Cl5zkG3CnDjq6O?PLanguage=CLI&Pillar=Speech&Product=Custom-speech&Page=Test-recognition-quality&Section=Get-test-results" target="_target">I ran into an issue</a>
+ The models, audio dataset, transcriptions, and more details are returned in the response body. You should receive a response body in the following format:
You should receive a response body in the following format:
For Speech CLI help with evaluations, run the following command:
-```azurecli
+```azurecli-interactive
spx help csr evaluation ```
Make an HTTP GET request using the URI as shown in the following example. Replac
curl -v -X GET "https://YourServiceRegion.api.cognitive.microsoft.com/speechtotext/v3.0/evaluations/YourEvaluationId" -H "Ocp-Apim-Subscription-Key: YourSubscriptionKey" ```
+> [!div class="nextstepaction"]
+> <a href="https://microsoft.qualtrics.com/jfe/form/SV_0Cl5zkG3CnDjq6O?PLanguage=REST&Pillar=Speech&Product=Custom-speech&Page=Test-recognition-quality&Section=Get-test-results" target="_target">I ran into an issue</a>
+ The models, audio dataset, transcriptions, and more details are returned in the response body. You should receive a response body in the following format:
cognitive-services How To Custom Speech Model And Endpoint Lifecycle https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-custom-speech-model-and-endpoint-lifecycle.md
To get the training and transcription expiration dates for a base model, use the
Here's an example Speech CLI command to get the training and transcription expiration dates for a base model:
-```azurecli
+```azurecli-interactive
spx csr model status --model https://eastus.api.cognitive.microsoft.com/speechtotext/v3.0/models/base/b0bbc1e0-78d5-468b-9b7c-a5a43b2bb83f ```
You should receive a response body in the following format:
For Speech CLI help with models, run the following command:
-```azurecli
+```azurecli-interactive
spx help csr model ```
To get the transcription expiration date for your custom model, use the `spx csr
Here's an example Speech CLI command to get the transcription expiration date for your custom model:
-```azurecli
+```azurecli-interactive
spx csr model status --model https://YourServiceRegion.api.cognitive.microsoft.com/speechtotext/v3.0/models/YourModelId ```
You should receive a response body in the following format:
For Speech CLI help with models, run the following command:
-```azurecli
+```azurecli-interactive
spx help csr model ```
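The `spx csr model status` responses above include training and transcription expiration dates. As a hedged illustration (the ISO date format and the check itself are assumptions for this sketch, not a verified response schema), a model's date can be compared against today like this:

```python
from datetime import date

# Illustrative check against an expiration date such as those returned by
# `spx csr model status`. The ISO date format here is an assumption.
def is_expired(expiration_iso, today=None):
    today = today or date.today()
    return date.fromisoformat(expiration_iso) < today

print(is_expired("2023-04-29", today=date(2024, 1, 1)))  # True
```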
cognitive-services How To Custom Speech Train Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-custom-speech-train-model.md
After you've uploaded [training datasets](./how-to-custom-speech-test-and-train.
> [!IMPORTANT] > Take note of the **Expiration** date. This is the last date that you can use your custom model for speech recognition. For more information, see [Model and endpoint lifecycle](./how-to-custom-speech-model-and-endpoint-lifecycle.md).
+> [!div class="nextstepaction"]
+> <a href="https://microsoft.qualtrics.com/jfe/form/SV_0Cl5zkG3CnDjq6O?PLanguage=Speech-studio&Pillar=Speech&Product=Custom-speech&Page=Train-a-model&Section=Create-a-model" target="_target">I ran into an issue</a>
+ ::: zone-end ::: zone pivot="speech-cli"
To create a model with datasets for training, use the `spx csr model create` com
Here's an example Speech CLI command that creates a model with datasets for training:
-```azurecli
+```azurecli-interactive
spx csr model create --project YourProjectId --name "My Model" --description "My Model Description" --dataset YourDatasetId --language "en-US" ``` > [!NOTE] > In this example, the `baseModel` isn't set, so the default base model for the locale is used. The base model URI is returned in the response.
+> [!div class="nextstepaction"]
+> <a href="https://microsoft.qualtrics.com/jfe/form/SV_0Cl5zkG3CnDjq6O?PLanguage=CLI&Pillar=Speech&Product=Custom-speech&Page=Train-a-model&Section=Create-a-model" target="_target">I ran into an issue</a>
+ You should receive a response body in the following format: ```json
The top-level `self` property in the response body is the model's URI. Use this
For Speech CLI help with models, run the following command:
-```azurecli
+```azurecli-interactive
spx help csr model ```
curl -v -X POST -H "Ocp-Apim-Subscription-Key: YourSubscriptionKey" -H "Content-
> [!NOTE] > In this example, the `baseModel` isn't set, so the default base model for the locale is used. The base model URI is returned in the response.
+> [!div class="nextstepaction"]
+> <a href="https://microsoft.qualtrics.com/jfe/form/SV_0Cl5zkG3CnDjq6O?PLanguage=REST&Pillar=Speech&Product=Custom-speech&Page=Train-a-model&Section=Create-a-model" target="_target">I ran into an issue</a>
+ You should receive a response body in the following format: ```json
Follow these instructions to copy a model to a project in another region:
After the model is successfully copied, you'll be notified and can view it in the target project.
+> [!div class="nextstepaction"]
+> <a href="https://microsoft.qualtrics.com/jfe/form/SV_0Cl5zkG3CnDjq6O?PLanguage=Speech-studio&Pillar=Speech&Product=Custom-speech&Page=Train-a-model&Section=Copy-a-model" target="_target">I ran into an issue</a>
+ ::: zone-end ::: zone pivot="speech-cli"
curl -v -X POST -H "Ocp-Apim-Subscription-Key: YourSubscriptionKey" -H "Content-
> [!NOTE] > Only the `targetSubscriptionKey` property in the request body has information about the destination Speech resource.
+> [!div class="nextstepaction"]
+> <a href="https://microsoft.qualtrics.com/jfe/form/SV_0Cl5zkG3CnDjq6O?PLanguage=REST&Pillar=Speech&Product=Custom-speech&Page=Train-a-model&Section=Copy-a-model" target="_target">I ran into an issue</a>
+ You should receive a response body in the following format: ```json
To connect a model to a project, use the `spx csr model update` command. Constru
Here's an example Speech CLI command that connects a model to a project:
-```azurecli
+```azurecli-interactive
spx csr model update --model YourModelId --project YourProjectId ```
You should receive a response body in the following format:
For Speech CLI help with models, run the following command:
-```azurecli
+```azurecli-interactive
spx help csr model ```
cognitive-services How To Custom Speech Upload Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-custom-speech-upload-data.md
To upload your own datasets in Speech Studio, follow these steps:
After your dataset is uploaded, go to the **Train custom models** page to [train a custom model](how-to-custom-speech-train-model.md)
+> [!div class="nextstepaction"]
+> <a href="https://microsoft.qualtrics.com/jfe/form/SV_0Cl5zkG3CnDjq6O?PLanguage=Speech-studio&Pillar=Speech&Product=Custom-speech&Page=Upload-training-and-testing-datasets&Section=Upload-datasets" target="_target">I ran into an issue</a>
+ ::: zone-end ::: zone pivot="speech-cli"
To create a dataset and connect it to an existing project, use the `spx csr data
Here's an example Speech CLI command that creates a dataset and connects it to an existing project:
-```azurecli
+```azurecli-interactive
spx csr dataset create --kind "Acoustic" --name "My Acoustic Dataset" --description "My Acoustic Dataset Description" --project YourProjectId --content YourContentUrl --language "en-US" ```
+> [!div class="nextstepaction"]
+> <a href="https://microsoft.qualtrics.com/jfe/form/SV_0Cl5zkG3CnDjq6O?PLanguage=CLI&Pillar=Speech&Product=Custom-speech&Page=Upload-training-and-testing-datasets&Section=Upload-datasets" target="_target">I ran into an issue</a>
+ You should receive a response body in the following format: ```json
The top-level `self` property in the response body is the dataset's URI. Use thi
For Speech CLI help with datasets, run the following command:
-```azurecli
+```azurecli-interactive
spx help csr dataset ```
curl -v -X POST -H "Ocp-Apim-Subscription-Key: YourSubscriptionKey" -H "Content-
}' "https://YourServiceRegion.api.cognitive.microsoft.com/speechtotext/v3.0/datasets" ```
+> [!div class="nextstepaction"]
+> <a href="https://microsoft.qualtrics.com/jfe/form/SV_0Cl5zkG3CnDjq6O?PLanguage=REST&Pillar=Speech&Product=Custom-speech&Page=Upload-training-and-testing-datasets&Section=Upload-datasets" target="_target">I ran into an issue</a>
+ You should receive a response body in the following format: ```json
cognitive-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/language-support.md
The following neural voices are in public preview.
| Language | Locale | Gender | Voice name | Style support | |-||--|-||
+| Chinese (Mandarin, Simplified) | `zh-CN` | Female | `zh-CN-XiaomengNeural` <sup>New</sup> | General, multiple styles available [using SSML](speech-synthesis-markup.md#adjust-speaking-styles) |
| Chinese (Mandarin, Simplified) | `zh-CN` | Male | `zh-CN-YunfengNeural` <sup>New</sup> | General, multiple styles available [using SSML](speech-synthesis-markup.md#adjust-speaking-styles) |
-| Chinese (Mandarin, Simplified) | `zh-CN` | Male | `zh-CN-YunhaoNeural` <sup>New</sup> | Optimized for promoting a product or service, 1 new multiple style available [using SSML](speech-synthesis-markup.md#adjust-speaking-styles) |
-| Chinese (Mandarin, Simplified) | `zh-CN` | Male | `zh-CN-YunjianNeural` <sup>New</sup> | Optimized for broadcasting sports event, 2 new multiple styles available [using SSML](speech-synthesis-markup.md#adjust-speaking-styles) |
+| Chinese (Mandarin, Simplified) | `zh-CN` | Male | `zh-CN-YunhaoNeural` <sup>New</sup> | Optimized for promoting a product or service, multiple styles available [using SSML](speech-synthesis-markup.md#adjust-speaking-styles) |
+| Chinese (Mandarin, Simplified) | `zh-CN` | Male | `zh-CN-YunjianNeural` <sup>New</sup> | Optimized for broadcasting sports event, multiple styles available [using SSML](speech-synthesis-markup.md#adjust-speaking-styles) |
+| Chinese (Mandarin, Simplified) | `zh-CN` | Male | `zh-CN-YunxiaNeural` <sup>New</sup> | General, multiple styles available [using SSML](speech-synthesis-markup.md#adjust-speaking-styles) |
+| Chinese (Mandarin, Simplified) | `zh-CN` | Male | `zh-CN-YunzeNeural` <sup>New</sup> | General, multiple styles available [using SSML](speech-synthesis-markup.md#adjust-speaking-styles) |
| Chinese (Mandarin, Simplified) | `zh-CN-liaoning` | Female | `zh-CN-liaoning-XiaobeiNeural` <sup>New</sup> | General, Liaoning accent | | Chinese (Mandarin, Simplified) | `zh-CN-sichuan` | Male | `zh-CN-sichuan-YunxiSichuanNeural` <sup>New</sup> | General, Sichuan accent | | English (United States) | `en-US` | Female | `en-US-JaneNeural` <sup>New</sup> | General, multiple voice styles available [using SSML](speech-synthesis-markup.md#adjust-speaking-styles) | | English (United States) | `en-US` | Female | `en-US-NancyNeural` <sup>New</sup> | General, multiple voice styles available [using SSML](speech-synthesis-markup.md#adjust-speaking-styles) | | English (United States) | `en-US` | Male | `en-US-DavisNeural` <sup>New</sup> | General, multiple voice styles available [using SSML](speech-synthesis-markup.md#adjust-speaking-styles) | | English (United States) | `en-US` | Male | `en-US-JasonNeural` <sup>New</sup> | General, multiple voice styles available [using SSML](speech-synthesis-markup.md#adjust-speaking-styles) |
+| English (United States) | `en-US` | Male | `en-US-RogerNeural` <sup>New</sup> | General|
| English (United States) | `en-US` | Male | `en-US-TonyNeural` <sup>New</sup> | General, multiple voice styles available [using SSML](speech-synthesis-markup.md#adjust-speaking-styles) | | Italian (Italy) | `it-IT` | Female | `it-IT-FabiolaNeural` <sup>New</sup> | General | | Italian (Italy) | `it-IT` | Female | `it-IT-FiammaNeural` <sup>New</sup> | General |
Use the following table to determine supported styles and roles for each neural
|ja-JP-NanamiNeural|`chat`, `cheerful`, `customerservice`||| |pt-BR-FranciscaNeural|`calm`||| |zh-CN-XiaohanNeural|`affectionate`, `angry`, `calm`, `cheerful`, `disgruntled`, `embarrassed`, `fearful`, `gentle`, `sad`, `serious`|Supported||
+|zh-CN-XiaomengNeural <sup>Public preview</sup>|`chat`|Supported||
|zh-CN-XiaomoNeural|`affectionate`, `angry`, `calm`, `cheerful`, `depressed`, `disgruntled`, `embarrassed`, `envious`, `fearful`, `gentle`, `sad`, `serious`|Supported|Supported| |zh-CN-XiaoruiNeural|`angry`, `calm`, `fearful`, `sad`|Supported|| |zh-CN-XiaoshuangNeural|`chat`|Supported|| |zh-CN-XiaoxiaoNeural|`affectionate`, `angry`, `assistant`, `calm`, `chat`, `cheerful`, `customerservice`, `disgruntled`, `fearful`, `gentle`, `lyrical`, `newscast`, `poetry-reading`, `sad`, `serious`|Supported|| |zh-CN-XiaoxuanNeural|`angry`, `calm`, `cheerful`, `depressed`, `disgruntled`, `fearful`, `gentle`, `serious`|Supported|Supported|
+|zh-CN-YunfengNeural <sup>Public preview</sup>|`calm`, `angry`, `disgruntled`, `cheerful`, `fearful`, `sad`, `serious`, `depressed`|Supported||
+|zh-CN-YunhaoNeural <sup>Public preview</sup>|`general`, `advertisement-upbeat` <sup>Public preview</sup>|Supported||
+|zh-CN-YunjianNeural <sup>Public preview</sup>|`narration-relaxed`, `sports-commentary` <sup>Public preview</sup>, `sports-commentary-excited` <sup>Public preview</sup>|Supported||
|zh-CN-YunxiNeural|`angry`, `assistant`, `cheerful`, `depressed`, `disgruntled`, `embarrassed`, `fearful`, `narration-relaxed`, `sad`, `serious`|Supported|Supported|
+|zh-CN-YunxiaNeural <sup>Public preview</sup>|`angry`, `calm`, `cheerful`, `fearful`, `narration-relaxed`, `sad`|Supported||
|zh-CN-YunyangNeural|`customerservice`, `narration-professional`, `newscast-casual`|Supported||
|zh-CN-YunyeNeural|`angry`, `calm`, `cheerful`, `disgruntled`, `embarrassed`, `fearful`, `sad`, `serious`|Supported|Supported|
-|zh-CN-YunjianNeural <sup>Public preview</sup>|`narration-relaxed`, `sports-commentary` <sup>Public preview</sup>, `sports-commentary-excited` <sup>Public preview</sup>|Supported||
-|zh-CN-YunhaoNeural <sup>Public preview</sup>|`general`, `advertisement-upbeat` <sup>Public preview</sup>|Supported||
-|zh-CN-YunfengNeural <sup>Public preview</sup>|`calm`, `angry`, ` disgruntled`, `cheerful`, `fearful`, `sad`, `serious`, `depressed`|Supported||
+|zh-CN-YunzeNeural <sup>Public preview</sup>|`angry`, `calm`, `cheerful`, `depressed`, `disgruntled`, `documentary-narration`, `fearful`, `sad`, `serious`|Supported|Supported|
+ ### Custom Neural Voice
cognitive-services Releasenotes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/releasenotes.md
See below for information about changes to Speech services and resources.
## What's new?
-* Speech SDK 1.22.0 and Speech CLI 1.22.0 were released in June 2022. See details below.
+* Speech SDK 1.23.0 and Speech CLI 1.23.0 were released in July 2022. See details below.
* Custom speech-to-text container v3.1.0 released in March 2022, with support to get display models.
* TTS Service March 2022, public preview of Cheerful and Sad styles with fr-FR-DeniseNeural.
cognitive-services Create Sas Tokens https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/document-translation/create-sas-tokens.md
To get started, you'll need the following resources:
* An active [Azure account](https://azure.microsoft.com/free/cognitive-services/). If you don't have one, you can [create a free account](https://azure.microsoft.com/free/).
-* A [Translator](https://ms.portal.azure.com/#create/Microsoft.CognitiveServicesTextTranslation) resource.
+* A [Translator](https://portal.azure.com/#create/Microsoft.CognitiveServicesTextTranslation) resource.
* A **standard performance** [Azure Blob Storage account](https://portal.azure.com/#create/Microsoft.StorageAccount-ARM). You'll create containers to store and organize your files within your storage account. If you don't know how to create an Azure storage account with a storage container, follow these quickstarts:
cognitive-services Cognitive Services Apis Create Account https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/cognitive-services-apis-create-account.md
If you need to recover a deleted resource, see [Recover deleted Cognitive Servic
* See **[Authenticate requests to Azure Cognitive Services](authentication.md)** on how to securely work with Cognitive Services.
* See **[What are Azure Cognitive Services?](./what-are-cognitive-services.md)** to get a list of different categories within Cognitive Services.
* See **[Natural language support](language-support.md)** to see the list of natural languages that Cognitive Services supports.
-* See **[Use Cognitive Services as containers](cognitive-services-container-support.md)** to understand how to use Cognitive Services on-prem.
+* See **[Use Cognitive Services as containers](cognitive-services-container-support.md)** to understand how to use Cognitive Services on-premises.
* See **[Plan and manage costs for Cognitive Services](plan-manage-costs.md)** to estimate cost of using Cognitive Services.
cognitive-services Concept Active Learning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/personalizer/concept-active-learning.md
Learn [how to](how-to-manage-model.md#import-a-new-learning-policy) import and e
The settings in the learning policy aren't intended to be changed. Change settings only if you understand how they affect Personalizer. Without this knowledge, you could cause problems, including invalidating Personalizer models.
-Personalizer uses [vowpalwabbit](https://github.com/VowpalWabbit) to train and score the events. Refer to the [vowpalwabbit documentation](https://github.com/VowpalWabbit/vowpal_wabbit/wiki/Command-line-arguments) on how to edit the learning settings using vowpalwabbit. Once you have the correct command line arguments, save the command to a file with the following format (replace the arguments property value with the desired command) and upload the file to import learning settings in the **Model and Learning Settings** pane in the Azure portal for your Personalizer resource.
+Personalizer uses [vowpalwabbit](https://github.com/VowpalWabbit) to train and score the events. Refer to the [vowpalwabbit documentation](https://vowpalwabbit.org/docs/vowpal_wabbit/python/latest/command_line_args.html) on how to edit the learning settings using vowpalwabbit. Once you have the correct command line arguments, save the command to a file with the following format (replace the arguments property value with the desired command) and upload the file to import learning settings in the **Model and Learning Settings** pane in the Azure portal for your Personalizer resource.
The following `.json` is an example of a learning policy.
cognitive-services Concept Apprentice Mode https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/personalizer/concept-apprentice-mode.md
ms.
Previously updated : 05/01/2020 Last updated : 07/26/2022 # Use Apprentice mode to train Personalizer without affecting your existing application
-Due to the nature of **real-world** Reinforcement Learning, a Personalizer model can only be trained in a production environment. When deploying a new use case, the Personalizer model isn't performing efficiently because it takes time for the model to be sufficiently trained. **Apprentice mode** is a learning behavior that eases this situation and allows you to gain confidence in the model – without the developer changing any code.
+When deploying a new Personalizer resource, it is initialized with an untrained Reinforcement Learning (RL) model. That is, it has not yet learned from any data and therefore will not perform well in practice. This is known as the "cold start" problem and is resolved over time by training the model with real data from your production environment. **Apprentice mode** is a learning behavior that helps mitigate the "cold start" problem, and allows you to gain confidence in the model _before_ it makes decisions in production, all without requiring any code change.
+ [!INCLUDE [Important Blue Box - Apprentice mode pricing tier](./includes/important-apprentice-mode.md)]

## What is Apprentice mode?
-Similar to how an apprentice learns a craft from an expert, and with experience can get better; Apprentice mode is a _behavior_ that lets Personalizer learn by observing the results obtained from existing application logic.
-
-Personalizer trains by mimicking the same output as the application. As more events flow, Personalizer can _catch up_ to the existing application without impacting the existing logic and outcomes. Metrics, available from the Azure portal and the API, help you understand the performance as the model learns.
+Similar to how an apprentice can learn a craft by observing an expert, Apprentice mode enables Personalizer to learn by observing the decisions made by your application's current logic. The Personalizer model trains by mimicking the same decision output as the application. With each Rank API call, Personalizer can learn without impacting the existing logic and outcomes. Metrics, available from the Azure portal and the API, help you understand how well Personalizer is matching your existing logic as the model learns.
-Once Personalizer has learned and attained a certain level of understanding, the developer can change the behavior from Apprentice mode to Online mode. At that time, Personalizer starts influencing the actions in the Rank API.
+Once Personalizer has learned and attained a certain level of understanding, the developer can change the behavior from Apprentice mode to Online mode. At that time, Personalizer starts influencing the actions in the Rank API to learn how to make even smarter decisions than your current logic.
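As a concrete illustration of the mimicry described above, here is a minimal sketch in plain Python (not the official SDK; the helper name `build_rank_request` is invented, though the field names follow the Rank API) of a Rank request body in which the first action is the application's default:

```python
import uuid

def build_rank_request(context_features, actions, event_id=None):
    """Assemble a JSON-ready body for a Personalizer Rank call.

    `actions` is an ordered list of dicts with "id" and "features";
    in Apprentice mode the first entry is your application's default action.
    """
    return {
        "eventId": event_id or str(uuid.uuid4()),
        "contextFeatures": context_features,
        "actions": actions,
    }

request = build_rank_request(
    context_features=[{"timeOfDay": "morning"}, {"device": "mobile"}],
    actions=[
        {"id": "article-a", "features": [{"topic": "sports"}]},  # default action
        {"id": "article-b", "features": [{"topic": "politics"}]},
    ],
)

# In Apprentice mode, the rewardActionId returned by the service is always
# the id of this first (default) action:
default_action_id = request["actions"][0]["id"]
```

Because the default action is always the one returned, your application behaves exactly as before while Personalizer observes and learns.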
## Purpose of Apprentice Mode
-Apprentice mode gives you trust in the Personalizer service and its machine learning capabilities, and provides reassurance that the service is sent information that can be learned from – without risking online traffic.
+Apprentice mode provides additional trust in the Personalizer service and reassurance that the data sent to Personalizer is valuable for training the model – without risking or affecting your online traffic and customer experiences.
The two main reasons to use Apprentice mode are:
-* Mitigating **Cold Starts**: Apprentice mode helps manage and assess the cost of a "new" model's learning time - when it isn't returning the best action and not achieved a satisfactory level of effectiveness of around 60-80%.
-* **Validating Action and Context Features**: Features sent in actions and context may be inadequate or inaccurate - too little, too much, incorrect, or too specific to train Personalizer to attain the ideal effectiveness rate. Use [feature evaluations](concept-feature-evaluation.md) to find and fix issues with features.
+* Mitigating **Cold Starts**: Apprentice mode helps mitigate the cost of training a "new" model in production by learning without the need to make uninformed decisions. The model is informed by your existing application logic.
+* **Validating Action and Context Features**: Context and Action features may be inadequate, inaccurate, or sub-optimally engineered. If there are too few, too many, incorrect, noisy, or malformed features, Personalizer will have difficulty training a well-performing model. Performing [feature evaluations](concept-feature-evaluation.md) while in Apprentice mode enables you to discover how effective the features are at training Personalizer and can identify areas for improving feature quality.
## When should you use Apprentice mode?

Use Apprentice mode to train Personalizer to improve its effectiveness through the following scenarios while leaving the experience of your users unaffected by Personalizer:
-* You're implementing Personalizer in a new use case.
-* You've significantly changed the features you send in Context or Actions.
-* You've significantly changed when and how you calculate rewards.
+* You are implementing Personalizer in a new use case.
+* You have significantly changed the Context or Action features.
-Apprentice mode isn't an effective way of measuring the impact Personalizer is having on reward scores. To measure how effective Personalizer is at choosing the best possible action for each Rank call, use [Offline evaluations](concepts-offline-evaluation.md).
+However, Apprentice mode is not an effective way of measuring the impact Personalizer is having on improving your average reward or business metrics. It can only evaluate how well the service is learning your existing logic given the current data you are providing. To measure how effective Personalizer is at choosing the best possible action for each Rank call, use [Offline evaluations](concepts-offline-evaluation.md).
## Who should use Apprentice mode?
-Apprentice mode is useful for developers, data scientists and business decision makers:
+Apprentice mode is useful for developers, data scientists, and business decision makers:
-* **Developers** can use Apprentice mode to make sure the Rank and Reward APIs are being used correctly in the application, and that features being sent to Personalizer from the application contains no bugs, or non-relevant features such as a timestamp or UserID element.
+* **Developers** can use Apprentice mode to ensure the Rank and Reward APIs are implemented correctly in the application, and that features being sent to Personalizer are free from errors and common mistakes (such as including timestamps or unique user identifiers).
-* **Data scientists** can use Apprentice mode to validate that the features are effective to train the Personalizer models, that the reward wait times aren't too long or short.
+* **Data scientists** can use Apprentice mode to validate that the features are effective at training the Personalizer models.
-* **Business Decision Makers** can use Apprentice mode to assess the potential of Personalizer to improve results (i.e. rewards) compared to existing business logic. This allows them to make an informed decision impacting user experience, where real revenue and user satisfaction are at stake.
+* **Business Decision Makers** can use Apprentice mode to assess the potential of Personalizer to improve results (i.e. rewards) compared to existing business logic. Specifically, whether or not Personalizer can learn from the provided data before going into Online mode. This allows them to make informed decisions about impacting the user experience, where real revenue and user satisfaction are at stake.
## Comparing Behaviors - Apprentice mode and Online mode
Learning when in Apprentice mode differs from Online mode in the following ways.
|Area|Apprentice mode|Online mode|
|--|--|--|
-|Impact on User Experience|You can use existing user behavior to train Personalizer by letting it observe (not affect) what your **default action** would have been and the reward it obtained. This means your users' experience and the business results from them won't be impacted.|Display top action returned from Rank call to affect user behavior.|
-|Learning speed|Personalizer will learn more slowly when in Apprentice mode than when learning in Online mode. Apprentice mode can only learn by observing the rewards obtained by your **default action**, which limits the speed of learning, as no exploration can be performed.|Learns faster because it can both exploit the current model and explore for new trends.|
-|Learning effectiveness "Ceiling"|Personalizer can approximate, very rarely match, and never exceed the performance of your base business logic (the reward total achieved by the **default action** of each Rank call). This approximation ceiling is reduced by exploration. For example, with exploration at 20% it's very unlikely apprentice mode performance will exceed 80%, and 60% is a reasonable target at which to graduate to online mode.|Personalizer should exceed applications baseline, and over time where it stalls you should conduct on offline evaluation and feature evaluation to continue to get improvements to the model. |
-|Rank API value for rewardActionId|The users' experience doesn't get impacted, as _rewardActionId_ is always the first action you send in the Rank request. In other words, the Rank API does nothing visible for your application during Apprentice mode. Reward APIs in your application shouldn't change how it uses the Reward API between one mode and another.|Users' experience will be changed by the _rewardActionId_ that Personalizer chooses for your application. |
-|Evaluations|Personalizer keeps a comparison of the reward totals that your default business logic is getting, and the reward totals Personalizer would be getting if in Online mode at that point. A comparison is available in the Azure portal for that resource|Evaluate Personalizer's effectiveness by running [Offline evaluations](concepts-offline-evaluation.md), which let you compare the total rewards Personalizer has achieved against the potential rewards of the application's baseline.|
+|Impact on User Experience| The users' experience and business metrics will not change. Personalizer is trained by observing the **default actions**, or current logic, of your application without affecting them. | Your users' experience may change as the decision is made by Personalizer and not your default action.|
+|Learning speed|Personalizer will learn more slowly when in Apprentice mode compared to learning in Online mode. Apprentice mode can only learn by observing the rewards obtained by your default action without [exploration](concepts-exploration.md), which limits how much Personalizer can learn.|Learns faster because it can both _exploit_ the best action from the current model and _explore_ other actions for potentially better results.|
+|Learning effectiveness "Ceiling"|Personalizer can only approximate, and never exceed, the performance of your application's current logic (the total average reward achieved by the default action). However, this approximation ceiling is reduced by exploration. For example, it is unlikely that Personalizer will achieve a 100% match with your current application's logic, and it is recommended that you switch Personalizer to Online mode once 60%-80% matching is achieved.|Personalizer should exceed the performance of your current application logic. If Personalizer's performance stalls over time, you can conduct an [offline evaluation](concepts-offline-evaluation.md) and [feature evaluation](concept-feature-evaluation.md) to pursue additional improvement. |
+|Rank API return value for rewardActionId| The _rewardActionId_ will always be the Id of the default action. That is, the action you send as the first action in the Rank API request JSON. In other words, the Rank API does nothing visible for your application during Apprentice mode. |The _rewardActionId_ will be one of the Ids provided in the Rank API call, as determined by the Personalizer model.|
+|Evaluations|Personalizer keeps a comparison of the reward totals received by your current application logic, and the reward totals Personalizer would be getting if it was in Online mode at that point. This comparison is available to view in the Azure portal.|Evaluate Personalizer's effectiveness by running [Offline evaluations](concepts-offline-evaluation.md), which let you compare the total rewards Personalizer has achieved against the potential rewards of the application's baseline.|
-A note about apprentice mode's effectiveness:
-
-* Personalizer's effectiveness in Apprentice mode will rarely achieve near 100% of the application's baseline; and never exceed it.
-* Best practices would be not to try to get to 100% attainment; but a range of 60% ΓÇô 80% should be targeted depending on the use case.
+Note: It is unlikely that Personalizer will achieve a 100% performance match with the application's existing logic, and it will never exceed it. Performance matching of 60%-80% should be sufficient to switch Personalizer to Online mode.
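The matching check described above can be sketched as a hypothetical helper (the function names and the 60% default threshold reflect the guidance in this section, not an official API):

```python
def matching_performance(model_choices, default_actions):
    """Fraction of Rank calls where the model's choice matched the default action."""
    matches = sum(m == d for m, d in zip(model_choices, default_actions))
    return matches / len(default_actions)

def ready_for_online_mode(match_rate, threshold=0.60):
    # 60% is the lower bound of the 60%-80% guidance; tune per use case.
    return match_rate >= threshold
```

For example, a model that matched the default action on 3 of 4 events has a match rate of 0.75, which clears the 60% bar.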
## Limitations of Apprentice Mode
-Apprentice Mode attempts to train the Personalizer model by attempting to imitate your existing algorithm that chooses baseline items, using the features present in your context and actions used in Rank calls and the feedback from Reward calls. The following factors will affect if, or when, Personalizer Apprentice learns enough matched rewards.
+Apprentice Mode trains the Personalizer model by attempting to imitate your existing application's logic, using the Context and Action features present in the Rank calls. The following factors will affect Apprentice mode's ability to learn.
### Scenarios where Apprentice Mode May Not be Appropriate:

#### Editorially chosen Content:
-In some scenarios such as news or entertainment, the baseline item could be manually assigned by an editorial team. This means humans are using their knowledge about the broader world, and understanding of what may be appealing content, to choose specific articles or media out of a pool, and flagging them as "preferred" or "hero" articles. Because these editors aren't an algorithm, and the factors considered by editors can be nuanced and not included as features of the context and actions, Apprentice mode is unlikely to be able to predict the next baseline action. In these situations you can:
+In some scenarios such as news or entertainment, the baseline item could be manually assigned by an editorial team. This means humans are using their knowledge about the broader world, and understanding of what may be appealing content, to choose specific articles or media out of a pool, and flagging them as "preferred" or "hero" articles. Because these editors are not an algorithm, and the factors they consider can be nuanced and may not be included as Context or Action features, Apprentice mode is unlikely to be able to predict the next baseline action. In these situations you can:
-* Test Personalizer in Online Mode: Apprentice mode not predicting baselines doesn't imply Personalizer can't achieve as-good or even better results. Consider putting Personalizer in Online Mode for a period of time or in an A/B test if you have the infrastructure, and then run an Offline Evaluation to assess the difference.
+* Test Personalizer in Online Mode: Apprentice mode not predicting baselines does not imply Personalizer cannot achieve as-good or even better results. Consider putting Personalizer in Online Mode for a period of time or in an A/B test if you have the infrastructure, and then run an Offline Evaluation to assess the difference.
* Add editorial considerations and recommendations as features: Ask your editors what factors influence their choices, and see if you can add those as features in your context and action. For example, editors in a media company may highlight content while a certain celebrity is in the news: This knowledge could be added as a Context feature.

### Factors that will improve and accelerate Apprentice Mode
-If apprentice mode is learning and attaining Matched rewards above zero but seems to be growing slowly (not getting to 60% to 80% matched rewards within two weeks), it's possible that the challenge is having too little data. Taking the following steps could accelerate the learning.
+If Apprentice mode is learning and attaining a matching performance above zero, but performance is improving slowly (not reaching 60% to 80% matched rewards within two weeks), it is possible that too little data is being sent to Personalizer. The following steps may help facilitate faster learning:
-1. Adding more events with positive rewards over time: Apprentice mode will perform better in use cases where your application gets more than 100 positive rewards per day. For example, if a website rewarding a click has 2% clickthrough, it should be having at least 5,000 visits per day to have noticeable learning.
-2. Try a reward score that is simpler and happens more frequently. For example going from "Did users finish reading the article" to "Did users start reading the article".
-3. Adding differentiating features: You can do a visual inspection of the actions in a Rank call and their features. Does the baseline action have features that are differentiated from other actions? If they look mostly the same, add more features that will make them less similar.
-4. Reducing Actions per Event: Personalizer will use the Explore % setting to discover preferences and trends. When a Rank call has more actions, the chance of an Action being chosen for exploration becomes lower. Reduce the number of actions sent in each Rank call to a smaller number, to less than 10. This can be a temporary adjustment to show that Apprentice Mode has the right data to match rewards.
+1. Adding differentiating features: You can do a visual inspection of the actions in a Rank call and their features. Does the baseline action have features that are differentiated from other actions? If they look mostly the same, add more features that will make them less similar.
+2. Reducing Actions per Event: Personalizer will use the "% of Rank calls to use for exploration" setting to discover preferences and trends. When a Rank call has more actions, the chance of any particular Action being chosen for exploration becomes lower. Reducing the number of actions sent in each Rank call to a smaller number (under 10) can be a temporary adjustment that may indicate whether or not Apprentice Mode has sufficient data to learn.
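A back-of-the-envelope sketch of why more actions dilute exploration, assuming the exploration budget is spread uniformly across actions (an epsilon-greedy style approximation for illustration, not Personalizer's actual algorithm):

```python
def exploration_chance(explore_fraction, n_actions):
    """Approximate chance a specific action is tried via exploration
    on a given Rank call, if exploration is spread uniformly."""
    return explore_fraction / n_actions

# With 20% exploration, each of 50 actions is explored on roughly
# 0.4% of Rank calls, versus 2% of calls with only 10 actions.
few = exploration_chance(0.20, 10)
many = exploration_chance(0.20, 50)
```

This is why temporarily sending fewer actions per Rank call can reveal whether Apprentice mode has enough data to learn.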
## Using Apprentice mode to train with historical data
-If you have a significant amount of historical data, you'd like to use to train Personalizer, you can use Apprentice mode to replay the data through Personalizer.
+If you have a significant amount of historical data that you would like to use to train Personalizer, you can use Apprentice mode to replay the data through Personalizer.
Set up the Personalizer in Apprentice Mode and create a script that calls Rank with the actions and context features from the historical data. Call the Reward API based on your calculations of the records in this data. You'll need approximately 50,000 historical events to see some results but 500,000 is recommended for higher confidence in the results.
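A sketch of the replay script described above. The REST paths follow the Personalizer v1.0 API, but the record layout and helper name are assumptions; a pure function builds the call pairs so the replay logic can be tested without a live resource:

```python
def replay_calls(record):
    """Build the (path, body) pairs to send for one historical record.

    `record` is assumed to look like:
    {"eventId": str, "context": list, "actions": list, "reward": float}
    """
    rank_call = ("/personalizer/v1.0/rank", {
        "eventId": record["eventId"],
        "contextFeatures": record["context"],
        "actions": record["actions"],
    })
    reward_call = ("/personalizer/v1.0/events/{}/reward".format(record["eventId"]),
                   {"value": record["reward"]})
    return [rank_call, reward_call]
```

A driver script would then POST each pair to your resource endpoint (with the `Ocp-Apim-Subscription-Key` header) in order, sending each Reward after its Rank call.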
-When training from historical data, it's recommended that the data sent in (features for context and actions, their layout in the JSON used for Rank requests, and the calculation of reward in this training data set), matches the data (features and calculation of reward) available from the existing application.
+When training from historical data, it is recommended that the data sent in (features for context and actions, their layout in the JSON used for Rank requests, and the calculation of reward in this training data set), matches the data (features and calculation of reward) available from the existing application.
Offline and post-facto data tends to be more incomplete and noisier and differs in format. While training from historical data is possible, the results from doing so may be inconclusive and not a good predictor of how well Personalizer will learn, especially if the features vary between past data and the existing application.
Typically for Personalizer, when compared to training with historical data, chan
## Using Apprentice Mode versus A/B Tests
-It's only useful to do A/B tests of Personalizer treatments once it has been validated and is learning in Online mode. In Apprentice mode, only the **default action** is used, which means all users would effectively see the control experience.
+It is only useful to do A/B tests of Personalizer treatments once it has been validated and is learning in Online mode. In Apprentice mode, only the default action is used, which means all users would effectively see the control experience.
Even if Personalizer is just the _treatment_, the same challenge is present when validating that the data is good for training Personalizer. Apprentice mode could be used instead, with 100% of traffic, and with all users getting the control (unaffected) experience.

Once you have a use case using Personalizer and learning online, A/B experiments allow you to do controlled cohorts and scientific comparison of results that may be more complex than the signals used for rewards. An example question an A/B test could answer is: `In a retail website, Personalizer optimizes a layout and gets more users to _check out_ earlier, but does this reduce total revenue per transaction?`
+ ## Next steps

* Learn about [active and inactive events](concept-active-inactive-events.md)
cognitive-services How To Learning Behavior https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/personalizer/how-to-learning-behavior.md
ms.
Previously updated : 05/01/2020 Last updated : 07/26/2022 # Configure the Personalizer learning behavior
In order to add Personalizer to your application, you need to call the Rank and
### Configure your application to call Reward API
+> [!NOTE]
+> Reward API calls do not affect training while in Apprentice mode. The service learns by matching your application's current logic, or default actions. However, implementing Reward calls at this stage helps ensure a smooth transition to Online mode later on with a simple switch in the Azure portal. Additionally, the rewards will be logged, enabling you to analyze how well the current logic is performing and how much reward is being received.
+ 1. Use your existing business logic to calculate the **reward** of the displayed action. The value needs to be in the range from 0 to 1. Send this reward to Personalizer using the [Reward API](https://westus2.dev.cognitive.microsoft.com/docs/services/personalizer-api/operations/Reward). The reward value is not expected immediately and can be delayed over a time period, depending on your business logic.
-1. If you don't return the reward within the configured **Reward wait time**, the default reward will be used instead.
+1. If you don't return the reward within the configured **Reward wait time**, the default reward will be logged instead.
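A hypothetical example of the reward calculation in step 1 (the function name and the click/dwell signals are invented for illustration): business signals are mapped onto the required 0-to-1 range before the Reward API is called.

```python
def reward_from_click(clicked, dwell_seconds, max_dwell=60):
    """Map a click plus dwell time onto a reward in [0, 1]."""
    if not clicked:
        return 0.0
    # A click is worth 0.5; normalized dwell time contributes the rest,
    # capped so the reward never exceeds 1.0.
    return min(1.0, 0.5 + 0.5 * min(dwell_seconds, max_dwell) / max_dwell)
```

For example, no click yields 0.0, a click with no dwell yields 0.5, and a click with a full minute of dwell yields 1.0.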
## Evaluate Apprentice mode
cognitive-services Quickstart Personalizer Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/personalizer/quickstart-personalizer-sdk.md
zone_pivot_groups: programming-languages-set-six
# Quickstart: Personalizer client library
-Display personalized content in this quickstart with the Personalizer service.
-Get started with the Personalizer client library. Follow these steps to install the package and try out the example code for basic tasks.
+In this quickstart, you will learn how to create, configure, and use the Personalizer service in a toy example to learn food preferences. You will also utilize the Personalizer client library to make calls to the [Rank and Reward APIs](what-is-personalizer.md#rank-and-reward-apis).
- * Rank API - Selects the best item, from actions, based on real-time information you provide about content and context.
- * Reward API - You determine the reward score based on your business needs, then send it to Personalizer with this API. That score can be a single value such as 1 for good, and 0 for bad, or an algorithm you create based on your business needs.
::: zone pivot="programming-language-csharp"
[!INCLUDE [Get intent with C# SDK](./includes/quickstart-sdk-csharp.md)]
Get started with the Personalizer client library. Follow these steps to install
## Clean up resources
-If you want to clean up and remove a Cognitive Services subscription, you can delete the resource or resource group. Deleting the resource group also deletes any other resources associated with it.
+To clean up your Cognitive Services subscription, you can delete the resource or the resource group, which also deletes any other associated resources.
* [Portal](../cognitive-services-apis-create-account.md#clean-up-resources)
* [Azure CLI](../cognitive-services-apis-create-account-cli.md#clean-up-resources)
cognitive-services What Is Personalizer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/personalizer/what-is-personalizer.md
ms.
Previously updated : 08/27/2020 Last updated : 07/06/2022 keywords: personalizer, Azure personalizer, machine learning # What is Personalizer?
-Azure Personalizer is a cloud-based service that helps your applications choose the best content item to show your users. You can use the Personalizer service to determine what product to suggest to shoppers or to figure out the optimal position for an advertisement. After the content is shown to the user, your application monitors the user's reaction and reports a reward score back to the Personalizer service. This ensures continuous improvement of the machine learning model, and Personalizer's ability to select the best content item based on the contextual information it receives.
+Azure Personalizer helps your applications make smarter decisions at scale using **reinforcement learning**. Personalizer can determine the best actions to take in a variety of scenarios:
+* E-commerce: What product should be shown to customers to maximize the likelihood of a purchase?
+* Content recommendation: What article should be shown to increase the click-through rate?
+* Content design: Where should an advertisement be placed to optimize user engagement on a website?
+* Communication: When and how should a notification be sent to maximize the chance of a response?
-> [!TIP]
-> Content is any unit of information, such as text, images, URL, emails, or anything else that you want to select from and show to your users.
+Personalizer processes information about the state of your application, scenario, and/or users (*contexts*), and a set of possible decisions and related attributes (*actions*) to determine the best decision to make. Feedback from your application (*rewards*) is sent to Personalizer to learn how to improve its decision-making ability in near-real time.
-This documentation contains the following article types:
+To get started with Personalizer, follow the [**quickstart guide**](quickstart-personalizer-sdk.md), or try Personalizer with this [interactive demo](https://personalizerdevdemo.azurewebsites.net/).
-* [**Quickstarts**](quickstart-personalizer-sdk.md) are getting-started instructions to guide you through making requests to the service.
-* [**How-to guides**](how-to-settings.md) contain instructions for using the service in more specific or customized ways.
-* [**Concepts**](how-personalizer-works.md) provide in-depth explanations of the service functionality and features.
-* [**Tutorials**](tutorial-use-personalizer-web-app.md) are longer guides that show you how to use the service as a component in broader business solutions.
-Before you get started, try out [Personalizer with this interactive demo](https://personalizerdevdemo.azurewebsites.net/).
+This documentation contains the following types of articles:
-## How does Personalizer select the best content item?
+* [**Quickstarts**](quickstart-personalizer-sdk.md) provide step-by-step instructions to guide you through setup and sample code to start making API requests to the service.
+* [**How-to guides**](how-to-settings.md) contain instructions for using Personalizer features and advanced capabilities.
+* [**Code samples**](https://github.com/Azure-Samples/cognitive-services-personalizer-samples) demonstrate how to use Personalizer and help you to easily interface your application with the service.
+* [**Tutorials**](tutorial-use-personalizer-web-app.md) are longer walk-throughs implementing Personalizer as a part of a broader business solution.
+* [**Concepts**](how-personalizer-works.md) provide further detail on Personalizer features, capabilities, and fundamentals.
-Personalizer uses **reinforcement learning** to select the best item (_action_) based on collective behavior and reward scores across all users. Actions are the content items, such as news articles, specific movies, or products.
-The **Rank** call takes the action item, along with features of the action, and context features to select the top action item:
+## How does Personalizer work?
-* **Actions with features** - content items with features specific to each item
-* **Context features** - features of your users, their context or their environment when using your app
+Personalizer uses reinforcement learning to select the best *action* for a given *context* across all users in order to maximize an average *reward*.
+* **Context**: Information that describes the state of your application, scenario, or user that may be relevant to making a decision.
+ * Example: The location, device type, age, and favorite topics of users visiting a web site.
+* **Actions**: A discrete set of items that can be chosen, along with attributes describing each item.
+ * Example: A set of news articles and the topics that are discussed in each article.
+* **Reward**: A numerical score between 0 and 1 that indicates whether the decision was *bad* (0) or *good* (1).
+ * Example: A "1" indicates that a user clicked on the suggested article, whereas a "0" indicates the user did not.
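As a rough sketch (the feature names and IDs below are invented for illustration; Personalizer accepts arbitrary JSON feature objects), the news-article example could be encoded like this:

```python
# Illustrative encoding of context, actions, and reward for the news example.
# All feature names and action IDs below are hypothetical.
context = {
    "country": "USA",
    "recentTopics": ["politics", "business"],
    "month": "October",
}

actions = [
    {"id": "article-president", "features": [{"scope": "national", "topic": "politics"}]},
    {"id": "article-premier-league", "features": [{"scope": "global", "topic": "sports"}]},
    {"id": "article-hurricane", "features": [{"scope": "regional", "topic": "weather"}]},
]

# Reward: 1 if the user clicked the article Personalizer chose, 0 otherwise.
reward = 1.0
```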
-The Rank call returns the ID of which content item, __action__, to show to the user, in the **Reward Action ID** field.
+### Rank and Reward APIs
-The __action__ shown to the user is chosen with machine learning models, that try to maximize the total amount of rewards over time.
+Personalizer gives you the power and flexibility of reinforcement learning using just two primary APIs.
-### Sample scenarios
+The **Rank** [API](https://westus2.dev.cognitive.microsoft.com/docs/services/personalizer-api/operations/Rank) is called by your application each time there is a decision to be made. The application sends a JSON payload containing a set of actions, features that describe each action, and features that describe the current context. Each Rank API call is known as an **event** and noted with a unique _event ID_. Personalizer then returns the ID of the best action that maximizes the total average reward as determined by the underlying model.
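A sketch of how an application might build that JSON payload (the top-level field names follow the public Rank API, but treat the details as illustrative rather than the exact schema):

```python
import json
import uuid

def build_rank_request(context_features, actions):
    """Assemble a Rank request body; the eventId ties the later
    Reward call back to this decision."""
    return {
        "eventId": str(uuid.uuid4()),
        "contextFeatures": [context_features],
        "actions": actions,
    }

request = build_rank_request(
    {"device": "smart TV", "screenSize": "large"},
    [{"id": "casablanca", "features": [{"year": 1942, "genres": ["romance", "drama"]}]}],
)
payload = json.dumps(request)  # POSTed to the Rank endpoint of your resource
```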
-Let's take a look at a few scenarios where Personalizer can be used to select the best content to render for a user.
+The **Reward** [API](https://westus2.dev.cognitive.microsoft.com/docs/services/personalizer-api/operations/Reward) is called by your application whenever there is feedback that can help Personalizer learn whether the action ID returned in the *Rank* call provided value: for example, whether a user clicked on the suggested news article or completed the purchase of a suggested product. A call to the Reward API can be made in real time (just after the Rank call) or delayed to better fit the needs of the scenario. The reward score is determined by your business metrics and objectives, and can be generated by an algorithm or rules in your application. The score is a real-valued number between 0 and 1.
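As an example, the business logic that produces a reward score might be as simple as a few rules (these particular rules are hypothetical):

```python
def compute_reward(clicked: bool, purchased: bool = False) -> float:
    """Map observed user behavior to a reward score in [0, 1].
    The rules below are illustrative business logic, not a fixed scheme."""
    if purchased:
        return 1.0   # strongest positive signal
    if clicked:
        return 0.5   # partial credit: engagement without conversion
    return 0.0       # no engagement

# The resulting score is sent to the Reward API along with the
# event ID returned by the corresponding Rank call.
```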
-|Content type|Actions (with features)|Context features|Returned Reward Action ID<br>(display this content)|
-|--|--|--|--|
-|News list|a. `The president...` (national, politics, [text])<br>b. `Premier League ...` (global, sports, [text, image, video])<br> c. `Hurricane in the ...` (regional, weather, [text,image]|Device news is read from<br>Month, or season<br>|a `The president...`|
-|Movies list|1. `Star Wars` (1977, [action, adventure, fantasy], George Lucas)<br>2. `Hoop Dreams` (1994, [documentary, sports], Steve James<br>3. `Casablanca` (1942, [romance, drama, war], Michael Curtiz)|Device movie is watched from<br>screen size<br>Type of user<br>|3. `Casablanca`|
-|Products list|i. `Product A` (3 kg, $$$$, deliver in 24 hours)<br>ii. `Product B` (20 kg, $$, 2 week shipping with customs)<br>iii. `Product C` (3 kg, $$$, delivery in 48 hours)|Device shopping is read from<br>Spending tier of user<br>Month, or season|ii. `Product B`|
+### Learning modes
+
+* **[Apprentice mode](concept-apprentice-mode.md)**: Similar to how an apprentice learns a craft from observing an expert, Apprentice mode enables Personalizer to learn by observing your application's current decision logic. This helps to mitigate the so-called "cold start" problem with a new untrained model, and allows you to validate the action and context features that are sent to Personalizer. In Apprentice mode, each call to the Rank API returns the _baseline action_ or _default action_, that is, the action that the application would've taken without using Personalizer. Your application sends the baseline action to Personalizer in the Rank API as the first item in the set of possible actions.
+
+* **Online mode**: Personalizer returns the best action for the given context, as determined by the underlying RL model, and explores other possible actions that may improve performance. Personalizer learns from feedback provided in calls to the Reward API.
-Personalizer used reinforcement learning to select the single best action, known as _reward action ID_. The machine learning model uses:
+Note that Personalizer uses collective information across all users to learn the best actions based on the current context. The service does not:
+* Persist and manage user profile information. Unique user IDs should not be sent to Personalizer.
+* Log individual users' preferences or historical data.
-* A trained model - information previously received from the personalize service used to improve the machine learning model
-* Current data - specific actions with features and context features
-## When to use Personalizer
+### Example scenarios
+
+Here are a few examples where Personalizer can be used to select the best content to render for a user.
+
+|Content type|Actions {features}|Context features|Returned Reward Action ID<br>(display this content)|
+|--|--|--|--|
+|News articles|a. `The president...`, {national, politics, [text]}<br>b. `Premier League ...` {global, sports, [text, image, video]}<br> c. `Hurricane in the ...` {regional, weather, [text,image]}|Country='USA',<br>Recent_Topics=('politics', 'business'),<br>Month='October'<br>|a `The president...`|
+|Movies|1. `Star Wars` {1977, [action, adventure, fantasy], George Lucas}<br>2. `Hoop Dreams` {1994, [documentary, sports], Steve James}<br>3. `Casablanca` {1942, [romance, drama, war], Michael Curtiz}|Device='smart TV',<br>Screen_Size='large',<br>Favorite_Genre='classics'<br>|3. `Casablanca`|
+|E-commerce Products|i. `Product A` {3 kg, $$$$, deliver in 1 day}<br>ii. `Product B` {20 kg, $$, deliver in 7 days}<br>iii. `Product C` {3 kg, $$$, deliver in 2 days}| Device='iPhone',<br>Spending_Tier='low',<br>Month='June'|ii. `Product B`|
-Personalizer's **Rank** [API](https://go.microsoft.com/fwlink/?linkid=2092082) is called each time your application presents content. This is known as an **event**, noted with an _event ID_.
-Personalizer's **Reward** [API](https://westus2.dev.cognitive.microsoft.com/docs/services/personalizer-api/operations/Reward) can be called in real-time or delayed to better fit your infrastructure. You determine the reward score based on your business needs. The reward score is between 0 and 1. That can be a single value such as 1 for good, and 0 for bad, or a number produced by an algorithm you create considering your business goals and metrics.
+## Scenario requirements
-## Content requirements
+Use Personalizer when your scenario has:
-Use Personalizer when your content:
+* A limited set of actions or items to select from in each personalization event. We recommend no more than ~50 actions in each Rank API call. If you have a larger set of possible actions, we suggest using a [recommendation engine](where-can-you-use-personalizer.md#how-to-use-personalizer-with-a-recommendation-solution) or another mechanism to reduce the list of actions prior to calling the Rank API.
+* Information describing the actions (_action features_).
+* Information describing the current context (_contextual features_).
+* Sufficient data volume to enable Personalizer to learn. In general, we recommend a minimum of ~1,000 events per day for Personalizer to learn effectively. If Personalizer doesn't receive sufficient data, the service takes longer to determine the best actions.
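If your catalog is much larger than ~50 items, a simple pre-filtering step (sketched below with a hypothetical pre-computed relevance score) can reduce the candidate set before each Rank call:

```python
MAX_ACTIONS = 50  # recommended ceiling for actions per Rank call

def select_candidates(catalog, max_actions=MAX_ACTIONS):
    """Keep only the top candidates by a pre-computed score, for example
    from a recommendation engine, before sending actions to Rank."""
    ranked = sorted(catalog, key=lambda item: item["score"], reverse=True)
    return ranked[:max_actions]

# Hypothetical 500-item catalog with a relevance score per item.
catalog = [{"id": f"product-{i}", "score": i % 97} for i in range(500)]
candidates = select_candidates(catalog)
```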
-* Has a limited set of actions or items (max of ~50) to select from in each personalization event. If you have a larger list, [use a recommendation engine](where-can-you-use-personalizer.md#how-to-use-personalizer-with-a-recommendation-solution) to reduce the list down to 50 items for each time you call Rank on the Personalizer service.
-* Has information describing the content you want ranked: _actions with features_ and _context features_.
-* Has a minimum of ~1k/day content-related events for Personalizer to be effective. If Personalizer doesn't receive the minimum traffic required, the service takes longer to determine the single best content item.
-Since Personalizer uses collective information in near real-time to return the single best content item, the service doesn't:
-* Persist and manage user profile information
-* Log individual users' preferences or history
-* Require cleaned and labeled content
-## How to design for and implement Personalizer
+## Integrating Personalizer in an application
-1. [Design](concepts-features.md) and plan for content, **_actions_**, and **_context_**. Determine the reward algorithm for the **_reward_** score.
-1. Each [Personalizer Resource](how-to-settings.md) you create is considered one Learning Loop. The loop will receive the both the Rank and Reward calls for that content or user experience.
+1. [Design](concepts-features.md) and plan the **_actions_** and **_context_**. Determine how to interpret feedback as a **_reward_** score.
+1. Each [Personalizer Resource](how-to-settings.md) you create is defined as one _Learning Loop_. The loop will receive both the Rank and Reward calls for that content or user experience and train an underlying RL model. There are three resource types:
 |Resource type| Purpose|
 |--|--|
- |[Apprentice mode](concept-apprentice-mode.md) `E0`|Train the Personalizer model without impacting your existing application, then deploy to Online learning behavior to a production environment|
- |Standard, `S0`|Online learning behavior in a production environment|
- |Free, `F0`| Try Online learning behavior in a non-production environment|
+ |[Apprentice mode](concept-apprentice-mode.md) - `E0`| Train Personalizer to mimic your current decision-making logic without impacting your existing application, before using _Online mode_ to learn better policies in a production environment.|
+ |_Online mode_ - Standard, `S0`| Personalizer uses RL to determine best actions in production.|
+ |_Online mode_ - Free, `F0`| Try Personalizer in a limited non-production environment.|
1. Add Personalizer to your application, website, or system:
- 1. Add a **Rank** call to Personalizer in your application, website, or system to determine best, single _content_ item before the content is shown to the user.
- 1. Display best, single _content_ item, which is the returned _reward action ID_, to user.
- 1. Apply _business logic_ to collected information about how the user behaved, to determine the **reward** score, such as:
+ 1. Add a **Rank** call to Personalizer in your application, website, or system to determine the best action.
+ 1. Use the best action, returned as the _reward action ID_, in your scenario.
+ 1. Apply _business logic_ to user behavior or feedback data to determine the **reward** score. For example:
 |Behavior|Calculated reward score|
 |--|--|
- |User selected best, single _content_ item (reward action ID)|**1**|
- |User selected other content|**0**|
- |User paused, scrolling around indecisively, before selecting best, single _content_ item (reward action ID)|**0.5**|
+ |User selected a news article suggested by Personalizer |**1**|
+ |User selected a news article _not_ suggested by Personalizer |**0**|
+ |User hesitated to select a news article, scrolled around indecisively, and ultimately selected the news article suggested by Personalizer |**0.5**|
1. Add a **Reward** call sending a reward score between 0 and 1
- * Immediately after showing your content
- * Or sometime later in an offline system
- 1. [Evaluate your loop](concepts-offline-evaluation.md) with an offline evaluation after a period of use. An offline evaluation allows you to test and assess the effectiveness of the Personalizer Service without changing your code or affecting user experience.
+ * Immediately after feedback is received.
+ * Or sometime later in scenarios where delayed feedback is expected.
+ 1. Evaluate your loop with an [offline evaluation](concepts-offline-evaluation.md) after a period of time when Personalizer has received significant data to make online decisions. An offline evaluation allows you to test and assess the effectiveness of the Personalizer Service without code changes or user impact.
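Putting the steps above together, the Rank-then-Reward loop looks roughly like this (`rank_best_action` and `send_reward` are stand-ins for the real API calls; a production app would POST to its Personalizer endpoint instead):

```python
import uuid

rewards_log = {}  # stands in for the Reward API's server-side bookkeeping

def rank_best_action(context, actions):
    """Stand-in for a Rank API call; returns (event_id, reward_action_id).
    Here it just picks the first action as a placeholder."""
    return str(uuid.uuid4()), actions[0]["id"]

def send_reward(event_id, score):
    """Stand-in for a Reward API call; the score must lie in [0, 1]."""
    assert 0.0 <= score <= 1.0
    rewards_log[event_id] = score

context = {"timeOfDay": "morning"}
actions = [{"id": "article-a"}, {"id": "article-b"}]

event_id, best = rank_best_action(context, actions)  # 1. decide
user_clicked_best = True                             # 2. observe the user
score = 1.0 if user_clicked_best else 0.0            # 3. apply business logic
send_reward(event_id, score)                         # 4. close the loop
```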
## Reference
cognitive-services Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/personalizer/whats-new.md
Last updated 05/28/2021
# What's new in Personalizer
-Learn what's new in the service. These items may include release notes, videos, blog posts, and other types of information. Bookmark this page to keep up-to-date with the service.
+Learn what's new in Azure Personalizer. These items may include release notes, videos, blog posts, and other types of information. Bookmark this page to keep up-to-date with the service.
## Release notes
+### April 2022
+* Local inference SDK (Preview): Personalizer now supports near-real-time (sub-10 ms) inference without the need to wait for network API calls. Your Personalizer models can be used locally for lightning-fast Rank calls using the [C# SDK (Preview)](https://www.nuget.org/packages/Azure.AI.Personalizer/2.0.0-beta.2), empowering your applications to personalize quickly and efficiently. Your model continues to train in Azure while your local model is seamlessly updated.
### May 2021 - //Build conference
* Auto-Optimize (Preview): You can configure a Personalizer loop that you are using to continuously improve over time with less work. Personalizer will automatically run offline evaluations, discover better machine learning settings, and apply them. To learn more, see [Personalizer Auto-Optimize (Preview)](concept-auto-optimization.md).
communication-services Custom Teams Endpoint Authentication Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/interop/custom-teams-endpoint-authentication-overview.md
Title: Authentication for customized Teams apps
-description: Explore single-tenant and multi-tenant authentication use cases for customized Teams applications. Also learn about authentication artifacts.
+ Title: Authentication for apps with Teams users
+description: Explore single-tenant and multi-tenant authentication use cases for applications supporting Teams users. Also learn about authentication artifacts.
-# Single-tenant and multi-tenant authentication for Teams
+# Single-tenant and multi-tenant authentication for Teams users
- This article gives you insight into the authentication process for single-tenant and multi-tenant, *Azure Active Directory* (Azure AD) applications. You can use authentication when you build customized Teams calling experiences with the *Calling software development kit* (SDK) that *Azure Communication Services* makes available. Use cases in this article also break down individual authentication artifacts.
+ This article gives you insight into the authentication process for single-tenant and multi-tenant, *Azure Active Directory* (Azure AD) applications. You can use authentication when you build calling experiences for Teams users with the *Calling software development kit* (SDK) that *Azure Communication Services* makes available. Use cases in this article also break down individual authentication artifacts.
## Case 1: Example of a single-tenant application

The Fabrikam company has built a custom Teams calling application for internal company use. All Teams users are managed by Azure Active Directory. Access to Azure Communication Services is controlled by *Azure role-based access control (Azure RBAC)*.
-![A diagram that outlines the authentication process for Fabrikam;s customized Teams calling application and its Azure Communication Services resource.](./media/custom-teams-endpoint/authentication-case-single-tenant-azure-rbac-overview.svg)
+![A diagram that outlines the authentication process for Fabrikam's calling application for Teams users and its Azure Communication Services resource.](./media/custom-teams-endpoint/authentication-case-single-tenant-azure-rbac-overview.svg)
The following sequence diagram details single-tenant authentication.
Before we begin:
Steps:
1. Authenticate Alice using Azure Active Directory: Alice is authenticated using a standard OAuth flow with *Microsoft Authentication Library (MSAL)*. If authentication is successful, the client application receives an Azure AD access token, with a value of 'A1' and an Object ID of an Azure AD user with a value of 'A2'. Tokens are outlined later in this article. Authentication from the developer perspective is explored in this [quickstart](../../quickstarts/manage-teams-identity.md).
-1. Get an access token for Alice: The customized Teams application performs control plane logic, using artifacts 'A1', 'A2' and 'A3'. This produces Azure Communication Services access token 'D' and gives Alice access. This access token can also be used for data plane actions in Azure Communication Services, like Calling.
-1. Call Bob: Alice makes a call to Teams user Bob, with Fabrikam's customized Teams app. The call takes place via the Calling SDK with an Azure Communication Services access token. Learn more about [developing custom Teams clients](../../quickstarts/voice-video-calling/get-started-with-voice-video-calling-custom-teams-client.md).
+1. Get an access token for Alice: The application for Teams users performs control plane logic, using artifacts 'A1', 'A2' and 'A3'. This produces Azure Communication Services access token 'D' and gives Alice access. This access token can also be used for data plane actions in Azure Communication Services, like Calling.
+1. Call Bob: Alice makes a call to Teams user Bob, with Fabrikam's app. The call takes place via the Calling SDK with an Azure Communication Services access token. Learn more about [developing custom Teams clients](../../quickstarts/voice-video-calling/get-started-with-voice-video-calling-custom-teams-client.md).
Artifacts:
- Artifact A1
Artifacts:
- Azure Communication Services Resource ID: Fabrikam's _`Azure Communication Services Resource ID`_

## Case 2: Example of a multi-tenant application
-The Contoso company has built a custom Teams calling application for external customers. This application uses custom authentication within Contoso's own infrastructure. Contoso uses a connection string to retrieve tokens from Fabrikam's customized Teams application.
+The Contoso company has built a custom Teams calling application for external customers. This application uses custom authentication within Contoso's own infrastructure. Contoso uses a connection string to retrieve tokens from Fabrikam's application.
![A sequence diagram that demonstrates how the Contoso application authenticates Fabrikam users with Contoso's own Azure Communication Services resource.](./media/custom-teams-endpoint/authentication-case-multiple-tenants-hmac-overview.svg)
Before we begin:
- Alice or her Azure AD administrator needs to give Contoso's Azure Active Directory application consent before the first attempt to sign in. Learn more about [consent](../../../active-directory/develop/consent-framework.md).

Steps:
-1. Authenticate Alice using the Fabrikam application: Alice is authenticated through Fabrikam's customized Teams application. A standard OAuth flow with Microsoft Authentication Library (MSAL) is used. If authentication is successful, the client application, the Contoso app in this case, receives an Azure AD access token with a value of 'A1' and an Object ID of an Azure AD user with a value of 'A2'. Token details are outlined below. Authentication from the developer perspective is explored in this [quickstart](../../quickstarts/manage-teams-identity.md).
+1. Authenticate Alice using the Fabrikam application: Alice is authenticated through Fabrikam's application. A standard OAuth flow with Microsoft Authentication Library (MSAL) is used. If authentication is successful, the client application, the Contoso app in this case, receives an Azure AD access token with a value of 'A1' and an Object ID of an Azure AD user with a value of 'A2'. Token details are outlined below. Authentication from the developer perspective is explored in this [quickstart](../../quickstarts/manage-teams-identity.md).
1. Get an access token for Alice: The Contoso application performs control plane logic, using artifacts 'A1', 'A2' and 'A3'. This generates Azure Communication Services access token 'D' for Alice within the Contoso application. This access token can be used for data plane actions in Azure Communication Services, like Calling.
-1. Call Bob: Alice makes a call to Teams user Bob, with Fabrikam's customized Teams app. The call takes place via the Calling SDK with an Azure Communication Services access token. Learn more about developing custom, Teams apps [in this quickstart](../../quickstarts/voice-video-calling/get-started-with-voice-video-calling-custom-teams-client.md).
+1. Call Bob: Alice makes a call to Teams user Bob, with Fabrikam's application. The call takes place via the Calling SDK with an Azure Communication Services access token. Learn more about developing custom, Teams apps [in this quickstart](../../quickstarts/voice-video-calling/get-started-with-voice-video-calling-custom-teams-client.md).
Artifacts:
The following articles may be of interest to you:
- Learn more about [authentication](../authentication.md).
- Try this [quickstart to authenticate Teams users](../../quickstarts/manage-teams-identity.md).
-- Try this [quickstart to call a Teams user](../../quickstarts/voice-video-calling/get-started-with-voice-video-calling-custom-teams-client.md).
+- Try this [quickstart to call a Teams user](../../quickstarts/voice-video-calling/get-started-with-voice-video-calling-custom-teams-client.md).
communication-services Custom Teams Endpoint Firewall Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/interop/custom-teams-endpoint-firewall-configuration.md
Title: Firewall configuration and Teams customization
-description: Learn the firewall configuration requirements that enable customized Teams calling experiences.
+description: Learn the firewall configuration requirements of calling applications for Teams users.
-# Firewall configuration for customized Teams calling experiences
+# Firewall configuration of calling applications for Teams users
Azure Communication Services allows you to build custom Teams calling experiences.
If you use an *independent software vendor* (ISV) for authentication, use instru
The following articles may be of interest to you:
- Learn more about [Azure Communication Services firewall configuration](../voice-video-calling/network-requirements.md).
-- Learn about [Microsoft Teams firewall configuration](/microsoft-365/enterprise/urls-and-ip-address-ranges?view=o365-worldwide&preserve-view=true#skype-for-business-online-and-microsoft-teams).
+- Learn about [Microsoft Teams firewall configuration](/microsoft-365/enterprise/urls-and-ip-address-ranges?view=o365-worldwide&preserve-view=true#skype-for-business-online-and-microsoft-teams).
communication-services Custom Teams Endpoint Use Cases https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/interop/custom-teams-endpoint-use-cases.md
# Azure Communication Services support Teams identities - Use cases
-Microsoft Teams provides identities managed by Azure Active Directory and calling experiences controlled by Teams Admin Center and policies. Users might have assigned licenses to enable PSTN connectivity and advanced calling capabilities of Teams Phone System. Azure Communication Services are supporting Teams identities for managing Teams VoIP calls, Teams PSTN calls, and join Teams meetings. Developers might extend the Azure Communication Services with Graph API to provide contextual data from Microsoft 365 ecosystem. This page is providing inspiration on how to use existing Microsoft technologies to provide an end-to-end experience for calling scenarios with Teams users and Azure Communication Services calling SDKs.
+Microsoft Teams provides identities managed by Azure Active Directory and calling experiences controlled by the Teams Admin Center and policies. Users might have assigned licenses that enable PSTN connectivity and the advanced calling capabilities of Microsoft Teams Phone. Azure Communication Services supports Teams identities for managing Teams VoIP calls, Teams PSTN calls, and joining Teams meetings. Developers might extend Azure Communication Services with the Graph API to provide contextual data from the Microsoft 365 ecosystem. This page provides inspiration on how to use existing Microsoft technologies to deliver an end-to-end experience for calling scenarios with Teams users and the Azure Communication Services calling SDKs.
## Use case 1: Make outbound Teams PSTN call

This scenario shows a multi-tenant use case, where company Contoso provides SaaS to company Fabrikam. The SaaS allows Fabrikam's users to make Teams PSTN calls via a custom website that takes the identity of the Teams user and the configuration of the PSTN connectivity assigned to that Teams user.
The following articles might be of interest to you:
- Learn more about [authentication](../authentication.md).
- Try [quickstart for authentication of Teams users](../../quickstarts/manage-teams-identity.md).
-- Try [quickstart for calling to a Teams user](../../quickstarts/voice-video-calling/get-started-with-voice-video-calling-custom-teams-client.md).
+- Try [quickstart for calling to a Teams user](../../quickstarts/voice-video-calling/get-started-with-voice-video-calling-custom-teams-client.md).
communication-services Teams Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/teams-endpoint.md
Find more details in [Azure Active Directory documentation](../../active-directo
> [!div class="nextstepaction"]
> [Check use cases for communication as a Teams user](./interop/custom-teams-endpoint-use-cases.md)
-> [Issue a Teams access token](../quickstarts/manage-teams-identity.md)
-> [Start a call with Teams user as a Teams user](../quickstarts/voice-video-calling/get-started-with-voice-video-calling-custom-teams-client.md)
-Learn about [Teams interoperability](./teams-interop.md).
+Find more details in the following articles:
+- [Teams interoperability](./teams-interop.md)
+- [Issue a Teams access token](../quickstarts/manage-teams-identity.md)
+- [Start a call with Teams user as a Teams user](../quickstarts/voice-video-calling/get-started-with-voice-video-calling-custom-teams-client.md)
communication-services Teams Interop https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/teams-interop.md
Applications can implement both authentication models and leave the choice of au
|Chat is available via | Communication Services Chat SDKs | Graph API |
|Join Teams meetings | Yes | Yes |
|Make and receive calls as Teams users | No | Yes |
-|PSTN support| Not supported for Communication Services users in Teams meetings | Teams phone system, calling plan, direct routing, operator connect|
+|PSTN support| Not supported for Communication Services users in Teams meetings | Microsoft Teams Phone, calling plan, direct routing, operator connect|
\* Server logic issuing access tokens can perform any custom authentication and authorization of the request.
Azure Communication Services interoperability isn't compatible with Teams deploy
## Next steps
-> [!div class="nextstepaction"]
-> [Get access tokens for Guest/BYOI](../quickstarts/access-tokens.md)
-> [Join Teams meeting call as a Guest/BYOI](../quickstarts/voice-video-calling/get-started-teams-interop.md)
-> [Join Teams meeting chat as a Guest/BYOI](../quickstarts/chat/meeting-interop.md)
-> [Get access tokens for Teams users](../quickstarts/manage-teams-identity.md)
-> [Make a call as a Teams users to a Teams user](../quickstarts/voice-video-calling/get-started-with-voice-video-calling-custom-teams-client.md)
+Find more details for Guest/BYOI interoperability:
+- [Get access tokens for Guest/BYOI](../quickstarts/access-tokens.md)
+- [Join Teams meeting call as a Guest/BYOI](../quickstarts/voice-video-calling/get-started-teams-interop.md)
+- [Join Teams meeting chat as a Guest/BYOI](../quickstarts/chat/meeting-interop.md)
+
+Find more details for Teams user interoperability:
+- [Get access tokens for Teams users](../quickstarts/manage-teams-identity.md)
+- [Make a call as a Teams user to a Teams user](../quickstarts/voice-video-calling/get-started-with-voice-video-calling-custom-teams-client.md)
communication-services Manage Calls https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/how-tos/cte-calling-sdk/manage-calls.md
Title: Manage calls for customized Teams application
+ Title: Manage calls for Teams users
-description: Use Azure Communication Services SDKs to manage calls for customized Teams application.
+description: Use Azure Communication Services SDKs to manage calls for Teams users
communication-services Get Started Rooms https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/rooms/get-started-rooms.md
Title: Quickstart - Create and manage a `room` resource
+ Title: Quickstart - Create and manage a room resource
description: In this quickstart, you'll learn how to create a Room within your Azure Communication Services resource.+ - Previously updated : 11/19/2021 Last updated : 07/27/2022
-zone_pivot_groups: acs-csharp-java
+zone_pivot_groups: acs-js-csharp-java-python
# Quickstart: Create and manage a room resource
-This quickstart will help you get started with Azure Communication Services Rooms. A `room` is a server-managed communications space for a known, fixed set of participants to collaborate for a pre-determined duration. The [rooms conceptual documentation](../../concepts/rooms/room-concept.md) covers more details and potential use cases for `rooms`.
+This quickstart will help you get started with Azure Communication Services Rooms. A `room` is a server-managed communications space for a known, fixed set of participants to collaborate for a pre-determined duration. The [rooms conceptual documentation](../../concepts/rooms/room-concept.md) covers more details and use cases for `rooms`.
+ ::: zone pivot="programming-language-csharp" [!INCLUDE [Use rooms with .NET SDK](./includes/rooms-quickstart-net.md)]
This quickstart will help you get started with Azure Communication Services Room
[!INCLUDE [Use rooms with Java SDK](./includes/rooms-quickstart-java.md)]
::: zone-end
+
## Object model

The table below lists the main properties of `room` objects:
The table below lists the main properties of `room` objects:
| Name | Description |
|--|-|
| `roomId` | Unique `room` identifier. |
-| `ValidFrom` | Earliest time a `room` can be used. |
-| `ValidUntil` | Latest time a `room` can be used. |
-| `Participants` | List of pre-existing participant IDs. |
+| `validFrom` | Earliest time a `room` can be used. |
+| `validUntil` | Latest time a `room` can be used. |
+| `roomJoinPolicy` | Specifies which user identities are allowed to join room calls. Valid options are `InviteOnly` and `CommunicationServiceUsers`. |
+| `participants` | List of participants to a `room`. Specified as a `CommunicationIdentifier`. |
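As a rough illustration of how the properties above fit together, the sketch below models a `room` as a plain dictionary and checks whether a user may join. The field values, the `can_join` helper, and the identifier strings are all hypothetical placeholders, not Communication Services SDK types.

```python
from datetime import datetime, timedelta, timezone

# Illustrative model of a `room` object using the properties listed above.
# NOT the Communication Services SDK; names only mirror the table.
room = {
    "roomId": "99466898241024408",
    "validFrom": datetime.now(timezone.utc),
    "validUntil": datetime.now(timezone.utc) + timedelta(days=30),
    "roomJoinPolicy": "InviteOnly",
    "participants": ["8:acs:...-user1", "8:acs:...-user2"],
}

def can_join(room, user_id, now=None):
    """A user can join only inside the validity window and, for
    InviteOnly rooms, only if listed as a participant."""
    now = now or datetime.now(timezone.utc)
    if not (room["validFrom"] <= now <= room["validUntil"]):
        return False
    if room["roomJoinPolicy"] == "InviteOnly":
        return user_id in room["participants"]
    return True  # CommunicationServiceUsers: any user of the resource may join
```

For example, a pre-registered participant can join during the window, while an unknown identity (or any identity after `validUntil`) cannot.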
## Next steps
+Once you've created and configured the room, you can learn how to [join a room call](join-rooms-call.md).
+
In this section you learned how to:
> [!div class="checklist"]
> - Create a new room
> - Get the properties of a room
> - Update the properties of a room
-> - Join a room call
> - Delete a room

You may also want to:
communication-services Join Rooms Call https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/rooms/join-rooms-call.md
+
+ Title: Quickstart - Join a room call
+
+description: In this quickstart, you'll learn how to join a room call using web or native mobile calling SDKs
+++++ Last updated : 07/27/2022+++
+zone_pivot_groups: acs-web-ios-android
++
+# Quickstart: Join a room call
++
+## Prerequisites
+
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+- An active Communication Services resource and connection string. [Create a Communication Services resource](../create-communication-resource.md).
+- Two or more Communication User Identities. [Create and manage access tokens](../access-tokens.md) or [Quick-create identities for testing](../identity/quick-create-identity.md).
+- A room resource. [Create and manage rooms](get-started-rooms.md)
+
+## Obtain user access token
+
+You'll need to create a User Access Token for each call participant. [Learn how to create and manage user access tokens](../access-tokens.md). You can also use the Azure CLI and run the command below with your connection string to create a user and an access token.
+
+```azurecli-interactive
+az communication identity issue-access-token --scope voip --connection-string "yourConnectionString"
+```
+
+For details, see [Use Azure CLI to Create and Manage Access Tokens](../access-tokens.md?pivots=platform-azcli).
++++
+## Next steps
+
+In this section you learned how to:
+> [!div class="checklist"]
+> - Add video calling to your application
+> - Pass the room identifier to the calling SDK
+> - Join a room call from your application
+
+You may also want to:
+ - Learn about [rooms concept](../../concepts/rooms/room-concept.md)
+ - Learn about [voice and video calling concepts](../../concepts/voice-video-calling/about-call-types.md)
+ - Learn about [authentication concepts](../../concepts/authentication.md)
cosmos-db Resource Manager Template Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/graph/resource-manager-template-samples.md
This template will create an Azure Cosmos account for Gremlin API with a databas
:::code language="json" source="~/quickstart-templates/quickstarts/microsoft.documentdb/cosmosdb-gremlin-autoscale/azuredeploy.json":::
-<a id="create-manual"></a>
-
-## Azure Cosmos DB account for Gremlin with standard provisioned throughput
-
-This template will create an Azure Cosmos account for Gremlin API with a database and graph with standard (manual) throughput. This template is also available for one-click deploy from Azure Quickstart Templates Gallery.
-
-[:::image type="content" source="../../media/template-deployments/deploy-to-azure.svg" alt-text="Deploy to Azure":::](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fquickstarts%2Fmicrosoft.documentdb%2Fcosmosdb-gremlin%2Fazuredeploy.json)
--
## Next steps

Here are some additional resources:
Here are some additional resources:
* [Azure Resource Manager documentation](../../azure-resource-manager/index.yml)
* [Azure Cosmos DB resource provider schema](/azure/templates/microsoft.documentdb/allversions)
* [Azure Cosmos DB Quickstart templates](https://azure.microsoft.com/resources/templates/?resourceType=Microsoft.DocumentDB&pageNumber=1&sort=Popular)
-* [Troubleshoot common Azure Resource Manager deployment errors](../../azure-resource-manager/templates/common-deployment-errors.md)
+* [Troubleshoot common Azure Resource Manager deployment errors](../../azure-resource-manager/templates/common-deployment-errors.md)
cosmos-db Database Transactions Optimistic Concurrency https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/database-transactions-optimistic-concurrency.md
The ability to execute JavaScript directly within the database engine provides p
Optimistic concurrency control allows you to prevent lost updates and deletes. Concurrent, conflicting operations are subjected to the regular pessimistic locking of the database engine hosted by the logical partition that owns the item. When two concurrent operations attempt to update the latest version of an item within a logical partition, one of them will win and the other will fail. However, if one or two operations attempting to concurrently update the same item had previously read an older value of the item, the database doesn't know if the previously read value by either or both the conflicting operations was indeed the latest value of the item. Fortunately, this situation can be detected with the **Optimistic Concurrency Control (OCC)** before letting the two operations enter the transaction boundary inside the database engine. OCC protects your data from accidentally overwriting changes that were made by others. It also prevents others from accidentally overwriting your own changes.
-The concurrent updates of an item are subjected to the OCC by Azure Cosmos DBΓÇÖs communication protocol layer. For Azure Cosmos accounts configured for **single-region writes**, Azure Cosmos DB ensures that the client-side version of the item that you are updating (or deleting) is the same as the version of the item in the Azure Cosmos container. This ensures that your writes are protected from being overwritten accidentally by the writes of others and vice versa. In a multi-user environment, the optimistic concurrency control protects you from accidentally deleting or updating wrong version of an item. As such, items are protected against the infamous "lost update" or "lost delete" problems.
-
-In an Azure Cosmos account configured with **multi-region writes**, data can be committed independently into secondary regions if its `_etag` matches that of the data in the local region. Once new data is committed locally in a secondary region, it is then merged in the hub or primary region. If the conflict resolution policy merges the new data into the hub region, this data will then be replicated globally with the new `_etag`. If the conflict resolution policy rejects the new data, the secondary region will be rolled back to the original data and `_etag`.
+### Implementing optimistic concurrency control using ETag and HTTP headers
Every item stored in an Azure Cosmos container has a system defined `_etag` property. The value of the `_etag` is automatically generated and updated by the server every time the item is updated. `_etag` can be used with the client supplied `if-match` request header to allow the server to decide whether an item can be conditionally updated. If the value of the `if-match` header matches the value of the `_etag` at the server, the item is updated. If the value of the `if-match` request header is no longer current, the server rejects the operation with an "HTTP 412 Precondition failure" response message. The client then can re-fetch the item to acquire the current version of the item on the server or override the version of item in the server with its own `_etag` value for the item. In addition, `_etag` can be used with the `if-none-match` header to determine whether a refetch of a resource is needed. The item's `_etag` value changes every time the item is updated. For replace item operations, `if-match` must be explicitly expressed as a part of the request options. For an example, see the sample code in [GitHub](https://github.com/Azure/azure-cosmos-dotnet-v3/blob/master/Microsoft.Azure.Cosmos.Samples/Usage/ItemManagement/Program.cs#L676-L772). `_etag` values are implicitly checked for all written items touched by the stored procedure. If any conflict is detected, the stored procedure will roll back the transaction and throw an exception. With this method, either all or no writes within the stored procedure are applied atomically. This is a signal to the application to reapply updates and retry the original client request.
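The conditional-update flow described above can be sketched with a minimal in-memory store. This is a hypothetical `EtagStore` class, not the Cosmos DB SDK; it only models the semantics: the server bumps `_etag` on every write, a matching `if-match` succeeds, and a stale one fails with HTTP 412.

```python
import uuid

class EtagStore:
    """Toy store demonstrating the `_etag` / `if-match` OCC semantics."""

    def __init__(self):
        self._items = {}

    def create(self, item_id, body):
        etag = str(uuid.uuid4())  # server generates the initial _etag
        self._items[item_id] = {"body": body, "_etag": etag}
        return etag

    def read(self, item_id):
        return self._items[item_id]

    def replace(self, item_id, body, if_match):
        current = self._items[item_id]
        if if_match != current["_etag"]:
            return 412  # Precondition Failed: caller must re-fetch and retry
        current["body"] = body
        current["_etag"] = str(uuid.uuid4())  # _etag changes on every update
        return 200
```

If two writers read the same version, the first replace succeeds and bumps `_etag`; the second replace, still carrying the old value, gets a 412 and must re-fetch before retrying.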
+### Optimistic concurrency control and global distribution
+
+The concurrent updates of an item are subjected to the OCC by Azure Cosmos DB's communication protocol layer. For Azure Cosmos accounts configured for **single-region writes**, Azure Cosmos DB ensures that the client-side version of the item that you are updating (or deleting) is the same as the version of the item in the Azure Cosmos container. This ensures that your writes are protected from being overwritten accidentally by the writes of others and vice versa. In a multi-user environment, the optimistic concurrency control protects you from accidentally deleting or updating the wrong version of an item. As such, items are protected against the infamous "lost update" or "lost delete" problems.
+
+In an Azure Cosmos account configured with **multi-region writes**, data can be committed independently into secondary regions if its `_etag` matches that of the data in the local region. Once new data is committed locally in a secondary region, it is then merged in the hub or primary region. If the conflict resolution policy merges the new data into the hub region, this data will then be replicated globally with the new `_etag`. If the conflict resolution policy rejects the new data, the secondary region will be rolled back to the original data and `_etag`.
+
## Next steps

Learn more about database transactions and optimistic concurrency control in the following articles:
cosmos-db How To Dotnet Query Items https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/how-to-dotnet-query-items.md
Alternatively, use the [QueryDefinition](/dotnet/api/microsoft.azure.cosmos.quer
In this example, an [``IQueryable``<>](/dotnet/api/system.linq.iqueryable) object is used to construct a [Language Integrated Query (LINQ)](/dotnet/csharp/programming-guide/concepts/linq/). The results are then iterated over using a feed iterator. The [Container.GetItemLinqQueryable<>](/dotnet/api/microsoft.azure.cosmos.container.getitemlinqqueryable) method constructs an ``IQueryable`` to build the LINQ query. Then the ``ToFeedIterator<>`` method is used to convert the LINQ query expression into a [``FeedIterator<>``](/dotnet/api/microsoft.azure.cosmos.feediterator-1).
cosmos-db How To Use Stored Procedures Triggers Udfs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/how-to-use-stored-procedures-triggers-udfs.md
The SQL API in Azure Cosmos DB supports registering and invoking stored procedures, triggers, and user-defined functions (UDFs) written in JavaScript. Once you've defined one or more stored procedures, triggers, and user-defined functions, you can load and view them in the [Azure portal](https://portal.azure.com/) by using Data Explorer.
-SQL API SDKs are available for a wide variety of platforms and programming languages. If you haven't worked
You can use the SQL API SDK across multiple platforms including [.NET v2 (legacy)](sql-api-sdk-dotnet.md), [.NET v3](sql-api-sdk-dotnet-standard.md), [Java](sql-api-sdk-java.md), [JavaScript](sql-api-sdk-node.md), or [Python](sql-api-sdk-python.md) SDKs to perform these tasks. If you haven't worked with one of these SDKs before, see the *"Quickstart"* article for the appropriate SDK:

| SDK | Getting started |
cosmos-db How To Write Stored Procedures Triggers Udfs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/how-to-write-stored-procedures-triggers-udfs.md
The following example shows a post-trigger. This trigger queries for the metadat
```javascript function updateMetadata() {
-var context = getContext();
-var container = context.getCollection();
-var response = context.getResponse();
-
-// item that was created
-var createdItem = response.getBody();
-
-// query for metadata document
-var filterQuery = 'SELECT * FROM root r WHERE r.id = "_metadata"';
-var accept = container.queryDocuments(container.getSelfLink(), filterQuery,
- updateMetadataCallback);
-if(!accept) throw "Unable to update metadata, abort";
-
-function updateMetadataCallback(err, items, responseOptions) {
- if(err) throw new Error("Error" + err.message);
- if(items.length != 1) throw 'Unable to find metadata document';
-
- var metadataItem = items[0];
-
- // update metadata
- metadataItem.createdItems += 1;
- metadataItem.createdNames += " " + createdItem.id;
- var accept = container.replaceDocument(metadataItem._self,
- metadataItem, function(err, itemReplaced) {
- if(err) throw "Unable to update metadata, abort";
- });
- if(!accept) throw "Unable to update metadata, abort";
- return;
-}
+ var context = getContext();
+ var container = context.getCollection();
+ var response = context.getResponse();
+
+ // item that was created
+ var createdItem = response.getBody();
+
+ // query for metadata document
+ var filterQuery = 'SELECT * FROM root r WHERE r.id = "_metadata"';
+ var accept = container.queryDocuments(container.getSelfLink(), filterQuery,
+ updateMetadataCallback);
+ if(!accept) throw "Unable to update metadata, abort";
+
+ function updateMetadataCallback(err, items, responseOptions) {
+ if(err) throw new Error("Error" + err.message);
+ if(items.length != 1) throw 'Unable to find metadata document';
+
+ var metadataItem = items[0];
+
+ // update metadata
+ metadataItem.createdItems += 1;
+ metadataItem.createdNames += " " + createdItem.id;
+ var accept = container.replaceDocument(metadataItem._self,
+ metadataItem, function(err, itemReplaced) {
+ if(err) throw "Unable to update metadata, abort";
+ });
+ if(!accept) throw "Unable to update metadata, abort";
+ return;
+ }
}
```
The following is a function definition to calculate income tax for various incom
```javascript
function tax(income) {
- if(income == undefined)
- throw 'no input';
-
- if (income < 1000)
- return income * 0.1;
- else if (income < 10000)
- return income * 0.2;
- else
- return income * 0.4;
- }
+ if (income == undefined)
+ throw 'no input';
+
+ if (income < 1000)
+ return income * 0.1;
+ else if (income < 10000)
+ return income * 0.2;
+ else
+ return income * 0.4;
+}
```

For examples of how to register and use a user-defined function, see the [How to use user-defined functions in Azure Cosmos DB](how-to-use-stored-procedures-triggers-udfs.md#how-to-work-with-user-defined-functions) article.
cost-management-billing Troubleshoot Azure Sign Up https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/troubleshoot-azure-sign-up.md
Other troubleshooting articles for Azure Billing and Subscriptions
## Contact us for help
-If you have questions or need help, [create a support request](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/newsupportrequest).
+- Get answers in [Azure forums](https://azure.microsoft.com/support/forums/).
+- Connect with [@AzureSupport](https://twitter.com/AzureSupport) - answers, support, experts.
+- If you have a support plan, [open a support request](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/newsupportrequest).
## Next steps
-- Read the [Cost Management and Billing documentation](../index.yml)
+- Read the [Cost Management and Billing documentation](../index.yml)
data-factory Scenario Ssis Migration Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/scenario-ssis-migration-rules.md
Use other package configuration types. XML configuration file is recommended.
Additional Information
-[Package Configurations](/sql/integration-services/package-configurations)
+[Package Configurations](/sql/integration-services/packages/legacy-package-deployment-ssis)
### [4003]Package encrypted with user key isn't supported
defender-for-cloud Alert Validation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/alert-validation.md
You can simulate alerts for both of the control plane, and workload alerts with
**To simulate a Kubernetes workload security alert**:
-1. Access one of the `azuredefender-publisher-<XXX>` pods deployed in your Kubernetes cluster.
+1. Create a pod to run a test command on. This pod can be any of the existing pods in the cluster, or a new pod. You can create it using this sample YAML configuration:
+
+ ```yaml
+ apiVersion: v1
+ kind: Pod
+ metadata:
+ name: mdc-test
+ spec:
+ containers:
+ - name: mdc-test
+ image: ubuntu:18.04
+ command: ["/bin/sh"]
+ args: ["-c", "while true; do echo sleeping; sleep 3600;done"]
+ ```
+
+ To create the pod, run:
+
+ ```bash
+ kubectl apply -f <path_to_the_yaml_file>
+ ```
1. Run the following command from the cluster:

    ```bash
- kubectl exec -it azuredefender-publisher-xx-xxxxx -n <namespace> -- bash
+ kubectl exec -it mdc-test -- bash
```
- For AKS - `<namespace>` = `kube-system`<br>
- For ARC - `<namespace>` = `mdc`
-
-1. Select an executable, copy it to a convenient location and rename it to `./asc_alerttest_662jfi039n`. For example:
-`cp /bin/echo ./asc_alerttest_662jfi039n`.
+1. Copy the executable to a separate location and rename it to `./asc_alerttest_662jfi039n` with the following command `cp /bin/echo ./asc_alerttest_662jfi039n`.
1. Execute the file `./asc_alerttest_662jfi039n testing eicar pipe`.
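Taken together, the in-pod steps above amount to the following sequence. Because the renamed binary is just a copy of `/bin/echo`, the last command simply prints its arguments; the security alert itself is raised only when this runs inside a cluster monitored by Defender for Containers.

```shell
# Copy a benign executable and rename it to the Defender for Cloud test name,
# then run it with the test arguments. The copied /bin/echo prints
# "testing eicar pipe" while the monitored cluster flags the execution.
cp /bin/echo ./asc_alerttest_662jfi039n
./asc_alerttest_662jfi039n testing eicar pipe
```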
defender-for-cloud Defender For Containers Enable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-containers-enable.md
Title: How to enable Microsoft Defender for Containers in Microsoft Defender for
description: Enable the container protections of Microsoft Defender for Containers zone_pivot_groups: k8s-host Previously updated : 07/14/2022 Last updated : 07/25/2022 # Enable Microsoft Defender for Containers
You can learn more by watching these videos from the Defender for Cloud in the F
> [!NOTE]
> Defender for Containers' support for Arc-enabled Kubernetes clusters, AWS EKS, and GCP GKE is a preview feature.
>
-> [!INCLUDE [Legalese](../../includes/defender-for-cloud-preview-legal-text.md)]
+> To learn more about the supported operating systems, feature availability, outbound proxy and more see the [Defender for Containers feature availability](supported-machines-endpoint-solutions-clouds-containers.md).
::: zone-end ::: zone pivot="defender-for-container-aks"
You can learn more by watching these videos from the Defender for Cloud in the F
::: zone-end
::: zone pivot="defender-for-container-arc,defender-for-container-eks,defender-for-container-gke"
::: zone-end
::: zone pivot="defender-for-container-aks"
defender-for-cloud Quickstart Onboard Aws https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/quickstart-onboard-aws.md
The native cloud connector requires:
> [!IMPORTANT]
> To present the current status of your recommendations, the CSPM plan queries the AWS resource APIs several times a day. These read-only API calls incur no charges, but they *are* registered in CloudTrail if you've enabled a trail for read events. As explained in [the AWS documentation](https://aws.amazon.com/cloudtrail/pricing/), there are no additional charges for keeping one trail. If you're exporting the data out of AWS (for example, to an external SIEM), this increased volume of calls might also increase ingestion costs. In such cases, we recommend filtering out the read-only calls from the Defender for Cloud user or role ARN: `arn:aws:iam::[accountId]:role/CspmMonitorAws` (this is the default role name; confirm the role name configured on your account).
-1. By default the **Servers** plan is set to **On**. This is necessary to extend Defender for server's coverage to your AWS EC2.
+1. By default the **Servers** plan is set to **On**. This is necessary to extend Defender for Servers' coverage to your AWS EC2 instances. Ensure you've fulfilled the [network requirements for Azure Arc](/azure-arc/servers/network-requirements.md).
- (Optional) Select **Configure**, to edit the configuration as required.
-1. By default the **Containers** plan is set to **On**. This is necessary to have Defender for Containers protect your AWS EKS clusters. Ensure you've fulfilled the [network requirements](./defender-for-containers-enable.md?pivots=defender-for-container-eks&source=docs&tabs=aks-deploy-portal%2ck8s-deploy-asc%2ck8s-verify-asc%2ck8s-remove-arc%2caks-removeprofile-api#network-requirements) for the Defender for Containers plan.
+1. By default the **Containers** plan is set to **On**. This is necessary to have Defender for Containers protect your AWS EKS clusters. Ensure you've fulfilled the [network requirements](./defender-for-containers-enable.md?pivots=defender-for-container-eks&source=docs&tabs=aks-deploy-portal%2ck8s-deploy-asc%2ck8s-verify-asc%2ck8s-remove-arc%2caks-removeprofile-api#network-requirements) for the Defender for Containers plan.
> [!Note] > Azure Arc-enabled Kubernetes, the Defender Arc extension, and the Azure Policy Arc extension should be installed. Use the dedicated Defender for Cloud recommendations to deploy the extensions (and Arc, if necessary) as explained in [Protect Amazon Elastic Kubernetes Service clusters](defender-for-containers-enable.md?tabs=defender-for-container-eks).
defender-for-cloud Quickstart Onboard Gcp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/quickstart-onboard-gcp.md
Follow the steps below to create your GCP cloud connector.
1. Toggle the plans you want to connect to **On**. By default all necessary prerequisites and components will be provisioned. (Optional) Learn how to [configure each plan](#optional-configure-selected-plans).
-1. (**Containers only**) Ensure you have fulfilled the [network requirements](defender-for-containers-enable.md?tabs=defender-for-container-gcp#network-requirements) for the Defender for Containers plan.
+ 1. (**Containers only**) Ensure you have fulfilled the [network requirements](defender-for-containers-enable.md?tabs=defender-for-container-gcp#network-requirements) for the Defender for Containers plan.
1. Select **Next: Configure access**.
To have full visibility to Microsoft Defender for Servers security content, ensu
- **Manual installation** - You can manually connect your VM instances to Azure Arc for servers. Instances in projects with the Defender for Servers plan enabled that aren't connected to Arc will be surfaced by the recommendation "GCP VM instances should be connected to Azure Arc". Use the "Fix" option offered in this recommendation to install Azure Arc on the selected machines.
+- Ensure you've fulfilled the [network requirements for Azure Arc](/azure-arc/servers/network-requirements.md).
+ - Additional extensions should be enabled on the Arc-connected machines. - Microsoft Defender for Endpoint - VA solution (TVM/ Qualys)
defender-for-cloud Quickstart Onboard Machines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/quickstart-onboard-machines.md
To add Windows machines, you need the information on the **Agents management** p
When complete, the **Microsoft Monitoring agent** appears in **Control Panel**. You can review your configuration there and verify that the agent is connected.
-For further information on installing and configuring the agent, see [Connect Windows machines](../azure-monitor/agents/agent-windows.md#install-agent-using-setup-wizard).
+For further information on installing and configuring the agent, see [Connect Windows machines](../azure-monitor/agents/agent-windows.md#install-the-agent).
::: zone-end
defender-for-cloud Supported Machines Endpoint Solutions Clouds Containers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/supported-machines-endpoint-solutions-clouds-containers.md
Title: Microsoft Defender for Containers feature availability description: Learn about the availability of Microsoft Defender for Cloud containers features according to OS, machine type, and cloud deployment. Previously updated : 07/26/2022 Last updated : 07/27/2022
The **tabs** below show the features that are available, by environment, for Mic
<sup><a name="footnote3"></a>3</sup> VA can detect vulnerabilities for these [language specific packages](#registries-and-images).
-## Additional information
+## Additional environment information
### Registries and images
The **tabs** below show the features that are available, by environment, for Mic
> [!NOTE]
> For additional requirements for Kubernetes workload protection, see [existing limitations](../governance/policy/concepts/policy-for-kubernetes.md#limitations).
+### Outbound proxy support
+
+Outbound proxy without authentication and outbound proxy with basic authentication are supported. Outbound proxy that expects trusted certificates is currently not supported.
+
+### Supported host operating systems
+
+Defender for Containers relies on the **Defender extension** for several features. The Defender extension is supported on the following host operating systems:
+
+- Amazon Linux 2
+- CentOS 8
+- Debian 10
+- Debian 11
+- Google Container-Optimized OS
+- Red Hat Enterprise Linux 8
+- Ubuntu 16.04
+- Ubuntu 18.04
+- Ubuntu 20.04
+- Ubuntu 22.04
+
+Ensure your Kubernetes node is running on one of the verified supported operating systems. Clusters with different host operating systems will only get partial coverage. Check out the [Supported features by environment](#supported-features-by-environment) for more information.
+ ### [**GCP (GKE)**](#tab/gcp-gke) | Domain | Feature | Supported Resources | Linux release state <sup>[1](#footnote1)</sup> | Windows release state <sup>[1](#footnote1)</sup> | Agentless/Agent-based | Pricing tier |
The **tabs** below show the features that are available, by environment, for Mic
> [!NOTE]
> For additional requirements for Kubernetes workload protection, see [existing limitations](../governance/policy/concepts/policy-for-kubernetes.md#limitations).
+### Outbound proxy support
+
+Outbound proxy without authentication and outbound proxy with basic authentication are supported. Outbound proxy that expects trusted certificates is currently not supported.
+
+### Supported host operating systems
+
+Defender for Containers relies on the **Defender extension** for several features. The Defender extension is supported on the following host operating systems:
+
+- Amazon Linux 2
+- CentOS 8
+- Debian 10
+- Debian 11
+- Google Container-Optimized OS
+- Red Hat Enterprise Linux 8
+- Ubuntu 16.04
+- Ubuntu 18.04
+- Ubuntu 20.04
+- Ubuntu 22.04
+
+Ensure your Kubernetes node is running on one of the verified supported operating systems. Clusters with different host operating systems will only get partial coverage. Check out the [Supported features by environment](#supported-features-by-environment) for more information.
+ ### [**On-prem/IaaS (Arc)**](#tab/iaas-arc) | Domain | Feature | Supported Resources | Linux release state <sup>[1](#footnote1)</sup> | Windows release state <sup>[1](#footnote1)</sup> | Agentless/Agent-based | Pricing tier |
The **tabs** below show the features that are available, by environment, for Mic
> [!NOTE]
> For additional requirements for Kubernetes workload protection, see [existing limitations](../governance/policy/concepts/policy-for-kubernetes.md#limitations).
+### Outbound proxy support
+
+Outbound proxy without authentication and outbound proxy with basic authentication are supported. Outbound proxy that expects trusted certificates is currently not supported.
+
+### Supported host operating systems
+
+Defender for Containers relies on the **Defender extension** for several features. The Defender extension is supported on the following host operating systems:
+
+- Amazon Linux 2
+- CentOS 8
+- Debian 10
+- Debian 11
+- Google Container-Optimized OS
+- Red Hat Enterprise Linux 8
+- Ubuntu 16.04
+- Ubuntu 18.04
+- Ubuntu 20.04
+- Ubuntu 22.04
+
+Ensure your Kubernetes node is running on one of the verified supported operating systems. Clusters with different host operating systems will only get partial coverage. Check out the [Supported features by environment](#supported-features-by-environment) for more information.
+ ## Next steps-
+
- Learn how [Defender for Cloud collects data using the Log Analytics Agent](enable-data-collection.md).
- Learn how [Defender for Cloud manages and safeguards data](data-security.md).
- Review the [platforms that support Defender for Cloud](security-center-os-coverage.md).
event-hubs Store Captured Data Data Warehouse https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/store-captured-data-data-warehouse.md
- Title: 'Tutorial: Migrate event data to Azure Synapse Analytics - Azure Event Hubs'
-description: Describes how to use Azure Event Grid and Functions to migrate Event Hubs captured data to Azure Synapse Analytics.
- Previously updated : 04/29/2022----
-# Tutorial: Migrate captured Event Hubs Avro data to Azure Synapse Analytics using Event Grid and Azure Functions
-Azure Event Hubs [Capture](./event-hubs-capture-overview.md) enables you to automatically capture the streaming data in Event Hubs in an Azure Blob storage or Azure Data Lake Storage. This tutorial shows you how to migrate captured Event Hubs data from Storage to Azure Synapse Analytics by using an Azure function that's triggered by [Event Grid](../event-grid/overview.md).
--
-## Next steps
-You can use powerful data visualization tools with your data warehouse to achieve actionable insights.
-
-This article shows how to use [Power BI with Azure Synapse Analytics](/power-bi/connect-data/service-azure-sql-data-warehouse-with-direct-connect)
expressroute How To Configure Connection Monitor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/how-to-configure-connection-monitor.md
Once you've configured the monitoring solution, continue to the next step of ins
1. Next, copy the **Workspace ID** and **Primary Key** to Notepad.
- :::image type="content" source="./media/how-to-configure-connection-monitor/copy-id-key.png" alt-text="Screenshot of workspace id and primary key.":::
+ :::image type="content" source="./media/how-to-configure-connection-monitor/copy-id-key.png" alt-text="Screenshot of workspace ID and primary key.":::
1. For Windows machines, download and run this PowerShell script [*EnableRules.ps1*](https://aka.ms/npmpowershellscript) in a PowerShell window with Administrator privileges. The PowerShell script will open the relevant firewall port for the TCP transactions.
- For Linux machines, the port number needs to be changed manually with the follow steps:
+ For Linux machines, the port number needs to be changed manually with the following steps:
* Navigate to path: /var/opt/microsoft/omsagent/npm_state.
    * Open file: npmdregistry
It's recommended that you install the Log Analytics agent on at least two server
1. Select the appropriate operating system below for the steps to install the Log Analytics agent on your servers.
- * [Windows](../azure-monitor/agents/agent-windows.md#install-agent-using-setup-wizard)
+ * [Windows](../azure-monitor/agents/agent-windows.md#install-the-agent)
    * [Linux](../azure-monitor/agents/agent-linux.md)
1. When complete, the Microsoft Monitoring Agent appears in the Control Panel. You can review your configuration, and [verify the agent connectivity](../azure-monitor/agents/agent-windows.md#verify-agent-connectivity-to-azure-monitor) to Azure Monitor logs.
hdinsight Hdinsight Hadoop R Scaler Sparkr https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-hadoop-r-scaler-sparkr.md
- Title: Use ScaleR and SparkR with Azure HDInsight
-description: Use ScaleR and SparkR for data manipulation and model development with ML Services on Azure HDInsight
--- Previously updated : 12/26/2019---
-# Combine ScaleR and SparkR in HDInsight
--
-This document shows how to predict flight arrival delays using a **ScaleR** logistic regression model. The example uses flight delay and weather data, joined using **SparkR**.
-
-Although both packages run on Apache Hadoop's Spark execution engine, they're blocked from in-memory data sharing as they each require their own respective Spark sessions. Until this issue is addressed in an upcoming version of ML Server, the workaround is to maintain non-overlapping Spark sessions, and to exchange data through intermediate files. The instructions here show that these requirements are straightforward to achieve.
-
-The code was originally written for ML Server running on Spark in an HDInsight cluster on Azure. But the concept of mixing the use of SparkR and ScaleR in one script is also valid in the context of on-premises environments.
-
-The steps in this document assume that you have an intermediate level of knowledge of R and the [ScaleR](/machine-learning-server/r/concept-what-is-revoscaler) library of ML Server. You're introduced to [SparkR](https://spark.apache.org/docs/2.1.0/sparkr.html) while walking through this scenario.
-
-## The airline and weather datasets
-
-The flight data is available from the [U.S. government archives](https://www.transtats.bts.gov/DL_SelectFields.asp?Table_ID=236).
-
-The weather data can be downloaded as zip files in raw form, by month, from the [National Oceanic and Atmospheric Administration repository](https://www.ncdc.noaa.gov/orders/qclcd/). For this example, download the data for May 2007 – December 2012. Use the hourly data files and `YYYYMMMstation.txt` file within each of the zips.
-
-## Setting up the Spark environment
-
-Use the following code to set up the Spark environment:
-
-```
-workDir <- '~'
-myNameNode <- 'default'
-myPort <- 0
-inputDataDir <- 'wasb://hdfs@myAzureAccount.blob.core.windows.net'
-hdfsFS <- RxHdfsFileSystem(hostName=myNameNode, port=myPort)
-
-# create a persistent Spark session to reduce startup times
-# (remember to stop it later!)
-
-sparkCC <- RxSpark(consoleOutput=TRUE, nameNode=myNameNode, port=myPort, persistentRun=TRUE)
-
-# create working directories
-
-rxHadoopMakeDir('/user')
-rxHadoopMakeDir('/user/RevoShare')
-rxHadoopMakeDir('/user/RevoShare/remoteuser')
-
-(dataDir <- '/share')
-rxHadoopMakeDir(dataDir)
-rxHadoopListFiles(dataDir)
-
-setwd(workDir)
-getwd()
-
-# version of rxRoc that runs in a local CC
-rxRoc <- function(...){
- rxSetComputeContext(RxLocalSeq())
- roc <- RevoScaleR::rxRoc(...)
- rxSetComputeContext(sparkCC)
- return(roc)
-}
-
-logmsg <- function(msg) { cat(format(Sys.time(), "%Y-%m-%d %H:%M:%S"),':',msg,'\n') }
-t0 <- proc.time()
-
-#..start
-
-logmsg('Start')
-(trackers <- system("mapred job -list-active-trackers", intern = TRUE))
-logmsg(paste('Number of task nodes=',length(trackers)))
-```
-
-Next, add `SPARK_HOME` to the search path for R packages. Adding it to the search path allows you to use SparkR, and initialize a SparkR session:
-
-```
-#..setup for use of SparkR
-
-logmsg('Initialize SparkR')
-
-.libPaths(c(file.path(Sys.getenv("SPARK_HOME"), "R", "lib"), .libPaths()))
-library(SparkR)
-
-sparkEnvir <- list(spark.executor.instances = '10',
- spark.yarn.executor.memoryOverhead = '8000')
-
-sc <- sparkR.init(
- sparkEnvir = sparkEnvir,
- sparkPackages = "com.databricks:spark-csv_2.10:1.3.0"
-)
-
-sqlContext <- sparkRSQL.init(sc)
-```
-
-## Preparing the weather data
-
-To prepare the weather data, subset it to the columns needed for modeling:
-
-- "Visibility"
-- "DryBulbCelsius"
-- "DewPointCelsius"
-- "RelativeHumidity"
-- "WindSpeed"
-- "Altimeter"
-
-Then add an airport code associated with the weather station and convert the measurements from local time to UTC.
-
-Begin by creating a file to map the weather station (WBAN) info to an airport code. The following code reads each of the hourly raw weather data files, subsets to the columns we need, merges the weather station-mapping file, adjusts the date times of measurements to UTC, and then writes out a new version of the file:
-
-```
-# Look up AirportID and Timezone for WBAN (weather station ID) and adjust time
-
-adjustTime <- function(dataList)
-{
- dataset0 <- as.data.frame(dataList)
-
- dataset1 <- base::merge(dataset0, wbanToAirIDAndTZDF1, by = "WBAN")
-
- if(nrow(dataset1) == 0) {
- dataset1 <- data.frame(
- Visibility = numeric(0),
- DryBulbCelsius = numeric(0),
- DewPointCelsius = numeric(0),
- RelativeHumidity = numeric(0),
- WindSpeed = numeric(0),
- Altimeter = numeric(0),
- AdjustedYear = numeric(0),
- AdjustedMonth = numeric(0),
- AdjustedDay = integer(0),
- AdjustedHour = integer(0),
- AirportID = integer(0)
- )
-
- return(dataset1)
- }
-
- Year <- as.integer(substr(dataset1$Date, 1, 4))
- Month <- as.integer(substr(dataset1$Date, 5, 6))
- Day <- as.integer(substr(dataset1$Date, 7, 8))
-
- Time <- dataset1$Time
- Hour <- ceiling(Time/100)
-
- Timezone <- as.integer(dataset1$TimeZone)
-
- adjustdate = as.POSIXlt(sprintf("%4d-%02d-%02d %02d:00:00", Year, Month, Day, Hour), tz = "UTC") + Timezone * 3600
-
- AdjustedYear = as.POSIXlt(adjustdate)$year + 1900
- AdjustedMonth = as.POSIXlt(adjustdate)$mon + 1
- AdjustedDay = as.POSIXlt(adjustdate)$mday
- AdjustedHour = as.POSIXlt(adjustdate)$hour
-
- AirportID = dataset1$AirportID
- Weather = dataset1[,c("Visibility", "DryBulbCelsius", "DewPointCelsius", "RelativeHumidity", "WindSpeed", "Altimeter")]
-
- data.set = data.frame(cbind(AdjustedYear, AdjustedMonth, AdjustedDay, AdjustedHour, AirportID, Weather))
-
- return(data.set)
-}
-
-wbanToAirIDAndTZDF <- read.csv("wban-to-airport-id-tz.csv")
-
-colInfo <- list(
- WBAN = list(type="integer"),
- Date = list(type="character"),
- Time = list(type="integer"),
- Visibility = list(type="numeric"),
- DryBulbCelsius = list(type="numeric"),
- DewPointCelsius = list(type="numeric"),
- RelativeHumidity = list(type="numeric"),
- WindSpeed = list(type="numeric"),
- Altimeter = list(type="numeric")
-)
-
-weatherDF <- RxTextData(file.path(inputDataDir, "WeatherRaw"), colInfo = colInfo)
-
-weatherDF1 <- RxTextData(file.path(inputDataDir, "Weather"), colInfo = colInfo,
- filesystem=hdfsFS)
-
-rxSetComputeContext("localpar")
-rxDataStep(weatherDF, outFile = weatherDF1, rowsPerRead = 50000, overwrite = T,
- transformFunc = adjustTime,
- transformObjects = list(wbanToAirIDAndTZDF1 = wbanToAirIDAndTZDF))
-```
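The local-to-UTC conversion inside `adjustTime` can be sketched outside R as well. This is a minimal Python illustration, assuming (as the R transform does) that the station's `TimeZone` value is an offset in hours added to local time to reach UTC:

```python
from datetime import datetime, timedelta

def adjust_to_utc(date_str: str, hour: int, tz_offset_hours: int) -> datetime:
    """Combine a YYYYMMDD date and an hour, then shift by the timezone offset."""
    local = datetime.strptime(date_str, "%Y%m%d") + timedelta(hours=hour)
    return local + timedelta(hours=tz_offset_hours)

# 23:00 local on 2012-05-01 with a +5 hour offset rolls over to the next day
utc = adjust_to_utc("20120501", 23, 5)
assert (utc.month, utc.day, utc.hour) == (5, 2, 4)
```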
-
-## Importing the airline and weather data to Spark DataFrames
-
-Now we use the SparkR [read.df()](https://spark.apache.org/docs/3.3.0/api/R/reference/read.df.html) function to import the weather and airline data to Spark DataFrames. This function, like many other Spark methods, executes lazily, meaning that the operation is queued for execution but not run until required.
-
-```
-airPath <- file.path(inputDataDir, "AirOnTime08to12CSV")
-weatherPath <- file.path(inputDataDir, "Weather") # pre-processed weather data
-rxHadoopListFiles(airPath)
-rxHadoopListFiles(weatherPath)
-
-# create a SparkR DataFrame for the airline data
-
-logmsg('create a SparkR DataFrame for the airline data')
-# use inferSchema = "false" for more robust parsing
-airDF <- read.df(sqlContext, airPath, source = "com.databricks.spark.csv",
- header = "true", inferSchema = "false")
-
-# Create a SparkR DataFrame for the weather data
-
-logmsg('create a SparkR DataFrame for the weather data')
-weatherDF <- read.df(sqlContext, weatherPath, source = "com.databricks.spark.csv",
- header = "true", inferSchema = "true")
-```
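Lazy evaluation, as used by `read.df()` above, means work is only performed when a result is actually demanded. A small Python sketch of the same idea, using generators (the names here are illustrative only):

```python
log = []

def read_rows():
    log.append("read")          # records when the data is actually scanned
    return [1, 2, 3]

def lazy_read():
    # deferred: read_rows() runs only when the generator is consumed
    yield from read_rows()

rows = lazy_read()              # nothing has been read yet
assert log == []
total = sum(rows)               # consumption triggers the read
assert log == ["read"] and total == 6
```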
-
-## Data cleansing and transformation
-
-Next, we clean up the imported airline data by renaming columns. We keep only the variables needed, and round scheduled departure times down to the nearest hour to enable merging with the latest weather data at departure:
-
-```
-logmsg('clean the airline data')
-airDF <- rename(airDF,
- ArrDel15 = airDF$ARR_DEL15,
- Year = airDF$YEAR,
- Month = airDF$MONTH,
- DayofMonth = airDF$DAY_OF_MONTH,
- DayOfWeek = airDF$DAY_OF_WEEK,
- Carrier = airDF$UNIQUE_CARRIER,
- OriginAirportID = airDF$ORIGIN_AIRPORT_ID,
- DestAirportID = airDF$DEST_AIRPORT_ID,
- CRSDepTime = airDF$CRS_DEP_TIME,
- CRSArrTime = airDF$CRS_ARR_TIME
-)
-
-# Select desired columns from the flight data.
-varsToKeep <- c("ArrDel15", "Year", "Month", "DayofMonth", "DayOfWeek", "Carrier", "OriginAirportID", "DestAirportID", "CRSDepTime", "CRSArrTime")
-airDF <- select(airDF, varsToKeep)
-
-# Apply schema
-coltypes(airDF) <- c("character", "integer", "integer", "integer", "integer", "character", "integer", "integer", "integer", "integer")
-
-# Round down scheduled departure time to full hour.
-airDF$CRSDepTime <- floor(airDF$CRSDepTime / 100)
-```
-
-Now we perform similar operations on the weather data:
-
-```
-# Average weather readings by hour
-logmsg('clean the weather data')
-weatherDF <- agg(groupBy(weatherDF, "AdjustedYear", "AdjustedMonth", "AdjustedDay", "AdjustedHour", "AirportID"), Visibility="avg",
- DryBulbCelsius="avg", DewPointCelsius="avg", RelativeHumidity="avg", WindSpeed="avg", Altimeter="avg"
- )
-
-weatherDF <- rename(weatherDF,
- Visibility = weatherDF$'avg(Visibility)',
- DryBulbCelsius = weatherDF$'avg(DryBulbCelsius)',
- DewPointCelsius = weatherDF$'avg(DewPointCelsius)',
- RelativeHumidity = weatherDF$'avg(RelativeHumidity)',
- WindSpeed = weatherDF$'avg(WindSpeed)',
- Altimeter = weatherDF$'avg(Altimeter)'
-)
-```
-
-## Joining the weather and airline data
-
-We now use the SparkR [join()](https://spark.apache.org/docs/3.3.0/api/R/reference/join.html) function to do a left outer join of the airline and weather data by departure AirportID and datetime. The outer join allows us to retain all the airline data records even if there's no matching weather data. Following the join, we remove some redundant columns, and rename the kept columns to remove the incoming DataFrame prefix introduced by the join.
-
-```
-logmsg('Join airline data with weather at Origin Airport')
-joinedDF <- SparkR::join(
- airDF,
- weatherDF,
- airDF$OriginAirportID == weatherDF$AirportID &
- airDF$Year == weatherDF$AdjustedYear &
- airDF$Month == weatherDF$AdjustedMonth &
- airDF$DayofMonth == weatherDF$AdjustedDay &
- airDF$CRSDepTime == weatherDF$AdjustedHour,
- joinType = "left_outer"
-)
-
-# Remove redundant columns
-vars <- names(joinedDF)
-varsToDrop <- c('AdjustedYear', 'AdjustedMonth', 'AdjustedDay', 'AdjustedHour', 'AirportID')
-varsToKeep <- vars[!(vars %in% varsToDrop)]
-joinedDF1 <- select(joinedDF, varsToKeep)
-
-joinedDF2 <- rename(joinedDF1,
- VisibilityOrigin = joinedDF1$Visibility,
- DryBulbCelsiusOrigin = joinedDF1$DryBulbCelsius,
- DewPointCelsiusOrigin = joinedDF1$DewPointCelsius,
- RelativeHumidityOrigin = joinedDF1$RelativeHumidity,
- WindSpeedOrigin = joinedDF1$WindSpeed,
- AltimeterOrigin = joinedDF1$Altimeter
-)
-```
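The left-outer-join semantics described above, where every airline record is kept and missing weather becomes a null, can be sketched with hypothetical toy records:

```python
# Toy sketch of a left outer join on (airport, hour); the flight with no
# matching weather reading is retained with a missing value.
flights = [
    {"airport": 1, "hour": 9,  "delay": 1},
    {"airport": 2, "hour": 14, "delay": 0},   # no weather for this key
]
weather = {(1, 9): {"Visibility": 10.0}}

joined = [
    {**f, "Visibility": weather.get((f["airport"], f["hour"]), {}).get("Visibility")}
    for f in flights
]
assert len(joined) == 2                 # all airline rows survive the join
assert joined[1]["Visibility"] is None  # unmatched row gets a null
```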
-
-In a similar fashion, we join the weather and airline data based on arrival AirportID and datetime:
-
-```
-logmsg('Join airline data with weather at Destination Airport')
-joinedDF3 <- SparkR::join(
- joinedDF2,
- weatherDF,
- airDF$DestAirportID == weatherDF$AirportID &
- airDF$Year == weatherDF$AdjustedYear &
- airDF$Month == weatherDF$AdjustedMonth &
- airDF$DayofMonth == weatherDF$AdjustedDay &
- airDF$CRSDepTime == weatherDF$AdjustedHour,
- joinType = "left_outer"
-)
-
-# Remove redundant columns
-vars <- names(joinedDF3)
-varsToDrop <- c('AdjustedYear', 'AdjustedMonth', 'AdjustedDay', 'AdjustedHour', 'AirportID')
-varsToKeep <- vars[!(vars %in% varsToDrop)]
-joinedDF4 <- select(joinedDF3, varsToKeep)
-
-joinedDF5 <- rename(joinedDF4,
- VisibilityDest = joinedDF4$Visibility,
- DryBulbCelsiusDest = joinedDF4$DryBulbCelsius,
- DewPointCelsiusDest = joinedDF4$DewPointCelsius,
- RelativeHumidityDest = joinedDF4$RelativeHumidity,
- WindSpeedDest = joinedDF4$WindSpeed,
- AltimeterDest = joinedDF4$Altimeter
- )
-```
-
-## Save results to CSV for exchange with ScaleR
-
-That completes the joins we need to do with SparkR. We save the data from the final Spark DataFrame "joinedDF5" to a CSV for input to ScaleR and then close out the SparkR session. We explicitly tell SparkR to save the resultant CSV in 80 separate partitions to enable sufficient parallelism in ScaleR processing:
-
-```
-logmsg('output the joined data from Spark to CSV')
-joinedDF5 <- repartition(joinedDF5, 80) # write.df below will produce this many CSVs
-
-# write result to directory of CSVs
-write.df(joinedDF5, file.path(dataDir, "joined5Csv"), "com.databricks.spark.csv", "overwrite", header = "true")
-
-# We can shut down the SparkR Spark context now
-sparkR.stop()
-
-# remove non-data files
-rxHadoopRemove(file.path(dataDir, "joined5Csv/_SUCCESS"))
-```
-
-## Import to XDF for use by ScaleR
-
-We could use the CSV file of joined airline and weather data as-is for modeling via a ScaleR text data source. But we import it to XDF first, since it's more efficient when running multiple operations on the dataset:
-
-```
-logmsg('Import the CSV to compressed, binary XDF format')
-
-# set the Spark compute context for ML Services
-rxSetComputeContext(sparkCC)
-rxGetComputeContext()
-
-colInfo <- list(
- ArrDel15 = list(type="numeric"),
- Year = list(type="factor"),
- Month = list(type="factor"),
- DayofMonth = list(type="factor"),
- DayOfWeek = list(type="factor"),
- Carrier = list(type="factor"),
- OriginAirportID = list(type="factor"),
- DestAirportID = list(type="factor"),
- RelativeHumidityOrigin = list(type="numeric"),
- AltimeterOrigin = list(type="numeric"),
- DryBulbCelsiusOrigin = list(type="numeric"),
- WindSpeedOrigin = list(type="numeric"),
- VisibilityOrigin = list(type="numeric"),
- DewPointCelsiusOrigin = list(type="numeric"),
- RelativeHumidityDest = list(type="numeric"),
- AltimeterDest = list(type="numeric"),
- DryBulbCelsiusDest = list(type="numeric"),
- WindSpeedDest = list(type="numeric"),
- VisibilityDest = list(type="numeric"),
- DewPointCelsiusDest = list(type="numeric")
-)
-
-joinedDF5Txt <- RxTextData(file.path(dataDir, "joined5Csv"),
- colInfo = colInfo, fileSystem = hdfsFS)
-rxGetInfo(joinedDF5Txt)
-
-destData <- RxXdfData(file.path(dataDir, "joined5XDF"), fileSystem = hdfsFS)
-
-rxImport(inData = joinedDF5Txt, destData, overwrite = TRUE)
-
-rxGetInfo(destData, getVarInfo = T)
-
-# File name: /user/RevoShare/dev/delayDataLarge/joined5XDF
-# Number of composite data files: 80
-# Number of observations: 148619655
-# Number of variables: 22
-# Number of blocks: 320
-# Compression type: zlib
-# Variable information:
-# Var 1: ArrDel15, Type: numeric, Low/High: (0.0000, 1.0000)
-# Var 2: Year
-# 26 factor levels: 1987 1988 1989 1990 1991 ... 2008 2009 2010 2011 2012
-# Var 3: Month
-# 12 factor levels: 10 11 12 1 2 ... 5 6 7 8 9
-# Var 4: DayofMonth
-# 31 factor levels: 1 3 4 5 7 ... 29 30 2 18 31
-# Var 5: DayOfWeek
-# 7 factor levels: 4 6 7 1 3 2 5
-# Var 6: Carrier
-# 30 factor levels: PI UA US AA DL ... HA F9 YV 9E VX
-# Var 7: OriginAirportID
-# 374 factor levels: 15249 12264 11042 15412 13930 ... 13341 10559 14314 11711 10558
-# Var 8: DestAirportID
-# 378 factor levels: 13303 14492 10721 11057 13198 ... 14802 11711 11931 12899 10559
-# Var 9: CRSDepTime, Type: integer, Low/High: (0, 24)
-# Var 10: CRSArrTime, Type: integer, Low/High: (0, 2400)
-# Var 11: RelativeHumidityOrigin, Type: numeric, Low/High: (0.0000, 100.0000)
-# Var 12: AltimeterOrigin, Type: numeric, Low/High: (28.1700, 31.1600)
-# Var 13: DryBulbCelsiusOrigin, Type: numeric, Low/High: (-46.1000, 47.8000)
-# Var 14: WindSpeedOrigin, Type: numeric, Low/High: (0.0000, 81.0000)
-# Var 15: VisibilityOrigin, Type: numeric, Low/High: (0.0000, 90.0000)
-# Var 16: DewPointCelsiusOrigin, Type: numeric, Low/High: (-41.7000, 29.0000)
-# Var 17: RelativeHumidityDest, Type: numeric, Low/High: (0.0000, 100.0000)
-# Var 18: AltimeterDest, Type: numeric, Low/High: (28.1700, 31.1600)
-# Var 19: DryBulbCelsiusDest, Type: numeric, Low/High: (-46.1000, 53.9000)
-# Var 20: WindSpeedDest, Type: numeric, Low/High: (0.0000, 136.0000)
-# Var 21: VisibilityDest, Type: numeric, Low/High: (0.0000, 88.0000)
-# Var 22: DewPointCelsiusDest, Type: numeric, Low/High: (-43.0000, 29.0000)
-
-finalData <- RxXdfData(file.path(dataDir, "joined5XDF"), fileSystem = hdfsFS)
-
-```
-
-## Splitting data for training and test
-
-We use rxDataStep to split out the 2012 data for testing and keep the rest for training:
-
-```
-# split out the training data
-
-logmsg('split out training data as all data except year 2012')
-trainDS <- RxXdfData( file.path(dataDir, "finalDataTrain" ),fileSystem = hdfsFS)
-
-rxDataStep( inData = finalData, outFile = trainDS,
- rowSelection = ( Year != 2012 ), overwrite = T )
-
-# split out the testing data
-
-logmsg('split out the test data for year 2012')
-testDS <- RxXdfData( file.path(dataDir, "finalDataTest" ), fileSystem = hdfsFS)
-
-rxDataStep( inData = finalData, outFile = testDS,
- rowSelection = ( Year == 2012 ), overwrite = T )
-
-rxGetInfo(trainDS)
-rxGetInfo(testDS)
-```
-
-## Train and test a logistic regression model
-
-Now we're ready to build a model. To see the influence of weather data on delay in the arrival time, we use ScaleR's logistic regression routine. We use it to model whether an arrival delay of greater than 15 minutes is influenced by the weather at the departure and arrival airports:
-
-```
-logmsg('train a logistic regression model for Arrival Delay > 15 minutes')
-formula <- as.formula(ArrDel15 ~ Year + Month + DayofMonth + DayOfWeek + Carrier +
- OriginAirportID + DestAirportID + CRSDepTime + CRSArrTime +
- RelativeHumidityOrigin + AltimeterOrigin + DryBulbCelsiusOrigin +
- WindSpeedOrigin + VisibilityOrigin + DewPointCelsiusOrigin +
- RelativeHumidityDest + AltimeterDest + DryBulbCelsiusDest +
- WindSpeedDest + VisibilityDest + DewPointCelsiusDest
- )
-
-# Use the scalable rxLogit() function but set max iterations to 3 for the purposes of
-# this exercise
-
-logitModel <- rxLogit(formula, data = trainDS, maxIterations = 3)
-
-base::summary(logitModel)
-```
-
-Now let's see how it does on the test data by making some predictions and looking at ROC and AUC.
-
-```
-# Predict over test data (Logistic Regression).
-
-logmsg('predict over the test data')
-logitPredict <- RxXdfData(file.path(dataDir, "logitPredict"), fileSystem = hdfsFS)
-
-# Use the scalable rxPredict() function
-
-rxPredict(logitModel, data = testDS, outData = logitPredict,
- extraVarsToWrite = c("ArrDel15"),
- type = 'response', overwrite = TRUE)
-
-# Calculate ROC and Area Under the Curve (AUC).
-
-logmsg('calculate the roc and auc')
-logitRoc <- rxRoc("ArrDel15", "ArrDel15_Pred", logitPredict)
-logitAuc <- rxAuc(logitRoc)
-head(logitAuc)
-logitAuc
-
-plot(logitRoc)
-```
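For intuition about what `rxRoc()` and `rxAuc()` report: AUC equals the probability that a randomly chosen positive example is scored above a randomly chosen negative one. A hedged Python sketch of that definition (not the ScaleR implementation):

```python
def auc(labels, scores):
    """AUC via pairwise ranking; ties count as half a win."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

assert auc([1, 1, 0, 0], [0.9, 0.8, 0.3, 0.2]) == 1.0  # perfect ranking
assert auc([1, 0], [0.5, 0.5]) == 0.5                   # uninformative scores
```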
-
-## Scoring elsewhere
-
-We can also use the model for scoring data on another platform, by saving it to an RDS file and then transferring and importing that RDS into a destination scoring environment such as Microsoft SQL Server R Services. It's important to ensure that the factor levels of the data to be scored match those on which the model was built. That match can be achieved by extracting and saving the column information associated with the modeling data via ScaleR's `rxCreateColInfo()` function and then applying that column information to the input data source for prediction. In the following code example, we save a few rows of the test dataset and extract and use the column information from this sample in the prediction script:
-
-```
-# save the model and a sample of the test dataset
-
-logmsg('save serialized version of the model and a sample of the test data')
-rxSetComputeContext('localpar')
-saveRDS(logitModel, file = "logitModel.rds")
-testDF <- head(testDS, 1000)
-saveRDS(testDF , file = "testDF.rds" )
-list.files()
-
-rxHadoopListFiles(file.path(inputDataDir,''))
-rxHadoopListFiles(dataDir)
-
-# stop the spark engine
-rxStopEngine(sparkCC)
-
-logmsg('Done.')
-elapsed <- (proc.time() - t0)[3]
-logmsg(paste('Elapsed time=',sprintf('%6.2f',elapsed),'(sec)\n\n'))
-```
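Why matching factor levels matter: a category-to-code mapping rebuilt from the scoring data alone can disagree with the one the model was trained on. A small Python illustration with hypothetical carrier codes:

```python
# Levels captured at training time (what rxCreateColInfo() preserves, in spirit)
train_levels = ["AA", "DL", "UA"]
encode = {c: i for i, c in enumerate(train_levels)}

scoring_batch = ["UA", "DL"]
codes = [encode[c] for c in scoring_batch]        # consistent with training

# Rebuilding levels from the scoring data alone yields different codes
naive = {c: i for i, c in enumerate(sorted(set(scoring_batch)))}
naive_codes = [naive[c] for c in scoring_batch]

assert codes == [2, 1]
assert naive_codes == [1, 0]      # same carriers, different numeric codes
```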
-
-## Summary
-
-In this article, we've shown how it's possible to combine use of SparkR for data manipulation with ScaleR for model development in Hadoop Spark. This scenario requires that you maintain separate Spark sessions, only running one session at a time, and exchange data via CSV files. Although straightforward, this process should be even easier in an upcoming ML Services release, when SparkR and ScaleR can share a Spark session and so share Spark DataFrames.
-
-## Next steps and more information
-
-- For more information on use of ML Server on Apache Spark, see the [Getting started guide](/machine-learning-server/r/how-to-revoscaler-spark).
-
-For more information on use of SparkR, see:
-
-- [Apache SparkR document](https://spark.apache.org/docs/2.1.0/sparkr.html)
-- [SparkR Overview](/azure/databricks/spark/latest/sparkr/overview)
key-vault Alert https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/general/alert.md
You can choose between these alert types:
> [!IMPORTANT]
> It can take up to 10 minutes for newly configured alerts to start sending notifications.
-This article focuses on alerts for Key Vault. For information about Key Vault insights, which combines both logs and metrics to provide a global monitoring solution, see [Monitoring your key vault with Key Vault insights](../../azure-monitor/insights/key-vault-insights-overview.md#introduction-to-key-vault-insights).
+This article focuses on alerts for Key Vault. For information about Key Vault insights, which combines both logs and metrics to provide a global monitoring solution, see [Monitoring your key vault with Key Vault insights](../key-vault-insights-overview.md#introduction-to-key-vault-insights).
## Configure an action group
key-vault Logging https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/general/logging.md
You can access your logging information 10 minutes (at most) after the key vault
* Use standard Azure access control methods in your storage account to secure your logs by restricting who can access them. * Delete logs that you no longer want to keep in your storage account.
-For overview information about Key Vault, see [What is Azure Key Vault?](overview.md). For information about where Key Vault is available, see the [pricing page](https://azure.microsoft.com/pricing/details/key-vault/). For information about using [Azure Monitor for Key Vault](../../azure-monitor/insights/key-vault-insights-overview.md).
+For overview information about Key Vault, see [What is Azure Key Vault?](overview.md). For information about where Key Vault is available, see the [pricing page](https://azure.microsoft.com/pricing/details/key-vault/). For information about using Azure Monitor for Key Vault, see [Azure Monitor for Key Vault](../key-vault-insights-overview.md).
## Interpret your Key Vault logs
The following table lists the **operationName** values and corresponding REST AP
You can use the Key Vault solution in Azure Monitor logs to review Key Vault `AuditEvent` logs. In Azure Monitor logs, you use log queries to analyze data and get the information you need.
-For more information, including how to set this up, see [Azure Key Vault in Azure Monitor](../../azure-monitor/insights/key-vault-insights-overview.md).
+For more information, including how to set this up, see [Azure Key Vault in Azure Monitor](../key-vault-insights-overview.md).
For understanding how to analyze logs, see [Sample kusto log queries](./monitor-key-vault.md#analyzing-logs)
key-vault Monitor Key Vault https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/general/monitor-key-vault.md
You can select "additional metrics" (or the "Metrics" tab in the left-hand sideb
Some services in Azure have a special focused pre-built monitoring dashboard in the Azure portal that provides a starting point for monitoring your service. These special dashboards are called "insights".
-Key Vault insights provides comprehensive monitoring of your key vaults by delivering a unified view of your Key Vault requests, performance, failures, and latency. For full details, see [Monitoring your key vault service with Key Vault insights](../../azure-monitor/insights/key-vault-insights-overview.md).
+Key Vault insights provides comprehensive monitoring of your key vaults by delivering a unified view of your Key Vault requests, performance, failures, and latency. For full details, see [Monitoring your key vault service with Key Vault insights](../key-vault-insights-overview.md).
## Monitoring data
key-vault Versions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/general/versions.md
Here's what's new with Azure Key Vault. New features and improvements are also a
## June 2020
-Azure Monitor for Key Vault is now in preview. Azure Monitor provides comprehensive monitoring of your key vaults by delivering a unified view of your Key Vault requests, performance, failures, and latency. For more information, see [Azure Monitor for Key Vault (preview).](../../azure-monitor/insights/key-vault-insights-overview.md).
+Azure Monitor for Key Vault is now in preview. Azure Monitor provides comprehensive monitoring of your key vaults by delivering a unified view of your Key Vault requests, performance, failures, and latency. For more information, see [Azure Monitor for Key Vault (preview)](../key-vault-insights-overview.md).
## May 2020
key-vault Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/general/whats-new.md
Microsoft is updating Azure services to use TLS certificates from a different se
## June 2020
-Azure Monitor for Key Vault is now in preview. Azure Monitor provides comprehensive monitoring of your key vaults by delivering a unified view of your Key Vault requests, performance, failures, and latency. For more information, see [Azure Monitor for Key Vault (preview).](../../azure-monitor/insights/key-vault-insights-overview.md).
+Azure Monitor for Key Vault is now in preview. Azure Monitor provides comprehensive monitoring of your key vaults by delivering a unified view of your Key Vault requests, performance, failures, and latency. For more information, see [Azure Monitor for Key Vault (preview)](../key-vault-insights-overview.md).
## May 2020
key-vault Key Vault Insights Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/key-vault-insights-overview.md
+
+ Title: Monitor Key Vault with Key Vault insights | Microsoft Docs
+description: This article describes the Key Vault insights.
+++++ Last updated : 09/10/2020++++
+# Monitoring your key vault service with Key Vault insights
+Key Vault insights provides comprehensive monitoring of your key vaults by delivering a unified view of your Key Vault requests, performance, failures, and latency.
+This article helps you understand how to onboard and customize the Key Vault insights experience.
+
+## Introduction to Key Vault insights
+
+Before jumping into the experience, you should understand how it presents and visualizes information:
+- **At scale perspective** showing a snapshot view of performance based on the requests, breakdown of failures, and an overview of the operations and latency.
+- **Drill down analysis** of a particular key vault to perform detailed analysis.
+- **Customizable** where you can change which metrics you want to see, modify or set thresholds that align with your limits, and save your own workbook. Charts in the workbook can be pinned to Azure dashboards.
+
+Key Vault insights combines both logs and metrics to provide a global monitoring solution. All users can access the metrics-based monitoring data; however, the inclusion of logs-based visualizations may require users to [enable logging of their Azure Key Vault](./general/logging.md).
+
+## View from Azure Monitor
+
+From Azure Monitor, you can view request, latency, and failure details from multiple key vaults in your subscription, and help identify performance problems and throttling scenarios.
+
+To view the utilization and operations of your key vaults across all your subscriptions, perform the following steps:
+
+1. Sign in to the [Azure portal](https://portal.azure.com/).
+
+2. Select **Monitor** from the left-hand pane in the Azure portal, and under the Insights section, select **Key Vaults**.
+
+![Screenshot of overview experience with multiple graphs](./media/key-vaults-insights-overview/overview.png)
+
+## Overview workbook
+
+On the Overview workbook for the selected subscription, the table displays interactive key vault metrics for key vaults grouped within the subscription. You can filter results based on the options you select from the following drop-down lists:
+
+* Subscriptions – only subscriptions that contain key vaults are listed.
+
+* Key Vaults – by default, up to five key vaults are pre-selected. If you select all or multiple key vaults in the scope selector, up to 200 key vaults are returned. For example, if you have a total of 573 key vaults across three selected subscriptions, only 200 vaults are displayed.
+
+* Time Range – by default, displays the last 24 hours of information based on the corresponding selections.
+
+The counter tile, under the drop-down list, rolls up the total number of key vaults in the selected subscriptions and reflects how many are selected. There are conditional color-coded heatmaps for the columns of the workbook that report request, failure, and latency metrics. The deepest color represents the highest value, and lighter colors represent lower values.
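Conceptually, the heatmap shading described above is a min-max normalization of each column's values. A minimal illustrative sketch (not the workbook's actual rendering code):

```python
def heatmap_shades(values):
    """Map metric values to a 0.0-1.0 shade intensity, mirroring how the
    workbook's conditional heatmaps behave conceptually: the highest value
    gets the deepest color (1.0), the lowest the lightest (0.0)."""
    lo, hi = min(values), max(values)
    if hi == lo:
        return [1.0 for _ in values]  # all values equal: single shade
    return [(v - lo) / (hi - lo) for v in values]
```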
+
+## Failures workbook
+
+Select **Failures** at the top of the page to open the Failures tab. It shows the API hits and their frequency over time, along with the count of each response code.
+
+![Screenshot of failures workbook](./media/key-vaults-insights-overview/failures.png)
+
+There is conditional color-coding, or heatmaps, for columns in the workbook that report API hit metrics, shaded in blue. The deepest color represents the highest value, and lighter colors represent lower values.
+
+The workbook displays Successes (2xx status codes), Authentication Errors (401/403 status codes), Throttling (429 status codes), and Other Failures (4xx status codes).
+
+To better understand what each status code represents, we recommend reading the documentation on [Azure Key Vault status and response codes](./general/authentication-requests-and-responses.md).
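The bucketing above can be sketched as a small classifier. This is an illustrative mapping of status codes to the workbook's categories, not code from the workbook itself:

```python
def classify_status(code: int) -> str:
    """Categorize an HTTP status code into the Failures workbook's buckets:
    Successes (2xx), Authentication Errors (401/403), Throttling (429),
    and Other Failures (remaining 4xx)."""
    if 200 <= code < 300:
        return "Success"
    if code in (401, 403):
        return "Authentication Error"
    if code == 429:
        return "Throttling"
    if 400 <= code < 500:
        return "Other Failure"
    return "Uncategorized"
```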
+
+## View from a Key Vault resource
+
+To access Key Vault insights directly from a key vault:
+
+1. In the Azure portal, select Key Vaults.
+
+2. From the list, choose a key vault. In the monitoring section, choose Insights.
+
+These views are also accessible by selecting the resource name of a key vault from the Azure Monitor level workbook.
+
+![Screenshot of view from a key vault resource](./media/key-vaults-insights-overview/key-vault-resource-view.png)
+
+The **Overview** workbook for the key vault displays several performance metrics that help you quickly assess:
+
+- Interactive performance charts showing the most essential details related to key vault transactions, latency, and availability.
+
+- Metrics and status tiles highlighting service availability, total count of transactions to the key vault resource, and overall latency.
+
+Selecting any of the other tabs for **Failures** or **Operations** opens the respective workbooks.
+
+![Screenshot of failures view](./media/key-vaults-insights-overview/resource-failures.png)
+
+The Failures workbook breaks down the results of all key vault requests in the selected time frame and categorizes them as Successes (2xx), Authentication Errors (401/403), Throttling (429), and Other Failures.
+
+![Screenshot of operations view](./media/key-vaults-insights-overview/operations.png)
+
+The Operations workbook lets users deep dive into the full details of all transactions, which can be filtered by result status using the top-level tiles.
+
+![Screenshot that shows the Operations workbook that contains full details of all transactions.](./media/key-vaults-insights-overview/info.png)
+
+Users can also scope views to specific transaction types in the upper table, which dynamically updates the lower table, where users can view full operation details in a pop-up context pane.
+
+>[!NOTE]
+> Users must have diagnostic settings enabled to view this workbook. To learn more about enabling diagnostic settings, see [Azure Key Vault logging](./general/logging.md).
+
+## Pin and export
+
+You can pin any one of the metric sections to an Azure dashboard by selecting the pushpin icon at the top right of the section.
+
+The multi-subscription and key vaults overview or failures workbooks support exporting the results in Excel format by selecting the download icon to the left of the pushpin icon.
+
+![Screenshot of pin icon selected](./media/key-vaults-insights-overview/pin.png)
+
+## Customize Key Vault insights
+
+This section highlights common scenarios for editing the workbook to customize it for your data analytics needs:
+* Scope the workbook to always select a particular subscription or key vault(s)
+* Change metrics in the grid
+* Change the requests threshold
+* Change the color rendering
+
+To begin customizing, enable editing mode by selecting the **Customize** button from the top toolbar.
+
+![Screenshot of customize button](./media/key-vaults-insights-overview/customize.png)
+
+Customizations are saved to a custom workbook to prevent overwriting the default configuration in our published workbook. Workbooks are saved within a resource group, either in the My Reports section that is private to you or in the Shared Reports section that's accessible to everyone with access to the resource group. After you save the custom workbook, you need to go to the workbook gallery to launch it.
+
+![Screenshot of the workbook gallery](./media/key-vaults-insights-overview/gallery.png)
+
+### Specifying a subscription or key vault
+
+You can configure the multi-subscription and key vault Overview or Failures workbooks to scope to particular subscriptions or key vaults on every run by performing the following steps:
+
+1. Select **Monitor** from the portal and then select **Key Vaults** from the left-hand pane.
+2. On the **Overview** workbook, from the command bar select **Edit**.
+3. From the **Subscriptions** drop-down list, select one or more subscriptions to use as the default. The workbook supports selecting up to a total of 10 subscriptions.
+4. From the **Key Vaults** drop-down list, select one or more key vaults to use as the default. The workbook supports selecting up to a total of 200 key vaults.
+5. Select **Save as** from the command bar to save a copy of the workbook with your customizations, and then click **Done editing** to return to reading mode.
+
+## Troubleshooting
+
+For general troubleshooting guidance, refer to the dedicated workbook-based insights [troubleshooting article](../azure-monitor/insights/troubleshoot-workbooks.md).
+
+This section helps you diagnose and troubleshoot some of the common issues you may encounter when using Key Vault insights. Use the list below to locate the information relevant to your specific issue.
+
+### Resolving performance issues or failures
+
+To help troubleshoot any key vault related issues you identify with Key Vault insights, see the [Azure Key Vault documentation](index.yml).
+
+### Why can I only see 200 key vaults
+
+There is a limit of 200 key vaults that can be selected and viewed. Regardless of the number of selected subscriptions, the number of selected key vaults has a limit of 200.
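The cap behaves like a simple truncation of the selected list. A trivial sketch, purely illustrative of the limit described above:

```python
def cap_selection(selected_vaults, limit=200):
    """Illustrates the workbook's selection cap: regardless of how many
    key vaults are selected across subscriptions, only the first `limit`
    (200 by default) are displayed."""
    return selected_vaults[:limit]
```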
+
+### Why don't I see all my subscriptions in the subscription picker
+
+Only subscriptions that contain key vaults are shown, limited to the subscriptions chosen in the "Directory + Subscription" filter in the Azure portal header.
+
+![Screenshot of subscription filter](./media/key-vaults-insights-overview/Subscriptions.png)
+
+### I want to make changes or add additional visualizations to Key Vault Insights, how do I do so
+
+To make changes, select **Edit Mode** to modify the workbook, then save your work as a new workbook tied to a designated subscription and resource group.
+
+### What is the time-grain once we pin any part of the Workbooks
+
+The workbook uses the "Auto" time grain, so the grain depends on the time range selected.
+
+### What is the time range when any part of the workbook is pinned
+
+The time range will depend on the dashboard settings.
+
+### What if I want to see other data or make my own visualizations? How can I make changes to the Key Vault Insights
+
+You can edit the existing workbook in edit mode and then save your work as a new workbook that contains all your changes.
+
+## Next steps
+
+Learn the scenarios workbooks are designed to support, how to author new and customize existing reports, and more by reviewing [Create interactive reports with Azure Monitor workbooks](../azure-monitor/visualize/workbooks-overview.md).
key-vault Logging https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/managed-hsm/logging.md
Individual blobs are stored as text, formatted as a JSON. Let's look at an examp
You can use the Key Vault solution in Azure Monitor logs to review Managed HSM **AuditEvent** logs. In Azure Monitor logs, you use log queries to analyze data and get the information you need.
-For more information, including how to set this up, see [Azure Key Vault in Azure Monitor](../../azure-monitor/insights/key-vault-insights-overview.md).
+For more information, including how to set this up, see [Azure Key Vault in Azure Monitor](../key-vault-insights-overview.md).
## Next steps
load-balancer Gateway Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/gateway-overview.md
Title: Gateway load balancer (Preview)
+ Title: Gateway load balancer
description: Overview of gateway load balancer SKU for Azure Load Balancer.
machine-learning How To Auto Train Image Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-auto-train-image-models.md
When you've configured your AutoML Job to the desired settings, you can submit t
The automated ML training runs generates output model files, evaluation metrics, logs and deployment artifacts like the scoring file and the environment file which can be viewed from the outputs and logs and metrics tab of the child runs. > [!TIP]
-> Check how to navigate to the run results from the [View run results](how-to-understand-automated-ml.md#view-run-results) section.
+> Check how to navigate to the run results from the [View run results](how-to-understand-automated-ml.md#view-job-results) section.
For definitions and examples of the performance charts and metrics provided for each run, see [Evaluate automated machine learning experiment results](how-to-understand-automated-ml.md#metrics-for-image-models-preview)
machine-learning How To Move Workspace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-move-workspace.md
Moving the workspace enables you to migrate the workspace and its contents as a
| Workspace contents | Moved with workspace | | -- |:--:| | Datasets | Yes |
-| Experiment runs | Yes |
+| Experiment jobs | Yes |
| Environments | Yes | | Models and other assets stored in the workspace | Yes | | Compute resources | No |
Moving the workspace enables you to migrate the workspace and its contents as a
| Resource provider | Why it's needed | | -- | -- |
- | __Microsoft.DocumentDB/databaseAccounts__ | Azure CosmosDB instance that logs metadata for the workspace. |
+ | __Microsoft.DocumentDB/databaseAccounts__ | Azure Cosmos DB instance that logs metadata for the workspace. |
| __Microsoft.Search/searchServices__ | Azure Search provides indexing capabilities for the workspace. | For information on registering resource providers, see [Resolve errors for resource provider registration](/azure/azure-resource-manager/templates/error-register-resource-provider).
Moving the workspace enables you to migrate the workspace and its contents as a
* Workspace move is not meant for replicating workspaces, or moving individual assets such as models or datasets from one workspace to another. * Workspace move doesn't support migration across Azure regions or Azure Active Directory tenants.
-* The workspace mustn't be in use during the move operation. Verify that all experiment runs, data profiling runs, and labeling projects have completed. Also verify that inference endpoints aren't being invoked.
+* The workspace mustn't be in use during the move operation. Verify that all experiment jobs, data profiling jobs, and labeling projects have completed. Also verify that inference endpoints aren't being invoked.
* The workspace will become unavailable during the move. * Before the move, you must delete or detach computes and inference endpoints from the workspace.
Moving the workspace enables you to migrate the workspace and its contents as a
az account set -s origin-sub-id ```
-2. Verify that the origin workspace isn't being used. Check that any experiment runs, data profiling runs, or labeling projects have completed. Also verify that inferencing endpoints aren't being invoked.
+2. Verify that the origin workspace isn't being used. Check that any experiment jobs, data profiling jobs, or labeling projects have completed. Also verify that inferencing endpoints aren't being invoked.
3. Delete or detach any computes from the workspace, and delete any inferencing endpoints. Moving computes and endpoints isn't supported. Also note that the workspace will become unavailable during the move.
machine-learning How To Responsible Ai Scorecard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-responsible-ai-scorecard.md
Azure Machine LearningΓÇÖs Responsible AI dashboard is designed for machine lear
- While an end-to-end machine learning life cycle includes both technical and non-technical stakeholders in the loop, there's very little support to enable an effective multi-stakeholder alignment, helping technical experts get timely feedback and direction from the non-technical stakeholders. - AI regulations make it essential to be able to share model and data insights with auditors and risk officers for auditability purposes.
-One of the biggest benefits of using the Azure Machine Learning ecosystem is related to the archival of model and data insights in the Azure Machine Learning Run History (for quick reference in future). As a part of that infrastructure and to accompany machine learning models and their corresponding Responsible AI dashboards, we introduce the Responsible AI scorecard, a customizable report that you can easily configure, download, and share with your technical and non-technical stakeholders to educate them about your data and model health and compliance and build trust. This scorecard could also be used in audit reviews to inform the stakeholders about the characteristics of your model.
+One of the biggest benefits of using the Azure Machine Learning ecosystem is related to the archival of model and data insights in the Azure Machine Learning Job History (for quick reference in future). As a part of that infrastructure and to accompany machine learning models and their corresponding Responsible AI dashboards, we introduce the Responsible AI scorecard, a customizable report that you can easily configure, download, and share with your technical and non-technical stakeholders to educate them about your data and model health and compliance and build trust. This scorecard could also be used in audit reviews to inform the stakeholders about the characteristics of your model.
## Who should use a Responsible AI scorecard?
machine-learning How To Retrain Designer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-retrain-designer.md
For this example, you will change the training data path from a fixed value to a
> - After detaching, you can delete the pipeline parameter in the **Settings** pane. > - You can also add a pipeline parameter in the **Settings** pane, and then apply it on some component parameter.
-1. Submit the pipeline run.
+1. Submit the pipeline job.
## Publish a training pipeline
Publish a pipeline to a pipeline endpoint to easily reuse your pipelines in the
## Retrain your model
-Now that you have a published training pipeline, you can use it to retrain your model on new data. You can submit runs from a pipeline endpoint from the studio workspace or programmatically.
+Now that you have a published training pipeline, you can use it to retrain your model on new data. You can submit jobs from a pipeline endpoint from the studio workspace or programmatically.
-### Submit runs by using the studio portal
+### Submit jobs by using the studio portal
-Use the following steps to submit a parameterized pipeline endpoint run from the studio portal:
+Use the following steps to submit a parameterized pipeline endpoint job from the studio portal:
1. Go to the **Endpoints** page in your studio workspace. 1. Select the **Pipeline endpoints** tab. Then, select your pipeline endpoint. 1. Select the **Published pipelines** tab. Then, select the pipeline version that you want to run. 1. Select **Submit**.
-1. In the setup dialog box, you can specify the parameters values for the run. For this example, update the data path to train your model using a non-US dataset.
+1. In the setup dialog box, you can specify the parameter values for the job. For this example, update the data path to train your model using a non-US dataset.
-![Screenshot that shows how to set up a parameterized pipeline run in the designer](./media/how-to-retrain-designer/published-pipeline-run.png)
+![Screenshot that shows how to set up a parameterized pipeline job in the designer](./media/how-to-retrain-designer/published-pipeline-run.png)
-### Submit runs by using code
+### Submit jobs by using code
You can find the REST endpoint of a published pipeline in the overview panel. By calling the endpoint, you can retrain the published pipeline.
In this article, you learned how to create a parameterized training pipeline end
For a complete walkthrough of how you can deploy a model to make predictions, see the [designer tutorial](tutorial-designer-automobile-price-train-score.md) to train and deploy a regression model.
-For how to publish and submit a run to pipeline endpoint using SDK, see [this article](how-to-deploy-pipelines.md).
+For how to publish and submit a job to pipeline endpoint using SDK, see [this article](how-to-deploy-pipelines.md).
machine-learning How To Run Batch Predictions Designer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-run-batch-predictions-designer.md
Now you're ready to deploy the inference pipeline. This will deploy the pipeline
Now, you have a published pipeline with a dataset parameter. The pipeline will use the trained model created in the training pipeline to score the dataset you provide as a parameter.
-### Submit a pipeline run
+### Submit a pipeline job
-In this section, you'll set up a manual pipeline run and alter the pipeline parameter to score new data.
+In this section, you'll set up a manual pipeline job and alter the pipeline parameter to score new data.
1. After the deployment is complete, go to the **Endpoints** section.
In this section, you'll set up a manual pipeline run and alter the pipeline para
1. Select the pipeline you published.
- The pipeline details page shows you a detailed run history and connection string information for your pipeline.
+ The pipeline details page shows you a detailed job history and connection string information for your pipeline.
1. Select **Submit** to create a manual run of the pipeline.
In this section, you'll set up a manual pipeline run and alter the pipeline para
You can find information on how to consume pipeline endpoints and published pipeline in the **Endpoints** section.
-You can find the REST endpoint of a pipeline endpoint in the run overview panel. By calling the endpoint, you're consuming its default published pipeline.
+You can find the REST endpoint of a pipeline endpoint in the job overview panel. By calling the endpoint, you're consuming its default published pipeline.
You can also consume a published pipeline in the **Published pipelines** page. Select a published pipeline and you can find the REST endpoint of it in the **Published pipeline overview** panel to the right of the graph.
machine-learning How To Save Write Experiment Files https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-save-write-experiment-files.md
Last updated 03/10/2020
In this article, you learn where to save input files, and where to write output files from your experiments to prevent storage limit errors and experiment latency.
-When launching training runs on a [compute target](concept-compute-target.md), they are isolated from outside environments. The purpose of this design is to ensure reproducibility and portability of the experiment. If you run the same script twice, on the same or another compute target, you receive the same results. With this design, you can treat compute targets as stateless computation resources, each having no affinity to the jobs that are running after they are finished.
+When launching training jobs on a [compute target](concept-compute-target.md), they are isolated from outside environments. The purpose of this design is to ensure reproducibility and portability of the experiment. If you run the same script twice, on the same or another compute target, you receive the same results. With this design, you can treat compute targets as stateless computation resources, each having no affinity to the jobs that are running after they are finished.
## Where to save input files Before you can initiate an experiment on a compute target or your local machine, you must ensure that the necessary files are available to that compute target, such as dependency files and data files your code needs to run.
-Azure Machine Learning runs training scripts by copying the entire source directory. If you have sensitive data that you don't want to upload, use a [.ignore file](how-to-save-write-experiment-files.md#storage-limits-of-experiment-snapshots) or don't include it in the source directory. Instead, access your data using a [datastore](/python/api/azureml-core/azureml.data).
+Azure Machine Learning runs training scripts by copying the entire source directory. If you have sensitive data that you don't want to upload, use a [.ignore file](how-to-save-write-experiment-files.md#storage-limits-of-experiment-snapshots) or don't include it in the source directory. Instead, access your data using a [datastore](/python/api/azureml-core/azureml.data).
The storage limit for experiment snapshots is 300 MB and/or 2000 files.
For this reason, we recommend:
### Storage limits of experiment snapshots
-For experiments, Azure Machine Learning automatically makes an experiment snapshot of your code based on the directory you suggest when you configure the run. This has a total limit of 300 MB and/or 2000 files. If you exceed this limit, you'll see the following error:
+For experiments, Azure Machine Learning automatically makes an experiment snapshot of your code based on the directory you suggest when you configure the job. This has a total limit of 300 MB and/or 2000 files. If you exceed this limit, you'll see the following error:
```Python While attempting to take snapshot of .
Jupyter notebooks| Create a `.amlignore` file or move your notebook into a new,
## Where to write files
-Due to the isolation of training experiments, the changes to files that happen during runs are not necessarily persisted outside of your environment. If your script modifies the files local to compute, the changes are not persisted for your next experiment run, and they're not propagated back to the client machine automatically. Therefore, the changes made during the first experiment run don't and shouldn't affect those in the second.
+Due to the isolation of training experiments, the changes to files that happen during jobs are not necessarily persisted outside of your environment. If your script modifies the files local to compute, the changes are not persisted for your next experiment job, and they're not propagated back to the client machine automatically. Therefore, the changes made during the first experiment job don't and shouldn't affect those in the second.
When writing changes, we recommend writing files to storage via an Azure Machine Learning dataset with an [OutputFileDatasetConfig object](/python/api/azureml-core/azureml.data.output_dataset_config.outputfiledatasetconfig). See [how to create an OutputFileDatasetConfig](how-to-train-with-datasets.md#where-to-write-training-output). Otherwise, write files to the `./outputs` and/or `./logs` folder. >[!Important]
+> Two folders, *outputs* and *logs*, receive special treatment by Azure Machine Learning. During training, when you write files to `./outputs` and `./logs` folders, the files will automatically upload to your job history, so that you have access to them once your job is finished.
+> Two folders, *outputs* and *logs*, receive special treatment by Azure Machine Learning. During training, when you write files to`./outputs` and`./logs` folders, the files will automatically upload to your job history, so that you have access to them once your job is finished.
-* **For output such as status messages or scoring results,** write files to the `./outputs` folder, so they are persisted as artifacts in run history. Be mindful of the number and size of files written to this folder, as latency may occur when the contents are uploaded to run history. If latency is a concern, writing files to a datastore is recommended.
+* **For output such as status messages or scoring results,** write files to the `./outputs` folder, so they are persisted as artifacts in job history. Be mindful of the number and size of files written to this folder, as latency may occur when the contents are uploaded to job history. If latency is a concern, writing files to a datastore is recommended.
-* **To save written file as logs in run history,** write files to `./logs` folder. The logs are uploaded in real time, so this method is suitable for streaming live updates from a remote run.
+* **To save written file as logs in job history,** write files to `./logs` folder. The logs are uploaded in real time, so this method is suitable for streaming live updates from a remote job.
## Next steps
machine-learning How To Set Up Training Targets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-set-up-training-targets.md
Title: Configure a training run
+ Title: Configure a training job
description: Train your machine learning model on various training environments (compute targets). You can easily switch between training environments.
-# Configure and submit training runs
+# Configure and submit training jobs
[!INCLUDE [sdk v1](../../includes/machine-learning-sdk-v1.md)]
-In this article, you learn how to configure and submit Azure Machine Learning runs to train your models. Snippets of code explain the key parts of configuration and submission of a training script. Then use one of the [example notebooks](#notebooks) to find the full end-to-end working examples.
+In this article, you learn how to configure and submit Azure Machine Learning jobs to train your models. Snippets of code explain the key parts of configuration and submission of a training script. Then use one of the [example notebooks](#notebooks) to find the full end-to-end working examples.
When training, it is common to start on your local computer, and then later scale out to a cloud-based cluster. With Azure Machine Learning, you can run your script on various compute targets without having to change your training script.
-All you need to do is define the environment for each compute target within a **script run configuration**. Then, when you want to run your training experiment on a different compute target, specify the run configuration for that compute.
+All you need to do is define the environment for each compute target within a **script job configuration**. Then, when you want to run your training experiment on a different compute target, specify the job configuration for that compute.
## Prerequisites
All you need to do is define the environment for each compute target within a **
* A compute target, `my_compute_target`. [Create a compute target](how-to-create-attach-compute-studio.md) ## <a name="whats-a-run-configuration"></a>What's a script run configuration?
-A [ScriptRunConfig](/python/api/azureml-core/azureml.core.scriptrunconfig) is used to configure the information necessary for submitting a training run as part of an experiment.
+A [ScriptRunConfig](/python/api/azureml-core/azureml.core.scriptrunconfig) is used to configure the information necessary for submitting a training job as part of an experiment.
You submit your training experiment with a ScriptRunConfig object. This object includes the:
You submit your training experiment with a ScriptRunConfig object. This object
## <a id="submit"></a>Train your model
-The code pattern to submit a training run is the same for all types of compute targets:
+The code pattern to submit a training job is the same for all types of compute targets:
1. Create an experiment to run 1. Create an environment where the script will run 1. Create a ScriptRunConfig, which specifies the compute target and environment
-1. Submit the run
-1. Wait for the run to complete
+1. Submit the job
+1. Wait for the job to complete
Or you can:
Or you can:
## Create an experiment
-Create an [experiment](v1/concept-azure-machine-learning-architecture.md#experiments) in your workspace. An experiment is a light-weight container that helps to organize run submissions and keep track of code.
+Create an [experiment](v1/concept-azure-machine-learning-architecture.md#experiments) in your workspace. An experiment is a light-weight container that helps to organize job submissions and keep track of code.
```python from azureml.core import Experiment
The example code in this article assumes that you have already created a compute
## <a name="environment"></a> Create an environment Azure Machine Learning [environments](concept-environments.md) are an encapsulation of the environment where your machine learning training happens. They specify the Python packages, Docker image, environment variables, and software settings around your training and scoring scripts. They also specify runtimes (Python, Spark, or Docker).
-You can either define your own environment, or use an Azure ML curated environment. [Curated environments](./how-to-use-environments.md#use-a-curated-environment) are predefined environments that are available in your workspace by default. These environments are backed by cached Docker images which reduces the run preparation cost. See [Azure Machine Learning Curated Environments](./resource-curated-environments.md) for the full list of available curated environments.
+You can either define your own environment, or use an Azure ML curated environment. [Curated environments](./how-to-use-environments.md#use-a-curated-environment) are predefined environments that are available in your workspace by default. These environments are backed by cached Docker images which reduces the job preparation cost. See [Azure Machine Learning Curated Environments](./resource-curated-environments.md) for the full list of available curated environments.
For a remote compute target, you can use one of these popular curated environments to start with:
myenv.python.user_managed_dependencies = True
# myenv.python.interpreter_path = '/home/johndoe/miniconda3/envs/myenv/bin/python' ```
-## Create the script run configuration
+## Create the script job configuration
-Now that you have a compute target (`my_compute_target`, see [Prerequisites](#prerequisites) and environment (`myenv`, see [Create an environment](#environment)), create a script run configuration that runs your training script (`train.py`) located in your `project_folder` directory:
+Now that you have a compute target (`my_compute_target`, see [Prerequisites](#prerequisites)) and environment (`myenv`, see [Create an environment](#environment)), create a script job configuration that runs your training script (`train.py`) located in your `project_folder` directory:
```python
from azureml.core import ScriptRunConfig
If you do not specify an environment, a default environment will be created for you.
If you have command-line arguments you want to pass to your training script, you can specify them via the **`arguments`** parameter of the ScriptRunConfig constructor, e.g. `arguments=['--arg1', arg1_val, '--arg2', arg2_val]`.
-If you want to override the default maximum time allowed for the run, you can do so via the **`max_run_duration_seconds`** parameter. The system will attempt to automatically cancel the run if it takes longer than this value.
+If you want to override the default maximum time allowed for the job, you can do so via the **`max_run_duration_seconds`** parameter. The system will attempt to automatically cancel the job if it takes longer than this value.
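For instance, the `arguments` list is a flat list of strings and values. The flag names and values below are hypothetical, as is the one-hour timeout; the commented `ScriptRunConfig` call shows where they would plug in:

```python
# Hypothetical sketch: build the flat arguments list that ScriptRunConfig
# expects; the flag names and values are illustrative only.
arg1_val, arg2_val = 0.01, "outputs"
arguments = ['--learning-rate', str(arg1_val), '--output-dir', arg2_val]

# src = ScriptRunConfig(source_directory=project_folder, script='train.py',
#                       compute_target=my_compute_target, environment=myenv,
#                       arguments=arguments,
#                       max_run_duration_seconds=3600)  # cancel after ~1 hour
```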
### Specify a distributed job configuration

If you want to run a [distributed training](how-to-train-distributed-gpu.md) job, provide the distributed job-specific config to the **`distributed_job_config`** parameter. Supported config types include [MpiConfiguration](/python/api/azureml-core/azureml.core.runconfig.mpiconfiguration), [TensorflowConfiguration](/python/api/azureml-core/azureml.core.runconfig.tensorflowconfiguration), and [PyTorchConfiguration](/python/api/azureml-core/azureml.core.runconfig.pytorchconfiguration).
run.wait_for_completion(show_output=True)
```

> [!IMPORTANT]
-> When you submit the training run, a snapshot of the directory that contains your training scripts is created and sent to the compute target. It is also stored as part of the experiment in your workspace. If you change files and submit the run again, only the changed files will be uploaded.
+> When you submit the training job, a snapshot of the directory that contains your training scripts is created and sent to the compute target. It is also stored as part of the experiment in your workspace. If you change files and submit the job again, only the changed files will be uploaded.
>
> [!INCLUDE [amlinclude-info](../../includes/machine-learning-amlignore-gitignore.md)]
>
run.wait_for_completion(show_output=True)
> [!IMPORTANT]
> **Special Folders**
-> Two folders, *outputs* and *logs*, receive special treatment by Azure Machine Learning. During training, when you write files to folders named *outputs* and *logs* that are relative to the root directory (`./outputs` and `./logs`, respectively), the files will automatically upload to your run history so that you have access to them once your run is finished.
+> Two folders, *outputs* and *logs*, receive special treatment by Azure Machine Learning. During training, when you write files to folders named *outputs* and *logs* that are relative to the root directory (`./outputs` and `./logs`, respectively), the files will automatically upload to your job history so that you have access to them once your job is finished.
>
> To create artifacts during training (such as model files, checkpoints, data files, or plotted images) write these to the `./outputs` folder.
>
-> Similarly, you can write any logs from your training run to the `./logs` folder. To utilize Azure Machine Learning's [TensorBoard integration](https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/track-and-monitor-experiments/tensorboard/export-run-history-to-tensorboard/export-run-history-to-tensorboard.ipynb) make sure you write your TensorBoard logs to this folder. While your run is in progress, you will be able to launch TensorBoard and stream these logs. Later, you will also be able to restore the logs from any of your previous runs.
+> Similarly, you can write any logs from your training job to the `./logs` folder. To utilize Azure Machine Learning's [TensorBoard integration](https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/track-and-monitor-experiments/tensorboard/export-run-history-to-tensorboard/export-run-history-to-tensorboard.ipynb) make sure you write your TensorBoard logs to this folder. While your job is in progress, you will be able to launch TensorBoard and stream these logs. Later, you will also be able to restore the logs from any of your previous jobs.
>
-> For example, to download a file written to the *outputs* folder to your local machine after your remote training run:
+> For example, to download a file written to the *outputs* folder to your local machine after your remote training job:
> `run.download_file(name='outputs/my_output_file', output_file_path='my_destination_path')`

## <a id="gitintegration"></a>Git tracking and integration
-When you start a training run where the source directory is a local Git repository, information about the repository is stored in the run history. For more information, see [Git integration for Azure Machine Learning](concept-train-model-git-integration.md).
+When you start a training job where the source directory is a local Git repository, information about the repository is stored in the job history. For more information, see [Git integration for Azure Machine Learning](concept-train-model-git-integration.md).
## <a name="notebooks"></a>Notebook examples
-See these notebooks for examples of configuring runs for various training scenarios:
+See these notebooks for examples of configuring jobs for various training scenarios:
* [Training on various compute targets](https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/training)
* [Training with ML frameworks](https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/ml-frameworks)
* [tutorials/img-classification-part1-training.ipynb](https://github.com/Azure/MachineLearningNotebooks/blob/master/tutorials/image-classification-mnist-data/img-classification-part1-training.ipynb)
See these notebooks for examples of configuring runs for various training scenarios:
* **AttributeError: 'RoundTripLoader' object has no attribute 'comment_handling'**: This error comes from the new version (v0.17.5) of `ruamel-yaml`, an `azureml-core` dependency, that introduces a breaking change to `azureml-core`. In order to fix this error, please uninstall `ruamel-yaml` by running `pip uninstall ruamel-yaml` and installing a different version of `ruamel-yaml`; the supported versions are v0.15.35 to v0.17.4 (inclusive). You can do this by running `pip install "ruamel-yaml>=0.15.35,<0.17.5"`.
-* **Run fails with `jwt.exceptions.DecodeError`**: Exact error message: `jwt.exceptions.DecodeError: It is required that you pass in a value for the "algorithms" argument when calling decode()`.
+* **Job fails with `jwt.exceptions.DecodeError`**: Exact error message: `jwt.exceptions.DecodeError: It is required that you pass in a value for the "algorithms" argument when calling decode()`.
Consider upgrading to the latest version of azureml-core: `pip install -U azureml-core`.
- If you are running into this issue for local runs, check the version of PyJWT installed in your environment where you are starting runs. The supported versions of PyJWT are < 2.0.0. Uninstall PyJWT from the environment if the version is >= 2.0.0. You may check the version of PyJWT, uninstall and install the right version as follows:
+ If you are running into this issue for local jobs, check the version of PyJWT installed in your environment where you are starting jobs. The supported versions of PyJWT are < 2.0.0. Uninstall PyJWT from the environment if the version is >= 2.0.0. You may check the version of PyJWT, uninstall and install the right version as follows:
1. Start a command shell, activate conda environment where azureml-core is installed.
2. Enter `pip freeze` and look for `PyJWT`, if found, the version listed should be < 2.0.0
3. If the listed version is not a supported version, `pip uninstall PyJWT` in the command shell and enter y for confirmation.
4. Install using `pip install 'PyJWT<2.0.0'`
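The version check in step 2 can be sketched in plain Python. This is a simple major-version comparison for illustration; real version parsing has more edge cases:

```python
# Sketch: PyJWT versions below 2.0.0 are the supported ones for older
# azureml-core releases.
def is_supported_pyjwt(version: str) -> bool:
    """Return True when the major version is below 2."""
    major = int(version.split(".")[0])
    return major < 2

print(is_supported_pyjwt("1.7.1"))  # a supported version
print(is_supported_pyjwt("2.4.0"))  # too new; uninstall and pin < 2.0.0
```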
- If you are submitting a user-created environment with your run, consider using the latest version of azureml-core in that environment. Versions >= 1.18.0 of azureml-core already pin PyJWT < 2.0.0. If you need to use a version of azureml-core < 1.18.0 in the environment you submit, make sure to specify PyJWT < 2.0.0 in your pip dependencies.
+ If you are submitting a user-created environment with your job, consider using the latest version of azureml-core in that environment. Versions >= 1.18.0 of azureml-core already pin PyJWT < 2.0.0. If you need to use a version of azureml-core < 1.18.0 in the environment you submit, make sure to specify PyJWT < 2.0.0 in your pip dependencies.
- * **ModuleErrors (No module named)**: If you are running into ModuleErrors while submitting experiments in Azure ML, the training script is expecting a package to be installed but it isn't added. Once you provide the package name, Azure ML installs the package in the environment used for your training run.
+ * **ModuleErrors (No module named)**: If you are running into ModuleErrors while submitting experiments in Azure ML, the training script is expecting a package to be installed but it isn't added. Once you provide the package name, Azure ML installs the package in the environment used for your training job.
If you are using Estimators to submit experiments, you can specify a package name via the `pip_packages` or `conda_packages` parameter in the estimator, depending on the source from which you want to install the package. You can also specify a yml file with all your dependencies using `conda_dependencies_file`, or list all your pip requirements in a txt file using the `pip_requirements_file` parameter. If you have your own Azure ML Environment object that you want to use instead of the default image used by the estimator, you can specify that environment via the `environment` parameter of the estimator constructor.
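As an illustration, `pip_requirements_file` points at an ordinary pip requirements file. The file name and package pins below are assumptions, and the commented estimator call shows where the file name would be passed:

```python
# Hypothetical sketch: write a pip requirements file and hand it to the
# estimator; the package pins below are examples, not recommendations.
reqs = ["scikit-learn==0.24.2", "PyJWT<2.0.0"]
with open("requirements.txt", "w") as f:
    f.write("\n".join(reqs) + "\n")

# est = Estimator(source_directory=project_folder, entry_script='train.py',
#                 compute_target=my_compute_target,
#                 pip_requirements_file='requirements.txt')
```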
See these notebooks for examples of configuring runs for various training scenarios:
> [!Note] > If you think a particular package is common enough to be added in Azure ML maintained images and environments please raise a GitHub issue in [AzureML Containers](https://github.com/Azure/AzureML-Containers).
-* **NameError (Name not defined), AttributeError (Object has no attribute)**: This exception should come from your training scripts. You can look at the log files from Azure portal to get more information about the specific name not defined or attribute error. From the SDK, you can use `run.get_details()` to look at the error message. This will also list all the log files generated for your run. Please make sure to take a look at your training script and fix the error before resubmitting your run.
+* **NameError (Name not defined), AttributeError (Object has no attribute)**: This exception should come from your training scripts. You can look at the log files from Azure portal to get more information about the specific name not defined or attribute error. From the SDK, you can use `run.get_details()` to look at the error message. This will also list all the log files generated for your job. Please make sure to take a look at your training script and fix the error before resubmitting your job.
-* **Run or experiment deletion**: Experiments can be archived by using the [Experiment.archive](/python/api/azureml-core/azureml.core.experiment%28class%29#archive--)
+* **Job or experiment deletion**: Experiments can be archived by using the [Experiment.archive](/python/api/azureml-core/azureml.core.experiment%28class%29#archive--)
method, or from the Experiment tab view in Azure Machine Learning studio client via the "Archive experiment" button. This action hides the experiment from list queries and views, but does not delete it.
- Permanent deletion of individual experiments or runs is not currently supported. For more information on deleting Workspace assets, see [Export or delete your Machine Learning service workspace data](how-to-export-delete-data.md).
+ Permanent deletion of individual experiments or jobs is not currently supported. For more information on deleting Workspace assets, see [Export or delete your Machine Learning service workspace data](how-to-export-delete-data.md).
-* **Metric Document is too large**: Azure Machine Learning has internal limits on the size of metric objects that can be logged at once from a training run. If you encounter a "Metric Document is too large" error when logging a list-valued metric, try splitting the list into smaller chunks, for example:
+* **Metric Document is too large**: Azure Machine Learning has internal limits on the size of metric objects that can be logged at once from a training job. If you encounter a "Metric Document is too large" error when logging a list-valued metric, try splitting the list into smaller chunks, for example:
```python run.log_list("my metric name", my_metric[:N])
machine-learning How To Setup Vs Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-setup-vs-code.md
The Azure Machine Learning extension for VS Code provides a user interface to:
## Sign in to your Azure Account
-In order to provision resources and run workloads on Azure, you have to sign in with your Azure account credentials. To assist with account management, Azure Machine Learning automatically installs the Azure Account extension. Visit the following site to [learn more about the Azure Account extension](https://marketplace.visualstudio.com/items?itemName=ms-vscode.azure-account).
+In order to provision resources and run workloads on Azure, you have to sign in with your Azure account credentials. To assist with account management, Azure Machine Learning automatically installs the Azure Account extension. Visit the following site to [learn more about the Azure Account extension](https://marketplace.visualstudio.com/items?itemName=ms-vscode.azure-account).
To sign in to your Azure account, select the **Azure: Sign In** button in the bottom right corner of the Visual Studio Code status bar to start the sign-in process.
machine-learning How To Track Designer Experiments https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-track-designer-experiments.md
After the pipeline run completes, you can see the *Mean_Absolute_Error* in the E
1. Navigate to the **Jobs** section.
1. Select your experiment.
-1. Select the run in your experiment you want to view.
+1. Select the job in your experiment you want to view.
1. Select **Metrics**.
- ![View run metrics in the studio](./media/how-to-log-view-metrics/experiment-page-metrics-across-runs.png)
+ ![View job metrics in the studio](./media/how-to-log-view-metrics/experiment-page-metrics-across-runs.png)
## Next steps
machine-learning How To Track Monitor Analyze Runs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-track-monitor-analyze-runs.md
# Monitor and analyze jobs in studio

You can use [Azure Machine Learning studio](https://ml.azure.com) to monitor, organize, and track your jobs for training and experimentation. Your ML job history is an important part of an explainable and repeatable ML development process. This article shows how to do the following tasks:
This article shows how to do the following tasks:
> * If you're looking for information on monitoring training jobs from the CLI or SDK v2, see [Track experiments with MLflow and CLI v2](how-to-use-mlflow-cli-runs.md).
> * If you're looking for information on monitoring the Azure Machine Learning service and associated Azure services, see [How to monitor Azure Machine Learning](monitor-azure-machine-learning.md).
>
-> If you're looking for information on monitoring models deployed as web services, see [Collect model data](how-to-enable-data-collection.md) and [Monitor with Application Insights](how-to-enable-app-insights.md).
+> If you're looking for information on monitoring models deployed as web services, see [Collect model data](how-to-enable-data-collection.md) and [Monitor with Application Insights](how-to-enable-app-insights.md).
## Prerequisites
Navigate to the **Job Details** page for your job and select the edit or pencil
:::image type="content" source="media/how-to-track-monitor-analyze-runs/run-description-2.gif" alt-text="Screenshot of how to create a job description.":::

## Tag and find jobs

In Azure Machine Learning, you can use properties and tags to help organize and query your jobs for important information.
machine-learning How To Train Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-train-sdk.md
In this article, you learn how to configure and submit Azure Machine Learning jobs to train your models.
To run the training examples, first clone the examples repository and change into the `sdk` directory:

```bash
-git clone --depth 1 https://github.com/Azure/azureml-examples --branch
+git clone --depth 1 https://github.com/Azure/azureml-examples
cd azureml-examples/sdk
```
machine-learning How To Train With Ui https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-train-with-ui.md
There are many ways to create a training job with Azure Machine Learning. You ca
* Or, you may enter the job creation from the left pane. Click **+New** and select **Job**.

[![Azure Machine Learning studio left navigation](media/how-to-train-with-ui/left-nav-entry.png)](media/how-to-train-with-ui/left-nav-entry.png)

These options will all take you to the job creation panel, which has a wizard for configuring and creating a training job.
After selecting a compute target, you need to specify the runtime environment fo
### Curated environments
-Curated environments are Azure-defined collections of Python packages used in common ML workloads. Curated environments are available in your workspace by default. These environments are backed by cached Docker images, which reduce the run preparation overhead. The cards displayed in the "Curated environments" page show details of each environment. To learn more, see [curated environments in Azure Machine Learning](resource-curated-environments.md).
+Curated environments are Azure-defined collections of Python packages used in common ML workloads. Curated environments are available in your workspace by default. These environments are backed by cached Docker images, which reduce the job preparation overhead. The cards displayed in the "Curated environments" page show details of each environment. To learn more, see [curated environments in Azure Machine Learning](resource-curated-environments.md).
[![Curated environments](media/how-to-train-with-ui/curated-env.png)](media/how-to-train-with-ui/curated-env.png)
You may choose **view the YAML spec** to review and download the yaml file gener
[![view yaml spec](media/how-to-train-with-ui/view-yaml.png)](media/how-to-train-with-ui/view-yaml.png)
[![Yaml spec](media/how-to-train-with-ui/yaml-spec.png)](media/how-to-train-with-ui/yaml-spec.png)
-To launch the job, choose **Create**. Once the job is created, Azure will show you the run details page, where you can monitor and manage your training job.
+To launch the job, choose **Create**. Once the job is created, Azure will show you the job details page, where you can monitor and manage your training job.
[!INCLUDE [Email Notification Include](../../includes/machine-learning-email-notifications.md)]
machine-learning How To Trigger Published Pipeline https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-trigger-published-pipeline.md
pipeline_id = "aaaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee"
## Create a schedule
-To run a pipeline on a recurring basis, you'll create a schedule. A `Schedule` associates a pipeline, an experiment, and a trigger. The trigger can either be a`ScheduleRecurrence` that describes the wait between runs or a Datastore path that specifies a directory to watch for changes. In either case, you'll need the pipeline identifier and the name of the experiment in which to create the schedule.
+To run a pipeline on a recurring basis, you'll create a schedule. A `Schedule` associates a pipeline, an experiment, and a trigger. The trigger can either be a `ScheduleRecurrence` that describes the wait between jobs or a Datastore path that specifies a directory to watch for changes. In either case, you'll need the pipeline identifier and the name of the experiment in which to create the schedule.
At the top of your Python file, import the `Schedule` and `ScheduleRecurrence` classes:
from azureml.pipeline.core.schedule import ScheduleRecurrence, Schedule
The `ScheduleRecurrence` constructor has a required `frequency` argument that must be one of the following strings: "Minute", "Hour", "Day", "Week", or "Month". It also requires an integer `interval` argument specifying how many of the `frequency` units should elapse between schedule starts. Optional arguments allow you to be more specific about starting times, as detailed in the [ScheduleRecurrence SDK docs](/python/api/azureml-pipeline-core/azureml.pipeline.core.schedule.schedulerecurrence).
-Create a `Schedule` that begins a run every 15 minutes:
+Create a `Schedule` that begins a job every 15 minutes:
```python
recurrence = ScheduleRecurrence(frequency="Minute", interval=15)
recurring_schedule = Schedule.create(ws, name="MyRecurringSchedule",
### Create a change-based schedule
-Pipelines that are triggered by file changes may be more efficient than time-based schedules. When you want to do something before a file is changed, or when a new file is added to a data directory, you can preprocess that file. You can monitor any changes to a datastore or changes within a specific directory within the datastore. If you monitor a specific directory, changes within subdirectories of that directory will _not_ trigger a run.
+Pipelines that are triggered by file changes may be more efficient than time-based schedules. For instance, when a file is changed, or when a new file is added to a data directory, you can preprocess that file. You can monitor any changes to a datastore or changes within a specific directory within the datastore. If you monitor a specific directory, changes within subdirectories of that directory will _not_ trigger a job.
To create a file-reactive `Schedule`, you must set the `datastore` parameter in the call to [Schedule.create](/python/api/azureml-pipeline-core/azureml.pipeline.core.schedule.schedule#create-workspace--name--pipeline-id--experiment-name--recurrence-none--description-none--pipeline-parameters-none--wait-for-provisioning-false--wait-timeout-3600--datastore-none--polling-interval-5--data-path-parameter-name-none--continue-on-step-failure-none--path-on-datastore-noneworkflow-provider-noneservice-endpoint-none-). To monitor a folder, set the `path_on_datastore` argument.
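A minimal sketch of the file-reactive case, assuming `ws`, `pipeline_id`, and a `datastore` object already exist in your workspace (the folder path and names below are hypothetical):

```python
# Hypothetical sketch of a file-reactive schedule. `ws`, `pipeline_id`, and
# `datastore` would come from your workspace; the names below are illustrative.
schedule_kwargs = dict(
    name="MyReactiveSchedule",
    description="Based on input file change.",
    experiment_name="MyExperiment",
    path_on_datastore="input/data",  # only this folder is monitored;
                                     # subdirectory changes don't trigger
)

# reactive_schedule = Schedule.create(ws, pipeline_id=pipeline_id,
#                                     datastore=datastore, **schedule_kwargs)
```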
In your Web browser, navigate to Azure Machine Learning. From the **Endpoints**
:::image type="content" source="./media/how-to-trigger-published-pipeline/scheduled-pipelines.png" alt-text="Pipelines page of AML":::
-In this page you can see summary information about all the pipelines in the Workspace: names, descriptions, status, and so forth. Drill in by clicking in your pipeline. On the resulting page, there are more details about your pipeline and you may drill down into individual runs.
+On this page you can see summary information about all the pipelines in the workspace: names, descriptions, status, and so forth. Drill in by clicking on your pipeline. On the resulting page, there are more details about your pipeline and you may drill down into individual jobs.
## Deactivate the pipeline
In an Azure Data Factory pipeline, the *Machine Learning Execute Pipeline* activ
## Next steps
-In this article, you used the Azure Machine Learning SDK for Python to schedule a pipeline in two different ways. One schedule recurs based on elapsed clock time. The other schedule runs if a file is modified on a specified `Datastore` or within a directory on that store. You saw how to use the portal to examine the pipeline and individual runs. You learned how to disable a schedule so that the pipeline stops running. Finally, you created an Azure Logic App to trigger a pipeline.
+In this article, you used the Azure Machine Learning SDK for Python to schedule a pipeline in two different ways. One schedule recurs based on elapsed clock time. The other schedule runs if a file is modified on a specified `Datastore` or within a directory on that store. You saw how to use the portal to examine the pipeline and individual jobs. You learned how to disable a schedule so that the pipeline stops running. Finally, you created an Azure Logic App to trigger a pipeline.
For more information, see:
machine-learning How To Troubleshoot Auto Ml https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-troubleshoot-auto-ml.md
If the listed version is not a supported version:
## Data access
-For automated ML runs, you need to ensure the file datastore that connects to your AzureFile storage has the appropriate authentication credentials. Otherwise, the following message results. Learn how to [update your data access authentication credentials](how-to-train-with-datasets.md#azurefile-storage).
+For automated ML jobs, you need to ensure the file datastore that connects to your AzureFile storage has the appropriate authentication credentials. Otherwise, the following message results. Learn how to [update your data access authentication credentials](how-to-train-with-datasets.md#azurefile-storage).
Error message: `Could not create a connection to the AzureFileService due to missing credentials. Either an Account Key or SAS token needs to be linked the default workspace blob store.` ## Data schema
-When you try to create a new automated ML experiment via the **Edit and submit** button in the Azure Machine Learning studio, the data schema for the new experiment must match the schema of the data that was used in the original experiment. Otherwise, an error message similar to the following results. Learn more about how to [edit and submit experiments from the studio UI](how-to-use-automated-ml-for-ml-models.md#edit-and-submit-runs-preview).
+When you try to create a new automated ML experiment via the **Edit and submit** button in the Azure Machine Learning studio, the data schema for the new experiment must match the schema of the data that was used in the original experiment. Otherwise, an error message similar to the following results. Learn more about how to [edit and submit experiments from the studio UI](how-to-use-automated-ml-for-ml-models.md#edit-and-submit-jobs-preview).
Error message non-vision experiments: ` Schema mismatch error: (an) additional column(s): "Column1: String, Column2: String, Column3: String", (a) missing column(s)`
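The mismatch the error reports amounts to a set difference over column names. The column names below are the placeholder ones from the error message, not real data:

```python
# Sketch: reproduce the schema comparison behind the error message using the
# placeholder column names it mentions.
original_columns = {"Column1", "Column2", "Column3"}
new_columns = {"Column1", "Column2", "Column4"}

additional = new_columns - original_columns  # columns the new data adds
missing = original_columns - new_columns     # columns the new data lacks
```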
machine-learning How To Troubleshoot Batch Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-troubleshoot-batch-endpoints.md
Because of the distributed nature of batch scoring jobs, there are logs from several different sources.
- `~/logs/job_progress_overview.txt`: This file provides high-level information about the number of mini-batches (also known as tasks) created so far and the number of mini-batches processed so far. As the mini-batches end, the log records the results of the job. If the job failed, it will show the error message and where to start the troubleshooting.
-- `~/logs/sys/master_role.txt`: This file provides the principal node (also known as the orchestrator) view of the running job. This log provides information on task creation, progress monitoring, the run result.
+- `~/logs/sys/master_role.txt`: This file provides the principal node (also known as the orchestrator) view of the running job. This log provides information on task creation, progress monitoring, the job result.
For a concise understanding of errors in your script there is:
machine-learning How To Troubleshoot Environments https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-troubleshoot-environments.md
Learn how to troubleshoot issues with Docker environment image builds and package installations.
## Docker image build failures

For most image build failures, you'll find the root cause in the image build log.
-Find the image build log from the Azure Machine Learning portal (20\_image\_build\_log.txt) or from your Azure Container Registry task run logs.
+Find the image build log from the Azure Machine Learning portal (20\_image\_build\_log.txt) or from your Azure Container Registry task run logs.
It's usually easier to reproduce errors locally. Check the kind of error and try one of the following `setuptools`:
machine-learning How To Understand Automated Ml https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-understand-automated-ml.md
Title: Evaluate AutoML experiment results
-description: Learn how to view and evaluate charts and metrics for each of your automated machine learning experiment runs.
+description: Learn how to view and evaluate charts and metrics for each of your automated machine learning experiment jobs.
# Evaluate automated machine learning experiment results
-In this article, learn how to evaluate and compare models trained by your automated machine learning (automated ML) experiment. Over the course of an automated ML experiment, many runs are created and each run creates a model. For each model, automated ML generates evaluation metrics and charts that help you measure the model's performance.
+In this article, learn how to evaluate and compare models trained by your automated machine learning (automated ML) experiment. Over the course of an automated ML experiment, many jobs are created and each job creates a model. For each model, automated ML generates evaluation metrics and charts that help you measure the model's performance.
For example, automated ML generates the following charts based on experiment type.
For example, automated ML generates the following charts based on experiment type.
- The [Azure Machine Learning studio](how-to-use-automated-ml-for-ml-models.md) (no code required)
- The [Azure Machine Learning Python SDK](how-to-configure-auto-train.md)
-## View run results
+## View job results
-After your automated ML experiment completes, a history of the runs can be found via:
+After your automated ML experiment completes, a history of the jobs can be found via:
- A browser with [Azure Machine Learning studio](overview-what-is-machine-learning-studio.md)
- - A Jupyter notebook using the [RunDetails Jupyter widget](/python/api/azureml-widgets/azureml.widgets.rundetails)
+ - A Jupyter notebook using the [RunDetails Jupyter widget](/python/api/azureml-widgets/azureml.widgets.rundetails)
The following steps and video show you how to view the run history and model evaluation metrics and charts in the studio:

1. [Sign into the studio](https://ml.azure.com/) and navigate to your workspace.
1. In the left menu, select **Experiments**.
1. Select your experiment from the list of experiments.
-1. In the table at the bottom of the page, select an automated ML run.
+1. In the table at the bottom of the page, select an automated ML job.
1. In the **Models** tab, select the **Algorithm name** for the model you want to evaluate.
1. In the **Metrics** tab, use the checkboxes on the left to view metrics and charts.
machine-learning How To Use Automated Ml For Ml Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-automated-ml-for-ml-models.md
Title: Set up AutoML with the studio UI
-description: Learn how to set up AutoML training runs without a single line of code with Azure Machine Learning automated ML in the Azure Machine Learning studio.
+description: Learn how to set up AutoML training jobs without a single line of code with Azure Machine Learning automated ML in the Azure Machine Learning studio.
# Set up no-code AutoML training with the studio UI
-In this article, you learn how to set up AutoML training runs without a single line of code using Azure Machine Learning automated ML in the [Azure Machine Learning studio](overview-what-is-machine-learning-studio.md).
+In this article, you learn how to set up AutoML training jobs without a single line of code using Azure Machine Learning automated ML in the [Azure Machine Learning studio](overview-what-is-machine-learning-studio.md).
Automated machine learning, AutoML, is a process in which the best machine learning algorithm to use for your specific data is selected for you. This process enables you to generate machine learning models quickly. [Learn more about how Azure Machine Learning implements automated machine learning](concept-automated-ml.md).
Otherwise, you'll see a list of your recent automated ML experiments, including
## Create and run experiment
-1. Select **+ New automated ML run** and populate the form.
+1. Select **+ New automated ML job** and populate the form.
1. Select a data asset from your storage container, or create a new data asset. Data assets can be created from local files, web URLs, datastores, or Azure Open Datasets. Learn more about [data asset creation](how-to-create-register-data-assets.md).
Otherwise, you'll see a list of your recent automated ML experiments, including
Select **Next**. 1. Select your newly created dataset once it appears. You are also able to view a preview of the dataset and sample statistics.
-1. On the **Configure run** form, select **Create new** and enter **Tutorial-automl-deploy** for the experiment name.
+1. On the **Configure job** form, select **Create new** and enter **Tutorial-automl-deploy** for the experiment name.
1. Select a target column; this is the column that you would like to do predictions on.
Otherwise, you'll see a list of your recent automated ML experiments, including
Explain best model | Select to enable or disable, in order to show explanations for the recommended best model. <br> This functionality is not currently available for [certain forecasting algorithms](how-to-machine-learning-interpretability-automl.md#interpretability-during-training-for-the-best-model). Blocked algorithm| Select algorithms you want to exclude from the training job. <br><br> Allowing algorithms is only available for [SDK experiments](how-to-configure-auto-train.md#supported-algorithms). <br> See the [supported algorithms for each task type](/python/api/azureml-automl-core/azureml.automl.core.shared.constants.supportedmodels). Exit criterion| When any of these criteria are met, the training job is stopped. <br> *Training job time (hours)*: How long to allow the training job to run. <br> *Metric score threshold*: Minimum metric score for all pipelines. This ensures that if you have a defined target metric you want to reach, you do not spend more time on the training job than necessary.
- Concurrency| *Max concurrent iterations*: Maximum number of pipelines (iterations) to test in the training job. The job will not run more than the specified number of iterations. Learn more about how automated ML performs [multiple child runs on clusters](how-to-configure-auto-train.md#multiple-child-runs-on-clusters).
+ Concurrency| *Max concurrent iterations*: Maximum number of pipelines (iterations) to test in the training job. The job will not run more than the specified number of iterations. Learn more about how automated ML performs [multiple child jobs on clusters](how-to-configure-auto-train.md#multiple-child-runs-on-clusters).
1. (Optional) View featurization settings: if you choose to enable **Automatic featurization** in the **Additional configuration settings** form, default featurization techniques are applied. In the **View featurization settings** you can change these defaults and customize accordingly. Learn how to [customize featurizations](#customize-featurization).
Otherwise, you'll see a list of your recent automated ML experiments, including
1. Forecasting tasks only supports k-fold cross validation.
- 1. Provide a test dataset (preview) to evaluate the recommended model that automated ML generates for you at the end of your experiment. When you provide test data, a test run is automatically triggered at the end of your experiment. This test run is only run on the best model that was recommended by automated ML. Learn how to get the [results of the remote test run](#view-remote-test-run-results-preview).
+ 1. Provide a test dataset (preview) to evaluate the recommended model that automated ML generates for you at the end of your experiment. When you provide test data, a test job is automatically triggered at the end of your experiment. This test job runs only on the best model that was recommended by automated ML. Learn how to get the [results of the remote test job](#view-remote-test-job-results-preview).
>[!IMPORTANT] > Providing a test dataset to evaluate generated models is a preview feature. This capability is an [experimental](/python/api/overview/azure/ml/#stable-vs-experimental) preview feature, and may change at any time.
- * Test data is considered a separate from training and validation, so as to not bias the results of the test run of the recommended model. [Learn more about bias during model validation](concept-automated-ml.md#training-validation-and-test-data).
+ * Test data is kept separate from training and validation data, so as not to bias the results of the test job of the recommended model. [Learn more about bias during model validation](concept-automated-ml.md#training-validation-and-test-data).
* You can either provide your own test dataset or opt to use a percentage of your training dataset. Test data must be in the form of an [Azure Machine Learning TabularDataset](./v1/how-to-create-register-datasets.md#tabulardataset). * The schema of the test dataset should match the training dataset. The target column is optional, but if no target column is indicated no test metrics are calculated. * The test dataset should not be the same as the training dataset or the validation dataset.
- * Forecasting runs do not support train/test split.
+ * Forecasting jobs do not support train/test split.
![Screenshot shows the form where to select validation data and test data](media/how-to-use-automated-ml-for-ml-models/validate-test-form.png)
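The schema-matching requirement above can be checked locally when you bring your own test dataset. A minimal standard-library sketch (the rows and column names here are hypothetical stand-ins, not produced by AutoML):

```python
import random

random.seed(0)

# Hypothetical tabular data: every row shares the same columns (schema).
rows = [{"feature": i, "target": 2 * i} for i in range(100)]
random.shuffle(rows)

# An 80/20 train/test split, mirroring the studio's percentage option.
split = int(len(rows) * 0.8)
train, test = rows[:split], rows[split:]

# AutoML requires the test dataset schema to match the training schema.
assert set(train[0].keys()) == set(test[0].keys())
print(len(train), len(test))
```

The same schema check is worth running against any externally prepared test dataset before uploading it, since a mismatched column set means no test metrics are calculated.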
Select **Finish** to run your experiment. The experiment preparing process can t
### View experiment details
-The **Run Detail** screen opens to the **Details** tab. This screen shows you a summary of the experiment run including a status bar at the top next to the run number.
+The **Job Detail** screen opens to the **Details** tab. This screen shows you a summary of the experiment job including a status bar at the top next to the job number.
The **Models** tab contains a list of the models created ordered by the metric score. By default, the model that scores the highest based on the chosen metric is at the top of the list. As the training job tries out more models, they are added to the list. Use this to get a quick comparison of the metrics for the models produced so far.
-![Run detail](./media/how-to-use-automated-ml-for-ml-models/explore-models.gif)
+![Job detail](./media/how-to-use-automated-ml-for-ml-models/explore-models.gif)
-### View training run details
+### View training job details
-Drill down on any of the completed models to see training run details. On the **Model** tab view details like a model summary and the hyperparameters used for the selected model.
+Drill down on any of the completed models to see training job details. On the **Model** tab view details like a model summary and the hyperparameters used for the selected model.
[![Hyperparameter details](media/how-to-use-automated-ml-for-ml-models/hyperparameter-button.png)](media/how-to-use-automated-ml-for-ml-models/hyperparameter-details.png)
On the Data transformation tab, you can see a diagram of what data preprocessing
![Data transformation](./media/how-to-use-automated-ml-for-ml-models/data-transformation.png)
-## View remote test run results (preview)
+## View remote test job results (preview)
If you specified a test dataset or opted for a train/test split during your experiment setup (on the **Validate and test** form), automated ML tests the recommended model by default. As a result, automated ML calculates test metrics to determine the quality of the recommended model and its predictions.
If you specified a test dataset or opted for a train/test split during your expe
> * [Computer vision tasks (preview)](how-to-auto-train-image-models.md) > * [Many models and hierarchical time series forecasting training (preview)](how-to-auto-train-forecast.md) > * [Forecasting tasks where deep learning neural networks (DNN) are enabled](how-to-auto-train-forecast.md#enable-deep-learning)
-> * [Automated ML runs from local computes or Azure Databricks clusters](how-to-configure-auto-train.md#compute-to-run-experiment)
+> * [Automated ML jobs from local computes or Azure Databricks clusters](how-to-configure-auto-train.md#compute-to-run-experiment)
-To view the test run metrics of the recommended model,
+To view the test job metrics of the recommended model,
1. Navigate to the **Models** page, select the best model. 1. Select the **Test results (preview)** tab.
-1. Select the run you want, and view the **Metrics** tab.
+1. Select the job you want, and view the **Metrics** tab.
![Test results tab of automatically tested, recommended model](./media/how-to-use-automated-ml-for-ml-models/test-best-model-results.png) To view the test predictions used to calculate the test metrics, 1. Navigate to the bottom of the page and select the link under **Outputs dataset** to open the dataset.
-1. On the **Datasets** page, select the **Explore** tab to view the predictions from the test run.
+1. On the **Datasets** page, select the **Explore** tab to view the predictions from the test job.
1. Alternatively, the predictions file can also be viewed/downloaded from the **Outputs + logs** tab: expand the **Predictions** folder to locate your `predictions.csv` file.
-The model test run generates the predictions.csv file that's stored in the default datastore created with the workspace. This datastore is visible to all users with the same subscription. Test runs are not recommended for scenarios if any of the information used for or created by the test run needs to remain private.
+The model test job generates the predictions.csv file that's stored in the default datastore created with the workspace. This datastore is visible to all users with the same subscription. Test jobs are not recommended in scenarios where any of the information used for, or created by, the test job needs to remain private.
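Once downloaded, the predictions file can also be inspected programmatically. A minimal standard-library sketch; the column names and values below are hypothetical, since the actual columns depend on your task and target column:

```python
import csv
import io

# Stand-in for a downloaded predictions.csv (hypothetical columns/values).
sample = "y_true,y_pred\n10,9.7\n20,20.3\n30,29.9\n"

with io.StringIO(sample) as f:
    rows = list(csv.DictReader(f))

# Each row pairs the (optional) target column with the model's prediction,
# so simple error statistics can be recomputed locally.
errors = [abs(float(r["y_true"]) - float(r["y_pred"])) for r in rows]
mae = sum(errors) / len(errors)
print(round(mae, 2))
```

For a real file, replace the in-memory sample with `open("predictions.csv")`.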
## Test an existing automated ML model (preview)
The model test run generates the predictions.csv file that's stored in the defau
After your experiment completes, you can test the model(s) that automated ML generates for you. If you want to test a different automated ML generated model, not the recommended model, you can do so with the following steps.
-1. Select an existing automated ML experiment run.
-1. Navigate to the **Models** tab of the run and select the completed model you want to test.
+1. Select an existing automated ML experiment job.
+1. Navigate to the **Models** tab of the job and select the completed model you want to test.
1. On the model **Details** page, select the **Test model (preview)** button to open the **Test model** pane.
-1. On the **Test model** pane, select the compute cluster and a test dataset you want to use for your test run.
+1. On the **Test model** pane, select the compute cluster and a test dataset you want to use for your test job.
1. Select the **Test** button. The schema of the test dataset should match the training dataset, but the **target column** is optional.
-1. Upon successful creation of model test run, the **Details** page displays a success message. Select the **Test results** tab to see the progress of the run.
+1. Upon successful creation of the model test job, the **Details** page displays a success message. Select the **Test results** tab to see the progress of the job.
-1. To view the results of the test run, open the **Details** page and follow the steps in the [view results of the remote test run](#view-remote-test-run-results-preview) section.
+1. To view the results of the test job, open the **Details** page and follow the steps in the [view results of the remote test job](#view-remote-test-job-results-preview) section.
![Test model form](./media/how-to-use-automated-ml-for-ml-models/test-model-form.png)
To get explanations for a particular model,
1. On the **Models** tab, select the model you want to understand. 1. Select the **Explain model** button, and provide a compute that can be used to generate the explanations.
-1. Check the **Child runs** tab for the status.
+1. Check the **Child jobs** tab for the status.
1. Once complete, navigate to the **Explanations (preview)** tab which contains the explanations dashboard. ![Model explanation dashboard](media/how-to-use-automated-ml-for-ml-models/model-explanation-dashboard.png)
-## Edit and submit runs (preview)
+## Edit and submit jobs (preview)
>[!IMPORTANT] > The ability to copy, edit and submit a new experiment based on an existing experiment is a preview feature. This capability is an [experimental](/python/api/overview/azure/ml/#stable-vs-experimental) preview feature, and may change at any time.
In scenarios where you would like to create a new experiment based on the settin
This functionality is limited to experiments initiated from the studio UI and requires the data schema for the new experiment to match that of the original experiment.
-The **Edit and submit** button opens the **Create a new Automated ML run** wizard with the data, compute and experiment settings pre-populated. You can go through each form and edit selections as needed for your new experiment.
+The **Edit and submit** button opens the **Create a new Automated ML job** wizard with the data, compute and experiment settings pre-populated. You can go through each form and edit selections as needed for your new experiment.
## Deploy your model
Automated ML helps you with deploying the model without writing code:
1. You have a couple of options for deployment. + Option 1: Deploy the best model, according to the metric criteria you defined.
- 1. After the experiment is complete, navigate to the parent run page by selecting **Run 1** at the top of the screen.
+ 1. After the experiment is complete, navigate to the parent job page by selecting **Job 1** at the top of the screen.
1. Select the model listed in the **Best model summary** section. 1. Select **Deploy** on the top left of the window.
machine-learning How To Use Automl Small Object Detect https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-automl-small-object-detect.md
See the [object detection sample notebook](https://github.com/Azure/azureml-exam
## Next steps * Learn more about [how and where to deploy a model](how-to-deploy-and-where.md).
-* For definitions and examples of the performance charts and metrics provided for each run, see [Evaluate automated machine learning experiment results](how-to-understand-automated-ml.md).
+* For definitions and examples of the performance charts and metrics provided for each job, see [Evaluate automated machine learning experiment results](how-to-understand-automated-ml.md).
* [Tutorial: Train an object detection model (preview) with AutoML and Python](tutorial-auto-train-image-models.md). * See [what hyperparameters are available for computer vision tasks](reference-automl-images-hyperparameters.md). *[Make predictions with ONNX on computer vision models from AutoML](how-to-inference-onnx-automl-image-models.md)
machine-learning How To Use Batch Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-batch-endpoint.md
Follow the below steps to view the scoring results in Azure Storage Explorer whe
:::code language="azurecli" source="~/azureml-examples-main/cli/batch-score.sh" ID="show_job_in_studio" :::
-1. In the graph of the run, select the `batchscoring` step.
+1. In the graph of the job, select the `batchscoring` step.
1. Select the __Outputs + logs__ tab and then select **Show data outputs**. 1. From __Data outputs__, select the icon to open __Storage Explorer__.
machine-learning How To Use Batch Endpoints Studio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-batch-endpoints-studio.md
To change where the results are stored, providing a blob store and output path w
### Summary of all submitted jobs
-To see a summary of all the submitted jobs for an endpoint, select the endpoint and then select the **Runs** tab.
+To see a summary of all the submitted jobs for an endpoint, select the endpoint and then select the **Jobs** tab.
:::image type="content" source="media/how-to-use-batch-endpoints-studio/summary-jobs.png" alt-text="Screenshot of summary of jobs submitted to a batch endpoint"::: ## Check batch scoring results
machine-learning How To Use Managed Identities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-managed-identities.md
az ml workspace create -w <workspace name> \
### Let Azure Machine Learning service create workspace ACR
-If you don't bring your own ACR, Azure Machine Learning service will create one for you when you perform an operation that needs one. For example, submit a training run to Machine Learning Compute, build an environment, or deploy a web service endpoint. The ACR created by the workspace will have admin user enabled, and you need to disable the admin user manually.
+If you don't bring your own ACR, Azure Machine Learning service will create one for you when you perform an operation that needs one. For example, when you submit a training job to Machine Learning Compute, build an environment, or deploy a web service endpoint. The ACR created by the workspace has the admin user enabled; you need to disable the admin user manually.
[!INCLUDE [cli v2](../../includes/machine-learning-cli-v2.md)]
az role assignment create --assignee <principal ID> \
--scope "/subscriptions/<subscription ID>/resourceGroups/<private ACR resource group>/providers/Microsoft.ContainerRegistry/registries/<private ACR name>" ```
-Finally, when submitting a training run, specify the base image location in the [environment definition](how-to-use-environments.md#use-existing-environments).
+Finally, when submitting a training job, specify the base image location in the [environment definition](how-to-use-environments.md#use-existing-environments).
[!INCLUDE [sdk v1](../../includes/machine-learning-sdk-v1.md)]
machine-learning How To Use Mlflow Azure Databricks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-mlflow-azure-databricks.md
In this article, learn how to enable MLflow to connect to Azure Machine Learning while working in an Azure Databricks workspace. You can leverage this configuration for tracking, model management and model deployment.
-[MLflow](https://www.mlflow.org) is an open-source library for managing the life cycle of your machine learning experiments. MLFlow Tracking is a component of MLflow that logs and tracks your training run metrics and model artifacts. Learn more about [Azure Databricks and MLflow](/azure/databricks/applications/mlflow/).
+[MLflow](https://www.mlflow.org) is an open-source library for managing the life cycle of your machine learning experiments. MLflow Tracking is a component of MLflow that logs and tracks your training job metrics and model artifacts. Learn more about [Azure Databricks and MLflow](/azure/databricks/applications/mlflow/).
See [MLflow and Azure Machine Learning](concept-mlflow.md) for additional MLflow and Azure Machine Learning functionality integrations.
If you have an MLflow Project to train with Azure Machine Learning, see [Train M
* [Create an Azure Machine Learning Workspace](quickstart-create-resources.md). * See which [access permissions you need to perform your MLflow operations with your workspace](how-to-assign-roles.md#mlflow-operations). + ## Install libraries To install libraries on your cluster, navigate to the **Libraries** tab and select **Install New**
When MLflow is configured to exclusively track experiments in Azure Machine Lear
mlflow.set_experiment(experiment_name="experiment-name") ```
+In your training script, import `mlflow` to use the MLflow logging APIs, and start logging your job metrics. The following example logs the epoch loss metric.
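The per-epoch pattern looks roughly like the sketch below. The loss values are hypothetical stand-ins, and the `mlflow.log_metric` call, which sends one point of the metric's history per epoch inside an active run, is shown commented so the sketch stays self-contained:

```python
# Sketch of per-epoch metric logging (hypothetical losses).
# Inside a training script with an active MLflow run you would call:
#     mlflow.log_metric("epoch_loss", loss, step=epoch)
history = []
for epoch in range(5):
    loss = 1.0 / (epoch + 1)  # stand-in for the real training loss
    history.append((epoch, loss))
    # mlflow.log_metric("epoch_loss", loss, step=epoch)

print(history[-1])
```

Passing `step=epoch` keeps the metric's x-axis aligned with training epochs when the chart is rendered in the studio.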
+ ## Logging models with MLflow After your model is trained, you can log it to the tracking server with the `mlflow.<model_flavor>.log_model()` method. `<model_flavor>`, refers to the framework associated with the model. [Learn what model flavors are supported](https://mlflow.org/docs/latest/models.html#model-api). In the following example, a model created with the Spark library MLLib is being registered: + ```python mlflow.spark.log_model(model, artifact_path = "model") ```
The [Training models in Azure Databricks and deploying them on Azure ML](https:/
## Next steps * [Deploy MLflow models as an Azure web service](how-to-deploy-mlflow-models.md). * [Manage your models](concept-model-management-and-deployment.md).
-* [Track experiment runs with MLflow and Azure Machine Learning](how-to-use-mlflow.md).
+* [Track experiment jobs with MLflow and Azure Machine Learning](how-to-use-mlflow.md).
* Learn more about [Azure Databricks and MLflow](/azure/databricks/applications/mlflow/).
machine-learning How To Use Mlflow Cli Runs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-mlflow-cli-runs.md
ms.devlang: azurecli
In this article, learn how to enable [MLflow Tracking](https://mlflow.org/docs/latest/quickstart.html#using-the-tracking-api) to connect Azure Machine Learning as the backend of your MLflow experiments.
-[MLflow](https://www.mlflow.org) is an open-source library for managing the lifecycle of your machine learning experiments. MLflow Tracking is a component of MLflow that logs and tracks your training run metrics and model artifacts, no matter your experiment's environment--locally on your computer, on a remote compute target, a virtual machine, or an [Azure Databricks cluster](how-to-use-mlflow-azure-databricks.md).
+[MLflow](https://www.mlflow.org) is an open-source library for managing the lifecycle of your machine learning experiments. MLflow Tracking is a component of MLflow that logs and tracks your training job metrics and model artifacts, no matter your experiment's environment--locally on your computer, on a remote compute target, a virtual machine, or an [Azure Databricks cluster](how-to-use-mlflow-azure-databricks.md).
See [MLflow and Azure Machine Learning](concept-mlflow.md) for all supported MLflow and Azure Machine Learning functionality including MLflow Project support (preview) and model deployment.
See [MLflow and Azure Machine Learning](concept-mlflow.md) for all supported MLf
> When using the Azure Machine Learning SDK v2, no native logging is provided. Instead, use MLflow's tracking capabilities. For more information, see [How to log and view metrics (v2)](how-to-log-view-metrics.md). > [!TIP]
-> The information in this document is primarily for data scientists and developers who want to monitor the model training process. If you are an administrator interested in monitoring resource usage and events from Azure Machine Learning, such as quotas, completed training runs, or completed model deployments, see [Monitoring Azure Machine Learning](monitor-azure-machine-learning.md).
+> The information in this document is primarily for data scientists and developers who want to monitor the model training process. If you are an administrator interested in monitoring resource usage and events from Azure Machine Learning, such as quotas, completed training jobs, or completed model deployments, see [Monitoring Azure Machine Learning](monitor-azure-machine-learning.md).
> [!NOTE] > You can use the [MLflow Skinny client](https://github.com/mlflow/mlflow/blob/master/README_SKINNY.rst) which is a lightweight MLflow package without SQL storage, server, UI, or data science dependencies. This is recommended for users who primarily need the tracking and logging capabilities without importing the full suite of MLflow features including deployments.
See [MLflow and Azure Machine Learning](concept-mlflow.md) for all supported MLf
* Install and [set up CLI (v2)](how-to-configure-cli.md#prerequisites) and make sure you install the ml extension. * Install and set up SDK (v2) for Python + ## Track runs from your local machine or remote compute Tracking using MLflow with Azure Machine Learning lets you store the logged metrics and artifacts from runs that were executed on your local machine into your Azure Machine Learning workspace.
Tracking using MLflow with Azure Machine Learning lets you store the logged metr
To track a run that is not running on Azure Machine Learning compute (from now on referred to as *"local compute"*), you need to point your local compute to the Azure Machine Learning MLflow Tracking URI. + > [!NOTE] > When running on Azure Compute (Azure Notebooks, Jupyter Notebooks hosted on Azure Compute Instances or Compute Clusters) you don't have to configure the tracking URI. It's automatically configured for you.
export MLFLOW_TRACKING_URI=$(az ml workspace show --query mlflow_tracking_uri |
The Azure Machine Learning Tracking URI can be constructed using the subscription ID, region of where the resource is deployed, resource group name and workspace name. The following code sample shows how: + ```python import mlflow
You can also set one of the MLflow environment variables [MLFLOW_EXPERIMENT_NAME
export MLFLOW_EXPERIMENT_NAME="experiment_with_mlflow" ```
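Constructing the tracking URI mentioned above amounts to formatting the workspace's Azure resource path into an `azureml://` URI. A sketch with placeholder values (substitute your own, or read the exact value from `az ml workspace show --query mlflow_tracking_uri`); the `mlflow.set_tracking_uri` call is shown commented so the snippet stays self-contained:

```python
# Placeholder values; substitute your own workspace details.
region = "eastus"
subscription_id = "00000000-0000-0000-0000-000000000000"
resource_group = "my-resource-group"
workspace_name = "my-workspace"

# The tracking URI encodes the workspace's Azure resource path.
azureml_mlflow_uri = (
    f"azureml://{region}.api.azureml.ms/mlflow/v1.0"
    f"/subscriptions/{subscription_id}"
    f"/resourceGroups/{resource_group}"
    f"/providers/Microsoft.MachineLearningServices"
    f"/workspaces/{workspace_name}"
)
# import mlflow
# mlflow.set_tracking_uri(azureml_mlflow_uri)
print(azureml_mlflow_uri)
```

After setting the tracking URI, all subsequent MLflow logging calls in the session target the workspace instead of the local `mlruns` directory.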
-### Start training run
+### Start training job
-After you set the MLflow experiment name, you can start your training run with `start_run()`. Then use `log_metric()` to activate the MLflow logging API and begin logging your training run metrics.
+After you set the MLflow experiment name, you can start your training job with `start_run()`. Then use `log_metric()` to activate the MLflow logging API and begin logging your training job metrics.
```Python import os
with mlflow.start_run() as mlflow_run:
mlflow.log_artifact("helloworld.txt") ``` + For details about how to log metrics, parameters and artifacts in a run using MLflow, view [How to log and view metrics](how-to-log-view-metrics.md). ## Track jobs running on Azure Machine Learning + [!INCLUDE [cli v2](../../includes/machine-learning-cli-v2.md)] Remote runs (jobs) let you train your models in a more robust and repeatable way. They can also leverage more powerful computes, such as Machine Learning Compute clusters. See [Use compute targets for model training](how-to-set-up-training-targets.md) to learn about different compute options. + When submitting runs using jobs, Azure Machine Learning automatically configures MLflow to work with the workspace the job is running in. This means that there is no need to configure the MLflow tracking URI. On top of that, experiments are automatically named based on the details of the job. > [!IMPORTANT]
When submitting runs using jobs, Azure Machine Learning automatically configures
### Creating a training routine + First, you should create a `src` subdirectory and create a file with your training code in a `hello_world.py` file in the `src` subdirectory. All your training code will go into the `src` subdirectory, including `train.py`. The training code is taken from this [MLfLow example](https://github.com/Azure/azureml-examples/blob/main/cli/jobs/basics/src/hello-mlflow.py) in the Azure Machine Learning example repo.
Copy this code into the file:
:::code language="python" source="~/azureml-examples-main/cli/jobs/basics/src/hello-mlflow.py"::: + > [!NOTE] > Note that this sample doesn't contain the `mlflow.start_run` or `mlflow.set_experiment` instructions. Azure Machine Learning does this automatically.
Copy this code into the file:
Use the [Azure Machine Learning CLI (v2)](how-to-train-cli.md) to submit a remote run. When using the Azure Machine Learning CLI (v2), the MLflow tracking URI and experiment name are set automatically, directing the logging from MLflow to your workspace. Learn more about [logging Azure Machine Learning CLI (v2) experiments with MLflow](how-to-train-cli.md#model-tracking-with-mlflow). + Create a YAML file with your job definition in a `job.yml` file. This file should be created outside the `src` directory. Copy this code into the file: :::code language="azurecli" source="~/azureml-examples-main/cli/jobs/basics/hello-mlflow.yml":::
Retrieve run metric using MLflow [get_run()](https://mlflow.org/docs/latest/pyth
```Python from mlflow.tracking import MlflowClient
-# Use MlFlow to retrieve the run that was just completed
+# Use MLflow to retrieve the job that was just completed
client = MlflowClient() run_id = mlflow_run.info.run_id finished_mlflow_run = MlflowClient().get_run(run_id)
params = finished_mlflow_run.data.params
print(metrics,tags,params) ``` + To view the artifacts of a run, you can use [MlFlowClient.list_artifacts()](https://mlflow.org/docs/latest/python_api/mlflow.tracking.html#mlflow.tracking.MlflowClient.list_artifacts) ```Python
client.download_artifacts(run_id, "helloworld.txt", ".")
For more details about how to retrieve information from experiments and runs in Azure Machine Learning using MLflow view [Manage experiments and runs with MLflow](how-to-track-experiments-mlflow.md). + ## Manage models
-Register and track your models with the [Azure Machine Learning model registry](concept-model-management-and-deployment.md#register-package-and-deploy-models-from-anywhere), which supports the MLflow model registry. Azure Machine Learning models are aligned with the MLflow model schema making it easy to export and import these models across different workflows. The MLflow-related metadata, such as run ID, is also tracked with the registered model for traceability. Users can submit training runs, register, and deploy models produced from MLflow runs.
+Register and track your models with the [Azure Machine Learning model registry](concept-model-management-and-deployment.md#register-package-and-deploy-models-from-anywhere), which supports the MLflow model registry. Azure Machine Learning models are aligned with the MLflow model schema making it easy to export and import these models across different workflows. The MLflow-related metadata, such as run ID, is also tracked with the registered model for traceability. Users can submit training jobs, register, and deploy models produced from MLflow runs.
If you want to deploy and register your production ready model in one step, see [Deploy and register MLflow models](how-to-deploy-mlflow-models.md).
-To register and view a model from a run, use the following steps:
+To register and view a model from a job, use the following steps:
-1. Once a run is complete, call the [`register_model()`](https://mlflow.org/docs/latest/python_api/mlflow.html#mlflow.register_model) method.
+1. Once a job is complete, call the [`register_model()`](https://mlflow.org/docs/latest/python_api/mlflow.html#mlflow.register_model) method.
```Python
- # the model folder produced from a run is registered. This includes the MLmodel file, model.pkl and the conda.yaml.
+ # the model folder produced from a job is registered. This includes the MLmodel file, model.pkl and the conda.yaml.
model_path = "model" model_uri = 'runs:/{}/{}'.format(run_id, model_path) mlflow.register_model(model_uri,"registered_model_name")
To register and view a model from a run, use the following steps:
![model-schema](./media/how-to-use-mlflow-cli-runs/mlflow-model-schema.png)
-1. Select MLmodel to see the MLmodel file generated by the run.
+1. Select MLmodel to see the MLmodel file generated by the job.
![MLmodel-schema](./media/how-to-use-mlflow-cli-runs/mlmodel-view.png)
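The registration call above takes a `runs:/` model URI. As a quick illustrative sketch of how that URI is assembled (the helper and the `run_id` value are hypothetical, not from the article):

```python
# Sketch of the "runs:/<run_id>/<artifact path>" URI that
# mlflow.register_model() expects; run_id here is a made-up example value.
def build_model_uri(run_id: str, model_path: str = "model") -> str:
    """Return the MLflow 'runs:' URI for a model artifact folder."""
    return "runs:/{}/{}".format(run_id, model_path)

print(build_model_uri("1a2b3c"))  # runs:/1a2b3c/model
```

The same URI string is what you would pass as the first argument to `mlflow.register_model()`.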
machine-learning How To Use Pipeline Parameter https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-pipeline-parameter.md
Use pipeline parameters to build flexible pipelines in the designer. Pipeline parameters let you dynamically set values at runtime to encapsulate pipeline logic and reuse assets.
-Pipeline parameters are especially useful when resubmitting a pipeline run, [retraining models](how-to-retrain-designer.md), or [performing batch predictions](how-to-run-batch-predictions-designer.md).
+Pipeline parameters are especially useful when resubmitting a pipeline job, [retraining models](how-to-retrain-designer.md), or [performing batch predictions](how-to-run-batch-predictions-designer.md).
In this article, you learn how to do the following:

> [!div class="checklist"]
> * Create pipeline parameters
> * Delete and manage pipeline parameters
-> * Trigger pipeline runs while adjusting pipeline parameters
+> * Trigger pipeline jobs while adjusting pipeline parameters
## Prerequisites
In this section, you will learn how to attach and detach component parameter to
### Attach component parameter to pipeline parameter
-You can attach the same component parameters of duplicated components to the same pipeline parameter if you want to alter the value at one time when triggering the pipeline run.
+You can attach the same component parameters of duplicated components to a single pipeline parameter if you want to alter all of their values at once when triggering the pipeline job.
The following example has duplicated **Clean Missing Data** component. For each **Clean Missing Data** component, attach **Replacement value** to pipeline parameter **replace-missing-value**:
Use the following steps to delete a component pipeline parameter:
> [!NOTE]
> Deleting a pipeline parameter causes all attached component parameters to be detached, and the detached component parameters keep the current pipeline parameter value.
-## Trigger a pipeline run with pipeline parameters
+## Trigger a pipeline job with pipeline parameters
-In this section, you learn how to submit a pipeline run while setting pipeline parameters.
+In this section, you learn how to submit a pipeline job while setting pipeline parameters.
-### Resubmit a pipeline run
+### Resubmit a pipeline job
-After submitting a pipeline with pipeline parameters, you can resubmit a pipeline run with different parameters:
+After submitting a pipeline with pipeline parameters, you can resubmit a pipeline job with different parameters:
-1. Go to pipeline detail page. In the **Pipeline run overview** window, you can check current pipeline parameters and values.
+1. Go to pipeline detail page. In the **Pipeline job overview** window, you can check current pipeline parameters and values.
1. Select **Resubmit**.
-1. In the **Setup pipeline run**, specify your new pipeline parameters.
+1. In the **Setup pipeline job**, specify your new pipeline parameters.
![Screenshot that shows resubmit pipeline with pipeline parameters](media/how-to-use-pipeline-parameter/resubmit-pipeline-run.png)
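Outside the designer UI, the resubmit-with-new-parameters pattern boils down to overriding a subset of default values. A minimal sketch (the parameter names are hypothetical, echoing the `replace-missing-value` example above):

```python
# Hypothetical pipeline parameter defaults; on resubmit only the
# overridden keys change, everything else keeps its original value.
default_params = {"replace-missing-value": "0", "min-rows": "3000"}
overrides = {"replace-missing-value": "-1"}

resubmit_params = {**default_params, **overrides}
print(resubmit_params)  # {'replace-missing-value': '-1', 'min-rows': '3000'}
```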
machine-learning How To Use Reinforcement Learning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-reinforcement-learning.md
In this article you learn how to:
> * Set up an experiment
> * Define head and worker nodes
> * Create an RL estimator
-> * Submit an experiment to start a run
+> * Submit an experiment to start a job
> * View results

This article is based on the [RLlib Pong example](https://aka.ms/azureml-rl-pong) that can be found in the Azure Machine Learning notebook [GitHub repository](https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/reinforcement-learning/README.md).
ws = Workspace.from_config()
### Create a reinforcement learning experiment
-Create an [experiment](/python/api/azureml-core/azureml.core.experiment.experiment) to track your reinforcement learning run. In Azure Machine Learning, experiments are logical collections of related trials to organize run logs, history, outputs, and more.
+Create an [experiment](/python/api/azureml-core/azureml.core.experiment.experiment) to track your reinforcement learning job. In Azure Machine Learning, experiments are logical collections of related trials to organize job logs, history, outputs, and more.
```python
experiment_name='rllib-pong-multi-node'
```
else:
Use the [ReinforcementLearningEstimator](/python/api/azureml-contrib-reinforcementlearning/azureml.contrib.train.rl.reinforcementlearningestimator) to submit a training job to Azure Machine Learning.
-Azure Machine Learning uses estimator classes to encapsulate run configuration information. This lets you specify how to configure a script execution.
+Azure Machine Learning uses estimator classes to encapsulate job configuration information. This lets you specify how to configure a script execution.
### Define a worker configuration
rl_estimator = ReinforcementLearningEstimator(
cluster_coordination_timeout_seconds=3600, # Maximum time for the whole Ray job to run
- # This will cut off the run after an hour
+ # This will cut off the job after an hour
max_run_duration_seconds=3600, # Allow the docker container Ray runs in to make full use
def on_train_result(info):
value=info["result"]["episodes_total"])
```
-## Submit a run
+## Submit a job
[Run](/python/api/azureml-core/azureml.core.run%28class%29) handles the run history of in-progress or complete jobs.
run = exp.submit(config=rl_estimator)
## Monitor and view results
-Use the Azure Machine Learning Jupyter widget to see the status of your runs in real time. The widget shows two child runs: one for head and one for workers.
+Use the Azure Machine Learning Jupyter widget to see the status of your jobs in real time. The widget shows two child jobs: one for head and one for workers.
```python
from azureml.widgets import RunDetails
run.wait_for_completion()
```

1. Wait for the widget to load.
-1. Select the head run in the list of runs.
+1. Select the head job in the list of jobs.
-Select **Click here to see the run in Azure Machine Learning studio** for additional run information in the studio. You can access this information while the run is in progress or after it completes.
+Select **Click here to see the job in Azure Machine Learning studio** for additional job information in the studio. You can access this information while the job is in progress or after it completes.
-![Line graph showing how run details widget](./media/how-to-use-reinforcement-learning/pong-run-details-widget.png)
+![Line graph showing how job details widget](./media/how-to-use-reinforcement-learning/pong-run-details-widget.png)
The **episode_reward_mean** plot shows the mean number of points scored per training epoch. You can see that the training agent initially performed poorly, losing its matches without scoring a single point (shown by a reward_mean of -21). Within 100 iterations, the training agent learned to beat the computer opponent by an average of 18 points.
-If you browse logs of the child run, you can see the evaluation results recorded in driver_log.txt file. You may need to wait several minutes before these metrics become available on the Run page.
+If you browse the logs of the child job, you can see the evaluation results recorded in the driver_log.txt file. You may need to wait several minutes before these metrics become available on the Job page.
In this article, you learned how to configure multiple compute resources to train a reinforcement learning agent to play Pong well against a computer opponent.
machine-learning How To Use Secrets In Runs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-secrets-in-runs.md
Title: Authentication secrets in training
-description: Learn how to pass secrets to training runs in secure fashion using the Azure Key Vault for your workspace.
+description: Learn how to pass secrets to training jobs in secure fashion using the Azure Key Vault for your workspace.
-# Use authentication credential secrets in Azure Machine Learning training runs
+# Use authentication credential secrets in Azure Machine Learning training jobs
[!INCLUDE [sdk v1](../../includes/machine-learning-sdk-v1.md)]
-In this article, you learn how to use secrets in training runs securely. Authentication information such as your user name and password are secrets. For example, if you connect to an external database in order to query training data, you would need to pass your username and password to the remote run context. Coding such values into training scripts in cleartext is insecure as it would expose the secret.
+In this article, you learn how to use secrets in training jobs securely. Authentication information such as your user name and password are secrets. For example, if you connect to an external database in order to query training data, you would need to pass your username and password to the remote job context. Coding such values into training scripts in cleartext is insecure as it would expose the secret.
-Instead, your Azure Machine Learning workspace has an associated resource called a [Azure Key Vault](../key-vault/general/overview.md). Use this Key Vault to pass secrets to remote runs securely through a set of APIs in the Azure Machine Learning Python SDK.
+Instead, your Azure Machine Learning workspace has an associated resource called an [Azure Key Vault](../key-vault/general/overview.md). Use this Key Vault to pass secrets to remote jobs securely through a set of APIs in the Azure Machine Learning Python SDK.
The standard flow for using secrets is:

 1. On local computer, log in to Azure and connect to your workspace.
 2. On local computer, set a secret in Workspace Key Vault.
- 3. Submit a remote run.
- 4. Within the remote run, get the secret from Key Vault and use it.
+ 3. Submit a remote job.
+ 4. Within the remote job, get the secret from Key Vault and use it.
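The four steps above can be sketched with the SDK v1 `Keyvault` and `Run` APIs. This is an illustrative sketch, not code from the article; the imports are deferred so the snippet stays importable on a machine without a workspace configured:

```python
def set_workspace_secret(name, value):
    # Steps 1-2: on the local computer, connect and store the secret.
    from azureml.core import Workspace  # deferred import; needs azureml-core
    ws = Workspace.from_config()
    keyvault = ws.get_default_keyvault()
    keyvault.set_secret(name=name, value=value)

def read_secret_in_job(name):
    # Step 4: inside the submitted job, read the secret back by name.
    from azureml.core import Run  # deferred import; needs azureml-core
    run = Run.get_context()
    return run.get_secret(name=name)
```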
## Set secrets
You can list secret names using the [`list_secrets()`](/python/api/azureml-core/
In your local code, you can use the [`get_secret()`](/python/api/azureml-core/azureml.core.keyvault.keyvault#get-secret-name-) method to get the secret value by name.
-For runs submitted the [`Experiment.submit`](/python/api/azureml-core/azureml.core.experiment.experiment#submit-config--tags-none-kwargs-) , use the [`get_secret()`](/python/api/azureml-core/azureml.core.run.run#get-secret-name-) method with the [`Run`](/python/api/azureml-core/azureml.core.run%28class%29) class. Because a submitted run is aware of its workspace, this method shortcuts the Workspace instantiation and returns the secret value directly.
+For jobs submitted using [`Experiment.submit`](/python/api/azureml-core/azureml.core.experiment.experiment#submit-config--tags-none-kwargs-), use the [`get_secret()`](/python/api/azureml-core/azureml.core.run.run#get-secret-name-) method with the [`Run`](/python/api/azureml-core/azureml.core.run%28class%29) class. Because a submitted job is aware of its workspace, this method shortcuts the Workspace instantiation and returns the secret value directly.
```python
-# Code in submitted run
+# Code in submitted job
from azureml.core import Experiment, Run

run = Run.get_context()
```
machine-learning How To Use Sweep In Pipeline https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-sweep-in-pipeline.md
The following code snippet shows how to enable sweep for `train_model`.
After you submit a pipeline job, the SDK or CLI widget will give you a web URL link to Studio UI. The link will guide you to the pipeline graph view by default.
-To check details of the sweep step, double click the sweep step and navigate to the **child run** tab in the panel on the right.
+To check details of the sweep step, double click the sweep step and navigate to the **child job** tab in the panel on the right.
-This will link you to the sweep job page as seen in the below screenshot. Navigate to **child run** tab, here you can see the metrics of all child runs and list of all child runs.
+This will link you to the sweep job page as seen in the below screenshot. Navigate to the **child job** tab, where you can see the metrics and the list of all child jobs.
-If a child runs failed, select the name of that child run to enter detail page of that specific child run (see screenshot below). The useful debug information is under **Outputs + Logs**.
+If a child job fails, select the name of that child job to enter the detail page of that specific child job (see screenshot below). The useful debug information is under **Outputs + Logs**.
:::image type="content" source="./media/how-to-use-sweep-in-pipeline/child-run.png" alt-text="Screenshot of the output + logs tab of a child run." lightbox= "./media/how-to-use-sweep-in-pipeline/child-run.png":::
machine-learning How To Use Synapsesparkstep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-synapsesparkstep.md
sdf.coalesce(1).write\
    .csv(args.output_dir)
```
-This "data preparation" script doesn't do any real data transformation, but illustrates how to retrieve data, convert it to a spark dataframe, and how to do some basic Apache Spark manipulation. You can find the output in Azure Machine Learning Studio by opening the child run, choosing the **Outputs + logs** tab, and opening the `logs/azureml/driver/stdout` file, as shown in the following figure.
+This "data preparation" script doesn't do any real data transformation, but illustrates how to retrieve data, convert it to a spark dataframe, and how to do some basic Apache Spark manipulation. You can find the output in Azure Machine Learning Studio by opening the child job, choosing the **Outputs + logs** tab, and opening the `logs/azureml/driver/stdout` file, as shown in the following figure.
## Use the `SynapseSparkStep` in a pipeline
pipeline_run = pipeline.submit('synapse-pipeline', regenerate_outputs=True)
The above code creates a pipeline consisting of the data preparation step on Apache Spark pools powered by Azure Synapse Analytics (`step_1`) and the training step (`step_2`). Azure calculates the execution graph by examining the data dependencies between the steps. In this case, there's only a straightforward dependency that `step2_input` necessarily requires `step1_output`.
-The call to `pipeline.submit` creates, if necessary, an Experiment called `synapse-pipeline` and asynchronously begins a Run within it. Individual steps within the pipeline are run as Child Runs of this main run and can be monitored and reviewed in the Experiments page of Studio.
+The call to `pipeline.submit` creates, if necessary, an Experiment called `synapse-pipeline` and asynchronously begins a Job within it. Individual steps within the pipeline are run as Child Jobs of this main job and can be monitored and reviewed in the Experiments page of Studio.
## Next steps
machine-learning How To Version Track Datasets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-version-track-datasets.md
Azure Machine Learning tracks your data throughout your experiment as input and
The following are scenarios where your data is tracked as an **input dataset**.
-* As a `DatasetConsumptionConfig` object through either the `inputs` or `arguments` parameter of your `ScriptRunConfig` object when submitting the experiment run.
+* As a `DatasetConsumptionConfig` object through either the `inputs` or `arguments` parameter of your `ScriptRunConfig` object when submitting the experiment job.
* When methods like get_by_name() or get_by_id() are called in your script. For this scenario, the name assigned to the dataset when you registered it to the workspace is the name displayed.

The following are scenarios where your data is tracked as an **output dataset**.
-* Pass an `OutputFileDatasetConfig` object through either the `outputs` or `arguments` parameter when submitting an experiment run. `OutputFileDatasetConfig` objects can also be used to persist data between pipeline steps. See [Move data between ML pipeline steps.](how-to-move-data-in-out-of-pipelines.md)
+* Pass an `OutputFileDatasetConfig` object through either the `outputs` or `arguments` parameter when submitting an experiment job. `OutputFileDatasetConfig` objects can also be used to persist data between pipeline steps. See [Move data between ML pipeline steps.](how-to-move-data-in-out-of-pipelines.md)
* Register a dataset in your script. For this scenario, the name assigned to the dataset when you registered it to the workspace is the name displayed. In the following example, `training_ds` is the name that would be displayed.
The following are scenarios where your data is tracked as an **output dataset**.
) ```
-* Submit child run with an unregistered dataset in script. This results in an anonymous saved dataset.
+* Submit child job with an unregistered dataset in script. This results in an anonymous saved dataset.
-### Trace datasets in experiment runs
+### Trace datasets in experiment jobs
-For each Machine Learning experiment, you can easily trace the datasets used as input with the experiment `Run` object.
+For each Machine Learning experiment, you can easily trace the datasets used as input with the experiment `Job` object.
The following code uses the [`get_details()`](/python/api/azureml-core/azureml.core.run.run#get-details--) method to track which input datasets were used with the experiment run:
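The original sample isn't reproduced here; as an illustrative sketch of that pattern (assuming the SDK v1 `Run.get_details()` return shape, where input datasets appear under an `inputDatasets` key — the helper itself is hypothetical):

```python
def input_dataset_names(run):
    """Collect the names of the input datasets recorded on a run."""
    details = run.get_details()  # plain dict returned by the SDK
    return [entry["dataset"].name for entry in details.get("inputDatasets", [])]
```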
machine-learning Migrate Rebuild Experiment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/migrate-rebuild-experiment.md
In Azure Machine Learning, the visual graph is called a **pipeline draft**. In t
Select each module and adjust the parameters in the module settings panel to the right. Use the parameters to recreate the functionality of your Studio (classic) experiment. For more information on each module, see the [module reference](./component-reference/component-reference.md).
-## Submit a run and check results
+## Submit a job and check results
-After you recreate your Studio (classic) experiment, it's time to submit a **pipeline run**.
+After you recreate your Studio (classic) experiment, it's time to submit a **pipeline job**.
-A pipeline run executes on a **compute target** attached to your workspace. You can set a default compute target for the entire pipeline, or you can specify compute targets on a per-module basis.
+A pipeline job executes on a **compute target** attached to your workspace. You can set a default compute target for the entire pipeline, or you can specify compute targets on a per-module basis.
-Once you submit a run from a pipeline draft, it turns into a **pipeline run**. Each pipeline run is recorded and logged in Azure Machine Learning.
+Once you submit a job from a pipeline draft, it turns into a **pipeline job**. Each pipeline job is recorded and logged in Azure Machine Learning.
To set a default compute target for the entire pipeline:

1. Select the **Gear icon** ![Gear icon in the designer](./media/tutorial-designer-automobile-price-train-score/gear-icon.png) next to the pipeline name.
1. Select **Select compute target**.
1. Select an existing compute, or create a new compute by following the on-screen instructions.
-Now that your compute target is set, you can submit a pipeline run:
+Now that your compute target is set, you can submit a pipeline job:
1. At the top of the canvas, select **Submit**.
1. Select **Create new** to create a new experiment.
- Experiments organize similar pipeline runs together. If you run a pipeline multiple times, you can select the same experiment for successive runs. This is useful for logging and tracking.
+ Experiments organize similar pipeline jobs together. If you run a pipeline multiple times, you can select the same experiment for successive jobs. This is useful for logging and tracking.
1. Enter an experiment name. Then, select **Submit**.
- The first run may take up to 20 minutes. Since the default compute settings have a minimum node size of 0, the designer must allocate resources after being idle. Successive runs take less time, since the nodes are already allocated. To speed up the running time, you can create a compute resources with a minimum node size of 1 or greater.
+ The first job may take up to 20 minutes. Since the default compute settings have a minimum node size of 0, the designer must allocate resources after being idle. Successive jobs take less time, since the nodes are already allocated. To speed up the running time, you can create a compute resource with a minimum node size of 1 or greater.
-After the run finishes, you can check the results of each module:
+After the job finishes, you can check the results of each module:
1. Right-click the module whose output you want to see.
1. Select either **Visualize**, **View Output**, or **View Log**.
machine-learning Migrate Rebuild Web Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/migrate-rebuild-web-service.md
In Studio (classic), you used a **REQUEST/RESPOND web service** to deploy a mode
There are multiple ways to deploy a model in Azure Machine Learning. One of the simplest ways is to use the designer to automate the deployment process. Use the following steps to deploy a model as a real-time endpoint:

1. Run your completed training pipeline at least once.
-1. After the run completes, at the top of the canvas, select **Create inference pipeline** > **Real-time inference pipeline**.
+1. After the job completes, at the top of the canvas, select **Create inference pipeline** > **Real-time inference pipeline**.
![Create realtime inference pipeline](./media/migrate-rebuild-web-service/create-inference-pipeline.png)
Use the following steps to publish a pipeline endpoint for batch prediction:
1. Run your completed training pipeline at least once.
-1. After the run completes, at the top of the canvas, select **Create inference pipeline** > **Batch inference pipeline**.
+1. After the job completes, at the top of the canvas, select **Create inference pipeline** > **Batch inference pipeline**.
![Screenshot showing the create inference pipeline button on a training pipeline](./media/migrate-rebuild-web-service/create-inference-pipeline.png)
machine-learning Reference Yaml Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-yaml-model.md
The source JSON schema can be found at https://azuremlschemas.azureedge.net/late
| | - | -- | -- |
| `$schema` | string | The YAML schema. | |
| `name` | string | **Required.** Name of the model. | |
-| `version` | string | Version of the model. If omitted, Azure ML will autogenerate a version. | |
+| `version` | int | Version of the model. If omitted, Azure ML will autogenerate a version. | |
| `description` | string | Description of the model. | |
| `tags` | object | Dictionary of tags for the model. | |
| `path` | string | Either a local path to the model file(s), or the URI of a cloud path to the model file(s). This can point to either a file or a directory. | |
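Pulling the fields in the table together, a minimal model specification might look like the following (the name, tag, and path values are illustrative only):

```yaml
$schema: https://azuremlschemas.azureedge.net/latest/model.schema.json
name: my-model
version: 1
description: Example local model registration.
tags:
  stage: dev
path: ./model
```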
machine-learning How To Auto Train Image Models V1 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-auto-train-image-models-v1.md
automl_image_run = experiment.submit(automl_image_config)
The automated ML training run generates output model files, evaluation metrics, logs, and deployment artifacts like the scoring file and the environment file. These can be viewed from the outputs, logs, and metrics tabs of the child runs.

> [!TIP]
-> Check how to navigate to the run results from the [View run results](../how-to-understand-automated-ml.md#view-run-results) section.
+> Check how to navigate to the job results from the [View run results](../how-to-understand-automated-ml.md#view-job-results) section.
For definitions and examples of the performance charts and metrics provided for each run, see [Evaluate automated machine learning experiment results](../how-to-understand-automated-ml.md#metrics-for-image-models-preview)
machine-learning How To Configure Auto Train V1 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-configure-auto-train-v1.md
RunDetails(run).show()
Passing the `test_data` or `test_size` parameters into the `AutoMLConfig` automatically triggers a remote test run that uses the provided test data to evaluate the best model that automated ML recommends upon completion of the experiment. This remote test run is done at the end of the experiment, once the best model is determined. See how to [pass test data into your `AutoMLConfig`](../how-to-configure-cross-validation-data-splits.md#provide-test-data-preview).
-### Get test run results
+### Get test job results
-You can get the predictions and metrics from the remote test run from the [Azure Machine Learning studio](../how-to-use-automated-ml-for-ml-models.md#view-remote-test-run-results-preview) or with the following code.
+You can get the predictions and metrics from the remote test job from the [Azure Machine Learning studio](../how-to-use-automated-ml-for-ml-models.md#view-remote-test-job-results-preview) or with the following code.
```python
predictions_df = pd.read_csv("predictions.csv")
```
-The model test run generates the predictions.csv file that's stored in the default datastore created with the workspace. This datastore is visible to all users with the same subscription. Test runs are not recommended for scenarios if any of the information used for or created by the test run needs to remain private.
+The model test job generates the predictions.csv file that's stored in the default datastore created with the workspace. This datastore is visible to all users with the same subscription. Test jobs are not recommended for scenarios if any of the information used for or created by the test job needs to remain private.
### Test existing automated ML model
-To test other existing automated ML models created, best run or child run, use [`ModelProxy()`](/python/api/azureml-train-automl-client/azureml.train.automl.model_proxy.modelproxy) to test a model after the main AutoML run has completed. `ModelProxy()` already returns the predictions and metrics and does not require further processing to retrieve the outputs.
+To test other existing automated ML models (from the best job or a child job), use [`ModelProxy()`](/python/api/azureml-train-automl-client/azureml.train.automl.model_proxy.modelproxy) to test a model after the main AutoML run has completed. `ModelProxy()` already returns the predictions and metrics and does not require further processing to retrieve the outputs.
> [!NOTE]
> ModelProxy is an [experimental](/python/api/overview/azure/ml/#stable-vs-experimental) preview class, and may change at any time.
marketplace Dynamics 365 Review Publish https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/dynamics-365-review-publish.md
Title: Review and publish a Dynamics 365 offer to Microsoft AppSource (Azure Marketplace)
description: Review and publish a Dynamics 365 offer to Microsoft AppSource (Azure Marketplace).
marketplace Manage Account https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/manage-account.md
Previously updated : 1/20/2022
Last updated : 7/27/2022

# Manage a commercial marketplace account in Partner Center
The billing address is pre-populated from your legal entity, and you can update
Partner Center uses [Azure Active Directory](../active-directory/fundamentals/active-directory-whatis.md) (Azure AD) for multi-user account access and management. Your organization's Azure AD is automatically associated with your Partner Center account as part of the enrollment process.
+## Delete a commercial marketplace account
+
+You may want to delete a commercial marketplace account if the account was created by mistake or is no longer needed. You can't delete a commercial marketplace account yourself. To have an account deleted, create a support request. See [Get help and contact support](/partner-center/report-problems-with-partner-center).
+
+Before creating a support request, ensure that:
+
+- You're the owner/manager of the account and you specifically provide your consent to delete the account.
+- There are no offers or applications (live or unpublished) associated with the account.
+- There are no pending payments associated with the account.
+ ## Next steps

- [Add and manage users](add-manage-users.md)
mysql Concepts Data Encryption Mysql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/concepts-data-encryption-mysql.md
For a MySQL server to use customer-managed keys stored in Key Vault for encrypti
* **wrapKey**: To be able to encrypt the DEK. The encrypted DEK is stored in the Azure Database for MySQL.
* **unwrapKey**: To be able to decrypt the DEK. Azure Database for MySQL needs the decrypted DEK to encrypt/decrypt the data.
-The key vault administrator can also [enable logging of Key Vault audit events](../../azure-monitor/insights/key-vault-insights-overview.md), so they can be audited later.
+The key vault administrator can also [enable logging of Key Vault audit events](../../key-vault/key-vault-insights-overview.md), so they can be audited later.
When the server is configured to use the customer-managed key stored in the key vault, the server sends the DEK to the key vault for encryptions. Key Vault returns the encrypted DEK, which is stored in the user database. Similarly, when needed, the server sends the protected DEK to the key vault for decryption. Auditors can use Azure Monitor to review Key Vault audit event logs, if logging is enabled.
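To make the wrap/unwrap round trip concrete, here is a toy sketch of the envelope pattern. This is deliberately NOT real cryptography — Key Vault performs the wrap with RSA or AES key-wrap algorithms — it only illustrates that the wrapped DEK stored with the database is useless without the key-encryption key:

```python
import os

def xor_bytes(a: bytes, b: bytes) -> bytes:
    # Toy stand-in for a key-wrap algorithm; never use XOR for real data.
    return bytes(x ^ y for x, y in zip(a, b))

kek = os.urandom(32)                 # key-encryption key, held in Key Vault
dek = os.urandom(32)                 # data-encryption key, used by the server
wrapped = xor_bytes(dek, kek)        # "wrapKey": ciphertext stored with the DB
unwrapped = xor_bytes(wrapped, kek)  # "unwrapKey": Key Vault recovers the DEK
assert unwrapped == dek
```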
openshift Support Policies V4 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/support-policies-v4.md
Certain configurations for Azure Red Hat OpenShift 4 clusters can affect your cl
* Don't override any of the cluster's MachineConfig objects (for example, the kubelet configuration) in any way.
* Don't set any unsupportedConfigOverrides options. Setting these options prevents minor version upgrades.
* The Azure Red Hat OpenShift service accesses your cluster via Private Link Service. Don't remove or modify service access.
+* To avoid disruption resulting from cluster maintenance, in-cluster workloads should be configured with high availability practices, including but not limited to pod affinity and anti-affinity, pod disruption budgets, and adequate scaling.
* Non-RHCOS compute nodes aren't supported. For example, you can't use a RHEL compute node.
* Don't place policies within your subscription or management group that prevent SREs from performing normal maintenance against the Azure Red Hat OpenShift cluster. For example, don't require tags on the Azure Red Hat OpenShift RP-managed cluster resource group.
* Do not run extra workloads on the control plane nodes. While they can be scheduled on the control plane nodes, it will cause extra resource usage and stability issues that can affect the entire cluster.
postgresql Concepts Firewall Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/concepts-firewall-rules.md
To configure your firewall, you create firewall rules that specify ranges of acc
All database access to your coordinator node is blocked by the firewall by default. To begin using your server from another computer, you need to specify one or more server-level firewall rules to enable access to your server. Use the firewall rules to specify which IP address ranges from the Internet to allow. Access to the Azure portal website itself is not impacted by the firewall rules. Connection attempts from the Internet and Azure must first pass through the firewall before they can reach your PostgreSQL Database, as shown in the following diagram:

## Connecting from the Internet and from Azure
postgresql Concepts Data Encryption Postgresql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/concepts-data-encryption-postgresql.md
For a PostgreSQL server to use customer-managed keys stored in Key Vault for enc
* **wrapKey**: To be able to encrypt the DEK. The encrypted DEK is stored in the Azure Database for PostgreSQL.
* **unwrapKey**: To be able to decrypt the DEK. Azure Database for PostgreSQL needs the decrypted DEK to encrypt/decrypt the data.
-The key vault administrator can also [enable logging of Key Vault audit events](../../azure-monitor/insights/key-vault-insights-overview.md), so they can be audited later.
+The key vault administrator can also [enable logging of Key Vault audit events](../../key-vault/key-vault-insights-overview.md), so they can be audited later.
When the server is configured to use the customer-managed key stored in the key vault, the server sends the DEK to the key vault for encryption. Key Vault returns the encrypted DEK, which is stored in the user database. Similarly, when needed, the server sends the protected DEK to the key vault for decryption. Auditors can use Azure Monitor to review Key Vault audit event logs, if logging is enabled.
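The wrap/unwrap round trip described above can be sketched with a mock vault. This is a conceptual illustration only: the XOR "wrap" stands in for Key Vault's real wrapKey/unwrapKey algorithms (such as RSA-OAEP), and `MockKeyVault` and its methods are hypothetical names, not the Azure SDK.

```python
import secrets

class MockKeyVault:
    """Stand-in for Azure Key Vault: wraps/unwraps DEKs with a key it never releases.

    Hypothetical mock for illustration; real wrapping uses the key vault's
    wrapKey/unwrapKey operations, not XOR.
    """
    def __init__(self):
        # Key encryption key (KEK): never leaves the vault.
        self._kek = secrets.token_bytes(32)

    def wrap_key(self, dek: bytes) -> bytes:
        # XOR is a placeholder for a real wrap algorithm.
        return bytes(a ^ b for a, b in zip(dek, self._kek))

    def unwrap_key(self, wrapped: bytes) -> bytes:
        # XOR is its own inverse, so unwrapping recovers the DEK.
        return bytes(a ^ b for a, b in zip(wrapped, self._kek))

vault = MockKeyVault()

# The server generates a data encryption key (DEK) ...
dek = secrets.token_bytes(32)
# ... sends it to the vault for wrapping, and stores only the wrapped copy.
stored_wrapped_dek = vault.wrap_key(dek)

# When data must be encrypted or decrypted, the server asks the vault to unwrap it.
recovered_dek = vault.unwrap_key(stored_wrapped_dek)
assert recovered_dek == dek
```

The point of the pattern is that the database only ever persists the wrapped DEK; compromise of the stored copy is useless without access to the key vault.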
route-server Next Hop Ip https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/route-server/next-hop-ip.md
Previously updated : 07/18/2022 Last updated : 07/26/2022

# Next Hop IP support
-Azure Route Server simplifies the exchange of routing information between any Network Virtual Appliance (NVA) that supports the Border Gateway Protocol (BGP) routing protocol and the Azure Software Defined Network (SDN) in the Azure Virtual Network (VNet) without the need to manually configure or maintain route tables. With the support for Next Hop IP in Azure Route Server, you can peer with NVAs deployed behind an Azure Internal Load Balancer (ILB). The internal load balancer lets you set up active-passive connectivity scenarios and leverage load balancing to improve connectivity performance.
+With the support for Next Hop IP in [Azure Route Server](overview.md), you can peer with NVAs deployed behind an Azure Internal Load Balancer (ILB). The internal load balancer lets you set up active-passive connectivity scenarios and leverage load balancing to improve connectivity performance.
:::image type="content" source="./media/next-hop-ip/route-server-next-hop.png" alt-text="Diagram of two NVAs behind a load balancer and a Route Server.":::
route-server Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/route-server/overview.md
Previously updated : 09/27/2021 Last updated : 07/27/2022 #Customer intent: As an IT administrator, I want to learn about Azure Route Server and what I can use it for.
Azure Route Server simplifies configuration, management, and deployment of your
* You no longer need to update [User-Defined Routes](../virtual-network/virtual-networks-udr-overview.md) manually whenever your NVA announces new routes or withdraw old ones.
-* You can peer multiple instances of your NVA with Azure Route Server. You can configure the BGP attributes in your NVA and, depending on your design (for example, active-active for performance or active-passive for resiliency), let Azure Route Server know which NVA instance is active or which one is passive.
+* You can peer multiple instances of your NVA with Azure Route Server. You can configure the BGP attributes in your NVA and, depending on your design (for example, active-active for performance or active-passive for resiliency), let Azure Route Server know which NVA instance is active or which one is passive.
* The interface between NVA and Azure Route Server is based on a common standard protocol. As long as your NVA supports BGP, you can peer it with Azure Route Server. For more information, see [Route Server supported routing protocols](route-server-faq.md#protocol).
* You can deploy Azure Route Server in any of your new or existing virtual networks.
+## Route Server Limits
+
+Azure Route Server has the following limits (per deployment).
++
## FAQ

For frequently asked questions about Azure Route Server, see [Azure Route Server FAQ](route-server-faq.md).
route-server Route Server Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/route-server/route-server-faq.md
Previously updated : 03/25/2022 Last updated : 07/26/2022
No, Azure Route Server supports only 16-bit (2 bytes) ASNs.
### Can I associate a User Defined Route (UDR) to the RouteServerSubnet?
-No, Azure Route Server doesn't support configuring a UDR on the RouteServerSubnet. It should be noted that Azure Route Server does not route any data traffic between NVAs and VMs.
+No, Azure Route Server doesn't support configuring a UDR on the RouteServerSubnet. It should be noted that Azure Route Server doesn't route any data traffic between NVAs and VMs.
### Can I associate a Network Security group (NSG) to the RouteServerSubnet?
Azure Route Server supports ***NO_ADVERTISE*** BGP Community. If an NVA advertis
Azure Route Server has the following limits (per deployment).
-| Resource | Limit |
-|-|-|
-| Number of BGP peers supported | 8 |
-| Number of routes each BGP peer can advertise to Azure Route Server | 1000 |
-| Number of routes that Azure Route Server can advertise to ExpressRoute or VPN gateway | 200 |
-| Number of VMs in the virtual network (including peered virtual networks) that Azure Route Server can support | 2000 |
-The number of VMs that Azure Route Server can support is not a hard limit. This depends on how the Route Server infrastructure is deployed within an Azure Region.
-
-If your NVA advertises more routes than the limit, the BGP session will get dropped. In the event BGP session is dropped between the gateway and Azure Route Server, you'll lose connectivity from your on-premises network to Azure. For more information, see [Diagnose an Azure virtual machine routing problem](../virtual-network/diagnose-network-routing-problem.md).
+For information on troubleshooting routing problems in a virtual machine, see [Diagnose an Azure virtual machine routing problem](../virtual-network/diagnose-network-routing-problem.md).
## Next steps
search Search Indexer Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-indexer-overview.md
Previously updated : 07/11/2022 Last updated : 07/27/2022

# Indexers in Azure Cognitive Search
You can use an indexer as the sole means for data ingestion, or in combination w
Indexers crawl data stores on Azure and outside of Azure.
-+ [Amazon Redshift](search-how-to-index-power-query-data-sources.md) (in preview)
+ [Azure Blob Storage](search-howto-indexing-azure-blob-storage.md)
+ [Azure Cosmos DB](search-howto-index-cosmosdb.md)
+ [Azure Data Lake Storage Gen2](search-howto-index-azure-data-lake-storage.md)
-+ [Azure MySQL](search-howto-index-mysql.md) (in preview)
+ [Azure SQL Database](search-howto-connecting-azure-sql-database-to-azure-search-using-indexers.md)
+ [Azure Table Storage](search-howto-indexing-azure-tables.md)
-+ [Elasticsearch](search-how-to-index-power-query-data-sources.md) (in preview)
-+ [PostgreSQL](search-how-to-index-power-query-data-sources.md) (in preview)
-+ [Salesforce Objects](search-how-to-index-power-query-data-sources.md) (in preview)
-+ [Salesforce Reports](search-how-to-index-power-query-data-sources.md) (in preview)
-+ [Smartsheet](search-how-to-index-power-query-data-sources.md) (in preview)
-+ [Snowflake](search-how-to-index-power-query-data-sources.md) (in preview)
+ [Azure SQL Managed Instance](search-howto-connecting-azure-sql-mi-to-azure-search-using-indexers.md)
+ [SQL Server on Azure Virtual Machines](search-howto-connecting-azure-sql-iaas-to-azure-search-using-indexers.md)
+ [Azure Files](search-file-storage-integration.md) (in preview)
++ [Azure MySQL](search-howto-index-mysql.md) (in preview)
++ [SharePoint in Microsoft 365](search-howto-index-sharepoint-online.md) (in preview)
++ [Azure Cosmos DB (MongoDB API)](search-howto-index-cosmosdb-mongodb.md) (in preview)
++ [Azure Cosmos DB (Gremlin API)](search-howto-index-cosmosdb-gremlin.md) (in preview)

Indexers accept flattened row sets, such as a table or view, or items in a container or folder. In most cases, an indexer creates one search document per row, record, or item.
sentinel Connect Azure Stack https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/connect-azure-stack.md
Add the **Azure Monitor, Update, and Configuration Management** virtual machine
1. After the extension installation completes, its status shows as **Provisioning Succeeded**. It might take up to one hour for the virtual machine to appear in the Microsoft Sentinel portal.
-For more information on installing and configuring the agent for Windows, see [Connect Windows computers](../azure-monitor/agents/agent-windows.md#install-agent-using-setup-wizard).
+For more information on installing and configuring the agent for Windows, see [Connect Windows computers](../azure-monitor/agents/agent-windows.md#install-the-agent).
For Linux troubleshooting of agent issues, see [Troubleshoot Azure Log Analytics Linux Agent](../azure-monitor/agents/agent-linux-troubleshoot.md).
sentinel Indicators Bulk File Import https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/indicators-bulk-file-import.md
+
+ Title: Add indicators in bulk to threat intelligence by file
+
+description: Learn how to bulk add indicators to threat intelligence from flat files in Microsoft Sentinel.
++++ Last updated : 07/26/2022+
+#Customer intent: As a security analyst, I want to bulk import indicators from common file types to my threat intelligence (TI), so I can more effectively share TI during an investigation.
++
+# Add indicators in bulk to Microsoft Sentinel threat intelligence from a CSV or JSON file
+
+In this how-to guide, you'll add indicators from a CSV or JSON file into Microsoft Sentinel threat intelligence. A lot of threat intelligence sharing still happens across emails and other informal channels during an ongoing investigation. The ability to import indicators directly into Microsoft Sentinel threat intelligence allows you to quickly socialize emerging threats for your team and make them available to power other analytics such as producing security alerts, incidents, and automated responses.
+
+> [!IMPORTANT]
+> This feature is currently in PREVIEW. See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+
+## Prerequisites
+
+You must have read and write permissions to the Microsoft Sentinel workspace to store your threat indicators.
+
+## Select an import template for your indicators
+
+Add multiple indicators to your threat intelligence with a specially crafted CSV or JSON file. Download the file templates to get familiar with the fields and how they map to the data you have. Review the required fields for each template type to validate your data before importing.
+
+1. From the [Azure portal](https://portal.azure.com), go to **Microsoft Sentinel**.
+
+1. Select the workspace you want to import threat indicators into.
+
+1. Go to **Threat Intelligence** under the **Threat Management** heading.
+
+ :::image type="content" source="media/indicators-bulk-file-import/import-using-file-menu-fixed.png" alt-text="Screenshot of the menu options to import indicators using a file menu." lightbox="media/indicators-bulk-file-import/import-using-file-menu-fixed.png":::
+
+1. Select **Import** > **Import using a file**.
+
+1. Choose CSV or JSON from the **File Format** drop down menu.
+
+ :::image type="content" source="media/indicators-bulk-file-import/format-select-and-download.png" alt-text="Screenshot of the menu flyout to upload a CSV or JSON file, choose a template to download, and specify a source highlighting the file format selection.":::
+
+1. Select the **Download template** link once you've chosen a bulk upload template.
+
+1. Consider grouping your indicators by source, since each file upload requires a single source.
+
+The templates provide all the fields you need to create a single valid indicator, including required fields and validation parameters. Replicate that structure to populate additional indicators in one file. For more information on the templates, see [Understand the import templates](indicators-bulk-file-import.md#understand-the-import-templates).
++
+## Upload the indicator file
+
+1. Change the file name from the template default, but keep the file extension as .csv or .json. When you create a unique file name, it will be easier to monitor your imports from the **Manage file imports** pane.
+
+1. Drag your indicators file to the **Upload a file** section or browse for the file using the link.
+
+1. Enter a source for the indicators in the **Source** text box. This value will be stamped on all the indicators included in that file. You can view this property as the **SourceSystem** field. The source will also be displayed in the **Manage file imports** pane. Learn more about how to view indicator properties here: [Work with threat indicators](work-with-threat-indicators.md#find-and-view-your-indicators-in-logs).
+
+1. Choose how you want Microsoft Sentinel to handle invalid indicator entries by selecting one of the radio buttons at the bottom of the **Import using a file** pane.
+ - Import only the valid indicators and leave aside any invalid indicators from the file.
+ - Don't import any indicators if a single indicator in the file is invalid.
+
+ :::image type="content" source="media/indicators-bulk-file-import/upload-file-pane.png" alt-text="Screenshot of the menu flyout to upload a CSV or JSON file, choose a template to download, and specify a source highlighting the Import button.":::
+
+1. Select the **Import** button.
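The two invalid-entry options from the upload step behave like the following sketch. The `import_indicators` helper is a hypothetical illustration of the semantics, not Microsoft Sentinel's actual implementation.

```python
def import_indicators(indicators, is_valid, skip_invalid):
    """Mimic the two radio-button behaviors: skip invalid entries, or reject the whole file."""
    invalid = [i for i in indicators if not is_valid(i)]
    if invalid and not skip_invalid:
        # Second option: a single invalid indicator blocks the entire file.
        return [], invalid
    # First option: import only the valid indicators, leaving the invalid ones aside.
    return [i for i in indicators if is_valid(i)], invalid

batch = ["good-1", "bad", "good-2"]
ok = lambda i: i.startswith("good")

partial = import_indicators(batch, ok, skip_invalid=True)   # imports the two valid entries
strict = import_indicators(batch, ok, skip_invalid=False)   # imports nothing
```

With strict mode, nothing is ingested until the file is fixed, which is useful when a source must be imported completely or not at all.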
++
+## Manage file imports
+
+Monitor your imports and view error reports for partially imported or failed imports.
+
+1. Select **Import** > **Manage file imports**.
+
+ :::image type="content" source="media/indicators-bulk-file-import/manage-file-imports.png" alt-text="Screenshot of the menu option to manage file imports.":::
+
+1. Review the status of imported files and the number of invalid indicator entries.
+
+ :::image type="content" source="media/indicators-bulk-file-import/manage-file-imports-pane.png" alt-text="Screenshot of the manage file imports pane with example ingestion data. The columns show sorted by imported number with various sources.":::
+
+1. View and sort imports by selecting **Source**, indicator file **Name**, the number **Imported**, the **Total** number of indicators in each file, or the **Created** date.
+
+1. Preview the error file, or download the error file, to see the details about invalid indicators.
+
+Microsoft Sentinel maintains the status of the file import for 30 days. The actual file and the associated error file are maintained in the system for 24 hours. After 24 hours, the file and the error file are deleted, but the ingested indicators continue to show in the Threat Intelligence menu.
++
+## Understand the import templates
+
+Review each template to ensure your indicators are imported successfully. If this is your first import, be sure to reference the instructions in the template file and follow the supplemental guidance below.
+
+### CSV template structure
+
+1. Choose between the **File indicators** or **All other indicator types** option from the **Indicator type** drop down menu when you select **CSV**.
+
+ The CSV template needs multiple columns to accommodate the file indicator type because file indicators can have multiple hash types like MD5, SHA256, and more. All other indicator types like IP addresses only require the observable type and the observable value.
+
+1. The column headings for the CSV **All other indicator types** template include fields such as `threatTypes`, single or multiple `tags`, `confidence`, and `tlpLevel`. TLP or Traffic Light Protocol is a sensitivity designation to help make decisions on threat intelligence sharing.
+
+1. Only the `validFrom`, `observableType` and `observableValue` fields are required.
+
+1. Delete the entire first row from the template to remove the comments before upload.
+
+1. Keep in mind the max file size for a CSV file import is 50 MB.
+
+Here's an example domain-name indicator using the CSV template.
+
+```CSV
+threatTypes,tags,name,description,confidence,revoked,validFrom,validUntil,tlpLevel,severity,observableType,observableValue
+Phishing,"demo, csv",MDTI article - Franken-Phish domainname,Entity appears in MDTI article Franken-phish,100,,2022-07-18T12:00:00.000Z,,white,5,domain-name,1776769042.tailspintoys.com
+```
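A small script can populate the CSV template programmatically while enforcing the three required fields before upload. This is a hedged sketch: the column headings are copied from the example above, and `build_csv`, `FIELDS`, and `REQUIRED` are illustrative names, not part of any Microsoft Sentinel tooling.

```python
import csv
import io

# Column headings as shown in the CSV example above.
FIELDS = ["threatTypes", "tags", "name", "description", "confidence", "revoked",
          "validFrom", "validUntil", "tlpLevel", "severity",
          "observableType", "observableValue"]
# Required fields per the template guidance above.
REQUIRED = {"validFrom", "observableType", "observableValue"}

def build_csv(indicators):
    """Serialize indicator dicts to CSV text, rejecting rows missing required fields."""
    for ind in indicators:
        missing = REQUIRED - {k for k, v in ind.items() if v}
        if missing:
            raise ValueError(f"indicator missing required fields: {sorted(missing)}")
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=FIELDS, extrasaction="ignore")
    writer.writeheader()
    writer.writerows(indicators)  # absent optional columns are left empty
    return buf.getvalue()

text = build_csv([{
    "threatTypes": "Phishing",
    "tags": "demo, csv",
    "validFrom": "2022-07-18T12:00:00.000Z",
    "observableType": "domain-name",
    "observableValue": "1776769042.tailspintoys.com",
}])
```

Because the writer emits the header row itself, remember that the comment row in the downloaded template still has to be deleted before upload, as noted above.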
+
+### JSON template structure
+
+1. There is only one JSON template for all indicator types.
+
+1. The `pattern` element supports the following indicator types: file, ipv4-addr, ipv6-addr, domain-name, url, user-account, email-addr, and windows-registry-key.
+
+1. Remove the template comments before upload.
+
+1. Close the last indicator in the array using the "}" without a comma.
+
+1. Keep in mind the max file size for a JSON file import is 250 MB.
+
+Here's an example ipv4-addr indicator using the JSON template.
+
+```json
+[
+ {
+ "type": "indicator",
+ "id": "indicator--dbc48d87-b5e9-4380-85ae-e1184abf5ff4",
+ "spec_version": "2.1",
+ "pattern": "([ipv4-addr:value = '198.168.100.5' ] AND [ipv4-addr:value = '198.168.100.10']) WITHIN 300 SECONDS",
+ "pattern_type": "stix",
+ "created": "2022-07-27T12:00:00.000Z",
+ "modified": "2022-07-27T12:00:00.000Z",
+ "valid_from": "2016-07-20T12:00:00.000Z",
+ "name": "Sample IPv4 indicator",
+ "description": "This indicator implements an observation expression.",
+ "indicator_types": [
+ "anonymization",
+ "malicious-activity"
+ ],
+ "kill_chain_phases": [
+ {
+ "kill_chain_name": "mandiant-attack-lifecycle-model",
+ "phase_name": "establish-foothold"
+ }
+ ],
+ "labels": ["proxy","demo"],
+ "confidence": "95",
+ "lang": "",
+ "external_references": [],
+ "object_marking_refs": [],
+ "granular_markings": []
+ }
+]
+```
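A pre-upload check can catch the common mistakes called out above (trailing commas, oversized files, unsupported pattern types) before the portal rejects the file. This is a hedged sketch: `check_indicator_file` and `SUPPORTED` are illustrative names, the 250 MB limit is the one stated above, and the checks are not Microsoft Sentinel's actual validation logic.

```python
import json

MAX_BYTES = 250 * 1024 * 1024  # 250 MB JSON limit stated above
# Pattern-able observable types listed above.
SUPPORTED = {"file", "ipv4-addr", "ipv6-addr", "domain-name", "url",
             "user-account", "email-addr", "windows-registry-key"}

def check_indicator_file(text):
    """Return the parsed indicator list, raising ValueError on common mistakes."""
    if len(text.encode("utf-8")) > MAX_BYTES:
        raise ValueError("file exceeds the 250 MB limit")
    # json.loads rejects trailing commas, catching the
    # 'close the last indicator with "}" and no comma' mistake.
    indicators = json.loads(text)
    if not isinstance(indicators, list):
        raise ValueError("top level must be a JSON array of indicators")
    for ind in indicators:
        if ind.get("type") != "indicator" or "pattern" not in ind:
            raise ValueError("each entry needs type 'indicator' and a pattern")
        if not any(t in ind["pattern"] for t in SUPPORTED):
            raise ValueError("pattern must use a supported observable type")
    return indicators

sample = '[{"type": "indicator", "pattern": "[ipv4-addr:value = \'198.168.100.5\']"}]'
parsed = check_indicator_file(sample)
```

Running the check locally is cheaper than round-tripping through the **Manage file imports** error report for obvious syntax errors.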
+
+## Next steps
+
+This article has shown you how to manually bolster your threat intelligence by importing indicators gathered in flat files. Check out these links to learn how indicators power other analytics in Microsoft Sentinel.
+- [Work with threat indicators in Microsoft Sentinel](work-with-threat-indicators.md)
+- [Threat indicators for cyber threat intelligence in Microsoft Sentinel](/azure/architecture/example-scenario/dat)
+- [Detect threats quickly with near-real-time (NRT) analytics rules in Microsoft Sentinel](near-real-time-rules.md)
service-fabric How To Managed Cluster Modify Node Type https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/how-to-managed-cluster-modify-node-type.md
Remove-AzServiceFabricManagedNodeType -ResourceGroupName $resourceGroup -Cluster
You can scale a Service Fabric managed cluster node type with portal, ARM template, or PowerShell. You can also [configure autoscale for a secondary node type](how-to-managed-cluster-autoscale.md) if you want a fully automated solution.

> [!NOTE]
-> For the Primary node type, you will not be able to go below 3 nodes for a Basic SKU cluster, and 5 nodes for a Standard SKU cluster.
+> * A primary node type can't be set to auto-scale and you can only set it to manual scale.
+> * For the Primary node type, you will not be able to go below 3 nodes for a Basic SKU cluster, and 5 nodes for a Standard SKU cluster.
### Scale using portal
In this walkthrough, you will learn how to modify the node count for a node type
4) Select the `Node type name` you want to modify
-5) Adjust the `Node count` to the new value you want and select `Apply` at the bottom. In this screenshot, the value was `3` and adjusted to `5`.
+5) Review and update node type properties if needed.
+ ![Sample showing a node count increase][adjust-node-count]
-6) The `Provisioning state` will now show a status of `Updating` until complete. When complete, it will show `Succeeded` again.
+6) Select `Manage node type scaling` to configure the scaling settings and choose between custom autoscale and manual scale options. Autoscale is a built-in feature that helps applications perform their best when demand changes. You can scale your resource manually to a specific instance count, via a custom autoscale policy that scales based on metric thresholds, or via scheduled instance counts that scale during designated time windows. [Learn more about Azure Autoscale](https://docs.microsoft.com/azure/azure-monitor/platform/autoscale-get-started?WT.mc_id=Portal-Microsoft_Azure_Monitoring) or [view the how-to video](https://www.microsoft.com/videoplayer/embed/RE4u7ts).
+
+ * **Custom autoscale**: Select the appropriate `scale mode` to define the custom autoscale policy: `Scale to a specific instance count` or `Scale based on a metric`. The latter is based on metric trigger rules, for example, increase instance count by 1 when CPU Percentage is above 70%. Once you define the policy, select `Save` at the top.
+
+ ![Sample showing auto scaling setting][auto-scale-setting]
+
+ * **Manual scale**: Adjust the `Node count` to the new value you want and select `Save` at the top. In this screenshot, the value was `3` and adjusted to `5`.
+
+ ![Sample showing manual scaling setting][manual-scale-setting]
+
+ Select `Apply` at the bottom to configure these saved settings on the node type.
+
+7) The `Provisioning state` will now show a status of `Updating` until complete. When complete, it will show `Succeeded` again.
![Sample showing a node type updating][node-type-updating]
Service Fabric managed clusters by default configure a Service Fabric data disk
[overview]: ./media/how-to-managed-cluster-modify-node-type/sfmc-overview.png
[node-type-updating]: ./media/how-to-managed-cluster-modify-node-type/sfmc-adjust-node-type-updating.png
-[adjust-node-count]: ./media/how-to-managed-cluster-modify-node-type/sfmc-adjust-node-counts.png
+[adjust-node-count]: ./media/how-to-managed-cluster-modify-node-type/sfmc-adjust-node-counts-new.png
+[manual-scale-setting]: ./media/how-to-managed-cluster-modify-node-type/sfmc-manual-scale-setting.png
+[auto-scale-setting]: ./media/how-to-managed-cluster-modify-node-type/sfmc-auto-scale-setting-new.png
[change-nodetype-os-image]: ./media/how-to-managed-cluster-modify-node-type/sfmc-change-os-image.png
[nodetype-placement-property]: ./media/how-to-managed-cluster-modify-node-type/sfmc-nodetype-placement-property.png
[addremove]: ./media/how-to-managed-cluster-modify-node-type/sfmc-addremove-node-type.png
site-recovery Concepts On Premises To Azure Networking https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/concepts-on-premises-to-azure-networking.md
Previously updated : 10/13/2019 Last updated : 07/26/2022
In this scenario, the Azure VM gets a new IP address after failover. To setup a
2. Select the desired Azure virtual machine.
3. Select **Compute and Network** and select **Edit**.
- ![Customize the failover networking configurations](media/azure-to-azure-customize-networking/edit-networking-properties.png)
+ :::image type="content" source="media/azure-to-azure-customize-networking/edit-networking-properties.png" alt-text="Customize the failover networking configurations.":::
4. To update Failover network settings, Select **Edit** for the NIC you want to configure. In the next page that opens, provide the corresponding pre-created IP Address in the test failover and failover location.
- ![Edit the NIC configuration](media/azure-to-azure-customize-networking/nic-drilldown.png)
+ :::image type="content" source="media/azure-to-azure-customize-networking/nic-drilldown.png" alt-text="Edit the NIC configuration.":::
5. Select **OK**.
site-recovery Monitor Log Analytics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/monitor-log-analytics.md
We recommend that you review [common monitoring questions](monitoring-common-que
## Configure Site Recovery to send logs
-1. In the vault, click **Diagnostic settings** > **Add diagnostic setting**.
+1. In the vault, select **Diagnostic settings** > **Add diagnostic setting**.
![Screenshot showing the Add diagnostic setting option.](./media/monitoring-log-analytics/add-diagnostic.png)

2. In **Diagnostic settings**, specify a name, and check the box **Send to Log Analytics**.
3. Select the Azure Monitor Logs subscription, and the Log Analytics workspace.
4. Select **Azure Diagnostics** in the toggle.
-5. From the log list, select all the logs with the prefix **AzureSiteRecovery**. Then click **OK**.
+5. From the log list, select all the logs with the prefix **AzureSiteRecovery**. Then select **OK**.
![Screenshot of the Diagnostics setting screen.](./media/monitoring-log-analytics/select-workspace.png)
The Site Recovery logs start to feed into a table (**AzureDiagnostics**) in the
You can capture the data churn rate and source data upload rate information for your on-premises VMware/physical machines. To enable this, the Microsoft Monitoring Agent must be installed on the Process Server.
-1. Go to the Log Analytics workspace and click on **Advanced Settings**.
-2. Click on **Connected Sources** page and further select **Windows Servers**.
+1. Go to the Log Analytics workspace and select **Advanced Settings**.
+2. Select **Connected Sources** page and further select **Windows Servers**.
3. Download the Windows Agent (64 bit) on the Process Server.
4. [Obtain the workspace ID and key](../azure-monitor/agents/agent-windows.md#workspace-id-and-key)
5. [Configure agent to use TLS 1.2](../azure-monitor/agents/agent-windows.md#configure-agent-to-use-tls-12)
-6. [Complete the agent installation](../azure-monitor/agents/agent-windows.md#install-agent-using-setup-wizard) by providing the obtained workspace ID and key.
-7. Once the installation is complete, go to Log Analytics workspace and click on **Advanced Settings**. Go to the **Data** page and further click on **Windows Performance Counters**.
-8. Click on **'+'** to add the following two counters with sample interval of 300 seconds:
+6. [Complete the agent installation](../azure-monitor/agents/agent-windows.md#install-the-agent) by providing the obtained workspace ID and key.
+7. Once the installation is complete, go to Log Analytics workspace and select **Advanced Settings**. Go to the **Data** page and select **Windows Performance Counters**.
+8. Select **'+'** to add the following two counters with sample interval of 300 seconds:
- ASRAnalytics(*)\SourceVmChurnRate
- ASRAnalytics(*)\SourceVmThrpRate
Category contains "Upload", "UploadRate", "none") 
> [!Note]
> Ensure you set up the monitoring agent on the Process Server to fetch these logs. Refer to the [steps to configure the monitoring agent](#configure-microsoft-monitoring-agent-on-the-process-server-to-send-churn-and-upload-rate-logs).
-This query plots a trend graph for a specific disk **disk0** of a replicated item **win-9r7sfh9qlru**, that represents the data change rate (Write Bytes per Second), and data upload rate. You can find the disk name on **Disks** blade of the replicated item in the recovery services vault. Instance name to be used in the query is DNS name of the machine followed by _ and disk name as in this example.
+This query plots a trend graph for a specific disk, **disk0**, of a replicated item, **win-9r7sfh9qlru**, which represents the data change rate (Write Bytes per Second) and data upload rate. You can find the disk name on **Disks** blade of the replicated item in the recovery services vault. Instance name to be used in the query is DNS name of the machine followed by _ and disk name as in this example.
``` Perf
site-recovery Physical Manage Configuration Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/physical-manage-configuration-server.md
Upgrade the server as follows:
```powershell
$Vault = Get-AzRecoveryServicesVault -Name <name of your vault>
- Set-AzSiteRecoveryVaultSettings -ARSVault $Vault
+ Set-AzRecoveryServicesVaultContext -Vault $Vault
```

4. Select your configuration server.
- `$Fabric = Get-AzSiteRecoveryFabric -FriendlyName <name of your configuration server>`
+ `$Fabric = Get-AzRecoveryServicesAsrFabric -FriendlyName <name of your configuration server>`
6. Delete the Configuration Server
- `Remove-AzSiteRecoveryFabric -Fabric $Fabric [-Force]`
+ `Remove-AzRecoveryServicesAsrFabric -Fabric $Fabric [-Force]`
> [!NOTE]
-> The **-Force** option in the Remove-AzSiteRecoveryFabric can be used to force the removal/deletion of the Configuration server.
+> The **-Force** option in the Remove-AzRecoveryServicesAsrFabric can be used to force the removal/deletion of the Configuration server.
## Renew TLS/SSL certificates The configuration server has an inbuilt web server, which orchestrates activities of the Mobility service, process servers, and master target servers connected to it. The web server uses a TLS/SSL certificate to authenticate clients. The certificate expires after three years, and can be renewed at any time.
site-recovery Vmware Azure Manage Configuration Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/vmware-azure-manage-configuration-server.md
You can optionally delete the configuration server by using PowerShell.
```powershell
$vault = Get-AzRecoveryServicesVault -Name <name of your vault>
- Set-AzSiteRecoveryVaultSettings -ARSVault $vault
+ Set-AzRecoveryServicesVaultContext -Vault $vault
```

4. Retrieve the configuration server.
- `$fabric = Get-AzSiteRecoveryFabric -FriendlyName <name of your configuration server>`
+ `$fabric = Get-AzRecoveryServicesAsrFabric -FriendlyName <name of your configuration server>`
6. Delete the configuration server.
- `Remove-AzSiteRecoveryFabric -Fabric $fabric [-Force]`
+ `Remove-AzRecoveryServicesAsrFabric -Fabric $fabric [-Force]`
> [!NOTE]
-> You can use the **-Force** option in Remove-AzSiteRecoveryFabric for forced deletion of the configuration server.
+> You can use the **-Force** option in Remove-AzRecoveryServicesAsrFabric for forced deletion of the configuration server.
## Generate configuration server Passphrase
storage Storage Feature Support In Storage Accounts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-feature-support-in-storage-accounts.md
Previously updated : 07/14/2022 Last updated : 07/27/2022
The items that appear in these tables will change over time as support continues
| [Soft delete for containers](soft-delete-container-overview.md) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) |
| [Static websites](storage-blob-static-website.md) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) <sup>2</sup> | ![Yes](../media/icons/yes-icon.png) |
| [Storage Analytics logs (classic)](../common/storage-analytics-logging.md?toc=%2fazure%2fstorage%2fblobs%2ftoc.json) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) <sup>3</sup> | ![No](../media/icons/no-icon.png) | ![Yes](../media/icons/yes-icon.png) |
-| [Storage Analytics metrics (classic)](../common/storage-analytics-metrics.md?toc=%2fazure%2fstorage%2fblobs%2ftoc.json) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) | ![No](../media/icons/no-icon.png) |
+| [Storage Analytics metrics (classic)](../common/storage-analytics-metrics.md?toc=%2fazure%2fstorage%2fblobs%2ftoc.json) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) |
<sup>1</sup> Data Lake Storage Gen2, Network File System (NFS) 3.0 protocol, and SSH File Transfer Protocol (SFTP) support all require a storage account with a hierarchical namespace enabled.
The items that appear in these tables will change over time as support continues
| [Soft delete for containers](soft-delete-container-overview.md) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) |
| [Static websites](storage-blob-static-website.md) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) <sup>2</sup> | ![Yes](../media/icons/yes-icon.png) |
| [Storage Analytics logs (classic)](../common/storage-analytics-logging.md?toc=%2fazure%2fstorage%2fblobs%2ftoc.json) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) <sup>2</sup> <sup>3</sup> | ![No](../media/icons/no-icon.png) | ![Yes](../media/icons/yes-icon.png) |
-| [Storage Analytics metrics (classic)](../common/storage-analytics-metrics.md?toc=%2fazure%2fstorage%2fblobs%2ftoc.json) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) | ![No](../media/icons/no-icon.png) |
+| [Storage Analytics metrics (classic)](../common/storage-analytics-metrics.md?toc=%2fazure%2fstorage%2fblobs%2ftoc.json) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) |
<sup>1</sup> Data Lake Storage Gen2, Network File System (NFS) 3.0 protocol, and SSH File Transfer Protocol (SFTP) support all require a storage account with a hierarchical namespace enabled.
storage Storage Account Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-account-create.md
az storage account create \
To create an account with Azure DNS zone endpoints (preview), first register for the preview as described in [Azure DNS zone endpoints (preview)](storage-account-overview.md#azure-dns-zone-endpoints-preview). Next, install the preview extension for the Azure CLI if it's not already installed: ```azurecli
-az extension add -name storage-preview
+az extension add --name storage-preview
``` Next, create the account, specifying `AzureDnsZone` for the `--dns-endpoint-type` parameter. After the account is created, you can see the service endpoints by getting the `PrimaryEndpoints` property of the storage account.
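The steps above can be sketched end to end as follows. This is a minimal sketch, not a definitive procedure: it assumes the subscription is already registered for the preview, and the account name `mystorageaccount`, resource group `myresourcegroup`, location, and SKU are placeholders you would replace with your own values.

```azurecli
# Install the preview extension for the Azure CLI if it's not already installed.
az extension add --name storage-preview

# Create the account, specifying AzureDnsZone for the --dns-endpoint-type parameter.
# mystorageaccount / myresourcegroup are placeholder names.
az storage account create \
    --name mystorageaccount \
    --resource-group myresourcegroup \
    --location eastus \
    --sku Standard_LRS \
    --dns-endpoint-type AzureDnsZone

# After the account is created, view the service endpoints by
# getting the primaryEndpoints property of the storage account.
az storage account show \
    --name mystorageaccount \
    --resource-group myresourcegroup \
    --query primaryEndpoints
```

Running these commands requires an Azure subscription and an authenticated CLI session (`az login`), so the output will vary with your environment.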
synapse-analytics Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Synapse Analytics
description: Lists Azure Policy Regulatory Compliance controls available for Azure Synapse Analytics. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Last updated 07/26/2022
synapse-analytics How To Create A Workspace With Data Exfiltration Protection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/security/how-to-create-a-workspace-with-data-exfiltration-protection.md
Title: Create a workspace with data exfiltration protection enabled description: This article will explain how to create a workspace with data exfiltration protection in Azure Synapse Analytics. Last updated 12/01/2020
synapse-analytics Synapse Private Link Hubs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/security/synapse-private-link-hubs.md
Title: Connect to a Synapse Studio using private links description: This article will teach you how to connect to your Azure Synapse Studio using private links. Last updated 12/01/2020
synapse-analytics Workspace Data Exfiltration Protection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/security/workspace-data-exfiltration-protection.md
Title: Data exfiltration protection for Azure Synapse Analytics workspaces description: This article will explain data exfiltration protection in Azure Synapse Analytics. Last updated 12/01/2020
# Data exfiltration protection for Azure Synapse Analytics workspaces
synapse-analytics Analyze Your Workload https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/analyze-your-workload.md
Title: Analyze your workload for dedicated SQL pool description: Techniques for analyzing query prioritization for dedicated SQL pool in Azure Synapse Analytics. Last updated 11/03/2021
synapse-analytics Column Level Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/column-level-security.md
Title: Column-level security for dedicated SQL pool description: Column-Level Security allows customers to control access to database table columns based on the user's execution context or group membership, simplifying the design and coding of security in your application, and allowing you to implement restrictions on column access. Last updated 04/19/2020 tags: azure-synapse
synapse-analytics Memory Concurrency Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/memory-concurrency-limits.md
Title: Memory and concurrency limits description: View the memory and concurrency limits allocated to the various performance levels and resource classes for dedicated SQL pool in Azure Synapse Analytics. Last updated 04/04/2021
synapse-analytics Quickstart Configure Workload Isolation Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/quickstart-configure-workload-isolation-portal.md
Title: 'Quickstart: Configure workload isolation - Portal' description: Use Azure portal to configure workload isolation for dedicated SQL pool. Last updated 05/04/2020
synapse-analytics Quickstart Configure Workload Isolation Tsql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/quickstart-configure-workload-isolation-tsql.md
Title: 'Quickstart: Configure workload isolation - T-SQL' description: Use T-SQL to configure workload isolation. Last updated 04/27/2020
synapse-analytics Quickstart Create A Workload Classifier Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/quickstart-create-a-workload-classifier-portal.md
Title: 'Quickstart: Create a workload classifier - Portal' description: Use Azure portal to create a workload classifier with high importance. Last updated 05/04/2020
synapse-analytics Quickstart Create A Workload Classifier Tsql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/quickstart-create-a-workload-classifier-tsql.md
Title: 'Quickstart: Create a workload classifier - T-SQL' description: Use T-SQL to create a workload classifier with high importance. Last updated 02/04/2020
synapse-analytics Resource Classes For Workload Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/resource-classes-for-workload-management.md
Title: Resource classes for workload management description: Guidance for using resource classes to manage concurrency and compute resources for queries in Azure Synapse Analytics. Last updated 02/04/2020
synapse-analytics Sql Data Warehouse Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-authentication.md
Title: Authentication for dedicated SQL pool (formerly SQL DW) description: Learn how to authenticate to dedicated SQL pool (formerly SQL DW) in Azure Synapse Analytics by using Azure Active Directory (Azure AD) or SQL Server authentication. Last updated 04/02/2019 tag: azure-synapse
synapse-analytics Sql Data Warehouse Develop User Defined Schemas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-develop-user-defined-schemas.md
Title: Using user-defined schemas description: Tips for using T-SQL user-defined schemas to develop solutions for dedicated SQL pools in Azure Synapse Analytics. Last updated 04/17/2018
synapse-analytics Sql Data Warehouse Encryption Tde Tsql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-encryption-tde-tsql.md
Title: Transparent data encryption (T-SQL) description: Transparent data encryption (TDE) in Azure Synapse Analytics (T-SQL). Last updated 04/30/2019
synapse-analytics Sql Data Warehouse Encryption Tde https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-encryption-tde.md
Title: Transparent Data Encryption (Portal) for dedicated SQL pool (formerly SQL DW) description: Transparent Data Encryption (TDE) for dedicated SQL pool (formerly SQL DW) in Azure Synapse Analytics. Last updated 06/23/2021
synapse-analytics Sql Data Warehouse How To Configure Workload Importance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-how-to-configure-workload-importance.md
Title: Configure workload importance for dedicated SQL pool description: Learn how to set request level importance in Azure Synapse Analytics. Last updated 05/15/2020
synapse-analytics Sql Data Warehouse How To Convert Resource Classes Workload Groups https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-how-to-convert-resource-classes-workload-groups.md
Title: Convert resource class to a workload group description: Learn how to create a workload group that is similar to a resource class in a dedicated SQL pool. Last updated 08/13/2020
synapse-analytics Sql Data Warehouse How To Manage And Monitor Workload Importance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-how-to-manage-and-monitor-workload-importance.md
Title: Manage and monitor workload importance in dedicated SQL pool description: Learn how to manage and monitor request level importance in dedicated SQL pool for Azure Synapse Analytics. Last updated 02/04/2020
synapse-analytics Sql Data Warehouse Manage Compute Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-manage-compute-overview.md
Title: Manage compute resource for dedicated SQL pool (formerly SQL DW) description: Learn about performance scale out capabilities for dedicated SQL pool (formerly SQL DW) in Azure Synapse Analytics. Scale out by adjusting DWUs, or lower costs by pausing the dedicated SQL pool (formerly SQL DW). Last updated 11/12/2019
synapse-analytics Sql Data Warehouse Manage Monitor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-manage-monitor.md
Title: Monitor your dedicated SQL pool workload using DMVs description: Learn how to monitor your Azure Synapse Analytics dedicated SQL pool workload and query execution using DMVs. Last updated 11/15/2021
synapse-analytics Sql Data Warehouse Overview Manage Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-overview-manage-security.md
Title: Secure a dedicated SQL pool (formerly SQL DW) description: Tips for securing a dedicated SQL pool (formerly SQL DW) and developing solutions in Azure Synapse Analytics. Last updated 04/17/2018 tags: azure-synapse
synapse-analytics Sql Data Warehouse Query Ssms https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-query-ssms.md
Last updated 04/17/2018
synapse-analytics Sql Data Warehouse Workload Classification https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-workload-classification.md
Title: Workload classification for dedicated SQL pool description: Guidance for using classification to manage query concurrency, importance, and compute resources for dedicated SQL pool in Azure Synapse Analytics. Last updated 01/24/2022
synapse-analytics Sql Data Warehouse Workload Importance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-workload-importance.md
Title: Workload importance description: Guidance for setting importance for dedicated SQL pool queries in Azure Synapse Analytics. Last updated 02/04/2020
synapse-analytics Sql Data Warehouse Workload Isolation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-workload-isolation.md
Title: Workload isolation description: Guidance for setting workload isolation with workload groups in Azure Synapse Analytics. Last updated 11/16/2021
synapse-analytics Sql Data Warehouse Workload Management Portal Monitor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-workload-management-portal-monitor.md
Title: Workload management portal monitoring description: Guidance for workload management portal monitoring in Azure Synapse Analytics. Last updated 03/01/2021
synapse-analytics