Updates from: 08/05/2022 01:15:14
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Force Password Reset https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/force-password-reset.md
Previously updated : 01/24/2022 Last updated : 08/04/2022 zone_pivot_groups: b2c-policy-type
Get the example of the force password reset policy on [GitHub](https://github.co
## Force password reset on next login
-To force reset the password on next login, update the account password profile using MS Graph [Update user](/graph/api/user-update) operation. The following example updates the password profile [forceChangePasswordNextSignIn](user-profile-attributes.md#password-profile-property) attribute to `true`, which forces the user to reset the password on next login.
+To force a password reset on next login, update the account password profile by using the MS Graph [Update user](/graph/api/user-update) operation. To do this, assign your [Microsoft Graph application](microsoft-graph-get-started.md) the [User administrator](../active-directory/roles/permissions-reference.md#user-administrator) role by following the steps in [Grant user administrator role](microsoft-graph-get-started.md?tabs=app-reg-ga#optional-grant-user-administrator-role).
+
+The following example updates the password profile [forceChangePasswordNextSignIn](user-profile-attributes.md#password-profile-property) attribute to `true`, which forces the user to reset the password on next login.
```http
PATCH https://graph.microsoft.com/v1.0/users/<user-object-ID>
Content-type: application/json

{
-"passwordProfile": {
- "forceChangePasswordNextSignIn": true
+ "passwordProfile": {
+ "forceChangePasswordNextSignIn": true
+ }
}
```
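If you're scripting this change, a minimal equivalent using the Azure CLI's generic `az rest` command might look like the following sketch (it assumes the signed-in identity holds the User administrator role, and `<user-object-ID>` is the target user's object ID):

```azurecli
# Force the user to reset their password at next sign-in via Microsoft Graph.
az rest --method PATCH \
  --uri 'https://graph.microsoft.com/v1.0/users/<user-object-ID>' \
  --headers 'Content-Type=application/json' \
  --body '{"passwordProfile": {"forceChangePasswordNextSignIn": true}}'
```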
active-directory-domain-services Manage Group Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/manage-group-policy.md
This article shows you how to install the Group Policy Management tools, then ed
If you are interested in server management strategy, including machines in Azure and [hybrid connected](../azure-arc/servers/overview.md), consider reading about the
-[guest configuration](../governance/policy/concepts/guest-configuration.md)
+[guest configuration](../governance/machine-configuration/overview.md)
feature of [Azure Policy](../governance/policy/overview.md).
active-directory Howto Authentication Passwordless Security Key Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-authentication-passwordless-security-key-windows.md
Previously updated : 05/04/2022 Last updated : 07/06/2022
This document focuses on enabling FIDO2 security key based passwordless authenti
| [Azure AD joined devices](../devices/concept-azure-ad-join.md) require Windows 10 version 1909 or higher | X | |
| [Hybrid Azure AD joined devices](../devices/concept-azure-ad-join-hybrid.md) require Windows 10 version 2004 or higher | | X |
| Fully patched Windows Server 2016/2019 Domain Controllers. | | X |
-| [Azure AD Connect](../hybrid/how-to-connect-install-roadmap.md#install-azure-ad-connect) version 1.4.32.0 or later | | X |
+| [Azure AD Hybrid Authentication Management module](https://www.powershellgallery.com/packages/AzureADHybridAuthenticationManagement/2.1.1.0) | | X |
| [Microsoft Endpoint Manager](/intune/fundamentals/what-is-intune) (Optional) | X | X |
| Provisioning package (Optional) | X | X |
| Group Policy (Optional) | | X |
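The hybrid authentication module linked above installs from the PowerShell Gallery. A minimal sketch (the `-AllowClobber` switch simply permits overwriting commands from an older install):

```azurepowershell
# Install the Azure AD Hybrid Authentication Management module from the PowerShell Gallery.
Install-Module -Name AzureADHybridAuthenticationManagement -AllowClobber
```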
active-directory Workload Identity Federation Create Trust Gcp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/workload-identity-federation-create-trust-gcp.md
Previously updated : 01/06/2022 Last updated : 07/18/2022
The most important fields for creating the federated identity credential are:
The following command configures a federated identity credential:
-```http
-az rest --method POST --uri 'https://graph.microsoft.com/beta/applications/41be38fd-caac-4354-aa1e-1fdb20e43bfa/federatedIdentityCredentials' --body '{"name":"GcpFederation","issuer":"https://accounts.google.com","subject":"112633961854638529490","description":"Testing","audiences":["api://AzureADTokenExchange"]}'
+# [Azure CLI](#tab/azure-cli)
+
+```azurecli-interactive
+az ad app federated-credential create --id 41be38fd-caac-4354-aa1e-1fdb20e43bfa --parameters credential.json
+("credential.json" contains the following content)
+{
+ "name": "GcpFederation",
+ "issuer": "https://accounts.google.com",
+ "subject": "112633961854638529490",
+ "description": "Test GCP federation",
+ "audiences": [
+ "api://AzureADTokenExchange"
+ ]
+}
+```
+
+# [Azure PowerShell](#tab/azure-powershell)
+
+```azurepowershell-interactive
+New-AzADAppFederatedCredential -ApplicationObjectId $appObjectId -Audience api://AzureADTokenExchange -Issuer 'https://accounts.google.com' -Name 'GcpFederation' -Subject '112633961854638529490'
+```

For more information and examples, see [Create a federated identity credential](workload-identity-federation-create-trust.md).
async function getGoogleIDToken() {
```

# [C#](#tab/csharp)
-Here's an example in TypeScript of how to request an ID token from the Google metadata server:
+Here's an example in C# of how to request an ID token from the Google metadata server:
```csharp
private string getGoogleIdToken() {
active-directory Workload Identity Federation Create Trust Github https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/workload-identity-federation-create-trust-github.md
- Title: Create a trust relationship between an app and GitHub
-description: Set up a trust relationship between an app in Azure AD and a GitHub repo. This allows a GitHub Actions workflow to access Azure AD protected resources without using secrets or certificates.
- Previously updated : 01/28/2022
-#Customer intent: As an application developer, I want to configure a federated credential on an app registration so I can create a trust relationship with a GitHub repo and use workload identity federation to access Azure AD protected resources without managing secrets.
--
-# Configure an app to trust a GitHub repo (preview)
-
-This article describes how to create a trust relationship between an application in Azure Active Directory (Azure AD) and a GitHub repo. You can then configure a GitHub Actions workflow to exchange a token from GitHub for an access token from Microsoft identity platform and access Azure AD protected resources without needing to manage secrets. To learn more about the token exchange workflow, read about [workload identity federation](workload-identity-federation.md). You establish the trust relationship by configuring a federated identity credential on your app registration in the Azure portal or by using Microsoft Graph.
-
-Anyone with permissions to create an app registration and add a secret or certificate can add a federated identity credential. If the **Users can register applications** switch in the [User Settings](https://portal.azure.com/#blade/Microsoft_AAD_IAM/ActiveDirectoryMenuBlade/UserSettings) blade is set to **No**, however, you won't be able to create an app registration or configure the federated identity credential. Find an admin to configure the federated identity credential on your behalf. Anyone in the Application Administrator or Application Owner roles can do this.
-
-After you configure your app to trust a GitHub repo, [configure your GitHub Actions workflow](/azure/developer/github/connect-from-azure) to get an access token from Microsoft identity provider and access Azure AD protected resources.
-
-## Prerequisites
-
-[Create an app registration](quickstart-register-app.md) in Azure AD. [Grant your app access to the Azure resources](howto-create-service-principal-portal.md) targeted by your GitHub workflow.
-
-Find the object ID of the app (not the application (client) ID), which you need in the following steps. You can find the object ID of the app in the Azure portal. Go to the list of [registered applications](https://portal.azure.com/#blade/Microsoft_AAD_IAM/ActiveDirectoryMenuBlade/RegisteredApps) in the Azure portal and select your app registration. In **Overview**->**Essentials**, find the **Object ID**.
-
-Get the organization, repository, and environment information for your GitHub repo, which you need in the following steps.
-
-## Configure a federated identity credential
-
-# [Azure portal](#tab/azure-portal)
-
-Sign into the [Azure portal](https://portal.azure.com/). Go to **App registrations** and open the app you want to configure.
-
-Go to **Certificates and secrets**. In the **Federated credentials** tab, select **Add credential**. The **Add a credential** blade opens.
-
-In the **Federated credential scenario** drop-down box select **GitHub actions deploying Azure resources**.
-
-Specify the **Organization** and **Repository** for your GitHub Actions workflow.
-
-For **Entity type**, select **Environment**, **Branch**, **Pull request**, or **Tag** and specify the value. The values must exactly match the configuration in the [GitHub workflow](https://docs.github.com/actions/using-workflows/workflow-syntax-for-github-actions#on). For more info, read the [examples](#entity-type-examples).
-
-Add a **Name** for the federated credential.
-
-The **Issuer**, **Audiences**, and **Subject identifier** fields autopopulate based on the values you entered.
-
-Click **Add** to configure the federated credential.
--
-> [!NOTE]
-> If you accidentally configure someone else's GitHub repo in the *subject* setting (for example, a typo that matches someone else's repo), the federated identity credential is still created successfully. In the GitHub configuration, however, you would get an error because you aren't able to access another person's repo.
-
-> [!IMPORTANT]
-> The **Organization**, **Repository**, and **Entity type** values must exactly match the GitHub workflow configuration. Otherwise, Microsoft identity platform will look at the incoming external token and reject the exchange for an access token. You won't get an error; the exchange simply fails.
-
-### Entity type examples
-
-#### Branch example
-
-For a workflow triggered by a push or pull request event on the main branch:
-
-```yml
-on:
- push:
- branches: [ main ]
- pull_request:
- branches: [ main ]
-```
-
-Specify an **Entity type** of **Branch** and a **GitHub branch name** of "main".
-
-#### Environment example
-
-For Jobs tied to an environment named "production":
-
-```yml
-on:
- push:
- branches:
- - main
-
-jobs:
- deployment:
- runs-on: ubuntu-latest
- environment: production
- steps:
- - name: deploy
- # ...deployment-specific steps
-```
-
-Specify an **Entity type** of **Environment** and a **GitHub environment name** of "production".
-
-#### Tag example
-
-For example, for a workflow triggered by a push to the tag named "v2":
-
-```yml
-on:
- push:
- # Sequence of patterns matched against refs/heads
- branches:
- - main
- - 'mona/octocat'
- - 'releases/**'
- # Sequence of patterns matched against refs/tags
- tags:
- - v2
- - v1.*
-```
-
-Specify an **Entity type** of **Tag** and a **GitHub tag name** of "v2".
-
-#### Pull request example
-
-For a workflow triggered by a pull request event, specify an **Entity type** of **Pull request**.
-
-# [Microsoft Graph](#tab/microsoft-graph)
-
-Launch [Azure Cloud Shell](https://portal.azure.com/#cloudshell/) and sign in to your tenant.
-
-### Create a federated identity credential
-
-Run the following command to [create a new federated identity credential](/graph/api/application-post-federatedidentitycredentials?view=graph-rest-beta&preserve-view=true) on your app (specified by the object ID of the app). The *issuer* identifies GitHub as the external token issuer. *subject* identifies the GitHub organization, repo, and environment for your GitHub Actions workflow. When the GitHub Actions workflow requests Microsoft identity platform to exchange a GitHub token for an access token, the values in the federated identity credential are checked against the provided GitHub token.
-
-```azurecli
-az rest --method POST --uri 'https://graph.microsoft.com/beta/applications/f6475511-fd81-4965-a00e-41e7792b7b9c/federatedIdentityCredentials' --body '{"name":"Testing","issuer":"https://token.actions.githubusercontent.com/","subject":"repo:octo-org/octo-repo:environment:Production","description":"Testing","audiences":["api://AzureADTokenExchange"]}'
-```
-
-And you get the response:
-
-```azurecli
-{
- "@odata.context": "https://graph.microsoft.com/beta/$metadata#applications('f6475511-fd81-4965-a00e-41e7792b7b9c')/federatedIdentityCredentials/$entity",
- "audiences": [
- "api://AzureADTokenExchange"
- ],
- "description": "Testing",
- "id": "1aa3e6a7-464c-4cd2-88d3-90db98132755",
- "issuer": "https://token.actions.githubusercontent.com/",
- "name": "Testing",
- "subject": "repo:octo-org/octo-repo:environment:Production"
-}
-```
-
-*name*: The name of your Azure application.
-
-*issuer*: The path to the GitHub OIDC provider: `https://token.actions.githubusercontent.com/`. This issuer will become trusted by your Azure application.
-
-*subject*: Before Azure will grant an access token, the request must match the conditions defined here.
-
-- For Jobs tied to an environment: `repo:< Organization/Repository >:environment:< Name >`
-- For Jobs not tied to an environment, include the ref path for branch/tag based on the ref path used for triggering the workflow: `repo:< Organization/Repository >:ref:< ref path>`. For example, `repo:n-username/ node_express:ref:refs/heads/my-branch` or `repo:n-username/ node_express:ref:refs/tags/my-tag`.
-- For workflows triggered by a pull request event: `repo:< Organization/Repository >:pull-request`.
-
-*audiences*: `api://AzureADTokenExchange` is the required value.
-
-> [!NOTE]
-> If you accidentally configure someone else's GitHub repo in the *subject* setting (for example, a typo that matches someone else's repo), the federated identity credential is still created successfully. In the GitHub configuration, however, you would get an error because you aren't able to access another person's repo.
-
-> [!IMPORTANT]
-> The *subject* setting values must exactly match the GitHub workflow configuration. Otherwise, Microsoft identity platform will look at the incoming external token and reject the exchange for an access token. You won't get an error; the exchange simply fails.
-
-### List federated identity credentials on an app
-
-Run the following command to [list the federated identity credential(s)](/graph/api/application-list-federatedidentitycredentials?view=graph-rest-beta&preserve-view=true) for an app (specified by the object ID of the app):
-
-```azurecli
-az rest -m GET -u 'https://graph.microsoft.com/beta/applications/f6475511-fd81-4965-a00e-41e7792b7b9c/federatedIdentityCredentials'
-```
-
-And you get a response similar to:
-
-```azurecli
-{
- "@odata.context": "https://graph.microsoft.com/beta/$metadata#applications('f6475511-fd81-4965-a00e-41e7792b7b9c')/federatedIdentityCredentials",
- "value": [
- {
- "audiences": [
- "api://AzureADTokenExchange"
- ],
- "description": "Testing",
- "id": "1aa3e6a7-464c-4cd2-88d3-90db98132755",
- "issuer": "https://token.actions.githubusercontent.com/",
- "name": "Testing",
- "subject": "repo:octo-org/octo-repo:environment:Production"
- }
- ]
-}
-```
-
-### Delete a federated identity credential
-
-Run the following command to [delete a federated identity credential](/graph/api/application-list-federatedidentitycredentials?view=graph-rest-beta&preserve-view=true) from an app (specified by the object ID of the app):
-
-```azurecli
-az rest -m DELETE -u 'https://graph.microsoft.com/beta/applications/f6475511-fd81-4965-a00e-41e7792b7b9c/federatedIdentityCredentials/1aa3e6a7-464c-4cd2-88d3-90db98132755'
-```
---
-## Get the application (client) ID and tenant ID from the Azure portal
-
-Before configuring your GitHub Actions workflow, get the *tenant-id* and *client-id* values of your app registration. You can find these values in the Azure portal. Go to the list of [registered applications](https://portal.azure.com/#blade/Microsoft_AAD_IAM/ActiveDirectoryMenuBlade/RegisteredApps) and select your app registration. In **Overview**->**Essentials**, find the **Application (client) ID** and **Directory (tenant) ID**. Set these values in your GitHub environment to use in the Azure login action for your workflow.
-
-## Next steps
-
-For an end-to-end example, read [Deploy to App Service using GitHub Actions](../../app-service/deploy-github-actions.md?tabs=openid).
-
-Read the [GitHub Actions documentation](https://docs.github.com/actions/deployment/security-hardening-your-deployments/configuring-openid-connect-in-azure) to learn more about configuring your GitHub Actions workflow to get an access token from Microsoft identity provider and access Azure resources.
active-directory Workload Identity Federation Create Trust https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/workload-identity-federation-create-trust.md
Previously updated : 03/30/2022 Last updated : 07/27/2022
+zone_pivot_groups: identity-wif-apps-methods
#Customer intent: As an application developer, I want to configure a federated credential on an app registration so I can create a trust relationship with an external identity provider and use workload identity federation to access Azure AD protected resources without managing secrets.
-# Configure an app to trust an external identity provider (preview)
+# Configure an app to trust an external identity provider
-This article describes how to create a trust relationship between an application in Azure Active Directory (Azure AD) and an external identity provider (IdP). You can then configure an external software workload to exchange a token from the external IdP for an access token from Microsoft identity platform. The external workload can access Azure AD protected resources without needing to manage secrets (in supported scenarios). To learn more about the token exchange workflow, read about [workload identity federation](workload-identity-federation.md). You establish the trust relationship by configuring a federated identity credential on your app registration by using Microsoft Graph or the Azure portal.
+This article describes how to manage a federated identity credential on an application in Azure Active Directory (Azure AD). The federated identity credential creates a trust relationship between an application and an external identity provider (IdP).
-Anyone with permissions to create an app registration and add a secret or certificate can add a federated identity credential. If the **Users can register applications** switch in the [User Settings](https://portal.azure.com/#blade/Microsoft_AAD_IAM/ActiveDirectoryMenuBlade/UserSettings) blade is set to **No**, however, you won't be able to create an app registration or configure the federated identity credential. Find an admin to configure the federated identity credential on your behalf. Anyone in the Application Administrator or Application Owner roles can do this.
+You can then configure an external software workload to exchange a token from the external IdP for an access token from Microsoft identity platform. The external workload can access Azure AD protected resources without needing to manage secrets (in supported scenarios). To learn more about the token exchange workflow, read about [workload identity federation](workload-identity-federation.md).
-After you configure your app to trust an external IdP, configure your software workload to get an access token from Microsoft identity provider and access Azure AD protected resources.
+In this article, you learn how to create, list, and delete federated identity credentials on an application in Azure AD.
+
+## Prerequisites
+
+[Create an app registration](quickstart-register-app.md) in Azure AD. Grant your app access to the Azure resources targeted by your external software workload.
+
+Find the object ID of the app (not the application (client) ID), which you need in the following steps. You can find the object ID of the app in the Azure portal. Go to the list of [registered applications](https://portal.azure.com/#blade/Microsoft_AAD_IAM/ActiveDirectoryMenuBlade/RegisteredApps) in the Azure portal and select your app registration. In **Overview**->**Essentials**, find the **Object ID**.
-Get the information for your external IdP and software workload, which you need in the following steps.
-
-The Microsoft Graph beta endpoint (`https://graph.microsoft.com/beta`) exposes REST APIs to create, update, delete [federatedIdentityCredentials](/graph/api/resources/federatedidentitycredential?view=graph-rest-beta&preserve-view=true) on applications. Launch [Azure Cloud Shell](https://portal.azure.com/#cloudshell/) and sign in to your tenant in order to run Microsoft Graph commands from AZ CLI.
+Get the *subject* and *issuer* information for your external IdP and software workload, which you need in the following steps.
## Configure a federated identity credential on an app
-When you configure a federated identity credential on an app, there are several important pieces of information to provide.
+### GitHub Actions
+Find your app registration in the [App Registrations](https://aka.ms/appregistrations) experience of the Azure portal. Select **Certificates & secrets** in the left nav pane, select the **Federated credentials** tab, and select **Add credential**.
+
+In the **Federated credential scenario** drop-down box, select **GitHub actions deploying Azure resources**.
-*issuer* and *subject* are the key pieces of information needed to set up the trust relationship. *issuer* is the URL of the external identity provider and must match the `issuer` claim of the external token being exchanged. *subject* is the identifier of the external software workload and must match the `sub` (`subject`) claim of the external token being exchanged. *subject* has no fixed format, as each IdP uses their own - sometimes a GUID, sometimes a colon delimited identifier, sometimes arbitrary strings. The combination of `issuer` and `subject` must be unique on the app. When the external software workload requests Microsoft identity platform to exchange the external token for an access token, the *issuer* and *subject* values of the federated identity credential are checked against the `issuer` and `subject` claims provided in the external token. If that validation check passes, Microsoft identity platform issues an access token to the external software workload.
+Specify the **Organization** and **Repository** for your GitHub Actions workflow.
-> [!IMPORTANT]
-> If you accidentally add the incorrect external workload information in the *subject* setting the federated identity credential is created successfully without error. The error does not become apparent until the token exchange fails.
+For **Entity type**, select **Environment**, **Branch**, **Pull request**, or **Tag** and specify the value. The values must exactly match the configuration in the [GitHub workflow](https://docs.github.com/actions/using-workflows/workflow-syntax-for-github-actions#on). For more info, read the [examples](#entity-type-examples).
-*audiences* lists the audiences that can appear in the external token. This field is mandatory, and defaults to "api://AzureADTokenExchange". It says what Microsoft identity platform must accept in the `aud` claim in the incoming token. This value represents Azure AD in your external identity provider and has no fixed value across identity providers - you may need to create a new application registration in your IdP to serve as the audience of this token.
+Add a **Name** for the federated credential.
-*name* is the unique identifier for the federated identity credential, which has a character limit of 120 characters and must be URL friendly. It is immutable once created.
+The **Issuer**, **Audiences**, and **Subject identifier** fields autopopulate based on the values you entered.
-*description* is the un-validated, user-provided description of the federated identity credential.
+Click **Add** to configure the federated credential.
-### GitHub Actions example
+
+#### Entity type examples
+
+##### Branch example
-# [Azure CLI](#tab/azure-cli)
+For a workflow triggered by a push or pull request event on the main branch:
-Run the [create a new federated identity credential](/graph/api/application-post-federatedidentitycredentials?view=graph-rest-beta&preserve-view=true) command on your app (specified by the object ID of the app). Specify the *name*, *issuer*, *subject*, and other parameters.
+```yml
+on:
+ push:
+ branches: [ main ]
+ pull_request:
+ branches: [ main ]
+```
+
+Specify an **Entity type** of **Branch** and a **GitHub branch name** of "main".
+
+##### Environment example
+
+For Jobs tied to an environment named "production":
+
+```yml
+on:
+ push:
+ branches:
+ - main
+
+jobs:
+ deployment:
+ runs-on: ubuntu-latest
+ environment: production
+ steps:
+ - name: deploy
+ # ...deployment-specific steps
+```
-For examples, see [Configure an app to trust a GitHub repo](workload-identity-federation-create-trust-github.md?tabs=microsoft-graph).
+Specify an **Entity type** of **Environment** and a **GitHub environment name** of "production".
-# [Portal](#tab/azure-portal)
+##### Tag example
+
+For example, for a workflow triggered by a push to the tag named "v2":
+
+```yml
+on:
+ push:
+ # Sequence of patterns matched against refs/heads
+ branches:
+ - main
+ - 'mona/octocat'
+ - 'releases/**'
+ # Sequence of patterns matched against refs/tags
+ tags:
+ - v2
+ - v1.*
+```
+
+Specify an **Entity type** of **Tag** and a **GitHub tag name** of "v2".
+
+##### Pull request example
+
+For a workflow triggered by a pull request event, specify an **Entity type** of **Pull request**.
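For instance, a minimal pull request trigger in the workflow file (mirroring the syntax of the earlier examples) might look like:

```yml
on:
  pull_request:
    branches: [ main ]
```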
+
+### Kubernetes
Find your app registration in the [App Registrations](https://aka.ms/appregistrations) experience of the Azure portal. Select **Certificates & secrets** in the left nav pane, select the **Federated credentials** tab, and select **Add credential**.
-Select the **GitHub Actions deploying Azure resources** scenario from the dropdown menu. Fill in the **Organization**, **Repository**, **Entity type**, and other fields.
+Select the **Kubernetes accessing Azure resources** scenario from the dropdown menu.
-For examples, see [Configure an app to trust a GitHub repo](workload-identity-federation-create-trust-github.md?tabs=azure-portal).
+Fill in the **Cluster issuer URL**, **Namespace**, **Service account name**, and **Name** fields (a sketch for finding the cluster issuer URL follows this list):
-
+- **Cluster issuer URL** is the [OIDC issuer URL](../../aks/cluster-configuration.md#oidc-issuer-preview) for the managed cluster or the [OIDC Issuer URL](https://azure.github.io/azure-workload-identity/docs/installation/self-managed-clusters/oidc-issuer.html) for a self-managed cluster.
+- **Service account name** is the name of the Kubernetes service account, which provides an identity for processes that run in a Pod.
+- **Namespace** is the service account namespace.
+- **Name** is the name of the federated credential, which can't be changed later.
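As a sketch, you can retrieve the issuer URL for an AKS cluster with the Azure CLI; the cluster and resource group names below are placeholders, and the cluster must have the OIDC issuer preview feature enabled:

```azurecli
# Return the OIDC issuer URL for an AKS cluster.
az aks show --name myAKSCluster --resource-group myResourceGroup --query "oidcIssuerProfile.issuerUrl" --output tsv
```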
+
+### Other identity providers
+
+Find your app registration in the [App Registrations](https://aka.ms/appregistrations) experience of the Azure portal. Select **Certificates & secrets** in the left nav pane, select the **Federated credentials** tab, and select **Add credential**.
+
+Select the **Other issuer** scenario from the dropdown menu.
+
+Specify the following fields (using a software workload running in Google Cloud as an example):
+
+- **Name** is the name of the federated credential, which can't be changed later.
+- **Subject identifier**: must match the `sub` claim in the token issued by the external identity provider. In this example using Google Cloud, *subject* is the Unique ID of the service account you plan to use.
+- **Issuer**: must match the `iss` claim in the token issued by the external identity provider. A URL that complies with the OIDC Discovery spec. Azure AD uses this issuer URL to fetch the keys that are necessary to validate the token. For Google Cloud, the *issuer* is "https://accounts.google.com".
+
+## List federated identity credentials on an app
+
+Find your app registration in the [App Registrations](https://aka.ms/appregistrations) experience of the Azure portal. Select **Certificates & secrets** in the left nav pane and select the **Federated credentials** tab. The federated credentials that are configured on your app are listed.
+
+## Delete a federated identity credential from an app
+
+Find your app registration in the [App Registrations](https://aka.ms/appregistrations) experience of the Azure portal. Select **Certificates & secrets** in the left nav pane and select the **Federated credentials** tab. The federated credentials that are configured on your app are listed.
+
+To delete a federated identity credential, select the **Delete** icon for the credential.
+## Prerequisites
+
+- If you don't already have an Azure account, [sign up for a free account](https://azure.microsoft.com/free/) before you continue.
+- [Create an app registration](quickstart-register-app.md) in Azure AD. Grant your app access to the Azure resources targeted by your external software workload.
+- Find the object ID, app (client) ID, or identifier URI of the app, which you need in the following steps. You can find these values in the Azure portal. Go to the list of [registered applications](https://portal.azure.com/#blade/Microsoft_AAD_IAM/ActiveDirectoryMenuBlade/RegisteredApps) in the Azure portal and select your app registration. In **Overview**->**Essentials**, get the **Object ID**, **Application (client) ID**, or **Application ID URI** value.
+- Get the *subject* and *issuer* information for your external IdP and software workload, which you need in the following steps.
+
+## Configure a federated identity credential on an app
+
+Run the [az ad app federated-credential create](/cli/azure/ad/app/federated-credential) command to create a new federated identity credential on your app.
+
+The *id* parameter specifies the identifier URI, application ID, or object ID of the application. *parameters* specifies the parameters, in JSON format, for creating the federated identity credential.
+
+### GitHub Actions example
+
+The *name* specifies the name of your federated identity credential.
+
+The *issuer* identifies the path to the GitHub OIDC provider: `https://token.actions.githubusercontent.com/`. This issuer will become trusted by your Azure application.
+
+*subject* identifies the GitHub organization, repo, and environment for your GitHub Actions workflow. When the GitHub Actions workflow requests Microsoft identity platform to exchange a GitHub token for an access token, the values in the federated identity credential are checked against the provided GitHub token. Before Azure will grant an access token, the request must match the conditions defined here.
+- For Jobs tied to an environment: `repo:< Organization/Repository >:environment:< Name >`
+- For Jobs not tied to an environment, include the ref path for branch/tag based on the ref path used for triggering the workflow: `repo:< Organization/Repository >:ref:< ref path>`. For example, `repo:n-username/node_express:ref:refs/heads/my-branch` or `repo:n-username/node_express:ref:refs/tags/my-tag`.
+- For workflows triggered by a pull request event: `repo:< Organization/Repository >:pull-request`.
+
+```azurecli-interactive
+az ad app federated-credential create --id f6475511-fd81-4965-a00e-41e7792b7b9c --parameters credential.json
+("credential.json" contains the following content)
+{
+ "name": "Testing",
+ "issuer": "https://token.actions.githubusercontent.com/",
+ "subject": "repo:octo-org/octo-repo:environment:Production",
+ "description": "Testing",
+ "audiences": [
+ "api://AzureADTokenExchange"
+ ]
+}
+```
### Kubernetes example
-# [Azure CLI](#tab/azure-cli)
+*issuer* is your service account issuer URL (the [OIDC issuer URL](../../aks/cluster-configuration.md#oidc-issuer-preview) for the managed cluster or the [OIDC Issuer URL](https://azure.github.io/azure-workload-identity/docs/installation/self-managed-clusters/oidc-issuer.html) for a self-managed cluster).
+
+*subject* is the subject name in the tokens issued to the service account. Kubernetes uses the following format for subject names: `system:serviceaccount:<SERVICE_ACCOUNT_NAMESPACE>:<SERVICE_ACCOUNT_NAME>`.
+
+*name* is the name of the federated credential, which can't be changed later.
+
+*audiences* lists the audiences that can appear in the external token. This field is mandatory. The recommended value is "api://AzureADTokenExchange".
+
+```azurecli-interactive
+az ad app federated-credential create --id f6475511-fd81-4965-a00e-41e7792b7b9c --parameters credential.json
+("credential.json" contains the following content)
+{
+ "name": "Kubernetes-federated-credential",
+ "issuer": "https://aksoicwesteurope.blob.core.windows.net/9d80a3e1-2a87-46ea-ab16-e629589c541c/",
+ "subject": "system:serviceaccount:erp8asle:pod-identity-sa",
+ "description": "Kubernetes service account federated credential",
+ "audiences": [
+ "api://AzureADTokenExchange"
+ ]
+}
+```
+
+### Other identity providers example
+
+You can configure a federated identity credential on an app and create a trust relationship with other external identity providers. The following example uses a software workload running in Google Cloud:
+
+*name* is the name of the federated credential, which can't be changed later.
+
+*id*: the object ID, application (client) ID, or identifier URI of the app.
+
+*subject*: must match the `sub` claim in the token issued by the external identity provider. In this example using Google Cloud, *subject* is the Unique ID of the service account you plan to use.
+
+*issuer*: must match the `iss` claim in the token issued by the external identity provider. A URL that complies with the OIDC Discovery spec. Azure AD uses this issuer URL to fetch the keys that are necessary to validate the token. For Google Cloud, the *issuer* is "https://accounts.google.com".
+
+*audiences*: lists the audiences that can appear in the external token. This field is mandatory. The recommended value is "api://AzureADTokenExchange".
+
+```azurecli-interactive
+az ad app federated-credential create --id f6475511-fd81-4965-a00e-41e7792b7b9c --parameters credential.json
+("credential.json" contains the following content)
+{
+ "name": "GcpFederation",
+ "issuer": "https://accounts.google.com",
+ "subject": "112633961854638529490",
+ "description": "Test GCP federation",
+ "audiences": [
+ "api://AzureADTokenExchange"
+ ]
+}
+```
+
+## List federated identity credentials on an app
+
+Run the [az ad app federated-credential list](/cli/azure/ad/app/federated-credential) command to list the federated identity credentials on your app.
+
+The *id* parameter specifies the identifier URI, application ID, or object ID of the application.
+
+```azurecli-interactive
+az ad app federated-credential list --id f6475511-fd81-4965-a00e-41e7792b7b9c
+```
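To scan the results more easily, you can project a few properties with the CLI's JMESPath `--query` argument; a sketch using the credential properties shown elsewhere in this article:

```azurecli
# Show only the name, subject, and issuer of each federated identity credential.
az ad app federated-credential list --id f6475511-fd81-4965-a00e-41e7792b7b9c --query "[].{name:name, subject:subject, issuer:issuer}" --output table
```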
+
+## Get a federated identity credential on an app
+
+Run the [az ad app federated-credential show](/cli/azure/ad/app/federated-credential) command to get a federated identity credential on your app.
+
+The *id* parameter specifies the identifier URI, application ID, or object ID of the application.
+
+The *federated-credential-id* specifies the ID or name of the federated identity credential.
+
+```azurecli-interactive
+az ad app federated-credential show --id f6475511-fd81-4965-a00e-41e7792b7b9c --federated-credential-id c79f8feb-a9db-4090-85f9-90d820caa0eb
+```
+
+## Delete a federated identity credential from an app
+
+Run the [az ad app federated-credential delete](/cli/azure/ad/app/federated-credential) command to remove a federated identity credential from your app.
+
+The *id* parameter specifies the identifier URI, application ID, or object ID of the application.
+
+The *federated-credential-id* specifies the ID or name of the federated identity credential.
+
+```azurecli-interactive
+az ad app federated-credential delete --id f6475511-fd81-4965-a00e-41e7792b7b9c --federated-credential-id c79f8feb-a9db-4090-85f9-90d820caa0eb
+```
+## Prerequisites
+- To run the example scripts, you have two options:
+ - Use [Azure Cloud Shell](../../cloud-shell/overview.md), which you can open by using the **Try It** button in the upper-right corner of code blocks.
+ - Run scripts locally with Azure PowerShell, as described in the next section.
+- [Create an app registration](quickstart-register-app.md) in Azure AD. Grant your app access to the Azure resources targeted by your external software workload.
+- Find the object ID of the app (not the application (client) ID), which you need in the following steps. You can find the object ID of the app in the Azure portal. Go to the list of [registered applications](https://portal.azure.com/#blade/Microsoft_AAD_IAM/ActiveDirectoryMenuBlade/RegisteredApps) in the Azure portal and select your app registration. In **Overview**->**Essentials**, find the **Object ID**.
+- Get the *subject* and *issuer* information for your external IdP and software workload, which you need in the following steps.
+
+### Configure Azure PowerShell locally
+
+To use Azure PowerShell locally for this article instead of using Cloud Shell:
+
+1. Install [the latest version of Azure PowerShell](/powershell/azure/install-az-ps) if you haven't already.
+
+1. Sign in to Azure.
+
+ ```azurepowershell
+ Connect-AzAccount
+ ```
+
+1. Install the [latest version of PowerShellGet](/powershell/scripting/gallery/installing-psget#for-systems-with-powershell-50-or-newer-you-can-install-the-latest-powershellget).
-Run the following command to configure a federated identity credential on an app and create a trust relationship with a Kubernetes service account. Specify the following parameters:
+ ```azurepowershell
+ Install-Module -Name PowerShellGet -AllowPrerelease
+ ```
+
+ You might need to `Exit` out of the current PowerShell session after you run this command for the next step.
+
+1. Install the prerelease version of the `Az.Resources` module to perform the federated identity credential operations in this article (an optional verification sketch follows these steps).
+
+ ```azurepowershell
+ Install-Module -Name Az.Resources -AllowPrerelease
+ ```
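To verify that the prerelease module is available before you continue, you can list the installed versions (an optional check):

```azurepowershell
# List installed versions of the Az.Resources module.
Get-Module -Name Az.Resources -ListAvailable | Select-Object -Property Name, Version
```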
+
+## Configure a federated identity credential on an app
+
+Run the [New-AzADAppFederatedCredential](/powershell/module/az.resources/new-azadappfederatedcredential) cmdlet to create a new federated identity credential on an application.
+
+### GitHub Actions example
+
+- *ApplicationObjectId*: the object ID of the app (not the application (client) ID) you previously registered in Azure AD.
+- *Issuer* identifies GitHub as the external token issuer.
+- *Subject* identifies the GitHub organization, repo, and environment for your GitHub Actions workflow. When the GitHub Actions workflow requests Microsoft identity platform to exchange a GitHub token for an access token, the values in the federated identity credential are checked against the provided GitHub token.
+ - For Jobs tied to an environment: `repo:< Organization/Repository >:environment:< Name >`
+  - For Jobs not tied to an environment, include the ref path for branch/tag based on the ref path used for triggering the workflow: `repo:< Organization/Repository >:ref:< ref path>`. For example, `repo:n-username/node_express:ref:refs/heads/my-branch` or `repo:n-username/node_express:ref:refs/tags/my-tag`.
+ - For workflows triggered by a pull request event: `repo:< Organization/Repository >:pull-request`.
+- *Name* is the name of the federated credential, which can't be changed later.
+- *Audience* lists the audiences that can appear in the external token. This field is mandatory. The recommended value is "api://AzureADTokenExchange".
+
+```azurepowershell-interactive
+New-AzADAppFederatedCredential -ApplicationObjectId $appObjectId -Audience api://AzureADTokenExchange -Issuer 'https://token.actions.githubusercontent.com/' -Name 'GitHub-Actions-Test' -Subject 'repo:octo-org/octo-repo:environment:Production'
+```
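The examples in this section assume that `$appObjectId` already holds the object ID of your app registration. As a hypothetical way to populate it from the application (client) ID:

```azurepowershell
# Look up the app registration's object ID from its application (client) ID.
$appObjectId = (Get-AzADApplication -ApplicationId '<app-client-id>').Id
```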
+
+### Kubernetes example
+
+- *ApplicationObjectId*: the object ID of the app (not the application (client) ID) you previously registered in Azure AD.
+- *Issuer* is your service account issuer URL (the [OIDC issuer URL](../../aks/cluster-configuration.md#oidc-issuer-preview) for the managed cluster or the [OIDC Issuer URL](https://azure.github.io/azure-workload-identity/docs/installation/self-managed-clusters/oidc-issuer.html) for a self-managed cluster).
+- *Subject* is the subject name in the tokens issued to the service account. Kubernetes uses the following format for subject names: `system:serviceaccount:<SERVICE_ACCOUNT_NAMESPACE>:<SERVICE_ACCOUNT_NAME>`.
+- *Name* is the name of the federated credential, which can't be changed later.
+- *Audience* lists the audiences that can appear in the `aud` claim of the external token.
+
+```azurepowershell-interactive
+New-AzADAppFederatedCredential -ApplicationObjectId $appObjectId -Audience api://AzureADTokenExchange -Issuer 'https://aksoicwesteurope.blob.core.windows.net/9d80a3e1-2a87-46ea-ab16-e629589c541c/' -Name 'Kubernetes-federated-credential' -Subject 'system:serviceaccount:erp8asle:pod-identity-sa'
+```
+
+### Other identity providers example
+
+Specify the following parameters (using a software workload running in Google Cloud as an example):
+
+- *ObjectID*: the object ID of the app (not the application (client) ID) you previously registered in Azure AD.
+- *Name* is the name of the federated credential, which can't be changed later.
+- *Subject*: must match the `sub` claim in the token issued by the external identity provider. In this example using Google Cloud, *subject* is the Unique ID of the service account you plan to use.
+- *Issuer*: must match the `iss` claim in the token issued by the external identity provider. A URL that complies with the OIDC Discovery spec. Azure AD uses this issuer URL to fetch the keys that are necessary to validate the token. For Google Cloud, the *issuer* is "https://accounts.google.com".
+- *Audiences*: must match the `aud` claim in the external token. For security reasons, you should pick a value that is unique for tokens meant for Azure AD. The recommended value is "api://AzureADTokenExchange".
+
+```azurepowershell-interactive
+New-AzADAppFederatedCredential -ApplicationObjectId $appObjectId -Audience api://AzureADTokenExchange -Issuer 'https://accounts.google.com' -Name 'GcpFederation' -Subject '112633961854638529490'
+```
+
+## List federated identity credentials on an app
+
+Run the [Get-AzADAppFederatedCredential](/powershell/module/az.resources/get-azadappfederatedcredential) cmdlet to list the federated identity credentials for an application.
+
+```azurepowershell-interactive
+Get-AzADApplication -ObjectId $app | Get-AzADAppFederatedCredential
+```
+
+## Get a federated identity credential on an app
+
+Run the [Get-AzADAppFederatedCredential](/powershell/module/az.resources/get-azadappfederatedcredential) cmdlet to get the federated identity credential by ID from an application.
+
+```azurepowershell-interactive
+Get-AzADAppFederatedCredential -ApplicationObjectId $appObjectId -FederatedCredentialId $credentialId
+```
+
+## Delete a federated identity credential from an app
+
+Run the [Remove-AzADAppFederatedCredential](/powershell/module/az.resources/remove-azadappfederatedcredential) cmdlet to delete a federated identity credential from an application.
+
+```azurepowershell-interactive
+Remove-AzADAppFederatedCredential -ApplicationObjectId $appObjectId -FederatedCredentialId $credentialId
+```
+## Prerequisites
+[Create an app registration](quickstart-register-app.md) in Azure AD. Grant your app access to the Azure resources targeted by your external software workload.
+
+Find the object ID of the app (not the application (client) ID), which you need in the following steps. You can find the object ID of the app in the Azure portal. Go to the list of [registered applications](https://portal.azure.com/#blade/Microsoft_AAD_IAM/ActiveDirectoryMenuBlade/RegisteredApps) in the Azure portal and select your app registration. In **Overview**->**Essentials**, find the **Object ID**.
+
+Get the *subject* and *issuer* information for your external IdP and software workload, which you need in the following steps.
+
+The Microsoft Graph endpoint (`https://graph.microsoft.com`) exposes REST APIs to create, update, and delete [federatedIdentityCredentials](/graph/api/resources/federatedidentitycredential) on applications. Launch [Azure Cloud Shell](https://portal.azure.com/#cloudshell/) and sign in to your tenant in order to run Microsoft Graph commands from the Azure CLI.
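If you run the commands locally rather than in Cloud Shell, sign in to the tenant first; for example (the tenant ID is a placeholder):

```azurecli
# Sign in to the target Azure AD tenant.
az login --tenant <tenant-id>
```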
+
+## Configure a federated identity credential on an app
+
+### GitHub Actions
+
+Run the following method to [create a new federated identity credential](/graph/api/application-post-federatedidentitycredentials) on your app (specified by the object ID of the app). The *issuer* identifies GitHub as the external token issuer. *subject* identifies the GitHub organization, repo, and environment for your GitHub Actions workflow. When the GitHub Actions workflow requests Microsoft identity platform to exchange a GitHub token for an access token, the values in the federated identity credential are checked against the provided GitHub token.
+
+```azurecli
+az rest --method POST --uri 'https://graph.microsoft.com/v1.0/applications/f6475511-fd81-4965-a00e-41e7792b7b9c/federatedIdentityCredentials' --body '{"name":"Testing","issuer":"https://token.actions.githubusercontent.com/","subject":"repo:octo-org/octo-repo:environment:Production","description":"Testing","audiences":["api://AzureADTokenExchange"]}'
+```
+
+And you get the response:
+```azurecli
+{
+  "@odata.context": "https://graph.microsoft.com/v1.0/$metadata#applications('f6475511-fd81-4965-a00e-41e7792b7b9c')/federatedIdentityCredentials/$entity",
+ "audiences": [
+ "api://AzureADTokenExchange"
+ ],
+ "description": "Testing",
+ "id": "1aa3e6a7-464c-4cd2-88d3-90db98132755",
+ "issuer": "https://token.actions.githubusercontent.com/",
+ "name": "Testing",
+ "subject": "repo:octo-org/octo-repo:environment:Production"
+}
+```
+
+*name*: The name of the federated identity credential.
+
+*issuer*: The path to the GitHub OIDC provider: `https://token.actions.githubusercontent.com/`. This issuer will become trusted by your Azure application.
+
+*subject*: Before Azure will grant an access token, the request must match the conditions defined here.
+- For Jobs tied to an environment: `repo:< Organization/Repository >:environment:< Name >`
+- For Jobs not tied to an environment, include the ref path for branch/tag based on the ref path used for triggering the workflow: `repo:< Organization/Repository >:ref:< ref path>`. For example, `repo:n-username/node_express:ref:refs/heads/my-branch` or `repo:n-username/node_express:ref:refs/tags/my-tag`.
+- For workflows triggered by a pull request event: `repo:< Organization/Repository >:pull-request`.
+
+*audiences* lists the audiences that can appear in the external token. This field is mandatory. The recommended value is "api://AzureADTokenExchange".
+
+### Kubernetes example
+
+Run the following method to configure a federated identity credential on an app and create a trust relationship with a Kubernetes service account. Specify the following parameters:
- *issuer* is your service account issuer URL (the [OIDC issuer URL](../../aks/cluster-configuration.md#oidc-issuer-preview) for the managed cluster or the [OIDC Issuer URL](https://azure.github.io/azure-workload-identity/docs/installation/self-managed-clusters/oidc-issuer.html) for a self-managed cluster).
- *subject* is the subject name in the tokens issued to the service account. Kubernetes uses the following format for subject names: `system:serviceaccount:<SERVICE_ACCOUNT_NAMESPACE>:<SERVICE_ACCOUNT_NAME>`.
-- *name* is the name of the federated credential, which cannot be changed later.
-- *audiences* lists the audiences that can appear in the 'aud' claim of the external token. This field is mandatory, and defaults to "api://AzureADTokenExchange".
+- *name* is the name of the federated credential, which can't be changed later.
+- *audiences* lists the audiences that can appear in the external token. This field is mandatory. The recommended value is "api://AzureADTokenExchange".
```azurecli
-az rest --method POST --uri 'https://graph.microsoft.com/beta/applications/f6475511-fd81-4965-a00e-41e7792b7b9c/federatedIdentityCredentials' --body '{"name":"Kubernetes-federated-credential","issuer":"https://aksoicwesteurope.blob.core.windows.net/9d80a3e1-2a87-46ea-ab16-e629589c541c/","subject":"system:serviceaccount:erp8asle:pod-identity-sa","description":"Kubernetes service account federated credential","audiences":["api://AzureADTokenExchange"]}'
+az rest --method POST --uri 'https://graph.microsoft.com/v1.0/applications/f6475511-fd81-4965-a00e-41e7792b7b9c/federatedIdentityCredentials' --body '{"name":"Kubernetes-federated-credential","issuer":"https://aksoicwesteurope.blob.core.windows.net/9d80a3e1-2a87-46ea-ab16-e629589c541c/","subject":"system:serviceaccount:erp8asle:pod-identity-sa","description":"Kubernetes service account federated credential","audiences":["api://AzureADTokenExchange"]}'
```

And you get the response:

```azurecli
{
- "@odata.context": "https://graph.microsoft.com/beta/$metadata#applications('f6475511-fd81-4965-a00e-41e7792b7b9c')/federatedIdentityCredentials/$entity",
+  "@odata.context": "https://graph.microsoft.com/v1.0/$metadata#applications('f6475511-fd81-4965-a00e-41e7792b7b9c')/federatedIdentityCredentials/$entity",
"audiences": [ "api://AzureADTokenExchange" ],
And you get the response:
}
```
-# [Portal](#tab/azure-portal)
-
-Find your app registration in the [App Registrations](https://aka.ms/appregistrations) experience of the Azure portal. Select **Certificates & secrets** in the left nav pane, select the **Federated credentials** tab, and select **Add credential**.
-
-Select the **Kubernetes accessing Azure resources** scenario from the dropdown menu.
-
-Fill in the **Cluster issuer URL**, **Namespace**, **Service account name**, and **Name** fields:
-
-- **Cluster issuer URL** is the [OIDC issuer URL](../../aks/cluster-configuration.md#oidc-issuer-preview) for the managed cluster or the [OIDC Issuer URL](https://azure.github.io/azure-workload-identity/docs/installation/self-managed-clusters/oidc-issuer.html) for a self-managed cluster.
-- **Service account name** is the name of the Kubernetes service account, which provides an identity for processes that run in a Pod.
-- **Namespace** is the service account namespace.
-- **Name** is the name of the federated credential, which cannot be changed later.
-

### Other identity providers example
-# [Azure CLI](#tab/azure-cli)
-
-Run the following command to configure a federated identity credential on an app and create a trust relationship with an external identity provider. Specify the following parameters (using a software workload running in Google Cloud as an example):
+Run the following method to configure a federated identity credential on an app and create a trust relationship with an external identity provider. Specify the following parameters (using a software workload running in Google Cloud as an example):
-- *name* is the name of the federated credential, which cannot be changed later.
+- *name* is the name of the federated credential, which can't be changed later.
- *ObjectID*: the object ID of the app (not the application (client) ID) you previously registered in Azure AD.
- *subject*: must match the `sub` claim in the token issued by the external identity provider. In this example using Google Cloud, *subject* is the Unique ID of the service account you plan to use.
-- *issuer*: must match the `iss` claim in the token issued by the external identity provider. A URL that complies with the OIDC Discovery spec. Azure AD uses this issuer URL to fetch the keys that are necessary to validate the token. In the case of Google Cloud, the *issuer* is "https://accounts.google.com".
-- *audiences*: must match the `aud` claim in the external token. For security reasons, you should pick a value that is unique for tokens meant for Azure AD. The Microsoft recommended value is "api://AzureADTokenExchange".
+- *issuer*: must match the `iss` claim in the token issued by the external identity provider. A URL that complies with the OIDC Discovery spec. Azure AD uses this issuer URL to fetch the keys that are necessary to validate the token. For Google Cloud, the *issuer* is "https://accounts.google.com".
+- *audiences* lists the audiences that can appear in the external token. This field is mandatory. The recommended value is "api://AzureADTokenExchange".
```azurecli
-az rest --method POST --uri 'https://graph.microsoft.com/beta/applications/<ObjectID>/federatedIdentityCredentials' --body '{"name":"GcpFederation","issuer":"https://accounts.google.com","subject":"112633961854638529490","description":"Testing","audiences":["api://AzureADTokenExchange"]}'
+az rest --method POST --uri 'https://graph.microsoft.com/v1.0/applications/<ObjectID>/federatedIdentityCredentials' --body '{"name":"GcpFederation","issuer":"https://accounts.google.com","subject":"112633961854638529490","description":"Testing","audiences":["api://AzureADTokenExchange"]}'
```

And you get the response:

```azurecli
{
- "@odata.context": "https://graph.microsoft.com/beta/$metadata#applications('f6475511-fd81-4965-a00e-41e7792b7b9c')/federatedIdentityCredentials/$entity",
+  "@odata.context": "https://graph.microsoft.com/v1.0/$metadata#applications('f6475511-fd81-4965-a00e-41e7792b7b9c')/federatedIdentityCredentials/$entity",
"audiences": [ "api://AzureADTokenExchange" ],
And you get the response:
}
```
-# [Portal](#tab/azure-portal)
-
-Find your app registration in the [App Registrations](https://aka.ms/appregistrations) experience of the Azure portal. Select **Certificates & secrets** in the left nav pane, select the **Federated credentials** tab, and select **Add credential**.
-
-Select the **Other issuer** scenario from the dropdown menu.
-
-Specify the following fields (using a software workload running in Google Cloud as an example):
-
-- **Name** is the name of the federated credential, which cannot be changed later.
-- **Subject identifier**: must match the `sub` claim in the token issued by the external identity provider. In this example using Google Cloud, *subject* is the Unique ID of the service account you plan to use.
-- **Issuer**: must match the `iss` claim in the token issued by the external identity provider. A URL that complies with the OIDC Discovery spec. Azure AD uses this issuer URL to fetch the keys that are necessary to validate the token. In the case of Google Cloud, the *issuer* is "https://accounts.google.com".
-

## List federated identity credentials on an app
-# [Azure CLI](#tab/azure-cli)
-Run the following command to [list the federated identity credential(s)](/graph/api/application-list-federatedidentitycredentials?view=graph-rest-beta&preserve-view=true) for an app (specified by the object ID of the app):
+Run the following method to [list the federated identity credential(s)](/graph/api/application-list-federatedidentitycredentials) for an app (specified by the object ID of the app):
```azurecli
-az rest -m GET -u 'https://graph.microsoft.com/beta/applications/f6475511-fd81-4965-a00e-41e7792b7b9c/federatedIdentityCredentials'
+az rest -m GET -u 'https://graph.microsoft.com/v1.0/applications/f6475511-fd81-4965-a00e-41e7792b7b9c/federatedIdentityCredentials'
```

And you get a response similar to:

```azurecli
{
- "@odata.context": "https://graph.microsoft.com/beta/$metadata#applications('f6475511-fd81-4965-a00e-41e7792b7b9c')/federatedIdentityCredentials",
+  "@odata.context": "https://graph.microsoft.com/v1.0/$metadata#applications('f6475511-fd81-4965-a00e-41e7792b7b9c')/federatedIdentityCredentials",
"value": [ { "audiences": [
And you get a response similar to:
}
```
-# [Portal](#tab/azure-portal)
-
-Find your app registration in the [App Registrations](https://aka.ms/appregistrations) experience of the Azure portal. Select **Certificates & secrets** in the left nav pane and select the **Federated credentials** tab. The federated credentials that are configured on your app are listed.
--
+## Get a federated identity credential on an app
-## Delete a federated identity credential
+Run the following method to [get a federated identity credential](/graph/api/federatedidentitycredential-get) for an app (specified by the object ID of the app and the ID of the federated identity credential):
-# [Azure CLI](#tab/azure-cli)
+```azurecli
+az rest -m GET -u 'https://graph.microsoft.com/v1.0/applications/f6475511-fd81-4965-a00e-41e7792b7b9c/federatedIdentityCredentials/1aa3e6a7-464c-4cd2-88d3-90db98132755'
+```
-Run the following command to [delete a federated identity credential](/graph/api/application-list-federatedidentitycredentials?view=graph-rest-beta&preserve-view=true) from an app (specified by the object ID of the app):
+And you get a response similar to:
```azurecli
-az rest -m DELETE -u 'https://graph.microsoft.com/beta/applications/f6475511-fd81-4965-a00e-41e7792b7b9c/federatedIdentityCredentials/51ecf9c3-35fc-4519-a28a-8c27c6178bca'
-
+{
+ "@odata.context": "https://graph.microsoft.com/$metadata#applications('f6475511-fd81-4965-a00e-41e7792b7b9c')/federatedIdentityCredentials",
+ "value": {
+ "@odata.context": "https://graph.microsoft.com/$metadata#applications('f6475511-fd81-4965-a00e-41e7792b7b9c')/federatedIdentityCredentials/$entity",
+ "@odata.id": "https://graph.microsoft.com/v2/3d1e2be9-a10a-4a0c-8380-7ce190f98ed9/directoryObjects/$/Microsoft.DirectoryServices.Application('f6475511-fd81-4965-a00e-41e7792b7b9c')/federatedIdentityCredentials('f6475511-fd81-4965-a00e-41e7792b7b9c')/f6475511-fd81-4965-a00e-41e7792b7b9c",
+ "audiences": [
+ "api://AzureADTokenExchange"
+ ],
+ "description": "Testing",
+ "id": "1aa3e6a7-464c-4cd2-88d3-90db98132755",
+ "issuer": "https://token.actions.githubusercontent.com/",
+ "name": "Testing",
+ "subject": "repo:octo-org/octo-repo:environment:Production"
+}
```
-# [Portal](#tab/azure-portal)
+## Delete a federated identity credential from an app
-Find your app registration in the [App Registrations](https://aka.ms/appregistrations) experience of the Azure portal. Select **Certificates & secrets** in the left nav pane and select the **Federated credentials** tab. The federated credentials that are configured on your app are listed.
+Run the following method to [delete a federated identity credential](/graph/api/federatedidentitycredential-delete) from an app (specified by the object ID of the app and the ID of the federated identity credential):
-To delete a federated identity credential, select the **Delete** icon for the credential.
+```azurecli
+az rest -m DELETE -u 'https://graph.microsoft.com/v1.0/applications/f6475511-fd81-4965-a00e-41e7792b7b9c/federatedIdentityCredentials/1aa3e6a7-464c-4cd2-88d3-90db98132755'
+```
- ## Next steps - To learn how to use workload identity federation for Kubernetes, see the [Azure AD Workload Identity for Kubernetes](https://azure.github.io/azure-workload-identity/docs/quick-start.html) open source project. - To learn how to use workload identity federation for GitHub Actions, see [Configure a GitHub Actions workflow to get an access token](/azure/developer/github/connect-from-azure).
+- Read the [GitHub Actions documentation](https://docs.github.com/actions/deployment/security-hardening-your-deployments/configuring-openid-connect-in-azure) to learn more about configuring your GitHub Actions workflow to get an access token from Microsoft identity provider and access Azure resources.
- For more information, read about how Azure AD uses the [OAuth 2.0 client credentials grant](v2-oauth2-client-creds-grant-flow.md#third-case-access-token-request-with-a-federated-credential) and a client assertion issued by another IdP to get a token. - For information about the required format of JWTs created by external identity providers, read about the [assertion format](active-directory-certificate-credentials.md#assertion-format).
active-directory Workload Identity Federation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/workload-identity-federation.md
Previously updated : 07/13/2022 Last updated : 07/27/2022 -+ #Customer intent: As a developer, I want to learn about workload identity federation so that I can securely access Azure AD protected resources from external apps and services without needing to manage secrets.
-# Workload identity federation (preview)
+# Workload identity federation
This article provides an overview of workload identity federation for software workloads. Using workload identity federation allows you to access Azure Active Directory (Azure AD) protected resources without needing to manage secrets (for supported scenarios).
-You can use workload identity federation in scenarios such as GitHub Actions and workloads running on Kubernetes.
+You can use workload identity federation in scenarios such as GitHub Actions, workloads running on Kubernetes, or workloads running in compute platforms outside of Azure.
## Why use workload identity federation?
active-directory Groups Settings Cmdlets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/groups-settings-cmdlets.md
This step removes settings at directory level, which apply to all Office groups
## Cmdlet syntax reference You can find more Azure Active Directory PowerShell documentation at [Azure Active Directory Cmdlets](/powershell/azure/active-directory/install-adv2).
+
+## Manage group settings using Microsoft Graph
+
+To configure and manage group settings using Microsoft Graph, see the [groupSetting resource type](/graph/api/resources/groupsetting?view=graph-rest-1.0&preserve-view=true) and its associated methods.
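+
+For a quick check of what is currently configured, a minimal sketch using `az rest` (any Microsoft Graph client with sufficient directory read permission works equally well):
+
+```azurecli
+# List the groupSetting objects that currently apply at the directory level.
+az rest --method GET --uri 'https://graph.microsoft.com/v1.0/groupSettings'
+```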
## Additional reading
active-directory Road To The Cloud Migrate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/road-to-the-cloud-migrate.md
Use the following table to determine what Azure-based tools you use to replace t
More tools and notes:
-* [Azure Arc](https://azure.microsoft.com/services/azure-arc/) enables above Azure features to non-Azure VMs. For example, Windows Server when used on-premises
-* or on AWS.
+* [Azure Arc](https://azure.microsoft.com/services/azure-arc/) enables above Azure features to non-Azure VMs. For example, Windows Server when used on-premises or on AWS.
* [Manage and secure your Azure VM environment](https://azure.microsoft.com/services/virtual-machines/secure-well-managed-iaas/).
active-directory Pim Configure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/pim-configure.md
The following screenshot shows an email message sent by PIM. The email informs P
### Assign
-The assignment process starts by assign roles to members. To grant access to a resource, the administrator assigns roles to users, groups, service principals, or managed identities. The assignment includes the following data:
+The assignment process starts by assigning roles to members. To grant access to a resource, the administrator assigns roles to users, groups, service principals, or managed identities. The assignment includes the following data:
- The members or owners to assign the role. - The scope of the assignment. The scope limits the assigned role to a particular set of resources.
active-directory Reference Azure Ad Sla Performance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/reference-azure-ad-sla-performance.md
For each month, we truncate the SLA attainment at three places after the decimal
| February | 99.999% | 99.999% | | March | 99.568% | 99.999% | | April | 99.999% | 99.999% |
-| May | 99.999% | |
-| June | 99.999% | |
+| May | 99.999% | 99.999% |
+| June | 99.999% | 99.999% |
| July | 99.999% | | | August | 99.999% | | | September | 99.999% | |
active-directory Permissions Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/permissions-reference.md
Previously updated : 07/18/2022 Last updated : 08/03/2022
Users with this role can't change the credentials or reset MFA for members and o
> | microsoft.directory/users/authenticationMethods/delete | Delete authentication methods for users | > | microsoft.directory/users/authenticationMethods/standard/restrictedRead | Read standard properties of authentication methods that do not include personally identifiable information for users | > | microsoft.directory/users/authenticationMethods/basic/update | Update basic properties of authentication methods for users |
+> | microsoft.directory/deletedItems.users/restore | Restore soft deleted users to original state |
+> | microsoft.directory/users/delete | Delete users |
+> | microsoft.directory/users/disable | Disable users |
+> | microsoft.directory/users/enable | Enable users |
> | microsoft.directory/users/invalidateAllRefreshTokens | Force sign-out by invalidating user refresh tokens |
+> | microsoft.directory/users/restore | Restore deleted users |
+> | microsoft.directory/users/basic/update | Update basic properties on users |
+> | microsoft.directory/users/manager/update | Update manager for users |
> | microsoft.directory/users/password/update | Reset passwords for all users |
+> | microsoft.directory/users/userPrincipalName/update | Update User Principal Name of users |
> | microsoft.azure.serviceHealth/allEntities/allTasks | Read and configure Azure Service Health | > | microsoft.azure.supportTickets/allEntities/allTasks | Create and manage Azure support tickets | > | microsoft.office365.serviceHealth/allEntities/allTasks | Read and configure Service Health in the Microsoft 365 admin center |
Users with this role have the ability to manage Azure Active Directory Condition
> [!div class="mx-tableFixed"] > | Actions | Description | > | | |
+> | microsoft.directory/namedLocations/create | Create custom rules that define network locations |
+> | microsoft.directory/namedLocations/delete | Delete custom rules that define network locations |
+> | microsoft.directory/namedLocations/standard/read | Read basic properties of custom rules that define network locations |
+> | microsoft.directory/namedLocations/basic/update | Update basic properties of custom rules that define network locations |
> | microsoft.directory/conditionalAccessPolicies/create | Create conditional access policies | > | microsoft.directory/conditionalAccessPolicies/delete | Delete conditional access policies | > | microsoft.directory/conditionalAccessPolicies/standard/read | Read conditional access for policies |
Users with this role have access to all administrative features in Azure Active
> | microsoft.directory/deletedItems/delete | Permanently delete objects, which can no longer be restored | > | microsoft.directory/deletedItems/restore | Restore soft deleted objects to original state | > | microsoft.directory/devices/allProperties/allTasks | Create and delete devices, and read and update all properties |
+> | microsoft.directory/namedLocations/create | Create custom rules that define network locations |
+> | microsoft.directory/namedLocations/delete | Delete custom rules that define network locations |
+> | microsoft.directory/namedLocations/standard/read | Read basic properties of custom rules that define network locations |
+> | microsoft.directory/namedLocations/basic/update | Update basic properties of custom rules that define network locations |
> | microsoft.directory/deviceManagementPolicies/standard/read | Read standard properties on device management application policies | > | microsoft.directory/deviceManagementPolicies/basic/update | Update basic properties on device management application policies | > | microsoft.directory/deviceRegistrationPolicy/standard/read | Read standard properties on device registration policies |
Users in this role can read settings and administrative information across Micro
> | microsoft.directory/groupSettingTemplates/allProperties/read | Read all properties of group setting templates | > | microsoft.directory/identityProtection/allProperties/read | Read all resources in Azure AD Identity Protection | > | microsoft.directory/loginOrganizationBranding/allProperties/read | Read all properties for your organization's branded sign-in page |
+> | microsoft.directory/namedLocations/standard/read | Read basic properties of custom rules that define network locations |
> | microsoft.directory/oAuth2PermissionGrants/allProperties/read | Read all properties of OAuth 2.0 permission grants | > | microsoft.directory/organization/allProperties/read | Read all properties for an organization | > | microsoft.directory/permissionGrantPolicies/standard/read | Read standard properties of permission grant policies |
Do not use. This role has been deprecated and will be removed from Azure AD in t
> | microsoft.directory/contacts/delete | Delete contacts | > | microsoft.directory/contacts/basic/update | Update basic properties on contacts | > | microsoft.directory/deletedItems.groups/restore | Restore soft deleted groups to original state |
-> | microsoft.directory/deletedItems.users/delete | Permanently delete users, which can no longer be restored |
> | microsoft.directory/deletedItems.users/restore | Restore soft deleted users to original state | > | microsoft.directory/groups/create | Create Security groups and Microsoft 365 groups, excluding role-assignable groups | > | microsoft.directory/groups/delete | Delete Security groups and Microsoft 365 groups, excluding role-assignable groups |
Do not use. This role has been deprecated and will be removed from Azure AD in t
> | microsoft.directory/contacts/delete | Delete contacts | > | microsoft.directory/contacts/basic/update | Update basic properties on contacts | > | microsoft.directory/deletedItems.groups/restore | Restore soft deleted groups to original state |
-> | microsoft.directory/deletedItems.users/delete | Permanently delete users, which can no longer be restored |
> | microsoft.directory/deletedItems.users/restore | Restore soft deleted users to original state | > | microsoft.directory/domains/allProperties/allTasks | Create and delete domains, and read and update all properties | > | microsoft.directory/groups/create | Create Security groups and Microsoft 365 groups, excluding role-assignable groups |
The [Authentication Policy Administrator](#authentication-policy-administrator)
> | microsoft.directory/users/authenticationMethods/delete | Delete authentication methods for users | > | microsoft.directory/users/authenticationMethods/standard/read | Read standard properties of authentication methods for users | > | microsoft.directory/users/authenticationMethods/basic/update | Update basic properties of authentication methods for users |
+> | microsoft.directory/deletedItems.users/restore | Restore soft deleted users to original state |
+> | microsoft.directory/users/delete | Delete users |
+> | microsoft.directory/users/disable | Disable users |
+> | microsoft.directory/users/enable | Enable users |
> | microsoft.directory/users/invalidateAllRefreshTokens | Force sign-out by invalidating user refresh tokens |
+> | microsoft.directory/users/restore | Restore deleted users |
+> | microsoft.directory/users/basic/update | Update basic properties on users |
+> | microsoft.directory/users/manager/update | Update manager for users |
> | microsoft.directory/users/password/update | Reset passwords for all users |
+> | microsoft.directory/users/userPrincipalName/update | Update User Principal Name of users |
> | microsoft.azure.serviceHealth/allEntities/allTasks | Read and configure Azure Service Health | > | microsoft.azure.supportTickets/allEntities/allTasks | Create and manage Azure support tickets | > | microsoft.office365.serviceHealth/allEntities/allTasks | Read and configure Service Health in the Microsoft 365 admin center |
Azure Advanced Threat Protection | Monitor and respond to suspicious security ac
> | microsoft.directory/entitlementManagement/allProperties/read | Read all properties in Azure AD entitlement management | > | microsoft.directory/identityProtection/allProperties/read | Read all resources in Azure AD Identity Protection | > | microsoft.directory/identityProtection/allProperties/update | Update all resources in Azure AD Identity Protection |
+> | microsoft.directory/namedLocations/create | Create custom rules that define network locations |
+> | microsoft.directory/namedLocations/delete | Delete custom rules that define network locations |
+> | microsoft.directory/namedLocations/standard/read | Read basic properties of custom rules that define network locations |
+> | microsoft.directory/namedLocations/basic/update | Update basic properties of custom rules that define network locations |
> | microsoft.directory/policies/create | Create policies in Azure AD | > | microsoft.directory/policies/delete | Delete policies in Azure AD | > | microsoft.directory/policies/basic/update | Update basic properties on policies |
Users with this role can manage alerts and have global read-only access on secur
> | | | > | microsoft.directory/auditLogs/allProperties/read | Read all properties on audit logs, including privileged properties | > | microsoft.directory/authorizationPolicy/standard/read | Read standard properties of authorization policy |
-> | microsoft.directory/cloudAppSecurity/allProperties/allTasks | Create and delete all resources, and read and update standard properties in Microsoft Cloud App Security |
+> | microsoft.directory/cloudAppSecurity/allProperties/allTasks | Create and delete all resources, and read and update standard properties in Microsoft Defender for Cloud Apps |
> | microsoft.directory/identityProtection/allProperties/allTasks | Create and delete all resources, and read and update standard properties in Azure AD Identity Protection | > | microsoft.directory/privilegedIdentityManagement/allProperties/read | Read all resources in Privileged Identity Management | > | microsoft.directory/provisioningLogs/allProperties/read | Read all properties of provisioning logs |
Identity Protection Center | Read all security reports and settings information
> | microsoft.directory/bitlockerKeys/key/read | Read bitlocker metadata and key on devices | > | microsoft.directory/entitlementManagement/allProperties/read | Read all properties in Azure AD entitlement management | > | microsoft.directory/identityProtection/allProperties/read | Read all resources in Azure AD Identity Protection |
+> | microsoft.directory/namedLocations/standard/read | Read basic properties of custom rules that define network locations |
> | microsoft.directory/policies/standard/read | Read basic properties on policies | > | microsoft.directory/policies/owners/read | Read owners of policies | > | microsoft.directory/policies/policyAppliedTo/read | Read policies.policyAppliedTo property |
Users with this role can't change the credentials or reset MFA for members and o
> | microsoft.directory/contacts/delete | Delete contacts | > | microsoft.directory/contacts/basic/update | Update basic properties on contacts | > | microsoft.directory/deletedItems.groups/restore | Restore soft deleted groups to original state |
+> | microsoft.directory/deletedItems.users/restore | Restore soft deleted users to original state |
> | microsoft.directory/entitlementManagement/allProperties/allTasks | Create and delete resources, and read and update all properties in Azure AD entitlement management | > | microsoft.directory/groups/assignLicense | Assign product licenses to groups for group-based licensing | > | microsoft.directory/groups/create | Create Security groups and Microsoft 365 groups, excluding role-assignable groups |
active-directory Adpfederatedsso Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/adpfederatedsso-tutorial.md
Previously updated : 09/30/2021 Last updated : 08/03/2022
To configure and test Azure AD SSO with ADP, perform the following steps:
1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature. 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon. 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
-2. **[Configure ADP SSO](#configure-adp-sso)** - to configure the Single Sign-On settings on application side.
+2. **[Configure ADP SSO](#configure-adp-sso)** - to configure the single sign-on settings on application side.
1. **[Create ADP test user](#create-adp-test-user)** - to have a counterpart of B.Simon in ADP that is linked to the Azure AD representation of user. 3. **[Test SSO](#test-sso)** - to verify whether the configuration works.
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
## Configure ADP SSO
-To configure single sign-on on **ADP** side, you need to upload the downloaded **Metadata XML** on the [ADP website](https://adpfedsso.adp.com/public/login/index.fcc).
+1. To automate the configuration within ADP, you need to install **My Apps Secure Sign-in browser extension** by clicking **Install the extension**.
-> [!NOTE]
-> This process may take a few days.
+ ![My apps extension](common/install-myappssecure-extension.png)
+
+1. After adding the extension to the browser, clicking **Set up ADP** will direct you to the ADP application. From there, provide the admin credentials to sign in to ADP. The browser extension will automatically configure the application for you and automate steps 3-7.
+
+ ![Setup configuration](common/setup-sso.png)
+
+1. If you want to set up ADP manually, open a new web browser window, sign in to your ADP company site as an administrator, and then perform the following steps:
+
+1. Click **Federation Setup**, go to **Identity Provider**, and then select **Microsoft Azure**.
+
+ ![Screenshot for identity provider.](./media/adpfederatedsso-tutorial/microsoft-azure.png)
+
+1. In the **Services Selection** section, select all applicable services for the connection, and then click **Next**.
+
+ ![Screenshot for services selection.](./media/adpfederatedsso-tutorial/services.png)
+
+1. In the **Configure** section, click **Next**.
+
+1. In the **Upload Metadata** section, click **Browse** to upload the metadata XML file that you downloaded from the Azure portal, and then click **UPLOAD**.
+
+ ![Screenshot for uploading metadata.](./media/adpfederatedsso-tutorial/metadata.png)
### Configure your ADP service(s) for federated access
active-directory Google Apps Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/google-apps-tutorial.md
Previously updated : 12/27/2021 Last updated : 08/04/2022
To test the steps in this tutorial, you should follow these recommendations:
4. **Q: Can I enable single sign-on for only a subset of my Google Cloud / G Suite Connector by Microsoft users?**
- A: No, turning on single sign-on immediately requires all your Google Cloud / G Suite Connector by Microsoft users to authenticate with their Azure AD credentials. Because Google Cloud / G Suite Connector by Microsoft doesn't support having multiple identity providers, the identity provider for your Google Cloud / G Suite Connector by Microsoft environment can either be Azure AD or Google -- but not both at the same time.
+ A: Yes. SSO profiles can be assigned per user, organizational unit, or group in Google Workspace.
+
+ ![Screenshot for SSO profile assignment.](./media/google-apps-tutorial/profile-assignment.png)
+
+ Set the SSO profile to "none" for the Google Workspace group. This prevents members of that group from being redirected to Azure AD for sign-in.
5. **Q: If a user is signed in through Windows, are they automatically authenticated to Google Cloud / G Suite Connector by Microsoft without getting prompted for a password?**
aks Azure Disk Csi https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-disk-csi.md
test.txt
> [!IMPORTANT] > Azure Disks CSI driver supports resizing PVCs without downtime. > Follow this [link][expand-an-azure-managed-disk] to register the disk online resize feature.
+>
+> `az feature register --namespace Microsoft.Compute --name LiveResize`
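+
+Registration of a preview feature isn't instant; a quick sketch for checking its state with the standard `az feature` commands:
+
+```azurecli
+# Check whether the LiveResize preview feature has finished registering;
+# the state should read "Registered" before relying on online resize.
+az feature show --namespace Microsoft.Compute --name LiveResize --query properties.state
+```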
++ You can request a larger volume for a PVC. Edit the PVC object, and specify a larger size. This change triggers the expansion of the underlying volume that backs the PV.
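+
+As an illustration, one way to request the larger size is to patch the claim directly; the claim name `pvc-azuredisk` and the `15Gi` target below are placeholder values:
+
+```bash
+# Raise the PVC's storage request; the CSI driver expands the backing disk.
+# 'pvc-azuredisk' and '15Gi' are placeholders for your claim and target size.
+kubectl patch pvc pvc-azuredisk --type merge \
+  --patch '{"spec": {"resources": {"requests": {"storage": "15Gi"}}}}'
+```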
aks Command Invoke https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/command-invoke.md
The pod created by the `run` command provides the following binaries:
* The latest compatible version of `kubectl` for your cluster with `kustomize`. * `helm`
-In addition, `command invoke` runs the commands from your cluster so any commands run in this manner are subject to networking and other restrictions you have configured on your cluster.
+In addition, `command invoke` runs the commands from your cluster so any commands run in this manner are subject to networking and other restrictions you have configured on your cluster. Also make sure that there are enough nodes and resources in your cluster to schedule this command pod.
## Use `command invoke` to run a single command
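
For reference, a typical invocation looks like the following sketch; the resource group and cluster names are placeholders:

```azurecli
# Run kubectl inside the cluster through AKS 'command invoke';
# myResourceGroup and myAKSCluster are placeholder names.
az aks command invoke \
  --resource-group myResourceGroup \
  --name myAKSCluster \
  --command "kubectl get pods -n kube-system"
```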
aks Openfaas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/openfaas.md
gateway ClusterIP 10.0.156.194 <none> 8080/TCP
gateway-external LoadBalancer 10.0.28.18 52.186.64.52 8080:30800/TCP 7m ```
-To test the OpenFaaS system, browse to the external IP address on port 8080, `http://52.186.64.52:8080` in this example. You will be prompted to log in. To fetch your password, enter `echo $PASSWORD`.
+To test the OpenFaaS system, browse to the external IP address on port 8080, `http://52.186.64.52:8080` in this example. You will be prompted to log in. The default user is `admin` and your password can be retrieved by using `echo $PASSWORD`.
![OpenFaaS UI](media/container-service-serverless/openfaas.png)
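
If you prefer the CLI over the UI, a sketch of the equivalent login, assuming `faas-cli` is installed and `$PASSWORD` was captured as described above:

```bash
# Authenticate faas-cli against the gateway using the admin credentials.
echo -n $PASSWORD | faas-cli login --username admin --password-stdin \
  --gateway http://52.186.64.52:8080
```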
aks Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Kubernetes Service (AKS) description: Lists Azure Policy Regulatory Compliance controls available for Azure Kubernetes Service (AKS). These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 08/01/2022 Last updated : 08/04/2022
aks Use Byo Cni https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-byo-cni.md
Last updated 3/30/2022
-# Bring your own Container Network Interface (CNI) plugin with Azure Kubernetes Service (AKS) (PREVIEW)
+# Bring your own Container Network Interface (CNI) plugin with Azure Kubernetes Service (AKS) (preview)
Kubernetes does not provide a network interface system by default; this functionality is provided by [network plugins][kubernetes-cni]. Azure Kubernetes Service provides several supported CNI plugins. Documentation for supported plugins can be found from the [networking concepts page][aks-network-concepts].
api-management Api Management Access Restriction Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-access-restriction-policies.md
To understand the difference between rate limits and quotas, [see Rate limits an
<rate-limit calls="number" renewal-period="seconds"> <api name="API name" id="API id" calls="number" renewal-period="seconds"> <operation name="operation name" id="operation id" calls="number" renewal-period="seconds"
- retry-after-header-name="header name"
+ retry-after-header-name="custom header name, replaces default 'Retry-After'"
retry-after-variable-name="policy expression variable name" remaining-calls-header-name="header name" remaining-calls-variable-name="policy expression variable name"
In the following example, the per subscription rate limit is 20 calls per 90 sec
| name | The name of the API for which to apply the rate limit. | Yes | N/A | | calls | The maximum total number of calls allowed during the time interval specified in `renewal-period`. | Yes | N/A | | renewal-period | The length in seconds of the sliding window during which the number of allowed requests should not exceed the value specified in `calls`. Maximum allowed value: 300 seconds. | Yes | N/A |
-| retry-after-header-name | The name of a response header whose value is the recommended retry interval in seconds after the specified call rate is exceeded. | No | N/A |
+| retry-after-header-name | The name of a custom response header whose value is the recommended retry interval in seconds after the specified call rate is exceeded. | No | `Retry-After` |
| retry-after-variable-name | The name of a policy expression variable that stores the recommended retry interval in seconds after the specified call rate is exceeded. | No | N/A | | remaining-calls-header-name | The name of a response header whose value after each policy execution is the number of remaining calls allowed for the time interval specified in the `renewal-period`. | No | N/A | | remaining-calls-variable-name | The name of a policy expression variable that after each policy execution stores the number of remaining calls allowed for the time interval specified in the `renewal-period`. | No | N/A |
For more information and examples of this policy, see [Advanced request throttli
increment-condition="condition" increment-count="number" counter-key="key value"
- retry-after-header-name="header name" retry-after-variable-name="policy expression variable name"
- remaining-calls-header-name="header name" remaining-calls-variable-name="policy expression variable name"
+ retry-after-header-name="custom header name, replaces default 'Retry-After'"
+ retry-after-variable-name="policy expression variable name"
+ remaining-calls-header-name="header name"
+ remaining-calls-variable-name="policy expression variable name"
total-calls-header-name="header name"/> ```
In the following example, the rate limit of 10 calls per 60 seconds is keyed by
| increment-condition | The boolean expression specifying if the request should be counted towards the rate (`true`). | No | N/A | | increment-count | The number by which the counter is increased per request. | No | 1 | | renewal-period | The length in seconds of the sliding window during which the number of allowed requests should not exceed the value specified in `calls`. Policy expression is allowed. Maximum allowed value: 300 seconds. | Yes | N/A |
-| retry-after-header-name | The name of a response header whose value is the recommended retry interval in seconds after the specified call rate is exceeded. | No | N/A |
+| retry-after-header-name | The name of a custom response header whose value is the recommended retry interval in seconds after the specified call rate is exceeded. | No | `Retry-After` |
| retry-after-variable-name | The name of a policy expression variable that stores the recommended retry interval in seconds after the specified call rate is exceeded. | No | N/A | | remaining-calls-header-name | The name of a response header whose value after each policy execution is the number of remaining calls allowed for the time interval specified in the `renewal-period`. | No | N/A | | remaining-calls-variable-name | The name of a policy expression variable that after each policy execution stores the number of remaining calls allowed for the time interval specified in the `renewal-period`. | No | N/A |
This policy can be used in the following policy [sections](./api-management-howt
## <a name="SetUsageQuota"></a> Set usage quota by subscription
-The `quota` policy enforces a renewable or lifetime call volume and/or bandwidth quota, on a per subscription basis.
+The `quota` policy enforces a renewable or lifetime call volume and/or bandwidth quota, on a per subscription basis. When the quota is exceeded, the caller receives a `403 Forbidden` response status code, and the response includes a `Retry-After` header whose value is the recommended retry interval in seconds.
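+
+For illustration, a minimal sketch of the policy; the numbers are arbitrary and limit each subscription to 10,000 calls and 40,000 KB of bandwidth per hour:
+
+```xml
+<!-- Sketch: per-subscription quota of 10,000 calls and 40,000 KB per hour. -->
+<quota calls="10000" bandwidth="40000" renewal-period="3600" />
+```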
To understand the difference between rate limits and quotas, [see Rate limits and quotas.](./api-management-sample-flexible-throttling.md#rate-limits-and-quotas)
This policy can be used in the following policy [sections](./api-management-howt
> [!IMPORTANT] > This feature is unavailable in the **Consumption** tier of API Management.
-The `quota-by-key` policy enforces a renewable or lifetime call volume and/or bandwidth quota, on a per key basis. The key can have an arbitrary string value and is typically provided using a policy expression. Optional increment condition can be added to specify which requests should be counted towards the quota. If multiple policies would increment the same key value, it is incremented only once per request. When the call rate is exceeded, the caller receives a `403 Forbidden` response status code.
+The `quota-by-key` policy enforces a renewable or lifetime call volume and/or bandwidth quota, on a per key basis. The key can have an arbitrary string value and is typically provided using a policy expression. An optional increment condition can be added to specify which requests should be counted towards the quota. If multiple policies would increment the same key value, it is incremented only once per request. When the quota is exceeded, the caller receives a `403 Forbidden` response status code, and the response includes a `Retry-After` header whose value is the recommended retry interval in seconds.
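+
+As a sketch, a per-caller-IP quota that counts only successful responses might look like this:
+
+```xml
+<!-- Sketch: per-IP quota, incremented only for 2xx/3xx responses. -->
+<quota-by-key calls="10000" bandwidth="40000" renewal-period="3600"
+              increment-condition="@(context.Response.StatusCode >= 200 && context.Response.StatusCode < 400)"
+              counter-key="@(context.Request.IpAddress)" />
+```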
For more information and examples of this policy, see [Advanced request throttling with Azure API Management](./api-management-sample-flexible-throttling.md).
api-management Api Management Transformation Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-transformation-policies.md
In this example the policy routes the request to a service fabric backend, using
### Policy statement ```xml
-<set-body>new body value as text</set-body>
+<set-body template="liquid" xsi-nil="blank | null">
+ new body value as text
+</set-body>
``` ### Examples
The following Liquid filters are supported in the `set-body` policy. For filter
|Name|Description|Required|Default| |-|--|--|-|
-|template|Used to change the templating mode that the set body policy will run in. Currently the only supported value is:<br /><br />- liquid - the set body policy will use the liquid templating engine |No||
+|template|Used to change the templating mode that the `set-body` policy will run in. Currently the only supported value is:<br /><br />- liquid - the `set-body` policy will use the liquid templating engine |No| N/A|
+|xsi-nil| Used to control how elements marked with `xsi:nil="true"` are represented in XML payloads. Set to one of the following values.<br /><br />- blank - `nil` is represented with an empty string.<br />- null - `nil` is represented with a null value.|No | blank |
For accessing information about the request and response, the Liquid template can bind to a context object with the following properties: <br /> <pre>context.
api-management Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure API Management description: Lists Azure Policy Regulatory Compliance controls available for Azure API Management. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 08/01/2022 Last updated : 08/04/2022
app-service Deploy Github Actions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/deploy-github-actions.md
OpenID Connect is an authentication method that uses short-lived tokens. Setting
* For workflows triggered by a pull request event: `repo:< Organization/Repository >:pull_request`. ```azurecli
- az rest --method POST --uri 'https://graph.microsoft.com/beta/applications/<APPLICATION-OBJECT-ID>/federatedIdentityCredentials' --body '{"name":"<CREDENTIAL-NAME>","issuer":"https://token.actions.githubusercontent.com","subject":"repo:organization/repository:ref:refs/heads/main","description":"Testing","audiences":["api://AzureADTokenExchange"]}'
+ az ad app federated-credential create --id <APPLICATION-OBJECT-ID> --parameters credential.json
+ ("credential.json" contains the following content)
+ {
+ "name": "<CREDENTIAL-NAME>",
+ "issuer": "https://token.actions.githubusercontent.com/",
+ "subject": "repo:organization/repository:ref:refs/heads/main",
+ "description": "Testing",
+ "audiences": [
+ "api://AzureADTokenExchange"
+ ]
+ }
``` To learn how to create an Azure Active Directory application, service principal, and federated credentials in the Azure portal, see [Connect GitHub and Azure](/azure/developer/github/connect-from-azure#use-the-azure-login-action-with-openid-connect).
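
To confirm the credential landed on the app registration, a quick sketch using the same CLI command group:

```azurecli
# List the federated credentials configured on the app registration.
az ad app federated-credential list --id <APPLICATION-OBJECT-ID>
```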
app-service Deploy Staging Slots https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/deploy-staging-slots.md
Get-AzLog -ResourceGroup [resource group name] -StartTime 2018-03-07 -Caller Slo
Remove-AzResource -ResourceGroupName [resource group name] -ResourceType Microsoft.Web/sites/slots -Name [app name]/[slot name] -ApiVersion 2015-07-01 ```
+To perform a slot swap from the production slot, the identity needs (at minimum) permissions to perform the `Microsoft.Web/sites/slotsswap/Action` operation. For more information, see [Resource provider operations](../role-based-access-control/resource-provider-operations.md#microsoftweb).
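+
+For context, a swap carried out by such an identity from the CLI looks like this sketch; the app, group, and slot names are placeholders:
+
+```azurecli
+# Swap the 'staging' slot into production; all names are placeholders.
+az webapp deployment slot swap --resource-group myResourceGroup \
+  --name myApp --slot staging --target-slot production
+```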
+ ## Automate with Resource Manager templates [Azure Resource Manager templates](../azure-resource-manager/templates/overview.md) are declarative JSON files used to automate the deployment and configuration of Azure resources. To swap slots by using Resource Manager templates, you will set two properties on the *Microsoft.Web/sites/slots* and *Microsoft.Web/sites* resources:
app-service Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure App Service description: Lists Azure Policy Regulatory Compliance controls available for Azure App Service. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 08/01/2022 Last updated : 08/04/2022
application-gateway Configure Alerts With Templates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/configure-alerts-with-templates.md
You can use ARM templates to quickly configure important alerts for Application
> 1. Use the Resource Group Name, Action Group Name and Subscription Info here to form the ResourceID for the action group as shown here: <br> > `/subscriptions/<subscription-id-from-your-account>/resourcegroups/<resource-group-name>/providers/microsoft.insights/actiongroups/<action-group-name>` - The templates for alerts described here are defined generically for settings like Severity, Aggregation Granularity, Frequency of Evaluation, Condition Type, and so on. You can modify the settings after deployment to meet your needs. See [Understand how metric alerts work in Azure Monitor](../azure-monitor/alerts/alerts-metric-overview.md) for more information.-- The templates for metric-based alerts use the **Dynamic threshold** value with [High sensitivity](../azure-monitor/alerts/alerts-dynamic-thresholds.md#what-does-sensitivity-setting-in-dynamic-thresholds-mean). You can choose to adjust these settings based on your needs.
+- The templates for metric-based alerts use the **Dynamic threshold** value with [High sensitivity](../azure-monitor/alerts/alerts-dynamic-thresholds.md#what-does-the-sensitivity-setting-in-dynamic-thresholds-mean). You can choose to adjust these settings based on your needs.
## ARM templates
automanage Automanage Arc https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automanage/automanage-arc.md
For all of these services, we will auto-onboard, auto-configure, monitor for dri
Automanage supports the following operating systems for Azure Arc-enabled servers -- Windows Server 2012 R2, 2016, 2019, 2022
+- Windows Server 2012 R2, 2016, 2019, 2022
- CentOS 7.3+, 8 - RHEL 7.4+, 8 - Ubuntu 16.04, 18.04, 20.04
Automanage supports the following operating systems for Azure Arc-enabled server
|[Update Management](../automation/update-management/overview.md) |You can use Update Management in Azure Automation to manage operating system updates for your machines. You can quickly assess the status of available updates on all agent machines and manage the process of installing required updates for servers. |Production, Dev/Test | |[Microsoft Antimalware](../security/fundamentals/antimalware.md) |Microsoft Antimalware for Azure is a free real-time protection that helps identify and remove viruses, spyware, and other malicious software. It generates alerts when known malicious or unwanted software tries to install itself or run on your Azure systems. **Note:** Microsoft Antimalware requires that there be no other antimalware software installed, or it may fail to work. This is also only supported for Windows Server 2016 and above. |Production, Dev/Test | |[Change Tracking & Inventory](../automation/change-tracking/overview.md) |Change Tracking and Inventory combines change tracking and inventory functions to allow you to track virtual machine and server infrastructure changes. The service supports change tracking across services, daemons software, registry, and files in your environment to help you diagnose unwanted changes and raise alerts. Inventory support allows you to query in-guest resources for visibility into installed applications and other configuration items. |Production, Dev/Test |
-|[Azure Guest Configuration](../governance/policy/concepts/guest-configuration.md) | Guest Configuration policy is used to monitor the configuration and report on the compliance of the machine. The Automanage service will install the Azure security baseline using the Guest Configuration extension. For Arc machines, the guest configuration service will install the baseline in audit-only mode. You will be able to see where your VM is out of compliance with the baseline, but noncompliance won't be automatically remediated. |Production, Dev/Test |
+|[Azure Guest Configuration](../governance/machine-configuration/overview.md) | Guest Configuration policy is used to monitor the configuration and report on the compliance of the machine. The Automanage service will install the Azure security baseline using the Guest Configuration extension. For Arc machines, the guest configuration service will install the baseline in audit-only mode. You will be able to see where your VM is out of compliance with the baseline, but noncompliance won't be automatically remediated. |Production, Dev/Test |
|[Azure Automation Account](../automation/automation-create-standalone-account.md) |Azure Automation supports management throughout the lifecycle of your infrastructure and applications. |Production, Dev/Test | |[Log Analytics Workspace](../azure-monitor/logs/log-analytics-overview.md) |Azure Monitor stores log data in a Log Analytics workspace, which is an Azure resource and a container where data is collected, aggregated, and serves as an administrative boundary. |Production, Dev/Test |
automanage Automanage Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automanage/automanage-linux.md
Automanage supports the following Linux distributions and versions:
|[Microsoft Defender for Cloud](../security-center/security-center-introduction.md) |Microsoft Defender for Cloud is a unified infrastructure security management system that strengthens the security posture of your data centers, and provides advanced threat protection across your hybrid workloads in the cloud. Learn [more](../security-center/security-center-introduction.md). Automanage will configure the subscription where your VM resides to the free-tier offering of Microsoft Defender for Cloud (Enhanced security off). If your subscription is already onboarded to Microsoft Defender for Cloud, then Automanage will not reconfigure it. |Production, Dev/Test | |[Update Management](../automation/update-management/overview.md) |You can use Update Management in Azure Automation to manage operating system updates for your machines. You can quickly assess the status of available updates on all agent machines and manage the process of installing required updates for servers. Learn [more](../automation/update-management/overview.md). |Production, Dev/Test | |[Change Tracking & Inventory](../automation/change-tracking/overview.md) |Change Tracking and Inventory combines change tracking and inventory functions to allow you to track virtual machine and server infrastructure changes. The service supports change tracking across services, daemons software, registry, and files in your environment to help you diagnose unwanted changes and raise alerts. Inventory support allows you to query in-guest resources for visibility into installed applications and other configuration items. Learn [more](../automation/change-tracking/overview.md). |Production, Dev/Test |
-|[Guest configuration](../governance/policy/concepts/guest-configuration.md) | Guest configuration is used to monitor the configuration and report on the compliance of the machine. The Automanage service will install the Azure Linux baseline using the guest configuration extension. For Linux machines, the guest configuration service will install the baseline in audit-only mode. You will be able to see where your VM is out of compliance with the baseline, but noncompliance won't be automatically remediated. Learn [more](../governance/policy/concepts/guest-configuration.md). |Production, Dev/Test |
+|[Guest configuration](../governance/machine-configuration/overview.md) | Guest configuration is used to monitor the configuration and report on the compliance of the machine. The Automanage service will install the Azure Linux baseline using the guest configuration extension. For Linux machines, the guest configuration service will install the baseline in audit-only mode. You will be able to see where your VM is out of compliance with the baseline, but noncompliance won't be automatically remediated. Learn [more](../governance/machine-configuration/overview.md). |Production, Dev/Test |
|[Boot Diagnostics](../virtual-machines/boot-diagnostics.md) | Boot diagnostics is a debugging feature for Azure virtual machines (VM) that allows diagnosis of VM boot failures. Boot diagnostics enables a user to observe the state of their VM as it is booting up by collecting serial log information and screenshots. This will only be enabled for machines that are using managed disks. |Production, Dev/Test | |[Azure Automation Account](../automation/automation-create-standalone-account.md) |Azure Automation supports management throughout the lifecycle of your infrastructure and applications. Learn [more](../automation/automation-intro.md). |Production, Dev/Test | |[Log Analytics Workspace](../azure-monitor/logs/log-analytics-workspace-overview.md) |Azure Monitor stores log data in a Log Analytics workspace, which is an Azure resource and a container where data is collected, aggregated, and serves as an administrative boundary. Learn [more](../azure-monitor/logs/workspace-design.md). |Production, Dev/Test |
automanage Automanage Windows Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automanage/automanage-windows-server.md
Automanage supports the following Windows versions:
- Windows Server 2019 - Windows Server 2022 - Windows Server 2022 Azure Edition-- Windows 10
+- Windows 10
## Participating services
Automanage supports the following Windows versions:
|[Microsoft Antimalware](../security/fundamentals/antimalware.md) |Microsoft Antimalware for Azure is a free real-time protection that helps identify and remove viruses, spyware, and other malicious software. It generates alerts when known malicious or unwanted software tries to install itself or run on your Azure systems. **Note:** Microsoft Antimalware requires that there be no other antimalware software installed, or it may fail to work. |Production, Dev/Test | |[Update Management](../automation/update-management/overview.md) |You can use Update Management in Azure Automation to manage operating system updates for your machines. You can quickly assess the status of available updates on all agent machines and manage the process of installing required updates for servers. |Production, Dev/Test | |[Change Tracking & Inventory](../automation/change-tracking/overview.md) |Change Tracking and Inventory combines change tracking and inventory functions to allow you to track virtual machine and server infrastructure changes. The service supports change tracking across services, daemons software, registry, and files in your environment to help you diagnose unwanted changes and raise alerts. Inventory support allows you to query in-guest resources for visibility into installed applications and other configuration items. |Production, Dev/Test |
-|[Guest configuration](../governance/policy/concepts/guest-configuration.md) | Guest configuration policy is used to monitor the configuration and report on the compliance of the machine. The Automanage service will install the [Windows security baselines](/windows/security/threat-protection/windows-security-baselines) using the guest configuration extension. For Windows machines, the guest configuration service will install the baseline in audit-only mode. You will be able to see where your VM is out of compliance with the baseline, but noncompliance won't be automatically remediated. Learn [more](../governance/policy/concepts/guest-configuration.md). To modify the audit mode for Windows machines, use a custom profile to choose your audit mode setting. [Learn more](virtual-machines-custom-profile.md) |Production, Dev/Test |
+|[Guest configuration](../governance/machine-configuration/overview.md) | Guest configuration policy is used to monitor the configuration and report on the compliance of the machine. The Automanage service will install the [Windows security baselines](/windows/security/threat-protection/windows-security-baselines) using the guest configuration extension. For Windows machines, the guest configuration service will install the baseline in audit-only mode. You will be able to see where your VM is out of compliance with the baseline, but noncompliance won't be automatically remediated. Learn [more](../governance/machine-configuration/overview.md). To modify the audit mode for Windows machines, use a custom profile to choose your audit mode setting. [Learn more](virtual-machines-custom-profile.md) |Production, Dev/Test |
|[Boot Diagnostics](../virtual-machines/boot-diagnostics.md) | Boot diagnostics is a debugging feature for Azure virtual machines (VM) that allows diagnosis of VM boot failures. Boot diagnostics enables a user to observe the state of their VM as it is booting up by collecting serial log information and screenshots. This will only be enabled for machines that are using managed disks. |Production, Dev/Test | |[Windows Admin Center](/windows-server/manage/windows-admin-center/azure/manage-vm) | Use Windows Admin Center (preview) in the Azure portal to manage the Windows Server operating system inside an Azure VM. This is only supported for machines using Windows Server 2016 or higher. Automanage configures Windows Admin Center over a Private IP address. If you wish to connect with Windows Admin Center over a Public IP address, please open an inbound port rule for port 6516. Automanage onboards Windows Admin Center for the Dev/Test profile by default. Use the preferences to enable or disable Windows Admin Center for the Production and Dev/Test environments. |Production, Dev/Test | |[Azure Automation Account](../automation/automation-create-standalone-account.md) |Azure Automation supports management throughout the lifecycle of your infrastructure and applications. |Production, Dev/Test |
automanage Virtual Machines Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automanage/virtual-machines-best-practices.md
For all of these services, we will auto-onboard, auto-configure, monitor for dri
|Microsoft Antimalware |Microsoft Antimalware for Azure is a free real-time protection that helps identify and remove viruses, spyware, and other malicious software. It generates alerts when known malicious or unwanted software tries to install itself or run on your Azure systems. Learn [more](../security/fundamentals/antimalware.md). |Azure VM Best Practices – Production, Azure VM Best Practices – Dev/Test |Yes | |Update Management |You can use Update Management in Azure Automation to manage operating system updates for your virtual machines. You can quickly assess the status of available updates on all agent machines and manage the process of installing required updates for servers. Learn [more](../automation/update-management/overview.md). |Azure VM Best Practices – Production, Azure VM Best Practices – Dev/Test |No | |Change Tracking & Inventory |Change Tracking and Inventory combines change tracking and inventory functions to allow you to track virtual machine and server infrastructure changes. The service supports change tracking across services, daemons software, registry, and files in your environment to help you diagnose unwanted changes and raise alerts. Inventory support allows you to query in-guest resources for visibility into installed applications and other configuration items. Learn [more](../automation/change-tracking/overview.md). |Azure VM Best Practices – Production, Azure VM Best Practices – Dev/Test |No |
-|Guest configuration | Guest configuration is used to monitor the configuration and report on the compliance of the machine. The Automanage service will install the [Windows security baselines](/windows/security/threat-protection/windows-security-baselines) using the guest configuration extension. Learn [more](../governance/policy/concepts/guest-configuration.md). |Azure VM Best Practices – Production, Azure VM Best Practices – Dev/Test |No |
+|Guest configuration | Guest configuration is used to monitor the configuration and report on the compliance of the machine. The Automanage service will install the [Windows security baselines](/windows/security/threat-protection/windows-security-baselines) using the guest configuration extension. Learn [more](../governance/machine-configuration/overview.md). |Azure VM Best Practices – Production, Azure VM Best Practices – Dev/Test |No |
|Azure Automation Account |Azure Automation supports management throughout the lifecycle of your infrastructure and applications. Learn [more](../automation/automation-intro.md). |Azure VM Best Practices – Production, Azure VM Best Practices – Dev/Test |No | |Log Analytics Workspace |Azure Monitor stores log data in a Log Analytics workspace, which is an Azure resource and a container where data is collected, aggregated, and serves as an administrative boundary. Learn [more](../azure-monitor/logs/log-analytics-workspace-overview.md). |Azure VM Best Practices – Production, Azure VM Best Practices – Dev/Test |No |
automanage Virtual Machines Custom Profile https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automanage/virtual-machines-custom-profile.md
Previously updated : 03/22/2022 Last updated : 08/01/2022 # Create a custom profile in Azure Automanage for VMs
-Azure Automanage for machine best practices has default best practice profiles that cannot be edited. However, if you need more flexibility, you can pick and choose the set of services and settings by creating a custom profile.
+Azure Automanage for Virtual Machines includes default best practice profiles that can't be edited. However, if you need more flexibility, you can pick and choose the set of services and settings by creating a custom profile.
-Automanage supports toggling services ON and OFF. It also currently supports customizing settings on [Azure Backup](..\backup\backup-azure-arm-vms-prepare.md#create-a-custom-policy) and [Microsoft Antimalware](../security/fundamentals/antimalware.md#default-and-custom-antimalware-configuration). You can also specify an existing log analytics workspace. Also, for Windows machines only, you can modify the audit modes for the [Azure security baselines in Guest Configuration](../governance/policy/concepts/guest-configuration.md).
+Automanage supports toggling services ON and OFF. It also currently supports customizing settings on [Azure Backup](..\backup\backup-azure-arm-vms-prepare.md#create-a-custom-policy) and [Microsoft Antimalware](../security/fundamentals/antimalware.md#default-and-custom-antimalware-configuration). You can also specify an existing log analytics workspace. Also, for Windows machines only, you can modify the audit modes for the [Azure security baselines in Guest Configuration](../governance/machine-configuration/overview.md).
-Automanage allows you to tag the following resources in the custom profile:
+Automanage allows you to tag the following resources in the custom profile:
* Resource Group * Automation Account * Log Analytics Workspace
The following ARM template will create an Automanage custom profile. Details on
``` ### ARM template deployment
-This ARM template will create a custom configuration profile that you can assign to your specified machine.
+This ARM template will create a custom configuration profile that you can assign to your specified machine.
The `customProfileName` value is the name of the custom configuration profile that you would like to create.
-The `location` value is the region where you would like to store this custom configuration profile. Note, you can assign this profile to any supported machines in any region.
+The `location` value is the region where you would like to store this custom configuration profile. Note that you can assign this profile to any supported machine in any region.
-The `azureSecurityBaselineAssignmentType` is the audit mode that you can choose for the Azure server security baseline. Your options are
+The `azureSecurityBaselineAssignmentType` is the audit mode that you can choose for the Azure server security baseline. Your options are:
-* ApplyAndAutoCorrect : This will apply the Azure security baseline through the Guest Configuration extention, and if any setting within the baseline drifts, we will auto-remediate the setting so it stays compliant.
-* ApplyAndMonitor : This will apply the Azure security baseline through the Guest Configuration extention when you first assign this profile to each machine. After it is applied, the Guest Configuration service will monitor the sever baseline and report any drift from the desired state. However, it will not auto-remdiate.
-* Audit : This will install the Azure security baseline using the Guest Configuration extension. You will be able to see where your machine is out of compliance with the baseline, but noncompliance won't be automatically remediated.
+* ApplyAndAutoCorrect: This setting will apply the Azure security baseline through the Guest Configuration extension, and if any setting within the baseline drifts, we'll auto-remediate the setting so it stays compliant.
+* ApplyAndMonitor: This setting will apply the Azure security baseline through the Guest Configuration extension when you first assign this profile to each machine. After it's applied, the Guest Configuration service will monitor the server baseline and report any drift from the desired state. However, it will not auto-remediate.
+* Audit: This setting will install the Azure security baseline using the Guest Configuration extension. You'll be able to see where your machine is out of compliance with the baseline, but noncompliance won't be automatically remediated.
-You can also specify an existing log analytics workspace by adding this setting to the configuration section of properties below:
+You can also specify an existing Log Analytics workspace by adding these settings to the configuration section of the properties below:
* "LogAnalytics/Workspace": "/subscriptions/**subscriptionId**/resourceGroups/**resourceGroupName**/providers/Microsoft.OperationalInsights/workspaces/**workspaceName**" * "LogAnalytics/Reprovision": false
-Specify your existing workspace in the `LogAnalytics/Workspace` line. Set the `LogAnalytics/Reprovision` setting to true if you would like this log analytics workspace to be used in all cases. This means that any machine with this custom profile will use this workspace, even it is already connected to one. By default, the `LogAnalytics/Reprovision` is set to false. If your machine is already connected to a workspace, then that workspace will continue to be used. If it is not connected to a workspace, then the workspace specified in `LogAnalytics\Workspace` will be used.
+Specify your existing workspace in the `LogAnalytics/Workspace` line. Set the `LogAnalytics/Reprovision` setting to true if you would like this Log Analytics workspace to be used in all cases. This means that any machine with this custom profile will use this workspace, even if it's already connected to one. By default, `LogAnalytics/Reprovision` is set to false. If your machine is already connected to a workspace, then that workspace will continue to be used. If it's not connected to a workspace, then the workspace specified in `LogAnalytics/Workspace` will be used.
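To make the shape concrete, here's a minimal hedged sketch of how these keys might sit in the profile's `properties.configuration` object (the `AzureSecurityBaseline/AssignmentType` key name and all placeholder values are assumptions for illustration; the full ARM template in this article is the authoritative reference):

```json
{
  "properties": {
    "configuration": {
      "AzureSecurityBaseline/AssignmentType": "ApplyAndMonitor",
      "LogAnalytics/Workspace": "/subscriptions/<subscriptionId>/resourceGroups/<resourceGroupName>/providers/Microsoft.OperationalInsights/workspaces/<workspaceName>",
      "LogAnalytics/Reprovision": false
    }
  }
}
```

With `LogAnalytics/Reprovision` left at `false`, machines already connected to a workspace keep it, and only unconnected machines pick up the workspace above.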
Also, you can add tags to resources specified in the custom profile like below:
}, "Tags/RecoveryVault/Behavior": "Preserve" ```
-The `Tags/Behavior` can either be set to Preserve or Replace. If the resource you are tagging already has the same tag key in the key/value pair, you can choose if you would like to replace that key with the specified value in the configuration profile by using the *Replace* behavior. By default, the behavior is set to *Preserve*, meaning that the tag key that is already associated with that resource will be kept and not overwritten by the key/value pair specified in the configuration profile.
+The `Tags/Behavior` can be set either to Preserve or Replace. If the resource you are tagging already has the same tag key in the key/value pair, you can replace that key with the specified value in the configuration profile by using the *Replace* behavior. By default, the behavior is set to *Preserve*, meaning that the tag key that is already associated with that resource will be retained and not overwritten by the key/value pair specified in the configuration profile.
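Purely as an illustrative sketch (the `Tags/ResourceGroup` key and the `Environment` tag below are assumed names, extrapolated from the `Tags/RecoveryVault/Behavior` pattern above; check the full template for the authoritative shape):

```json
{
  "Tags/ResourceGroup": {
    "Environment": "Production"
  },
  "Tags/ResourceGroup/Behavior": "Replace"
}
```

With *Replace*, an existing `Environment` tag on the resource group would be overwritten; leaving the behavior unset keeps the default *Preserve*.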
Follow these steps to deploy the ARM template: 1. Save this ARM template as `azuredeploy.json`
-1. Run this ARM template deployment with `az deployment group create --resource-group myResourceGroup --template-file azuredeploy.json`
-1. Provide the values for customProfileName, location, and azureSecurityBaselineAssignmentType when prompted
-1. You're ready to deploy
+2. Run this ARM template deployment with `az deployment group create --resource-group myResourceGroup --template-file azuredeploy.json`
+3. Provide the values for customProfileName, location, and azureSecurityBaselineAssignmentType when prompted
+4. You're ready to deploy
As with any ARM template, it's possible to factor out the parameters into a separate `azuredeploy.parameters.json` file and use that as an argument when deploying.
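For example, a hypothetical `azuredeploy.parameters.json` covering the three prompted values might look like this (all values are placeholders):

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentParameters.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "customProfileName": { "value": "myCustomProfile" },
    "location": { "value": "eastus" },
    "azureSecurityBaselineAssignmentType": { "value": "ApplyAndMonitor" }
  }
}
```

You would then deploy with `az deployment group create --resource-group myResourceGroup --template-file azuredeploy.json --parameters @azuredeploy.parameters.json`.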
-## Next steps
+## Next steps
-Get the most frequently asked questions answered in our FAQ.
+Get the most frequently asked questions answered in our FAQ.
> [!div class="nextstepaction"] > [Frequently Asked Questions](faq.yml)
automation Automation Dsc Extension History https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-dsc-extension-history.md
The Azure Desired State Configuration (DSC) VM [extension](../virtual-machines/extensions/dsc-overview.md) is updated as-needed to support enhancements and new capabilities delivered by Azure, Windows Server, and the Windows Management Framework (WMF) that includes Windows PowerShell. > [!NOTE]
-> Before you enable the DSC extension, we would like you to know that a newer version of DSC is now available in preview, managed by a feature of Azure Policy named [guest configuration](../governance/policy/concepts/guest-configuration.md). The guest configuration service combines features of DSC Extension, Azure Automation State Configuration, and the most commonly requested features from customer feedback. Guest configuration also includes hybrid machine support through [Arc-enabled servers](../azure-arc/servers/overview.md).
+> Before you enable the DSC extension, we would like you to know that a newer version of DSC is now available in preview, managed by a feature of Azure Policy named [guest configuration](../governance/machine-configuration/overview.md). The guest configuration service combines features of DSC Extension, Azure Automation State Configuration, and the most commonly requested features from customer feedback. Guest configuration also includes hybrid machine support through [Arc-enabled servers](../azure-arc/servers/overview.md).
This article provides information about each version of the Azure DSC VM extension, what environments it supports, and comments and remarks on new features or changes.
automation Automation Dsc Getting Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-dsc-getting-started.md
This article provides a step-by-step guide for doing the most common tasks with Azure Automation State Configuration, such as creating, importing, and compiling configurations, enabling machines to manage, and viewing reports. For an overview State Configuration, see [State Configuration overview](automation-dsc-overview.md). For Desired State Configuration (DSC) documentation, see [Windows PowerShell Desired State Configuration Overview](/powershell/dsc/overview). > [!NOTE]
-> Before you enable Automation State Configuration, we would like you to know that a newer version of DSC is now available in preview, managed by a feature of Azure Policy named [guest configuration](../governance/policy/concepts/guest-configuration.md). The guest configuration service combines features of DSC Extension, Azure Automation State Configuration, and the most commonly requested features from customer feedback. Guest configuration also includes hybrid machine support through [Arc-enabled servers](../azure-arc/servers/overview.md).
+> Before you enable Automation State Configuration, we would like you to know that a newer version of DSC is now available in preview, managed by a feature of Azure Policy named [guest configuration](../governance/machine-configuration/overview.md). The guest configuration service combines features of DSC Extension, Azure Automation State Configuration, and the most commonly requested features from customer feedback. Guest configuration also includes hybrid machine support through [Arc-enabled servers](../azure-arc/servers/overview.md).
If you want a sample environment that is already set up without following the steps described in this article, you can use the [Azure Automation Managed Node template](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.automation/automation-configuration). This template sets up a complete State Configuration (DSC) environment, including an Azure VM that is managed by State Configuration (DSC).
automation Automation Dsc Onboarding https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-dsc-onboarding.md
This topic describes how you can set up your machines for management with Azure Automation State Configuration. For details of this service, see [Azure Automation State Configuration overview](automation-dsc-overview.md). > [!NOTE]
-> Before you enable Automation State Configuration, we would like you to know that a newer version of DSC is now available in preview, managed by a feature of Azure Policy named [guest configuration](../governance/policy/concepts/guest-configuration.md). The guest configuration service combines features of DSC Extension, Azure Automation State Configuration, and the most commonly requested features from customer feedback. Guest configuration also includes hybrid machine support through [Arc-enabled servers](../azure-arc/servers/overview.md).
+> Before you enable Automation State Configuration, we would like you to know that a newer version of DSC is now available in preview, managed by a feature of Azure Policy named [guest configuration](../governance/machine-configuration/overview.md). The guest configuration service combines features of DSC Extension, Azure Automation State Configuration, and the most commonly requested features from customer feedback. Guest configuration also includes hybrid machine support through [Arc-enabled servers](../azure-arc/servers/overview.md).
## Enable Azure VMs
automation Automation Dsc Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-dsc-overview.md
Azure Automation State Configuration is an Azure configuration management servic
compile PowerShell Desired State Configuration (DSC) [configurations](/powershell/dsc/configurations/configurations) for nodes in any cloud or on-premises datacenter. The service also imports [DSC Resources](/powershell/dsc/resources/resources), and assigns configurations to target nodes, all in the cloud. You can access Azure Automation State Configuration in the Azure portal by selecting **State configuration (DSC)** under **Configuration Management**. > [!NOTE]
-> Before you enable Automation State Configuration, we would like you to know that a newer version of DSC is now available in preview, managed by a feature of Azure Policy named [guest configuration](../governance/policy/concepts/guest-configuration.md). The guest configuration service combines features of DSC Extension, Azure Automation State Configuration, and the most commonly requested features from customer feedback. Guest configuration also includes hybrid machine support through [Arc-enabled servers](../azure-arc/servers/overview.md).
+> Before you enable Automation State Configuration, we would like you to know that a newer version of DSC is now available in preview, managed by a feature of Azure Policy named [guest configuration](../governance/machine-configuration/overview.md). The guest configuration service combines features of DSC Extension, Azure Automation State Configuration, and the most commonly requested features from customer feedback. Guest configuration also includes hybrid machine support through [Arc-enabled servers](../azure-arc/servers/overview.md).
You can use Azure Automation State Configuration to manage a variety of machines:
automation Automation Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-services.md
Title: Automation services in Azure - overview
description: This article tells what are the Automation services in Azure and how to compare and use it to automate the lifecycle of infrastructure and applications. keywords: azure automation services, automanage, Bicep, Blueprints, Guest Config, Policy, Functions Previously updated : 03/04/2022 Last updated : 08/03/2022
Automation is required in three broad categories of operations:
## Azure services for Automation
-Multiple Azure services can fulfill the above requirements. Each service has its benefits and limitations, and customers can use multiple services to meet their automation requirements.
+Multiple Azure services can fulfill the above requirements. Each service has its benefits and limitations, and customers can use multiple services to meet their automation requirements.
**Deployment and management of resources** - Azure Resource Manager (ARM) templates with Bicep
Multiple Azure services can fulfill the above requirements. Each service has its
- Azure Automation - Azure Automanage (for machine configuration and management.)
-**Responding to external events**
+**Responding to external events**
- Azure Functions - Azure Automation - Azure Policy Guest Config (to take an action when there's a change in the compliance state of a resource.)
-**Complex orchestration and integration with 1st or 3rd party products**
+**Complex orchestration and integration with 1st or 3rd party products**
- Azure Logic Apps - Azure Functions or Azure Automation. (Azure Logic Apps has over 400 connectors to other services, including Azure Automation and Azure Functions, which can be used to meet complex automation scenarios.)
The following table describes the scenarios and users for ARM template and Bicep
| Create, manage, and update infrastructure resources to ensure that the deployed infrastructure meets the organization compliance standards. </br> </br> Audit and track Azure deployments.| Auditors and central information technology groups responsible to ensure that the deployed Azure infrastructure meets the organization compliance standards.
-
+ ### [Azure Automation](./overview.md) Azure Automation orchestrates repetitive processes using graphical, PowerShell, and Python runbooks in the cloud or hybrid environments. It provides persistent shared assets, including variables, connections, and objects, that allow orchestration of complex jobs. [Learn more](./automation-runbook-gallery.md).
There are more than 3,000 modules in the PowerShell Gallery, and the PowerShell
**Scenarios** | **Users** |
- | Allows to write an [Automation PowerShell runbook](./learn/powershell-runbook-managed-identity.md) that deploys an Azure resource by using an [Azure Resource Manager template](../azure-resource-manager/templates/quickstart-create-templates-use-the-portal.md).</br> </br> Schedule tasks, for example – Stop dev/test VMs or services at night and turn on during the day. </br> </br> Response to alerts such as system alerts, service alerts, high CPU/memory alerts, create ServiceNow tickets, and so on. </br> </br> Hybrid automation where you can manage to automate on-premises servers such as SQL Server, Active Directory and so on. </br> </br> Azure resource life-cycle management and governance include resource provisioning, de-provisioning, adding correct tags, locks, NSGs and so on. | IT administrators, System administrators, IT operations administrators who are skilled at using PowerShell or Python based scripting. </br> </br> Infrastructure administrators manage the on-premises infrastructure using scripts or executing long-running jobs such as month-end operations on servers running on-premises.
+ | Allows you to write an [Automation PowerShell runbook](./learn/powershell-runbook-managed-identity.md) that deploys an Azure resource by using an [Azure Resource Manager template](../azure-resource-manager/templates/quickstart-create-templates-use-the-portal.md).</br> </br> Schedule tasks, for example – stop dev/test VMs or services at night and turn them on during the day. </br> </br> Respond to alerts such as system alerts, service alerts, high CPU/memory alerts, create ServiceNow tickets, and so on. </br> </br> Hybrid automation where you can manage and automate on-premises servers such as SQL Server, Active Directory and so on. </br> </br> Azure resource life-cycle management and governance include resource provisioning, de-provisioning, adding correct tags, locks, NSGs and so on. | IT administrators, System administrators, IT operations administrators who are skilled at using PowerShell or Python based scripting. </br> </br> Infrastructure administrators manage the on-premises infrastructure using scripts or executing long-running jobs such as month-end operations on servers running on-premises.
### Azure Automation based in-guest management
You can configure the desired state of your machines to discover and correct con
Replaces repetitive, day-to-day operational tasks with an exception-only management model, where a healthy, steady-state VM equals hands-free management. [Learn more](../automanage/automanage-virtual-machines.md).
- **Linux and Windows support**
+ **Linux and Windows support**
- You can intelligently onboard virtual machines to select best practices Azure services. - It allows you to configure each service per Azure best practices automatically. - It supports customization of best practice services through VM Best practices template for Dev\Test and Production workload. - You can monitor for drift and correct it when detected.
- - It provides a simple experience (point, select, set, and forget).
+ - It provides a simple experience (point, select, set, and forget).
**Scenarios** | **Users** |
- | Automatically configures guest operating system per Microsoft baseline configuration. </br> </br> Automatically detects for drift and corrects it across a VM's entire lifecycle. </br> </br> Aims at a hands-free management of machines. | The IT Administrators, Infra Administrators, IT Operations Administrators are responsible for managing server workload, day to day admin tasks such as backup, disaster recovery, security updates, responding to security threats, and so on across Azure and on-premise. </br> </br> Developers who do not wish to manage servers or spend the time on fewer priority tasks.
+ | Automatically configures the guest operating system per Microsoft baseline configuration. </br> </br> Automatically detects drift and corrects it across a VM's entire lifecycle. </br> </br> Aims at hands-free management of machines. | The IT Administrators, Infra Administrators, IT Operations Administrators are responsible for managing server workload, day to day admin tasks such as backup, disaster recovery, security updates, responding to security threats, and so on across Azure and on-premises. </br> </br> Developers who do not wish to manage servers or spend time on lower-priority tasks.
## Respond to events in Automation workflow ### Azure Policy based Guest Configuration
-Azure Policy based Guest configuration is the next iteration of Azure Automation State configuration. [Learn more](../governance/policy/concepts/guest-configuration-policy-effects.md).
+Azure Policy based Guest configuration is the next iteration of Azure Automation State configuration. [Learn more](../governance/machine-configuration/machine-configuration-policy-effects.md).
You can check on what is installed in: - The next iteration of [Azure Automation State Configuration](./automation-dsc-overview.md). - For known-bad apps, protocols certificates, administrator privileges, and health of agents.
- - For customer-authored content.
+ - For customer-authored content.
**Scenarios** | **Users** |
- | Obtain compliance data that may include: The configuration of the operating system – files, registry, and services, Application configuration or presence, Check environment settings. </br> </br> Audit or deploy settings to all machines (Set) in scope either reactively to existing machines or proactively to new machines as they are deployed. </br> </br> Respond to policy events to provide [remediation on demand or continuous remediation.](../governance/policy/concepts/guest-configuration-policy-effects.md#remediation-on-demand-applyandmonitor) | The Central IT, Infrastructure Administrators, Auditors (Cloud custodians) are working towards the regulatory requirements at scale and ensuring that servers' end state looks as desired. </br> </br> The application teams validate compliance before releasing change.
+ | Obtain compliance data that may include: The configuration of the operating system – files, registry, and services, Application configuration or presence, Check environment settings. </br> </br> Audit or deploy settings to all machines (Set) in scope either reactively to existing machines or proactively to new machines as they are deployed. </br> </br> Respond to policy events to provide [remediation on demand or continuous remediation.](../governance/machine-configuration/machine-configuration-policy-effects.md#remediation-on-demand-applyandmonitor) | The Central IT, Infrastructure Administrators, Auditors (Cloud custodians) are working towards the regulatory requirements at scale and ensuring that servers' end state looks as desired. </br> </br> The application teams validate compliance before releasing change.
### Azure Automation - Process Automation
-Orchestrates repetitive processes using graphical, PowerShell, and Python runbooks in the cloud or hybrid environment. [Learn more](./automation-runbook-types.md).
+Orchestrates repetitive processes using graphical, PowerShell, and Python runbooks in the cloud or hybrid environment. [Learn more](./automation-runbook-types.md).
- - It provides persistent shared assets, including variables, connections, objects, that allows orchestration of complex jobs.
- - You can invoke a runbook on the basis of [Azure Monitor alert](./automation-create-alert-triggered-runbook.md) or through a [webhook](./automation-webhooks.md).
+ - It provides persistent shared assets, including variables, connections, objects, that allows orchestration of complex jobs.
+ - You can invoke a runbook on the basis of [Azure Monitor alert](./automation-create-alert-triggered-runbook.md) or through a [webhook](./automation-webhooks.md).
**Scenarios** | **Users** |
Provides a serverless event-driven compute platform for automation that allows y
- You can choose the hosting plan according to your function app scaling requirements, functionality, and resources required. - You can orchestrate complex workflows through [durable functions](../azure-functions/durable/durable-functions-overview.md?tabs=csharp). - You should avoid large, and long-running functions that can cause unexpected timeout issues. [Learn more](../azure-functions/functions-best-practices.md?tabs=csharp#write-robust-functions).
- - When you write Powershell scripts within the Function Apps, you must tweak the scripts to define how the function behaves such as - how it's triggered, its input and output parameters. [Learn more](../azure-functions/functions-reference-powershell.md?tabs=portal).
+ - When you write PowerShell scripts within the Function Apps, you must tweak the scripts to define how the function behaves such as - how it's triggered, its input and output parameters. [Learn more](../azure-functions/functions-reference-powershell.md?tabs=portal).
**Scenarios** | **Users** |
- | Respond to events on resources: such as add tags to resource group basis cost center, when VM is deleted etc. </br> </br> Set scheduled tasks such as setting a pattern to stop and start a VM at a specific time, reading blob storage content at regular intervals etc. </br> </br> Process Azure alerts to send the team's event when the CPU activity spikes to 90%. </br> </br> Orchestrate with external systems such as Microsoft 365. </br> </br> Respond to database changes. | The Application developers who are skilled in coding languages such as C#, F#, PHP, Java, JavaScript, PowerShell, or Python. </br> </br> Cloud Architects who build serverless Micro-services based applications where a single or mutiple Azure Functions could be part of larger application workflow.
+ | Respond to events on resources: such as adding tags to a resource group based on cost center, when a VM is deleted, and so on. </br> </br> Set scheduled tasks such as setting a pattern to stop and start a VM at a specific time, reading blob storage content at regular intervals, and so on. </br> </br> Process Azure alerts to send the team an event when the CPU activity spikes to 90%. </br> </br> Orchestrate with external systems such as Microsoft 365. </br> </br> Respond to database changes. | The Application developers who are skilled in coding languages such as C#, F#, PHP, Java, JavaScript, PowerShell, or Python. </br> </br> Cloud Architects who build serverless applications where Azure Functions could be part of a larger application workflow.
## Orchestrate complex jobs in Azure Automation
Logic Apps is a platform for creating and running complex orchestration workflow
- Allows you to build smart integrations between 1st party and 3rd party apps, services and systems running across on-premises, hybrid and cloud native. - Allows you to use managed connectors from a 450+ and growing Azure connectors ecosystem to use in your workflows. - Provides a first-class support for enterprise integration and B2B scenarios.
- - Flexibility to visually create and edit workflows - Low Code\no code approach
+ - Flexibility to visually create and edit workflows - low-code/no-code approach
- Runs only in the cloud. - Provides a large collection of ready-made actions and triggers.
Orchestrates repetitive processes using graphical, PowerShell, and Python runboo
**Scenarios** | **Users** |
- | Azure resource life-cycle management and governance which includes Resource provisioning, de-provisioning, adding correct tags, locks, NSGs and so on through runbooks that are triggered from ITSM alerts. </br></br> Use hybrid worker as a bridge from cloud to on-premises enabling resource\user management on-premise. </br></br> Execute complex disaster recovery workflows through Automation runbooks. </br></br> Execute automation runbooks as part of Logic apps workflow through Azure Automation Connector. | IT administrators, System administrators, IT operations administrators who are skilled at using PowerShell or Python based scripting. </br> </br> Infrastructure Administrators managing on-premises infrastructure using scripts or executing long running jobs such as month-end operations on servers running on-premises.
+ | Azure resource life-cycle management and governance which includes Resource provisioning, de-provisioning, adding correct tags, locks, NSGs and so on through runbooks that are triggered from ITSM alerts. </br></br> Use hybrid worker as a bridge from cloud to on-premises enabling resource/user management on-premises. </br></br> Execute complex disaster recovery workflows through Automation runbooks. </br></br> Execute automation runbooks as part of Logic Apps workflows through the Azure Automation Connector. | IT administrators, System administrators, IT operations administrators who are skilled at using PowerShell or Python based scripting. </br> </br> Infrastructure Administrators managing on-premises infrastructure using scripts or executing long running jobs such as month-end operations on servers running on-premises.
### Azure functions
Provides a serverless event-driven compute platform for automation that allows y
- You can choose the hosting plan according to your function app scaling requirements, functionality, and resources required. - You can orchestrate complex workflows through [durable functions](../azure-functions/durable/durable-functions-overview.md?tabs=csharp). - You should avoid large, and long-running functions that can cause unexpected timeout issues. [Learn more](../azure-functions/functions-best-practices.md?tabs=csharp#write-robust-functions).
- - When you write Powershell scripts within the Function Apps, you must tweak the scripts to define how the function behaves such as - how it's triggered, its input and output parameters. [Learn more](../azure-functions/functions-reference-powershell.md?tabs=portal).
+ - When you write PowerShell scripts within Function Apps, you must tweak the scripts to define how the function behaves such as - how it's triggered, its input and output parameters. [Learn more](../azure-functions/functions-reference-powershell.md?tabs=portal).
**Scenarios** | **Users** |
- | Respond to events on resources : such as add tags to resource group basis cost center, when VM is deleted etc. </br> </br> Set scheduled tasks such as setting a pattern to stop and start a VM at a specific time, reading blob storage content at regular intervals etc. </br> </br> Process Azure alerts where you can send team's event when the CPU activity spikes to 90%. </br> </br> Orchestrate with external systems such as Microsoft 365. </br> </br>Executes Azure Function as part of Logic apps workflow through Azure Function Connector. | Application Developers who are skilled in coding languages such as C#, F#, PHP, Java, JavaScript, PowerShell, or Python. </br> </br> Cloud Architects who build serverless Micro-services based applications where a single or mutiple Azure Functions could be part of larger application workflow.
-
+ | Respond to events on resources: such as adding tags to a resource group based on cost center, when a VM is deleted, and so on. </br> </br> Set scheduled tasks such as setting a pattern to stop and start a VM at a specific time, reading blob storage content at regular intervals, and so on. </br> </br> Process Azure alerts where you can send the team an event when the CPU activity spikes to 90%. </br> </br> Orchestrate with external systems such as Microsoft 365. </br> </br> Execute an Azure Function as part of a Logic Apps workflow through the Azure Function Connector. | Application Developers who are skilled in coding languages such as C#, F#, PHP, Java, JavaScript, PowerShell, or Python. </br> </br> Cloud Architects who build serverless applications where single or multiple Azure Functions could be part of a larger application workflow.
+ ## Next steps - To learn how to securely execute automation jobs, see [best practices for security in Azure Automation](./automation-security-guidelines.md).
automation Disable Local Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/disable-local-authentication.md
Disabling local authentication doesn't take effect immediately. Allow a few minu
>[!NOTE] > Currently, PowerShell support for the new API version (2021-06-22) or the flag – `DisableLocalAuth` is not available. However, you can use the REST API with this API version to update the flag. To allowlist and enroll your subscription for this feature in your respective regions, follow the steps in [how to create an Azure support request - Azure supportability | Microsoft Docs](../azure-portal/supportability/how-to-create-azure-support-request.md).
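As a hedged sketch of that REST call (the `disableLocalAuth` property name and request shape are assumptions based on the 2021-06-22 API version; verify against the Automation REST reference before relying on them):

```http
PATCH https://management.azure.com/subscriptions/<subscription-ID>/resourceGroups/<resource-group>/providers/Microsoft.Automation/automationAccounts/<account-name>?api-version=2021-06-22
Content-type: application/json

{
  "properties": {
    "disableLocalAuth": true
  }
}
```

Sending the same request with the value `false` should flip the flag back, complementing the PowerShell path described below.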
-
+ ## Re-enable local authentication
-To re-enable local authentication, execute the PowerShell cmdlet `Set-AzAutomationAccount` with the parameter `-DisableLocalAuth false`. Allow a few minutes for the service to accept the change to allow local authentication requests.
+To re-enable local authentication, execute the PowerShell cmdlet `Set-AzAutomationAccount` with the parameter `-DisableLocalAuth false`. Allow a few minutes for the service to accept the change to allow local authentication requests.
## Compatibility
The following table describes the behaviors or features that are prevented from
|Scenario | Alternative | ||| |Starting a runbook using a webhook. | Start a runbook job using Azure Resource Manager template, which uses Azure AD authentication. |
-|Using Automation Desired State Configuration.| Use [Azure Policy Guest configuration](../governance/policy/concepts/guest-configuration.md). |
+|Using Automation Desired State Configuration.| Use [Azure Policy Guest configuration](../governance/machine-configuration/overview.md). |
|Using agent-based Hybrid Runbook Workers.| Use [extension-based Hybrid Runbook Workers (Preview)](./extension-based-hybrid-runbook-worker-install.md).| ## Limitations
automation Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Automation description: Lists Azure Policy Regulatory Compliance controls available for Azure Automation. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 08/01/2022 Last updated : 08/04/2022
automation Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/whats-new.md
Title: What's new in Azure Automation description: Significant updates to Azure Automation updated each month. -+ Last updated 11/02/2021
Azure Automation can send diagnostic audit logs in addition to runbook job statu
To strengthen the overall Azure Automation security posture, the built-in RBAC Reader role would not have access to Automation account keys through the API call - `GET /automationAccounts/agentRegistrationInformation`. Read [here](./automation-role-based-access-control.md#reader) for more information.
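For reference, a hedged sketch of the call in question (the resource path follows the `GET /automationAccounts/agentRegistrationInformation` operation named above; the full URI shape and API version are assumptions):

```http
GET https://management.azure.com/subscriptions/<subscription-ID>/resourceGroups/<resource-group>/providers/Microsoft.Automation/automationAccounts/<account-name>/agentRegistrationInformation?api-version=2021-06-22
Authorization: Bearer <access-token>
```

A caller holding only the Reader role should now receive an authorization error from this endpoint instead of the account keys.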
-### Restore deleted Automation Accounts
+### Restore deleted Automation Accounts
-**Type:** New change
+**Type:** New change
Users can now restore an Automation account deleted within 30 days. Read [here](./delete-account.md?tabs=azure-portal#restore-a-deleted-automation-account) for more information.
New scripts are added to the Azure Automation [GitHub repository](https://github
- ScaleDown-Azure-VM-On-Alert - ScaleUp-Azure-VM-On-Alert
-## November 2021
+## November 2021
-### General Availability of Managed Identity for Azure Automation
+### General Availability of Managed Identity for Azure Automation
**Type:** New feature Azure Automation now supports Managed Identities in Azure public, Azure Gov, and Azure China cloud. [System Assigned Managed Identities](./enable-managed-identity-for-automation.md) is supported for cloud as well as hybrid jobs, while [User Assigned Managed Identities](./automation-security-overview.md) is supported only for cloud jobs. Read the [announcement](https://azure.microsoft.com/updates/azure-automation-managed-identities-ga/) for more information.
-### Preview support for PowerShell 7.1
+### Preview support for PowerShell 7.1
**Type:** New feature
Azure Automation now supports Az modules by default. New Automation accounts cre
**Type:** Plan for change
-Customers should evaluate and plan for migration from Azure Automation State Configuration to Azure Policy guest configuration. For more information, see [Azure Policy guest configuration](../governance/policy/concepts/guest-configuration.md).
+Customers should evaluate and plan for migration from Azure Automation State Configuration to Azure Policy guest configuration. For more information, see [Azure Policy guest configuration](../governance/machine-configuration/overview.md).
## July 2021
availability-zones Migrate Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/availability-zones/migrate-vm.md
To migrate to availability zone support, your VM SKUs must be available across t
## Downtime requirements
-Because zonal VMs are created across the availability zones, all migration options mentioned in this article require downtime during deployment because zonal VMs are created across the availability zones.
+Because zonal VMs are created across the availability zones, all migration options mentioned in this article require downtime during deployment.
## Migration Option 1: Redeployment
Learn more about:
> [Regions and Availability Zones in Azure](az-overview.md) > [!div class="nextstepaction"]
-> [Azure Services that support Availability Zones](az-region.md)
+> [Azure Services that support Availability Zones](az-region.md)
azure-app-configuration Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure App Configuration description: Lists Azure Policy Regulatory Compliance controls available for Azure App Configuration. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 08/01/2022 Last updated : 08/04/2022
azure-arc Agent Upgrade https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/agent-upgrade.md
keywords: "Kubernetes, Arc, Azure, K8s, containers, agent, update, auto upgrade"
# Upgrade Azure Arc-enabled Kubernetes agents
-Azure Arc-enabled Kubernetes provides both automatic and manual upgrade capabilities for its agents. If you disable automatic upgrade and instead rely on manual upgrade, a [version support policy](#version-support-policy) applies for Arc agents and the underlying Kubernetes clusters.
+Azure Arc-enabled Kubernetes provides both automatic and manual upgrade capabilities for its [agents](conceptual-agent-overview.md). If you disable automatic upgrade and instead rely on manual upgrade, a [version support policy](#version-support-policy) applies for Arc agents and the underlying Kubernetes clusters.
## Toggle automatic upgrade on or off when connecting cluster to Azure Arc
azure-arc Conceptual Agent Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/conceptual-agent-overview.md
Title: "Azure Arc-enabled Kubernetes agent architecture" Previously updated : 03/03/2021 Last updated : 08/03/2021
-description: "This article provides an architectural overview of Azure Arc-enabled Kubernetes agents"
+description: "This article provides an architectural overview of Azure Arc-enabled Kubernetes agents."
keywords: "Kubernetes, Arc, Azure, containers" # Azure Arc-enabled Kubernetes agent overview
-[Kubernetes](https://kubernetes.io/) can deploy containerized workloads consistently on hybrid and multi-cloud environments. Azure Arc-enabled Kubernetes provides a centralized, consistent control plane to manage policy, governance, and security across Kubernetes clusters on these heterogenous environments. This article provides an overview of the Azure Arc agents deployed on the Kubernetes clusters as part of connecting the cluster to Azure Arc.
+[Kubernetes](https://kubernetes.io/) can deploy containerized workloads consistently on hybrid and multi-cloud environments. [Azure Arc-enabled Kubernetes](overview.md) provides a centralized, consistent control plane to manage policy, governance, and security across Kubernetes clusters on these heterogenous environments.
+
+This article provides an overview of the Azure Arc agents deployed on the Kubernetes clusters when [connecting them to Azure Arc](quickstart-connect-cluster.md).
## Deploy agents to your cluster
-Most on-prem datacenters enforce strict network rules that prevent inbound communication on the network boundary firewall. Azure Arc-enabled Kubernetes works with these restrictions by not requiring inbound ports on the firewall. Azure Arc agents only require outbound communication to a prerequisite list of network endpoints.
+Most on-premises datacenters enforce strict network rules that prevent inbound communication on the network boundary firewall. Azure Arc-enabled Kubernetes works with these restrictions by not requiring inbound ports on the firewall. Azure Arc agents only require outbound communication to a [set list of network endpoints](quickstart-connect-cluster.md#meet-network-requirements).
-[ ![Architectural overview](./media/architectural-overview.png) ](./media/architectural-overview.png#lightbox)
+[![Diagram showing an architectural overview of the Azure Arc-enabled Kubernetes agents.](./media/architectural-overview.png)](./media/architectural-overview.png#lightbox)
-The following steps are involved in connecting a Kubernetes cluster to Azure Arc:
+The following high-level steps are involved in [connecting a Kubernetes cluster to Azure Arc](quickstart-connect-cluster.md):
1. Create a Kubernetes cluster on your choice of infrastructure (VMware vSphere, Amazon Web Services, Google Cloud Platform, etc.). > [!NOTE]
- > Since Azure Arc-enabled Kubernetes currently only supports attaching existing Kubernetes clusters to Azure Arc, customers are required to create and manage the lifecycle of the Kubernetes cluster themselves.
+ > Azure Arc-enabled Kubernetes currently only supports attaching existing Kubernetes clusters to Azure Arc. You must create the cluster before you connect it to Azure Arc.
-1. Start the Azure Arc registration for your cluster using Azure CLI.
- * Azure CLI uses Helm to deploy the agent Helm chart on the cluster.
- * The cluster nodes initiate an outbound communication to the [Microsoft Container Registry](https://github.com/microsoft/containerregistry) and pull the images needed to create the following agents in the `azure-arc` namespace:
+1. Start the Azure Arc registration for your cluster.
+ * The agent Helm chart is deployed on the cluster.
+ * The cluster nodes initiate an outbound communication to the [Microsoft Container Registry](https://github.com/microsoft/containerregistry), pulling the images needed to create the following agents in the `azure-arc` namespace:
| Agent | Description | | -- | -- | | `deployment.apps/clusteridentityoperator` | Azure Arc-enabled Kubernetes currently supports only [system assigned identities](../../active-directory/managed-identities-azure-resources/overview.md). `clusteridentityoperator` initiates the first outbound communication. This first communication fetches the Managed Service Identity (MSI) certificate used by other agents for communication with Azure. | | `deployment.apps/config-agent` | Watches the connected cluster for source control configuration resources applied on the cluster. Updates the compliance state. |
- | `deployment.apps/controller-manager` | An operator of operators that orchestrates interactions between Azure Arc components. |
+ | `deployment.apps/controller-manager` | An operator of operators that orchestrates interactions between Azure Arc components. |
| `deployment.apps/metrics-agent` | Collects metrics of other Arc agents to verify optimal performance. | | `deployment.apps/cluster-metadata-operator` | Gathers cluster metadata, including cluster version, node count, and Azure Arc agent version. | | `deployment.apps/resource-sync-agent` | Syncs the above-mentioned cluster metadata to Azure. | | `deployment.apps/flux-logs-agent` | Collects logs from the flux operators deployed as a part of source control configuration. | | `deployment.apps/extension-manager` | Installs and manages the lifecycle of extension Helm charts. |
- | `deployment.apps/kube-aad-proxy` | Used for authentication of requests sent to the cluster using Cluster Connect |
- | `deployment.apps/clusterconnect-agent` | Reverse proxy agent that enables Cluster Connect feature to provide access to `apiserver` of cluster. Optional component deployed only if `cluster-connect` feature is enabled on the cluster |
- | `deployment.apps/guard` | Authentication and authorization webhook server used for Azure Active Directory (Azure AD) RBAC. Optional component deployed only if `azure-rbac` feature is enabled on the cluster |
+ | `deployment.apps/kube-aad-proxy` | Used for authentication of requests sent to the cluster using Cluster Connect. |
+ | `deployment.apps/clusterconnect-agent` | Reverse proxy agent that enables the Cluster Connect feature to provide access to `apiserver` of the cluster. Optional component deployed only if the [Cluster Connect](conceptual-cluster-connect.md) feature is enabled. |
+ | `deployment.apps/guard` | Authentication and authorization webhook server used for Azure Active Directory (Azure AD) RBAC. Optional component deployed only if [Azure RBAC](conceptual-azure-rbac.md) is enabled on the cluster. |
-1. Once all the Azure Arc-enabled Kubernetes agent pods are in `Running` state, verify that your cluster connected to Azure Arc. You should see:
+1. Once all the Azure Arc-enabled Kubernetes agent pods are in `Running` state, verify that your cluster is connected to Azure Arc. You should see:
* An Azure Arc-enabled Kubernetes resource in [Azure Resource Manager](../../azure-resource-manager/management/overview.md). Azure tracks this resource as a projection of the customer-managed Kubernetes cluster, not the actual Kubernetes cluster itself.
- * Cluster metadata (like Kubernetes version, agent version, and number of nodes) appears on the Azure Arc-enabled Kubernetes resource as metadata.
+ * Cluster metadata (such as Kubernetes version, agent version, and number of nodes) appearing on the Azure Arc-enabled Kubernetes resource as metadata.
## Next steps * Walk through our quickstart to [connect a Kubernetes cluster to Azure Arc](./quickstart-connect-cluster.md).
+* Learn about [upgrading Azure Arc-enabled Kubernetes agents](agent-upgrade.md).
* Learn more about creating connections between your cluster and a Git repository as a [configuration resource with Azure Arc-enabled Kubernetes](./conceptual-configurations.md).
azure-arc Quickstart Connect Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/quickstart-connect-cluster.md
Title: "Quickstart: Connect an existing Kubernetes cluster to Azure Arc" description: In this quickstart, you learn how to connect an Azure Arc-enabled Kubernetes cluster. Previously updated : 08/01/2022 Last updated : 08/03/2022 ms.devlang: azurecli
For a conceptual look at connecting clusters to Azure Arc, see [Azure Arc-enable
Install-Module -Name Az.ConnectedKubernetes ```
- > [!IMPORTANT]
- > While the **Az.ConnectedKubernetes** PowerShell module is in preview, you must install it separately using
- > the `Install-Module` cmdlet.
- * [Log in to Azure PowerShell](/powershell/azure/authenticate-azureps) using the identity (user or service principal) that you want to use for connecting your cluster to Azure Arc. * The identity used needs to at least have 'Read' and 'Write' permissions on the Azure Arc-enabled Kubernetes resource type (`Microsoft.Kubernetes/connectedClusters`). * The [Kubernetes Cluster - Azure Arc Onboarding built-in role](../../role-based-access-control/built-in-roles.md#kubernetes-clusterazure-arc-onboarding) is useful for at-scale onboarding as it has the granular permissions required to only connect clusters to Azure Arc. This role doesn't have the permissions to update, delete, or modify any other clusters or other Azure resources.
azure-arc Validation Program https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/validation-program.md
The following providers and their corresponding Kubernetes distributions have su
| Provider name | Distribution name | Version | | | -- | - |
-| RedHat | [OpenShift Container Platform](https://www.openshift.com/products/container-platform) | [4.7.18+](https://docs.openshift.com/container-platform/4.7/release_notes/ocp-4-7-release-notes.html), [4.9.17+](https://docs.openshift.com/container-platform/4.9/release_notes/ocp-4-9-release-notes.html), [4.10.0+](https://docs.openshift.com/container-platform/4.10/release_notes/ocp-4-10-release-notes.html) |
+| RedHat | [OpenShift Container Platform](https://www.openshift.com/products/container-platform) | [4.9.43](https://docs.openshift.com/container-platform/4.9/release_notes/ocp-4-9-release-notes.html), [4.10.23](https://docs.openshift.com/container-platform/4.10/release_notes/ocp-4-10-release-notes.html), 4.11.0-rc.6 |
| VMware | [Tanzu Kubernetes Grid](https://tanzu.vmware.com/kubernetes-grid) | TKGm 1.5.3; upstream K8s v1.22.8+vmware.1 <br>TKGm 1.4.0; upstream K8s v1.21.2+vmware.1 <br>TKGm 1.3.1; upstream K8s v1.20.5_vmware.2 <br>TKGm 1.2.1; upstream K8s v1.19.3+vmware.1 | | Canonical | [Charmed Kubernetes](https://ubuntu.com/kubernetes) | [1.24](https://ubuntu.com/kubernetes/docs/1.24/components) | | SUSE Rancher | [Rancher Kubernetes Engine](https://rancher.com/products/rke/) | RKE CLI version: [v1.2.4](https://github.com/rancher/rke/releases/tag/v1.2.4); Kubernetes versions: [1.19.6](https://github.com/kubernetes/kubernetes/releases/tag/v1.19.6)), [1.18.14](https://github.com/kubernetes/kubernetes/releases/tag/v1.18.14)), [1.17.16](https://github.com/kubernetes/kubernetes/releases/tag/v1.17.16)) | | Nutanix | [Karbon](https://www.nutanix.com/products/karbon) | Version 2.2.1 | | Platform9 | [Platform9 Managed Kubernetes (PMK)](https://platform9.com/managed-kubernetes/) | PMK Version [5.3.0](https://platform9.com/docs/kubernetes/release-notes#platform9-managed-kubernetes-version-53-release-notes); Kubernetes versions: v1.20.5, v1.19.6, v1.18.10 |
-| Cisco | [Intersight Kubernetes Service (IKS)](https://www.cisco.com/c/en/us/products/cloud-systems-management/cloud-operations/intersight-kubernetes-service.html) Distribution | Upstream K8s version: 1.19.5 |
+| Cisco | [Intersight Kubernetes Service (IKS)](https://www.cisco.com/c/en/us/products/cloud-systems-management/cloud-operations/intersight-kubernetes-service.html) Distribution | Upstream K8s version: 1.21.13, 1.19.5 |
| Kublr | [Kublr Managed K8s](https://kublr.com/managed-kubernetes/) Distribution | Upstream K8s Version: 1.22.10 <br> Upstream K8s Version: 1.21.3 | | Mirantis | [Mirantis Kubernetes Engine](https://www.mirantis.com/software/mirantis-kubernetes-engine/) | MKE Version 3.5.1 <br> MKE Version 3.4.7 | | Wind River | [Wind River Cloud Platform](https://www.windriver.com/studio/operator/cloud-platform) | Wind River Cloud Platform 22.06; Upstream K8s version: 1.23.1 <br>Wind River Cloud Platform 21.12; Upstream K8s version: 1.21.8 <br>Wind River Cloud Platform 21.05; Upstream K8s version: 1.18.1 |
azure-arc Agent Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/agent-overview.md
Title: Overview of the Azure Connected Machine agent description: This article provides a detailed overview of the Azure Arc-enabled servers agent available, which supports monitoring virtual machines hosted in hybrid environments. Last updated 07/05/2022-+
The Azure Connected Machine agent package contains several logical components, w
* The guest configuration agent provides functionality such as assessing whether the machine complies with required policies and enforcing compliance.
- Note the following behavior with Azure Policy [guest configuration](../../governance/policy/concepts/guest-configuration.md) for a disconnected machine:
+ Note the following behavior with Azure Policy [guest configuration](../../governance/machine-configuration/overview.md) for a disconnected machine:
* An Azure Policy assignment that targets disconnected machines is unaffected. * Guest assignment is stored locally for 14 days. Within the 14-day period, if the Connected Machine agent reconnects to the service, policy assignments are reapplied.
azure-arc Agent Release Notes Archive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/agent-release-notes-archive.md
The Azure Connected Machine agent receives improvements on an ongoing basis. Thi
### Fixed -- The guest configuration policy agent can now configure and remediate system settings. Existing policy assignments continue to be audit-only. Learn more about the Azure Policy [guest configuration remediation options](../../governance/policy/concepts/guest-configuration-policy-effects.md).
+- The guest configuration policy agent can now configure and remediate system settings. Existing policy assignments continue to be audit-only. Learn more about the Azure Policy [guest configuration remediation options](../../governance/machine-configuration/machine-configuration-policy-effects.md).
- The guest configuration policy agent now restarts every 48 hours instead of every 6 hours. ## Version 1.9 - July 2021
azure-arc Tutorial Enable Vm Insights https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/learn/tutorial-enable-vm-insights.md
Last updated 04/25/2022
# Tutorial: Monitor a hybrid machine with VM insights
-[Azure Monitor](../../../azure-monitor/overview.md) can collect data directly from your hybrid machines into a Log Analytics workspace for detailed analysis and correlation. Typically, this would require installing the [Log Analytics agent](../../../azure-monitor/agents/agents-overview.md#log-analytics-agent) on the machine using a script, manually, or an automated method following your configuration management standards. Now, Azure Arc-enabled servers can install the Log Analytics and Dependency agent [VM extension](../manage-vm-extensions.md) for Windows and Linux, enabling [VM insights](../../../azure-monitor/vm/vminsights-overview.md) to collect data from your non-Azure VMs.
+[Azure Monitor](../../../azure-monitor/overview.md) can collect data directly from your hybrid machines into a Log Analytics workspace for detailed analysis and correlation. Typically, this would require installing the [Log Analytics agent](../../../azure-monitor/agents/log-analytics-agent.md) on the machine using a script, manually, or an automated method following your configuration management standards. Now, Azure Arc-enabled servers can install the Log Analytics and Dependency agent [VM extension](../manage-vm-extensions.md) for Windows and Linux, enabling [VM insights](../../../azure-monitor/vm/vminsights-overview.md) to collect data from your non-Azure VMs.
<!-- This tutorial shows you how to configure and collect data from your Linux or Windows machines by enabling VM insights following a simplified set of steps, which streamlines the experience and takes a shorter amount of time. -->
azure-arc Manage Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/manage-agent.md
Title: Managing the Azure Arc-enabled servers agent description: This article describes the different management tasks that you will typically perform during the lifecycle of the Azure Connected Machine agent. Previously updated : 06/29/2022 Last updated : 08/03/2022
For Azure Arc-enabled servers, before you rename the machine, it's necessary to
3. Use the **azcmagent** tool with the [Disconnect](manage-agent.md#disconnect) parameter to disconnect the machine from Azure Arc and delete the machine resource from Azure. You can run this manually while logged on interactively, with a Microsoft identity [access token](../../active-directory/develop/access-tokens.md), or with the service principal you used for onboarding (or with a [new service principal that you create](onboard-service-principal.md#create-a-service-principal-for-onboarding-at-scale).
- Disconnecting the machine from Azure Arc-enabled servers doesn't remove the Connected Machine agent, and you do not need to remove the agent as part of this process.
+ Disconnecting the machine from Azure Arc-enabled servers doesn't remove the Connected Machine agent, and you do not need to remove the agent as part of this process.
4. Re-register the Connected Machine agent with Azure Arc-enabled servers. Run the `azcmagent` tool with the [Connect](manage-agent.md#connect) parameter to complete this step. The agent will default to using the computer's current hostname, but you can choose your own resource name by passing the `--resource-name` parameter to the connect command.
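   As a sketch of steps 3 and 4 combined, assuming the service principal used for onboarding is still valid (all IDs and names below are placeholders):

   ```bash
   # Disconnect the machine and delete its Azure resource, then re-register it
   # under a new resource name. All IDs and names are placeholders.
   azcmagent disconnect \
     --service-principal-id "<appId>" \
     --service-principal-secret "<secret>"

   azcmagent connect \
     --service-principal-id "<appId>" \
     --service-principal-secret "<secret>" \
     --tenant-id "<tenantId>" \
     --subscription-id "<subscriptionId>" \
     --resource-group "myResourceGroup" \
     --location "eastus" \
     --resource-name "myNewMachineName"
   ```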
The proxy bypass feature does not require you to enter specific URLs to bypass.
| Proxy bypass value | Affected endpoints |
|---|---|
-| AAD | `login.windows.net`, `login.microsoftonline.com`, `pas.windows.net` |
+| Azure AD | `login.windows.net`, `login.microsoftonline.com`, `pas.windows.net` |
| ARM | `management.azure.com` |
| Arc | `his.arc.azure.com`, `guestconfiguration.azure.com`, `guestnotificationservice.azure.com`, `servicebus.windows.net` |
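For example, the following sketch routes agent traffic through a proxy while letting Azure AD and Azure Resource Manager traffic bypass it. It assumes an agent version that supports these configuration settings; check `azcmagent config info` on your machine for the settings it accepts.

```bash
# Route agent traffic through the proxy, but let Azure AD and Azure Resource
# Manager endpoints bypass it (values mirror the table above).
azcmagent config set proxy.url "http://<proxyURL>:<proxyport>"
azcmagent config set proxy.bypass "AAD,ARM"
```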
If you're already using environment variables to configure the proxy server for
* Review the [Planning and deployment guide](plan-at-scale-deployment.md) to plan for deploying Azure Arc-enabled servers at any scale and implement centralized management and monitoring.
-* Learn how to manage your machine using [Azure Policy](../../governance/policy/overview.md), for such things as VM [guest configuration](../../governance/policy/concepts/guest-configuration.md), verifying the machine is reporting to the expected Log Analytics workspace, enable monitoring with [VM insights](../../azure-monitor/vm/vminsights-enable-policy.md), and much more.
+* Learn how to manage your machine using [Azure Policy](../../governance/policy/overview.md), for such things as VM [guest configuration](../../governance/machine-configuration/overview.md), verifying the machine is reporting to the expected Log Analytics workspace, enable monitoring with [VM insights](../../azure-monitor/vm/vminsights-enable-policy.md), and much more.
azure-arc Manage Howto Migrate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/manage-howto-migrate.md
To migrate an Azure Arc-enabled server from one Azure region to another, you hav
2. Use the **azcmagent** tool with the [Disconnect](manage-agent.md#disconnect) parameter to disconnect the machine from Azure Arc and delete the machine resource from Azure. You can run this manually while logged on interactively, with a Microsoft identity platform [access token](../../active-directory/develop/access-tokens.md), or with the service principal you used for onboarding (or with a [new service principal that you create](onboard-service-principal.md#create-a-service-principal-for-onboarding-at-scale)).
- Disconnecting the machine from Azure Arc-enabled servers does not remove the Connected Machine agent, and you don't need to remove the agent as part of this process.
+ Disconnecting the machine from Azure Arc-enabled servers does not remove the Connected Machine agent, and you don't need to remove the agent as part of this process.
3. Run the `azcmagent` tool with the [Connect](manage-agent.md#connect) parameter to re-register the Connected Machine agent with Azure Arc-enabled servers in the other region.
-4. Redeploy the VM extensions that were originally deployed to the machine from Azure Arc-enabled servers.
-
+4. Redeploy the VM extensions that were originally deployed to the machine from Azure Arc-enabled servers.
+ If you deployed the Azure Monitor for VMs (insights) agent or the Log Analytics agent using an Azure Policy definition, the agents are redeployed after the next [evaluation cycle](../../governance/policy/how-to/get-compliance-data.md#evaluation-triggers). ## Next steps * Troubleshooting information can be found in the [Troubleshoot Connected Machine agent guide](troubleshoot-agent-onboard.md).
-* Learn how to manage your machine using [Azure Policy](../../governance/policy/overview.md), for such things as VM [guest configuration](../../governance/policy/concepts/guest-configuration.md), verifying the machine is reporting to the expected Log Analytics workspace, enable monitoring with [VM insights](../../azure-monitor/vm/vminsights-enable-policy.md) policy, and much more.
+* Learn how to manage your machine using [Azure Policy](../../governance/policy/overview.md), for such things as VM [guest configuration](../../governance/machine-configuration/overview.md), verifying the machine is reporting to the expected Log Analytics workspace, enable monitoring with [VM insights](../../azure-monitor/vm/vminsights-enable-policy.md) policy, and much more.
azure-arc Manage Vm Extensions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/manage-vm-extensions.md
Title: VM extension management with Azure Arc-enabled servers description: Azure Arc-enabled servers can manage deployment of virtual machine extensions that provide post-deployment configuration and automation tasks with non-Azure VMs. Previously updated : 07/01/2022 Last updated : 07/26/2022
In this release, we support the following VM extensions on Windows and Linux mac
To learn about the Azure Connected Machine agent package and details about the Extension agent component, see [Agent overview](agent-overview.md#agent-component-details). > [!NOTE]
-> The Desired State Configuration VM extension is no longer available for Azure Arc-enabled servers. Alternatively, we recommend [migrating to guest configuration](../../governance/policy/how-to/guest-configuration-azure-automation-migration.md) or using the Custom Script Extension to manage the post-deployment configuration of your server.
+> The Desired State Configuration VM extension is no longer available for Azure Arc-enabled servers. Alternatively, we recommend [migrating to machine configuration](../../governance/machine-configuration/machine-configuration-azure-automation-migration.md) or using the Custom Script Extension to manage the post-deployment configuration of your server.
Arc-enabled servers support moving machines with one or more VM extensions installed between resource groups or another Azure subscription without experiencing any impact to their configuration. The source and destination subscriptions must exist within the same [Azure Active Directory tenant](../../active-directory/develop/quickstart-create-new-tenant.md). This support is enabled starting with the Connected Machine agent version **1.8.21197.005**. For more information about moving resources and considerations before proceeding, see [Move resources to a new resource group or subscription](../../azure-resource-manager/management/move-resource-group-and-subscription.md).
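As a sketch, a move between resource groups can be done with the generic Azure CLI resource commands; the resource group and machine names below are placeholders.

```bash
# Look up the Arc-enabled server's full resource ID, then move it to another
# resource group (add --destination-subscription-id for cross-subscription moves).
machineId=$(az resource show \
  --resource-group "oldResourceGroup" \
  --name "myMachine" \
  --resource-type "Microsoft.HybridCompute/machines" \
  --query id --output tsv)

az resource move --destination-group "newResourceGroup" --ids "$machineId"
```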
Be sure to review the documentation for each VM extension referenced in the prev
### Log Analytics VM extension
-The Log Analytics agent VM extension for Linux requires Python 2.x is installed on the target machine.
+The Log Analytics agent VM extension for Linux requires that Python 2.x be installed on the target machine.
Before you install the extension we suggest you review the [deployment options for the Log Analytics agent](concept-log-analytics-extension-deployment.md) to understand the different methods available and which meets your requirements.
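A quick way to confirm the Python prerequisite on the target machine (interpreter names vary by distribution):

```bash
# Verify a Python 2 interpreter is available before deploying the extension.
python2 --version || echo "Python 2.x not found; install it before deploying the extension"
```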
azure-arc Onboard Ansible Playbooks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/onboard-ansible-playbooks.md
Title: Connect machines at scale using Ansible Playbooks
-description: In this article, you learn how to connect machines to Azure using Azure Arc-enabled servers using Ansible playbooks.
+description: In this article, you learn how to connect machines to Azure using Azure Arc-enabled servers using Ansible playbooks.
Last updated 05/09/2022
You can onboard Ansible-managed nodes to Azure Arc-enabled servers at scale using Ansible playbooks. To do so, you'll need to download, modify, and then run the appropriate playbook.
-Before you get started, be sure to review the [prerequisites](prerequisites.md) and verify that your subscription and resources meet the requirements. For information about supported regions and other related considerations, see [supported Azure regions](overview.md#supported-regions). Also review our [at-scale planning guide](plan-at-scale-deployment.md) to understand the design and deployment criteria, as well as our management and monitoring recommendations.
+Before you get started, be sure to review the [prerequisites](prerequisites.md) and verify that your subscription and resources meet the requirements. For information about supported regions and other related considerations, see [supported Azure regions](overview.md#supported-regions). Also review our [at-scale planning guide](plan-at-scale-deployment.md) to understand the design and deployment criteria, as well as our management and monitoring recommendations.
If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
If you are onboarding machines to Azure Arc-enabled servers, copy the following
url: https://aka.ms/azcmagent dest: ~/install_linux_azcmagent.sh mode: '700'
- when: (ansible_system == 'Linux') and (azcmagent_lnx_downloaded.stat.exists == false)
+ when: (ansible_system == 'Linux') and (not azcmagent_lnx_downloaded.stat.exists)
- name: Install the Connected Machine Agent on Linux servers become: yes shell: bash ~/install_linux_azcmagent.sh
- when: (ansible_system == 'Linux') and (not azcmagent_lnx_downloaded.stat.exists)
+ when: (ansible_system == 'Linux') and (not azcmagent_lnx_downloaded.stat.exists)
- name: Check if the Connected Machine Agent has already been downloaded on Windows servers win_stat:
If you are onboarding machines to Azure Arc-enabled servers, copy the following
win_get_url: url: https://aka.ms/AzureConnectedMachineAgent dest: C:\AzureConnectedMachineAgent.msi
- when: (ansible_os_family == 'Windows') and (not azcmagent_win_downloaded.stat.exists)
+ when: (ansible_os_family == 'Windows') and (not azcmagent_win_downloaded.stat.exists)
- name: Install the Connected Machine Agent on Windows servers win_package:
If you are onboarding machines to Azure Arc-enabled servers, copy the following
when: (ansible_os_family == 'Windows') and (not azcmagent_win_downloaded.stat.exists) - name: Check if the Connected Machine Agent has already been connected
- become: true
+ become: true
command: cmd: azcmagent check register: azcmagent_lnx_connected
If you are onboarding machines to Azure Arc-enabled servers, copy the following
- name: Connect the Connected Machine Agent on Windows servers to Azure win_shell: '& $env:ProgramFiles\AzureConnectedMachineAgent\azcmagent.exe connect --service-principal-id "{{ azure.service_principal_id }}" --service-principal-secret "{{ azure.service_principal_secret }}" --resource-group "{{ azure.resource_group }}" --tenant-id "{{ azure.tenant_id }}" --location "{{ azure.location }}" --subscription-id "{{ azure.subscription_id }}"'
- when: (ansible_os_family == 'Windows') and (azcmagent_win_connected.rc is defined and azcmagent_win_connected.rc != 0)
+ when: (ansible_os_family == 'Windows') and (azcmagent_win_connected.rc is defined and azcmagent_win_connected.rc != 0)
``` ## Modify the Ansible playbook
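One way to supply the `azure.*` values referenced by the tasks above is to pass them as extra variables at run time. The following is a sketch; the playbook and inventory file names and all values are placeholders.

```bash
# Run the onboarding playbook, passing the azure.* values the tasks reference.
# Playbook/inventory names and all values are placeholders.
ansible-playbook arc-onboard.yml -i hosts --extra-vars '{
  "azure": {
    "service_principal_id": "<appId>",
    "service_principal_secret": "<secret>",
    "resource_group": "myResourceGroup",
    "tenant_id": "<tenantId>",
    "location": "eastus",
    "subscription_id": "<subscriptionId>"
  }
}'
```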
After the playbook has run, the **PLAY RECAP** will indicate if all tasks were c
## Verify the connection with Azure Arc
-After you have successfully installed the agent and configured it to connect to Azure Arc-enabled servers, go to the Azure portal to verify that the servers in your target hosts have successfully connected. View your machines in the [Azure portal](https://aka.ms/hybridmachineportal).
+After you have successfully installed the agent and configured it to connect to Azure Arc-enabled servers, go to the Azure portal to verify that the servers in your target hosts have successfully connected. View your machines in the [Azure portal](https://aka.ms/hybridmachineportal).
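You can also confirm the connection locally on an individual host; `azcmagent show` reports the agent status and the Azure resource the machine reports to.

```bash
# Prints agent status, including connection state and the Azure resource name,
# subscription, and resource group the machine is registered under.
azcmagent show
```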
## Next steps - Review the [Planning and deployment guide](plan-at-scale-deployment.md) to plan for deploying Azure Arc-enabled servers at any scale and implement centralized management and monitoring. - Review connection troubleshooting information in the [Troubleshoot Connected Machine agent guide](troubleshoot-agent-onboard.md).-- Learn how to manage your machine using [Azure Policy](../../governance/policy/overview.md) for such things as VM [guest configuration](../../governance/policy/concepts/guest-configuration.md), verifying that the machine is reporting to the expected Log Analytics workspace, enabling monitoring with [VM insights](../../azure-monitor/vm/vminsights-enable-policy.md), and much more.
+- Learn how to manage your machine using [Azure Policy](../../governance/policy/overview.md) for such things as VM [guest configuration](../../governance/machine-configuration/overview.md), verifying that the machine is reporting to the expected Log Analytics workspace, enabling monitoring with [VM insights](../../azure-monitor/vm/vminsights-enable-policy.md), and much more.
azure-arc Onboard Configuration Manager Custom Task https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/onboard-configuration-manager-custom-task.md
Title: Connect machines at scale with a Configuration Manager custom task sequence
-description: You can use a custom task sequence that can deploy the Connected Machine Agent to onboard a collection of devices to Azure Arc-enabled servers.
+ Title: Connect machines at scale with a Configuration Manager custom task sequence
+description: You can use a custom task sequence that can deploy the Connected Machine Agent to onboard a collection of devices to Azure Arc-enabled servers.
Last updated 01/20/2022-+ # Connect machines at scale with a Configuration Manager custom task sequence
-Microsoft Endpoint Configuration Manager facilitates comprehensive management of servers supporting the secure and scalable deployment of applications, software updates, and operating systems. Configuration Manager offers the custom task sequence as a flexible paradigm for application deployment.
+Microsoft Endpoint Configuration Manager facilitates comprehensive management of servers supporting the secure and scalable deployment of applications, software updates, and operating systems. Configuration Manager offers the custom task sequence as a flexible paradigm for application deployment.
You can use a custom task sequence that deploys the Connected Machine Agent to onboard a collection of devices to Azure Arc-enabled servers.
-Before you get started, be sure to review the [prerequisites](prerequisites.md) and verify that your subscription and resources meet the requirements. For information about supported regions and other related considerations, see [supported Azure regions](overview.md#supported-regions). Also review our [at-scale planning guide](plan-at-scale-deployment.md) to understand the design and deployment criteria, as well as our management and monitoring recommendations.
+Before you get started, be sure to review the [prerequisites](prerequisites.md) and verify that your subscription and resources meet the requirements. For information about supported regions and other related considerations, see [supported Azure regions](overview.md#supported-regions). Also review our [at-scale planning guide](plan-at-scale-deployment.md) to understand the design and deployment criteria, as well as our management and monitoring recommendations.
If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
To verify that the machines have been successfully connected to Azure Arc, verif
- Review the [Planning and deployment guide](plan-at-scale-deployment.md) to plan for deploying Azure Arc-enabled servers at any scale and implement centralized management and monitoring. - Review connection troubleshooting information in the [Troubleshoot Connected Machine agent guide](troubleshoot-agent-onboard.md).-- Learn how to manage your machine using [Azure Policy](../../governance/policy/overview.md) for such things as VM [guest configuration](../../governance/policy/concepts/guest-configuration.md), verifying that the machine is reporting to the expected Log Analytics workspace, enabling monitoring with [VM insights](../../azure-monitor/vm/vminsights-enable-policy.md), and much more.
+- Learn how to manage your machine using [Azure Policy](../../governance/policy/overview.md) for such things as VM [guest configuration](../../governance/machine-configuration/overview.md), verifying that the machine is reporting to the expected Log Analytics workspace, enabling monitoring with [VM insights](../../azure-monitor/vm/vminsights-enable-policy.md), and much more.
azure-arc Onboard Configuration Manager Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/onboard-configuration-manager-powershell.md
Title: Connect machines at scale by running PowerShell scripts with Configuration Manager
-description: You can use Configuration Manager to run a PowerShell script that automates at-scale onboarding to Azure Arc-enabled servers.
+ Title: Connect machines at scale by running PowerShell scripts with Configuration Manager
+description: You can use Configuration Manager to run a PowerShell script that automates at-scale onboarding to Azure Arc-enabled servers.
Last updated 01/20/2022 # Connect machines at scale by running PowerShell scripts with Configuration Manager
-Microsoft Endpoint Configuration Manager facilitates comprehensive management of servers supporting the secure and scalable deployment of applications, software updates, and operating systems. Configuration Manager has an integrated ability to run PowerShell scripts.
+Microsoft Endpoint Configuration Manager facilitates comprehensive management of servers supporting the secure and scalable deployment of applications, software updates, and operating systems. Configuration Manager has an integrated ability to run PowerShell scripts.
You can use Configuration Manager to run a PowerShell script that automates at-scale onboarding to Azure Arc-enabled servers.
-Before you get started, be sure to review the [prerequisites](prerequisites.md) and verify that your subscription and resources meet the requirements. For information about supported regions and other related considerations, see [supported Azure regions](overview.md#supported-regions). Also review our [at-scale planning guide](plan-at-scale-deployment.md) to understand the design and deployment criteria, as well as our management and monitoring recommendations.
+Before you get started, be sure to review the [prerequisites](prerequisites.md) and verify that your subscription and resources meet the requirements. For information about supported regions and other related considerations, see [supported Azure regions](overview.md#supported-regions). Also review our [at-scale planning guide](plan-at-scale-deployment.md) to understand the design and deployment criteria, as well as our management and monitoring recommendations.
If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
The script status monitoring will indicate whether the script has successfully i
- Review the [Planning and deployment guide](plan-at-scale-deployment.md) to plan for deploying Azure Arc-enabled servers at any scale and implement centralized management and monitoring. - Review connection troubleshooting information in the [Troubleshoot Connected Machine agent guide](troubleshoot-agent-onboard.md).-- Learn how to manage your machine using [Azure Policy](../../governance/policy/overview.md) for such things as VM [guest configuration](../../governance/policy/concepts/guest-configuration.md), verifying that the machine is reporting to the expected Log Analytics workspace, enabling monitoring with [VM insights](../../azure-monitor/vm/vminsights-enable-policy.md), and much more.
+- Learn how to manage your machine using [Azure Policy](../../governance/policy/overview.md) for such things as VM [guest configuration](../../governance/machine-configuration/overview.md), verifying that the machine is reporting to the expected Log Analytics workspace, enabling monitoring with [VM insights](../../azure-monitor/vm/vminsights-enable-policy.md), and much more.
azure-arc Onboard Dsc https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/onboard-dsc.md
The [CompositeResource](https://www.powershellgallery.com/packages/compositereso
* Review the [Planning and deployment guide](plan-at-scale-deployment.md) to plan for deploying Azure Arc-enabled servers at any scale and implement centralized management and monitoring.
-* Learn how to manage your machine using [Azure Policy](../../governance/policy/overview.md), for such things as VM [guest configuration](../../governance/policy/concepts/guest-configuration.md), verifying the machine is reporting to the expected Log Analytics workspace, enable monitoring with [VM insights](../../azure-monitor/vm/vminsights-enable-policy.md), and much more.
+* Learn how to manage your machine using [Azure Policy](../../governance/policy/overview.md), for such things as VM [guest configuration](../../governance/machine-configuration/overview.md), verifying the machine is reporting to the expected Log Analytics workspace, enable monitoring with [VM insights](../../azure-monitor/vm/vminsights-enable-policy.md), and much more.
azure-arc Onboard Group Policy Service Principal Encryption https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/onboard-group-policy-service-principal-encryption.md
Title: Connect machines at scale using Group Policy with a PowerShell script
-description: In this article, you learn how to create a Group Policy Object to onboard Active Directory-joined Windows machines to Azure Arc-enabled servers.
+ Title: Connect machines at scale using Group Policy with a PowerShell script
+description: In this article, you learn how to create a Group Policy Object to onboard Active Directory-joined Windows machines to Azure Arc-enabled servers.
Last updated 07/20/2022
You can onboard Active Directory-joined Windows machines to Azure Arc-enabled
You'll first need to set up a local remote share with the Connected Machine agent and modify a script specifying the Arc-enabled server's landing zone within Azure. You'll then run a script that generates a Group Policy Object (GPO) to onboard a group of machines to Azure Arc-enabled servers. This Group Policy Object can be applied at the site, domain, or organizational unit level. Assignment can also use Access Control List (ACL) and other security filtering native to Group Policy. Machines in the scope of the Group Policy will be onboarded to Azure Arc-enabled servers. Scope your GPO to only include machines that you want to onboard to Azure Arc.
-Before you get started, be sure to review the [prerequisites](prerequisites.md) and verify that your subscription and resources meet the requirements. For information about supported regions and other related considerations, see [supported Azure regions](overview.md#supported-regions). Also review our [at-scale planning guide](plan-at-scale-deployment.md) to understand the design and deployment criteria, as well as our management and monitoring recommendations.
+Before you get started, be sure to review the [prerequisites](prerequisites.md) and verify that your subscription and resources meet the requirements. For information about supported regions and other related considerations, see [supported Azure regions](overview.md#supported-regions). Also review our [at-scale planning guide](plan-at-scale-deployment.md) to understand the design and deployment criteria, as well as our management and monitoring recommendations.
If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
The Group Policy Object, which is used to onboard Azure Arc-enabled servers, req
* [`DeployGPO.ps1`](https://raw.githubusercontent.com/Azure/ArcEnabledServersGroupPolicy/main/DeployGPO.ps1) * [`AzureArcDeployment.psm1`](https://raw.githubusercontent.com/Azure/ArcEnabledServersGroupPolicy/main/AzureArcDeployment.psm1)
- > [!NOTE]
+ > [!NOTE]
> The ArcGPO folder must be in the same directory as the downloaded script files above. The ArcGPO folder contains the files that define the Group Policy Object that's created when the DeployGPO script is run. When running the DeployGPO script, make sure you're in the same directory as the ps1 files and ArcGPO folder. 1. Modify the script `EnableAzureArc.ps1` by providing the parameter declarations for servicePrincipalClientId, tenantId, subscriptionId, ResourceGroup, Location, Tags, and ReportServerFQDN fields respectively.
The Group Policy Object, which is used to onboard Azure Arc-enabled servers, req
.\DeployGPO.ps1 -DomainFQDN <INSERT Domain FQDN> -ReportServerFQDN <INSERT Domain FQDN of Network Share> -ArcRemoteShare <INSERT Name of Network Share> -Spsecret <INSERT SPN SECRET> [-AgentProxy $AgentProxy] ```
-1. Download the latest version of the [Windows agent Windows Installer package](https://aka.ms/AzureConnectedMachineAgent) from the Microsoft Download Center and save it to the remote share.
+1. Download the latest version of the [Azure Connected Machine agent Windows Installer package](https://aka.ms/AzureConnectedMachineAgent) from the Microsoft Download Center and save it to the remote share.
## Apply the Group Policy Object
-On the Group Policy Management Console (GPMC), right-click on the desired Organizational Unit and link the GPO named **[MSFT] Azure Arc Servers (datetime)**. This is the Group Policy Object which has the Scheduled Task to onboard the machines. After 10 or 20 minutes, the Group Policy Object will be replicated to the respective domain controllers. Learn more about [creating and managing group policy in Azure AD Domain Services](../../active-directory-domain-services/manage-group-policy.md).
+On the Group Policy Management Console (GPMC), right-click on the desired Organizational Unit and link the GPO named **[MSFT] Azure Arc Servers (datetime)**. This is the Group Policy Object that contains the scheduled task to onboard the machines. After 10 to 20 minutes, the Group Policy Object will be replicated to the respective domain controllers. Learn more about [creating and managing group policy in Azure AD Domain Services](../../active-directory-domain-services/manage-group-policy.md).
-After you have successfully installed the agent and configured it to connect to Azure Arc-enabled servers, go to the Azure portal to verify that the servers in your Organizational Unit have successfully connected. View your machines in the [Azure portal](https://aka.ms/hybridmachineportal).
+After you have successfully installed the agent and configured it to connect to Azure Arc-enabled servers, go to the Azure portal to verify that the servers in your Organizational Unit have successfully connected. View your machines in the [Azure portal](https://aka.ms/hybridmachineportal).
## Next steps * Review the [Planning and deployment guide](plan-at-scale-deployment.md) to plan for deploying Azure Arc-enabled servers at any scale and implement centralized management and monitoring. * Review connection troubleshooting information in the [Troubleshoot Connected Machine agent guide](troubleshoot-agent-onboard.md).
-* Learn how to manage your machine using [Azure Policy](../../governance/policy/overview.md) for such things as VM [guest configuration](../../governance/policy/concepts/guest-configuration.md), verifying that the machine is reporting to the expected Log Analytics workspace, enabling monitoring with [VM insights](../../azure-monitor/vm/vminsights-enable-policy.md), and much more.
+* Learn how to manage your machine using [Azure Policy](../../governance/policy/overview.md) for such things as VM [guest configuration](../../governance/machine-configuration/overview.md), verifying that the machine is reporting to the expected Log Analytics workspace, enabling monitoring with [VM insights](../../azure-monitor/vm/vminsights-enable-policy.md), and much more.
* Learn more about [Group Policy](/troubleshoot/windows-server/group-policy/group-policy-overview).
azure-arc Onboard Group Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/onboard-group-policy.md
Title: Connect machines at scale using group policy
-description: In this article, you learn how to connect machines to Azure using Azure Arc-enabled servers using group policy.
+description: In this article, you learn how to connect machines to Azure using Azure Arc-enabled servers using group policy.
Last updated 05/25/2022
You can onboard Active Directory-joined Windows machines to Azure Arc-enabled
You'll first need to set up a local remote share with the Connected Machine Agent and define a configuration file specifying the Arc-enabled server's landing zone within Azure. You will then define a Group Policy Object to run an onboarding script using a scheduled task. This Group Policy can be applied at the site, domain, or organizational unit level. Assignment can also use Access Control List (ACL) and other security filtering native to Group Policy. Machines in the scope of the Group Policy will be onboarded to Azure Arc-enabled servers.
-Before you get started, be sure to review the [prerequisites](prerequisites.md) and verify that your subscription and resources meet the requirements. For information about supported regions and other related considerations, see [supported Azure regions](overview.md#supported-regions). Also review our [at-scale planning guide](plan-at-scale-deployment.md) to understand the design and deployment criteria, as well as our management and monitoring recommendations.
+Before you get started, be sure to review the [prerequisites](prerequisites.md) and verify that your subscription and resources meet the requirements. For information about supported regions and other related considerations, see [supported Azure regions](overview.md#supported-regions). Also review our [at-scale planning guide](plan-at-scale-deployment.md) to understand the design and deployment criteria, as well as our management and monitoring recommendations.
If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
The Group Policy to onboard Azure Arc-enabled servers requires a remote share wi
1. Prepare a remote share to host the Azure Connected Machine agent package for Windows and the configuration file. You need to be able to add files to the distributed location.
-1. Download the latest version of the [Windows agent Windows Installer package](https://aka.ms/AzureConnectedMachineAgent) from the Microsoft Download Center and save it to the remote share.
+1. Download the latest version of the [Windows agent Windows Installer package](https://aka.ms/AzureConnectedMachineAgent) from the Microsoft Download Center and save it to the remote share.
## Generate an onboarding script and configuration file from Azure portal
Before you can run the script to connect your machines, you'll need to do the fo
1. Modify and save the following configuration file to the remote share as `ArcConfig.json`. Edit the file with your Azure subscription, resource group, and location details. Use the service principal details from step 1 for the last two fields: ```json
-{
- "tenant-id": "INSERT AZURE TENANTID",
- "subscription-id": "INSERT AZURE SUBSCRIPTION ID",
- "resource-group": "INSERT RESOURCE GROUP NAME",
- "location": "INSERT REGION",
- "service-principal-id": "INSERT SPN ID",
- "service-principal-secret": "INSERT SPN Secret"
- }
+{
+ "tenant-id": "INSERT AZURE TENANTID",
+ "subscription-id": "INSERT AZURE SUBSCRIPTION ID",
+ "resource-group": "INSERT RESOURCE GROUP NAME",
+ "location": "INSERT REGION",
+ "service-principal-id": "INSERT SPN ID",
+ "service-principal-secret": "INSERT SPN Secret"
+ }
``` The group policy will project machines as Arc-enabled servers in the Azure subscription, resource group, and region specified in this configuration file.
Before you can run the script to connect your machines, you'll need to save the
1. Save the modified onboarding script locally and note its location. This will be referenced when creating the Group Policy Object.--> ```
-# This script is used to install and configure the Azure Connected Machine Agent
+# This script is used to install and configure the Azure Connected Machine Agent
[CmdletBinding()] param(
$ProgressPreference="SilentlyContinue"
# create local installation folder if it doesn't exist if (!(Test-Path $InstallationFolder) ) { [void](New-Item -path $InstallationFolder -ItemType Directory )
-}
+}
# create log file and overwrite if it already exists $logpath = New-Item -path $InstallationFolder -Name $LogFile -ItemType File -Force
RegKey: $RegKey
LogFile: $LogPath InstallationFolder: $InstallationFolder ConfigFileName: $ConfigFilename
-"@ >> $logPath
+"@ >> $logPath
try {
try
$agentData = Get-ItemProperty $RegKey -ErrorAction SilentlyContinue if (! $agentData) {
- throw "Could not read installation data from registry, a problem may have occurred during installation"
+ throw "Could not read installation data from registry, a problem may have occurred during installation"
"Azure Connected Machine Agent version $($agentData.version) is already deployed, exiting without changes" >> $logPath exit }
try
} catch { "An error occurred during installation: $_" >> $logpath
-}
+}
``` ## Create a Group Policy Object > [!NOTE] > Before applying the Group Policy Scheduled Task, you must first check the folder `ScheduledTasks` (located within the `Preferences` folder) and modify the `ScheduledTasks.xml` file by changing `<GroupId>NT AUTHORITY\SYSTEM<\GroupId>` to `<UserId>NT AUTHORITY\SYSTEM</UserId>`.
``` ## Create a Group Policy Object > [!NOTE] > Before applying the Group Policy Scheduled Task, you must first check the folder `ScheduledTasks` (located within the `Preferences` folder) and modify the `ScheduledTasks.xml` file by changing `<GroupId>NT AUTHORITY\SYSTEM</GroupId>` to `<UserId>NT AUTHORITY\SYSTEM</UserId>`.
+Create a new Group Policy Object (GPO) to run the onboarding script using the configuration file details:
-1. Open the Group Policy Management Console (GPMC).
+1. Open the Group Policy Management Console (GPMC).
-1. Navigate to the Organization Unit (OU), Domain, or Security Group in your AD forest that contains the machines you want to onboard to Azure Arc-enabled servers.
+1. Navigate to the Organization Unit (OU), Domain, or Security Group in your AD forest that contains the machines you want to onboard to Azure Arc-enabled servers.
-1. Right-click on this set of resources and select **Create a GPO in this domain, and Link it here.**
+1. Right-click on this set of resources and select **Create a GPO in this domain, and Link it here.**
1. Assign the name "Onboard servers to Azure Arc-enabled servers" to this new Group Policy Object (GPO). ## Create a scheduled task
-The newly created GPO needs to be modified to run the onboarding script at the appropriate cadence. Use Group Policy's built-in Scheduled Task capabilities to do so:
+The newly created GPO needs to be modified to run the onboarding script at the appropriate cadence. Use Group Policy's built-in Scheduled Task capabilities to do so:
1. Select **Computer Configuration > Preferences > Control Panel Settings > Scheduled Tasks**.
-1. Right-click in the blank area and select **New > Scheduled Task**.
+1. Right-click in the blank area and select **New > Scheduled Task**.
-Your workstation must be running Windows 7 or higher to be able to create a Scheduled Task from Group Policy Management Console.
+Your workstation must be running Windows 7 or later to create a Scheduled Task from the Group Policy Management Console.
### Assign general parameters for the task In the **General** tab, set the following parameters under **Security Options**:
-1. In the field **When running the task, use the following user account:**, enter "NT AUTHORITY\System".
+1. In the field **When running the task, use the following user account:**, enter "NT AUTHORITY\System".
-1. Select **Run whether user is logged on or not**.
+1. Select **Run whether user is logged on or not**.
-1. Check the box for **Run with highest privileges**.
+1. Check the box for **Run with highest privileges**.
-1. In the field **Configure for**, select **Windows Vista or Window 2008**.
+1. In the field **Configure for**, select **Windows Vista or Windows Server 2008**.
:::image type="content" source="media/onboard-group-policy/general-properties.png" alt-text="Screenshot of the Azure Arc agent Deployment and Configuration properties window." :::
In the **General** tab, set the following parameters under **Security Options**:
In the **Triggers** tab, select **New**, then enter the following parameters in the **New Trigger** window:
-1. In the field **Begin the task**, select **On a schedule**.
+1. In the field **Begin the task**, select **On a schedule**.
1. Under **Settings**, select **One time** and enter the date and time for the task to run. Select a date and time that is at least 2 hours after the current time to make sure that the Group Policy update will be applied.
-1. Under **Advanced Settings**, check the box for **Enabled**.
+1. Under **Advanced Settings**, check the box for **Enabled**.
-1. Once you've set the trigger parameters, select **OK**.
+1. Once you've set the trigger parameters, select **OK**.
:::image type="content" source="media/onboard-group-policy/new-trigger.png" alt-text="Screenshot of the New Trigger window." ::: ### Assign action parameters for the task
-In the **Actions** tab, select **New**, then enter the follow parameters in the **New Action** window:
+In the **Actions** tab, select **New**, then enter the following parameters in the **New Action** window:
-1. For **Action**, select **Start a program** from the dropdown.
+1. For **Action**, select **Start a program** from the dropdown.
1. For **Program/script**, enter `C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe`.
-1. For **Add arguments (optional)**, enter `-ExecutionPolicy Bypass -command <INSERT UNC-path to PowerShell script> -remotePath <INSERT path to your Remote Share>`.
+1. For **Add arguments (optional)**, enter `-ExecutionPolicy Bypass -command <INSERT UNC-path to PowerShell script> -remotePath <INSERT path to your Remote Share>`.
-1. For **Start In (Optional)**, enter `C:\`.
+1. For **Start In (Optional)**, enter `C:\`.
1. Once you've set the action parameters, select **OK**. :::image type="content" source="media/onboard-group-policy/new-action.png" alt-text="Screenshot of the New Action window." :::
-## Apply the Group Policy Object
+## Apply the Group Policy Object
-On the Group Policy Management Console, right-click on the desired Organizational Unit and select the option to link an existent GPO. Choose the Group Policy Object defined in the Scheduled Task. After 10 or 20 minutes, the Group Policy Object will be replicated to the respective domain controllers. Learn more about [creating and managing group policy in Azure AD Domain Services](../../active-directory-domain-services/manage-group-policy.md).
+On the Group Policy Management Console, right-click on the desired Organizational Unit and select the option to link an existing GPO. Choose the Group Policy Object defined in the Scheduled Task. After 10 to 20 minutes, the Group Policy Object will be replicated to the respective domain controllers. Learn more about [creating and managing group policy in Azure AD Domain Services](../../active-directory-domain-services/manage-group-policy.md).
-After you have successfully installed the agent and configured it to connect to Azure Arc-enabled servers, go to the Azure portal to verify that the servers in your Organizational Unit have successfully connected. View your machines in the [Azure portal](https://aka.ms/hybridmachineportal).
+After you have successfully installed the agent and configured it to connect to Azure Arc-enabled servers, go to the Azure portal to verify that the servers in your Organizational Unit have successfully connected. View your machines in the [Azure portal](https://aka.ms/hybridmachineportal).
## Next steps - Review the [Planning and deployment guide](plan-at-scale-deployment.md) to plan for deploying Azure Arc-enabled servers at any scale and implement centralized management and monitoring. - Review connection troubleshooting information in the [Troubleshoot Connected Machine agent guide](troubleshoot-agent-onboard.md).-- Learn how to manage your machine using [Azure Policy](../../governance/policy/overview.md) for such things as VM [guest configuration](../../governance/policy/concepts/guest-configuration.md), verifying that the machine is reporting to the expected Log Analytics workspace, enabling monitoring with [VM insights](../../azure-monitor/vm/vminsights-enable-policy.md), and much more.
+- Learn how to manage your machine using [Azure Policy](../../governance/policy/overview.md) for such things as VM [guest configuration](../../governance/machine-configuration/overview.md), verifying that the machine is reporting to the expected Log Analytics workspace, enabling monitoring with [VM insights](../../azure-monitor/vm/vminsights-enable-policy.md), and much more.
- Learn more about [Group Policy](/troubleshoot/windows-server/group-policy/group-policy-overview).
azure-arc Onboard Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/onboard-portal.md
wget https://aka.ms/azcmagent -O ~/Install_linux_azcmagent.sh
bash ~/Install_linux_azcmagent.sh ```
-1. To download and install the agent, run the following commands. If your machine needs to communicate through a proxy server to connect to the internet, include the `--proxy` parameter.
+1. To download and install the agent, run the following commands. If your machine needs to communicate through a proxy server to connect to the internet, include the `--proxy` parameter.
```bash # Download the installation package.
After you install the agent and configure it to connect to Azure Arc-enabled ser
- Review the [Planning and deployment guide](plan-at-scale-deployment.md) to plan for deploying Azure Arc-enabled servers at any scale and implement centralized management and monitoring. -- Learn how to manage your machine using [Azure Policy](../../governance/policy/overview.md), for such things as VM [guest configuration](../../governance/policy/concepts/guest-configuration.md), verify the machine is reporting to the expected Log Analytics workspace, enable monitoring with [VM insights](../../azure-monitor/vm/vminsights-enable-policy.md), and much more.
+- Learn how to manage your machine using [Azure Policy](../../governance/policy/overview.md), for such things as VM [guest configuration](../../governance/machine-configuration/overview.md), verify the machine is reporting to the expected Log Analytics workspace, enable monitoring with [VM insights](../../azure-monitor/vm/vminsights-enable-policy.md), and much more.
azure-arc Onboard Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/onboard-powershell.md
Title: Connect hybrid machines to Azure by using PowerShell description: In this article, you learn how to install the agent and connect a machine to Azure by using Azure Arc-enabled servers. You can do this with PowerShell. Last updated 07/16/2021-+
When the installation finishes, you see the following message:
```azurepowershell Connect-AzConnectedMachine -ResourceGroupName myResourceGroup -Name myMachineName -Location <region> ```
-
+ * To install the Connected Machine agent on the target machine that communicates through a proxy server, run:
-
+ ```azurepowershell Connect-AzConnectedMachine -ResourceGroupName myResourceGroup -Name myMachineName -Location <region> -Proxy http://<proxyURL>:<proxyport> ```
Here's how to configure one or more Windows servers with servers enabled with Az
3. To install the Connected Machine agent, use `Connect-AzConnectedMachine` with the `-ResourceGroupName`, and `-Location` parameters. The Azure resource names will automatically use the hostname of each server. Use the `-SubscriptionId` parameter to override the default subscription as a result of the Azure context created after sign-in. * To install the Connected Machine agent on the target machine that can directly communicate to Azure, run the following command:
-
+ ```azurepowershell $sessions = New-PSSession -ComputerName myMachineName Connect-AzConnectedMachine -ResourceGroupName myResourceGroup -Location <region> -PSSession $sessions ```
-
+ * To install the Connected Machine agent on multiple remote machines at the same time, add a list of remote machine names, each separated by a comma. ```azurepowershell
Here's how to configure one or more Windows servers with servers enabled with Az
``` The following example shows the results of the command targeting a single machine:
-
+ ```azurepowershell time="2020-08-07T13:13:25-07:00" level=info msg="Onboarding Machine. It usually takes a few minutes to complete. Sometimes it may take longer depending on network and server load status." time="2020-08-07T13:13:25-07:00" level=info msg="Check network connectivity to all endpoints..." time="2020-08-07T13:13:29-07:00" level=info msg="All endpoints are available... continue onboarding" time="2020-08-07T13:13:50-07:00" level=info msg="Successfully Onboarded Resource to Azure" VM Id=f65bffc7-4734-483e-b3ca-3164bfa42941
-
+ Name Location OSName Status ProvisioningState - -- -- myMachineName eastus windows Connected Succeeded
After you install and configure the agent to register with Azure Arc-enabled ser
* Review the [Planning and deployment guide](plan-at-scale-deployment.md) to plan for deploying Azure Arc-enabled servers at any scale and implement centralized management and monitoring.
-* Learn how to manage your machine by using [Azure Policy](../../governance/policy/overview.md). You can use VM [guest configuration](../../governance/policy/concepts/guest-configuration.md), verify that the machine is reporting to the expected Log Analytics workspace, and enable monitoring with [VM insights](../../azure-monitor/vm/vminsights-enable-policy.md).
+* Learn how to manage your machine by using [Azure Policy](../../governance/policy/overview.md). You can use VM [guest configuration](../../governance/machine-configuration/overview.md), verify that the machine is reporting to the expected Log Analytics workspace, and enable monitoring with [VM insights](../../azure-monitor/vm/vminsights-enable-policy.md).
azure-arc Onboard Service Principal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/onboard-service-principal.md
Title: Connect hybrid machines to Azure at scale description: In this article, you learn how to connect machines to Azure using Azure Arc-enabled servers using a service principal. Last updated 05/23/2022-+
You can enable Azure Arc-enabled servers for multiple Windows or Linux machines in your environment with several flexible options depending on your requirements. Using the template script we provide, you can automate every step of the installation, including establishing the connection to Azure Arc. However, you are required to execute this script manually with an account that has elevated permissions on the target machine and in Azure.
-One method to connect the machines to Azure Arc-enabled servers is to use an Azure Active Directory [service principal](../../active-directory/develop/app-objects-and-service-principals.md). This service principal method can be used instead of your privileged identity to [interactively connect the machine](onboard-portal.md). This service principal is a special limited management identity that has only the minimum permission necessary to connect machines to Azure using the `azcmagent` command. This method is safer than using a higher privileged account like a Tenant Administrator and follows our access control security best practices. **The service principal is used only during onboarding; it is not used for any other purpose.**
+One method to connect the machines to Azure Arc-enabled servers is to use an Azure Active Directory [service principal](../../active-directory/develop/app-objects-and-service-principals.md). This service principal method can be used instead of your privileged identity to [interactively connect the machine](onboard-portal.md). This service principal is a special limited management identity that has only the minimum permission necessary to connect machines to Azure using the `azcmagent` command. This method is safer than using a higher privileged account like a Tenant Administrator and follows our access control security best practices. **The service principal is used only during onboarding; it is not used for any other purpose.**
Before you start connecting your machines, review the following requirements:
Before you start connecting your machines, review the following requirements:
<!--The installation methods to install and configure the Connected Machine agent requires that the automated method you use has administrator permissions on the machines: on Linux by using the root account, and on Windows as a member of the Local Administrators group.
-Before you get started, be sure to review the [prerequisites](prerequisites.md) and verify that your subscription and resources meet the requirements. For information about supported regions and other related considerations, see [supported Azure regions](overview.md#supported-regions). Also review our [at-scale planning guide](plan-at-scale-deployment.md) to understand the design and deployment criteria, as well as our management and monitoring recommendations.-->
+Before you get started, be sure to review the [prerequisites](prerequisites.md) and verify that your subscription and resources meet the requirements. For information about supported regions and other related considerations, see [supported Azure regions](overview.md#supported-regions). Also review our [at-scale planning guide](plan-at-scale-deployment.md) to understand the design and deployment criteria, as well as our management and monitoring recommendations.-->
If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
The values from the following properties are used with parameters passed to the
> [!TIP] > Make sure to use the service principal **ApplicationId** property, not the **Id** property.
-4. Assign the **Azure Connected Machine Onboarding** role to the service principal for the designated resource group or subscription. This role contains only the permissions required to onboard a machine. Note that your account must be a member of the **Owner** or **User Access Administrator** role for the subscription to which the service principal will have access. For information on how to add role assignments, see [Assign Azure roles using the Azure portal](../../role-based-access-control/role-assignments-portal.md) or [Assign Azure roles using Azure CLI](../../role-based-access-control/role-assignments-cli.md).
+4. Assign the **Azure Connected Machine Onboarding** role to the service principal for the designated resource group or subscription. This role contains only the permissions required to onboard a machine. Note that your account must be a member of the **Owner** or **User Access Administrator** role for the subscription to which the service principal will have access. For information on how to add role assignments, see [Assign Azure roles using the Azure portal](../../role-based-access-control/role-assignments-portal.md) or [Assign Azure roles using Azure CLI](../../role-based-access-control/role-assignments-cli.md).
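As a sketch, the equivalent role assignment with the Azure CLI looks like the following; all IDs are placeholders.

```bash
# Grant the service principal only the permissions needed to onboard machines,
# scoped to a single resource group. All IDs are placeholders.
az role assignment create \
  --assignee "<service-principal-app-id>" \
  --role "Azure Connected Machine Onboarding" \
  --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>"
```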
## Generate the installation script from the Azure portal
After you install the agent and configure it to connect to Azure Arc-enabled ser
- Review the [Planning and deployment guide](plan-at-scale-deployment.md) to plan for deploying Azure Arc-enabled servers at any scale and implement centralized management and monitoring. - Learn how to [troubleshoot agent connection issues](troubleshoot-agent-onboard.md).-- Learn how to manage your machines using [Azure Policy](../../governance/policy/overview.md) for such things as VM [guest configuration](../../governance/policy/concepts/guest-configuration.md), verifying that machines are reporting to the expected Log Analytics workspace, monitoring with [VM insights](../../azure-monitor/vm/vminsights-enable-policy.md), and more.
+- Learn how to manage your machines using [Azure Policy](../../governance/policy/overview.md) for such things as VM [guest configuration](../../governance/machine-configuration/overview.md), verifying that machines are reporting to the expected Log Analytics workspace, monitoring with [VM insights](../../azure-monitor/vm/vminsights-enable-policy.md), and more.
azure-arc Onboard Update Management Machines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/onboard-update-management-machines.md
If you don't have an Azure subscription, create a [free account](https://azure.m
## How it works
-When the onboarding process is launched, an Active Directory [service principal](../../active-directory/fundamentals/service-accounts-principal.md) is created in the tenant.
+When the onboarding process is launched, an Active Directory [service principal](../../active-directory/fundamentals/service-accounts-principal.md) is created in the tenant.
To install and configure the Connected Machine agent on the target machine, a master runbook named **Add-AzureConnectedMachines** runs in the Azure sandbox. Based on the operating system detected on the machine, the master runbook calls a child runbook named **Add-AzureConnectedMachineWindows** or **Add-AzureConnectedMachineLinux** that runs under the system [Hybrid Runbook Worker](../../automation/automation-hybrid-runbook-worker.md) role directly on the machine. Runbook job output is written to the job history, and you can view their [status summary](../../automation/automation-runbook-execution.md#job-statuses) or drill into details of a specific runbook job in the [Azure portal](../../automation/manage-runbooks.md#view-statuses-in-the-azure-portal) or using [Azure PowerShell](../../automation/manage-runbooks.md#retrieve-job-statuses-using-powershell). Execution of runbooks in Azure Automation writes details in an activity log for the Automation account. For details of using the log, see [Retrieve details from Activity log](../../automation/manage-runbooks.md#retrieve-details-from-activity-log).
The final step establishes the connection to Azure Arc using the `azcmagent` com
## Prerequisites
-This method requires that you are a member of the [Automation Job Operator](../../automation/automation-role-based-access-control.md#automation-job-operator) role or higher so you can create runbook jobs in the Automation account.
+This method requires that you are a member of the [Automation Job Operator](../../automation/automation-role-based-access-control.md#automation-job-operator) role or higher so you can create runbook jobs in the Automation account.
-If you have enabled Azure Policy to [manage runbook execution](../../automation/enforce-job-execution-hybrid-worker.md) and enforce targeting of runbook execution against a Hybrid Runbook Worker group, this policy must be disabled. Otherwise, the runbook jobs that onboard the machine(s) to Arc-enabled servers will fail.
+If you have enabled Azure Policy to [manage runbook execution](../../automation/enforce-job-execution-hybrid-worker.md) and enforce targeting of runbook execution against a Hybrid Runbook Worker group, this policy must be disabled. Otherwise, the runbook jobs that onboard the machine(s) to Arc-enabled servers will fail.
## Add machines from the Azure portal
Perform the following steps to configure the hybrid machine with Arc-enabled ser
After specifying the Automation account, the list below returns non-Azure machines managed by Update Management for that Automation account. Both Windows and Linux machines are listed and for each one, select **add**.
- You can review your selection by selecting **Review selection** and if you want to remove a machine select **remove** from under the **Action** column.
 You can review your selection by selecting **Review selection**, and if you want to remove a machine, select **remove** under the **Action** column.
Once you confirm your selection, select **Next: Tags**.
After the agent is installed and configured to connect to Azure Arc-enabled serv
- Review the [Planning and deployment guide](plan-at-scale-deployment.md) to plan for deploying Azure Arc-enabled servers at any scale and implement centralized management and monitoring.
-- Learn how to manage your machine using [Azure Policy](../../governance/policy/overview.md), for such things as VM [guest configuration](../../governance/policy/concepts/guest-configuration.md), verify the machine is reporting to the expected Log Analytics workspace, enable monitoring with [VM insights](../../azure-monitor/vm/vminsights-enable-policy.md), and much more.
+- Learn how to manage your machine using [Azure Policy](../../governance/policy/overview.md), for such things as VM [guest configuration](../../governance/machine-configuration/overview.md), verify the machine is reporting to the expected Log Analytics workspace, enable monitoring with [VM insights](../../azure-monitor/vm/vminsights-enable-policy.md), and much more.
azure-arc Onboard Windows Admin Center https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/onboard-windows-admin-center.md
After you install the agent and configure it to connect to Azure Arc-enabled ser
* Review the [Planning and deployment guide](plan-at-scale-deployment.md) to plan for deploying Azure Arc-enabled servers at any scale and implement centralized management and monitoring.
-* Learn how to manage your machine using [Azure Policy](../../governance/policy/overview.md), for such things as VM [guest configuration](../../governance/policy/concepts/guest-configuration.md), verifying the machine is reporting to the expected Log Analytics workspace, enable monitoring with [VM insights](../../azure-monitor/vm/vminsights-enable-policy.md), and much more.
+* Learn how to manage your machine using [Azure Policy](../../governance/policy/overview.md), for such things as VM [guest configuration](../../governance/machine-configuration/overview.md), verifying the machine is reporting to the expected Log Analytics workspace, enable monitoring with [VM insights](../../azure-monitor/vm/vminsights-enable-policy.md), and much more.
azure-arc Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/overview.md
You can install the Connected Machine agent manually, or on multiple machines at
When you connect your machine to Azure Arc-enabled servers, you can perform many operational functions, just as you would with native Azure virtual machines. Below are some of the key supported actions for connected machines.

* **Govern**:
- * Assign [Azure Policy guest configurations](../../governance/policy/concepts/guest-configuration.md) to audit settings inside the machine. To understand the cost of using Azure Policy Guest Configuration policies with Arc-enabled servers, see Azure Policy [pricing guide](https://azure.microsoft.com/pricing/details/azure-policy/).
+ * Assign [Azure Policy guest configurations](../../governance/machine-configuration/overview.md) to audit settings inside the machine. To understand the cost of using Azure Policy Guest Configuration policies with Arc-enabled servers, see Azure Policy [pricing guide](https://azure.microsoft.com/pricing/details/azure-policy/).
* **Protect**:
  * Protect non-Azure servers with [Microsoft Defender for Endpoint](/microsoft-365/security/defender-endpoint), included through [Microsoft Defender for Cloud](../../security-center/defender-for-servers-introduction.md), for threat detection, for vulnerability management, and to proactively monitor for potential security threats. Microsoft Defender for Cloud presents the alerts and remediation suggestions from the threats detected.
  * Use [Microsoft Sentinel](scenario-onboard-azure-sentinel.md) to collect security-related events and correlate them with other data sources.
When you connect your machine to Azure Arc-enabled servers, you can perform many
  * Perform post-deployment configuration and automation tasks using supported [Arc-enabled servers VM extensions](manage-vm-extensions.md) for your non-Azure Windows or Linux machine.
* **Monitor**:
  * Monitor operating system performance and discover application components to monitor processes and dependencies with other resources using [VM insights](../../azure-monitor/vm/vminsights-overview.md).
- * Collect other log data, such as performance data and events, from the operating system or workloads running on the machine with the [Log Analytics agent](../../azure-monitor/agents/agents-overview.md#log-analytics-agent). This data is stored in a [Log Analytics workspace](../../azure-monitor/logs/log-analytics-workspace-overview.md).
+ * Collect other log data, such as performance data and events, from the operating system or workloads running on the machine with the [Log Analytics agent](../../azure-monitor/agents/log-analytics-agent.md). This data is stored in a [Log Analytics workspace](../../azure-monitor/logs/log-analytics-workspace-overview.md).
> [!NOTE]
> At this time, enabling Azure Automation Update Management directly from an Azure Arc-enabled server is not supported. See [Enable Update Management from your Automation account](../../automation/update-management/enable-from-automation-account.md) to understand requirements and [how to enable Update Management for non-Azure VMs](../../automation/update-management/enable-from-automation-account.md#enable-non-azure-vms).
Azure Arc-enabled servers support the management of physical servers and virtual
The status for a connected machine can be viewed in the Azure portal under **Azure Arc > Servers**.
-The Connected Machine agent sends a regular heartbeat message to the service every five minutes. If the service stops receiving these heartbeat messages from a machine, that machine is considered offline, and its status will automatically be changed to **Disconnected** within 15 to 30 minutes. Upon receiving a subsequent heartbeat message from the Connected Machine agent, its status will automatically be changed back to **Connected**.
+The Connected Machine agent sends a regular heartbeat message to the service every five minutes. If the service stops receiving these heartbeat messages from a machine, that machine is considered offline, and its status will automatically be changed to **Disconnected** within 15 to 30 minutes. Upon receiving a subsequent heartbeat message from the Connected Machine agent, its status will automatically be changed back to **Connected**.
If a machine remains disconnected for 45 days, its status may change to **Expired**. An expired machine can no longer connect to Azure and requires a server administrator to disconnect and then reconnect it to Azure to continue managing it with Azure Arc. The exact date upon which a machine will expire is determined by the expiration date of the managed identity's credential, which is valid up to 90 days and renewed every 45 days.
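As a quick check outside the portal, here's a minimal sketch using the Az.ConnectedMachine PowerShell module (resource names are placeholders, and property names may vary slightly by module version):

```powershell
# Query the connection status of an Arc-enabled server
Get-AzConnectedMachine -ResourceGroupName "<resource-group-name>" -Name "<machine-name>" |
    Select-Object Name, Status, LastStatusChange
```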
azure-arc Scenario Migrate To Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/scenario-migrate-to-azure.md
In this article, you:
* Inventory the supported VM extensions installed on the Azure Arc-enabled server.
* Uninstall all VM extensions from the Azure Arc-enabled server.
* Identify Azure services configured to authenticate with your Azure Arc-enabled server-managed identity and prepare to update those services to use the Azure VM identity after migration.
-* Review Azure role-based access control (Azure RBAC) access rights granted to the Azure Arc-enabled server resource to maintain who has access to the resource after it has been migrated to an Azure VM.
+* Review Azure role-based access control (Azure RBAC) access rights granted to the Azure Arc-enabled server resource to maintain who has access to the resource after it has been migrated to an Azure VM.
* Delete the Azure Arc-enabled server resource identity from Azure and remove the Azure Arc-enabled server agent.
* Install the Azure guest agent.
* Migrate the server or VM to Azure.
List role assignments for the Azure Arc-enabled servers resource, using [Azure P
If you're using a managed identity for an application or process running on an Azure Arc-enabled server, you need to make sure the Azure VM has a managed identity assigned. To view the role assignment for a managed identity, you can use the Azure PowerShell `Get-AzADServicePrincipal` cmdlet. For more information, see [List role assignments for a managed identity](../../role-based-access-control/role-assignments-list-powershell.md#list-role-assignments-for-a-managed-identity).
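As a hedged sketch of that lookup, assuming the identity's display name matches the server name (placeholders in angle brackets):

```powershell
# Find the service principal that backs the managed identity, then list its role assignments
$sp = Get-AzADServicePrincipal -DisplayName "<arc-server-name>"
Get-AzRoleAssignment -ObjectId $sp.Id
```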
-A system-managed identity is also used when Azure Policy is used to audit or configure settings inside a machine or server. With Azure Arc-enabled servers, the guest configuration agent service is included, and performs validation of audit settings. After you migrate, see [Deploy requirements for Azure virtual machines](../../governance/policy/concepts/guest-configuration.md#deploy-requirements-for-azure-virtual-machines) for information on how to configure your Azure VM manually or with policy with the guest configuration extension.
+A system-managed identity is also used when Azure Policy is used to audit or configure settings inside a machine or server. With Azure Arc-enabled servers, the guest configuration agent service is included, and performs validation of audit settings. After you migrate, see [Deploy requirements for Azure virtual machines](../../governance/machine-configuration/overview.md#deploy-requirements-for-azure-virtual-machines) for information on how to configure your Azure VM manually or with policy with the guest configuration extension.
Update role assignment with any resources accessed by the managed identity to allow the new Azure VM identity to authenticate to those services. See the following to learn [how managed identities for Azure resources work for an Azure Virtual Machine (VM)](../../active-directory/managed-identities-azure-resources/how-managed-identities-work-vm.md).
Before proceeding with the migration with Azure Migration, review the [Prepare o
## Step 6: Deploy Azure VM extensions
-After migration and completion of all post-migration configuration steps, you can now deploy the Azure VM extensions based on the VM extensions originally installed on your Azure Arc-enabled server. Review [Azure virtual machine extensions and features](../../virtual-machines/extensions/overview.md) to help plan your extension deployment.
+After migration and completion of all post-migration configuration steps, you can now deploy the Azure VM extensions based on the VM extensions originally installed on your Azure Arc-enabled server. Review [Azure virtual machine extensions and features](../../virtual-machines/extensions/overview.md) to help plan your extension deployment.
-To resume using audit settings inside a machine with guest configuration policy definitions, see [Enable guest configuration](../../governance/policy/concepts/guest-configuration.md#enable-guest-configuration).
+To resume using audit settings inside a machine with guest configuration policy definitions, see [Enable guest configuration](../../governance/machine-configuration/overview.md).
-If the Log Analytics VM extension or Dependency agent VM extension was deployed using Azure Policy and the [VM insights initiative](../../azure-monitor/vm/vminsights-enable-policy.md), remove the [exclusion](../../governance/policy/tutorials/create-and-manage.md#remove-a-non-compliant-or-denied-resource-from-the-scope-with-an-exclusion) you created earlier. To use Azure Policy to enable Azure virtual machines, see [Deploy Azure Monitor at scale using Azure Policy](../../azure-monitor/best-practices.md).
+If the Log Analytics VM extension or Dependency agent VM extension was deployed using Azure Policy and the [VM insights initiative](../../azure-monitor/vm/vminsights-enable-policy.md), remove the [exclusion](../../governance/policy/tutorials/create-and-manage.md#remove-a-non-compliant-or-denied-resource-from-the-scope-with-an-exclusion) you created earlier. To use Azure Policy to enable Azure virtual machines, see [Deploy Azure Monitor at scale using Azure Policy](../../azure-monitor/best-practices.md).
## Next steps
azure-arc Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Arc-enabled servers (preview) description: Lists Azure Policy Regulatory Compliance controls available for Azure Arc-enabled servers (preview). These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 08/01/2022 Last updated : 08/04/2022
azure-cache-for-redis Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Cache for Redis description: Lists Azure Policy Regulatory Compliance controls available for Azure Cache for Redis. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 08/01/2022 Last updated : 08/04/2022
azure-government Azure Services In Fedramp Auditscope https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/compliance/azure-services-in-fedramp-auditscope.md
This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and
| [Azure Monitor](../../azure-monitor/index.yml) (incl. [Application Insights](../../azure-monitor/app/app-insights-overview.md), [Log Analytics](../../azure-monitor/logs/data-platform-logs.md), and [Application Change Analysis](../../azure-monitor/app/change-analysis.md)) | &#x2705; | &#x2705; |
| [Azure NetApp Files](../../azure-netapp-files/index.yml) | &#x2705; | &#x2705; |
| [Azure Policy](../../governance/policy/index.yml) | &#x2705; | &#x2705; |
-| [Azure Policy's guest configuration](../../governance/policy/concepts/guest-configuration.md) | &#x2705; | &#x2705; |
+| [Azure Policy's guest configuration](../../governance/machine-configuration/overview.md) | &#x2705; | &#x2705; |
| **Service** | **FedRAMP High** | **DoD IL2** |
| [Azure Red Hat OpenShift](../../openshift/index.yml) | &#x2705; | &#x2705; |
| [Azure Resource Manager](../../azure-resource-manager/management/index.yml) | &#x2705; | &#x2705; |
This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and
| Azure Monitor [Log Analytics](../../azure-monitor/logs/data-platform-logs.md) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
| [Azure NetApp Files](../../azure-netapp-files/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
| [Azure Policy](../../governance/policy/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
-| [Azure Policy's guest configuration](../../governance/policy/concepts/guest-configuration.md) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
+| [Azure Policy's guest configuration](../../governance/machine-configuration/overview.md) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
| **Service** | **FedRAMP High** | **DoD IL2** | **DoD IL4** | **DoD IL5** | **DoD IL6** |
| [Azure Resource Manager](../../azure-resource-manager/management/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
| [Azure Service Manager (RDFE)](/previous-versions/azure/ee460799(v=azure.100)) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
azure-monitor Agent Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/agent-manage.md
Last updated 04/06/2022
-
# Managing and maintaining the Log Analytics agent for Windows and Linux
azure-monitor Agents Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/agents-overview.md
Title: Overview of the Azure monitoring agents| Microsoft Docs
-description: This article provides a detailed overview of the Azure agents that are available and support monitoring virtual machines hosted in an Azure or hybrid environment.
-
+ Title: Azure Monitor Agent overview
+description: Overview of the Azure Monitor Agent, which collects monitoring data from the guest operating system of virtual machines.
-- Previously updated : 7/11/2022
++ Last updated : 7/21/2022
+
+
+#customer-intent: As an IT manager, I want to understand the capabilities of Azure Monitor Agent to determine whether I can use the agent to collect the data I need from the operating systems of my virtual machines.
-# Overview of Azure Monitor agents
+# Azure Monitor Agent overview
-Virtual machines and other compute resources require an agent to collect monitoring data required to measure the performance and availability of their guest operating system and workloads. Many legacy agents exist today for this purpose. Eventually, they'll all be replaced by the new consolidated [Azure Monitor agent](./azure-monitor-agent-overview.md). This article describes the legacy agents and the new Azure Monitor agent.
+Azure Monitor Agent (AMA) collects monitoring data from the guest operating system of Azure and hybrid virtual machines and delivers it to Azure Monitor for use by features, insights, and other services, such as [Microsoft Sentinel](../../sentinel/overview.md) and [Microsoft Defender for Cloud](../../defender-for-cloud/defender-for-cloud-introduction.md). Azure Monitor Agent replaces all of Azure Monitor's legacy monitoring agents. This article provides an overview of Azure Monitor Agent's capabilities and supported use cases.
-The general recommendation is to use the Azure Monitor agent if you aren't bound by [these limitations](./azure-monitor-agent-overview.md#current-limitations) because it consolidates the features of all the legacy agents listed here and provides [other benefits](#azure-monitor-agent).
-If you do require the limitations today, you may continue to use the other legacy agents listed here until **August 2024**. [Learn more](./azure-monitor-agent-overview.md).
+Here's a short **introduction to Azure Monitor video**, which includes a quick demo of how to set up the agent from the Azure portal: [ITOps Talk: Azure Monitor Agent](https://www.youtube.com/watch?v=f8bIrFU8tCs)
-## Summary of agents
+## Can I deploy Azure Monitor Agent?
-The following tables provide a quick comparison of the telemetry agents for Windows and Linux. More information on each agent is provided in the following sections.
+Deploy Azure Monitor Agent on all new virtual machines to collect data for [supported services and features](#supported-services-and-features).
-### Windows agents
+If you have virtual machines already deployed with legacy agents, we recommend you [check whether Azure Monitor Agent supports your monitoring needs](#compare-to-legacy-agents) and [migrate to Azure Monitor Agent](./azure-monitor-agent-migration.md) as soon as possible.
-| | Azure Monitor agent | Diagnostics<br>extension (WAD) | Log Analytics<br>agent |
-|:|:-|:|:|
-| **Environments supported** | Azure<br>Other cloud (Azure Arc)<br>On-premises (Azure Arc)<br>[Windows Client OS (preview)](./azure-monitor-agent-windows-client.md) | Azure | Azure<br>Other cloud<br>On-premises |
-| **Agent requirements** | None | None | None |
-| **Data collected** | Event Logs<br>Performance<br>File based logs (preview)<br> | Event Logs<br>ETW events<br>Performance<br>File based logs<br>IIS logs<br>.NET app logs<br>Crash dumps<br>Agent diagnostics logs | Event Logs<br>Performance<br>File based logs<br>IIS logs<br>Insights and solutions<br>Other services |
-| **Data sent to** | Azure Monitor Logs<br>Azure Monitor Metrics<sup>1</sup> | Azure Storage<br>Azure Monitor Metrics<br>Event Hub | Azure Monitor Logs |
-| **Services and**<br>**features**<br>**supported** | Log Analytics<br>Metrics explorer<br>Microsoft Sentinel ([view scope](./azure-monitor-agent-overview.md#supported-services-and-features)) | Metrics explorer | VM insights<br>Log Analytics<br>Azure Automation<br>Microsoft Defender for Cloud<br>Microsoft Sentinel |
+Azure Monitor Agent replaces the Azure Monitor legacy monitoring agents:
-### Linux agents
+- [Log Analytics Agent](./log-analytics-agent.md): Sends data to a Log Analytics workspace and supports monitoring solutions.
+- [Telegraf agent](../essentials/collect-custom-metrics-linux-telegraf.md): Sends data to Azure Monitor Metrics (Linux only).
+- [Diagnostics extension](./diagnostics-extension-overview.md): Sends data to Azure Monitor Metrics (Windows only), Azure Event Hubs, and Azure Storage.
-| | Azure Monitor agent | Diagnostics<br>extension (LAD) | Telegraf<br>agent | Log Analytics<br>agent |
-| : | :- | :-- | :- | :-- |
-| **Environments supported** | Azure<br>Other cloud (Azure Arc)<br>On-premises (Azure Arc) | Azure | Azure<br>Other cloud<br>On-premises | Azure<br>Other cloud<br>On-premises |
-| **Agent requirements** | None | None | None | None |
-| **Data collected** | Syslog<br>Performance<br>File based logs (preview)<br> | Syslog<br>Performance | Performance | Syslog<br>Performance |
-| **Data sent to** | Azure Monitor Logs<br>Azure Monitor Metrics<sup>1</sup> | Azure Storage<br>Event Hub | Azure Monitor Metrics | Azure Monitor Logs |
-| **Services and**<br>**features**<br>**supported** | Log Analytics<br>Metrics explorer<br>Microsoft Sentinel ([view scope](./azure-monitor-agent-overview.md#supported-services-and-features)) | | Metrics explorer | VM insights<br>Log Analytics<br>Azure Automation<br>Microsoft Defender for Cloud<br>Microsoft Sentinel |
+## Install the agent and configure data collection
-<sup>1</sup> To review other limitations of using Azure Monitor Metrics, see [quotas and limits](../essentials/metrics-custom-overview.md#quotas-and-limits). On Linux, using Azure Monitor Metrics as the only destination is supported in v.1.10.9.0 or higher.
+Azure Monitor Agent uses [data collection rules](../essentials/data-collection-rule-overview.md) to define the data you want each agent to collect. Data collection rules let you manage data collection settings at scale and define unique, scoped configurations for subsets of machines. The rules are independent of the workspace and the virtual machine, which means you can define a rule once and reuse it across machines and environments.
-## Azure Monitor agent
+**To collect data using Azure Monitor Agent:**
-The [Azure Monitor agent](azure-monitor-agent-overview.md) is meant to replace the Log Analytics agent, Azure Diagnostics extension, and Telegraf agent for Windows and Linux machines. It can send data to Azure Monitor Logs and Azure Monitor Metrics and uses [data collection rules (DCRs)](../essentials/data-collection-rule-overview.md). DCRs provide a more scalable method of configuring data collection and destinations for each agent.
+1. Install the agent on the resource.
-Use the Azure Monitor agent to gain these benefits:
+ | Resource type | Installation method | More information |
+ |:|:|:|
+ | Virtual machines, scale sets | [Virtual machine extension](./azure-monitor-agent-manage.md#virtual-machine-extension-details) | Installs the agent by using Azure extension framework. |
+ | On-premises servers (Azure Arc-enabled servers) | [Virtual machine extension](./azure-monitor-agent-manage.md#virtual-machine-extension-details) (after installing the [Azure Arc agent](../../azure-arc/servers/deployment-options.md)) | Installs the agent by using Azure extension framework, provided for on-premises by first installing [Azure Arc agent](../../azure-arc/servers/deployment-options.md). |
+ | Windows 10, 11 desktops, workstations | [Client installer (preview)](./azure-monitor-agent-windows-client.md) | Installs the agent by using a Windows MSI installer. |
+ | Windows 10, 11 laptops | [Client installer (preview)](./azure-monitor-agent-windows-client.md) | Installs the agent by using a Windows MSI installer. The installer works on laptops, but the agent *isn't optimized yet* for battery or network consumption. |
+
+1. Define a data collection rule and associate the resource to the rule.
- Collect guest logs and metrics from any machine in Azure, in other clouds, or on-premises. ([Azure Arc-enabled servers](../../azure-arc/servers/overview.md) are required for machines outside of Azure.)
-- Collect specific data types from specific machines with granular targeting via [DCRs](../essentials/data-collection-rule-overview.md) as compared to the "all or nothing" mode that the Log Analytics agent supports.
-- Use XPath queries to filter Windows events that get collected, which helps to further reduce ingestion and storage costs.
-- Centrally configure collection for different sets of data from different sets of VMs.
-- Simplify management of data collection. Send data from Windows and Linux VMs to multiple Log Analytics workspaces (for example, "multihoming") and/or other [supported destinations](./azure-monitor-agent-overview.md#data-sources-and-destinations). Every action across the data collection lifecycle, from onboarding to deployment to updates, is easier, scalable, and centralized (in Azure) by using DCRs.
-- Manage dependent solutions or services. The Azure Monitor agent uses a new method of handling extensibility that's more transparent and controllable than management packs and Linux plug-ins in the legacy Log Analytics agents. This management experience is identical for machines in Azure or on-premises/other clouds via Azure Arc, at no added cost.
-- Use Managed Identity (for virtual machines) and Azure Active Directory device tokens (for clients), which are much more secure and "hack proof" than certificates or workspace keys that legacy agents use. This agent performs better at higher events-per-second upload rates compared to legacy agents.
-- Manage data collection configuration centrally by using [DCRs](../essentials/data-collection-rule-overview.md), and use Azure Resource Manager templates or policies for management overall.
-- Send data to Azure Monitor Logs and Azure Monitor Metrics (preview) for analysis with Azure Monitor.
-- Use Windows event filtering or multihoming for logs on Windows and Linux.
+ The table below lists the types of data you can currently collect with the Azure Monitor Agent and where you can send that data.
-<! Send data to Azure Storage for archiving.
-- Send data to third-party tools by using [Azure Event Hubs](./diagnostics-extension-stream-event-hubs.md).
-- Manage the security of your machines by using [Microsoft Defender for Cloud](../../security-center/security-center-introduction.md) or [Microsoft Sentinel](../../sentinel/overview.md). (Available in private preview.)
-- Use [VM insights](../vm/vminsights-overview.md), which allows you to monitor your machines at scale and monitors their processes and dependencies on other resources and external processes.
-- Manage the security of your machines by using [Microsoft Defender for Cloud](../../security-center/security-center-introduction.md) or [Microsoft Sentinel](../../sentinel/overview.md).
-- Use different [solutions](../monitor-reference.md#insights-and-curated-visualizations) to monitor a particular service or application. */>
+ | Data source | Destinations | Description |
+ |:|:|:|
+ | Performance | Azure Monitor Metrics (preview)<sup>1</sup> - Insights.virtualmachine namespace<br>Log Analytics workspace - [Perf](/azure/azure-monitor/reference/tables/perf) table | Numerical values measuring performance of different aspects of operating system and workloads |
+ | Windows event logs | Log Analytics workspace - [Event](/azure/azure-monitor/reference/tables/Event) table | Information sent to the Windows event logging system |
+ | Syslog | Log Analytics workspace - [Syslog](/azure/azure-monitor/reference/tables/syslog)<sup>2</sup> table | Information sent to the Linux event logging system |
+ | Text logs | Log Analytics workspace - custom table | Events sent to log file on agent machine |
+
+ <sup>1</sup> On Linux, using Azure Monitor Metrics as the only destination is supported in v1.10.9.0 or higher.<br>
+ <sup>2</sup> Azure Monitor Linux Agent v1.15.2 or higher supports syslog RFC formats including Cisco Meraki, Cisco ASA, Cisco FTD, Sophos XG, Juniper Networks, Corelight Zeek, CipherTrust, NXLog, McAfee, and Common Event Format (CEF).
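As an illustration of step 2, the following sketch associates an existing data collection rule with a virtual machine using the Az.Monitor PowerShell module. The resource IDs and names are placeholders, and parameter names can vary between module versions:

```powershell
# Associate an existing data collection rule (DCR) with a target VM
$vmId  = "/subscriptions/<subscription-id>/resourceGroups/<resource-group-name>/providers/Microsoft.Compute/virtualMachines/<vm-name>"
$dcrId = "/subscriptions/<subscription-id>/resourceGroups/<resource-group-name>/providers/Microsoft.Insights/dataCollectionRules/<rule-name>"

New-AzDataCollectionRuleAssociation -TargetResourceId $vmId -AssociationName "<association-name>" -RuleId $dcrId
```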
-When compared with the legacy agents, the Azure Monitor agent has [these limitations currently](./azure-monitor-agent-overview.md#current-limitations).
+## Supported services and features
-## Log Analytics agent
+Azure Monitor Agent currently supports these Azure Monitor features:
-> [!WARNING]
-> The Log Analytics agents are on a deprecation path and will no longer be supported after August 31, 2024.
+| Azure Monitor feature | Current support | Other extensions installed | More information |
+| : | : | : | : |
+| Text logs and Windows IIS logs | Public preview | None | [Collect text logs with Azure Monitor Agent (preview)](data-collection-text-log.md) |
+| Windows client installer | Public preview | None | [Set up Azure Monitor Agent on Windows client devices](azure-monitor-agent-windows-client.md) |
+| [VM insights](../vm/vminsights-overview.md) | Preview | Dependency Agent extension, if you're using the Map Services feature | [Sign-up link](https://aka.ms/amadcr-privatepreviews) |
-The legacy [Log Analytics agent](./log-analytics-agent.md) collects monitoring data from the guest operating system and workloads of virtual machines in Azure, other cloud providers, and on-premises machines. It sends data to a Log Analytics workspace. The Log Analytics agent is the same agent used by System Center Operations Manager. You can multihome agent computers to communicate with your management group and Azure Monitor simultaneously. This agent is also required by certain insights in Azure Monitor and other services in Azure.
+Azure Monitor Agent currently supports these Azure services:
-> [!NOTE]
-> The Log Analytics agent for Windows is often referred to as Microsoft Monitoring Agent (MMA). The Log Analytics agent for Linux is often referred to as OMS agent.
+| Azure service | Current support | Other extensions installed | More information |
+| : | : | : | : |
+| [Microsoft Defender for Cloud](../../security-center/security-center-introduction.md) | Preview | <ul><li>Azure Security Agent extension</li><li>SQL Advanced Threat Protection extension</li><li>SQL Vulnerability Assessment extension</li></ul> | [Sign-up link](https://aka.ms/AMAgent) |
+| [Microsoft Sentinel](../../sentinel/overview.md) | <ul><li>Windows DNS logs: Preview</li><li>Linux Syslog CEF: Preview</li><li>Windows Forwarding Event (WEF): [Public preview](../../sentinel/data-connectors-reference.md#windows-forwarded-events-preview)</li><li>Windows Security Events: [Generally available](../../sentinel/connect-windows-security-events.md?tabs=AMA)</li></ul> | Sentinel DNS extension, if you're collecting DNS logs. For all other data types, you just need the Azure Monitor Agent extension. | <ul><li>[Sign-up link for Windows DNS logs](https://aka.ms/AMAgent)</li><li>[Sign-up link for Linux Syslog CEF](https://aka.ms/AMAgent)</li><li>No sign-up needed for Windows Forwarding Event (WEF) and Windows Security Events</li></ul> |
+| [Change Tracking](../../automation/change-tracking/overview.md) (part of Defender) | Supported as File Integrity Monitoring in the Microsoft Defender for Cloud: Preview. | Change Tracking extension | [Sign-up link](https://aka.ms/AMAgent) |
+| [Update Management](../../automation/update-management/overview.md) (available without Azure Monitor Agent) | Use Update Management v2 - Public preview | None | [Update management center (preview) documentation](/azure/update-center/) |
+| [Network Watcher](../../network-watcher/network-watcher-monitoring-overview.md) | Connection Monitor: Preview | Azure NetworkWatcher extension | [Sign-up link](https://aka.ms/amadcr-privatepreviews) |
+
+## Supported regions
+
+Azure Monitor Agent is available in all public regions and Azure Government clouds. It's not yet supported in air-gapped clouds. For more information, see [Product availability by region](https://azure.microsoft.com/global-infrastructure/services/?products=monitor&rar=true&regions=all).
+## Costs
+
+There's no cost for the Azure Monitor Agent, but you might incur charges for the data ingested. For information on Log Analytics data collection and retention and for customer metrics, see [Azure Monitor pricing](https://azure.microsoft.com/pricing/details/monitor/).
+
+## Networking
+
+The Azure Monitor Agent supports Azure service tags. Both *AzureMonitor* and *AzureResourceManager* tags are required. It supports connecting via *direct proxies, Log Analytics gateway, and private links* as described in the following sections.
+
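If you control outbound traffic with a network security group, here's a hedged sketch of allowing the required tags (the NSG and resource group names are placeholders):

```powershell
# Allow outbound HTTPS to the AzureMonitor service tag on an existing NSG
$nsg = Get-AzNetworkSecurityGroup -Name "<nsg-name>" -ResourceGroupName "<resource-group-name>"
$nsg | Add-AzNetworkSecurityRuleConfig -Name "Allow-AzureMonitor-Outbound" -Access Allow -Protocol Tcp `
    -Direction Outbound -Priority 200 -SourceAddressPrefix VirtualNetwork -SourcePortRange "*" `
    -DestinationAddressPrefix AzureMonitor -DestinationPortRange 443 |
    Set-AzNetworkSecurityGroup
```

A matching outbound rule for the *AzureResourceManager* tag follows the same pattern.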
+### Firewall requirements
+
+| Cloud |Endpoint |Purpose |Port |Direction |Bypass HTTPS inspection|
+|---|---|---|---|---|---|
+| Azure Commercial |global.handler.control.monitor.azure.com |Access control service|Port 443 |Outbound|Yes |
+| Azure Commercial |`<virtual-machine-region-name>`.handler.control.monitor.azure.com |Fetch data collection rules for specific machine |Port 443 |Outbound|Yes |
+| Azure Commercial |`<log-analytics-workspace-id>`.ods.opinsights.azure.com |Ingest logs data |Port 443 |Outbound|Yes |
+| Azure Commercial | management.azure.com | Only needed if sending time series data (metrics) to Azure Monitor [Custom metrics](../essentials/metrics-custom-overview.md) database | Port 443 | Outbound | Yes |
+| Azure Government | Replace '.com' above with '.us' | Same as above | Same as above | Same as above| Same as above |
+| Azure China | Replace '.com' above with '.cn' | Same as above | Same as above | Same as above| Same as above |
+
+If you use private links on the agent, you must also add the [DCE endpoints](../essentials/data-collection-endpoint-overview.md#components-of-a-data-collection-endpoint).
+
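To verify a machine can reach these endpoints, one quick, hedged check from PowerShell (the workspace ID is a placeholder):

```powershell
# Confirm outbound TCP 443 connectivity to the control and ingestion endpoints
Test-NetConnection -ComputerName "global.handler.control.monitor.azure.com" -Port 443
Test-NetConnection -ComputerName "<log-analytics-workspace-id>.ods.opinsights.azure.com" -Port 443
```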
+### Proxy configuration
+
+If the machine connects through a proxy server to communicate over the internet, review the following requirements to understand the network configuration required.
+
+The Azure Monitor Agent extensions for Windows and Linux can communicate either through a proxy server or a [Log Analytics gateway](./gateway.md) to Azure Monitor by using the HTTPS protocol. Use it for Azure virtual machines, Azure virtual machine scale sets, and Azure Arc for servers. Use the extension settings for configuration as described in the following steps. Both anonymous and basic authentication by using a username and password are supported.
+
+> [!IMPORTANT]
+> Proxy configuration is not supported for [Azure Monitor Metrics (preview)](../essentials/metrics-custom-overview.md) as a destination. If you're sending metrics to this destination, it will use the public internet without any proxy.
-Use the Log Analytics agent if you need to:
+1. Use this flowchart to determine the values of the `Settings` and `ProtectedSettings` parameters first.
-* Collect logs and performance data from Azure virtual machines or hybrid machines hosted outside of Azure.
-* Send data to a Log Analytics workspace to take advantage of features supported by [Azure Monitor Logs](../logs/data-platform-logs.md), such as [log queries](../logs/log-query-overview.md).
-* Use [VM insights](../vm/vminsights-overview.md), which allows you to monitor your machines at scale and monitor their processes and dependencies on other resources and external processes.
-* Manage the security of your machines by using [Microsoft Defender for Cloud](../../security-center/security-center-introduction.md) or [Microsoft Sentinel](../../sentinel/overview.md).
-* Use [Azure Automation Update Management](../../automation/update-management/overview.md), [Azure Automation State Configuration](../../automation/automation-dsc-overview.md), or [Azure Automation Change Tracking and Inventory](../../automation/change-tracking/overview.md) to deliver comprehensive management of your Azure and non-Azure machines.
-* Use different [solutions](../monitor-reference.md#insights-and-curated-visualizations) to monitor a particular service or application.
+ ![Diagram that shows a flowchart to determine the values of settings and protectedSettings parameters when you enable the extension.](media/azure-monitor-agent-overview/proxy-flowchart.png)
-Limitations of the Log Analytics agent:
+1. After determining the `Settings` and `ProtectedSettings` parameter values, *provide these other parameters* when you deploy Azure Monitor Agent, using PowerShell commands, as shown in the following examples:
-- Can't send data to Azure Monitor Metrics, Azure Storage, or Azure Event Hubs
-- Difficult to configure unique monitoring definitions for individual agents
-- Difficult to manage at scale because each virtual machine has a unique configuration
+# [Windows VM](#tab/PowerShellWindows)
-## Azure Diagnostics extension
+```powershell
+$settingsString = '{"proxy":{"mode":"application","address":"http://[address]:[port]","auth": true}}';
+$protectedSettingsString = '{"proxy":{"username":"[username]","password": "[password]"}}';
-The [Azure Diagnostics extension](./diagnostics-extension-overview.md) collects monitoring data from the guest operating system and workloads of Azure virtual machines and other compute resources. It primarily collects data into Azure Storage. It also allows you to define data sinks to send data to other destinations, such as Azure Monitor Metrics and Azure Event Hubs.
+Set-AzVMExtension -ExtensionName AzureMonitorWindowsAgent -ExtensionType AzureMonitorWindowsAgent -Publisher Microsoft.Azure.Monitor -ResourceGroupName <resource-group-name> -VMName <virtual-machine-name> -Location <location> -TypeHandlerVersion 1.0 -SettingString $settingsString -ProtectedSettingString $protectedSettingsString
+```
-Use the Azure Diagnostics extension if you need to:
+# [Linux VM](#tab/PowerShellLinux)
-- Send data to Azure Storage for archiving or to analyze it with tools such as [Azure Storage Explorer](../../vs-azure-tools-storage-manage-with-storage-explorer.md).
-- Send data to [Azure Monitor Metrics](../essentials/data-platform-metrics.md) to analyze it with [Metrics Explorer](../essentials/metrics-getting-started.md) and to take advantage of features such as near-real-time [metric alerts](../alerts/alerts-metric-overview.md) and [autoscale](../autoscale/autoscale-overview.md) (Windows only).
-- Send data to third-party tools by using [Azure Event Hubs](./diagnostics-extension-stream-event-hubs.md).
-- Collect [Boot Diagnostics](/troubleshoot/azure/virtual-machines/boot-diagnostics) to investigate VM boot issues.
+```powershell
+$settingsString = '{"proxy":{"mode":"application","address":"http://[address]:[port]","auth": true}}';
+$protectedSettingsString = '{"proxy":{"username":"[username]","password": "[password]"}}';
-Limitations of the Azure Diagnostics extension:
+Set-AzVMExtension -ExtensionName AzureMonitorLinuxAgent -ExtensionType AzureMonitorLinuxAgent -Publisher Microsoft.Azure.Monitor -ResourceGroupName <resource-group-name> -VMName <virtual-machine-name> -Location <location> -TypeHandlerVersion 1.5 -SettingString $settingsString -ProtectedSettingString $protectedSettingsString
+```
-- Can only be used with Azure resources
-- Limited ability to send data to Azure Monitor Logs
+# [Windows Arc-enabled server](#tab/PowerShellWindowsArc)
-## Telegraf agent
+```powershell
+$settingsString = '{"proxy":{"mode":"application","address":"http://[address]:[port]","auth": true}}';
+$protectedSettingsString = '{"proxy":{"username":"[username]","password": "[password]"}}';
-The [InfluxData Telegraf agent](../essentials/collect-custom-metrics-linux-telegraf.md) is used to collect performance data from Linux computers to send to Azure Monitor Metrics.
+New-AzConnectedMachineExtension -Name AzureMonitorWindowsAgent -ExtensionType AzureMonitorWindowsAgent -Publisher Microsoft.Azure.Monitor -ResourceGroupName <resource-group-name> -MachineName <arc-server-name> -Location <arc-server-location> -Setting $settingsString -ProtectedSetting $protectedSettingsString
+```
-Use the Telegraf agent if you need to:
+# [Linux Arc-enabled server](#tab/PowerShellLinuxArc)
-* Send data to [Azure Monitor Metrics](../essentials/data-platform-metrics.md) to analyze it with [Metrics Explorer](../essentials/metrics-getting-started.md) and to take advantage of features such as near-real-time [metric alerts](../alerts/alerts-metric-overview.md) and [autoscale](../autoscale/autoscale-overview.md) (Linux only).
+```powershell
+$settingsString = '{"proxy":{"mode":"application","address":"http://[address]:[port]","auth": true}}';
+$protectedSettingsString = '{"proxy":{"username":"[username]","password": "[password]"}}';
-## Virtual machine extensions
+New-AzConnectedMachineExtension -Name AzureMonitorLinuxAgent -ExtensionType AzureMonitorLinuxAgent -Publisher Microsoft.Azure.Monitor -ResourceGroupName <resource-group-name> -MachineName <arc-server-name> -Location <arc-server-location> -Setting $settingsString -ProtectedSetting $protectedSettingsString
+```
-The [Azure Monitor agent](./azure-monitor-agent-manage.md#virtual-machine-extension-details) is only available as a virtual machine extension. The Log Analytics extension for [Windows](../../virtual-machines/extensions/oms-windows.md) and [Linux](../../virtual-machines/extensions/oms-linux.md) install the Log Analytics agent on Azure virtual machines. These are the same agents described above but allow you to manage them through [virtual machine extensions](../../virtual-machines/extensions/overview.md). You should use extensions to install and manage the agents whenever possible.
++
+### Log Analytics gateway configuration
+
+1. Follow the preceding instructions to configure proxy settings on the agent and provide the IP address and port number that corresponds to the gateway server. If you've deployed multiple gateway servers behind a load balancer, the agent proxy configuration is the virtual IP address of the load balancer instead.
+1. Add the **configuration endpoint URL** to fetch data collection rules to the allowlist for the gateway
+ `Add-OMSGatewayAllowedHost -Host global.handler.control.monitor.azure.com`
+ `Add-OMSGatewayAllowedHost -Host <gateway-server-region-name>.handler.control.monitor.azure.com`.
+ (If you're using private links on the agent, you must also add the [data collection endpoints](../essentials/data-collection-endpoint-overview.md#components-of-a-data-collection-endpoint).)
+1. Add the **data ingestion endpoint URL** to the allowlist for the gateway
+ `Add-OMSGatewayAllowedHost -Host <log-analytics-workspace-id>.ods.opinsights.azure.com`.
+1. Restart the **OMS Gateway** service to apply the changes
+ `Stop-Service -Name <gateway-name>`
+ `Start-Service -Name <gateway-name>`.
+
+### Private link configuration
+
+To configure the agent to use private links for network communications with Azure Monitor, follow instructions to [enable network isolation](./azure-monitor-agent-data-collection-endpoint.md#enable-network-isolation-for-the-azure-monitor-agent) by using [data collection endpoints](azure-monitor-agent-data-collection-endpoint.md).
-On hybrid machines, use [Azure Arc-enabled servers](../../azure-arc/servers/manage-vm-extensions.md) to deploy the Azure Monitor agent, Log Analytics, and Azure Monitor Dependency VM extensions.
+## Compare to legacy agents
-## Supported operating systems
+The tables below compare Azure Monitor Agent with the legacy Azure Monitor telemetry agents for Windows and Linux.
-The following tables list the operating systems that are supported by the Azure Monitor agents. See the documentation for each agent for unique considerations and for the installation process. See Telegraf documentation for its supported operating systems. All operating systems are assumed to be x64. x86 is not supported for any operating system.
+### Windows agents
+
+| | | Azure Monitor Agent | Log Analytics Agent | Diagnostics extension (WAD) |
+| - | - | - | - | - |
+| **Environments supported** | | | | |
+| | Azure | X | X | X |
+| | Other cloud (Azure Arc) | X | X | |
+| | On-premises (Azure Arc) | X | X | |
+| | Windows Client OS | X (Preview) | | |
+| **Data collected** | | | | |
+| | Event Logs | X | X | X |
+| | Performance | X | X | X |
+| | File based logs | X (Preview) | X | X |
+| | IIS logs | X (Preview) | X | X |
+| | ETW events | | | X |
+| | .NET app logs | | | X |
+| | Crash dumps | | | X |
+| | Agent diagnostics logs | | | X |
+| **Data sent to** | | | | |
+| | Azure Monitor Logs | X | X | |
+| | Azure Monitor Metrics<sup>1</sup> | X | | X |
+| | Azure Storage | | | X |
+| | Event Hub | | | X |
+| **Services and features supported** | | | | |
+| | Microsoft Sentinel | X ([View scope](#supported-services-and-features)) | X | |
+| | VM Insights | | X (Preview) | |
+| | Azure Automation | | X | |
+| | Microsoft Defender for Cloud | | X | |
+
+### Linux agents
-### Windows
+| | | Azure Monitor Agent | Log Analytics Agent | Diagnostics extension (LAD) | Telegraf agent |
+| - | - | - | - | - | - |
+| **Environments supported** | | | | | |
+| | Azure | X | X | X | X |
+| | Other cloud (Azure Arc) | X | X | | X |
+| | On-premises (Azure Arc) | X | X | | X |
+| **Data collected** | | | | | |
+| | Syslog | X | X | X | |
+| | Performance | X | X | X | X |
+| | File based logs | X (Preview) | | | |
+| **Data sent to** | | | | | |
+| | Azure Monitor Logs | X | X | | |
+| | Azure Monitor Metrics<sup>1</sup> | X | | | X |
+| | Azure Storage | | | X | |
+| | Event Hub | | | X | |
+| **Services and features supported** | | | | | |
+| | Microsoft Sentinel | X ([View scope](#supported-services-and-features)) | X | | |
+| | VM Insights | X (Preview) | X | | |
+| | Container Insights | X (Preview) | X | | |
+| | Azure Automation | | X | | |
+| | Microsoft Defender for Cloud | | X | | |
+
+<sup>1</sup> To review other limitations of using Azure Monitor Metrics, see [quotas and limits](../essentials/metrics-custom-overview.md#quotas-and-limits). On Linux, using Azure Monitor Metrics as the only destination is supported in v.1.10.9.0 or higher.
+
+### Supported operating systems
+
+The following tables list the operating systems that Azure Monitor Agent and the legacy agents support. All operating systems are assumed to be x64. x86 isn't supported for any operating system.
+
+#### Windows
| Operating system | Azure Monitor agent | Log Analytics agent | Diagnostics extension |
|:---|:---:|:---:|:---:|
The following tables list the operating systems that are supported by the Azure
<sup>1</sup> Running the OS on server hardware, for example, machines that are always connected, always turned on, and not running other workloads (PC, office, browser)<br> <sup>2</sup> Using the Azure Monitor agent [client installer (preview)](./azure-monitor-agent-windows-client.md)
-### Linux
+#### Linux
| Operating system | Azure Monitor agent <sup>1</sup> | Log Analytics agent <sup>1</sup> | Diagnostics extension <sup>2</sup> |
|:---|:---:|:---:|:---:|
-| AlmaLinux 8.* | X | X | |
+| AlmaLinux | X | X | |
| Amazon Linux 2017.09 | | X | |
| Amazon Linux 2 | | X | |
| CentOS Linux 8 | X <sup>3</sup> | X | |
The following tables list the operating systems that are supported by the Azure
| Red Hat Enterprise Linux Server 7 | X | X | X |
| Red Hat Enterprise Linux Server 6 | | X | |
| Red Hat Enterprise Linux Server 6.7+ | | X | X |
-| Rocky Linux 8.* | X | X | |
+| Rocky Linux | X | X | |
| SUSE Linux Enterprise Server 15.2 | X <sup>3</sup> | | |
| SUSE Linux Enterprise Server 15.1 | X <sup>3</sup> | X | |
| SUSE Linux Enterprise Server 15 SP1 | X | X | |
The following tables list the operating systems that are supported by the Azure
| SUSE Linux Enterprise Server 12 SP5 | X | X | X |
| SUSE Linux Enterprise Server 12 | X | X | X |
| Ubuntu 22.04 LTS | X | | |
-| Ubuntu 20.04 LTS | X | X | X <sup>4</sup> |
+| Ubuntu 20.04 LTS | X | X | X |
| Ubuntu 18.04 LTS | X | X | X |
| Ubuntu 16.04 LTS | X | X | X |
| Ubuntu 14.04 LTS | | X | X |
The following tables list the operating systems that are supported by the Azure
<sup>2</sup> Known issue collecting Syslog events in versions prior to 1.9.0.<br> <sup>3</sup> Not all kernel versions are supported. Check the supported kernel versions in the following table.
-## Next steps
+> [!NOTE]
+> For Dependency Agent Linux support, see [Dependency Agent documentation](../vm/vminsights-dependency-agent-maintenance.md#dependency-agent-linux-support).
-For more information on each of the agents, see the following articles:
+## Next steps
-- [Overview of the Azure Monitor agent](./azure-monitor-agent-overview.md)
-- [Overview of the Log Analytics agent](./log-analytics-agent.md)
-- [Azure Diagnostics extension overview](./diagnostics-extension-overview.md)
-- [Collect custom metrics for a Linux VM with the InfluxData Telegraf agent](../essentials/collect-custom-metrics-linux-telegraf.md)
+- [Install the Azure Monitor Agent](azure-monitor-agent-manage.md) on Windows and Linux virtual machines.
+- [Create a data collection rule](data-collection-rule-azure-monitor-agent.md) to collect data from the agent and send it to Azure Monitor.
azure-monitor Azure Monitor Agent Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-migration.md
Title: Migrate from legacy agents to the new Azure Monitor agent
-description: This article provides guidance for migrating from the existing legacy agents to the new Azure Monitor agent (AMA) and data collection rules (DCR).
+ Title: Migrate from legacy agents to Azure Monitor Agent
+description: This article provides guidance for migrating from the existing legacy agents to the new Azure Monitor Agent (AMA) and data collection rules (DCR).
Previously updated : 6/22/2022 Last updated : 08/04/2022
-# Customer intent: As an Azure account administrator, I want to use the available Azure Monitor tools to migrate from Log Analytics agent to Azure Monitor Agent and track the status of the migration in my account.
+# Customer intent: As an IT manager, I want to understand if and when I should move from using legacy agents to Azure Monitor Agent.
-# Migrate to Azure Monitor agent from Log Analytics agent
-The [Azure Monitor agent (AMA)](azure-monitor-agent-overview.md) collects monitoring data from the guest operating system of Azure and hybrid virtual machines and delivers it to Azure Monitor where it can be used by different features, insights, and other services such as [Microsoft Sentinel](../../sentintel/../sentinel/overview.md) and [Microsoft Defender for Cloud](../../defender-for-cloud/defender-for-cloud-introduction.md). All of the data collection configuration is handled via [Data Collection Rules](../essentials/data-collection-rule-overview.md). The Azure Monitor agent is meant to replace the Log Analytics agent (also known as MMA and OMS) for both Windows and Linux machines. This article provides high-level guidance on when and how to migrate to the new Azure Monitor agent (AMA) and the data collection rules (DCR) that define the data the agent should collect.
+# Migrate to Azure Monitor Agent from Log Analytics agent
+[Azure Monitor Agent (AMA)](./agents-overview.md) collects monitoring data from the guest operating system of Azure and hybrid virtual machines. The agent delivers the data to Azure Monitor for use by features, insights, and other services, such as [Microsoft Sentinel](../../sentinel/overview.md) and [Microsoft Defender for Cloud](../../defender-for-cloud/defender-for-cloud-introduction.md). Azure Monitor Agent replaces the Log Analytics agent (also known as MMA and OMS) for both Windows and Linux machines and introduces a simplified, flexible method of configuring data collection called [Data Collection Rules (DCRs)](../essentials/data-collection-rule-overview.md). This article provides high-level guidance on when and how to migrate to the new Azure Monitor Agent (AMA) based on the agent's benefits and limitations.
-## Why should I migrate to the Azure Monitor agent?
+## Benefits
- **Security and performance**
- - AMA uses Managed Identity or Azure Active Directory (Azure AD) tokens (for clients) which are much more secure than the legacy authentication methods.
- - AMA can provide higher events per second (EPS) upload rate compared to legacy agents
-- **Cost savings** via efficient data collection [using Data Collection Rules](data-collection-rule-azure-monitor-agent.md). This is one of the most useful advantages of using AMA.
- - DCRs allow granular targeting of machines connected to a workspace to collect data from as compared to the ΓÇ£all or nothingΓÇ¥ mode that legacy agents have.
- - Using DCRs, you can filter out data to remove unused events and save additional costs.
-
+ - AMA uses Managed Identity or Azure Active Directory (Azure AD) tokens (for clients), which are much more secure than the legacy authentication methods.
+ - AMA provides a higher events per second (EPS) upload rate compared to legacy agents.
+- **Cost savings** through efficient data collection [using Data Collection Rules](data-collection-rule-azure-monitor-agent.md). This is one of the most useful advantages of using AMA.
+ - DCRs let you configure data collection for specific machines connected to a workspace as compared to the "all or nothing" mode that legacy agents have.
+ - Using DCRs, you can define which data to ingest and which data to filter out to reduce workspace clutter and save on costs.
- **Simpler management** of data collection, including ease of troubleshooting
- - **Multihoming** on both Windows and Linux is possible easily
- Every action across the data collection lifecycle, from onboarding/setup to deployment to updates and changes over time, is significantly easier and scalable thanks to agent configuration becoming centralized and 'in the cloud' as compared to configuring things on every machine.
- - Enabling/disabling of additional capabilities or services (Sentinel, Defender for Cloud, VM Insights, etc.) is more transparent and controlled, using the extensibility architecture of AMA.
-- **A single agent** that will consolidate all the features necessary to address all telemetry data collection needs across servers and client devices (running Windows 10, 11) as compared to running various different monitoring agents. This is the eventual goal, though AMA is currently converging with the Log Analytics agents.
-
-## When should I migrate to the Azure Monitor agent?
-Your migration plan to the Azure Monitor agent should include the following considerations:
-
-|Consideration |Description |
-|||
-|**Environment requirements** | Verify that your environment is currently supported by the AMA. For more information, see [Supported operating systems](./agents-overview.md#supported-operating-systems). |
-|**Current and new feature requirements** | While the AMA provides [several new features](#current-capabilities), such as filtering, scoping, and multihoming, it is not yet at parity with the legacy Log Analytics agent.As you plan your migration, make sure that the features your organization requires are already supported by the AMA. You may decide to continue using the Log Analytics agent for now, and migrate at a later date. See [Supported services and features](./azure-monitor-agent-overview.md#supported-services-and-features) for a current status of features that are supported and that may be in preview. |
+ - Easy **multihoming** on Windows and Linux.
+ - Centralized, 'in the cloud' agent configuration makes every action, across the data collection lifecycle, simpler and more easily scalable, from onboarding to deployment to updates and changes over time.
+ - Greater transparency and control of more capabilities and services, such as Sentinel, Defender for Cloud, and VM Insights.
+- **A single agent** that consolidates all features necessary to address all telemetry data collection needs across servers and client devices (running Windows 10, 11). This is the goal, though Azure Monitor Agent is still converging with the Log Analytics agents.
-> [!IMPORTANT]
-> The Log Analytics agent will be [retired on **31 August, 2024**](https://azure.microsoft.com/updates/were-retiring-the-log-analytics-agent-in-azure-monitor-on-31-august-2024/). If you are currently using the Log Analytics agent with Azure Monitor or other supported features and services, you should start planning your migration to the Azure Monitor agent using the information in this article.
+## When should I migrate to the Azure Monitor Agent?
+Your migration plan to the Azure Monitor Agent should include the following considerations:
-## Current capabilities
+- **Environment requirements:** Azure Monitor Agent supports [these operating systems](./agents-overview.md#supported-operating-systems) today. Support for future operating system versions, environment support, and networking requirements will only be provided in this new agent. If Azure Monitor Agent supports your current environment, start transitioning to it.
-Azure Monitor agent currently supports the following core functionality:
+- **Current and new feature requirements:** Azure Monitor Agent introduces several new capabilities, such as filtering, scoping, and multi-homing. But it isn't at parity yet with the current agents for other functionality. For more information, see [Azure Monitor Agent's supported services and features](agents-overview.md#supported-services-and-features).
-- **Collect guest logs and metrics** from any machine in Azure, in other clouds, or on-premises. [Azure Arc-enabled servers](../../azure-arc/servers/overview.md) are required for machines outside of Azure.
-- **Centrally manage data collection configuration** using [data collection rules](/azure/azure-monitor/agents/data-collection-rule-overview), and management configuration using Azure Resource Manager (ARM) templates or policies.
-- **Use Windows event filtering or multi-homing** for Windows or Linux logs.
-- **Improved extension management.** The Azure Monitor agent uses a new method of handling extensibility that's more transparent and controllable than management packs and Linux plug-ins in the current Log Analytics agents.
+ Most new capabilities in Azure Monitor will be made available only with Azure Monitor Agent. Review whether Azure Monitor Agent has the features you require and if there are some features that you can temporarily do without to get other important features in the new agent.
-> [!NOTE]
-> Windows and Linux machines that reside on cloud platforms other than Azure, or are on-premises machines, must be Azure Arc-enabled so that the AMA can send logs to the Log Analytics workspace. For more information, see:
->
-> - [What are Azure Arc-enabled servers?](../../azure-arc/servers/overview.md)
-> - [Overview of Azure Arc-enabled servers agent](../../azure-arc/servers/agent-overview.md)
-> - [Plan and deploy Azure Arc-enabled servers at scale](../../azure-arc/servers/plan-at-scale-deployment.md)
--
-## Gap analysis between agents
-The following tables show gap analyses for the **log types** that are currently collected by each agent. This will be updated as support for AMA grows towards parity with the Log Analytics agent. For a general comparison of Azure Monitor agents, see [Overview of Azure Monitor agents](../agents/azure-monitor-agent-overview.md).
+ If Azure Monitor Agent has all the core capabilities you need, start transitioning to it. If there are critical features that you require, continue with the current agent until Azure Monitor Agent reaches parity.
+- **Tolerance for rework:** If you're setting up a new environment with resources such as deployment scripts and onboarding templates, assess the effort involved. If the setup will take a significant amount of work, consider setting up your new environment with the new agent as it's now generally available.
> [!IMPORTANT]
-> If you use Microsoft Sentinel, see [Gap analysis for Microsoft Sentinel](../../sentinel/ama-migrate.md#gap-analysis-between-agents) for a comparison of the additional data collected by Microsoft Sentinel.
-
+> The Log Analytics agent will be [retired on **31 August, 2024**](https://azure.microsoft.com/updates/were-retiring-the-log-analytics-agent-in-azure-monitor-on-31-august-2024/). If you are currently using the Log Analytics agent with Azure Monitor or other supported features and services, you should start planning your migration to Azure Monitor Agent using the information in this article.
-### Windows logs
+## Should I install Azure Monitor Agent together with a legacy agent?
-|Log type / Support |Azure Monitor agent support |Log Analytics agent support |
-||||
-| **Security Events** | Yes | No |
-| **Performance counters** | Yes | Yes |
-| **Windows Event Logs** | Yes | Yes |
-| **Filtering by event ID** | Yes | No |
-| **Text logs** | Yes | Yes |
-| **IIS logs** | Yes | Yes |
-| **Application and service logs** | Yes | Yes |
-| **Multi-homing** | Yes | Yes |
+Azure Monitor Agent can run alongside the legacy Log Analytics agents on the same machine so that you can continue to use their existing functionality during evaluation or migration. While this lets you begin the transition despite the new agent's current limitations, keep the following considerations in mind:
+- Be careful not to collect duplicate data, because duplicates can skew query results and affect downstream features like alerts, dashboards, or workbooks. For example, VM insights uses the Log Analytics agent to send performance data to a Log Analytics workspace. You might also have configured the workspace to collect Windows events and Syslog events from agents. If you install Azure Monitor Agent and create a data collection rule for these events and performance data, you'll collect duplicate data. If you must collect the same data with both agents, ensure they're **collecting from different machines** or **going to separate destinations**. Collecting duplicate data also generates more charges for data ingestion and retention.
+- Running two telemetry agents on the same machine consumes double the resources, including, but not limited to CPU, memory, storage space, and network bandwidth.
-### Linux logs
+> [!NOTE]
+> When you use both agents during evaluation or migration, you can use the **Category** column of the [Heartbeat](/azure/azure-monitor/reference/tables/Heartbeat) table in your Log Analytics workspace, and filter for **Azure Monitor Agent**.
-|Log type / Support |Azure Monitor agent support |Log Analytics agent support |
-||||
-| **Syslog** | Yes | Yes |
-| **Performance counters** | Yes | Yes |
-| **Text logs** | Yes | Yes |
-| **Multi-homing** | Yes | No |
+## Current capabilities
+For full details about the capabilities of Azure Monitor Agent and a comparison with legacy agent capabilities, see [Azure Monitor Agent overview](../agents/agents-overview.md).
-## Test migration by using the Azure portal
-To ensure safe deployment during migration, you should begin testing with a few resources in your nonproduction environment that are running the existing Log Analytics agent. After you can validate the data collected on these test resources, roll out to production by following the same steps.
+If you use Microsoft Sentinel, see [Gap analysis for Microsoft Sentinel](../../sentinel/ama-migrate.md#gap-analysis-between-agents) for a comparison of the extra data collected by Microsoft Sentinel.
-See [create new data collection rules](./data-collection-rule-azure-monitor-agent.md#create-data-collection-rule-and-association) to start collecting some of the existing data types. Once you validate data is flowing as expected with the Azure Monitor agent, check the `Category` column in the [Heartbeat](/azure/azure-monitor/reference/tables/heartbeat) table for the value *Azure Monitor Agent* for AMA collected data. Ensure it matches data flowing through the existing Log Analytics agent.
+## Test migration
+To ensure safe deployment during migration, begin testing with a few resources running the existing Log Analytics agent in your nonproduction environment. After you validate the data collected on these test resources, roll out to production by following the same steps.
+See [create new data collection rules](./data-collection-rule-azure-monitor-agent.md#create-data-collection-rule-and-association) to start collecting some of the existing data types. After you validate that data is flowing as expected with Azure Monitor Agent, check the `Category` column in the [Heartbeat](/azure/azure-monitor/reference/tables/heartbeat) table for the value *Azure Monitor Agent* for AMA collected data. Ensure it matches data flowing through the existing Log Analytics agent.
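
One way to run that check without leaving PowerShell is to query the workspace directly. A minimal sketch, assuming the Az.OperationalInsights module is installed and `<workspace-id>` is replaced with your workspace GUID:

```powershell
# Count heartbeats per agent type and computer to confirm which agent is reporting.
$query = 'Heartbeat | summarize count() by Category, Computer'
Invoke-AzOperationalInsightsQuery -WorkspaceId '<workspace-id>' -Query $query |
    Select-Object -ExpandProperty Results
```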
## At-scale migration using Azure Policy

[Azure Policy](../../governance/policy/overview.md) and [Resource Manager templates](../resource-manager-samples.md) provide scalability to migrate a large number of agents.
Start by analyzing your current monitoring setup with the Log Analytics agent us
> [!IMPORTANT]
> Before you deploy to a large number of agents, you should consider [configuring the workspace](agent-data-sources.md) to disable data collection for the Log Analytics agent. If you leave it enabled, you may collect duplicate data resulting in increased cost until you remove the Log Analytics agents from your virtual machines. Alternatively, you may choose to have duplicate collection during the migration period until you can confirm that the AMA has been deployed and configured correctly.
-See [Using Azure Policy](azure-monitor-agent-manage.md#using-azure-policy) for details on deploying Azure Monitor agent across a set of virtual machines. Associate the agents to the data collection rules developed during your [testing](#test-migration-by-using-the-azure-portal).
+See [Using Azure Policy](azure-monitor-agent-manage.md#using-azure-policy) for details on deploying Azure Monitor Agent across a set of virtual machines. Associate the agents to the data collection rules developed during your [testing](#test-migration).
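
For reference, this is roughly what the built-in policies automate per machine: installing the agent extension through the extension framework. A sketch for a single Windows VM, with placeholder resource names:

```powershell
# Install the Azure Monitor Agent extension on one Windows VM (placeholders throughout).
Set-AzVMExtension -ExtensionName AzureMonitorWindowsAgent `
    -ExtensionType AzureMonitorWindowsAgent `
    -Publisher Microsoft.Azure.Monitor `
    -ResourceGroupName '<resource-group-name>' `
    -VMName '<virtual-machine-name>' `
    -Location '<location>' `
    -TypeHandlerVersion 1.0
```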
-Validate that data is flowing as expected with the Azure Monitor agent and that all downstream dependencies like dashboards, alerts, and runbook workers. Workbooks should all continue to function using data from the new agent.
+Validate that data is flowing as expected with Azure Monitor Agent, and that all downstream dependencies, like dashboards, alerts, and runbook workers, continue to function using data from the new agent.
-When you confirm that data is being collected properly, [uninstall the Log Analytics agent](./agent-manage.md#uninstall-agent) from the resources. Don't uninstall it if you need to use it for System Center Operations Manager scenarios or others solutions not yet available on the Azure Monitor agent. Clean up any configuration files, workspace keys, or certificates that were used previously by the Log Analytics agent.
+When you confirm that data is being collected properly, [uninstall the Log Analytics agent](./agent-manage.md#uninstall-agent) from the resources. Don't uninstall it if you need to use it for System Center Operations Manager scenarios or other solutions not yet available on Azure Monitor Agent. Clean up any configuration files, workspace keys, or certificates that were used previously by the Log Analytics agent.
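
As a sketch, removing the legacy agent extension from an Azure VM might look like the following; the extension names assumed here are the usual defaults ('MicrosoftMonitoringAgent' on Windows, 'OmsAgentForLinux' on Linux) and may differ in your environment:

```powershell
# Remove the legacy Log Analytics agent extension once AMA is validated.
Remove-AzVMExtension -ResourceGroupName '<resource-group-name>' `
    -VMName '<virtual-machine-name>' `
    -Name 'MicrosoftMonitoringAgent'   # use 'OmsAgentForLinux' on Linux VMs
```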
## Next steps

For more information, see:

-- [Overview of the Azure Monitor agents](agents-overview.md)
-- [AMA migration for Microsoft Sentinel](../../sentinel/ama-migrate.md)
-- [Frequently asked questions for AMA migration](/azure/azure-monitor/faq#azure-monitor-agent)
+- [Azure Monitor Agent overview](agents-overview.md)
+- [Azure Monitor Agent migration for Microsoft Sentinel](../../sentinel/ama-migrate.md)
+- [Frequently asked questions for Azure Monitor Agent migration](/azure/azure-monitor/faq#azure-monitor-agent)
azure-monitor Azure Monitor Agent Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-overview.md
- Title: Azure Monitor agent overview
-description: Overview of the Azure Monitor agent, which collects monitoring data from the guest operating system of virtual machines.
--- Previously updated : 7/21/2022----
-# Azure Monitor agent overview
-The Azure Monitor agent collects monitoring data from the guest operating system of [supported infrastructure](#supported-resource-types) and delivers it to Azure Monitor. This article provides an overview of the Azure Monitor agent and includes information on how to install it and configure data collection.
-If you're new to Azure Monitor, the recommendation is to use the Azure Monitor agent.
-
-For an introductory video that explains this new agent and includes a quick demo of how to set things up by using the Azure portal, see [ITOps Talk: Azure Monitor Agent](https://www.youtube.com/watch?v=f8bIrFU8tCs).
--
-## Relationship to other agents
-
-Eventually, the Azure Monitor agent will replace the following legacy monitoring agents that are currently used by Azure Monitor:
-- [Log Analytics agent](./log-analytics-agent.md): Sends data to a Log Analytics workspace and supports VM insights and monitoring solutions.
-- [Telegraf agent](../essentials/collect-custom-metrics-linux-telegraf.md): Sends data to Azure Monitor Metrics (Linux only).
-- [Diagnostics extension](./diagnostics-extension-overview.md): Sends data to Azure Monitor Metrics (Windows only), Azure Event Hubs, and Azure Storage.
-Currently, the Azure Monitor agent consolidates features from the Telegraf agent and Log Analytics agent, with [a few limitations](#current-limitations). See migration guidance [here](azure-monitor-agent-migration.md).
-In the future, it will also consolidate features from the Diagnostic extensions.
-
-In addition to consolidating this functionality into a single agent, the Azure Monitor agent provides the following benefits over the existing agents:
--- **Cost savings:**
- - Granular targeting via [data collection rules](../essentials/data-collection-rule-overview.md) to collect specific data types from specific machines, as compared to the "all or nothing" mode that Log Analytics agent supports.
- - XPath queries to filter Windows events get collected to help further reduce ingestion and storage costs.
-- **Simplified management of data collection:** Send data from Windows and Linux VMs to multiple Log Analytics workspaces (for example, "multihoming") or other [supported destinations](#data-sources-and-destinations). Every action across the data collection lifecycle, from onboarding to deployment to updates, is easier, scalable, and centralized in Azure by using data collection rules.
-- **Management of dependent solutions or
-- **Security and performance:** For authentication and security, the Azure Monitor agent uses Managed Identity for virtual machines and Azure Active Directory device tokens for clients. Both technologies are much more secure and "hack proof" than certificates or workspace keys that legacy agents use. This agent performs better at higher events-per-second upload rates compared to legacy agents.
-### Current limitations
-
- Not all Log Analytics solutions are supported yet. [View supported features and services](#supported-services-and-features).
-
-### Changes in data collection
-
-The methods for defining data collection for the existing agents are distinctly different from each other. Each method has challenges that are addressed with the Azure Monitor agent:
-- The Log Analytics agent gets its configuration from a Log Analytics workspace. It's easy to centrally configure but difficult to define independent definitions for different virtual machines. It can only send data to a Log Analytics workspace.
-- Diagnostic extension has a configuration for each virtual machine. It's easy to define independent definitions for different virtual machines but difficult to centrally manage. It can only send data to Azure Monitor Metrics, Azure Event Hubs, or Azure Storage. For Linux agents, the open-source Telegraf agent is required to send data to Azure Monitor Metrics.
-The Azure Monitor agent uses [data collection rules](../essentials/data-collection-rule-overview.md) to configure data to collect from each agent. Data collection rules enable manageability of collection settings at scale while still enabling unique, scoped configurations for subsets of machines. They're independent of the workspace and independent of the virtual machine, which allows them to be defined once and reused across machines and environments.
-
-For more information, see [Configure data collection for the Azure Monitor agent](data-collection-rule-azure-monitor-agent.md).
-
-## Coexistence with other agents
-
-The Azure Monitor agent can coexist (run side by side on the same machine) with the legacy Log Analytics agents so that you can continue to use their existing functionality during evaluation or migration. For this reason, you can begin transition even with the limitations, but you must review the following points carefully:
--- Be careful in collecting duplicate data because it could skew query results and affect downstream features like alerts, dashboards, or workbooks. For example, VM insights uses the Log Analytics agent to send performance data to a Log Analytics workspace. You might also have configured the workspace to collect Windows events and Syslog events from agents.-
- If you install the Azure Monitor agent and create a data collection rule for these same events and performance data, it will result in duplicate data. As a result, ensure you're not collecting the same data from both agents. If you are, ensure they're *collecting from different machines* or *going to separate destinations*.
-- Besides data duplication, this scenario would also generate more charges for data ingestion and retention.
-- Running two telemetry agents on the same machine would result in double the resource consumption, including but not limited to CPU, memory, storage space, and network bandwidth.
-> [!NOTE]
-> When you use both agents during evaluation or migration, you can use the **Category** column of the [Heartbeat](/azure/azure-monitor/reference/tables/Heartbeat) table in your Log Analytics workspace, and filter for **Azure Monitor Agent**.
-
-## Supported resource types
-
-| Resource type | Installation method | More information |
-|:|:|:|
-| Virtual machines, scale sets | [Virtual machine extension](./azure-monitor-agent-manage.md#virtual-machine-extension-details) | Installs the agent by using Azure extension framework. |
-| On-premises servers (Azure Arc-enabled servers) | [Virtual machine extension](./azure-monitor-agent-manage.md#virtual-machine-extension-details) (after installing the [Azure Arc agent](../../azure-arc/servers/deployment-options.md)) | Installs the agent by using Azure extension framework, provided for on-premises by first installing [Azure Arc agent](../../azure-arc/servers/deployment-options.md). |
-| Windows 10, 11 desktops, workstations | [Client installer (preview)](./azure-monitor-agent-windows-client.md) | Installs the agent by using a Windows MSI installer. |
-| Windows 10, 11 laptops | [Client installer (preview)](./azure-monitor-agent-windows-client.md) | Installs the agent by using a Windows MSI installer. The installer works on laptops, but the agent is *not optimized yet* for battery or network consumption. |
-
-## Supported regions
-
-Azure Monitor agent is available in all public regions and Azure Government clouds. It's not yet supported in air-gapped clouds. For more information, see [Product availability by region](https://azure.microsoft.com/global-infrastructure/services/?products=monitor&rar=true&regions=all).
-
-## Supported operating systems
-
-For a list of the Windows and Linux operating system versions that are currently supported by the Azure Monitor agent, see [Supported operating systems](agents-overview.md#supported-operating-systems).
-
-## Data sources and destinations
-
-The following table lists the types of data you can currently collect with the Azure Monitor agent by using data collection rules and where you can send that data. For a list of insights, solutions, and other solutions that use the Azure Monitor agent to collect other kinds of data, see [What is monitored by Azure Monitor?](../monitor-reference.md).
-
-The Azure Monitor agent sends data to Azure Monitor Metrics (preview) or a Log Analytics workspace supporting Azure Monitor Logs.
-
-| Data source | Destinations | Description |
-|:|:|:|
-| Performance | Azure Monitor Metrics (preview)<sup>1</sup> - Insights.virtualmachine namespace<br>Log Analytics workspace - [Perf](/azure/azure-monitor/reference/tables/perf) table | Numerical values measuring performance of different aspects of operating system and workloads |
-| Windows event logs | Log Analytics workspace - [Event](/azure/azure-monitor/reference/tables/Event) table | Information sent to the Windows event logging system |
-| Syslog | Log Analytics workspace - [Syslog](/azure/azure-monitor/reference/tables/syslog)<sup>2</sup> table | Information sent to the Linux event logging system |
-| Text logs | Log Analytics workspace - custom table | Events sent to log file on agent machine |
-
-<sup>1</sup> To review other limitations of using Azure Monitor Metrics, see [Quotas and limits](../essentials/metrics-custom-overview.md#quotas-and-limits). On Linux, using Azure Monitor Metrics as the only destination is supported in v1.10.9.0 or higher.<br>
-<sup>2</sup> Azure Monitor Linux Agent v1.15.2 or higher supports syslog RFC formats including Cisco Meraki, Cisco ASA, Cisco FTD, Sophos XG, Juniper Networks, Corelight Zeek, CipherTrust, NXLog, McAfee, and Common Event Format (CEF).
-
-## Supported services and features
-
-The following table shows the current support for the Azure Monitor agent with other Azure services.
-
-| Azure service | Current support | More information |
-|:|:|:|
-| [Microsoft Defender for Cloud](../../security-center/security-center-introduction.md) | Private preview | [Sign-up link](https://aka.ms/AMAgent) |
-| [Microsoft Sentinel](../../sentinel/overview.md) | <ul><li>Windows DNS logs: Private preview</li><li>Linux Syslog CEF: Private preview</li><li>Windows Forwarding Event (WEF): [Public preview](../../sentinel/data-connectors-reference.md#windows-forwarded-events-preview)</li><li>Windows Security Events: [Generally available](../../sentinel/connect-windows-security-events.md?tabs=AMA)</li></ul> | <ul><li>[Sign-up link](https://aka.ms/AMAgent)</li><li>[Sign-up link](https://aka.ms/AMAgent)</li><li>No sign-up needed </li><li>No sign-up needed</li></ul> |
-
-The following table shows the current support for the Azure Monitor agent with Azure Monitor features.
-
-| Azure Monitor feature | Current support | More information |
-|:|:|:|
-| Text logs and Windows IIS logs | Public preview | [Collect text logs with Azure Monitor agent (preview)](data-collection-text-log.md) |
-| Windows client installer | Public preview | [Set up Azure Monitor agent on Windows client devices](azure-monitor-agent-windows-client.md) |
-| [VM insights](../vm/vminsights-overview.md) | Private preview | [Sign-up link](https://aka.ms/amadcr-privatepreviews) |
-
-The following table shows the current support for the Azure Monitor agent with Azure solutions.
-
-| Solution | Current support | More information |
-|:|:|:|
-| [Change Tracking](../../automation/change-tracking/overview.md) | Supported as File Integrity Monitoring in the Microsoft Defender for Cloud Private Preview. | [Sign-up link](https://aka.ms/AMAgent) |
-| [Update Management](../../automation/update-management/overview.md) | Use Update Management v2 - Public preview | [Update management center (preview) documentation](/azure/update-center/) |
-
-## Costs
-
-There's no cost for the Azure Monitor agent, but you might incur charges for the data ingested. For information on Log Analytics data collection and retention and for customer metrics, see [Azure Monitor pricing](https://azure.microsoft.com/pricing/details/monitor/).
-
-## Security
-
-The Azure Monitor agent doesn't require any keys but instead requires a [system-assigned managed identity](../../active-directory/managed-identities-azure-resources/qs-configure-portal-windows-vm.md#system-assigned-managed-identity). You must have a system-assigned managed identity enabled on each virtual machine before you deploy the agent.
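
A minimal sketch of enabling that identity on an existing VM with Az PowerShell, using placeholder resource names:

```powershell
# Enable a system-assigned managed identity on an existing VM before deploying the agent.
$vm = Get-AzVM -ResourceGroupName '<resource-group-name>' -Name '<virtual-machine-name>'
Update-AzVM -ResourceGroupName '<resource-group-name>' -VM $vm -IdentityType SystemAssigned
```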
-
-## Networking
-
-The Azure Monitor agent supports Azure service tags. Both *AzureMonitor* and *AzureResourceManager* tags are required. It supports connecting via *direct proxies, Log Analytics gateway, and private links* as described in the following sections.
-
-### Firewall requirements
-
-| Cloud |Endpoint |Purpose |Port |Direction |Bypass HTTPS inspection|
-|||||--|--|
-| Azure Commercial |global.handler.control.monitor.azure.com |Access control service|Port 443 |Outbound|Yes |
-| Azure Commercial |`<virtual-machine-region-name>`.handler.control.monitor.azure.com |Fetch data collection rules for specific machine |Port 443 |Outbound|Yes |
-| Azure Commercial |`<log-analytics-workspace-id>`.ods.opinsights.azure.com |Ingest logs data |Port 443 |Outbound|Yes |
-| Azure Commercial | management.azure.com | Only needed if sending timeseries data (metrics) to Azure Monitor [Custom metrics](../essentials/metrics-custom-overview.md) database | Port 443 | Outbound | Yes |
-| Azure Government | Replace '.com' above with '.us' | Same as above | Same as above | Same as above| Same as above |
-| Azure China | Replace '.com' above with '.cn' | Same as above | Same as above | Same as above| Same as above |
-
-If you use private links on the agent, you must also add the [DCE endpoints](../essentials/data-collection-endpoint-overview.md#components-of-a-data-collection-endpoint).
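
To spot-check these requirements from a Windows machine before onboarding, you can test outbound reachability on port 443. This is only an illustrative connectivity check; `<log-analytics-workspace-id>` is a placeholder for your workspace ID:

```powershell
# Verify outbound HTTPS reachability to the control and ingestion endpoints.
Test-NetConnection -ComputerName 'global.handler.control.monitor.azure.com' -Port 443
Test-NetConnection -ComputerName '<log-analytics-workspace-id>.ods.opinsights.azure.com' -Port 443
```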
-
-### Proxy configuration
-
-If the machine connects through a proxy server to communicate over the internet, review the following requirements to understand the network configuration required.
-
-The Azure Monitor agent extensions for Windows and Linux can communicate either through a proxy server or a [Log Analytics gateway](./gateway.md) to Azure Monitor by using the HTTPS protocol. Use it for Azure virtual machines, Azure virtual machine scale sets, and Azure Arc for servers. Use the extensions settings for configuration as described in the following steps. Both anonymous and basic authentication by using a username and password are supported.
-
-> [!IMPORTANT]
-> Proxy configuration is not supported for [Azure Monitor Metrics (preview)](../essentials/metrics-custom-overview.md) as a destination. If you're sending metrics to this destination, it will use the public internet without any proxy.
-
-1. Use this flowchart to determine the values of the *settings* and *protectedSettings* parameters first.
-
- ![Diagram that shows a flowchart to determine the values of settings and protectedSettings parameters when you enable the extension.](media/azure-monitor-agent-overview/proxy-flowchart.png)
-
-1. After the values for the *settings* and *protectedSettings* parameters are determined, *provide these additional parameters* when you deploy the Azure Monitor agent by using PowerShell commands. Refer to the following examples.
-
-# [Windows VM](#tab/PowerShellWindows)
-
-```powershell
-$settingsString = '{"proxy":{"mode":"application","address":"http://[address]:[port]","auth": true}}';
-$protectedSettingsString = '{"proxy":{"username":"[username]","password": "[password]"}}';
-
-Set-AzVMExtension -ExtensionName AzureMonitorWindowsAgent -ExtensionType AzureMonitorWindowsAgent -Publisher Microsoft.Azure.Monitor -ResourceGroupName <resource-group-name> -VMName <virtual-machine-name> -Location <location> -TypeHandlerVersion 1.0 -SettingString $settingsString -ProtectedSettingString $protectedSettingsString
-```
-
-# [Linux VM](#tab/PowerShellLinux)
-
-```powershell
-$settingsString = '{"proxy":{"mode":"application","address":"http://[address]:[port]","auth": true}}';
-$protectedSettingsString = '{"proxy":{"username":"[username]","password": "[password]"}}';
-
-Set-AzVMExtension -ExtensionName AzureMonitorLinuxAgent -ExtensionType AzureMonitorLinuxAgent -Publisher Microsoft.Azure.Monitor -ResourceGroupName <resource-group-name> -VMName <virtual-machine-name> -Location <location> -TypeHandlerVersion 1.5 -SettingString $settingsString -ProtectedSettingString $protectedSettingsString
-```
-
-# [Windows Arc-enabled server](#tab/PowerShellWindowsArc)
-
-```powershell
-$settingsString = '{"proxy":{"mode":"application","address":"http://[address]:[port]","auth": true}}';
-$protectedSettingsString = '{"proxy":{"username":"[username]","password": "[password]"}}';
-
-New-AzConnectedMachineExtension -Name AzureMonitorWindowsAgent -ExtensionType AzureMonitorWindowsAgent -Publisher Microsoft.Azure.Monitor -ResourceGroupName <resource-group-name> -MachineName <arc-server-name> -Location <arc-server-location> -Setting $settingsString -ProtectedSetting $protectedSettingsString
-```
-
-# [Linux Arc-enabled server](#tab/PowerShellLinuxArc)
-
-```powershell
-$settingsString = '{"proxy":{"mode":"application","address":"http://[address]:[port]","auth": true}}';
-$protectedSettingsString = '{"proxy":{"username":"[username]","password": "[password]"}}';
-
-New-AzConnectedMachineExtension -Name AzureMonitorLinuxAgent -ExtensionType AzureMonitorLinuxAgent -Publisher Microsoft.Azure.Monitor -ResourceGroupName <resource-group-name> -MachineName <arc-server-name> -Location <arc-server-location> -Setting $settingsString -ProtectedSetting $protectedSettingsString
-```
---
-### Log Analytics gateway configuration
-
-1. Follow the preceding instructions to configure proxy settings on the agent and provide the IP address and port number that corresponds to the gateway server. If you've deployed multiple gateway servers behind a load balancer, the agent proxy configuration is the virtual IP address of the load balancer instead.
-1. Add the **configuration endpoint URL** to fetch data collection rules to the allowlist for the gateway
- `Add-OMSGatewayAllowedHost -Host global.handler.control.monitor.azure.com`
- `Add-OMSGatewayAllowedHost -Host <gateway-server-region-name>.handler.control.monitor.azure.com`.
- (If you're using private links on the agent, you must also add the [data collection endpoints](../essentials/data-collection-endpoint-overview.md#components-of-a-data-collection-endpoint).)
-1. Add the **data ingestion endpoint URL** to the allowlist for the gateway
- `Add-OMSGatewayAllowedHost -Host <log-analytics-workspace-id>.ods.opinsights.azure.com`.
-1. Restart the **OMS Gateway** service to apply the changes
- `Stop-Service -Name <gateway-name>`
- `Start-Service -Name <gateway-name>`.
-
-### Private link configuration
-
-To configure the agent to use private links for network communications with Azure Monitor, follow instructions to [enable network isolation](./azure-monitor-agent-data-collection-endpoint.md#enable-network-isolation-for-the-azure-monitor-agent) by using [data collection endpoints](azure-monitor-agent-data-collection-endpoint.md).
-
-## Next steps
--- [Install the Azure Monitor agent](azure-monitor-agent-manage.md) on Windows and Linux virtual machines.-- [Create a data collection rule](data-collection-rule-azure-monitor-agent.md) to collect data from the agent and send it to Azure Monitor.
azure-monitor Data Collection Rule Azure Monitor Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/data-collection-rule-azure-monitor-agent.md
To collect data from virtual machines using the Azure Monitor agent, you'll:
1. Create [data collection rules (DCR)](../essentials/data-collection-rule-overview.md) that define which data Azure Monitor agent sends to which destinations.
1. Associate the data collection rule to specific virtual machines.
-You can associate virtual machines to multiple data collection rules. This allows you to define each data collection rule to address a particular requirement, and associate the data collection rules to virtual machines based on the specific data you want to collect from each machine.
+ You can associate virtual machines to multiple data collection rules. This allows you to define each data collection rule to address a particular requirement, and associate the data collection rules to virtual machines based on the specific data you want to collect from each machine.
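
As a sketch of step 2, assuming a recent Az.Monitor module, an association between an existing rule and a virtual machine can be created like this (all IDs and names are placeholders):

```powershell
# Associate an existing data collection rule with a virtual machine.
$vmId  = '/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.Compute/virtualMachines/<vm-name>'
$dcrId = '/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.Insights/dataCollectionRules/<rule-name>'
New-AzDataCollectionRuleAssociation -TargetResourceId $vmId -AssociationName '<association-name>' -RuleId $dcrId
```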
## Create data collection rule and association
To send data to Log Analytics, create the data collection rule in the **same reg
### [Portal](#tab/portal)
-In the **Monitor** menu in the Azure portal, select **Data Collection Rules** from the **Settings** section. Click **Create** to create a new Data Collection Rule and assignment.
+1. From the **Monitor** menu, select **Data Collection Rules**.
+1. Select **Create** to create a new Data Collection Rule and associations.
-[![Screenshot of viewing data collection rules in Azure portal.](media/data-collection-rule-azure-monitor-agent/data-collection-rules-updated.png)](media/data-collection-rule-azure-monitor-agent/data-collection-rules-updated.png#lightbox)
+ [![Screenshot showing the Create button on the Data Collection Rules screen.](media/data-collection-rule-azure-monitor-agent/data-collection-rules-updated.png)](media/data-collection-rule-azure-monitor-agent/data-collection-rules-updated.png#lightbox)
+
+1. Provide a **Rule name** and specify a **Subscription**, **Resource Group**, **Region**, and **Platform Type**.
-Click **Create** to create a new rule and set of associations. Provide a **Rule name** and specify a **Subscription**, **Resource Group** and **Region**. This specifies where the DCR will be created. The virtual machines and their associations can be in any subscription or resource group in the tenant.
-Additionally, choose the appropriate **Platform Type** which specifies the type of resources this rule can apply to. Custom will allow for both Windows and Linux types. This allows for pre-curated creation experiences with options scoped to the selected platform type.
+ **Region** specifies where the DCR will be created. The virtual machines and their associations can be in any subscription or resource group in the tenant.
-[![Screenshot of Azure portal form to create new data collection rule.](media/data-collection-rule-azure-monitor-agent/data-collection-rule-basics-updated.png)](media/data-collection-rule-azure-monitor-agent/data-collection-rule-basics-updated.png#lightbox)
+ **Platform Type** specifies the type of resources this rule can apply to. Custom allows for both Windows and Linux types.
-In the **Resources** tab, add the resources (virtual machines, virtual machine scale sets, Arc for servers) that should have the Data Collection Rule applied. The Azure Monitor Agent will be installed on resources that don't already have it installed, and will enable Azure Managed Identity as well.
+ [![Screenshot showing the Basics tab of the Data Collection Rules screen.](media/data-collection-rule-azure-monitor-agent/data-collection-rule-basics-updated.png)](media/data-collection-rule-azure-monitor-agent/data-collection-rule-basics-updated.png#lightbox)
-> [!IMPORTANT]
-> If you need network isolation using private links for collecting data using agents from your resources, then select **Enable Data Collection Endpoints** and select a DCE for each virtual machine. See [Enable network isolation for the Azure Monitor Agent](azure-monitor-agent-data-collection-endpoint.md) for details.
+1. On the **Resources** tab, add the resources (virtual machines, virtual machine scale sets, Arc for servers) to which to associate the data collection rule. The portal will install Azure Monitor Agent on resources that don't already have it installed, and will also enable Azure Managed Identity.
+ > [!IMPORTANT]
+ > The portal enables System-Assigned managed identity on the target resources, in addition to existing User-Assigned Identities (if any). For existing applications, unless you specify the User-Assigned identity in the request, the machine will default to using System-Assigned Identity instead.
+ If you need network isolation using private links, select existing endpoints from the same region for the respective resources, or [create a new endpoint](../essentials/data-collection-endpoint-overview.md).
+ [![Screenshot showing the Resources tab of the Data Collection Rules screen.](media/data-collection-rule-azure-monitor-agent/data-collection-rule-virtual-machines-with-endpoint.png)](media/data-collection-rule-azure-monitor-agent/data-collection-rule-virtual-machines-with-endpoint.png#lightbox)
-On the **Collect and deliver** tab, click **Add data source** to add a data source and destination set. Select a **Data source type**, and the corresponding details to select will be displayed. For performance counters, you can select from a predefined set of objects and their sampling rate. For events, you can select from a set of logs or facilities and the severity level.
+1. On the **Collect and deliver** tab, select **Add data source** to add a data source and set a destination.
+1. Select a **Data source type**.
+1. Select which data you want to collect. For performance counters, you can select from a predefined set of objects and their sampling rate. For events, you can select from a set of logs and severity levels.
-[![Screenshot of Azure portal form to select basic performance counters in a data collection rule.](media/data-collection-rule-azure-monitor-agent/data-collection-rule-data-source-basic-updated.png)](media/data-collection-rule-azure-monitor-agent/data-collection-rule-data-source-basic-updated.png#lightbox)
+ [![Screenshot of Azure portal form to select basic performance counters in a data collection rule.](media/data-collection-rule-azure-monitor-agent/data-collection-rule-data-source-basic-updated.png)](media/data-collection-rule-azure-monitor-agent/data-collection-rule-data-source-basic-updated.png#lightbox)
+1. Select **Custom** to collect logs and performance counters that are not [currently supported data sources](azure-monitor-agent-overview.md#data-sources-and-destinations) or to [filter events using XPath queries](#filter-events-using-xpath-queries). You can then specify an [XPath](https://www.w3schools.com/xml/xpath_syntax.asp) to collect any specific values. See [Sample DCR](data-collection-rule-sample-agent.md) for an example.
-To specify other logs and performance counters from the [currently supported data sources](azure-monitor-agent-overview.md#data-sources-and-destinations) or to filter events using XPath queries, select **Custom**. You can then specify an [XPath ](https://www.w3schools.com/xml/xpath_syntax.asp) for any specific values to collect. See [Sample DCR](data-collection-rule-sample-agent.md) for an example.
+ [![Screenshot of Azure portal form to select custom performance counters in a data collection rule.](media/data-collection-rule-azure-monitor-agent/data-collection-rule-data-source-custom-updated.png)](media/data-collection-rule-azure-monitor-agent/data-collection-rule-data-source-custom-updated.png#lightbox)
-[![Screenshot of Azure portal form to select custom performance counters in a data collection rule.](media/data-collection-rule-azure-monitor-agent/data-collection-rule-data-source-custom-updated.png)](media/data-collection-rule-azure-monitor-agent/data-collection-rule-data-source-custom-updated.png#lightbox)
+1. On the **Destination** tab, add one or more destinations for the data source. You can select multiple destinations of the same or different types - for instance multiple Log Analytics workspaces (known as "multi-homing").
-On the **Destination** tab, add one or more destinations for the data source. You can select multiple destinations of same of different types, for instance multiple Log Analytics workspaces (i.e. "multi-homing"). Windows event and Syslog data sources can only send to Azure Monitor Logs. Performance counters can send to both Azure Monitor Metrics and Azure Monitor Logs.
+ You can send Windows event and Syslog data sources to Azure Monitor Logs only. You can send performance counters to both Azure Monitor Metrics and Azure Monitor Logs.
-[![Screenshot of Azure portal form to add a data source in a data collection rule.](media/data-collection-rule-azure-monitor-agent/data-collection-rule-destination.png)](media/data-collection-rule-azure-monitor-agent/data-collection-rule-destination.png#lightbox)
+ [![Screenshot of Azure portal form to add a data source in a data collection rule.](media/data-collection-rule-azure-monitor-agent/data-collection-rule-destination.png)](media/data-collection-rule-azure-monitor-agent/data-collection-rule-destination.png#lightbox)
-Click **Add Data Source** and then **Review + create** to review the details of the data collection rule and association with the set of VMs. Click **Create** to create it.
+1. Select **Add Data Source** and then **Review + create** to review the details of the data collection rule and association with the set of virtual machines.
+1. Select **Create** to create the data collection rule.
> [!NOTE]
-> After the data collection rule and associations have been created, it might take up to 5 minutes for data to be sent to the destinations.
+> It might take up to 5 minutes for data to be sent to the destinations after you create the data collection rule and associations.
-## Create rule and association in Azure portal
-
-You can use the Azure portal to create a data collection rule and associate virtual machines in your subscription to that rule. The Azure Monitor agent will be automatically installed and a managed identity created for any virtual machines that don't already have it installed.
-
-> [!IMPORTANT]
-> Creating a data collection rule using the portal also enables System-Assigned managed identity on the target resources, in addition to existing User-Assigned Identities (if any). For existing applications unless they specify the User-Assigned identity in the request, the machine will default to using System-Assigned Identity instead. [Learn More](../../active-directory/managed-identities-azure-resources/managed-identities-faq.md#what-identity-will-imds-default-to-if-dont-specify-the-identity-in-the-request)
---
-> [!NOTE]
-> If you wish to send data to Log Analytics, you must create the data collection rule in the **same region** where your Log Analytics workspace resides. The rule can be associated to machines in other supported region(s).
--
-## Limit data collection with custom XPath queries
-Since you're charged for any data collected in a Log Analytics workspace, you should collect only the data that you require. Using basic configuration in the Azure portal, you only have limited ability to filter events to collect. For Application and System logs, this is all logs with a particular severity. For Security logs, this is all audit success or all audit failure logs.
-
-To specify additional filters, you must use Custom configuration and specify an XPath that filters out the events you don't need. XPath entries are written in the form `LogName!XPathQuery`. For example, you may want to return only events from the Application event log with an event ID of 1035. The XPathQuery for these events would be `*[System[EventID=1035]]`. Since you want to retrieve the events from the Application event log, the XPath would be `Application!*[System[EventID=1035]]`.
-
-### Extracting XPath queries from Windows Event Viewer
-One of the ways to create XPath queries is to use Windows Event Viewer to extract XPath queries as shown below.
-
-* In step 5 when pasting over the 'Select Path' parameter value, you must append the log type category followed by '!' and then paste the copied value.
-
-[![Screenshot of steps in Azure portal showing the steps to create an XPath query in the Windows Event Viewer.](media/data-collection-rule-azure-monitor-agent/data-collection-rule-extract-xpath.png)](media/data-collection-rule-azure-monitor-agent/data-collection-rule-extract-xpath.png#lightbox)
-
-See [XPath 1.0 limitations](/windows/win32/wes/consuming-events#xpath-10-limitations) for a list of limitations in the XPath supported by Windows event log.
-
-> [!TIP]
-> You can use the PowerShell cmdlet `Get-WinEvent` with the `FilterXPath` parameter to test the validity of an XPathQuery locally on your machine first. The following script shows an example.
->
-> ```powershell
-> $XPath = '*[System[EventID=1035]]'
-> Get-WinEvent -LogName 'Application' -FilterXPath $XPath
-> ```
->
-> - **In the cmdlet above, the value for '-LogName' parameter is the initial part of the XPath query until the '!', while only the rest of the XPath query goes into the $XPath parameter.**
-> - If events are returned, the query is valid.
-> - If you receive the message *No events were found that match the specified selection criteria.*, the query may be valid, but there are no matching events on the local machine.
-> - If you receive the message *The specified query is invalid* , the query syntax is invalid.
-
-The following table shows examples for filtering events using a custom XPath.
-
-| Description | XPath |
-|:|:|
-| Collect only System events with Event ID = 4648 | `System!*[System[EventID=4648]]` |
-| Collect Security Log events with Event ID = 4648 and a process name of consent.exe | `Security!*[System[(EventID=4648)]] and *[EventData[Data[@Name='ProcessName']='C:\Windows\System32\consent.exe']]` |
-| Collect all Critical, Error, Warning, and Information events from the System event log except for Event ID = 6 (Driver loaded) | `System!*[System[(Level=1 or Level=2 or Level=3) and (EventID != 6)]]` |
-| Collect all success and failure Security events except for Event ID 4624 (Successful logon) | `Security!*[System[(band(Keywords,13510798882111488)) and (EventID != 4624)]]` |
--
-## Create rule and association using REST API
-
-Follow the steps below to create a data collection rule and associations using the REST API.
-
-> [!NOTE]
-> If you wish to send data to Log Analytics, you must create the data collection rule in the **same region** where your Log Analytics workspace resides. The rule can be associated to machines in other supported region(s).
### [API](#tab/api)
-1. Create a DCR file using the JSON format shown in [Sample DCR](data-collection-rule-sample-agent.md).
+1. Create a DCR file using the JSON format shown in [Sample DCR](data-collection-rule-sample-agent.md).
-2. Create the rule using the [REST API](/rest/api/monitor/datacollectionrules/create#examples).
+2. Create the rule using the [REST API](/rest/api/monitor/datacollectionrules/create#examples).
-3. Create an association for each virtual machine to the data collection rule using the [REST API](/rest/api/monitor/datacollectionruleassociations/create#examples).
+3. Create an association for each virtual machine to the data collection rule using the [REST API](/rest/api/monitor/datacollectionruleassociations/create#examples).
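
If you prefer to drive the REST API from PowerShell, step 3 might look like the following sketch; the resource IDs, the association name, and the api-version are placeholders you should adjust:

```powershell
# Create the rule-to-VM association with a direct REST call.
$payload = @{ properties = @{ dataCollectionRuleId = '<dcr-resource-id>' } } | ConvertTo-Json -Depth 5
Invoke-AzRestMethod -Method PUT `
    -Path '<vm-resource-id>/providers/Microsoft.Insights/dataCollectionRuleAssociations/<association-name>?api-version=2021-04-01' `
    -Payload $payload
```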
### [PowerShell](#tab/powershell)
In Windows, you can use Event Viewer to extract XPath queries as shown below.
When you paste the XPath query into the field on the **Add data source** screen, (step 5 in the picture below), you must append the log type category followed by '!'.
-[![Extract XPath](media/data-collection-rule-azure-monitor-agent/data-collection-rule-extract-xpath.png)](media/data-collection-rule-azure-monitor-agent/data-collection-rule-extract-xpath.png#lightbox)
+[![Screenshot of steps in Azure portal showing the steps to create an XPath query in the Windows Event Viewer.](media/data-collection-rule-azure-monitor-agent/data-collection-rule-extract-xpath.png)](media/data-collection-rule-azure-monitor-agent/data-collection-rule-extract-xpath.png#lightbox)
See [XPath 1.0 limitations](/windows/win32/wes/consuming-events#xpath-10-limitations) for a list of limitations in the XPath supported by Windows event log.
Examples of filtering events using a custom XPath:
- [Collect text logs using Azure Monitor agent.](data-collection-text-log.md)
- Learn more about the [Azure Monitor Agent](azure-monitor-agent-overview.md).
-- Learn more about [data collection rules](../essentials/data-collection-rule-overview.md).
+- Learn more about [data collection rules](../essentials/data-collection-rule-overview.md).
azure-monitor Diagnostics Extension Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/diagnostics-extension-overview.md
Azure Diagnostics extension is an [agent in Azure Monitor](../agents/agents-over
## Primary scenarios

The primary scenarios addressed by the diagnostics extension are:

-- Collect guest metrics into Azure Monitor Metrics.
-- Send guest logs and metrics to Azure storage for archiving.
-- Send guest logs and metrics to Azure event hubs to send outside of Azure.
+Use the Azure Diagnostics extension if you need to:
+
+- Send data to Azure Storage for archiving or to analyze it with tools such as [Azure Storage Explorer](../../vs-azure-tools-storage-manage-with-storage-explorer.md).
+- Send data to [Azure Monitor Metrics](../essentials/data-platform-metrics.md) to analyze it with [Metrics Explorer](../essentials/metrics-getting-started.md) and to take advantage of features such as near-real-time [metric alerts](../alerts/alerts-metric-overview.md) and [autoscale](../autoscale/autoscale-overview.md) (Windows only).
+- Send data to third-party tools by using [Azure Event Hubs](./diagnostics-extension-stream-event-hubs.md).
+- Collect [Boot Diagnostics](/troubleshoot/azure/virtual-machines/boot-diagnostics) to investigate VM boot issues.
+
+Limitations of the Azure Diagnostics extension:
+
+- Can only be used with Azure resources.
+- Limited ability to send data to Azure Monitor Logs.
## Comparison to Log Analytics agent
azure-monitor Log Analytics Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/log-analytics-agent.md
# Log Analytics agent overview
-The Azure Log Analytics agent collects telemetry from Windows and Linux virtual machines in any cloud, on-premises machines, and machines monitored by [System Center Operations Manager](/system-center/scom/). Collected data is sent to your Log Analytics workspace in Azure Monitor.
-
-The Log Analytics agent also supports insights and other services in Azure Monitor, such as [VM insights](../vm/vminsights-enable-overview.md), [Microsoft Defender for Cloud](../../security-center/index.yml), and [Azure Automation](../../automation/automation-intro.md). This article provides a detailed overview of the agent, system and network requirements, and deployment methods.
+This article provides a detailed overview of the Log Analytics agent and the agent's system and network requirements and deployment methods.
>[!IMPORTANT]
>The Log Analytics agent is on a **deprecation path** and won't be supported after **August 31, 2024**. If you use the Log Analytics agent to ingest data to Azure Monitor, [migrate to the new Azure Monitor agent](./azure-monitor-agent-migration.md) prior to that date. You might also see the Log Analytics agent referred to as Microsoft Monitoring Agent (MMA).
+## Primary scenarios
+
+Use the Log Analytics agent if you need to:
+
+* Collect logs and performance data from Azure virtual machines or hybrid machines hosted outside of Azure.
+* Send data to a Log Analytics workspace to take advantage of features supported by [Azure Monitor Logs](../logs/data-platform-logs.md), such as [log queries](../logs/log-query-overview.md).
+* Use [VM insights](../vm/vminsights-overview.md), which allows you to monitor your machines at scale and monitor their processes and dependencies on other resources and external processes.
+* Manage the security of your machines by using [Microsoft Defender for Cloud](../../security-center/security-center-introduction.md) or [Microsoft Sentinel](../../sentinel/overview.md).
+* Use [Azure Automation Update Management](../../automation/update-management/overview.md), [Azure Automation State Configuration](../../automation/automation-dsc-overview.md), or [Azure Automation Change Tracking and Inventory](../../automation/change-tracking/overview.md) to deliver comprehensive management of your Azure and non-Azure machines.
+* Use different [solutions](../monitor-reference.md#insights-and-curated-visualizations) to monitor a particular service or application.
+
+Limitations of the Log Analytics agent:
+
+- Can't send data to Azure Monitor Metrics, Azure Storage, or Azure Event Hubs.
+- Difficult to configure unique monitoring definitions for individual agents.
+- Difficult to manage at scale because each virtual machine has a unique configuration.
## Comparison to other agents

For a comparison between the Log Analytics and other agents in Azure Monitor, see [Overview of Azure Monitor agents](agents-overview.md).
azure-monitor Alerts Activity Log https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-activity-log.md
- Title: Create, view, and manage activity log alerts in Azure Monitor
-description: Create activity log alerts by using the Azure portal, an Azure Resource Manager template, and Azure PowerShell.
-- Previously updated : 2/23/2022----
-# Create, view, and manage activity log alerts by using Azure Monitor
-
-*Activity log alerts* are the alerts that get activated when a new activity log event occurs that matches the conditions specified in the alert. You create these alerts for Azure resources by using an Azure Resource Manager template. You can also create, update, or delete these alerts in the Azure portal.
-
-Typically, you create activity log alerts to receive notifications when specific changes occur to resources in your Azure subscription. Alerts are often scoped to particular resource groups or resources. For example, you might want to be notified when any virtual machine in the sample resource group `myProductionResourceGroup` is deleted. Or, you might want to get notified if any new roles are assigned to a user in your subscription.
-
-> [!IMPORTANT]
-> You can't create alerts on service health notifications by using the interface for creating activity log alerts. To learn more about how to create and use service health notifications, see [Receive activity log alerts on service health notifications](../../service-health/alerts-activity-log-service-notifications-portal.md).
-
-When you create alert rules, make sure that:
-- The subscription in the scope isn't different from the subscription where the alert is created.
-- The criteria must be the level, status, caller, resource group, resource ID, or resource type event category on which the alert is configured.
-- There's no `anyOf` condition or nested conditions in the alert configuration JSON. Only one `allOf` condition is allowed, with no further `allOf` or `anyOf` conditions.
-- When the category is `administrative`, you must specify at least one of the preceding criteria in your alert. You can't create an alert that activates every time an event is created in the activity logs.
-- Alerts can't be created for events in the `alert` category of the activity log.
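
As an illustration of these constraints, a valid condition is a single `allOf` list of leaf criteria. The following sketch (placeholder values, built in PowerShell and printed as JSON) shows one for successful VM deletions:

```powershell
# A single allOf condition (no anyOf, no nesting) for an activity log alert.
$condition = @{
    allOf = @(
        @{ field = 'category';      equals = 'Administrative' }
        @{ field = 'operationName'; equals = 'Microsoft.Compute/virtualMachines/delete' }
        @{ field = 'status';        equals = 'Succeeded' }
    )
}
$condition | ConvertTo-Json -Depth 5
```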
-## Azure portal
-
-You can use the Azure portal to create and modify activity log alert rules. The experience is integrated with an Azure activity log to ensure seamless alert creation for specific events of interest. On the Azure portal, you can create a new activity log alert rule, either from the Azure Monitor alerts pane, or from the Azure Monitor activity log pane.
-
-### Create an alert rule from the Azure Monitor alerts pane
-
-Here's how to create an activity log alert rule in the Azure portal:
-
-1. In the [Azure portal](https://portal.azure.com), select **Monitor**. The Monitor pane consolidates all your monitoring settings and data in one view.
-
-2. Select **Alerts** > **+ Create** > **Alert rule**.
-
- > [!TIP]
- > Most resource panes also have **Alerts** in their resource menu, under **Monitoring**. You can also create alert rules from there.
-
-3. In the **Scope** tab, click **Select scope**. Then, in the context pane that loads, select the target resource(s) that you want to alert on. Use **Filter by subscription**, **Filter by resource type**, and **Filter by location** drop-downs to find the resource you want to monitor. You can also use the search bar to find your resource.
-
- > [!NOTE]
- > As a target, you can select an entire subscription, a resource group, or one or more specific resources from the same subscription. If you choose a subscription or a resource group as a target, and you also select a resource type, the rule will apply to all resources of that type within the selected subscription or a resource group. If you choose a specific target resource, the rule will apply only to that resource. You can't select multiple subscriptions, or multiple resources from different subscriptions.
-
-4. If the selected resource has activity log operations that you can create alert rules on, you'll see that **Available signal types** lists **Activity Log**. You can view the full list of resource types supported for activity log alerts in [Azure resource provider operations](../../role-based-access-control/resource-provider-operations.md).
-
- :::image type="content" source="media/alerts-activity-log/select-target-new.png" alt-text="Screenshot of the target selection pane." lightbox="media/alerts-activity-log/select-target-new.png":::
-
-5. Once you have selected a target resource, click **Done**.
-
-6. Proceed to the **Condition** tab. Then, in the context pane that loads, you will see a list of signals supported for the resource, which includes those from various categories of **Activity Log**. Select the activity log signal or operation you want to create an alert rule on.
-
-7. You will see a chart for the activity log operation for the last six hours. Use the **Chart period** dropdown list to see a longer history for the operation.
-
-8. Under **Alert logic**, you can optionally define more filtering criteria:
-
- - **Event level**: The severity level of the event: _Verbose_, _Informational_, _Warning_, _Error_, or _Critical_.
- - **Status**: The status of the event: _Started_, _Failed_, or _Succeeded_.
- - **Event initiated by**: Also known as the caller. The email address or Azure Active Directory identifier of the user who performed the operation.
-
- > [!NOTE]
- > Defining at least one of these criteria helps you achieve more effective rules. For example, if the alert scope is an entire subscription, and the selected signal is `All Administrative Operations`, your rule will be more specific if you provide the event level, status, or initiation information.
-
- :::image type="content" source="media/alerts-activity-log/condition-selected-new.png" alt-text="Screenshot of the condition selection pane." lightbox="media/alerts-activity-log/condition-selected-new.png":::
-
-9. Proceed to the **Actions** tab, where you can define what actions and notifications are triggered when the alert rule generates an alert. You can add an action group to the alert rule either by selecting an existing action group or by creating a new action group.
-
-10. Proceed to the **Details** tab. Under **Project details**, select the resource group in which the alert rule resource will be saved. Under **Alert rule details**, specify the **Alert rule name**. You can also provide an **Alert rule description**.
-
- > [!NOTE]
- > The alert severity for activity log alerts can't currently be configured by the user. The severity level always defaults to **Sev4**.
--
-11. Proceed to the **Tags**, where you can set tags on the alert rule you're creating.
-12. Proceed to the **Review + create** tab, where you can review your selections before creating the alert rule. A quick automatic validation will also be performed, notifying you in case any information is missing or needs to be corrected. Once you're ready to create the alert rule, click **Create**.
-
-
-### Create an alert rule from the Azure Monitor activity log pane
-
-An alternative way to create an activity log alert is to start with an activity log event that already occurred, via the [activity log in the Azure portal](../essentials/activity-log.md#view-the-activity-log).
-
-1. On the **Azure Monitor - Activity log** pane, you can filter or find the desired event, and then create an alert on future similar events by selecting **Add activity log alert**.
-
- :::image type="content" source="media/alerts-activity-log/create-alert-rule-from-activity-log-event-new.png" alt-text="Screenshot of alert rule creation from an activity log event." lightbox="media/alerts-activity-log/create-alert-rule-from-activity-log-event-new.png":::
-
-2. The **Create alert rule** wizard opens, with the scope and condition already provided according to the previously selected activity log event. If necessary, you can edit and modify the scope and condition at this stage. Note that by default, the exact scope and condition for the new rule are copied from the original event attributes. For example, the exact resource on which the event occurred, and the specific user or service name who initiated the event, are both included by default in the new alert rule. If you want to make the alert rule more general, modify the scope and condition accordingly (see steps 3-9 in the section "Create an alert rule from the Azure Monitor alerts pane").
-
-3. Then follow steps 9-12 from the section, "Create an alert rule from the Azure Monitor alerts pane."
-
-### View and manage in the Azure portal
-
-1. In the Azure portal, select **Monitor** > **Alerts**. Then select **Alert rules**.
-
- The list of available alert rules appears.
-
-2. Filter or search for the activity log rule to modify.
-
- :::image type="content" source="media/alerts-activity-log/manage-alert-rules-new.png" alt-text="Screenshot of the alert rules management pane." lightbox="media/alerts-activity-log/manage-alert-rules-new.png":::
-
- You can use the available filters, _Subscription_, _Resource group_, _Resource_, _Signal type_, or _Status_, to find the activity rule that you want to edit.
-
-3. Select the alert rule to open it for editing. Make the required changes, and then select **Save**.
-
-## Azure Resource Manager template
-To create an activity log alert rule by using an Azure Resource Manager template, you create a resource of the type `microsoft.insights/activityLogAlerts`. Then you fill in all related properties. Here's a template that creates an activity log alert rule:
-
-```json
-{
- "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
- "contentVersion": "1.0.0.0",
- "parameters": {
- "activityLogAlertName": {
- "type": "string",
- "metadata": {
- "description": "Unique name (within the Resource Group) for the Activity log alert."
- }
- },
- "activityLogAlertEnabled": {
- "type": "bool",
- "defaultValue": true,
- "metadata": {
- "description": "Indicates whether or not the alert is enabled."
- }
- },
- "actionGroupResourceId": {
- "type": "string",
- "metadata": {
- "description": "Resource Id for the Action group."
- }
- }
- },
- "resources": [
- {
- "type": "Microsoft.Insights/activityLogAlerts",
- "apiVersion": "2017-04-01",
- "name": "[parameters('activityLogAlertName')]",
- "location": "Global",
- "properties": {
- "enabled": "[parameters('activityLogAlertEnabled')]",
- "scopes": [
- "[subscription().id]"
- ],
- "condition": {
- "allOf": [
- {
- "field": "category",
- "equals": "Administrative"
- },
- {
- "field": "operationName",
- "equals": "Microsoft.Resources/deployments/write"
- },
- {
- "field": "resourceType",
- "equals": "Microsoft.Resources/deployments"
- }
- ]
- },
- "actions": {
- "actionGroups":
- [
- {
- "actionGroupId": "[parameters('actionGroupResourceId')]"
- }
- ]
- }
- }
- }
- ]
-}
-```
-The previous sample JSON can be saved as, for example, *sampleActivityLogAlert.json*. You can deploy the sample by using [Azure Resource Manager in the Azure portal](../../azure-resource-manager/templates/deploy-portal.md).
-
-> [!NOTE]
-> Notice that the highest level that activity log alerts can be defined is the subscription level. There is no option to define an alert on two subscriptions. The definition should be to alert per subscription.
-
-The following fields are the options that you can use in the Azure Resource Manager template for the conditions fields. (Notice that **Resource Health**, **Advisor** and **Service Health** have extra properties fields for their special fields.)
-
-1. `resourceId`: The resource ID of the impacted resource in the activity log event that the alert should be generated on.
-1. `category`: The category of the activity log event. For example: `Administrative`, `ServiceHealth`, `ResourceHealth`, `Autoscale`, `Security`, `Recommendation`, or `Policy`.
-1. `caller`: The email address or Azure Active Directory identifier of the user who performed the operation of the activity log event.
-1. `level`: Level of the activity in the activity log event that the alert should be generated on. For example: `Critical`, `Error`, `Warning`, `Informational`, or `Verbose`.
-1. `operationName`: The name of the operation in the activity log event. For example: `Microsoft.Resources/deployments/write`.
-1. `resourceGroup`: Name of the resource group for the impacted resource in the activity log event.
-1. `resourceProvider`: For more information, see [Azure resource providers and types](../../azure-resource-manager/management/resource-providers-and-types.md). For a list that maps resource providers to Azure services, see [Resource providers for Azure services](../../azure-resource-manager/management/resource-providers-and-types.md).
-1. `status`: String describing the status of the operation in the activity event. For example: `Started`, `In Progress`, `Succeeded`, `Failed`, `Active`, or `Resolved`.
-1. `subStatus`: Usually, this field is the HTTP status code of the corresponding REST call. But it can also include other strings describing a substatus. Examples of HTTP status codes include `OK` (HTTP Status Code: 200), `No Content` (HTTP Status Code: 204), and `Service Unavailable` (HTTP Status Code: 503), among many others.
-1. `resourceType`: The type of the resource that was affected by the event. For example: `Microsoft.Resources/deployments`.
-
-For example:
-
-```json
-"condition": {
- "allOf": [
- {
- "field": "category",
- "equals": "Administrative"
- },
- {
- "field": "resourceType",
- "equals": "Microsoft.Resources/deployments"
- }
- ]
- }
-
-```
-
-For more information about the activity log fields, see [Azure activity log event schema](../essentials/activity-log-schema.md).
-
-> [!NOTE]
-> It might take up to 5 minutes for the new activity log alert rule to become active.
-
-## REST API
-The Azure Monitor Activity Log Alerts API is a REST API. It's fully compatible with the Azure Resource Manager REST API. You can use it with PowerShell, by using the Resource Manager cmdlet or the Azure CLI.
--
-### Deploy the Resource Manager template with PowerShell
-To use PowerShell to deploy the sample Resource Manager template shown in the previous [Azure Resource Manager template](#azure-resource-manager-template) section, use the following command:
-
-```powershell
-New-AzResourceGroupDeployment -ResourceGroupName "myRG" -TemplateFile sampleActivityLogAlert.json -TemplateParameterFile sampleActivityLogAlert.parameters.json
-```
-
-The *sampleActivityLogAlert.parameters.json* file contains the values provided for the parameters needed for alert rule creation.
-
-### Use activity log PowerShell cmdlets
-
-Activity log alerts have dedicated PowerShell cmdlets available:
--- [Set-AzActivityLogAlert](/powershell/module/az.monitor/set-azactivitylogalert): Creates a new activity log alert or updates an existing activity log alert.-- [Get-AzActivityLogAlert](/powershell/module/az.monitor/get-azactivitylogalert): Gets one or more activity log alert resources.-- [Enable-AzActivityLogAlert](/powershell/module/az.monitor/enable-azactivitylogalert): Enables an existing activity log alert and sets its tags.-- [Disable-AzActivityLogAlert](/powershell/module/az.monitor/disable-azactivitylogalert): Disables an existing activity log alert and sets its tags.-- [Remove-AzActivityLogAlert](/powershell/module/az.monitor/remove-azactivitylogalert): Removes an activity log alert.-
-### Azure CLI
-
-You can manage activity log alert rules by using dedicated Azure CLI commands under the set [az monitor activity-log alert](/cli/azure/monitor/activity-log/alert).
-
-To create a new activity log alert rule, use the following commands:
-
-1. [az monitor activity-log alert create](/cli/azure/monitor/activity-log/alert#az-monitor-activity-log-alert-create): Create a new activity log alert rule resource.
-2. [az monitor activity-log alert scope](/cli/azure/monitor/activity-log/alert/scope): Add scope for the created activity log alert rule.
-3. [az monitor activity-log alert action-group](/cli/azure/monitor/activity-log/alert/action-group): Add an action group to the activity log alert rule.
-
-To retrieve one activity log alert rule resource, use the Azure CLI command [az monitor activity-log alert show](/cli/azure/monitor/activity-log/alert#az-monitor-activity-log-alert-show). To view all activity log alert rule resources in a resource group, use [az monitor activity-log alert list](/cli/azure/monitor/activity-log/alert#az-monitor-activity-log-alert-list).
-You can remove activity log alert rule resources by using the Azure CLI command [az monitor activity-log alert delete](/cli/azure/monitor/activity-log/alert#az-monitor-activity-log-alert-delete).
-
-## Next steps
-
-- Learn about [webhook schema for activity logs](./activity-log-alerts-webhook.md).
-- Read an [overview of activity logs](./activity-log-alerts.md).
-- Learn more about [action groups](./action-groups.md).
-- Learn about [service health notifications](../../service-health/service-notifications.md).
azure-monitor Alerts Create New Alert Rule https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-create-new-alert-rule.md
+
+ Title: Create Azure Monitor alert rules
+description: Learn how to create a new alert rule.
+
+ Last updated : 08/03/2022
+
+# Create a new alert rule
+
+This article shows you how to create an alert rule. To learn more about alerts, see the [alerts overview](alerts-overview.md).
+
+You create an alert rule by combining:
+ - The resource(s) to be monitored.
+ - The signal or telemetry from the resource.
+ - Conditions.
+
+You then define what happens when an alert fires by using:
+ - [Alert processing rules](alerts-action-rules.md)
+ - [Action groups](./action-groups.md)
+
+## Create a new alert rule in the Azure portal
+
+1. In the [portal](https://portal.azure.com/), select **Monitor**, then **Alerts**.
+1. Expand the **+ Create** menu, and select **Alert rule**.
+
+ :::image type="content" source="media/alerts-create-new-alert-rule/alerts-create-new-alert-rule.png" alt-text="Screenshot showing steps to create new alert rule.":::
+
+1. In the **Select a resource** pane, set the scope for your alert rule. You can filter by **subscription**, **resource type**, or **resource location**, or you can use the search bar to find your resource.
+
+ You can see the **Available signal types** for your selected resource(s) at the bottom right of the pane. The available signal types change based on the selected resource.
+
+ :::image type="content" source="media/alerts-create-new-alert-rule/alerts-select-resource.png" alt-text="Screenshot showing the select resource pane for creating new alert rule.":::
+
+1. Select **Include all future resources** to include any future resources added to the selected scope.
+1. Select **Done**.
+1. Select **Next: Condition>** at the bottom of the page.
+1. In the **Select a signal** pane, the **Signal type**, **Monitor service**, and **Signal name** fields are pre-populated with the available values for your selected scope. You can narrow the signal list using these fields. The **Signal type** determines which [type of alert](alerts-overview.md#types-of-alerts) rule you're creating.
+1. Select the **Signal name**, and follow the steps below depending on the type of alert you're creating.
+ ### [Metric alert](#tab/metric)
+
+ 1. In the **Configure signal logic** pane, you can preview the results of the selected metric signal. Select values for the following fields.
+
+ |Field |Description |
+ |||
+ |Select time series|Select the time series to include in the results. |
+ |Chart period|Select the time span to include in the results. Can be from the last 6 hours to the last week.|
+
+ 1. (Optional) Depending on the signal type, you may see the **Split by dimensions** section.
+
+ Dimensions are name-value pairs that contain more data about the metric value. Using dimensions allows you to filter the metrics and monitor specific time-series, instead of monitoring the aggregate of all the dimensional values. Dimensions can be either number or string columns.
+
+ If you select more than one dimension value, each time series that results from the combination will trigger its own alert, and will be charged separately. For example, the transactions metric of a storage account can have an API name dimension that contains the name of the API called by each transaction (for example, GetBlob, DeleteBlob, PutPage). You can choose to have an alert fired when there's a high number of transactions in a specific API (the aggregated data), or you can use dimensions to alert only when the number of transactions is high for specific APIs.
+
+ |Field |Description |
+ |||
+ |Dimension name|Dimensions can be either number or string columns. Dimensions are used to monitor specific time series and provide context to a fired alert.<br>Splitting on the **Azure Resource ID** column makes the specified resource into the alert target. If detected, the **ResourceID** column is selected automatically and changes the context of the fired alert to the record's resource. |
+ |Operator|The operator used on the dimension name and value. |
+ |Dimension values|The dimension values are based on data from the last 48 hours. Select **Add custom value** to add custom dimension values. |
+ |Include all future values| Select this field to include any future values added to the selected dimension. |
+
+ 1. In the **Alert logic** section:
+
+ |Field |Description |
+ |||
+ |Threshold|Select whether the threshold should be evaluated based on a static value or a dynamic value.<br>A static threshold evaluates the rule using the threshold value that you configure.<br>Dynamic thresholds use machine learning algorithms to continuously learn the metric behavior patterns and calculate the appropriate thresholds for unexpected behavior. You can learn more about using [dynamic thresholds for metric alerts](alerts-types.md#dynamic-thresholds). |
+ |Operator|Select the operator for comparing the metric value against the threshold. |
+ |Aggregation type|Select the aggregation function to apply on the data points: Sum, Count, Average, Min, or Max. |
+ |Threshold value|If you selected a **static** threshold, enter the threshold value for the condition logic. |
+ |Unit|If the selected metric signal supports different units, such as bytes, KB, MB, and GB, and if you selected a **static** threshold, enter the unit for the condition logic.|
+ |Threshold sensitivity| If you selected a **dynamic** threshold, enter the sensitivity level. The sensitivity level affects the amount of deviation from the metric series pattern that is required to trigger an alert. |
+ |Aggregation granularity| Select the interval over which data points are grouped using the aggregation type function.|
+ |Frequency of evaluation|Select how often the alert rule should run. Selecting a frequency smaller than the aggregation granularity results in a sliding-window evaluation. |
+
+ 1. Select **Done**.
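+
+    If you script your alert rules, the same condition settings map to Azure CLI parameters. The following is a hedged sketch rather than the article's own example: the rule name is a placeholder, the quoted condition combines the aggregation type, signal, operator, and static threshold value, `--window-size` corresponds to the aggregation granularity, and `--evaluation-frequency` to the frequency of evaluation.
+
+    ```azurecli
+    az monitor metrics alert create -n myCpuAlert -g myResourceGroup \
+        --scopes {VirtualMachineResourceID} \
+        --condition "avg Percentage CPU > 80" \
+        --window-size 5m \
+        --evaluation-frequency 1m
+    ```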
+ ### [Log alert](#tab/log)
+
+ > [!NOTE]
+ > If you are creating a new log alert rule, note that the current alert rule wizard is slightly different from the earlier experience. For detailed information about the changes, see [changes to log alert rule creation experience](#changes-to-log-alert-rule-creation-experience).
+
+ 1. In the **Logs** pane, write a query that will return the log events for which you want to create an alert.
+ To use one of the predefined alert rule queries, expand the **Schema and filter pane** on the left of the **Logs** pane, then select the **Queries** tab, and select one of the queries.
+
+ :::image type="content" source="media/alerts-create-new-alert-rule/alerts-log-rule-query-pane.png" alt-text="Screenshot of the query pane when creating a new log alert rule.":::
+
+ 1. Select **Run** to run the query.
+ 1. The **Preview** section shows you the query results. When you're finished editing your query, select **Continue Editing Alert**.
+ 1. The **Condition** tab opens populated with your log query. By default, the rule counts the number of results in the last 5 minutes. If the system detects summarized query results, the rule is automatically updated with that information.
+
+ :::image type="content" source="media/alerts-create-new-alert-rule/alerts-logs-conditions-tab.png" alt-text="Screenshot of the conditions tab when creating a new log alert rule.":::
+
+ 1. In the **Measurement** section, select values for these fields:
+
+ |Field |Description |
+ |||
+ |Measure|Log alerts can measure two different things, which can be used for different monitoring scenarios:<br> **Table rows**: The number of rows returned can be used to work with events such as Windows event logs, syslog, application exceptions. <br>**Calculation of a numeric column**: Calculations based on any numeric column can be used to include any number of resources. For example, CPU percentage. |
+ |Aggregation type| The calculation performed on multiple records to aggregate them to one numeric value using the aggregation granularity. For example: Total, Average, Minimum, or Maximum. |
+ |Aggregation granularity| The interval for aggregating multiple records to one numeric value.|
+
+ :::image type="content" source="media/alerts-create-new-alert-rule/alerts-log-measurements.png" alt-text="Screenshot of the measurements tab when creating a new log alert rule.":::
+
+ 1. (Optional) In the **Split by dimensions** section, you can use dimensions to monitor the values of multiple instances of a resource with one rule. Splitting by dimensions allows you to create resource-centric alerts at scale for a subscription or resource group. When you split by dimensions, alerts are split into separate alerts by grouping combinations of numerical or string columns to monitor for the same condition on multiple Azure resources. For example, you can monitor CPU usage on multiple instances running your website or app. Each instance is monitored individually, and notifications are sent for each instance.
+
+ Splitting on the **Azure Resource ID** column makes the specified resource the target of the alert.
+
+ If you select more than one dimension value, each time series that results from the combination triggers its own alert and is charged separately. The alert payload includes the combination that triggered the alert.
+
+ You can select up to six more splittings for any columns that contain text or numbers.
+
+ You can also decide **not** to split when you want a condition applied to multiple resources in the scope. For example, if you want to fire an alert if at least five machines in the resource group scope have CPU usage over 80%.
+
+ Select values for these fields:
+
+ |Field |Description |
+ |||
+ |Resource ID column|Splitting on the **Azure Resource ID** column makes the specified resource the target of the alert. If detected, the **ResourceID** column is selected automatically and changes the context of the fired alert to the record's resource. |
+ |Dimension name|Dimensions can be either number or string columns. Dimensions are used to monitor specific time series and provide context to a fired alert.|
+ |Operator|The operator used on the dimension name and value. |
+ |Dimension values|The dimension values are based on data from the last 48 hours. Select **Add custom value** to add custom dimension values. |
+ |Include all future values| Select this field to include any future values added to the selected dimension. |
+
+ :::image type="content" source="media/alerts-create-new-alert-rule/alerts-create-log-rule-dimensions.png" alt-text="Screenshot of the splitting by dimensions section of a new log alert rule.":::
+
+ 1. In the **Alert logic** section, select values for these fields:
+
+ |Field |Description |
+ |||
+ |Operator| The query results are transformed into a number. In this field, select the operator to use to compare the number against the threshold.|
+ |Threshold value| A number value for the threshold. |
+ |Frequency of evaluation|The interval in which the query is run. Can be set from a minute to a day. |
+
+ :::image type="content" source="media/alerts-create-new-alert-rule/alerts-create-log-rule-logic.png" alt-text="Screenshot of alert logic section of a new log alert rule.":::
+
+ 1. (Optional) In the **Advanced options** section, you can specify the number of failures and the alert evaluation period required to trigger an alert. For example, if you set the **Aggregation granularity** to 5 minutes, you can specify that you only want to trigger an alert if there were three failures (15 minutes) in the last hour. This setting is defined by your application business policy.
+
+ Select values for these fields under **Number of violations to trigger the alert**:
+
+ |Field |Description |
+ |||
+ |Number of violations|The number of violations that trigger the alert.|
+ |Evaluation period|The time period within which the number of violations occur. |
+ |Override query time range| If you want the alert evaluation period to be different than the query time range, enter a time range here.<br> The alert time range is limited to a maximum of two days. Even if the query contains an **ago** command with a time range longer than two days, the two-day maximum is applied. For example, even if the query text contains **ago(7d)**, the query only scans up to two days of data.<br> If the query requires more data than the alert evaluation, and there's no **ago** command in the query, you can change the time range manually.|
+
+ :::image type="content" source="media/alerts-create-new-alert-rule/alerts-rule-preview-advanced-options.png" alt-text="Screenshot of the advanced options section of a new log alert rule.":::
+
+ > [!NOTE]
+ > If you or your administrator assigned the Azure Policy **Azure Log Search Alerts over Log Analytics workspaces should use customer-managed keys**, you must select **Check workspace linked storage**, or the rule creation will fail because it won't meet the policy requirements.
+
+ 1. The **Preview** chart shows query evaluations results over time. You can change the chart period or select different time series that resulted from unique alert splitting by dimensions.
+
+ :::image type="content" source="media/alerts-create-new-alert-rule/alerts-create-alert-rule-preview.png" alt-text="Screenshot of a preview of a new alert rule.":::
+
+ ### [Activity log alert](#tab/activity-log)
+
+ 1. In the **Conditions** pane, select the **Chart period**.
+ 1. The **Preview** chart shows you the results of your selection.
+ 1. In the **Alert logic** section:
+
+ |Field |Description |
+ |||
+ |Event level| Select the level of the events that this alert rule monitors. Values are: **Critical**, **Error**, **Warning**, **Informational**, **Verbose** and **All**.|
+ |Status|Select the status levels for which the alert is evaluated.|
+ |Event initiated by|Select the user or service principal that initiated the event.|
+
+
+
+ From this point on, you can select the **Review + create** button at any time.
+
+1. In the **Actions** tab, select or create the required [action groups](./action-groups.md).
+
+ :::image type="content" source="media/alerts-create-new-alert-rule/alerts-rule-actions-tab.png" alt-text="Screenshot of the actions tab when creating a new alert rule.":::
+
+1. In the **Details** tab, define the **Project details** by selecting the **Subscription** and **Resource group**.
+1. Define the **Alert rule details**.
+
+ ### [Metric alert](#tab/metric)
+
+ 1. Select the **Severity**.
+ 1. Enter values for the **Alert rule name** and the **Alert rule description**.
+ 1. Select the **Region**.
+ 1. (Optional) In the **Advanced options** section, you can set several options.
+
+ |Field |Description |
+ |||
+ |Enable upon creation| Select for the alert rule to start running as soon as you're done creating it.|
+ |Automatically resolve alerts (preview) |Select to resolve the alert when the condition isn't met anymore.|
+ 1. (Optional) If you have configured action groups for this alert rule, you can add custom properties to include additional information in the alert payload. In the **Custom properties** section, add the property **Name** and **Value** for each custom property you want included in the payload.
+
+
+ :::image type="content" source="media/alerts-create-new-alert-rule/alerts-metric-rule-details-tab.png" alt-text="Screenshot of the details tab when creating a new alert rule.":::
+
+ ### [Log alert](#tab/log)
+
+ 1. Select the **Severity**.
+ 1. Enter values for the **Alert rule name** and the **Alert rule description**.
+ 1. Select the **Region**.
+ 1. (Optional) In the **Advanced options** section, you can set several options.
+
+ |Field |Description |
+ |||
+ |Enable upon creation| Select for the alert rule to start running as soon as you're done creating it.|
+ |Automatically resolve alerts (preview) |Select to resolve the alert when the condition isn't met anymore.|
+ |Mute actions |Select to set a period of time to wait before alert actions are triggered again. If you select this checkbox, the **Mute actions for** field appears to select the amount of time to wait after an alert is fired before triggering actions again.|
+ |Check workspace linked storage|Select this option if the workspace that the rule queries has linked storage configured for alerts. If no linked storage is configured, the rule isn't created.|
+
+ 1. (Optional) If you have configured action groups for this alert rule, you can add custom properties to include additional information in the alert payload. In the **Custom properties** section, add the property **Name** and **Value** for each custom property you want included in the payload.
+
+ :::image type="content" source="media/alerts-create-new-alert-rule/alerts-log-rule-details-tab.png" alt-text="Screenshot of the details tab when creating a new log alert rule.":::
+
+ ### [Activity log alert](#tab/activity-log)
+
+ 1. Enter values for the **Alert rule name** and the **Alert rule description**.
+ 1. Select the **Region**.
+ 1. (Optional) In the **Advanced options** section, select **Enable upon creation** for the alert rule to start running as soon as you're done creating it.
+ 1. (Optional) If you have configured action groups for this alert rule, you can add custom properties to include additional information in the alert payload. In the **Custom properties** section, add the property **Name** and **Value** for each custom property you want included in the payload.
+
+ :::image type="content" source="media/alerts-create-new-alert-rule/alerts-activity-log-rule-details-tab.png" alt-text="Screenshot of the actions tab when creating a new activity log alert rule.":::
+
+
+
+1. In the **Tags** tab, set any required tags on the alert rule resource.
+
+ :::image type="content" source="media/alerts-create-new-alert-rule/alerts-rule-tags-tab.png" alt-text="Screenshot of the Tags tab when creating a new alert rule.":::
+
+1. In the **Review + create** tab, a validation will run and inform you of any issues.
+1. When validation passes and you've reviewed the settings, select the **Create** button.
+
+ :::image type="content" source="media/alerts-create-new-alert-rule/alerts-rule-review-create.png" alt-text="Screenshot of the Review and create tab when creating a new alert rule.":::
++
+## Create a new alert rule using CLI
+
+You can create a new alert rule using the [Azure CLI](/cli/azure/get-started-with-azure-cli). The code examples below use [Azure Cloud Shell](../../cloud-shell/overview.md). You can see the full list of the [Azure CLI commands for Azure Monitor](/cli/azure/azure-cli-reference-for-monitor#azure-monitor-references).
+
+1. In the [portal](https://portal.azure.com/), select **Cloud Shell**, and at the prompt, use the following commands:
+ ### [Metric alert](#tab/metric)
+
+ To create a metric alert rule, use the **az monitor metrics alert create** command. You can see detailed documentation on the metric alert rule create command in the **az monitor metrics alert create** section of the [CLI reference documentation for metric alerts](/cli/azure/monitor/metrics/alert).
+
+ To create a metric alert rule that monitors if average Percentage CPU on a VM is greater than 90:
+ ```azurecli
+ az monitor metrics alert create -n {nameofthealert} -g {ResourceGroup} --scopes {VirtualMachineResourceID} --condition "avg Percentage CPU > 90" --description {descriptionofthealert}
+ ```
+ ### [Log alert](#tab/log)
+
+ To create a log alert rule that monitors count of system event errors:
+ ```azurecli
+ az monitor scheduled-query create -g {ResourceGroup} -n {nameofthealert} --scopes {vm_id} --condition "count \'union Event, Syslog | where TimeGenerated > ago(1h) | where EventLevelName == \"Error\" or SeverityLevel== \"err\"\' > 2" --description {descriptionofthealert}
+ ```
+
+ > [!NOTE]
+ > Azure CLI support is only available for the scheduledQueryRules API version `2021-08-01` and later. Previous API versions can use the Azure Resource Manager CLI with templates as described below. If you use the legacy [Log Analytics Alert API](./api-alerts.md), you will need to switch to use CLI. [Learn more about switching](./alerts-log-api-switch.md).
+
+ ### [Activity log alert](#tab/activity-log)
+
+ To create an activity log alert rule, use the **az monitor activity-log alert create** command. You can see detailed documentation on the activity log alert rule create command in the **az monitor activity-log alert create** section of the [CLI reference documentation for activity log alerts](/cli/azure/monitor/activity-log/alert).
+
+ To create a new activity log alert rule, use the following commands:
+ - [az monitor activity-log alert create](/cli/azure/monitor/activity-log/alert#az-monitor-activity-log-alert-create): Create a new activity log alert rule resource.
+ - [az monitor activity-log alert scope](/cli/azure/monitor/activity-log/alert/scope): Add scope for the created activity log alert rule.
+ - [az monitor activity-log alert action-group](/cli/azure/monitor/activity-log/alert/action-group): Add an action group to the activity log alert rule.
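+
+    For example, a hedged end-to-end sketch using the three commands above; the rule name is a placeholder, and the subscription ID and action group resource ID must be your own:
+
+    ```azurecli
+    # Create the rule with an Administrative-category condition.
+    az monitor activity-log alert create -n myDeploymentAlert -g {ResourceGroup} \
+        --condition category=Administrative and operationName=Microsoft.Resources/deployments/write
+
+    # Widen the rule's scope to the whole subscription.
+    az monitor activity-log alert scope add -n myDeploymentAlert -g {ResourceGroup} \
+        --scope /subscriptions/{subscriptionId}
+
+    # Attach an existing action group by its resource ID.
+    az monitor activity-log alert action-group add -n myDeploymentAlert -g {ResourceGroup} \
+        --action-group {actionGroupResourceId}
+    ```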
+
+
+
+## Create a new alert rule using PowerShell
+
+- To create a metric alert rule using PowerShell, use this cmdlet: [Add-AzMetricAlertRuleV2](/powershell/module/az.monitor/add-azmetricalertrulev2)
+- To create an activity log alert rule using PowerShell, use this cmdlet: [Set-AzActivityLogAlert](/powershell/module/az.monitor/set-azactivitylogalert)
+
+## Create an activity log alert rule from the Activity log pane
+
+You can also create an activity log alert on future events similar to an activity log event that already occurred.
+
+1. In the [portal](https://portal.azure.com/), [go to the activity log pane](../essentials/activity-log.md#view-the-activity-log).
+1. Filter or find the desired event, and then create an alert by selecting **Add activity log alert**.
+
+ :::image type="content" source="media/alerts-create-new-alert-rule/create-alert-rule-from-activity-log-event-new.png" alt-text="Screenshot of creating an alert rule from an activity log event." lightbox="media/alerts-create-new-alert-rule/create-alert-rule-from-activity-log-event-new.png":::
+
+2. The **Create alert rule** wizard opens, with the scope and condition already provided according to the previously selected activity log event. If necessary, you can edit and modify the scope and condition at this stage. By default, the exact scope and condition for the new rule are copied from the original event attributes. For example, the exact resource on which the event occurred, and the specific user or service name who initiated the event, are both included by default in the new alert rule. If you want to make the alert rule more general, modify the scope and condition accordingly, following the steps in [Create a new alert rule in the Azure portal](#create-a-new-alert-rule-in-the-azure-portal).
+
+3. Follow the rest of the steps from [Create a new alert rule in the Azure portal](#create-a-new-alert-rule-in-the-azure-portal).
+
+## Create an activity log alert rule using an Azure Resource Manager template
+
+To create an activity log alert rule using an Azure Resource Manager template, create a `microsoft.insights/activityLogAlerts` resource, and fill in all related properties.
+
+> [!NOTE]
+>The highest level at which activity log alerts can be defined is the subscription level. You can't define a single alert rule on more than one subscription; create a separate alert rule for each subscription.
+
+The following fields are the options in the Azure Resource Manager template for the conditions fields. (The **Resource Health**, **Advisor**, and **Service Health** categories have extra properties fields.)
++
+|Field |Description |
+|||
+|resourceId|The resource ID of the impacted resource in the activity log event on which the alert is generated.|
+|category|The category of the activity log event. Possible values: `Administrative`, `ServiceHealth`, `ResourceHealth`, `Autoscale`, `Security`, `Recommendation`, or `Policy` |
+|caller|The email address or Azure Active Directory identifier of the user who performed the operation of the activity log event. |
+|level |Level of the activity in the activity log event for the alert. Possible values: `Critical`, `Error`, `Warning`, `Informational`, or `Verbose`.|
+|operationName |The name of the operation in the activity log event. For example: `Microsoft.Resources/deployments/write`. |
+|resourceGroup |Name of the resource group for the impacted resource in the activity log event. |
+|resourceProvider |For more information, see [Azure resource providers and types](../../azure-resource-manager/management/resource-providers-and-types.md). For a list that maps resource providers to Azure services, see [Resource providers for Azure services](../../azure-resource-manager/management/resource-providers-and-types.md). |
+|status |String describing the status of the operation in the activity event. Possible values: `Started`, `In Progress`, `Succeeded`, `Failed`, `Active`, or `Resolved` |
+|subStatus |Usually, this field is the HTTP status code of the corresponding REST call. This field can also include other strings describing a substatus. Examples of HTTP status codes include `OK` (HTTP Status Code: 200), `No Content` (HTTP Status Code: 204), and `Service Unavailable` (HTTP Status Code: 503), among many others. |
+|resourceType |The type of the resource that was affected by the event. For example: `Microsoft.Resources/deployments`. |
+
+This example sets the condition to the **Administrative** category:
+
+```json
+"condition": {
+ "allOf": [
+ {
+ "field": "category",
+ "equals": "Administrative"
+ },
+ {
+ "field": "resourceType",
+ "equals": "Microsoft.Resources/deployments"
+ }
+ ]
+ }
+
+```
+
+This is an example template that creates an activity log alert rule using the **Administrative** condition:
+
+```json
+{
+ "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {
+ "activityLogAlertName": {
+ "type": "string",
+ "metadata": {
+ "description": "Unique name (within the Resource Group) for the Activity log alert."
+ }
+ },
+ "activityLogAlertEnabled": {
+ "type": "bool",
+ "defaultValue": true,
+ "metadata": {
+ "description": "Indicates whether or not the alert is enabled."
+ }
+ },
+ "actionGroupResourceId": {
+ "type": "string",
+ "metadata": {
+ "description": "Resource Id for the Action group."
+ }
+ }
+ },
+ "resources": [
+ {
+ "type": "Microsoft.Insights/activityLogAlerts",
+ "apiVersion": "2017-04-01",
+ "name": "[parameters('activityLogAlertName')]",
+ "location": "Global",
+ "properties": {
+ "enabled": "[parameters('activityLogAlertEnabled')]",
+ "scopes": [
+ "[subscription().id]"
+ ],
+ "condition": {
+ "allOf": [
+ {
+ "field": "category",
+ "equals": "Administrative"
+ },
+ {
+ "field": "operationName",
+ "equals": "Microsoft.Resources/deployments/write"
+ },
+ {
+ "field": "resourceType",
+ "equals": "Microsoft.Resources/deployments"
+ }
+ ]
+ },
+ "actions": {
+ "actionGroups":
+ [
+ {
+ "actionGroupId": "[parameters('actionGroupResourceId')]"
+ }
+ ]
+ }
+ }
+ }
+ ]
+}
+```
+This sample JSON can be saved as, for example, *sampleActivityLogAlert.json*. You can deploy the sample by using [Azure Resource Manager in the Azure portal](../../azure-resource-manager/templates/deploy-portal.md).
+
+For more information about the activity log fields, see [Azure activity log event schema](../essentials/activity-log-schema.md).
+
+> [!NOTE]
+> It might take up to 5 minutes for the new activity log alert rule to become active.
+
+## Create a new activity log alert rule using the REST API
+
+The Azure Monitor Activity Log Alerts API is a REST API. It's fully compatible with the Azure Resource Manager REST API. You can use it from PowerShell by using the Resource Manager cmdlets, or from the Azure CLI.
++
+### Deploy the Resource Manager template with PowerShell
+
+To use PowerShell to deploy the sample Resource Manager template shown in the [previous section](#create-an-activity-log-alert-rule-using-an-azure-resource-manager-template), use the following command:
+
+```powershell
+New-AzResourceGroupDeployment -ResourceGroupName "myRG" -TemplateFile sampleActivityLogAlert.json -TemplateParameterFile sampleActivityLogAlert.parameters.json
+```
+The *sampleActivityLogAlert.parameters.json* file contains the values provided for the parameters needed for alert rule creation.
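+
+If you use the Azure CLI instead, the same template and parameters file can be deployed with the deployment commands. A hedged sketch, assuming the files from the previous section:
+
+```azurecli
+az deployment group create --resource-group myRG \
+    --template-file sampleActivityLogAlert.json \
+    --parameters @sampleActivityLogAlert.parameters.json
+```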
+
+## Changes to log alert rule creation experience
+
+If you're creating a new log alert rule, note that the current alert rule wizard differs from the earlier experience:
+
+- Previously, search results were included in the payload of the triggered alert and its associated notifications. The email included only 10 rows from the unfiltered results while the webhook payload contained 1000 unfiltered results. To get detailed context information about the alert so that you can decide on the appropriate action:
+ - We recommend using [Dimensions](alerts-types.md#narrow-the-target-using-dimensions). Dimensions provide the column value that fired the alert, giving you context for why the alert fired and how to fix the issue.
+ - When you need to investigate in the logs, use the link in the alert to the search results in Logs.
+ - If you need the raw search results or for any other advanced customizations, use Logic Apps.
+- The new alert rule wizard doesn't support customization of the JSON payload.
+ - Use custom properties in the [new API](/rest/api/monitor/scheduledqueryrule-2021-08-01/scheduled-query-rules/create-or-update#actions) to add static parameters and associated values to the webhook actions triggered by the alert.
+ - For more advanced customizations, use Logic Apps.
+- The new alert rule wizard doesn't support customization of the email subject.
+ - Customers often use the custom email subject to indicate the resource on which the alert fired, instead of using the Log Analytics workspace. Use the [new API](alerts-unified-log.md#split-by-alert-dimensions) to trigger an alert on the desired resource by using the resource ID column.
+ - For more advanced customizations, use Logic Apps.
+
+## Next steps
+ - [View and manage your alert instances](alerts-manage-alert-instances.md)
azure-monitor Alerts Dynamic Thresholds https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-dynamic-thresholds.md
Last updated 2/23/2022
-# Metric Alerts with Dynamic Thresholds in Azure Monitor
+# Dynamic thresholds in Metric Alerts
-Metric Alert with Dynamic Thresholds detection leverages advanced machine learning (ML) to learn metrics' historical behavior, identify patterns and anomalies that indicate possible service issues. It provides support of both a simple UI and operations at scale by allowing users to configure alert rules through the Azure Resource Manager API, in a fully automated manner.
+ Dynamic thresholds in metric alerts use advanced machine learning (ML) to learn metrics' historical behavior, and to identify patterns and anomalies that indicate possible service issues. Dynamic thresholds in metric alerts support both a simple UI and operations at scale by allowing users to configure alert rules through the fully automated Azure Resource Manager API.
-Once an alert rule is created, it will fire only when the monitored metric doesn't behave as expected, based on its tailored thresholds.
+An alert rule using a dynamic threshold only fires when the monitored metric doesn't behave as expected, based on its tailored thresholds.
We would love to hear your feedback; keep it coming at <azurealertsfeedback@microsoft.com>.
-## Why and when is using dynamic condition type recommended?
+Alert rules with dynamic thresholds provide:
+- **Scalable Alerting**. Dynamic threshold alert rules can create tailored thresholds for hundreds of metric series at a time, yet are as easy to define as an alert rule on a single metric. They give you fewer alerts to create and manage. You can use either Azure portal or the Azure Resource Manager API to create them. The scalable approach is especially useful when dealing with metric dimensions or when applying to multiple resources, such as to all subscription resources. [Learn more about how to configure Metric Alerts with Dynamic Thresholds using templates](./alerts-metric-create-templates.md).
-1. **Scalable Alerting** – Dynamic threshold alert rules can create tailored thresholds for hundreds of metric series at a time, yet providing the same ease of defining an alert rule on a single metric. They give you fewer alerts to create and manage. You can use either Azure portal or the Azure Resource Manager API to create them. The scalable approach is especially useful when dealing with metric dimensions or when applying to multiple resources, such as to all subscription resources. [Learn more about how to configure Metric Alerts with Dynamic Thresholds using templates](./alerts-metric-create-templates.md).
+- **Smart Metric Pattern Recognition**. Using our ML technology, we're able to automatically detect metric patterns and adapt to metric changes over time, which may often include seasonality (hourly / daily / weekly). Adapting to the metrics' behavior over time and alerting based on deviations from its pattern relieves the burden of knowing the "right" threshold for each metric. The ML algorithm used in dynamic thresholds is designed to prevent noisy (low precision) or wide (low recall) thresholds that don't have an expected pattern.
-1. **Smart Metric Pattern Recognition** – Using our ML technology, we're able to automatically detect metric patterns and adapt to metric changes over time, which may often include seasonality (hourly / daily / weekly). Adapting to the metrics' behavior over time and alerting based on deviations from its pattern relieves the burden of knowing the "right" threshold for each metric. The ML algorithm used in Dynamic Thresholds is designed to prevent noisy (low precision) or wide (low recall) thresholds that don't have an expected pattern.
+- **Intuitive Configuration**. Dynamic thresholds allow you to set up metric alerts using high-level concepts, alleviating the need to have extensive domain knowledge about the metric.
-1. **Intuitive Configuration** – Dynamic Thresholds allows setting up metric alerts using high-level concepts, alleviating the need to have extensive domain knowledge about the metric.
+## Configure alerts rules with dynamic thresholds
-## How to configure alerts rules with Dynamic Thresholds?
-
-Alerts with Dynamic Thresholds can be configured through Metric Alerts in Azure Monitor. [Learn more about how to configure Metric Alerts](alerts-metric.md).
+Alerts with dynamic thresholds can be configured using Azure Monitor metric alerts. [Learn more about how to configure Metric Alerts](alerts-metric.md).
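+
+The Azure CLI metric alert condition grammar can also express a dynamic threshold. The following is a hedged sketch, with the rule name and resource ID as placeholders: `medium` is the sensitivity, `2 of 4` is the number of violations out of the evaluated timepoints, and the optional `since` clause corresponds to the **Ignore data before** setting described later in this article.
+
+```azurecli
+az monitor metrics alert create -n myDynamicCpuAlert -g myResourceGroup \
+    --scopes {VirtualMachineResourceID} \
+    --condition "avg Percentage CPU > dynamic medium 2 of 4 since 2022-07-01T00:00:00Z" \
+    --window-size 5m \
+    --evaluation-frequency 5m
+```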
## How are the thresholds calculated?
The thresholds are selected in such a way that a deviation from these thresholds
> [!NOTE]
> Dynamic Thresholds can detect seasonality for hourly, daily, or weekly patterns. Other patterns like bi-hourly or semi-weekly seasonality might not be detected. To detect weekly seasonality, at least three weeks of historical data are required.
-## What does 'Sensitivity' setting in Dynamic Thresholds mean?
+## What does the 'Sensitivity' setting in Dynamic Thresholds mean?
Alert threshold sensitivity is a high-level concept that controls the amount of deviation from metric behavior required to trigger an alert. This option doesn't require domain knowledge about the metric like static threshold. The options available are:
-- High – The thresholds will be tight and close to the metric series pattern. An alert rule will be triggered on the smallest deviation, resulting in more alerts.
-- Medium – Less tight and more balanced thresholds, fewer alerts than with high sensitivity (default).
-- Low – The thresholds will be loose with more distance from metric series pattern. An alert rule will only trigger on large deviations, resulting in fewer alerts.
+- High: The thresholds will be tight and close to the metric series pattern. An alert rule will be triggered on the smallest deviation, resulting in more alerts.
+- Medium: Less tight and more balanced thresholds, fewer alerts than with high sensitivity (default).
+- Low: The thresholds will be loose with more distance from metric series pattern. An alert rule will only trigger on large deviations, resulting in fewer alerts.
## What are the 'Operator' setting options in Dynamic Thresholds?
You can choose the alert to be triggered on one of the following three condition
## What do the advanced settings in Dynamic Thresholds mean?
-**Failing Periods** - Dynamic Thresholds also allows you to configure "Number violations to trigger the alert", a minimum number of deviations required within a certain time window for the system to raise an alert (the default time window is four deviations in 20 minutes). The user can configure failing periods and choose what to be alerted on by changing the failing periods and time window. This ability reduces alert noise generated by transient spikes. For example:
+**Failing Periods**. Using dynamic thresholds, you can also configure a minimum number of deviations required within a certain time window for the system to raise an alert. The default is four deviations in 20 minutes. You can configure failing periods and choose what to be alerted on by changing the failing periods and time window. These configurations reduce alert noise generated by transient spikes. For example:
To trigger an alert when the issue is continuous for 20 minutes, 4 consecutive times in a given period grouping of 5 minutes, use the following settings:
To trigger an alert when there was a violation from a Dynamic Thresholds in 20 m
![Failing periods settings for issue for 20 minutes out of the last 30 minutes with period grouping of 5 minutes](media/alerts-dynamic-thresholds/0009.png)
-**Ignore data before** - Users may also optionally define a start date from which the system should begin calculating the thresholds from. A typical use case may occur when a resource was a running in a testing mode and is now promoted to serve a production workload, and therefore the behavior of any metric during the testing phase should be disregarded.
+**Ignore data before**. Users may also optionally define a start date from which the system should begin calculating the thresholds. A typical use case may occur when a resource was running in a testing mode and is now promoted to serve a production workload, and therefore the behavior of any metric during the testing phase should be disregarded.
> [!NOTE]
> An alert fires when the rule is evaluated and the result shows an anomaly. The alert is resolved if the rule is evaluated and does not show an anomaly three times in a row.
-## How do you find out why a Dynamic Thresholds alert was triggered?
+## How do you find out why a dynamic thresholds alert was triggered?
You can explore triggered alert instances by clicking on the link in the email or text message, or browse to see the alerts in the Azure portal. [Learn more about the alerts view](./alerts-page.md).
The system automatically recognizes prolonged outages and removes them from thre
Dynamic Thresholds can be applied to most platform and custom metrics in Azure Monitor, and they're also tuned for common application and infrastructure metrics. The following sections describe best practices for configuring alerts on some of these metrics using Dynamic Thresholds.
-### Dynamic Thresholds on virtual machine CPU percentage metrics
+### Configure dynamic thresholds on virtual machine CPU percentage metrics
-1. In [Azure portal](https://portal.azure.com), click on **Monitor**. The Monitor view consolidates all your monitoring settings and data in one view.
+1. In [Azure portal](https://portal.azure.com), select **Monitor**. The Monitor view consolidates all your monitoring settings and data in one view.
-2. Click **Alerts** then click **+ New alert rule**.
+2. Select **Alerts** then select **+ New alert rule**.
> [!TIP]
> Most resource blades also have **Alerts** in their resource menu under **Monitoring**. You can create alerts from there as well.
-3. Click **Select target**, in the context pane that loads, select a target resource that you want to alert on. Use **Subscription** and **'Virtual Machines' Resource type** drop-downs to find the resource you want to monitor. You can also use the search bar to find your resource.
+3. Select **Select target**, in the context pane that loads, select a target resource that you want to alert on. Use **Subscription** and **'Virtual Machines' Resource type** drop-downs to find the resource you want to monitor. You can also use the search bar to find your resource.
-4. Once you have selected a target resource, click on **Add condition**.
+4. Once you've selected a target resource, select **Add condition**.
5. Select the **'CPU Percentage'**.
-6. Optionally, refine the metric by adjusting **Period** and **Aggregation**. It is discouraged to use 'Maximum' aggregation type for this metric type as it is less representative of behavior. For 'Maximum' aggregation type static threshold maybe more appropriate.
+6. Optionally, refine the metric by adjusting **Period** and **Aggregation**. We discourage using the 'Maximum' aggregation type for this metric type because it's less representative of behavior. A static threshold may be more appropriate for the 'Maximum' aggregation type.
-7. You will see a chart for the metric for the last 6 hours. Define the alert parameters:
+7. You'll see a chart for the metric for the last 6 hours. Define the alert parameters:
 1. **Condition Type** - Choose 'Dynamic' option.
 1. **Sensitivity** - Choose Medium/Low sensitivity to reduce alert noise.
 1. **Operator** - Choose 'Greater Than' unless behavior represents the application usage.
- 1. **Frequency** - Consider lowering based on business impact of the alert.
+ 1. **Frequency** - Consider lowering the frequency based on business impact of the alert.
1. **Failing Periods** (Advanced Option) - The look back window should be at least 15 minutes. For example, if the period is set to five minutes, then failing periods should be at least three or more.
-8. The metric chart will display the calculated thresholds based on recent data.
+8. The metric chart displays the calculated thresholds based on recent data.
-9. Click **Done**.
+9. Select **Done**.
 10. Fill in **Alert details** like **Alert Rule Name**, **Description**, and **Severity**.
 11. Add an action group to the alert either by selecting an existing action group or creating a new action group.
-12. Click **Done** to save the metric alert rule.
+12. Select **Done** to save the metric alert rule.
> [!NOTE]
> Metric alert rules created through portal are created in the same resource group as the target resource.
-### Dynamic Thresholds on Application Insights HTTP request execution time
+### Configure dynamic thresholds on Application Insights HTTP request execution time
-1. In [Azure portal](https://portal.azure.com), click on **Monitor**. The Monitor view consolidates all your monitoring settings and data in one view.
+1. In [Azure portal](https://portal.azure.com), select **Monitor**. The Monitor view consolidates all your monitoring settings and data in one view.
-2. Click **Alerts** then click **+ New alert rule**.
+2. Select **Alerts** then select **+ New alert rule**.
> [!TIP]
> Most resource blades also have **Alerts** in their resource menu under **Monitoring**. You can create alerts from there as well.
-3. Click **Select target**, in the context pane that loads, select a target resource that you want to alert on. Use **Subscription** and **'Application Insights' Resource type** drop-downs to find the resource you want to monitor. You can also use the search bar to find your resource.
+3. Select **Select target**, in the context pane that loads, select a target resource that you want to alert on. Use **Subscription** and **'Application Insights' Resource type** drop-downs to find the resource you want to monitor. You can also use the search bar to find your resource.
-4. Once you have selected a target resource, click on **Add condition**.
+4. Once you've selected a target resource, select **Add condition**.
5. Select the **HTTP request execution time** signal.
-6. Optionally, refine the metric by adjusting **Period** and **Aggregation**. It is discouraged to use 'Maximum' aggregation type for this metric type as it is less representative of behavior. For 'Maximum' aggregation type static threshold maybe more appropriate.
+6. Optionally, refine the metric by adjusting **Period** and **Aggregation**. We discourage using the **Maximum** aggregation type for this metric type, since it is less representative of behavior. Static thresholds may be more appropriate for the **Maximum** aggregation type.
-7. You will see a chart for the metric for the last 6 hours. Define the alert parameters:
+7. You'll see a chart for the metric for the last 6 hours. Define the alert parameters:
 1. **Condition Type** - Choose the 'Dynamic' option.
 1. **Operator** - Choose 'Greater Than' to reduce alerts fired on improvement in duration.
 1. **Frequency** - Consider lowering the frequency based on the business impact of the alert.
8. The metric chart displays the calculated thresholds based on recent data.
-9. Click **Done**.
+9. Select **Done**.
10. Fill in **Alert details** like **Alert Rule Name**, **Description**, and **Severity**.
11. Add an action group to the alert either by selecting an existing action group or creating a new action group.
-12. Click **Done** to save the metric alert rule.
+12. Select **Done** to save the metric alert rule.
> [!NOTE]
> Metric alert rules created through the portal are created in the same resource group as the target resource.
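An equivalent CLI sketch for this Application Insights condition is shown below. It assumes `requests/duration` is the platform metric name behind **HTTP request execution time**, which is an assumption here, and the values in braces are placeholders:

```azurecli
# Create a dynamic threshold alert rule on HTTP request execution time.
# 'Greater Than' is expressed by the > operator in the condition string.
az monitor metrics alert create \
    -n {AlertRuleName} \
    -g {ResourceGroup} \
    --scopes {AppInsightsResourceID} \
    --condition "avg requests/duration > dynamic medium 2 of 4" \
    --description "Dynamic threshold alert on HTTP request execution time"
```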
-## Interpreting Dynamic Threshold charts
+## Interpret Dynamic Threshold charts
The following chart shows a metric, its dynamic threshold limits, and some alerts that fired when the value was outside of the allowed thresholds.
Use the following information to interpret the previous chart.
- **Blue line** - The actual measured metric over time.
- **Blue shaded area** - Shows the allowed range for the metric. As long as the metric values stay within this range, no alert occurs.
-- **Blue dots** - If you left click on part of the chart and then hover over the blue line, you see a blue dot appear under your cursor showing an individual aggregated metric value.
+- **Blue dots** - If you select part of the chart and then hover over the blue line, a blue dot appears under your cursor showing an individual aggregated metric value.
- **Pop-up with blue dot** - Shows the measured metric value (the blue dot) and the upper and lower values of the allowed range.
- **Red dot with a black circle** - Shows the first metric value out of the allowed range. This is the value that fires a metric alert and puts it in an active state.
-- **Red dots** - Indicate additional measured values outside of the allowed range. They will not fire additional metric alerts, but the alert stays in the active state.
+- **Red dots** - Indicate other measured values outside of the allowed range. They won't fire additional metric alerts, but the alert stays in the active state.
- **Red area** - Shows the time when the metric value was outside of the allowed range. The alert remains in the active state as long as subsequent measured values are out of the allowed range, but no new alerts are fired.
- **End of red area** - When the blue line is back inside the allowed values, the red area stops and the measured value line turns blue. The status of the metric alert fired at the time of the red dot with black outline is set to resolved.
azure-monitor Alerts Log https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-log.md
- Title: Create Azure Monitor log alert rules and manage alert instances | Microsoft Docs
-description: Create Azure Monitor log alert rules and manage your alert instances.
--- Previously updated : 05/23/2022---
-# Create Azure Monitor log alert rules and manage alert instances
-
-This article shows you how to create log alert rules and manage your alert instances. Azure Monitor log alerts allow users to use a [Log Analytics](../logs/log-analytics-tutorial.md) query to evaluate resource logs at a set frequency and fire an alert based on the results. Rules can trigger one or more actions using [alert processing rules](alerts-action-rules.md) and [action groups](./action-groups.md). Learn the concepts behind log alerts [here](alerts-types.md#log-alerts).
-
-You create an alert rule by combining:
-
-And then defining these elements of the triggered alert:
-
-You can also [create log alert rules using Azure Resource Manager templates](../alerts/alerts-log-create-templates.md).
-## Create a new log alert rule in the Azure portal
-
-1. In the [portal](https://portal.azure.com/), select the relevant resource. We recommend monitoring at scale by using a subscription or resource group.
-1. In the Resource menu, select **Logs**.
-1. Write a query that will find the log events for which you want to create an alert. You can use the [alert query examples article](../logs/queries.md) to understand what you can discover or [get started on writing your own query](../logs/log-analytics-tutorial.md). Also, [learn how to create optimized alert queries](alerts-log-query.md).
-1. From the top command bar, select **+ New Alert rule**.
-
- :::image type="content" source="media/alerts-log/alerts-create-new-alert-rule.png" alt-text="Create new alert rule." lightbox="media/alerts-log/alerts-create-new-alert-rule-expanded.png":::
-
-1. The **Condition** tab opens, populated with your log query.
-
- By default, the rule counts the number of results in the last 5 minutes.
-
- If the system detects summarized query results, the rule is automatically updated with that information.
-
- :::image type="content" source="media/alerts-log/alerts-logs-conditions-tab.png" alt-text="Conditions Tab.":::
-
-1. In the **Measurement** section, select values for these fields:
-
- |Field |Description |
- |||
- |Measure|Log alerts can measure two different things, which can be used for different monitoring scenarios:<br> **Table rows**: The number of rows returned can be used to work with events such as Windows event logs, syslog, application exceptions. <br>**Calculation of a numeric column**: Calculations based on any numeric column can be used to include any number of resources. For example, CPU percentage. |
- |Aggregation type| The calculation performed on multiple records to aggregate them to one numeric value using the aggregation granularity. For example: Total, Average, Minimum, or Maximum. |
- |Aggregation granularity| The interval for aggregating multiple records to one numeric value.|
-
- :::image type="content" source="media/alerts-log/alerts-log-measurements.png" alt-text="Measurements.":::
-
-1. (Optional) In the **Split by dimensions** section, you can create resource-centric alerts at scale for a subscription or resource group. Splitting by dimensions groups combinations of numerical or string columns to monitor for the same condition on multiple Azure resources.
-
- If you select more than one dimension value, each time series that results from the combination triggers its own alert and is charged separately. The alert payload includes the combination that triggered the alert.
-
-   You can select up to six splittings for any number or text column types.
-
- You can also decide **not** to split when you want a condition applied to multiple resources in the scope. For example, if you want to fire an alert if at least five machines in the resource group scope have CPU usage over 80%.
-
- Select values for these fields:
-
- |Field |Description |
- |||
- |Dimension name|Dimensions can be either number or string columns. Dimensions are used to monitor specific time series and provide context to a fired alert.<br>Splitting on the Azure Resource ID column makes the specified resource into the alert target. If a Resource ID column is detected, it is selected automatically and changes the context of the fired alert to the record's resource. |
- |Operator|The operator used on the dimension name and value. |
- |Dimension values|The dimension values are based on data from the last 48 hours. Select **Add custom value** to add custom dimension values. |
-
- :::image type="content" source="media/alerts-log/alerts-create-log-rule-dimensions.png" alt-text="Screenshot of the splitting by dimensions section of a new log alert rule.":::
-
-1. In the **Alert logic** section, select values for these fields:
-
- |Field |Description |
- |||
- |Operator| The query results are transformed into a number. In this field, select the operator to use to compare the number against the threshold.|
- |Threshold value| A number value for the threshold. |
- |Frequency of evaluation|The interval in which the query is run. Can be set from a minute to a day. |
-
- :::image type="content" source="media/alerts-log/alerts-create-log-rule-logic.png" alt-text="Screenshot of alert logic section of a new log alert rule.":::
-
-1. (Optional) In the **Advanced options** section, you can specify the number of failures and the alert evaluation period required to trigger an alert. For example, if you set the **Aggregation granularity** to 5 minutes, you can specify that you only want to trigger an alert if there were three failures (15 minutes) in the last hour. This setting is defined by your application business policy.
-
- Select values for these fields under **Number of violations to trigger the alert**:
-
- |Field |Description |
- |||
- |Number of violations|The number of violations that have to occur to trigger the alert.|
- |Evaluation period|The amount of time within which those violations have to occur. |
- |Override query time range| Enter a value in this field if the alert evaluation period is different than the query time range.<br> The alert time range is limited to a maximum of two days. Even if the query contains an **ago** command with a time range of longer than 2 days, the 2 day maximum time range is applied. For example, even if the query text contains **ago(7d)**, the query only scans up to 2 days of data.<br> If the query requires more data than the alert evaluation, and there is no **ago** command in the query, you can change the time range manually.|
-
- :::image type="content" source="media/alerts-log/alerts-rule-preview-advanced-options.png" alt-text="Screenshot of the advanced options section of a new log alert rule.":::
-
-1. The **Preview** chart shows query evaluations results over time. You can change the chart period or select different time series that resulted from unique alert splitting by dimensions.
-
- :::image type="content" source="media/alerts-log/alerts-create-alert-rule-preview.png" alt-text="Screenshot of a preview of a new alert rule.":::
-
-1. From this point on, you can select the **Review + create** button at any time.
-1. In the **Actions** tab, select or create the required [action groups](./action-groups.md).
-
- :::image type="content" source="media/alerts-log/alerts-rule-actions-tab.png" alt-text="Actions tab.":::
-
-1. In the **Details** tab, define the **Project details** and the **Alert rule details**.
-1. (Optional) In the **Advanced options** section, you can set several options, including whether to **Enable upon creation**, or to **Mute actions** for a period of time after the alert rule fires.
-
- :::image type="content" source="media/alerts-log/alerts-rule-details-tab.png" alt-text="Details tab.":::
-
- > [!NOTE]
- > If you or your administrator assigned the Azure Policy **Azure Log Search Alerts over Log Analytics workspaces should use customer-managed keys**, you must select the **Check workspace linked storage** option in **Advanced options**, or the rule creation will fail because it doesn't meet the policy requirements.
-
-1. In the **Tags** tab, set any required tags on the alert rule resource.
-
- :::image type="content" source="media/alerts-log/alerts-rule-tags-tab.png" alt-text="Tags tab.":::
-
-1. In the **Review + create** tab, a validation will run and inform you of any issues.
-1. When validation passes and you have reviewed the settings, select the **Create** button.
-
- :::image type="content" source="media/alerts-log/alerts-rule-review-create.png" alt-text="Review and create tab.":::
-
-> [!NOTE]
-> This section above describes creating alert rules using the new alert rule wizard.
-> The new alert rule experience is a little different than the old experience. Please note these changes:
-> - Previously, search results were included in the payloads of the triggered alert and its associated notifications. This was a limited solution, since the email included only 10 rows from the unfiltered results while the webhook payload contained 1000 unfiltered results.
-> To get detailed context information about the alert so that you can decide on the appropriate action:
-> - We recommend using [Dimensions](alerts-unified-log.md#split-by-alert-dimensions). Dimensions provide the column value that fired the alert, giving you context for why the alert fired and how to fix the issue.
-> - When you need to investigate in the logs, use the link in the alert to the search results in Logs.
-> - If you need the raw search results or for any other advanced customizations, use Logic Apps.
-> - The new alert rule wizard does not support customization of the JSON payload.
-> - Use custom properties in the [new API](/rest/api/monitor/scheduledqueryrule-2021-08-01/scheduled-query-rules/create-or-update#actions) to add static parameters and associated values to the webhook actions triggered by the alert.
-> - For more advanced customizations, use Logic Apps.
-> - The new alert rule wizard does not support customization of the email subject.
-> - Customers often use the custom email subject to indicate the resource on which the alert fired, instead of using the Log Analytics workspace. Use the [new API](alerts-unified-log.md#split-by-alert-dimensions) to trigger an alert of the desired resource using the resource id column.
-> - For more advanced customizations, use Logic Apps.
-## Manage alert rules in the Alerts portal
-
-> [!NOTE]
-> This section describes how to manage alert rules created in the latest UI or using an API version later than `2018-04-16`. See [View and manage alert rules created in previous versions](alerts-manage-alerts-previous-version.md) for information about how to view and manage alert rules created in the previous UI.
-
-1. In the [portal](https://portal.azure.com/), select the relevant resource.
-1. Under **Monitoring**, select **Alerts**.
-1. From the top command bar, select **Alert rules**.
-1. Select the alert rule that you want to edit.
-1. Edit any fields necessary, then select **Save** on the top command bar.
-## Manage log alerts using CLI
-
-This section describes how to manage log alerts using the cross-platform [Azure CLI](/cli/azure/get-started-with-azure-cli). The quickest way to start using the Azure CLI is through [Azure Cloud Shell](../../cloud-shell/overview.md). For this article, we'll use Cloud Shell.
-> [!NOTE]
-> Azure CLI support is only available for the scheduledQueryRules API version `2021-08-01` and later. Previous API versions can use the Azure Resource Manager CLI with templates as described below. If you use the legacy [Log Analytics Alert API](./api-alerts.md), you will need to switch to use CLI. [Learn more about switching](./alerts-log-api-switch.md).
--
-1. In the [portal](https://portal.azure.com/), select **Cloud Shell**.
-1. At the prompt, you can use commands with ``--help`` option to learn more about the command and how to use it. For example, the following command shows you the list of commands available for creating, viewing, and managing log alerts:
- ```azurecli
- az monitor scheduled-query --help
- ```
-1. You can create a log alert rule that monitors count of system event errors:
- ```azurecli
- az monitor scheduled-query create -g {ResourceGroup} -n {nameofthealert} --scopes {vm_id} --condition "count \'union Event, Syslog | where TimeGenerated > ago(1h) | where EventLevelName == \"Error\" or SeverityLevel== \"err\"\' > 2" --description {descriptionofthealert}
- ```
-1. You can view all the log alerts in a resource group using the following command:
- ```azurecli
- az monitor scheduled-query list -g {ResourceGroup}
- ```
-1. You can see the details of a particular log alert rule using the name or the resource ID of the rule:
- ```azurecli
- az monitor scheduled-query show -g {ResourceGroup} -n {AlertRuleName}
- ```
- ```azurecli
- az monitor scheduled-query show --ids {RuleResourceId}
- ```
-1. You can disable a log alert rule using the following command:
- ```azurecli
- az monitor scheduled-query update -g {ResourceGroup} -n {AlertRuleName} --disabled true
- ```
-1. You can delete a log alert rule using the following command:
- ```azurecli
- az monitor scheduled-query delete -g {ResourceGroup} -n {AlertRuleName}
- ```
-You can also use Azure Resource Manager CLI with [templates](./alerts-log-create-templates.md) files:
-```azurecli
-az login
-az deployment group create \
- --name AlertDeployment \
- --resource-group ResourceGroupofTargetResource \
- --template-file mylogalerttemplate.json \
- --parameters @mylogalerttemplate.parameters.json
-```
-On success for creation, 201 is returned. On success for update, 200 is returned.
-## Next steps
-
-* Learn about [Log alerts](alerts-types.md#log-alerts).
-* Create log alerts using [Azure Resource Manager Templates](./alerts-log-create-templates.md).
-* Understand [webhook actions for log alerts](./alerts-log-webhook.md).
-* Learn more about [log queries](../logs/log-query-overview.md).
azure-monitor Alerts Manage Alert Instances https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-manage-alert-instances.md
+
+ Title: Manage your alert instances
+description: The alerts page summarizes all alert instances in all your Azure resources generated in the last 30 days and allows you to manage your alert instances.
+ Last updated : 08/03/2022++
+# Manage your alert instances
+The alerts page summarizes all alert instances in all your Azure resources generated in the last 30 days. You can see all types of alerts from multiple subscriptions in a single pane. You can search for a specific alert and manage alert instances.
+
+There are a few ways to get to the alerts page:
+
+- From the home page in the [Azure portal](https://portal.azure.com/), select **Monitor** > **Alerts**.
+
+ :::image type="content" source="media/alerts-managing-alert-instances/alerts-monitor-menu.png" alt-text="Screenshot of the alerts link on the Azure monitor menu. ":::
+
+- From a specific resource, go to the **Monitoring** section, and choose **Alerts**. The landing page contains the alerts on that specific resource.
+
+ :::image type="content" source="media/alerts-managing-alert-instances/alerts-resource-menu.png" alt-text="Screenshot of the alerts link on the menu of a resource in the Azure portal.":::
+
+## The alerts summary pane
+
+The alerts summary pane summarizes the alerts fired in the last 24 hours. You can filter the list of alert instances by **time range**, **subscription**, **alert condition**, **severity**, and more. If you navigated to the alerts page by selecting a specific alert severity, the list is pre-filtered for that severity.
+
+To see more details about a specific alert instance, select the alert instance to open the **Alert Details** page.
+
+
+## The alerts details page
+
+The **alerts details** page provides details about the selected alert.
+
+ - To change the user response to the alert, select **Change user response**.
+ - To see all closed alerts, select the **History** tab.
+
+## Next steps
+
+- [Learn about Azure Monitor alerts](./alerts-overview.md)
+- [Create a new alert rule](alerts-log.md)
azure-monitor Alerts Manage Alert Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-manage-alert-rules.md
+
+ Title: Manage your alert rules
+description: Manage your alert rules in the Azure portal, or using the CLI or PowerShell.
+++ Last updated : 08/03/2022++
+# Manage your alert rules
+
+Manage your alert rules in the Azure portal, or by using the CLI or PowerShell.
+
+## Manage alert rules in the Azure portal
+
+1. In the [portal](https://portal.azure.com/), select **Monitor**, then **Alerts**.
+1. From the top command bar, select **Alert rules**. You'll see all of your alert rules across subscriptions. You can filter the list of rules using the available filters: **Resource group**, **Resource type**, **Resource**, and **Signal type**.
+1. Select the alert rule that you want to edit. You can select multiple alert rules and enable or disable them. Multi-selecting rules can be useful when you want to perform maintenance on specific resources.
+1. Edit any of the fields in the following sections. You can't edit the **Alert Rule Name**, **Scope**, or **Signal type** of an existing alert rule.
+ - **Condition**. Learn more about conditions for [metric alert rules](/azure/azure-monitor/alerts/alerts-create-new-alert-rule?tabs=metric#tabpanel_1_metric), [log alert rules](/azure/azure-monitor/alerts/alerts-create-new-alert-rule?tabs=log#tabpanel_1_log), and [activity log alert rules](/azure/azure-monitor/alerts/alerts-create-new-alert-rule?tabs=activity-log#tabpanel_1_activity-log).
+ - **Actions**
+ - **Alert rule details**
+1. Select **Save** on the top command bar.
+
+> [!NOTE]
+> This section describes how to manage alert rules created in the latest UI or using an API version later than `2018-04-16`. See [View and manage log alert rules created in previous versions](alerts-manage-alerts-previous-version.md) for information about how to view and manage log alert rules created in the previous UI.
+
+## Enable recommended alert rules in the Azure portal (preview)
+
+> [!NOTE]
+> The alert rule recommendations feature is currently in preview and is only enabled for VMs.
+
+If you don't have alert rules defined for the selected resource, either individually or as part of a resource group or subscription, you can [create a new alert rule](alerts-log.md#create-a-new-log-alert-rule-in-the-azure-portal), or enable recommended out-of-the-box alert rules in the Azure portal.
++
+The system compiles a list of recommended alert rules based on:
+- The resource provider's knowledge of important signals and thresholds for monitoring the resource.
+- Telemetry that tells us what customers commonly alert on for this resource.
+
+To enable recommended alert rules:
+
+1. On the **Alerts** page, select **Enable recommended alert rules**. The **Enable recommended alert rules** pane opens with a list of recommended alert rules based on your type of resource.
+1. In the **Alert me if** section, select all of the rules you want to enable. The rules are populated with the default values for the rule condition, such as the percentage of CPU usage that you want to trigger an alert. You can change the default values if you would like.
+1. In the **Notify me by** section, select the way you want to be notified if an alert is fired.
+1. Select **Enable**.
++
+## Manage metric alert rules with the Azure CLI
+
+This section describes how to manage metric alert rules using the cross-platform [Azure CLI](/cli/azure/get-started-with-azure-cli). The following examples use [Azure Cloud Shell](../../cloud-shell/overview.md).
+
+1. In the [portal](https://portal.azure.com/), select **Cloud Shell**.
+
+You can use commands with the ``--help`` option to learn more about the command and how to use it. For example, the following command shows you the list of commands available for creating, viewing, and managing metric alerts.
+
+```azurecli
+az monitor metrics alert --help
+```
+
+### View all the metric alerts in a resource group
+
+```azurecli
+az monitor metrics alert list -g {ResourceGroup}
+```
+
+### See the details of a particular metric alert rule
+
+Use the name or the resource ID of the rule in the following commands:
+
+```azurecli
+az monitor metrics alert show -g {ResourceGroup} -n {AlertRuleName}
+```
+
+```azurecli
+az monitor metrics alert show --ids {RuleResourceId}
+```
+
+### Disable a metric alert rule
+
+```azurecli
+az monitor metrics alert update -g {ResourceGroup} -n {AlertRuleName} --enabled false
+```
+
+### Delete a metric alert rule
+
+```azurecli
+az monitor metrics alert delete -g {ResourceGroup} -n {AlertRuleName}
+```
+
+## Manage metric alert rules with PowerShell
+
+Metric alert rules have these dedicated PowerShell cmdlets:
+
+- [Add-AzMetricAlertRuleV2](/powershell/module/az.monitor/add-azmetricalertrulev2): Create a new metric alert rule or update an existing one.
+- [Get-AzMetricAlertRuleV2](/powershell/module/az.monitor/get-azmetricalertrulev2): Get one or more metric alert rules.
+- [Remove-AzMetricAlertRuleV2](/powershell/module/az.monitor/remove-azmetricalertrulev2): Delete a metric alert rule.
+
+## Manage metric alert rules with REST API
+
+- [Create Or Update](/rest/api/monitor/metricalerts/createorupdate): Create a new metric alert rule or update an existing one.
+- [Get](/rest/api/monitor/metricalerts/get): Get a specific metric alert rule.
+- [List By Resource Group](/rest/api/monitor/metricalerts/listbyresourcegroup): Get a list of metric alert rules in a specific resource group.
+- [List By Subscription](/rest/api/monitor/metricalerts/listbysubscription): Get a list of metric alert rules in a specific subscription.
+- [Update](/rest/api/monitor/metricalerts/update): Update a metric alert rule.
+- [Delete](/rest/api/monitor/metricalerts/delete): Delete a metric alert rule.
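+If you prefer to call these operations directly from a shell, one option is `az rest`, which wraps authenticated calls to Azure Resource Manager. The following sketch of the List By Resource Group operation assumes the `2018-03-01` API version; the values in braces are placeholders:
+
+```azurecli
+# List all metric alert rules in a resource group through the REST API.
+az rest --method get \
+    --url "https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{ResourceGroup}/providers/Microsoft.Insights/metricAlerts?api-version=2018-03-01"
+```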
+
+## Manage log alert rules using the CLI
+
+This section describes how to manage log alerts using the cross-platform [Azure CLI](/cli/azure/get-started-with-azure-cli). The following examples use [Azure Cloud Shell](../../cloud-shell/overview.md).
+
+> [!NOTE]
+> Azure CLI support is only available for the scheduledQueryRules API version `2021-08-01` and later. Previous API versions can use the Azure Resource Manager CLI with templates as described below. If you use the legacy [Log Analytics Alert API](./api-alerts.md), you will need to switch to use CLI. [Learn more about switching](./alerts-log-api-switch.md).
++
+1. In the [portal](https://portal.azure.com/), select **Cloud Shell**.
+
+You can use commands with the ``--help`` option to learn more about the command and how to use it. For example, the following command shows you the list of commands available for creating, viewing, and managing log alerts.
+
+```azurecli
+az monitor scheduled-query --help
+```
+
+### View all the log alert rules in a resource group
+
+```azurecli
+az monitor scheduled-query list -g {ResourceGroup}
+```
+
+### See the details of a log alert rule
+
+Use the name or the resource ID of the rule in the following commands:
+
+```azurecli
+az monitor scheduled-query show -g {ResourceGroup} -n {AlertRuleName}
+```
+```azurecli
+az monitor scheduled-query show --ids {RuleResourceId}
+```
+
+### Disable a log alert rule
+
+```azurecli
+az monitor scheduled-query update -g {ResourceGroup} -n {AlertRuleName} --disabled true
+```
+
+### Delete a log alert rule
+
+```azurecli
+az monitor scheduled-query delete -g {ResourceGroup} -n {AlertRuleName}
+```
+
+### Manage log alert rules using the Azure Resource Manager CLI with [templates](./alerts-log-create-templates.md)
+
+```azurecli
+az login
+az deployment group create \
+ --name AlertDeployment \
+ --resource-group ResourceGroupofTargetResource \
+ --template-file mylogalerttemplate.json \
+ --parameters @mylogalerttemplate.parameters.json
+```
+
+A 201 response is returned on successful creation, and a 200 response on successful updates.
+
+## Manage activity log alert rules using PowerShell
+
+Activity log alerts have these dedicated PowerShell cmdlets:
+
+- [Set-AzActivityLogAlert](/powershell/module/az.monitor/set-azactivitylogalert): Creates a new activity log alert or updates an existing activity log alert.
+- [Get-AzActivityLogAlert](/powershell/module/az.monitor/get-azactivitylogalert): Gets one or more activity log alert resources.
+- [Enable-AzActivityLogAlert](/powershell/module/az.monitor/enable-azactivitylogalert): Enables an existing activity log alert and sets its tags.
+- [Disable-AzActivityLogAlert](/powershell/module/az.monitor/disable-azactivitylogalert): Disables an existing activity log alert and sets its tags.
+- [Remove-AzActivityLogAlert](/powershell/module/az.monitor/remove-azactivitylogalert): Removes an activity log alert.
+
+## Next steps
+
+- [Learn about Azure Monitor alerts](./alerts-overview.md)
+- [Create a new alert rule](alerts-log.md)
azure-monitor Alerts Metric https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-metric.md
- Title: "Create, view, and manage Metric Alerts Using Azure Monitor"
-description: Learn how to use Azure portal or CLI to create, view, and manage metric alert rules.
--- Previously updated : 2/23/2022---
-# Create, view, and manage metric alerts using Azure Monitor
-
-Metric alerts in Azure Monitor provide a way to get notified when one of your metrics crosses a threshold. Metric alerts work on a range of multi-dimensional platform metrics, custom metrics, and Application Insights standard and custom metrics. In this article, we will describe how to create, view, and manage metric alert rules through Azure portal and Azure CLI. You can also create metric alert rules using Azure Resource Manager templates, which are described in [a separate article](./alerts-metric-create-templates.md).
-
-You can learn more about how metric alerts work from [metric alerts overview](./alerts-metric-overview.md).
-
-## Create with Azure portal
-
-The following procedure describes how to create a metric alert rule in Azure portal:
-
-1. In [Azure portal](https://portal.azure.com), click on **All Services -> Monitor**. The Monitor blade consolidates all your monitoring settings and data in one view.
-
-2. Click **Alerts**, then expand the **+ Create** menu and select **Alert rule**.
-
- > [!TIP]
- > Most resource blades also have **Alerts** in their resource menu under **Monitoring**, you could create alert rules from there as well.
-
-3. In the **Scope** tab, click **Select scope**. Then, in the context pane that loads, select the target resource(s) that you want to alert on. Use **Filter by subscription**, **Filter by resource type**, and **Filter by location** drop-downs to find the resource you want to monitor. You can also use the search bar to find your resource.
-
-4. If the selected resource has metrics you can create alert rules on, **Available signal types** on the bottom right will include metrics. You can view the full list of resource types supported for metric alerts in this [article](./alerts-metric-near-real-time.md#metrics-and-dimensions-supported).
-
-5. Once you have selected a target resource, click **Done**.
-
-6. Proceed to the **Condition** tab. Then, in the context pane that loads, you will see a list of signals supported for the resource. Select the metric you want to create an alert on.
-
-7. You will see a chart showing the metric's behavior for the last six hours. Use the **Chart period** dropdown to see a longer history for the metric.
-
-8. If the metric has dimensions, you will see a dimensions table presented. Optionally, select one or more values per dimension.
- - The displayed dimension values are based on metric data from the last day.
- - If the dimension value you're looking for isn't displayed, click "Add custom value" to add a custom dimension value.
- - You can also choose **Select all current and future values** for any of the dimensions. This will dynamically scale the selection to all current and future values for the dimension.
-
- The metric alert rule will evaluate the condition for all combinations of values selected. [Learn more about how alerting on multi-dimensional metrics works](./alerts-metric-overview.md).
-
- > [!NOTE]
- > Using "All" as a dimension value is equivalent to choosing "Select all current and future values".
-
-9. Select the **Threshold** type, **Operator**, and **Aggregation type**. This will determine the logic that the metric alert rule will evaluate.
- - If you are using a **Static** threshold, continue to define a **Threshold value**. The metric chart can help determine what might be a reasonable threshold.
- - If you are using a **Dynamic** threshold, continue to define the **Threshold sensitivity**. The metric chart will display the calculated thresholds based on recent data. [Learn more about Dynamic Thresholds condition type and sensitivity options](../alerts/alerts-dynamic-thresholds.md).
-
-10. Optionally, refine the condition by adjusting **Aggregation granularity** and **Frequency of evaluation**.
-
-11. Click **Done**.
-
-12. Optionally, add another criteria if you want to monitor a complex alert rule. Currently users can have alert rules with Dynamic Thresholds criteria as a single criterion.
-
-13. Proceed to the **Actions** tab, where you can define what actions and notifications are triggered when the alert rule generates an alert. You can add an action group to the alert rule either by selecting an existing action group or by creating a new action group.
-
-14. Proceed to the **Details** tab. Under **Project details**, select the subscription and resource group in which the alert rule resource will be saved. Under **Alert rule details**, specify the **Severity** and **Alert rule name**. You can also provide an **Alert rule description**, select if the alert rule should be enabled when created, and if it should **Automatically resolve alerts** (instructs the alert rule to maintain a state, and not fire continuously if there's already a fired alert on the same condition).
-
-15. Proceed to the **Tags** tab, where you can set tags on the alert rule you're creating.
-16. Proceed to the **Review + create** tab, where you can review your selections before creating the alert rule. A quick automatic validation is also performed, notifying you if any information is missing or needs to be corrected. Once you're ready to create the alert rule, click **Create**.
--
-## View and manage with Azure portal
-
-You can view and manage metric alert rules using the Manage Rules blade under Alerts. The procedure below shows you how to view your metric alert rules and edit one of them.
-
-1. In Azure portal, navigate to **Monitor**.
-
-2. Click on **Alerts**, and then on **Alert rules**.
-
-3. In the **Alert rules** blade, you can view all your alert rules across subscriptions. You can further filter the rules using **Resource group**, **Resource type**, and **Resource**. If you want to see only metric alerts, select **Signal type** as *Metrics*.
-
- > [!TIP]
- > In the **Alert rules** blade, you can select multiple alert rules and enable/disable them. This might be useful when certain target resources need to be put under maintenance.
-
-4. Click on the name of the metric alert rule you want to edit.
-
-5. On this page, you can change various settings of the alert rule.
-
- > [!NOTE]
- > You can't edit the **Alert Rule Name** after the metric alert rule is created.
-
-6. Click **Save** to save your edits.
--
-## With Azure CLI
-
-The previous sections described how to create, view, and manage metric alert rules using Azure portal. This section describes how to do the same using the cross-platform [Azure CLI](/cli/azure/get-started-with-azure-cli). The quickest way to start using the Azure CLI is through [Azure Cloud Shell](../../cloud-shell/overview.md). For this article, we will use Cloud Shell.
-
-1. Go to Azure portal, click on **Cloud Shell**.
-
-2. At the prompt, you can use commands with ``--help`` option to learn more about the command and how to use it. For example, the following command shows you the list of commands available for creating, viewing, and managing metric alerts
-
- ```azurecli
- az monitor metrics alert --help
- ```
-
-3. You can create a simple metric alert rule that monitors if average Percentage CPU on a VM is greater than 90
-
- ```azurecli
- az monitor metrics alert create -n {nameofthealert} -g {ResourceGroup} --scopes {VirtualMachineResourceID} --condition "avg Percentage CPU > 90" --description {descriptionofthealert}
- ```
-
-4. You can view all the metric alerts in a resource group using the following command
-
- ```azurecli
- az monitor metrics alert list -g {ResourceGroup}
- ```
-
-5. You can see the details of a particular metric alert rule using the name or the resource ID of the rule.
-
- ```azurecli
- az monitor metrics alert show -g {ResourceGroup} -n {AlertRuleName}
- ```
-
- ```azurecli
- az monitor metrics alert show --ids {RuleResourceId}
- ```
-
-6. You can disable a metric alert rule using the following command.
-
- ```azurecli
- az monitor metrics alert update -g {ResourceGroup} -n {AlertRuleName} --enabled false
- ```
-
-7. You can delete a metric alert rule using the following command.
-
- ```azurecli
- az monitor metrics alert delete -g {ResourceGroup} -n {AlertRuleName}
- ```
-
-## With PowerShell
-
-Metric alert rules have dedicated PowerShell cmdlets available:
-- [Add-AzMetricAlertRuleV2](/powershell/module/az.monitor/add-azmetricalertrulev2): Create a new metric alert rule or update an existing one.
-- [Get-AzMetricAlertRuleV2](/powershell/module/az.monitor/get-azmetricalertrulev2): Get one or more metric alert rules.
-- [Remove-AzMetricAlertRuleV2](/powershell/module/az.monitor/remove-azmetricalertrulev2): Delete a metric alert rule.
-
-## With REST API
-- [Create Or Update](/rest/api/monitor/metricalerts/createorupdate): Create a new metric alert rule or update an existing one.
-- [Get](/rest/api/monitor/metricalerts/get): Get a specific metric alert rule.
-- [List By Resource Group](/rest/api/monitor/metricalerts/listbyresourcegroup): Get a list of metric alert rules in a specific resource group.
-- [List By Subscription](/rest/api/monitor/metricalerts/listbysubscription): Get a list of metric alert rules in a specific subscription.
-- [Update](/rest/api/monitor/metricalerts/update): Update a metric alert rule.
-- [Delete](/rest/api/monitor/metricalerts/delete): Delete a metric alert rule.
-
-## Next steps
-- [Create metric alerts using Azure Resource Manager Templates](./alerts-metric-create-templates.md)
-- [Understand how metric alerts work](./alerts-metric-overview.md)
-- [Understand how metric alerts with Dynamic Thresholds condition work](../alerts/alerts-dynamic-thresholds.md)
-- [Understand the web hook schema for metric alerts](./alerts-metric-near-real-time.md#payload-schema)
-- [Troubleshooting problems in metric alerts](./alerts-troubleshoot-metric.md)
azure-monitor Alerts Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-overview.md
description: Learn about Azure Monitor alerts, alert rules, action processing ru
Previously updated : 06/09/2022 Last updated : 07/19/2022 -+ # What are Azure Monitor Alerts?
You can alert on any metric or log data source in the Azure Monitor data platfor
This diagram shows you how alerts work:
-An **alert rule** monitors your telemetry and captures a signal that indicates that something is happening on a specified target. The alert rule captures the signal and checks to see if the signal meets the criteria of the condition. If the conditions are met, an alert is triggered, which initiates the associated action group and updates the state of the alert.
+An **alert rule** monitors your telemetry and captures a signal that indicates that something is happening on the specified resource. The alert rule captures the signal and checks to see if the signal meets the criteria of the condition. If the conditions are met, an alert is triggered, which initiates the associated action group and updates the state of the alert.
-You create an alert rule by combining:
+An alert rule combines:
+ - The resource(s) to be monitored
 - The signal or telemetry from the resource
 - Conditions

If you're monitoring more than one resource, the condition is evaluated separately for each of the resources and alerts are fired for each resource separately. Once an alert is triggered, the alert is made up of:
+ - **Alert processing rules** allow you to apply processing on fired alerts. Alert processing rules modify the fired alerts as they are being fired. You can use alert processing rules to add or suppress action groups, apply filters, or have the rule processed on a pre-defined schedule.
+ - **Action groups** can trigger notifications or an automated workflow to let users know that an alert has been triggered. Action groups can include:
  - Notification methods such as email, SMS, and push notifications.
  - Automation Runbooks
  - Azure Functions
Once an alert is triggered, the alert is made up of:
  - Secure webhooks
  - Webhooks
  - Event hubs
-- The **alert condition** is set by the system. When an alert fires, the alert's monitor condition is set to 'fired', and when the underlying condition that caused the alert to fire clears, the monitor condition is set to 'resolved'.
+- **Alert conditions** are set by the system. When an alert fires, the alert's monitor condition is set to 'fired', and when the underlying condition that caused the alert to fire clears, the monitor condition is set to 'resolved'.
- The **user response** is set by the user and doesn't change until the user changes it.

You can see all alert instances in all your Azure resources generated in the last 30 days on the **[Alerts page](alerts-page.md)** in the Azure portal.
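As an illustration of the action group element described above, here is a hedged Azure CLI sketch that creates an action group with a single email notification; all of the names and the email address are placeholders, not values from this article:

```azurecli
# Create an action group that sends an email when an alert fires.
az monitor action-group create \
    -n {ActionGroupName} \
    -g {ResourceGroup} \
    --short-name {ShortName} \
    --action email {ReceiverName} {EmailAddress}
```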
azure-monitor Alerts Page https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-page.md
- Title: View and manage your alert instances
-description: The alerts page summarizes all alert instances in all your Azure resources generated in the last 30 days.
- Previously updated : 2/23/2022--
-# View and manage your alert instances
-
-The alerts page summarizes all alert instances in all your Azure resources generated in the last 30 days. You can see all your different types of alerts from multiple subscriptions in a single pane, and you can find specific alert instances for troubleshooting purposes.
-
-You can get to the alerts page in any of the following ways:
--- From the home page in the [Azure portal](https://portal.azure.com/), select **Monitor** > **Alerts**. -
- :::image type="content" source="media/alerts-managing-alert-instances/alerts-monitor-menu.png" alt-text="Screenshot of alerts link on monitor menu. ":::
-
-- From a specific resource, go to the **Monitoring** section, and choose **Alerts**. The landing page is pre-filtered for alerts on that specific resource.-
- :::image type="content" source="media/alerts-managing-alert-instances/alerts-resource-menu.png" alt-text="Screenshot of alerts link on a resource's menu.":::
-## Alert rule recommendations (preview)
-
-> [!NOTE]
-> The alert rule recommendations feature is currently in preview and is only enabled for VMs.
-
-If you don't have alert rules defined for the selected resource, either individually or as part of a resource group or subscription, you can [create a new alert rule](alerts-log.md#create-a-new-log-alert-rule-in-the-azure-portal), or enable recommended out-of-the-box alert rules in the Azure portal.
--
-The system compiles a list of recommended alert rules based on:
-- The resource provider's knowledge of important signals and thresholds for monitoring the resource.
-- Telemetry that tells us what customers commonly alert on for this resource.
-
-To enable recommended alert rules:
-1. On the **Alerts** page, select **Enable recommended alert rules**. The **Enable recommended alert rules** pane opens with a list of recommended alert rules based on your type of resource.
-1. In the **Alert me if** section, select all of the rules you want to enable. The rules are populated with the default values for the rule condition, such as the percentage of CPU usage that you want to trigger an alert. You can change the default values if you would like.
-1. In the **Notify me by** section, select the way you want to be notified if an alert is fired.
-1. Select **Enable**.
--
-## The alerts summary pane
-
-If you have alerts configured for this resource, the alerts summary pane summarizes the alerts fired in the last 24 hours. You can modify the list of alert instances by selecting filters such as **time range**, **subscription**, **alert condition**, **severity**, and more. Select an alert instance.
-
-To see more details about a specific alert instance, select the alert instance to open the **Alert Details** page.
-> [!NOTE]
-> If you navigated to the alerts page by selecting a specific alert severity, the list is pre-filtered for that severity.
-
-
-## The alerts details page
-
- The **Alerts details** page provides details about the selected alert. Select **Change user response** to change the user response to the alert. You can see all closed alerts in the **History** tab.
--
-## Next steps
-
-- [Learn about Azure Monitor alerts](./alerts-overview.md)
-- [Create a new alert rule](alerts-log.md)
azure-monitor Alerts Types https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-types.md
The target of the metric alert rule can be:
When you create an alert rule for a single resource, you can apply multiple conditions. For example, you could create an alert rule to monitor an Azure virtual machine and alert when both "Percentage CPU is higher than 90%" and "Queue length is over 300 items". When an alert rule has multiple conditions, the alert fires when all the conditions in the alert rule are true and is resolved when at least one of the conditions is no longer true for three consecutive checks.

### Narrow the target using Dimensions
-Dimensions are name-value pairs that contain additional data about the metric value. Using dimensions allows you to filter the metrics and monitor specific time-series, instead of monitoring the aggregate of all the dimensional values.
-For example, the Transactions metric of a storage account can have an API name dimension that contains the name of the API called by each transaction (for example, GetBlob, DeleteBlob, PutPage). You can choose to have an alert fired when there is a high number of transactions in any API name (which is the aggregated data), or you can use dimensions to further break it down to alert only when the number of transactions is high for specific API names.
+Dimensions are name-value pairs that contain more data about the metric value. Using dimensions allows you to filter the metrics and monitor specific time-series, instead of monitoring the aggregate of all the dimensional values.
+For example, the Transactions metric of a storage account can have an API name dimension that contains the name of the API called by each transaction (for example, GetBlob, DeleteBlob, PutPage). You can choose to have an alert fired when there's a high number of transactions in any API name (which is the aggregated data), or you can use dimensions to further break it down to alert only when the number of transactions is high for specific API names.
If you use more than one dimension, the metric alert rule can monitor multiple dimension values from different dimensions of a metric. The alert rule separately monitors all the dimensions value combinations. See [this article](alerts-metric-multiple-time-series-single-rule.md) for detailed instructions on using dimensions in metric alert rules.
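As a hedged CLI sketch of the storage-account example above — the threshold and dimension values here are hypothetical, and the condition uses the `where ... includes` dimension syntax of `az monitor metrics alert create`:

```azurecli
# Alert only when the transaction count is high for specific API name dimension values.
az monitor metrics alert create \
    -n {AlertRuleName} \
    -g {ResourceGroup} \
    --scopes {StorageAccountResourceID} \
    --condition "total Transactions > 1000 where ApiName includes GetBlob or DeleteBlob" \
    --description "High transaction count for specific API names"
```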
Dynamic thresholds use advanced machine learning (ML) to:
Machine Learning continuously uses new data to learn more and make the threshold more accurate. Because the system adapts to the metrics' behavior over time, and alerts based on deviations from its pattern, you don't have to know the "right" threshold for each metric. Dynamic thresholds help you:
-- Create scalable alerts for hundreds of metric series with one alert rule. Fewer alert rules lead to less time that you have to spend on creating and managing alert rules.
+- Create scalable alerts for hundreds of metric series with one alert rule. If you have fewer alert rules, you spend less time creating and managing alert rules.
- Create rules without having to know what threshold to configure
- Configure metric alerts using high-level concepts without extensive domain knowledge about the metric
- Prevent noisy (low precision) or wide (low recall) thresholds that don't have an expected pattern
Dynamic thresholds help you:
See [this article](alerts-dynamic-thresholds.md) for detailed instructions on using dynamic thresholds in metric alert rules.

## Log alerts
-A log alert rule monitors a resource by using a Log Analytics query to evaluate resource logs at a set frequency. If the conditions are met, an alert is fired. Because you can use Log Analytics queries, log alerts allow you to perform advanced logic operations on your data and to use the robust features of KQL for data manipulation of log data.
+
+A log alert rule monitors a resource by using a Log Analytics query to evaluate resource logs at a set frequency. If the conditions are met, an alert is fired. Because you can use Log Analytics queries, you can perform advanced logic operations on your data and use the robust KQL features to manipulate log data.
The target of the log alert rule can be:
- A single resource, such as a VM.
You can configure if log alerts are [stateful or stateless](alerts-overview.md#a
> Log alerts work best when you are trying to detect specific data in the logs, as opposed to when you are trying to detect a **lack** of data in the logs. Since logs are semi-structured data, they are inherently more latent than metric data on information like a VM heartbeat. To avoid misfires when you are trying to detect a lack of data in the logs, consider using [metric alerts](#metric-alerts). You can send data to the metric store from logs using [metric alerts for logs](alerts-metric-logs.md).

### Dimensions in log alert rules
+
You can use dimensions when creating log alert rules to monitor the values of multiple instances of a resource with one rule. For example, you can monitor CPU usage on multiple instances running your website or app. Each instance is monitored individually, and notifications are sent for each instance.

### Splitting by dimensions in log alert rules
+
To monitor for the same condition on multiple Azure resources, you can use splitting by dimensions. Splitting by dimensions allows you to create resource-centric alerts at scale for a subscription or resource group. Alerts are split into separate alerts by grouping combinations using numerical or string columns. Splitting on the Azure resource ID column makes the specified resource into the alert target. You may also decide not to split when you want a condition applied to multiple resources in the scope. For example, if you want to fire an alert if at least five machines in the resource group scope have CPU usage over 80%.

### Using the API
+
Manage new rules in your workspaces using the [ScheduledQueryRules](/rest/api/monitor/scheduledqueryrule-2021-08-01/scheduled-query-rules) API.

> [!NOTE]
> Log alerts for Log Analytics used to be managed using the legacy [Log Analytics Alert API](api-alerts.md). Learn more about [switching to the current ScheduledQueryRules API](alerts-log-api-switch.md).

## Log alerts on your Azure bill
+
Log Alerts are listed under resource provider microsoft.insights/scheduledqueryrules with:

- Log Alerts on Application Insights shown with exact resource name along with resource group and alert properties.
- Log Alerts on Log Analytics shown with exact resource name along with resource group and alert properties, when created using the scheduledQueryRules API.
- Log alerts created from the [legacy Log Analytics API](./api-alerts.md) aren't tracked as [Azure Resources](../../azure-resource-manager/management/overview.md) and don't have enforced unique resource names. These alerts are still created on `microsoft.insights/scheduledqueryrules` as hidden resources, which have this resource naming structure: `<WorkspaceName>|<savedSearchId>|<scheduleId>|<ActionId>`. Log Alerts on the legacy API are shown with the above hidden resource name along with resource group and alert properties.
+
Activity log alert rules are Azure resources, so they can be created by using an
An activity log alert only monitors events in the subscription in which the alert is created. ## Smart Detection alerts+ After setting up Application Insights for your project, when your app generates a certain minimum amount of data, Smart Detection takes 24 hours to learn the normal behavior of your app. Your app's performance has a typical pattern of behavior. Some requests or dependency calls will be more prone to failure than others; and the overall failure rate may go up as load increases. Smart Detection uses machine learning to find these anomalies. Smart Detection monitors the data received from your app, and in particular the failure rates. Application Insights automatically alerts you in near real time if your web app experiences an abnormal rise in the rate of failed requests.
-As data comes into Application Insights from your web app, Smart Detection compares the current behavior with the patterns seen over the past few days. If there is an abnormal rise in failure rate compared to previous performance, an analysis is triggered. To help you triage and diagnose the problem, an analysis of the characteristics of the failures and related application data is provided in the alert details. There are also links to the Application Insights portal for further diagnosis. The feature needs no set-up nor configuration, as it uses machine learning algorithms to predict the normal failure rate.
+As data comes into Application Insights from your web app, Smart Detection compares the current behavior with the patterns seen over the past few days. If there's an abnormal rise in failure rate compared to previous performance, an analysis is triggered. To help you triage and diagnose the problem, an analysis of the characteristics of the failures and related application data is provided in the alert details. There are also links to the Application Insights portal for further diagnosis. The feature needs no set-up nor configuration, as it uses machine learning algorithms to predict the normal failure rate.
While metric alerts tell you there might be a problem, Smart Detection starts the diagnostic work for you, performing much of the analysis you would otherwise have to do yourself. You get the results neatly packaged, helping you to get quickly to the root of the problem.
-Smart detection works for any web app, hosted in the cloud or on your own servers, that generate application request or dependency data.
+Smart detection works for web apps hosted in the cloud or on your own servers that generate application requests or dependency data.
## Next steps - Get an [overview of alerts](alerts-overview.md).
azure-monitor Automate With Logic Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/automate-with-logic-apps.md
Title: Automate Azure Application Insights processes by using Logic Apps description: Learn how you can quickly automate repeatable processes by adding the Application Insights connector to your logic app. Previously updated : 03/11/2019 Last updated : 07/31/2022+++ # Automate Application Insights processes by using Logic Apps
Last updated 03/11/2019
Do you find yourself repeatedly running the same queries on your telemetry data to check whether your service is functioning properly? Are you looking to automate these queries for finding trends and anomalies and then build your own workflows around them? The Azure Application Insights connector for Logic Apps is the right tool for this purpose.

> [!NOTE]
-> The Azure Application Insights connector has been replaced with the [Azure Monitor connector](../logs/logicapp-flow-connector.md) that is integrated with Azure Active Directory instead of requiring an API key and also allows you to retrieve data from a Log Analytics workspace.
+> The Azure Application Insights connector has been replaced by the [Azure Monitor connector](../logs/logicapp-flow-connector.md), which is integrated with Azure Active Directory instead of requiring an API key and also allows you to retrieve data from a Log Analytics workspace.
With this integration, you can automate numerous processes without writing a single line of code. You can create a logic app with the Application Insights connector to quickly automate any Application Insights process.
-You can add additional actions as well. The Logic Apps feature of Azure App Service makes hundreds of actions available. For example, by using a logic app, you can automatically send an email notification or create a bug in Azure DevOps. You can also use one of the many available [templates](../../logic-apps/logic-apps-create-logic-apps-from-templates.md) to help speed up the process of creating your logic app.
+You can also add other actions. The Logic Apps feature of Azure App Service makes hundreds of actions available. For example, by using a logic app, you can automatically send an email notification or create a bug in Azure DevOps. You can also use one of the many available [templates](../../logic-apps/logic-apps-create-logic-apps-from-templates.md) to help speed up the process of creating your logic app.
## Create a logic app for Application Insights
In this tutorial, you learn how to create a logic app that uses the Analytics au
### Step 1: Create a logic app 1. Sign in to the [Azure portal](https://portal.azure.com).
-1. Click **Create a resource**, select **Web + Mobile**, and then select **Logic App**.
+1. Select **Create a resource** > **Web + Mobile** > **Logic App**.
- ![New logic app window](./media/automate-with-logic-apps/1createlogicapp.png)
+ ![Screenshot that shows the New logic app window.](./media/automate-with-logic-apps/1createlogicapp.png)
### Step 2: Create a trigger for your logic app
-1. In the **Logic App Designer** window, under **Start with a common trigger**, select **Recurrence**.
+1. In the **Logic Apps Designer** window, under **Start with a common trigger**, select **Recurrence**.
- ![Logic App Designer window](./media/automate-with-logic-apps/2logicappdesigner.png)
+ ![Screenshot that shows the Logic App Designer window.](./media/automate-with-logic-apps/2logicappdesigner.png)
1. In the **Interval** box, type **1**, and then in the **Frequency** box, select **Day**.
- ![Logic App Designer "Recurrence" window](./media/automate-with-logic-apps/3recurrence.png)
+ ![Screenshot that shows the Logic Apps Designer "Recurrence" window.](./media/automate-with-logic-apps/3recurrence.png)
### Step 3: Add an Application Insights action
-1. Click **New step**.
+1. Select **New step**.
1. In the **Choose an action** search box, type **Azure Application Insights**.
-1. Under **Actions**, click **Azure Application Insights - Visualize Analytics query**.
+1. Under **Actions**, select **Azure Application Insights - Visualize Analytics query**.
- ![Logic App Designer "Choose an action" window](./media/automate-with-logic-apps/4visualize.png)
+ ![Screenshot that shows the Logic App Designer "Choose an action" window.](./media/automate-with-logic-apps/4visualize.png)
### Step 4: Connect to an Application Insights resource
-To complete this step, you need an application ID and an API key for your resource. You can retrieve them from the Azure portal, as shown in the following diagram:
+To complete this step, you need an application ID and an API key for your resource:
-![Screenshot shows the API Access page in the Azure portal with the Create API key button selected.](./media/automate-with-logic-apps/5apiaccess.png)
+1. Select **API access** > **Create API key**:
-![Application ID in the Azure portal](./media/automate-with-logic-apps/6apikey.png)
+ ![Screenshot shows the API Access page in the Azure portal with the Create API key button selected.](./media/automate-with-logic-apps/5apiaccess.png)
+
+ ![Screenshot that shows the Application ID in the Azure portal.](./media/automate-with-logic-apps/6apikey.png)
-Provide a name for your connection, the application ID, and the API key.
+1. Provide a name for your connection, the application ID, and the API key.
-![Logic App Designer flow connection window](./media/automate-with-logic-apps/7connection.png)
+ ![Screenshot that shows the Logic App Designer flow connection window.](./media/automate-with-logic-apps/7connection.png)
### Step 5: Specify the Analytics query and chart type In the following example, the query selects the failed requests within the last day and correlates them with exceptions that occurred as part of the operation. Analytics correlates the failed requests, based on the operation_Id identifier. The query then segments the results by using the autocluster algorithm.
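As a rough sketch, a query matching that description could look like the following. This is an illustration only, assuming the standard Application Insights `requests` and `exceptions` tables; the exact query used in this walkthrough appears in the step below.

```Kusto
// Correlate failed requests with their exceptions and cluster common failure patterns
requests
| where timestamp > ago(1d)
| where success == false
| join kind=inner (
    exceptions
    | where timestamp > ago(1d)
) on operation_Id
| evaluate autocluster()
```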
-When you create your own queries, verify that they are working properly in Analytics before you add it to your flow.
+When you create your own queries, verify that they're working properly in Analytics before you add them to your flow.
1. In the **Query** box, add the following Analytics query:
When you create your own queries, verify that they are working properly in Analy
1. In the **Chart Type** box, select **Html Table**.
- ![Analytics query configuration window](./media/automate-with-logic-apps/8query.png)
+ ![Screenshot that shows the Analytics query configuration window.](./media/automate-with-logic-apps/8query.png)
### Step 6: Configure the logic app to send email
-1. Click **New step**.
+1. Select **New step**.
1. In the search box, type **Office 365 Outlook**.
-1. Click **Office 365 Outlook - Send an email**.
+1. Select **Office 365 Outlook - Send an email**.
- ![Office 365 Outlook selection](./media/automate-with-logic-apps/9sendemail.png)
+ ![Screenshot that shows the Send an email button on the Office 365 Outlook screen.](./media/automate-with-logic-apps/9sendemail.png)
-1. In the **Send an email** window, do the following:
+1. In the **Send an email** window:
a. Type the email address of the recipient. b. Type a subject for the email.
- c. Click anywhere in the **Body** box and then, on the dynamic content menu that opens at the right, select **Body**.
+ c. Select anywhere in the **Body** box and then, on the dynamic content menu that opens at the right, select **Body**.
- d. Click the **Add new parameter** drop down and select Attachments and Is HTML.
+ d. Select the **Add new parameter** dropdown and select **Attachments** and **Is HTML**.
- ![Screenshot shows the Send an email window with the Body box highlighted and the Dynamic content menu with Body highlighted on the right side.](./media/automate-with-logic-apps/10emailbody.png)
+ ![Screenshot that shows the Send an email window with the Body box highlighted and the Dynamic content menu with Body highlighted on the right side.](./media/automate-with-logic-apps/10emailbody.png)
- ![Office 365 Outlook configuration](./media/automate-with-logic-apps/11emailparameter.png)
+ ![Screenshot that shows the Add new parameter dropdown in the Send an email window with the Attachments and Is HTML checkboxes selected.](./media/automate-with-logic-apps/11emailparameter.png)
-1. On the dynamic content menu, do the following:
+1. On the dynamic content menu:
a. Select **Attachment Name**.
When you create your own queries, verify that they are working properly in Analy
c. In the **Is HTML** box, select **Yes**.
- ![Office 365 email configuration screen](./media/automate-with-logic-apps/12emailattachment.png)
+ ![Screenshot that shows the Office 365 email configuration screen.](./media/automate-with-logic-apps/12emailattachment.png)
### Step 7: Save and test your logic app
-* Click **Save** to save your changes.
-
-You can wait for the trigger to run the logic app, or you can run the logic app immediately by selecting **Run**.
-![Logic app creation screen](./media/automate-with-logic-apps/13save.png)
+1. Select **Save** to save your changes.
-When your logic app runs, the recipients you specified in the email list will receive an email that looks like the following:
-
-![Logic app email message](./media/automate-with-logic-apps/flow9.png)
+ You can wait for the trigger to run the logic app, or you can run the logic app immediately by selecting **Run**.
+
+ ![Screenshot that shows the Save button on the Logic Apps Designer screen](./media/automate-with-logic-apps/13save.png)
+
+ When your logic app runs, the recipients you specified in the email list will receive an email that looks like this:
+
+ ![Screenshot that shows an email message generated by the logic app with the query result set.](./media/automate-with-logic-apps/email-generated-by-logic-app-generated-email.png)
+ > [!NOTE]
+ > The logic app generates an email with a JPG file that depicts the query result set. If your query doesn't return results, the logic app won't create a JPG file.
+
## Next steps - Learn more about creating [Analytics queries](../logs/get-started-queries.md).
When your logic app runs, the recipients you specified in the email list will re
-<!--Link references-->
-
azure-monitor Collect Custom Metrics Linux Telegraf https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/collect-custom-metrics-linux-telegraf.md
# Collect custom metrics for a Linux VM with the InfluxData Telegraf agent
-By using Azure Monitor, you can collect custom metrics via your application telemetry, an agent running on your Azure resources, or even outside-in monitoring systems. Then you can submit them directly to Azure Monitor. This article provides instructions on how to deploy the [InfluxData](https://www.influxdata.com/) Telegraf agent on a Linux VM in Azure and configure the agent to publish metrics to Azure Monitor.
+This article explains how to deploy and configure the [InfluxData](https://www.influxdata.com/) Telegraf agent on a Linux virtual machine to send metrics to Azure Monitor.
## InfluxData Telegraf agent
azure-monitor Metrics Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/metrics-troubleshoot.md
By default, Guest (classic) metrics are stored in Azure Storage account, which y
1. Confirm that [Azure Diagnostic Extension](../agents/diagnostics-extension-overview.md) is enabled and configured to collect metrics. > [!WARNING]
- > You cannot use [Log Analytics agent](../agents/agents-overview.md#log-analytics-agent) (also referred to as the Microsoft Monitoring Agent, or "MMA") to send **Guest (classic)** into a storage account.
+ > You cannot use [Log Analytics agent](../agents/log-analytics-agent.md) (also referred to as the Microsoft Monitoring Agent, or "MMA") to send **Guest (classic)** into a storage account.
1. Ensure that **Microsoft.Insights** resource provider is [registered for your subscription](#microsoftinsights-resource-provider-isnt-registered-for-your-subscription).
azure-monitor Azure Sql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/insights/azure-sql.md
description: Azure SQL Analytics solution helps you manage your Azure SQL databa
Previously updated : 03/10/2022 Last updated : 07/29/2022 # Monitor Azure SQL Database using Azure SQL Analytics (preview)
+**APPLIES TO:** Azure SQL Database, Azure SQL Managed Instance
> [!CAUTION] > Azure SQL Analytics (preview) is an integration with Azure Monitor, where many monitoring solutions are no longer in active development. For more monitoring options, see [Monitoring and performance tuning in Azure SQL Database and Azure SQL Managed Instance](/azure/azure-sql/database/monitor-tune-overview).
While Azure SQL Analytics (preview) is free to use, consumption of diagnostics t
- Use [log queries](../logs/log-query-overview.md) in Azure Monitor to view detailed Azure SQL data. - [Create your own dashboards](../visualize/tutorial-logs-dashboards.md) showing Azure SQL data. - [Create alerts](../alerts/alerts-overview.md) when specific Azure SQL events occur.
+- [Monitor Azure SQL Database with Azure Monitor](/azure/azure-sql/database/monitoring-sql-database-azure-monitor)
+- [Monitor Azure SQL Managed Instance with Azure Monitor](/azure/azure-sql/database/monitoring-sql-managed-instance-azure-monitor)
azure-monitor Logicapp Flow Connector https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/logicapp-flow-connector.md
Title: Use Azure Monitor Logs with Azure Logic Apps and Power Automate
description: Learn how you can use Azure Logic Apps and Power Automate to quickly automate repeatable processes by using the Azure Monitor connector. --++ Last updated 03/22/2022 # Azure Monitor Logs connector for Logic Apps and Power Automate
-[Azure Logic Apps](../../logic-apps/index.yml) and [Power Automate](https://flow.microsoft.com) allow you to create automated workflows using hundreds of actions for a variety of services. The Azure Monitor Logs connector allows you to build workflows that retrieve data from a Log Analytics workspace or an Application Insights application in Azure Monitor. This article describes the actions included with the connector and provides a walkthrough to build a workflow using this data.
+[Azure Logic Apps](../../logic-apps/index.yml) and [Power Automate](https://flow.microsoft.com) allow you to create automated workflows using hundreds of actions for various services. The Azure Monitor Logs connector allows you to build workflows that retrieve data from a Log Analytics workspace or an Application Insights application in Azure Monitor. This article describes the actions included with the connector and provides a walkthrough to build a workflow using this data.
For example, you can create a logic app to use Azure Monitor log data in an email notification from Office 365, create a bug in Azure DevOps, or post a Slack message. You can trigger a workflow on a simple schedule or from an action in a connected service, such as when an email or a tweet is received. ## Connector limits The Azure Monitor Logs connector has these limits:
-* Max query response size: ~16.7 MB (16 MiB). The connector infrastructure dictates that size limit is set lower than query API limit
-* Max number of records: 500,000
-* Max connector timeout: 110 second
-* Max query timeout: 100 second
-* Visualization in Logs page and the connector are using different charting libraries and some functionality isn't available in the connector currently
+* Max query response size: ~16.7 MB (16 MiB). The connector infrastructure dictates that the size limit is set lower than the query API limit.
+* Max number of records: 500,000.
+* Max connector timeout: 110 seconds.
+* Max query timeout: 100 seconds.
+* Visualizations in the Logs page and the connector use different charting libraries and some functionality isn't available in the connector currently.
The connector may reach limits depending on the query you use and the size of the results. You can often avoid such cases by adjusting the flow recurrence to run more frequently over a smaller time range, or by aggregating data to reduce the size of the results. Frequent queries with intervals lower than 120 seconds aren't recommended due to caching.
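For example, a query can pre-aggregate on the service side so that far fewer records cross the connector. A minimal sketch, using the same `Event` table as the walkthrough later in this article:

```Kusto
// Pre-aggregate to stay under the connector's record and size limits
Event
| where EventLevelName == "Error"
| where TimeGenerated > ago(1h)
| summarize TotalErrors = count() by Computer, bin(TimeGenerated, 5m)
```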
The following table describes the actions included with the Azure Monitor Logs c
| Action | Description | |:|:| | [Run query and list results](/connectors/azuremonitorlogs/#run-query-and-list-results) | Returns each row as its own object. Use this action when you want to work with each row separately in the rest of the workflow. The action is typically followed by a [For each activity](../../logic-apps/logic-apps-control-flow-loops.md#foreach-loop). |
-| [Run query and and visualize results](/connectors/azuremonitorlogs/#run-query-and-visualize-results) | Returns all rows in the result set as a single formatted object. Use this action when you want to use the result set together in the rest of the workflow, such as sending the results in a mail. |
+| [Run query and visualize results](/connectors/azuremonitorlogs/#run-query-and-visualize-results) | Returns a JPG file that depicts the query result set. This action lets you use the result set in the rest of the workflow by sending the results in an email, for example. The action only returns a JPG file if the query returns results.|
## Walkthroughs
-The following tutorials illustrate the use of the Azure Monitor connectors in Azure Logic Apps. You can perform these same example with Power Automate, the only difference being how to you create the initial workflow and run it when complete. Configuration of the workflow and actions is the same between both. See [Create a flow from a template in Power Automate](/power-automate/get-started-logic-template) to get started.
+The following tutorial illustrates the use of the Azure Monitor Logs connector in Azure Logic Apps. You can perform the same tutorial with Power Automate, the only difference being how you create the initial workflow and run it when complete. You configure the workflow and actions in the same way for both Logic Apps and Power Automate. See [Create a flow from a template in Power Automate](/power-automate/get-started-logic-template) to get started.
### Create a Logic App
-Go to **Logic Apps** in the Azure portal and click **Add**. Select a **Subscription**, **Resource group**, and **Region** to store the new logic app and then give it a unique name. You can turn on **Log Analytics** setting to collect information about runtime data and events as described in [Set up Azure Monitor logs and collect diagnostics data for Azure Logic Apps](../../logic-apps/monitor-logic-apps-log-analytics.md). This setting isn't required for using the Azure Monitor Logs connector.
+1. Go to **Logic Apps** in the Azure portal and select **Add**.
+1. Select a **Subscription**, **Resource group**, and **Region** to store the new logic app and then give it a unique name. You can turn on the **Log Analytics** setting to collect information about runtime data and events as described in [Set up Azure Monitor logs and collect diagnostics data for Azure Logic Apps](../../logic-apps/monitor-logic-apps-log-analytics.md). This setting isn't required for using the Azure Monitor Logs connector.
-![Create logic app](media/logicapp-flow-connector/create-logic-app.png)
+ ![Screenshot that shows the Basics tab on the Logic App creation screen.](media/logicapp-flow-connector/create-logic-app.png)
-Click **Review + create** and then **Create**. When the deployment is complete, click **Go to resource** to open the **Logic Apps Designer**.
+1. Select **Review + create** > **Create**.
+1. When the deployment is complete, select **Go to resource** to open the **Logic Apps Designer**.
### Create a trigger for the logic app
-Under **Start with a common trigger**, select **Recurrence**. This creates a logic app that automatically runs at a regular interval. In the **Frequency** box of the action, select **Day** and in the **Interval** box, enter **1** to run the workflow once per day.
+1. Under **Start with a common trigger**, select **Recurrence**.
-![Recurrence action](media/logicapp-flow-connector/recurrence-action.png)
+ This creates a logic app that automatically runs at a regular interval.
+
+1. In the **Frequency** box of the action, select **Day** and in the **Interval** box, enter **1** to run the workflow once per day.
+
+ ![Screenshot that shows the Logic Apps Designer "Recurrence" window on which you can set the interval and frequency at which the logic app runs.](media/logicapp-flow-connector/recurrence-action.png)
## Walkthrough: Mail visualized results
-The following tutorial shows you how to create a logic app that sends the results of an Azure Monitor log query by email.
+This tutorial shows how to create a logic app that sends the results of an Azure Monitor log query by email.
### Add Azure Monitor Logs action
-Click **+ New step** to add an action that runs after the recurrence action. Under **Choose an action**, type **azure monitor** and then select **Azure Monitor Logs**.
+1. Select **+ New step** to add an action that runs after the recurrence action.
+1. Under **Choose an action**, type **azure monitor** and then select **Azure Monitor Logs**.
-![Azure Monitor Logs action](media/logicapp-flow-connector/select-azure-monitor-connector.png)
+ ![Screenshot that shows the Logic App Designer "Choose an action" window.](media/logicapp-flow-connector/select-azure-monitor-connector.png)
-Click **Azure Log Analytics ΓÇô Run query and visualize results**.
-
-![Screenshot of a new action being added to a step in the Logic App Designer. Azure Monitor Logs is highlighted under Choose an action.](media/logicapp-flow-connector/select-query-action-visualize.png)
+1. Select **Azure Log Analytics – Run query and visualize results**.
+ ![Screenshot of a new action being added to a step in the Logic Apps Designer. Azure Monitor Logs is highlighted under Choose an action.](media/logicapp-flow-connector/select-query-action-visualize.png)
### Add Azure Monitor Logs action
-Select the **Subscription** and **Resource Group** for your Log Analytics workspace. Select *Log Analytics Workspace* for the **Resource Type** and then select the workspace's name under **Resource Name**.
-
-Add the following log query to the **Query** window.
+1. Select the **Subscription** and **Resource Group** for your Log Analytics workspace.
+1. Select *Log Analytics Workspace* for the **Resource Type** and then select the workspace's name under **Resource Name**.
+1. Add the following log query to the **Query** window.
-```Kusto
-Event
-| where EventLevelName == "Error"
-| where TimeGenerated > ago(1day)
-| summarize TotalErrors=count() by Computer
-| sort by Computer asc
-```
+ ```Kusto
+ Event
+ | where EventLevelName == "Error"
+ | where TimeGenerated > ago(1day)
+ | summarize TotalErrors=count() by Computer
+ | sort by Computer asc
+ ```
-Select *Set in query* for the **Time Range** and **HTML Table** for the **Chart Type**.
+1. Select *Set in query* for the **Time Range** and **HTML Table** for the **Chart Type**.
-![Screenshot of the settings for the new Azure Monitor Logs action named Run query and visualize results.](media/logicapp-flow-connector/run-query-visualize-action.png)
-
-The mail will be sent by the account associated with the current connection. You can specify another account by clicking on **Change connection**.
-
+ ![Screenshot of the settings for the new Azure Monitor Logs action named Run query and visualize results.](media/logicapp-flow-connector/run-query-visualize-action.png)
+
+ The account associated with the current connection sends the email. To specify another account, select **Change connection**.
+
### Add email action
-Click **+ New step**, and then click **+ Add an action**. Under **Choose an action**, type **outlook** and then select **Office 365 Outlook**.
+1. Select **+ New step** > **+ Add an action**.
+1. Under **Choose an action**, type **outlook** and then select **Office 365 Outlook**.
-![Select Outlook connector](media/logicapp-flow-connector/select-outlook-connector.png)
+ ![Screenshot that shows the Logic App Designer "Choose an action" window with the Office 365 Outlook button highlighted.](media/logicapp-flow-connector/select-outlook-connector.png)
-Select **Send an email (V2)**.
+1. Select **Send an email (V2)**.
-![Office 365 Outlook selection window](media/logicapp-flow-connector/select-mail-action.png)
+ ![Screenshot of a new action being added to a step in the Logic Apps Designer. Send an email (V2) is highlighted under Choose an action.](media/logicapp-flow-connector/select-mail-action.png)
-Click anywhere in the **Body** box to open a **Dynamic content** window opens with values from the previous actions in the logic app. Select **See more** and then **Body** which is the results of the query in the Log Analytics action.
+1. Select anywhere in the **Body** box to open a **Dynamic content** window with values from the previous actions in the logic app.
+1. Select **See more** and then **Body**, which contains the results of the query in the Log Analytics action.
-![Select body](media/logicapp-flow-connector/select-body.png)
+ ![Screenshot of the settings for the new Send an email (V2) action, showing the body of the email being defined.](media/logicapp-flow-connector/select-body.png)
-Specify the email address of a recipient in the **To** window and a subject for the email in **Subject**.
-
-![Mail action](media/logicapp-flow-connector/mail-action.png)
+1. Specify the email address of a recipient in the **To** window and a subject for the email in **Subject**.
+ ![Screenshot of the settings for the new Send an email (V2) action, showing the subject line and email recipients being defined.](media/logicapp-flow-connector/mail-action.png)
### Save and test your logic app
-Click **Save** and then **Run** to perform a test run of the logic app.
-
-![Save and run](media/logicapp-flow-connector/save-run.png)
+1. Select **Save** and then **Run** to perform a test run of the logic app.
+ ![Screenshot that shows the Save and Run buttons on the Logic Apps Designer screen.](media/logicapp-flow-connector/save-run.png)
-When the logic app completes, check the mail of the recipient that you specified. You should have received a mail with a body similar to the following:
-![Sample email](media/logicapp-flow-connector/sample-mail.png)
+ When the logic app completes, check the mailbox of the recipient that you specified. You should receive an email with a body similar to the following:
+ ![An image of a sample email.](media/logicapp-flow-connector/sample-mail.png)
+ > [!NOTE]
+ > The logic app generates an email with a JPG file that depicts the query result set. If your query doesn't return results, the logic app won't create a JPG file.
## Next steps
azure-monitor Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Monitor description: Lists Azure Policy Regulatory Compliance controls available for Azure Monitor. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 08/01/2022 Last updated : 08/04/2022
azure-monitor Monitor Virtual Machine Alerts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/monitor-virtual-machine-alerts.md
alertsmanagementresources
## Common alert rules The following section lists common alert rules for virtual machines in Azure Monitor. Details for metric alerts and log metric measurement alerts are provided for each. For guidance on which type of alert to use, see [Choose the alert type](#choose-the-alert-type).
-If you're unfamiliar with the process for creating alert rules in Azure Monitor, see the following articles for guidance:
--- [Create, view, and manage metric alerts using Azure Monitor](../alerts/alerts-metric.md)-- [Create, view, and manage log alerts using Azure Monitor](../alerts/alerts-log.md)
+If you're unfamiliar with the process for creating alert rules in Azure Monitor, see the [instructions to create a new alert rule](../alerts/alerts-create-new-alert-rule.md).
### Machine unavailable The most basic requirement is to send an alert when a machine is unavailable. It could be stopped, the guest operating system could be unresponsive, or the agent could be unresponsive. There are various ways to configure this alerting, but the most common is to use the heartbeat sent from the Log Analytics agent.
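A minimal sketch of such a heartbeat query, assuming the `Heartbeat` table populated by the agent and a 15-minute threshold:

```Kusto
// Machines whose last heartbeat is older than 15 minutes
Heartbeat
| summarize LastHeartbeat = max(TimeGenerated) by Computer
| where LastHeartbeat < ago(15m)
```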
Here's a walk-through of creating a log alert for when the CPU of a virtual mach
|Operator |The operator to compare the metric value against the threshold|Greater than|
|Threshold value| The value that the result is measured against.|80|
|Frequency of evaluation|How often the alert rule should run. A frequency smaller than the aggregation granularity results in a sliding window evaluation.|15 minutes|

A sketch of a query matching these settings is shown after the steps below.
- 1. (Optional) In the **Advanced options** section, set the [Number of violations to trigger alert](../alerts/alerts-unified-log.md#number-of-violations-to-trigger-alert).
- :::image type="content" source="../alerts/media/alerts-log/alerts-rule-preview-advanced-options.png" alt-text="Screenshot of alerts rule preview advanced options.":::
+ 1. (Optional) In the **Advanced options** section, set the **Number of violations to trigger alert**.
+ :::image type="content" source="../alerts/media/alerts-create-new-alert-rule/alerts-rule-preview-advanced-options.png" alt-text="Screenshot of alerts rule preview advanced options.":::
1. The **Preview** chart shows query evaluation results over time. You can change the chart period or select different time series that resulted from unique alert splitting by dimensions.
- :::image type="content" source="../alerts/media/alerts-log/alerts-create-alert-rule-preview.png" alt-text="Screenshot of alerts rule preview.":::
+ :::image type="content" source="../alerts/media/alerts-create-new-alert-rule/alerts-create-alert-rule-preview.png" alt-text="Screenshot of alerts rule preview.":::
1. From this point on, you can select the **Review + create** button at any time. 1. In the **Actions** tab, select or create the required [action groups](../alerts/action-groups.md).
- :::image type="content" source="../alerts/media/alerts-log/alerts-rule-actions-tab.png" alt-text="Screenshot of alerts rule preview actions tab.":::
+ :::image type="content" source="../alerts/media/alerts-create-new-alert-rule/alerts-rule-actions-tab.png" alt-text="Screenshot of alerts rule preview actions tab.":::
1. In the **Details** tab, define the **Project details** and the **Alert rule details**.
- 1. (Optional) In the **Advanced options** section, you can set several options, including whether to **Enable upon creation**, or to [**mute actions**](../alerts/alerts-unified-log.md#state-and-resolving-alerts) for a period after the alert rule fires.
- :::image type="content" source="../alerts/media/alerts-log/alerts-rule-details-tab.png" alt-text="Screenshot of alerts rule preview details tab.":::
+ 1. (Optional) In the **Advanced options** section, you can set several options, including whether to **Enable upon creation**, or to **mute actions** for a period after the alert rule fires.
+ :::image type="content" source="../alerts/media/alerts-create-new-alert-rule/alerts-log-rule-details-tab.png" alt-text="Screenshot of alerts rule preview details tab.":::
> [!NOTE] > If you or your administrator assigned the Azure Policy **Azure Log Search Alerts over Log Analytics workspaces should use customer-managed keys**, you must select the **Check workspace linked storage** option in **Advanced options**, or the rule creation will fail because it won't meet the policy requirements. 1. In the **Tags** tab, set any required tags on the alert rule resource.
- :::image type="content" source="../alerts/media/alerts-log/alerts-rule-tags-tab.png" alt-text="Screenshot of alerts rule preview tags tab.":::
+ :::image type="content" source="../alerts/media/alerts-create-new-alert-rule/alerts-rule-tags-tab.png" alt-text="Screenshot of alerts rule preview tags tab.":::
1. In the **Review + create** tab, a validation will run and inform you of any issues. 1. When validation passes and you have reviewed the settings, click the **Create** button.
- :::image type="content" source="../alerts/media/alerts-log/alerts-rule-review-create.png" alt-text="Screenshot of alerts rule preview review and create tab.":::
+ :::image type="content" source="../alerts/media/alerts-create-new-alert-rule/alerts-rule-review-create.png" alt-text="Screenshot of alerts rule preview review and create tab.":::
## Next steps * [Monitor workloads running on virtual machines.](monitor-virtual-machine-workloads.md)
azure-monitor Monitor Virtual Machine https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/monitor-virtual-machine.md
Any monitoring tool, such as Azure Monitor, requires an agent installed on a mac
> [!NOTE] > The Azure Monitor agent will completely replace the Log Analytics agent, diagnostic extension, and Telegraf agent once it gains required functionality. These other agents are still required for features such as VM insights, Microsoft Defender for Cloud, and Microsoft Sentinel. -- [Azure Monitor agent](../agents/agents-overview.md#azure-monitor-agent): Supports virtual machines in Azure, other cloud environments, and on-premises. Sends data to Azure Monitor Metrics and Logs. When it fully supports VM insights, Microsoft Defender for Cloud, and Microsoft Sentinel, then it will completely replace the Log Analytics agent and diagnostic extension.-- [Log Analytics agent](../agents/agents-overview.md#log-analytics-agent): Supports virtual machines in Azure, other cloud environments, and on-premises. Sends data to Azure Monitor Logs. Supports VM insights and monitoring solutions. This agent is the same agent used for System Center Operations Manager.
+- [Azure Monitor agent](../agents/agents-overview.md): Supports virtual machines in Azure, other cloud environments, and on-premises. Sends data to Azure Monitor Metrics and Logs. When it fully supports VM insights, Microsoft Defender for Cloud, and Microsoft Sentinel, it will completely replace the Log Analytics agent and diagnostic extension.
+- [Log Analytics agent](../agents/log-analytics-agent.md): Supports virtual machines in Azure, other cloud environments, and on-premises. Sends data to Azure Monitor Logs. Supports VM insights and monitoring solutions. This agent is the same agent used for System Center Operations Manager.
- [Dependency agent](vminsights-dependency-agent-maintenance.md): Collects data about the processes running on the virtual machine and their dependencies. Relies on the Log Analytics agent to transmit data into Azure and supports VM insights, Service Map, and Wire Data 2.0 solutions.-- [Azure Diagnostic extension](../agents/agents-overview.md#azure-diagnostics-extension): Available for Azure Monitor virtual machines only. Can send data to Azure Event Hubs and Azure Storage.
+- [Azure Diagnostic extension](../agents/diagnostics-extension-overview.md): Available for Azure Monitor virtual machines only. Can send data to Azure Event Hubs and Azure Storage.
## Next steps
azure-netapp-files Performance Oracle Single Volumes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/performance-oracle-single-volumes.md
na Previously updated : 09/30/2020 Last updated : 08/04/2022 # Oracle database performance on Azure NetApp Files single volumes
This article addresses the following topics about Oracle in the cloud. These top
* What is the difference in performance between the regular Linux kernel NFS (kNFS) client and Oracle's own Direct NFS client? * As far as bandwidth is concerned, is the performance of a single Azure NetApp Files volume enough? + ## Testing environment and components The following diagram illustrates the environment used for testing. For consistency and simplicity, Ansible playbooks were used to deploy all elements of the test bed.
azure-netapp-files Solutions Benefits Azure Netapp Files Oracle Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/solutions-benefits-azure-netapp-files-oracle-database.md
na Previously updated : 04/23/2020 Last updated : 08/04/2022 # Benefits of using Azure NetApp Files with Oracle Database Oracle Direct NFS (dNFS) makes it possible to drive higher performance than the operating system's own NFS driver. This article explains the technology and provides a performance comparison between dNFS and the traditional NFS client (Kernel NFS). It also shows the advantages and the ease of using dNFS with Azure NetApp Files. + ## How Oracle Direct NFS works The following summary explains how Oracle Direct NFS works at a high level:
azure-resource-manager Add Template To Azure Pipelines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/add-template-to-azure-pipelines.md
Title: CI/CD with Azure Pipelines and Bicep files description: In this quickstart, you learn how to configure continuous integration in Azure Pipelines by using Bicep files. It shows how to use an Azure CLI task to deploy a Bicep file. Previously updated : 02/23/2022 Last updated : 08/03/2022 # Quickstart: Integrate Bicep with Azure Pipelines
You need a [Bicep file](./quickstart-create-bicep-use-visual-studio-code.md) tha
![Select pipeline](./media/add-template-to-azure-pipelines/select-pipeline.png)
-## Azure CLI task
+## Deploy Bicep files
+
+You can use the Azure Resource Group Deployment task or the Azure CLI task to deploy a Bicep file.
+
+### Use Azure Resource Group Deployment task
+
+Replace your starter pipeline with the following YAML. It creates a resource group and deploys a Bicep file by using an [Azure Resource Group Deployment task](/azure/devops/pipelines/tasks/deploy/azure-resource-group-deployment):
+
+```yml
+trigger:
+- master
+
+name: Deploy Bicep files
+
+variables:
+ vmImageName: 'ubuntu-latest'
+
+ azureServiceConnection: '<your-connection-name>'
+ resourceGroupName: 'exampleRG'
+ location: '<your-resource-group-location>'
+ templateFile: './main.bicep'
+pool:
+ vmImage: $(vmImageName)
+
+steps:
+- task: AzureResourceManagerTemplateDeployment@3
+ inputs:
+ deploymentScope: 'Resource Group'
+ azureResourceManagerConnection: '$(azureServiceConnection)'
+ action: 'Create Or Update Resource Group'
+ resourceGroupName: '$(resourceGroupName)'
+ location: '$(location)'
+ templateLocation: 'Linked artifact'
+ csmFile: '$(templateFile)'
+ overrideParameters: '-storageAccountType Standard_LRS'
+ deploymentMode: 'Incremental'
+ deploymentName: 'DeployPipelineTemplate'
+```
+
+For the descriptions of the task inputs, see [Azure Resource Group Deployment task](/azure/devops/pipelines/tasks/deploy/azure-resource-group-deployment).
+
+Select **Save**. The build pipeline automatically runs. Go back to the summary for your build pipeline, and watch the status.
+
+### Use Azure CLI task
Replace your starter pipeline with the following YAML. It creates a resource group and deploys a Bicep file by using an [Azure CLI task](/azure/devops/pipelines/tasks/deploy/azure-cli):
steps:
az deployment group create --resource-group $(resourceGroupName) --template-file $(templateFile) ```
-The Azure CLI task takes the following inputs:
-
-* `azureSubscription`, provide the name of the service connection that you created. See [Prerequisites](#prerequisites).
-* `scriptType`, use **bash**.
-* `scriptLocation`, use **inlineScript**, or **scriptPath**. If you specify **scriptPath**, you'll also need to specify a `scriptPath` parameter.
-* `inlineScript`, specify your script lines. The script provided in the sample deploys a Bicep file called *main.bicep*.
+For the descriptions of the task inputs, see [Azure CLI task](/azure/devops/pipelines/tasks/deploy/azure-cli).
Select **Save**. The build pipeline automatically runs. Go back to the summary for your build pipeline, and watch the status.
azure-resource-manager Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/managed-applications/overview.md
Title: Overview of managed applications
-description: Describes the concepts for Azure Managed Applications, which provides cloud solutions that are easy for consumers to deploy and operate.
+ Title: Overview of Azure Managed Applications
+description: Describes the concepts for Azure Managed Applications that provide cloud solutions that are easy for consumers to deploy and operate.
- Previously updated : 07/12/2019 + Last updated : 08/03/2022
-# Azure managed applications overview
+# Azure Managed Applications overview
-Azure managed applications enable you to offer cloud solutions that are easy for consumers to deploy and operate. You implement the infrastructure and provide ongoing support. To make a managed application available to all customers, publish it in the Azure marketplace. To make it available to only users in your organization, publish it to an internal catalog.
+Azure Managed Applications enable you to offer cloud solutions that are easy for consumers to deploy and operate. You implement the infrastructure and provide ongoing support. To make a managed application available to all customers, publish it in Azure Marketplace. To make it available to only users in your organization, publish it to an internal catalog.
-A managed application is similar to a solution template in the Marketplace, with one key difference. In a managed application, the resources are deployed to a resource group that's managed by the publisher of the app. The resource group is present in the consumer's subscription, but an identity in the publisher's tenant has access to the resource group. As the publisher, you specify the cost for ongoing support of the solution.
+A managed application is similar to a solution template in Azure Marketplace, with one key difference. In a managed application, the resources are deployed to a resource group that's managed by the publisher of the app. The resource group is present in the consumer's subscription, but an identity in the publisher's tenant has access to the resource group. As the publisher, you specify the cost for ongoing support of the solution.
> [!NOTE]
-> Formerly, the documentation for Azure Custom Providers was included with the documentation for Managed Applications. That documentation has been moved. Now, see [Azure Custom Providers](../custom-providers/overview.md).
+> The documentation for Azure Custom Providers used to be included with Managed Applications. That documentation was moved to [Azure Custom Providers](../custom-providers/overview.md).
## Advantages of managed applications
-Managed applications reduce barriers to consumers using your solutions. They don't need expertise in cloud infrastructure to use your solution. Consumers have limited access to the critical resources, don't need to worry about making a mistake when managing it.
+Managed applications reduce barriers to consumers using your solutions. They don't need expertise in cloud infrastructure to use your solution. Consumers have limited access to the critical resources and don't need to worry about making a mistake when managing them.
-Managed applications enable you to establish an ongoing relationship with your consumers. You define terms for managing the application, and all charges are handled through Azure billing.
+Managed applications enable you to establish an ongoing relationship with your consumers. You define terms for managing the application, and all charges are handled through Azure billing.
-Although customers deploy these managed applications in their subscriptions, they don't have to maintain, update, or service them. You can make sure that all customers are using approved versions. Customers don't have to develop application-specific domain knowledge to manage these applications. Customers automatically acquire application updates without the need to worry about troubleshooting and diagnosing issues with the applications.
+Although customers deploy managed applications in their subscriptions, they don't have to maintain, update, or service them. You can make sure that all customers are using approved versions. Customers don't have to develop application-specific domain knowledge to manage these applications. Customers automatically acquire application updates without the need to worry about troubleshooting and diagnosing issues with the applications.
For IT teams, managed applications enable you to offer pre-approved solutions to users in the organization. You know these solutions are compliant with organizational standards.
Managed Applications support [managed identities for Azure resources](./publish-
## Types of managed applications
-You can publish your managed application either externally or internally.
+You can publish your managed application either internally in the service catalog or externally in Azure Marketplace.
-![Publish internally or externally](./media/overview/manage_app_options.png)
### Service catalog
-The service catalog is an internal catalog of approved solutions for users in an organization. You use the catalog to meet organizational standards while they offering solutions for the organizations. Employees use the catalog to easily find applications that are recommended and approved by their IT departments. They see the managed applications that other people in their organization share with them.
+The service catalog is an internal catalog of approved solutions for users in an organization. You use the catalog to meet organizational standards and offer solutions for the organization. Employees use the catalog to find applications that are recommended and approved by their IT departments. They see the managed applications that other people in their organization share with them.
-For information about publishing a Service Catalog managed application, see [Create service catalog application](publish-service-catalog-app.md).
+For information about publishing a managed application to a service catalog, see [Quickstart: Create and publish a managed application definition](publish-service-catalog-app.md).
-### Marketplace
+### Azure Marketplace
-Vendors wishing to bill for their services can make a managed application available through the Azure marketplace. After the vendor publishes an application, it's available to users outside the organization. With this approach, managed service providers (MSPs), independent software vendors (ISVs), and system integrators (SIs) can offer their solutions to all Azure customers.
+Vendors who want to bill for their services can make a managed application available through Azure Marketplace. After the vendor publishes an application, it's available to users outside their organization. With this approach, a managed service provider (MSP), independent software vendor (ISV), or system integrator (SI) can offer their solutions to all Azure customers.
-For information about publishing a managed application to the Marketplace, see [Create marketplace application](../../marketplace/azure-app-offer-setup.md).
+For information about publishing a managed application to Azure Marketplace, see [Create an Azure application offer](../../marketplace/azure-app-offer-setup.md).
## Resource groups for managed applications
-Typically, the resources for a managed application are in two resource groups. The consumer manages one resource group, and the publisher manages the other resource group. When defining the managed application, the publisher specifies the levels of access. The publisher can request either a permanent role assignment, or [just-in-time access](request-just-in-time-access.md) for an assignment that is constrained to a time period.
+Typically, the resources for a managed application are in two resource groups. The consumer manages one resource group, and the publisher manages the other resource group. When the managed application is defined, the publisher specifies the levels of access. The publisher can request either a permanent role assignment or [just-in-time access](request-just-in-time-access.md) for an assignment that's constrained to a time period.
Restricting access for [data operations](../../role-based-access-control/role-definitions.md) is currently not supported for all data providers in Azure. The following image shows a scenario where the publisher requests the owner role for the managed resource group. The publisher placed a read-only lock on this resource group for the consumer. The publisher's identities that are granted access to the managed resource group are exempt from the lock.
-![Resource group access](./media/overview/access.png)
### Application resource group
-This resource group holds the managed application instance. This resource group may only contain one resource. The resource type of the managed application is **Microsoft.Solutions/applications**.
+This resource group holds the managed application instance. This resource group may only contain one resource. The resource type of the managed application is [Microsoft.Solutions/applications](/azure/templates/microsoft.solutions/applications).
The consumer has full access to the resource group and uses it to manage the lifecycle of the managed application.
The consumer has full access to the resource group and uses it to manage the lif
This resource group holds all the resources that are required by the managed application. For example, this resource group contains the virtual machines, storage accounts, and virtual networks for the solution. The consumer has limited access to this resource group because the consumer doesn't manage the individual resources for the managed application. The publisher's access to this resource group corresponds to the role specified in the managed application definition. For example, the publisher might request the Owner or Contributor role for this resource group. The access is either permanent or limited to a specific time.
-When publishing the [managed application to the marketplace](../../marketplace/azure-app-offer-setup.md), the publisher can grant consumers the ability to perform specific actions on resources in the managed resource group. For example, the publisher can specify that consumers can restart virtual machines. All other actions beyond read actions are still denied. Changes to resources in a managed resource group by a consumer with granted actions are subject to the [Azure Policy](../../governance/policy/overview.md) assignments within the consumers tenant scoped to include the managed resource group.
+When the [managed application is published to the marketplace](../../marketplace/azure-app-offer-setup.md), the publisher can grant consumers the ability to perform specific actions on resources in the managed resource group. For example, the publisher can specify that consumers can restart virtual machines. All other actions beyond read actions are still denied. Changes to resources in a managed resource group by a consumer with granted actions are subject to the [Azure Policy](../../governance/policy/overview.md) assignments within the consumer's tenant scoped to include the managed resource group.
When the consumer deletes the managed application, the managed resource group is also deleted.
You can apply an [Azure Policy](../../governance/policy/overview.md) to audit yo
In this article, you learned about benefits of using managed applications. Go to the next article to create a managed application definition. > [!div class="nextstepaction"]
-> [Quickstart: Publish an Azure managed application definition](publish-service-catalog-app.md)
+> [Quickstart: Create and publish a managed application definition](publish-service-catalog-app.md)
azure-resource-manager Create Private Link Access Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/create-private-link-access-portal.md
Last updated 04/26/2022
-# Use portal to create private link for managing Azure resources (preview)
+# Use portal to create private link for managing Azure resources
This article explains how you can use [Azure Private Link](../../private-link/index.yml) to restrict access for managing resources in your subscriptions. It shows using the Azure portal for setting up management of resources through private access.
azure-resource-manager Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Resource Manager description: Lists Azure Policy Regulatory Compliance controls available for Azure Resource Manager. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 08/01/2022 Last updated : 08/04/2022
azure-signalr Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure SignalR description: Lists Azure Policy Regulatory Compliance controls available for Azure SignalR. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 08/01/2022 Last updated : 08/04/2022
azure-signalr Signalr Howto Diagnostic Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/signalr-howto-diagnostic-logs.md
Messaging logs provide tracing information for the SignalR hub messages received
#### Http request logs
-Http request logs provide detailed information for the http requests received by Azure Web PubSub. For example, status code and URL of the request. Http request log is helpful to troubleshoot request-related issues.
+HTTP request logs provide detailed information for the HTTP requests received by Azure SignalR, such as the status code and URL of the request. HTTP request logs are helpful for troubleshooting request-related issues.
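Once these resource logs are routed to a Log Analytics workspace, you can filter the HTTP request entries there. A minimal sketch, assuming the shared `AzureDiagnostics` table and the `HttpRequestLogs` category name:

```Kusto
// Recent HTTP requests received by the SignalR service
AzureDiagnostics
| where ResourceProvider == "MICROSOFT.SIGNALRSERVICE"
| where Category == "HttpRequestLogs"
| take 100
```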
### Archive to a storage account
By checking the sign-in server and service side, you can easily find out whether
For the direction **from client to server via SignalR service**, SignalR service will **only** consider the invocation that is originated from diagnostic client, that is, the message generated directly in diagnostic client, or service message generated due to the invocation of diagnostic client indirectly.
-The tracing ID will be generated in SignalR service once the message arrives at SignalR service in **Path 1**. SignalR service will generate a log `Received a message <MessageTracingId> from client connection <ConnectionId>.` for each message in diagnostic client. Once the message leaves from the SignalR to server, SignalR service will generate a log message `Sent a message <MessageTracingId> to server connection <ConnectionId> successfully.` If you see these two logs, you can be sure that the message passes through SignalR service successfully.
+The tracing ID will be generated in SignalR service once the message arrives at SignalR service in **Path 1**. SignalR service will generate a log `Received a message <MessageTracingId> from client connection <ConnectionId>.` for each message in diagnostic client. Once the message leaves SignalR service for the server, SignalR service will generate a log message `Sent a message <MessageTracingId> to server connection <ConnectionId> successfully.`. If you see these two logs, you can be sure that the message passes through SignalR service successfully.
> [!NOTE] > Due to a limitation of ASP.NET Core SignalR, a message that comes from the client doesn't contain any message-level ID. However, ASP.NET SignalR generates an *invocation ID* for each message; you can use it to map to the tracing ID.
The tracing ID will be generated in SignalR service once the message arrives at
Then the message carries the tracing ID to the server in **Path 2**. The server will generate a log `Received message <messagetracingId> from client connection <connectionId>` once the message arrives. <span id="message-flow-detail-for-path3"></span>
-Once the message invokes the hub method in server, a new service message will be generated with a *new tracing ID*. Once the service message is generated, server will generate a sign in template `Start to broadcast/send message <MessageTracingId> ...`, the actual log will be based on your scenario. Then the message will be delivered to SignalR service in **Path 3**, once the service message leaves from server, a log called `Succeeded to send message <MessageTracingId>` will be generated.
+Once the message invokes the hub method in the server, a new service message will be generated with a *new tracing ID*. Once the service message is generated, the server will generate a log following the template `Start to broadcast/send message <MessageTracingId> ...`; the actual log will be based on your scenario. Then the message will be delivered to SignalR service in **Path 3**. Once the service message leaves the server, a log called `Succeeded to send message <MessageTracingId>` will be generated.
> [!NOTE] > The tracing ID of the message from client cannot map to the tracing ID of the service message to be sent to SignalR service.
azure-signalr Signalr Quickstart Dotnet Core https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/signalr-quickstart-dotnet-core.md
ms.devlang: csharp - Previously updated : 09/28/2020 Last updated : 08/03/2022 # Quickstart: Create a chat room by using SignalR Service
-Azure SignalR Service is an Azure service that helps developers easily build web applications with real-time features. This service was originally based on [SignalR for ASP.NET Core 2.1](/aspnet/core/signalr/introduction?preserve-view=true&view=aspnetcore-2.1), but now supports later versions.
+Azure SignalR Service is an Azure service that helps developers easily build web applications with real-time features.
-This article shows you how to get started with the Azure SignalR Service. In this quickstart, you'll create a chat application by using an ASP.NET Core MVC web app. This app will make a connection with your Azure SignalR Service resource to enable real-time content updates. You'll host the web application locally and connect with multiple browser clients. Each client will be able to push content updates to all other clients.
+This article shows you how to get started with the Azure SignalR Service. In this quickstart, you'll create a chat application by using an ASP.NET Core MVC web app. This app will make a connection with your Azure SignalR Service resource to enable real-time content updates. You'll host the web application locally and connect with multiple browser clients. Each client will be able to push content updates to all other clients.
You can use any code editor to complete the steps in this quickstart. One option is [Visual Studio Code](https://code.visualstudio.com/), which is available on the Windows, macOS, and Linux platforms.
-The code for this tutorial is available for download in the [AzureSignalR-samples GitHub repository](https://github.com/aspnet/AzureSignalR-samples/tree/master/samples/ChatRoom). Also, you can create the Azure resources used in this quickstart by following [Create a SignalR Service script](scripts/signalr-cli-create-service.md).
+The code for this tutorial is available for download in the [AzureSignalR-samples GitHub repository](https://github.com/aspnet/AzureSignalR-samples/tree/master/samples/ChatRoom). You can create the Azure resources used in this quickstart by following [Create a SignalR Service script](scripts/signalr-cli-create-service.md).
[!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note-dotnet.md)]
Having issues? Try the [troubleshooting guide](signalr-howto-troubleshoot-guide.
## Create an Azure SignalR resource -
-Having issues? Try the [troubleshooting guide](signalr-howto-troubleshoot-guide.md) or [let us know](https://aka.ms/asrs/qsnetcore).
## Create an ASP.NET Core web app
In this section, you use the [.NET Core command-line interface (CLI)](/dotnet/co
dotnet new mvc ```
-Having issues? Try the [troubleshooting guide](signalr-howto-troubleshoot-guide.md) or [let us know](https://aka.ms/asrs/qsnetcore).
- ## Add Secret Manager to the project In this section, you'll add the [Secret Manager tool](/aspnet/core/security/app-secrets) to your project. The Secret Manager tool stores sensitive data for development work outside your project tree. This approach helps prevent the accidental sharing of app secrets in source code.
-1. Open your *.csproj* file. Add a `DotNetCliToolReference` element to include *Microsoft.Extensions.SecretManager.Tools*. Also add a `UserSecretsId` element as shown in the following code for *chattest.csproj*, and save the file.
+1. Open your *csproj* file. Add a `DotNetCliToolReference` element to include *Microsoft.Extensions.SecretManager.Tools*. Also add a `UserSecretsId` element as shown in the following code for *chattest.csproj*, and save the file.
```xml <Project Sdk="Microsoft.NET.Sdk.Web">
In this section, you'll add the [Secret Manager tool](/aspnet/core/security/app-
</Project> ```
-Having issues? Try the [troubleshooting guide](signalr-howto-troubleshoot-guide.md) or [let us know](https://aka.ms/asrs/qsnetcore).
- ## Add Azure SignalR to the web app 1. Add a reference to the `Microsoft.Azure.SignalR` NuGet package by running the following command:
Having issues? Try the [troubleshooting guide](signalr-howto-troubleshoot-guide.
dotnet restore ```
-3. Add a secret named *Azure:SignalR:ConnectionString* to Secret Manager.
+3. Add a secret named *Azure:SignalR:ConnectionString* to Secret Manager.
This secret will contain the connection string to access your SignalR Service resource. *Azure:SignalR:ConnectionString* is the default configuration key that SignalR looks for to establish a connection. Replace the value in the following command with the connection string for your SignalR Service resource.
- You must run this command in the same directory as the *.csproj* file.
+ You must run this command in the same directory as the `csproj` file.
```dotnetcli dotnet user-secrets set Azure:SignalR:ConnectionString "<Your connection string>"
Having issues? Try the [troubleshooting guide](signalr-howto-troubleshoot-guide.
} ```
- By not passing a parameter to `AddAzureSignalR()`, this code uses the default configuration key for the SignalR Service resource connection string. The default configuration key is *Azure:SignalR:ConnectionString*.
+ Not passing a parameter to `AddAzureSignalR()` causes this code to use the default configuration key for the SignalR Service resource connection string. The default configuration key is *Azure:SignalR:ConnectionString*.
5. In *Startup.cs*, update the `Configure` method by replacing it with the following code.
Having issues? Try the [troubleshooting guide](signalr-howto-troubleshoot-guide.
### Add a hub class
-In SignalR, a hub is a core component that exposes a set of methods that can be called from the client. In this section, you define a hub class with two methods:
+In SignalR, a *hub* is a core component that exposes a set of methods that can be called by the client. In this section, you define a hub class with two methods:
* `Broadcast`: This method broadcasts a message to all clients.
* `Echo`: This method sends a message back to the caller.
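A minimal sketch of such a hub follows; the `Broadcast` and `Echo` method names come from the description above, while the client callback names (`broadcastMessage`, `echo`) are illustrative and must match what your client-side JavaScript listens for.

```csharp
using System.Threading.Tasks;
using Microsoft.AspNetCore.SignalR;

public class ChatHub : Hub
{
    // Broadcast: send the message to every connected client.
    public Task Broadcast(string name, string message) =>
        Clients.All.SendAsync("broadcastMessage", name, message);

    // Echo: send the message back to the calling client only.
    public Task Echo(string name, string message) =>
        Clients.Client(Context.ConnectionId)
               .SendAsync("echo", name, $"{message} (echo from server)");
}
```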
The client user interface for this chat room app will consist of HTML and JavaSc
Copy the *css/site.css* file from the *wwwroot* folder of the [samples repository](https://github.com/aspnet/AzureSignalR-samples/tree/master/samples/ChatRoom/wwwroot). Replace your project's *css/site.css* with the one you copied.
-Here's the main code of *index.html*:
-
-Create a new file in the *wwwroot* directory named *index.html*, copy, and paste the following HTML into the newly created file:
+Create a new file in the *wwwroot* directory named *index.html*, then copy and paste the following HTML into the newly created file.
```html <!DOCTYPE html>
In this section, you'll add a development runtime environment for ASP.NET Core.
} ```
-Having issues? Try the [troubleshooting guide](signalr-howto-troubleshoot-guide.md) or [let us know](https://aka.ms/asrs/qsnetcore).
## Build and run the app locally
Having issues? Try the [troubleshooting guide](signalr-howto-troubleshoot-guide.
![Example of an Azure SignalR group chat](media/signalr-quickstart-dotnet-core/signalr-quickstart-complete-local.png)
-Having issues? Try the [troubleshooting guide](signalr-howto-troubleshoot-guide.md) or [let us know](https://aka.ms/asrs/qsnetcore).
## Clean up resources
If you'll continue to the next tutorial, you can keep the resources created in t
If you're finished with the quickstart sample application, you can delete the Azure resources created in this quickstart to avoid charges. > [!IMPORTANT]
-> Deleting a resource group is irreversible and includes all the resources in that group. Make sure that you don't accidentally delete the wrong resource group or resources. If you created the resources for hosting this sample in an existing resource group that contains resources you want to keep, you can delete each resource individually from its blade instead of deleting the resource group.
+> Deleting a resource group is irreversible and includes all the resources in that group. Make sure that you don't accidentally delete the wrong resource group or resources. If you created the resources for this sample in an existing resource group that contains resources you want to keep, you can delete each resource individually from its blade instead of deleting the resource group.
Sign in to the [Azure portal](https://portal.azure.com) and select **Resource groups**.
azure-vmware Enable Hcx Access Over Internet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/enable-hcx-access-over-internet.md
Last updated 7/19/2022
# Enable HCX access over the internet
-In this article you'll learn how to access the HCX over a Public IP address using Azure VMware Solution. You'll also learn how to pair HCX sites, and create service mesh from on-premises to Azure VMware Solutions private cloud using Public IP. The service mesh allows you to migrate a workload from an on-premises datacenter to Azure VMware Solutions private cloud over the public internet. This solution is useful where the customer isn't using Express Route or VPN connectivity with the Azure cloud.
+
+In this article, you'll learn how to perform HCX migration over a Public IP address using Azure VMware Solution.
+>[!IMPORTANT]
+>Before configuring a Public IP on your Azure VMware Solution private cloud, consult your Network Administrator to understand the implications and the impact on your environment.
+
+You'll also learn how to pair HCX sites and create a service mesh from on-premises to an Azure VMware Solution private cloud using a Public IP. The service mesh allows you to migrate a workload from an on-premises datacenter to an Azure VMware Solution private cloud over the public internet. This solution is useful when the customer isn't using ExpressRoute or VPN connectivity with the Azure cloud.
+
> [!IMPORTANT]
-> The on-premises HCX appliance should be reachable from the internet to establish HCX communication from on-premises to Azure VMware Solution private cloud.
+> The on-premises HCX appliance should be reachable from the internet to establish HCX communication from on-premises to the Azure VMware Solution private cloud.
## Configure Public IP block
-Configure a Public IP block through portal by using the Public IP feature of the Azure VMware Solution private cloud.
+To perform HCX migration over the public internet, you'll need a minimum of six Public IP addresses. Five of these Public IP addresses will be used for the Public IP segment, and one will be used for configuring Network Address Translation (NAT). You can obtain the Public IP block by reserving a /29 from the Azure VMware Solution portal. Configure the Public IP block through the portal by using the Public IP feature of the Azure VMware Solution private cloud.
1. Sign in to Azure VMware Solution portal. 1. Under **Workload Networking**, select **Public IP (preview)**.- 1. Select **+Public IP**. 1. Enter the **Public IP name** and select the address space from the **Address space** drop-down list according to the number of IPs required, then select **Configure**. >[!Note]
Before you create a Public IP segment, get your credentials for NSX-T Manager fr
1. Sign in to NSX-T Manager using credentials provided by the Azure VMware Solution portal. 1. Under the **Manage** section, select **Identity**. 1. Copy the NSX-T Manager admin user password.
-1. Browse the NSX-T Manger, paste the admin password in the password field, and select **Login**.
-1. Under the **Networking** section, select **Connectivity** and **Segments**, and then select **ADD SEGMENT**.
-1. Provide Segment name, select Tier-1 router as connected gateway, and provide the public segment under subnets.
+
+1. Browse to the NSX-T Manager, paste the admin password in the password field, and select **Login**.
+1. Under the **Networking** section, select **Connectivity** and **Segments**, then select **ADD SEGMENT**.
+1. Provide a segment name, select the Tier-1 router as the connected gateway, and provide the reserved Public IP under subnets. The Public IP block for this Public IP segment shouldn't include the first and last Public IPs from the overall Public IP block. For example, if you reserved 20.95.1.16/29, you would input 20.95.1.16/30.
1. Select **Save**. ## Assign public IP to HCX manager The HCX Manager of the destination Azure VMware Solution SDDC should be reachable from the internet to do site pairing with the source site. HCX Manager can be exposed by way of a DNAT rule and a static null route. Because HCX Manager is in the provider space, not within the NSX-T environment, the null route is necessary to allow HCX Manager to route back to the client by way of the DNAT rule. ### Add static null route to the T1 router
-1. Sign in to NSX-T manager and select **Networking**.
+
+The static null route allows the HCX Manager's private IP to route through the NSX-T T1 gateway for public endpoints.
+
+1. Sign in to NSX-T manager, and select **Networking**.
1. Under the **Connectivity** section, select **Tier-1 Gateways**. 1. Edit the existing T1 gateway. 1. Expand **STATIC ROUTES**.
HCX manager of destination Azure VMware Solution SDDC should be reachable from t
1. Select **CLOSE EDITING**. ### Add NAT rule to T1 gateway
-
-1. Sign in to NSX-T Manager and select **Networking**.
+
+>[!Note]
+>The NAT rules should use a different Public IP address than your Public IP segment.
+1. Sign in to NSX-T Manager, and select **Networking**.
1. Select **NAT**. 1. Select the T1 Gateway. 1. Select **ADD NAT RULE**.
HCX manager of destination Azure VMware Solution SDDC should be reachable from t
1. The DNAT Rule Destination is the Public IP for HCX Manager. The Translated IP is the HCX Manager IP in the cloud. 1. The SNAT Rule Source is the HCX Manager IP in the cloud. The Translated IP is the non-overlapping /32 IP from the Static Route. 1. Make sure to set the Firewall option on DNAT rule to **Match External Address**.
-1. Create T1 Gateway Firewall rules to allow only expected traffic to the Public IP for HCX Manager and drop everything else.
+1. Create T1 Gateway Firewall rules to allow only expected traffic to the Public IP for HCX Manager and drop everything else.
+    1. Create a Gateway Firewall rule on the T1 that allows your on-premises IP as the **Source IP** and the Azure VMware Solution reserved Public IP as the **Destination IP**. This rule should be the highest priority.
+    1. Create a Gateway Firewall rule on the T1 that denies all other traffic where the **Source IP** is "Any" and the **Destination IP** is the Azure VMware Solution reserved Public IP.
>[!NOTE] > HCX manager can now be accessed over the internet using public IP.
azure-web-pubsub Howto Troubleshoot Resource Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/howto-troubleshoot-resource-logs.md
If you get 401 Unauthorized returned for client requests, check your resource lo
### Throttling
-If you find that you can't establish client connections to Azure Web PubSub service, check your resource logs. If you see `Connection count reaches limit` in the resource log, you established too many connections to Azure Web PubSub service and reached the connection count limit. Consider scaling up your Azure Web PubSub service instance. If you see `Message count reaches limit` in the resource log and you're using the Free tier, it means you used up the quota of messages. If you want to send more messages, consider changing your Azure Web PubSub service instance to Standard tier. For more information, see [Azure Web PubSub service Pricing](https://azure.microsoft.com/pricing/details/web-pubsub/).
+If you find that you can't establish client connections to Azure Web PubSub service, check your resource logs. If you see `Connection count reaches limit` in the resource log, you established too many connections to Azure Web PubSub service and reached the connection count limit. Consider scaling up your Azure Web PubSub service instance. If you see `Message count reaches limit` in the resource log and you're using the Free tier, it means you used up the quota of messages. If you want to send more messages, consider changing your Azure Web PubSub service instance to Standard tier. For more information, see [Azure Web PubSub service Pricing](https://azure.microsoft.com/pricing/details/web-pubsub/).
backup Offline Backup Azure Data Box Dpm Mabs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/offline-backup-azure-data-box-dpm-mabs.md
Title: Offline Backup with Azure Data Box for DPM and MABS description: You can use Azure Data Box to seed initial Backup data offline from DPM and MABS. Previously updated : 07/29/2021 Last updated : 08/04/2022+++ # Offline seeding using Azure Data Box for DPM and MABS
Last updated 07/29/2021
This article explains how you can use Azure Data Box to seed initial backup data offline from DPM and MABS to an Azure Recovery Services vault.
-You can use [Azure Data Box](../databox/data-box-overview.md) to seed your large initial DPM/MABS backups offline (without using the network) to a Recovery Services vault. This process saves time and network bandwidth that would otherwise be consumed moving large amounts of backup data online over a high-latency network. This feature is currently in preview.
+You can use [Azure Data Box](../databox/data-box-overview.md) to seed your large initial DPM/MABS backups offline (without using the network) to a Recovery Services vault. This process saves time and network bandwidth that would otherwise be consumed moving large amounts of backup data online over a high-latency network.
Offline backup based on Azure Data Box provides two distinct advantages over [offline backup based on the Azure Import/Export service](backup-azure-backup-server-import-export.md):
Specify alternate source: *WIM:D:\Sources\Install.wim:4*
![Choose initial online replication](./media/offline-backup-azure-data-box-dpm-mabs/choose-initial-online-replication.png)
- >[!NOTE]
- > The option to select **Transfer using Microsoft Owned disks** isn't available for MABS v3 since the feature is in preview. Reach out to us at [systemcenterfeedback@microsoft.com](mailto:systemcenterfeedback@microsoft.com) if you want to use this feature for MABS v3.
- 12. Sign into Azure when prompted, using the user credentials that have owner access on the Azure Subscription. After a successful sign-in, the following screen is displayed: ![After successful login](./media/offline-backup-azure-data-box-dpm-mabs/after-successful-login.png)
backup Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Backup description: Lists Azure Policy Regulatory Compliance controls available for Azure Backup. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 08/01/2022 Last updated : 08/04/2022
bastion Kerberos Authentication Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/bastion/kerberos-authentication-portal.md
description: Learn how to configure Bastion to use Kerberos authentication via t
Previously updated : 03/08/2022 Last updated : 08/03/2022
This article shows you how to configure Azure Bastion to use Kerberos authentica
> During Preview, the Kerberos setting for Azure Bastion can be configured in the Azure portal only. >
-## <a name="prereq"></a>Prerequisites
+## Prerequisites
* An Azure account with an active subscription. If you don't have one, [create one for free](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio). To be able to connect to a VM through your browser using Bastion, you must be able to sign in to the Azure portal. * An Azure virtual network. For steps to create a VNet, see [Quickstart: Create a virtual network](../virtual-network/quick-create-portal.md).
-## <a name="vnet"></a>Update VNet DNS servers
+## Update VNet DNS servers
In this section, the following steps help you update your virtual network to specify custom DNS settings.
In this section, the following steps help you update your virtual network to spe
1. Go to the virtual network for which you want to deploy the Bastion resources. 1. Go to the **DNS servers** page for your VNet and select **Custom**. Add the IP address of your Azure-hosted domain controller and **Save**.
- :::image type="content" source="./media/kerberos-authentication-portal/dns-servers.png" alt-text="Screenshot of DNS servers page." lightbox="./media/kerberos-authentication-portal/dns-servers.png":::
+## Deploy Bastion
-## <a name="deploy"></a>Deploy Bastion
+1. Begin configuring your Bastion deployment using the steps in [Tutorial: Deploy Bastion using manual configuration settings](tutorial-create-host-portal.md). Configure the settings on the **Basics** tab. Then, at the top of the page, select **Advanced** to go to the **Advanced** tab.
-In this section, the following steps help you deploy Bastion to your virtual network.
+1. On the **Advanced** tab, select **Kerberos**.
-1. Deploy Bastion to your VNet using the steps in [Tutorial: Deploy Bastion using manual configuration settings](tutorial-create-host-portal.md). Configure the settings on the **Basics** tab. Then, select the **Advanced** tab.
+ :::image type="content" source="./media/kerberos-authentication-portal/select-kerberos.png" alt-text="Screenshot of select bastion features." lightbox="./media/kerberos-authentication-portal/select-kerberos.png":::
-1. On the **Advanced** tab, select **Kerberos**. Then select the **Review + create** and **Create** to deploy Bastion to your virtual network.
-
- :::image type="content" source="./media/kerberos-authentication-portal/select-kerberos.png" alt-text="Screenshot of Advanced tab." lightbox="./media/kerberos-authentication-portal/select-kerberos.png":::
+1. At the bottom of the page, select **Review + create**, then **Create** to deploy Bastion to your virtual network.
1. Once the deployment completes, you can use it to sign in to any reachable Windows VMs joined to the custom DNS you specified in the earlier steps.
-## <a name="modify"></a>To modify an existing Bastion deployment
+## To modify an existing Bastion deployment
In this section, the following steps help you modify your virtual network and existing Bastion deployment for Kerberos authentication.
-1. [Update the DNS settings](#vnet) for your virtual network.
+1. [Update the DNS settings](#update-vnet-dns-servers) for your virtual network.
1. Go to the portal page for your Bastion deployment and select **Configuration**. 1. On the Configuration page, select **Kerberos authentication**, then select **Apply**. 1. Bastion will update with the new configuration settings.
-## <a name="verify"></a>To verify Bastion is using Kerberos
+## To verify Bastion is using Kerberos
Once you have enabled Kerberos on your Bastion resource, you can verify that it's actually using Kerberos for authentication to the target domain-joined VM.
bastion Tutorial Create Host Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/bastion/tutorial-create-host-portal.md
Title: 'Tutorial: Deploy Bastion using manual settings: Azure portal'
-description: Learn how to deploy Bastion using manual settings using the Azure portal.
-
+ Title: 'Tutorial: Deploy Bastion using specified settings: Azure portal'
+description: Learn how to deploy Bastion using settings that you specify - Azure portal.
Previously updated : 05/05/2022 Last updated : 08/03/2022 -+
-# Tutorial: Deploy Bastion using manual settings
+# Tutorial: Deploy Bastion using specified settings
-This tutorial helps you deploy Azure Bastion from the Azure portal using manual settings. When you use manual settings, you can specify configuration values such as instance counts and the SKU at the time of deployment. After Bastion is deployed, you can connect (SSH/RDP) to virtual machines in the virtual network via Bastion using the private IP address of the VM. When you connect to a VM, it doesn't need a public IP address, client software, agent, or a special configuration.
+This tutorial helps you deploy Azure Bastion from the Azure portal using manual settings that you specify. When you use manual settings, you can specify configuration values such as instance counts and the SKU at the time of deployment. After Bastion is deployed, you can connect (SSH/RDP) to virtual machines in the virtual network via Bastion using the private IP address of the VM. When you connect to a VM, it doesn't need a public IP address, client software, agent, or a special configuration.
In this tutorial, you deploy Bastion using the Standard SKU tier and adjust host scaling (instance count). After the deployment is complete, you connect to your VM via private IP address. If your VM has a public IP address that you don't need for anything else, you can remove it.
This section helps you deploy Bastion to your VNet. Once Bastion is deployed, yo
1. Sign in to the [Azure portal](https://portal.azure.com).
-1. Go to your VNet.
+1. Go to your virtual network.
+
+1. On the page for your virtual network, in the left pane, select **Bastion** to open the **Bastion** page.
-1. Select **Bastion** in the left pane to open the **Bastion** page.
+1. On the Bastion page, select **I want to configure Azure Bastion on my own**. This lets you configure specific additional settings when deploying Bastion to your VNet.
-1. On the Bastion page, select **Configure manually**. This lets you configure specific additional settings before deploying Bastion to your VNet.
- :::image type="content" source="./media/tutorial-create-host-portal/configure-manually.png" alt-text="Screenshot of Bastion page showing configure manually button." lightbox="./media/tutorial-create-host-portal/configure-manually.png":::
+ :::image type="content" source="./media/tutorial-create-host-portal/configure-manually.png" alt-text="Screenshot of Bastion page showing configure bastion on my own." lightbox="./media/tutorial-create-host-portal/configure-manually.png":::
1. On the **Create a Bastion** page, configure the settings for your bastion host. Project details are populated from your virtual network values. Configure the **Instance details** values.
This section helps you deploy Bastion to your VNet. Once Bastion is deployed, yo
:::image type="content" source="./media/tutorial-create-host-portal/create-a-bastion.png" alt-text="Screenshot of Create a Bastion."lightbox="./media/tutorial-create-host-portal/create-a-bastion.png":::
-1. The public IP address section is where you configure the public IP address of the Bastion host resource on which RDP/SSH will be accessed (over port 443). The public IP address must be in the same region as the Bastion resource you're creating. This IP address doesn't have anything to do with any of the VMs that you want to connect to. Create a new IP address. You can leave the default naming suggestion.
+1. The **Public IP address** section is where you configure the public IP address of the Bastion host resource on which RDP/SSH will be accessed (over port 443). The public IP address must be in the same region as the Bastion resource you're creating. Create a new IP address. You can leave the default naming suggestion.
1. When you finish specifying the settings, select **Review + Create**. This validates the values.
batch Private Connectivity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/private-connectivity.md
This article describes the steps to create a private endpoint to access Batch ac
Batch account resource has two endpoints supported to access with private endpoints: -- Account endpoint (sub-resource: **batchAccount**): this is the endpoint for [Batch Service REST API](/rest/api/batchservice/) (data plane), for example managing pools, compute nodes, jobs, tasks, etc.
+- Account endpoint (sub-resource: **batchAccount**): this endpoint is used for accessing the [Batch Service REST API](/rest/api/batchservice/) (data plane), for example, managing pools, compute nodes, jobs, and tasks.
-- Node management endpoint (sub-resource: **nodeManagement**): used by Batch pool nodes to access Batch node management service. This is only applicable when using [simplified compute node communication](simplified-compute-node-communication.md). This feature is in preview.
+- Node management endpoint (sub-resource: **nodeManagement**): used by Batch pool nodes to access Batch node management service. This endpoint is only applicable when using [simplified compute node communication](simplified-compute-node-communication.md). This feature is in preview.
> [!IMPORTANT] > - This preview sub-resource is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities. > - For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). + ## Azure portal Use the following steps to create a private endpoint with your Batch account using the Azure portal:
Use the following steps to create a private endpoint with your Batch account usi
- For **Private DNS Zone**, select **privatelink.batch.azure.com**. The private DNS zone is determined automatically. You can't change this setting by using the Azure portal. > [!IMPORTANT]
-> If you have existing private endpoints created with previous private DNS zone `privatelink.<region>.batch.azure.com`, please follow [Migration with existing Batch account private endpoints](#migration-with-existing-batch-account-private-endpoints).
+> - If you have existing private endpoints created with previous private DNS zone `privatelink.<region>.batch.azure.com`, please follow [Migration with existing Batch account private endpoints](#migration-with-existing-batch-account-private-endpoints).
+> - If you've selected private DNS zone integration, make sure the private DNS zone is linked to your virtual network successfully. The Azure portal might let you choose an existing private DNS zone that isn't linked to your virtual network; in that case, you'll need to manually add the [virtual network link](../dns/private-dns-virtual-network-links.md).
6. Select **Review + create**, then wait for Azure to validate your configuration. 7. When you see the **Validation passed** message, select **Create**.
-> [!NOTE]
+> [!TIP]
> You can also create the private endpoint from **Private Link Center** in Azure portal, or create a new resource by searching **private endpoint**. ## Use the private endpoint
After the private endpoint is provisioned, you can access the Batch account from
- Private endpoint for **nodeManagement**: Batch pool's compute nodes can connect to and be managed by Batch node management service. > [!IMPORTANT]
-> If [public network access](public-network-access.md) is disabled with Batch account, performing account operations (for example pools, jobs) outside of the virtual network where the private endpoint is provisioned will result in an "AuthorizationFailure" message for Batch account in the Azure Portal.
+> If [public network access](public-network-access.md) is disabled with Batch account, performing account operations (for example pools, jobs) outside of the virtual network where the private endpoint is provisioned will result in an "AuthorizationFailure" message for Batch account in the Azure portal.
To view the IP addresses for the private endpoint from the Azure portal:
When you're creating the private endpoint, you can integrate it with a [private
## Migration with existing Batch account private endpoints
-With the introduction of the new private endpoint sub-resource `nodeManagement` for Batch node management endpoint, the default private DNS zone for Batch account is simplified from `privatelink.<region>.batch.azure.com` to `privatelink.batch.azure.com`. The existing private endpoints for sub-resource `batchAccount` will continue to work, and no action is needed.
+With the introduction of the new private endpoint sub-resource **nodeManagement** for Batch node management endpoint, the default private DNS zone for Batch account is simplified from `privatelink.<region>.batch.azure.com` to `privatelink.batch.azure.com`. To keep backward compatibility with the previously used private DNS zone, for a Batch account with any approved **batchAccount** private endpoint, its account endpoint's DNS CNAME mappings contain both zones (with the previous zone coming first), for example:
+
+```
+myaccount.east.batch.azure.com CNAME myaccount.privatelink.east.batch.azure.com
+myaccount.privatelink.east.batch.azure.com CNAME myaccount.east.privatelink.batch.azure.com
+myaccount.east.privatelink.batch.azure.com CNAME <Batch API public FQDN>
+```
+
+### Continue to use previous private DNS zone
+
+If you've already used the previous DNS zone `privatelink.<region>.batch.azure.com` with your virtual network, you should continue to use it for existing and new **batchAccount** private endpoints, and no action is needed.
+
+> [!IMPORTANT]
+> If you already use the previous private DNS zone, keep using it even with newly created private endpoints. Don't use the new zone with your DNS integration solution until you can [migrate to the new zone](#migrating-previous-private-dns-zone-to-the-new-zone).
+
+### Create a new batchAccount private endpoint with DNS integration in Azure portal
+
+If you manually create a new **batchAccount** private endpoint using the Azure portal with automatic DNS integration enabled, it will use the new private DNS zone `privatelink.batch.azure.com` for the DNS integration: create the private DNS zone, link it to your virtual network, and configure a DNS A record in the zone for your private endpoint.
+
+However, if your virtual network has already been linked to the previous private DNS zone `privatelink.<region>.batch.azure.com`, this will break DNS resolution for your Batch account in your virtual network, because the DNS A record for your new private endpoint is added to the new zone, but DNS resolution checks the previous zone first for backward-compatibility support.
-However, if you have existing `batchAccount` private endpoints that are enabled with automatic private DNS integration using previous private DNS zone, extra configuration is needed for the new `batchAccount` private endpoint to create in the same virtual network:
+You can mitigate this issue with the following options:
-- If you don't need the previous private endpoint anymore, delete the private endpoint. Also unlink the previous private DNS zone from your virtual network. No more configuration is needed for the new private endpoint.
+- If you don't need the previous private DNS zone anymore, unlink it from your virtual network. No further action is needed.
- Otherwise, after the new private endpoint is created:
However, if you have existing `batchAccount` private endpoints that are enabled
1. Manually add a DNS CNAME record. For example, `myaccount CNAME => myaccount.<region>.privatelink.batch.azure.com`. > [!IMPORTANT]
-> This manual mitigation is only needed when you create a new **batchAccount** private endpoint with private DNS integration in the same virtual network which has existing private endpoints.
+> This manual mitigation is only needed when you create a new **batchAccount** private endpoint with private DNS integration in the same virtual network which has already been linked to the previous private DNS zone.
+
+### Migrating previous private DNS zone to the new zone
+
+Although you can keep using the previous private DNS zone with your existing deployment process, it's recommended to migrate it to the new zone for simplicity of DNS configuration management:
+
+- With the new private DNS zone `privatelink.batch.azure.com`, you won't need to configure and manage different zones for each region with your Batch accounts.
+- When you start to use the new [**nodeManagement** private endpoint](./private-connectivity.md), which also uses the new private DNS zone, you'll only need to manage a single private DNS zone for both types of private endpoints.
+
+You can migrate the previous private DNS zone with the following steps:
+
+1) Create and link the new private DNS zone `privatelink.batch.azure.com` to your virtual network.
+2) Copy all DNS A records from the previous private DNS zone to the new zone:
+
+```
+From zone "privatelink.<region>.batch.azure.com":
+ myaccount A <ip>
+To zone "privatelink.batch.azure.com":
+ myaccount.<region> A <ip>
+```
+
+3) Unlink the previous private DNS zone from your virtual network.
+4) Verify DNS resolution within your virtual network; the Batch account DNS name should continue to resolve to the private endpoint IP address:
+
+```
+nslookup myaccount.<region>.batch.azure.com
+```
+
+5) Start to use the new private DNS zone with your deployment process for new private endpoints.
+6) Delete the previous private DNS zone after the migration is completed.
## Pricing
For details on costs related to private endpoints, see [Azure Private Link prici
When creating a private endpoint with your Batch account, keep in mind the following: - Private endpoint resources with the sub-resource **batchAccount** must be created in the same subscription as the Batch account.-- Resource movement is not supported for private endpoints with Batch accounts.
+- Resource movement isn't supported for private endpoints with Batch accounts.
- If a Batch account resource is moved to a different resource group or subscription, the private endpoints can still work, but the association to the Batch account breaks. If you delete the private endpoint resource, its associated private endpoint connection still exists in your Batch account. You can manually remove connection from your Batch account. - To delete the private connection, either delete the private endpoint resource, or delete the private connection in the Batch account (this action disconnects the related private endpoint resource).-- DNS records in the private DNS zone are not removed automatically when you delete a private endpoint connection from the Batch account. You must manually remove the DNS records before adding a new private endpoint linked to this private DNS zone. If you don't clean up the DNS records, unexpected access issues might happen.
+- DNS records in the private DNS zone aren't removed automatically when you delete a private endpoint connection from the Batch account. You must manually remove the DNS records before adding a new private endpoint linked to this private DNS zone. If you don't clean up the DNS records, unexpected access issues might happen.
## Next steps
batch Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Batch description: Lists Azure Policy Regulatory Compliance controls available for Azure Batch. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 08/01/2022 Last updated : 08/04/2022
batch Simplified Node Communication Pool No Public Ip https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/simplified-node-communication-pool-no-public-ip.md
To restrict access to these nodes and reduce the discoverability of these nodes
- Enable outbound access for Batch node management. A pool with no public IP addresses doesn't have internet outbound access enabled by default. To allow compute nodes to access the Batch node management service (see [Use simplified compute node communication](simplified-compute-node-communication.md)) either:
- - Use `nodeManagement` [private endpoint with Batch accounts](private-connectivity.md). This is the preferred method.
+  - Use the [**nodeManagement**](private-connectivity.md) private endpoint with Batch accounts, which provides private access to the Batch node management service from the virtual network. This is the preferred method.
- Alternatively, provide your own internet outbound access support (see [Outbound access to the internet](#outbound-access-to-the-internet)).
+> [!IMPORTANT]
+> There are two sub-resources for private endpoints with Batch accounts. Use the **nodeManagement** private endpoint for Batch pools without public IP addresses.
+ ## Current limitations 1. Pools without public IP addresses must use Virtual Machine Configuration and not Cloud Services Configuration.
To restrict access to these nodes and reduce the discoverability of these nodes
## Create a pool without public IP addresses in the Azure portal
+1. If needed, create a [**nodeManagement**](private-connectivity.md) private endpoint for your Batch account in the virtual network (see the outbound access requirement in the [prerequisites](#prerequisites)).
1. Navigate to your Batch account in the Azure portal. 1. In the **Settings** window on the left, select **Pools**. 1. In the **Pools** window, select **Add**. 1. On the **Add Pool** window, select the option you intend to use from the **Image Type** dropdown. 1. Select the correct **Publisher/Offer/Sku** of your image. 1. Specify the remaining required settings, including the **Node size**, **Target dedicated nodes**, and **Target Spot/low-priority nodes**, as well as any desired optional settings.
-1. Select a virtual network and subnet you wish to use. This virtual network must be in the same location as the pool you are creating.
+1. Select a virtual network and subnet you wish to use. This virtual network must be in the same location as the pool you're creating.
1. In **IP address provisioning type**, select **NoPublicIPAddresses**. ![Screenshot of the Add pool screen with NoPublicIPAddresses selected.](./media/batch-pool-no-public-ip-address/create-pool-without-public-ip-address.png)
Another way to provide outbound connectivity is to use a user-defined route (UDR
> [!IMPORTANT] > There is no extra network resource (load balancer, network security group) created for simplified node communication pools without public IP addresses. Since the compute nodes in the pool are not bound to any load balancer, Azure may provide [Default Outbound Access](../virtual-network/ip-services/default-outbound-access.md). However, Default Outbound Access is not suitable for production workloads, so it is strongly recommended to bring your own Internet outbound access.
+## Troubleshooting
+
+### Unusable compute nodes in a Batch pool
+
+If compute nodes run into an unusable state in a Batch pool without public IP addresses, the first and most important check is to verify outbound access to the Batch node management service. It must be configured correctly so that compute nodes can connect to the service from your virtual network.
+
+If you're using the **nodeManagement** private endpoint:
+
+- Check that the private endpoint's provisioning state is succeeded and its connection status is **Approved**.
+- Check that the DNS configuration is set up correctly for the node management endpoint of your Batch account. You can confirm it by running `nslookup <nodeManagementEndpoint>` from within your virtual network; the DNS name should resolve to the private endpoint IP address.
+- Run a TCP ping against the node management endpoint using the default HTTPS port (443). This probe can tell you whether the private link connection is working as expected.
+
+```
+# Windows (PowerShell)
+Test-NetConnection -ComputerName <nodeManagementEndpoint> -Port 443
+# Linux
+nc -v <nodeManagementEndpoint> 443
+```
+
+> [!TIP]
+> You can get the node management endpoint from your [Batch account's properties](batch-account-create-portal.md#view-batch-account-properties).
+
+If the TCP ping fails (for example, it times out), it's typically an issue with the private link connection, and you can raise an Azure support ticket against this private endpoint resource. Otherwise, you can troubleshoot the unusable-node issue as you would for normal Batch pools, and raise a support ticket with your Batch account.
+
+If you're using your own internet outbound solution instead of a private endpoint, run the same TCP ping against the node management endpoint as shown above. If it doesn't work, check whether your outbound access is configured correctly by following the detailed requirements for [simplified compute node communication](simplified-compute-node-communication.md).
+
+### Connect to compute nodes
+
+There's no internet inbound access to compute nodes in the Batch pool without public IP addresses. To access your compute nodes for debugging, you'll need to connect from within the virtual network:
+
+- Use a jumpbox machine inside the virtual network, then connect to your compute nodes from there.
+- Or, try using other remote connection solutions like [Azure Bastion](../bastion/bastion-overview.md):
+ - Create Bastion in the virtual network with [IP based connection](../bastion/connect-ip-address.md) enabled.
+ - Use Bastion to connect to the compute node using its IP address.
+
+You can follow the guide [Connect to compute nodes](error-handling.md#connect-to-compute-nodes) to get the user credentials and IP address for the target compute node in your Batch pool.
+ ## Migration from previous preview version of No Public IP pools For existing pools that use the [previous preview version of Azure Batch No Public IP pool](batch-pool-no-public-ip-address.md), it's only possible to migrate pools created in a [virtual network](batch-virtual-network.md). To migrate the pool, follow the [opt-in process for simplified node communication](simplified-compute-node-communication.md):
cognitive-services Custom Neural Voice https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/custom-neural-voice.md
Previously updated : 02/18/2022 Last updated : 08/01/2022 # What is Custom Neural Voice?
-Custom Neural Voice is a text-to-speech feature that lets you create a one-of-a-kind, customized, synthetic voice for your applications. With Custom Neural Voice, you can build a highly natural-sounding voice by providing your audio samples as training data.
+Custom Neural Voice is a text-to-speech feature that lets you create a one-of-a-kind, customized, synthetic voice for your applications. With Custom Neural Voice, you can build a highly natural-sounding voice by providing your audio samples as training data. If you're looking for ready-to-use options, check out our [text-to-speech](text-to-speech.md) service.
Based on the neural text-to-speech technology and the multilingual, multi-speaker, universal model, Custom Neural Voice lets you create synthetic voices that are rich in speaking styles, or adaptable cross languages. The realistic and natural sounding voice of Custom Neural Voice can represent brands, personify machines, and allow users to interact with applications conversationally. See the [supported languages](language-support.md#custom-neural-voice) for Custom Neural Voice.
-> [!NOTE]
+> [!IMPORTANT]
> Custom Neural Voice access is limited based on eligibility and usage criteria. Request access on the [intake form](https://aka.ms/customneural). ## The basics of Custom Neural Voice
You can adapt the neural text-to-speech engine to fit your needs. To create a cu
## Custom Neural Voice project types
-Speech Studio provides two Custom Neural Voice (CNV) project types: CNV Pro and CNV Lite.
+Speech Studio provides two Custom Neural Voice (CNV) project types: CNV Lite and CNV Pro.
-The following table summarizes key differences between the CNV Pro and CNV Lite project types.
+The following table summarizes key differences between the CNV Lite and CNV Pro project types.
|**Items**|**Lite (Preview)**| **Pro**| ||||
Review these CNV Pro articles to learn more and get started.
| Persona | A persona describes who you want this voice to be. A good persona design will inform all voice creation. This might include choosing an available voice model already created, or starting from scratch by casting and recording a new voice talent.| | Script | A script is a text file that contains the utterances to be spoken by your voice talent. (The term *utterances* encompasses both full sentences and shorter phrases.)|
+## The process for creating a professional custom neural voice
+
+Creating a great custom neural voice requires careful quality control in each step, from voice design and data preparation, to the deployment of the voice model to your system. The following sections discuss some key steps you'll take when you're creating a custom neural voice for your organization.
+
+### Persona design
+
+First, [design a persona](record-custom-voice-samples.md#choose-your-voice-talent) of the voice that represents your brand by using a persona brief document. This document defines elements such as the features of the voice, and the character behind the voice. This helps to guide the process of creating a custom neural voice model, including defining the scripts, selecting your voice talent, training, and voice tuning.
+
+### Script selection
+
+Carefully [select the recording script](record-custom-voice-samples.md#script-selection-criteria) to represent the user scenarios for your voice. For example, you can use the phrases from bot conversations as your recording script if you're creating a customer service bot. Include different sentence types in your scripts, including statements, questions, and exclamations.
+
+### Preparing training data
+
+It's a good idea to capture the audio recordings in a professional quality recording studio to achieve a high signal-to-noise ratio. The quality of the voice model depends heavily on your training data. Consistent volume, speaking rate, pitch, and consistency in expressive mannerisms of speech are required.
+
+After the recordings are ready, [prepare the training data](how-to-custom-voice-prepare-data.md) in the right format.
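+For illustration, here's a hedged sketch of what a transcript file might look like; the utterance IDs and sentences are hypothetical, and the authoritative format is described in the data-preparation article linked above. Each line pairs an audio file's ID with its script text, separated by a tab:
+
+```
+0001	Hello, and thank you for calling Contoso Travel.
+0002	Your reservation is confirmed for Tuesday.
+0003	Is there anything else I can help you with?
+```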
+
+### Training
+
+After you've prepared the training data, go to [Speech Studio](https://aka.ms/speechstudio/customvoice) to create your custom neural voice. Select at least 300 utterances to create a custom neural voice. A series of data quality checks are automatically performed when you upload them. To build high-quality voice models, you should fix any errors and submit again.
+
+### Testing
+
+Prepare test scripts for your voice model that cover the different use cases for your apps. It's a good idea to use scripts within and outside the training dataset, so you can test the quality more broadly for different content.
+
+### Tuning and adjustment
+
+The style and the characteristics of the trained voice model depend on the style and the quality of the recordings from the voice talent used for training. However, you can make several adjustments by using [SSML (Speech Synthesis Markup Language)](./speech-synthesis-markup.md?tabs=csharp) when you make the API calls to your voice model to generate synthetic speech.
+
+SSML is the markup language used to communicate with the text-to-speech service to convert text into audio. The adjustments you can make include changes to pitch, rate, and intonation, as well as pronunciation correction. If the voice model is built with multiple styles, you can also use SSML to switch the styles.
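+As a minimal sketch (not a definitive implementation), the following C# example sends SSML to a deployed custom neural voice endpoint with the Speech SDK. The key, region, endpoint ID, and voice name are placeholder assumptions you'd replace with your own values, and the prosody settings are arbitrary examples.
+
+```csharp
+using System;
+using System.Threading.Tasks;
+using Microsoft.CognitiveServices.Speech;
+
+class Program
+{
+    static async Task Main()
+    {
+        // Placeholders: your Speech resource key/region, the deployment ID of
+        // your custom voice endpoint, and the name of your custom neural voice.
+        var config = SpeechConfig.FromSubscription("<your-speech-key>", "<your-region>");
+        config.EndpointId = "<your-custom-voice-deployment-id>";
+
+        // SSML that slightly slows the rate and raises the pitch; a voice model
+        // built with multiple styles could use <mstts:express-as style='...'> instead.
+        string ssml =
+            "<speak version='1.0' xmlns='http://www.w3.org/2001/10/synthesis' xml:lang='en-US'>" +
+            "<voice name='<YourCustomVoiceName>'>" +
+            "<prosody rate='-10%' pitch='+5%'>Welcome back! Your order is on its way.</prosody>" +
+            "</voice>" +
+            "</speak>";
+
+        using var synthesizer = new SpeechSynthesizer(config);
+        var result = await synthesizer.SpeakSsmlAsync(ssml);
+        Console.WriteLine(result.Reason); // Expect SynthesizingAudioCompleted on success.
+    }
+}
+```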
+ ## Responsible use of AI To learn how to use Custom Neural Voice responsibly, check the following articles.
To learn how to use Custom Neural Voice responsibly, check the following article
## Next steps > [!div class="nextstepaction"]
-> [Get started with Custom Neural Voice](how-to-custom-voice.md)
+> [Create a Project](how-to-custom-voice.md)
cognitive-services How To Custom Voice Create Voice https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-custom-voice-create-voice.md
Previously updated : 02/18/2022 Last updated : 08/01/2022
In [Prepare training data](how-to-custom-voice-prepare-data.md), you learned abo
> [!NOTE] > See [Custom Neural Voice project types](custom-neural-voice.md#custom-neural-voice-project-types) for information about capabilities, requirements, and differences between Custom Neural Voice Pro and Custom Neural Voice Lite projects. This article focuses on the creation of a professional Custom Neural Voice using the Pro project.
-## Prerequisites
-
-* [Create a custom voice project](how-to-custom-voice.md)
-* [Prepare training data](how-to-custom-voice-prepare-data.md)
- ## Set up voice talent A *voice talent* is an individual or target speaker whose voices are recorded and used to create neural voice models. Before you create a voice, define your voice persona and select a right voice talent. For details on recording voice samples, see [the tutorial](record-custom-voice-samples.md).
To train a neural voice, you must create a voice talent profile with an audio fi
Upload this audio file to the Speech Studio as shown in the following screenshot. You create a voice talent profile, which is used to verify against your training data when you create a voice model. For more information, see [voice talent verification](/legal/cognitive-services/speech-service/custom-neural-voice/data-privacy-security-custom-neural-voice?context=%2fazure%2fcognitive-services%2fspeech-service%2fcontext%2fcontext). :::image type="content" source="media/custom-voice/upload-verbal-statement.png" alt-text="Screenshot that shows the upload voice talent statement.":::
-
-> [!NOTE]
-> Custom Neural Voice is available with limited access. Make sure you understand the [responsible AI requirements](/legal/cognitive-services/speech-service/custom-neural-voice/limited-access-custom-neural-voice?context=%2fazure%2fcognitive-services%2fspeech-service%2fcontext%2fcontext), and then [apply for access](https://aka.ms/customneural).
The following steps assume that you've prepared the voice talent verbal consent files. Go to [Speech Studio](https://aka.ms/custom-voice-portal) to select a Custom Neural Voice project, and then follow these steps to create a voice talent profile.
The following steps assume that you've prepared the voice talent verbal consent
1. Select **Add voice talent**.
-1. Next, to define voice characteristics, select **Target scenario**. Then describe your **Voice characteristics**.
-
- >[!NOTE]
- >The scenarios you provide must be consistent with what you've applied for in the application form.
-
-1. Then, go to **Upload voice talent statement**, and follow the instruction to upload the voice talent statement you've prepared beforehand.
+1. Next, to define voice characteristics, select **Target scenario**. Then describe your **Voice characteristics**. The scenarios you provide must be consistent with what you've applied for in the application form.
- >[!NOTE]
- >Make sure the verbal statement is recorded in the same settings as your training data, including the recording environment and speaking style.
+1. Go to **Upload voice talent statement**, and follow the instructions to upload the voice talent statement you've prepared beforehand. Make sure the verbal statement is recorded in the same settings as your training data, including the recording environment and speaking style.
1. Go to **Review and create**, review the settings, and select **Submit**.
You can do the following to create and review your training data:
> [!NOTE] >- Duplicate audio names are removed from the training. Make sure the data you select don't contain the same audio names within the .zip file or across multiple .zip files. If utterance IDs (either in audio or script files) are duplicates, they're rejected.
->- If you've created data files in the previous version of Speech Studio, you must specify a training set for your data in advance to use them. If you haven't, an exclamation mark is appended to the data name, and the data can't be used.
All data you upload must meet the requirements for the data type that you choose. It's important to correctly format your data before it's uploaded, which ensures the data will be accurately processed by the Speech service. Go to [Prepare training data](how-to-custom-voice-prepare-data.md), and confirm that your data is correctly formatted. > [!NOTE] > - Standard subscription (S0) users can upload five data files simultaneously. If you reach the limit, wait until at least one of your data files finishes importing. Then try again.
-> - The maximum number of data files allowed to be imported per subscription is 500 .zip files for standard subscription (S0) users.
+> - The maximum number of data files allowed to be imported per subscription is 500 .zip files for standard subscription (S0) users. See [Speech service quotas and limits](speech-services-quotas-and-limits.md#custom-neural-voice) for more details.
Data files are automatically validated when you select **Submit**. Data validation includes series of checks on the audio files to verify their file format, size, and sampling rate. If there are any errors, fix them and submit again.
After you validate your data files, you can use them to build your Custom Neural
>[!NOTE] >- To create a custom neural voice, select at least 300 utterances.
- >- To train a neural voice, you must specify a voice talent profile. This profile must provide the audio consent file of the voice talent, acknowledging to use his or her speech data to train a custom neural voice model. Custom Neural Voice is available with limited access. Make sure you understand the [responsible AI requirements](/legal/cognitive-services/speech-service/custom-neural-voice/limited-access-custom-neural-voice?context=%2fazure%2fcognitive-services%2fspeech-service%2fcontext%2fcontext) and [apply the access](https://aka.ms/customneural).
+ >- To train a neural voice, you must specify a voice talent profile. This profile must provide the audio consent file of the voice talent, acknowledging the use of their speech data to train a custom neural voice model.
1. Choose your test script. Each training generates 100 sample audio files automatically, to help you test the model with a default script. You can also provide your own test script, including up to 100 utterances. The test script must exclude the filenames (the ID of each utterance). Otherwise, these IDs are spoken. Here's an example of how the utterances are organized in one .txt file:
After you validate your data files, you can use them to build your Custom Neural
1. Review the settings, then select **Submit** to start training the model.
- > [!NOTE]
- > Duplicate audio names will be removed from the training. Make sure the data you select don't contain the same audio names across multiple .zip files.
+ Duplicate audio names will be removed from the training. Make sure the data you select don't contain the same audio names across multiple .zip files.
The **Train model** table displays a new entry that corresponds to this newly created model.
For more information, [learn more about the capabilities and limits of this feat
- [Deploy and use your voice model](how-to-deploy-and-use-endpoint.md) - [How to record voice samples](record-custom-voice-samples.md) - [Text-to-Speech API reference](rest-text-to-speech.md)-- [Long Audio API](long-audio-api.md)-
+- [Long Audio API](long-audio-api.md)
cognitive-services How To Custom Voice Prepare Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-custom-voice-prepare-data.md
Previously updated : 02/18/2022 Last updated : 08/01/2022
When you're ready to create a custom Text-to-Speech voice for your application, the first step is to gather audio recordings and associated scripts to start training the voice model. The Speech service uses this data to create a unique voice tuned to match the voice in the recordings. After you've trained the voice, you can start synthesizing speech in your applications. > [!NOTE]
-> See [Custom Neural Voice project types](custom-neural-voice.md#custom-neural-voice-project-types) for information about capabilities, requirements, and differences between Custom Neural Voice Pro and Custom Neural Voice Lite projects. This article focuses on the creation of a professional Custom Neural Voice using the Pro project.
+> This article focuses on the creation of a professional Custom Neural Voice using the Pro project. See [Custom Neural Voice project types](custom-neural-voice.md#custom-neural-voice-project-types) for information about capabilities, requirements, and differences between Custom Neural Voice Pro and Custom Neural Voice Lite projects.
## Voice talent verbal statement
-Before you can train your own Text-to-Speech voice model, you'll need audio recordings and the associated text transcriptions. On this page, we'll review data types, how they're used, and how to manage each.
+Before you can train your own Text-to-Speech voice model, you'll need [audio recordings](record-custom-voice-samples.md) and the [associated text transcriptions](#types-of-training-data). On this page, we'll review data types, how they're used, and how to manage each.
> [!IMPORTANT] > To train a neural voice, you must create a voice talent profile with an audio file recorded by the voice talent consenting to the usage of their speech data to train a custom voice model. When preparing your recording script, make sure you include the statement sentence. You can find the statement in multiple languages [here](https://github.com/Azure-Samples/Cognitive-Speech-TTS/blob/master/CustomVoice/script/verbal-statement-all-locales.txt). The language of the verbal statement must be the same as your recording. You need to upload this audio file to the Speech Studio as shown below to create a voice talent profile, which is used to verify against your training data when you create a voice model. Read more about the [voice talent verification](/legal/cognitive-services/speech-service/custom-neural-voice/data-privacy-security-custom-neural-voice?context=%2fazure%2fcognitive-services%2fspeech-service%2fcontext%2fcontext) here.
Before you can train your own Text-to-Speech voice model, you'll need audio reco
> > Custom Neural Voice is available with limited access. Make sure you understand the [responsible AI requirements](/legal/cognitive-services/speech-service/custom-neural-voice/limited-access-custom-neural-voice?context=%2fazure%2fcognitive-services%2fspeech-service%2fcontext%2fcontext) and [apply the access here](https://aka.ms/customneural).
-
- ## Types of training data A voice training dataset includes audio recordings, and a text file with the associated transcriptions. Each audio file should contain a single utterance (a single sentence or a single turn for a dialog system), and be less than 15 seconds long.
-In some cases, you may not have the right dataset ready and will want to test the custom neural voice training with available audio files, short or long, with or without transcripts. We provide tools (beta) to help you segment your audio into utterances and prepare transcripts using the [Batch Transcription API](batch-transcription.md).
+In some cases, you may not have the right dataset ready and will want to test the custom neural voice training with available audio files, short or long, with or without transcripts. We provide options (beta) to help you segment your audio into utterances and prepare transcripts using the [Batch Transcription API](batch-transcription.md).
This table lists data types and how each is used to create a custom Text-to-Speech voice model. | Data type | Description | When to use | Additional processing required |
-| | -- | -- | |
+| | -- | -- | |
| **Individual utterances + matching transcript** | A collection (.zip) of audio files (.wav) as individual utterances. Each audio file should be 15 seconds or less in length, paired with a formatted transcript (.txt). | Professional recordings with matching transcripts | Ready for training. | | **Long audio + transcript (beta)** | A collection (.zip) of long, unsegmented audio files (.wav or .mp3, longer than 20 seconds), paired with a collection (.zip) of transcripts that contains all spoken words. | You have audio files and matching transcripts, but they aren't segmented into utterances. | Segmentation (using batch transcription).<br>Audio format transformation where required. | | **Audio only (beta)** | A collection (.zip) of audio files (.wav or .mp3) without a transcript. | You only have audio files available, without transcripts. | Segmentation + transcript generation (using batch transcription).<br>Audio format transformation where required.|
Files should be grouped by type into a dataset and uploaded as a zip file. Each
## Individual utterances + matching transcript
-You can prepare recordings of individual utterances and the matching transcript in two ways. Either write a script and have it read by a voice talent or use publicly available audio and transcribe it to text. If you do the latter, edit disfluencies from the audio files, such as "um" and other filler sounds, stutters, mumbled words, or mispronunciations.
+You can prepare recordings of individual utterances and the matching transcript in two ways. Either [write a script and have it read by a voice talent](record-custom-voice-samples.md) or use publicly available audio and transcribe it to text. If you do the latter, edit disfluencies from the audio files, such as "um" and other filler sounds, stutters, mumbled words, or mispronunciations.
To produce a good voice model, create the recordings in a quiet room with a high-quality microphone. Consistent volume, speaking rate, speaking pitch, and expressive mannerisms of speech are essential.
Below is an example of how the transcripts are organized utterance by utterance
``` It's important that the transcripts are 100% accurate transcriptions of the corresponding audio. Errors in the transcripts will introduce quality loss during the training.
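If you assemble the transcript file programmatically, a small sketch like the one below can help. It assumes the tab-separated utterance-ID-plus-text layout shown in the example above; the IDs and sentences are made up, and you should confirm the required file encoding for your language:

```python
import zipfile

# Hypothetical utterances: each ID must match the name of a .wav recording.
utterances = [
    ("0000000001", "The weather in Seattle is cloudy today."),
    ("0000000002", "Your order has shipped and will arrive on Friday."),
]

# One utterance per line, written as "ID<TAB>transcription".
with open("transcript.txt", "w", encoding="utf-8") as f:
    for utt_id, text in utterances:
        f.write(f"{utt_id}\t{text}\n")

# Package the matching audio; assumes the recordings exist in the working directory.
with zipfile.ZipFile("audio.zip", "w") as zf:
    for utt_id, _ in utterances:
        zf.write(f"{utt_id}.wav")
```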
-## Long audio + transcript (beta)
+## Long audio and transcript (beta)
In some cases, you may not have segmented audio available. We provide a service (beta) through the Speech Studio to help you segment long audio files and create transcriptions. Keep in mind that this service is charged against your speech-to-text subscription usage.
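The same segmentation workflow can be driven outside Speech Studio by calling the Batch Transcription API directly. As a rough sketch only (v3.0 endpoint; the key, region, and content URL are placeholders, and the full request schema is described in the linked article):

```python
import requests

region = "eastus"            # placeholder service region
key = "YourSubscriptionKey"  # placeholder subscription key

# Create a batch transcription job for a long, unsegmented audio file.
resp = requests.post(
    f"https://{region}.api.cognitive.microsoft.com/speechtotext/v3.0/transcriptions",
    headers={"Ocp-Apim-Subscription-Key": key, "Content-Type": "application/json"},
    json={
        "displayName": "Long audio segmentation test",
        "locale": "en-US",
        "contentUrls": ["https://example.com/long-audio.wav"],  # placeholder
    },
)
resp.raise_for_status()
print("Created transcription:", resp.json()["self"])
```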
cognitive-services How To Custom Voice https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-custom-voice.md
Previously updated : 02/18/2022 Last updated : 08/01/2022
-# Get started with Custom Neural Voice
+# Create a project
[Custom Neural Voice](https://aka.ms/customvoice) is a set of online tools that you use to create a recognizable, one-of-a-kind voice for your brand. All it takes to get started are a handful of audio files and the associated transcriptions. See if Custom Neural Voice supports your [language](language-support.md#custom-neural-voice) and [region](regions.md#custom-neural-voices).
-> [!NOTE]
-> Custom Neural Voice Pro can be used to create higher-quality models that are indistinguishable from human recordings. For access you must commit to using it in alignment with our responsible AI principles. Learn more about our [policy on the limited access](/legal/cognitive-services/speech-service/custom-neural-voice/limited-access-custom-neural-voice?context=%2fazure%2fcognitive-services%2fspeech-service%2fcontext%2fcontext) and [apply here](https://aka.ms/customneural).
+> [!IMPORTANT]
+> Custom Neural Voice Pro can be used to create higher-quality models that are indistinguishable from human recordings. For access you must commit to using it in alignment with our responsible AI principles. Learn more about our [policy on limited access](/legal/cognitive-services/speech-service/custom-neural-voice/limited-access-custom-neural-voice?context=%2fazure%2fcognitive-services%2fspeech-service%2fcontext%2fcontext) and [apply here](https://aka.ms/customneural).
> > With [Custom Neural Voice Lite](custom-neural-voice.md#custom-neural-voice-project-types) (public preview), you can create a model for demonstration and evaluation purposes. No application is required. Microsoft restricts and selects the recording and testing samples for use with Custom Neural Voice Lite. You must apply for full access to Custom Neural Voice in order to deploy and use the Custom Neural Voice Lite model for business purposes.
Once you've created an Azure account and a Speech service subscription, you'll n
1. Select your subscription and create a speech project. 1. If you want to switch to another Speech subscription, select the **cog** icon at the top.
-> [!NOTE]
+> [!IMPORTANT]
> Custom Neural Voice training is currently only available in East US, Southeast Asia, and UK South, with the S0 tier. Make sure you select the right Speech resource if you would like to create a neural voice.

## Create a project
To create a custom voice project:
1. Sign in to [Speech Studio](https://aka.ms/speechstudio). 1. Select **Text-to-Speech** > **Custom Voice** > **Create project**.
- See [Custom Neural Voice project types](custom-neural-voice.md#custom-neural-voice-project-types) for information about capabilities, requirements, and differences between Custom Neural Voice Pro and Custom Neural Voice Lite projects.
+ See [Custom Neural Voice project types](custom-neural-voice.md#custom-neural-voice-project-types) for information about capabilities, requirements, and differences between Custom Neural Voice Lite and Custom Neural Voice Pro projects.
-1. After you've created a CNV Pro project, you'll see four tabs: **Set up voice talent**, **Prepare training data**, **Train model**, and **Deploy model**. See [Prepare data for Custom Neural Voice](how-to-custom-voice-prepare-data.md) to set up the voice talent, and proceed to training data.
-
-## Tips for creating a professional custom neural voice
-
-Creating a great custom neural voice requires careful quality control in each step, from voice design and data preparation, to the deployment of the voice model to your system. The following sections discuss some key steps to take when you're creating a custom neural voice for your organization.
-
-### Persona design
-
-First, design a persona of the voice that represents your brand by using a persona brief document. This document defines elements such as the features of the voice, and the character behind the voice. This helps to guide the process of creating a custom neural voice model, including defining the scripts, selecting your voice talent, training, and voice tuning.
-
-### Script selection
-
-Carefully select the recording script to represent the user scenarios for your voice. For example, you can use the phrases from bot conversations as your recording script if you're creating a customer service bot. Include different sentence types in your scripts, including statements, questions, and exclamations.
-
-### Preparing training data
-
-It's a good idea to capture the audio recordings in a professional quality recording studio to achieve a high signal-to-noise ratio. The quality of the voice model depends heavily on your training data. Consistent volume, speaking rate, pitch, and consistency in expressive mannerisms of speech are required.
-
-After the recordings are ready, follow [Prepare training data](how-to-custom-voice-prepare-data.md) to prepare the training data in the right format.
-
-### Training
-
-After you've prepared the training data, go to [Speech Studio](https://aka.ms/speechstudio/customvoice) to create your custom neural voice. Select at least 300 utterances to create a custom neural voice. A series of data quality checks are automatically performed when you upload them. To build high-quality voice models, you should fix any errors and submit again.
-
-### Testing
-
-Prepare test scripts for your voice model that cover the different use cases for your apps. It's a good idea to use scripts within and outside the training dataset, so you can test the quality more broadly for different content.
-
-### Tuning and adjustment
-
-The style and the characteristics of the trained voice model depend on the style and the quality of the recordings from the voice talent used for training. However, you can make several adjustments by using [SSML (Speech Synthesis Markup Language)](./speech-synthesis-markup.md?tabs=csharp) when you make the API calls to your voice model to generate synthetic speech.
-
-SSML is the markup language used to communicate with the text-to-speech service to convert text into audio. The adjustments you can make include change of pitch, rate, intonation, and pronunciation correction. If the voice model is built with multiple styles, you can also use SSML to switch the styles.
+1. After you've created a CNV Pro project, select your project's name and you'll see four tabs: **Set up voice talent**, **Prepare training data**, **Train model**, and **Deploy model**. See [Prepare data for Custom Neural Voice](how-to-custom-voice-prepare-data.md) to set up the voice talent, and proceed to training data.
## Cross lingual feature
If you're using the old version of Custom Voice (which is scheduled to be retire
## Next steps - [Prepare data for custom neural voice](how-to-custom-voice-prepare-data.md)-- [Train your voice model](how-to-custom-voice-create-voice.md)-- [Deploy and use your voice model](how-to-deploy-and-use-endpoint.md) - [How to record voice samples](record-custom-voice-samples.md)
+- [Train your voice model](how-to-custom-voice-create-voice.md)
+- [Deploy and use your voice model](how-to-deploy-and-use-endpoint.md)
cognitive-services How To Deploy And Use Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-deploy-and-use-endpoint.md
Previously updated : 02/18/2022 Last updated : 08/01/2022 zone_pivot_groups: programming-languages-set-nineteen
cognitive-services How To Speech Synthesis Viseme https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-speech-synthesis-viseme.md
zone_pivot_groups: programming-languages-speech-services-nomore-variant
-# Get facial pose events for lip-sync
+# Get facial position with viseme
> [!NOTE]
-> At this time, viseme events are available only for [neural voices](language-support.md#text-to-speech).
+> Viseme ID supports neural voices in [all viseme-supported locales](language-support.md#viseme). Scalable Vector Graphics (SVG) only supports neural voices in the `en-US` locale, and blend shapes support neural voices in the `en-US` and `zh-CN` locales.
-A _viseme_ is the visual description of a phoneme in spoken language. It defines the position of the face and mouth when a person speaks a word. Each viseme depicts the key facial poses for a specific set of phonemes.
+A *viseme* is the visual description of a phoneme in spoken language. It defines the position of the face and mouth while a person is speaking. Each viseme depicts the key facial poses for a specific set of phonemes.
-You can use visemes to control the movement of 2D and 3D avatar models, so that the mouth movements are perfectly matched to synthetic speech. For example, you can:
+You can use visemes to control the movement of 2D and 3D avatar models, so that the facial positions are best aligned with synthetic speech. For example, you can:
* Create an animated virtual voice assistant for intelligent kiosks, building multi-mode integrated services for your customers. * Build immersive news broadcasts and improve audience experiences with natural face and mouth movements.
You can use visemes to control the movement of 2D and 3D avatar models, so that
For more information about visemes, view this [introductory video](https://youtu.be/ui9XT47uwxs). > [!VIDEO https://www.youtube.com/embed/ui9XT47uwxs]
-## Azure Neural TTS can produce visemes with speech
+## Overall workflow of producing viseme with speech
-Neural Text-to-Speech (Neural TTS) turns input text or SSML (Speech Synthesis Markup Language) into lifelike synthesized speech. Speech audio output can be accompanied by viseme IDs and their offset timestamps. Each viseme ID specifies a specific pose in observed speech, such as the position of the lips, jaw, and tongue when producing a particular phoneme. Using a 2D or 3D rendering engine, you can use these viseme events to animate your avatar.
+Neural Text-to-Speech (Neural TTS) turns input text or SSML (Speech Synthesis Markup Language) into lifelike synthesized speech. Speech audio output can be accompanied by viseme ID, Scalable Vector Graphics (SVG), or blend shapes. Using a 2D or 3D rendering engine, you can use these viseme events to animate your avatar.
The overall workflow of viseme is depicted in the following flowchart: ![Diagram of the overall workflow of viseme.](media/text-to-speech/viseme-structure.png)
-*Viseme ID* and *audio offset output* are described in the following table:
+You can request viseme output in SSML. For details, see [how to use viseme element in SSML](speech-synthesis-markup.md#viseme-element).
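For orientation, here's a minimal Python Speech SDK sketch that requests viseme output during synthesis. The `mstts:viseme` element and the `redlips_front` type follow the SSML article linked above; the subscription key, region, and voice name are placeholders:

```python
import azure.cognitiveservices.speech as speechsdk

speech_config = speechsdk.SpeechConfig(
    subscription="YourSubscriptionKey", region="YourServiceRegion")  # placeholders
synthesizer = speechsdk.SpeechSynthesizer(speech_config=speech_config)

# SSML that asks for viseme output alongside the audio.
ssml = """<speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis"
       xmlns:mstts="http://www.w3.org/2001/mstts" xml:lang="en-US">
  <voice name="en-US-JennyNeural">
    <mstts:viseme type="redlips_front"/>
    Rainbow has seven colors.
  </voice>
</speak>"""

synthesizer.viseme_received.connect(
    lambda evt: print(f"Viseme {evt.viseme_id} at {evt.audio_offset / 10000} ms"))
result = synthesizer.speak_ssml_async(ssml).get()
```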
-| Visme&nbsp;element | Description |
-|--|-|
-| Viseme ID | An integer number that specifies a viseme.<br>For English (US), we offer 22 different visemes, each depicting the mouth shape for a specific set of phonemes. There is no one-to-one correspondence between visemes and phonemes. Often, several phonemes correspond to a single viseme, because they look the same on the speaker's face when they're produced, such as `s` and `z`. For more specific information, see the table for [mapping phonemes to viseme IDs](#map-phonemes-to-visemes). |
-| Audio offset | The start time of each viseme, in ticks (100 nanoseconds). |
+## Viseme ID
+Viseme ID refers to an integer number that specifies a viseme. We offer 22 different visemes, each depicting the mouth shape for a specific set of phonemes. There's no one-to-one correspondence between visemes and phonemes. Often, several phonemes correspond to a single viseme, because they look the same on the speaker's face when they're produced, such as `s` and `z`. For more specific information, see the table for [mapping phonemes to viseme IDs](#map-phonemes-to-visemes).
+
+Speech audio output can be accompanied by viseme IDs and `Audio offset`. The `Audio offset` indicates the offset timestamp that represents the start time of each viseme, in ticks (100 nanoseconds).
+
+### Map phonemes to visemes
+
+Visemes vary by language and locale. Each locale has a set of visemes that correspond to its specific phonemes. The [SSML phonetic alphabets](speech-ssml-phonetic-sets.md) documentation maps viseme IDs to the corresponding International Phonetic Alphabet (IPA) phonemes.
+
+## 2D SVG animation
+
+For 2D characters, you can design a character that suits your scenario and use Scalable Vector Graphics (SVG) for each viseme ID to get a time-based face position.
+
+With temporal tags that are provided in a viseme event, these well-designed SVGs will be processed with smoothing modifications, and provide robust animation to the users. For example, the following illustration shows a red-lipped character that's designed for language learning.
+
+![Screenshot showing a 2D rendering example of four red-lipped mouths, each representing a different viseme ID that corresponds to a phoneme.](media/text-to-speech/viseme-demo-2D.png)
+
+## 3D blend shapes animation
+
+You can use blend shapes to drive the facial movements of a 3D character that you designed.
+
+The blend shapes JSON string is represented as a 2-dimensional matrix. Each row represents a frame. Each frame (at 60 FPS) contains an array of 55 facial positions.
## Get viseme events with the Speech SDK
using (var synthesizer = new SpeechSynthesizer(speechConfig, audioConfig))
{ Console.WriteLine($"Viseme event received. Audio offset: " + $"{e.AudioOffset / 10000}ms, viseme id: {e.VisemeId}.");
+ // `Animation` is an xml string for SVG or a json string for blend shapes
+ var animation = e.Animation;
};
- var result = await synthesizer.SpeakSsmlAsync(ssml));
+ // If VisemeID is the only thing you want, you can also use `SpeakTextAsync()`
+ var result = await synthesizer.SpeakSsmlAsync(ssml);
} ```
synthesizer->VisemeReceived += [](const SpeechSynthesisVisemeEventArgs& e)
// The unit of e.AudioOffset is tick (1 tick = 100 nanoseconds), divide by 10,000 to convert to milliseconds. << "Audio offset: " << e.AudioOffset / 10000 << "ms, " << "viseme id: " << e.VisemeId << "." << endl;
+ // `Animation` is an xml string for SVG or a json string for blend shapes
+ auto animation = e.Animation;
};
+// If VisemeID is the only thing you want, you can also use `SpeakTextAsync()`
auto result = synthesizer->SpeakSsmlAsync(ssml).get(); ```
synthesizer.VisemeReceived.addEventListener((o, e) -> {
// The unit of e.AudioOffset is tick (1 tick = 100 nanoseconds), divide by 10,000 to convert to milliseconds. System.out.print("Viseme event received. Audio offset: " + e.getAudioOffset() / 10000 + "ms, "); System.out.println("viseme id: " + e.getVisemeId() + ".");
+ // `Animation` is an xml string for SVG or a json string for blend shapes
+ String animation = e.getAnimation();
});
+// If VisemeID is the only thing you want, you can also use `SpeakTextAsync()`
SpeechSynthesisResult result = synthesizer.SpeakSsmlAsync(ssml).get(); ```
SpeechSynthesisResult result = synthesizer.SpeakSsmlAsync(ssml).get();
```Python speech_synthesizer = speechsdk.SpeechSynthesizer(speech_config=speech_config, audio_config=audio_config)
+def viseme_cb(evt):
+ print("Viseme event received: audio offset: {}ms, viseme id: {}.".format(
+ evt.audio_offset / 10000, evt.viseme_id))
+
+ # `Animation` is an xml string for SVG or a json string for blend shapes
+ animation = evt.animation
+ # Subscribes to viseme received event
-speech_synthesizer.viseme_received.connect(lambda evt: print(
- "Viseme event received: audio offset: {}ms, viseme id: {}.".format(evt.audio_offset / 10000, evt.viseme_id)))
+speech_synthesizer.viseme_received.connect(viseme_cb)
+# If VisemeID is the only thing you want, you can also use `speak_text_async()`
result = speech_synthesizer.speak_ssml_async(ssml).get() ```
var synthesizer = new SpeechSDK.SpeechSynthesizer(speechConfig, audioConfig);
// Subscribes to viseme received event synthesizer.visemeReceived = function (s, e) { window.console.log("(Viseme), Audio offset: " + e.audioOffset / 10000 + "ms. Viseme ID: " + e.visemeId);
+ // `Animation` is an xml string for SVG or a json string for blend shapes
+ var animation = e.animation;
}
+// If VisemeID is the only thing you want, you can also use `speakTextAsync()`
synthesizer.speakSsmlAsync(ssml); ```
SPXSpeechSynthesizer *synthesizer =
// Subscribes to viseme received event [synthesizer addVisemeReceivedEventHandler: ^ (SPXSpeechSynthesizer *synthesizer, SPXSpeechSynthesisVisemeEventArgs *eventArgs) { NSLog(@"Viseme event received. Audio offset: %fms, viseme id: %lu.", eventArgs.audioOffset/10000., eventArgs.visemeId);
+ // `Animation` is an xml string for SVG or a json string for blend shapes
+ NSString *animation = eventArgs.animation;
}];
+// If VisemeID is the only thing you want, you can also use `SpeakText`
[synthesizer speakSsml:ssml]; ``` ::: zone-end
-Here is an example of the viseme output.
+Here's an example of the viseme output.
+
+# [Viseme ID](#tab/visemeid)
```text (Viseme), Viseme ID: 1, Audio offset: 200ms.
Here is an example of the viseme output.
(Viseme), Viseme ID: 13, Audio offset: 2350ms. ```
-After you obtain the viseme output, you can use these events to drive character animation. You can build your own characters and automatically animate them.
+# [2D SVG](#tab/2dsvg)
-For 2D characters, you can design a character that suits your scenario and use Scalable Vector Graphics (SVG) for each viseme ID to get a time-based face position. With temporal tags that are provided in a viseme event, these well-designed SVGs will be processed with smoothing modifications, and provide robust animation to the users. For example, the following illustration shows a red-lipped character that's designed for language learning.
+The SVG output is an XML string that contains the animation.
+Render the SVG animation along with the synthesized speech to see the mouth movement.
-![Screenshot showing a 2D rendering example of four red-lipped mouths, each representing a different viseme ID that corresponds to a phoneme.](media/text-to-speech/viseme-demo-2D.png)
+```xml
+<svg width= "1200px" height= "1200px" ..>
+ <g id= "front_start" stroke= "none" stroke-width= "1" fill= "none" fill-rule= "evenodd">
+ <animate attributeName= "d" begin= "d_dh_front_background_1_0.end" dur= "0.27500
+ ...
+```
-For 3D characters, think of the characters as string puppets. The puppet master pulls the strings from one state to another and the laws of physics do the rest and drive the puppet to move fluidly. The viseme output acts as a puppet master to provide an action timeline. The animation engine defines the physical laws of action. By interpolating frames with easing algorithms, the engine can further generate high-quality animations.
+# [3D blend shapes](#tab/3dblendshapes)
-## Map phonemes to visemes
-Visemes vary by language and locale. Each locale has a set of visemes that correspond to its specific phonemes. The [SSML phonetic alphabets](speech-ssml-phonetic-sets.md) documentation maps viseme IDs to the corresponding International Phonetic Alphabet (IPA) phonemes.
+Each viseme event includes a series of frames in the `Animation` SDK property. These are grouped to best align the facial positions with the audio. Your 3D engine should render each group of `BlendShapes` frames immediately before the corresponding audio chunk. The `FrameIndex` value indicates how many frames preceded the current list of frames.
+
+The output JSON looks like the following sample. Each frame within `BlendShapes` contains an array of 55 facial positions represented as decimal values between 0 and 1. The decimal values are in the same order as described in the facial positions table below.
+```json
+{
+ "FrameIndex":0,
+ "BlendShapes":[
+ [0.021,0.321,...,0.258],
+ [0.045,0.234,...,0.288],
+ ...
+ ]
+}
+```
+
+The order of `BlendShapes` is as follows.
+
+| Order | Facial position in `BlendShapes`|
+| | -- |
+| 1 | eyeBlinkLeft|
+| 2 | eyeLookDownLeft|
+| 3 | eyeLookInLeft|
+| 4 | eyeLookOutLeft|
+| 5 | eyeLookUpLeft|
+| 6 | eyeSquintLeft|
+| 7 | eyeWideLeft|
+| 8 | eyeBlinkRight|
+| 9 | eyeLookDownRight|
+| 10 | eyeLookInRight|
+| 11 | eyeLookOutRight|
+| 12 | eyeLookUpRight|
+| 13 | eyeSquintRight|
+| 14 | eyeWideRight|
+| 15 | jawForward|
+| 16 | jawLeft|
+| 17 | jawRight|
+| 18 | jawOpen|
+| 19 | mouthClose|
+| 20 | mouthFunnel|
+| 21 | mouthPucker|
+| 22 | mouthLeft|
+| 23 | mouthRight|
+| 24 | mouthSmileLeft|
+| 25 | mouthSmileRight|
+| 26 | mouthFrownLeft|
+| 27 | mouthFrownRight|
+| 28 | mouthDimpleLeft|
+| 29 | mouthDimpleRight|
+| 30 | mouthStretchLeft|
+| 31 | mouthStretchRight|
+| 32 | mouthRollLower|
+| 33 | mouthRollUpper|
+| 34 | mouthShrugLower|
+| 35 | mouthShrugUpper|
+| 36 | mouthPressLeft|
+| 37 | mouthPressRight|
+| 38 | mouthLowerDownLeft|
+| 39 | mouthLowerDownRight|
+| 40 | mouthUpperUpLeft|
+| 41 | mouthUpperUpRight|
+| 42 | browDownLeft|
+| 43 | browDownRight|
+| 44 | browInnerUp|
+| 45 | browOuterUpLeft|
+| 46 | browOuterUpRight|
+| 47 | cheekPuff|
+| 48 | cheekSquintLeft|
+| 49 | cheekSquintRight|
+| 50 | noseSneerLeft|
+| 51 | noseSneerRight|
+| 52 | tongueOut|
+| 53 | headRoll|
+| 54 | leftEyeRoll|
+| 55 | rightEyeRoll|
+---
+After you obtain the viseme output, you can use these events to drive character animation. You can build your own characters and automatically animate them.
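As a sketch of how the pieces fit together in the Python SDK shown earlier (and assuming blend shapes output was requested in SSML), the handler below accumulates `BlendShapes` frames and reads a few positions by their 1-based order in the table above. The playback loop is only schematic; a real renderer would consume frames in sync with the audio stream:

```python
import json

FPS = 60  # each blend shapes frame covers 1/60 second of audio
# Positions from the table above (1-based order -> 0-based list index).
JAW_OPEN = 18 - 1
MOUTH_SMILE_LEFT = 24 - 1
MOUTH_SMILE_RIGHT = 25 - 1

frames = []  # flat, ordered list of 55-value frames

def on_viseme(evt):
    """Collect blend shapes frames; viseme-ID-only events carry no animation."""
    if not evt.animation:
        return
    payload = json.loads(evt.animation)
    if payload["FrameIndex"] != len(frames):
        print("warning: a frame group arrived out of order")
    frames.extend(payload["BlendShapes"])

# Attach with: speech_synthesizer.viseme_received.connect(on_viseme)

# Schematic playback after synthesis completes.
for i, frame in enumerate(frames):
    smile = (frame[MOUTH_SMILE_LEFT] + frame[MOUTH_SMILE_RIGHT]) / 2
    print(f"t={i / FPS:.3f}s jawOpen={frame[JAW_OPEN]:.2f} smile={smile:.2f}")
```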
## Next steps
-> [!div class="nextstepaction"]
-> [SSML phonetic alphabets](speech-ssml-phonetic-sets.md)
+- [SSML phonetic alphabets](speech-ssml-phonetic-sets.md)
+- [How to improve synthesis with SSML](speech-synthesis-markup.md)
cognitive-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/language-support.md
The following neural voices are in public preview.
| Language | Locale | Gender | Voice name | Style support | |-||--|-|| | Chinese (Mandarin, Simplified) | `zh-CN` | Female | `zh-CN-XiaomengNeural` <sup>New</sup> | General, multiple styles available [using SSML](speech-synthesis-markup.md#adjust-speaking-styles) |
+| Chinese (Mandarin, Simplified) | `zh-CN` | Female | `zh-CN-XiaoyiNeural` <sup>New</sup> | General, multiple styles available [using SSML](speech-synthesis-markup.md#adjust-speaking-styles) |
+| Chinese (Mandarin, Simplified) | `zh-CN` | Female | `zh-CN-XiaozhenNeural` <sup>New</sup> | General, multiple styles available [using SSML](speech-synthesis-markup.md#adjust-speaking-styles) |
| Chinese (Mandarin, Simplified) | `zh-CN` | Male | `zh-CN-YunfengNeural` <sup>New</sup> | General, multiple styles available [using SSML](speech-synthesis-markup.md#adjust-speaking-styles) | | Chinese (Mandarin, Simplified) | `zh-CN` | Male | `zh-CN-YunhaoNeural` <sup>New</sup> | Optimized for promoting a product or service, multiple styles available [using SSML](speech-synthesis-markup.md#adjust-speaking-styles) | | Chinese (Mandarin, Simplified) | `zh-CN` | Male | `zh-CN-YunjianNeural` <sup>New</sup> | Optimized for broadcasting sports event, multiple styles available [using SSML](speech-synthesis-markup.md#adjust-speaking-styles) |
Use the following table to determine supported styles and roles for each neural
|zh-CN-XiaoshuangNeural|`chat`|Supported|| |zh-CN-XiaoxiaoNeural|`affectionate`, `angry`, `assistant`, `calm`, `chat`, `cheerful`, `customerservice`, `disgruntled`, `fearful`, `gentle`, `lyrical`, `newscast`, `poetry-reading`, `sad`, `serious`|Supported|| |zh-CN-XiaoxuanNeural|`angry`, `calm`, `cheerful`, `depressed`, `disgruntled`, `fearful`, `gentle`, `serious`|Supported|Supported|
+|zh-CN-XiaoyiNeural <sup>Public preview</sup>|`affectionate`, `angry`, `cheerful`, `disgruntled`, `embarrassed`, `fearful`, `gentle`, `sad`, `serious`|Supported||
+|zh-CN-XiaozhenNeural <sup>Public preview</sup>|`angry`, `cheerful`, `disgruntled`, `fearful`, `sad`, `serious`|Supported||
|zh-CN-YunfengNeural <sup>Public preview</sup>|`calm`, `angry`, ` disgruntled`, `cheerful`, `fearful`, `sad`, `serious`, `depressed`|Supported|| |zh-CN-YunhaoNeural <sup>Public preview</sup>|`general`, `advertisement-upbeat` <sup>Public preview</sup>|Supported|| |zh-CN-YunjianNeural <sup>Public preview</sup>|`narration-relaxed`, `sports-commentary` <sup>Public preview</sup>, `sports-commentary-excited` <sup>Public preview</sup>|Supported|| |zh-CN-YunxiNeural|`angry`, `assistant`, `cheerful`, `depressed`, `disgruntled`, `embarrassed`, `fearful`, `narration-relaxed`, `sad`, `serious`|Supported|Supported|
-|zh-CN-YunxiaNeural <sup>Public preview</sup>|`angry`, `calm`, `cheerful`, `fearful`, `narration-relaxed`, `sad`|Supported||
+|zh-CN-YunxiaNeural <sup>Public preview</sup>|`angry`, `calm`, `cheerful`, `fearful`, `sad`|Supported||
|zh-CN-YunyangNeural|`customerservice`, `narration-professional`, `newscast-casual`|Supported|| |zh-CN-YunyeNeural|`angry`, `calm`, `cheerful`, `disgruntled`, `embarrassed`, `fearful`, `sad`, `serious`|Supported|Supported| |zh-CN-YunzeNeural <sup>Public preview</sup>|`angry`, `calm`, `cheerful`, `depressed`, `disgruntled`, `documentary-narration`, `fearful`, `sad`, `serious`|Supported|Supported| -- ### Custom Neural Voice Custom Neural Voice lets you create synthetic voices that are rich in speaking styles. You can create a unique brand voice in multiple languages and styles by using a small set of recording data.
There are two Custom Neural Voice (CNV) project types: CNV Pro and CNV Lite (pre
| Turkish (Turkey) | `tr-TR` | No |No| | Vietnamese (Vietnam) | `vi-VN` | No |No|
+### Viseme
+
+A _viseme_ is the visual description of a phoneme in spoken language. It defines the position of the face and mouth while a person is speaking. Each viseme depicts the key facial poses for a specific set of phonemes. Speech audio output can be accompanied by a viseme ID, Scalable Vector Graphics (SVG), or blend shapes. For more information, see [Get facial position with viseme](how-to-speech-synthesis-viseme.md).
+
+> [!NOTE]
+> Viseme ID supports [neural voices](#text-to-speech) in the locales listed below. SVG only supports neural voices in the `en-US` locale, and blend shapes support neural voices in the `en-US` and `zh-CN` locales.
+
+The following table lists the languages supported by viseme ID.
+
+| Language | Locale |
+|||
+| Arabic (Algeria) | `ar-DZ` |
+| Arabic (Bahrain) | `ar-BH` |
+| Arabic (Egypt) | `ar-EG` |
+| Arabic (Iraq) | `ar-IQ` |
+| Arabic (Jordan) | `ar-JO` |
+| Arabic (Kuwait) | `ar-KW` |
+| Arabic (Lebanon) | `ar-LB` |
+| Arabic (Libya) | `ar-LY` |
+| Arabic (Morocco) | `ar-MA` |
+| Arabic (Oman) | `ar-OM` |
+| Arabic (Qatar) | `ar-QA` |
+| Arabic (Saudi Arabia) | `ar-SA` |
+| Arabic (Syria) | `ar-SY` |
+| Arabic (Tunisia) | `ar-TN` |
+| Arabic (United Arab Emirates) | `ar-AE` |
+| Arabic (Yemen) | `ar-YE` |
+| Bulgarian (Bulgaria) | `bg-BG` |
+| Catalan (Spain) | `ca-ES` |
+| Chinese (Cantonese, Traditional) | `zh-HK` |
+| Chinese (Mandarin, Simplified) | `zh-CN` |
+| Chinese (Taiwanese Mandarin) | `zh-TW` |
+| Croatian (Croatia) | `hr-HR` |
+| Czech (Czech) | `cs-CZ` |
+| Danish (Denmark) | `da-DK` |
+| Dutch (Belgium) | `nl-BE` |
+| Dutch (Netherlands) | `nl-NL` |
+| English (Australia) | `en-AU` |
+| English (Canada) | `en-CA` |
+| English (Hong Kong) | `en-HK` |
+| English (India) | `en-IN` |
+| English (Ireland) | `en-IE` |
+| English (Kenya) | `en-KE` |
+| English (New Zealand) | `en-NZ` |
+| English (Nigeria) | `en-NG` |
+| English (Philippines) | `en-PH` |
+| English (Singapore) | `en-SG` |
+| English (South Africa) | `en-ZA` |
+| English (Tanzania) | `en-TZ` |
+| English (United Kingdom) | `en-GB` |
+| English (United States) | `en-US` |
+| Finnish (Finland) | `fi-FI` |
+| French (Belgium) | `fr-BE` |
+| French (Canada) | `fr-CA` |
+| French (France) | `fr-FR` |
+| French (Switzerland) | `fr-CH` |
+| German (Austria) | `de-AT` |
+| German (Germany) | `de-DE` |
+| German (Switzerland) | `de-CH` |
+| Greek (Greece) | `el-GR` |
+| Gujarati (India) | `gu-IN` |
+| Hebrew (Israel) | `he-IL` |
+| Hindi (India) | `hi-IN` |
+| Hungarian (Hungary) | `hu-HU` |
+| Indonesian (Indonesia) | `id-ID` |
+| Italian (Italy) | `it-IT` |
+| Japanese (Japan) | `ja-JP` |
+| Korean (Korea) | `ko-KR` |
+| Malay (Malaysia) | `ms-MY` |
+| Marathi (India) | `mr-IN` |
+| Norwegian (Bokmål, Norway) | `nb-NO` |
+| Polish (Poland) | `pl-PL` |
+| Portuguese (Brazil) | `pt-BR` |
+| Portuguese (Portugal) | `pt-PT` |
+| Romanian (Romania) | `ro-RO` |
+| Russian (Russia) | `ru-RU` |
+| Slovak (Slovakia) | `sk-SK` |
+| Slovenian (Slovenia) | `sl-SI` |
+| Spanish (Argentina) | `es-AR` |
+| Spanish (Bolivia) | `es-BO` |
+| Spanish (Chile) | `es-CL` |
+| Spanish (Colombia) | `es-CO` |
+| Spanish (Costa Rica) | `es-CR` |
+| Spanish (Cuba) | `es-CU` |
+| Spanish (Dominican Republic) | `es-DO` |
+| Spanish (Ecuador) | `es-EC` |
+| Spanish (El Salvador) | `es-SV` |
+| Spanish (Equatorial Guinea) | `es-GQ` |
+| Spanish (Guatemala) | `es-GT` |
+| Spanish (Honduras) | `es-HN` |
+| Spanish (Mexico) | `es-MX` |
+| Spanish (Nicaragua) | `es-NI` |
+| Spanish (Panama) | `es-PA` |
+| Spanish (Paraguay) | `es-PY` |
+| Spanish (Peru) | `es-PE` |
+| Spanish (Puerto Rico) | `es-PR` |
+| Spanish (Spain) | `es-ES` |
+| Spanish (Uruguay) | `es-UY` |
+| Spanish (US) | `es-US` |
+| Spanish (Venezuela) | `es-VE` |
+| Swahili (Tanzania) | `sw-TZ` |
+| Swedish (Sweden) | `sv-SE` |
+| Tamil (India) | `ta-IN` |
+| Tamil (Malaysia) | `ta-MY` |
+| Tamil (Singapore) | `ta-SG` |
+| Tamil (Sri Lanka) | `ta-LK` |
+| Telugu (India) | `te-IN` |
+| Thai (Thailand) | `th-TH` |
+| Turkish (Turkey) | `tr-TR` |
+| Ukrainian (Ukraine) | `uk-UA` |
+| Urdu (India) | `ur-IN` |
+| Urdu (Pakistan) | `ur-PK` |
+| Vietnamese (Vietnam) | `vi-VN` |
+ ## Language identification With language identification, you set and get one of the supported locales in the following table. We only compare at the language level, such as English and German. If you include multiple locales of the same language, for example, `en-IN` and `en-US`, we'll only compare English (`en`) with the other candidate languages.
cognitive-services Record Custom Voice Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/record-custom-voice-samples.md
Previously updated : 02/18/2022 Last updated : 08/01/2022
-# Record voice samples to create a professional custom neural voice
+# How to record voice samples for Custom Neural Voice
This article provides instructions on preparing high-quality voice samples for creating a professional voice model using the Custom Neural Voice Pro project.
Recording engineer |Oversees the technical aspects of the recording and operate
Director |Prepares the script and coaches the voice talent's performance. Editor |Finalizes the audio files and prepares them for upload to Speech Studio
-An individual may fill more than one role. This guide assumes that you'll be primarily filling the director role and hiring both a voice talent and a recording engineer. If you want to make the recordings yourself, this article includes some information about the recording engineer role. The editor role isn't needed until after the session, so can be performed by the director or the recording engineer.
+An individual may fill more than one role. This guide assumes that you'll be filling the director role and hiring both a voice talent and a recording engineer. If you want to make the recordings yourself, this article includes some information about the recording engineer role. The editor role isn't needed until after the recording session, and can be performed by the director or the recording engineer.
## Choose your voice talent
-Actors with experience in voiceover or voice character work make good custom neural voice talent. You can also often find suitable talent among announcers and newsreaders. Choose voice talent whose natural voice you like. It's possible to create unique "character" voices, but it's much harder for most talent to perform them consistently, and the effort can cause voice strain. The single most important factor for choosing voice talent is consistency. Your recordings for the same voice style should all sound like they were made on the same day in the same room. You can approach this ideal through good recording practices and engineering.
+Actors with experience in voiceover, voice character work, announcing or newsreading make good voice talent. Choose voice talent whose natural voice you like. It's possible to create unique "character" voices, but it's much harder for most talent to perform them consistently, and the effort can cause voice strain. The single most important factor for choosing voice talent is consistency. Your recordings for the same voice style should all sound like they were made on the same day in the same room. You can approach this ideal through good recording practices and engineering.
-Your voice talent is the other half of the equation. They must be able to speak with consistent rate, volume level, pitch, and tone. Clear diction is a must. The talent also needs to be able to strictly control their pitch variation, emotional affect, and speech mannerisms. Recording voice samples can be more fatiguing than other kinds of voice work. Most voice talent can record for two or three hours a day. Limit sessions to three or four a week, with a day off in-between if possible.
+Your voice talent must be able to speak with consistent rate, volume level, pitch, and tone, with clear diction. They also need to be able to control their pitch variation, emotional affect, and speech mannerisms. Recording voice samples can be more fatiguing than other kinds of voice work, so most voice talent can usually record for only two or three hours a day. Limit sessions to three or four days a week, with a day off in-between if possible.
-Work with your voice talent to develop a "persona" that defines the overall sound and emotional tone of the custom neural voice. In the process, you'll pinpoint what "neutral" sounds like for that persona. Using the Custom Neural Voice capability, you can train a model that speaks with emotions. Define the "speaking styles" and ask your voice talent to read the script in a way that resonates the styles you want.
+Work with your voice talent to develop a persona that defines the overall sound and emotional tone of the custom neural voice, making sure to pinpoint what "neutral" sounds like for that persona. Using the Custom Neural Voice capability, you can train a model that speaks with emotion, so define the speaking styles of your persona and ask your voice talent to read the script in a way that resonates with the styles you want.
-A persona might have, for example, a naturally upbeat personality. So "their" voice might carry a note of optimism even when they speak neutrally. However, such a personality trait should be subtle and consistent. Listen to readings by existing voices to get an idea of what you're aiming for.
+For example, a persona with a naturally upbeat personality would carry a note of optimism even when they speak neutrally. However, this personality trait should be subtle and consistent. Listen to readings by existing voices to get an idea of what you're aiming for.
> [!TIP] > Usually, you'll want to own the voice recordings you make. Your voice talent should be amenable to a work-for-hire contract for the project.
A persona might have, for example, a naturally upbeat personality. So "their" vo
The starting point of any custom neural voice recording session is the script, which contains the utterances to be spoken by your voice talent. The term "utterances" encompasses both full sentences and shorter phrases. Building a custom neural voice requires at least 300 recorded utterances as training data.
-The utterances in your script can come from anywhere: fiction, non-fiction, transcripts of speeches, news reports, and anything else available in printed form. For a brief discussion of potential legal issues, see the ["Legalities"](#legalities) section. You can also write your own text.
-Your utterances don't need to come from the same source, or the same kind of source. They don't even need to have anything to do with each other. However, if you'll use set phrases (for example, "You have successfully logged in") in your speech application, make sure to include them in your script. It will give your custom neural voice a better chance of pronouncing those phrases well.
+The utterances in your script can come from anywhere: fiction, non-fiction, transcripts of speeches, news reports, and anything else available in printed form. For a brief discussion of potential legal issues, see the ["Legalities"](#legalities) section. You can also write your own text.
-We recommend the recording scripts include both general sentences and your domain-specific sentences. For example, if you plan to record 2,000 sentences, 1,000 of them could be general sentences, another 1,000 of them could be sentences from your target domain or the use case of your application.
+Your utterances don't need to come from the same source, the same kind of source, or have anything to do with each other. However, if you use set phrases (for example, "You have successfully logged in") in your speech application, make sure to include them in your script. It will give your custom neural voice a better chance of pronouncing those phrases well.
-We provide [sample scripts in the 'General', 'Chat' and 'Customer Service' domains for each language](https://github.com/Azure-Samples/Cognitive-Speech-TTS/tree/master/CustomVoice/script) to help you prepare your recording scripts. You can use these Microsoft shared scripts for your recordings directly or use them as a reference to create your own.
+We recommend the recording scripts include both general sentences and domain-specific sentences. For example, if you plan to record 2,000 sentences, 1,000 of them could be general sentences, another 1,000 of them could be sentences from your target domain or the use case of your application.
-You can select your domain-specific scripts from the sentences that your custom neural voice will be used to read.
+We provide [sample scripts in the 'General', 'Chat' and 'Customer Service' domains for each language](https://github.com/Azure-Samples/Cognitive-Speech-TTS/tree/master/CustomVoice/script) to help you prepare your recording scripts. You can use these Microsoft shared scripts for your recordings directly or use them as a reference to create your own.
### Script selection criteria
Below are some general guidelines that you can follow to create a good corpus (r
- Balance your script to cover different sentence types in your domain including statements, questions, exclamations, long sentences, and short sentences.
- In general, each sentence should contain 4 words to 30 words. It's required that no duplicate sentences are included in your script.<br>
- For how to balance the different sentence types, refer to the following table.
+ Each sentence should contain 4 to 30 words, and no duplicate sentences should be included in your script.<br>
+ For how to balance the different sentence types, refer to the following table:
| Sentence types | Coverage | | : | : |
- | Statement sentences | Statement sentences are the major part of the script, taking about 70-80% of all. |
- | Question sentences | Question sentences should take about 10%-20% of your domain script, including 5%-10% of rising and 5%-10% of falling tones. |
- | Exclamation sentences| Exclamation sentences should take about 10%-20% of your scripts.|
- | Short word/phrase| Short word/phrase scripts should also take about 10% cases of the total utterances, with 5 to 7 words per case. |
+ | Statement sentences | Statement sentences should be 70-80% of the script.|
+ | Question sentences | Question sentences should be about 10%-20% of your domain script, including 5%-10% of rising and 5%-10% of falling tones. |
+ | Exclamation sentences| Exclamation sentences should be about 10%-20% of your script.|
+ | Short word/phrase| Short word/phrase scripts should be about 10% of total utterances, with 5 to 7 words per case. |
> [!NOTE]
- > Regarding short word/phrase, actually it means that single words or phrases should be included and separated with a comma. It helps a voice talent pause briefly at the comma when reading the scripts.
+ > Short words/phrases should be separated with commas. They help remind your voice talent to pause briefly when reading them.
Best practices include:
- - Balanced coverage for Part of Speech, like verb, noun, adjective, and so on.
- - Balanced coverage for pronunciations. Include all letters from A to Z so the Text-to-Speech engine learns how to pronounce each letter in your defined style.
- - Readable, understandable, common-sense for speaker to read out.
- - Avoid too much similar pattern for word/phrase, like "easy" and "easier".
- - Include different format of numbers: address, unit, phone, quantity, date, and so on, in all sentence types.
- - Include spelling sentences if it's something your custom neural voice will be used to read. For example, "Spell of Apple is A P P L E".
+ - Balanced coverage for Parts of Speech, like verbs, nouns, adjectives, and so on.
+ - Balanced coverage for pronunciations. Include all letters from A to Z so the Text-to-Speech engine learns how to pronounce each letter in your style.
+ - Readable, understandable, common-sense scripts for the speaker to read.
+ - Avoid too many similar patterns for words/phrases, like "easy" and "easier".
+ - Include different formats of numbers: address, unit, phone, quantity, date, and so on, in all sentence types.
+ - Include spelling sentences if it's something your custom neural voice will be used to read. For example, "The spelling of Apple is A P P L E".
-- Don't put multiple sentences into one line/one utterance. Separate each line per utterances.
+- Don't put multiple sentences into one line/one utterance. Separate each line by utterance.
-- Make sure the sentence is mostly clean. In general, don't include too many non-standard words like numbers or abbreviations as they're usually hard to read. Some application may need to read many numbers or acronyms. In this case, you can include these words, but normalize them in their spoken form.
+- Make sure the sentence is clean. Generally, don't include too many non-standard words like numbers or abbreviations as they're usually hard to read. Some applications may require the reading of many numbers or acronyms. In these cases, you can include these words, but normalize them in their spoken form.
Below are some best practices for example:
- - For lines with abbreviations, instead of "BTW", you have "by the way".
- - For lines with digits, instead of "911", you have "nine one one".
- - For lines with acronyms, instead of "ABC", you have "A B C".
+ - For lines with abbreviations, instead of "BTW", write "by the way".
+ - For lines with digits, instead of "911", write "nine one one".
+ - For lines with acronyms, instead of "ABC", write "A B C".
- With that, make sure your voice talent pronounces these words in the expected way. Keep your script and recordings match consistently during the training process.
+ With that, make sure your voice talent pronounces these words in the expected way, and keep your script and recordings matched during the training process.
- Your script should include many different words and sentences with different kinds of sentence lengths, structures, and moods. -- Check the script carefully for errors. If possible, have someone else check it too. When you run through the script with your talent, you'll probably catch a few more mistakes.
+- Check the script carefully for errors. If possible, have someone else check it too. When you run through the script with your voice talent, you may catch more mistakes.
### Difference between voice talent script and training script
-The training script can differ from the voice talent script, especially for scripts that contain digits, symbols, abbreviations, date, and time. Scripts prepared for the voice talent must follow the native reading conventions, such as 50% and $45. The scripts used for training must be normalized to match the audio recording, such as *fifty percent* and *forty-five dollars*.
+The training script can differ from the voice talent script, especially for scripts that contain digits, symbols, abbreviations, date, and time. Scripts prepared for the voice talent must follow native reading conventions, such as 50% and $45. The scripts used for training must be normalized to match the audio recording, such as *fifty percent* and *forty-five dollars*.
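Simple cases of this normalization can be scripted as a first pass. The replacement table in the sketch below is illustrative only; real scripts need language-aware normalization and human review:

```python
# Illustrative voice-talent-script -> training-script replacements.
REPLACEMENTS = {
    "50%": "fifty percent",
    "$45": "forty-five dollars",
    "ASAP": "as soon as possible",
    "March 3rd at 5:00 PM": "March third at five PM",
}

def normalize(line: str) -> str:
    """Apply literal replacements, longest source text first."""
    for src in sorted(REPLACEMENTS, key=len, reverse=True):
        line = line.replace(src, REPLACEMENTS[src])
    return line

print(normalize("The sale ends March 3rd at 5:00 PM, so order ASAP to save 50%."))
# -> The sale ends March third at five PM, so order as soon as possible to save fifty percent.
```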
> [!NOTE] > We provide some example scripts for the voice talent on [GitHub](https://github.com/Azure-Samples/Cognitive-Speech-TTS/tree/master/CustomVoice/script). To use the example scripts for training, you must normalize them according to the recordings of your voice talent before uploading the file.
The following table shows the difference between scripts for voice talent and th
| Category |Voice talent script example | Training script example (normalized) | | | | |
-| Digits |123| one hundred and twenty-three |
+| Digits |123| one hundred and twenty-three|
| Symbols |50%| fifty percent| | Abbreviation |ASAP| as soon as possible| | Date and time |March 3rd at 5:00 PM| March third at five PM| ### Typical defects of a script
-The script's poor quality can adversely affect the training results. To achieve high-quality training results, it's crucial to avoid the defects.
+The script's poor quality can adversely affect the training results. To achieve high-quality training results, it's crucial to avoid defects.
-The script defects generally fall into the following categories:
+Script defects generally fall into the following categories:
| Category | Example | | : | : |
-| Have a meaningless content in a common way. | |
+| Meaningless content. | "Colorless green ideas sleep furiously."|
| Incomplete sentences. |- "This was my last eve" (no subject, no specific meaning) <br>- "He's obviously already funny (no quote mark in the end, it's not a complete sentence) | | Typo in the sentences. | - Start with a lower case<br>- No ending punctuation if needed<br> - Misspelling <br>- Lack of punctuation: no period in the end (except news title)<br>- End with symbols, except comma, question, exclamation <br>- Wrong format, such as:<br> &emsp;- 45$ (should be $45)<br> &emsp;- No space or excess space between word/punctuation | |Duplication in similar format, one per each pattern is enough. |- "Now is 1pm in New York"<br>- "Now is 2pm in New York"<br>- "Now is 3pm in New York"<br>- "Now is 1pm in Seattle"<br>- "Now is 1pm in Washington D.C." |
-|Uncommon foreign words: only the commonly used foreign word is acceptable in our script. | |
-|Emoji or any other uncommon symbols. | |
+|Uncommon foreign words: only commonly used foreign words are acceptable in the script. | In English one might use the French word "faux" in common speech, but a French expression such as "coincer la bulle" would be uncommon. |
+|Emoji or any other uncommon symbols | |
### Script format
-You can write your script in Microsoft Word. The script is for use during the recording session, so you can set it up any way you find easy to work with. Create the text file that's required by Speech Studio separately.
+The script is for use during recording sessions, so you can set it up any way you find easy to work with. Create the text file that's required by Speech Studio separately.
A basic script format contains three columns: -- The number of the utterance, starting at 1. Numbering makes it easy for everyone in the studio to refer to a particular utterance ("let's try number 356 again"). You can use the Word paragraph numbering feature to number the rows of the table automatically.
+- The number of the utterance, starting at 1. Numbering makes it easy for everyone in the studio to refer to a particular utterance ("let's try number 356 again"). You can use the Microsoft Word paragraph numbering feature to number the rows of the table automatically.
- A blank column where you'll write the take number or time code of each utterance to help you find it in the finished recording. - The text of the utterance itself. ![Sample script](media/custom-voice/script.png) > [!NOTE]
-> Most studios record in short segments known as *takes*. Each take typically contains 10 to 24 utterances. Just noting the take number is sufficient to find an utterance later. If you're recording in a studio that prefers to make longer recordings, you'll want to note the time code instead. The studio will have a prominent time display.
+> Most studios record in short segments known as "takes". Each take typically contains 10 to 24 utterances. Just noting the take number is sufficient to find an utterance later. If you're recording in a studio that prefers to make longer recordings, you'll want to note the time code instead. The studio will have a prominent time display.
Leave enough space after each row to write notes. Be sure that no utterance is split between pages. Number the pages, and print your script on one side of the paper.
-Print three copies of the script: one for the talent, one for the engineer, and one for the director (you). Use a paper clip instead of staples: an experienced voice artist will separate the pages to avoid making noise as the pages are turned.
+Print three copies of the script: one for the voice talent, one for the recording engineer, and one for the director (you). Use a paper clip instead of staples: an experienced voice artist will separate the pages to avoid making noise as the pages are turned.
### Voice talent statement
Read more about the [voice talent verification](/legal/cognitive-services/speech
### Legalities
-Under copyright law, an actor's reading of copyrighted text might be a performance for which the author of the work should be compensated. This performance won't be recognizable in the final product, the custom neural voice. Even so, the legality of using a copyrighted work for this purpose isn't well established. Microsoft can't provide legal advice on this issue; consult your own counsel.
+Under copyright law, an actor's reading of copyrighted text might be a performance for which the author of the work should be compensated. This performance won't be recognizable in the final product, the custom neural voice. Even so, the legality of using a copyrighted work for this purpose isn't well established. Microsoft can't provide legal advice on this issue; consult your own legal counsel.
Fortunately, it's possible to avoid these issues entirely. There are many sources of text you can use without permission or license.
You can refer to below specification to prepare for the audio samples as best pr
### Typical audio errors
-For high-quality training results, avoiding audio errors is highly recommended. The errors of audio normally involve the following categories:
+For high-quality training results, avoiding audio errors is highly recommended. Audio errors usually fall into the following categories:
- Audio file name doesn't match the script ID.
- WAV file has an invalid format and can't be read.
-- Audio sampling rate is lower than 16 KHz. Also, it's recommended that wav file sampling rate should be equal or higher than 24 KHz for high-quality neural voice.
+- Audio sampling rate is lower than 16 KHz. It's recommended that the .wav file sampling rate be equal to or higher than 24 KHz for high-quality neural voice.
- Volume peak isn't within the range of -3 dB (70% of max volume) to -6 dB (50%).
-- Waveform overflow. That is, the waveform at its peak value is cut and thus not complete.
+- Waveform overflow: the waveform is cut at its peak value and is thus not complete.
![waveform overflow](media/custom-voice/overflow.png)

-- The silence part isn't clean, such as ambient noise, mouth noise and echo.
+- The silent parts of the recording aren't clean; you can hear sounds such as ambient noise, mouth noise and echo.
For example, the audio below contains environment noise between speeches.

![environment noise](media/custom-voice/environment-noise.png)
- Below sample contains noises of DC offset or echo.
+ The sample below contains signs of DC offset or echo.
![DC offset or echo](media/custom-voice/dc-offset-noise.png)
For high-quality training results, avoiding audio errors is highly recommended.
### Do it yourself
-If you want to make the recording yourself, rather than going into a recording studio, here's a short primer. Thanks to the rise of home recording and podcasting, it's easier than ever to find good recording advice and resources online.
+If you want to make the recording yourself, instead of going into a recording studio, here's a short primer. Thanks to the rise of home recording and podcasting, it's easier than ever to find good recording advice and resources online.
Your "recording booth" should be a small room with no noticeable echo or "room tone." It should be as quiet and soundproof as possible. Drapes on the walls can be used to reduce echo and neutralize or "deaden" the sound of the room. Use a high-quality studio condenser microphone ("mic" for short) intended for recording voice. Sennheiser, AKG, and even newer Zoom mics can yield good results. You can buy a mic, or rent one from a local audio-visual rental firm. Look for one with a USB interface. This type of mic conveniently combines the microphone element, preamp, and analog-to-digital converter into one package, simplifying hookup.
-You may also use an analog microphone. Many rental houses offer "vintage" microphones renowned for their voice character. Note that professional analog gear uses balanced XLR connectors, rather than the 1/4-inch plug that's used in consumer equipment. If you go analog, you'll also need a preamp and a computer audio interface with these connectors.
+You may also use an analog microphone. Many rental houses offer "vintage" microphones known for their voice character. Note that professional analog gear uses balanced XLR connectors, rather than the 1/4-inch plug that's used in consumer equipment. If you go analog, you'll also need a preamp and a computer audio interface with these connectors.
Install the microphone on a stand or boom, and install a pop filter in front of the microphone to eliminate noise from "plosive" consonants like "p" and "b." Some microphones come with a suspension mount that isolates them from vibrations in the stand, which is helpful.
The voice talent must stay at a consistent distance from the microphone. Use tap
Use a stand to hold the script. Avoid angling the stand so that it can reflect sound toward the microphone.
-The person operating the recording equipmentΓÇöthe engineerΓÇöshould be in a separate room from the talent, with some way to talk to the talent in the recording booth (a *talkback circuit).*
+The person operating the recording equipment (the recording engineer) should be in a separate room from the talent, with some way to talk to the talent in the recording booth (a *talkback circuit*).
The recording should contain as little noise as possible, with a goal of an 80-dB signal-to-noise ratio or better.
Here, most of the range (height) is used, but the highest peaks of the signal do
Record directly into the computer via a high-quality audio interface or a USB port, depending on the mic you're using. For analog, keep the audio chain simple: mic, preamp, audio interface, computer. You can license both [Avid Pro Tools](https://www.avid.com/en/pro-tools) and [Adobe Audition](https://www.adobe.com/products/audition.html) monthly at a reasonable cost. If your budget is extremely tight, try the free [Audacity](https://www.audacityteam.org/).
-Record at 44.1 KHz 16 bit monophonic (CD quality) or better. Current state-of-the-art is 48 KHz 24 bit, if your equipment supports it. You'll down-sample your audio to 24 KHz 16-bit before you submit it to Speech Studio. Still, it pays to have a high-quality original recording in the event edits are needed.
+Record at 44.1 KHz 16 bit monophonic (CD quality) or better. Current state-of-the-art is 48 KHz 24 bit, if your equipment supports it. You'll down-sample your audio to 24 KHz 16-bit before you submit it to Speech Studio. Still, it pays to have a high-quality original recording in the event that edits are needed.
Ideally, have different people serve in the roles of director, engineer, and talent. Don't try to do it all yourself. In a pinch, one person can be both the director and the engineer.
Coach your talent to take a deep breath and pause for a moment before each utter
Record approximately five seconds of silence before the first recording to capture the "room tone". This practice helps Speech Studio compensate for noise in the recordings.

> [!TIP]
-> All you really need to capture is the voice talent, so you can make a monophonic (single-channel) recording of just their lines. However, if you record in stereo, you can use the second channel to record the chatter in the control room to capture discussion of particular lines or takes. Remove this track from the version that's uploaded to Speech Studio.
+> All you need to capture is the voice talent, so you can make a monophonic (single-channel) recording of just their lines. However, if you record in stereo, you can use the second channel to record the chatter in the control room to capture discussion of particular lines or takes. Remove this track from the version that's uploaded to Speech Studio.
Listen closely, using headphones, to the voice talent's performance. You're looking for good but natural diction, correct pronunciation, and a lack of unwanted sounds. Don't hesitate to ask your talent to re-record an utterance that doesn't meet these standards.
Listen to each file carefully. At this stage, you can edit out small unwanted so
Convert each file to 16 bits and a sample rate of 24 KHz before saving. If you recorded the studio chatter, remove the second channel. Save each file in WAV format, naming the files with the utterance number from your script.
-Finally, create the *transcript* that associates each WAV file with a text version of the corresponding utterance. [Train your voice model](./how-to-custom-voice-create-voice.md) includes details of the required format. You can copy the text directly from your script. Then create a Zip file of the WAV files and the text transcript.
+Finally, create the transcript that associates each WAV file with a text version of the corresponding utterance. [Train your voice model](./how-to-custom-voice-create-voice.md) includes details of the required format. You can copy the text directly from your script. Then create a Zip file of the WAV files and the text transcript.
Archive the original recordings in a safe place in case you need them later. Preserve your script and notes, too.
cognitive-services Speech Synthesis Markup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/speech-synthesis-markup.md
The `speak` element is the root element. It's *required* for all SSML documents.
**Attributes**
-| Attribute | Description | Required or optional |
-|--|-||
-| `version` | Indicates the version of the SSML specification used to interpret the document markup. The current version is 1.0. | Required |
-| `xml:lang` | Specifies the language of the root document. The value can contain a lowercase, two-letter language code, for example, `en`. Or the value can contain the language code and uppercase country/region, for example, `en-US`. | Required |
-| `xmlns` | Specifies the URI to the document that defines the markup vocabulary (the element types and attribute names) of the SSML document. The current URI is http://www.w3.org/2001/10/synthesis. | Required |
+| Attribute | Description | Required or optional |
+| --- | --- | --- |
+| `version` | Indicates the version of the SSML specification used to interpret the document markup. The current version is 1.0. | Required |
+| `xml:lang` | Specifies the language of the root document. The value can contain a lowercase, two-letter language code, for example, `en`. Or the value can contain the language code and uppercase country/region, for example, `en-US`. | Required |
+| `xmlns` | Specifies the URI to the document that defines the markup vocabulary (the element types and attribute names) of the SSML document. The current URI is http://www.w3.org/2001/10/synthesis. | Required |
## Choose a voice for text-to-speech
The `voice` element is required. It's used to specify the voice that's used for
**Attribute**
-| Attribute | Description | Required or optional |
-|--|-||
-| `name` | Identifies the voice used for text-to-speech output. For a complete list of supported voices, see [Language support](language-support.md#text-to-speech). | Required |
+| Attribute | Description | Required or optional |
+| --- | --- | --- |
+| `name` | Identifies the voice used for text-to-speech output. For a complete list of supported voices, see [Language support](language-support.md#text-to-speech). | Required |
**Example**
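For illustration, here's a minimal snippet; the voice name is only an example, and any voice from the supported list can be used.

```xml
<speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis" xml:lang="en-US">
    <!-- The voice name is an example; substitute any supported neural voice. -->
    <voice name="en-US-JennyNeural">
        This is the text that is spoken.
    </voice>
</speak>
```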
Within the `speak` element, you can specify multiple voices for text-to-speech o
**Attribute**
-| Attribute | Description | Required or optional |
-|--|-||
-| `name` | Identifies the voice used for text-to-speech output. For a complete list of supported voices, see [Language support](language-support.md#text-to-speech). | Required |
+| Attribute | Description | Required or optional |
+| --- | --- | --- |
+| `name` | Identifies the voice used for text-to-speech output. For a complete list of supported voices, see [Language support](language-support.md#text-to-speech). | Required |
**Example**
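As a sketch, the following snippet switches between two voices in one document; both voice names are examples.

```xml
<speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis" xml:lang="en-US">
    <!-- Each voice element scopes its text to a different speaker. -->
    <voice name="en-US-JennyNeural">
        Good morning!
    </voice>
    <voice name="en-US-ChristopherNeural">
        Good morning to you too!
    </voice>
</speak>
```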
Styles, style degree, and roles are supported for a subset of neural voices. If
- The [Voice List API](rest-text-to-speech.md#get-a-list-of-voices).
- The code-free [Audio Content Creation](https://aka.ms/audiocontentcreation) portal.
-| Attribute | Description | Required or optional |
-|--|-||
-| `style` | Specifies the speaking style. Speaking styles are voice specific. | Required if adjusting the speaking style for a neural voice. If you're using `mstts:express-as`, the style must be provided. If an invalid value is provided, this element is ignored. |
-| `styledegree` | Specifies the intensity of the speaking style. **Accepted values**: 0.01 to 2 inclusive. The default value is 1, which means the predefined style intensity. The minimum unit is 0.01, which results in a slight tendency for the target style. A value of 2 results in a doubling of the default style intensity. | Optional. If you don't set the `style` attribute, the `styledegree` attribute is ignored. Speaking style degree adjustments are supported for Chinese (Mandarin, Simplified) neural voices.|
-| `role` | Specifies the speaking role-play. The voice acts as a different age and gender, but the voice name isn't changed. | Optional. Role adjustments are supported for these Chinese (Mandarin, Simplified) neural voices: `zh-CN-XiaomoNeural`, `zh-CN-XiaoxuanNeural`, `zh-CN-YunxiNeural`, and `zh-CN-YunyeNeural`.|
+| Attribute | Description | Required or optional |
+| --- | --- | --- |
+| `style` | Specifies the speaking style. Speaking styles are voice specific. | Required if adjusting the speaking style for a neural voice. If you're using `mstts:express-as`, the style must be provided. If an invalid value is provided, this element is ignored. |
+| `styledegree` | Specifies the intensity of the speaking style. **Accepted values**: 0.01 to 2 inclusive. The default value is 1, which means the predefined style intensity. The minimum unit is 0.01, which results in a slight tendency for the target style. A value of 2 results in a doubling of the default style intensity. | Optional. If you don't set the `style` attribute, the `styledegree` attribute is ignored. Speaking style degree adjustments are supported for Chinese (Mandarin, Simplified) neural voices. |
+| `role` | Specifies the speaking role-play. The voice acts as a different age and gender, but the voice name isn't changed. | Optional. Role adjustments are supported for these Chinese (Mandarin, Simplified) neural voices: `zh-CN-XiaomoNeural`, `zh-CN-XiaoxuanNeural`, `zh-CN-YunxiNeural`, and `zh-CN-YunyeNeural`. |
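For example, the following sketch applies a style with a doubled style degree; it assumes the chosen voice supports the `sad` style.

```xml
<speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis" xmlns:mstts="http://www.w3.org/2001/mstts" xml:lang="zh-CN">
    <voice name="zh-CN-XiaomoNeural">
        <!-- styledegree="2" doubles the default intensity of the style. -->
        <mstts:express-as style="sad" styledegree="2">
            快走吧，路上一定要注意安全，早去早回。
        </mstts:express-as>
    </voice>
</speak>
```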
### Style
The following table has descriptions of each supported style.
|`style="customerservice"`|Expresses a friendly and helpful tone for customer support.| |`style="depressed"`|Expresses a melancholic and despondent tone with lower pitch and energy.| |`style="disgruntled"`|Expresses a disdainful and complaining tone. Speech of this emotion displays displeasure and contempt.|
+|`style="documentary-narration"`|Narrates documentaries in a relaxed, interested, and informative style suitable for dubbing documentaries, expert commentary, and similar content.|
|`style="embarrassed"`|Expresses an uncertain and hesitant tone when the speaker is feeling uncomfortable.| |`style="empathetic"`|Expresses a sense of caring and understanding.| |`style="envious"`|Expresses a tone of admiration when you desire something that someone else has.|
This SSML snippet illustrates how the `role` attribute is used to change the rol
The following table has descriptions of each supported role.
-|Role | Description |
-|-|-|
-|`role="Girl"` | The voice imitates to a girl. |
-|`role="Boy"` | The voice imitates to a boy. |
-|`role="YoungAdultFemale"`| The voice imitates to a young adult female.|
-|`role="YoungAdultMale"` | The voice imitates to a young adult male.|
-|`role="OlderAdultFemale"`| The voice imitates to an older adult female.|
-|`role="OlderAdultMale"` | The voice imitates to an older adult male.|
-|`role="SeniorFemale"` | The voice imitates to a senior female.|
-|`role="SeniorMale"` | The voice imitates to a senior male.|
+| Role | Description |
+| --- | --- |
+| `role="Girl"` | The voice imitates to a girl. |
+| `role="Boy"` | The voice imitates to a boy. |
+| `role="YoungAdultFemale"` | The voice imitates to a young adult female. |
+| `role="YoungAdultMale"` | The voice imitates to a young adult male. |
+| `role="OlderAdultFemale"` | The voice imitates to an older adult female. |
+| `role="OlderAdultMale"` | The voice imitates to an older adult male. |
+| `role="SeniorFemale"` | The voice imitates to a senior female. |
+| `role="SeniorMale"` | The voice imitates to a senior male. |
## Adjust speaking languages
You can adjust the speaking language for the `en-US-JennyMultilingualNeural` neu
**Attribute**
-| Attribute | Description | Required or optional |
-|--|-||
-| `lang` | Specifies the language that you want the neural voice to speak. | Required to adjust the speaking language for the neural voice. If you're using `lang xml:lang`, the locale must be provided. |
+| Attribute | Description | Required or optional |
+| --- | --- | --- |
+| `lang` | Specifies the language that you want the neural voice to speak. | Required to adjust the speaking language for the neural voice. If you're using `lang xml:lang`, the locale must be provided. |
> [!NOTE]
> The `<lang xml:lang>` element is incompatible with the `prosody` and `break` elements. You can't adjust pause and prosody like pitch, contour, rate, or volume in this element.
-Use this table to determine which speaking languages are supported for each neural voice. If the voice does not speak the language of the input text, the Speech service won't output synthesized audio.
+Use this table to determine which speaking languages are supported for each neural voice. If the voice doesn't speak the language of the input text, the Speech service won't output synthesized audio.
-| Voice | Primary and default locale | Additional locales |
-|-||-|
-| `en-US-JennyMultilingualNeural` | `en-US` | `de-DE`, `en-AU`, `en-CA`, `en-GB`, `es-ES`, `es-MX`, `fr-CA`, `fr-FR`, `it-IT`, `ja-JP`, `ko-KR`, `pt-BR`, `zh-CN` |
+| Voice | Primary and default locale | Additional locales |
+| --- | --- | --- |
+| `en-US-JennyMultilingualNeural` | `en-US` | `de-DE`, `en-AU`, `en-CA`, `en-GB`, `es-ES`, `es-MX`, `fr-CA`, `fr-FR`, `it-IT`, `ja-JP`, `ko-KR`, `pt-BR`, `zh-CN` |
**Example**
-The primary language for `en-US-JennyMultilingualNeural` is `en-US`. You must specify `en-US` as the default language within the `speak` element, whether or not the language is adjusted elsewhere. This SSML snippet shows how speak `de-DE` with the `en-US-JennyMultilingualNeural` neural voice.
+The primary language for `en-US-JennyMultilingualNeural` is `en-US`. You must specify `en-US` as the default language within the `speak` element, whether or not the language is adjusted elsewhere. This SSML snippet shows how to speak `de-DE` with the `en-US-JennyMultilingualNeural` neural voice.
```xml <speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis"
Use the `break` element to insert pauses or breaks between words. You can also u
**Attributes**
-| Attribute | Description | Required or optional |
-|--|-||
-| `strength` | Specifies the relative duration of a pause by using one of the following values:<ul><li>none</li><li>x-weak</li><li>weak</li><li>medium (default)</li><li>strong</li><li>x-strong</li></ul> | Optional |
-| `time` | Specifies the absolute duration of a pause in seconds or milliseconds (ms). This value should be set less than 5,000 ms. Examples of valid values are `2s` and `500ms`. | Optional |
+| Attribute | Description | Required or optional |
+| --- | --- | --- |
+| `strength` | Specifies the relative duration of a pause by using one of the following values:<ul><li>none</li><li>x-weak</li><li>weak</li><li>medium (default)</li><li>strong</li><li>x-strong</li></ul> | Optional |
+| `time` | Specifies the absolute duration of a pause in seconds or milliseconds (ms). This value should be set less than 5,000 ms. Examples of valid values are `2s` and `500ms`. | Optional |
| Strength | Description |
-|-|-|
+| --- | --- |
| None, or if no value provided | 0 ms |
| X-weak | 250 ms |
| Weak | 500 ms |
| Medium | 750 ms |
-| Strong | 1,000 ms |
-| X-strong | 1,250 ms |
+| Strong | 1,000 ms |
+| X-strong | 1,250 ms |
**Example**
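A minimal sketch follows, inserting one relative and one absolute pause:

```xml
<speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis" xml:lang="en-US">
    <voice name="en-US-JennyNeural">
        <!-- A relative pause using strength, then an absolute pause using time. -->
        Welcome <break strength="medium" /> to text-to-speech.
        Welcome <break time="750ms" /> to text-to-speech.
    </voice>
</speak>
```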
Use the `mstts:silence` element to insert pauses before or after text, or betwee
**Attributes**
-| Attribute | Description | Required or optional |
-|--|-||
-| `type` | Specifies the location of silence to be added: <ul><li>`Leading` ΓÇô At the beginning of text </li><li>`Tailing` ΓÇô At the end of text </li><li>`Sentenceboundary` ΓÇô Between adjacent sentences </li></ul> | Required |
-| `Value` | Specifies the absolute duration of a pause in seconds or milliseconds. This value should be set less than 5,000 ms. Examples of valid values are `2s` and `500ms`. | Required |
+| Attribute | Description | Required or optional |
+| --- | --- | --- |
+| `type` | Specifies the location of silence to be added: <ul><li>`Leading` – At the beginning of text </li><li>`Tailing` – At the end of text </li><li>`Sentenceboundary` – Between adjacent sentences </li></ul> | Required |
+| `Value` | Specifies the absolute duration of a pause in seconds or milliseconds. This value should be set less than 5,000 ms. Examples of valid values are `2s` and `500ms`. | Required |
**Example**
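As a sketch, this snippet adds a 200-millisecond pause between adjacent sentences:

```xml
<speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis" xmlns:mstts="http://www.w3.org/2001/mstts" xml:lang="en-US">
    <voice name="en-US-JennyNeural">
        <!-- Inserts 200 ms of silence between adjacent sentences. -->
        <mstts:silence type="Sentenceboundary" value="200ms"/>
        If we're home schooling, the best we can do is roll with what each day brings.
        One of the best ways to help kids is to catch them doing something they love.
    </voice>
</speak>
```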
Phonetic alphabets are composed of phones, which are made up of letters, numbers
**Attributes**
-| Attribute | Description | Required or optional |
-|--|-||
-| `alphabet` | Specifies the phonetic alphabet to use when you synthesize the pronunciation of the string in the `ph` attribute. The string that specifies the alphabet must be specified in lowercase letters. The following options are the possible alphabets that you can specify:<ul><li>`ipa` &ndash; See [SSML phonetic alphabets](speech-ssml-phonetic-sets.md)</li><li>`sapi` &ndash; See [SSML phonetic alphabets](speech-ssml-phonetic-sets.md)</li><li>`ups` &ndash; See [Universal Phone Set](https://documentation.help/Microsoft-Speech-Platform-SDK-11/17509a49-cae7-41f5-b61d-07beaae872ea.htm)</li></ul><br>The alphabet applies only to the `phoneme` in the element.| Optional |
-| `ph` | A string containing phones that specify the pronunciation of the word in the `phoneme` element. If the specified string contains unrecognized phones, text-to-speech rejects the entire SSML document and produces none of the speech output specified in the document. | Required if using phonemes |
+| Attribute | Description | Required or optional |
+| --- | --- | --- |
+| `alphabet` | Specifies the phonetic alphabet to use when you synthesize the pronunciation of the string in the `ph` attribute. The string that specifies the alphabet must be specified in lowercase letters. The following options are the possible alphabets that you can specify:<ul><li>`ipa` &ndash; See [SSML phonetic alphabets](speech-ssml-phonetic-sets.md)</li><li>`sapi` &ndash; See [SSML phonetic alphabets](speech-ssml-phonetic-sets.md)</li><li>`ups` &ndash; See [Universal Phone Set](https://documentation.help/Microsoft-Speech-Platform-SDK-11/17509a49-cae7-41f5-b61d-07beaae872ea.htm)</li></ul><br>The alphabet applies only to the `phoneme` in the element. | Optional |
+| `ph` | A string containing phones that specify the pronunciation of the word in the `phoneme` element. If the specified string contains unrecognized phones, text-to-speech rejects the entire SSML document and produces none of the speech output specified in the document. | Required if using phonemes |
**Examples**
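For instance, this sketch uses the IPA alphabet to pin down the pronunciation of a single word; the `ph` string is an illustrative transcription.

```xml
<speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis" xml:lang="en-US">
    <voice name="en-US-JennyNeural">
        <!-- The ph value is an illustrative IPA transcription of "tomato". -->
        <phoneme alphabet="ipa" ph="təˈmeɪtoʊ"> tomato </phoneme>
    </voice>
</speak>
```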
The custom lexicon currently supports UTF-8 encoding.
**Attribute**
-| Attribute | Description | Required or optional |
-|--|-||
-| `uri` | The address of the external PLS document | Required |
+| Attribute | Description | Required or optional |
+| --- | --- | --- |
+| `uri` | The address of the external PLS document | Required |
**Usage**
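A sketch follows; the lexicon URI is a placeholder for your own published PLS file.

```xml
<speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis" xml:lang="en-US">
    <voice name="en-US-JennyNeural">
        <!-- The uri is a placeholder; point it at your published PLS document. -->
        <lexicon uri="https://www.example.com/customlexicon.xml"/>
        BTW, we will be there probably at 8pm tomorrow.
    </voice>
</speak>
```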
Because prosodic attribute values can vary over a wide range, the speech recogni
**Attributes**
-| Attribute | Description | Required or optional |
-|--|-||
-| `pitch` | Indicates the baseline pitch for the text. You can express the pitch as:<ul><li>An absolute value, expressed as a number followed by "Hz" (Hertz). For example, `<prosody pitch="600Hz">some text</prosody>`.</li><li>A relative value, expressed as a number preceded by "+" or "-" and followed by "Hz" or "st" that specifies an amount to change the pitch. For example: `<prosody pitch="+80Hz">some text</prosody>` or `<prosody pitch="-2st">some text</prosody>`. The "st" indicates the change unit is semitone, which is half of a tone (a half step) on the standard diatonic scale.</li><li>A constant value:<ul><li>x-low</li><li>low</li><li>medium</li><li>high</li><li>x-high</li><li>default</li></ul></li></ul> | Optional |
-| `contour` |Contour now supports neural voice. Contour represents changes in pitch. These changes are represented as an array of targets at specified time positions in the speech output. Each target is defined by sets of parameter pairs. For example: <br/><br/>`<prosody contour="(0%,+20Hz) (10%,-2st) (40%,+10Hz)">`<br/><br/>The first value in each set of parameters specifies the location of the pitch change as a percentage of the duration of the text. The second value specifies the amount to raise or lower the pitch by using a relative value or an enumeration value for pitch (see `pitch`). | Optional |
-| `range` | A value that represents the range of pitch for the text. You can express `range` by using the same absolute values, relative values, or enumeration values used to describe `pitch`. | Optional |
-| `rate` | Indicates the speaking rate of the text. You can express `rate` as:<ul><li>A relative value, expressed as a number that acts as a multiplier of the default. For example, a value of *1* results in no change in the rate. A value of *0.5* results in a halving of the rate. A value of *3* results in a tripling of the rate.</li><li>A constant value:<ul><li>x-slow</li><li>slow</li><li>medium</li><li>fast</li><li>x-fast</li><li>default</li></ul></li></ul> | Optional |
-| `volume` | Indicates the volume level of the speaking voice. You can express the volume as:<ul><li>An absolute value, expressed as a number in the range of 0.0 to 100.0, from *quietest* to *loudest*. An example is 75. The default is 100.0.</li><li>A relative value, expressed as a number preceded by "+" or "-" that specifies an amount to change the volume. Examples are +10 or -5.5.</li><li>A constant value:<ul><li>silent</li><li>x-soft</li><li>soft</li><li>medium</li><li>loud</li><li>x-loud</li><li>default</li></ul></li></ul> | Optional |
+| Attribute | Description | Required or optional |
+| --- | --- | --- |
+| `pitch` | Indicates the baseline pitch for the text. You can express the pitch as:<ul><li>An absolute value, expressed as a number followed by "Hz" (Hertz). For example, `<prosody pitch="600Hz">some text</prosody>`.</li><li>A relative value, expressed as a number preceded by "+" or "-" and followed by "Hz" or "st" that specifies an amount to change the pitch. For example: `<prosody pitch="+80Hz">some text</prosody>` or `<prosody pitch="-2st">some text</prosody>`. The "st" indicates the change unit is semitone, which is half of a tone (a half step) on the standard diatonic scale.</li><li>A constant value:<ul><li>x-low</li><li>low</li><li>medium</li><li>high</li><li>x-high</li><li>default</li></ul></li></ul> | Optional |
+| `contour` | Contour now supports neural voice. Contour represents changes in pitch. These changes are represented as an array of targets at specified time positions in the speech output. Each target is defined by sets of parameter pairs. For example: <br/><br/>`<prosody contour="(0%,+20Hz) (10%,-2st) (40%,+10Hz)">`<br/><br/>The first value in each set of parameters specifies the location of the pitch change as a percentage of the duration of the text. The second value specifies the amount to raise or lower the pitch by using a relative value or an enumeration value for pitch (see `pitch`). | Optional |
+| `range` | A value that represents the range of pitch for the text. You can express `range` by using the same absolute values, relative values, or enumeration values used to describe `pitch`. | Optional |
+| `rate` | Indicates the speaking rate of the text. You can express `rate` as:<ul><li>A relative value, expressed as a number that acts as a multiplier of the default. For example, a value of *1* results in no change in the rate. A value of *0.5* results in a halving of the rate. A value of *3* results in a tripling of the rate.</li><li>A constant value:<ul><li>x-slow</li><li>slow</li><li>medium</li><li>fast</li><li>x-fast</li><li>default</li></ul></li></ul> | Optional |
+| `volume` | Indicates the volume level of the speaking voice. You can express the volume as:<ul><li>An absolute value, expressed as a number in the range of 0.0 to 100.0, from *quietest* to *loudest*. An example is 75. The default is 100.0.</li><li>A relative value, expressed as a number preceded by "+" or "-" that specifies an amount to change the volume. Examples are +10 or -5.5.</li><li>A constant value:<ul><li>silent</li><li>x-soft</li><li>soft</li><li>medium</li><li>loud</li><li>x-loud</li><li>default</li></ul></li></ul> | Optional |
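A sketch combining several prosody attributes (the values are illustrative):

```xml
<speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis" xml:lang="en-US">
    <voice name="en-US-JennyNeural">
        <!-- rate is a relative multiplier; pitch is relative; volume is absolute. -->
        <prosody rate="0.9" pitch="+5Hz" volume="80">
            This text is spoken slightly slower, a bit higher, and a little more quietly.
        </prosody>
    </voice>
</speak>
```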
### Change speaking rate
The optional `emphasis` element is used to add or remove word-level stress for t
**Attribute**
-| Attribute | Description | Required or optional |
-|--|-||
-| `level` | Indicates the strength of emphasis to be applied:<ul><li>`reduced`</li><li>`none`</li><li>`moderate`</li><li>`strong`</li></ul><br>When the `level` attribute is not specified, the default level is `moderate`. For details on each attribute, see [emphasis element](https://www.w3.org/TR/speech-synthesis11/#S3.2.2)| Optional|
+| Attribute | Description | Required or optional |
+| --- | --- | --- |
+| `level` | Indicates the strength of emphasis to be applied:<ul><li>`reduced`</li><li>`none`</li><li>`moderate`</li><li>`strong`</li></ul><br>When the `level` attribute isn't specified, the default level is `moderate`. For details on each attribute, see [emphasis element](https://www.w3.org/TR/speech-synthesis11/#S3.2.2) | Optional |
**Example**
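For illustration, this sketch adds moderate stress to one word:

```xml
<speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis" xml:lang="en-US">
    <voice name="en-US-JennyNeural">
        I can help you join your <emphasis level="moderate">meetings</emphasis> fast.
    </voice>
</speak>
```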
The `say-as` element is optional. It indicates the content type, such as number
**Attributes**
-| Attribute | Description | Required or optional |
-|--|-||
-| `interpret-as` | Indicates the content type of an element's text. For a list of types, see the following table. | Required |
-| `format` | Provides additional information about the precise formatting of the element's text for content types that might have ambiguous formats. SSML defines formats for content types that use them. See the following table. | Optional |
-| `detail` | Indicates the level of detail to be spoken. For example, this attribute might request that the speech synthesis engine pronounce punctuation marks. There are no standard values defined for `detail`. | Optional |
-
-The following content types are supported for the `interpret-as` and `format` attributes. Include the `format` attribute only if `format` column is not empty in the table below.
-
-| interpret-as | format | Interpretation |
-|--|--|-|
-| `address` | | The text is spoken as an address. The speech synthesis engine pronounces:<br /><br />`I'm at <say-as interpret-as="address">150th CT NE, Redmond, WA</say-as>`<br /><br />As "I'm at 150th Court Northeast Redmond Washington." |
-| `cardinal`, `number` | | The text is spoken as a cardinal number. The speech synthesis engine pronounces:<br /><br />`There are <say-as interpret-as="cardinal">3</say-as> alternatives`<br /><br />As "There are three alternatives." |
-| `characters`, `spell-out` | | The text is spoken as individual letters (spelled out). The speech synthesis engine pronounces:<br /><br />`<say-as interpret-as="characters">test</say-as>`<br /><br />As "T E S T." |
-| `date` | dmy, mdy, ymd, ydm, ym, my, md, dm, d, m, y | The text is spoken as a date. The `format` attribute specifies the date's format (*d=day, m=month, and y=year*). The speech synthesis engine pronounces:<br /><br />`Today is <say-as interpret-as="date" format="mdy">10-19-2016</say-as>`<br /><br />As "Today is October nineteenth two thousand sixteen." |
-| `digits`, `number_digit` | | The text is spoken as a sequence of individual digits. The speech synthesis engine pronounces:<br /><br />`<say-as interpret-as="number_digit">123456789</say-as>`<br /><br />As "1 2 3 4 5 6 7 8 9." |
-| `fraction` | | The text is spoken as a fractional number. The speech synthesis engine pronounces:<br /><br /> `<say-as interpret-as="fraction">3/8</say-as> of an inch`<br /><br />As "three eighths of an inch." |
-| `ordinal` | | The text is spoken as an ordinal number. The speech synthesis engine pronounces:<br /><br />`Select the <say-as interpret-as="ordinal">3rd</say-as> option`<br /><br />As "Select the third option." |
-| `telephone` | | The text is spoken as a telephone number. The `format` attribute can contain digits that represent a country code. Examples are "1" for the United States or "39" for Italy. The speech synthesis engine can use this information to guide its pronunciation of a phone number. The phone number might also include the country code, and if so, takes precedence over the country code in the `format` attribute. The speech synthesis engine pronounces:<br /><br />`The number is <say-as interpret-as="telephone" format="1">(888) 555-1212</say-as>`<br /><br />As "My number is area code eight eight eight five five five one two one two." |
-| `time` | hms12, hms24 | The text is spoken as a time. The `format` attribute specifies whether the time is specified by using a 12-hour clock (hms12) or a 24-hour clock (hms24). Use a colon to separate numbers representing hours, minutes, and seconds. Here are some valid time examples: 12:35, 1:14:32, 08:15, and 02:50:45. The speech synthesis engine pronounces:<br /><br />`The train departs at <say-as interpret-as="time" format="hms12">4:00am</say-as>`<br /><br />As "The train departs at four A M." |
-| `duration` | hms, hm, ms | The text is spoken as a duration. The `format` attribute specifies the duration's format (*h=hour, m=minute, and s=second*). The speech synthesis engine pronounces:<br /><br />`<say-as interpret-as="duration">01:18:30</say-as>`<br /><br /> As "one hour eighteen minutes and thirty seconds".<br />Pronounces:<br /><br />`<say-as interpret-as="duration" format="ms">01:18</say-as>`<br /><br /> As "one minute and eighteen seconds".<br />This tag is only supported on English and Spanish.|
-| `name` | | The text is spoken as a person's name. The speech synthesis engine pronounces:<br /><br />`<say-as interpret-as="name">ED</say-as>`<br /><br />As [æd]. <br />In Chinese names, some characters pronounce differently when they appear in a family name. For example, the speech synthesis engine says 仇 in <br /><br />`<say-as interpret-as="name">仇先生</say-as>`<br /><br /> As [qiú] instead of [chóu]. |
+| Attribute | Description | Required or optional |
+| --- | --- | --- |
+| `interpret-as` | Indicates the content type of an element's text. For a list of types, see the following table. | Required |
+| `format` | Provides additional information about the precise formatting of the element's text for content types that might have ambiguous formats. SSML defines formats for content types that use them. See the following table. | Optional |
+| `detail` | Indicates the level of detail to be spoken. For example, this attribute might request that the speech synthesis engine pronounce punctuation marks. There are no standard values defined for `detail`. | Optional |
+
+The following content types are supported for the `interpret-as` and `format` attributes. Include the `format` attribute only if the `format` column isn't empty in the table below.
+
+| interpret-as | format | Interpretation |
+| --- | --- | --- |
+| `address` | | The text is spoken as an address. The speech synthesis engine pronounces:<br /><br />`I'm at <say-as interpret-as="address">150th CT NE, Redmond, WA</say-as>`<br /><br />As "I'm at 150th Court Northeast Redmond Washington." |
+| `cardinal`, `number` | | The text is spoken as a cardinal number. The speech synthesis engine pronounces:<br /><br />`There are <say-as interpret-as="cardinal">3</say-as> alternatives`<br /><br />As "There are three alternatives." |
+| `characters`, `spell-out` | | The text is spoken as individual letters (spelled out). The speech synthesis engine pronounces:<br /><br />`<say-as interpret-as="characters">test</say-as>`<br /><br />As "T E S T." |
+| `date` | dmy, mdy, ymd, ydm, ym, my, md, dm, d, m, y | The text is spoken as a date. The `format` attribute specifies the date's format (*d=day, m=month, and y=year*). The speech synthesis engine pronounces:<br /><br />`Today is <say-as interpret-as="date" format="mdy">10-19-2016</say-as>`<br /><br />As "Today is October nineteenth two thousand sixteen." |
+| `digits`, `number_digit` | | The text is spoken as a sequence of individual digits. The speech synthesis engine pronounces:<br /><br />`<say-as interpret-as="number_digit">123456789</say-as>`<br /><br />As "1 2 3 4 5 6 7 8 9." |
+| `fraction` | | The text is spoken as a fractional number. The speech synthesis engine pronounces:<br /><br /> `<say-as interpret-as="fraction">3/8</say-as> of an inch`<br /><br />As "three eighths of an inch." |
+| `ordinal` | | The text is spoken as an ordinal number. The speech synthesis engine pronounces:<br /><br />`Select the <say-as interpret-as="ordinal">3rd</say-as> option`<br /><br />As "Select the third option." |
+| `telephone` | | The text is spoken as a telephone number. The `format` attribute can contain digits that represent a country code. Examples are "1" for the United States or "39" for Italy. The speech synthesis engine can use this information to guide its pronunciation of a phone number. The phone number might also include the country code, and if so, takes precedence over the country code in the `format` attribute. The speech synthesis engine pronounces:<br /><br />`The number is <say-as interpret-as="telephone" format="1">(888) 555-1212</say-as>`<br /><br />As "My number is area code eight eight eight five five five one two one two." |
+| `time` | hms12, hms24 | The text is spoken as a time. The `format` attribute specifies whether the time is specified by using a 12-hour clock (hms12) or a 24-hour clock (hms24). Use a colon to separate numbers representing hours, minutes, and seconds. Here are some valid time examples: 12:35, 1:14:32, 08:15, and 02:50:45. The speech synthesis engine pronounces:<br /><br />`The train departs at <say-as interpret-as="time" format="hms12">4:00am</say-as>`<br /><br />As "The train departs at four A M." |
+| `duration` | hms, hm, ms | The text is spoken as a duration. The `format` attribute specifies the duration's format (*h=hour, m=minute, and s=second*). The speech synthesis engine pronounces:<br /><br />`<say-as interpret-as="duration">01:18:30</say-as>`<br /><br /> As "one hour eighteen minutes and thirty seconds".<br />Pronounces:<br /><br />`<say-as interpret-as="duration" format="ms">01:18</say-as>`<br /><br /> As "one minute and eighteen seconds".<br />This tag is supported only in English and Spanish. |
+| `name` | | The text is spoken as a person's name. The speech synthesis engine pronounces:<br /><br />`<say-as interpret-as="name">ED</say-as>`<br /><br />As [æd]. <br />In Chinese names, some characters pronounce differently when they appear in a family name. For example, the speech synthesis engine says 仇 in <br /><br />`<say-as interpret-as="name">仇先生</say-as>`<br /><br /> As [qiú] instead of [chóu]. |
**Usage**
Any audio included in the SSML document must meet these requirements:
**Attribute**
-| Attribute | Description | Required or optional |
-|--|--||
+| Attribute | Description | Required or optional |
+| --- | --- | --- |
| `src` | Specifies the location/URL of the audio file. | Required if using the audio element in your SSML document. |

**Example**
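As a sketch, the `src` URL below is a placeholder for an audio file that meets the requirements above:

```xml
<speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis" xml:lang="en-US">
    <voice name="en-US-JennyNeural">
        <p>
            <!-- The src URL is a placeholder. -->
            <audio src="https://contoso.com/opinionprompt.wav"/>
            Thanks for asking this question.
        </p>
    </voice>
</speak>
```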
Only one background audio file is allowed per SSML document. You can intersperse
**Attributes**
-| Attribute | Description | Required or optional |
-|--|-||
-| `src` | Specifies the location/URL of the background audio file. | Required if using background audio in your SSML document |
-| `volume` | Specifies the volume of the background audio file. **Accepted values**: `0` to `100` inclusive. The default value is `1`. | Optional |
-| `fadein` | Specifies the duration of the background audio fade-in as milliseconds. The default value is `0`, which is the equivalent to no fade in. **Accepted values**: `0` to `10000` inclusive. | Optional |
-| `fadeout` | Specifies the duration of the background audio fade-out in milliseconds. The default value is `0`, which is the equivalent to no fade out. **Accepted values**: `0` to `10000` inclusive. | Optional |
+| Attribute | Description | Required or optional |
+| --- | --- | --- |
+| `src` | Specifies the location/URL of the background audio file. | Required if using background audio in your SSML document |
+| `volume` | Specifies the volume of the background audio file. **Accepted values**: `0` to `100` inclusive. The default value is `1`. | Optional |
+| `fadein` | Specifies the duration of the background audio fade-in in milliseconds. The default value is `0`, which is equivalent to no fade in. **Accepted values**: `0` to `10000` inclusive. | Optional |
+| `fadeout` | Specifies the duration of the background audio fade-out in milliseconds. The default value is `0`, which is equivalent to no fade out. **Accepted values**: `0` to `10000` inclusive. | Optional |
**Example**
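A sketch follows; the source URL is a placeholder, and the fade durations are in milliseconds:

```xml
<speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis" xmlns:mstts="http://www.w3.org/2001/mstts" xml:lang="en-US">
    <!-- The src URL is a placeholder; fadein and fadeout are in milliseconds. -->
    <mstts:backgroundaudio src="https://contoso.com/sample.wav" volume="0.7" fadein="3000" fadeout="4000"/>
    <voice name="en-US-JennyNeural">
        This text is spoken over the background audio.
    </voice>
</speak>
```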
You can use the `bookmark` element to insert custom markers in SSML to get the o
**Attribute**
-| Attribute | Description | Required or optional |
-|--|--||
-| `mark` | Specifies the reference text of the `bookmark` element. | Required |
+| Attribute | Description | Required or optional |
+| --- | --- | --- |
+| `mark` | Specifies the reference text of the `bookmark` element. | Required |
**Example**
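For illustration, this sketch places two bookmarks; the mark names are arbitrary reference strings.

```xml
<speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis" xml:lang="en-US">
    <voice name="en-US-JennyNeural">
        <!-- The mark values are arbitrary strings you can match against bookmark events. -->
        We are selling <bookmark mark="flower_1"/>roses and <bookmark mark="flower_2"/>daisies.
    </voice>
</speak>
```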
All elements from the [MathML 2.0](https://www.w3.org/TR/MathML2/) and [MathML 3
The MathML entities are not supported by XML syntax, so you must use their corresponding [unicode characters](https://www.w3.org/2003/entities/2007/htmlmathml.json) to represent the entities. For example, the entity `&copy;` should be represented by its unicode characters `&#x00A9;`; otherwise, an error will occur.
+## Viseme element
+
+A _viseme_ is the visual description of a phoneme in spoken language. It defines the position of the face and mouth while a person is speaking. You can use the `mstts:viseme` element in SSML to request viseme output. For more information, see [Get facial position with viseme](how-to-speech-synthesis-viseme.md).
+
+**Syntax**
+
+```xml
+<mstts:viseme type="string"/>
+```
+
+**Attributes**
+
+| Attribute | Description | Required or optional |
+| --- | --- | --- |
+| `type` | Specifies the type of viseme output.<ul><li>`redlips_front` – lip-sync with viseme ID and audio offset output </li><li>`FacialExpression` – blend shapes output</li></ul> | Required |
+
+> [!NOTE]
+> Currently, `redlips_front` only supports neural voices in `en-US` locale, and `FacialExpression` supports neural voices in `en-US` and `zh-CN` locales.
+
+**Example**
+
+This SSML snippet illustrates how to request blend shapes with your synthesized speech.
+
+```xml
+<speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis" xmlns:mstts="http://www.w3.org/2001/mstts" xml:lang="en-US">
+ <voice name="en-US-JennyNeural">
+ <mstts:viseme type="FacialExpression"/>
+ Rainbow has seven colors: Red, orange, yellow, green, blue, indigo, and violet.
+ </voice>
+</speak>
+```
+
## Next steps

[Language support: Voices, locales, languages](language-support.md)
cognitive-services Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Cognitive Services description: Lists Azure Policy Regulatory Compliance controls available for Azure Cognitive Services. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 08/01/2022 Last updated : 08/04/2022
communication-services Inbound Calling Capabilities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/telephony/inbound-calling-capabilities.md
# Enable inbound telephony calling for Azure Communication Services.
-Inbound PSTN calling is currently supported in GA for Dynamics Omnichannel with phone numbers [provided by Microsoft](./telephony-concept.md#voice-calling-pstn) and phone numbers that supplied by [direct routing](./telephony-concept.md#azure-direct-routing).
+Inbound PSTN calling is currently supported in GA for Dynamics Omnichannel. You can use phone numbers [provided by Microsoft](./telephony-concept.md#voice-calling-pstn) and phone numbers supplied by [direct routing](./telephony-concept.md#azure-direct-routing).
**Inbound calling with Dynamics 365 Omnichannel (OC)**
- Supported in General Availability, to setup inbound PSTN or direct routing with Dynamics 365 OC, follow the [instructions here](/dynamics365/customer-service/voice-channel-inbound-calling)
+ Supported in General Availability. To set up inbound calling for Dynamics 365 OC with direct routing or Voice Calling (PSTN), follow [these instructions](/dynamics365/customer-service/voice-channel-inbound-calling)
**Inbound calling with Power Virtual Agents**
communication-services Media Comp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/voice-video-calling/media-comp.md
Media Composition REST APIs (and open-source SDKs) allow you to command the Azur
This functionality is activated through REST APIs and open-source SDKs. Below is an example of the JSON encoded configuration of a presenter layout for the above scenario:
-```
+```json
{
- layout: {
- type: ΓÇÿpresenterΓÇÖ,
- presenter: {
- supportPosition: ΓÇÿrightΓÇÖ,
- primarySource: ΓÇÿ1ΓÇÖ, // source id
+ "layout": {
+ "presenter": {
+ "presenterId": "presenter",
+ "supportId": "translatorSupport",
+ "supportPosition": "topLeft",
+ "supportAspectRatio": 3/2
}
- },
- sources: [
- { id: ΓÇÿ1ΓÇÖ }, { id: ΓÇÿ2ΓÇÖ }
- ]
+ }
}
```
-The presenter layout is one of several layouts available through the media composition capability:
-- **Grid** - This is the typical video calling layout, where all media sources are shown on a grid with similar sizes. You can use the grid layout to specify grid positions and size.
-- **Presentation.** Similar to the grid layout but media sources can have different sizes, allowing for emphasis.
-- **Presenter** - This layout overlays two sources on top of each other.
-- **Weather Person** - This layout overlays two sources, but in real-time Azure will remove the background behind people.
+The presenter layout is one of several layouts available through the media composition capability:
+- **Grid** - The grid layout shows the specified media sources in a standard grid format. You can specify the number of rows and columns in the grid as well as which media source should be placed in each slot of the grid.
+- **Auto-Grid** - This layout automatically displays all the media sources in the scene in an optimized way. Unlike the grid layout, it does not allow for customizations on the number of rows and columns.
+- **Presentation** - The presentation layout features a fixed media source, the presenter, covering the majority of the scene. The other media sources are arranged in either a row or column in the remaining space of the scene.
+- **Presenter** - This is a picture-in-picture layout composed of two sources. One source is the background of the scene. This commonly represents the content being presented or the main presenter. The secondary source is cropped and positioned at a corner of the scene.
+- **Custom** - You can customize the layout to fit your specific scenario. Media sources can have different sizes and be placed at any position on the scene.
<!-- To try out media composition, check out following content: -->
<!-- [Quick Start - Applying Media Composition to a video call](../../quickstarts/media-composition/get-started-media-composition.md) -->
connectors Connectors Create Api Ftp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/connectors-create-api-ftp.md
Title: Connect to FTP server
-description: Automate tasks and workflows that create, monitor, and manage files on an FTP server by using Azure Logic Apps.
+ Title: Connect to FTP servers
+description: Connect to an FTP server from workflows in Azure Logic Apps.
ms.suite: integration Previously updated : 12/15/2019 Last updated : 07/24/2022 tags: connectors
-# Create, monitor, and manage FTP files by using Azure Logic Apps
+# Connect to an FTP server from workflows in Azure Logic Apps
-With Azure Logic Apps and the FTP connector, you can create automated tasks and workflows that create, monitor, send, and receive files through your account on an FTP server, along with other actions, for example:
+This article shows how to access your FTP server from a workflow in Azure Logic Apps with the FTP connector. You can then create automated workflows that run when triggered by events in your FTP server or in other systems and run actions to manage files on your FTP server.
+
+For example, your workflow can start with an FTP trigger that monitors and responds to events on your FTP server. The trigger makes the outputs available to subsequent actions in your workflow. Your workflow can run FTP actions that create, send, receive, and manage files through your FTP server account using the following specific tasks:
* Monitor when files are added or changed.
-* Get, create, copy, update, list, and delete files.
-* Get file content and metadata.
-* Extract archives to folders.
+* Create, copy, delete, list, and update files.
+* Get file metadata and content.
+* Manage folders.
-You can use triggers that get responses from your FTP server and make the output available to other actions. You can use run actions in your logic apps for managing files on your FTP server. You can also have other actions use the output from FTP actions. For example, if you regularly get files from your FTP server, you can send email about those files and their content by using the Office 365 Outlook connector or Outlook.com connector. If you're new to logic apps, review [What is Azure Logic Apps](../logic-apps/logic-apps-overview.md).
+If you're new to Azure Logic Apps, review the following get started documentation:
-## Limitations
+* [What is Azure Logic Apps](../logic-apps/logic-apps-overview.md)
+* [Quickstart: Create your first logic app workflow](../logic-apps/quickstart-create-first-logic-app-workflow.md)
-* The FTP connector supports only explicit FTP over TLS/SSL (FTPS) and isn't compatible with implicit FTPS.
+## Connector technical reference
-* By default, FTP actions can read or write files that are *50 MB or smaller*. To handle files larger than 50 MB, FTP actions support [message chunking](../logic-apps/logic-apps-handle-large-messages.md). The **Get file content** action implicitly uses chunking.
+The FTP connector has different versions, based on [logic app type and host environment](../logic-apps/logic-apps-overview.md#resource-environment-differences).
-* FTP triggers don't support chunking. When requesting file content, triggers select only files that are 50 MB or smaller. To get files larger than 50 MB, follow this pattern:
+| Logic app type (plan) | Environment | Connector version |
+| --- | --- | --- |
+| **Consumption** | Multi-tenant Azure Logic Apps | Managed connector (Standard class). For more information, review the following documentation: <br><br>- [FTP managed connector reference](/connectors/ftp) <br>- [Managed connectors in Azure Logic Apps](managed.md) |
+| **Consumption** | Integration service environment (ISE) | Managed connector (Standard class) and ISE version, which has different message limits than the Standard class. For more information, review the following documentation: <br><br>- [FTP managed connector reference](/connectors/ftp) <br>- [ISE message limits](../logic-apps/logic-apps-limits-and-config.md#message-size-limits) <br>- [Managed connectors in Azure Logic Apps](managed.md) |
+| **Standard** | Single-tenant Azure Logic Apps and App Service Environment v3 (Windows plans only) | Managed connector (Standard class) and built-in connector, which is [service provider based](../logic-apps/custom-connector-overview.md#service-provider-interface-implementation). The built-in connector can directly access Azure virtual networks with a connection string. For more information, review the following documentation: <br><br>- [FTP managed connector reference](/connectors/ftp) <br>- [FTP built-in connector operations](#built-in-operations) section later in this article <br>- [Managed connectors in Azure Logic Apps](managed.md) <br>- [Built-in connectors in Azure Logic Apps](built-in.md) |
+||||
- * Use an FTP trigger that returns file properties, such as **When a file is added or modified (properties only)**.
+## Limitations
- * Follow the trigger with the FTP **Get file content** action, which reads the complete file and implicitly uses chunking.
+* Capacity and throughput
-* If you have an on-premises FTP server, consider creating an [integration service environment (ISE)](../logic-apps/connect-virtual-network-vnet-isolated-environment-overview.md) or using [Azure App Service Hybrid connections](../app-service/app-service-hybrid-connections.md), which both let you access on-premises data sources without using an on-premises data gateway.
+ * Built-in connector for Standard workflows:
-## How FTP triggers work
+ By default, FTP actions can read or write files that are *200 MB or smaller*. Currently, the FTP built-in connector doesn't support chunking.
-FTP triggers work by polling the FTP file system and looking for any file that was changed since the last poll. Some tools let you preserve the timestamp when the files change. In these cases, you have to disable this feature so your trigger can work. Here are some common settings:
+ * Managed connector for Consumption and Standard workflows:
-| SFTP client | Action |
-|-|--|
-| Winscp | Go to **Options** > **Preferences** > **Transfer** > **Edit** > **Preserve timestamp** > **Disable** |
-| FileZilla | Go to **Transfer** > **Preserve timestamps of transferred files** > **Disable** |
-|||
+ By default, FTP actions can read or write files that are *50 MB or smaller*. To handle files larger than 50 MB, FTP actions support [message chunking](../logic-apps/logic-apps-handle-large-messages.md). The **Get file content** action implicitly uses chunking.
-When a trigger finds a new file, the trigger checks that the new file is complete, and not partially written. For example, a file might have changes in progress when the trigger checks the file server. To avoid returning a partially written file, the trigger notes the timestamp for the file that has recent changes, but doesn't immediately return that file. The trigger returns the file only when polling the server again. Sometimes, this behavior might cause a delay that is up to twice the trigger's polling interval.
+* FTP managed connector triggers might experience missing, incomplete, or delayed results when the "last modified" timestamp is preserved. On the other hand, the FTP *built-in* connector trigger in Standard logic app workflows doesn't have this limitation. For more information, review the FTP connector's [Limitations](/connectors/ftp/#limitations) section.
## Prerequisites * An Azure account and subscription. If you don't have an Azure subscription, [sign up for a free Azure account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
-* Your FTP host server address and account credentials
+* The logic app workflow where you want to access your FTP account. To start your workflow with an FTP trigger, you have to start with a blank workflow. To use an FTP action, start your workflow with another trigger, such as the **Recurrence** trigger.
- The FTP connector requires that your FTP server is accessible from the internet and set up to operate in *passive* mode. Your credentials let your logic app create a connection and access your FTP account.
+* For more requirements that apply to both the FTP managed connector and built-in connector, review the [FTP managed connector reference - Requirements](/connectors/ftp/#requirements).
-* Basic knowledge about [how to create logic apps](../logic-apps/quickstart-create-first-logic-app-workflow.md)
+<a name="known-issues"></a>
-* The logic app where you want to access your FTP account. To start with an FTP trigger, [create a blank logic app](../logic-apps/quickstart-create-first-logic-app-workflow.md). To use an FTP action, start your logic app with another trigger, for example, the **Recurrence** trigger.
+## Known issues
-## Connect to FTP
+<a name="add-ftp-trigger"></a>
-1. Sign in to the [Azure portal](https://portal.azure.com), and open your logic app in Logic App Designer.
+## Add an FTP trigger
-1. For blank logic apps, in the search box, enter `ftp` as your filter. From the **Triggers** list, select the trigger that you want.
+A Consumption logic app workflow can use only the FTP managed connector. However, a Standard logic app workflow can use the FTP managed connector *and* the FTP built-in connector. In a Standard logic app workflow, managed connectors are also labeled as **Azure** connectors.
- -or-
+The FTP managed connector and built-in connector each have only one trigger available:
- For existing logic apps, under the last step where you want to add an action, select **New step**, and then select **Add an action**. In the search box, enter `ftp` as your filter. From the **Actions** list, select the action that you want.
+* Managed connector trigger: The FTP trigger named **When a file is added or modified (properties only)** runs a Consumption or Standard logic app workflow when one or more files are added or changed in a folder on the FTP server. This trigger gets only the file properties or metadata, not the file content. However, to get the file content, your workflow can follow this trigger with other FTP actions.
- To add an action between steps, move your pointer over the arrow between steps. Select the plus sign (**+**) that appears, and then select **Add an action**.
+ For more information about this trigger, review [When a file is added or modified (properties only)](/connectors/ftp/#when-a-file-is-added-or-modified-(properties-only)).
-1. Provide your connection information, and select **Create**.
+* Built-in connector trigger: The FTP trigger named **When a file is added or updated** runs a Standard logic app workflow when one or more files are added or changed in a folder on the FTP server. This trigger gets only the file properties or metadata, not the file content. However, to get the content, your workflow can follow this trigger with other FTP actions. For more information about this trigger, review [When a file is added or updated](#when-file-added-updated).
-1. Provide the information for your selected trigger or action and continue building your logic app's workflow.
+The following steps use the Azure portal, but with the appropriate Azure Logic Apps extension, you can also use the following tools to create and edit logic app workflows:
-## Examples
+* Consumption logic app workflows: [Visual Studio](../logic-apps/quickstart-create-logic-apps-with-visual-studio.md) or [Visual Studio Code](../logic-apps/quickstart-create-logic-apps-visual-studio-code.md)
-<a name="file-added-modified"></a>
+* Standard logic app workflows: [Visual Studio Code](../logic-apps/create-single-tenant-workflows-visual-studio-code.md)
-### Add FTP trigger
+### [Consumption](#tab/consumption)
-The **When a file is added or modified (properties only)** trigger starts a logic app workflow when the trigger detects that a file is added or changed on an FTP server. For example, you can add a condition that checks the file's content and decides whether to get that content, based on whether that content meets a specified condition. Finally, you can add an action that gets the file's content, and put that content into a different folder on the SFTP server.
+1. In the [Azure portal](https://portal.azure.com), open your blank logic app workflow in the designer.
-For example, you can use this trigger to monitor an FTP folder for new files that describe customer orders. You can then use an FTP action such as **Get file metadata** to get the properties for that new file, and then use **Get file content** to get the content from that file for further processing and store that order in an orders database.
+1. On the designer, under the search box, select **All**.
-Here is an example that shows how to use the **When a file is added or modified (properties only)** trigger.
+1. In the search box, enter **ftp**. From the triggers list, select the trigger named **When a file is added or modified (properties only)**.
-1. Sign in to the [Azure portal](https://portal.azure.com), and open your logic app in Logic App Designer, if not open already.
+ ![Screenshot shows Azure portal, Consumption workflow designer, and FTP trigger selected.](./media/connectors-create-api-ftp/ftp-select-trigger-consumption.png)
-1. For blank logic apps, in the search box, enter `ftp` as your filter. Under the triggers list, select this trigger: **When a filed is added or modified (properties only)**
+1. Provide the [information for your connection](/connectors/ftp/#creating-a-connection). When you're done, select **Create**.
- ![Find and select the FTP trigger](./media/connectors-create-api-ftp/select-ftp-trigger-logic-app.png)
+ > [!NOTE]
+ >
+ > By default, this connector transfers files in text format. To transfer files in binary format,
+ > for example, when the file content uses binary encoding, select the binary transport option.
-1. Provide the necessary details for your connection, and then select **Create**.
+ ![Screenshot shows Consumption workflow designer and FTP connection profile.](./media/connectors-create-api-ftp/ftp-trigger-connection-consumption.png)
- By default, this connector transfers files in text format. To transfer files in binary format, for example, where and when encoding is used, select **Binary Transport**.
+1. After the trigger information box appears, find the folder that you want to monitor for new or edited files.
- ![Create connection to FTP server](./media/connectors-create-api-ftp/create-ftp-connection-trigger.png)
+ 1. In the **Folder** box, select the folder icon to view the folder directory.
-1. In the **Folder** box, select the folder icon so that a list appears. To find the folder you want to monitor for new or edited files, select the right angle arrow (**>**), browse to that folder, and then select the folder.
+ 1. Select the right angle arrow (**>**). Browse to the folder that you want, and then select the folder.
- ![Find and select folder to monitor](./media/connectors-create-api-ftp/select-folder-ftp-trigger.png)
+ ![Screenshot shows Consumption workflow designer, FTP trigger, and "Folder" property where browsing for folder to select.](./media/connectors-create-api-ftp/ftp-trigger-select-folder-consumption.png)
Your selected folder appears in the **Folder** box.
- ![Selected folder appears in the "Folder" property](./media/connectors-create-api-ftp/selected-folder-ftp-trigger.png)
+ ![Screenshot shows Consumption workflow designer, FTP trigger, and "Folder" property with selected folder.](./media/connectors-create-api-ftp/ftp-trigger-selected-folder-consumption.png)
-1. Save your logic app. On the designer toolbar, select **Save**.
+1. When you're done, save your workflow.
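
If you want to inspect what the designer created, you can switch to code view. The following minimal sketch shows roughly how the managed FTP trigger can appear in a Consumption workflow definition. The `path` value, `folderId` query, and recurrence values are illustrative placeholders only; the designer generates the actual values for you, so don't copy them literally.

```json
"triggers": {
    "When_a_file_is_added_or_modified_(properties_only)": {
        "type": "ApiConnection",
        "recurrence": {
            "frequency": "Minute",
            "interval": 3
        },
        "inputs": {
            "host": {
                "connection": {
                    "name": "@parameters('$connections')['ftp']['connectionId']"
                }
            },
            "method": "get",
            "path": "/datasets/default/triggers/onupdatedfile",
            "queries": {
                "folderId": "<designer-generated-folder-ID>"
            }
        }
    }
}
```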
-Now that your logic app has a trigger, add the actions you want to run when your logic app finds a new or edited file. For this example, you can add an FTP action that gets the new or updated content.
+### [Standard](#tab/standard)
-<a name="get-content"></a>
+This section shows the steps for the following FTP connector triggers:
-### Add FTP action
+* [*Built-in* trigger named **When a file is added or updated**](#built-in-connector-trigger)
-The **Get file metadata** action gets the properties for a file that's on your FTP server and the **Get file content** action gets the file content based on the information about that file on your FTP server. For example, you can add the trigger from the previous example and these actions to get the file's content after that file is added or edited.
+ If you use this FTP built-in trigger, you can get the file content by using just the FTP built-in action named **Get file content**, without first using the **Get file metadata** action, unlike when you use the FTP managed trigger. For more information about FTP built-in connector operations, review [FTP built-in connector operations](#ftp-built-in-connector-operations) later in this article.
-1. Under the trigger or any other actions, select **New step**.
+* [*Managed* trigger named **When a file is added or modified (properties only)**](#managed-connector-trigger)
-1. In the search box, enter `ftp` as your filter. Under the actions list, select this action: **Get file metadata**
+ If you use this FTP managed trigger, you have to use the **Get file metadata** action first to get a single array item before you use any other action on the file that was added or modified. This workaround results from the [known issue around the **Split On** setting](#known-issues) described earlier in this article.
- ![Select the "Get file metadata" action](./media/connectors-create-api-ftp/select-get-file-metadata-ftp-action.png)
+<a name="built-in-connector-trigger"></a>
-1. If you already have a connection to your FTP server and account, go to the next step. Otherwise, provide the necessary details for that connection, and then select **Create**.
+#### Built-in connector trigger
- ![Create FTP server connection](./media/connectors-create-api-ftp/create-ftp-connection-action.png)
+1. In the [Azure portal](https://portal.azure.com), open your blank logic app workflow in the designer.
-1. After the **Get file metadata** action appears, click inside the **File** box so that the dynamic content list appears. You can now select properties for the outputs from previous steps. In the dynamic content list, under **Get file metadata**, select the **List of Files Id** property, which references the collection where the file was added or updated.
+1. On the designer, select **Choose an operation**. Under the search box, select **Built-in**.
- ![Find and select "List of Files Id" property](./media/connectors-create-api-ftp/select-list-of-files-id-output.png)
+1. In the search box, enter **ftp**. From the triggers list, select the trigger named **When a file is added or updated**.
- The **List of Files Id** property now appears in the **File** box.
+ ![Screenshot shows the Azure portal, Standard workflow designer, search box with "Built-in" selected underneath, and FTP trigger selected.](./media/connectors-create-api-ftp/ftp-select-trigger-built-in-standard.png)
- ![Selected "List of Files Id" property](./media/connectors-create-api-ftp/selected-list-file-ids-ftp-action.png)
+1. Provide the information for your connection.
-1. Now add this FTP action: **Get file content**
+ > [!NOTE]
+ >
+ > By default, this connector transfers files in text format. To transfer files in binary format,
+ > for example, when the file content uses binary encoding, select the binary transport option.
- ![Find and select the "Get file content" action](./media/connectors-create-api-ftp/select-get-file-content-ftp-action.png)
+ ![Screenshot shows Standard workflow designer, FTP built-in trigger, and connection profile.](./media/connectors-create-api-ftp/ftp-trigger-connection-built-in-standard.png)
-1. After the **Get file content** action appears, click inside the **File** box so that the dynamic content list appears. You can now select properties for the outputs from previous steps. In the dynamic content list, under **Get file metadata**, select the **Id** property, which references the file that was added or updated.
+1. When you're done, select **Create**.
- ![Find and select "Id" property](./media/connectors-create-api-ftp/get-file-content-id-output.png)
+1. When the trigger information box appears, in the **Folder path** box, specify the path to the folder that you want to monitor.
- The **Id** property now appears in the **File** box.
+ ![Screenshot shows Standard workflow designer, FTP built-in trigger, and "Folder path" with the specific folder path to monitor.](./media/connectors-create-api-ftp/ftp-trigger-built-in-folder-path-standard.png)
- ![Selected "Id" property](./media/connectors-create-api-ftp/selected-get-file-content-id-ftp-action.png)
+1. When you're done, save your logic app workflow.
-1. Save your logic app.
+<a name="managed-connector-trigger"></a>
-## Test your logic app
+#### Managed connector trigger
-To check that your workflow returns the content that you expect, add another action that sends you the content from the uploaded or updated file.
+1. In the [Azure portal](https://portal.azure.com), open your blank logic app workflow in the designer.
-1. Under the **Get file content** action, add an action that can send you the file's contents. This example adds the **Send an email** action for the Office 365 Outlook.
+1. On the designer, select **Choose an operation**. Under the search box, select **Azure**.
- ![Add an action for sending email](./media/connectors-create-api-ftp/select-send-email-action.png)
+1. In the search box, enter **ftp**. From the triggers list, select the trigger named **When a file is added or modified (properties only)**.
-1. After the action appears, provide the information and include the properties that you want to test. For example, include the **File content** property, which appears in the dynamic content list after you select **See more** in the **Get file content** section.
+ ![Screenshot shows the Azure portal, Standard workflow designer, search box with "Azure" selected underneath, and FTP trigger selected.](./media/connectors-create-api-ftp/ftp-select-trigger-azure-standard.png)
- ![Provide information about email action](./media/connectors-create-api-ftp/selected-send-email-action.png)
+1. Provide the [information for your connection](/connectors/ftp/#creating-a-connection).
-1. Save your logic app. To run and trigger the logic app, on the toolbar, select **Run**, and then add a file to the FTP folder that your logic app now monitors.
+ > [!NOTE]
+ >
+ > By default, this connector transfers files in text format. To transfer files in binary format,
+ > for example, when the file content uses binary encoding, select the binary transport option.
-## Connector reference
+ ![Screenshot shows Standard workflow designer, FTP managed connector trigger, and connection profile.](./media/connectors-create-api-ftp/ftp-trigger-connection-azure-standard.png)
-For more technical details about this connector, such as triggers, actions, and limits as described by the connector's Swagger file, see the [connector's reference page](/connectors/ftpconnector/).
+1. When you're done, select **Create**.
-> [!NOTE]
-> For logic apps in an [integration service environment (ISE)](../logic-apps/connect-virtual-network-vnet-isolated-environment-overview.md),
-> this connector's ISE-labeled version uses the [ISE message limits](../logic-apps/logic-apps-limits-and-config.md#message-size-limits) instead.
+1. When the trigger information box appears, find the folder that you want to monitor for new or edited files.
-## Next steps
+ 1. In the **Folder** box, select the folder icon to view the folder directory.
+
+ 1. Select the right angle arrow (**>**). Browse to the folder that you want, and then select the folder.
+
+ ![Screenshot shows Standard workflow designer, FTP managed connector trigger, and "Folder" property where browsing for folder to select.](./media/connectors-create-api-ftp/ftp-trigger-azure-select-folder-standard.png)
+
+ Your selected folder appears in the **Folder** box.
+
+ ![Screenshot shows Standard workflow designer, FTP managed connector trigger, and "Folder" property with selected folder.](./media/connectors-create-api-ftp/ftp-trigger-azure-selected-folder-standard.png)
+
+1. When you're done, save your logic app workflow.
+++
+When you save your workflow, this step automatically publishes your updates to your deployed logic app, which is live in Azure. With only a trigger, your workflow just checks the FTP server based on your specified schedule. You have to [add an action](#add-ftp-action) that responds to the trigger and does something with the trigger outputs.
+
+<a name="add-ftp-action"></a>
+
+## Add an FTP action
+
+A Consumption logic app workflow can use only the FTP managed connector. However, a Standard logic app workflow can use the FTP managed connector and the FTP built-in connector. Each version has multiple actions. For example, both managed and built-in connector versions have their own actions to get file metadata and get file content.
+
+* Managed connector actions: These actions run in a Consumption or Standard logic app workflow.
+
+* Built-in connector actions: These actions run only in a Standard logic app workflow.
+
+The following steps use the Azure portal, but with the appropriate Azure Logic Apps extension, you can also use the following tools to create and edit logic app workflows:
+
+* Consumption logic app workflows: [Visual Studio](../logic-apps/quickstart-create-logic-apps-with-visual-studio.md) or [Visual Studio Code](../logic-apps/quickstart-create-logic-apps-visual-studio-code.md)
+
+* Standard logic app workflows: [Visual Studio Code](../logic-apps/create-single-tenant-workflows-visual-studio-code.md)
+
+Before you can use an FTP action, your workflow must already start with a trigger, which can be any kind that you choose. For example, you can use the generic **Recurrence** built-in trigger to start your workflow on a specific schedule.
+
+### [Consumption](#tab/consumption)
+
+1. In the [Azure portal](https://portal.azure.com), open your logic app workflow in the designer.
+
+1. Find and select the [FTP action](/connectors/ftp/) that you want to use.
+
+ This example continues with the action named **Get file metadata** so you can get the metadata for a single array item.
+
+ 1. On the designer, under the trigger or any other actions, select **New step**.
+
+ 1. Under the **Choose an operation** search box, select **All**.
+
+ 1. In the search box, enter **ftp get file metadata**.
+
+ 1. From the actions list, select the action named **Get file metadata**.
+
+ ![Screenshot shows the Azure portal, Consumption workflow designer, search box with "ftp get file metadata" entered, and "Get file metadata" action selected.](./media/connectors-create-api-ftp/ftp-get-file-metadata-action-consumption.png)
+
+1. If necessary, provide the [information for your connection](/connectors/ftp/#creating-a-connection). When you're done, select **Create**.
+
+ > [!NOTE]
+ >
+ > By default, this connector transfers files in text format. To transfer files in binary format,
+ > for example, when the file content uses binary encoding, select the binary transport option.
+
+ ![Screenshot shows Consumption workflow designer and FTP connection profile for an action.](./media/connectors-create-api-ftp/ftp-action-connection-consumption.png)
+
+1. After the **Get file metadata** action information box appears, click inside the **File** box so that the dynamic content list opens.
+
+ You can now select outputs from the preceding trigger.
+
+1. In the dynamic content list, under **When a file is added or modified**, select **List of Files Id**.
+
+ ![Screenshot shows Consumption workflow designer, "Get file metadata" action, dynamic content list opened, and "List of Files Id" selected.](./media/connectors-create-api-ftp/ftp-get-file-metadata-list-files-id-output-consumption.png)
+
+ The **File** property now references the **List of Files Id** trigger output.
+
+1. On the designer, under the **Get file metadata** action, select **New step**.
+
+1. Under the **Choose an operation** search box, select **All**.
+
+1. In the search box, enter **ftp get file content**.
+
+1. From the actions list, select the action named **Get file content**.
+
+ ![Screenshot shows the Azure portal, Consumption workflow designer, search box with "ftp get file content" entered, and "Get file content" action selected.](./media/connectors-create-api-ftp/ftp-get-file-content-action-consumption.png)
+
+1. After the **Get file content** action information box appears, click inside the **File** box so that the dynamic content list opens.
+
+ You can now select outputs from the preceding trigger and any other actions.
+
+1. In the dynamic content list, under **Get file metadata**, select **Id**, which references the file that was added or updated.
+
+ ![Screenshot shows Consumption workflow designer, "Get file content" action, and "File" property with dynamic content list opened and "Id" property selected.](./media/connectors-create-api-ftp/ftp-get-file-content-id-output-consumption.png)
+
+ The **File** property now references the **Id** action output.
+
+ ![Screenshot shows Consumption workflow designer, "Get file content" action, and "File" property with "Id" entered.](./media/connectors-create-api-ftp/ftp-get-file-content-id-entered-consumption.png)
+
+1. When you're done, save your logic app workflow.
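
In code view, the dynamic content tokens that you selected resolve to workflow expressions. The following sketch is only a hypothetical view of how the two chained managed actions can look: the `path` values are placeholders for the designer-generated operation paths, while the `@triggerBody()?['Id']` and `@body('Get_file_metadata')?['Id']` expressions illustrate how the **List of Files Id** trigger output and the **Id** action output are referenced.

```json
"actions": {
    "Get_file_metadata": {
        "type": "ApiConnection",
        "runAfter": {},
        "inputs": {
            "host": {
                "connection": {
                    "name": "@parameters('$connections')['ftp']['connectionId']"
                }
            },
            "method": "get",
            "path": "/datasets/default/files/@{encodeURIComponent(triggerBody()?['Id'])}"
        }
    },
    "Get_file_content": {
        "type": "ApiConnection",
        "runAfter": {
            "Get_file_metadata": [ "Succeeded" ]
        },
        "inputs": {
            "host": {
                "connection": {
                    "name": "@parameters('$connections')['ftp']['connectionId']"
                }
            },
            "method": "get",
            "path": "/datasets/default/files/@{encodeURIComponent(body('Get_file_metadata')?['Id'])}/content"
        }
    }
}
```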
+
+### [Standard](#tab/standard)
+
+The steps to add and use an FTP action differ based on whether your workflow uses an "Azure" managed connector trigger or a built-in connector trigger.
+
+* [**Workflows with a built-in trigger**](#built-in-trigger-workflows): Describes the steps for workflows that start with a built-in trigger.
+
+ If you used the FTP built-in trigger, and you want the content from a newly added or updated file, you can use a **For each** loop to iterate through the array that's returned by the trigger. You can then use just the **Get file content** action without any other intermediary actions. For more information about FTP built-in connector operations, review [FTP built-in connector operations](#ftp-built-in-connector-operations) later in this article.
+
+* [**Workflows with a managed trigger**](#managed-trigger-workflows): Describes the steps for workflows that start with a managed trigger.
+
+ If you used the FTP managed connector trigger, and want the content from a newly added or modified file, you can use a **For each** loop to iterate through the array that's returned by the trigger. You then have to use intermediary actions such as the FTP action named **Get file metadata** before you use the **Get file content** action.
+
+<a name="built-in-trigger-workflows"></a>
+
+#### Workflows with a built-in trigger
+
+1. In the [Azure portal](https://portal.azure.com), open your logic app workflow in the designer.
+
+1. On the designer, under the trigger or any other actions, select the plus sign (**+**) > **Add an action**.
+
+1. On the **Add an action** pane, under the **Choose an operation** search box, select **Built-in**.
+
+1. In the search box, enter **ftp get file content**. From the actions list, select **Get file content**.
+
+ ![Screenshot shows Azure portal, Standard workflow designer, search box with "Built-in" selected underneath, and "Get file content" selected.](./media/connectors-create-api-ftp/ftp-action-get-file-content-built-in-standard.png)
+
+1. If necessary, provide the information for your connection. When you're done, select **Create**.
+
+ > [!NOTE]
+ >
+ > By default, this connector transfers files in text format. To transfer files in binary format,
+ > for example, when the file content uses binary encoding, select the binary transport option.
+
+ ![Screenshot shows Standard workflow designer, FTP built-in action, and connection profile.](./media/connectors-create-api-ftp/ftp-action-connection-built-in-standard.png)
+
+1. In the action information pane that appears, click inside the **File path** box so that the dynamic content list opens.
+
+ You can now select outputs from the preceding trigger.
+
+1. In the dynamic content list, under **When a file is added or updated**, select **File path**.
+
+ ![Screenshot shows Standard workflow designer, "Get file content" action, dynamic content list opened, and "File path" selected.](./media/connectors-create-api-ftp/ftp-action-get-file-content-file-path-built-in-standard.png)
+
+ The **File path** property now references the **File path** trigger output.
+
+ ![Screenshot shows Standard workflow designer and "Get file content" action complete.](./media/connectors-create-api-ftp/ftp-action-get-file-content-complete-built-in-standard.png)
+
+1. Add any other actions that your workflow needs. When you're done, save your logic app workflow.
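
For reference, the underlying definition for this built-in action follows the general service provider pattern. The following is only a sketch: it uses the `getFtpFileContent` operation ID and the `path` parameter that are documented in the [FTP built-in connector operations](#built-in-operations) section later in this article, but it assumes a connection named `ftp`, and the `filePath` output property name is an assumption for how the trigger's **File path** token might resolve.

```json
"Get_file_content": {
    "type": "ServiceProvider",
    "runAfter": {},
    "inputs": {
        "parameters": {
            "path": "@triggerOutputs()?['body']?['filePath']"
        },
        "serviceProviderConfiguration": {
            "connectionName": "ftp",
            "operationId": "getFtpFileContent",
            "serviceProviderId": "/serviceProviders/Ftp"
        }
    }
}
```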
+
+<a name="managed-trigger-workflows"></a>
+
+#### Workflows with a managed trigger
+
+1. In the [Azure portal](https://portal.azure.com), open your logic app workflow in the designer.
+
+1. On the designer, under the trigger or any other actions, select the plus sign (**+**) > **Add an action**.
+
+1. On the **Add an action** pane, under the **Choose an operation** search box, select **Azure**.
+
+1. In the search box, enter **ftp get file metadata**. From the actions list, select the **Get file metadata** action.
+
+ ![Screenshot shows Azure portal, Standard workflow designer, search box with "Azure" selected underneath, and "Get file metadata" action selected.](./media/connectors-create-api-ftp/ftp-action-get-file-metadata-azure-standard.png)
+
+1. If necessary, provide the [information for your connection](/connectors/ftp/#creating-a-connection). When you're done, select **Create**.
+
+ > [!NOTE]
+ >
+ > By default, this connector transfers files in text format. To transfer files in binary format,
+ > for example, when the file content uses binary encoding, select the binary transport option.
+
+ ![Screenshot shows Standard workflow designer, FTP managed connector action, and connection profile.](./media/connectors-create-api-ftp/ftp-action-connection-azure-standard.png)
+
+1. In the action information pane that appears, click inside the **File** box so that the dynamic content list opens.
+
+ You can now select outputs from the preceding trigger.
+
+1. In the dynamic content list, under **When a file is added or modified (properties only)**, select **List of Files Id**.
+
+ ![Screenshot shows Standard workflow designer, "Get file metadata" action, dynamic content list opened, and "List of Files Id" selected.](./media/connectors-create-api-ftp/ftp-get-file-metadata-list-files-azure-standard.png)
+
+ The **File** property now references the **List of Files Id** trigger output.
+
+ ![Screenshot shows Standard workflow designer, "Get file metadata" action, and "File" property set to "List of Files Id" trigger output.](./media/connectors-create-api-ftp/ftp-get-file-metadata-complete-azure-standard.png)
+
+1. On the designer, under the **Get file metadata** action, select the plus sign (**+**) > **Add an action**.
+
+1. In the **Add an action** pane, under the **Choose an operation** search box, select **Azure**.
+
+1. In the search box, enter **ftp get file content**. From the actions list, select the **Get file content** action.
+
+ ![Screenshot shows Standard workflow designer, "Get file content" action, and "File" property set to "Id" trigger output.](./media/connectors-create-api-ftp/ftp-get-file-content-azure-standard.png)
+
+1. In the action information pane that appears, click inside the **File** box so that the dynamic content list opens.
+
+ You can now select outputs from the preceding trigger or actions.
+
+1. In the dynamic content list, under **Get file metadata**, select **Id**.
+
+ ![Screenshot shows Standard workflow designer, "Get file content" action, dynamic content list opened, and "Id" selected.](./media/connectors-create-api-ftp/ftp-get-file-content-file-id-azure-standard.png)
+
+ The **File** property now references the **Id** action output.
+
+ ![Screenshot shows Standard workflow designer, "Get file content" action, and "File" property set to "Id" action output.](./media/connectors-create-api-ftp/ftp-get-file-content-complete-azure-standard.png)
+
+1. Add any other actions that your workflow needs. When you're done, save your logic app workflow.
+++
+## Test your workflow
+
+To check that your workflow returns the content that you expect, add another action that sends you the content from the added or updated file. This example uses the Office 365 Outlook action named **Send an email**.
+
+### [Consumption](#tab/consumption)
+
+1. Under the **Get file content** action, add the Office 365 Outlook action named **Send an email**. If you have an Outlook.com account instead, add the Outlook.com **Send an email** action, and adjust the following steps accordingly.
+
+ 1. On the designer, under the **Get file content** action, select **New step**.
+
+ 1. Under the **Choose an operation** search box, select **All**.
+
+ 1. In the search box, enter **office 365 outlook send an email**. From the actions list, select the Office 365 Outlook action named **Send an email**.
+
+ ![Screenshot shows Consumption workflow designer and "Send an email" action under all the other actions.](./media/connectors-create-api-ftp/send-email-action-consumption.png)
+
+1. If necessary, sign in to your email account.
+
+1. In the action information box, provide the required values and include any other parameters or properties that you want to test.
+
+ For example, you can include the **File content** output from the **Get file content** action. To find this output, follow these steps:
+
+ 1. In the **Send an email** action, click inside the **Body** box so that the dynamic content list opens.
+
+ 1. In the dynamic content list, next to **Get file content**, select **See more**.
+
+ ![Screenshot shows Consumption workflow designer, "Send an email" action, and dynamic content list opened with "See more" selected next to "Get file content".](./media/connectors-create-api-ftp/send-email-action-body-see-more-consumption.png)
+
+ 1. In the dynamic content list, under **Get file content**, select **File Content**.
+
+ The **Body** property now references the **File Content** action output.
+
+ ![Screenshot shows Consumption workflow designer, "Send an email" action, dynamic content list opened, and "File Content" action output selected.](./media/connectors-create-api-ftp/send-email-body-file-content-output-consumption.png)
+
+1. Save your logic app workflow.
-* Learn about other [Logic Apps connectors](../connectors/apis-list.md)
+1. To run and trigger the workflow, on the designer toolbar, select **Run Trigger** > **Run**. Add a file to the FTP folder that your workflow monitors.
+
+### [Standard](#tab/standard)
+
+#### Workflow with built-in trigger and actions
+
+1. Under the **Get file content** action, add the Office 365 Outlook action named **Send an email**. If you have an Outlook.com account instead, add the Outlook.com **Send an email** action, and adjust the following steps accordingly.
+
+ 1. On the designer, select **Choose an operation**.
+
+ 1. On the **Add an action** pane, under the **Choose an operation** search box, select **Azure**.
+
+ 1. In the search box, enter **office 365 outlook send an email**. From the actions list, select the Office 365 Outlook action named **Send an email**.
+
+ ![Screenshot shows Standard workflow designer and "Send an email" action under all the other actions.](./media/connectors-create-api-ftp/send-email-action-with-built-in-standard.png)
+
+1. If necessary, sign in to your email account.
+
+1. In the action information box, provide the required values and include any other parameters or properties that you want to test.
+
+ For example, you can include the **File content** output from the **Get file content** action. To find this output, follow these steps:
+
+ 1. In the **Send an email** action, click inside the **Body** box so that the dynamic content list opens. In the dynamic content list, next to **Get file content**, select **See more**.
+
+ ![Screenshot shows Standard workflow designer, "Send an email" action, and dynamic content list opened with "See more" selected next to "Get file content".](./media/connectors-create-api-ftp/send-email-action-body-see-more-built-in-standard.png)
+
+ 1. In the dynamic content list, under **Get file content**, select **File content**.
+
+ The **Body** property now references the **File content** action output.
+
+ ![Screenshot shows Standard workflow designer and "Send an email" action with "File content" action output.](./media/connectors-create-api-ftp/send-email-action-complete-built-in-standard.png)
+
+1. Save your logic app workflow.
+
+1. To run and trigger the workflow, follow these steps:
+
+ 1. On the workflow menu, select **Overview**.
+
+ 1. On the **Overview** pane toolbar, select **Run Trigger** > **Run**.
+
+ 1. Add a file to the FTP folder that your workflow monitors.
+
+#### Workflow with managed trigger and actions
+
+1. Under the **Get file content** action, add the Office 365 Outlook action named **Send an email**. If you have an Outlook.com account instead, add the Outlook.com **Send an email** action, and adjust the following steps accordingly.
+
+ 1. On the designer, select **Choose an operation**.
+
+ 1. On the **Add an action** pane, under the **Choose an operation** search box, select **Azure**.
+
+ 1. In the search box, enter **office 365 outlook send an email**. From the actions list, select the Office 365 Outlook action named **Send an email**.
+
+ ![Screenshot shows Standard workflow designer and "Send an email" action under all the other managed actions.](./media/connectors-create-api-ftp/send-email-action-with-azure-standard.png)
+
+1. If necessary, sign in to your email account.
+
+1. In the action information box, provide the required values and include any other parameters or properties that you want to test.
+
+ For example, you can include the **File content** output from the **Get file content** action. To find this output, follow these steps:
+
+ 1. In the **Send an email** action, click inside the **Body** box so that the dynamic content list opens. In the dynamic content list, next to **Get file content**, select **See more**.
+
+ ![Screenshot shows Standard workflow designer, "Send an email" action, and dynamic content list opened with "See more" selected next to "Get file content" managed action section.](./media/connectors-create-api-ftp/send-email-action-body-see-more-azure-standard.png)
+
+ 1. In the dynamic content list, under **Get file content**, select **File content**.
+
+ The **Body** property now references the **File content** action output.
+
+ ![Screenshot shows Standard workflow designer and "Send an email" action with "File content" managed action output.](./media/connectors-create-api-ftp/send-email-action-complete-azure-standard.png)
+
+1. Save your logic app workflow.
+
+1. To run and trigger the workflow, follow these steps:
+
+ 1. On the workflow menu, select **Overview**.
+
+ 1. On the **Overview** pane toolbar, select **Run Trigger** > **Run**.
+
+ 1. Add a file to the FTP folder that your workflow monitors.
+++
+<a name="built-in-operations"></a>
+
+## FTP built-in connector operations
+
+The FTP built-in connector is available only for Standard logic app workflows and provides the following operations:
+
+| Trigger | Description |
+||-|
+| [**When a file is added or updated**](#when-file-added-updated) | Start a logic app workflow when a file is added or updated in the specified folder on the FTP server. <p><p>**Note**: This trigger gets only the file metadata or properties, not the file content. However, to get the content, your workflow can follow this trigger with the [**Get file content**](#get-file-content) action. |
+|||
+
+| Action | Description |
+|--|-|
+| [**Create file**](#create-file) | Create a file using the specified file path and file content. |
+| [**Delete file**](#delete-file) | Delete a file using the specified file path. |
+| [**Get file content**](#get-file-content) | Get the content of a file using the specified file path. |
+| [**Get file metadata**](#get-file-metadata) | Get the metadata or properties of a file using the specified file path. |
+| [**List files and subfolders in a folder**](#list-files-subfolders-folder) | Get a list of files and subfolders in the specified folder. |
+| [**Update file**](#update-file) | Update a file using the specified file path and file content. |
+|||
+
+<a name="when-file-added-updated"></a>
+
+### When a file is added or updated
+
+Operation ID: `whenFtpFilesAreAddedOrModified`
+
+This trigger starts a logic app workflow run when a file is added or updated in the specified folder on the FTP server. The trigger gets only the file metadata or properties, not any file content. However, to get the content, your workflow can follow this trigger with the [**Get file content**](#get-file-content) action.
+
+#### Parameters
+
+| Name | Key | Required | Type | Description |
+||--|-||-|
+| **Folder path** | `folderPath` | True | `string` | The folder path, relative to the root directory. |
+| **Number of files to return** | `maxFileCount` | False | `integer` | The maximum number of files to return from a single trigger run. Valid values range from 1 to 100. <br><br>**Note**: By default, the **Split On** setting is enabled and forces this trigger to process each file individually in parallel. |
+| **Cutoff timestamp to ignore older files** | `oldFileCutOffTimestamp` | False | `dateTime` | The cutoff time to use for ignoring older files. Use the timestamp format `YYYY-MM-DDTHH:MM:SS`. To disable this feature, leave this property empty. |
+||||||
+
+#### Returns
+
+When the trigger's **Split On** setting is enabled, the trigger returns the metadata or properties for one file at a time. Otherwise, the trigger returns an array that contains each file's metadata.
+
+| Name | Type |
+|||
+| **List of files** | [BlobMetadata](/connectors/ftp/#blobmetadata) |
+|||
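
To illustrate how these parameters fit together, the following minimal sketch shows one way this trigger might appear in a Standard workflow definition. The sketch assumes a connection named `ftp` in the workflow's connections file; the `serviceProviderId` value and the `splitOn` expression follow the general service provider pattern and aren't designer-generated output, so the actual definition might differ slightly.

```json
"triggers": {
    "When_a_file_is_added_or_updated": {
        "type": "ServiceProvider",
        "splitOn": "@triggerOutputs()?['body']",
        "inputs": {
            "parameters": {
                "folderPath": "/inbox",
                "maxFileCount": 10
            },
            "serviceProviderConfiguration": {
                "connectionName": "ftp",
                "operationId": "whenFtpFilesAreAddedOrModified",
                "serviceProviderId": "/serviceProviders/Ftp"
            }
        }
    }
}
```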
+
+<a name="create-file"></a>
+
+### Create file
+
+Operation ID: `createFile`
+
+This action creates a file using the specified file path and file content. If the file already exists, this action overwrites that file.
+
+> [!IMPORTANT]
+>
+> If you delete or rename a file on the FTP server immediately after creation within the same workflow,
+> the operation might return an HTTP **404** error, which is by design. To avoid this problem, include
+> a 1-minute delay before you delete or rename any newly created files. You can use the
+> [**Delay** action](connectors-native-delay.md) to add this delay to your workflow.
+
+#### Parameters
+
+| Name | Key | Required | Type | Description |
+||--|-||-|
+| **File path** | `filePath` | True | `string` | The file path, including the file name extension if any, relative to the root directory. |
+| **File content** | `fileContent` | True | `string` | The file content. |
+||||||
+
+#### Returns
+
+This action returns a [BlobMetadata](/connectors/ftp/#blobmetadata) object named **Body**.
+
+| Name | Type |
+|||
+| **File metadata File name** | `string` |
+| **File metadata File path** | `string` |
+| **File metadata File size** | `string` |
+| **File metadata** | [BlobMetadata](/connectors/ftp/#blobmetadata) |
+|||
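
As a rough illustration only, a **Create file** action in a Standard workflow definition might look like the following sketch. The file path and content values are placeholders, and the sketch again assumes a connection named `ftp` and the general service provider pattern.

```json
"Create_file": {
    "type": "ServiceProvider",
    "runAfter": {},
    "inputs": {
        "parameters": {
            "filePath": "/outbox/orders.json",
            "fileContent": "@body('Get_file_content')"
        },
        "serviceProviderConfiguration": {
            "connectionName": "ftp",
            "operationId": "createFile",
            "serviceProviderId": "/serviceProviders/Ftp"
        }
    }
}
```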
+
+<a name="delete-file"></a>
+
+### Delete file
+
+Operation ID: `deleteFtpFile`
+
+This action deletes a file using the specified file path.
+
+#### Parameters
+
+| Name | Key | Required | Type | Description |
+||--|-||-|
+| **File path** | `filePath` | True | `string` | The file path, including the file name extension if any, relative to the root directory. |
+||||||
+
+#### Returns
+
+None
+
+<a name="get-file-content"></a>
+
+### Get file content
+
+Operation ID: `getFtpFileContent`
+
+This action gets the content of a file using the specified file path.
+
+#### Parameters
+
+| Name | Key | Required | Type | Description |
+||--|-||-|
+| **File path** | `path` | True | `string` | The file path, including the file name extension if any, relative to the root directory. |
+||||||
+
+#### Returns
+
+This action returns the content of a file as a binary value named **File content**.
+
+| Name | Type |
+|||
+| **File content** | Binary |
+|||
+
+<a name="get-file-metadata"></a>
+
+### Get file metadata
+
+Operation ID: `getFileMetadata`
+
+This action gets the metadata or properties of a file using the specified file path.
+
+#### Parameters
+
+| Name | Key | Required | Type | Description |
+||--|-||-|
+| **File path** | `path` | True | `string` | The file path, including the file name extension if any, relative to the root directory. |
+||||||
+
+#### Returns
+
+This action returns the following outputs:
+
+| Name | Type |
+|||
+| **File name** | `string` |
+| **File path** | `string` |
+| **File size** | `string` |
+| **Last updated time** | `string` |
+| **File metadata** | [BlobMetadata](/connectors/ftp/#blobmetadata) |
+|||
+
+<a name="list-files-subfolders-folder"></a>
+
+### List files and subfolders in a folder
+
+Operation ID: `listFilesInFolder`
+
+This action gets a list of files and subfolders in the specified folder.
+
+#### Parameters
+
+| Name | Key | Required | Type | Description |
+||--|-||-|
+| **Folder path** | `folderPath` | True | `string` | The folder path, relative to the root directory. |
+||||||
+
+#### Returns
+
+This action returns an array that's named **Response** and contains [BlobMetadata](/connectors/ftp/#blobmetadata) objects.
+
+| Name | Type |
+|||
+| **Response** | Array with [BlobMetadata](/connectors/ftp/#blobmetadata) objects |
+|||
+
+<a name="update-file"></a>
+
+### Update file
+
+Operation ID: `updateFile`
+
+This action updates a file using the specified file path and file content.
+
+> [!IMPORTANT]
+>
+> If you delete or rename a file on the FTP server immediately after you create or update it within the same workflow,
+> the operation might return an HTTP **404** error, which is by design. To avoid this problem, include
+> a 1-minute delay before you delete or rename any newly created or updated files. You can use the
+> [**Delay** action](connectors-native-delay.md) to add this delay to your workflow.
+
+#### Parameters
+
+| Name | Key | Required | Type | Description |
+||--|-||-|
+| **File path** | `filePath` | True | `string` | The file path, including the file name extension if any, relative to the root directory. |
+| **File content** | `fileContent` | True | `string` | The file content. |
+||||||
+
+#### Returns
+
+This action returns a [BlobMetadata](/connectors/ftp/#blobmetadata) object named **Body**.
+
+| Name | Type |
+|||
+| **Body** | [BlobMetadata](/connectors/ftp/#blobmetadata) |
+|||
+
+## Next steps
+* [Connectors overview for Azure Logic Apps](../connectors/apis-list.md)
container-registry Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Container Registry description: Lists Azure Policy Regulatory Compliance controls available for Azure Container Registry. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 08/01/2022 Last updated : 08/04/2022
cosmos-db Account Databases Containers Items https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/account-databases-containers-items.md
Azure Cosmos DB is a fully managed platform-as-a-service (PaaS). To begin using
The Azure Cosmos account is the fundamental unit of global distribution and high availability. Your Azure Cosmos account contains a unique DNS name and you can manage an account by using the Azure portal or the Azure CLI, or by using different language-specific SDKs. For more information, see [how to manage your Azure Cosmos account](how-to-manage-database-account.md). For globally distributing your data and throughput across multiple Azure regions, you can add and remove Azure regions to your account at any time. You can configure your account to have either a single region or multiple write regions. For more information, see [how to add and remove Azure regions to your account](how-to-manage-database-account.md). You can configure the [default consistency](consistency-levels.md) level on an account.
-## Elements in an Azure Cosmos account
+## Elements in an Azure Cosmos DB account
An Azure Cosmos container is the fundamental unit of scalability. You can have virtually unlimited provisioned throughput (RU/s) and storage on a container. Azure Cosmos DB transparently partitions your container using the logical partition key that you specify in order to elastically scale your provisioned throughput and storage.
The following image shows the hierarchy of different entities in an Azure Cosmos
:::image type="content" source="./media/account-databases-containers-items/cosmos-entities.png" alt-text="Azure Cosmos account entities" border="false":::
-## Azure Cosmos databases
+## Azure Cosmos DB databases
You can create one or multiple Azure Cosmos databases under your account. A database is analogous to a namespace. A database is the unit of management for a set of Azure Cosmos containers. The following table shows how a database is mapped to various API-specific entities:
You can interact with an Azure Cosmos database with Azure Cosmos APIs as describ
|Create new database| Yes | Yes | Yes (database is mapped to a keyspace) | Yes | NA | NA | |Update database| Yes | Yes | Yes (database is mapped to a keyspace) | Yes | NA | NA |
-## Azure Cosmos containers
+## Azure Cosmos DB containers
An Azure Cosmos container is the unit of scalability both for provisioned throughput and storage. A container is horizontally partitioned and then replicated across multiple regions. The items that you add to the container are automatically grouped into logical partitions, which are distributed across physical partitions, based on the partition key. The throughput on a container is evenly distributed across the physical partitions. To learn more about partitioning and partition keys, see [Partition data](partitioning-overview.md).
A container is specialized into API-specific entities as shown in the following
> [!NOTE] > When creating containers, make sure you don't create two containers with the same name but different casing. That's because some parts of the Azure platform are not case-sensitive, and this can result in confusion/collision of telemetry and actions on containers with such names.
-### Properties of an Azure Cosmos container
+### Properties of an Azure Cosmos DB container
An Azure Cosmos container has a set of system-defined properties. Depending on which API you use, some properties might not be directly exposed. The following table describes the list of system-defined properties:
An Azure Cosmos container has a set of system-defined properties. Depending on w
|uniqueKeyPolicy | User-configurable | Used to ensure the uniqueness of one or more values in a logical partition. For more information, see [Unique key constraints](unique-keys.md). | Yes | No | No | No | Yes | |AnalyticalTimeToLive | User-configurable | Provides the ability to delete items automatically from a container after a set time period. For details, see [Time to Live](analytical-store-introduction.md). | Yes | No | Yes | No | No |
-### Operations on an Azure Cosmos container
+### Operations on an Azure Cosmos DB container
An Azure Cosmos container supports the following operations when you use any of the Azure Cosmos APIs:
An Azure Cosmos container supports the following operations when you use any of
| Update a container | Yes | Yes | Yes | Yes | NA | NA | | Delete a container | Yes | Yes | Yes | Yes | NA | NA |
-## Azure Cosmos items
+## Azure Cosmos DB items
Depending on which API you use, an Azure Cosmos item can represent either a document in a collection, a row in a table, or a node or edge in a graph. The following table shows the mapping of API-specific entities to an Azure Cosmos item:
cosmos-db Cassandra Change Feed https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/cassandra/cassandra-change-feed.md
The following example shows how to get a change feed on all the rows in a Cassan
In each iteration, the query resumes at the last point changes were read, using paging state. We can see a continuous stream of new changes to the table in the keyspace. We will see changes to rows that are inserted or updated. Watching for delete operations using change feed in Cassandra API is currently not supported.
+> [!NOTE]
+> Reusing a token after dropping a collection and then recreating it with the same name results in an error.
+> We advise you to set the pageState to null when you create a new collection that reuses a previous collection name.
+ # [Java](#tab/java) ```java
cosmos-db Concepts Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/concepts-limits.md
You can provision throughput at a container-level or a database-level in terms o
| Resource | Limit | | | |
-| Maximum RUs per container ([dedicated throughput provisioned mode](account-databases-containers-items.md#azure-cosmos-containers)) | 1,000,000 <sup>1</sup> |
-| Maximum RUs per database ([shared throughput provisioned mode](account-databases-containers-items.md#azure-cosmos-containers)) | 1,000,000 <sup>1</sup> |
+| Maximum RUs per container ([dedicated throughput provisioned mode](account-databases-containers-items.md#azure-cosmos-db-containers)) | 1,000,000 <sup>1</sup> |
+| Maximum RUs per database ([shared throughput provisioned mode](account-databases-containers-items.md#azure-cosmos-db-containers)) | 1,000,000 <sup>1</sup> |
| Maximum RUs per partition (logical & physical) | 10,000 | | Maximum storage across all items per (logical) partition | 20 GB <sup>2</sup>| | Maximum number of distinct (logical) partition keys | Unlimited |
In summary, here are the minimum provisioned RU limits when using manual through
| Resource | Limit | | | |
-| Minimum RUs per container ([dedicated throughput provisioned mode with manual throughput](./account-databases-containers-items.md#azure-cosmos-containers)) | 400 |
-| Minimum RUs per database ([shared throughput provisioned mode with manual throughput](./account-databases-containers-items.md#azure-cosmos-containers)) | 400 RU/s for first 25 containers. |
+| Minimum RUs per container ([dedicated throughput provisioned mode with manual throughput](./account-databases-containers-items.md#azure-cosmos-db-containers)) | 400 |
+| Minimum RUs per database ([shared throughput provisioned mode with manual throughput](./account-databases-containers-items.md#azure-cosmos-db-containers)) | 400 RU/s for first 25 containers. |
Cosmos DB supports programmatic scaling of throughput (RU/s) per container or database via the SDKs or portal.
cosmos-db How To Setup Cmk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/how-to-setup-cmk.md
No, there's no charge to enable this feature.
All the data stored in your Azure Cosmos account is encrypted with the customer-managed keys, except for the following metadata: -- The names of your Azure Cosmos DB [accounts, databases, and containers](./account-databases-containers-items.md#elements-in-an-azure-cosmos-account)
+- The names of your Azure Cosmos DB [accounts, databases, and containers](./account-databases-containers-items.md#elements-in-an-azure-cosmos-db-account)
- The names of your [stored procedures](./stored-procedures-triggers-udfs.md)
cosmos-db Index Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/index-overview.md
# Indexing in Azure Cosmos DB - Overview [!INCLUDE[appliesto-sql-api](includes/appliesto-sql-api.md)]
-Azure Cosmos DB is a schema-agnostic database that allows you to iterate on your application without having to deal with schema or index management. By default, Azure Cosmos DB automatically indexes every property for all items in your [container](account-databases-containers-items.md#azure-cosmos-containers) without having to define any schema or configure secondary indexes.
+Azure Cosmos DB is a schema-agnostic database that allows you to iterate on your application without having to deal with schema or index management. By default, Azure Cosmos DB automatically indexes every property for all items in your [container](account-databases-containers-items.md#azure-cosmos-db-containers) without having to define any schema or configure secondary indexes.
The goal of this article is to explain how Azure Cosmos DB indexes data and how it uses indexes to improve query performance. It is recommended to go through this section before exploring how to customize [indexing policies](index-policy.md).
cosmos-db Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Cosmos DB description: Lists Azure Policy Regulatory Compliance controls available for Azure Cosmos DB. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 08/01/2022 Last updated : 08/04/2022
cosmos-db How To Create Multiple Cosmos Db Triggers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/how-to-create-multiple-cosmos-db-triggers.md
This article describes how you can configure multiple Azure Functions triggers f
When building serverless architectures with [Azure Functions](../../azure-functions/functions-overview.md), it's [recommended](../../azure-functions/performance-reliability.md#avoid-long-running-functions) to create small function sets that work together instead of large long running functions.
-As you build event-based serverless flows using the [Azure Functions trigger for Cosmos DB](./change-feed-functions.md), you'll run into the scenario where you want to do multiple things whenever there is a new event in a particular [Azure Cosmos container](../account-databases-containers-items.md#azure-cosmos-containers). If actions you want to trigger, are independent from one another, the ideal solution would be to **create one Azure Functions triggers for Cosmos DB per action** you want to do, all listening for changes on the same Azure Cosmos container.
+As you build event-based serverless flows using the [Azure Functions trigger for Cosmos DB](./change-feed-functions.md), you'll run into the scenario where you want to do multiple things whenever there is a new event in a particular [Azure Cosmos container](../account-databases-containers-items.md#azure-cosmos-db-containers). If the actions you want to trigger are independent from one another, the ideal solution is to **create one Azure Functions trigger for Cosmos DB per action** that you want to take, all listening for changes on the same Azure Cosmos container.
## Optimizing containers for multiple Triggers
cosmos-db Synapse Link https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/synapse-link.md
Synapse Link isn't recommended if you're looking for traditional data warehouse
* Currently Synapse Link isn't fully compatible with continuous backup mode. Click [here](analytical-store-introduction.md#backup) for more information.
-* Granular Role-based Access (RBAC)s isn't supported when querying from Synapse. Users that have access to your Synapse workspace and have access to the Cosmos DB account can access all containers within that account. We currently don't support more granular access to the containers.
+* Granular role-based access control (RBAC) isn't supported when querying from Synapse. Users that have access to your Synapse workspace and have access to the Cosmos DB account can access all containers within that account. We currently don't support more granular access to the containers.
+* Currently, Azure Synapse workspaces don't support linked services that use `Managed Identity`. Always use the `MasterKey` option (see the sketch following this list).
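As an illustrative sketch of the key-based option, a linked service definition shaped like the Cosmos DB SQL API connector looks roughly like the following; the linked service name, account, database, and key values are placeholders:

```json
{
  "name": "<linked-service-name>",
  "properties": {
    "type": "CosmosDb",
    "typeProperties": {
      "connectionString": "AccountEndpoint=https://<account-name>.documents.azure.com:443/;AccountKey=<account-key>;Database=<database-name>"
    }
  }
}
```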
## Security
cost-management-billing Migrate Ea Reporting Arm Apis Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/automate/migrate-ea-reporting-arm-apis-overview.md
Title: Migrate from EA Reporting to Azure Resource Manager APIs overview
+ Title: Migrate from Azure Enterprise Reporting to Microsoft Cost Management APIs overview
-description: This article provides an overview about migrating from EA Reporting to Azure Resource Manager APIs.
+description: This article provides an overview of migrating from Azure Enterprise Reporting to Microsoft Cost Management APIs.
Last updated 07/15/2022
-# Migrate from EA Reporting to Azure Resource Manager APIs overview
+# Migrate from Azure Enterprise Reporting to Microsoft Cost Management APIs overview
-This article helps developers that have built custom solutions using the [Azure Reporting APIs for Enterprise Customers](../manage/enterprise-api.md) to migrate to Azure Resource Manager APIs for Cost Management. Service principal support for the newer Azure Resource Manager APIs. Azure Resource Manager APIs are still in active development. Consider migrating to them instead of using the older Azure Reporting APIs for Enterprise customers. The older APIs are being deprecated. This article helps you understand the differences between the Reporting APIs and the Azure Resource Manager APIs, what to expect when you migrate to the Azure Resource Manager APIs, and the new capabilities that are available with the new Azure Resource Manager APIs.
+This article helps developers who have built custom solutions using the [Azure Enterprise Reporting APIs](../manage/enterprise-api.md) migrate to Microsoft Cost Management APIs. The newer Cost Management APIs support service principals and are under active development, while the older Azure Enterprise Reporting APIs are being deprecated, so consider migrating rather than continuing to use them. This article helps you understand the differences between the Azure Enterprise Reporting APIs and the Cost Management APIs, what to expect when you migrate, and the new capabilities that the Cost Management APIs offer.
## API differences
-The following information describes the differences between the older Reporting APIs for Enterprise Customers and the newer Azure Resource Manager APIs.
+The following information describes the differences between the older Azure Enterprise Reporting APIs and the newer Cost Management APIs.
-| Use | Enterprise Agreement APIs | Azure Resource Manager APIs |
+| Use | Azure Enterprise Reporting APIs | Microsoft Cost Management APIs |
| --- | --- | --- |
| Authentication | API key provisioned in the Enterprise Agreement (EA) portal | Azure Active Directory (Azure AD) authentication using user tokens or service principals. Service principals take the place of API keys. |
| Scopes and permissions | All requests are at the enrollment scope. API key permission assignments determine whether data for the entire enrollment, a department, or a specific account is returned. No user authentication. | Users or service principals are assigned access to the enrollment, department, or account scope. |
The following information describes the differences between the older Reporting
## Migration checklist
- Familiarize yourself with the [Azure Resource Manager REST APIs](/rest/api/azure).
-- Determine which EA APIs you use and see which Azure Resource Manager APIs to move to at [EA API mapping to new Azure Resource Manager APIs](../costs/migrate-from-enterprise-reporting-to-azure-resource-manager-apis.md#ea-api-mapping-to-new-azure-resource-manager-apis).
-- Configure service authorization and authentication for the Azure Resource Manager APIs. For more information, see [Assign permission to ACM APIs](cost-management-api-permissions.md).
-- Test the APIs and then update any programming code to replace EA API calls with Azure Resource Manager API calls.
+- Determine which Enterprise Reporting APIs you use and see which Cost Management APIs to move to at [Enterprise Reporting API mapping to new Cost Management APIs](../costs/migrate-from-enterprise-reporting-to-azure-resource-manager-apis.md#ea-api-mapping-to-new-azure-resource-manager-apis).
+- Configure service authorization and authentication for the Cost Management APIs. For more information, see [Assign permission to ACM APIs](cost-management-api-permissions.md). A sketch of an authenticated call appears after this list.
+- Test the APIs and then update any programming code to replace Enterprise Reporting API calls with Cost Management API calls.
- Update error handling to use new error codes. Some considerations include:
- - Azure Resource Manager APIs have a timeout period of 60 seconds.
- - Azure Resource Manager APIs have rate limiting in place. This results in a `429 throttling error` if rates are exceeded. Build your solutions so that you don't make too many API calls in a short time period.
+ - Cost Management APIs have a timeout period of 60 seconds.
+ - Cost Management APIs have rate limiting in place. This results in a `429 throttling error` if rates are exceeded. Build your solutions so that you don't make too many API calls in a short time period.
- Review the other Cost Management APIs available through Azure Resource Manager and assess for use later. For more information, see [Use additional Cost Management APIs](../costs/migrate-from-enterprise-reporting-to-azure-resource-manager-apis.md#use-additional-cost-management-apis).
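As a minimal sketch of an authenticated call, the following request returns month-to-date daily costs at an EA billing account scope. The enrollment ID and bearer token are placeholders, and the `api-version` shown is illustrative; check the Cost Management Query API reference for the current version:

```http
POST https://management.azure.com/providers/Microsoft.Billing/billingAccounts/<enrollment-id>/providers/Microsoft.CostManagement/query?api-version=2021-10-01
Authorization: Bearer <service-principal-token>
Content-Type: application/json

{
  "type": "ActualCost",
  "timeframe": "MonthToDate",
  "dataset": {
    "granularity": "Daily",
    "aggregation": {
      "totalCost": {
        "name": "Cost",
        "function": "Sum"
      }
    }
  }
}
```

A `429` response indicates throttling; wait for the interval in the `Retry-After` header, when present, before retrying.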
-## EA API mapping to new Azure Resource Manager APIs
+## Enterprise Reporting API mapping to new Cost Management APIs
-Use the following information to identify the EA APIs that you currently use and the replacement Azure Resource Manager API to use instead.
+Use the following information to identify the Enterprise Reporting APIs that you currently use and the replacement Cost Management API to use instead.
-| Scenario | EA APIs | Azure Resource Manager APIs |
+| Scenario | Enterprise Reporting APIs | Cost Management APIs |
| --- | --- | --- |
| [Migrate from EA Usage Details APIs](migrate-ea-usage-details-api.md) | [/usagedetails/download](/rest/api/billing/enterprise/billing-enterprise-api-usage-detail)<br>[/usagedetails/submit](/rest/api/billing/enterprise/billing-enterprise-api-usage-detail)<br>[/usagedetails](/rest/api/billing/enterprise/billing-enterprise-api-usage-detail)<br>[/usagedetailsbycustomdate](/rest/api/billing/enterprise/billing-enterprise-api-usage-detail) | Use [Microsoft.CostManagement/Exports](/rest/api/cost-management/exports/create-or-update) for all recurring data ingestion workloads.<br>Use the [Cost Details](/rest/api/cost-management/generate-cost-details-report) report for small on-demand datasets. |
| [Migrate from EA Balance Summary APIs](migrate-ea-balance-summary-api.md) | [/balancesummary](/rest/api/billing/enterprise/billing-enterprise-api-balance-summary) | [Microsoft.Consumption/balances](/rest/api/consumption/balances/getbybillingaccount) |
Use the following information to identify the EA APIs that you currently use and
## Use additional Cost Management APIs
-After you've migrated to Azure Resource Manager APIs for your existing reporting scenarios, you can use many other APIs, too. The APIs are also available through Azure Resource Manager and can be automated using service principal-based authentication. Here's a quick summary of the new capabilities that you can use.
+After you've migrated to the Cost Management APIs for your existing reporting scenarios, you can use many other APIs, too. The APIs are also available through Azure Resource Manager and can be automated using service principal-based authentication. Here's a quick summary of the new capabilities that you can use.
- [Budgets](/rest/api/consumption/budgets/createorupdate) - Use to set thresholds to proactively monitor your costs, alert relevant stakeholders, and automate actions in response to threshold breaches.
- [Alerts](/rest/api/cost-management/alerts) - Use to view alert information including, but not limited to, budget alerts, invoice alerts, credit alerts, and quota alerts.
After you've migrated to Azure Resource Manager APIs for your existing reporting
## Next steps
- Familiarize yourself with the [Azure Resource Manager REST APIs](/rest/api/azure).
-- If needed, determine which EA APIs you use and see which Azure Resource Manager APIs to move to at [EA API mapping to new Azure Resource Manager APIs](../costs/migrate-from-enterprise-reporting-to-azure-resource-manager-apis.md#ea-api-mapping-to-new-azure-resource-manager-apis).
+- If needed, determine which Enterprise Reporting APIs you use and see which Cost Management APIs to move to at [Enterprise Reporting API mapping to new Cost Management APIs](../costs/migrate-from-enterprise-reporting-to-azure-resource-manager-apis.md#ea-api-mapping-to-new-azure-resource-manager-apis).
- If you're not already using Azure Resource Manager APIs, [register your client app with Azure AD](/rest/api/azure/#register-your-client-application-with-azure-ad).
-- If needed, update any of your programming code to use [Azure AD authentication](/rest/api/azure/#create-the-request) with your service principal.
+- If needed, update any of your programming code to use [Azure AD authentication](/rest/api/azure/#create-the-request) with your service principal.
cost-management-billing Enterprise Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/enterprise-api.md
Title: Azure Billing Enterprise APIs
-description: Learn about the Reporting APIs that enable Enterprise Azure customers to pull consumption data programmatically.
+ Title: Azure Enterprise Reporting APIs
+description: Learn about the Azure Enterprise Reporting APIs that enable customers to pull consumption data programmatically.
tags: billing
Last updated 09/15/2021
-# Overview of Reporting APIs for Enterprise customers
+# Overview of the Azure Enterprise Reporting APIs
> [!Note]
-> Microsoft no longer updates the Azure Billing - Enterprise Reporting APIs. Instead, you should use [Azure Consumption](/rest/api/consumption) APIs.
+> Microsoft no longer updates the Azure Enterprise Reporting APIs. Instead, you should use Cost Management APIs. To learn more, see [Migrate from Azure Enterprise Reporting to Microsoft Cost Management APIs overview](../automate/migrate-ea-reporting-arm-apis-overview.md).
-The Reporting APIs enable Enterprise Azure customers to programmatically pull consumption and billing data into preferred data analysis tools. Enterprise customers have signed an [Enterprise Agreement (EA)](https://azure.microsoft.com/pricing/enterprise-agreement/) with Azure to make negotiated Azure Prepayment (previously called monetary commitment) and gain access to custom pricing for Azure resources.
+The Azure Enterprise Reporting APIs enable Enterprise Azure customers to programmatically pull consumption and billing data into preferred data analysis tools. Enterprise customers have signed an [Enterprise Agreement (EA)](https://azure.microsoft.com/pricing/enterprise-agreement/) with Azure to make a negotiated Azure Prepayment (previously called monetary commitment) and to gain access to custom pricing for Azure resources.
All date and time parameters required by the APIs must be represented as combined Coordinated Universal Time (UTC) values, for example `2022-08-04T00:00:00Z`. Values returned by the APIs are shown in UTC format.
data-factory Continuous Integration Delivery Improvements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/continuous-integration-delivery-improvements.md
Follow these steps to get started:
    artifact: 'ArmTemplates'
    publishLocation: 'pipeline'
```
+> [!NOTE]
+> Node version 10.x is currently still supported but may be deprecated in the future. We highly recommend upgrading to the latest version.
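For example, a pipeline can pin a newer Node.js version with the standard `NodeTool` installer task before any npm steps run; the `16.x` value below is illustrative, so pick a currently supported release:

```yaml
# Install a maintained Node.js version before the npm build steps.
- task: NodeTool@0
  inputs:
    versionSpec: '16.x'
  displayName: 'Install Node.js'
```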
4. Enter your YAML code. We recommend that you use the YAML file as a starting point.
data-factory Data Flow Cast https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/data-flow-cast.md
To modify the data type for columns in your data flow, add columns to "Cast sett
**Type:** Choose the data type to cast your column to. If you pick "complex", you can then select "Define complex type" and define structures, arrays, and maps inside the expression builder.
> [!NOTE]
-> Support for complex data type casting from the Cast transformation is currently unavailable. Use a Derived Column transformation instead.
+> Support for complex data type casting from the Cast transformation is currently unavailable. Use a Derived Column transformation instead. In the Derived Column, type conversion errors always result in NULL and require explicit error handling using an Assert; for example, an expression like `toInteger(myStringColumn)` (an illustrative column name) yields NULL for values that can't be converted. The Cast transformation can automatically trap conversion errors using the "Assert type check" property.
**Format:** Some data types, like decimal and dates, will allow for additional formatting options.
data-factory Data Flow Date Time Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/data-flow-date-time-functions.md
Previously updated : 02/02/2022 Last updated : 08/03/2022 # Date and time functions in mapping data flow
data-factory Data Flow Derived Column https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/data-flow-derived-column.md
Previously updated : 12/10/2021 Last updated : 08/03/2022 # Derived column transformation in mapping data flow
data-factory Data Flow Exists https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/data-flow-exists.md
Previously updated : 03/22/2022 Last updated : 08/03/2022 # Exists transformation in mapping data flow
data-factory Data Flow Expression Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/data-flow-expression-functions.md
Previously updated : 07/19/2022 Last updated : 08/03/2022 # Expression functions in mapping data flow
data-factory Data Flow Expressions Usage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/data-flow-expressions-usage.md
Previously updated : 07/19/2022 Last updated : 08/03/2022 # Data transformation expression usage in mapping data flow
data-factory Data Flow External Call https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/data-flow-external-call.md
Previously updated : 05/03/2022 Last updated : 08/03/2022 # External call transformation in mapping data flows
data-factory Data Flow Filter https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/data-flow-filter.md
Previously updated : 09/09/2021 Last updated : 08/03/2022 # Filter transformation in mapping data flow
data-factory Data Flow Flatten https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/data-flow-flatten.md
Previously updated : 09/29/2021 Last updated : 08/03/2022 # Flatten transformation in mapping data flow
data-factory Data Flow Flowlet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/data-flow-flowlet.md
Previously updated : 11/11/2021 Last updated : 08/04/2022 # Flowlet transformation in mapping data flow
data-factory Data Flow Join https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/data-flow-join.md
Previously updated : 06/10/2022 Last updated : 08/04/2022 # Join transformation in mapping data flow
data-factory Data Flow Lookup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/data-flow-lookup.md
Previously updated : 09/09/2021 Last updated : 08/04/2022 # Lookup transformations in mapping data flow
data-factory Data Flow Map Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/data-flow-map-functions.md
Previously updated : 02/02/2022 Last updated : 08/04/2022 # Map functions in mapping data flow
data-factory Data Flow Metafunctions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/data-flow-metafunctions.md
Previously updated : 03/05/2022 Last updated : 08/04/2022 # Metafunctions in mapping data flow
data-factory Data Flow New Branch https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/data-flow-new-branch.md
Previously updated : 09/09/2021 Last updated : 08/04/2022 # Creating a new branch in mapping data flow
data-factory Data Flow Parse https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/data-flow-parse.md
Previously updated : 02/03/2022 Last updated : 08/04/2022 # Parse transformation in mapping data flow
data-factory Data Flow Pivot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/data-flow-pivot.md
Previously updated : 09/09/2021 Last updated : 08/04/2022 # Pivot transformation in mapping data flow
The pivot transformation requires three different inputs: group by columns, the
### Group by
Select which columns to aggregate the pivoted columns over. The output data will group all rows with the same group by values into one row. The aggregation done in the pivoted column will occur over each group.
data-lake-analytics Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-analytics/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Data Lake Analytics description: Lists Azure Policy Regulatory Compliance controls available for Azure Data Lake Analytics. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 08/01/2022 Last updated : 08/04/2022
data-lake-store Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-store/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Data Lake Storage Gen1 description: Lists Azure Policy Regulatory Compliance controls available for Azure Data Lake Storage Gen1. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 08/01/2022 Last updated : 08/04/2022
databox-online Azure Stack Edge Gpu 2203 Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-2203-release-notes.md
Previously updated : 04/14/2022 Last updated : 07/25/2022
The following release notes identify the critical open issues and the resolved issues for the 2203 release for your Azure Stack Edge devices. These release notes are applicable for Azure Stack Edge Pro GPU, Azure Stack Edge Pro R, and Azure Stack Edge Mini R devices. Features and issues that correspond to a specific model are called out wherever applicable.
-The release notes are continuously updated, and as critical issues requiring a workaround are discovered, they are added. Before you deploy your device, carefully review the information contained in the release notes.
+The release notes are continuously updated, and as critical issues requiring a workaround are discovered, they're added. Before you deploy your device, carefully review the information contained in the release notes.
-This article applies to the **Azure Stack Edge 2203** release, which maps to software version number **2.2.1902.4561**. This software can be applied to your device if you are running at least Azure Stack Edge 2106 (2.2.1636.3457) software.
+This article applies to the **Azure Stack Edge 2203** release, which maps to software version number **2.2.1902.4561**. This software can be applied to your device if you're running at least Azure Stack Edge 2106 (2.2.1636.3457) software.
## What's new
The 2203 release has the following features and enhancements:
The following table provides a summary of known issues in this release.
-| No. | Feature | Issue | Workaround/comments |
+|No. |Feature |Issue |Workaround/comments |
| --- | --- | --- | --- |
|**1.**|Preview features |For this release, the following features are available in preview: <br> - Clustering and Multi-Access Edge Computing (MEC) for Azure Stack Edge Pro GPU devices only. <br> - VPN for Azure Stack Edge Pro R and Azure Stack Edge Mini R only. <br> - Local Azure Resource Manager, VMs, Cloud management of VMs, Kubernetes cloud management, and Multi-process service (MPS) for Azure Stack Edge Pro GPU, Azure Stack Edge Pro R, and Azure Stack Edge Mini R. |These features will be generally available in later releases. |
-|**2.**|HPN VMs |For this release, the Standard_F12_HPN can only support one network interface and cannot be used for Multi-Access Edge Computing (MEC) de