Updates from: 04/24/2023 01:08:23
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Saml Service Provider https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/saml-service-provider.md
Previously updated : 12/16/2022 Last updated : 04/24/2023
The following example shows the `entityID` value in the SAML metadata:
The `identifierUris` property will accept URLs only on the domain `tenant-name.onmicrosoft.com`. ```json
-"identifierUris":"https://tenant-name.onmicrosoft.com",
+"identifierUris":"https://tenant-name.onmicrosoft.com/app-name",
``` #### Share the application's metadata with Azure AD B2C
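If you prefer to script the `identifierUris` change shown above, a minimal Azure CLI sketch follows; the app ID is a placeholder, and `az ad app update` is assumed to be run against the B2C tenant.

```azurecli
# Set the app registration's identifier URI (the value surfaced above as identifierUris).
az ad app update --id 00000000-0000-0000-0000-000000000000 \
    --identifier-uris "https://tenant-name.onmicrosoft.com/app-name"
```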
active-directory-b2c Tokens Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/tokens-overview.md
Previously updated : 03/09/2023 Last updated : 04/24/2023
Azure AD B2C supports the [OAuth 2.0 and OpenID Connect protocols](protocols-ove
The following tokens are used in communication with Azure AD B2C: -- **ID token** - A JWT that contains claims that you can use to identify users in your application. This token is securely sent in HTTP requests for communication between two components of the same application or service. You can use the claims in an ID token as you see fit. They're commonly used to display account information or to make access control decisions in an application. ID tokens are signed, but they're not encrypted. When your application or API receives an ID token, it must validate the signature to prove that the token is authentic. Your application or API must also validate a few claims in the token to prove that it's valid. Depending on the scenario requirements, the claims validated by an application can vary, but your application must perform some common claim validations in every scenario.
+- **ID token** - A JWT that contains claims that you can use to identify users in your application. This token is securely sent in HTTP requests for communication between two components of the same application or service. You can use the claims in an ID token as you see fit. They're commonly used to display account information or to make access control decisions in an application. The ID tokens issued by Azure AD B2C are signed, but they're not encrypted. When your application or API receives an ID token, it must validate the signature to prove that the token is authentic. Your application or API must also validate a few claims in the token to prove that it's valid. Depending on the scenario requirements, the claims validated by an application can vary, but your application must perform some common claim validations in every scenario.
- **Access token** - A JWT that contains claims that you can use to identify the granted permissions to your APIs. Access tokens are signed, but they aren't encrypted. Access tokens are used to provide access to APIs and resource servers. When your API receives an access token, it must validate the signature to prove that the token is authentic. Your API must also validate a few claims in the token to prove that it's valid. Depending on the scenario requirements, the claims validated by an application can vary, but your application must perform some common claim validations in every scenario.
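As a quick way to look at the claims discussed above, here is a minimal shell sketch that base64-decodes a token's payload for inspection only. It assumes `jq` is installed and `$TOKEN` holds the raw JWT; it does not replace the signature and claim validation your application must perform.

```bash
# Decode the payload (second segment) of a JWT so its claims can be inspected.
payload=$(printf '%s' "$TOKEN" | cut -d '.' -f2 | tr '_-' '/+')
# Restore the base64 padding that JWT encoding strips off.
case $(( ${#payload} % 4 )) in
  2) payload="${payload}==" ;;
  3) payload="${payload}=" ;;
esac
printf '%s\n' "$payload" | base64 -d | jq .
```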
active-directory How Manage User Assigned Managed Identities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/how-manage-user-assigned-managed-identities.md
description: Create user-assigned managed identities.
+editor:
Last updated 03/08/2022-+ zone_pivot_groups: identity-mi-methods
To create a user-assigned managed identity, your account needs the [Managed Iden
- **Resource group**: Choose a resource group to create the user-assigned managed identity in, or select **Create new** to create a new resource group. - **Region**: Choose a region to deploy the user-assigned managed identity, for example, **West US**. - **Name**: Enter the name for your user-assigned managed identity, for example, UAI1.
-
+ [!INCLUDE [ua-character-limit](~/includes/managed-identity-ua-character-limits.md)]
-
+ :::image type="content" source="media/how-manage-user-assigned-managed-identities/create-user-assigned-managed-identity-portal.png" alt-text="Screenshot that shows the Create User Assigned Managed Identity pane."::: 1. Select **Review + create** to review the changes.
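For reference, the same create operation can be scripted; a hedged Azure CLI sketch using the example values above (resource group, region, and identity name are placeholders):

```azurecli
# Create the resource group (if needed) and the user-assigned managed identity.
az group create --name myResourceGroup --location westus
az identity create --resource-group myResourceGroup --name UAI1 --location westus
```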
In some environments, administrators choose to limit who can manage user-assigne
1. A list of the user-assigned managed identities for your subscription is returned. Select the user-assigned managed identity that you want to manage. 1. Select **Access control (IAM)**. 1. Choose **Add role assignment**.
-
+ ![Screenshot that shows the user-assigned managed identity access control screen](media/how-manage-user-assigned-managed-identities/role-assign.png) 1. In the **Add role assignment** pane, choose the role to assign and choose **Next**.
In this article, you learn how to create, list, delete, or assign a role to a us
[!INCLUDE [azure-cli-prepare-your-environment-no-header.md](~/articles/reusable-content/azure-cli/azure-cli-prepare-your-environment-no-header.md)]
-> [!IMPORTANT]
-> To modify user permissions when you use an app service principal by using the CLI, you must provide the service principal more permissions in the Azure Active Directory Graph API because portions of the CLI perform GET requests against the Graph API. Otherwise, you might end up receiving an "Insufficient privileges to complete the operation" message. To do this step, go into the **App registration** in Azure AD, select your app, select **API permissions**, and scroll down and select **Azure Active Directory Graph**. From there, select **Application permissions**, and then add the appropriate permissions.
+> [!IMPORTANT]
+> To modify user permissions when you use an app service principal by using the CLI, you must provide the service principal more permissions in the Azure Active Directory Graph API because portions of the CLI perform GET requests against the Graph API. Otherwise, you might end up receiving an "Insufficient privileges to complete the operation" message. To do this step, go into the **App registration** in Azure AD, select your app, select **API permissions**, and scroll down and select **Azure Active Directory Graph**. From there, select **Application permissions**, and then add the appropriate permissions.
-## Create a user-assigned managed identity
+## Create a user-assigned managed identity
To create a user-assigned managed identity, your account needs the [Managed Identity Contributor](../../role-based-access-control/built-in-roles.md#managed-identity-contributor) role assignment.
To use Azure PowerShell locally for this article instead of using Cloud Shell:
Connect-AzAccount ```
-1. Install the [latest version of PowerShellGet](/powershell/scripting/gallery/installing-psget#for-systems-with-powershell-50-or-newer-you-can-install-the-latest-powershellget).
+1. Install the [latest version of PowerShellGet](/powershell/gallery/powershellget/install-powershellget).
```azurepowershell Install-Module -Name PowerShellGet -AllowPrerelease
Resource Manager templates help you deploy new or modified resources defined by
- Use a [custom template from Azure Marketplace](../../azure-resource-manager/templates/deploy-portal.md#deploy-resources-from-custom-template) to create a template from scratch or base it on an existing common or [quickstart template](https://azure.microsoft.com/resources/templates/). - Derive from an existing resource group by exporting a template. You can export them from either [the original deployment](../../azure-resource-manager/management/manage-resource-groups-portal.md#export-resource-groups-to-templates) or from the [current state of the deployment](../../azure-resource-manager/management/manage-resource-groups-portal.md#export-resource-groups-to-templates). - Use a local [JSON editor (such as VS Code)](../../azure-resource-manager/templates/quickstart-create-templates-use-the-portal.md), and then upload and deploy by using PowerShell or the Azure CLI.-- Use the Visual Studio [Azure Resource Group project](../../azure-resource-manager/templates/create-visual-studio-deployment-project.md) to create and deploy a template.
+- Use the Visual Studio [Azure Resource Group project](../../azure-resource-manager/templates/create-visual-studio-deployment-project.md) to create and deploy a template.
-## Create a user-assigned managed identity
+## Create a user-assigned managed identity
To create a user-assigned managed identity, your account needs the [Managed Identity Contributor](../../role-based-access-control/built-in-roles.md#managed-identity-contributor) role assignment.
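As a rough illustration of the "deploy by using the Azure CLI" option above, deploying a local template file might look like the following sketch; the file name and resource group are placeholders.

```azurecli
# Deploy an ARM template that defines the user-assigned managed identity.
az deployment group create \
    --resource-group myResourceGroup \
    --template-file ./template.json
# Template parameters, if any, can be passed with --parameters name=value.
```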
In this article, you learn how to create, list, and delete a user-assigned manag
az account get-access-token ```
-## Create a user-assigned managed identity
+## Create a user-assigned managed identity
To create a user-assigned managed identity, your account needs the [Managed Identity Contributor](../../role-based-access-control/built-in-roles.md#managed-identity-contributor) role assignment.
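For orientation, here is a hedged sketch of the list, show, and delete operations this article covers, using the Azure CLI; the identity and resource group names are placeholders.

```azurecli
# List, inspect, and delete user-assigned managed identities.
az identity list --resource-group myResourceGroup --output table
az identity show --resource-group myResourceGroup --name UAI1
az identity delete --resource-group myResourceGroup --name UAI1
```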
DELETE https://management.azure.com/subscriptions/80c696ff-5efa-4909-a64d-f1b616
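The REST call above needs a bearer token in the `Authorization` header; one hedged way to obtain and use it from a shell is sketched below. The resource ID is a placeholder, and the API version is an assumption based on the `Microsoft.ManagedIdentity` API versions used elsewhere in these articles.

```bash
# Acquire an ARM access token with the Azure CLI and call the DELETE endpoint with curl.
token=$(az account get-access-token --query accessToken --output tsv)
curl -X DELETE \
  -H "Authorization: Bearer $token" \
  "https://management.azure.com/<identity-resource-id>?api-version=2018-11-30"
```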
## Next steps For information on how to assign a user-assigned managed identity to an Azure VM or virtual machine scale set by using CURL, see:-- [Configure managed identities for Azure resources on an Azure VM using REST API calls](qs-configure-rest-vm.md#user-assigned-managed-identity)
+- [Configure managed identities for Azure resources on an Azure VM using REST API calls](qs-configure-rest-vm.md#user-assigned-managed-identity)
- [Configure managed identities for Azure resources on a virtual machine scale set using REST API calls](qs-configure-rest-vmss.md#user-assigned-managed-identity) Learn how to use [workload identity federation for managed identities](../develop/workload-identity-federation.md) to access Azure Active Directory (Azure AD) protected resources without managing secrets.
active-directory Tutorial Windows Vm Ua Arm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/tutorial-windows-vm-ua-arm.md
documentationcenter: ''
+editor:
na
Last updated 01/11/2022 -+ # Tutorial: Use a user-assigned managed identity on a Windows VM to access Azure Resource Manager
-This tutorial explains how to create a user-assigned identity, assign it to a Windows Virtual Machine (VM), and then use that identity to access the Azure Resource Manager API. Managed Service Identities are automatically managed by Azure. They enable authentication to services that support Azure AD authentication, without needing to embed credentials into your code.
+This tutorial explains how to create a user-assigned identity, assign it to a Windows Virtual Machine (VM), and then use that identity to access the Azure Resource Manager API. Managed Service Identities are automatically managed by Azure. They enable authentication to services that support Azure AD authentication, without needing to embed credentials into your code.
You learn how to: > [!div class="checklist"] > * Create a user-assigned managed identity > * Assign your user-assigned identity to your Windows VM
-> * Grant the user-assigned identity access to a Resource Group in Azure Resource Manager
-> * Get an access token using the user-assigned identity and use it to call Azure Resource Manager
+> * Grant the user-assigned identity access to a Resource Group in Azure Resource Manager
+> * Get an access token using the user-assigned identity and use it to call Azure Resource Manager
> * Read the properties of a Resource Group [!INCLUDE [az-powershell-update](../../../includes/updated-for-az.md)]
To use Azure PowerShell locally for this article (rather than using Cloud Shell)
Connect-AzAccount ```
-1. Install the [latest version of PowerShellGet](/powershell/scripting/gallery/installing-psget#for-systems-with-powershell-50-or-newer-you-can-install-the-latest-powershellget).
+1. Install the [latest version of PowerShellGet](/powershell/gallery/powershellget/install-powershellget).
```azurepowershell Install-Module -Name PowerShellGet -AllowPrerelease
$vm = Get-AzVM -ResourceGroupName myResourceGroup -Name myVM
Update-AzVM -ResourceGroupName TestRG -VM $vm -IdentityType "UserAssigned" -IdentityID "/subscriptions/<SUBSCRIPTIONID>/resourcegroups/myResourceGroupVM/providers/Microsoft.ManagedIdentity/userAssignedIdentities/ID1" ```
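An equivalent assignment with the Azure CLI might look like the following sketch; the VM name, resource group, and identity resource ID mirror the placeholder values above.

```azurecli
# Assign the existing user-assigned managed identity to the VM.
az vm identity assign \
    --resource-group myResourceGroup \
    --name myVM \
    --identities "/subscriptions/<SUBSCRIPTIONID>/resourcegroups/myResourceGroupVM/providers/Microsoft.ManagedIdentity/userAssignedIdentities/ID1"
```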
-## Grant access
+## Grant access
-This section shows how to grant your user-assigned identity access to a Resource Group in Azure Resource Manager. Managed identities for Azure resources provide identities that your code can use to request access tokens to authenticate to resource APIs that support Azure AD authentication. In this tutorial, your code will access the Azure Resource Manager API.
+This section shows how to grant your user-assigned identity access to a Resource Group in Azure Resource Manager. Managed identities for Azure resources provide identities that your code can use to request access tokens to authenticate to resource APIs that support Azure AD authentication. In this tutorial, your code will access the Azure Resource Manager API.
Before your code can access the API, you need to grant the identity access to a resource in Azure Resource Manager. In this case, the Resource Group in which the VM is contained. Update the value for `<SUBSCRIPTION ID>` as appropriate for your environment.
CanDelegate: False
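A hedged Azure CLI equivalent of this role assignment is sketched below; the Reader role matches the read-only scenario in this tutorial, and the identity name, resource groups, and subscription ID are placeholders taken from the earlier steps.

```azurecli
# Look up the identity's service principal and grant it Reader on the VM's resource group.
principalId=$(az identity show --resource-group myResourceGroupVM --name ID1 --query principalId --output tsv)
az role assignment create \
    --assignee "$principalId" \
    --role "Reader" \
    --scope "/subscriptions/<SUBSCRIPTION ID>/resourceGroups/myResourceGroup"
```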
## Access data
-### Get an access token
+### Get an access token
For the remainder of the tutorial, you will work from the VM we created earlier.

For the remainder of the tutorial, you will work from the VM we created earlier.
4. Now that you have created a **Remote Desktop Connection** with the virtual machine, open **PowerShell** in the remote session.
-5. Using PowerShellΓÇÖs `Invoke-WebRequest`, make a request to the local managed identities for Azure resources endpoint to get an access token for Azure Resource Manager. The `client_id` value is the value returned when you created the user-assigned managed identity.
+5. Using PowerShell's `Invoke-WebRequest`, make a request to the local managed identities for Azure resources endpoint to get an access token for Azure Resource Manager. The `client_id` value is the value returned when you created the user-assigned managed identity.
```azurepowershell $response = Invoke-WebRequest -Uri 'http://169.254.169.254/metadata/identity/oauth2/token?api-version=2018-02-01&client_id=af825a31-b0e0-471f-baea-96de555632f9&resource=https://management.azure.com/' -Method GET -Headers @{Metadata="true"}
active-directory Workload Identity Federation Create Trust User Assigned Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/workload-identities/workload-identity-federation-create-trust-user-assigned-managed-identity.md
Title: Create a trust relationship between a user-assigned managed identity and an external identity provider
-description: Set up a trust relationship between a user-assigned managed identity in Azure AD and an external identity provider. This allows a software workload outside of Azure to access Azure AD protected resources without using secrets or certificates.
+description: Set up a trust relationship between a user-assigned managed identity in Azure AD and an external identity provider. This allows a software workload outside of Azure to access Azure AD protected resources without using secrets or certificates.
zone_pivot_groups: identity-wif-mi-methods
This article describes how to manage a federated identity credential on a user-assigned managed identity in Azure Active Directory (Azure AD). The federated identity credential creates a trust relationship between a user-assigned managed identity and an external identity provider (IdP). Configuring a federated identity credential on a system-assigned managed identity isn't supported.
-After you configure your user-assigned managed identity to trust an external IdP, configure your external software workload to exchange a token from the external IdP for an access token from Microsoft identity platform. The external workload uses the access token to access Azure AD protected resources without needing to manage secrets (in supported scenarios). To learn more about the token exchange workflow, read about [workload identity federation](workload-identity-federation.md).
+After you configure your user-assigned managed identity to trust an external IdP, configure your external software workload to exchange a token from the external IdP for an access token from Microsoft identity platform. The external workload uses the access token to access Azure AD protected resources without needing to manage secrets (in supported scenarios). To learn more about the token exchange workflow, read about [workload identity federation](workload-identity-federation.md).
In this article, you learn how to create, list, and delete federated identity credentials on a user-assigned managed identity.
Use the following values from your Azure AD Managed Identity for your GitHub wor
- `AZURE_CLIENT_ID` the managed identity **Client ID** -- `AZURE_SUBSCRIPTION_ID` the **Subscription ID**.
+- `AZURE_SUBSCRIPTION_ID` the **Subscription ID**.
The following screenshot demonstrates how to copy the managed identity ID and subscription ID.
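If you'd rather copy these values with the CLI than from the portal, a hedged sketch follows; the identity and resource group names are placeholders, and the tenant ID query is included on the assumption that the workflow also needs it.

```azurecli
# AZURE_CLIENT_ID: the client ID of the user-assigned managed identity.
az identity show --resource-group myResourceGroup --name myIdentity --query clientId --output tsv
# AZURE_TENANT_ID and AZURE_SUBSCRIPTION_ID: from the current Azure CLI context.
az account show --query "{tenantId:tenantId, subscriptionId:id}" --output json
```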
For example, for a workflow triggered by a push to the tag named "v2":
on: push: # Sequence of patterns matched against refs/heads
- branches:
+ branches:
- main - 'mona/octocat' - 'releases/**' # Sequence of patterns matched against refs/tags
- tags:
+ tags:
- v2 - v1.* ```
For a workflow triggered by a pull request event, specify an **Entity type** of
Fill in the **Cluster issuer URL**, **Namespace**, **Service account name**, and **Name** fields: - **Cluster issuer URL** is the [OIDC issuer URL](../../aks/use-oidc-issuer.md) for the managed cluster or the [OIDC Issuer URL](https://azure.github.io/azure-workload-identity/docs/installation/self-managed-clusters/oidc-issuer.html) for a self-managed cluster.-- **Service account name** is the name of the Kubernetes service account, which provides an identity for processes that run in a Pod.
+- **Service account name** is the name of the Kubernetes service account, which provides an identity for processes that run in a Pod.
- **Namespace** is the service account namespace. - **Name** is the name of the federated credential, which can't be changed later.
To delete a specific federated identity credential, select the **Delete** icon f
Run the [az identity federated-credential create](/cli/azure/identity/federated-credential#az-identity-federated-credential-create) command to create a new federated identity credential on your user-assigned managed identity (specified by the name). Specify the *name*, *issuer*, *subject*, and other parameters. ```azurecli
-az login
+az login
-# set variables
+# set variables
location="centralus"
-subscription="{subscription-id}"
-rg="fic-test-rg"
+subscription="{subscription-id}"
+rg="fic-test-rg"
-# user assigned identity name
-uaId="fic-test-ua"
+# user assigned identity name
+uaId="fic-test-ua"
-# federated identity credential name
-ficId="fic-test-fic-name"
+# federated identity credential name
+ficId="fic-test-fic-name"
-# create prerequisites if required.
-# otherwise make sure that existing resources names are set in variables above
+# create prerequisites if required.
+# otherwise make sure that existing resources names are set in variables above
az account set --subscription $subscription az group create --location $location --name $rg
-az identity create --name $uaId --resource-group $rg --location $location --subscription $subscription
+az identity create --name $uaId --resource-group $rg --location $location --subscription $subscription
-# Create/update a federated identity credential
+# Create/update a federated identity credential
az identity federated-credential create --name $ficId --identity-name $uaId --resource-group $rg --issuer 'https://aks.azure.com/issuerGUID' --subject 'system:serviceaccount:ns:svcaccount' --audiences 'api://AzureADTokenExchange' ```
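The issuer used above (`https://aks.azure.com/issuerGUID`) is a placeholder; for an AKS cluster with the OIDC issuer enabled, the real value can be read as sketched below (cluster and resource group names are placeholders).

```azurecli
# Look up the OIDC issuer URL of an AKS cluster.
az aks show --resource-group myResourceGroup --name myAKSCluster \
    --query "oidcIssuerProfile.issuerUrl" --output tsv
```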
az identity federated-credential create --name $ficId --identity-name $uaId --re
Run the [az identity federated-credential list](/cli/azure/identity/federated-credential#az-identity-federated-credential-list) command to read all the federated identity credentials configured on a user-assigned managed identity: ```azurecli
-az login
+az login
-# Set variables
-rg="fic-test-rg"
+# Set variables
+rg="fic-test-rg"
-# User assigned identity name
-uaId="fic-test-ua"
+# User assigned identity name
+uaId="fic-test-ua"
-# Read all federated identity credentials assigned to the user-assigned managed identity
+# Read all federated identity credentials assigned to the user-assigned managed identity
az identity federated-credential list --identity-name $uaId --resource-group $rg ```
az identity federated-credential list --identity-name $uaId --resource-group $rg
Run the [az identity federated-credential show](/cli/azure/identity/federated-credential#az-identity-federated-credential-show) command to show a federated identity credential (by ID): ```azurecli
-az login
+az login
-# Set variables
-rg="fic-test-rg"
+# Set variables
+rg="fic-test-rg"
-# User assigned identity name
-uaId="fic-test-ua"
+# User assigned identity name
+uaId="fic-test-ua"
-# Federated identity credential name
-ficId="fic-test-fic-name"
+# Federated identity credential name
+ficId="fic-test-fic-name"
-# Show the federated identity credential
+# Show the federated identity credential
az identity federated-credential show --name $ficId --identity-name $uaId --resource-group $rg ```
az identity federated-credential show --name $ficId --identity-name $uaId --reso
Run the [az identity federated-credential delete](/cli/azure/identity/federated-credential#az-identity-federated-credential-delete) command to delete a federated identity credential under an existing user-assigned identity. ```azurecli
-az login
+az login
-# Set variables
-# in Linux shell remove $ from set variable statement
-$rg="fic-test-rg"
+# Set variables
+# in Linux shell remove $ from set variable statement
+$rg="fic-test-rg"
-# User assigned identity name
-$uaId="fic-test-ua"
+# User assigned identity name
+$uaId="fic-test-ua"
-# Federated identity credential name
-$ficId="fic-test-fic-name"
+# Federated identity credential name
+$ficId="fic-test-fic-name"
az identity federated-credential delete --name $ficId --identity-name $uaId --resource-group $rg ```
az identity federated-credential delete --name $ficId --identity-name $uaId --re
- To run the example scripts, you have two options: - Use [Azure Cloud Shell](../../cloud-shell/overview.md), which you can open by using the **Try It** button in the upper-right corner of code blocks. - Run scripts locally with Azure PowerShell, as described in the next section.-- [Create a user-assigned manged identity](../managed-identities-azure-resources/how-manage-user-assigned-managed-identities.md?pivots=identity-mi-methods-powershell#list-user-assigned-managed-identities-2)
+- [Create a user-assigned managed identity](../managed-identities-azure-resources/how-manage-user-assigned-managed-identities.md?pivots=identity-mi-methods-powershell#list-user-assigned-managed-identities-2)
- Find the name of the user-assigned managed identity, which you need in the following steps. ### Configure Azure PowerShell locally
To use Azure PowerShell locally for this article instead of using Cloud Shell:
Connect-AzAccount ```
-1. Install the [latest version of PowerShellGet](/powershell/scripting/gallery/installing-psget#for-systems-with-powershell-50-or-newer-you-can-install-the-latest-powershellget).
+1. Install the [latest version of PowerShellGet](/powershell/gallery/powershellget/install-powershellget).
```azurepowershell Install-Module -Name PowerShellGet -AllowPrerelease
Federated identity credential and parent user assigned identity can be created o
All of the template parameters are mandatory.
-There's a limit of 3-120 characters for a federated identity credential name length. It must be alphanumeric, dash, underscore. First symbol is alphanumeric only.
+A federated identity credential name must be 3-120 characters long, can contain only alphanumeric characters, dashes, and underscores, and must begin with an alphanumeric character.
-You must add exactly one audience to a federated identity credential. The audience is verified during token exchange. Use ΓÇ£api://AzureADTokenExchangeΓÇ¥ as the default value.
+You must add exactly one audience to a federated identity credential. The audience is verified during token exchange. Use "api://AzureADTokenExchange" as the default value.
List, Get, and Delete operations aren't available with templates. Refer to the Azure CLI for these operations. By default, all child federated identity credentials are created in parallel, which triggers concurrency detection logic and causes the deployment to fail with a 409-conflict HTTP status code. To create them sequentially, specify a chain of dependencies using the *dependsOn* property. Make sure that any kind of automation creates federated identity credentials under the same parent identity sequentially. Federated identity credentials under different managed identities can be created in parallel without any restrictions. ```json
{
    "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
    "contentVersion": "1.0.0.0",
    "variables": {},
    "parameters": {
        "location": {
            "type": "string",
            "defaultValue": "westcentralus",
            "metadata": {
                "description": "Location for identities resources. FIC should be enabled in this region."
            }
        },
        "userAssignedIdentityName": {
            "type": "string",
            "defaultValue": "FIC_UA",
            "metadata": {
                "description": "Name of the User Assigned identity (parent identity)"
            }
        },
        "federatedIdentityCredential": {
            "type": "string",
            "defaultValue": "testCredential",
            "metadata": {
                "description": "Name of the Federated Identity Credential"
            }
        },
        "federatedIdentityCredentialIssuer": {
            "type": "string",
            "defaultValue": "https://aks.azure.com/issuerGUID",
            "metadata": {
                "description": "Federated Identity Credential token issuer"
            }
        },
        "federatedIdentityCredentialSubject": {
            "type": "string",
            "defaultValue": "system:serviceaccount:ns:svcaccount",
            "metadata": {
                "description": "Federated Identity Credential token subject"
            }
        },
        "federatedIdentityCredentialAudience": {
            "type": "string",
            "defaultValue": "api://AzureADTokenExchange",
            "metadata": {
                "description": "Federated Identity Credential audience. Only a single value is supported."
            }
        }
    },
    "resources": [
        {
            "type": "Microsoft.ManagedIdentity/userAssignedIdentities",
            "apiVersion": "2018-11-30",
            "name": "[parameters('userAssignedIdentityName')]",
            "location": "[parameters('location')]",
            "tags": {
                "firstTag": "ficTest"
            },
            "resources": [
                {
                    "type": "Microsoft.ManagedIdentity/userAssignedIdentities/federatedIdentityCredentials",
                    "apiVersion": "2022-01-31-PREVIEW",
                    "name": "[concat(parameters('userAssignedIdentityName'), '/', parameters('federatedIdentityCredential'))]",
                    "dependsOn": [
                        "[resourceId('Microsoft.ManagedIdentity/userAssignedIdentities', parameters('userAssignedIdentityName'))]"
                    ],
                    "properties": {
                        "issuer": "[parameters('federatedIdentityCredentialIssuer')]",
                        "subject": "[parameters('federatedIdentityCredentialSubject')]",
                        "audiences": [
                            "[parameters('federatedIdentityCredentialAudience')]"
                        ]
                    }
                }
            ]
        }
    ]
}
``` ::: zone-end
active-directory Workload Identity Federation Create Trust https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/workload-identities/workload-identity-federation-create-trust.md
Title: Create a trust relationship between an app and an external identity provider
-description: Set up a trust relationship between an app in Azure AD and an external identity provider. This allows a software workload outside of Azure to access Azure AD protected resources without using secrets or certificates.
+description: Set up a trust relationship between an app in Azure AD and an external identity provider. This allows a software workload outside of Azure to access Azure AD protected resources without using secrets or certificates.
zone_pivot_groups: identity-wif-apps-methods
This article describes how to manage a federated identity credential on an application in Azure Active Directory (Azure AD). The federated identity credential creates a trust relationship between an application and an external identity provider (IdP).
-You can then configure an external software workload to exchange a token from the external IdP for an access token from Microsoft identity platform. The external workload can access Azure AD protected resources without needing to manage secrets (in supported scenarios). To learn more about the token exchange workflow, read about [workload identity federation](workload-identity-federation.md).
+You can then configure an external software workload to exchange a token from the external IdP for an access token from Microsoft identity platform. The external workload can access Azure AD protected resources without needing to manage secrets (in supported scenarios). To learn more about the token exchange workflow, read about [workload identity federation](workload-identity-federation.md).
In this article, you learn how to create, list, and delete federated identity credentials on an application in Azure AD.
To learn more about supported regions, time to propagate federated credential up
::: zone pivot="identity-wif-apps-methods-azp" ## Prerequisites
-[Create an app registration](/azure/active-directory/develop/quickstart-register-app) in Azure AD. Grant your app access to the Azure resources targeted by your external software workload.
+[Create an app registration](/azure/active-directory/develop/quickstart-register-app) in Azure AD. Grant your app access to the Azure resources targeted by your external software workload.
Find the object ID of the app (not the application (client) ID), which you need in the following steps. You can find the object ID of the app in the Azure portal. Go to the list of [registered applications](https://portal.azure.com/#blade/Microsoft_AAD_IAM/ActiveDirectoryMenuBlade/RegisteredApps) in the Azure portal and select your app registration. In **Overview**->**Essentials**, find the **Object ID**.
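A hedged CLI alternative for looking up the object ID is sketched below; the application (client) ID is a placeholder, and depending on the CLI version the property may be exposed as `id` or `objectId`.

```azurecli
# Resolve the app registration's object ID from its application (client) ID.
az ad app show --id 00000000-0000-0000-0000-000000000000 --query id --output tsv
```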
To add a federated identity for GitHub actions, follow these steps:
1. In the **Federated credential scenario** drop-down box, select **GitHub actions deploying Azure resources**.
-1. Specify the **Organization** and **Repository** for your GitHub Actions workflow.
+1. Specify the **Organization** and **Repository** for your GitHub Actions workflow.
1. For **Entity type**, select **Environment**, **Branch**, **Pull request**, or **Tag** and specify the value. The values must exactly match the configuration in the [GitHub workflow](https://docs.github.com/actions/using-workflows/workflow-syntax-for-github-actions#on). Pattern matching isn't supported for branches and tags. Specify an environment if your on-push workflow runs against many branches or tags. For more info, read the [examples](#entity-type-examples).
To add a federated identity for GitHub actions, follow these steps:
Use the following values from your Azure AD application registration for your GitHub workflow: -- `AZURE_CLIENT_ID` the **Application (client) ID**
+- `AZURE_CLIENT_ID` the **Application (client) ID**
- `AZURE_TENANT_ID` the **Directory (tenant) ID**
-
+ The following screenshot demonstrates how to copy the application ID and tenant ID. ![Screenshot that demonstrates how to copy the application ID and tenant ID from Microsoft Entra portal.](./media/workload-identity-federation-create-trust/copy-client-id.png)
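For orientation, the sketch below shows one way a GitHub Actions job could combine these two values with the repository's OIDC token to sign in without a secret. This is an assumption about a hand-rolled setup (most workflows use the `azure/login` action instead); it requires `permissions: id-token: write` on the job, and the `ACTIONS_ID_TOKEN_REQUEST_*` variables are injected by GitHub.

```bash
# Request an OIDC token from GitHub for the audience Azure AD expects,
# then exchange it for an Azure CLI session via workload identity federation.
github_token=$(curl -s \
  -H "Authorization: bearer $ACTIONS_ID_TOKEN_REQUEST_TOKEN" \
  "$ACTIONS_ID_TOKEN_REQUEST_URL&audience=api://AzureADTokenExchange" | jq -r '.value')

az login --service-principal \
  --username "$AZURE_CLIENT_ID" \
  --tenant "$AZURE_TENANT_ID" \
  --federated-token "$github_token"
```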
For example, for a workflow triggered by a push to the tag named "v2":
on: push: # Sequence of patterns matched against refs/heads
- branches:
+ branches:
- main - 'mona/octocat' - 'releases/**' # Sequence of patterns matched against refs/tags
- tags:
+ tags:
- v2 - v1.* ```
Select the **Kubernetes accessing Azure resources** scenario from the dropdown m
Fill in the **Cluster issuer URL**, **Namespace**, **Service account name**, and **Name** fields: - **Cluster issuer URL** is the [OIDC issuer URL](../../aks/use-oidc-issuer.md) for the managed cluster or the [OIDC Issuer URL](https://azure.github.io/azure-workload-identity/docs/installation/self-managed-clusters/oidc-issuer.html) for a self-managed cluster.-- **Service account name** is the name of the Kubernetes service account, which provides an identity for processes that run in a Pod.
+- **Service account name** is the name of the Kubernetes service account, which provides an identity for processes that run in a Pod.
- **Namespace** is the service account namespace. - **Name** is the name of the federated credential, which can't be changed later.
To delete a federated identity credential, select the **Delete** icon for the cr
## Configure a federated identity credential on an app
-Run the [az ad app federated-credential create](/cli/azure/ad/app/federated-credential) command to create a new federated identity credential on your app.
+Run the [az ad app federated-credential create](/cli/azure/ad/app/federated-credential) command to create a new federated identity credential on your app.
The `id` parameter specifies the identifier URI, application ID, or object ID of the application. The `parameters` parameter specifies the parameters, in JSON format, for creating the federated identity credential.
The `id` parameter specifies the identifier URI, application ID, or object ID of
The *name* specifies the name of your federated identity credential.
-The *issuer* identifies the path to the GitHub OIDC provider: `https://token.actions.githubusercontent.com/`. This issuer will become trusted by your Azure application.
+The *issuer* identifies the path to the GitHub OIDC provider: `https://token.actions.githubusercontent.com/`. This issuer will become trusted by your Azure application.
*subject* identifies the GitHub organization, repo, and environment for your GitHub Actions workflow. When the GitHub Actions workflow requests Microsoft identity platform to exchange a GitHub token for an access token, the values in the federated identity credential are checked against the provided GitHub token. Before Azure will grant an access token, the request must match the conditions defined here. - For Jobs tied to an environment: `repo:< Organization/Repository >:environment:< Name >`
az ad app federated-credential create --id f6475511-fd81-4965-a00e-41e7792b7b9c
### Kubernetes example
-*issuer* is your service account issuer URL (the [OIDC issuer URL](../../aks/use-oidc-issuer.md) for the managed cluster or the [OIDC Issuer URL](https://azure.github.io/azure-workload-identity/docs/installation/self-managed-clusters/oidc-issuer.html) for a self-managed cluster).
+*issuer* is your service account issuer URL (the [OIDC issuer URL](../../aks/use-oidc-issuer.md) for the managed cluster or the [OIDC Issuer URL](https://azure.github.io/azure-workload-identity/docs/installation/self-managed-clusters/oidc-issuer.html) for a self-managed cluster).
*subject* is the subject name in the tokens issued to the service account. Kubernetes uses the following format for subject names: `system:serviceaccount:<SERVICE_ACCOUNT_NAMESPACE>:<SERVICE_ACCOUNT_NAME>`.
To use Azure PowerShell locally for this article instead of using Cloud Shell:
Connect-AzAccount ```
-1. Install the [latest version of PowerShellGet](/powershell/scripting/gallery/installing-psget#for-systems-with-powershell-50-or-newer-you-can-install-the-latest-powershellget).
+1. Install the [latest version of PowerShellGet](/powershell/gallery/powershellget/install-powershellget).
```azurepowershell Install-Module -Name PowerShellGet -AllowPrerelease
Run the [New-AzADAppFederatedCredential](/powershell/module/az.resources/new-aza
### GitHub Actions example - *ApplicationObjectId*: the object ID of the app (not the application (client) ID) you previously registered in Azure AD.-- *Issuer* identifies GitHub as the external token issuer.
+- *Issuer* identifies GitHub as the external token issuer.
- *Subject* identifies the GitHub organization, repo, and environment for your GitHub Actions workflow. When the GitHub Actions workflow requests Microsoft identity platform to exchange a GitHub token for an access token, the values in the federated identity credential are checked against the provided GitHub token. - For Jobs tied to an environment: `repo:< Organization/Repository >:environment:< Name >` - For Jobs not tied to an environment, include the ref path for branch/tag based on the ref path used for triggering the workflow: `repo:< Organization/Repository >:ref:< ref path>`. For example, `repo:n-username/ node_express:ref:refs/heads/my-branch` or `repo:n-username/ node_express:ref:refs/tags/my-tag`.
New-AzADAppFederatedCredential -ApplicationObjectId $appObjectId -Audience api:/
### Kubernetes example - *ApplicationObjectId*: the object ID of the app (not the application (client) ID) you previously registered in Azure AD.-- *Issuer* is your service account issuer URL (the [OIDC issuer URL](../../aks/use-oidc-issuer.md) for the managed cluster or the [OIDC Issuer URL](https://azure.github.io/azure-workload-identity/docs/installation/self-managed-clusters/oidc-issuer.html) for a self-managed cluster).
+- *Issuer* is your service account issuer URL (the [OIDC issuer URL](../../aks/use-oidc-issuer.md) for the managed cluster or the [OIDC Issuer URL](https://azure.github.io/azure-workload-identity/docs/installation/self-managed-clusters/oidc-issuer.html) for a self-managed cluster).
- *Subject* is the subject name in the tokens issued to the service account. Kubernetes uses the following format for subject names: `system:serviceaccount:<SERVICE_ACCOUNT_NAMESPACE>:<SERVICE_ACCOUNT_NAME>`. - *Name* is the name of the federated credential, which can't be changed later. - *Audience* lists the audiences that can appear in the `aud` claim of the external token.
Remove-AzADAppFederatedCredential -ApplicationObjectId $appObjectId -FederatedCr
::: zone pivot="identity-wif-apps-methods-rest" ## Prerequisites
-[Create an app registration](/azure/active-directory/develop/quickstart-register-app) in Azure AD. Grant your app access to the Azure resources targeted by your external software workload.
+[Create an app registration](/azure/active-directory/develop/quickstart-register-app) in Azure AD. Grant your app access to the Azure resources targeted by your external software workload.
Find the object ID of the app (not the application (client) ID), which you need in the following steps. You can find the object ID of the app in the Azure portal. Go to the list of [registered applications](https://portal.azure.com/#blade/Microsoft_AAD_IAM/ActiveDirectoryMenuBlade/RegisteredApps) in the Azure portal and select your app registration. In **Overview**->**Essentials**, find the **Object ID**.
The Microsoft Graph endpoint (`https://graph.microsoft.com`) exposes REST APIs t
## Configure a federated identity credential on an app
-### GitHub Actions
+### GitHub Actions
Run the following method to [create a new federated identity credential](/graph/api/application-post-federatedidentitycredentials) on your app (specified by the object ID of the app). The *issuer* identifies GitHub as the external token issuer. *subject* identifies the GitHub organization, repo, and environment for your GitHub Actions workflow. When the GitHub Actions workflow requests Microsoft identity platform to exchange a GitHub token for an access token, the values in the federated identity credential are checked against the provided GitHub token. ```azurecli
-az rest --method POST --uri 'https://graph.microsoft.com/applications/f6475511-fd81-4965-a00e-41e7792b7b9c/federatedIdentityCredentials' --body '{"name":"Testing","issuer":"https://token.actions.githubusercontent.com","subject":"repo:octo-org/octo-repo:environment:Production","description":"Testing","audiences":["api://AzureADTokenExchange"]}'
+az rest --method POST --uri 'https://graph.microsoft.com/applications/f6475511-fd81-4965-a00e-41e7792b7b9c/federatedIdentityCredentials' --body '{"name":"Testing","issuer":"https://token.actions.githubusercontent.com","subject":"repo:octo-org/octo-repo:environment:Production","description":"Testing","audiences":["api://AzureADTokenExchange"]}'
``` And you get the response:
And you get the response:
Run the following method to configure a federated identity credential on an app and create a trust relationship with a Kubernetes service account. Specify the following parameters: -- *issuer* is your service account issuer URL (the [OIDC issuer URL](../../aks/use-oidc-issuer.md) for the managed cluster or the [OIDC Issuer URL](https://azure.github.io/azure-workload-identity/docs/installation/self-managed-clusters/oidc-issuer.html) for a self-managed cluster).
+- *issuer* is your service account issuer URL (the [OIDC issuer URL](../../aks/use-oidc-issuer.md) for the managed cluster or the [OIDC Issuer URL](https://azure.github.io/azure-workload-identity/docs/installation/self-managed-clusters/oidc-issuer.html) for a self-managed cluster).
- *subject* is the subject name in the tokens issued to the service account. Kubernetes uses the following format for subject names: `system:serviceaccount:<SERVICE_ACCOUNT_NAMESPACE>:<SERVICE_ACCOUNT_NAME>`. - *name* is the name of the federated credential, which can't be changed later. - *audiences* lists the audiences that can appear in the external token. This field is mandatory. The recommended value is "api://AzureADTokenExchange". ```azurecli
-az rest --method POST --uri 'https://graph.microsoft.com/applications/f6475511-fd81-4965-a00e-41e7792b7b9c/federatedIdentityCredentials' --body '{"name":"Kubernetes-federated-credential","issuer":"https://aksoicwesteurope.blob.core.windows.net/9d80a3e1-2a87-46ea-ab16-e629589c541c/","subject":"system:serviceaccount:erp8asle:pod-identity-sa","description":"Kubernetes service account federated credential","audiences":["api://AzureADTokenExchange"]}'
+az rest --method POST --uri 'https://graph.microsoft.com/applications/f6475511-fd81-4965-a00e-41e7792b7b9c/federatedIdentityCredentials' --body '{"name":"Kubernetes-federated-credential","issuer":"https://aksoicwesteurope.blob.core.windows.net/9d80a3e1-2a87-46ea-ab16-e629589c541c/","subject":"system:serviceaccount:erp8asle:pod-identity-sa","description":"Kubernetes service account federated credential","audiences":["api://AzureADTokenExchange"]}'
``` And you get the response:
And you get the response:
Run the following method to [list the federated identity credential(s)](/graph/api/application-list-federatedidentitycredentials) for an app (specified by the object ID of the app): ```azurecli
-az rest -m GET -u 'https://graph.microsoft.com/applications/f6475511-fd81-4965-a00e-41e7792b7b9c/federatedIdentityCredentials'
+az rest -m GET -u 'https://graph.microsoft.com/applications/f6475511-fd81-4965-a00e-41e7792b7b9c/federatedIdentityCredentials'
``` And you get a response similar to:
And you get a response similar to:
Run the following method to [get a federated identity credential](/graph/api/federatedidentitycredential-get) for an app (specified by the object ID of the app): ```azurecli
-az rest -m GET -u 'https://graph.microsoft.com/applications/f6475511-fd81-4965-a00e-41e7792b7b9c//federatedIdentityCredentials/1aa3e6a7-464c-4cd2-88d3-90db98132755'
+az rest -m GET -u 'https://graph.microsoft.com/applications/f6475511-fd81-4965-a00e-41e7792b7b9c//federatedIdentityCredentials/1aa3e6a7-464c-4cd2-88d3-90db98132755'
``` And you get a response similar to:
And you get a response similar to:
"issuer": "https://token.actions.githubusercontent.com/", "name": "Testing", "subject": "repo:octo-org/octo-repo:environment:Production"
- }
+ }
} ```
And you get a response similar to:
Run the following method to [delete a federated identity credential](/graph/api/federatedidentitycredential-delete) from an app (specified by the object ID of the app): ```azurecli
-az rest -m DELETE -u 'https://graph.microsoft.com/applications/f6475511-fd81-4965-a00e-41e7792b7b9c/federatedIdentityCredentials/1aa3e6a7-464c-4cd2-88d3-90db98132755'
+az rest -m DELETE -u 'https://graph.microsoft.com/applications/f6475511-fd81-4965-a00e-41e7792b7b9c/federatedIdentityCredentials/1aa3e6a7-464c-4cd2-88d3-90db98132755'
``` ::: zone-end ## Next steps-- To learn how to use workload identity federation for Kubernetes, see [Azure AD Workload Identity for Kubernetes](https://azure.github.io/azure-workload-identity/docs/quick-start.html) open source project.
+- To learn how to use workload identity federation for Kubernetes, see [Azure AD Workload Identity for Kubernetes](https://azure.github.io/azure-workload-identity/docs/quick-start.html) open source project.
- To learn how to use workload identity federation for GitHub Actions, see [Configure a GitHub Actions workflow to get an access token](/azure/developer/github/connect-from-azure). - Read the [GitHub Actions documentation](https://docs.github.com/actions/deployment/security-hardening-your-deployments/configuring-openid-connect-in-azure) to learn more about configuring your GitHub Actions workflow to get an access token from Microsoft identity provider and access Azure resources. - For more information, read about how Azure AD uses the [OAuth 2.0 client credentials grant](/azure/active-directory/develop/v2-oauth2-client-creds-grant-flow#third-case-access-token-request-with-a-federated-credential) and a client assertion issued by another IdP to get a token.
application-gateway Migrate V1 V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/migrate-v1-v2.md
To determine if you have the Azure Az modules installed, run `Get-InstalledModul
### Install using the Install-Script method To use this option, you must not have the Azure Az modules installed on your computer. If they're installed, the following command displays an error. You can either uninstall the Azure Az modules, or use the other option to download the script manually and run it.
-
+ Run the script with the following command to get the latest version: `Install-Script -Name AzureAppGWMigration -Force`
-This command also installs the required Az modules.
+This command also installs the required Az modules.
### Install using the script directly
-If you do have some Azure Az modules installed and can't uninstall them (or don't want to uninstall them), you can manually download the script using the **Manual Download** tab in the script download link. The script is downloaded as a raw nupkg file. To install the script from this nupkg file, see [Manual Package Download](/powershell/scripting/gallery/how-to/working-with-packages/manual-download).
+If you do have some Azure Az modules installed and can't uninstall them (or don't want to uninstall them), you can manually download the script using the **Manual Download** tab in the script download link. The script is downloaded as a raw nupkg file. To install the script from this nupkg file, see [Manual Package Download](/powershell/gallery/how-to/working-with-packages/manual-download).
To run the script:
To run the script:
You can also run the following Azure PowerShell commands to get the Resource ID: ```azurepowershell
- $appgw = Get-AzApplicationGateway -Name <v1 gateway name> -ResourceGroupName <resource group Name>
+ $appgw = Get-AzApplicationGateway -Name <v1 gateway name> -ResourceGroupName <resource group Name>
$appgw.Id ```
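If the Azure CLI is more convenient for this lookup, a hedged equivalent follows (gateway and resource group names are placeholders):

```azurecli
# Get the resource ID of the v1 application gateway.
az network application-gateway show \
    --resource-group myResourceGroup \
    --name myV1AppGateway \
    --query id --output tsv
```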
To run the script:
This parameter is only optional if you don't have HTTPS listeners configured for your v1 gateway or WAF. If you have at least one HTTPS listener setup, you must specify this parameter.
- ```azurepowershell
+ ```azurepowershell
$password = ConvertTo-SecureString <cert-password> -AsPlainText -Force $mySslCert1 = New-AzApplicationGatewaySslCertificate -Name "Cert01" ` -CertificateFile <Cert-File-Path-1> `
- -Password $password
+ -Password $password
$mySslCert2 = New-AzApplicationGatewaySslCertificate -Name "Cert02" ` -CertificateFile <Cert-File-Path-2> ` -Password $password
To run the script:
You can pass in `$mySslCert1, $mySslCert2` (comma-separated) in the previous example as values for this parameter in the script. * **trustedRootCertificates: [PSApplicationGatewayTrustedRootCertificate]: Optional**. A comma-separated list of PSApplicationGatewayTrustedRootCertificate objects that you create to represent the [Trusted Root certificates](ssl-overview.md) for authentication of your backend instances from your v2 gateway.
-
+ ```azurepowershell $certFilePath = ".\rootCA.cer" $trustedCert = New-AzApplicationGatewayTrustedRootCertificate -Name "trustedCert1" -CertificateFile $certFilePath
To run the script:
First, double check that the script successfully created a new v2 gateway with the exact configuration migrated over from your v1 gateway. You can verify this from the Azure portal. Also, send a small amount of traffic through the v2 gateway as a manual test.
-
+ Here are a few scenarios where your current application gateway (Standard) may receive client traffic, and our recommendations for each one: * **A custom DNS zone (for example, contoso.com) that points to the frontend IP address (using an A record) associated with your Standard v1 or WAF v1 gateway**.
The pricing models are different for the Application Gateway v1 and v2 SKUs. Ple
Yes. See [Caveats/Limitations](#caveatslimitations).
-### Is this article and the Azure PowerShell script applicable for Application Gateway WAF product as well?
+### Is this article and the Azure PowerShell script applicable for Application Gateway WAF product as well?
Yes.
No. The script doesn't replicate this configuration for v2. You must add the lo
No. Currently the script doesn't support certificates in KeyVault. However, this is being considered for a future version. ### I ran into some issues with using this script. How can I get help?
-
+ You can contact Azure Support under the topic "Configuration and Setup/Migrate to V2 SKU". Learn more about [Azure support here](https://azure.microsoft.com/support/options/). ## Next steps
automation Automation Runbook Gallery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-runbook-gallery.md
Rather than creating your own runbooks and modules in Azure Automation, you can
> [!NOTE] >- The **Browse gallery** option in the Azure portal has an enhanced user experience. >- In **Process Automation** > **Runbook** blade, you can import runbooks either by **Import a runbook** or **Browse gallery** option and the **Runbooks** page displays two new columns - **Runtime version** and **Runbook type**.
-
+ 1. In the Azure portal, open your Automation account. 1. Select **Runbooks** blade under **Process Automation**. 1. Click **Import a runbook** in the **Runbooks** page.
Rather than creating your own runbooks and modules in Azure Automation, you can
1. Select the file. 1. Enter the **Name**, **Runtime version**, and **Description**. 1. Click **Import**.
-
+ :::image type="content" source="./media/automation-runbook-gallery/import-runbook-upload-runbook-file.png" alt-text="Screenshot of selecting a runbook from file or gallery."::: 1. Alternatively, select **Browse Gallery** in the **Runbooks** page to browse the available runbooks.
Rather than creating your own runbooks and modules in Azure Automation, you can
:::image type="content" source="./media/automation-runbook-gallery/browse-gallery-github.png" alt-text="Browsing runbook gallery." lightbox="./media/automation-runbook-gallery/browse-gallery-github-expanded.png":::
-1. Click **Select** to select a chosen runbook.
+1. Click **Select** to choose a runbook.
1. In the **Import a runbook** page, enter the **Name** and select the **Runtime versions**. 1. The **Runbook type** and **Description** are auto populated. 1. Click **Import**.
Rather than creating your own runbooks and modules in Azure Automation, you can
7. The runbook appears on the **Runbooks** tab for the Automation account.
-
+ ## Runbooks in the PowerShell Gallery > [!IMPORTANT]
PowerShell modules contain cmdlets that you can use in your runbooks. Existing m
You can also find modules to import in the Azure portal. They're listed for your Automation Account in the **Modules** under **Shared resources**.
-> [!IMPORTANT]
+> [!IMPORTANT]
> Do not include the keyword "AzureRm" in any script designed to be executed with the Az module. Inclusion of the keyword, even in a comment, may cause the AzureRm to load and then conflict with the Az module. ## Common scenarios available in the PowerShell Gallery
You can add new PowerShell or Python runbooks to the Runbook gallery with this G
>[!NOTE] >Check out existing runbooks in the gallery for things like formatting, headers, and existing tags that you might use (like `Azure Automation` or `Linux Azure Virtual Machines`).
-To suggest changes to an existing runbook, file a pull request against it.
+To suggest changes to an existing runbook, file a pull request against it.
If you decide to clone and edit an existing runbook, best practice is to give it a different name. If you re-use the old name, it will show up twice in the Runbook gallery listing.
If you decide to clone and edit an existing runbook, best practice is to give it
### Add a PowerShell runbook to the PowerShell gallery
-Microsoft encourages you to add runbooks to the PowerShell Gallery that you think would be useful to other customers. The PowerShell Gallery accepts PowerShell modules and PowerShell scripts. You can add a runbook by [uploading it to the PowerShell Gallery](/powershell/scripting/gallery/how-to/publishing-packages/publishing-a-package).
+Microsoft encourages you to add runbooks to the PowerShell Gallery that you think would be useful to other customers. The PowerShell Gallery accepts PowerShell modules and PowerShell scripts. You can add a runbook by [uploading it to the PowerShell Gallery](/powershell/gallery/how-to/publishing-packages/publishing-a-package).
## Import a module from the Modules gallery in the Azure portal
automation Update Agent Issues Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/troubleshoot/update-agent-issues-linux.md
To fix this issue, you must start the OMS Agent service by using the following c
To validate, you can perform a process check by using the following command: ```
-process_name = "omsagent"
-ps aux | grep %s | grep -v grep" % (process_name)
+process_name="omsagent"
+ps aux | grep "$process_name" | grep -v grep
``` For more information, see [Troubleshoot issues with the Log Analytics agent for Linux](../../azure-monitor/agents/agent-linux-troubleshoot.md)
azure-monitor Agents Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/agents-overview.md
View [supported operating systems for Azure Arc Connected Machine agent](../../a
| Operating system | Azure Monitor agent | Log Analytics agent (legacy) | Diagnostics extension | |:|::|::|::|
-| Windows Server 2022 | X | | |
+| Windows Server 2022 | X | X | |
| Windows Server 2022 Core | X | | | | Windows Server 2019 | X | X | X | | Windows Server 2019 Core | X | | |
azure-monitor Alerts Common Schema https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-common-schema.md
If the custom properties are not set in the Alert rule, this field will be null.
"metricValue": 7.727 } ]
+ }
}, "customProperties":{ "Key1": "Value1", "Key2": "Value2"
- }
} } }
azure-monitor Application Insights Asp Net Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/application-insights-asp-net-agent.md
Set-ExecutionPolicy -ExecutionPolicy RemoteSigned -Scope Process -Force
Install-PackageProvider -Name NuGet -MinimumVersion 2.8.5.201 -Force Set-PSRepository -Name "PSGallery" -InstallationPolicy Trusted Install-Module -Name PowerShellGet -Force
-```
+```
Close PowerShell. #### Install Application Insights Agent Run PowerShell as an admin.
-```powershell
+```powershell
Set-ExecutionPolicy -ExecutionPolicy RemoteSigned -Scope Process -Force Install-Module -Name Az.ApplicationMonitor -AllowPrerelease -AcceptLicense
-```
+```
> [!NOTE] > The `AllowPrerelease` switch in the `Install-Module` cmdlet allows installation of the beta release.
These instructions were written and tested on a computer running Windows 10 and
These steps will prepare your server to download modules from PowerShell Gallery.
-> [!NOTE]
+> [!NOTE]
> PowerShell Gallery is supported on Windows 10, Windows Server 2016, and PowerShell 6+.
-> For information about earlier versions, see [Installing PowerShellGet](/powershell/scripting/gallery/installing-psget).
+> For information about earlier versions, see [Installing PowerShellGet](/powershell/gallery/powershellget/install-powershellget).
1. Run PowerShell as Admin with an elevated execution policy.
These steps will prepare your server to download modules from PowerShell Gallery
- Optional parameters: - `-Proxy`. Specifies a proxy server for the request. - `-Force`. Bypasses the confirmation prompt.
-
+ You'll receive this prompt if NuGet isn't set up: ```output NuGet provider is required to continue
- PowerShellGet requires NuGet provider version '2.8.5.201' or newer to interact with NuGet-based repositories.
+ PowerShellGet requires NuGet provider version '2.8.5.201' or newer to interact with NuGet-based repositories.
The NuGet provider must be available in 'C:\Program Files\PackageManagement\ProviderAssemblies' or 'C:\Users\t\AppData\Local\PackageManagement\ProviderAssemblies'. You can also install the NuGet provider by running 'Install-PackageProvider -Name NuGet -MinimumVersion 2.8.5.201 -Force'. Do you want PowerShellGet to install and import the NuGet provider now? [Y] Yes [N] No [S] Suspend [?] Help (default is "Y"):
- ```
+ ```
3. Configure PowerShell Gallery as a trusted repository. - Description: By default, PowerShell Gallery is an untrusted repository.
These steps will prepare your server to download modules from PowerShell Gallery
```output Untrusted repository
- You are installing the modules from an untrusted repository.
- If you trust this repository, change its InstallationPolicy value
- by running the Set-PSRepository cmdlet. Are you sure you want to
+ You are installing the modules from an untrusted repository.
+ If you trust this repository, change its InstallationPolicy value
+ by running the Set-PSRepository cmdlet. Are you sure you want to
install the modules from 'PSGallery'? [Y] Yes [A] Yes to All [N] No [L] No to All [S] Suspend [?] Help (default is "N"): ```
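To avoid that prompt entirely, you can mark the repository as trusted up front with the same cmdlet the prompt mentions. A minimal sketch:

```powershell
# Mark PSGallery as a trusted repository so later installs don't prompt
Set-PSRepository -Name "PSGallery" -InstallationPolicy Trusted
```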
These steps will prepare your server to download modules from PowerShell Gallery
4. Install the newest version of PowerShellGet. - Description: This module contains the tooling used to get other modules from PowerShell Gallery. Version 1.0.0.1 ships with Windows 10 and Windows Server. Version 1.6.0 or higher is required. To determine which version is installed, run the `Get-Command -Module PowerShellGet` command.
- - Reference: [Installing PowerShellGet](/powershell/scripting/gallery/installing-psget).
+ - Reference: [Installing PowerShellGet](/powershell/gallery/powershellget/install-powershellget).
- Command: `Install-Module -Name PowerShellGet`. - Optional parameters: - `-Proxy`. Specifies a proxy server for the request.
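Putting this step together, a minimal sketch with the optional proxy parameter applied (the proxy address is an illustrative placeholder, not part of the original article):

```powershell
# Update PowerShellGet from the PowerShell Gallery, optionally through a proxy
Install-Module -Name PowerShellGet -Force -Proxy 'http://proxy.contoso.com:8080'
```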
For more information, see [Installing a PowerShell Module](/powershell/scripting
If you're installing the module into any other directory, manually import the module by using [Import-Module](/powershell/module/microsoft.powershell.core/import-module).
-> [!IMPORTANT]
+> [!IMPORTANT]
> DLLs will install via relative paths. > Store the contents of the package in your intended runtime directory and confirm that access permissions allow read but not write.
If you're installing the module into any other directory, manually import the mo
2. Find the file path of Az.ApplicationMonitor.psd1. 3. Run PowerShell as Admin with an elevated execution policy. 4. Load the module by using the `Import-Module Az.ApplicationMonitor.psd1` command.
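For example, a minimal sketch of that last step when the package contents live in a custom directory (the path is an illustrative placeholder):

```powershell
# Import the module directly from its manifest in a custom install location
Import-Module "C:\CustomModules\Az.ApplicationMonitor\Az.ApplicationMonitor.psd1"
```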
-
+ ### Route traffic through a proxy
This tab describes the following cmdlets, which are members of the [Az.Applicati
- [Set-ApplicationInsightsMonitoringConfig](?tabs=api-reference#set-applicationinsightsmonitoringconfig) - [Start-ApplicationInsightsMonitoringTrace](?tabs=api-reference#start-applicationinsightsmonitoringtrace)
-> [!NOTE]
+> [!NOTE]
> - To get started, you need an instrumentation key. For more information, see [Create a resource](create-new-resource.md#copy-the-instrumentation-key). > - This cmdlet requires that you review and accept our license and privacy statement. [!INCLUDE [azure-monitor-log-analytics-rebrand](../../../includes/azure-monitor-instrumentation-key-deprecation.md)]
-> [!IMPORTANT]
+> [!IMPORTANT]
> This cmdlet requires a PowerShell session with Admin permissions and an elevated execution policy. For more information, see [Run PowerShell as administrator with an elevated execution policy](?tabs=detailed-instructions#run-powershell-as-admin-with-an-elevated-execution-policy). > - This cmdlet requires that you review and accept our license and privacy statement. > - The instrumentation engine adds additional overhead and is off by default.
In this example:
- Spaces are added for readability. ```powershell
-PS C:\> Enable-ApplicationInsightsMonitoring -InstrumentationKeyMap
+PS C:\> Enable-ApplicationInsightsMonitoring -InstrumentationKeyMap
@(@{MachineFilter='.*';AppFilter='WebAppExclude'}, @{MachineFilter='.*';AppFilter='WebAppOne';InstrumentationSettings=@{InstrumentationKey='xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxx1'}}, @{MachineFilter='.*';AppFilter='WebAppTwo';InstrumentationSettings=@{InstrumentationKey='xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxx2'}},
The instrumentation engine adds overhead and is off by default.
When you have a cluster of web servers, you might be using a [shared configuration](/iis/web-hosting/configuring-servers-in-the-windows-web-platform/shared-configuration_211). The HttpModule can't be injected into this shared configuration. This script will fail with the message that extra installation steps are required.
-Use this switch to ignore this check and continue installing prerequisites.
+Use this switch to ignore this check and continue installing prerequisites.
For more information, see [known conflict-with-iis-shared-configuration](status-monitor-v2-troubleshoot.md#conflict-with-iis-shared-configuration) ##### -Verbose **Common parameter.** Use this switch to display detailed logs.
-##### -WhatIf
+##### -WhatIf
**Common parameter.** Use this switch to test and validate your input parameters without actually enabling monitoring. #### Output
Restart IIS for the changes to take effect.
PS C:\> Disable-InstrumentationEngine ```
-#### Parameters
+#### Parameters
##### -Verbose **Common parameter.** Use this switch to output detailed logs.
This cmdlet will remove edits to the IIS applicationHost.config and remove regis
PS C:\> Disable-ApplicationInsightsMonitoring ```
-#### Parameters
+#### Parameters
##### -Verbose **Common parameter.** Use this switch to display detailed logs.
If this process fails for any reason, you can run these commands manually:
Sets the config file without doing a full reinstallation. Restart IIS for your changes to take effect.
-> [!IMPORTANT]
+> [!IMPORTANT]
> This cmdlet requires a PowerShell session with Admin permissions.
C:\Program Files\WindowsPowerShell\Modules\Az.ApplicationMonitor\content\applica
### Start-ApplicationInsightsMonitoringTrace
-Collects [ETW Events](/windows/desktop/etw/event-tracing-portal) from the codeless attach runtime.
+Collects [ETW Events](/windows/desktop/etw/event-tracing-portal) from the codeless attach runtime.
This cmdlet is an alternative to running [PerfView](https://github.com/microsoft/perfview). Collected events will be printed to the console in real-time and saved to an ETL file. The output ETL file can be opened by [PerfView](https://github.com/microsoft/perfview) for further investigation.
You have three options when collecting events:
**Optional.** Use this parameter to set how long this script should collect events. Default is 5 minutes. ##### -LogDirectory
-**Optional.** Use this switch to set the output directory of the ETL file.
-By default, this file will be created in the PowerShell Modules directory.
+**Optional.** Use this switch to set the output directory of the ETL file.
+By default, this file will be created in the PowerShell Modules directory.
The full path will be displayed during script execution.
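As a minimal sketch, a trace collection run that only overrides the output directory (the directory path is an illustrative placeholder; switches that select which events to collect aren't shown here):

```powershell
# Collect ETW events from the codeless attach runtime and write the ETL file to a custom folder
Start-ApplicationInsightsMonitoringTrace -LogDirectory "C:\Temp\AppInsightsTraces"
```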
Each of these options is described in the [detailed instructions](?tabs=detailed
- You can use the [Get-ApplicationInsightsMonitoringStatus](?tabs=api-reference#get-applicationinsightsmonitoringstatus) cmdlet to verify that enablement succeeded. - Use [Live Metrics](./live-stream.md) to quickly determine if your app is sending telemetry. - You can also use [Log Analytics](../logs/log-analytics-tutorial.md) to list all the cloud roles currently sending telemetry:
-
+ ```Kusto union * | summarize count() by cloud_RoleName, cloud_RoleInstance ```
azure-monitor Java Get Started Supplemental https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-get-started-supplemental.md
Title: Application Insights with containers description: This article shows you how to set-up Application Insights Previously updated : 04/06/2023 Last updated : 04/21/2023 ms.devlang: java
For more information, see [Monitoring Azure Functions with Azure Monitor Applica
### Docker entry point
-If you're using the *exec* form, add the parameter `-javaagent:"path/to/applicationinsights-agent-3.4.11.jar"` to the parameter list somewhere before the `"-jar"` parameter, for example:
+If you're using the *exec* form, add the parameter `-javaagent:"path/to/applicationinsights-agent-3.4.12.jar"` to the parameter list somewhere before the `"-jar"` parameter, for example:
```
-ENTRYPOINT ["java", "-javaagent:path/to/applicationinsights-agent-3.4.11.jar", "-jar", "<myapp.jar>"]
+ENTRYPOINT ["java", "-javaagent:path/to/applicationinsights-agent-3.4.12.jar", "-jar", "<myapp.jar>"]
```
-If you're using the *shell* form, add the JVM arg `-javaagent:"path/to/applicationinsights-agent-3.4.11.jar"` somewhere before `-jar`, for example:
+If you're using the *shell* form, add the JVM arg `-javaagent:"path/to/applicationinsights-agent-3.4.12.jar"` somewhere before `-jar`, for example:
```
-ENTRYPOINT java -javaagent:"path/to/applicationinsights-agent-3.4.11.jar" -jar <myapp.jar>
+ENTRYPOINT java -javaagent:"path/to/applicationinsights-agent-3.4.12.jar" -jar <myapp.jar>
```
FROM ...
COPY target/*.jar app.jar
-COPY agent/applicationinsights-agent-3.4.11.jar applicationinsights-agent-3.4.11.jar
+COPY agent/applicationinsights-agent-3.4.12.jar applicationinsights-agent-3.4.12.jar
COPY agent/applicationinsights.json applicationinsights.json ENV APPLICATIONINSIGHTS_CONNECTION_STRING="CONNECTION-STRING"
-ENTRYPOINT["java", "-javaagent:applicationinsights-agent-3.4.11.jar", "-jar", "app.jar"]
+ENTRYPOINT["java", "-javaagent:applicationinsights-agent-3.4.12.jar", "-jar", "app.jar"]
``` ### Third-party container images
For more information, see [Using Azure Monitor Application Insights with Spring
If you installed Tomcat via `apt-get` or `yum`, you should have a file `/etc/tomcat8/tomcat8.conf`. Add this line to the end of that file: ```
-JAVA_OPTS="$JAVA_OPTS -javaagent:path/to/applicationinsights-agent-3.4.11.jar"
+JAVA_OPTS="$JAVA_OPTS -javaagent:path/to/applicationinsights-agent-3.4.12.jar"
``` #### Tomcat installed via download and unzip
JAVA_OPTS="$JAVA_OPTS -javaagent:path/to/applicationinsights-agent-3.4.11.jar"
If you installed Tomcat via download and unzip from [https://tomcat.apache.org](https://tomcat.apache.org), you should have a file `<tomcat>/bin/catalina.sh`. Create a new file in the same directory named `<tomcat>/bin/setenv.sh` with the following content: ```
-CATALINA_OPTS="$CATALINA_OPTS -javaagent:path/to/applicationinsights-agent-3.4.11.jar"
+CATALINA_OPTS="$CATALINA_OPTS -javaagent:path/to/applicationinsights-agent-3.4.12.jar"
```
-If the file `<tomcat>/bin/setenv.sh` already exists, modify that file and add `-javaagent:path/to/applicationinsights-agent-3.4.11.jar` to `CATALINA_OPTS`.
+If the file `<tomcat>/bin/setenv.sh` already exists, modify that file and add `-javaagent:path/to/applicationinsights-agent-3.4.12.jar` to `CATALINA_OPTS`.
### Tomcat 8 (Windows)
If the file `<tomcat>/bin/setenv.sh` already exists, modify that file and add `-
Locate the file `<tomcat>/bin/catalina.bat`. Create a new file in the same directory named `<tomcat>/bin/setenv.bat` with the following content: ```
-set CATALINA_OPTS=%CATALINA_OPTS% -javaagent:path/to/applicationinsights-agent-3.4.11.jar
+set CATALINA_OPTS=%CATALINA_OPTS% -javaagent:path/to/applicationinsights-agent-3.4.12.jar
``` Quotes aren't necessary, but if you want to include them, the proper placement is: ```
-set "CATALINA_OPTS=%CATALINA_OPTS% -javaagent:path/to/applicationinsights-agent-3.4.11.jar"
+set "CATALINA_OPTS=%CATALINA_OPTS% -javaagent:path/to/applicationinsights-agent-3.4.12.jar"
```
-If the file `<tomcat>/bin/setenv.bat` already exists, modify that file and add `-javaagent:path/to/applicationinsights-agent-3.4.11.jar` to `CATALINA_OPTS`.
+If the file `<tomcat>/bin/setenv.bat` already exists, modify that file and add `-javaagent:path/to/applicationinsights-agent-3.4.12.jar` to `CATALINA_OPTS`.
#### Run Tomcat as a Windows service
-Locate the file `<tomcat>/bin/tomcat8w.exe`. Run that executable and add `-javaagent:path/to/applicationinsights-agent-3.4.11.jar` to the `Java Options` under the `Java` tab.
+Locate the file `<tomcat>/bin/tomcat8w.exe`. Run that executable and add `-javaagent:path/to/applicationinsights-agent-3.4.12.jar` to the `Java Options` under the `Java` tab.
### JBoss EAP 7 #### Standalone server
-Add `-javaagent:path/to/applicationinsights-agent-3.4.11.jar` to the existing `JAVA_OPTS` environment variable in the file `JBOSS_HOME/bin/standalone.conf` (Linux) or `JBOSS_HOME/bin/standalone.conf.bat` (Windows):
+Add `-javaagent:path/to/applicationinsights-agent-3.4.12.jar` to the existing `JAVA_OPTS` environment variable in the file `JBOSS_HOME/bin/standalone.conf` (Linux) or `JBOSS_HOME/bin/standalone.conf.bat` (Windows):
```java ...
- JAVA_OPTS="-javaagent:path/to/applicationinsights-agent-3.4.11.jar -Xms1303m -Xmx1303m ..."
+ JAVA_OPTS="-javaagent:path/to/applicationinsights-agent-3.4.12.jar -Xms1303m -Xmx1303m ..."
... ``` #### Domain server
-Add `-javaagent:path/to/applicationinsights-agent-3.4.11.jar` to the existing `jvm-options` in `JBOSS_HOME/domain/configuration/host.xml`:
+Add `-javaagent:path/to/applicationinsights-agent-3.4.12.jar` to the existing `jvm-options` in `JBOSS_HOME/domain/configuration/host.xml`:
```xml ...
Add `-javaagent:path/to/applicationinsights-agent-3.4.11.jar` to the existing `j
<jvm-options> <option value="-server"/> <!--Add Java agent jar file here-->
- <option value="-javaagent:path/to/applicationinsights-agent-3.4.11.jar"/>
+ <option value="-javaagent:path/to/applicationinsights-agent-3.4.12.jar"/>
<option value="-XX:MetaspaceSize=96m"/> <option value="-XX:MaxMetaspaceSize=256m"/> </jvm-options>
Add these lines to `start.ini`:
``` --exec--javaagent:path/to/applicationinsights-agent-3.4.11.jar
+-javaagent:path/to/applicationinsights-agent-3.4.12.jar
``` ### Payara 5
-Add `-javaagent:path/to/applicationinsights-agent-3.4.11.jar` to the existing `jvm-options` in `glassfish/domains/domain1/config/domain.xml`:
+Add `-javaagent:path/to/applicationinsights-agent-3.4.12.jar` to the existing `jvm-options` in `glassfish/domains/domain1/config/domain.xml`:
```xml ... <java-config ...> <!--Edit the JVM options here--> <jvm-options>
- -javaagent:path/to/applicationinsights-agent-3.4.11.jar>
+ -javaagent:path/to/applicationinsights-agent-3.4.12.jar
</jvm-options> ... </java-config>
Add `-javaagent:path/to/applicationinsights-agent-3.4.11.jar` to the existing `j
1. In `Generic JVM arguments`, add the following JVM argument: ```
- -javaagent:path/to/applicationinsights-agent-3.4.11.jar
+ -javaagent:path/to/applicationinsights-agent-3.4.12.jar
``` 1. Save and restart the application server.
Add `-javaagent:path/to/applicationinsights-agent-3.4.11.jar` to the existing `j
Create a new file `jvm.options` in the server directory (for example, `<openliberty>/usr/servers/defaultServer`), and add this line: ```--javaagent:path/to/applicationinsights-agent-3.4.11.jar
+-javaagent:path/to/applicationinsights-agent-3.4.12.jar
``` ### Others
azure-monitor Java Spring Boot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-spring-boot.md
Title: Configure Azure Monitor Application Insights for Spring Boot description: How to configure Azure Monitor Application Insights for Spring Boot applications Previously updated : 03/31/2023 Last updated : 04/21/2023 ms.devlang: java
There are two options for enabling Application Insights Java with Spring Boot: J
## Enabling with JVM argument
-Add the JVM arg `-javaagent:"path/to/applicationinsights-agent-3.4.11.jar"` somewhere before `-jar`, for example:
+Add the JVM arg `-javaagent:"path/to/applicationinsights-agent-3.4.12.jar"` somewhere before `-jar`, for example:
```
-java -javaagent:"path/to/applicationinsights-agent-3.4.11.jar" -jar <myapp.jar>
+java -javaagent:"path/to/applicationinsights-agent-3.4.12.jar" -jar <myapp.jar>
``` ### Spring Boot via Docker entry point
-If you're using the *exec* form, add the parameter `-javaagent:"path/to/applicationinsights-agent-3.4.11.jar"` to the parameter list somewhere before the `"-jar"` parameter, for example:
+If you're using the *exec* form, add the parameter `-javaagent:"path/to/applicationinsights-agent-3.4.12.jar"` to the parameter list somewhere before the `"-jar"` parameter, for example:
```
-ENTRYPOINT ["java", "-javaagent:path/to/applicationinsights-agent-3.4.11.jar", "-jar", "<myapp.jar>"]
+ENTRYPOINT ["java", "-javaagent:path/to/applicationinsights-agent-3.4.12.jar", "-jar", "<myapp.jar>"]
```
-If you're using the *shell* form, add the JVM arg `-javaagent:"path/to/applicationinsights-agent-3.4.11.jar"` somewhere before `-jar`, for example:
+If you're using the *shell* form, add the JVM arg `-javaagent:"path/to/applicationinsights-agent-3.4.12.jar"` somewhere before `-jar`, for example:
```
-ENTRYPOINT java -javaagent:"path/to/applicationinsights-agent-3.4.11.jar" -jar <myapp.jar>
+ENTRYPOINT java -javaagent:"path/to/applicationinsights-agent-3.4.12.jar" -jar <myapp.jar>
``` ### Configuration
To enable Application Insights Java programmatically, you must add the following
<dependency> <groupId>com.microsoft.azure</groupId> <artifactId>applicationinsights-runtime-attach</artifactId>
- <version>3.4.11</version>
+ <version>3.4.12</version>
</dependency> ```
azure-monitor Java Standalone Config https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-standalone-config.md
Title: Configuration options - Azure Monitor Application Insights for Java description: This article shows you how to configure Azure Monitor Application Insights for Java. Previously updated : 03/31/2023 Last updated : 04/21/2023 ms.devlang: java
You'll find more information and configuration options in the following sections
## Configuration file path
-By default, Application Insights Java 3.x expects the configuration file to be named `applicationinsights.json`, and to be located in the same directory as `applicationinsights-agent-3.4.11.jar`.
+By default, Application Insights Java 3.x expects the configuration file to be named `applicationinsights.json`, and to be located in the same directory as `applicationinsights-agent-3.4.12.jar`.
You can specify your own configuration file path by using one of these two options: * `APPLICATIONINSIGHTS_CONFIGURATION_FILE` environment variable * `applicationinsights.configuration.file` Java system property
-If you specify a relative path, it will be resolved relative to the directory where `applicationinsights-agent-3.4.11.jar` is located.
+If you specify a relative path, it will be resolved relative to the directory where `applicationinsights-agent-3.4.12.jar` is located.
Alternatively, instead of using a configuration file, you can specify the entire _content_ of the JSON configuration via the environment variable `APPLICATIONINSIGHTS_CONFIGURATION_CONTENT`.
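For example, a sketch of pointing the agent at a custom configuration file via the environment variable before starting the app (PowerShell syntax; the paths are illustrative placeholders):

```powershell
# Tell the Java agent where to find its configuration file, then start the app
$env:APPLICATIONINSIGHTS_CONFIGURATION_FILE = "C:\config\applicationinsights.json"
java "-javaagent:C:\agent\applicationinsights-agent-3.4.12.jar" -jar myapp.jar
```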
Or you can set the connection string by using the Java system property `applicat
You can also set the connection string by specifying a file to load the connection string from.
-If you specify a relative path, it's resolved relative to the directory where `applicationinsights-agent-3.4.11.jar` is located.
+If you specify a relative path, it's resolved relative to the directory where `applicationinsights-agent-3.4.12.jar` is located.
```json {
and add `applicationinsights-core` to your application:
<dependency> <groupId>com.microsoft.azure</groupId> <artifactId>applicationinsights-core</artifactId>
- <version>3.4.11</version>
+ <version>3.4.12</version>
</dependency> ```
In the preceding configuration example:
* `level` can be one of `OFF`, `ERROR`, `WARN`, `INFO`, `DEBUG`, or `TRACE`. * `path` can be an absolute or relative path. Relative paths are resolved against the directory where
-`applicationinsights-agent-3.4.11.jar` is located.
+`applicationinsights-agent-3.4.12.jar` is located.
Starting from version 3.0.2, you can also set the self-diagnostics `level` by using the environment variable `APPLICATIONINSIGHTS_SELF_DIAGNOSTICS_LEVEL`. It then takes precedence over the self-diagnostics level specified in the JSON configuration.
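As a quick sketch, the environment-variable override could be set like this before launching the app (PowerShell syntax; the level value is just one of the levels listed above):

```powershell
# Override the self-diagnostics level for the next java process started from this session
$env:APPLICATIONINSIGHTS_SELF_DIAGNOSTICS_LEVEL = "DEBUG"
```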
azure-monitor Java Standalone Upgrade From 2X https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-standalone-upgrade-from-2x.md
Title: Upgrading from 2.x - Azure Monitor Application Insights Java description: Upgrading from Azure Monitor Application Insights Java 2.x Previously updated : 03/31/2023 Last updated : 04/21/2023 ms.devlang: java
auto-instrumentation which is provided by the 3.x Java agent.
Add the 3.x Java agent to your JVM command-line args, for example ```--javaagent:path/to/applicationinsights-agent-3.4.11.jar
+-javaagent:path/to/applicationinsights-agent-3.4.12.jar
``` If you were using the Application Insights 2.x Java agent, just replace your existing `-javaagent:...` with the above.
azure-monitor Opentelemetry Enable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/opentelemetry-enable.md
Title: Enable Azure Monitor OpenTelemetry for .NET, Java, Node.js, and Python applications description: This article provides guidance on how to enable Azure Monitor on applications by using OpenTelemetry. Previously updated : 03/31/2023 Last updated : 04/21/2023 ms.devlang: csharp, javascript, typescript, python
dotnet add package --prerelease Azure.Monitor.OpenTelemetry.Exporter -s https://
#### [Java](#tab/java)
-Download the [applicationinsights-agent-3.4.11.jar](https://github.com/microsoft/ApplicationInsights-Java/releases/download/3.4.11/applicationinsights-agent-3.4.11.jar) file.
+Download the [applicationinsights-agent-3.4.12.jar](https://github.com/microsoft/ApplicationInsights-Java/releases/download/3.4.12/applicationinsights-agent-3.4.12.jar) file.
> [!WARNING] >
public class Program
Java auto-instrumentation is enabled through configuration changes; no code changes are required.
-Point the JVM to the jar file by adding `-javaagent:"path/to/applicationinsights-agent-3.4.11.jar"` to your application's JVM args.
+Point the JVM to the jar file by adding `-javaagent:"path/to/applicationinsights-agent-3.4.12.jar"` to your application's JVM args.
> [!TIP] > For scenario-specific guidance, see [Get Started (Supplemental)](./java-get-started-supplemental.md).
Use one of the following two ways to point the jar file to your Application Insi
APPLICATIONINSIGHTS_CONNECTION_STRING=<Your Connection String> ``` -- Create a configuration file named `applicationinsights.json`, and place it in the same directory as `applicationinsights-agent-3.4.11.jar` with the following content:
+- Create a configuration file named `applicationinsights.json`, and place it in the same directory as `applicationinsights-agent-3.4.12.jar` with the following content:
```json {
This is not available in .NET.
<dependency> <groupId>com.microsoft.azure</groupId> <artifactId>applicationinsights-core</artifactId>
- <version>3.4.11</version>
+ <version>3.4.12</version>
</dependency> ```
azure-monitor Monitor Virtual Machine Alerts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/monitor-virtual-machine-alerts.md
# Monitor virtual machines with Azure Monitor: Alerts
-This article is part of the guide [Monitor virtual machines and their workloads in Azure Monitor](monitor-virtual-machine.md). [Alerts in Azure Monitor](../alerts/alerts-overview.md) proactively notify you of interesting data and patterns in your monitoring data. There are no preconfigured alert rules for virtual machines, but you can create your own based on data you collect from the Azure Monitor agent. This article presents alerting concepts specific to virtual machines and common alert rules used by other Azure Monitor customers.
+This article is part of the guide [Monitor virtual machines and their workloads in Azure Monitor](monitor-virtual-machine.md). [Alerts in Azure Monitor](../alerts/alerts-overview.md) proactively notify you of interesting data and patterns in your monitoring data. There are no preconfigured alert rules for virtual machines, but you can create your own based on data you collect from Azure Monitor Agent. This article presents alerting concepts specific to virtual machines and common alert rules used by other Azure Monitor customers.
-> [!NOTE]
-> This scenario describes how to implement complete monitoring of your Azure and hybrid virtual machine environment. To get started monitoring your first Azure virtual machine, see [Monitor Azure virtual machines](../../virtual-machines/monitor-vm.md). To quickly enable a recommended set of alerts, see [Enable recommended alert rules for Azure virtual machine](tutorial-monitor-vm-alert-recommended.md)
+This scenario describes how to implement complete monitoring of your Azure and hybrid virtual machine environment:
+
+- To get started monitoring your first Azure virtual machine, see [Monitor Azure virtual machines](../../virtual-machines/monitor-vm.md).
+
+- To quickly enable a recommended set of alerts, see [Enable recommended alert rules for an Azure virtual machine](tutorial-monitor-vm-alert-recommended.md).
> [!IMPORTANT]
-> Most alert rules have a cost that's dependent on the type of rule, how many dimensions it includes, and how frequently it's run. Before you create any alert rules, refer to **Alert rules** in [Azure Monitor pricing](https://azure.microsoft.com/pricing/details/monitor/).
+> Most alert rules have a cost that's dependent on the type of rule, how many dimensions it includes, and how frequently it's run. Before you create any alert rules, see the **Alert rules** section in [Azure Monitor pricing](https://azure.microsoft.com/pricing/details/monitor/).
## Data collection
-Alert rules inspect data that's already been collected in Azure Monitor. You need to ensure that data is being collected for a particular scenario before you can create an alert rule. See [Monitor virtual machines with Azure Monitor: Collect data](monitor-virtual-machine-data-collection.md) for guidance on configuring data collection for a variety of scenarios including all of the alert rules in this article.
+Alert rules inspect data that's already been collected in Azure Monitor. You need to ensure that data is being collected for a particular scenario before you can create an alert rule. See [Monitor virtual machines with Azure Monitor: Collect data](monitor-virtual-machine-data-collection.md) for guidance on configuring data collection for various scenarios, including all the alert rules in this article.
## Recommended alert rules
-Azure Monitor provides a set of [recommended alert rules](tutorial-monitor-vm-alert-availability.md) that you can quickly enable for any Azure virtual machine. These are a great starting point for basic monitoring but alone will not provide sufficient alerting for most enterprise implementations for the following reasons:
+Azure Monitor provides a set of [recommended alert rules](tutorial-monitor-vm-alert-availability.md) that you can quickly enable for any Azure virtual machine. These rules are a great starting point for basic monitoring. But alone, they won't provide sufficient alerting for most enterprise implementations for the following reasons:
- Recommended alerts only apply to Azure virtual machines and not hybrid machines.-- Recommended alerts only include host metrics and not guest metrics or logs. These are useful to monitor the health of the machine itself but give you minimal visibility into the workloads and applications running on the machine.-- Recommended alerts are associated with individual machines creating an excessive number of alert rules. Instead of relying on this method for each machine, see [Scaling alert rules](#scaling-alert-rules) for strategies on using a minimal number of alert rules for multiple machines.
+- Recommended alerts only include host metrics and not guest metrics or logs. These metrics are useful to monitor the health of the machine itself. But they give you minimal visibility into the workloads and applications running on the machine.
+- Recommended alerts are associated with individual machines, which creates an excessive number of alert rules. Instead of relying on this method for each machine, see [Scaling alert rules](#scaling-alert-rules) for strategies on using a minimal number of alert rules for multiple machines.
## Alert types
-The most common types of alert rules in Azure Monitor are [metric alerts](../alerts/alerts-metric.md) and [log query alerts](../alerts/alerts-log-query.md).
-The type of alert rule that you create for a particular scenario depends on where the data that you're alerting on is located. You might have cases where data for a particular alerting scenario is available in both Metrics and Logs, and you'll need to determine which rule type to use. You might also have flexibility in how you [collect certain data]() and let your decision of alert rule type drive your decision for data collection method.
+The most common types of alert rules in Azure Monitor are [metric alerts](../alerts/alerts-metric.md) and [log query alerts](../alerts/alerts-log-query.md). The type of alert rule that you create for a particular scenario depends on where the data that you're alerting on is located.
+You might have cases where data for a particular alerting scenario is available in both Metrics and Logs. If so, you need to determine which rule type to use. You might also have flexibility in how you collect certain data and let your decision of alert rule type drive your decision for data collection method.
### Metric alerts
-Common uses for metric alerts include:
+Common uses for metric alerts:
- Alert when a particular metric exceeds a threshold. An example is when the CPU of a machine is running high.
-Data sources for metric alerts include:
-- Host metrics for Azure virtual machines, which are collected automatically.-- Metrics collected by the Azure Monitor agent from the guest operating system
+Data sources for metric alerts:
+- Host metrics for Azure virtual machines, which are collected automatically
+- Metrics collected by Azure Monitor Agent from the guest operating system
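As a sketch of what such a metric alert rule can look like in Azure PowerShell (not taken from the article; names, region, threshold, and scope are illustrative placeholders):

```powershell
# Alert when average host CPU exceeds 80% for any VM in the resource group (same region)
$criteria = New-AzMetricAlertRuleV2Criteria -MetricName "Percentage CPU" `
    -TimeAggregation Average -Operator GreaterThan -Threshold 80

New-AzMetricAlertRuleV2 -Name "high-cpu" -ResourceGroupName "my-rg" -Severity 3 `
    -WindowSize 00:05:00 -Frequency 00:01:00 `
    -TargetResourceScope "/subscriptions/<subscription-id>/resourceGroups/my-rg" `
    -TargetResourceType "Microsoft.Compute/virtualMachines" -TargetResourceRegion "eastus" `
    -Condition $criteria
```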
### Log alerts
-Common uses for log alerts include:
-- Alert when a particular event or pattern of events from Windows event log or syslog are found. These alert rules will typically measure table rows returned from the query.-- Alert based on a calculation of numeric data across multiple machines. These alert rules will typically measure the calculation of a numeric column in the query results.
+Common uses for log alerts:
+- Alert when a particular event or pattern of events from the Windows event log or Syslog is found. These alert rules typically measure table rows returned from the query.
+- Alert based on a calculation of numeric data across multiple machines. These alert rules typically measure the calculation of a numeric column in the query results.
+
+Data sources for log alerts:
+- All data collected in a Log Analytics workspace
-Data sources for metric alerts include:
-- All data collected in a Log Analytics workspace. ## Scaling alert rules
-Since you may have many virtual machines that require the same monitoring, you don't want to have to create individual alert rules for each one. You also want to ensure There are different strategies to limit the number of alert rules you need to manage, depending on the type of rule. Each of these strategies depends on understanding the target resource of the alert rule.
+Because you might have many virtual machines that require the same monitoring, you don't want to create individual alert rules for each one. There are different strategies to limit the number of alert rules you need to manage, depending on the type of rule. Each of these strategies depends on understanding the target resource of the alert rule.
### Metric alert rules
-Virtual machines support multiple resource metric alert rules as described in [Monitor multiple resources](../alerts/alerts-types.md#metric-alerts). This allows you to create a single metric alert rule that applies to all virtual machines in a resource group or subscription within the same region. Start with the [recommended alerts](#recommended-alert-rules) and [create a corresponding rule]() for each using your subscription or a resource group as the target resource. You will need to create duplicate rules for each region if you have machines in multiple regions.
+Virtual machines support multiple resource metric alert rules as described in [Monitor multiple resources](../alerts/alerts-types.md#metric-alerts). This capability allows you to create a single metric alert rule that applies to all virtual machines in a resource group or subscription within the same region.
-As you identify requirements for additional metric alert rules, use this same strategy of using a subscription or resource group as the target resource to minimize the number of alert rules you need to manage and ensure that they're automatically applied to any new machines.
+Start with the [recommended alerts](#recommended-alert-rules) and create a corresponding rule for each by using your subscription or a resource group as the target resource. You need to create duplicate rules for each region if you have machines in multiple regions.
+
+As you identify requirements for more metric alert rules, follow this same strategy by using a subscription or resource group as the target resource to:
+- Minimize the number of alert rules you need to manage.
+- Ensure that they're automatically applied to any new machines.
### Log alert rules
-If you set the target resource of a log alert rule to a specific machine, then queries are limited to data associated with that machine giving you individual alerts for it. This would require a separate alert rule for each machine.
+If you set the target resource of a log alert rule to a specific machine, queries are limited to data associated with that machine, which gives you individual alerts for it. This arrangement requires a separate alert rule for each machine.
-If you set the target resource of a log alert rule to a Log Analytics workspace, you have access to all data in that workspace which allows you to alert on data from all machines in the workgroup with a single rule. This gives you the option of creating a single alert for all machines. You can then use dimensions to create a separate alert for each machine.
+If you set the target resource of a log alert rule to a Log Analytics workspace, you have access to all data in that workspace. For this reason, you can alert on data from all machines in the workspace with a single rule. This arrangement gives you the option of creating a single alert for all machines. You can then use dimensions to create a separate alert for each machine.
-For example, you may want to alert when an error event is created in the Windows event log by any machine. You would first need to create a data collection rule as described in [Collect events and performance counters from virtual machines with Azure Monitor Agent](../agents/data-collection-rule-azure-monitor-agent.md) to send these events to the `Event` table in the Log Analytics workspace. You could then create an alert rule that queries this table using the workspace as the target resource and the condition shown below.
+For example, you might want to alert when an error event is created in the Windows event log by any machine. You first need to create a data collection rule as described in [Collect events and performance counters from virtual machines with Azure Monitor Agent](../agents/data-collection-rule-azure-monitor-agent.md) to send these events to the `Event` table in the Log Analytics workspace. Then you create an alert rule that queries this table by using the workspace as the target resource and the condition shown in the following image.
-The query will return a record for any error messages on any machine. Use the **Split by dimensions** option and specify **_ResourceId** to instruct the rule to create an alert for each machine if multiple machines are returned in the results.
+The query returns a record for any error messages on any machine. Use the **Split by dimensions** option and specify **_ResourceId** to instruct the rule to create an alert for each machine if multiple machines are returned in the results.
#### Dimensions
-Depending on the information you would like to include in the alert, you might need to split using different dimensions. In this case, make sure the necessary dimensions are projected in the query using the [project](/azure/data-explorer/kusto/query/projectoperator) or [extend](/azure/data-explorer/kusto/query/extendoperator) operator. Set the **Resource ID column** field to **Don't split** and include all the meaningful dimensions in the list. Make sure the **Include all future values** is selected, so any value returned from the query will be included.
+Depending on the information you want to include in the alert, you might need to split by using different dimensions. In this case, make sure the necessary dimensions are projected in the query by using the [project](/azure/data-explorer/kusto/query/projectoperator) or [extend](/azure/data-explorer/kusto/query/extendoperator) operator. Set the **Resource ID column** field to **Don't split** and include all the meaningful dimensions in the list. Make sure **Include all future values** is selected so that any value returned from the query is included.
#### Dynamic thresholds
-An additional benefit using log alert rules is the ability to include complex logic in the query for determining the threshold value. This threshold could be hardcoded, applied to all resources, or calculated dynamically based on some field or calculated value. This allows the threshold to be applied to only resources according to specific conditions. For example, you might create an alert based on available memory but only for machines with a particular amount of total memory.
+Another benefit of using log alert rules is the ability to include complex logic in the query for determining the threshold value. You can hardcode the threshold, apply it to all resources, or calculate it dynamically based on some field or calculated value. This approach lets you apply the threshold only to resources that meet specific conditions. For example, you might create an alert based on available memory but only for machines with a particular amount of total memory.
## Common alert rules
-The following section lists common alert rules for virtual machines in Azure Monitor. Details for metric alerts and log alerts are provided for each. For guidance on which type of alert to use, see [Alert types](#alert-types). If you're unfamiliar with the process for creating alert rules in Azure Monitor, see [instructions to create a new alert rule](../alerts/alerts-create-new-alert-rule.md).
+The following section lists common alert rules for virtual machines in Azure Monitor. Details for metric alerts and log alerts are provided for each. For guidance on which type of alert to use, see [Alert types](#alert-types). If you're unfamiliar with the process for creating alert rules in Azure Monitor, see the [instructions to create a new alert rule](../alerts/alerts-create-new-alert-rule.md).
> [!NOTE]
-> The details for log alerts provided below are using data collected using [VM Insights](vminsights-overview.md) which provides a set of common performance counters for the client operating system. This name is independent of the operating system type.
+> The details for log alerts provided here use data collected by [VM insights](vminsights-overview.md), which provides a set of common performance counters for the client operating system. This name is independent of the operating system type.
### Machine unavailable
-One of the most common monitoring requirements for a virtual machine is to create an alert if it stops running. The best method for this is to create a metric alert rule in Azure Monitor using the VM availability metric which is currently in public preview. See [Create availability alert rule for Azure virtual machine](tutorial-monitor-vm-alert-availability.md) for a complete walk-through on this metric.
+One of the most common monitoring requirements for a virtual machine is to create an alert if it stops running. The best method is to create a metric alert rule in Azure Monitor by using the VM availability metric. It's currently in public preview. For a walk-through on this metric, see [Create availability alert rule for Azure virtual machine](tutorial-monitor-vm-alert-availability.md).
-As described in [Scaling alert rules](#scaling-alert-rules), create an availability alert rule using a subscription or resource group as the target resource to have the rule apply to multiple virtual machines, including new machines that you create after the alter rule.
+As described in [Scaling alert rules](#scaling-alert-rules), create an availability alert rule by using a subscription or resource group as the target resource. The rule applies to multiple virtual machines, including new machines that you create after the alert rule is created.
### Agent heartbeat
-The agent heartbeat is slightly different than the machine unavailable alert because it relies on the Azure Monitor agent to send a heartbeat. This can alert you if the machine is running, but the agent is unresponsive.
+The agent heartbeat is slightly different than the machine unavailable alert because it relies on Azure Monitor Agent to send a heartbeat. The agent heartbeat can alert you if the machine is running but the agent is unresponsive.
#### Metric alert rules
-A metric called *Heartbeat* is included in each Log Analytics workspace. Each virtual machine connected to that workspace sends a heartbeat metric value each minute. Because the computer is a dimension on the metric, you can fire an alert when any computer fails to send a heartbeat. Set the **Aggregation type** to **Count** and the **Threshold** value to match the **Evaluation granularity**.
-
+A metric called **Heartbeat** is included in each Log Analytics workspace. Each virtual machine connected to that workspace sends a heartbeat metric value each minute. Because the computer is a dimension on the metric, you can fire an alert when any computer fails to send a heartbeat. Set the **Aggregation type** to **Count** and the **Threshold** value to match the **Evaluation granularity**.
#### Log alert rules Log query alerts use the [Heartbeat table](/azure/azure-monitor/reference/tables/heartbeat), which should have a heartbeat record every minute from each machine.
-Use a rule with the following query.
+Use a rule with the following query:
```kusto Heartbeat
Heartbeat
| summarize AggregatedValue = min(Duration) by Computer, bin(TimeGenerated,5m), _ResourceId ``` - ### CPU alerts
+This section describes CPU alerts.
+ #### Metric alert rules | Target | Metric |
Heartbeat
#### Log alert rules
-**CPU utilization**
+**CPU utilization**
```kusto InsightsMetrics
InsightsMetrics
| where Namespace == "Processor" and Name == "UtilizationPercentage" | summarize AggregatedValue = avg(Val) by bin(TimeGenerated, 15m), Computer, _ResourceId ```+ ### Memory alerts
+This section describes memory alerts.
+ #### Metric alert rules | Target | Metric |
InsightsMetrics
#### Log alert rules
-**Available memory in MB**
+**Available memory in MB**
```kusto InsightsMetrics
InsightsMetrics
| summarize AggregatedValue = avg(Val) by bin(TimeGenerated, 15m), Computer, _ResourceId ```
-**Available memory in percentage**
+**Available memory in percentage**
```kusto InsightsMetrics
InsightsMetrics
### Disk alerts
+This section describes disk alerts.
+ #### Metric alert rules | Target | Metric |
InsightsMetrics
#### Log query alert rules
-**Logical disk used - all disks on each computer**
+**Logical disk used - all disks on each computer**
```kusto InsightsMetrics
InsightsMetrics
| extend Disk=tostring(todynamic(Tags)["vm.azm.ms/mountId"]) | summarize AggregatedValue = avg(Val) by bin(TimeGenerated, 15m), Computer, _ResourceId, Disk ```+ **Logical disk data rate** ```kusto
The following sample creates an alert when a specific Windows event is created.
| summarize AggregatedValue = avg(CounterValue) by Computer ``` - ## Next steps
-* [Analyze monitoring data collected for virtual machines.](monitor-virtual-machine-analyze.md)
+[Analyze monitoring data collected for virtual machines](monitor-virtual-machine-analyze.md)
azure-monitor Tutorial Monitor Vm Enable Insights https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/tutorial-monitor-vm-enable-insights.md
Title: Enable monitoring with VM insights for Azure virtual machine
+ Title: Enable monitoring with VM insights for an Azure virtual machine
description: Enable monitoring with VM insights in Azure Monitor to monitor an Azure virtual machine.
Last updated 12/03/2022
-# Tutorial: Enable monitoring with VM insights for Azure virtual machine
-VM insights is a feature of Azure Monitor that quickly gets you started monitoring your virtual machines. You can view trends of performance data, running processes on individual machines, and dependencies between machines. VM insights installs the [Azure Monitor agent](../agents/azure-monitor-agent-overview.md) which is required to collect the guest operating system and prepares you to configure additional monitoring from your VMs according to your particular requirements.
+# Tutorial: Enable monitoring with VM insights for an Azure virtual machine
+VM insights is a feature of Azure Monitor that quickly gets you started monitoring your virtual machines. You can view trends of performance data, running processes on individual machines, and dependencies between machines. VM insights installs [Azure Monitor Agent](../agents/azure-monitor-agent-overview.md), which is required to collect data from the guest operating system, and prepares you to configure more monitoring from your VMs according to your requirements.
In this tutorial, you learn how to: > [!div class="checklist"]
-> * Enable VM insights for a virtual machine which installs the Azure Monitor agent and begins data collection.
+> * Enable VM insights for a virtual machine, which installs Azure Monitor Agent and begins data collection.
> * Enable optional collection of detailed process and telemetry to enable the Map feature of VM insights.
-> * Inspect graphs analyzing performance data collected from the virtual machine.
-> * Inspect map showing processes running on the virtual machine and dependencies with other systems.
-
+> * Inspect graphs analyzing performance data collected from the virtual machine.
+> * Inspect a map showing processes running on the virtual machine and dependencies with other systems.
## Prerequisites
-To complete this tutorial, you need the following:
--- An Azure virtual machine to monitor.
+To complete this tutorial, you need an Azure virtual machine to monitor.
> [!NOTE]
-> If you selected the option to **Enable virtual machine insights** when you created your virtual machine, then VM insights will already be enabled. If the machine was previously enabled for VM insights using Log Analytics agent, see [Enable VM insights in the Azure portal](vminsights-enable-portal.md) for upgrading to Azure Monitor agent.
--
+> If you selected the option to **Enable virtual machine insights** when you created your virtual machine, VM insights is already enabled. If the machine was previously enabled for VM insights by using the Log Analytics agent, see [Enable VM insights in the Azure portal](vminsights-enable-portal.md) for upgrading to Azure Monitor Agent.
## Enable VM insights
-Select **Insights** from your virtual machine's menu in the Azure portal. If VM insights hasn't been enabled, you should see a short description of it and an option to enable it. Click **Enable** to open the **Monitoring configuration** pane. Leave the default option of **Azure Monitor agent**.
-
-In order to reduce cost for data collection, VM insights creates a default [data collection rule](../essentials/data-collection-rule-overview.md) that doesn't include collection of processes and dependencies. To enable this collection, click **Create new** to create a new data collection rule.
+Select **Insights** from your virtual machine's menu in the Azure portal. If VM insights isn't enabled, you see a short description of it and an option to enable it. Select **Enable** to open the **Monitoring configuration** pane. Leave the default option of **Azure Monitor agent**.
+To reduce cost for data collection, VM insights creates a default [data collection rule](../essentials/data-collection-rule-overview.md) that doesn't include collection of processes and dependencies. To enable this collection, select **Create New** to create a new data collection rule.
-Provide a **Data collection rule name** and then select **Enable processes and dependencies (Map)**. You can't disable collection of guest performance since this is required for VM insights.
-Keep the default Log Analytics workspace for the subscription unless you have another workspace that you want to use. Click **Create** to create the new data collection rule. and then **Configure** to start VM insights configuration.
+Provide a **Data collection rule name** and then select **Enable processes and dependencies (Map)**. You can't disable collection of guest performance because it's required for VM insights.
--
-You'll see a message saying that monitoring is being enabled. It may take several minutes for the agent to be installed and for data collection to begin.
+Keep the default Log Analytics workspace for the subscription unless you have another workspace that you want to use. Select **Create** to create the new data collection rule. Select **Configure** to start VM insights configuration.
+A message says that monitoring is being enabled. It might take several minutes for the agent to be installed and for data collection to begin.
## View performance
-When the deployment is complete, you'll see views in the **Performance** tab in VM insights with performance data for the machine. This shows you the values of key guest metrics over time.
+When the deployment is finished, you see views on the **Performance** tab in VM insights with performance data for the machine. This data shows you the values of key guest metrics over time.
## View processes and dependencies
-Select the **Maps** tab to view processes and dependencies for the virtual machine. The current machine is at the center of the view. View the processes running on it by expanding **Processes**.
-
+Select the **Map** tab to view processes and dependencies for the virtual machine. The current machine is at the center of the view. View the processes running on it by expanding **Processes**.
## View machine details
-The **Maps** view provides different tabs with information collected about the virtual machine. Click through the tabs to see what's available.
+The **Map** view provides different tabs with information collected about the virtual machine. Select the tabs to see what's available.
## Next steps
-VM insights collects performance data from the VM guest operating system, but it doesn't collect log data such as Windows event log or syslog. Now that you have the machine monitored with the Azure Monitor agent, you can create an additional data collection rule to perform this collection.
+VM insights collects performance data from the VM guest operating system, but it doesn't collect log data such as Windows event log or Syslog. Now that you have the machine monitored with Azure Monitor Agent, you can create another data collection rule to perform this collection.
> [!div class="nextstepaction"] > [Collect guest logs and metrics from Azure virtual machine](tutorial-monitor-vm-guest.md)-
azure-monitor Vminsights Configure Workspace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/vminsights-configure-workspace.md
Title: Configure Log Analytics workspace for VM insights
-description: Describes how to create and configure the Log Analytics workspace used by VM insights.
+ Title: Configure a Log Analytics workspace for VM insights
+description: This article describes how to create and configure a Log Analytics workspace used by VM insights.
Last updated 06/22/2022
-# Configure Log Analytics workspace for VM insights
-VM insights collects its data from one or more Log Analytics workspaces in Azure Monitor. Prior to onboarding agents, you must create and configure a workspace. This article describes the requirements of the workspace and to configure it for VM insights.
+# Configure a Log Analytics workspace for VM insights
+VM insights collects its data from one or more Log Analytics workspaces in Azure Monitor. Prior to onboarding agents, you must create and configure a workspace. This article describes the requirements of the workspace and how to configure it for VM insights.
> [!IMPORTANT]
-> Configuration of the Log Analytics workspace is only required for using VM insights with virtual machines using Log Analytics agent. Virtual machines using Azure Monitor agent do not use the *VMInsights* solution that's installed in this configuration. To support Azure Monitor agent, a standard Log Analytics workspace just needs be created as described in [Create Log Analytics workspace](#create-log-analytics-workspace).
+> Configuration of the Log Analytics workspace is only required for using VM insights with virtual machines by using the Log Analytics agent. Virtual machines using Azure Monitor Agent don't use the VMInsights solution that's installed in this configuration. To support Azure Monitor Agent, a standard Log Analytics workspace must be created as described in [Create a Log Analytics workspace](#create-a-log-analytics-workspace).
## Overview
-A single subscription can use any number of workspaces depending on your requirements. The only requirement of the workspace is that it be located in a supported location and be configured with the *VMInsights* solution.
+A single subscription can use any number of workspaces depending on your requirements. The only requirement of the workspace is that it must be located in a supported location and be configured with the VMInsights solution.
-Once the workspace has been configured, you can use any of the available options to install the required agents on virtual machine and virtual machine scale set and specify a workspace for them to send their data. VM insights will collect data from any configured workspace in its subscription.
+After the workspace is configured, you can use any of the available options to install the required agents on virtual machines and virtual machine scale sets and specify a workspace for them to send their data. VM insights collects data from any configured workspace in its subscription.
> [!NOTE]
-> When you enable VM insights on a single virtual machine or virtual machine scale set using the Azure portal, you're given the option to select an existing workspace or create a new one. The *VMInsights* solution will be installed in this workspace if it isn't already. You can then use this workspace for other agents.
+> When you enable VM insights on a single virtual machine or virtual machine scale set by using the Azure portal, you can select an existing workspace or create a new one. The VMInsights solution is installed in this workspace if it isn't already. You can then use this workspace for other agents.
-
-## Create Log Analytics workspace
+## Create a Log Analytics workspace
>[!NOTE]
->The information described in this section is also applicable to the [Service Map solution](service-map.md).
-
-Access Log Analytics workspaces in the Azure portal from the **Log Analytics workspaces** menu.
+>The information described in this section also applies to the [Service Map solution](service-map.md).
-[![Log Anlytics workspaces](media/vminsights-configure-workspace/log-analytics-workspaces.png)](media/vminsights-configure-workspace/log-analytics-workspaces.png#lightbox)
+To access Log Analytics workspaces in the Azure portal, use the **Log Analytics workspaces** menu.
-You can create a new Log Analytics workspace using any of the following methods. See [Design a Log Analytics workspace configuration](../logs/workspace-design.md) for guidance on determining the number of workspaces you should use in your environment and how to design their access strategy.
+[![Screenshot that shows a Log Analytics workspace.](media/vminsights-configure-workspace/log-analytics-workspaces.png)](media/vminsights-configure-workspace/log-analytics-workspaces.png#lightbox)
+You can create a new Log Analytics workspace by using any of the following methods:
* [Azure portal](../logs/quick-create-workspace.md)
* [Azure CLI](../logs/resource-manager-workspace.md)
* [PowerShell](../logs/powershell-workspace-configuration.md)
* [Azure Resource Manager](../logs/resource-manager-workspace.md)
+For guidance on how to determine the number of workspaces you should use in your environment and how to design their access strategy, see [Design a Log Analytics workspace configuration](../logs/workspace-design.md).
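If you prefer to script workspace creation, the following is a minimal sketch that uses the Azure SDK for Python. It assumes the `azure-identity` and `azure-mgmt-loganalytics` packages; the resource group name, workspace name, and region are placeholder values, and operation names can vary between SDK versions.

```python
import os
from azure.identity import AzureCliCredential
from azure.mgmt.loganalytics import LogAnalyticsManagementClient

credential = AzureCliCredential()
subscription_id = os.environ["AZURE_SUBSCRIPTION_ID"]

logs_client = LogAnalyticsManagementClient(credential, subscription_id)

# Create (or update) a workspace in a supported region.
# "exampleGroup", "exampleWorkspace", and "eastus" are placeholder values.
workspace = logs_client.workspaces.begin_create_or_update(
    "exampleGroup",
    "exampleWorkspace",
    {
        "location": "eastus",
        "sku": {"name": "PerGB2018"}
    }
).result()

# The workspace (customer) ID is the value that agents use to connect.
print(workspace.customer_id)
```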
## Supported regions
VM insights supports a Log Analytics workspace in any of the [regions supported by Log Analytics](https://azure.microsoft.com/global-infrastructure/services/?products=monitor&regions=all).

>[!NOTE]
>You can monitor Azure VMs in any region. The VMs themselves aren't limited to the regions supported by the Log Analytics workspace.

## Azure role-based access control
-To enable and access the features in VM insights, you must have the [Log Analytics contributor role](../logs/manage-access.md#azure-rbac) in the workspace. To view performance, health, and map data, you must have the [monitoring reader role](../roles-permissions-security.md#built-in-monitoring-roles) for the Azure VM. For more information about how to control access to a Log Analytics workspace, see [Manage workspaces](../logs/manage-access.md).
+To enable and access the features in VM insights, you must have the [Log Analytics Contributor role](../logs/manage-access.md#azure-rbac) in the workspace. To view performance, health, and map data, you must have the [Monitoring Reader role](../roles-permissions-security.md#built-in-monitoring-roles) for the Azure VM. For more information about how to control access to a Log Analytics workspace, see [Manage workspaces](../logs/manage-access.md).
-## Add VMInsights solution to workspace
-Before a Log Analytics workspace can be used with VM insights, it must have the *VMInsights* solution installed. The methods for configuring the workspace are described in the following sections.
+## Add the VMInsights solution to a workspace
+Before a Log Analytics workspace can be used with VM insights, it must have the VMInsights solution installed. The methods for configuring the workspace are described in the following sections.
> [!NOTE]
-> When you add the *VMInsights* solution to the workspace, all existing virtual machines connected to the workspace will start to send data to InsightsMetrics. Data for the other data types won't be collected until you add the Dependency Agent to those existing virtual machines connected to the workspace.
+> When you add the VMInsights solution to the workspace, all existing virtual machines connected to the workspace start to send data to InsightsMetrics. Data for the other data types won't be collected until you add the Dependency agent to those existing virtual machines connected to the workspace.
### Azure portal
-There are three options for configuring an existing workspace using the Azure portal. Each is described below.
-
-To configure a single workspace, go the **Virtual Machines** option in the **Azure Monitor** menu, select the **Other onboarding options**, and then **Configure a workspace**. Select a subscription and a workspace and then click **Configure**.
-
-[![Configure workspace](../vm/media/vminsights-enable-policy/configure-workspace.png)](../vm/media/vminsights-enable-policy/configure-workspace.png#lightbox)
+There are three options for configuring an existing workspace by using the Azure portal:
-To configure multiple workspaces, select the **Workspace configuration** tab in the **Virtual Machines** menu in the **Monitor** menu in the Azure portal. Set the filter values to display a list of existing workspaces. Select the box next to each workspace to enable and then click **Configure selected**.
+- To configure a single workspace, on the **Azure Monitor** menu, select **Virtual Machines**. Select **Other onboarding options** and then select **Configure a workspace**. Select a subscription and a workspace and then select **Configure**.
-[![Workspace configuration](../vm/media/vminsights-enable-policy/workspace-configuration.png)](../vm/media/vminsights-enable-policy/workspace-configuration.png#lightbox)
+ [![Screenshot that shows configuring a workspace.](../vm/media/vminsights-enable-policy/configure-workspace.png)](../vm/media/vminsights-enable-policy/configure-workspace.png#lightbox)
+- To configure multiple workspaces, on the **Monitor** menu, select **Virtual Machines**. Then select the **Workspace configuration** tab. Set the filter values to display a list of existing workspaces. Select the checkbox next to each workspace to enable it and then select **Configure selected**.
-When you enable VM insights on a single virtual machine or virtual machine scale set using the Azure portal, you're given the option to select an existing workspace or create a new one. The *VMInsights* solution will be installed in this workspace if it isn't already. You can then use this workspace for other agents.
+ [![Screenshot that shows workspace configuration.](../vm/media/vminsights-enable-policy/workspace-configuration.png)](../vm/media/vminsights-enable-policy/workspace-configuration.png#lightbox)
-[![Enable single VM in portal](../vm/media/vminsights-enable-portal/enable-vminsights-vm-portal.png)](../vm/media/vminsights-enable-portal/enable-vminsights-vm-portal.png#lightbox)
+- When you enable VM insights on a single virtual machine or virtual machine scale set by using the Azure portal, you can select an existing workspace or create a new one. The VMInsights solution is installed in this workspace if it isn't already. You can then use this workspace for other agents.
+ [![Screenshot that shows enabling a single VM in the portal.](../vm/media/vminsights-enable-portal/enable-vminsights-vm-portal.png)](../vm/media/vminsights-enable-portal/enable-vminsights-vm-portal.png#lightbox)
### Resource Manager template
-The Azure Resource Manager templates for VM insights are provided in an archive file (.zip) that you can [download from our GitHub repo](https://aka.ms/VmInsightsARMTemplates). This includes a template called **ConfigureWorkspace** that configures a Log Analytics workspace for VM insights. You deploy this template using any of the standard methods including the sample PowerShell and CLI commands below:
+The Azure Resource Manager templates for VM insights are provided in an archive file (.zip) that you can [download from our GitHub repo](https://aka.ms/VmInsightsARMTemplates). A template called **ConfigureWorkspace** configures a Log Analytics workspace for VM insights. You deploy this template by using any of the standard methods, including the following sample PowerShell and CLI commands.
# [CLI](#tab/CLI)
New-AzResourceGroupDeployment -Name ConfigureWorkspace -ResourceGroupName my-res
-## Remove VMInsights solution from workspace
-If you have completely migrated your virtual machines to Azure Monitor agent and no longer want to support virtual machines with the Log Analytics agent in your workspace, then you should remove the *VMInisghts* solution from the workspace. This will ensure that you don't collect data from any Log Analytics agents that inadvertently remain.
+## Remove the VMInsights solution from a workspace
+If you've migrated your virtual machines to Azure Monitor Agent and no longer want to support virtual machines with the Log Analytics agent in your workspace, remove the VMInsights solution from the workspace. Removing the solution ensures that you don't collect data from any Log Analytics agents that inadvertently remain.
-To remove the *VMInsights*solution, use the same process as [removing any other solution from a workspace](/previous-versions/azure/azure-monitor/insights/solutions#remove-a-monitoring-solution).
+To remove the VMInsights solution, use the same process as [removing any other solution from a workspace](/previous-versions/azure/azure-monitor/insights/solutions#remove-a-monitoring-solution).
1. Select the **Solutions** menu in the Azure portal.
-2. Locate the *VMInsights* solution for your workspace and select it to view its detail.
-3. Click **Delete**
+1. Locate the VMInsights solution for your workspace and select it to view its detail.
+1. Select **Delete**.
+ :::image type="content" source="media/vminsights-configure-workspace/remove-solution.png" lightbox="media/vminsights-configure-workspace/remove-solution.png" alt-text="Screenshot that shows deleting a solution dialog.":::
## Next steps - See [Onboard agents to VM insights](vminsights-enable-overview.md) to connect agents to VM insights.-- See [Targeting monitoring solutions in Azure Monitor (Preview)](/previous-versions/azure/azure-monitor/insights/solution-targeting) to limit the amount of data sent from a solution to the workspace.
+- See [Targeting monitoring solutions in Azure Monitor (preview)](/previous-versions/azure/azure-monitor/insights/solution-targeting) to limit the amount of data sent from a solution to the workspace.
azure-monitor Vminsights Enable Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/vminsights-enable-overview.md
If you associate a data collection rule with the Map feature enabled to a machin
- You must remove the Log Analytics agent yourself from any machines that are using it. Before you do this step, ensure that the machine isn't relying on any other solutions that require the Log Analytics agent. For more information, see [Migrate to Azure Monitor Agent from Log Analytics agent](../agents/azure-monitor-agent-migration.md). -- After you verify that no Log Analytics agents are still connected to your Log Analytics workspace, you can [remove the VM Insights solution from the workspace](vminsights-configure-workspace.md#remove-vminsights-solution-from-workspace). It's no longer needed.
+- After you verify that no Log Analytics agents are still connected to your Log Analytics workspace, you can [remove the VM Insights solution from the workspace](vminsights-configure-workspace.md#remove-the-vminsights-solution-from-a-workspace). It's no longer needed.
> [!NOTE] > To check if you have any machines with both agents sending data to your Log Analytics workspace, run the following [log query](../logs/log-query-overview.md) in [Log Analytics](../logs/log-analytics-overview.md). This query will show the last heartbeat for each computer. If a computer has both agents, it will return two records, each with a different `category`. The Azure Monitor agent will have a `category` of *Azure Monitor Agent*. The Log Analytics agent will have a `category` of *Direct Agent*.
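The article's query itself isn't shown in this excerpt. As a rough sketch of the kind of check it describes, the following Python snippet runs an illustrative `Heartbeat` query with the `azure-monitor-query` package; the query text, workspace ID placeholder, and time range are assumptions, not the article's exact query.

```python
from datetime import timedelta

from azure.identity import AzureCliCredential
from azure.monitor.query import LogsQueryClient

credential = AzureCliCredential()
logs_client = LogsQueryClient(credential)

# Illustrative query: last heartbeat per computer and agent category.
# A computer reporting through both agents appears twice with different categories.
query = """
Heartbeat
| summarize LastHeartbeat = max(TimeGenerated) by Computer, Category
"""

response = logs_client.query_workspace(
    workspace_id="<log-analytics-workspace-id>",  # placeholder
    query=query,
    timespan=timedelta(days=1)
)

for table in response.tables:
    for row in table.rows:
        print(list(row))
```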
azure-monitor Vminsights Maps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/vminsights-maps.md
Title: View app dependencies with VM insights
-description: Map is a feature of VM insights. It automatically discovers application components on Windows and Linux systems and maps the communication between services. This article provides details on how to use the Map feature in various scenarios.
+description: This article shows how to use the VM insights Map feature. It discovers application components on Windows and Linux systems and maps the communication between services.
Last updated 06/08/2022
# Use the Map feature of VM insights to understand application components
-In VM insights, you can view discovered application components on Windows and Linux virtual machines (VMs) that run in Azure or your environment. You can observe the VMs in two ways. View a map directly from a VM or view a map from Azure Monitor to see the components across groups of VMs. This article will help you understand these two viewing methods and how to use the Map feature.
+In VM insights, you can view discovered application components on Windows and Linux virtual machines (VMs) that run in Azure or your environment. You can observe the VMs in two ways. You can view a map directly from a VM. You can also view a map from Azure Monitor to see the components across groups of VMs. This article helps you to understand these two viewing methods and how to use the Map feature.
For information about configuring VM insights, see [Enable VM insights](vminsights-enable-overview.md).

## Prerequisites
-To enable the map feature in VM insights, the virtual machine requires one of the following. See [Enable VM insights on unmonitored machine](vminsights-enable-overview.md) for details on each.
+To enable the Map feature in VM insights, the virtual machine requires one of the following agents:
-- Azure Monitor agent with **processes and dependencies** enabled.-- Log Analytics agent enabled for VM insights.
+- Azure Monitor Agent with processes and dependencies enabled.
+- The Log Analytics agent enabled for VM insights.
+For more information, see [Enable VM insights on unmonitored machine](vminsights-enable-overview.md).
> [!WARNING]
-> Collecting duplicate data from a single machine with both the Azure Monitor agent and Log Analytics agent can result in the map feature of VM insights being inaccurate since it does not check for duplicate data.
->
-> For more information, see [Migrate from Log Analytics agent](vminsights-enable-overview.md#migrate-from-log-analytics-agent-to-azure-monitor-agent).
+> Collecting duplicate data from a single machine with both Azure Monitor Agent and the Log Analytics agent can result in the Map feature of VM insights being inaccurate because it doesn't check for duplicate data.
+>
+> For more information, see [Migrate from the Log Analytics agent](vminsights-enable-overview.md#migrate-from-log-analytics-agent-to-azure-monitor-agent).
## Introduction to the Map experience
-Before diving into the Map experience, you should understand how it presents and visualizes information. Whether you select the Map feature directly from a VM or from Azure Monitor, the Map feature presents a consistent experience. The only difference is that from Azure Monitor, one map shows all the members of a multiple-tier application or cluster.
+Before diving into the Map experience, you should understand how it presents and visualizes information.
-The Map feature visualizes the VM dependencies by discovering running processes that have:
+Whether you select the Map feature directly from a VM or from Azure Monitor, the Map feature presents a consistent experience. The only difference is that from Azure Monitor, one map shows all the members of a multiple-tier application or cluster.
+
+The Map feature visualizes the VM dependencies by discovering running processes that have:
- Active network connections between servers.
- Inbound and outbound connection latency.
-- Ports across any TCP-connected architecture over a specified time range.
-
-Expand a VM to show process details and only those processes that communicate with the VM. The client group shows the count of front-end clients that connect into the VM. The server-port groups show the count of back-end servers the VM connects to. Expand a server-port group to see the detailed list of servers that connect over that port.
+- Ports across any TCP-connected architecture over a specified time range.
+
+Expand a VM to show process details and only those processes that communicate with the VM. The client group shows the count of front-end clients that connect into the VM. The server-port groups show the count of back-end servers the VM connects to. Expand a server-port group to see the detailed list of servers that connect over that port.
-When you select the VM, the **Properties** pane shows the VM's properties. Properties include system information reported by the operating system, properties of the Azure VM, and a doughnut chart that summarizes the discovered connections.
+When you select the VM, the **Properties** pane shows the VM's properties. Properties include system information reported by the operating system, properties of the Azure VM, and a doughnut chart that summarizes the discovered connections.
-![The Properties pane](./media/vminsights-maps/properties-pane-01.png)
+![Screenshot that shows the Properties pane.](./media/vminsights-maps/properties-pane-01.png)
-On the right side of the pane, select **Log Events** to show a list of data that the VM has sent to Azure Monitor. This data is available for querying. Select any record type to open the **Logs** page, where you see the results for that record type. You also see a preconfigured query that's filtered against the VM.
+On the right side of the pane, select **Log Events** to show a list of data that the VM has sent to Azure Monitor. This data is available for querying. Select any record type to open the **Logs** page, where you see the results for that record type. You also see a preconfigured query that's filtered against the VM.
-![The Log Events pane](./media/vminsights-maps/properties-pane-logs-01.png)
+![Screenshot that shows the Log Events pane.](./media/vminsights-maps/properties-pane-logs-01.png)
-Close the **Logs** page and return to the **Properties** pane. There, select **Alerts** to view VM health-criteria alerts. The Map feature integrates with Azure Alerts to show alerts for the selected server in the selected time range. The server displays an icon for current alerts, and the **Machine Alerts** pane lists the alerts.
+Close the **Logs** page and return to the **Properties** pane. There, select **Alerts** to view VM health-criteria alerts. The Map feature integrates with Azure alerts to show alerts for the selected server in the selected time range. The server displays an icon for current alerts, and the **Machine Alerts** pane lists the alerts.
-![The Alerts pane](./media/vminsights-maps/properties-pane-alerts-01.png)
+![Screenshot that shows the Alerts pane.](./media/vminsights-maps/properties-pane-alerts-01.png)
To make the Map feature display relevant alerts, create an alert rule that applies to a specific computer:

- Include a clause to group alerts by computer (for example, **by Computer interval 1 minute**).
- Base the alert on a metric.
-For more information about Azure Alerts and creating alert rules, see [Unified alerts in Azure Monitor](../alerts/alerts-overview.md).
+For more information about Azure alerts and how to create alert rules, see [Unified alerts in Azure Monitor](../alerts/alerts-overview.md).
-In the upper-right corner, the **Legend** option describes the symbols and roles on the map. For a closer look at your map and to move it around, use the zoom controls in the lower-right corner. You can set the zoom level and fit the map to the size of the page.
+In the upper-right corner, the **Legend** option describes the symbols and roles on the map. For a closer look at your map and to move it around, use the zoom controls in the lower-right corner. You can set the zoom level and fit the map to the size of the page.
## Connection metrics
-The **Connections** pane displays standard metrics for the selected connection from the VM over the TCP port. The metrics include response time, requests per minute, traffic throughput, and links.
+The **Connections** pane displays standard metrics for the selected connection from the VM over the TCP port. The metrics include response time, requests per minute, traffic throughput, and links.
-![Network connectivity charts on the Connections pane](./media/vminsights-maps/map-group-network-conn-pane-01.png)
+![Screenshot that shows the Network connectivity charts on the Connections pane.](./media/vminsights-maps/map-group-network-conn-pane-01.png)
### Failed connections

The map shows failed connections for processes and computers. A dashed red line indicates a client system is failing to reach a process or port. For systems that use the Dependency agent, the agent reports on failed connection attempts. The Map feature monitors a process by observing TCP sockets that fail to establish a connection. This failure could result from a firewall, a misconfiguration in the client or server, or an unavailable remote service.
-![A failed connection on the map](./media/vminsights-maps/map-group-failed-connection-01.png)
+![Screenshot that shows a failed connection on the map.](./media/vminsights-maps/map-group-failed-connection-01.png)
Understanding failed connections can help you troubleshoot, validate migration, analyze security, and understand the overall architecture of the service. Failed connections are sometimes harmless, but they often point to a problem. Connections might fail, for example, when a failover environment suddenly becomes unreachable or when two application tiers can't communicate with each other after a cloud migration.

### Client groups

On the map, client groups represent client machines that connect to the mapped machine. A single client group represents the clients for an individual process or machine.
-![A client group on the map](./media/vminsights-maps/map-group-client-groups-01.png)
+![Screenshot that shows a client group on the map.](./media/vminsights-maps/map-group-client-groups-01.png)
-To see the monitored clients and IP addresses of the systems in a client group, select the group. The contents of the group appear below.
+To see the monitored clients and IP addresses of the systems in a client group, select the group. The contents of the group appear in the following image.
-![A client group's list of IP addresses on the map](./media/vminsights-maps/map-group-client-group-iplist-01.png)
+![Screenshot that shows a client group's list of IP addresses on the map.](./media/vminsights-maps/map-group-client-group-iplist-01.png)
If the group includes monitored and unmonitored clients, you can select the appropriate section of the group's doughnut chart to filter the clients.

### Server-port groups
-Server-port groups represent ports on servers that have inbound connections from the mapped machine. The group contains the server port and a count of the number of servers that have connections to that port. Select the group to see the individual servers and connections.
+Server-port groups represent ports on servers that have inbound connections from the mapped machine. The group contains the server port and a count of the number of servers that have connections to that port. Select the group to see the individual servers and connections.
-![A server-port group on the map](./media/vminsights-maps/map-group-server-port-groups-01.png)
+![Screenshot that shows a server-port group on the map.](./media/vminsights-maps/map-group-server-port-groups-01.png)
If the group includes monitored and unmonitored servers, you can select the appropriate section of the group's doughnut chart to filter the servers.
-## View a map from a VM
+## View a map from a VM
To access VM insights directly from a VM:
-1. In the Azure portal, select **Virtual Machines**.
-2. From the list, choose a VM. In the **Monitoring** section, choose **Insights**.
-3. Select the **Map** tab.
+1. In the Azure portal, select **Virtual Machines**.
+1. From the list, select a VM. In the **Monitoring** section, select **Insights**.
+1. Select the **Map** tab.
-The map visualizes the VM's dependencies by discovering running process groups and processes that have active network connections over a specified time range.
+The map visualizes the VM's dependencies by discovering running process groups and processes that have active network connections over a specified time range.
-By default, the map shows the last 30 minutes. If you want to see how dependencies looked in the past, you can query for historical time ranges of up to one hour. To run the query, use the **TimeRange** selector in the upper-left corner. You might run a query, for example, during an incident or to see the status before a change.
+By default, the map shows the last 30 minutes. If you want to see how dependencies looked in the past, you can query for historical time ranges of up to one hour. To run the query, use the **TimeRange** selector in the upper-left corner. You might run a query, for example, during an incident or to see the status before a change.
-![Screenshot of the Map tab in the Monitoring Insights section of Azure portal showing a diagram of the dependencies between virtual machines.](./media/vminsights-maps/map-direct-vm-01.png)
+![Screenshot that shows the Map tab in the Monitoring Insights section of the Azure portal showing a diagram of the dependencies between virtual machines.](./media/vminsights-maps/map-direct-vm-01.png)
-## View a map from a Virtual Machine Scale Set
+## View a map from a virtual machine scale set
-To access VM insights directly from a Virtual Machine Scale Set:
+To access VM insights directly from a virtual machine scale set:
1. In the Azure portal, select **Virtual machine scale sets**.
-2. From the list, choose a VM. Then in the **Monitoring** section, choose **Insights**.
-3. Select the **Map** tab.
+1. From the list, select a VM. Then in the **Monitoring** section, select **Insights**.
+1. Select the **Map** tab.
-The map visualizes all instances in the scale set as a group node along with the group's dependencies. The expanded node lists the instances in the scale set. You can scroll through these instances 10 at a time.
+The map visualizes all instances in the scale set as a group node along with the group's dependencies. The expanded node lists the instances in the scale set. You can scroll through these instances 10 at a time.
-To load a map for a specific instance, first select that instance on the map. Then select the **ellipsis** button (...) and select **Load Server Map**. In the map that appears, you see process groups and processes that have active network connections over a specified time range.
+To load a map for a specific instance, first select that instance on the map. Then select the **ellipsis** button (**...**) and select **Load Server Map**. In the map that appears, you see process groups and processes that have active network connections over a specified time range.
By default, the map shows the last 30 minutes. If you want to see how dependencies looked in the past, you can query for historical time ranges of up to one hour. To run the query, use the **TimeRange** selector. You might run a query, for example, during an incident or to see the status before a change.
-![Screenshot of the Map tab in the Monitoring Insights section of Azure portal showing a diagram of dependencies between virtual machine scale sets.](./media/vminsights-maps/map-direct-vmss-01.png)
+![Screenshot that shows the Map tab in the Monitoring Insights section of the Azure portal showing a diagram of dependencies between virtual machine scale sets.](./media/vminsights-maps/map-direct-vmss-01.png)
>[!NOTE]
>You can also access a map for a specific instance from the **Instances** view for your virtual machine scale set. In the **Settings** section, go to **Instances** > **Insights**.
By default, the map shows the last 30 minutes. If you want to see how dependenci
In Azure Monitor, the Map feature provides a global view of your VMs and their dependencies. To access the Map feature in Azure Monitor:
-1. In the Azure portal, select **Monitor**.
-2. In the **Insights** section, choose **Virtual Machines**.
-3. Select the **Map** tab.
+1. In the Azure portal, select **Monitor**.
+1. In the **Insights** section, select **Virtual Machines**.
+1. Select the **Map** tab.
- ![Azure Monitor overview map of multiple VMs](./media/vminsights-maps/map-multivm-azure-monitor-01.png)
+ ![Screenshot that shows an Azure Monitor overview map of multiple VMs.](./media/vminsights-maps/map-multivm-azure-monitor-01.png)
-Choose a workspace by using the **Workspace** selector at the top of the page. If you've more than one Log Analytics workspace, choose the workspace that's enabled with the solution and that has VMs reporting to it.
+Choose a workspace by using the **Workspace** selector at the top of the page. If you have more than one Log Analytics workspace, choose the workspace that's enabled with the solution and that has VMs reporting to it.
-The **Group** selector returns subscriptions, resource groups, [computer groups](../logs/computer-groups.md), and Virtual Machine Scale Sets of computers that are related to the selected workspace. Your selection applies only to the Map feature and doesn't carry over to Performance or Health.
+The **Group** selector returns subscriptions, resource groups, [computer groups](../logs/computer-groups.md), and virtual machine scale sets of computers that are related to the selected workspace. Your selection applies only to the Map feature and doesn't carry over to Performance or Health.
-By default, the map shows the last 30 minutes. If you want to see how dependencies looked in the past, you can query for historical time ranges of up to one hour. To run the query, use the **TimeRange** selector. You might run a query, for example, during an incident or to see the status before a change.
+By default, the map shows the last 30 minutes. If you want to see how dependencies looked in the past, you can query for historical time ranges of up to one hour. To run the query, use the **TimeRange** selector. You might run a query, for example, during an incident or to see the status before a change.
## Next steps
azure-monitor Vminsights Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/vminsights-troubleshoot.md
Title: Troubleshoot VM insights
-description: Troubleshoot VM insights installation.
+description: Troubleshooting information for VM insights installation.
# Troubleshoot VM insights
-This article provides troubleshooting information for when you have problems enabling or using VM insights.
+This article provides troubleshooting information to help you resolve problems you might encounter when you enable or use VM insights.
-## Cannot enable VM Insights on a machine
-When onboarding an Azure virtual machine from the Azure portal, the following steps occur:
+## Can't enable VM insights on a machine
+When you onboard an Azure virtual machine from the Azure portal, the following steps occur:
- A default Log Analytics workspace is created if that option was selected.-- The Log Analytics agent is installed on Azure virtual machines using a VM extension if the agent is already installed.-- The Dependency agent is installed on Azure virtual machines using an extension, if determined it is required.
-
-During the onboarding process, each of these steps is verified and a notification status shown in the portal. Configuration of the workspace and the agent installation typically takes 5 to 10 minutes. It will take another 5 to 10 minutes for data to become available to view in the portal.
+- The Log Analytics agent is installed on Azure virtual machines by using a VM extension if the agent isn't already installed.
+- The Dependency agent is installed on Azure virtual machines by using an extension if it's required.
-If you receive a message that the virtual machine needs to be onboarded after you've performed the onboarding process, allow for up to 30 minutes for the process to be completed. If the issue persists, then see the following sections for possible causes.
+During the onboarding process, each of these steps is verified and a notification status appears in the portal. Configuration of the workspace and the agent installation typically takes 5 to 10 minutes. It takes another 5 to 10 minutes for data to become available to view in the portal.
+
+If you receive a message that the virtual machine needs to be onboarded after you've performed the onboarding process, allow for up to 30 minutes for the process to finish. If the issue persists, see the following sections for possible causes.
### Is the virtual machine running?
- If the virtual machine has been turned off for a while, is off currently, or was only recently turned on then you won't have data to display for a bit until data arrives.
+ If the virtual machine has been turned off for a while, is currently off, or was only recently turned on, there's no data to display until new data starts to arrive.
### Is the operating system supported?
-If the operating system is not in the list of [supported operating systems](vminsights-enable-overview.md#supported-operating-systems) then the extension will fail to install and you will see this message that we are waiting for data to arrive.
+If the operating system isn't in the list of [supported operating systems](vminsights-enable-overview.md#supported-operating-systems), the extension fails to install and you see a message that we're waiting for data to arrive.
> [!IMPORTANT]
-> Post April 11th 2022, if you are not seeing your Virtual Machine in the VM insights solution, this might due to running an older version of the Dependency Agent. See more details in the blog post: https://techcommunity.microsoft.com/t5/azure-monitor-status/potential-breaking-changes-for-vm-insights-linux-customers/ba-p/3271989 . Not applicable for Windows machines and before April 11th 2022.
+> Post April 11, 2022, if you aren't seeing your virtual machine in the VM insights solution, you might be running an older version of the Dependency agent. For more information, see the blog post [Potential breaking changes for VM insights Linux customers](https://techcommunity.microsoft.com/t5/azure-monitor-status/potential-breaking-changes-for-vm-insights-linux-customers/ba-p/3271989). Not applicable for Windows machines and before April 11, 2022.
### Did the extension install properly?
-If you still see a message that the virtual machine needs to be onboarded, it may mean that one or both of the extensions failed to install correctly. Check the **Extensions** page for your virtual machine in the Azure portal to verify that the following extensions are listed.
+If you still see a message that the virtual machine needs to be onboarded, it might mean that one or both of the extensions failed to install correctly. Check the **Extensions** page for your virtual machine in the Azure portal to verify that the following extensions are listed.
| Operating system | Agents |
|:|:|
| Windows | MicrosoftMonitoringAgent<br>Microsoft.Azure.Monitoring.DependencyAgent |
| Linux | OMSAgentForLinux<br>DependencyAgentLinux |
-If you do not see the both extensions for your operating system in the list of installed extensions, then they need to be installed. If the extensions are listed but their status does not appear as *Provisioning succeeded*, then the extension should be removed and reinstalled.
+If you don't see both the extensions for your operating system in the list of installed extensions, they must be installed. If the extensions are listed but their status doesn't appear as *Provisioning succeeded*, remove the extensions and reinstall them.
### Do you have connectivity issues?
-For Windows machines, you can use the *TestCloudConnectivity* tool to identify connectivity issue. This tool is installed by default with the agent in the folder *%SystemDrive%\Program Files\Microsoft Monitoring Agent\Agent*. Run the tool from an elevated command prompt. It will return results and highlight where the test fails.
+For Windows machines, you can use the TestCloudConnectivity tool to identify connectivity issues. This tool is installed by default with the agent in the folder *%SystemDrive%\Program Files\Microsoft Monitoring Agent\Agent*. Run the tool from an elevated command prompt. It returns results and highlights where the test fails.
-![TestCloudConnectivity tool](media/vminsights-troubleshoot/test-cloud-connectivity.png)
+![Screenshot that shows the TestCloudConnectivity tool.](media/vminsights-troubleshoot/test-cloud-connectivity.png)
### More agent troubleshooting

See the following articles for troubleshooting issues with the Log Analytics agent:

-- [How to troubleshoot issues with the Log Analytics agent for Windows](../agents/agent-windows-troubleshoot.md)
-- [How to troubleshoot issues with the Log Analytics agent for Linux](../agents/agent-linux-troubleshoot.md)
+- [Troubleshoot issues with the Log Analytics agent for Windows](../agents/agent-windows-troubleshoot.md)
+- [Troubleshoot issues with the Log Analytics agent for Linux](../agents/agent-linux-troubleshoot.md)
## Performance view has no data
-If the agents appear to be installed correctly but you don't see any data in the Performance view, then see the following sections for possible causes.
+If the agents appear to be installed correctly but you don't see any data in the **Performance** view, see the following sections for possible causes.
### Has your Log Analytics workspace reached its data limit?

Check the [capacity reservations and the pricing for data ingestion](https://azure.microsoft.com/pricing/details/monitor/).
Heartbeat
| sort by TimeGenerated desc ```
-If you don't see any data or if the computer hasn't sent a heartbeat recently, then you may have problems with your agent. See the section above for agent troubleshooting information.
+If you don't see any data or if the computer hasn't sent a heartbeat recently, you might have problems with your agent. See the preceding section for agent troubleshooting information.
+
+## Virtual machine doesn't appear in the Map view
-## Virtual machine doesn't appear in map view
+See the following sections for issues with the **Map** view.
### Is the Dependency agent installed?
- Use the information in the sections above to determine if the Dependency agent is installed and working properly.
+ Use the information in the preceding sections to determine if the Dependency agent is installed and working properly.
### Are you on the Log Analytics free tier?
-The [Log Analytics free tier](https://azure.microsoft.com/pricing/details/monitor/) This is a legacy pricing plan that allows for up to five unique Service Map machines. Any subsequent machines won't appear in Service Map, even if the prior five are no longer sending data.
+The [Log Analytics free tier](https://azure.microsoft.com/pricing/details/monitor/) is a legacy pricing plan that allows for up to five unique Service Map machines. Any subsequent machines won't appear in Service Map, even if the prior five are no longer sending data.
### Is your virtual machine sending log and performance data to Azure Monitor Logs?
-Use the log query in the [Performance view has no data](#performance-view-has-no-data) section to determine if data is being collected for the virtual machine. If not data is being collected, use the TestCloudConnectivity tool described above to determine if you have connectivity issues.
+Use the log query in the [Performance view has no data](#performance-view-has-no-data) section to determine if data is being collected for the virtual machine. If no data is being collected, use the TestCloudConnectivity tool to determine if you have connectivity issues.
-
-## Virtual machine appears in map view but has missing data
-If the virtual machine is in the map view, then the Dependency agent is installed and running, but the kernel driver didn't load. Check the log file at the following locations:
+## Virtual machine appears in the Map view but has missing data
+If the virtual machine appears in the **Map** view but has missing data, the Dependency agent is installed and running but the kernel driver didn't load. Check the log file at the following locations:
| Operating system | Log |
|:|:|
If the virtual machine is in the map view, then the Dependency agent is installe
| Linux | /var/opt/microsoft/dependency-agent/log/service.log |

The last lines of the file should indicate why the kernel driver didn't load. For example, the kernel might not be supported if you updated your kernel.

## Next steps
-- For details on onboarding VM insights agents, see [Enable VM insights overview](vminsights-enable-overview.md).
+For more information on onboarding VM insights agents, see [Enable VM insights overview](vminsights-enable-overview.md).
azure-resource-manager Manage Resources Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/manage-resources-cli.md
Title: Manage resources - Azure CLI description: Use Azure CLI and Azure Resource Manager to manage your resources. Shows how to deploy and delete resources. - Last updated 02/11/2019- # Manage Azure resources by using Azure CLI Learn how to use Azure CLI with [Azure Resource Manager](overview.md) to manage your Azure resources. For managing resource groups, see [Manage Azure resource groups by using Azure CLI](manage-resource-groups-cli.md).
-Other articles about managing resources:
--- [Manage Azure resources by using the Azure portal](manage-resources-portal.md)-- [Manage Azure resources by using Azure PowerShell](manage-resources-powershell.md)- ## Deploy resources to an existing resource group You can deploy Azure resources directly by using Azure CLI, or deploy a Resource Manager template to create Azure resources.
azure-resource-manager Manage Resources Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/manage-resources-portal.md
Title: Manage resources - Azure portal description: Use the Azure portal and Azure Resource Manager to manage your resources. Shows how to deploy and delete resources. - Last updated 02/11/2019- # Manage Azure resources by using the Azure portal Learn how to use the [Azure portal](https://portal.azure.com) with [Azure Resource Manager](overview.md) to manage your Azure resources. For managing resource groups, see [Manage Azure resource groups by using the Azure portal](manage-resource-groups-portal.md).
-Other articles about managing resources:
--- [Manage Azure resources by using Azure CLI](manage-resources-cli.md)-- [Manage Azure resources by using Azure PowerShell](manage-resources-powershell.md)- [!INCLUDE [Handle personal data](../../../includes/gdpr-intro-sentence.md)] ## Deploy resources to a resource group
After you have created a Resource Manager template, you can use the Azure portal
## Open resources
-Azure resources are organized by Azure services and by resource groups. The following procedures shows how to open a storage account called **mystorage0207**. The virtual machine resides in a resource group called **mystorage0207rg**.
+Azure resources are organized by Azure services and by resource groups. The following procedures show how to open a storage account called **mystorage0207**. The virtual machine resides in a resource group called **mystorage0207rg**.
To open a resource by the service type:
azure-resource-manager Manage Resources Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/manage-resources-powershell.md
Title: Manage resources - Azure PowerShell description: Use Azure PowerShell and Azure Resource Manager to manage your resources. Shows how to deploy and delete resources. - Last updated 02/11/2019- # Manage Azure resources by using Azure PowerShell
azure-resource-manager Manage Resources Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/manage-resources-python.md
+
+ Title: Manage resources - Python
+description: Use Python and Azure Resource Manager to manage your resources. Shows how to deploy and delete resources.
+ Last updated : 04/21/2023+++
+# Manage Azure resources by using Python
+
+Learn how to use Azure Python with [Azure Resource Manager](overview.md) to manage your Azure resources. For managing resource groups, see [Manage Azure resource groups by using Python](manage-resource-groups-python.md).
++
+## Deploy resources to an existing resource group
+
+You can deploy Azure resources directly by using Python, or deploy an Azure Resource Manager template (ARM template) to create Azure resources.
+
+### Deploy resources by using Python classes
+
+The following example creates a storage account by using [StorageManagementClient.storage_accounts.begin_create](/python/api/azure-mgmt-storage/azure.mgmt.storage.v2022_09_01.operations.storageaccountsoperations#azure-mgmt-storage-v2022-09-01-operations-storageaccountsoperations-begin-create). The name for the storage account must be unique across Azure.
+
+```python
+import os
+import random
+from azure.identity import AzureCliCredential
+from azure.mgmt.storage import StorageManagementClient
+
+credential = AzureCliCredential()
+subscription_id = os.environ["AZURE_SUBSCRIPTION_ID"]
+
+random_postfix = ''.join(random.choices('abcdefghijklmnopqrstuvwxyz1234567890', k=13))
+storage_account_name = "demostore" + random_postfix
+
+storage_client = StorageManagementClient(credential, subscription_id)
+
+storage_account_result = storage_client.storage_accounts.begin_create(
+ "exampleGroup",
+ storage_account_name,
+ {
+ "location": "westus",
+ "sku": {
+ "name": "Standard_LRS"
+ }
+ }
+)
+```
+
+### Deploy a template
+
+To deploy an ARM template, use [ResourceManagementClient.deployments.begin_create_or_update](/python/api/azure-mgmt-resource/azure.mgmt.resource.resources.v2022_09_01.operations.deploymentsoperations#azure-mgmt-resource-resources-v2022-09-01-operations-deploymentsoperations-begin-create-or-update). The following example deploys a [remote template](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.storage/storage-account-create). That template creates a storage account.
+
+```python
+import os
+from azure.identity import AzureCliCredential
+from azure.mgmt.resource import ResourceManagementClient
+from azure.mgmt.resource.resources.models import DeploymentMode
+
+credential = AzureCliCredential()
+subscription_id = os.environ["AZURE_SUBSCRIPTION_ID"]
+
+resource_client = ResourceManagementClient(credential, subscription_id)
+
+resource_group_name = input("Enter the resource group name: ")
+location = input("Enter the location (i.e. centralus): ")
+template_uri = "https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/quickstarts/microsoft.storage/storage-account-create/azuredeploy.json"
+
+rg_deployment_result = resource_client.deployments.begin_create_or_update(
+ resource_group_name,
+ "exampleDeployment",
+ {
+ "properties": {
+ "templateLink": {
+ "uri": template_uri
+ },
+ "parameters": {
+ "location": {
+ "value": location
+ },
+ },
+ "mode": DeploymentMode.incremental
+ }
+ }
+)
+```
+
+### Deploy a resource group and resources
+
+You can create a resource group and deploy resources to the group. For more information, see [Create resource group and deploy resources](../templates/deploy-to-subscription.md#resource-groups).
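As a minimal sketch (the group name and location are placeholder values), creating a resource group with the Python SDK looks like the following; you can then deploy resources or templates into it by using the earlier examples.

```python
import os
from azure.identity import AzureCliCredential
from azure.mgmt.resource import ResourceManagementClient

credential = AzureCliCredential()
subscription_id = os.environ["AZURE_SUBSCRIPTION_ID"]

resource_client = ResourceManagementClient(credential, subscription_id)

# Create (or update) the resource group that the resources are deployed into.
rg_result = resource_client.resource_groups.create_or_update(
    "exampleGroup",
    {"location": "westus"}
)

print(f"Provisioned resource group {rg_result.name} in {rg_result.location}")
```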
+
+### Deploy resources to multiple subscriptions or resource groups
+
+Typically, you deploy all the resources in your template to a single resource group. However, there are scenarios where you want to deploy a set of resources together but place them in different resource groups or subscriptions. For more information, see [Deploy Azure resources to multiple subscriptions or resource groups](../templates/deploy-to-resource-group.md).
+
+## Delete resources
+
+The following example shows how to delete a storage account.
+
+```python
+import os
+from azure.identity import AzureCliCredential
+from azure.mgmt.storage import StorageManagementClient
+
+credential = AzureCliCredential()
+subscription_id = os.environ["AZURE_SUBSCRIPTION_ID"]
+
+storage_client = StorageManagementClient(credential, subscription_id)
+
+resource_group_name = "demoGroup"
+storage_account_name = "demostore"
+
+storage_account = storage_client.storage_accounts.delete(
+ resource_group_name,
+ storage_account_name
+)
+```
+
+For more information about how Azure Resource Manager orders the deletion of resources, see [Azure Resource Manager resource group deletion](delete-resource-group.md).
+
+## Move resources
+
+The following example shows how to move a storage account from one resource group to another resource group.
+
+```python
+import os
+from azure.identity import AzureCliCredential
+from azure.mgmt.resource import ResourceManagementClient
+
+credential = AzureCliCredential()
+subscription_id = os.environ["AZURE_SUBSCRIPTION_ID"]
+
+resource_client = ResourceManagementClient(credential, subscription_id)
+
+src_resource_group_name = "sourceGroup"
+dest_resource_group_name = "destinationGroup"
+storage_account_name = "demostore"
+
+dest_resource_group = resource_client.resource_groups.get(dest_resource_group_name)
+
+storage_account = resource_client.resources.get(
+ src_resource_group_name, "Microsoft.Storage", "", "storageAccounts", storage_account_name, "2022-09-01"
+)
+
+move_result = resource_client.resources.begin_move_resources(
+ src_resource_group_name,
+ {
+ "resources": [storage_account.id],
+ "targetResourceGroup": dest_resource_group.id,
+ }
+)
+```
+
+For more information, see [Move resources to new resource group or subscription](move-resource-group-and-subscription.md).
+
+## Lock resources
+
+Locking prevents other users in your organization from accidentally deleting or modifying critical resources, such as an Azure subscription, a resource group, or a resource.
+
+The following example locks a website so that it can't be deleted.
+
+```python
+import os
+from azure.identity import AzureCliCredential
+from azure.mgmt.resource import ManagementLockClient
+
+credential = AzureCliCredential()
+subscription_id = os.environ["AZURE_SUBSCRIPTION_ID"]
+
+lock_client = ManagementLockClient(credential, subscription_id)
+
+lock_result = lock_client.management_locks.create_or_update_at_resource_level(
+ "exampleGroup",
+ "Microsoft.Web",
+ "",
+ "sites",
+ "examplesite",
+ "lockSite",
+ {
+ "level": "CanNotDelete"
+ }
+)
+```
+
+The following script gets all locks for a storage account:
+
+```python
+import os
+from azure.identity import AzureCliCredential
+from azure.mgmt.resource import ResourceManagementClient
+from azure.mgmt.resource.locks import ManagementLockClient
+
+credential = AzureCliCredential()
+subscription_id = os.environ["AZURE_SUBSCRIPTION_ID"]
+
+resource_client = ResourceManagementClient(credential, subscription_id)
+lock_client = ManagementLockClient(credential, subscription_id)
+
+resource_group_name = "demoGroup"
+storage_account_name = "demostore"
+
+# Optional: confirm that the storage account exists before listing its locks.
+resource = resource_client.resources.get_by_id(
+ f"/subscriptions/{subscription_id}/resourceGroups/{resource_group_name}/providers/Microsoft.Storage/storageAccounts/{storage_account_name}",
+ "2021-04-01"
+)
+
+locks = lock_client.management_locks.list_at_resource_level(
+ resource_group_name,
+ "Microsoft.Storage",
+ "",
+ "storageAccounts",
+ storage_account_name
+)
+
+for lock in locks:
+ print(f"Lock Name: {lock.name}, Lock Level: {lock.level}")
+```
+
+The following script deletes a lock of a web site:
+
+```python
+import os
+from azure.identity import AzureCliCredential
+from azure.mgmt.resource import ManagementLockClient
+
+credential = AzureCliCredential()
+subscription_id = os.environ["AZURE_SUBSCRIPTION_ID"]
+
+lock_client = ManagementLockClient(credential, subscription_id)
+
+lock_client.management_locks.delete_at_resource_level(
+ "exampleGroup",
+ "Microsoft.Web",
+ "",
+ "sites",
+ "examplesite",
+ "lockSite"
+)
+```
+
+For more information, see [Lock resources with Azure Resource Manager](lock-resources.md).
+
+## Tag resources
+
+Tagging helps you organize your resource groups and resources logically. For more information, see [Using tags to organize your Azure resources](tag-resources-python.md).
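+
+For example, a minimal sketch that applies tags to a resource group might look like the following, assuming the `azure-mgmt-resource` package and a hypothetical resource group named `demoGroup`:
+
+```python
+import os
+from azure.identity import AzureCliCredential
+from azure.mgmt.resource import ResourceManagementClient
+
+credential = AzureCliCredential()
+subscription_id = os.environ["AZURE_SUBSCRIPTION_ID"]
+
+resource_client = ResourceManagementClient(credential, subscription_id)
+
+# Apply tags to the resource group. The tags provided here replace the tags
+# already set on the group, so include any existing tags you want to keep.
+resource_group = resource_client.resource_groups.update(
+    "demoGroup",
+    {"tags": {"environment": "test", "department": "finance"}}
+)
+
+print(resource_group.tags)
+```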
+
+## Next steps
+
+- To learn more about Azure Resource Manager, see [Azure Resource Manager overview](overview.md).
+- To learn the Resource Manager template syntax, see [Understand the structure and syntax of Azure Resource Manager templates](../templates/syntax.md).
+- To learn how to develop templates, see the [step-by-step tutorials](../index.yml).
+- To view the Azure Resource Manager template schemas, see [template reference](/azure/templates/).
cognitive-services Tutorial Bing Web Search Single Page App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Web-Search/tutorial-bing-web-search-single-page-app.md
To use this app, an [Azure Cognitive Services account](../cognitive-services-api
Here are a few things that you'll need to run the app: * An Azure subscription - [Create one for free](https://azure.microsoft.com/free/cognitive-services/)
-* Once you have your Azure subscription, <a href="https://portal.azure.com/#create/Microsoft.CognitiveServicesBingSearch-v7" title="Create a Bing Search resource" target="_blank">create a Bing Search resource </a> in the Azure portal to get your key and endpoint. After it deploys, click **Go to resource**.
+* Once you have your Azure subscription, <a href="https://ms.portal.azure.com/#create/Microsoft.BingSearch" title="Create a Bing Search resource" target="_blank">create a Bing Search resource </a> in the Azure portal to get your key and endpoint. After it deploys, click **Go to resource**.
* Node.js 8 or later
Leave the command window open while you use the sample app; closing the window s
## Next steps > [!div class="nextstepaction"]
-> [Bing Web Search API v7 reference](/rest/api/cognitiveservices/bing-web-api-v7-reference)
+> [Bing Web Search API v7 reference](/rest/api/cognitiveservices/bing-web-api-v7-reference)
cognitive-services Image Analysis Client Library 40 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/quickstarts-sdk/image-analysis-client-library-40.md
keywords: computer vision, computer vision service
# Quickstart: Image Analysis 4.0
-Get started with the Image Analysis 4.0 REST API or client libraries to set up a basic image analysis script. The Analyze Image service provides you with AI algorithms for processing images and returning information on their visual features. Follow these steps to install a package to your application and try out the sample code.
+Get started with the Image Analysis 4.0 REST API or client libraries to set up a basic image analysis application. The Image Analysis service provides you with AI algorithms for processing images and returning information on their visual features. Follow these steps to install a package to your application and try out the sample code.
::: zone pivot="programming-language-csharp"
cognitive-services Advanced Prompt Engineering https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/openai/concepts/advanced-prompt-engineering.md
+
+ Title: Prompt engineering techniques with Azure OpenAI
+
+description: Learn about the options for how to use prompt engineering with GPT-3, ChatGPT, and GPT-4 models
++++ Last updated : 04/20/2023+
+keywords: ChatGPT, GPT-4, prompt engineering, meta prompts, chain of thought
+zone_pivot_groups: openai-prompt
++
+# Prompt engineering techniques
+
+This guide will walk you through some advanced techniques in prompt design and prompt engineering. If you're new to prompt engineering, we recommend starting with our [introduction to prompt engineering guide](prompt-engineering.md).
+
+While the principles of prompt engineering can be generalized across many different model types, certain models expect a specialized prompt structure. For Azure OpenAI GPT models, there are currently two distinct APIs where prompt engineering comes into play:
+
+- Chat Completion API.
+- Completion API.
+
+Each API requires input data to be formatted differently, which in turn impacts overall prompt design. The **Chat Completion API** supports the ChatGPT (preview) and GPT-4 (preview) models. These models are designed to take input formatted in a [specific chat-like transcript](../how-to/chatgpt.md) stored inside an array of dictionaries.
+
+The **Completion API** supports the older GPT-3 models and has much more flexible input requirements: it takes a string of text with no specific format rules. Technically, the ChatGPT (preview) models can be used with either API, but we strongly recommend using the Chat Completion API for these models. To learn more, consult our [in-depth guide on using these APIs](../how-to/chatgpt.md).
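+
+As a rough illustration of the difference, here's a minimal sketch that calls the Chat Completion API with the `openai` Python package (version 0.x). The endpoint, key, API version, and the `gpt-35-turbo` deployment name are placeholders for your own values:
+
+```python
+import os
+import openai
+
+openai.api_type = "azure"
+openai.api_base = "https://YOUR-RESOURCE-NAME.openai.azure.com/"
+openai.api_version = "2023-03-15-preview"
+openai.api_key = os.environ["AZURE_OPENAI_API_KEY"]
+
+# The Chat Completion API takes the prompt as an array of role/content dictionaries.
+response = openai.ChatCompletion.create(
+    engine="gpt-35-turbo",  # the name of your model deployment
+    messages=[
+        {"role": "system", "content": "You are a helpful assistant."},
+        {"role": "user", "content": "Summarize prompt engineering in one sentence."}
+    ]
+)
+
+print(response["choices"][0]["message"]["content"])
+```
+
+The older Completion API instead takes the same kind of content as a single prompt string.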
+
+The techniques in this guide teach you strategies for increasing the accuracy and grounding of the responses you generate with a Large Language Model (LLM). It is, however, important to remember that even when you use prompt engineering effectively, you still need to validate the responses the models generate. Just because a carefully crafted prompt worked well for a particular scenario doesn't necessarily mean it will generalize more broadly to other use cases. Understanding the [limitations of LLMs](/legal/cognitive-services/openai/transparency-note?context=%2Fazure%2Fcognitive-services%2Fopenai%2Fcontext%2Fcontext#limitations) is just as important as understanding how to leverage their strengths.
++++++
communication-services Azure Communication Services Azure Cognitive Services Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/call-automation/azure-communication-services-azure-cognitive-services-integration.md
You can also configure and bind your Communication Services and Cognitive Servic
### Add a Managed Identity to the ACS Resource
-1. Navigate to your ACS Resource in the Azure portal
+1. Navigate to your ACS Resource in the Azure portal.
2. Select the Identity tab. 3. Enable system assigned identity. This action begins the creation of the identity. A pop-up notification appears, notifying you that the request is being processed.
You can also configure and bind your Communication Services and Cognitive Servic
1. Navigate to your Azure Cognitive Service resource. 2. Select the "Access control (IAM)" tab. 3. Click the "+ Add" button.
-4. Select "Add role assignments" from the menu
+4. Select "Add role assignments" from the menu.
[![Add role from IAM](./media/add-role.png)](./media/add-role.png#lightbox)
You can also configure and bind your Communication Services and Cognitive Servic
6. For the field "Assign access to" choose the "User, group or service principal". 7. Press "+ Select members" and a side tab opens.
-8. Choose your Azure Communication Services subscription from the "Subscriptions" drop down menu and click "Select".
+8. Search for your Azure Communication Services resource name in the text box, select it when it appears, and then click "Select".
[![Select ACS resource](./media/select-acs-resource.png)](./media/select-acs-resource.png#lightbox)
You can also configure and bind your Communication Services and Cognitive Servic
### Option 2: Add role through ACS Identity tab
-1. Navigate to your ACS resource in the Azure portal
-2. Select Identity tab
-3. Click on "Azure role assignments"
+1. Navigate to your ACS resource in the Azure portal.
+2. Select Identity tab.
+3. Click on "Azure role assignments".
[![ACS role assignment](./media/add-role-acs.png)](./media/add-role-acs.png#lightbox)
-4. Click the "Add role assignment (Preview)" button, which opens the "Add role assignment (Preview)" tab
+4. Click the "Add role assignment (Preview)" button, which opens the "Add role assignment (Preview)" tab.
5. Select the "Resource group" for "Scope".
-6. Select the "Subscription" // The CogSvcs subscription?
-7. Select the "Resource Group" containing the Cognitive Service
-8. Select the "Role" "Cognitive Services User"
+6. Select the "Subscription".
+7. Select the "Resource Group" containing the Cognitive Service.
+8. Select the "Role" "Cognitive Services User".
[![ACS role information](./media/acs-roles-cognitive-services.png)](./media/acs-roles-cognitive-services.png#lightbox)
-10. Click Save
+10. Click Save.
Your Communication Service has now been linked to your Azure Cognitive Service resource.
communication-services Domain Validation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/how-tos/telephony/domain-validation.md
Title: Azure Communication Services direct routing domain validation
-description: A how-to page about domain validation for direct routing.
+description: Learn how to validate a domain for direct routing.
-# Domain validation
+# Validate a domain for direct routing
-This page describes the process of domain name ownership validation. Fully Qualified Domain Name (FQDN) consists of two parts: host name and domain name. For example, if your session border controller (SBC) name is `sbc1.contoso.com`, then `sbc1` would be a host name, while `contoso.com` would be a domain name. If there's an SBC with FQDN of `acs.sbc1.testing.contoso.com`, `acs` would be a host name, and `sbc1.testing.contoso.com` would be a domain name. To use direct routing, you need to validate that you own a domain part of your FQDN.
+This article describes the process of validating domain name ownership by using the Azure portal.
-Azure Communication Services direct routing configuration consists of the following steps:
+A fully qualified domain name (FQDN) consists of two parts: host name and domain name. For example, if your session border controller (SBC) name is `sbc1.contoso.com`, then `sbc1` is the host name and `contoso.com` is the domain name. If an SBC has an FQDN of `acs.sbc1.testing.contoso.com`, then `acs` is the host name and `sbc1.testing.contoso.com` is the domain name.
-- Verify domain ownership for your SBC FQDN-- Configure SBC FQDN and port number-- Create voice routing rules
+To use direct routing in Azure Communication Services, you need to validate that you own the domain part of your SBC FQDN. After that, you can configure the SBC FQDN and port number and then create voice routing rules.
-## Domain ownership validation
+When you're verifying the domain name portion of the SBC FQDN, keep in mind that the `*.onmicrosoft.com` and `*.azure.com` domain names aren't supported. For example, if you have two domain names, `contoso.com` and `contoso.onmicrosoft.com`, use `sbc.contoso.com` as the SBC name.
-Make sure to add and verify domain name portion of the FQDN and keep in mind that the `*.onmicrosoft.com` and `*.azure.com` domain names aren't supported for the SBC FQDN domain name. For example, if you have two domain names, `contoso.com` and `contoso.onmicrosoft.com`, use `sbc.contoso.com` as the SBC name. If using a subdomain, make sure this subdomain is also added and verified. For example, if you want to use `sbc.acs.contoso.com`, then `acs.contoso.com` needs to be registered.
+If you're using a subdomain, make sure that this subdomain is also added and verified. For example, if you want to use `sbc.acs.contoso.com`, you need to register `acs.contoso.com`.
-### Domain verification using Azure portal
+## Add a new domain name
-#### Add new domain name
-
-1. Open Azure portal and navigate to your [Communication Service resource](../../quickstarts/create-communication-resource.md).
-1. In the left navigation pane, select Direct routing under Voice Calling - PSTN.
-1. Select Connect domain from the Domains tab.
-1. Enter the domain part of SBCΓÇÖs fully qualified domain name.
+1. Open the Azure portal and go to your [Communication Services resource](../../quickstarts/create-communication-resource.md).
+1. On the left pane, under **Voice Calling - PSTN**, select **Direct routing**.
+1. On the **Domains** tab, select **Connect domain**.
+1. Enter the domain part of the SBC FQDN.
1. Reenter the domain name.
-1. Select Confirm and then select Add.
+1. Select **Confirm**, and then select **Add**.
[![Screenshot of adding a custom domain.](./media/direct-routing-add-domain.png)](./media/direct-routing-add-domain.png#lightbox)
-#### Verify domain ownership
+## Verify domain ownership
-1. Select Verify next to new domain that is now visible in DomainΓÇÖs list.
-1. Azure portal generates a value for a TXT record, you need to add that record to
+1. On the **Domains** tab, select **Verify** next to the new domain that you created.
+1. The Azure portal generates a value for a TXT record. Add that record to your domain's registrar or DNS hosting provider with the provided value.
-[![Screenshot of verifying a custom domain.](./media/direct-routing-verify-domain-2.png)](./media/direct-routing-verify-domain-2.png#lightbox)
+ [![Screenshot of verifying a custom domain.](./media/direct-routing-verify-domain-2.png)](./media/direct-routing-verify-domain-2.png#lightbox)
->[!Note]
->It might take up to 30 minutes for new DNS record to propagate on the Internet
+ It might take up to 30 minutes for a new DNS record to propagate on the internet.
-3. Select Next. If everything is set up correctly, you should see Domain status changed to *Verified* next to the added domain.
+1. Select **Next**. If you set up everything correctly, **Domain status** should change to **Verified** next to the added domain.
-[![Screenshot of a verified domain.](./media/direct-routing-domain-verified.png)](./media/direct-routing-domain-verified.png#lightbox)
+ [![Screenshot of a verified domain.](./media/direct-routing-domain-verified.png)](./media/direct-routing-domain-verified.png#lightbox)
-#### Remove domain from Azure Communication Services
+## Remove a domain from Azure Communication Services
-If you want to remove a domain from your Azure Communication Services direct routing configuration, select the checkbox fir a corresponding domain name, and select *Remove*.
+If you want to remove a domain from your Azure Communication Services direct routing configuration, select the checkbox for a corresponding domain name, and then select **Remove**.
[![Screenshot of removing a custom domain.](./media/direct-routing-remove-domain.png)](./media/direct-routing-remove-domain.png#lightbox)
-## Next steps:
+## Next steps
### Conceptual documentation
If you want to remove a domain from your Azure Communication Services direct rou
### Quickstarts - [Outbound call to a phone number](../../quickstarts/telephony/pstn-call.md)-- [Redirect inbound telephony calls with Call Automation](../../quickstarts/call-automation/redirect-inbound-telephony-calls.md)
+- [Redirect inbound telephony calls with Call Automation](../../quickstarts/call-automation/redirect-inbound-telephony-calls.md)
communication-services Add Multiple Senders Mgmt Sdks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/email/add-multiple-senders-mgmt-sdks.md
+
+ Title: How to add and remove sender addresses in Azure Communication Services using the ACS Management Client Libraries
+
+description: Learn about adding and removing sender addresses in Azure Communication Services using the ACS Management Client Libraries
++++ Last updated : 04/19/2023++
+zone_pivot_groups: acs-js-csharp-java-python
++
+# Quickstart: How to add and remove sender addresses in Azure Communication Services using the ACS Management Client Libraries
+
+In this quickstart, you learn how to add and remove sender addresses in Azure Communication Services by using the ACS Management Client Libraries.
++++
communication-services Voice Routing Sdk Config https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/telephony/voice-routing-sdk-config.md
# Quickstart: Configure voice routing programmatically
-Configure outbound voice routing rules for Azure Communication Services direct routing
+Configure outbound voice routing rules for Azure Communication Services direct routing.
::: zone pivot="platform-azp" [!INCLUDE [Azure portal](./includes/voice-routing-sdk-portal.md)]
Configure outbound voice routing rules for Azure Communication Services direct r
## Clean up resources
-If you want to clean up and remove a Communication Services subscription, you can delete the resource or resource group. Deleting the resource group also deletes any other resources associated with it. Learn more about [cleaning up resources](../create-communication-resource.md#clean-up-resources).
+If you want to clean up and remove a Communication Services subscription, you can delete the resource or resource group. Deleting the resource group also deletes any other resources associated with it. [Learn more about cleaning up resources](../create-communication-resource.md#clean-up-resources).
## Next steps
For more information, see the following articles:
- Learn about [Calling SDK capabilities](../voice-video-calling/getting-started-with-calling.md). - Learn more about [how calling works](../../concepts/voice-video-calling/about-call-types.md).-- Call to a telephone number [quickstart](./pstn-call.md).
+- Call to a telephone number by [following a quickstart](./pstn-call.md).
cosmos-db Get Started Change Data Capture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/get-started-change-data-capture.md
When you check any of the `Capture intermediate updates`, `Capture Deletes`, and
| 4 | TTL_DELETE | Capture Transactional store TTLs | If you have to differentiate the TTL-deleted records from documents deleted by users or applications, you have to check both the `Capture intermediate updates` and `Capture Transactional store TTLs` options. Then you have to adapt your CDC processes, applications, or queries to use `__usr_opType` according to your business needs.+
+>[!TIP]
+> If downstream consumers need to restore the order of updates when the "Capture intermediate updates" option is checked, the system timestamp `_ts` field can be used as the ordering field.
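+
+For example, a downstream consumer could restore per-document ordering by sorting on `_ts`. The following is a minimal sketch with hypothetical in-memory records; in practice you'd apply the same ordering in your sink query or job:
+
+```python
+# Hypothetical change records captured with the "Capture intermediate updates" option enabled.
+changes = [
+    {"id": "item1", "_ts": 1682300450},
+    {"id": "item1", "_ts": 1682300100},
+    {"id": "item2", "_ts": 1682300300},
+]
+
+# Restore per-document ordering by using the system timestamp `_ts` as the ordering field.
+ordered = sorted(changes, key=lambda doc: (doc["id"], doc["_ts"]))
+print(ordered)
+```
+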
## Create and configure sink settings for update and delete operations
cost-management-billing Programmatically Create Subscription Enterprise Agreement https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/programmatically-create-subscription-enterprise-agreement.md
Last updated 04/07/2023 -+ # Programmatically create Azure Enterprise Agreement subscriptions with the latest APIs
-This article helps you programmatically create Azure Enterprise Agreement (EA) subscriptions for an EA billing account using the most recent API versions. If you are still using the older preview version, see [Programmatically create Azure subscriptions legacy APIs](programmatically-create-subscription-preview.md).
+This article helps you programmatically create Azure Enterprise Agreement (EA) subscriptions for an EA billing account using the most recent API versions. If you are still using the older preview version, see [Programmatically create Azure subscriptions legacy APIs](programmatically-create-subscription-preview.md).
In this article, you learn how to create subscriptions programmatically using Azure Resource Manager.
A user must have an Owner role on an Enrollment Account to create a subscription
* The Enterprise Administrator of your enrollment can [make you an Account Owner](https://ea.azure.com/helpdocs/addNewAccount) (sign in required) which makes you an Owner of the Enrollment Account. * An existing Owner of the Enrollment Account can [grant you access](/rest/api/billing/2019-10-01-preview/enrollmentaccountroleassignments/put).
-To use a service principal (SPN) to create an EA subscription, an Owner of the Enrollment Account must [grant that service principal the ability to create subscriptions](/rest/api/billing/2019-10-01-preview/enrollmentaccountroleassignments/put).
+To use a service principal (SPN) to create an EA subscription, an Owner of the Enrollment Account must [grant that service principal the ability to create subscriptions](/rest/api/billing/2019-10-01-preview/enrollmentaccountroleassignments/put).
When using an SPN to create subscriptions, use the ObjectId of the Azure AD Enterprise application as the Service Principal ID using [Azure Active Directory PowerShell](/powershell/module/azuread/get-azureadserviceprincipal?view=azureadps-2.0&preserve-view=true ) or [Azure CLI](/cli/azure/ad/sp#az-ad-sp-list). You can also use the steps at [Find your SPN and tenant ID](assign-roles-azure-service-principals.md#find-your-spn-and-tenant-id) to find the object ID in the Azure portal for an existing SPN. For more information about the EA role assignment API request, see [Assign roles to Azure Enterprise Agreement service principal names](assign-roles-azure-service-principals.md). The article includes a list of roles (and role definition IDs) that can be assigned to an SPN. > [!NOTE]
- > - Ensure that you use the correct API version to give the enrollment account owner permissions. For this article and for the APIs documented in it, use the [2019-10-01-preview](/rest/api/billing/2019-10-01-preview/enrollmentaccountroleassignments/put) API.
+ > - Ensure that you use the correct API version to give the enrollment account owner permissions. For this article and for the APIs documented in it, use the [2019-10-01-preview](/rest/api/billing/2019-10-01-preview/enrollmentaccountroleassignments/put) API.
> - If you're migrating to use the newer APIs, your previous configuration made with the [2015-07-01 version](grant-access-to-create-subscription.md) doesn't automatically convert for use with the newer APIs. > - The Enrollment Account information is only visible when the user's role is Account Owner. When a user has multiple roles, the API uses the user's least restrictive role.
The API response lists all enrollment accounts you have access to:
```
-The values for a billing scope and `id` are the same thing. The `id` for your enrollment account is the billing scope under which the subscription request is initiated. ItΓÇÖs important to know the ID because itΓÇÖs a required parameter that you use later in the article to create a subscription.
+The values for a billing scope and `id` are the same thing. The `id` for your enrollment account is the billing scope under which the subscription request is initiated. It's important to know the ID because it's a required parameter that you use later in the article to create a subscription.
### [PowerShell](#tab/azure-powershell)
Response lists all enrollment accounts you have access to
}, ```
-The values for a billing scope and `id` are the same thing. The `id` for your enrollment account is the billing scope under which the subscription request is initiated. ItΓÇÖs important to know the ID because itΓÇÖs a required parameter that you use later in the article to create a subscription.
+The values for a billing scope and `id` are the same thing. The `id` for your enrollment account is the billing scope under which the subscription request is initiated. It's important to know the ID because it's a required parameter that you use later in the article to create a subscription.
## Create subscriptions under a specific enrollment account
-The following example creates a subscription named *Dev Team Subscription* in the enrollment account selected in the previous step.
+The following example creates a subscription named *Dev Team Subscription* in the enrollment account selected in the previous step.
Using one of the following methods, you'll create a subscription alias name. We recommend that when you create the alias name, you:
If you have multiple user roles in addition to the Account Owner role, then you
Call the PUT API to create a subscription creation request/alias. ```json
-PUT https://management.azure.com/providers/Microsoft.Subscription/aliases/sampleAlias?api-version=2021-10-01
+PUT https://management.azure.com/providers/Microsoft.Subscription/aliases/sampleAlias?api-version=2021-10-01
``` In the request body, provide as the `billingScope` the `id` from one of your `enrollmentAccounts`.
-```json
+```json
{ "properties": {
- "billingScope": "/providers/Microsoft.Billing/BillingAccounts/1234567/enrollmentAccounts/7654321",
- "DisplayName": "Dev Team Subscription", //Subscription Display Name
- "Workload": "Production"
+ "billingScope": "/providers/Microsoft.Billing/BillingAccounts/1234567/enrollmentAccounts/7654321",
+ "DisplayName": "Dev Team Subscription", //Subscription Display Name
+ "Workload": "Production"
} } ```
An in-progress status is returned as an `Accepted` state under `provisioningStat
### [PowerShell](#tab/azure-powershell)
-To install the latest version of the module that contains the `New-AzSubscriptionAlias` cmdlet, run `Install-Module Az.Subscription`. To install a recent version of PowerShellGet, see [Get PowerShellGet Module](/powershell/scripting/gallery/installing-psget).
+To install the latest version of the module that contains the `New-AzSubscriptionAlias` cmdlet, run `Install-Module Az.Subscription`. To install a recent version of PowerShellGet, see [Get PowerShellGet Module](/powershell/gallery/powershellget/install-powershellget).
-Run the following [New-AzSubscriptionAlias](/powershell/module/az.subscription/get-azsubscriptionalias) command, using the billing scope `"/providers/Microsoft.Billing/BillingAccounts/1234567/enrollmentAccounts/7654321"`.
+Run the following [New-AzSubscriptionAlias](/powershell/module/az.subscription/get-azsubscriptionalias) command, using the billing scope `"/providers/Microsoft.Billing/BillingAccounts/1234567/enrollmentAccounts/7654321"`.
```azurepowershell-interactive New-AzSubscriptionAlias -AliasName "sampleAlias" -SubscriptionName "Dev Team Subscription" -BillingScope "/providers/Microsoft.Billing/BillingAccounts/1234567/enrollmentAccounts/7654321" -Workload "Production"
You get the subscriptionId as part of the response from the command.
First, install the extension by running `az extension add --name account` and `az extension add --name alias`.
-Run the following [az account alias create](/cli/azure/account/alias#az-account-alias-create) command and provide `billing-scope` and `id` from one of your `enrollmentAccounts`.
+Run the following [az account alias create](/cli/azure/account/alias#az-account-alias-create) command and provide `billing-scope` and `id` from one of your `enrollmentAccounts`.
```azurecli-interactive az account alias create --name "sampleAlias" --billing-scope "/providers/Microsoft.Billing/billingAccounts/1234567/enrollmentAccounts/654321" --display-name "Dev Team Subscription" --workload "Production"
The following ARM template creates a subscription. For `billingScope`, provide t
}, "resources": [ {
- "scope": "/",
+ "scope": "/",
"name": "[parameters('subscriptionAliasName')]", "type": "Microsoft.Subscription/aliases", "apiVersion": "2021-10-01",
cost-management-billing Programmatically Create Subscription Microsoft Customer Agreement https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/programmatically-create-subscription-microsoft-customer-agreement.md
Last updated 03/27/2023 -+ # Programmatically create Azure subscriptions for a Microsoft Customer Agreement with the latest APIs
-This article helps you programmatically create Azure subscriptions for a Microsoft Customer Agreement using the most recent API versions. If you are still using the older preview version, see [Programmatically create Azure subscriptions with legacy APIs](programmatically-create-subscription-preview.md).
+This article helps you programmatically create Azure subscriptions for a Microsoft Customer Agreement using the most recent API versions. If you are still using the older preview version, see [Programmatically create Azure subscriptions with legacy APIs](programmatically-create-subscription-preview.md).
In this article, you learn how to create subscriptions programmatically using Azure Resource Manager.
Use the `displayName` property to identify the billing account for which you wan
```azurepowershell Get-AzBillingAccount ```
-You will get back a list of all billing accounts that you have access to
+You will get back a list of all billing accounts that you have access to
```json Name : 5e98e158-xxxx-xxxx-xxxx-xxxxxxxxxxxx:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx_xxxx-xx-xx
Use the `displayName` property to identify the billing account for which you wan
```azurecli az billing account list ```
-You will get back a list of all billing accounts that you have access to
+You will get back a list of all billing accounts that you have access to
```json [
GET https://management.azure.com/providers/Microsoft.Billing/billingAccounts/5e9
} ```
-Use the `id` property to identify the invoice section for which you want to create subscriptions. Copy the entire string. For example, `/providers/Microsoft.Billing/billingAccounts/5e98e158-xxxx-xxxx-xxxx-xxxxxxxxxxxx:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx_xxxx-xx-xx/billingProfiles/AW4F-xxxx-xxx-xxx/invoiceSections/SH3V-xxxx-xxx-xxx`.
+Use the `id` property to identify the invoice section for which you want to create subscriptions. Copy the entire string. For example, `/providers/Microsoft.Billing/billingAccounts/5e98e158-xxxx-xxxx-xxxx-xxxxxxxxxxxx:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx_xxxx-xx-xx/billingProfiles/AW4F-xxxx-xxx-xxx/invoiceSections/SH3V-xxxx-xxx-xxx`.
### [PowerShell](#tab/azure-powershell)
HasReadAccess : True
BillTo : CompanyName : Contoso AddressLine1 : One Microsoft Way
-AddressLine2 :
+AddressLine2 :
City : Redmond Region : WA Country : US
Use the `id` property under the invoice section object to identify the invoice s
## Create a subscription for an invoice section
-The following example creates a subscription named *Dev Team subscription* for the *Development* invoice section. The subscription is billed to the *Contoso Billing Profile* billing profile and appears on the *Development* section of its invoice. You use the copied billing scope from the previous step: `/providers/Microsoft.Billing/billingAccounts/5e98e158-xxxx-xxxx-xxxx-xxxxxxxxxxxx:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx_xxxx-xx-xx/billingProfiles/AW4F-xxxx-xxx-xxx/invoiceSections/SH3V-xxxx-xxx-xxx`.
+The following example creates a subscription named *Dev Team subscription* for the *Development* invoice section. The subscription is billed to the *Contoso Billing Profile* billing profile and appears on the *Development* section of its invoice. You use the copied billing scope from the previous step: `/providers/Microsoft.Billing/billingAccounts/5e98e158-xxxx-xxxx-xxxx-xxxxxxxxxxxx:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx_xxxx-xx-xx/billingProfiles/AW4F-xxxx-xxx-xxx/invoiceSections/SH3V-xxxx-xxx-xxx`.
### [REST](#tab/rest)
PUT https://management.azure.com/providers/Microsoft.Subscription/aliases/sampl
{ "properties": {
- "billingScope": "/providers/Microsoft.Billing/billingAccounts/5e98e158-xxxx-xxxx-xxxx-xxxxxxxxxxxx:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx_xxxx-xx-xx/billingProfiles/AW4F-xxxx-xxx-xxx/invoiceSections/SH3V-xxxx-xxx-xxx",
- "DisplayName": "Dev Team subscription",
- "Workload": "Production"
+ "billingScope": "/providers/Microsoft.Billing/billingAccounts/5e98e158-xxxx-xxxx-xxxx-xxxxxxxxxxxx:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx_xxxx-xx-xx/billingProfiles/AW4F-xxxx-xxx-xxx/invoiceSections/SH3V-xxxx-xxx-xxx",
+ "DisplayName": "Dev Team subscription",
+ "Workload": "Production"
} } ```
An in-progress status is returned as an `Accepted` state under `provisioningStat
### [PowerShell](#tab/azure-powershell)
-To install the latest version of the module that contains the `New-AzSubscriptionAlias` cmdlet, run `Install-Module Az.Subscription`. To install a recent version of PowerShellGet, see [Get PowerShellGet Module](/powershell/scripting/gallery/installing-psget).
+To install the latest version of the module that contains the `New-AzSubscriptionAlias` cmdlet, run `Install-Module Az.Subscription`. To install a recent version of PowerShellGet, see [Get PowerShellGet Module](/powershell/gallery/powershellget/install-powershellget).
-Run the following [New-AzSubscriptionAlias](/powershell/module/az.subscription/new-azsubscriptionalias) command and the billing scope `"/providers/Microsoft.Billing/billingAccounts/5e98e158-xxxx-xxxx-xxxx-xxxxxxxxxxxx:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx_xxxx-xx-xx/billingProfiles/AW4F-xxxx-xxx-xxx/invoiceSections/SH3V-xxxx-xxx-xxx"`.
+Run the following [New-AzSubscriptionAlias](/powershell/module/az.subscription/new-azsubscriptionalias) command and the billing scope `"/providers/Microsoft.Billing/billingAccounts/5e98e158-xxxx-xxxx-xxxx-xxxxxxxxxxxx:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx_xxxx-xx-xx/billingProfiles/AW4F-xxxx-xxx-xxx/invoiceSections/SH3V-xxxx-xxx-xxx"`.
```azurepowershell New-AzSubscriptionAlias -AliasName "sampleAlias" -SubscriptionName "Dev Team Subscription" -BillingScope "/providers/Microsoft.Billing/billingAccounts/5e98e158-xxxx-xxxx-xxxx-xxxxxxxxxxxx:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx_xxxx-xx-xx/billingProfiles/AW4F-xxxx-xxx-xxx/invoiceSections/SH3V-xxxx-xxx-xxx" -Workload "Production"
The following template creates a subscription. For `billingScope`, provide the i
}, "resources": [ {
- "scope": "/",
+ "scope": "/",
"name": "[parameters('subscriptionAliasName')]", "type": "Microsoft.Subscription/aliases", "apiVersion": "2021-10-01",
cost-management-billing Programmatically Create Subscription Microsoft Partner Agreement https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/programmatically-create-subscription-microsoft-partner-agreement.md
Last updated 04/05/2023 -+ # Programmatically create Azure subscriptions for a Microsoft Partner Agreement with the latest APIs
-This article helps you programmatically create Azure subscriptions for a Microsoft Partner Agreement using the most recent API versions. If you are still using the older preview version, see [Programmatically create Azure subscriptions with legacy APIs](programmatically-create-subscription-preview.md).
+This article helps you programmatically create Azure subscriptions for a Microsoft Partner Agreement using the most recent API versions. If you are still using the older preview version, see [Programmatically create Azure subscriptions with legacy APIs](programmatically-create-subscription-preview.md).
In this article, you learn how to create subscriptions programmatically using Azure Resource Manager.
Use the `description` property to identify the reseller who is associated with t
## Create a subscription for a customer
-The following example creates a subscription named *Dev Team subscription* for *Fabrikam toys* and associate *Wingtip* reseller to the subscription. You use the copied billing scope from previous step: `/providers/Microsoft.Billing/billingAccounts/99a13315-xxxx-xxxx-xxxx-xxxxxxxxxxxx:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx_xxxx-xx-xx/customers/2281f543-xxxx-xxxx-xxxx-xxxxxxxxxxxx`.
+The following example creates a subscription named *Dev Team subscription* for *Fabrikam toys* and associates the *Wingtip* reseller with the subscription. You use the billing scope that you copied in the previous step: `/providers/Microsoft.Billing/billingAccounts/99a13315-xxxx-xxxx-xxxx-xxxxxxxxxxxx:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx_xxxx-xx-xx/customers/2281f543-xxxx-xxxx-xxxx-xxxxxxxxxxxx`.
### [REST](#tab/rest)
PUT https://management.azure.com/providers/Microsoft.Subscription/aliases/sampl
{ "properties": {
- "billingScope": "/providers/Microsoft.Billing/billingAccounts/99a13315-xxxx-xxxx-xxxx-xxxxxxxxxxxx:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx_xxxx-xx-xx/customers/2281f543-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
- "DisplayName": "Dev Team subscription",
- "Workload": "Production"
+ "billingScope": "/providers/Microsoft.Billing/billingAccounts/99a13315-xxxx-xxxx-xxxx-xxxxxxxxxxxx:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx_xxxx-xx-xx/customers/2281f543-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
+ "DisplayName": "Dev Team subscription",
+ "Workload": "Production"
} } ```
GET https://management.azure.com/providers/Microsoft.Subscription/aliases/sample
} ```
-An in-progress status is returned as an `Accepted` state under `provisioningState`.
+An in-progress status is returned as an `Accepted` state under `provisioningState`.
Pass the optional *resellerId* copied from the second step in the request body of the API. ### [PowerShell](#tab/azure-powershell)
-To install the latest version of the module that contains the `New-AzSubscriptionAlias` cmdlet, run `Install-Module Az.Subscription`. To install a recent version of PowerShellGet, see [Get PowerShellGet Module](/powershell/scripting/gallery/installing-psget).
+To install the latest version of the module that contains the `New-AzSubscriptionAlias` cmdlet, run `Install-Module Az.Subscription`. To install a recent version of PowerShellGet, see [Get PowerShellGet Module](/powershell/gallery/powershellget/install-powershellget).
-Run the following New-AzSubscriptionAlias command, using the billing scope `"/providers/Microsoft.Billing/billingAccounts/99a13315-xxxx-xxxx-xxxx-xxxxxxxxxxxx:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx_xxxx-xx-xx/customers/2281f543-xxxx-xxxx-xxxx-xxxxxxxxxxxx"`.
+Run the following New-AzSubscriptionAlias command, using the billing scope `"/providers/Microsoft.Billing/billingAccounts/99a13315-xxxx-xxxx-xxxx-xxxxxxxxxxxx:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx_xxxx-xx-xx/customers/2281f543-xxxx-xxxx-xxxx-xxxxxxxxxxxx"`.
```azurepowershell New-AzSubscriptionAlias -AliasName "sampleAlias" -SubscriptionName "Dev Team Subscription" -BillingScope "/providers/Microsoft.Billing/billingAccounts/99a13315-xxxx-xxxx-xxxx-xxxxxxxxxxxx:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx_xxxx-xx-xx/customers/2281f543-xxxx-xxxx-xxxx-xxxxxxxxxxxx" -Workload 'Production"
Pass the optional *resellerId* copied from the second step in the `New-AzSubscri
First, install the extension by running `az extension add --name account` and `az extension add --name alias`.
-Run the following [az account alias create](/cli/azure/account/alias#az-account-alias-create) command.
+Run the following [az account alias create](/cli/azure/account/alias#az-account-alias-create) command.
```azurecli az account alias create --name "sampleAlias" --billing-scope "/providers/Microsoft.Billing/billingAccounts/99a13315-xxxx-xxxx-xxxx-xxxxxxxxxxxx:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx_xxxx-xx-xx/customers/2281f543-xxxx-xxxx-xxxx-xxxxxxxxxxxx" --display-name "Dev Team Subscription" --workload "Production"
The following ARM template creates a subscription. For `billingScope`, provide t
}, "resources": [ {
- "scope": "/",
+ "scope": "/",
"name": "[parameters('subscriptionAliasName')]", "type": "Microsoft.Subscription/aliases", "apiVersion": "2021-10-01",
cost-management-billing Programmatically Create Subscription Preview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/programmatically-create-subscription-preview.md
Last updated 04/05/2023 -+
In the response, as part of the header `Location`, you get back a url that you c
### [PowerShell](#tab/azure-powershell)
-To install the latest version of the module that contains the `New-AzSubscription` cmdlet, run `Install-Module Az.Subscription`. To install a recent version of PowerShellGet, see [Get PowerShellGet Module](/powershell/scripting/gallery/installing-psget).
+To install the latest version of the module that contains the `New-AzSubscription` cmdlet, run `Install-Module Az.Subscription`. To install a recent version of PowerShellGet, see [Get PowerShellGet Module](/powershell/gallery/powershellget/install-powershellget).
Run the [New-AzSubscription](/powershell/module/az.subscription) command below, replacing `<enrollmentAccountObjectId>` with the `ObjectId` collected in the first step (```747ddfe5-xxxx-xxxx-xxxx-xxxxxxxxxxxx```). To specify owners, see [how to get user object IDs](grant-access-to-create-subscription.md#userObjectId).
Use the `description` property to identify the reseller to associate with the su
### Create a subscription for a customer
-The following example creates a subscription named *Dev Team subscription* for *Fabrikam toys* and associate *Wingtip* reseller to the subscription.
+The following example creates a subscription named *Dev Team subscription* for *Fabrikam toys* and associates the *Wingtip* reseller with the subscription.
Make the following request, replacing `<customerId>` with the `id` copied from the second step (```/providers/Microsoft.Billing/billingAccounts/99a13315-xxxx-xxxx-xxxx-xxxxxxxxxxxx:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx_xxxx-xx-xx/customers/2281f543-xxxx-xxxx-xxxx-xxxxxxxxxxxx```). Pass the optional *resellerId* copied from the second step in the request parameters of the API.
defender-for-cloud Alerts Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/alerts-reference.md
Microsoft Defender for Containers provides security alerts on the cluster level
| **Access from a suspicious IP address**<br>(Storage.Blob_SuspiciousIp<br>Storage.Files_SuspiciousIp) | Indicates that this storage account has been successfully accessed from an IP address that is considered suspicious. This alert is powered by Microsoft Threat Intelligence.<br>Learn more about [Microsoft's threat intelligence capabilities](https://go.microsoft.com/fwlink/?linkid=2128684).<br>Applies to: Azure Blob Storage, Azure Files, Azure Data Lake Storage Gen2 | Pre Attack | High/Medium/Low | | **Phishing content hosted on a storage account**<br>(Storage.Blob_PhishingContent<br>Storage.Files_PhishingContent) | A URL used in a phishing attack points to your Azure Storage account. This URL was part of a phishing attack affecting users of Microsoft 365.<br>Typically, content hosted on such pages is designed to trick visitors into entering their corporate credentials or financial information into a web form that looks legitimate.<br>This alert is powered by Microsoft Threat Intelligence.<br>Learn more about [Microsoft's threat intelligence capabilities](https://go.microsoft.com/fwlink/?linkid=2128684).<br>Applies to: Azure Blob Storage, Azure Files | Collection | High | | **Storage account identified as source for distribution of malware**<br>(Storage.Files_WidespreadeAm) | Antimalware alerts indicate that an infected file(s) is stored in an Azure file share that is mounted to multiple VMs. If attackers gain access to a VM with a mounted Azure file share, they can use it to spread malware to other VMs that mount the same share.<br>Applies to: Azure Files | Execution | Medium |
-| **The access level of a potentially sensitive storage blob container was changed to allow unauthenticated public access**<br>(Storage.Blob_OpenACL) | The alert indicates that someone has changed the access level of a blob container in the storage account, which may contain sensitive data, to the 'Container' level, to allow unauthenticated (anonymous) public access. The change was made through the Azure portal.<br>The blob container is flagged with possible sensitive data because, when statistically, blob containers or storage accounts with similar names have low public exposure.<br>Applies to: Azure Blob (Standard general-purpose v2, Azure Data Lake Storage Gen2 or premium block blobs) storage accounts. | Collection | Medium |
+| **The access level of a potentially sensitive storage blob container was changed to allow unauthenticated public access**<br>(Storage.Blob_OpenACL) | The alert indicates that someone has changed the access level of a blob container in the storage account, which may contain sensitive data, to the 'Container' level, to allow unauthenticated (anonymous) public access. The change was made through the Azure portal.<br>Based on statistical analysis, the blob container is flagged as possibly containing sensitive data. This analysis suggests that blob containers or storage accounts with similar names are typically not exposed to public access.<br>Applies to: Azure Blob (Standard general-purpose v2, Azure Data Lake Storage Gen2, or premium block blobs) storage accounts. | Collection | Medium |
| **Authenticated access from a Tor exit node**<br>(Storage.Blob_TorAnomaly<br>Storage.Files_TorAnomaly) | One or more storage container(s) / file share(s) in your storage account were successfully accessed from an IP address known to be an active exit node of Tor (an anonymizing proxy). Threat actors use Tor to make it difficult to trace the activity back to them. Authenticated access from a Tor exit node is a likely indication that a threat actor is trying to hide their identity.<br>Applies to: Azure Blob Storage, Azure Files, Azure Data Lake Storage Gen2 | Initial Access / Pre Attack | High/Medium | | **Access from an unusual location to a storage account**<br>(Storage.Blob_GeoAnomaly<br>Storage.Files_GeoAnomaly) | Indicates that there was a change in the access pattern to an Azure Storage account. Someone has accessed this account from an IP address considered unfamiliar when compared with recent activity. Either an attacker has gained access to the account, or a legitimate user has connected from a new or unusual geographic location. An example of the latter is remote maintenance from a new application or developer.<br>Applies to: Azure Blob Storage, Azure Files, Azure Data Lake Storage Gen2 | Initial Access | High/Medium/Low | | **Unusual unauthenticated access to a storage container**<br>(Storage.Blob_AnonymousAccessAnomaly) | This storage account was accessed without authentication, which is a change in the common access pattern. Read access to this container is usually authenticated. This might indicate that a threat actor was able to exploit public read access to storage container(s) in this storage account(s).<br>Applies to: Azure Blob Storage | Initial Access | High/Low |
VM_ThreatIntelCommandLineSuspectDomain | A possible connection to malicious loca
VM_ThreatIntelSuspectLogon | A logon from a malicious IP has been detected | High VM_VbScriptHttpObjectAllocation| VBScript HTTP object allocation detected | High
+## Alerts for Defender for APIs
+
+**Alert (alert type)** | **Description** | **MITRE tactics** | **Severity**
+--- | --- | --- | ---
+**(Preview) Suspicious population-level spike in API traffic to an API endpoint**<br/> (API_PopulationSpikeInAPITraffic) | A suspicious spike in API traffic was detected at one of the API endpoints. The detection system used historical traffic patterns to establish a baseline for routine API traffic volume between all IPs and the endpoint, with the baseline being specific to API traffic for each status code (such as 200 Success). The detection system flagged an unusual deviation from this baseline leading to the detection of suspicious activity. | Impact | Medium
+**(Preview) Suspicious spike in API traffic from a single IP address to an API endpoint**<br/> (API_SpikeInAPITraffic) | A suspicious spike in API traffic was detected from a client IP to the API endpoint. The detection system used historical traffic patterns to establish a baseline for routine API traffic volume to the endpoint coming from a specific IP to the endpoint. The detection system flagged an unusual deviation from this baseline leading to the detection of suspicious activity. | Impact | Medium
+**(Preview) Unusually large response payload transmitted between a single IP address and an API endpoint**<br/> (API_SpikeInPayload) | A suspicious spike in API response payload size was observed for traffic between a single IP and one of the API endpoints. Based on historical traffic patterns from the last 30 days, Defender for APIs learns a baseline that represents the typical API response payload size between a specific IP and API endpoint. The learned baseline is specific to API traffic for each status code (e.g., 200 Success). The alert was triggered because an API response payload size deviated significantly from the historical baseline. | Initial access | Medium
+**(Preview) Unusually large request body transmitted between a single IP address and an API endpoint**<br/> (API_SpikeInPayload) | A suspicious spike in API request body size was observed for traffic between a single IP and one of the API endpoints. Based on historical traffic patterns from the last 30 days, Defender for APIs learns a baseline that represents the typical API request body size between a specific IP and API endpoint. The learned baseline is specific to API traffic for each status code (e.g., 200 Success). The alert was triggered because an API request size deviated significantly from the historical baseline. | Initial access | Medium
+**(Preview) Suspicious spike in latency for traffic between a single IP address and an API endpoint**<br/> (API_SpikeInLatency) | A suspicious spike in latency was observed for traffic between a single IP and one of the API endpoints. Based on historical traffic patterns from the last 30 days, Defender for APIs learns a baseline that represents the routine API traffic latency between a specific IP and API endpoint. The learned baseline is specific to API traffic for each status code (e.g., 200 Success). The alert was triggered because an API call latency deviated significantly from the historical baseline. | Initial access | Medium
+**(Preview) API requests spray from a single IP address to an unusually large number of distinct API endpoints**<br/>(API_SprayInRequests) | A single IP was observed making API calls to an unusually large number of distinct endpoints. Based on historical traffic patterns from the last 30 days, Defender for APIs learns a baseline that represents the typical number of distinct endpoints called by a single IP across 20-minute windows. The alert was triggered because a single IP's behavior deviated significantly from the historical baseline. | Discovery | Medium
+**(Preview) Parameter enumeration on an API endpoint**<br/> (API_ParameterEnumeration) | A single IP was observed enumerating parameters when accessing one of the API endpoints. Based on historical traffic patterns from the last 30 days, Defender for APIs learns a baseline that represents the typical number of distinct parameter values used by a single IP when accessing this endpoint across 20-minute windows. The alert was triggered because a single client IP recently accessed an endpoint using an unusually large number of distinct parameter values. | Initial access | Medium
+**(Preview) Distributed parameter enumeration on an API endpoint**<br/> (API_DistributedParameterEnumeration) | The aggregate user population (all IPs) was observed enumerating parameters when accessing one of the API endpoints. Based on historical traffic patterns from the last 30 days, Defender for APIs learns a baseline that represents the typical number of distinct parameter values used by the user population (all IPs) when accessing an endpoint across 20-minute windows. The alert was triggered because the user population recently accessed an endpoint using an unusually large number of distinct parameter values. | Initial access | Medium
+**(Preview) Parameter value(s) with anomalous data types in an API call**<br/> (API_UnseenParamType) | A single IP was observed accessing one of your API endpoints and using parameter values of a low probability data type (e.g., string, integer, etc.). Based on historical traffic patterns from the last 30 days, Defender for APIs learns the expected data types for each API parameter. The alert was triggered because an IP recently accessed an endpoint using a previously low probability data type as a parameter input. | Impact | Medium
+**(Preview) Previously unseen parameter used in an API call**<br/> (API_UnseenParam) | A single IP was observed accessing one of the API endpoints using a previously unseen or out-of-bounds parameter in the request. Based on historical traffic patterns from the last 30 days, Defender for APIs learns a set of expected parameters associated with calls to an endpoint. The alert was triggered because an IP recently accessed an endpoint using a previously unseen parameter. | Impact | Medium
+**(Preview) Access from a Tor exit node to an API endpoint**<br/> (API_AccessFromTorExitNode) | An IP address from the Tor network accessed one of your API endpoints. Tor is a network that allows people to access the Internet while keeping their real IP hidden. Though there are legitimate uses, it is frequently used by attackers to hide their identity when they target people's systems online. | Pre-attack | Medium
+**(Preview) API Endpoint access from suspicious IP**<br/> (API_AccessFromSuspiciousIP) | An IP address accessing one of your API endpoints was identified by Microsoft Threat Intelligence as having a high probability of being a threat. While observing malicious Internet traffic, this IP came up as involved in attacking other online targets. | Pre-attack | High
+**(Preview) Suspicious User Agent detected**<br/> (API_AccessFromSuspiciousUserAgent) | The user agent of a request accessing one of your API endpoints contained anomalous values indicative of an attempt at remote code execution. This does not mean that any of your API endpoints have been breached, but it does suggest that an attempted attack is underway. | Execution | Medium
+ ## Next steps
-To learn more about Microsoft Defender for Cloud security alerts, see the following:
- [Security alerts in Microsoft Defender for Cloud](alerts-overview.md) - [Manage and respond to security alerts in Microsoft Defender for Cloud](managing-and-responding-alerts.md)-- [Continuously export Defender for Cloud data](continuous-export.md)
+- [Continuously export Defender for Cloud data](continuous-export.md)
defender-for-cloud Concept Cloud Security Posture Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/concept-cloud-security-posture-management.md
Defender for Cloud continually assesses your resources, subscriptions and organi
- **Foundational CSPM capabilities** - None - **Defender Cloud Security Posture Management (CSPM)** - Agentless scanning requires the **Subscription Owner** to enable the plan. Anyone with a lower level of authorization can enable the Defender CSPM plan but the agentless scanner won't be enabled by default due to lack of permissions. Attack path analysis and security explorer won't be populated with vulnerabilities because the agentless scanner is disabled.
-For commercial and national cloud coverage, see the [features supported in different Azure cloud environments](support-matrix-defender-for-cloud.md#features-supported-in-different-azure-cloud-environments).
+For commercial and national cloud coverage, review [features supported in different Azure cloud environments](support-matrix-cloud-environment.md).
+ ## Defender CSPM plan options
defender-for-cloud Defender For Apis Deploy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-apis-deploy.md
+
+ Title: Enable Defender for APIs in Defender for Cloud
+description: Learn about deploying the Defender for APIs plan in Defender for Cloud
++++ Last updated : 03/23/2023+
+# Onboard Defender for APIs
+
+This article describes how to deploy the [Microsoft Defender for APIs](defender-for-apis-introduction.md) plan in the Microsoft Defender for Cloud portal. Defender for APIs is currently in preview.
+
+## Before you start
+
+- Review [Defender for APIs support, permissions, and requirements](defender-for-apis-introduction.md) before you begin deployment.
+- Make sure that Defender for Cloud is enabled in your Azure subscription. You enable Defender for APIs at the subscription level.
+- Ensure that APIs you want to secure are published in [Azure API Management](/azure/api-management/api-management-key-concepts). Follow [these instructions](/azure/api-management/get-started-create-service-instance) to set up Azure API Management, or use the PowerShell sketch after this list.
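
If you don't yet have an instance to publish APIs to, the following Az PowerShell sketch creates one. It's an illustration only: the resource group, service name, location, organization, and admin email are placeholder values, and the Developer tier is chosen simply because it's one of the tiers the plan supports.

```powershell
# Sketch: create a resource group and a Developer-tier API Management instance.
# All names and the admin email are placeholders - replace them with your own values.
# Provisioning an API Management instance can take 30 minutes or more.
New-AzResourceGroup -Name 'rg-apim-demo' -Location 'westeurope'

$apimParams = @{
    ResourceGroupName = 'rg-apim-demo'
    Name              = 'contoso-apim-demo'   # must be globally unique
    Location          = 'westeurope'
    Organization      = 'Contoso'
    AdminEmail        = 'admin@contoso.com'
    Sku               = 'Developer'
}
New-AzApiManagement @apimParams
```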
+
+> [!NOTE]
+> This article describes how to enable and onboard the Defender for APIs plan in the Defender for Cloud portal. Alternately, you can [enable Defender for APIs within an API Management instance](../api-management/protect-with-defender-for-apis.md) in the Azure portal.
+
+## Enable the plan
+
+1. Sign in to the [Azure portal](https://portal.azure.com/), and in Defender for Cloud, select **Environment settings**.
+1. Select the subscription that contains the managed APIs that you want to protect.
+1. In the **APIs** plan, select **On**. Then select **Save**.
+
+ :::image type="content" source="media/defender-for-apis-deploy/enable-plan.png" alt-text="Screenshot that shows how to turn on the Defender for APIs plan in the portal." lightbox="media/defender-for-apis-deploy/enable-plan.png":::
+
+> [!NOTE]
+> After enabling Defender for APIs, onboarded APIs take up to 50 minutes to appear in the **Recommendations** tab. Security insights are available in the **Workload protections** > **API security** dashboard within 40 minutes of onboarding.
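
If you prefer to script plan enablement instead of using the portal steps above, the following Az PowerShell sketch shows one way to do it. It assumes the Microsoft.Security pricing name for this plan is `Api`; list the available pricing names first to confirm the exact value in your environment.

```powershell
# Sketch: enable the Defender for APIs plan on the current subscription.
# Assumption: the pricing name is 'Api' - confirm with Get-AzSecurityPricing first.
Get-AzSecurityPricing | Select-Object Name, PricingTier

# Turn the plan on by setting it to the Standard (paid) tier.
Set-AzSecurityPricing -Name 'Api' -PricingTier 'Standard'
```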
+
+## Onboard APIs
+
+1. In the Defender for Cloud portal, select **Recommendations**.
+1. Search for *Defender for APIs*.
+1. Under **Enable enhanced security features**, select the security recommendation **Azure API Management APIs should be onboarded to Defender for APIs**.
+
+ :::image type="content" source="media/defender-for-apis-deploy/api-recommendations.png" alt-text="Screenshot that shows how to turn on the Defender for APIs plan from the recommendation." lightbox="media/defender-for-apis-deploy/api-recommendations.png":::
++
+1. On the recommendation page, you can review the recommendation severity, update interval, description, and remediation steps.
+1. Review the resources in scope for the recommendations:
+ - **Unhealthy resources**: Resources that aren't onboarded to Defender for APIs.
+ - **Healthy resources**: API resources that are onboarded to Defender for APIs.
+ - **Not applicable resources**: API resources that aren't applicable for protection.
+
+1. In **Unhealthy resources**, select the APIs that you want to protect with Defender for APIs.
+1. Select **Fix**.
+
+ :::image type="content" source="media/defender-for-apis-deploy/api-recommendation-details.png" alt-text="Screenshot that shows the recommendation details for turning on the plan." lightbox="media/defender-for-apis-deploy/api-recommendation-details.png":::
+
+1. In **Fixing resources**, review the selected APIs, and select **Fix resources**.
+
+ :::image type="content" source="media/defender-for-apis-deploy/fix-resources.png" alt-text="Screenshot that shows how to fix unhealthy resources." lightbox="media/defender-for-apis-deploy/fix-resources.png":::
+
+1. Verify that remediation was successful.
+
+ :::image type="content" source="media/defender-for-apis-deploy/fix-resources-confirm.png" alt-text="Screenshot that confirms that remediation was successful." lightbox="media/defender-for-apis-deploy/fix-resources-confirm.png":::
+
+## Track onboarded API resources
+
+After onboarding the API resources, you can track their status in the Defender for Cloud portal > **Workload protections** > **API security**.
+++
+## Next steps
+
+[Review](defender-for-apis-posture.md) API threats and security posture.
+
defender-for-cloud Defender For Apis Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-apis-introduction.md
+
+ Title: Overview of the Microsoft Defender for APIs plan in Microsoft Defender for Cloud
+description: Learn about the benefits of the Microsoft Defender for APIs plan in Microsoft Defender for Cloud
Last updated : 04/05/2023+++++
+# About Microsoft Defender for APIs
+
+Microsoft Defender for APIs is a plan provided by [Microsoft Defender for Cloud](defender-for-cloud-introduction.md) that offers full lifecycle protection, detection, and response coverage for APIs.
+
+Defender for APIs helps you to gain visibility into business-critical APIs. You can investigate and improve your API security posture, prioritize vulnerability fixes, and quickly detect active real-time threats.
++
+> [!IMPORTANT]
+> Defender for APIs is currently in PREVIEW.
+> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+
+Defender for APIs currently provides security for APIs published in Azure API Management. Defender for APIs can be onboarded in the Defender for Cloud portal, or within the API Management instance in the Azure portal.
+
+## What can I do with Defender for APIs?
+
+- **Inventory**: In a single dashboard, get an aggregated view of all managed APIs.
+- **Security findings**: Analyze API security findings, including information about external, unused, or unauthenticated APIs.
+- **Security posture**: Review and implement security recommendations to improve API security posture, and harden at-risk surfaces.
+- **API data classification**: Classify APIs that receive or respond with sensitive data, to support risk prioritization.
+- **Real-time threat detection**: Ingest API traffic and monitor it with runtime anomaly detection, using machine-learning and rule-based analytics, to detect API security threats, including the [OWASP Top 10](https://owasp.org/www-project-top-ten/) critical threats.
+- **Defender CSPM integration**: Integrate with Cloud Security Graph in [Defender Cloud Security Posture Management (CSPM)](concept-cloud-security-posture-management.md) for API visibility and risk assessment across your organization.
+- **Azure API Management integration**: With the Defender for APIs plan enabled, you can receive API security recommendations and alerts in the Azure API Management portal.
+- **SIEM integration**: Integrate with security information and event management (SIEM) systems, making it easier for security teams to investigate with existing threat response workflows. [Learn more](tutorial-security-incident.md).
+
+## Reviewing API security findings
+
+Review the inventory and security findings for onboarded APIs in the Defender for Cloud API Security dashboard. The dashboard shows the number of onboarded resources, broken down by API collections, endpoints, and Azure API Management services.
++
+You can drill down into API collection to review security findings for onboarded API endpoints.
++
+API endpoint information includes:
+
+- **Endpoint name**: The name of the API endpoint/operation as defined in Azure API Management.
+- **Endpoint**: The URL path of the API endpoint, and the HTTP method.
+- **Last called date (UTC)**: The date when API traffic was last observed going to/from API endpoints (in UTC time zone).
+- **30 days unused**: Shows whether API endpoints have received any API call traffic in the last 30 days. APIs that haven't received any traffic in the last 30 days are marked as Inactive.
+- **Authentication**: Shows when a monitored API endpoint has no authentication. Defender for APIs assesses the authentication state using the subscription keys, JSON web token (JWT), and client certificate configured in Azure API Management. If none of these authentication mechanisms are present or executed, the API is marked as "unauthenticated".
+- **External traffic observed date**: The date when external API traffic was observed going to/from the API endpoint.
+- **Data classification**: Classifies API request and response bodies based on supported data types.
+
+> [!NOTE]
+> API endpoints that haven't received any traffic since onboarding to Defender for APIs display the status *Awaiting data* in the API dashboard.
+
+## Investigating API recommendations
+
+Use recommendations to improve your security posture, harden API configurations, identify critical API risks, and mitigate issues by risk priority.
+
+Defender for APIs provides a number of recommendations, including recommendations to onboard APIs to the Defender for APIs plan, disable and remove unused APIs, and best practice recommendations for security, authentication, and access control.
+
+[Review the recommendations reference](recommendations-reference.md).
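
If you want to pull API-related recommendations programmatically instead of browsing the portal, one option is Azure Resource Graph. The sketch below is illustrative only; the display-name filter is an assumption based on the recommendation names in the reference.

```powershell
# Sketch: list Defender for Cloud assessments whose display name mentions APIs.
# Requires the Az.ResourceGraph module (Install-Module -Name Az.ResourceGraph).
$query = @"
securityresources
| where type == 'microsoft.security/assessments'
| extend displayName = tostring(properties.displayName),
         statusCode  = tostring(properties.status.code)
| where displayName contains 'Defender for APIs' or displayName contains 'API Management'
| project displayName, statusCode, resourceGroup, subscriptionId
"@
Search-AzGraph -Query $query | Format-Table
```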
+++
+## Detecting runtime threats
+
+Defender for APIs monitors runtime traffic and threat intelligence feeds, and issues threat detection alerts. API alerts detect the top 10 OWASP threats, data exfiltration, volumetric attacks, anomalous and suspicious API parameters, traffic and IP access anomalies, and usage patterns.
+
+[Review the security alerts reference](alerts-reference.md).
+
+## Responding to threats
+
+Act on recommendations and alerts to mitigate threats and risk. Defender for Cloud alerts and recommendations can be exported into SIEM systems such as Microsoft Sentinel, for investigation within existing threat response workflows for fast and efficient remediation. [Learn more](export-to-siem.md).
+
+## Investigating Cloud Security Graph insights
++
+[Cloud Security Graph](concept-attack-path.md) in the Defender CSPM plan analyzes assets and connections across your organization to expose risks, vulnerabilities, and possible lateral movement paths.
+
+**When Defender for APIs is enabled together with the Defender CSPM plan**, you can use Cloud Security Explorer to proactively and efficiently query your organizational information to locate, identify, and remediate API assets, security issues, and risks.
++
+## Next steps
+
+[Review support and prerequisites](defender-for-apis-prepare.md) for Defender for APIs deployment.
defender-for-cloud Defender For Apis Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-apis-manage.md
+
+ Title: Manage the Defender for APIs plan in Microsoft Defender for Cloud
+description: Manage your Defender for APIs deployment in Microsoft Defender for Cloud
++++ Last updated : 03/23/2023+
+# Manage your Defender for APIs deployment
+
+This article describes how to manage your [Microsoft Defender for APIs](defender-for-apis-introduction.md) plan deployment in Microsoft Defender for Cloud. Management tasks include offboarding APIs from Defender for APIs.
+
+Defender for APIs is currently in preview.
++
+## Offboard an API
+
+1. In the Defender for Cloud portal, select **Workload protections**.
+1. Select **API security**.
+1. Next to the API you want to offboard from Defender for APIs, select the ellipsis (...) > **Remove**.
+
+    :::image type="content" source="media/defender-for-apis-manage/api-remove.png" alt-text="Screenshot that shows how to remove an API from Defender for APIs." lightbox="media/defender-for-apis-manage/api-remove.png":::
+++
+## Next steps
+
+[Learn about](defender-for-apis-introduction.md) Defender for APIs.
++
defender-for-cloud Defender For Apis Posture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-apis-posture.md
+
+ Title: Investigate your API security findings and posture in Microsoft Defender for Cloud
+description: Learn how to analyze your API security alerts and posture in Microsoft Defender for Cloud
++++ Last updated : 03/23/2023+
+# Investigate API findings, recommendations, and alerts
+
+This article describes how to investigate API security findings, alerts, and security posture recommendations for APIs protected by [Microsoft Defender for APIs](defender-for-apis-introduction.md). Defender for APIs is currently in preview.
+
+## Before you start
+
+- [Onboard your API resources](defender-for-apis-deploy.md) to Defender for APIs.
+- To explore security risks within your organization using Cloud Security Explorer, the Defender Cloud Security Posture Management (CSPM) plan must be enabled. [Learn more](concept-cloud-security-posture-management.md).
+
+## View recommendations and runtime alerts
+
+1. In the Defender for Cloud portal, select **Workload protections**.
+1. Select **API security (Preview)**.
+1. In the **API Security** dashboard, select an API collection.
+
+    :::image type="content" source="media/defender-for-apis-posture/api-collection-details.png" alt-text="Screenshot that shows the onboarded API collections." lightbox="media/defender-for-apis-posture/api-collection-details.png":::
+
+1. On the API collection page, to drill down into an API endpoint, select the ellipsis (...) > **View resource**.
+
+    :::image type="content" source="media/defender-for-apis-posture/view-resource.png" alt-text="Screenshot that shows API endpoint details." lightbox="media/defender-for-apis-posture/view-resource.png":::
+
+1. In the **Resource health** page, review the endpoint settings.
+1. In the **Recommendations** tab, review recommendation details and status.
+1. On the **Alerts** tab, review security alerts for the endpoint. Defender for APIs monitors API traffic to and from endpoints, to provide runtime protection against suspicious behavior and malicious attacks. A PowerShell sketch for listing these alerts outside the portal follows these steps.
+
+ :::image type="content" source="media/defender-for-apis-posture/resource-health.png" alt-text="Screenshot that shows the health of an endpoint." lightbox="media/defender-for-apis-posture/resource-health.png":::
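
As a rough illustration of reviewing the same alerts outside the portal, here's a hedged Az PowerShell sketch. The `API_` prefix filter is an assumption based on the alert types listed in the alerts reference (for example, `API_AccessFromSuspiciousIP`), and property names can vary slightly between Az.Security module versions.

```powershell
# Sketch: list security alerts and keep only the API-related alert types.
# Assumptions: API alert types start with 'API_' (per the alerts reference), and the
# alert objects expose AlertType/AlertDisplayName/Severity (inspect with Select-Object *).
Get-AzSecurityAlert |
    Where-Object { $_.AlertType -like 'API_*' } |
    Select-Object AlertDisplayName, AlertType, Severity
```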
+
+## Create sample security alerts
+
+In Defender for Cloud you can use sample alerts to evaluate your Defender for Cloud plans, and validate your security configuration. [Follow these instructions](alert-validation.md#generate-sample-security-alerts) to set up sample alerts, and select the relevant APIs within your subscriptions.
+
+## Build queries in Cloud Security Explorer
+
+In Defender CSPM, [Cloud Security Graph](concept-attack-path.md) collects data to provide a map of assets and connections across your organization, to expose security risks, vulnerabilities, and possible lateral movement paths.
+
+When the Defender CSPM plan is enabled together with Defender for APIs, you can use Cloud Security Explorer to query Cloud Security Graph, to identify, review, and analyze API security risks across your organization.
+
+1. In the Defender for Cloud portal, select **Cloud Security Explorer**.
+1. You can build your own query, or select the API query template.
+ 1. To build your own query, in **What would you like to search?** select the **APIs** category. You can query:
+ - API collections that contain one or more API endpoints.
+ - API endpoints for Azure API Management operations.
+
+ :::image type="content" source="media/defender-for-apis-posture/api-insights.png" alt-text="Screenshot that shows the predefined API query." lightbox="media/defender-for-apis-posture/api-insights.png":::
+
+    The search results display each API resource with its associated insights, so that you can review, prioritize, and fix any issues.
+
+ Alternatively, you can select the predefined query **Unauthenticated API endpoints containing sensitive data are outside the virtual network** > **Open query**. The query returns all unauthenticated API endpoints that contain sensitive data and aren't part of the Azure API management network.
+
+ :::image type="content" source="media/defender-for-apis-posture/predefined-query.png" alt-text="Screenshot that shows a predefined API query.":::
+
+
+## Next steps
+
+[Manage](defender-for-apis-manage.md) your Defender for APIs deployment.
+++
defender-for-cloud Defender For Apis Prepare https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-apis-prepare.md
+
+ Title: Support and prerequisites for deploying the Defender for APIs plan in Microsoft Defender for Cloud
+description: Learn about the requirements for Defender for APIs deployment in Microsoft Defender for Cloud
++++ Last updated : 03/23/2023++
+# Support and prerequisites for Defender for APIs deployment
+
+Review the requirements on this page before setting up [Microsoft Defender for APIs](defender-for-apis-introduction.md). Defender for APIs is currently in preview.
+
+## Cloud and region support
+
+Defender for APIs is in public preview in the Azure commercial cloud, in these regions:
+- Asia (Southeast Asia, East Asia)
+- Australia (Australia East, Australia Southeast, Australia Central, Australia Central 2)
+- Brazil (Brazil South, Brazil Southeast)
+- Canada (Canada Central, Canada East)
+- Europe (West Europe, North Europe)
+- India (Central India, South India, West India)
+- Japan (Japan East, Japan West)
+- UK (UK South, UK West)
+- US (East US, East US 2, West US, West US 2, West US 3, Central US, North Central US, South Central US, West Central US, East US 2 EUAP, Central US EUAP)
+
+Review the latest cloud support information for Defender for Cloud plans and features in the [cloud support matrix](support-matrix-cloud-environment.md).
++
+## API support
+
+**Feature** | **Supported**
+ |
+Availability | This feature is available in the Premium, Standard, Basic, and Developer tiers of Azure API Management.
+API gateways | Azure API Management<br/><br/> Defender for APIs currently doesn't onboard APIs that are exposed using the API Management [self-hosted gateway](../api-management/self-hosted-gateway-overview.md), or managed using API Management [workspaces](../api-management/workspaces-overview.md).
+API types | Currently, Defender for APIs discovers and analyzes REST APIs.
+Multi-region support | In multi-region Azure API Management instances, some ML-based detections and security insights (data classification, authentication check, unused and external APIs) aren't supported in secondary regions. In such cases, data residency requirements are still met.
+
+## Defender CSPM integration
+
+To explore API security risks using Cloud Security Explorer, the Defender Cloud Security Posture Management (CSPM) plan must be enabled. [Learn more](concept-cloud-security-posture-management.md).
++
+## Onboarding requirements
+
+Onboarding requirements for Defender for APIs are as follows.
+
+**Requirement** | **Details**
+ |
+API Management instance | At least one API Management instance in an Azure subscription. Defender for APIs is enabled at the level of a subscription.<br/><br/> One or more supported APIs must be imported to the API Management instance.
+Azure account | You need an Azure account to sign in to the Azure portal.
+Onboarding permissions | To enable and onboard Defender for APIs, you need the Owner or Contributor role on the Azure subscriptions, resource groups, or Azure API Management instance that you want to secure. If you don't have the Contributor role, you need these roles:<br/><br/> - Security Admin role for full access in Defender for Cloud.<br/> - Security Reader role to view inventory and recommendations in Defender for Cloud.<br/><br/> A role-assignment sketch follows this table.
+Onboarding location | You can [enable Defender for APIs in the Defender for Cloud portal](defender-for-apis-deploy.md), or in the [Azure API Management portal](../api-management/protect-with-defender-for-apis.md).
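
If you need to grant the roles called out in the table, a minimal Az PowerShell sketch follows; the sign-in name and subscription ID are placeholders.

```powershell
# Sketch: grant the built-in Security Admin role at subscription scope.
# The user and subscription ID are placeholders - replace them with your own values.
New-AzRoleAssignment `
    -SignInName 'user@contoso.com' `
    -RoleDefinitionName 'Security Admin' `
    -Scope '/subscriptions/00000000-0000-0000-0000-000000000000'

# Or grant read-only access to inventory and recommendations in Defender for Cloud.
New-AzRoleAssignment `
    -SignInName 'user@contoso.com' `
    -RoleDefinitionName 'Security Reader' `
    -Scope '/subscriptions/00000000-0000-0000-0000-000000000000'
```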
+
+## Next steps
+
+[Enable and onboard](defender-for-apis-deploy.md) Defender for APIs.
+
defender-for-cloud Defender For Containers Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-containers-introduction.md
Defender for Containers assists you with the three core aspects of container sec
- [**Vulnerability assessment**](#vulnerability-assessment) - Vulnerability assessment and management tools for images stored in Azure Container Registry and Elastic Container Registry -- [**Run-time threat protection for nodes and clusters**](#run-time-protection-for-kubernetes-nodes-and-clusters) - Threat protection for clusters and Linux nodes generates security alerts for suspicious activities.
+- [**Run-time threat protection for nodes and clusters**](#run-time-protection-for-kubernetes-nodes-and-clusters) - Threat protection for clusters and nodes generates security alerts for suspicious activities.
You can learn more by watching this video from the Defender for Cloud in the Field video series: [Microsoft Defender for Containers](episode-three.md).
Learn more about:
## Run-time protection for Kubernetes nodes and clusters
-Defender for Containers provides real-time threat protection for your containerized environments and generates alerts for suspicious activities. You can use this information to quickly remediate security issues and improve the security of your containers. Threat protection at the cluster level is provided by the Defender agent and analysis of the Kubernetes audit logs. Examples of events at this level include exposed Kubernetes dashboards, creation of high-privileged roles, and the creation of sensitive mounts.
+Defender for Containers provides real-time threat protection for [supported containerized environments](support-matrix-defender-for-containers.md) and generates alerts for suspicious activities. You can use this information to quickly remediate security issues and improve the security of your containers. Threat protection at the cluster level is provided by the Defender agent and analysis of the Kubernetes audit logs. Examples of events at this level include exposed Kubernetes dashboards, creation of high-privileged roles, and the creation of sensitive mounts.
Defender for Containers also includes host-level threat detection with over 60 Kubernetes-aware analytics, AI, and anomaly detections based on your runtime workload. For a full list of the cluster level alerts, see the [reference table of alerts](alerts-reference.md#alerts-k8scluster).
defender-for-cloud Episode Twenty Eight https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/episode-twenty-eight.md
+
+ Title: Zero Trust and Defender for Cloud | Defender for Cloud in the Field
+
+description: Learn about Zero Trust best practices and Zero Trust visibility and analytics tools
+ Last updated : 04/20/2023++
+# Zero Trust and Defender for Cloud | Defender for Cloud in the field
+
+**Episode description**: In this episode of Defender for Cloud in the Field, Mekonnen Kassa joins Yuri Diogenes to discuss the importance of using Zero Trust. Mekonnen covers the principles of Zero Trust, the importance of switching your mindset to adopt this strategy and how Defender for Cloud can help. Mekonnen also talks about best practices to get started, visibility and analytics as part of Zero Trust, and what tools can be leveraged to achieve it.
+<br>
+<br>
+<iframe src="https://aka.ms/docs/player?id=125af768-01bd-45ac-8503-4dba5eb53ff7" width="1080" height="530" allowFullScreen="true" frameBorder="0"></iframe>
+
+- [01:21](/shows/mdc-in-the-field/zero-trust#time=01m21s) - What is Zero Trust?
+- [04:12](/shows/mdc-in-the-field/zero-trust#time=04m12s) - Current challenges with multicloud and hybrid workloads
+- [06:47](/shows/mdc-in-the-field/zero-trust#time=06m47s) - How can Defender for Cloud help with Zero Trust?
+- [11:38](/shows/mdc-in-the-field/zero-trust#time=11m38s) - Azure Network Security Controls that can help with Zero Trust
+- [14:50](/shows/mdc-in-the-field/zero-trust#time=14m50s) - Visibility and Analytics for Zero Trust
+- [18:09](/shows/mdc-in-the-field/zero-trust#time=18m09s) - Final recommendations to start your Zero Trust journey
++
+## Recommended resources
+ - Learn more about [Zero Trust](https://www.microsoft.com/security/business/zero-trust)
+ - Subscribe to [Microsoft Security on YouTube](https://www.youtube.com/playlist?list=PL3ZTgFEc7LysiX4PfHhdJPR7S8mGO14YS)
+ - Join our [Tech Community](https://aka.ms/SecurityTechCommunity)
+ - For more about [Microsoft Security](https://msft.it/6002T9HQY)
+
+- Follow us on social media:
+
+ - [LinkedIn](https://www.youtube.com/redirect?event=video_description&redir_token=QUFFLUhqbFk5TXZuQld2NlpBRV9BQlJqMktYSm95WWhCZ3xBQ3Jtc0tsQU13MkNPWGNFZzVuem5zc05wcnp0VGxybHprVTkwS2todWw0b0VCWUl4a2ZKYVktNGM1TVFHTXpmajVLcjRKX0cwVFNJaDlzTld4MnhyenBuUGRCVmdoYzRZTjFmYXRTVlhpZGc4MHhoa3N6ZDhFMA&q=https%3A%2F%2Fwww.linkedin.com%2Fshowcase%2Fmicrosoft-security%2F)
+ - [Twitter](https://twitter.com/msftsecurity)
+
+- Join our [Tech Community](https://aka.ms/SecurityTechCommunity)
+
+- Learn more about [Microsoft Security](https://msft.it/6002T9HQY)
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [New AWS Connector in Microsoft Defender for Cloud](episode-one.md)
defender-for-cloud Episode Twenty Seven https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/episode-twenty-seven.md
Title: Demystifying Defender for Servers | Defender for Cloud in the field
description: Learn about different deployment options in Defender for Servers Previously updated : 03/05/2023 Last updated : 04/19/2023 # Demystifying Defender for Servers | Defender for Cloud in the field
Last updated 03/05/2023
## Next steps > [!div class="nextstepaction"]
-> [New AWS Connector in Microsoft Defender for Cloud](episode-one.md)
+> [Zero Trust and Defender for Cloud](episode-twenty-eight.md)
defender-for-cloud Quickstart Onboard Aws https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/quickstart-onboard-aws.md
The native cloud connector requires:
Auto provisioning is managed by AWS Systems Manager (SSM) using the SSM agent. Some Amazon Machine Images (AMIs) already have the SSM agent pre-installed. If you already have the SSM agent pre-installed, the AMIs are listed in [AMIs with SSM Agent preinstalled](https://docs.aws.amazon.com/systems-manager/latest/userguide/ssm-agent-technical-details.html#ami-preinstalled-agent). If your EC2 instances don't have the SSM Agent, you'll need to install it using either of the following relevant instructions from Amazon: - [Install SSM Agent for a hybrid environment (Windows)](https://docs.aws.amazon.com/systems-manager/latest/userguide/sysman-install-managed-win.html)
+ Ensure that your SSM agent has the managed policy [AmazonSSMManagedInstanceCore](https://docs.aws.amazon.com/aws-managed-policy/latest/reference/AmazonSSMManagedInstanceCore.html) that enables AWS Systems Manager service core functionality.
> [!NOTE] > To enable the Azure Arc auto-provisioning, you'll need **Owner** permission on the relevant Azure subscription.
The native cloud connector requires:
Auto provisioning is managed by AWS Systems Manager (SSM) using the SSM agent. Some Amazon Machine Images (AMIs) already have the SSM agent pre-installed. If that is the case, their AMIs are listed in [AMIs with SSM Agent preinstalled](https://docs.aws.amazon.com/systems-manager/latest/userguide/ssm-agent-technical-details.html#ami-preinstalled-agent). If your EC2 instances don't have the SSM Agent, you'll need to install it using either of the following relevant instructions from Amazon: - [Install SSM Agent for a hybrid environment (Windows)](https://docs.aws.amazon.com/systems-manager/latest/userguide/sysman-install-managed-win.html) - [Install SSM Agent for a hybrid environment (Linux)](https://docs.aws.amazon.com/systems-manager/latest/userguide/sysman-install-managed-linux.html)
+ Ensure that your SSM agent has the managed policy [AmazonSSMManagedInstanceCore](https://docs.aws.amazon.com/aws-managed-policy/latest/reference/AmazonSSMManagedInstanceCore.html) that enables AWS Systems Manager service core functionality.
+
> [!NOTE] > To enable the Azure Arc auto-provisioning, you'll need an **Owner** permission on the relevant Azure subscription.
Connecting your AWS account is part of the multicloud experience available in Mi
- [Security recommendations for AWS resources - a reference guide](recommendations-reference-aws.md). - [Connect your GCP projects to Microsoft Defender for Cloud](quickstart-onboard-gcp.md)-- [Troubleshoot your multicloud connectors](troubleshooting-guide.md#troubleshooting-the-native-multicloud-connector)
+- [Troubleshoot your multicloud connectors](troubleshooting-guide.md#troubleshooting-the-native-multicloud-connector)
defender-for-cloud Recommendations Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/recommendations-reference.md
This article lists the recommendations you might see in Microsoft Defender for C
shown in your environment depend on the resources you're protecting and your customized configuration.
-Defender for Cloud's recommendations are based on the [Microsoft cloud security benchmark](/security/benchmark/azure/introduction).
+Recommendations in Defender for Cloud are based on the [Microsoft cloud security benchmark](/security/benchmark/azure/introduction).
the Microsoft cloud security benchmark is the Microsoft-authored set of guidelines for security and compliance best practices based on common compliance frameworks. This widely respected benchmark builds on the controls from the [Center for Internet Security (CIS)](https://www.cisecurity.org/benchmark/azure/)
impact on your secure score.
[!INCLUDE [asc-recs-networking](../../includes/asc-recs-networking.md)]
+## API recommendations
+
+|Recommendation|Description & related policy|Severity|
+|-|-|-|
+|(Preview) Microsoft Defender for APIs should be enabled|Enable the Defender for APIs plan to discover and protect API resources against attacks and security misconfigurations. [Learn more](defender-for-apis-deploy.md)|High|
+(Preview) Azure API Management APIs should be onboarded to Defender for APIs. | Onboarding APIs to Defender for APIs requires compute and memory utilization on the Azure API Management service. Monitor performance of your Azure API Management service while onboarding APIs, and scale out your Azure API Management resources as needed.|High|
+(Preview) API endpoints that are unused should be disabled and removed from the Azure API Management service|As a security best practice, API endpoints that haven't received traffic for 30 days are considered unused, and should be removed from the Azure API Management service. Keeping unused API endpoints might pose a security risk. These might be APIs that should have been deprecated from the Azure API Management service, but have accidentally been left active. Such APIs typically do not receive the most up-to-date security coverage.|Low|
+(Preview) API endpoints in Azure API Management should be authenticated|API endpoints published within Azure API Management should enforce authentication to help minimize security risk. Authentication mechanisms are sometimes implemented incorrectly or are missing. This allows attackers to exploit implementation flaws and to access data. For APIs published in Azure API Management, this recommendation assesses the execution of authentication via the Subscription Keys, JWT, and Client Certificate configured within Azure API Management. If none of these authentication mechanisms are executed during the API call, the API will receive this recommendation.|High
+
+## API management recommendations
+
+|Recommendation|Description & related policy|Severity|
+|-|-|-|
+|(Preview) API Management subscriptions should not be scoped to all APIs|API Management subscriptions should be scoped to a product or an individual API instead of all APIs, which could result in excessive data exposure.|Medium|
+(Preview) API Management calls to API backends should not bypass certificate thumbprint or name validation| API Management should validate the backend server certificate for all API calls. Enable SSL certificate thumbprint and name validation to improve the API security.|Medium|
+(Preview) API Management direct management endpoint should not be enabled|The direct management REST API in Azure API Management bypasses Azure Resource Manager role-based access control, authorization, and throttling mechanisms, thus increasing the vulnerability of your service.|Low|
+(Preview) API Management APIs should use only encrypted protocols|APIs should be available only through encrypted protocols, like HTTPS or WSS. Avoid using unsecured protocols, such as HTTP or WS to ensure security of data in transit.|High
+(Preview) API Management secret named values should be stored in Azure Key Vault|Named values are a collection of name and value pairs in each API Management service. Secret values can be stored either as encrypted text in API Management (custom secrets) or by referencing secrets in Azure Key Vault. Reference secret named values from Azure Key Vault to improve security of API Management and secrets. Azure Key Vault supports granular access management and secret rotation policies.|Medium
+(Preview) API Management should disable public network access to the service configuration endpoints|To improve the security of API Management services, restrict connectivity to service configuration endpoints, like direct access management API, Git configuration management endpoint, or self-hosted gateways configuration endpoint.| Medium
+(Preview) API Management minimum API version should be set to 2019-12-01 or higher|To prevent service secrets from being shared with read-only users, the minimum API version should be set to 2019-12-01 or higher.|Medium
+(Preview) API Management calls to API backends should be authenticated|Calls from API Management to backends should use some form of authentication, whether via certificates or credentials. Does not apply to Service Fabric backends.|Medium
+++ ## Deprecated recommendations |Recommendation|Description & related policy|Severity|
defender-for-cloud Support Matrix Cloud Environment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/support-matrix-cloud-environment.md
+
+ Title: Microsoft Defender for Cloud support across cloud types
+description: Review Defender for Cloud features and plans supported across different clouds
+++ Last updated : 03/08/2023++
+# Defender for Cloud support for commercial/government clouds
+
+This article indicates which Defender for Cloud features are supported in Azure commercial and government clouds.
+
+## Cloud support
+
+In the support table, **NA** indicates that the feature is not available.
+
+**Feature/Plan** | **Details** | **Azure** | **Azure Government** | **Azure China**<br/><br/>**21Vianet**
+ | | | |
+**Foundational CSPM** | | | |
+[Continuous export](./continuous-export.md) | GA | GA | GA
+[Workflow automation](./workflow-automation.md) | GA | GA | GA
+[Recommendation exemption rules](./exempt-resource.md) | Public preview | NA | NA
+[Alert suppression rules](./alerts-suppression-rules.md) | GA | GA | GA
+[Alert email notifications](./configure-email-notifications.md) | GA | GA | GA
+[Agent/extension deployment](monitoring-components.md) | GA | GA | GA
+[Asset inventory](./asset-inventory.md) | GA | GA | GA
+[Azure Workbooks support](./custom-dashboards-azure-workbooks.md) | GA | GA | GA
+[Microsoft Defender for Cloud Apps integration](./other-threat-protections.md#display-recommendations-in-microsoft-defender-for-cloud-apps) | GA | GA | GA
+**[Defender CSPM](concept-cloud-security-posture-management.md)** | GA | NA | NA
+**[Defender for APIs](defender-for-apis-introduction.md)** | Public preview | NA | NA
+**[Defender for App Service](defender-for-app-service-introduction.md)** | GA | NA | NA
+**[Defender for Azure Cosmos DB](concept-defender-for-cosmos.md)** | Public preview | NA | NA
**[Defender for Azure SQL database servers](defender-for-sql-introduction.md)**<br/><br/> Partial GA in 21Vianet<br/> - A subset of alerts/vulnerability assessments is available.<br/>- Behavioral threat protection isn't available. | GA | GA | GA
+**[Defender for Containers](defender-for-containers-introduction.md)**| GA | GA | GA
[Azure Arc extension for Kubernetes clusters/servers/data services](defender-for-kubernetes-azure-arc.md) | Public preview | NA | NA
+Runtime visibility of vulnerabilities in container images | Public preview | NA | NA
+**[Defender for DNS](defender-for-dns-introduction.md)** | GA | GA | GA
+**[Defender for Key Vault](./defender-for-key-vault-introduction.md)** | GA | NA | NA
+[Defender for Kubernetes](./defender-for-kubernetes-introduction.md)<br/><br/> Defender for Kubernetes is deprecated and doesn't include new features. [Learn more](defender-for-kubernetes-introduction.md) | GA | GA | GA
+**[Defender for open-source relational databases](defender-for-databases-introduction.md)** | GA | NA | NA
+**[Defender for Resource Manager](./defender-for-resource-manager-introduction.md)** | GA | GA | GA
+**[Defender for Servers](plan-defender-for-servers.md)** | | | |
+[Just-in-time VM access](./just-in-time-access-usage.md) | GA | GA | GA
+[File integrity monitoring](./file-integrity-monitoring-overview.md) | GA | GA | GA
+[Adaptive application controls](./adaptive-application-controls.md) | GA | GA | GA
+[Adaptive network hardening](./adaptive-network-hardening.md) | GA | GA | NA
+[Docker host hardening](./harden-docker-hosts.md) | | GA | GA | GA
+[Integrated Qualys scanner](./deploy-vulnerability-assessment-vm.md) | GA | NA | NA
+[Compliance dashboard/reports](./regulatory-compliance-dashboard.md)<br/><br/> Compliance standards might differ depending on the cloud type.| GA | GA | GA
+[Defender for Endpoint integration](./integration-defender-for-endpoint.md) | | GA | GA | NA
+[Connect AWS account](./quickstart-onboard-aws.md) | GA | NA | NA
+[Connect GCP project](./quickstart-onboard-gcp.md) | GA | NA | NA
+**[Defender for Storage](./defender-for-storage-introduction.md)**<br/><br/> Some alerts in Defender for Storage are in public preview. | GA | GA | NA
+**[Defender for SQL servers on machines](./defender-for-sql-introduction.md)** | GA | GA | NA
+**[Microsoft Sentinel bi-directional alert synchronization](../sentinel/connect-azure-security-center.md)** | Public preview | NA | NA
+++
+## Next steps
+
+Start reading about [Defender for Cloud features](defender-for-cloud-introduction.md).
defender-for-cloud Support Matrix Defender For Cloud https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/support-matrix-defender-for-cloud.md
Defender for Cloud provides recommendations, security alerts, and vulnerability
\*\* Azure Active Directory (Azure AD) recommendations are available only for subscriptions with [enhanced security features enabled](enable-enhanced-security.md).
-## Features supported in different Azure cloud environments
-
-Microsoft Defender for Cloud is available in the following Azure cloud environments:
-
-| Feature/Service | Azure | Azure Government | Azure China 21Vianet |
-||-|--|--|
-| **Defender for Cloud free features** | | | |
-| - [Continuous export](./continuous-export.md) | GA | GA | GA |
-| - [Workflow automation](./workflow-automation.md) | GA | GA | GA |
-| - [Recommendation exemption rules](./exempt-resource.md) | Public Preview | Not Available | Not Available |
-| - [Alert suppression rules](./alerts-suppression-rules.md) | GA | GA | GA |
-| - [Email notifications for security alerts](./configure-email-notifications.md) | GA | GA | GA |
-| - [Deployment of agents and extensions](monitoring-components.md) | GA | GA | GA |
-| - [Asset inventory](./asset-inventory.md) | GA | GA | GA |
-| - [Azure Monitor Workbooks reports in Microsoft Defender for Cloud's workbooks gallery](./custom-dashboards-azure-workbooks.md) | GA | GA | GA |
-| - [Integration with Microsoft Defender for Cloud Apps](./other-threat-protections.md#display-recommendations-in-microsoft-defender-for-cloud-apps) | GA | GA | Not Available |
-| **Microsoft Defender plans and extensions** | | | |
-| - [Microsoft Defender for Servers](./defender-for-servers-introduction.md) | GA | GA | GA |
-| - [Microsoft Defender for App Service](./defender-for-app-service-introduction.md) | GA | Not Available | Not Available |
-| - [Microsoft Defender CSPM](./concept-cloud-security-posture-management.md) | GA | Not Available | Not Available |
-| - [Agentless discovery for Kubernetes](concept-agentless-containers.md) | Public Preview | Not Available | Not Available |
-| [Agentless vulnerability assessments for container images](defender-for-containers-vulnerability-assessment-azure.md), including registry scanning (\* Up to 20 unique images per billable resource) | Public Preview | Not Available | Not Available |
-| - [Microsoft Defender for DNS](./defender-for-dns-introduction.md) | GA | GA | GA |
-| - [Microsoft Defender for Kubernetes](./defender-for-kubernetes-introduction.md) <sup>[1](#footnote1)</sup> | GA | GA | GA |
-| - [Microsoft Defender for Containers](./defender-for-containers-introduction.md) <sup>[7](#footnote7)</sup> | GA | GA | GA |
-| - [Defender extension for Azure Arc-enabled Kubernetes clusters, servers or data services](./defender-for-kubernetes-azure-arc.md) <sup>[2](#footnote2)</sup> | Public Preview | Not Available | Not Available |
-| - [Microsoft Defender for Azure SQL database servers](./defender-for-sql-introduction.md) | GA | GA | GA <sup>[6](#footnote6)</sup> |
-| - [Microsoft Defender for SQL servers on machines](./defender-for-sql-introduction.md) | GA | GA | Not Available |
-| - [Microsoft Defender for open-source relational databases](./defender-for-databases-introduction.md) | GA | Not Available | Not Available |
-| - [Microsoft Defender for Key Vault](./defender-for-key-vault-introduction.md) | GA | Not Available | Not Available |
-| - [Microsoft Defender for Resource Manager](./defender-for-resource-manager-introduction.md) | GA | GA | GA |
-| - [Microsoft Defender for Storage](./defender-for-storage-introduction.md) <sup>[3](#footnote3)</sup> | GA | GA (Activity monitoring) | Not Available |
-| - [Microsoft Defender for Azure Cosmos DB](concept-defender-for-cosmos.md) | Public Preview | Not Available | Not Available |
-| - [Kubernetes workload protection](./kubernetes-workload-protections.md) | GA | GA | GA |
-| - [Bi-directional alert synchronization with Sentinel](../sentinel/connect-azure-security-center.md) | Public Preview | Not Available | Not Available |
-| **Microsoft Defender for Servers features** <sup>[4](#footnote4)</sup> | | | |
-| - [Just-in-time VM access](./just-in-time-access-usage.md) | GA | GA | GA |
-| - [File Integrity Monitoring](./file-integrity-monitoring-overview.md) | GA | GA | GA |
-| - [Adaptive application controls](./adaptive-application-controls.md) | GA | GA | GA |
-| - [Adaptive network hardening](./adaptive-network-hardening.md) | GA | GA | Not Available |
-| - [Docker host hardening](./harden-docker-hosts.md) | GA | GA | GA |
-| - [Integrated Qualys vulnerability scanner](./deploy-vulnerability-assessment-vm.md) | GA | Not Available | Not Available |
-| - [Regulatory compliance dashboard & reports](./regulatory-compliance-dashboard.md) <sup>[5](#footnote5)</sup> | GA | GA | GA |
-| - [Microsoft Defender for Endpoint deployment and integrated license](./integration-defender-for-endpoint.md) | GA | GA | Not Available |
-| - [Connect AWS account](./quickstart-onboard-aws.md) | GA | Not Available | Not Available |
-| - [Connect GCP project](./quickstart-onboard-gcp.md) | GA | Not Available | Not Available |
-
-<sup><a name="footnote1"></a>1</sup> Partially GA: Support for Azure Arc-enabled clusters is in public preview and not available on Azure Government.
-
-<sup><a name="footnote2"></a>2</sup> Requires Microsoft Defender for Kubernetes or Microsoft Defender for Containers.
-
-<sup><a name="footnote3"></a>3</sup> Partially GA: Some of the threat protection alerts from Microsoft Defender for Storage are in public preview.
-
-<sup><a name="footnote4"></a>4</sup> These features all require [Microsoft Defender for Servers](./defender-for-servers-introduction.md).
-
-<sup><a name="footnote5"></a>5</sup> There may be differences in the standards offered per cloud type.
-
-<sup><a name="footnote6"></a>6</sup> Partially GA: Subset of alerts and vulnerability assessment for SQL servers. Behavioral threat protections aren't available.
-
-<sup><a name="footnote7"></a>7</sup> Partially GA: Support for Arc-enabled Kubernetes clusters (and therefore AWS EKS too) is in public preview and not available on Azure Government. Run-time visibility of vulnerabilities in container images is also a preview feature.
+ ## Supported operating systems
defender-for-cloud Upcoming Changes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/upcoming-changes.md
If you're looking for the latest release notes, you'll find them in the [What's
|--|--| | [Deprecation of legacy compliance standards across cloud environments](#deprecation-of-legacy-compliance-standards-across-cloud-environments) | April 2023 | | [Multiple changes to identity recommendations](#multiple-changes-to-identity-recommendations) | May 2023 |
+| [Release of containers Vulnerability Assessment powered by Microsoft Defender Vulnerability Management (MDVM) in Defender CSPM](#release-of-containers-vulnerability-assessment-powered-by-microsoft-defender-vulnerability-management-mdvm-in-defender-cspm) | May 2023 |
+|[Renaming container recommendations powered by Qualys](#renaming-container-recommendations-powered-by-qualys) | May 2023 |
| [DevOps Resource Deduplication for Defender for DevOps](#devops-resource-deduplication-for-defender-for-devops) | June 2023 | ### Deprecation of legacy compliance standards across cloud environments
The following security recommendations will be released as GA and replace the V1
| Blocked accounts with owner permissions on Azure resources should be removed | 050ac097-3dda-4d24-ab6d-82568e7a50cf | | Blocked accounts with read and write permissions on Azure resources should be removed | 1ff0b4c9-ed56-4de6-be9c-d7ab39645926 |
+### Release of containers Vulnerability Assessment powered by Microsoft Defender Vulnerability Management (MDVM) in Defender CSPM
+
+**Estimated date for change: May 2023**
+
+We're announcing the release of Vulnerability Assessment for Linux images in Azure container registries powered by Microsoft Defender Vulnerability Management (MDVM) in Defender CSPM. This release includes daily scanning of images. Findings used in the Security Explorer and attack paths will rely on MDVM Vulnerability Assessment instead of the Qualys scanner.
+
+The existing recommendation "Container registry images should have vulnerability findings resolved" is replaced by a new recommendation powered by MDVM:
+
+|Recommendation | Description | Assessment Key|
+|--|--|--|
+| Container registry images should have vulnerability findings resolved (powered by Microsoft Defender Vulnerability Management)| Container image vulnerability assessment scans your registry for commonly known vulnerabilities (CVEs) and provides a detailed vulnerability report for each image. This recommendation provides visibility to vulnerable images currently running in your Kubernetes clusters. Remediating vulnerabilities in container images that are currently running is key to  improving your security posture, significantly reducing the attack surface for your containerized workloads. |dbd0cb49-b563-45e7-9724-889e799fa648 <br> is replaced by c0b7cfc6-3172-465a-b378-53c7ff2cc0d5
+
+The recommendation "Running container images should have vulnerability findings resolved" (assessment key 41503391-efa5-47ee-9282-4eff6131462c) is temporarily removed and will be replaced soon by a new recommendation powered by MDVM.
+
+Learn more about [Microsoft Defender Vulnerability Management (MDVM)](https://learn.microsoft.com/microsoft-365/security/defender-vulnerability-management/defender-vulnerability-management).
+
+### Renaming container recommendations powered by Qualys
+
+**Estimated date for change: May 2023**
+
+ The current container recommendations in Defender for Containers are renamed as follows:
+
+|Recommendation | Description | Assessment Key|
+|--|--|--|
+| Container registry images should have vulnerability findings resolved (powered by Qualys) | Container image vulnerability assessment scans your registry for security vulnerabilities and exposes detailed findings for each image. Resolving the vulnerabilities can greatly improve your containers' security posture and protect them from attacks. | dbd0cb49-b563-45e7-9724-889e799fa648 |
+| Running container images should have vulnerability findings resolved (powered by Qualys) | Container image vulnerability assessment scans container images running on your Kubernetes clusters for security vulnerabilities and exposes detailed findings for each image. Resolving the vulnerabilities can greatly improve your containers' security posture and protect them from attacks. | 41503391-efa5-47ee-9282-4eff6131462c |
++ ### DevOps Resource Deduplication for Defender for DevOps **Estimated date for change: June 2023**
dms Known Issues Azure Sql Migration Azure Data Studio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/known-issues-azure-sql-migration-azure-data-studio.md
WHERE STEP in (3,4,6);
- **Cause**: The network share where the database backups are stored is in the same machine as the self-hosted Integration Runtime (SHIR). -- **Recommendation**: The latest version of Integration Runtime (**5.28.8488**) prevents access to a network file share on a local host. Please ensure you run Integration Runtime on a different machine than the network share hosting. If hosting the self-hosted Integration Runtime and the network share on different machines is not possible with your current migration setup, you can use the option to opt-out using ```DisableLocalFolderPathValidation```.
+- **Recommendation**: The latest version of Integration Runtime (**5.28.8488**) prevents access to a network file share on a local host. Ensure you run Integration Runtime on a different machine than the network share hosting. If hosting the self-hosted Integration Runtime and the network share on different machines isn't possible with your current migration setup, you can use the option to opt out using ```DisableLocalFolderPathValidation```.
> [!NOTE] > For more information, see [Set up an existing self-hosted IR via local PowerShell](../data-factory/create-self-hosted-integration-runtime.md#set-up-an-existing-self-hosted-ir-via-local-powershell). Use the disabling option with discretion as this is less secure.
WHERE STEP in (3,4,6);
## Error code: Ext_RestoreSettingsError -- **Message**: Unable to read blobs in storage container, exception: The remote server returned an error: (403) Forbidden.;The remote server returned an error: (403) Forbidden
+- **Message**: Unable to read blobs in storage container, exception: The remote server returned an error: (403) Forbidden.; The remote server returned an error: (403) Forbidden
- **Cause**: The Azure SQL target is unable to connect to blob storage.
Migrating to SQL Server on Azure VMs by using the Azure SQL extension for Azure
[!INCLUDE [sql-vm-limitations](includes/sql-virtual-machines-limitations.md)]
+## Azure Data Studio limitations
+
+### Failed to start Sql Migration Service: Error: Request error:
+
+- **Message**: `Error at ClientRequest.<anonymous> (c:\Users\MyUser\.azuredatastudio\extensions\microsoft.sql-migration-1.4.2\dist\main.js:2:7448) at ClientRequest.emit (node:events:538:35) at TLSSocket.socketOnEnd (node:_http_client:466:9) at TLSSocket.emit (node:events:538:35) at endReadableNT (node:internal/streams/readable:1345:12) at process.processTicksAndRejections (node:internal/process/task_queues:83:21)`
+- **Cause**: This issue occurs when Azure Data Studio isn't able to download the MigrationService package from https://github.com/microsoft/sqltoolsservice/releases. The download failure can be due to a disconnected network or unresolved proxy settings.
+- **Recommendation**: To resolve this issue, download the package manually. Follow the mitigation steps outlined in this link: https://github.com/microsoft/azuredatastudio/issues/22558#issuecomment-1496307891
+ ## Next steps - For an overview and installation of the Azure SQL migration extension, see [Azure SQL migration extension for Azure Data Studio](/sql/azure-data-studio/extensions/azure-sql-migration-extension)
event-grid Auth0 Log Stream Blob Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/auth0-log-stream-blob-storage.md
This article shows you how to send Auth0 events to Azure Blob Storage via Azure
- [Get connection string to Azure Storage account](../storage/common/storage-account-keys-manage.md?toc=%2Fazure%2Fstorage%2Fblobs%2Ftoc.json&tabs=azure-portal#view-account-access-keys). Make sure you select the **Copy** button to copy connection string to the clipboard. ## Create an Azure function
-1. Create an Azure function by following instructions from the **Create a local project** section of [Quickstart: Create a JavaScript function in Azure using Visual Studio Code](../azure-functions/create-first-function-vs-code-node.md).
+1. Create an Azure function by following instructions from the **Create a local project** section of [Quickstart: Create a JavaScript function in Azure using Visual Studio Code](../azure-functions/create-first-function-vs-code-node.md?pivots=nodejs-model-v3).
1. Select **Azure Event Grid trigger** for the function template instead of **HTTP trigger** as mentioned in the quickstart. 1. Continue to follow the steps, but use the following **index.js** and **function.json** files.
firewall Integrate With Nat Gateway https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/integrate-with-nat-gateway.md
Title: Scale SNAT ports with Azure Virtual Network NAT
+ Title: Scale SNAT ports with Azure NAT Gateway
description: You can integrate Azure Firewall with a NAT gateway to increase SNAT ports.
-# Scale SNAT ports with Azure Virtual Network NAT
+# Scale SNAT ports with Azure NAT Gateway
Azure Firewall provides 2,496 SNAT ports per public IP address configured per backend virtual machine scale set instance (Minimum of 2 instances), and you can associate up to [250 public IP addresses](./deploy-multi-public-ip-powershell.md). Depending on your architecture and traffic patterns, you might need more than the 1,248,000 available SNAT ports with this configuration. For example, when you use it to protect large [Azure Virtual Desktop deployments](./protect-azure-virtual-desktop.md) that integrate with Microsoft 365 Apps. Another challenge with using a large number of public IP addresses is when there are downstream IP address filtering requirements. Azure Firewall randomly selects the source public IP address to use for a connection, so you need to allow all public IP addresses associated with it. Even if you use [Public IP address prefixes](../virtual-network/ip-services/public-ip-address-prefix.md) and you need to associate 250 public IP addresses to meet your outbound SNAT port requirements, you still need to create and allow 16 public IP address prefixes.
-A better option to scale outbound SNAT ports is to use an [Azure Virtual Network NAT](../virtual-network/nat-gateway/nat-overview.md) as a NAT gateway. It provides 64,512 SNAT ports per public IP address and supports up to 16 public IP addresses, effectively providing up to 1,032,192 outbound SNAT ports.
+A better option to scale outbound SNAT ports is to use an [Azure NAT Gateway](../virtual-network/nat-gateway/nat-overview.md). It provides 64,512 SNAT ports per public IP address and supports up to 16 public IP addresses, effectively providing up to 1,032,192 outbound SNAT ports.
+When a NAT gateway resource is associated with an Azure Firewall subnet, all outbound Internet traffic automatically uses the public IP address of the NAT gateway. There's no need to configure [User Defined Routes](../virtual-network/tutorial-create-route-table-portal.md). Response traffic uses the Azure Firewall public IP address to maintain flow symmetry. If there are multiple IP addresses associated with the NAT gateway, the IP address is randomly selected. It isn't possible to specify what address to use.
There's no double NAT with this architecture. Azure Firewall instances send th
> [!NOTE] > Deploying NAT gateway with a [zone redundant firewall](deploy-availability-zone-powershell.md) is not recommended deployment option, as the NAT gateway does not support zonal deployment at this time. In order to use NAT gateway with Azure Firewall, a zonal Firewall deployment is required. >
-> In addition, Azure Virtual Network NAT integration is not currently supported in secured virtual hub network architectures. You must deploy using a hub virtual network architecture. For detailed guidance on integrating NAT gateway with Azure Firewall in a hub and spoke network architecture refer to the [NAT gateway and Azure Firewall integration tutorial](../virtual-network/nat-gateway/tutorial-hub-spoke-nat-firewall.md). For more information about Azure Firewall architecture options, see [What are the Azure Firewall Manager architecture options?](../firewall-manager/vhubs-and-vnets.md).
+> In addition, Azure NAT Gateway integration is not currently supported in secured virtual hub network architectures. You must deploy using a hub virtual network architecture. For detailed guidance on integrating NAT gateway with Azure Firewall in a hub and spoke network architecture, refer to the [NAT gateway and Azure Firewall integration tutorial](../virtual-network/nat-gateway/tutorial-hub-spoke-nat-firewall.md). For more information about Azure Firewall architecture options, see [What are the Azure Firewall Manager architecture options?](../firewall-manager/vhubs-and-vnets.md).
## Associate a NAT gateway with an Azure Firewall subnet - Azure PowerShell
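As an illustration of the step this heading introduces, a minimal Azure PowerShell sketch follows. It assumes an existing virtual network containing an **AzureFirewallSubnet** and an existing NAT gateway; all resource names are placeholders.

```powershell
# Placeholder names; substitute your own resource group, virtual network, and NAT gateway.
$rgName = "fw-rg"
$vnet   = Get-AzVirtualNetwork -Name "fw-vnet" -ResourceGroupName $rgName
$natGw  = Get-AzNatGateway -Name "fw-natgw" -ResourceGroupName $rgName

# Attach the NAT gateway to the AzureFirewallSubnet, then save the change to the virtual network.
$subnet = Get-AzVirtualNetworkSubnetConfig -Name "AzureFirewallSubnet" -VirtualNetwork $vnet
$subnet.NatGateway = $natGw
$vnet | Set-AzVirtualNetwork
```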
governance Manage Assignments Ps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/blueprints/how-to/manage-assignments-ps.md
The Azure Blueprints module requires the following software:
- Azure PowerShell 1.5.0 or higher. If it isn't yet installed, follow [these instructions](/powershell/azure/install-az-ps). - PowerShellGet 2.0.1 or higher. If it isn't installed or updated, follow
- [these instructions](/powershell/scripting/gallery/installing-psget).
+ [these instructions](/powershell/gallery/powershellget/install-powershellget).
### Install the module
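A minimal sketch of the install step, assuming the prerequisites above are met and the PowerShell Gallery is your registered repository:

```powershell
# Install the Azure Blueprints module for the current user from the PowerShell Gallery.
Install-Module -Name Az.Blueprint -Scope CurrentUser -Repository PSGallery

# Verify the module is available and list its cmdlets.
Get-Command -Module Az.Blueprint
```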
governance Migrate From Azure Automation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/machine-configuration/migrate-from-azure-automation.md
configuration stored in Azure Automation by making a REST request to the service
[01]: ./overview.md [02]: /powershell/dsc/getting-started/wingettingstarted [03]: /powershell/dsc/getting-started/lnxgettingstarted
-[04]: /powershell/scripting/gallery/how-to/working-with-local-psrepositories
+[04]: /powershell/gallery/how-to/working-with-local-psrepositories
[05]: ./how-to-create-package.md [06]: ./how-to-create-package.md#author-a-configuration [07]: /powershell/gallery/how-to/working-with-local-psrepositories
governance First Query Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/resource-graph/first-query-powershell.md
The Azure Resource Graph module requires the following software:
[these instructions](/powershell/azure/install-az-ps). - PowerShellGet 2.0.1 or higher. If it isn't installed or updated, follow
- [these instructions](/powershell/scripting/gallery/installing-psget).
+ [these instructions](/powershell/gallery/powershellget/install-powershellget).
### Install the module
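As a sketch, installing the module and running a first query could look like the following; the query itself is only an illustrative example.

```powershell
# Install the Azure Resource Graph module for the current user.
Install-Module -Name Az.ResourceGraph -Scope CurrentUser

# Illustrative first query: count resources in the current subscriptions by type.
Search-AzGraph -Query "Resources | summarize count() by type"
```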
governance Paginate Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/resource-graph/paginate-powershell.md
The Azure Resource Graph module requires the following software:
[these instructions](/powershell/azure/install-az-ps). - PowerShellGet 2.0.1 or higher. If it isn't installed or updated, follow
- [these instructions](/powershell/scripting/gallery/installing-psget).
+ [these instructions](/powershell/gallery/powershellget/install-powershellget).
### Install the module
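For pagination specifically, a hedged sketch using the module's `-First` and `-Skip` parameters follows; the query text and page size are illustrative placeholders.

```powershell
# Page through results 100 rows at a time; an explicit sort keeps paging stable.
$query = "Resources | project name, type | order by name asc"
$page1 = Search-AzGraph -Query $query -First 100
$page2 = Search-AzGraph -Query $query -First 100 -Skip 100
```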
healthcare-apis Import Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/import-data.md
Title: Executing the import by invoking $import operation on FHIR service in Azure Health Data Services description: This article describes how to import FHIR data using $import.-+ Last updated 06/06/2022-+ # Bulk-import FHIR data
healthcare-apis Deploy New Choose https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/deploy-new-choose.md
Title: Choose a deployment method for the MedTech service - Azure Health Data Services
-description: In this article, you'll learn about the different methods for deploying the MedTech service.
+description: In this article, learn about the different methods for deploying the MedTech service.
Previously updated : 03/10/2023 Last updated : 04/20/2023
> [!NOTE] > [Fast Healthcare Interoperability Resources (FHIR&#174;)](https://www.hl7.org/fhir/) is an open healthcare specification.
-The MedTech service provides multiple methods for deployment into Azure. Each deployment method has different advantages that will allow you to customize your deployment to suit your needs and use cases.
+The MedTech service provides multiple methods for deployment into Azure. Each deployment method has different advantages that allow you to customize your deployment to suit your needs and use cases.
-In this quickstart, you'll learn about these deployment methods:
+In this quickstart, learn about these deployment methods:
-- Azure Resource Manager template (ARM template) including an Azure Iot Hub using the **Deploy to Azure** button. -- ARM template using the **Deploy to Azure** button.-- ARM template using Azure PowerShell or the Azure CLI.-- Manually in the Azure portal.
+* Azure Resource Manager template (ARM template) including an Azure IoT Hub using the **Deploy to Azure** button.
+* ARM template using the **Deploy to Azure** button.
+* ARM template using Azure PowerShell or the Azure CLI.
+* Manually in the Azure portal.
+
+## Deployment overview
+
+The following diagram outlines the basic steps of the MedTech service deployment. These basic steps may help you analyze the deployment options and determine which deployment method is best for you.
+ ## ARM template including an Azure IoT Hub using the Deploy to Azure button
To learn more about deploying the MedTech service including an Azure IoT Hub usi
## ARM template using the Deploy to Azure button
-Using an ARM template with the **Deploy to Azure** button is an easy and fast deployment method because it automates the deployment, most configuration steps, and uses the Azure portal. The deployed MedTech service will still require conforming and valid device and FHIR destination mappings to be fully functional.
+Using an ARM template with the **Deploy to Azure** button is an easy and fast deployment method because it automates the deployment, most configuration steps, and uses the Azure portal. The deployed MedTech service requires conforming and valid device and FHIR destination mappings to be fully functional.
[![Deploy to Azure](https://aka.ms/deploytoazurebutton)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fquickstarts%2Fmicrosoft.healthcareapis%2Fworkspaces%2Fiotconnectors%2Fazuredeploy.json).
To learn more about deploying the MedTech service using an ARM template and the
## ARM template using Azure PowerShell or the Azure CLI
-Using an ARM template with Azure PowerShell or the Azure CLI is a more advanced deployment method. This deployment method can be useful for adding automation and repeatability so that you can scale and customize your deployments. The deployed MedTech service will still require conforming and valid device and FHIR destination mappings to be fully functional.
+Using an ARM template with Azure PowerShell or the Azure CLI is a more advanced deployment method. This deployment method can be useful for adding automation and repeatability so that you can scale and customize your deployments. The deployed MedTech service requires conforming and valid device and FHIR destination mappings to be fully functional.
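For illustration only, starting that deployment from Azure PowerShell might look like the following sketch. The resource group name is a placeholder, the template URI is the quickstart template referenced by the **Deploy to Azure** button above, and you're prompted for any required template parameters.

```powershell
# Deploy the MedTech service quickstart ARM template into an existing resource group (placeholder name).
New-AzResourceGroupDeployment `
  -ResourceGroupName "medtech-rg" `
  -TemplateUri "https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/quickstarts/microsoft.healthcareapis/workspaces/iotconnectors/azuredeploy.json"
```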
To learn more about deploying the MedTech service using an ARM template and Azure PowerShell or the Azure CLI, see [Deploy the MedTech service using an Azure Resource Manager template and Azure PowerShell or the Azure CLI](deploy-new-powershell-cli.md). ## Manually in the Azure portal
-Using the Azure portal manual deployment will allow you to see the details of each deployment step. The manual deployment has many steps, but it provides valuable technical information that may be useful for customizing and troubleshooting your MedTech service.
+Using the Azure portal manual deployment allows you to see the details of each deployment step. The manual deployment has many steps, but it provides valuable technical information that may be useful for customizing and troubleshooting your MedTech service.
To learn more about deploying the MedTech service manually using the Azure portal, see [Deploy the MedTech service manually using the Azure portal](deploy-new-manual.md).
-## Deployment architecture overview
-
-The following diagram outlines the basic steps of the MedTech service deployment and shows how these steps fit together with its data processing procedures. These basic steps may help you analyze the deployment options and determine which deployment method is best for you.
-- > [!IMPORTANT] > If you're going to allow access from multiple services to the device message event hub, it is highly recommended that each service has its own event hub consumer group. >
healthcare-apis Deploy New Manual https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/deploy-new-manual.md
Previously updated : 03/10/2022 Last updated : 04/19/2022
The explanation of the MedTech service manual deployment using the Azure portal
- Part 2: Configuration (see [Configure for manual deployment](deploy-new-config.md)) - Part 3: Deployment and Post Deployment (see [Manual deployment and post-deployment](deploy-new-deploy.md))
-If you need a diagram with information on the MedTech service deployment, there's an architecture overview at [Choose a deployment method](deploy-new-choose.md#deployment-architecture-overview). This diagram shows the data flow steps of deployment and how MedTech service processes data into a FHIR Observation.
+If you need a diagram with information on the MedTech service deployment, there's an overview at [Choose a deployment method](deploy-new-choose.md#deployment-overview). This diagram shows the steps of deployment and how the MedTech service processes device data into FHIR Observations.
## Part 1: Prerequisites
healthcare-apis Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/get-started.md
Previously updated : 04/14/2023 Last updated : 04/21/2023
> [!NOTE] > [Fast Healthcare Interoperability Resources (FHIR&#174;)](https://www.hl7.org/fhir/) is an open healthcare specification.
-This article will show you how to get started with the MedTech service in the [Azure Health Data Services](../healthcare-apis-overview.md). There are six steps you need to follow to be able to deploy the MedTech service.
+This article and diagram outline the basic steps to get started with the MedTech service in the [Azure Health Data Services](../healthcare-apis-overview.md). These basic steps may help you analyze the MedTech service deployment options and determine which deployment method is best for you.
-The following diagram outlines the basic architectural path that enables the MedTech service to receive data from a device and send it to the FHIR service. This diagram shows how the six-step implementation process is divided into three key deployment stages: deployment, post-deployment, and data processing.
+As a prerequisite, you need an Azure subscription and must be granted the proper permissions to deploy Azure resource groups and resources. You can follow all the steps, or skip some if you have an existing environment. You can also combine the steps and complete them by using Azure PowerShell, the Azure CLI, or REST API scripts.
-Follow these six steps to set up and start using the MedTech service.
+> [!TIP]
+> See the MedTech service article, [Quickstart: Choose a deployment method for the MedTech service](deploy-new-choose.md), for a description of the different deployment methods that can help to simplify and automate the deployment of the MedTech service.
-## Step 1: Prerequisites for deployment
+## Deploy resources
-In order to begin deployment, you need to determine if you have: an Azure subscription and correct Azure role-based access control (Azure RBAC) role assignments. If you already have the appropriate subscription and roles, you can skip this step.
+After you obtain the required subscription prerequisites, the first step is to create and deploy the MedTech service prerequisite resources:
-- If you don't have an Azure subscription, see [Subscription decision guide](/azure/cloud-adoption-framework/decision-guides/subscriptions/).
+* Azure resource group.
+* Azure Event Hubs namespace and event hub.
+* Azure Health Data Services workspace.
+* Azure Health Data Services FHIR service.
-- You must have the appropriate RBAC roles for the subscription resources you want to use. The roles required for a user to complete the provisioning would be Contributor AND User Access Administrator OR Owner. The Contributor role allows the user to provision resources, and the User Access Administrator role allows the user to grant access so resources can send data between them. The Owner role can perform both. For more information, see [Azure role-based access control (RBAC)](/azure/cloud-adoption-framework/ready/considerations/roles).
+Once the prerequisite resources are available, deploy:
+
+* Azure Health Data Services MedTech service.
-## Step 2: Provision services for deployment
+### Deploy a resource group
-After you obtain the required prerequisites, the next phase of deployment is to create a workspace and provision instances of the Event Hubs service, FHIR service, and MedTech service. You must also give the Event Hubs permission to read data from your device and give the MedTech service permission to read and write to the FHIR service. There are four parts of this provisioning process.
+Deploy a [resource group](../../azure-resource-manager/management/manage-resource-groups-portal.md) to contain the prerequisite resources and the MedTech service.
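As a minimal sketch with placeholder values, the resource group can be created with Azure PowerShell:

```powershell
# Placeholder name and region; choose a region that supports Azure Health Data Services.
New-AzResourceGroup -Name "medtech-rg" -Location "eastus"
```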
-### Create a resource group and workspace
+### Deploy an Event Hubs namespace and event hub
-You must first create a resource group to contain the deployed instances of a workspace, Event Hubs service, FHIR service, and MedTech service. A [workspace](../workspace-overview.md) is required as a container for the Azure Health Data Services. After you create a workspace from the [Azure portal](../healthcare-apis-quickstart.md), a FHIR service and MedTech service can be deployed to the workspace.
+Deploy an Event Hubs namespace into the resource group. Event Hubs namespaces are logical containers for event hubs. Once the namespace is deployed, you can deploy an event hub, which the MedTech service reads from. For information about deploying Event Hubs namespaces and event hubs, see [Quickstart: Create an event hub using Azure portal](../../event-hubs/event-hubs-create.md).
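A hedged Azure PowerShell sketch of this step follows; the namespace and event hub names are placeholders (the namespace name must be globally unique).

```powershell
# Create an Event Hubs namespace and an event hub that the MedTech service reads device messages from.
New-AzEventHubNamespace -ResourceGroupName "medtech-rg" -Name "medtech-ehns-example" -Location "eastus" -SkuName "Standard"
New-AzEventHub -ResourceGroupName "medtech-rg" -NamespaceName "medtech-ehns-example" -Name "devicedata"
```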
-> [!NOTE]
-> There are limits to the number of workspaces and the number of MedTech service instances you can create in each Azure subscription. For more information, see [Frequently asked questions about the MedTech service](frequently-asked-questions.md).
-
-### Provision an Event Hubs instance to a namespace
-
-In order to provision an Event Hubs service, an Event Hubs namespace must first be provisioned, because Event Hubs namespaces are logical containers for event hubs. Namespace must be associated with a resource. The event hub and namespace need to be provisioned in the same Azure subscription. For more information, see [Event Hubs](../../event-hubs/event-hubs-create.md).
-
-Once an event hub is provisioned, you must give permission to the event hub to read data from the device. Then, the MedTech service can retrieve data from the event hub using a [system-assigned managed identity](../../active-directory/managed-identities-azure-resources/overview.md). This system-assigned managed identity is assigned the **Azure Event Hubs Data Receiver** role. For more information on how to assign access to the MedTech service from an Event Hubs service instance, see [Granting access to the device message event hub](deploy-iot-connector-in-azure.md#granting-access-to-the-device-message-event-hub).
-
-### Provision a FHIR service instance to the same workspace
-
-You must provision a [FHIR service](../fhir/fhir-portal-quickstart.md) instance in your workspace. The MedTech service persists the data to FHIR service store using the system-managed identity. See details on how to assign the role to the MedTech service from the [FHIR service](deploy-iot-connector-in-azure.md#granting-access-to-the-fhir-service).
-
-Once the FHIR service is provisioned, you must give the MedTech service permission to read and write to FHIR service. This permission enables the data to be persisted in the FHIR service store using system-assigned managed identity. See details on how to assign the **FHIR Data Writer** role to the MedTech service from the [FHIR service](deploy-iot-connector-in-azure.md#granting-access-to-the-fhir-service).
-
-By design, the MedTech service retrieves data from the specified event hub using the system-assigned managed identity. For more information on how to assign the role to the MedTech service from [Event Hubs](deploy-iot-connector-in-azure.md#granting-access-to-the-device-message-event-hub).
-
-### Provision a MedTech service instance in the workspace
-
-You must provision a MedTech service instance from the [Azure portal](deploy-iot-connector-in-azure.md) in your workspace. You can make the provisioning process easier and more efficient by automating everything with Azure PowerShell, Azure CLI, or Azure REST API. You can find automation scripts at the [Azure Health Data Services samples](https://github.com/microsoft/healthcare-apis-samples/tree/main/src/scripts) website.
-
-The MedTech service persists the data to the FHIR store using the system-managed identity. See details on how to assign the role to the MedTech service from the [FHIR service](deploy-iot-connector-in-azure.md#granting-access-to-the-fhir-service).
-
-## Step 3: Configure MedTech for deployment
-
-After you've fulfilled the prerequisites and provisioned your services, the next phase of deployment is to configure the MedTech services to ingest data, set up device mappings, and set up destination mappings. These configuration settings will ensure that the data can be translated from your device to Observations in the FHIR service. There are four parts in this configuration process.
-
-### Configuring the MedTech service to ingest data
-
-The MedTech service must be configured to ingest data it will receive from an event hub. First you must begin the official deployment process at the Azure portal. For more information about deploying the MedTech service using the Azure portal, see [Overview of how to manually deploy the MedTech service using the Azure portal](deploy-new-manual.md) and [Prerequisites for manually deploying the MedTech service using the Azure portal](deploy-new-manual.md#part-1-prerequisites).
-
-Once you have starting using the portal and added the MedTech service to your workspace, you must then configure the MedTech service to ingest data from an event hub. For more information about configuring the MedTech service to ingest data, see [Configure the MedTech service to ingest data](deploy-new-config.md).
-
-### Configuring the device mapping
-
-You must configure the MedTech service to map it to the device you want to receive data from. Each device has unique settings that the MedTech service must use. For more information on how to use the device mapping, see [How to use device mappings](how-to-configure-device-mappings.md).
--- Azure Health Data Services provides an open source tool you can use called [IoMT Connector Data Mapper](https://github.com/microsoft/iomt-fhir/tree/main/tools/data-mapper). The IoMT Connector Data Mapper will help you map your device's data structure to a form that the MedTech service can use. For more information on device content mapping, see [Device Content Mapping](https://github.com/microsoft/iomt-fhir/blob/main/docs/Configuration.md#device-content-mapping). --- When you're deploying the MedTech service, you must set specific device mapping properties. For more information on device mapping properties, see [Configure the device mapping properties](deploy-new-config.md).-
-### Configuring the FHIR destination mapping
-
-Once your device's data is properly mapped to your device's data format, you must then map it to an Observation in the FHIR service. For an overview of the FHIR destination mapping, see [How to use the FHIR destination mappings](how-to-configure-fhir-mappings.md).
-
-For step-by-step destination property mapping, see [Configure destination properties](deploy-new-config.md).
-
-### Create and deploy the MedTech service
-
-If you've completed the prerequisites, provisioning, and configuration, you're now ready to deploy the MedTech service. Create and deploy your MedTech service by following the procedures at [Create your MedTech service](deploy-new-deploy.md).
+### Deploy a workspace
-## Step 4: Connect to required services (post deployment)
+ Deploy a [workspace](../workspace-overview.md). After you create a workspace using the [Azure portal](../healthcare-apis-quickstart.md), a FHIR service and MedTech service can be deployed from the workspace.
-When you complete the final [deployment procedure](deploy-new-deploy.md) and don't get any errors, you must link the MedTech service to an Event Hubs and the FHIR service. This will enable a connection from the MedTech service to an Event Hubs instance and the FHIR service, so that data can flow smoothly from device to FHIR Observation. In order to do this, the Event Hubs instance for device message flow must be granted access via role assignment, so the MedTech service can receive Event Hubs data. You must also grant access to The FHIR service via role assignments in order for MedTech to receive the data. There are two parts of the process to connect to required services.
+### Deploy a FHIR service
-For more information about granting access via role assignments, see [Granting the MedTech service access to the device message event hub and FHIR service](deploy-new-deploy.md#manual-post-deployment-requirements).
+Deploy a [FHIR service](../fhir/fhir-portal-quickstart.md) into your resource group using your workspace. The MedTech service persists transformed device data into the FHIR service.
-### Granting access to the device message event hub
+### Deploy a MedTech service
-The Event Hubs instance for device message event hub must be granted access using managed identity in order for the MedTech service to receive data sent to the event hub from a device. The step-by-step procedure for doing this is at [Granting access to the device message event hub](deploy-iot-connector-in-azure.md#granting-access-to-the-device-message-event-hub).
+If you have successfully deployed the prerequisite resources, you're now ready to deploy a [MedTech service](deploy-new-manual.md) using your workspace.
-For more information about authorizing access to Event Hubs resources, see [Authorize access with Azure Active Directory](../../event-hubs/authorize-access-azure-active-directory.md).
-
-For more information about application roles, see [Authentication and Authorization for Azure Health Data Services](../authentication-authorization.md).
-
-### Granting access to FHIR service
-
-You must also grant access via role assignments to the FHIR service. This will enable FHIR service to receive data from the MedTech service by granting access using managed identity. The step-by-step procedure for doing this is at [Granting access to the FHIR service](deploy-iot-connector-in-azure.md#granting-access-to-the-fhir-service).
-
-For more information about assigning roles to the FHIR services, see [Configure Azure RBAC role for Azure Health Data Services](../configure-azure-rbac.md).
-
-For more information about application roles, see [Authentication and Authorization for Azure Health Data Services](../authentication-authorization.md).
-
-## Step 5: Send the device data for processing
-
-When the MedTech service is deployed and connected to the Event Hubs and FHIR services, it's ready to process device data and transform it into FHIR Observations. There are three parts of the sending process.
-
-### Device data sent to Event Hubs
-
-The device data is sent to an Event Hubs instance so that it can wait until the MedTech service is ready to receive it. The device data transfer needs to be asynchronous because it's sent over the Internet and delivery times can't be precisely measured. Normally the data won't sit on an event hub longer than 24 hours.
-
-For more information about Event Hubs, see [Event Hubs](../../event-hubs/event-hubs-about.md).
-
-For more information on Event Hubs data retention, see [Event Hubs quotas](../../event-hubs/event-hubs-quotas.md)
-
-### Device data sent from Event Hubs to the MedTech service
-
-MedTech requests the device data from the Event Hubs instance and the device data is sent from the event hub to the MedTech service. This procedure is called ingestion.
-
-### The MedTech service processes the device data
-
-The MedTech service processes the device data in five steps:
--- Ingest-- Normalize-- Group-- Transform-- Persist-
-If the processing was successful and you didn't get any error messages, your device data is now a FHIR service [Observation](http://hl7.org/fhir/observation.html) resource.
-
-For more information on the MedTech service device data transformation, see [Overview of the MedTech service device data processing stages](overview-of-device-data-processing-stages.md).
-
-## Step 6: Verify the processed device data
+## Next steps
-You can verify that the device data was processed correctly by checking to see if there's now a new Observation resource in the FHIR service. If the device data isn't mapped or if the mapping isn't authored properly, the device data will be skipped. If there are any problems, check the [device mapping](overview-of-device-mapping.md) or the [FHIR destination mapping](overview-of-fhir-destination-mapping.md).
+This article described the basic steps needed to get started using the MedTech service.
-### Metrics
+To learn about methods of deploying the MedTech service, see
-You can verify that the device data is correctly persisted in the FHIR service by using the [MedTech service metrics](how-to-configure-metrics.md) in the Azure portal.
+> [!div class="nextstepaction"]
+> [Choose a deployment method for the MedTech service](deploy-new-choose.md)
-## Next steps
+For an overview of the MedTech service device mapping, see
-This article only described the basic steps needed to get started using the MedTech service.
+> [!div class="nextstepaction"]
+> [Overview of the MedTech service device mapping](overview-of-device-mapping.md)
-To learn about other methods of deploying the MedTech service, see
+For an overview of the MedTech service FHIR destination mapping, see
> [!div class="nextstepaction"]
-> [Choose a deployment method for the MedTech service](deploy-new-choose.md)
+> [Overview of the MedTech service FHIR destination mapping](overview-of-fhir-destination-mapping.md)
FHIR&#174; is a registered trademark of Health Level Seven International, registered in the U.S. Trademark Office and is used with their permission.
healthcare-apis Git Projects https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/git-projects.md
Previously updated : 04/14/2023 Last updated : 04/20/2023 # Open-source projects
Check out our open-source projects on GitHub that provide source code and instru
### FHIR integration
-* [microsoft/iomt-fhir](https://github.com/microsoft/iomt-fhir): Open-source version of the Azure Health Data Services MedTech service managed service. Can be used with any FHIR service that supports [FHIR R4&#174;](https://www.hl7.org/implement/standards/product_brief.cfm?product_id=491)
+* [microsoft/iomt-fhir](https://github.com/microsoft/iomt-fhir): Open-source version of the Azure Health Data Services MedTech service managed service. Can be used with any FHIR service that supports [FHIR](https://www.hl7.org/implement/standards/product_brief.cfm?product_id=491)
### Wearables integration
healthcare-apis Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/overview.md
Previously updated : 04/14/2023 Last updated : 04/20/2023
The MedTech service can be customized and configured by using [device](overview-
Useful options could include: -- Link devices and consumers together for enhanced insights, trend captures, interoperability between systems, and proactive and remote monitoring.
+* Link devices and consumers together for enhanced insights, trend captures, interoperability between systems, and proactive and remote monitoring.
-- Update or create FHIR Observations according to existing or new mapping template types.
+* Update or create FHIR Observations according to existing or new mapping template types.
-- Choose data terms that work best for your organization and provide consistency in device data ingestion.
+* Choose data terms that work best for your organization and provide consistency in device data ingestion.
-- Customize, edit, test, and troubleshoot MedTech service device and FHIR destination mappings with the [Mapping debugger](how-to-use-mapping-debugger.md) tool.
+* Customize, edit, test, and troubleshoot MedTech service device and FHIR destination mappings with the [Mapping debugger](how-to-use-mapping-debugger.md) tool.
### Scalable
The MedTech service enables you to easily modify and extend the capabilities of
The MedTech service may also be integrated for ingesting device data from these wearables using our [open-source projects](git-projects.md): -- Fitbit&#174;
+* Fitbit&#174;
-- Apple&#174;
+* Apple&#174;
-- Google&#174;
+* Google&#174;
The following Microsoft solutions can use MedTech service for extra functionality: -- [**Microsoft Azure IoT Hub**](../../iot-hub/iot-concepts-and-iot-hub.md) - enhances workflow and ease of use.
+* [**Microsoft Azure IoT Hub**](../../iot-hub/iot-concepts-and-iot-hub.md) - enhances workflow and ease of use.
-- [**Azure Machine Learning Service**](concepts-machine-learning.md) - helps build, deploy, and manage models, integrate tools, and increase open-source operability.
+* [**Azure Machine Learning Service**](concepts-machine-learning.md) - helps build, deploy, and manage models, integrate tools, and increase open-source operability.
-- [**Microsoft Power BI**](concepts-power-bi.md) - enables data visualization features.
+* [**Microsoft Power BI**](concepts-power-bi.md) - enables data visualization features.
-- [**Microsoft Teams**](concepts-teams.md) - facilitates virtual visits.
+* [**Microsoft Teams**](concepts-teams.md) - facilitates virtual visits.
## Next steps
iot-hub Iot Hub Create Use Iot Toolkit https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-create-use-iot-toolkit.md
Title: Create an Azure IoT Hub using Azure IoT Tools for Visual Studio Code
-description: Learn how to use the Azure IoT tools for Visual Studio Code to create an Azure IoT hub in a resource group.
+ Title: Create an Azure IoT hub using the Azure IoT Hub extension for Visual Studio Code
+description: Learn how to use the Azure IoT Hub extension for Visual Studio Code to create an Azure IoT hub in a resource group.
Last updated 01/04/2019
-# Create an IoT hub using the Azure IoT Tools for Visual Studio Code
+# Create an IoT hub using the Azure IoT Hub extension for Visual Studio Code
[!INCLUDE [iot-hub-resource-manager-selector](../../includes/iot-hub-resource-manager-selector.md)]
-This article shows you how to use the [Azure IoT Tools for Visual Studio Code](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-toolkit) to create an Azure IoT hub. You can create one without an existing IoT project or create one from an existing IoT project.
+This article shows you how to use the [Azure IoT Hub extension for Visual Studio Code](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-toolkit) to create an Azure IoT hub.
[!INCLUDE [updated-for-az](../../includes/updated-for-az.md)]
This article shows you how to use the [Azure IoT Tools for Visual Studio Code](h
- [Visual Studio Code](https://code.visualstudio.com/) -- [Azure IoT Tools](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-toolkit) installed for Visual Studio Code
+- [Azure IoT Hub extension](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-toolkit) installed for Visual Studio Code
+
+- An Azure subscription: [create a free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin
- An Azure resource group: [create a resource group](../azure-resource-manager/management/manage-resource-groups-portal.md#create-resource-groups) in the Azure portal
-## Create an IoT hub without an IoT Project
+## Create an IoT hub
-The following steps show how to create an IoT Hub without an IoT Project in Visual Studio Code (VS Code).
+The following steps show how to create an IoT hub in Visual Studio Code (VS Code):
1. In VS Code, open the **Explorer** view.
The following steps show how to create an IoT Hub without an IoT Project in Visu
:::image type="content" source="./media/iot-hub-create-use-iot-toolkit/create-iot-hub.png" alt-text="A screenshot that shows the location of the Create IoT Hub list item in Visual Studio Code." lightbox="./media/iot-hub-create-use-iot-toolkit/create-iot-hub.png":::
-4. A pop-up shows in the bottom-right corner to let you sign in to Azure for the first time, if you're not signed in already.
+4. If you're not signed into Azure, a pop-up notification is shown in the bottom right corner to let you sign in to Azure. Select **Sign In** and follow the instructions to sign into Azure.
5. From the command palette at the top of VS Code, select your Azure subscription. 6. Select your resource group.
-7. Select a location.
+7. Select a region.
8. Select a pricing tier.
-9. Enter a globally unique name for your IoT hub, then press **Enter**.
-
-10. Wait a few minutes until the IoT hub is created. You'll see a confirmation in the output console.
-
-## Create an IoT hub and device in an existing IoT project
-
-The following steps show how to create an IoT Hub and register a device to the hub within an existing IoT project in Visual Studio (VS) Code.
-
-This method allows you to provision in Visual Studio Code without leaving your development environment.
-
-1. In the new opened project window, press `F1` to open the command palette, type and select **Azure IoT Device Workbench: Provision Azure Services...**.
-
- :::image type="content" source="media/iot-hub-create-use-iot-toolkit/provision.png" alt-text="A screenshot that shows how to open the command palette in VS Code." lightbox="media/iot-hub-create-use-iot-toolkit/provision.png":::
-
- > [!NOTE]
- > If you have not signed in Azure. Follow the pop-up notification for signing in.
-
-1. Select the subscription you want to use.
-
- :::image type="content" source="media/iot-hub-create-use-iot-toolkit/select-subscription.png" alt-text="A screenshot that shows how to choose your Azure subscription in Visual Studio Code." lightbox="media/iot-hub-create-use-iot-toolkit/select-subscription.png":::
-
-1. Select an existing resource group or create a new [resource group](../azure-resource-manager/management/overview.md#terminology).
-
- :::image type="content" source="media/iot-hub-create-use-iot-toolkit/select-resource-group.png" alt-text="A screenshot that shows how to choose a resource group or create a new one in Visual Studio Code." lightbox="media/iot-hub-create-use-iot-toolkit/select-resource-group.png":::
-
-1. In the resource group you specified, follow the prompts to select an existing IoT Hub or create a new Azure IoT Hub.
-
- :::image type="content" source="media/iot-hub-create-use-iot-toolkit/iot-hub-provision.png" alt-text="A screenshot that shows the first prompt in choosing an existing IoT Hub in Visual Studio Code." lightbox="media/iot-hub-create-use-iot-toolkit/iot-hub-provision.png":::
-
- :::image type="content" source="media/iot-hub-create-use-iot-toolkit/select-iot-hub.png" alt-text="A screenshot that shows the second prompt in choosing an existing IoT Hub in Visual Studio Code." lightbox="media/iot-hub-create-use-iot-toolkit/select-iot-hub.png":::
-
- :::image type="content" source="media/iot-hub-create-use-iot-toolkit/iot-hub-selected.png" alt-text="A screenshot that shows the third prompt in choosing an existing IoT Hub in Visual Studio Code." lightbox="media/iot-hub-create-use-iot-toolkit/iot-hub-selected.png":::
-
-1. In the output window, you see the Azure IoT Hub provisioned.
-
- :::image type="content" source="media/iot-hub-create-use-iot-toolkit/iot-hub-provisioned.png" alt-text="A screenshot that shows the output window in Visual Studio Code." lightbox="media/iot-hub-create-use-iot-toolkit/iot-hub-provisioned.png":::
-
-1. Select or create a new IoT Hub Device in the Azure IoT Hub you provisioned.
-
- :::image type="content" source="media/iot-hub-create-use-iot-toolkit/iot-device-provision.png" alt-text="A screenshot that shows the fourth prompt in choosing an existing IoT Hub in Visual Studio Code." lightbox="media/iot-hub-create-use-iot-toolkit/iot-device-provision.png":::
-
- :::image type="content" source="media/iot-hub-create-use-iot-toolkit/select-iot-device.png" alt-text="A screenshot that shows an example of an existing IoT Hub in Visual Studio Code." lightbox="media/iot-hub-create-use-iot-toolkit/select-iot-device.png":::
-
-1. Now you have an Azure IoT Hub provisioned and a device created in it. The device connection string is saved in VS Code.
+9. Enter a globally unique name for your IoT hub, and then select the Enter key.
- :::image type="content" source="media/iot-hub-create-use-iot-toolkit/provision-done.png" alt-text="A screenshot that shows IoT Hub details in the output window in Visual Studio Code." lightbox="media/iot-hub-create-use-iot-toolkit/provision-done.png":::
+10. Wait a few minutes until the IoT hub is created and confirmation is displayed in the **Output** panel.
> [!TIP]
-> To delete a device from your IoT hub, use the `Azure IoT Hub: Delete Device` option from the Command Palette. There is no option to delete your IoT hub in Visual Studio Code, however you can [delete your hub in the Azure portal](iot-hub-create-through-portal.md#delete-an-iot-hub).
+> There is no option to delete your IoT hub in Visual Studio Code, however you can [delete your hub in the Azure portal](iot-hub-create-through-portal.md#delete-an-iot-hub).
## Next steps
-Now that you've deployed an IoT hub using the Azure IoT Tools for Visual Studio Code, explore these articles:
+Now that you've deployed an IoT hub using the Azure IoT Hub extension for Visual Studio Code, explore these articles:
-- [Use the Azure IoT Tools for Visual Studio Code to send and receive messages between your device and an IoT Hub](iot-hub-vscode-iot-toolkit-cloud-device-messaging.md).
+- [Use the Azure IoT Hub extension for Visual Studio Code to send and receive messages between your device and an IoT hub](iot-hub-vscode-iot-toolkit-cloud-device-messaging.md).
-- [Use the Azure IoT Tools for Visual Studio Code for Azure IoT Hub device management](iot-hub-device-management-iot-toolkit.md)
+- [Use the Azure IoT Hub extension for Visual Studio Code for Azure IoT Hub device management](iot-hub-device-management-iot-toolkit.md)
-- [See the Azure IoT Hub for Visual Studio Code wiki page](https://github.com/microsoft/vscode-azure-iot-toolkit/wiki).
+- [See the Azure IoT Hub extension for Visual Studio Code wiki page](https://github.com/microsoft/vscode-azure-iot-toolkit/wiki).
iot-hub Iot Hub Dev Guide Sas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-dev-guide-sas.md
HTTPS implements authentication by including a valid token in the **Authorizatio
For example, Username (DeviceId is case-sensitive): `iothubname.azure-devices.net/DeviceId`
-Password (You can generate a SAS token with the CLI extension command [az iot hub generate-sas-token](/cli/azure/iot/hub#az-iot-hub-generate-sas-token), or the [Azure IoT Tools for Visual Studio Code](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-toolkit)):
+Password (You can generate a SAS token with the CLI extension command [az iot hub generate-sas-token](/cli/azure/iot/hub#az-iot-hub-generate-sas-token), or the [Azure IoT Hub extension for Visual Studio Code](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-toolkit)):
`SharedAccessSignature sr=iothubname.azure-devices.net%2fdevices%2fDeviceId&sig=kPszxZZZZZZZZZZZZZZZZZAhLT%2bV7o%3d&se=1487709501`
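As a sketch, a token like the one above can be generated with the CLI extension command mentioned earlier, run here from a PowerShell session; the hub and device names are placeholders.

```powershell
# Generate a device SAS token that's valid for one hour (3600 seconds).
az iot hub generate-sas-token --hub-name "iothubname" --device-id "DeviceId" --duration 3600
```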
The result, which grants access to all functionality for device1, would be:
`SharedAccessSignature sr=myhub.azure-devices.net%2fdevices%2fdevice1&sig=13y8ejUk2z7PLmvtwR5RqlGBOVwiq7rQR3WZ5xZX3N4%3D&se=1456971697` > [!NOTE]
-> It's possible to generate a SAS token with the CLI extension command [az iot hub generate-sas-token](/cli/azure/iot/hub#az-iot-hub-generate-sas-token), or the [Azure IoT Tools for Visual Studio Code](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-toolkit).
+> It's possible to generate a SAS token with the CLI extension command [az iot hub generate-sas-token](/cli/azure/iot/hub#az-iot-hub-generate-sas-token), or the [Azure IoT Hub extension for Visual Studio Code](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-toolkit).
#### Use a shared access policy to access on behalf of a device
iot-hub Iot Hub Devguide Device Twins https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-devguide-device-twins.md
To try out some of the concepts described in this article, see the following IoT
* [How to use the device twin](device-twins-node.md) * [How to use device twin properties](tutorial-device-twins.md)
-* [Device management with Azure IoT Tools for VS Code](iot-hub-device-management-iot-toolkit.md)
+* [Device management with the Azure IoT Hub extension for VS Code](iot-hub-device-management-iot-toolkit.md)
iot-hub Iot Hub Devguide Direct Methods https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-devguide-direct-methods.md
 Title: Understand Azure IoT Hub direct methods
-description: This article describes how use direct methods to invoke code on your devices from a service app.
+description: This article describes how to use direct methods to invoke code on your devices from a service app.
Now you have learned how to use direct methods, you may be interested in the fol
If you would like to try out some of the concepts described in this article, you may be interested in the following IoT Hub tutorial: * [Use direct methods](quickstart-control-device.md)
-* [Device management with Azure IoT Tools for VS Code](iot-hub-device-management-iot-toolkit.md)
+* [Device management with the Azure IoT Hub extension for VS Code](iot-hub-device-management-iot-toolkit.md)
iot-hub Iot Hub Device Management Iot Toolkit https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-device-management-iot-toolkit.md
Title: Azure IoT device management with Azure IoT Tools for VSCode
-description: Use the Azure IoT Tools for Visual Studio Code for Azure IoT Hub device management, featuring the Direct methods and the Twin's desired properties management options.
+ Title: Azure IoT Hub device management with the Azure IoT Hub extension for Visual Studio Code
+description: Use the Azure IoT Hub extension for Visual Studio Code for Azure IoT Hub device management, featuring the Direct methods and the Twin's desired properties management options.
Last updated 01/04/2019
-# Use Azure IoT Tools for Visual Studio Code for Azure IoT Hub device management
+# Use the Azure IoT Hub extension for Visual Studio Code for Azure IoT Hub device management
![End-to-end diagram](media/iot-hub-get-started-e2e-diagram/2.png)
-In this article, you learn how to use Azure IoT Tools for Visual Studio Code with various management options on your development machine. [Azure IoT Hub for VS Code](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-toolkit) is a useful Visual Studio Code extension that makes IoT Hub management and IoT application development easier. It comes with management options that you can use to perform various tasks.
+In this article, you learn how to use the [Azure IoT Hub extension for Visual Studio Code](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-toolkit) with various management options on your development machine. The IoT Hub extension is a useful Visual Studio (VS) Code extension that makes IoT Hub management and IoT application development easier. It comes with management options that you can use to perform various tasks.
[!INCLUDE [iot-hub-basic](../../includes/iot-hub-basic-whole.md)]
In this article, you learn how to use Azure IoT Tools for Visual Studio Code wit
| Direct methods | Make a device act such as starting or stopping sending messages or rebooting the device. | | Read device twin | Get the reported state of a device. For example, the device reports the LED is blinking now. | | Update device twin | Put a device into certain states, such as setting an LED to green or setting the telemetry send interval to 30 minutes. |
-| Cloud-to-device messages | Send notifications to a device. For example, "It is very likely to rain today. Don't forget to bring an umbrella." |
+| Cloud-to-device messages | Send notifications to a device. For example, "It's likely to rain today. Don't forget to bring an umbrella." |
For more detailed explanation on the differences and guidance on using these options, see [Device-to-cloud communication guidance](iot-hub-devguide-d2c-guidance.md) and [Cloud-to-device communication guidance](iot-hub-devguide-c2d-guidance.md).
Device twins are JSON documents that store device state information (metadata, c
* An active Azure subscription. * An Azure IoT hub under your subscription. * [Visual Studio Code](https://code.visualstudio.com/)
-* [Azure IoT Hub for VS Code](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-toolkit) or copy this URL and paste it into a browser window:`vscode:extension/vsciot-vscode.azure-iot-toolkit`.
+* [Azure IoT Hub extension for Visual Studio Code](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-toolkit) or copy this URL and paste it into a browser window:`vscode:extension/vsciot-vscode.azure-iot-toolkit`.
## Sign in to access your IoT hub
-1. In **Explorer** view of VS Code, expand **Azure IoT Hub Devices** section in the bottom left corner.
+Follow these steps to sign into Azure and access your IoT hub from your Azure subscription:
-2. Click **Select IoT Hub** in context menu.
+1. In the **Explorer** view of VS Code, expand the **Azure IoT Hub** section in the side bar.
-3. A pop-up will show in the bottom right corner to let you sign in to Azure for the first time.
+1. Select the ellipsis (…) button of the **Azure IoT Hub** section to display the action menu, and then select **Select IoT Hub**.
-4. After you sign in, your Azure Subscription list will be shown, then select Azure Subscription and IoT Hub.
+1. If you're not signed into Azure, a pop-up notification is shown in the bottom right corner to let you sign in to Azure. Select **Sign In** and follow the instructions to sign into Azure.
-5. The device list will be shown in **Azure IoT Hub Devices** tab in a few seconds.
+1. From the command palette at the top of VS Code, select your Azure subscription from the **Select Subscription** dropdown list.
- > [!Note]
- > You can also complete the set up by choosing **Set IoT Hub Connection String**. Enter the **iothubowner** policy connection string for the IoT hub that your IoT device connects to in the pop-up window.
+1. Select your IoT hub from the **Select IoT Hub** dropdown list.
+
+1. The devices for your IoT hub are retrieved from IoT Hub and shown under the **Devices** node in the **Azure IoT Hub** section of the side bar.
+
+ > [!NOTE]
+ > You can also use a connection string to access your IoT hub, by selecting **Set IoT Hub Connection String** from the action menu and entering the **iothubowner** policy connection string for your IoT hub in the **IoT Hub Connection String** input box.
## Direct methods
-1. Right-click your device and select **Invoke Direct Method**.
+To invoke a direct method from your IoT device, follow these steps:
+
+1. In the side bar, expand the **Devices** node under the **Azure IoT Hub** section.
+
+1. Right-click your IoT device and select **Invoke Device Direct Method**.
-2. Enter the method name and payload in input box.
+1. Enter the method name in the input box, and then select the Enter key.
-3. Results will be shown in **OUTPUT** > **Azure IoT Hub** view.
+1. Enter the payload in the input box, and then select the Enter key.
+
+1. The results are shown in the **Output** panel.
## Read device twin
-1. Right-click your device and select **Edit Device Twin**.
+To display the JSON document for the device twin of your IoT device, follow these steps:
+
+1. In the side bar, expand the **Devices** node under the **Azure IoT Hub** section.
+
+1. Right-click your IoT device and select **Edit Device Twin**.
-2. An **azure-iot-device-twin.json** file will be opened with the content of device twin.
+1. The JSON document for the device twin, named **azure-iot-device-twin.json**, is shown in the editor.
## Update device twin
-1. Make some edits of **tags** or **properties.desired** field.
+After [reading the device twin](#read-device-twin), follow these steps to update the device twin for your IoT device:
-2. Right-click on the **azure-iot-device-twin.json** file.
+1. Make changes to the JSON document for the device twin. For example, add tags under the **tags** property, or change the values of desired properties under **properties.desired**.
-3. Select **Update Device Twin** to update the device twin.
+1. Right-click the content area of the **azure-iot-device-twin.json** file and select **Update Device Twin**.
## Send cloud-to-device messages To send a message from your IoT hub to your device, follow these steps:
-1. Right-click your device and select **Send C2D Message to Device**.
+1. In the side bar, expand the **Devices** node under the **Azure IoT Hub** section.
+
+1. Right-click your IoT device and select **Send C2D Message to Device**.
-2. Enter the message in input box.
+1. Enter the message in the input box, and then select the Enter key.
-3. Results will be shown in **OUTPUT** > **Azure IoT Hub** view.
+1. The results are shown in the **Output** panel.
## Next steps
-You've learned how to use Azure IoT Hub for Visual Studio Code with various management options.
+You've learned how to use the Azure IoT Hub extension for Visual Studio Code with various management options.
[!INCLUDE [iot-hub-get-started-next-steps](../../includes/iot-hub-get-started-next-steps.md)]
iot-hub Iot Hub Distributed Tracing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-distributed-tracing.md
Microsoft Azure IoT Hub currently supports distributed tracing as a [preview feature](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
-IoT Hub is one of the first Azure services to support distributed tracing. As more Azure services support distributed tracing, you'll be able to trace Internet of Things (IoT) messages throughout the Azure services involved in your solution. For a background on the feature, see [What is distributed tracing?](../azure-monitor/app/distributed-tracing-telemetry-correlation.md).
+IoT Hub is one of the first Azure services to support distributed tracing. As more Azure services support distributed tracing, you can trace Internet of Things (IoT) messages throughout the Azure services involved in your solution. For a background on the feature, see [What is distributed tracing?](../azure-monitor/app/distributed-tracing-telemetry-correlation.md).
When you enable distributed tracing for IoT Hub, you can:
In this section, you configure an IoT hub to log distributed tracing attributes
:::image type="content" source="media/iot-hub-distributed-tracing/diagnostic-setting-name.png" alt-text="Screenshot that shows where to add a name for your diagnostic settings." lightbox="media/iot-hub-distributed-tracing/diagnostic-setting-name.png":::
-1. Choose one or more of the following options under **Destination details** to determine where the logging will be sent:
+1. Choose one or more of the following options under **Destination details** to determine where to send logging information:
- **Archive to a storage account**: Configure a storage account to contain the logging information. - **Stream to an event hub**: Configure an event hub to contain the logging information.
These instructions are for building the sample on Windows. For other environment
cmake .. ```
- If CMake can't find your C++ compiler, you might get build errors while running the preceding command. If that happens, try running the command in the [Visual Studio command prompt](/dotnet/framework/tools/developer-command-prompt-for-vs).
+ If CMake can't find your C++ compiler, you might encounter build errors while running the preceding command. If that happens, try running the command in the [Visual Studio command prompt](/dotnet/framework/tools/developer-command-prompt-for-vs).
After the build succeeds, the last few output lines will look similar to the following output:
To change the percentage of messages to be traced from the cloud, you must updat
1. (Optional) Change the sampling rate to a different value, and observe the change in frequency that messages include `tracestate` in the application properties.
-### Update by using Azure IoT Hub for Visual Studio Code
+### Update by using the Azure IoT Hub extension for Visual Studio Code
-1. With Visual Studio Code installed, install the latest version of [Azure IoT Tools](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-toolkit) for Visual Studio Code.
+1. With Visual Studio Code installed, install the latest version of the [Azure IoT Hub extension for Visual Studio Code](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-toolkit).
1. Open Visual Studio Code, and go to the **Explorer** tab and the **Azure IoT Hub** section.
To change the percentage of messages to be traced from the cloud, you must updat
**Enable Distributed Tracing: Enabled** now appears under **Distributed Tracing Setting (Preview)** > **Desired**.
-1. In the pop-up pane that appears for the sampling rate, type **100**, and then select the Enter key.
+1. In the pop-up pane that appears for the sampling rate, enter **100** and then select the Enter key.
![Screenshot that shows entering a sampling rate](./media/iot-hub-distributed-tracing/update-distributed-tracing-setting-3.png)
To understand the types of logs, see [Azure IoT Hub distributed tracing logs](mo
Many IoT solutions, including the [Azure IoT reference architecture](/azure/architecture/reference-architectures/iot) (English only), generally follow a variant of the [microservice architecture](/azure/architecture/microservices/). As an IoT solution grows more complex, you end up using a dozen or more microservices. These microservices might or might not be from Azure.
-Pinpointing where IoT messages are dropping or slowing down can be challenging. For example, imagine that you have an IoT solution that uses 5 different Azure services and 1,500 active devices. Each device sends 10 device-to-cloud messages per second, for a total of 15,000 messages per second. But you notice that your web app sees only 10,000 messages per second. How do you find the culprit?
+Pinpointing where IoT messages are dropping or slowing down can be challenging. For example, imagine that you have an IoT solution that uses five different Azure services and 1,500 active devices. Each device sends 10 device-to-cloud messages per second, for a total of 15,000 messages per second. But you notice that your web app sees only 10,000 messages per second. How do you find the culprit?
For you to reconstruct the flow of an IoT message across services, each service should propagate a *correlation ID* that uniquely identifies the message. After Azure Monitor collects correlation IDs in a centralized system, you can use those IDs to see message flow. This method is called the [distributed tracing pattern](/azure/architecture/microservices/logging-monitoring#distributed-tracing).
-To support wider adoption for distributed tracing, Microsoft is contributing to [W3C standard proposal for distributed tracing](https://w3c.github.io/trace-context/). When distributed tracing support for IoT Hub is enabled, it will follow this flow:
+To support wider adoption for distributed tracing, Microsoft is contributing to the [W3C standard proposal for distributed tracing](https://w3c.github.io/trace-context/). When distributed tracing support for IoT Hub is enabled, it follows this flow:
1. A message is generated on the IoT device. 1. The IoT device decides (with help from the cloud) that this message should be assigned a trace context.
To support wider adoption for distributed tracing, Microsoft is contributing to
- The proposal for the W3C Trace Context standard is currently a working draft. - The only development language that the client SDK currently supports is C.-- Cloud-to-device twin capability isn't available for the [IoT Hub basic tier](iot-hub-scaling.md#basic-and-standard-tiers). However, IoT Hub will still log to Azure Monitor if it sees a properly composed trace context header.-- To ensure efficient operation, IoT Hub will impose a throttle on the rate of logging that can occur as part of distributed tracing.
+- Cloud-to-device twin capability isn't available for the [IoT Hub basic tier](iot-hub-scaling.md#basic-and-standard-tiers). However, IoT Hub still logs to Azure Monitor if it sees a properly composed trace context header.
+- To ensure efficient operation, IoT Hub imposes a throttle on the rate of logging that can occur as part of distributed tracing.
## Next steps
iot-hub Iot Hub Mqtt Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-mqtt-support.md
IoT Hub isn't a full-featured MQTT broker and doesn't support all the behaviors
[!INCLUDE [iot-hub-basic](../../includes/iot-hub-basic-partial.md)]
-All device communication with IoT Hub must be secured using TLS/SSL. Therefore, IoT Hub doesn't support non-secure connections over TCP port 1883.
+All device communication with IoT Hub must be secured using TLS/SSL. Therefore, IoT Hub doesn't support nonsecure connections over TCP port 1883.
## Connecting to IoT Hub
In order to ensure a client/IoT Hub connection stays alive, both the service and
|C# | 300 seconds* | [Yes](/dotnet/api/microsoft.azure.devices.client.transport.mqtt.mqtttransportsettings.keepaliveinseconds) | |Python | 60 seconds | [Yes](https://github.com/Azure/azure-iot-sdk-python/blob/v2/azure-iot-device/azure/iot/device/iothub/abstract_clients.py#L343) |
-*The C# SDK defines the default value of the MQTT KeepAliveInSeconds property as 300 seconds. In reality, the SDK sends a ping request four times per keep-alive duration set. This means the SDK sends a keep-alive ping every 75 seconds.
+*The C# SDK defines the default value of the MQTT KeepAliveInSeconds property as 300 seconds. In reality, the SDK sends a ping request four times per configured keep-alive duration. In other words, the SDK sends a keep-alive ping once every 75 seconds.
Following the [MQTT v3.1.1 specification](http://docs.oasis-open.org/mqtt/mqtt/v3.1.1/os/mqtt-v3.1.1-os.html#_Toc398718081), IoT Hub's keep-alive ping interval is 1.5 times the client keep-alive value; however, IoT Hub limits the maximum server-side timeout to 29.45 minutes (1767 seconds). This limit exists because all Azure services are bound to the Azure load balancer TCP idle timeout, which is 29.45 minutes. For example, a device using the Java SDK sends the keep-alive ping, then loses network connectivity. 230 seconds later, the device misses the keep-alive ping because it's offline. However, IoT Hub doesn't close the connection immediately - it waits another `(230 * 1.5) - 230 = 115` seconds before disconnecting the device with the error [404104 DeviceConnectionClosedRemotely](iot-hub-troubleshoot-error-404104-deviceconnectionclosedremotely.md).
-The maximum client keep-alive value you can set is `1767 / 1.5 = 1177` seconds. Any traffic will reset the keep-alive. For example, a successful shared access signature (SAS) token refresh resets the keep-alive.
+The maximum client keep-alive value you can set is `1767 / 1.5 = 1177` seconds. Any traffic resets the keep-alive. For example, a successful shared access signature (SAS) token refresh resets the keep-alive.
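As a minimal sketch of picking a keep-alive within that limit, assuming the paho-mqtt 1.x package and placeholder hub, device, and token values (the `api-version` query string value is an assumption and may differ):

```python
import ssl
import paho.mqtt.client as mqtt

hub_name = "contoso-hub"   # placeholder IoT hub name
device_id = "myDevice"     # placeholder device ID

client = mqtt.Client(client_id=device_id, protocol=mqtt.MQTTv311)  # paho-mqtt 1.x constructor
client.username_pw_set(
    username=f"{hub_name}.azure-devices.net/{device_id}/?api-version=2021-04-12",
    password="<device SAS token>",  # see the SAS token discussion below
)
client.tls_set(cert_reqs=ssl.CERT_REQUIRED, tls_version=ssl.PROTOCOL_TLSv1_2)

# Stay at or below the documented maximum: 1767 / 1.5 = 1177 seconds.
client.connect(f"{hub_name}.azure-devices.net", port=8883, keepalive=1177)
client.loop_start()
```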
### Migrating a device app from AMQP to MQTT
If a device can't use the device SDKs, it can still connect to the public device
For more information about how to generate SAS tokens, see the [Use SAS tokens as a device](iot-hub-dev-guide-sas.md#use-sas-tokens-as-a-device) section of [Control access to IoT Hub using Shared Access Signatures](iot-hub-dev-guide-sas.md).
- You can also use the cross-platform Azure IoT Tools for Visual Studio Code or the CLI extension command [az iot hub generate-sas-token](/cli/azure/iot/hub#az-iot-hub-generate-sas-token) to quickly generate a SAS token. You can then copy and paste the SAS token into your own code for testing purposes.
+ You can also use the cross-platform [Azure IoT Hub extension for Visual Studio Code](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-toolkit) or the CLI extension command [az iot hub generate-sas-token](/cli/azure/iot/hub#az-iot-hub-generate-sas-token) to quickly generate a SAS token. You can then copy and paste the SAS token into your own code for testing purposes.
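For quick experiments outside those tools, the token can also be computed directly. A minimal sketch of the shared access signature pattern (HMAC-SHA256 over the URL-encoded resource URI and an expiry, using the base64-decoded key); the hub, device, and key values are placeholders:

```python
import time
from base64 import b64decode, b64encode
from hashlib import sha256
from hmac import HMAC
from urllib.parse import quote_plus, urlencode

def generate_sas_token(resource_uri, key, policy_name=None, expiry_in_seconds=3600):
    """Return a 'SharedAccessSignature sr=...&sig=...&se=...' string for resource_uri."""
    expiry = int(time.time()) + expiry_in_seconds
    string_to_sign = f"{quote_plus(resource_uri)}\n{expiry}"
    signature = b64encode(
        HMAC(b64decode(key), string_to_sign.encode("utf-8"), sha256).digest()
    ).decode("utf-8")
    token = {"sr": resource_uri, "sig": signature, "se": str(expiry)}
    if policy_name is not None:
        token["skn"] = policy_name  # only needed for policy-based (service-side) tokens
    return "SharedAccessSignature " + urlencode(token)

# Placeholder hub, device, and key values for illustration only.
print(generate_sas_token("contoso-hub.azure-devices.net/devices/myDevice", "bXlEZXZpY2VLZXk="))
```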
-### For Azure IoT Tools
-
-1. Expand the **AZURE IOT HUB DEVICES** tab in the bottom left corner of Visual Studio Code.
+### Using the Azure IoT Hub extension for Visual Studio Code
-2. Right-click your device and select **Generate SAS Token for Device**.
+1. In the side bar, expand the **Devices** node under the **Azure IoT Hub** section.
+
+1. Right-click your IoT device and select **Generate SAS Token for Device** from the context menu.
-3. Set **expiration time** and press 'Enter'.
+1. Enter the expiration time, in hours, for the SAS token in the input box, and then select the Enter key.
-4. The SAS token is created and copied to clipboard.
+1. The SAS token is created and copied to the clipboard.
The SAS token that's generated has the following structure:
The following list describes IoT Hub implementation-specific behaviors:
* IoT Hub doesn't persist Retain messages. If a device sends a message with the **RETAIN** flag set to 1, IoT Hub adds the **mqtt-retain** application property to the message. In this case, instead of persisting the retain message, IoT Hub passes it to the backend app.
-* IoT Hub only supports one active MQTT connection per device. Any new MQTT connection on behalf of the same device ID causes IoT Hub to drop the existing connection and **400027 ConnectionForcefullyClosedOnNewConnection** will be logged into IoT Hub Logs
+* IoT Hub only supports one active MQTT connection per device. Any new MQTT connection on behalf of the same device ID causes IoT Hub to drop the existing connection, and **400027 ConnectionForcefullyClosedOnNewConnection** is logged to IoT Hub logs.
* To route messages based on message body, you must first add property 'contentType' (`ct`) to the end of the MQTT topic and set its value to be `application/json;charset=utf-8` as shown in the following example. For more information about routing messages either based on message properties or message body, see the [IoT Hub message routing query syntax documentation](iot-hub-devguide-routing-query-syntax.md).
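As a short, hedged sketch of that topic construction in Python (placeholder device ID; the `$.ct` and `$.ce` system-property names follow the MQTT topic conventions described earlier in the article, and the property value is URL-encoded before it's appended):

```python
from urllib.parse import quote

device_id = "myDevice"  # placeholder device ID

# System properties are appended to the telemetry topic as a URL-encoded property bag.
content_type = quote("application/json;charset=utf-8", safe="")
topic = f"devices/{device_id}/messages/events/$.ct={content_type}&$.ce=utf-8"
print(topic)
# devices/myDevice/messages/events/$.ct=application%2Fjson%3Bcharset%3Dutf-8&$.ce=utf-8
```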
In cloud-to-device messages, values in the property bag are represented as in th
|-|-|-| | `null` | `key` | Only the key appears in the property bag | | empty string | `key=` | The key followed by an equal sign with no value |
-| non-null, non-empty value | `key=value` | The key followed by an equal sign and the value |
+| non-null, nonempty value | `key=value` | The key followed by an equal sign and the value |
The following example shows a property bag that contains three application properties: **prop1** with a value of `null`; **prop2**, an empty string (""); and **prop3** with a value of "a string".
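A hedged Python sketch of how a back-end sender might encode exactly that property bag (the helper is illustrative and not part of any SDK):

```python
from urllib.parse import quote

def encode_property_bag(properties):
    """Encode application properties per the table above: key, key=, or key=value."""
    parts = []
    for key, value in properties.items():
        if value is None:
            parts.append(quote(key, safe=""))  # null -> key only
        else:
            # Empty string becomes "key="; other values become "key=value".
            parts.append(f"{quote(key, safe='')}={quote(value, safe='')}")
    return "&".join(parts)

print(encode_property_bag({"prop1": None, "prop2": "", "prop3": "a string"}))
# prop1&prop2=&prop3=a%20string
```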
For more information, see [Understand and use device twins in IoT Hub](iot-hub-d
## Update device twin's reported properties
-To update reported properties, the device issues a request to IoT Hub via a publication over a designated MQTT topic. After IoT Hub processes the request, it responds the success or failure status of the update operation via a publication to another topic. This topic can be subscribed by the device in order to notify it about the result of its twin update request. To implement this type of request/response interaction in MQTT, we use the notion of request ID (`$rid`) provided initially by the device in its update request. This request ID is also included in the response from IoT Hub to allow the device to correlate the response to its particular earlier request.
+To update reported properties, the device issues a request to IoT Hub via a publication over a designated MQTT topic. After IoT Hub processes the request, it responds with the success or failure status of the update operation via a publication to another topic. The device can subscribe to this topic to be notified of the result of its twin update request. To implement this type of request/response interaction in MQTT, we use the notion of request ID (`$rid`) provided initially by the device in its update request. This request ID is also included in the response from IoT Hub to allow the device to correlate the response to its particular earlier request.
The following sequence describes how a device updates the reported properties in the device twin in IoT Hub:
client.publish("$iothub/twin/PATCH/properties/reported/?$rid=" +
rid, twin_reported_property_patch, qos=0) ```
-Upon success of the twin reported properties update process in the previous code snippet, the publication message from IoT Hub will have the following topic: `$iothub/twin/res/204/?$rid=1&$version=6`, where `204` is the status code indicating success, `$rid=1` corresponds to the request ID provided by the device in the code, and `$version` corresponds to the version of reported properties section of device twins after the update.
+Upon success of the twin reported properties update process in the previous code snippet, the publication message from IoT Hub has the following topic: `$iothub/twin/res/204/?$rid=1&$version=6`, where `204` is the status code indicating success, `$rid=1` corresponds to the request ID provided by the device in the code, and `$version` corresponds to the version of the reported properties section of the device twin after the update.
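A hedged sketch of pulling the status code, request ID, and version back out of that response topic on the device side (the parsing helper is illustrative):

```python
import re
from urllib.parse import parse_qs

def parse_twin_response(topic):
    """Split an '$iothub/twin/res/{status}/?...' topic into status and query parameters."""
    match = re.match(r"\$iothub/twin/res/(\d+)/\?(.*)", topic)
    if match is None:
        return None
    params = parse_qs(match.group(2))
    return {
        "status": int(match.group(1)),
        "rid": params.get("$rid", [None])[0],
        "version": params.get("$version", [None])[0],
    }

print(parse_twin_response("$iothub/twin/res/204/?$rid=1&$version=6"))
# {'status': 204, 'rid': '1', 'version': '6'}
```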
For more information, see [Understand and use device twins in IoT Hub](iot-hub-devguide-device-twins.md).
iot-hub Iot Hub Raspberry Pi Kit C Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-raspberry-pi-kit-c-get-started.md
You should see the following output that shows the sensor data and the messages
## Read the messages received by your hub
-One way to monitor messages received by your IoT hub from your device is to use the Azure IoT Tools for Visual Studio Code. To learn more, see [Use Azure IoT Tools for Visual Studio Code to send and receive messages between your device and IoT Hub](iot-hub-vscode-iot-toolkit-cloud-device-messaging.md).
+One way to monitor messages received by your IoT hub from your device is to use the Azure IoT Hub extension for Visual Studio Code. To learn more, see [Use the Azure IoT Hub extension for Visual Studio Code to send and receive messages between your device and IoT Hub](iot-hub-vscode-iot-toolkit-cloud-device-messaging.md).
For more ways to process data sent by your device, continue on to the next section.
iot-hub Iot Hub Raspberry Pi Kit Node Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-raspberry-pi-kit-node-get-started.md
Turn on Pi by using the micro USB cable and the power supply. Use the Ethernet c
> [!NOTE] > The default username is `pi` and the password is `raspberry`.
-2. Install Node.js and NPM to your Pi.
+2. Install Node.js and npm to your Pi.
First check your Node.js version.
You should see the following output that shows the sensor data and the messages
## Read the messages received by your hub
-One way to monitor messages received by your IoT hub from your device is to use the Azure IoT Tools for Visual Studio Code. To learn more, see [Use Azure IoT Tools for Visual Studio Code to send and receive messages between your device and IoT Hub](iot-hub-vscode-iot-toolkit-cloud-device-messaging.md).
+One way to monitor messages received by your IoT hub from your device is to use the Azure IoT Hub extension for Visual Studio Code. To learn more, see [Use the Azure IoT Hub extension for Visual Studio Code to send and receive messages between your device and IoT Hub](iot-hub-vscode-iot-toolkit-cloud-device-messaging.md).
For more ways to process data sent by your device, continue on to the next section.
iot-hub Iot Hub Raspberry Pi Web Simulator Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-raspberry-pi-web-simulator-get-started.md
You should see the following output that shows the sensor data and the messages
## Read the messages received by your hub
-One way to monitor messages received by your IoT hub from the simulated device is to use the Azure IoT Tools for Visual Studio Code. To learn more, see [Use Azure IoT Tools for Visual Studio Code to send and receive messages between your device and IoT Hub](iot-hub-vscode-iot-toolkit-cloud-device-messaging.md).
+One way to monitor messages received by your IoT hub from the simulated device is to use the Azure IoT Hub extension for Visual Studio Code. To learn more, see [Use the Azure IoT Hub extension for Visual Studio Code to send and receive messages between your device and IoT Hub](iot-hub-vscode-iot-toolkit-cloud-device-messaging.md).
For more ways to process data sent by your device, continue on to the next section.
iot-hub Iot Hub Vscode Iot Toolkit Cloud Device Messaging https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-vscode-iot-toolkit-cloud-device-messaging.md
Title: Use Azure IoT Tools for VSCode to manage IoT Hub messaging
-description: Learn how to use Azure IoT Tools for Visual Studio Code to monitor device to cloud messages and send cloud to device messages in Azure IoT Hub.
+ Title: Use the Azure IoT Hub extension for Visual Studio Code to manage IoT Hub messaging
+description: Learn how to use the Azure IoT Hub extension for Visual Studio Code to monitor device to cloud messages and send cloud to device messages in Azure IoT Hub.
Last updated 01/18/2019
-# Use Azure IoT Tools for Visual Studio Code to send and receive messages between your device and IoT Hub
+# Use the Azure IoT Hub extension for Visual Studio Code to send and receive messages between your device and IoT Hub
![End-to-end diagram](./media/iot-hub-vscode-iot-toolkit-cloud-device-messaging/e-to-e-diagram.png)
-In this article, you learn how to use Azure IoT Tools for Visual Studio Code to monitor device-to-cloud messages and to send cloud-to-device messages. Device-to-cloud messages could be sensor data that your device collects and then sends to your IoT hub. Cloud-to-device messages could be commands that your IoT hub sends to your device to blink an LED that is connected to your device.
+In this article, you learn how to use the Azure IoT Hub extension for Visual Studio Code to monitor device-to-cloud messages and to send cloud-to-device messages. Device-to-cloud messages could be sensor data that your device collects and then sends to your IoT hub. Cloud-to-device messages could be commands that your IoT hub sends to your device to blink an LED that is connected to your device.
-[Azure IoT Tools](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-toolkit) is a useful Visual Studio Code extension that makes IoT Hub management and IoT application development easier. This article focuses on how to use Azure IoT Tools for Visual Studio Code to send and receive messages between your device and your IoT hub.
+The [Azure IoT Hub extension for Visual Studio Code](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-toolkit) is a useful extension that makes IoT Hub management and IoT application development easier. This article focuses on how to use the extension to send and receive messages between your device and your IoT hub.
[!INCLUDE [iot-hub-basic](../../includes/iot-hub-basic-partial.md)]
In this article, you learn how to use Azure IoT Tools for Visual Studio Code to
* [Visual Studio Code](https://code.visualstudio.com/)
-* [Azure IoT Tools for VS Code](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-toolkit) or copy and paste this URL into a browser window: `vscode:extension/vsciot-vscode.azure-iot-toolkit`
+* [Azure IoT Hub extension for Visual Studio Code](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-toolkit) or copy and paste this URL into a browser window: `vscode:extension/vsciot-vscode.azure-iot-toolkit`
## Sign in to access your IoT hub
-1. In **Explorer** view of VS Code, expand **Azure IoT Hub Devices** section in the bottom left corner.
+Follow these steps to sign into Azure and access your IoT hub from your Azure subscription:
-2. Click **Select IoT Hub** in context menu.
+1. In the **Explorer** view of VS Code, expand the **Azure IoT Hub** section in the side bar.
-3. A pop-up will show in the bottom right corner to let you sign in to Azure for the first time.
+1. Select the ellipsis (…) button of the **Azure IoT Hub** section to display the action menu, and then select **Select IoT Hub** from the action menu.
-4. After you sign in, your Azure Subscription list will be shown, then select Azure Subscription and IoT Hub.
+1. If you're not signed in to Azure, a pop-up notification is shown in the bottom right corner to let you sign in to Azure. Select **Sign In** and follow the instructions to sign in to Azure.
-5. The device list will be shown in **Azure IoT Hub Devices** tab in a few seconds.
+1. Select your Azure subscription from the **Select Subscription** dropdown list.
- > [!Note]
- > You can also complete the set up by choosing **Set IoT Hub Connection String**. Enter the **iothubowner** policy connection string for the IoT hub that your IoT device connects to in the pop-up window.
+1. Select your IoT hub from the **Select IoT Hub** dropdown list.
+
+1. The devices for your IoT hub are retrieved from IoT Hub and shown under the **Devices** node in the **Azure IoT Hub** section of the side bar.
+
+ > [!NOTE]
+ > You can also use a connection string to access your IoT hub, by selecting **Set IoT Hub Connection String** from the action menu and entering the **iothubowner** policy connection string for your IoT hub in the **IoT Hub Connection String** input box.
## Monitor device-to-cloud messages To monitor messages that are sent from your device to your IoT hub, follow these steps:
-1. Right-click your device and select **Start Monitoring Built-in Event Endpoint**.
+1. In the side bar, expand the **Devices** node under the **Azure IoT Hub** section.
+
+1. Right-click your IoT device and select **Start Monitoring Built-in Event Endpoint**.
-2. The monitored messages will be shown in **OUTPUT** > **Azure IoT Hub** view.
+1. The monitored messages are shown in the **Output** panel.
-3. To stop monitoring, right-click the **OUTPUT** view and select **Stop Monitoring Built-in Event Endpoint**.
+1. To stop monitoring messages, right-click the **Output** panel and select **Stop Monitoring Built-in Event Endpoint**.
## Send cloud-to-device messages To send a message from your IoT hub to your device, follow these steps:
+
+1. In the side bar, expand the **Devices** node under the **Azure IoT Hub** section.
-1. Right-click your device and select **Send C2D Message to Device**.
+1. Right-click your IoT device and select **Send C2D Message to Device** from the context menu.
-2. Enter the message in input box.
+1. Enter the message in the input box, and then select the Enter key.
-3. Results will be shown in **OUTPUT** > **Azure IoT Hub** view.
+1. The results are shown in the **Output** panel.
## Next steps
iot-hub Iot Hubs Manage Device Twin Tags https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hubs-manage-device-twin-tags.md
To try out some of the concepts described in this article, see the following IoT
* [How to use the device twin](device-twins-node.md) * [How to use device twin properties](tutorial-device-twins.md)
-* [Device management with Azure IoT Tools for VS Code](iot-hub-device-management-iot-toolkit.md)
+* [Device management with the Azure IoT Hub extension for VS Code](iot-hub-device-management-iot-toolkit.md)
key-vault Quick Create Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/secrets/quick-create-bicep.md
Previously updated : 04/08/2022 Last updated : 04/21/2023 #Customer intent: As a security admin who is new to Azure, I want to use Key Vault to securely store keys and passwords in Azure.
key-vault Quick Create Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/secrets/quick-create-template.md
Previously updated : 04/27/2021 Last updated : 04/23/2023 #Customer intent: As a security admin who is new to Azure, I want to use Key Vault to securely store keys and passwords in Azure.
load-balancer Upgrade Basic Standard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/upgrade-basic-standard.md
An Azure PowerShell script is available that does the following procedures:
The following are the recommended steps to change the allocation method. 1. Sign in to the [Azure portal](https://portal.azure.com).
-
+ 2. Select **All resources** in the left menu. Select the **basic public IP address associated with the basic load balancer** from the resource list.
-
+ 3. In the **Settings** of the basic public IP address, select **Configurations**.
-
+ 4. In **Assignment**, select **Static**.
-
+ 5. Select **Save**.
-
+ >[!NOTE] >For virtual machines which have public IPs, you must create standard IP addresses first. The same IP address is not guaranteed. Disassociate the VMs from the basic IPs and associate them with the newly created standard IP addresses. You'll then be able to follow the instructions to add VMs into the backend pool of the Standard Azure Load Balancer.
Download the migration script from the [PowerShell Gallery](https://www.powershe
There are two options depending on your local PowerShell environment setup and preferences:
-* If you donΓÇÖt have the Az PowerShell module installed, or donΓÇÖt mind uninstalling the Az PowerShell module, use the `Install-Script` option to run the script.
+* If you don't have the Az PowerShell module installed, or don't mind uninstalling the Az PowerShell module, use the `Install-Script` option to run the script.
* If you need to keep the Az PowerShell module, download the script and run it directly.
To determine if you have the Az PowerShell module installed, run `Get-InstalledM
### Install with Install-Script To use this option, you must not have the Az PowerShell module installed on your computer. If it's installed, the following command displays an error. Uninstall the Az PowerShell module, or use the other option to download the script manually and run it.
-
+ Run the script with the following command: ```azurepowershell Install-Script -Name AzurePublicLBUpgrade ```
-This command also installs the required Az PowerShell module.
+This command also installs the required Az PowerShell module.
### Install with the script directly
-If you do have Az PowerShell module installed and can't uninstall it, or don't want to uninstall it,you can manually download the script using the **Manual Download** tab in the script download link. The script is downloaded as a raw **nupkg** file. To install the script from this **nupkg** file, see [Manual Package Download](/powershell/gallery/gallery/how-to/working-with-packages/manual-download)
+If you do have the Az PowerShell module installed and can't uninstall it, or don't want to uninstall it, you can manually download the script using the **Manual Download** tab in the script download link. The script is downloaded as a raw **nupkg** file. To install the script from this **nupkg** file, see [Manual Package Download](/powershell/gallery/how-to/working-with-packages/manual-download)
To run the script:
To run the script:
3. Examine the required parameters: * **oldRgName: [String]: Required** – This parameter is the resource group for your existing basic load balancer you want to upgrade. To find this string value, navigate to the Azure portal, select your basic load balancer source, and select the **Overview** for the load balancer. The resource group is located on that page
-
+ * **oldLBName: [String]: Required** – This parameter is the name of your existing basic load balancer you want to upgrade.
-
+ * **newLBName: [String]: Required** – This parameter is the name for the standard load balancer to be created 4. Run the script using the appropriate parameters. It may take five to seven minutes to finish.
To run the script:
### Create a NAT gateway for outbound access
-The script creates an outbound rule that enables outbound connectivity. Azure Virtual Network NAT is the recommended service for outbound connectivity. For more information about Azure Virtual Network NAT, see [What is Azure Virtual Network NAT?](../virtual-network/nat-gateway/nat-overview.md).
+The script creates an outbound rule that enables outbound connectivity. Azure Virtual Network NAT is the recommended service for outbound connectivity. For more information about Azure Virtual Network NAT, see [What is Azure Virtual Network NAT?](../virtual-network/nat-gateway/nat-overview.md).
To create a NAT gateway resource and associate it with a subnet of your virtual network see, [Create NAT gateway](quickstart-load-balancer-standard-public-portal.md#create-nat-gateway).
load-balancer Upgrade Basicinternal Standard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/upgrade-basicInternal-standard.md
This article introduces a PowerShell script that creates a Standard Load Balance
### Caveats\Limitations
-* Script only supports Internal Load Balancer upgrade where no outbound connection is required. If you required [outbound connection](./load-balancer-outbound-connections.md) for some of your VMs, refer to this [page](upgrade-InternalBasic-To-PublicStandard.md) for instructions.
+* The script only supports Internal Load Balancer upgrade where no outbound connection is required. If you require an [outbound connection](./load-balancer-outbound-connections.md) for some of your VMs, refer to this [page](upgrade-InternalBasic-To-PublicStandard.md) for instructions.
* The Basic Load Balancer needs to be in the same resource group as the backend VMs and NICs.
-* If the Standard load balancer is created in a different region, you wonΓÇÖt be able to associate the VMs existing in the old region to the newly created Standard Load Balancer. To work around this limitation, make sure to create a new VM in the new region.
+* If the Standard load balancer is created in a different region, you won't be able to associate the VMs existing in the old region to the newly created Standard Load Balancer. To work around this limitation, make sure to create a new VM in the new region.
* If your Load Balancer doesn't have any frontend IP configuration or backend pool, you're likely to hit an error running the script. Make sure they aren't empty. * The script can't migrate Virtual Machine Scale Set from Basic Load Balancer's backend to Standard Load Balancer's backend. For this type of upgrade, see [Upgrade a basic load balancer used with Virtual Machine Scale Sets](./upgrade-basic-standard-virtual-machine-scale-sets.md) for instructions and more information.
This article introduces a PowerShell script that creates a Standard Load Balance
1. Select **All services** in the left-hand menu, select **All resources**, and then select your Basic Load Balancer from the resources list.
-2. Under **Settings**, select **Frontend IP configuration**, and select the first frontend IP configuration.
+2. Under **Settings**, select **Frontend IP configuration**, and select the first frontend IP configuration.
3. For **Assignment**, select **Static**
Download the migration script from the [PowerShell Gallery](https://www.powersh
There are two options for you depending on your local PowerShell environment setup and preferences:
-* If you donΓÇÖt have the Azure Az PowerShell module installed, or donΓÇÖt mind uninstalling the Azure Az PowerShell module, the best option is to use the `Install-Script` option to run the script.
+* If you don't have the Azure Az PowerShell module installed, or don't mind uninstalling the Azure Az PowerShell module, the best option is to use the `Install-Script` option to run the script.
* If you need to keep the Azure Az PowerShell module, your best bet is to download the script and run it directly. To determine if you have the Azure Az PowerShell module installed, run `Get-InstalledModule -Name az`. If you don't see any installed Az PowerShell module, then you can use the `Install-Script` method.
To determine if you have the Azure Az PowerShell module installed, run `Get-Inst
### Install using the Install-Script method To use this option, you must not have the Azure Az PowerShell module installed on your computer. If it's installed, the following command displays an error. You can either uninstall the Azure Az PowerShell module, or use the other option to download the script manually and run it.
-
+ Run the script with the following command: `Install-Script -Name AzureILBUpgrade`
-This command also installs the required Az PowerShell module.
+This command also installs the required Az PowerShell module.
### Install using the Manual Download method
-If you do have some Azure Az PowerShell module installed and can't uninstall them (or don't want to uninstall them), you can manually download the script using the **Manual Download** tab in the script download link. The script is downloaded as a raw nupkg file. To install the script from this nupkg file, see [Manual Package Download](/powershell/gallery/gallery/how-to/working-with-packages/manual-download).
+If you do have the Azure Az PowerShell module installed and can't uninstall it (or don't want to uninstall it), you can manually download the script using the **Manual Download** tab in the script download link. The script is downloaded as a raw nupkg file. To install the script from this nupkg file, see [Manual Package Download](/powershell/gallery/how-to/working-with-packages/manual-download).
### Run the script
If you do have some Azure Az PowerShell module installed and can't uninstall the
1. Examine the required parameters: * **rgName: [String]: Required** – This parameter is the resource group for your existing Basic Load Balancer and new Standard Load Balancer. To find this string value, navigate to Azure portal, select your Basic Load Balancer source, and select the **Overview** for the load balancer. The Resource Group is located on that page.
- * **oldLBName: [String]: Required** ΓÇô This parameter is the name of your existing Basic Balancer you want to upgrade.
+ * **oldLBName: [String]: Required** – This parameter is the name of your existing Basic Load Balancer you want to upgrade.
* **newlocation: [String]: Required** – This parameter is the location in which the Standard Load Balancer will be created. It's recommended to use the same location as the chosen Basic Load Balancer for better association with other existing resources. * **newLBName: [String]: Required** – This parameter is the name for the Standard Load Balancer to be created. 1. Run the script using the appropriate parameters. It may take five to seven minutes to finish.
load-balancer Upgrade Internalbasic To Publicstandard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/upgrade-internalbasic-to-publicstandard.md
# Upgrade an internal basic load balancer - Outbound connections required >[!Important]
->On September 30, 2025, Basic Load Balancer will be retired. For more information, see the [official announcement](https://azure.microsoft.com/updates/azure-basic-load-balancer-will-be-retired-on-30-september-2025-upgrade-to-standard-load-balancer/). If you are currently using Basic Load Balancer, make sure to upgrade to Standard Load Balancer prior to the retirement date.
+>On September 30, 2025, Basic Load Balancer will be retired. For more information, see the [official announcement](https://azure.microsoft.com/updates/azure-basic-load-balancer-will-be-retired-on-30-september-2025-upgrade-to-standard-load-balancer/). If you are currently using Basic Load Balancer, make sure to upgrade to Standard Load Balancer prior to the retirement date.
A standard [Azure Load Balancer](load-balancer-overview.md) offers increased functionality and high availability through zone redundancy. For more information about Azure Load Balancer SKUs, see [Azure Load Balancer SKUs](./skus.md#skus). A standard internal Azure Load Balancer doesn't provide outbound connectivity. The PowerShell script in this article migrates the basic load balancer configuration to a standard public load balancer.
An Azure PowerShell script is available that does the following procedures:
* The script supports an internal load balancer upgrade where outbound connectivity is required. If outbound connectivity isn't required, see [Upgrade an internal basic load balancer - Outbound connections not required](./upgrade-basicinternal-standard.md).
-* The standard load balancer has a new public address. ItΓÇÖs impossible to move the IP addresses associated with existing basic internal load balancer to a standard public load balancer because of different SKUs.
+* The standard load balancer has a new public address. It's impossible to move the IP addresses associated with existing basic internal load balancer to a standard public load balancer because of different SKUs.
-* If the standard load balancer is created in a different region, you wonΓÇÖt be able to associate the VMs in the old region. To avoid this constraint, ensure you create new VMs in the new region.
+* If the standard load balancer is created in a different region, you won't be able to associate the VMs in the old region. To avoid this constraint, ensure you create new VMs in the new region.
* If the load balancer doesn't have a frontend IP configuration or backend pool, you'll encounter an error running the script. Ensure the load balancer has a frontend IP and backend pool
Download the migration script from the [PowerShell Gallery](https://www.powershe
There are two options depending on your local PowerShell environment setup and preferences:
-* If you donΓÇÖt have the Az PowerShell module installed, or donΓÇÖt mind uninstalling the Az PowerShell module, use the `Install-Script` option to run the script.
+* If you don't have the Az PowerShell module installed, or don't mind uninstalling the Az PowerShell module, use the `Install-Script` option to run the script.
* If you need to keep the Az PowerShell module, download the script and run it directly.
To determine if you have the Az PowerShell module installed, run `Get-InstalledM
### Install with Install-Script To use this option, you must not have the Az PowerShell module installed on your computer. If it's installed, the following command displays an error. Uninstall the Az PowerShell module, or use the other option to download the script manually and run it.
-
+ Run the script with the following command: ```azurepowershell Install-Script -Name AzureLBUpgrade ```
-This command also installs the required Az PowerShell module.
+This command also installs the required Az PowerShell module.
### Install with the script directly
-If you do have Az PowerShell module installed and can't uninstall them, or don't want to uninstall them, you can manually download the script using the **Manual Download** tab in the script download link. The script is downloaded as a raw **nupkg** file. To install the script from this **nupkg** file, see [Manual Package Download](/powershell/gallery/gallery/how-to/working-with-packages/manual-download).
+If you do have the Az PowerShell module installed and can't uninstall it, or don't want to uninstall it, you can manually download the script using the **Manual Download** tab in the script download link. The script is downloaded as a raw **nupkg** file. To install the script from this **nupkg** file, see [Manual Package Download](/powershell/gallery/how-to/working-with-packages/manual-download).
To run the script:
To run the script:
3. Examine the required parameters: * **oldRgName: [String]: Required** – This parameter is the resource group for your existing basic load balancer you want to upgrade. To find this string value, navigate to the Azure portal, select your basic load balancer source, and select the **Overview** for the load balancer. The resource group is located on that page
-
+ * **oldLBName: [String]: Required** – This parameter is the name of your existing basic load balancer you want to upgrade
- * **newRgName: [String]: Required** ΓÇô This parameter is the resource group where the standard load balancer is created. The resource group can be new or existing. If you choose an existing resource group, the name of the load balancer must be unique within the resource group.
-
+ * **newRgName: [String]: Required** – This parameter is the resource group where the standard load balancer is created. The resource group can be new or existing. If you choose an existing resource group, the name of the load balancer must be unique within the resource group.
+ * **newLocation: [String]: Required** – This parameter is the location where the standard load balancer is created. We recommend you choose the same location as the basic load balancer to ensure association of existing resources
-
+ * **newLBName: [String]: Required** – This parameter is the name for the standard load balancer to be created 4. Run the script using the appropriate parameters. It may take five to seven minutes to finish.
To run the script:
Ensure that the script successfully created a new standard public load balancer with the exact configuration from your basic internal load balancer. You can verify the configuration from the Azure portal. Send a small amount of traffic through the standard load balancer as a manual test.
-
+ The following scenarios explain how you add VMs to the backend pools of the newly created standard public load balancer, and our recommendations for each scenario: * **Move existing VMs from the backend pools of the old basic internal load balancer to the backend pools of the new standard public load balancer** 1. Sign in to the [Azure portal](https://portal.azure.com).
-
+ 2. Select **All resources** in the left menu. Select the **new standard load balancer** from the resource list.
-
+ 3. In the **Settings** in the load balancer page, select **Backend pools**.
-
+ 4. Select the backend pool that matches the backend pool of the basic load balancer.
-
+ 5. Select **Virtual Machine**
-
+ 6. Select the VMs from the matching backend pool of the basic load balancer.
-
+ 7. Select **Save**.
-
+ >[!NOTE] >For virtual machines which have public IPs, you must create standard IP addresses first. The same IP address is not guaranteed. Disassociate the VMs from the basic IPs and associate them with the newly created standard IP addresses. You'll then be able to follow the instructions to add VMs into the backend pool of the Standard Azure Load Balancer. * **Create new VMs to add to the backend pools of the new standard public load balancer**.
-
+ * To create a virtual machine and associate it with the load balancer, see [Create virtual machines](./quickstart-load-balancer-standard-public-portal.md#create-virtual-machines). ### Create a NAT gateway for outbound access
-The script creates an outbound rule that enables outbound connectivity. Azure NAT Gateway is the recommended service for outbound connectivity. For more information about Azure NAT Gateway, see [What is Azure NAT Gateway?](../virtual-network/nat-gateway/nat-overview.md).
+The script creates an outbound rule that enables outbound connectivity. Azure NAT Gateway is the recommended service for outbound connectivity. For more information about Azure NAT Gateway, see [What is Azure NAT Gateway?](../virtual-network/nat-gateway/nat-overview.md).
To create a NAT gateway resource and associate it with a subnet of your virtual network see, [Create NAT gateway](quickstart-load-balancer-standard-public-portal.md#create-nat-gateway).
machine-learning Tutorial Power Bi Custom Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/tutorial-power-bi-custom-model.md
-+ Last updated 12/22/2021
openshift Tutorial Create Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/tutorial-create-cluster.md
You can select to use a specific version of ARO when creating your cluster. Firs
`az aro get-versions --location <region>`
-Once you've chosen the version, specify it using the `--version` parameter in the `aro create` command:
+Once you've chosen the version, specify it using the `--version` parameter in the `az aro create` command:
```azurecli-interactive az aro create \
az aro create \
--name $CLUSTER \ --vnet aro-vnet \ --master-subnet master-subnet \
- --worker-subnet worker-subnet
+ --worker-subnet worker-subnet \
--version <x.y.z> ```
sap Plan Deployment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/automation/plan-deployment.md
management_dns_subscription_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
use_custom_dns_a_registration = false ```
-Without these values, a Private DNS Zone will be created in the SAP Library resource group.
+Without these values, a Private DNS Zone will be created in the SAP Library resource group.
For more information, see the [in-depth explanation of how to configure the deployer](configure-control-plane.md).
The SAP library resource group provides storage for SAP installation media, Bill
## Workload zone planning
-Most SAP application landscapes are partitioned in different tiers. In SDAF these are called workload zones, for example, you might have different workload zones for development, quality assurance, and production. See [workload zones](deployment-framework.md#deployment-components).
+Most SAP application landscapes are partitioned in different tiers. In SDAF, these are called workload zones. For example, you might have different workload zones for development, quality assurance, and production. See [workload zones](deployment-framework.md#deployment-components).
-The default naming convention for workload zones is `[ENVIRONMENT]-[REGIONCODE]-[NETWORK]-INFRASTRUCTURE`, for example, `DEV-WEEU-SAP01-INFRASTRUCTURE` for a development environment hosted in the West Europe region using the SAP01 virtual network or `PRD-WEEU-SAP02-INFRASTRUCTURE` for a production environment hosted in the West Europe region using the SAP02 virtual network.
+The default naming convention for workload zones is `[ENVIRONMENT]-[REGIONCODE]-[NETWORK]-INFRASTRUCTURE`, for example, `DEV-WEEU-SAP01-INFRASTRUCTURE` for a development environment hosted in the West Europe region using the SAP01 virtual network or `PRD-WEEU-SAP02-INFRASTRUCTURE` for a production environment hosted in the West Europe region using the SAP02 virtual network.
`SAP01` and `SAP02` define the logical names for the Azure virtual networks; these can be used to further partition the environments. If you need two Azure virtual networks for the same workload zone, for example, in a multi-subscription scenario where you host development environments in two subscriptions, you can use different logical names for each virtual network. For example, `DEV-WEEU-SAP01-INFRASTRUCTURE` and `DEV-WEEU-SAP02-INFRASTRUCTURE`.
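A tiny illustrative sketch of that convention (pure Python; the values are hypothetical):

```python
def workload_zone_name(environment, region_code, network):
    """Compose the default [ENVIRONMENT]-[REGIONCODE]-[NETWORK]-INFRASTRUCTURE name."""
    return f"{environment}-{region_code}-{network}-INFRASTRUCTURE"

print(workload_zone_name("DEV", "WEEU", "SAP01"))  # DEV-WEEU-SAP01-INFRASTRUCTURE
```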
Before you design your workload zone layout, consider the following questions:
* In which regions do you need to deploy workloads? * How many workload zones does your scenario require (development, quality assurance, production etc.)?
-* Are you deploying into new Virtual networks or are you using existing virtual networks
+* Are you deploying into new virtual networks or are you using existing virtual networks?
* How is DNS configured (integrate with existing DNS or deploy a Private DNS zone in the control plane)? * What storage type do you need for the shared storage (Azure Files NFS, Azure NetApp Files)?
For more information, see [how to configure a workload zone deployment for autom
### Windows based deployments
-When doing Windows based deployments the Virtual Machines in the workload zone's Virtual Network need to be able to communicate with Active Directory in order to join the SAP Virtual Machines to the Active Directory Domain. The provided DNS name needs to be resolvable by the Active Directory.
+When doing Windows-based deployments, the virtual machines in the workload zone's virtual network need to be able to communicate with Active Directory in order to join the SAP virtual machines to the Active Directory domain. The provided DNS name needs to be resolvable by Active Directory.
As SDAF won't create accounts in Active Directory, the accounts need to be precreated and stored in the workload zone key vault. | Credential | Name | Example |
-| | -- | -- |
-| Account that can perform domain join activities | [IDENTIFIER]-ad-svc-account | DEV-WEEU-SAP01-ad-svc-account |
-| Password for the account that performs the domain join | [IDENTIFIER]-ad-svc-account-password | DEV-WEEU-SAP01-ad-svc-account-password |
-| 'sidadm' account password | [IDENTIFIER]-[SID]-win-sidadm_password_id | DEV-WEEU-SAP01-W01-winsidadm_password_id |
-| SID Service account password | [IDENTIFIER]-[SID]-svc-sidadm-password | DEV-WEEU-SAP01-W01-svc-sidadm-password |
-| SQL Server Service account | [IDENTIFIER]-[SID]-sql-svc-account | DEV-WEEU-SAP01-W01-sql-svc-account |
-| SQL Server Service account password | [IDENTIFIER]-[SID]-sql-svc-password | DEV-WEEU-SAP01-W01-sql-svc-password |
-| SQL Server Agent Service account | [IDENTIFIER]-[SID]-sql-agent-account | DEV-WEEU-SAP01-W01-sql-agent-account |
-| SQL Server Agent Service account password | [IDENTIFIER]-[SID]-sql-agent-password | DEV-WEEU-SAP01-W01-sql-agent-password |
+| | -- | -- |
+| Account that can perform domain join activities | [IDENTIFIER]-ad-svc-account | DEV-WEEU-SAP01-ad-svc-account |
+| Password for the account that performs the domain join | [IDENTIFIER]-ad-svc-account-password | DEV-WEEU-SAP01-ad-svc-account-password |
+| 'sidadm' account password | [IDENTIFIER]-[SID]-win-sidadm_password_id | DEV-WEEU-SAP01-W01-winsidadm_password_id |
+| SID Service account password | [IDENTIFIER]-[SID]-svc-sidadm-password | DEV-WEEU-SAP01-W01-svc-sidadm-password |
+| SQL Server Service account | [IDENTIFIER]-[SID]-sql-svc-account | DEV-WEEU-SAP01-W01-sql-svc-account |
+| SQL Server Service account password | [IDENTIFIER]-[SID]-sql-svc-password | DEV-WEEU-SAP01-W01-sql-svc-password |
+| SQL Server Agent Service account | [IDENTIFIER]-[SID]-sql-agent-account | DEV-WEEU-SAP01-W01-sql-agent-account |
+| SQL Server Agent Service account password | [IDENTIFIER]-[SID]-sql-agent-password | DEV-WEEU-SAP01-W01-sql-agent-password |
#### DNS settings
The automation framework uses [Service Principals](#service-principal-creation)
The automation framework will use the workload zone key vault for storing both the automation user credentials and the SAP system credentials. The virtual machine credentials are named as follows: | Credential | Name | Example |
-| - | - | - |
-| Private key | [IDENTIFIER]-sshkey | DEV-WEEU-SAP01-sid-sshkey |
-| Public key | [IDENTIFIER]-sshkey-pub | DEV-WEEU-SAP01-sid-sshkey-pub |
-| Username | [IDENTIFIER]-username | DEV-WEEU-SAP01-sid-username |
-| Password | [IDENTIFIER]-password | DEV-WEEU-SAP01-sid-password |
-| sidadm Password | [IDENTIFIER]-[SID]-sap-password | DEV-WEEU-SAP01-X00-sap-password |
-| sidadm account password | [IDENTIFIER]-[SID]-winsidadm_password_id | DEV-WEEU-SAP01-W01-winsidadm_password_id |
-| SID Service account password | [IDENTIFIER]-[SID]-svc-sidadm-password | DEV-WEEU-SAP01-W01-svc-sidadm-password |
+| - | - | - |
+| Private key | [IDENTIFIER]-sshkey | DEV-WEEU-SAP01-sid-sshkey |
+| Public key | [IDENTIFIER]-sshkey-pub | DEV-WEEU-SAP01-sid-sshkey-pub |
+| Username | [IDENTIFIER]-username | DEV-WEEU-SAP01-sid-username |
+| Password | [IDENTIFIER]-password | DEV-WEEU-SAP01-sid-password |
+| sidadm Password | [IDENTIFIER]-[SID]-sap-password | DEV-WEEU-SAP01-X00-sap-password |
+| sidadm account password | [IDENTIFIER]-[SID]-winsidadm_password_id | DEV-WEEU-SAP01-W01-winsidadm_password_id |
+| SID Service account password | [IDENTIFIER]-[SID]-svc-sidadm-password | DEV-WEEU-SAP01-W01-svc-sidadm-password |
### Service principal creation
Create your service principal:
"tenant": "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" } ```
-1. Optionally assign the User Access Administrator role to your service principal. For example:
+1. Optionally assign the User Access Administrator role to your service principal. For example:
```azurecli az role assignment create --assignee <your-application-ID> --role "User Access Administrator" --scope /subscriptions/<your-subscription-ID>/resourceGroups/<your-resource-group-name> ```
Create your service principal:
For more information, see [the Azure CLI documentation for creating a service principal](/cli/azure/create-an-azure-service-principal-azure-cli) ### Permissions management
-In a locked down environment, you might need to assign another permissions to the service principals. For example, you might need to assign the User Access Administrator role to the service principal.
+In a locked-down environment, you might need to assign additional permissions to the service principals. For example, you might need to assign the User Access Administrator role to the service principal.
#### Required permissions
The following table shows the required permissions for the service principals:
> | Azure CLI | Installing [Azure CLI](/cli/azure/install-azure-cli-linux) | Setup of Deployer and during deployments | The firewall requirements for Azure CLI installation are defined here: [Installing Azure CLI](/cli/azure/azure-cli-endpoints) | > | PIP | 'bootstrap.pypa.io' | Setup of Deployer | See [Installing Ansible](https://docs.ansible.com/ansible/latest/installation_guide/intro_installation.html) | > | Ansible | 'pypi.org', 'pythonhosted.org', 'galaxy.ansible.com' | Setup of Deployer | |
-> | PowerShell Gallery | 'onegetcdn.azureedge.net', 'psg-prod-centralus.azureedge.net', 'psg-prod-eastus.azureedge.net' | Setup of Windows based systems | See [PowerShell Gallery](/powershell/gallery/gallery/getting-started#network-access-to-the-powershell-gallery) |
+> | PowerShell Gallery | 'onegetcdn.azureedge.net', 'psg-prod-centralus.azureedge.net', 'psg-prod-eastus.azureedge.net' | Setup of Windows based systems | See [PowerShell Gallery](/powershell/gallery/getting-started#network-access-to-the-powershell-gallery) |
> | Windows components | 'download.visualstudio.microsoft.com', 'download.visualstudio.microsoft.com', 'download.visualstudio.com' | Setup of Windows based systems | See [Visual Studio components](/visualstudio/install/install-and-use-visual-studio-behind-a-firewall-or-proxy-server#install-visual-studio) | > | SAP Downloads | 'softwaredownloads.sap.com' | SAP Software download | See [SAP Downloads](https://launchpad.support.sap.com/#/softwarecenter) | > | Azure DevOps Agent | 'https://vstsagentpackage.azureedge.net' | Setup Azure DevOps | | ## DevOps structure
-The deployment framework uses three separate repositories for the deployment artifacts. For your own parameter files, it's a best practice to keep these files in a source control repository that you manage.
+The deployment framework uses three separate repositories for the deployment artifacts. For your own parameter files, it's a best practice to keep these files in a source control repository that you manage.
### Main repository
-This repository contains the Terraform parameter files and the files needed for the Ansible playbooks for all the workload zone and system deployments.
+This repository contains the Terraform parameter files and the files needed for the Ansible playbooks for all the workload zone and system deployments.
You can create this repository by cloning the [SAP on Azure Deployment Automation Framework bootstrap repository](https://github.com/Azure/sap-automation-bootstrap/) into your source control repository.
Before you configure the SAP system, consider the following questions:
* How many database servers do you need? * Does your scenario require high availability? * How many application servers do you need?
-* How many web dispatchers do you need, if any?
+* How many web dispatchers do you need, if any?
* How many central services instances do you need? * What size virtual machine (VM) do you need? * Which VM image do you want to use? Is the image on Azure Marketplace or custom?
When planning a deployment, it's important to consider the overall flow. There a
## Naming conventions
-The automation framework uses a default naming convention. If you'd like to use a custom naming convention, plan and define your custom names before deployment. For more information, see [how to configure the naming convention](naming-module.md).
+The automation framework uses a default naming convention. If you'd like to use a custom naming convention, plan and define your custom names before deployment. For more information, see [how to configure the naming convention](naming-module.md).
## Disk sizing
sentinel Workspace Manager https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/workspace-manager.md
+
+ Title: Manage multiple Microsoft Sentinel workspaces with workspace manager
+description: Learn how to centrally manage multiple Microsoft Sentinel workspaces within one or more Azure tenants with workspace manager. This article takes you through provisioning and usage of Workspace Manager to help you gain operational efficiency and operate at scale.
+++ Last updated : 04/24/2023+++
+# Centrally manage multiple Microsoft Sentinel workspaces with workspace manager
+
+Learn how to centrally manage multiple Microsoft Sentinel workspaces within one or more Azure tenants with workspace manager. This article takes you through provisioning and usage of workspace manager. Whether you're a global enterprise or a Managed Security Services Provider (MSSP), workspace manager helps you operate at scale efficiently.
+
+Here are the active content types supported with workspace manager:
+- Analytics rules
+- Automation rules (excluding Playbooks)
+- Parsers, Saved Searches and Functions
+- Hunting and Livestream queries
+- Workbooks
+
+## Prerequisites
+
+- You need at least two Microsoft Sentinel workspaces. One workspace to manage from and at least one other workspace to be managed.
+- The [Microsoft Sentinel Contributor role assignment](/azure/role-based-access-control/built-in-roles#microsoft-sentinel-contributor) is required on the central workspace (where workspace manager is enabled), and on the member workspace(s) the contributor needs to manage. To learn more about roles in Microsoft Sentinel, see [Roles and permissions in Microsoft Sentinel](roles.md).
+- Enable Azure Lighthouse if you're managing workspaces across multiple Azure AD tenants. To learn more, see [Manage Microsoft Sentinel workspaces at scale](/azure/lighthouse/how-to/manage-sentinel-workspaces).
++
+## Considerations
+Configure a central workspace to be the environment where you consolidate content items and configurations to be published at scale to member workspaces. Create a new Microsoft Sentinel workspace or utilize an existing one to serve as the central workspace.
+
+Depending on your scenario, consider these architectures:
+- **Direct-link** is the least complex setup. Control all member workspaces with only one central workspace.
+- **Co-Management** supports scenarios where more than one central workspace needs to manage a member workspace. For example, workspaces simultaneously managed by an in-house SOC team and an MSSP.
+- **N-Tier** supports complex scenarios where a central workspace controls another central workspace. For example, a conglomerate that manages multiple subsidiaries, where each subsidiary also manages multiple workspaces.
++
+## Enable workspace manager on the central workspace
+Once you've decided which Microsoft Sentinel workspace should serve as the central workspace, enable workspace manager on it.
+
+1. Navigate to the **Settings** blade in the central (parent) workspace, and toggle **On** the workspace manager configuration setting **Make this workspace a parent**.
+1. Once enabled, a new menu **Workspace manager (preview)** appears under **Configuration**.
+
+ :::image type="content" source="media/workspace-manager/enable-workspace-manager-on.png" alt-text="Screenshot shows the workspace manager configuration settings. The menu item added for workspace manager is highlighted and the toggle button on.":::
+
+## Onboard member workspaces
+Member workspaces are the set of workspaces managed by workspace manager. Onboard some or all of the workspaces in the tenant, or across multiple tenants if Azure Lighthouse is enabled.
+1. Navigate to workspace manager and select **Add workspaces**.
+ :::image type="content" source="media/workspace-manager/add-workspace.png" alt-text="Screenshot shows the add workspace menu." lightbox="media/workspace-manager/add-workspace.png":::
+1. Select the member workspace(s) you would like to onboard to workspace manager.
+ :::image type="content" source="media/workspace-manager/add-workspace-select.png" alt-text="Screenshot shows the add workspace selection menu.":::
+1. Once successfully onboarded, the **Members** count increases and your member workspaces are reflected in the **Workspaces** tab.
+ :::image type="content" source="media/workspace-manager/add-workspace-selected.png" alt-text="Screenshot shows the added workspaces and the Members count incremented to 2.":::
+
+## Create a group
+
+Workspace manager groups allow you to organize workspaces together based on business groups, verticals, geography, and so on. Use groups to pair content items relevant to the workspaces.
+
+> [!TIP]
+> Make sure you have at least one active content item deployed in the central workspace. This allows you to select content items from the central workspace to be published in the member workspace(s) in the subsequent steps.
+>
+
+1. To create a group:
+ - To add one workspace, select **Add** > **Group**.
+ - To add multiple workspaces, select the workspaces and **Add** > **Group from selected**.
+ :::image type="content" source="media/workspace-manager/add-group.png" alt-text="Screenshot shows the add group menu.":::
+
+1. On the **Create or update group** page, enter a **Name** and **Description** for the group.
+ :::image type="content" source="media/workspace-manager/add-group-name.png" alt-text="Screenshot shows the group create or update configuration page.":::
+
+1. In the **Select workspaces** tab, select **Add** and select the member workspaces that you would like to add to the group.
+1. In the **Select content** tab, you have two ways to add content items.
+ - Method 1: Select the **Add** menu and choose **All content**. All active content currently deployed in the central workspace is added. This list is a point-in-time snapshot that selects only active content, not templates.
+ - Method 2: Select the **Add** menu and choose **Content**. A **Select content** window opens to custom select the content added.
+ :::image type="content" source="media/workspace-manager/add-group-content.png" alt-text="Screenshot shows the group content selection.":::
+
+1. Filter the content as needed before you select **Review + create**.
+1. Once created, the **Group count** increases and your groups are reflected in the **Groups** tab.
+
+## Publish the group definition
+At this point, the content items selected haven't been published to the member workspace(s) yet.
+
+1. Select the group > **Publish content**.
+
+ :::image type="content" source="media/workspace-manager/publish-group.png" alt-text="Screenshot shows the group publish window.":::
+
+ To bulk publish, multi-select the desired groups and select **Publish**.
+ :::image type="content" source="media/workspace-manager/publish-groups.png" alt-text="Screenshot shows the multi-select group publishing window.":::
+
+1. The **Last publish status** column updates to reflect **In progress**.
+ :::image type="content" source="media/workspace-manager/publish-groups-in-progress.png" alt-text="Screenshot shows the multi group publishing progress column.":::
+
+1. If successful, the **Last publish status** updates to reflect **Succeeded**. The selected content items now exist in the member workspaces.
+ :::image type="content" source="media/workspace-manager/publish-groups-success.png" alt-text="Screenshot shows the last published column with entries that succeeded.":::
+
+ If even one content item in the group fails to publish, the **Last publish status** updates to reflect **Failed**.
++
+### Troubleshooting
+Each publish attempt has a link to help with troubleshooting if content items fail to publish.
+
+1. Select the **Failed** hyperlink to open the job failure details window. A status for each content item and target workspace pair is displayed.
+1. Filter the **Status** for failed item pairs.
+
+ :::image type="content" source="media/workspace-manager/publish-groups-job-details-failure.png" alt-text="Screenshot shows the job details of a group publishing failure event." lightbox="media/workspace-manager/publish-groups-job-details-failure.png":::
+
+Common reasons for failure include:
+- Content items referenced in the group definition no longer exist at the time of publish (have been deleted).
+- Permissions have changed at the time of publish. For example, the user is no longer a Microsoft Sentinel Contributor or lacks sufficient permissions on the member workspace.
+- A member workspace has been deleted.
+
+### Known limitations
+- Playbooks attributed or attached to analytics and automation rules aren't currently supported.
+- Workbooks stored in bring-your-own-storage aren't currently supported.
+- Workspace manager only manages content items published from the central workspace. It doesn't manage content created locally from member workspace(s).
+- Currently, deleting content residing in member workspace(s) centrally via workspace manager isn't supported.
+
+### API references
+- [Workspace Manager Assignment Jobs](/rest/api/securityinsights/preview/workspace-manager-assignment-jobs)
+- [Workspace Manager Assignments](/rest/api/securityinsights/preview/workspace-manager-assignments)
+- [Workspace Manager Configurations](/rest/api/securityinsights/preview/workspace-manager-configurations)
+- [Workspace Manager Groups](/rest/api/securityinsights/preview/workspace-manager-groups)
+- [Workspace Manager Members](/rest/api/securityinsights/preview/workspace-manager-members)
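For illustration only, the preview APIs listed above can be called with `az rest`. The resource path and `api-version` in this sketch are assumptions taken from the linked preview references; verify them there before use:

```azurecli
# Sketch: list workspace manager configurations on the central workspace.
# Subscription, resource group, and workspace names are placeholders, and the
# provider path and api-version are assumptions to confirm against the reference.
subscriptionId="<subscription-id>"
resourceGroup="<resource-group-name>"
workspaceName="<central-workspace-name>"

az rest --method get \
  --url "https://management.azure.com/subscriptions/${subscriptionId}/resourceGroups/${resourceGroup}/providers/Microsoft.OperationalInsights/workspaces/${workspaceName}/providers/Microsoft.SecurityInsights/workspaceManagerConfigurations?api-version=2023-02-01-preview"
```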
+
+## Next steps
+- [Manage multiple tenants in Microsoft Sentinel as an MSSP](multiple-tenants-service-providers.md)
+- [Work with Microsoft Sentinel incidents in many workspaces at once](multiple-workspace-view.md)
+- [Protecting MSSP intellectual property in Microsoft Sentinel](mssp-protect-intellectual-property.md)
storage Classic Account Migrate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/classic-account-migrate.md
To migrate a classic storage account to the Azure Resource Manager deployment mo
> > To manage Azure Resource Manager resources, we recommend that you use the Az PowerShell module. The Az module replaces the deprecated AzureRM module. For more information about moving from the AzureRM module to the Az module, see [Migrate Azure PowerShell scripts from AzureRM to Az](/powershell/azure/migrate-from-azurerm-to-az).
-First, install PowerShellGet if you don't already have it installed. For more information on how to install PowerShellGet, see [Installing PowerShellGet](/powershell/scripting/gallery/installing-psget#installing-the-latest-version-of-powershellget). After you install PowerShellGet, close and reopen the PowerShell console.
+First, install PowerShellGet if you don't already have it installed. For more information on how to install PowerShellGet, see [Installing PowerShellGet](/powershell/gallery/powershellget/install-powershellget). After you install PowerShellGet, close and reopen the PowerShell console.
Next, install the Azure Service Management module. If you also have the AzureRM module installed, you'll need to include the `-AllowClobber` parameter, as described in [Step 2: Install Azure PowerShell](/powershell/azure/servicemanagement/install-azure-ps#step-2-install-azure-powershell). After the installation is complete, import the Azure Service Management module.
storage Videos Azure Files And File Sync https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/videos-azure-files-and-file-sync.md
Title: Azure Files and File Sync videos
-description: Identify and troubleshoot performance issues in Azure Storage accounts.
+description: View a comprehensive list of Azure Files and Azure File Sync video content released over time.
-+ Last updated 04/19/2023 -- # Azure Files and Azure File Sync videos
-If you're new to Azure Files and File Sync or looking to deepen your understanding, this article provides a comprehensive list of video content released over time. Note that some videos may lack the latest updates.
+If you're new to Azure Files and File Sync or looking to deepen your understanding, this article provides a comprehensive list of video content released over time. Some videos might not reflect the latest updates.
+
+## Video list
+
+ :::column:::
+ <iframe width="560" height="315" src="https://www.youtube.com/embed/jd49W33DxkQ" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
+ :::column-end:::
+ :::column:::
+ **Domain join Azure file share with on-premises Active Directory and replace your file server with Azure file share**
+ :::column-end:::
+
+ :::column:::
+ <iframe width="560" height="315" src="https://www.youtube.com/embed/bmRZi9iGsK0" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
+ :::column-end:::
+ :::column:::
+ **Mount an Azure file share in Windows**
+ :::column-end:::
+
+ :::column:::
+ <iframe width="560" height="315" src="https://www.youtube.com/embed/44qVRZg-bMA?list=PLEq-KSMM-P-0jRrVF5peNCA0GbBZrOhE1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
+ :::column-end:::
+ :::column:::
+ **NFS 4.1 for Azure file shares**
+ :::column-end:::
+
+ :::column:::
+ <iframe width="560" height="315" src="https://www.youtube.com/embed/V43p6qIhFkc?list=PLEq-KSMM-P-0jRrVF5peNCA0GbBZrOhE1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
+ :::column-end:::
+ :::column:::
+ **How to set up Azure File Sync**
+ :::column-end:::
-### Video list:
-- [Domain join Azure File Share with On-Premise Active Directory & replace your file server with Azure File Share.](https://www.youtube.com/watch?v=jd49W33DxkQ)-- [How to mount Azure File Share in Windows?](https://www.youtube.com/watch?v=bmRZi9iGsK0)-- [NFS 4.1 for Azure File Shares](https://www.youtube.com/watch?v=44qVRZg-bMA&list=PLEq-KSMM-P-0jRrVF5peNCA0GbBZrOhE1&index=10)-- [How to setup Azure File Sync?](https://www.youtube.com/watch?v=V43p6qIhFkc&list=PLEq-KSMM-P-0jRrVF5peNCA0GbBZrOhE1&index=13)-- [Integrating HPC Pack with Azure Files](https://www.youtube.com/watch?v=uStaB09y6TE)
+ :::column:::
+ <iframe width="560" height="315" src="https://www.youtube.com/embed/uStaB09y6TE" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
+ :::column-end:::
+ :::column:::
+ **Integrating HPC Pack with Azure Files**
+ :::column-end:::
virtual-desktop Fslogix Profile Container Configure Azure Files Active Directory https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/fslogix-profile-container-configure-azure-files-active-directory.md
To set up a storage account:
- If you select Premium performance, set the **Premium account type** to **File shares**. - For **Redundancy**, select **Locally-redundant storage (LRS)** as a minimum. - The defaults on the remaining tabs don't need to be changed.
-
+ > [!TIP] > Your organization may have requirements to change these defaults: >
- > - Whether you should select **Premium** depends on your IOPS and latency requirements. For more information, see [Storage options for FSLogix Profile Containers in Azure Virtual Desktop](store-fslogix-profile.md).
+ > - Whether you should select **Premium** depends on your IOPS and latency requirements. For more information, see [Storage options for FSLogix Profile Containers in Azure Virtual Desktop](store-fslogix-profile.md).
> - On the **Advanced** tab, **Enable storage account key access** must be left enabled. > - For more information on the remaining configuration options, see [Planning for an Azure Files deployment](../storage/files/storage-files-planning.md).
To use Active Directory accounts for the share permissions of your file share, y
``` > [!IMPORTANT]
- > This module requires requires the [PowerShell Gallery](/powershell/scripting/gallery/overview) and [Azure PowerShell](/powershell/azure/what-is-azure-powershell). You may be prompted to install these if they are not already installed or they need updating. If you are prompted for these, install them, then close all instances of PowerShell. Re-open an elevated PowerShell prompt and import the `AzFilesHybrid` module again before continuing.
+ > This module requires the [PowerShell Gallery](/powershell/gallery/overview) and [Azure PowerShell](/powershell/azure/what-is-azure-powershell). You may be prompted to install these if they are not already installed or they need updating. If you are prompted for these, install them, then close all instances of PowerShell. Re-open an elevated PowerShell prompt and import the `AzFilesHybrid` module again before continuing.
1. Sign in to Azure by running the command below. You will need to use an account that has one of the following role-based access control (RBAC) roles:
To use Active Directory accounts for the share permissions of your file share, y
> If your Azure account has access to multiple tenants and/or subscriptions, you will need to select the correct subscription by setting your context. For more information, see [Azure PowerShell context objects](/powershell/azure/context-persistence) 1. Join the storage account to your domain by running the commands below, replacing the values for `$subscriptionId`, `$resourceGroupName`, and `$storageAccountName` with your values. You can also add the parameter `-OrganizationalUnitDistinguishedName` to specify an Organizational Unit (OU) in which to place the computer account.
-
+ ```powershell $subscriptionId = "subscription-id" $resourceGroupName = "resource-group-name"
To configure Profile Container on your session host VMs:
You have now finished setting up Profile Container. If you are installing Profile Container in your custom image, you will need to finish creating the custom image. For more information, follow the steps in [Create a custom image in Azure](set-up-golden-image.md) from the section [Take the final snapshot](set-up-golden-image.md#take-the-final-snapshot) onwards.
-## Validate profile creation
+## Validate profile creation
-Once you've installed and configured Profile Container, you can test your deployment by signing in with a user account that's been assigned an application group or desktop on the host pool.
+Once you've installed and configured Profile Container, you can test your deployment by signing in with a user account that's been assigned an application group or desktop on the host pool.
If the user has signed in before, they'll have an existing local profile that they'll use during this session. Either delete the local profile first, or create a new user account to use for tests.
virtual-desktop Set Up Customize Master Image https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/set-up-customize-master-image.md
Title: Prepare and customize a VHD image of Azure Virtual Desktop - Azure
description: How to prepare, customize and upload a Azure Virtual Desktop image to Azure. Previously updated : 06/01/2022 Last updated : 04/21/2023
This article tells you how to prepare a master virtual hard disk (VHD) image for upload to Azure, including how to create virtual machines (VMs) and install software on them. These instructions are for an Azure Virtual Desktop-specific configuration that can be used with your organization's existing processes. >[!IMPORTANT]
->We recommend you use an image from the Azure Image Gallery. However, if you do need to use a customized image, make sure you don't already have the Azure Virtual Desktop Agent installed on your VM. Using a customized image with the Azure Virtual Desktop Agent can cause problems with the image, such as blocking registration as the host pool registration token will have expired which will prevent user session connections.
+>We recommend you use an image from the Azure Compute Gallery or the Azure portal. However, if you do need to use a customized image, make sure you don't already have the Azure Virtual Desktop Agent installed on your VM. If you do, either follow the instructions in [Step 1: Uninstall all agent, boot loader, and stack component programs](troubleshoot-agent.md#step-1-uninstall-all-agent-boot-loader-and-stack-component-programs) to uninstall the Agent and all related components from your VM or create a new image from a VM with the Agent uninstalled. Using a customized image with the Azure Virtual Desktop Agent can cause problems with the image, such as blocking registration because the host pool registration token will have expired, which prevents user session connections.
## Create a VM
-Windows 10 Enterprise multi-session is available in the Azure Image Gallery. There are two options for customizing this image.
+Windows 10 Enterprise multi-session is available in the Azure Compute Gallery or the Azure portal. There are two options for customizing this image.
The first option is to provision a virtual machine (VM) in Azure by following the instructions in [Create a VM from a managed image](../virtual-machines/windows/create-vm-generalized-managed.md), and then skip ahead to [Software preparation and installation](set-up-customize-master-image.md#software-preparation-and-installation).
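If you prefer the command line, here's a hedged Azure CLI sketch for finding a multi-session image in the Azure Marketplace and creating a VM from it. The publisher and offer values shown are assumptions to confirm with the list commands, and all resource names are placeholders:

```azurecli
# Sketch: discover Windows Enterprise multi-session image SKUs, then create a VM.
# "MicrosoftWindowsDesktop" and "Windows-10" are assumed publisher/offer values;
# confirm them and pick a SKU from the output before creating the VM.
az vm image list-offers \
  --location eastus \
  --publisher MicrosoftWindowsDesktop \
  --output table

az vm image list-skus \
  --location eastus \
  --publisher MicrosoftWindowsDesktop \
  --offer Windows-10 \
  --output table

# Replace <publisher:offer:sku:version> with the image URN you selected.
az vm create \
  --resource-group MyResourceGroup \
  --name avd-image-vm \
  --image "<publisher:offer:sku:version>" \
  --admin-username azureuser
```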
virtual-desktop Whats New Client Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/whats-new-client-windows.md
description: Learn about recent changes to the Remote Desktop client for Windows
Previously updated : 04/11/2023 Last updated : 04/21/2023 # What's new in the Remote Desktop client for Windows
The following table lists the current versions available for the public and Insi
| Release | Latest version | Download | ||-|-| | Public | 1.2.4157 | [Windows 64-bit](https://go.microsoft.com/fwlink/?linkid=2139369) *(most common)*<br />[Windows 32-bit](https://go.microsoft.com/fwlink/?linkid=2139456)<br />[Windows ARM64](https://go.microsoft.com/fwlink/?linkid=2139370) |
-| Insider | 1.2.4155 | [Windows 64-bit](https://go.microsoft.com/fwlink/?linkid=2139233) *(most common)*<br />[Windows 32-bit](https://go.microsoft.com/fwlink/?linkid=2139144)<br />[Windows ARM64](https://go.microsoft.com/fwlink/?linkid=2139368) |
+| Insider | 1.2.4157 | [Windows 64-bit](https://go.microsoft.com/fwlink/?linkid=2139233) *(most common)*<br />[Windows 32-bit](https://go.microsoft.com/fwlink/?linkid=2139144)<br />[Windows ARM64](https://go.microsoft.com/fwlink/?linkid=2139368) |
## Updates for version 1.2.4157
virtual-machines Disk Encryption https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/disk-encryption.md
The following sections describe each of the options for key management in greate
### Platform-managed keys
-By default, managed disks use platform-managed encryption keys. All managed disks, snapshots, images, and data written to existing managed disks are automatically encrypted-at-rest with platform-managed keys.
+By default, managed disks use platform-managed encryption keys. All managed disks, snapshots, images, and data written to existing managed disks are automatically encrypted-at-rest with platform-managed keys. Platform-managed keys are managed by Microsoft.
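As a quick check (resource names are placeholders), you can confirm which encryption type an existing managed disk uses with the Azure CLI; disks without a customer-managed key configuration typically report `EncryptionAtRestWithPlatformKey`:

```azurecli
# Inspect the encryption setting on an existing managed disk.
# "MyResourceGroup" and "MyDataDisk" are placeholder names.
az disk show \
  --resource-group MyResourceGroup \
  --name MyDataDisk \
  --query "encryption.type" \
  --output tsv
```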
### Customer-managed keys
virtual-machines Disks Copy Incremental Snapshot Across Regions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/disks-copy-incremental-snapshot-across-regions.md
description: Learn how to copy an incremental snapshot of a managed disk to a di
Previously updated : 01/25/2023 Last updated : 04/10/2023
This article covers copying an incremental snapshot from one region to another.
- You can copy 100 incremental snapshots in parallel per subscription per region. - If you use the REST API, you must use version 2020-12-01 or newer of the Azure Compute REST API.
+- You can only copy one incremental snapshot of a particular disk at a time.
+- Snapshots must be copied in the order they were created.
## Managed copy
targetRegion=<validRegion>
sourceSnapshotId=$(az snapshot show -n $sourceSnapshotName -g $resourceGroupName --query [id] -o tsv) az snapshot create -g $resourceGroupName -n $targetSnapshotName -l $targetRegion --source $sourceSnapshotId --incremental --copy-start
+```
+
+### Check copy status
+
+You can check the status of an individual snapshot by checking the `CompletionPercent` property. Replace `$sourceSnapshotName` with the name of your snapshot, then run the following command. The value of the property must be 100 before you can use the snapshot for restoring a disk or generating a SAS URI for downloading the underlying data.
+```azurecli
az snapshot show -n $sourceSnapshotName -g $resourceGroupName --query [completionPercent] -o tsv ```
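If you want a script to wait for the cross-region copy to finish, here's a small bash sketch that polls the snapshot created with `--copy-start` (it reuses the `$resourceGroupName` and `$targetSnapshotName` variables from the create step above):

```azurecli
# Sketch: poll the target snapshot until the copy reaches 100 percent.
while true; do
  percent=$(az snapshot show -n $targetSnapshotName -g $resourceGroupName \
    --query completionPercent -o tsv)
  echo "Copy progress: ${percent}%"
  # completionPercent can be returned as a decimal (for example, 99.5),
  # so strip any fractional part before comparing.
  [ "${percent%.*}" -ge 100 ] && break
  sleep 30
done
```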
$sourceSnapshot=Get-AzSnapshot -ResourceGroupName $resourceGroupName -SnapshotNa
$snapshotconfig = New-AzSnapshotConfig -Location $targetRegion -CreateOption CopyStart -Incremental -SourceResourceId $sourceSnapshot.Id New-AzSnapshot -ResourceGroupName $resourceGroupName -SnapshotName $targetSnapshotName -Snapshot $snapshotconfig
+```
+
+### Check copy status
-$targetSnapshot=Get-AzSnapshot -ResourceGroupName $resourceGroupName -SnapshotName $targetSnapshotName
+You can check the `CompletionPercent` property of an individual snapshot to get its status. Replace `yourResourceGroupNameHere` and `yourSnapshotName`, then run the script. The value of the property must be 100 before you can use the snapshot for restoring a disk or generating a SAS URI for downloading the underlying data.
+
+```azurepowershell
+$resourceGroupName = "yourResourceGroupNameHere"
+$snapshotName = "yourSnapshotName"
+
+$targetSnapshot=Get-AzSnapshot -ResourceGroupName $resourceGroupName -SnapshotName $snapshotName
$targetSnapshot.CompletionPercent ``` + # [Portal](#tab/azure-portal) You can also copy an incremental snapshot across regions in the Azure portal. However, you must use this specific link to access the portal, for now: https://aka.ms/incrementalsnapshot
Incremental snapshots offer a differential capability. They enable you to get th
## Next steps If you'd like to see sample code demonstrating the differential capability of incremental snapshots, using .NET, see [Copy Azure Managed Disks backups to another region with differential capability of incremental snapshots](https://github.com/Azure-Samples/managed-disks-dotnet-backup-with-incremental-snapshots).+
+If you have additional questions on snapshots, see the [snapshots](faq-for-disks.yml#snapshots) section of the FAQ.
virtual-machines Disks Enable Private Links For Import Export Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/disks-enable-private-links-for-import-export-portal.md
description: Enable Private Link for your managed disks with Azure portal. This
Previously updated : 09/03/2021 Last updated : 03/31/2023
You've now configured a private link that you can use to import and export your
- Upload a VHD to Azure or copy a managed disk to another region - [Azure CLI](linux/disks-upload-vhd-to-managed-disk-cli.md) or [Azure PowerShell module](windows/disks-upload-vhd-to-managed-disk-powershell.md) - Download a VHD - [Windows](windows/download-vhd.md) or [Linux](linux/download-vhd.md)-- [FAQ for private links and managed disks](./faq-for-disks.yml)
+- [FAQ for private links and managed disks](./faq-for-disks.yml#private-links-for-managed-disks)
- [Export/Copy managed snapshots as VHD to a storage account in different region with PowerShell](/previous-versions/azure/virtual-machines/scripts/virtual-machines-powershell-sample-copy-snapshot-to-storage-account)
virtual-machines Disks Enable Ultra Ssd https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/disks-enable-ultra-ssd.md
Update-AzDisk -ResourceGroupName $resourceGroup -DiskName $diskName -DiskUpdate
- [Use Azure ultra disks on Azure Kubernetes Service (preview)](../aks/use-ultra-disks.md). - [Migrate log disk to an ultra disk](/azure/azure-sql/virtual-machines/windows/storage-migrate-to-ultradisk).
+- For additional questions on Ultra Disks, see the [Ultra Disks](faq-for-disks.yml#ultra-disks) section of the FAQ.
virtual-machines Disks Incremental Snapshots https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/disks-incremental-snapshots.md
description: Learn about incremental snapshots for managed disks, including how
Previously updated : 02/22/2023 Last updated : 03/31/2023
az snapshot show -g resourcegroupname -n snapshotname --query [creationData.logi
See [Copy an incremental snapshot to a new region](disks-copy-incremental-snapshot-across-regions.md) to learn how to copy an incremental snapshot across regions.
+If you have additional questions on snapshots, see the [snapshots](faq-for-disks.yml#snapshots) section of the FAQ.
+ If you'd like to see sample code demonstrating the differential capability of incremental snapshots, using .NET, see [Copy Azure Managed Disks backups to another region with differential capability of incremental snapshots](https://github.com/Azure-Samples/managed-disks-dotnet-backup-with-incremental-snapshots).
virtual-machines Disks Reserved Capacity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/disks-reserved-capacity.md
You can cancel, exchange, or refund reservations within certain limitations. For
## Expiration of a reservation
-When a reservation expires, any Azure Disk Storage capacity that you use under that reservation is billed at the pay-as-you-go rate. Reservations don't renew automatically.
+When a reservation expires, any Azure Disk Storage capacity that you use under that reservation is billed at the [pay-as-you-go rate](https://azure.microsoft.com/pricing/details/managed-disks/). Reservations don't renew automatically.
You'll receive an email notification 30 days before the expiration of the reservation and again on the expiration date. To continue taking advantage of the cost savings that a reservation provides, renew it no later than the expiration date.
virtual-machines Disks Shared Enable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/disks-shared-enable.md
If you prefer to use Azure Resource Manager templates to deploy your disk, the f
- [Premium SSD](https://aka.ms/SharedPremiumDiskARMtemplate) - [Regional ultra disks](https://aka.ms/SharedUltraDiskARMtemplateRegional) - [Zonal ultra disks](https://aka.ms/SharedUltraDiskARMtemplateZonal)+
+If you have additional questions, see the [shared disks](faq-for-disks.yml#azure-shared-disks) section of the FAQ.
virtual-machines Disks Shared https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/disks-shared.md
Both shared Ultra Disks and shared Premium SSD v2 managed disks are priced based
## Next steps If you're interested in enabling and using shared disks for your managed disks, proceed to our article [Enable shared disk](disks-shared-enable.md)+
+If you have additional questions, see the [shared disks](faq-for-disks.yml#azure-shared-disks) section of the FAQ.
virtual-machines Disks Export Import Private Links Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/disks-export-import-private-links-cli.md
description: Enable Private Links for your managed disks with Azure CLI. Allowin
Previously updated : 07/15/2021 Last updated : 03/31/2023
az snapshot create -n $snapshotNameSecuredWithPL \
- Upload a VHD to Azure or copy a managed disk to another region - [Azure CLI](disks-upload-vhd-to-managed-disk-cli.md) or [Azure PowerShell module](../windows/disks-upload-vhd-to-managed-disk-powershell.md) - Download a VHD - [Windows](../windows/download-vhd.md) or [Linux](download-vhd.md)-- [FAQ on Private Links](../faq-for-disks.yml)
+- [FAQ on Private Links](../faq-for-disks.yml#private-links-for-managed-disks)
- [Export/Copy managed snapshots as VHD to a storage account in different region with CLI](/previous-versions/azure/virtual-machines/scripts/virtual-machines-cli-sample-copy-managed-disks-vhd)
virtual-machines Disks Upload Vhd To Managed Disk Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/disks-upload-vhd-to-managed-disk-cli.md
az disk revoke-access -n $targetDiskName -g $targetRG
## Next steps Now that you've successfully uploaded a VHD to a managed disk, you can attach the disk as a [data disk to an existing VM](add-disk.md) or [attach the disk to a VM as an OS disk](upload-vhd.md#create-the-vm), to create a new VM.+
+If you have additional questions, see the [uploading a managed disk](../faq-for-disks.yml#uploading-to-a-managed-disk) section in the FAQ.
virtual-machines Disks Upload Vhd To Managed Disk Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/disks-upload-vhd-to-managed-disk-powershell.md
Title: Upload a VHD to Azure or copy a disk across regions - Azure PowerShell
description: Learn how to upload a VHD to an Azure managed disk and copy a managed disk across regions, using Azure PowerShell, via direct upload. Previously updated : 01/03/2023 Last updated : 03/31/2023 linux
Revoke-AzDiskAccess -ResourceGroupName $targetRG -DiskName $targetDiskName
Now that you've successfully uploaded a VHD to a managed disk, you can attach your disk to a VM and begin using it. To learn how to attach a data disk to a VM, see our article on the subject: [Attach a data disk to a Windows VM with PowerShell](attach-disk-ps.md). To use the disk as the OS disk, see [Create a Windows VM from a specialized disk](create-vm-specialized.md#create-the-new-vm).+
+If you have additional questions, see the section on [uploading a managed disk](../faq-for-disks.yml#uploading-to-a-managed-disk) in the FAQ.
virtual-network Troubleshoot Nat And Azure Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/nat-gateway/troubleshoot-nat-and-azure-services.md
Update your idle timeout timer configuration on your User-Assigned NAT gateway w
## Azure Firewall
-### How NAT gateway integration with Azure Firewall works
+### SNAT exhaustion when connecting outbound with Azure Firewall
-Azure Firewall can provide outbound connectivity to the internet from virtual networks. Azure Firewall provides only 2,496 SNAT ports per public IP address. While Azure Firewall can be associated with up to 250 public IP addresses to handle egress traffic, often, customers require much fewer public IP addresses for connecting outbound due to various architectural requirements and limitations by destination endpoints for the number of public IP addresses they can allowlist. One method by which to get around this allowlist IP limitation and to also reduce the risk of SNAT port exhaustion is to use NAT gateway in the same subnet with Azure Firewall. To learn how to set up NAT gateway in an Azure Firewall subnet, see [Scale SNAT ports with Azure NAT Gateway](../../firewall/integrate-with-nat-gateway.md).
+Azure Firewall can provide outbound connectivity to the internet from virtual networks. Azure Firewall provides only 2,496 SNAT ports per public IP address. While Azure Firewall can be associated with up to 250 public IP addresses to handle egress traffic, users may need far fewer public IP addresses for connecting outbound. This need can stem from architectural requirements and from limits on how many public IP addresses destination endpoints can allowlist.
+
+One way to provide greater scalability for outbound traffic, and to reduce the risk of SNAT port exhaustion, is to use NAT gateway in the same subnet as Azure Firewall. To set up NAT gateway in an Azure Firewall subnet, see [Integrate NAT gateway with Azure Firewall](/azure/virtual-network/nat-gateway/tutorial-hub-spoke-nat-firewall). To learn more about how NAT gateway works with Azure Firewall, see [Scale SNAT ports with Azure NAT Gateway](../../firewall/integrate-with-nat-gateway.md).
+
+> [!NOTE]
+> NAT gateway is not supported in a vWAN architecture. NAT gateway cannot be configured to an Azure Firewall subnet in a vWAN hub.
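For a non-vWAN hub, here's a minimal Azure CLI sketch of associating an existing NAT gateway with the firewall's subnet; all resource names are placeholders, and the virtual network, `AzureFirewallSubnet`, and NAT gateway are assumed to already exist:

```azurecli
# Sketch: attach an existing NAT gateway to the AzureFirewallSubnet so outbound
# traffic from Azure Firewall uses the NAT gateway's public IPs for SNAT.
# "MyResourceGroup", "MyVNet", and "MyNatGateway" are placeholder names.
az network vnet subnet update \
  --resource-group MyResourceGroup \
  --vnet-name MyVNet \
  --name AzureFirewallSubnet \
  --nat-gateway MyNatGateway
```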
## Azure Databricks