Updates from: 04/19/2023 01:10:51
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Whats New Docs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/whats-new-docs.md
Welcome to what's new in Azure Active Directory B2C documentation. This article lists new docs that have been added and those that have had significant updates in the last three months. To learn what's new with the B2C service, see [What's new in Azure Active Directory](../active-directory/fundamentals/whats-new.md) and [Azure AD B2C developer release notes](custom-policy-developer-notes.md).
+## March 2023
+
+### Updated articles
+
+- [Configure SAML identity provider options with Azure Active Directory B2C](identity-provider-generic-saml-options.md)
+- [Tutorial: Configure BioCatch with Azure Active Directory B2C](partner-biocatch.md)
+- [Tutorial: Configure Nok Nok Passport with Azure Active Directory B2C for passwordless FIDO2 authentication](partner-nok-nok.md)
+- [Pass an identity provider access token to your application in Azure Active Directory B2C](idp-pass-through-user-flow.md)
+- [Tutorial: Configure Haventec Authenticate with Azure Active Directory B2C for single-step, multi-factor passwordless authentication](partner-haventec.md)
+- [Configure Trusona Authentication Cloud with Azure Active Directory B2C](partner-trusona.md)
+- [Tutorial: Configure IDEMIA Mobile ID with Azure Active Directory B2C](partner-idemia.md)
+- [Configure Azure Active Directory B2C with Bluink eID-Me for identity verification](partner-eid-me.md)
+- [Tutorial: Configure Azure Active Directory B2C with BlokSec for passwordless authentication](partner-bloksec.md)
+- [Tutorial: Configure Azure Active Directory B2C with Azure Web Application Firewall](partner-azure-web-application-firewall.md)
+- [Tutorial to configure Saviynt with Azure Active Directory B2C](partner-saviynt.md)
+- [Tutorial: Configure Keyless with Azure Active Directory B2C](partner-keyless.md)
+- [Tutorial: Configure security analytics for Azure Active Directory B2C data with Microsoft Sentinel](azure-sentinel.md)
+- [Configure authentication in a sample Python web app by using Azure AD B2C](configure-authentication-sample-python-web-app.md)
+- [Billing model for Azure Active Directory B2C](billing.md)
+- [Azure Active Directory B2C: Region availability & data residency](data-residency.md)
+- [Azure AD B2C: Frequently asked questions (FAQ)](faq.yml)
+- [Tutorial: Create an Azure Active Directory B2C tenant](tutorial-create-tenant.md)
+
## February 2023

### Updated articles
active-directory-domain-services Migrate From Classic Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/migrate-from-classic-vnet.md
Title: Migrate Azure AD Domain Services from a Classic virtual network | Microso
description: Learn how to migrate an existing Azure AD Domain Services managed domain from the Classic virtual network model to a Resource Manager-based virtual network.
+ Previously updated : 03/14/2023 Last updated : 04/17/2023
# Migrate Azure Active Directory Domain Services from the Classic virtual network model to Resource Manager
-Azure Active Directory Domain Services (Azure AD DS) supports a one-time move for customers currently using the Classic virtual network model to the Resource Manager virtual network model. Azure AD DS managed domains that use the Resource Manager deployment model provide additional features such as fine-grained password policy, audit logs, and account lockout protection.
+On April 1, 2023, Azure Active Directory Domain Services (Azure AD DS) shut down all IaaS virtual machines that host domain controller services for customers who use the Classic virtual network model. Azure AD Domain Services offers a best-effort offline migration solution for customers currently using the Classic virtual network model to the Resource Manager virtual network model. Azure AD DS managed domains that use the Resource Manager deployment model have more features, such as fine-grained password policy, audit logs, and account lockout protection.
-This article outlines considerations for migration, then the required steps to successfully migrate an existing managed domain. For some of the benefits, see [Benefits of migration from the Classic to Resource Manager deployment model in Azure AD DS][migration-benefits].
+This article outlines considerations for migration, followed by the required steps to successfully migrate an existing managed domain. For some of the benefits, see [Benefits of migration from the Classic to Resource Manager deployment model in Azure AD DS][migration-benefits].
> [!NOTE]
> In 2017, Azure AD Domain Services became available to host in an Azure Resource Manager network. Since then, we have been able to build a more secure service using the Azure Resource Manager's modern capabilities. Because Azure Resource Manager deployments fully replace classic deployments, Azure AD DS classic virtual network deployments will be retired on March 1, 2023.
This article outlines considerations for migration, then the required steps to s
## Overview of the migration process
-The migration process takes an existing managed domain that runs in a Classic virtual network and moves it to an existing Resource Manager virtual network. The migration is performed using PowerShell, and has two main stages of execution: *preparation* and *migration*.
-
-![Overview of the migration process for Azure AD DS](media/migrate-from-classic-vnet/migration-overview.png)
-
-In the *preparation* stage, Azure AD DS takes a backup of the domain to get the latest snapshot of users, groups, and passwords synchronized to the managed domain. Synchronization is then disabled, and the cloud service that hosts the managed domain is deleted. During the preparation stage, the managed domain is unable to authenticate users.
-
-![Preparation stage for migrating Azure AD DS](media/migrate-from-classic-vnet/migration-preparation.png)
-
-In the *migration* stage, the underlying virtual disks for the domain controllers from the Classic managed domain are copied to create the VMs using the Resource Manager deployment model. The managed domain is then recreated, which includes the LDAPS and DNS configuration. Synchronization to Azure AD is restarted, and LDAP certificates are restored. There's no need to rejoin any machines to a managed domain; they continue to be joined to the managed domain and run without changes.
-
-![Migration of Azure AD DS](media/migrate-from-classic-vnet/migration-process.png)
-
-## Example scenarios for migration
-
-Some common scenarios for migrating a managed domain include the following examples.
-
-> [!NOTE]
-> Don't convert the Classic virtual network until you have confirmed a successful migration. Converting the virtual network removes the option to roll back or restore the managed domain if there are any problems during the migration and verification stages.
-
-### Migrate Azure AD DS to an existing Resource Manager virtual network (recommended)
-
-A common scenario is where you've already moved other existing Classic resources to a Resource Manager deployment model and virtual network. Peering is then used from the Resource Manager virtual network to the Classic virtual network that continues to run Azure AD DS. This approach lets the Resource Manager applications and services use the authentication and management functionality of the managed domain in the Classic virtual network. Once migrated, all resources run using the Resource Manager deployment model and virtual network.
-
-![Migrate Azure AD DS to an existing Resource Manager virtual network](media/migrate-from-classic-vnet/migrate-to-existing-vnet.png)
-
-High-level steps involved in this example migration scenario include the following parts:
-
-1. Remove existing VPN gateways or virtual network peering configured on the Classic virtual network.
-1. Migrate the managed domain using the steps outlined in this article.
-1. Test and confirm a successful migration, then delete the Classic virtual network.
-
-### Migrate multiple resources including Azure AD DS
-
-In this example scenario, you migrate Azure AD DS and other associated resources from the Classic deployment model to the Resource Manager deployment model. If some resources continued to run in the Classic virtual network alongside the managed domain, they can all benefit from migrating to the Resource Manager deployment model.
-
-![Migrate multiple resources to the Resource Manager deployment model](media/migrate-from-classic-vnet/migrate-multiple-resources.png)
-
-High-level steps involved in this example migration scenario include the following parts:
-
-1. Remove existing VPN gateways or virtual network peering configured on the Classic virtual network.
-1. Migrate the managed domain using the steps outlined in this article.
-1. Set up virtual network peering between the Classic virtual network and Resource Manager network.
-1. Test and confirm a successful migration.
-1. [Move additional Classic resources like VMs][migrate-iaas].
-
-### Migrate Azure AD DS but keep other resources on the Classic virtual network
-
-With this example scenario, you have the minimum amount of downtime in one session. You only migrate Azure AD DS to a Resource Manager virtual network, and keep existing resources on the Classic deployment model and virtual network. In a following maintenance period, you can migrate the additional resources from the Classic deployment model and virtual network as desired.
-
-![Migrate only Azure AD DS to the Resource Manager deployment model](media/migrate-from-classic-vnet/migrate-only-azure-ad-ds.png)
-
-High-level steps involved in this example migration scenario include the following parts:
-
-1. Remove existing VPN gateways or virtual network peering configured on the Classic virtual network.
-1. Migrate the managed domain using the steps outlined in this article.
-1. Set up virtual network peering between the Classic virtual network and the new Resource Manager virtual network.
-1. Later, [migrate the additional resources][migrate-iaas] from the Classic virtual network as needed.
+The offline migration process copies the underlying virtual disks for the domain controllers from the Classic managed domain to create the VMs using the Resource Manager deployment model. The managed domain is then recreated, which includes the LDAPS and DNS configuration. Synchronization to Azure AD is restarted, and LDAP certificates are restored. There's no need to rejoin any machines to a managed domain; they continue to be joined to the managed domain and run without changes.
## Before you begin
-As you prepare and then migrate a managed domain, there are some considerations around the availability of authentication and management services. The managed domain is unavailable for a period of time during migration. Applications and services that rely on Azure AD DS experience downtime during migration.
+As you prepare for migration, there are some considerations around the availability of authentication and management services. The managed domain remains unavailable until the migration completes successfully.
> [!IMPORTANT]
> Read all of this migration article and guidance before you start the migration process. The migration process affects the availability of the Azure AD DS domain controllers for periods of time. Users, services, and applications can't authenticate against the managed domain during the migration process.
As you prepare and then migrate a managed domain, there are some considerations
The domain controller IP addresses for a managed domain change after migration. This change includes the public IP address for the secure LDAP endpoint. The new IP addresses are inside the address range for the new subnet in the Resource Manager virtual network.
-If you need to roll back, the IP addresses may change after rolling back.
- Azure AD DS typically uses the first two available IP addresses in the address range, but this isn't guaranteed. You can't currently specify the IP addresses to use after migration.
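As a quick check after migration, you can resolve the managed domain's FQDN to see which IP addresses the domain controllers now answer on. This is a minimal sketch, run from a domain-joined Windows VM; *aaddscontoso.com* is a placeholder for your own domain:

```powershell
# Resolve the managed domain FQDN to list the IP addresses the domain
# controllers use after migration. aaddscontoso.com is a placeholder.
Resolve-DnsName -Name aaddscontoso.com -Type A | Select-Object Name, IPAddress
```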
-### Downtime
-
-The migration process involves the domain controllers being offline for a period of time. Domain controllers are inaccessible while Azure AD DS is migrated to the Resource Manager deployment model and virtual network.
-
-On average, the downtime is around 1 to 3 hours. This time period is from when the domain controllers are taken offline to the moment the first domain controller comes back online. This average doesn't include the time it takes for the second domain controller to replicate, or the time it may take to migrate additional resources to the Resource Manager deployment model.
-
### Account lockout

Managed domains that run on Classic virtual networks don't have AD account lockout policies in place. If VMs are exposed to the internet, attackers could use password-spray methods to brute-force their way into accounts. There's no account lockout policy to stop those attempts. For managed domains that use the Resource Manager deployment model and virtual networks, AD account lockout policies protect against these password-spray attacks.
-By default, 5 bad password attempts in 2 minutes lock out an account for 30 minutes.
+By default, five bad password attempts in two minutes lock out an account for 30 minutes.
A locked out account can't be used to sign in, which may interfere with the ability to manage the managed domain or applications managed by the account. After a managed domain is migrated, accounts can experience what feels like a permanent lockout due to repeated failed attempts to sign in. Two common scenarios after migration include the following:
A locked out account can't be used to sign in, which may interfere with the abil
If you suspect that some accounts may be locked out after migration, the final migration steps outline how to enable auditing or change the fine-grained password policy settings.
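For reference, a sketch of how you might inspect the lockout-related settings of the fine-grained password policies from a domain-joined management VM. This assumes the RSAT ActiveDirectory PowerShell module is installed; policy names vary per managed domain:

```powershell
# Requires the ActiveDirectory RSAT module on a domain-joined management VM.
Import-Module ActiveDirectory

# List fine-grained password policies with their lockout settings.
Get-ADFineGrainedPasswordPolicy -Filter * |
    Select-Object Name, LockoutThreshold, LockoutObservationWindow, LockoutDuration
```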
-### Roll back and restore
-
-If the migration isn't successful, there's a process to roll back or restore a managed domain. Rollback is a self-service option to immediately return the state of the managed domain to before the migration attempt. Azure support engineers can also restore a managed domain from backup as a last resort. For more information, see [how to roll back or restore from a failed migration](#roll-back-and-restore-from-migration).
-
### Restrictions on available virtual networks

There are some restrictions on the virtual networks that a managed domain can be migrated to. The destination Resource Manager virtual network must meet the following requirements:
You must also create a network security group to restrict traffic in the virtual
For more information on what rules are required, see [Azure AD DS network security groups and required ports](network-considerations.md#network-security-groups-and-required-ports).
-### LDAPS and TLS/SSL certificate expiration
-
-If your managed domain is configured for LDAPS, confirm that your current TLS/SSL certificate is valid for more than 30 days. A certificate that expires within the next 30 days causes the migration processes to fail. If needed, renew the certificate and apply it to your managed domain, then begin the migration process.
-
## Migration steps
-The migration to the Resource Manager deployment model and virtual network is split into 5 main steps:
+The migration to the Resource Manager deployment model and virtual network is split into four main steps:
-| Step | Performed through | Estimated time | Downtime | Roll back/Restore? |
-||--|--|--|-|
-| [Step 1 - Update and locate the new virtual network](#update-and-verify-virtual-network-settings) | Azure portal | 15 minutes | No downtime required | N/A |
-| [Step 2 - Prepare the managed domain for migration](#prepare-the-managed-domain-for-migration) | PowerShell | 15 to 30 minutes on average | Downtime of Azure AD DS starts after this command is completed. | Roll back and restore available. |
-| [Step 3 - Move the managed domain to an existing virtual network](#migrate-the-managed-domain) | PowerShell | 1 to 3 hours on average | One domain controller is available once this command is completed. | On failure, both rollback (self-service) and restore are available. |
-| [Step 4 - Test and wait for the replica domain controller](#test-and-verify-connectivity-after-the-migration)| PowerShell and Azure portal | 1 hour or more, depending on the number of tests | Both domain controllers are available and should function normally, downtime ends. | N/A. Once the first VM is successfully migrated, there's no option for rollback or restore. |
-| [Step 5 - Optional configuration steps](#optional-post-migration-configuration-steps) | Azure portal and VMs | N/A | No downtime required | N/A |
+| Step | Performed through | Estimated time | Downtime |
+||--|--|--|
+| [Step 1 - Update and locate the new virtual network](#update-and-verify-virtual-network-settings) | Azure portal | 15 minutes | |
+| [Step 2 - Perform offline migration](#perform-offline-migration) | PowerShell | 1 to 3 hours on average | One domain controller is available once this command is completed. |
+| [Step 3 - Test and wait for the replica domain controller](#test-and-verify-connectivity-after-the-migration)| PowerShell and Azure portal | 1 hour or more, depending on the number of tests | Both domain controllers are available and should function normally, downtime ends. |
+| [Step 4 - Optional configuration steps](#optional-post-migration-configuration-steps) | Azure portal and VMs | N/A | |
> [!IMPORTANT]
> To avoid additional downtime, read all of this migration article and guidance before you start the migration process. The migration process affects the availability of the Azure AD DS domain controllers for a period of time. Users, services, and applications can't authenticate against the managed domain during the migration process.
Before you begin the migration process, complete the following initial checks an
1. Update your local Azure PowerShell environment to the latest version. To complete the migration steps, you need at least version *2.3.2*.
- For information on how to check and update your PowerShell version, see [Azure PowerShell overview][azure-powershell].
+ For information about how to check and update your PowerShell version, see [Azure PowerShell overview][azure-powershell].
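As a quick sanity check before you begin, you can verify the installed Az module version from PowerShell. A minimal sketch, assuming the Az module was installed from the PowerShell Gallery:

```powershell
# Check the installed Az module version; the migration steps need at least 2.3.2.
Get-InstalledModule -Name Az | Select-Object Name, Version

# Update to the latest version if needed.
# Update-Module -Name Az
```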
1. Create, or choose an existing, Resource Manager virtual network.
- Make sure that network settings don't block necessary ports required for Azure AD DS. Ports must be open on both the Classic virtual network and the Resource Manager virtual network. These settings include route tables (although it's not recommended to use route tables) and network security groups.
+ Make sure that network settings don't block ports required for Azure AD DS. Ports must be open on both the Classic virtual network and the Resource Manager virtual network. These settings include route tables (although it's not recommended to use route tables) and network security groups.
Azure AD DS needs a network security group to secure the ports needed for the managed domain and block all other incoming traffic. This network security group acts as an extra layer of protection to lock down access to the managed domain.
Before you begin the migration process, complete the following initial checks an
| Source | Source service tag | Source port ranges | Destination | Service | Destination port ranges | Protocol | Action | Required | Purpose |
|:--:|:-:|::|:-:|:-:|:--:|:--:|::|:--:|:--|
| Service tag | AzureActiveDirectoryDomainServices | * | Any | WinRM | 5986 | TCP | Allow | Yes | Management of your domain |
- | Service tag | CorpNetSaw | * | Any | RDP | 3389 | TCP | Allow | Optional | Debugging for support |
+ | Service tag | CorpNetSaw | * | Any | RDP | 3389 | TCP | Allow | Optional | Debugging for support |
Make a note of the target resource group, target virtual network, and target virtual network subnet. These resource names are used during the migration process.
- Note that the **CorpNetSaw** service tag isn't available by using Azure portal, and the network security group rule for **CorpNetSaw** has to be added by using [PowerShell](powershell-create-instance.md#create-a-network-security-group).
+ > [!NOTE]
+ > The **CorpNetSaw** service tag isn't available by using Azure portal, and the network security group rule for **CorpNetSaw** has to be added by using [PowerShell](powershell-create-instance.md#create-a-network-security-group).
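For illustration, a minimal sketch of adding the optional **CorpNetSaw** rule with Azure PowerShell; the NSG name, resource group, and rule priority are placeholders for your own values:

```powershell
# Placeholders: adjust the NSG name, resource group, and rule priority.
$nsg = Get-AzNetworkSecurityGroup -Name "aadds-nsg" -ResourceGroupName "myResourceGroup"

# Allow RDP from the CorpNetSaw service tag for support debugging (optional).
$nsg | Add-AzNetworkSecurityRuleConfig `
    -Name "AllowCorpNetSawRdpInbound" `
    -Direction Inbound `
    -Access Allow `
    -Protocol Tcp `
    -Priority 300 `
    -SourceAddressPrefix "CorpNetSaw" `
    -SourcePortRange "*" `
    -DestinationAddressPrefix "*" `
    -DestinationPortRange "3389" | Set-AzNetworkSecurityGroup
```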
1. Check the managed domain health in the Azure portal. If you have any alerts for the managed domain, resolve them before you start the migration process.
1. Optionally, if you plan to move other resources to the Resource Manager deployment model and virtual network, confirm that those resources can be migrated. For more information, see [Platform-supported migration of IaaS resources from Classic to Resource Manager][migrate-iaas].
Before you begin the migration process, complete the following initial checks an
> [!NOTE]
> Don't convert the Classic virtual network to a Resource Manager virtual network. If you do, there's no option to roll back or restore the managed domain.
-## Prepare the managed domain for migration
-
-Azure PowerShell is used to prepare the managed domain for migration. These steps include taking a backup, pausing synchronization, and deleting the cloud service that hosts Azure AD DS. When this step completes, Azure AD DS is taken offline for a period of time. If the preparation step fails, you can [roll back to the previous state](#roll-back).
+## Perform offline migration
-To prepare the managed domain for migration, complete the following steps:
+Azure PowerShell is used to perform offline migration of the managed domain:
1. Install the `Migrate-Aadds` script from the [PowerShell Gallery][powershell-script]. This PowerShell migration script is digitally signed by the Azure AD engineering team.
To prepare the managed domain for migration, complete the following steps:
```powershell
Install-Script -Name Migrate-Aadds
```
-1. Create a variable to hold the credentials used by the migration script using the [Get-Credential][get-credential] cmdlet.
+2. Create a variable to hold the credentials used by the migration script using the [Get-Credential][get-credential] cmdlet.
The user account you specify needs [Application Administrator](../active-directory/roles/permissions-reference.md#application-administrator) and [Groups Administrator](../active-directory/roles/permissions-reference.md#groups-administrator) Azure AD roles in your tenant to enable Azure AD DS and [Domain Services Contributor](../role-based-access-control/built-in-roles.md#contributor) Azure role to create the required Azure AD DS resources.
To prepare the managed domain for migration, complete the following steps:
```powershell
$creds = Get-Credential
```
-1. Define a variable for your Azure subscription ID. If needed, you can use the [Get-AzSubscription](/powershell/module/az.accounts/get-azsubscription) cmdlet to list and view your subscription IDs. Provide your own subscription ID in the following command:
+3. Define a variable for your Azure subscription ID. If needed, you can use the [Get-AzSubscription](/powershell/module/az.accounts/get-azsubscription) cmdlet to list and view your subscription IDs. Provide your own subscription ID in the following command:
```powershell
$subscriptionId = 'yourSubscriptionId'
```
-1. Now run the `Migrate-Aadds` cmdlet using the *-Prepare* parameter. Provide the *-ManagedDomainFqdn* for your own managed domain, such as *aaddscontoso.com*:
+4. Now run the `Migrate-Aadds` cmdlet using the *-Offline* parameter. Provide the *-ManagedDomainFqdn* for your own managed domain, such as *aaddscontoso.com*. Specify the target resource group that contains the virtual network you want to migrate Azure AD DS to, such as *myResourceGroup*. Provide the target virtual network, such as *myVnet*, and the subnet, such as *DomainServices*. This step can take 1 to 3 hours to complete.
```powershell
Migrate-Aadds `
- -Prepare `
+ -Offline `
-ManagedDomainFqdn aaddscontoso.com `
+ -VirtualNetworkResourceGroupName myResourceGroup `
+ -VirtualNetworkName myVnet `
+ -VirtualSubnetName DomainServices `
    -Credentials $creds `
    -SubscriptionId $subscriptionId
```
-## Migrate the managed domain
-
-With the managed domain prepared and backed up, the domain can be migrated. This step recreates the Azure AD DS domain controller VMs using the Resource Manager deployment model. This step can take 1 to 3 hours to complete.
-
-Run the `Migrate-Aadds` cmdlet using the *-Commit* parameter. Provide the *-ManagedDomainFqdn* for your own managed domain prepared in the previous section, such as *aaddscontoso.com*.
-
-Specify the target resource group that contains the virtual network you want to migrate Azure AD DS to, such as *myResourceGroup*. Provide the target virtual network, such as *myVnet*, and the subnet, such as *DomainServices*.
-
-After this command runs, you can't then roll back:
-
-```powershell
-Migrate-Aadds `
- -Commit `
- -ManagedDomainFqdn aaddscontoso.com `
- -VirtualNetworkResourceGroupName myResourceGroup `
- -VirtualNetworkName myVnet `
- -VirtualSubnetName DomainServices `
- -Credentials $creds `
- -SubscriptionId $subscriptionId
-```
-
-After the script validates the managed domain is prepared for migration, enter *Y* to start the migration process.
- > [!IMPORTANT]
-> Don't convert the Classic virtual network to a Resource Manager virtual network during the migration process. If you convert the virtual network, you can't then rollback or restore the managed domain as the original virtual network won't exist anymore.
+> As part of the offline migration workflow, you cannot convert the Classic virtual network to a Resource Manager virtual network.
Every two minutes during the migration process, a progress indicator reports the current status, as shown in the following example output:
If needed, you can update the fine-grained password policy to be less restrictiv
1. Use a network trace on the VM to locate the source of the attacks and block those IP addresses from being able to attempt sign-ins.
1. When there are minimal lockout issues, update the fine-grained password policy to be as restrictive as necessary.
-## Roll back and restore from migration
-
-Up to a certain point in the migration process, you can choose to roll back or restore the managed domain.
-
-### Roll back
-
-If there's an error when you run the PowerShell cmdlet to prepare for migration in step 2 or for the migration itself in step 3, the managed domain can roll back to the original configuration. This roll back requires the original Classic virtual network. The IP addresses may still change after rollback.
-
-Run the `Migrate-Aadds` cmdlet using the *-Abort* parameter. Provide the *-ManagedDomainFqdn* for your own managed domain prepared in a previous section, such as *aaddscontoso.com*, and the Classic virtual network name, such as *myClassicVnet*:
-
-```powershell
-Migrate-Aadds `
- -Abort `
- -ManagedDomainFqdn aaddscontoso.com `
- -ClassicVirtualNetworkName myClassicVnet `
- -Credentials $creds `
- -SubscriptionId $subscriptionId
-```
-
-### Restore
-
-As a last resort, Azure AD Domain Services can be restored from the last available backup. A backup is taken in step 1 of the migration to make sure that the most current backup is available. This backup is stored for 30 days.
-
-To restore the managed domain from backup, [open a support case ticket using the Azure portal][azure-support]. Provide your directory ID, domain name, and reason for restore. The support and restore process may take multiple days to complete.
-
## Troubleshooting

If you have problems after migration to the Resource Manager deployment model, review some of the following common troubleshooting areas:
active-directory On Premises Application Provisioning Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/on-premises-application-provisioning-architecture.md
This article lists the versions and features of Azure Active Directory Connect P
Microsoft provides direct support for the latest agent version and one version before.

### Download link
-You can download the latest version of the agent using [this link](https://aka.ms/onpremprovisioningagent).
+On-premises app provisioning has been rolled into the provisioning agent and is available from the portal. See [installing the provisioning agent](../cloud-sync/how-to-install.md).
### 1.1.892.0
active-directory Plan Cloud Hr Provision https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/plan-cloud-hr-provision.md
Previously updated : 04/17/2023 Last updated : 04/18/2023
Run the initial configuration in a [pilot environment](../fundamentals/active-di
To facilitate Azure AD provisioning workflows between the cloud HR app and Active Directory, you can add multiple provisioning connector apps from the Azure AD app gallery:
- **Cloud HR app to Active Directory user provisioning**: This provisioning connector app facilitates user account provisioning from the cloud HR app to a single Active Directory domain. If you have multiple domains, you can add one instance of this app from the Azure AD app gallery for each Active Directory domain you need to provision to.
-- **Cloud HR app to Azure AD user provisioning**: While Azure AD Connect is the tool that should be used to synchronize Active Directory users to Azure AD, this provisioning connector app can be used to facilitate the provisioning of cloud-only users from the cloud HR app to a single Azure AD tenant.
+- **Cloud HR app to Azure AD user provisioning**: Azure AD Connect is the tool used to synchronize on-premises Active Directory users to Azure Active Directory. The Cloud HR app to Azure AD user provisioning connector is what you use to provision cloud-only users from the cloud HR app to a single Azure AD tenant.
- **Cloud HR app write-back**: This provisioning connector app facilitates the write-back of the user's email addresses from Azure AD to the cloud HR app.

For example, the following image lists the Workday connector apps that are available in the Azure AD app gallery.
We recommend the following production configuration:
|Requirement|Recommendation|
|:-|:-|
-|Number of Azure AD Connect provisioning agents to deploy|Two (for high availability and failover)
-|Number of provisioning connector apps to configure|One app per child domain|
-|Server host for Azure AD Connect provisioning agent|Windows Server 2016 with line of sight to geolocated Active Directory domain controllers</br>Can coexist with Azure AD Connect service|
+|Number of Azure AD Connect provisioning agents to deploy.|Two (for high availability and failover).|
+|Number of provisioning connector apps to configure.|One app per child domain.|
+|Server host for Azure AD Connect provisioning agent.|Windows Server 2016 with line of sight to geolocated Active Directory domain controllers. </br>Can coexist with Azure AD Connect service.|
![Flow to on-premises agents](media/plan-cloud-hr-provision/plan-cloudhr-provisioning-img4.png)
We recommend the following production configuration:
|Requirement|Recommendation|
|:-|:-|
-|Number of Azure AD Connect provisioning agents to deploy on-premises|Two per disjoint Active Directory forest|
-|Number of provisioning connector apps to configure|One app per child domain|
-|Server host for Azure AD Connect provisioning agent|Windows Server 2016 with line of sight to geolocated Active Directory domain controllers</br>Can coexist with Azure AD Connect service|
+|Number of Azure AD Connect provisioning agents to deploy on-premises|Two per disjoint Active Directory forest.|
+|Number of provisioning connector apps to configure|One app per child domain.|
+|Server host for Azure AD Connect provisioning agent.|Windows Server 2016 with line of sight to geolocated Active Directory domain controllers. </br>Can coexist with Azure AD Connect service.|
![Single cloud HR app tenant disjoint Active Directory forest](media/plan-cloud-hr-provision/plan-cloudhr-provisioning-img5.png)

### Azure AD Connect provisioning agent requirements
-The cloud HR app to Active Directory user provisioning solution requires that you deploy one or more Azure AD Connect provisioning agents on servers that run Windows Server 2016 or greater. The servers must have a minimum of 4-GB RAM and .NET 4.7.1+ runtime. Ensure that the host server has network access to the target Active Directory domain.
+The cloud HR app to Active Directory user provisioning solution requires the deployment of one or more Azure AD Connect provisioning agents. These agents must be deployed on servers that run Windows Server 2016 or greater. The servers must have a minimum of 4-GB RAM and .NET 4.7.1+ runtime. Ensure that the host server has network access to the target Active Directory domain.
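To confirm a candidate host meets these requirements, you can check the OS version and the .NET Framework release from PowerShell. A sketch only; the registry value 461308 corresponds to .NET Framework 4.7.1:

```powershell
# Check the Windows version of the candidate host server.
(Get-CimInstance Win32_OperatingSystem).Caption

# Check the installed .NET Framework release key.
# A value of 461308 or higher indicates .NET Framework 4.7.1 or later.
(Get-ItemProperty 'HKLM:\SOFTWARE\Microsoft\NET Framework Setup\NDP\v4\Full').Release
```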
To prepare the on-premises environment, the Azure AD Connect provisioning agent configuration wizard registers the agent with your Azure AD tenant, [opens ports](../app-proxy/application-proxy-add-on-premises-application.md#open-ports), [allows access to URLs](../app-proxy/application-proxy-add-on-premises-application.md#allow-access-to-urls), and supports [outbound HTTPS proxy configuration](../saas-apps/workday-inbound-tutorial.md#how-do-i-configure-the-provisioning-agent-to-use-a-proxy-server-for-outbound-http-communication).
This is the most common deployment topology. Use this topology, if you need to p
* Set up two provisioning agent nodes for high availability and failover.
* Use the [provisioning agent configuration wizard](../cloud-sync/how-to-install.md#install-the-agent) to register your AD domain with your Azure AD tenant.
* When configuring the provisioning app, select the AD domain from the dropdown of registered domains.
-* If you are using scoping filters, configure [skip out of scope deletions flag](skip-out-of-scope-deletions.md) to prevent accidental account deactivations.
+* If you're using scoping filters, configure [skip out of scope deletions flag](skip-out-of-scope-deletions.md) to prevent accidental account deactivations.
### Deployment topology 2: Separate apps to provision distinct user sets from Cloud HR to single on-premises Active Directory domain
This topology supports business requirements where attribute mapping and provisi
### Deployment topology 3: Separate apps to provision distinct user sets from Cloud HR to multiple on-premises Active Directory domains (no cross-domain visibility)
-Use this topology to manage multiple independent child AD domains belonging to the same forest, if managers always exist in the same domain as the user and your unique ID generation rules for attributes like *userPrincipalName*, *samAccountName* and *mail* does not require a forest-wide lookup. It also offers the flexibility of delegating the administration of each provisioning job by domain boundary.
+Use this topology to manage multiple independent child AD domains belonging to the same forest, if managers always exist in the same domain as the user and your unique ID generation rules for attributes like *userPrincipalName*, *samAccountName* and *mail* don't require a forest-wide lookup. It also offers the flexibility of delegating the administration of each provisioning job by domain boundary.
For example: In the diagram below, the provisioning apps are set up for each geographic region: North America (NA), Europe, Middle East and Africa (EMEA) and Asia Pacific (APAC). Depending on the location, users are provisioned to the respective AD domain. Delegated administration of the provisioning app is possible so that *EMEA administrators* can independently manage the provisioning configuration of users belonging to the EMEA region.
For example: In the diagram below, the provisioning apps are set up for each geo
### Deployment topology 5: Single app to provision all users from Cloud HR to multiple on-premises Active Directory domains (with cross-domain visibility)
-Use this topology if you want to use a single provisioning app to manage users belonging to all your parent and child AD domains. This topology is recommended if provisioning rules are consistent across all domains and there is no requirement for delegated administration of provisioning jobs. This topology supports resolving cross-domain manager references and can perform forest-wide uniqueness check.
+Use this topology if you want to use a single provisioning app to manage users belonging to all your parent and child AD domains. This topology is recommended if provisioning rules are consistent across all domains and there's no requirement for delegated administration of provisioning jobs. This topology supports resolving cross-domain manager references and can perform forest-wide uniqueness check.
For example: In the diagram below, a single provisioning app manages users present in three different child domains grouped by region: North America (NA), Europe, Middle East and Africa (EMEA) and Asia Pacific (APAC). The attribute mapping for *parentDistinguishedName* is used to dynamically create a user in the appropriate child domain. Cross-domain manager references and forest-wide lookup are handled by enabling referral chasing on the provisioning agent.
For example: In the diagram below, a single provisioning app manages users prese
* Create a single HR2AD provisioning app for the entire forest.
* When configuring the provisioning app, select the parent AD domain from the dropdown of available AD domains. This ensures forest-wide lookup while generating unique values for attributes like *userPrincipalName*, *samAccountName* and *mail*.
* Use *parentDistinguishedName* with expression mapping to dynamically create the user in the correct child domain and [OU container](#configure-active-directory-ou-container-assignment).
-* If you are using scoping filters, configure [skip out of scope deletions flag](skip-out-of-scope-deletions.md) to prevent accidental account deactivations.
+* If you're using scoping filters, configure [skip out of scope deletions flag](skip-out-of-scope-deletions.md) to prevent accidental account deactivations.
### Deployment topology 6: Separate apps to provision distinct users from Cloud HR to disconnected on-premises Active Directory forests
Use this topology if your IT infrastructure has disconnected/disjoint AD forests
### Deployment topology 7: Separate apps to provision distinct users from multiple Cloud HR to disconnected on-premises Active Directory forests
-In large organizations, it is not uncommon to have multiple HR systems. During business M&A (mergers and acquisitions) scenarios, you may come across a need to connect your on-premises Active Directory to multiple HR sources. We recommend the topology below if you have multiple HR sources and would like to channel the identity data from these HR sources to either the same or different on-premises Active Directory domains.
+In large organizations, it isn't uncommon to have multiple HR systems. During business M&A (mergers and acquisitions) scenarios, you may come across a need to connect your on-premises Active Directory to multiple HR sources. We recommend the topology below if you have multiple HR sources and would like to channel the identity data from these HR sources to either the same or different on-premises Active Directory domains.
:::image type="content" source="media/plan-cloud-hr-provision/topology-7-separate-apps-from-multiple-hr-to-disconnected-ad-forests.png" alt-text="Screenshot of separate apps to provision users from multiple Cloud HR to disconnected AD forests" lightbox="media/plan-cloud-hr-provision/topology-7-separate-apps-from-multiple-hr-to-disconnected-ad-forests.png":::
active-directory Concept Authentication Phone Options https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-authentication-phone-options.md
Previously updated : 01/29/2023 Last updated : 04/17/2023
Microsoft doesn't guarantee consistent SMS or voice-based Azure AD Multi-Factor
### Text message verification
-With text message verification during SSPR or Azure AD Multi-Factor Authentication, an SMS is sent to the mobile phone number containing a verification code. To complete the sign-in process, the verification code provided is entered into the sign-in interface.
+With text message verification during SSPR or Azure AD Multi-Factor Authentication, a Short Message Service (SMS) text is sent to the mobile phone number containing a verification code. To complete the sign-in process, the verification code provided is entered into the sign-in interface.
+
+Android users can enable Rich Communication Services (RCS) on their devices. RCS offers encryption and other improvements over SMS. For Android, MFA text messages may be sent over RCS rather than SMS. The MFA text message is similar to SMS, but RCS messages have more Microsoft branding and a verified checkmark so users know they can trust the message.
+
### Phone call verification
active-directory Concept Conditional Access Conditions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/concept-conditional-access-conditions.md
These browsers support device authentication, allowing the device to be identifi
> [!NOTE]
> Edge 85+ requires the user to be signed in to the browser to properly pass device identity. Otherwise, it behaves like Chrome without the accounts extension. This sign-in might not occur automatically in a Hybrid Azure AD Join scenario.
>
-> Safari is supported for device-based Conditional Access, but it can not satisfy the **Require approved client app** or **Require app protection policy** conditions. A managed browser like Microsoft Edge will satisfy approved client app and app protection policy requirements.
+> Safari is supported for device-based Conditional Access on a managed device, but it cannot satisfy the **Require approved client app** or **Require app protection policy** conditions. A managed browser like Microsoft Edge will satisfy approved client app and app protection policy requirements.
> On iOS with a third-party MDM solution, only the Microsoft Edge browser supports device policy.
>
active-directory How To Connect Fed Group Claims https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-fed-group-claims.md
Some applications require the group membership information to appear in the role
Group filtering allows for fine control of the list of groups that's included as part of the group claim. When a filter is configured, only groups that match the filter will be included in the group's claim that's sent to that application. The filter will be applied against all groups regardless of the group hierarchy. > [!NOTE]
-> Group filtering applies to tokens emitted for apps where group claims and filtering was configured in the **Enterprise apps** blade in the portal.
+> Group filtering applies to tokens emitted for apps where group claims and filtering was configured in the **Enterprise apps** blade in the portal.
+> Group filtering does not apply to Azure AD Roles.
You can configure filters to be applied to the group's display name or `SAMAccountName` attribute. The following filtering operations are supported:
You can also configure group claims in the [optional claims](../../active-direct
| Selection | Description | |-|-| | `All` | Emits security groups, distribution lists, and roles. |
- | `SecurityGroup` | Emits security groups that the user is a member of in the group claim. |
+ | `SecurityGroup` | Emits security groups and Azure AD roles that the user is a member of in the group claim. |
| `DirectoryRole` | If the user is assigned directory roles, they're emitted as a `wids` claim. (A group claim won't be emitted.) | | `ApplicationGroup` | Emits only the groups that are explicitly assigned to the application and that the user is a member of. | | `None` | No groups are returned. (It's not case-sensitive, so `none` also works. It can be set directly in the application manifest.) |
active-directory How To Connect Health Data Retrieval https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-health-data-retrieval.md
To retrieve the email addresses for all of your users that are configured in Azu
4. On the **Notification Setting** blade, you will find the list of email addresses that have been enabled as recipients for health Alert notifications. ![Emails](./media/how-to-connect-health-data-retrieval/retrieve5a.png)
-## Retrieve accounts that were flagged with AD FS Bad Password attempts
+## Retrieve all sync errors
-To retrieve accounts that were flagged with AD FS Bad Password attempts, use the following steps.
+To retrieve a list of all sync errors, use the following steps.
1. Starting on the Azure Active Directory Health blade, select **Sync Errors**. ![Sync errors](./media/how-to-connect-health-data-retrieval/retrieve6.png)
active-directory Protect Against Consent Phishing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/protect-against-consent-phishing.md
Administrators, users, or Microsoft security researchers may flag OAuth applicat
When Azure AD disables an OAuth application, the following actions occur: - The malicious application and related service principals are placed into a fully disabled state. Any new token requests or requests for refresh tokens are denied, but existing access tokens are still valid until their expiration.-- The disabled state is surfaced through an exposed property called *disabledByMicrosoftStatus* on the related [application](/graph/api/resources/application) and [service principal](/graph/api/resources/serviceprincipal) resource types in Microsoft Graph.
+- These applications will show `DisabledDueToViolationOfServicesAgreement` on the `disabledByMicrosoftStatus` property on the related [application](/graph/api/resources/application) and [service principal](/graph/api/resources/serviceprincipal) resource types in Microsoft Graph. To prevent them from being instantiated in your organization again in the future, you cannot delete these objects.
- An email is sent to a global administrator when a user in an organization consented to an application before it was disabled. The email specifies the action taken and recommended steps they can do to investigate and improve their security posture. ## Recommended response and remediation
Administrators should be in control of application use by providing the right in
- [Managing access to applications](./what-is-access-management.md) - [Restrict user consent operations in Azure AD](../../security/fundamentals/steps-secure-identity.md#restrict-user-consent-operations) - [Compromised and malicious applications investigation](/security/compass/incident-response-playbook-compromised-malicious-app)+
active-directory Concept Audit Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/concept-audit-logs.md
With an application-centric view, you can get answers to questions such as:
## How do I access it?
-The audit activity report is available in all editions of Azure AD. To access the audit logs, you need to have one of the following roles:
+To access the audit log for a tenant, you must have one of the following roles:
- Reports Reader
- Security Reader
The audit activity report is available in all editions of Azure AD. To access th
Sign in to the Azure portal and go to **Azure AD** and select **Audit log** from the **Monitoring** section.
-You can also access the audit log through the [Microsoft Graph API](/graph/api/resources/azure-ad-auditlog-overview).
+The audit activity report is available in [all editions of Azure AD](reference-reports-data-retention.md#how-long-does-azure-ad-store-the-data). If you have an Azure Active Directory P1 or P2 license, you can access the audit log through the [Microsoft Graph API](/graph/api/resources/azure-ad-auditlog-overview). See [Getting started with Azure Active Directory Premium](../fundamentals/active-directory-get-started-premium.md) to upgrade your Azure Active Directory edition. If the tenant had no data activities before the upgrade, it takes a couple of days for the data to show up in Graph after you upgrade to a premium license.
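For example, with a premium license you can pull recent audit events through the Microsoft Graph PowerShell SDK. A sketch under the stated license assumption:

```powershell
# Requires the Microsoft.Graph.Reports module and an Azure AD P1/P2 license.
Connect-MgGraph -Scopes "AuditLog.Read.All"

# Retrieve the five most recent directory audit events.
Get-MgAuditLogDirectoryAudit -Top 5 |
    Select-Object ActivityDateTime, ActivityDisplayName, Result
```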
## What do the logs show?
active-directory Delegate By Task https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/delegate-by-task.md
You can further restrict permissions by assigning roles at smaller scopes or by
> | - | | - |
> | Manage identity providers | [External Identity Provider Administrator](permissions-reference.md#external-identity-provider-administrator) | |
> | Manage settings | [Global Administrator](permissions-reference.md#global-administrator) | |
-> | Manage terms of use | [Global Administrator](permissions-reference.md#global-administrator) | |
+> | Manage privacy statement and contact | [Global Administrator](permissions-reference.md#global-administrator) | |
> | Read all configuration | [Global Reader](permissions-reference.md#global-reader) | |

## Password reset
aks Azure Cni Overlay https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-cni-overlay.md
Azure CNI Overlay has the following limitations:
- Windows Server 2019 node pools are **not** supported for Overlay
- Traffic from host network pods is not able to reach Windows Overlay pods.
- Sovereign Clouds are not supported
-- Virtual Machine Scale Sets (VMAS) are not supported for Overlay
+- Virtual Machine Availability Sets (VMAS) are not supported for Overlay
- Dualstack networking is not supported in Overlay
- You can't use [DCsv2-series](/azure/virtual-machines/dcv2-series) virtual machines in node pools. To meet Confidential Computing requirements, consider using [DCasv5 or DCadsv5-series confidential VMs](/azure/virtual-machines/dcasv5-dcadsv5-series) instead.
aks Azure Netapp Files https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-netapp-files.md
Title: Provision Azure NetApp Files volumes on Azure Kubernetes Service
description: Learn how to provision Azure NetApp Files volumes on an Azure Kubernetes Service cluster.
Previously updated : 02/08/2023 Last updated : 04/18/2023
# Provision Azure NetApp Files volumes on Azure Kubernetes Service
Before proceeding to the next section, you need to:
This section walks you through the installation of Astra Trident using the operator.
-1. Download Astra Trident from its [GitHub repository](https://github.com/NetApp/trident/releases). Choose from the desired version and download the installer bundle.
-
- ```bash
- wget https://github.com/NetApp/trident/releases/download/v21.07.1/trident-installer-21.07.1.tar.gz
- tar xzvf trident-installer-21.07.1.tar.gz
- ```
-
-2. Run the [kubectl create][kubectl-create] command to create the *trident* namespace:
+1. Run the [kubectl create][kubectl-create] command to create the *trident* namespace:
```bash
kubectl create ns trident
This section walks you through the installation of Astra Trident using the opera
namespace/trident created
```
-3. Run the [kubectl apply][kubectl-apply] command to deploy the Trident operator using the bundle file:
+2. Run the [kubectl apply][kubectl-apply] command to deploy the Trident operator using the bundle file:
+ - For AKS cluster versions earlier than 1.25, run the following command:
+ ```bash
+ kubectl apply -f https://raw.githubusercontent.com/NetApp/trident/v23.01.1/deploy/bundle_pre_1_25.yaml -n trident
+ ```
+ - For AKS cluster version 1.25 or later, run the following command:
```bash
- kubectl apply -f trident-installer/deploy/bundle.yaml -n trident
+ kubectl apply -f https://raw.githubusercontent.com/NetApp/trident/v23.01.1/deploy/bundle_post_1_25.yaml -n trident
```

The output of the command resembles the following example:
This section walks you through the installation of Astra Trident using the opera
podsecuritypolicy.policy/tridentoperatorpods created
```
-4. Run the following command to create a `TridentOrchestrator` to install Astra Trident.
+3. Run the following command to create a `TridentOrchestrator` to install Astra Trident.
```bash
- kubectl apply -f trident-installer/deploy/crds/tridentorchestrator_cr.yaml
+ kubectl apply -f https://raw.githubusercontent.com/NetApp/trident/v23.01.1/deploy/crds/tridentorchestrator_cr.yaml
```

The output of the command resembles the following example:
This section walks you through the installation of Astra Trident using the opera
The operator installs by using the parameters provided in the `TridentOrchestrator` spec. You can learn about the configuration parameters and example backends from the [Trident install guide][trident-install-guide] and [backend guide][trident-backend-install-guide].
-5. To confirm Astra Trident was installed successfully, run the following [kubectl describe][kubectl-describe] command:
+4. To confirm Astra Trident was installed successfully, run the following [kubectl describe][kubectl-describe] command:
```bash
kubectl describe torc trident
This section walks you through the installation of Astra Trident using the opera
Current Installation Params: IPv6: false Autosupport Hostname:
- Autosupport Image: netapp/trident-autosupport:21.01
+ Autosupport Image: netapp/trident-autosupport:23.01
Autosupport Proxy: Autosupport Serial Number: Debug: true
This section walks you through the installation of Astra Trident using the opera
Kubelet Dir: /var/lib/kubelet Log Format: text Silence Autosupport: false
- Trident Image: netapp/trident:21.07.1
+ Trident Image: netapp/trident:23.01.1
Message: Trident installed Namespace: trident Status: Installed
- Version: v21.07.1
+ Version: v23.01.1
Events: Type Reason Age From Message - - - -
This section walks you through the installation of Astra Trident using the opera
### Create a backend
-1. Before creating a backend, you need to update `backend-anf.yaml` to include details about the Azure NetApp Files subscription, such as:
+1. Before creating a backend, you need to update [backend-anf.yaml][backend-anf.yaml] to include details about the Azure NetApp Files subscription, such as:
* `subscriptionID` for the Azure subscription where Azure NetApp Files will be enabled.
* `tenantID`, `clientID`, and `clientSecret` from an [App Registration][azure-ad-app-registration] in Azure Active Directory (AD) with sufficient permissions for the Azure NetApp Files service. The App Registration must include the `Owner` or `Contributor` role that's predefined by Azure.
This section walks you through the installation of Astra Trident using the opera
2. After Astra Trident is installed, create a backend that points to your Azure NetApp Files subscription by running the following command.

```bash
- kubectl apply -f trident-installer/sample-input/backends-samples/azure-netapp-files/backend-anf.yaml -n trident
+ kubectl apply -f backend-anf.yaml -n trident
```

The output of the command resembles the following example:
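To confirm the backend is usable before provisioning volumes, you can query Trident's custom resources. A sketch only; kubectl runs the same from PowerShell or bash:

```powershell
# kubectl works the same from PowerShell or bash; check the backend state.
kubectl get tridentbackends -n trident

# Inspect the details if the backend reports a failed state.
kubectl describe tridentbackends -n trident
```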
After the PVC is created, a pod can be spun up to access the Azure NetApp Files
spec:
  containers:
  - name: nginx
- image: mcr.microsoft.com/oss/nginx/nginx:latest1.15.5-alpine
+ image: mcr.microsoft.com/oss/nginx/nginx:1.15.5-alpine
    resources:
      requests:
        cpu: 100m
Astra Trident supports many features with Azure NetApp Files. For more informati
<!-- EXTERNAL LINKS -->
[astra-trident]: https://docs.netapp.com/us-en/trident/index.html
+[kubectl-create]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#create
[kubectl-apply]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#apply
[kubectl-describe]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#describe
[kubectl-exec]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#exec
Astra Trident supports many features with Azure NetApp Files. For more informati
[expand-trident-volumes]: https://docs.netapp.com/us-en/trident/trident-use/vol-expansion.html
[on-demand-trident-volume-snapshots]: https://docs.netapp.com/us-en/trident/trident-use/vol-snapshots.html
[importing-trident-volumes]: https://docs.netapp.com/us-en/trident/trident-use/vol-import.html
+[backend-anf.yaml]: https://raw.githubusercontent.com/NetApp/trident/v23.01.1/trident-installer/sample-input/backends-samples/azure-netapp-files/backend-anf.yaml
<!-- INTERNAL LINKS --> [aks-quickstart-cli]: ./learn/quick-kubernetes-deploy-cli.md
aks Certificate Rotation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/certificate-rotation.md
This article shows you how certificate rotation works in your AKS cluster.
This article requires that you are running the Azure CLI version 2.0.77 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][azure-cli-install].
-## Limitation
-
-Certificate rotation is not supported for stopped AKS clusters.
-
## AKS certificates, Certificate Authorities, and Service Accounts

AKS generates and uses the following certificates, Certificate Authorities, and Service Accounts:
aks Cilium Enterprise Marketplace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/cilium-enterprise-marketplace.md
+
+ Title: Isovalent Cilium Enterprise on Azure Marketplace (Preview)
+
+description: Learn about Isovalent Cilium Enterprise on Azure Marketplace and how to deploy it on Azure.
+++++ Last updated : 04/18/2023+++
+# Isovalent Cilium Enterprise on Azure Marketplace (Preview)
+
+Isovalent Cilium Enterprise on Azure Marketplace is a powerful tool for securing and managing Kubernetes workloads on Azure. Cilium Enterprise's range of features and easy deployment make it an ideal solution for organizations of all sizes looking to secure their cloud-native applications.
+
+Isovalent Cilium Enterprise is a network security platform for modern cloud-native workloads that provides visibility, security, and compliance across Kubernetes clusters. It uses eBPF technology to deliver network and application-layer security, while also providing observability and tracing for Kubernetes workloads. Azure Marketplace is an online store for buying and selling cloud computing solutions that allows you to deploy Isovalent Cilium Enterprise to Azure with ease.
++
+> [!IMPORTANT]
+> Isovalent Cilium Enterprise is currently in PREVIEW.
+> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+
+Designed for platform teams and using the power of eBPF, Isovalent Cilium Enterprise:
+
+* Combines network and runtime behavior with Kubernetes identity to provide a single source of data for cloud native forensics, audit, compliance monitoring, and threat detection. Isovalent Cilium Enterprise is integrated into your SIEM/Log aggregation platform of choice.
+
+* Scales effortlessly for any deployment size, with capabilities such as traffic management, load balancing, and infrastructure monitoring.
+
+* Is fully back-ported and tested, and available with 24x7 support.
+
+* Enables self-service for monitoring, troubleshooting, and security workflows in Kubernetes. Teams can access current and historical views of flow data, metrics, and visualizations for their specific namespaces.
+
+> [!NOTE]
+> If you're upgrading an existing AKS cluster, it must have been created with Azure CNI powered by Cilium. For more information, see [Configure Azure CNI Powered by Cilium in Azure Kubernetes Service (AKS) (Preview)](azure-cni-powered-by-cilium.md).
+
+## Prerequisites
+
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+
+- An existing Azure Kubernetes Service (AKS) cluster running Azure CNI powered by Cilium. If you don't have an existing AKS cluster, you can create one from the Azure portal. For more information, see [Configure Azure CNI Powered by Cilium in Azure Kubernetes Service (AKS) (Preview)](azure-cni-powered-by-cilium.md).
+
+## Deploy Isovalent Cilium Enterprise on Azure Marketplace
+
+1. Sign in to the [Azure portal](https://portal.azure.com/).
+
+1. In the search box at the top of the portal, enter **Cilium Enterprise** and select **Isovalent Cilium Enterprise** from the results.
+
+1. In the **Basics** tab of **Create Isovalent Cilium Enterprise**, enter or select the following information:
+
+| Setting | Value |
+| | |
+| **Project details** | |
+| Subscription | Select your subscription |
+| Resource group | Select **Create new** <br> Enter **test-rg** in **Name**. <br> Select **OK**. <br> Or, select an existing resource group that contains your AKS cluster. |
+| **Instance details** | |
+| Supported Regions | Select **West US 2**. |
+| Create new dev cluster? | Leave the default of **No**. |
+
+1. Select **Next: Cluster Details**.
+
+1. Select your AKS cluster from the **AKS Cluster Name** dropdown.
+
+1. Select **Review + create**.
+
+1. Select **Create**.
+
+Azure deploys Isovalent Cilium Enterprise to your selected subscription and resource group. This process may take some time and must finish before you can use Cilium Enterprise.
+
+> [!IMPORTANT]
+> Marketplace applications are deployed as AKS extensions onto AKS clusters. If you're upgrading an existing AKS cluster, AKS seamlessly replaces the Cilium OSS images with Isovalent Cilium Enterprise images without any downtime.
+
+When the deployment is complete, you can access Isovalent Cilium Enterprise by navigating to the resource group that contains the **Cilium Enterprise** resource in the Azure portal.
+
+Cilium can be reconfigured after deployment by updating the Helm values with Azure CLI:
+
+```azurecli
+az k8s-extension update -c <cluster> -t managedClusters -g <resource-group> -n cilium --configuration-settings debug.enabled=true
+```
+
+You can uninstall an Isovalent Cilium Enterprise offer using the AKS extension delete command, as sketched below. Azure Marketplace doesn't yet offer a per-cluster uninstall flow; removal through Marketplace is only possible when the ISV stops selling the whole offer. For more information about AKS extension delete, see [az k8s-extension delete](/cli/azure/k8s-extension#az-k8s-extension-delete).
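+
+As a sketch (parameter values are placeholders; the extension name `cilium` mirrors the update example above):
+
+```azurecli
+az k8s-extension delete -c <cluster> -t managedClusters -g <resource-group> -n cilium
+```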
+
+## Next steps
+
+- [Configure Azure CNI Powered by Cilium in Azure Kubernetes Service (AKS) (Preview)](azure-cni-powered-by-cilium.md)
+
+- [What is Azure Kubernetes Service?](intro-kubernetes.md)
aks Istio About https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/istio-about.md
+
+ Title: Istio-based service mesh add-on for Azure Kubernetes Service (preview)
+description: Istio-based service mesh add-on for Azure Kubernetes Service.
+ Last updated : 04/09/2023+++
+# Istio-based service mesh add-on for Azure Kubernetes Service (preview)
+
+[Istio][istio-overview] addresses the challenges developers and operators face with a distributed or microservices architecture. The Istio-based service mesh add-on provides an officially supported and tested integration for Azure Kubernetes Service (AKS).
++
+## What is a Service Mesh?
+
+Modern applications are typically architected as distributed collections of microservices, with each collection of microservices performing some discrete business function. A service mesh is a dedicated infrastructure layer that you can add to your applications. It allows you to transparently add capabilities like observability, traffic management, and security, without adding them to your own code. The term **service mesh** describes both the type of software you use to implement this pattern, and the security or network domain that is created when you use that software.
+
+As the deployment of distributed services, such as in a Kubernetes-based system, grows in size and complexity, it can become harder to understand and manage. You may need to implement capabilities such as discovery, load balancing, failure recovery, metrics, and monitoring. A service mesh can also address more complex operational requirements like A/B testing, canary deployments, rate limiting, access control, encryption, and end-to-end authentication.
+
+Service-to-service communication is what makes a distributed application possible. Routing this communication, both within and across application clusters, becomes increasingly complex as the number of services grows. Istio helps reduce this complexity while easing the strain on development teams.
+
+## What is Istio?
+
+Istio is an open-source service mesh that layers transparently onto existing distributed applications. Istio's powerful features provide a uniform and more efficient way to secure, connect, and monitor services. Istio enables load balancing, service-to-service authentication, and monitoring, with few or no service code changes. Its powerful control plane brings vital features, including:
+
+* Secure service-to-service communication in a cluster with TLS encryption, strong identity-based authentication and authorization.
+* Automatic load balancing for HTTP, gRPC, WebSocket, and TCP traffic.
+* Fine-grained control of traffic behavior with rich routing rules, retries, failovers, and fault injection.
+* A pluggable policy layer and configuration API supporting access controls, rate limits and quotas.
+* Automatic metrics, logs, and traces for all traffic within a cluster, including cluster ingress and egress.
+
+## How is the add-on different from open-source Istio?
+
+This service mesh add-on uses and builds on top of open-source Istio. The add-on flavor provides the following extra benefits:
+
+* Istio versions are tested and verified to be compatible with supported versions of Azure Kubernetes Service.
+* Microsoft handles scaling and configuration of the Istio control plane.
+* Microsoft adjusts scaling of AKS components like `coredns` when Istio is enabled.
+* Microsoft provides managed lifecycle (upgrades) for Istio components when triggered by the user.
+* Verified external and internal ingress set-up.
+* Verified to work with [Azure Monitor managed service for Prometheus][managed-prometheus-overview] and [Azure Managed Grafana][managed-grafana-overview].
+* Official Azure support provided for the add-on.
+
+## Limitations
+
+Istio-based service mesh add-on for AKS has the following limitations:
+
+* The add-on currently doesn't work on AKS clusters using [Azure CNI Powered by Cilium][azure-cni-cilium].
+* The add-on doesn't work on AKS clusters that are using [Open Service Mesh addon for AKS][open-service-mesh-about].
+* The add-on doesn't work on AKS clusters that have Istio installed on them already outside the add-on installation.
+* The mesh has a managed lifecycle that determines how Istio versions are installed and when upgrades become available.
+* Istio doesn't support Windows Server containers.
+* Customization of the mesh based on the following custom resources is blocked for now: `EnvoyFilter`, `ProxyConfig`, `WorkloadEntry`, `WorkloadGroup`, `Telemetry`, `IstioOperator`, `WasmPlugin`.
+
+## Next steps
+
+* [Deploy Istio-based service mesh add-on][istio-deploy-addon]
+
+[istio-overview]: https://istio.io/latest/
+[managed-prometheus-overview]: ../azure-monitor/essentials/prometheus-metrics-overview.md
+[managed-grafana-overview]: ../managed-grafana/overview.md
+[azure-cni-cilium]: azure-cni-powered-by-cilium.md
+[open-service-mesh-about]: open-service-mesh-about.md
+
+[istio-deploy-addon]: istio-deploy-addon.md
aks Istio Deploy Addon https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/istio-deploy-addon.md
+
+ Title: Deploy Istio-based service mesh add-on for Azure Kubernetes Service (preview)
+description: Deploy Istio-based service mesh add-on for Azure Kubernetes Service (preview)
++ Last updated : 04/09/2023+++
+# Deploy Istio-based service mesh add-on for Azure Kubernetes Service (preview)
+
+This article shows you how to install the Istio-based service mesh add-on on an Azure Kubernetes Service (AKS) cluster.
+
+For more information on Istio and the service mesh add-on, see [Istio-based service mesh add-on for Azure Kubernetes Service][istio-about].
++
+## Before you begin
+
+### Set environment variables
+
+```bash
+export CLUSTER=<cluster-name>
+export RESOURCE_GROUP=<resource-group-name>
+export LOCATION=<location>
+```
+
+### Verify Azure CLI and aks-preview extension versions
+The add-on requires:
+* Azure CLI version 2.44.0 or later installed. To install or upgrade, see [Install Azure CLI][install-azure-cli].
+* `aks-preview` Azure CLI extension of version 0.5.133 or later installed
+
+You can run `az --version` to verify these versions.
+
+To install the aks-preview extension, run the following command:
+
+```azurecli-interactive
+az extension add --name aks-preview
+```
+
+Run the following command to update to the latest version of the extension released:
+
+```azurecli-interactive
+az extension update --name aks-preview
+```
+
+### Register the _AzureServiceMeshPreview_ feature flag
+
+Register the `AzureServiceMeshPreview` feature flag by using the [az feature register][az-feature-register] command:
+
+```azurecli-interactive
+az feature register --namespace "Microsoft.ContainerService" --name "AzureServiceMeshPreview"
+```
+
+It takes a few minutes for the feature to register. Verify the registration status by using the [az feature show][az-feature-show] command:
+
+```azurecli-interactive
+az feature show --namespace "Microsoft.ContainerService" --name "AzureServiceMeshPreview"
+```
+
+When the status reflects *Registered*, refresh the registration of the *Microsoft.ContainerService* resource provider by using the [az provider register][az-provider-register] command:
+
+```azurecli-interactive
+az provider register --namespace Microsoft.ContainerService
+```
+
+## Install Istio add-on at the time of cluster creation
+
+To install the Istio add-on when creating the cluster, use the `--enable-azure-service-mesh` or `--enable-asm` parameter.
+
+```azurecli-interactive
+az group create --name ${RESOURCE_GROUP} --location ${LOCATION}
+
+az aks create \
+--resource-group ${RESOURCE_GROUP} \
+--name ${CLUSTER} \
+--enable-asm
+```
+
+## Install Istio add-on for existing cluster
+
+The following example enables Istio add-on for an existing AKS cluster:
+
+> [!IMPORTANT]
+> You can't enable the Istio add-on on an existing cluster if an OSM add-on is already on your cluster. Uninstall the OSM add-on before installing the Istio add-on.
+> For more information, see [uninstall the OSM add-on from your AKS cluster][uninstall-osm-addon].
+> The Istio add-on can only be enabled on AKS clusters running Kubernetes version 1.23 or later.
+
+```azurecli-interactive
+az aks mesh enable --resource-group ${RESOURCE_GROUP} --name ${CLUSTER}
+```
+
+## Verify successful installation
+
+To verify the Istio add-on is installed on your cluster, run the following command:
+
+```azurecli-interactive
+az aks show --resource-group ${RESOURCE_GROUP} --name ${CLUSTER} --query 'serviceMeshProfile.mode'
+```
+
+Confirm the output shows `Istio`.
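+
+For example, with the default JSON output, an enabled mesh is printed as a quoted string (illustrative sample output):
+
+```
+"Istio"
+```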
+
+Use `az aks get-credentials` to get the credentials for your AKS cluster:
+
+```azurecli-interactive
+az aks get-credentials --resource-group ${RESOURCE_GROUP} --name ${CLUSTER}
+```
+
+Use `kubectl` to verify that `istiod` (Istio control plane) pods are running successfully:
+
+```bash
+kubectl get pods -n aks-istio-system
+```
+
+Confirm the `istiod` pod has a status of `Running`. For example:
+
+```
+NAME READY STATUS RESTARTS AGE
+istiod-asm-1-17-74f7f7c46c-xfdtl 2/2 Running 0 2m
+```
+
+## Enable sidecar injection
+
+To automatically install a sidecar in any new pods, label your namespaces with the mesh revision:
+
+```bash
+kubectl label namespace default istio.io/rev=asm-1-17
+```
+
+> [!IMPORTANT]
+> The default `istio-injection=enabled` labeling doesn't work. Explicit versioning (`istio.io/rev=asm-1-17`) is required.
++
+For manual injection of sidecar using `istioctl kube-inject`, you need to specify extra parameters for `istioNamespace` (`-i`) and `revision` (`-r`). Example:
+
+```bash
+kubectl apply -f <(istioctl kube-inject -f sample.yaml -i aks-istio-system -r asm-1-17) -n foo
+```
+
+## Deploy sample application
+
+Use `kubectl apply` to deploy the sample application on the cluster:
+
+```bash
+kubectl apply -f https://raw.githubusercontent.com/istio/istio/release-1.17/samples/bookinfo/platform/kube/bookinfo.yaml
+```
+
+Confirm several deployments and services are created on your cluster. For example:
+
+```
+service/details created
+serviceaccount/bookinfo-details created
+deployment.apps/details-v1 created
+service/ratings created
+serviceaccount/bookinfo-ratings created
+deployment.apps/ratings-v1 created
+service/reviews created
+serviceaccount/bookinfo-reviews created
+deployment.apps/reviews-v1 created
+deployment.apps/reviews-v2 created
+deployment.apps/reviews-v3 created
+service/productpage created
+serviceaccount/bookinfo-productpage created
+deployment.apps/productpage-v1 created
+```
+
+Use `kubectl get services` to verify that the services were created successfully:
+
+```bash
+kubectl get services
+```
+
+Confirm the following services were deployed:
+
+```
+NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
+details ClusterIP 10.0.180.193 <none> 9080/TCP 87s
+kubernetes ClusterIP 10.0.0.1 <none> 443/TCP 15m
+productpage ClusterIP 10.0.112.238 <none> 9080/TCP 86s
+ratings ClusterIP 10.0.15.201 <none> 9080/TCP 86s
+reviews ClusterIP 10.0.73.95 <none> 9080/TCP 86s
+```
+
+Use `kubectl get pods` to verify the status of the pods:
+
+```bash
+kubectl get pods
+```
+
+Confirm that all the pods have status of `Running`.
+
+```
+NAME                              READY   STATUS    RESTARTS   AGE
+details-v1-558b8b4b76-2llld 2/2 Running 0 2m41s
+productpage-v1-6987489c74-lpkgl 2/2 Running 0 2m40s
+ratings-v1-7dc98c7588-vzftc 2/2 Running 0 2m41s
+reviews-v1-7f99cc4496-gdxfn 2/2 Running 0 2m41s
+reviews-v2-7d79d5bd5d-8zzqd 2/2 Running 0 2m41s
+reviews-v3-7dbcdcbc56-m8dph 2/2 Running 0 2m41s
+```
+
+> [!NOTE]
+> Each pod has two containers, one of which is the Envoy sidecar injected by Istio and the other is the application container.
+
+To test this sample application against ingress, see [Next steps](#next-steps).
+
+## Delete resources
+
+Use `kubectl delete` to delete the sample application:
+
+```bash
+kubectl delete -f https://raw.githubusercontent.com/istio/istio/release-1.17/samples/bookinfo/platform/kube/bookinfo.yaml
+```
+
+If you don't intend to enable Istio ingress on your cluster and want to disable the Istio add-on, run the following command:
+
+```azurecli-interactive
+az aks mesh disable --resource-group ${RESOURCE_GROUP} --name ${CLUSTER}
+```
+
+> [!CAUTION]
+> Disabling the service mesh addon will completely remove the Istio control plane from the cluster.
+
+Istio `CustomResourceDefinition`s (CRDs) aren't deleted by default. To clean them up, use:
+
+```bash
+kubectl delete crd $(kubectl get crd -A | grep "istio.io" | awk '{print $1}')
+```
+
+Use `az group delete` to delete your cluster and the associated resources:
+
+```azurecli-interactive
+az group delete --name ${RESOURCE_GROUP} --yes --no-wait
+```
+
+## Next steps
+
+* [Deploy external or internal ingresses for Istio service mesh add-on][istio-deploy-ingress]
+
+[istio-about]: istio-about.md
+
+[azure-cli-install]: /cli/azure/install-azure-cli
+[az-feature-register]: /cli/azure/feature#az-feature-register
+[az-feature-show]: /cli/azure/feature#az-feature-show
+[az-provider-register]: /cli/azure/provider#az-provider-register
+
+[uninstall-osm-addon]: open-service-mesh-uninstall-add-on.md
+[uninstall-istio-oss]: https://istio.io/latest/docs/setup/install/istioctl/#uninstall-istio
+
+[istio-deploy-ingress]: istio-deploy-ingress.md
aks Istio Deploy Ingress https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/istio-deploy-ingress.md
+
+ Title: Deploy external or internal ingresses for Istio service mesh add-on for Azure Kubernetes Service (preview)
+description: Deploy external or internal ingresses for Istio service mesh add-on for Azure Kubernetes Service (preview)
++ Last updated : 04/09/2023+++
+# Deploy external or internal ingresses for Istio service mesh add-on for Azure Kubernetes Service (preview)
+
+This article shows you how to deploy external or internal ingresses for the Istio service mesh add-on on an Azure Kubernetes Service (AKS) cluster.
++
+## Prerequisites
+
+This guide assumes you followed the [documentation][istio-deploy-addon] to enable the Istio add-on on an AKS cluster, deploy a sample application and set environment variables.
+
+## Enable external ingress gateway
+
+Use `az aks mesh enable-ingress-gateway` to enable an externally accessible Istio ingress on your AKS cluster:
+
+```azurecli-interactive
+az aks mesh enable-ingress-gateway --resource-group $RESOURCE_GROUP --name $CLUSTER --ingress-gateway-type external
+```
+
+Use `kubectl get svc` to check the service mapped to the ingress gateway:
+
+```bash
+kubectl get svc aks-istio-ingressgateway-external -n aks-istio-ingress
+```
+
+Observe from the output that the external IP address of the service is a publicly accessible one:
+
+```
+NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
+aks-istio-ingressgateway-external LoadBalancer 10.0.10.249 <EXTERNAL_IP> 15021:30705/TCP,80:32444/TCP,443:31728/TCP 4m21s
+```
+
+Applications aren't accessible from outside the cluster by default after enabling the ingress gateway. To make an application accessible, map the sample deployment's ingress to the Istio ingress gateway using the following manifest:
+
+```bash
+kubectl apply -f - <<EOF
+apiVersion: networking.istio.io/v1alpha3
+kind: Gateway
+metadata:
+ name: bookinfo-gateway-external
+spec:
+ selector:
+ istio: aks-istio-ingressgateway-external
+ servers:
+ - port:
+ number: 80
+ name: http
+ protocol: HTTP
+ hosts:
+ - "*"
+---
+apiVersion: networking.istio.io/v1alpha3
+kind: VirtualService
+metadata:
+ name: bookinfo-vs-external
+spec:
+ hosts:
+ - "*"
+ gateways:
+ - bookinfo-gateway-external
+ http:
+ - match:
+ - uri:
+ exact: /productpage
+ - uri:
+ prefix: /static
+ - uri:
+ exact: /login
+ - uri:
+ exact: /logout
+ - uri:
+ prefix: /api/v1/products
+ route:
+ - destination:
+ host: productpage
+ port:
+ number: 9080
+EOF
+```
+
+> [!NOTE]
+> The selector used in the Gateway object points to `istio: aks-istio-ingressgateway-external`, which can be found as a label on the service mapped to the external ingress that was enabled earlier.
+
+Set environment variables for external ingress host and ports:
+
+```bash
+export INGRESS_HOST_EXTERNAL=$(kubectl -n aks-istio-ingress get service aks-istio-ingressgateway-external -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
+export INGRESS_PORT_EXTERNAL=$(kubectl -n aks-istio-ingress get service aks-istio-ingressgateway-external -o jsonpath='{.spec.ports[?(@.name=="http2")].port}')
+export GATEWAY_URL_EXTERNAL=$INGRESS_HOST_EXTERNAL:$INGRESS_PORT_EXTERNAL
+```
+
+Retrieve the external address of the sample application:
+
+```bash
+echo "http://$GATEWAY_URL_EXTERNAL/productpage"
+```
+
+Navigate to the URL from the output of the previous command and confirm that the sample application's product page is displayed. Alternatively, you can also use `curl` to confirm the sample application is accessible. For example:
+
+```bash
+curl -s "http://${GATEWAY_URL_EXTERNAL}/productpage" | grep -o "<title>.*</title>"
+```
+
+Confirm that the sample application's product page is accessible. The expected output is:
+
+```html
+<title>Simple Bookstore App</title>
+```
+
+## Enable internal ingress gateway
+
+Use `az aks mesh enable-ingress-gateway` to enable an internal Istio ingress on your AKS cluster:
+
+```azurecli-interactive
+az aks mesh enable-ingress-gateway --resource-group $RESOURCE_GROUP --name $CLUSTER --ingress-gateway-type internal
+```
++
+Use `kubectl get svc` to check the service mapped to the ingress gateway:
+
+```bash
+kubectl get svc aks-istio-ingressgateway-internal -n aks-istio-ingress
+```
+
+Observe from the output that the external IP address of the service is a private address, reachable only from within the cluster's virtual network:
+
+```
+NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
+aks-istio-ingressgateway-internal LoadBalancer 10.0.182.240 <IP> 15021:30764/TCP,80:32186/TCP,443:31713/TCP 87s
+```
+
+Applications aren't mapped to the Istio ingress gateway after enabling the ingress gateway. Use the following manifest to map the sample deployment's ingress to the Istio ingress gateway:
+
+```bash
+kubectl apply -f - <<EOF
+apiVersion: networking.istio.io/v1alpha3
+kind: Gateway
+metadata:
+ name: bookinfo-internal-gateway
+spec:
+ selector:
+ istio: aks-istio-ingressgateway-internal
+ servers:
+ - port:
+ number: 80
+ name: http
+ protocol: HTTP
+ hosts:
+ - "*"
+---
+apiVersion: networking.istio.io/v1alpha3
+kind: VirtualService
+metadata:
+ name: bookinfo-vs-internal
+spec:
+ hosts:
+ - "*"
+ gateways:
+ - bookinfo-internal-gateway
+ http:
+ - match:
+ - uri:
+ exact: /productpage
+ - uri:
+ prefix: /static
+ - uri:
+ exact: /login
+ - uri:
+ exact: /logout
+ - uri:
+ prefix: /api/v1/products
+ route:
+ - destination:
+ host: productpage
+ port:
+ number: 9080
+EOF
+```
+
+> [!NOTE]
+> The selector used in the Gateway object points to `istio: aks-istio-ingressgateway-internal`, which can be found as a label on the service mapped to the internal ingress that was enabled earlier.
+
+Set environment variables for internal ingress host and ports:
+
+```bash
+export INGRESS_HOST_INTERNAL=$(kubectl -n aks-istio-ingress get service aks-istio-ingressgateway-internal -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
+export INGRESS_PORT_INTERNAL=$(kubectl -n aks-istio-ingress get service aks-istio-ingressgateway-internal -o jsonpath='{.spec.ports[?(@.name=="http2")].port}')
+export GATEWAY_URL_INTERNAL=$INGRESS_HOST_INTERNAL:$INGRESS_PORT_INTERNAL
+```
+
+Retrieve the address of the sample application:
+
+```bash
+echo "http://$GATEWAY_URL_INTERNAL/productpage"
+```
+
+Navigate to the URL from the output of the previous command and confirm that the sample application's product page is **NOT** displayed. Alternatively, you can also use `curl` to confirm the sample application is **NOT** accessible. For example:
+
+```bash
+curl -s "http://${GATEWAY_URL_INTERNAL}/productpage" | grep -o "<title>.*</title>"
+```
+
+Use `kubectl exec` to confirm the application is accessible from inside the cluster's virtual network:
+
+```bash
+kubectl exec "$(kubectl get pod -l app=ratings -o jsonpath='{.items[0].metadata.name}')" -c ratings -- curl -sS "http://$GATEWAY_URL_INTERNAL/productpage" | grep -o "<title>.*</title>"
+```
+
+Confirm that the sample application's product page is accessible. The expected output is:
+
+```html
+<title>Simple Bookstore App</title>
+```
+
+## Delete resources
+
+If you want to clean up the Istio service mesh and the ingresses (leaving behind the cluster), run the following command:
+
+```azurecli-interactive
+az aks mesh disable --resource-group ${RESOURCE_GROUP} --name ${CLUSTER}
+```
+
+If you want to clean up all the resources created from the Istio how-to guidance documents, run the following command:
+
+```azurecli-interactive
+az group delete --name ${RESOURCE_GROUP} --yes --no-wait
+```
+
+[istio-deploy-addon]: istio-deploy-addon.md
aks Open Service Mesh About https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/open-service-mesh-about.md
Last updated 04/06/2023
-# Open Service Mesh (OSM) add-on in Azure Kubernetes Service (OSM)
+# Open Service Mesh (OSM) add-on in Azure Kubernetes Service (AKS)
[Open Service Mesh (OSM)](https://docs.openservicemesh.io/) is a lightweight, extensible, cloud native service mesh that allows you to uniformly manage, secure, and get out-of-the-box observability features for highly dynamic microservice environments.
The OSM AKS add-on has the following limitations:
- After installation, you must enable Iptables redirection for port IP address and port range exclusion using `kubectl patch`. For more information, see [iptables redirection][ip-tables-redirection].
- Any pods that need access to IMDS, Azure DNS, or the Kubernetes API server must have their IP addresses added to the global list of excluded outbound IP ranges using [Global outbound IP range exclusions][global-exclusion].
+- The add-on doesn't work on AKS clusters that are using [Istio based service mesh addon for AKS][istio-about].
- OSM doesn't support Windows Server containers.

## Next steps
After enabling the OSM add-on using the [Azure CLI][osm-azure-cli] or a [Bicep t
[osm-contour]: https://release-v1-2.docs.openservicemesh.io/docs/demos/ingress_contour
[osm-nginx]: https://release-v1-2.docs.openservicemesh.io/docs/demos/ingress_k8s_nginx
[web-app-routing]: web-app-routing.md
+[istio-about]: istio-about.md
aks Supported Kubernetes Versions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/supported-kubernetes-versions.md
Title: Supported Kubernetes versions in Azure Kubernetes Service
-description: Understand the Kubernetes version support policy and lifecycle of clusters in Azure Kubernetes Service (AKS)
+ Title: Supported Kubernetes versions in Azure Kubernetes Service (AKS)
+description: Learn the Kubernetes version support policy and lifecycle of clusters in Azure Kubernetes Service (AKS).
Last updated 11/21/2022
Aim to run the latest patch release of the minor version you're running. For exa
View the upcoming version releases on the AKS Kubernetes release calendar. To see real-time updates of region release status and version release notes, visit the [AKS release status webpage][aks-release]. To learn more about the release status webpage, see [AKS release tracker][aks-tracker].

> [!NOTE]
-> AKS follows 12 months of support for a generally available (GA) Kubernetes version. To read more about our support policy for Kubernetes versioning, please read our [FAQ](https://learn.microsoft.com/azure/aks/supported-kubernetes-versions?tabs=azure-cli#faq).
+> AKS follows 12 months of support for a generally available (GA) Kubernetes version. To read more about our support policy for Kubernetes versioning, please read our [FAQ](./supported-kubernetes-versions.md#faq).
For the past release history, see [Kubernetes history](https://en.wikipedia.org/wiki/Kubernetes#History).
With AKS, you can create a cluster without specifying the exact patch version. When you create a cluster without designating a patch, the cluster will run the minor version's latest GA patch. For example, if you create a cluster with **`1.21`**, your cluster will run **`1.21.7`**, which is the latest GA patch version of *1.21*.
-When you upgrade by alias minor version, only a higher minor version is supported. For example, upgrading from `1.14.x` to `1.14` won't trigger an upgrade to the latest GA `1.14` patch, but upgrading to `1.15` will trigger an upgrade to the latest GA `1.15` patch. If you wish to upgrade your patch version in the same minor version, please use [auto-upgrade](https://learn.microsoft.com/azure/aks/auto-upgrade-cluster#using-cluster-auto-upgrade).
+When you upgrade by alias minor version, only a higher minor version is supported. For example, upgrading from `1.14.x` to `1.14` won't trigger an upgrade to the latest GA `1.14` patch, but upgrading to `1.15` will trigger an upgrade to the latest GA `1.15` patch. If you wish to upgrade your patch version in the same minor version, please use [auto-upgrade](./auto-upgrade-cluster.md#using-cluster-auto-upgrade).
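
For example, here's a minimal sketch of pinning a new cluster to an alias minor version; the resource names are illustrative:

```azurecli
az aks create --resource-group myResourceGroup --name myAKSCluster --kubernetes-version 1.21 --generate-ssh-keys
```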
To see what patch you're on, run the `az aks show --resource-group myResourceGroup --name myAKSCluster` command. The `currentKubernetesVersion` property shows the whole Kubernetes version.
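
For example, to print only that property (same illustrative resource names):

```azurecli
az aks show --resource-group myResourceGroup --name myAKSCluster --query currentKubernetesVersion --output tsv
```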
## Kubernetes version support policy
-AKS defines a GA version as a version enabled in all SLO or SLA measurements and available in all regions. AKS supports three GA minor versions of Kubernetes:
+AKS defines a generally available (GA) version as a version available in all regions and enabled in all SLO or SLA measurements. AKS supports three GA minor versions of Kubernetes:
-* The latest GA minor version released in AKS (which we'll refer to as N).
+* The latest GA minor version released in AKS (which we'll refer to as *N*).
* Two previous minor versions.
- * Each supported minor version also supports a maximum of two (2) stable patches.
+ * Each supported minor version also supports a maximum of two stable patches.
AKS may also support preview versions, which are explicitly labeled and subject to [preview terms and conditions][preview-terms].

> [!NOTE]
> AKS uses safe deployment practices which involve gradual region deployment. This means it may take up to 10 business days for a new release or a new version to be available in all regions.
-The supported window of Kubernetes versions on AKS is known as "N-2": (N (Latest release) - 2 (minor versions)).
+The supported window of Kubernetes versions on AKS is known as "N-2" (N, the latest release, minus 2 minor versions), where ".letter" represents patch versions.
For example, if AKS introduces *1.17.a* today, support is provided for the following versions:
When a new minor version is introduced, the oldest minor version and patch relea
When AKS releases 1.18.\*, all the 1.15.\* versions go out of support 30 days later. > [!NOTE]
-> If customers are running an unsupported Kubernetes version, they'll be asked to upgrade when requesting support for the cluster. Clusters running unsupported Kubernetes releases aren't covered by the [AKS support policies](./support-policies.md).
+> If you're running an unsupported Kubernetes version, you'll be asked to upgrade when requesting support for the cluster. Clusters running unsupported Kubernetes releases aren't covered by the [AKS support policies](./support-policies.md).
-In addition to the above, AKS supports a maximum of two **patch** releases of a given minor version. So given the following supported versions:
+AKS also supports a maximum of two **patch** releases of a given minor version. For example, given the following supported versions:
```
Current Supported Version List
Install-AzAksKubectl -Version latest
+## Long Term Support (LTS)
+
+AKS provides a Long Term Support (LTS) version of Kubernetes for a two-year period. There's only a single minor version of Kubernetes deemed LTS at any one time.
+
+| | Community Support |Long Term Support |
+||||
+| **When to use** | When you can keep up with upstream Kubernetes releases | When you need control over when to migrate from one version to another |
+| **Support versions** | Three GA minor versions | One Kubernetes version (currently *1.27*) for two years |
+| **Pricing** | Included | Per hour cluster cost |
+
+The upstream community maintains a minor release of Kubernetes for one year from release. After this period, Microsoft creates and applies security updates to the LTS version of Kubernetes to provide a total of two years of support on AKS.
+
+> [!IMPORTANT]
+> AKS will begin its support for the LTS version of Kubernetes upon the release of Kubernetes version 1.27.
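+
+As a hedged sketch, opting a new cluster into LTS might look like the following; the `--tier premium` and `--k8s-support-plan AKSLongTermSupport` parameters are assumptions about the CLI surface at LTS launch and may require a recent `aks-preview` extension:
+
+```azurecli
+az aks create --resource-group myResourceGroup --name myAKSCluster --tier premium --k8s-support-plan AKSLongTermSupport --kubernetes-version 1.27
+```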
+
## Release and deprecation process

You can reference upcoming version releases and deprecations on the [AKS Kubernetes release calendar](#aks-kubernetes-release-calendar). For new **minor** versions of Kubernetes:
-* AKS publishes a pre-announcement with the planned date of the new version release and respective old version deprecation. This announcement is published on the [AKS release notes](https://aka.ms/aks/releasenotes) at least 30 days before removal.
-* AKS uses [Azure Advisor](../advisor/advisor-overview.md) to alert users if a new version will cause issues in their cluster because of deprecated APIs. Azure Advisor is also used to alert the user if they're currently out of support.
+* AKS publishes an announcement with the planned date of a new version release and respective old version deprecation on the [AKS Release notes](https://aka.ms/aks/releasenotes) at least 30 days prior to removal.
+* AKS uses [Azure Advisor](../advisor/advisor-overview.md) to alert you if a new version could cause issues in your cluster because of deprecated APIs. Azure Advisor also alerts you if you're out of support.
* AKS publishes a [service health notification](../service-health/service-health-overview.md) available to all users with AKS and portal access and sends an email to the subscription administrators with the planned version removal dates.
  > [!NOTE]
- > Visit [manage Azure subscriptions](../cost-management-billing/manage/add-change-subscription-administrator.md#assign-a-subscription-administrator) to determine who your subscription administrators are and make any necessary changes.
-
-* Users have **30 days** from version removal to upgrade to a supported minor version release to continue receiving support.
+ > To find out who your subscription administrators are, or to change them, see [manage Azure subscriptions](../cost-management-billing/manage/add-change-subscription-administrator.md#assign-a-subscription-administrator).
+* You have **30 days** from version removal to upgrade to a supported minor version release to continue receiving support.
For new **patch** versions of Kubernetes:
-* Because of the urgent nature of patch versions, they can be introduced into the service as they become available. Once available, patches will have a two month minimum lifecycle.
-* In general, AKS doesn't broadly communicate the release of new patch versions. However, AKS constantly monitors and validates available CVE patches to support them in AKS in a timely manner. If a critical patch is found or user action is required, AKS will notify users to upgrade to the newly available patch.
-* Users have **30 days** from a patch release's removal from AKS to upgrade into a supported patch and continue receiving support. However, you'll **no longer be able to create clusters or node pools once the version is deprecated/removed.**
+* Because of the urgent nature of patch versions, they can be introduced into the service as they become available. Once available, patches have a two month minimum lifecycle.
+* In general, AKS doesn't broadly communicate the release of new patch versions. However, AKS constantly monitors and validates available CVE patches to support them in AKS in a timely manner. If a critical patch is found or user action is required, AKS will notify you to upgrade to the newly available patch.
+* You have **30 days** from a patch release's removal from AKS to upgrade into a supported patch and continue receiving support. However, you'll **no longer be able to create clusters or node pools once the version is deprecated/removed.**
### Supported versions policy exceptions
When you deploy an AKS cluster with Azure portal, Azure CLI, Azure PowerShell, t
### [Azure CLI](#tab/azure-cli)

To find out what versions are currently available for your subscription and region, use the
-[az aks get-versions][az-aks-get-versions] command. The following example lists available Kubernetes versions for the *EastUS* region:
+[`az aks get-versions`][az-aks-get-versions] command. The following example lists the available Kubernetes versions for the *EastUS* region:
```azurecli-interactive
az aks get-versions --location eastus --output table
Get-AzAksVersion -Location eastus
### How does Microsoft notify me of new Kubernetes versions?
-The AKS team publishes pre-announcements with planned dates of the new Kubernetes versions in the AKS docs, our [GitHub](https://github.com/Azure/AKS/releases), and emails to subscription administrators who own clusters that are going to fall out of support. AKS also uses [Azure Advisor](../advisor/advisor-overview.md) to alert customers in the Azure portal to notify users if they're out of support. It also alerts them of deprecated APIs that will affect their application or development processes.
+The AKS team publishes announcements with planned dates of the new Kubernetes versions in our documentation, our [GitHub](https://github.com/Azure/AKS/releases), and in emails to subscription administrators who own clusters that are going to fall out of support. AKS also uses [Azure Advisor](../advisor/advisor-overview.md) to alert you inside the Azure portal if you're out of support and inform you of deprecated APIs that will affect your application or development process.
### How often should I expect to upgrade Kubernetes versions to stay in support?
-Starting with Kubernetes 1.19, the [open source community has expanded support to one year](https://kubernetes.io/blog/2020/08/31/kubernetes-1-19-feature-one-year-support/). AKS commits to enabling patches and support matching the upstream commitments. For AKS clusters on 1.19 and greater, you'll be able to upgrade at a minimum of once a year to stay on a supported version.
+Starting with Kubernetes 1.19, the [open source community has expanded support to one year](https://kubernetes.io/blog/2020/08/31/kubernetes-1-19-feature-one-year-support/). AKS commits to enabling patches and support matching the upstream commitments. For AKS clusters on 1.19 and greater, you can upgrade at a minimum of once a year to stay on a supported version.
-### What happens when a user upgrades a Kubernetes cluster with a minor version that isn't supported?
+### What happens when you upgrade a Kubernetes cluster with a minor version that isn't supported?
If you're on the *n-3* version or older, it means you're outside of support and will be asked to upgrade. When your upgrade from version n-3 to n-2 succeeds, you're back within our support policies. For example:
For information on how to upgrade your cluster, see [Upgrade an Azure Kubernetes
<!-- LINKS - Internal -->
[aks-upgrade]: upgrade-cluster.md
+[az-aks-create]: /cli/azure/aks#az_aks_create
+[az-aks-update]: /cli/azure/aks#az_aks_update
[az-extension-add]: /cli/azure/extension#az_extension_add
[az-extension-update]: /cli/azure/extension#az-extension-update
[az-aks-get-versions]: /cli/azure/aks#az_aks_get_versions
aks Use Azure Ad Pod Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-azure-ad-pod-identity.md
Title: Use Azure Active Directory pod-managed identities in Azure Kubernetes Ser
description: Learn how to use Azure AD pod-managed identities in Azure Kubernetes Service (AKS) Previously updated : 11/01/2022 Last updated : 03/23/2023 # Use Azure Active Directory pod-managed identities in Azure Kubernetes Service (Preview)
Last updated 11/01/2022
Azure Active Directory (Azure AD) pod-managed identities use Kubernetes primitives to associate [managed identities for Azure resources][az-managed-identities] and identities in Azure AD with pods. Administrators create identities and bindings as Kubernetes primitives that allow pods to access Azure resources that rely on Azure AD as an identity provider.

> [!NOTE]
-> We recommend you review [Azure AD workload identity][workload-identity-overview] (preview).
+> We recommend you review [Azure AD workload identity][workload-identity-overview].
> This authentication method replaces pod-managed identity (preview), which integrates with the > Kubernetes native capabilities to federate with any external identity providers on behalf of the > application.
aks Workload Identity Deploy Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/workload-identity-deploy-cluster.md
Title: Deploy and configure an Azure Kubernetes Service (AKS) cluster with workload identity (preview)
-description: In this Azure Kubernetes Service (AKS) article, you deploy an Azure Kubernetes Service cluster and configure it with an Azure AD workload identity (preview).
+ Title: Deploy and configure an Azure Kubernetes Service (AKS) cluster with workload identity
+description: In this Azure Kubernetes Service (AKS) article, you deploy an Azure Kubernetes Service cluster and configure it with an Azure AD workload identity.
- Previously updated : 04/12/2023 Last updated : 04/18/2023+
-# Deploy and configure workload identity (preview) on an Azure Kubernetes Service (AKS) cluster
+# Deploy and configure workload identity on an Azure Kubernetes Service (AKS) cluster
Azure Kubernetes Service (AKS) is a managed Kubernetes service that lets you quickly deploy and manage Kubernetes clusters. In this article, you will:
-* Deploy an AKS cluster using the Azure CLI that includes the OpenID Connect Issuer and an Azure AD workload identity (preview)
+* Deploy an AKS cluster using the Azure CLI that includes the OpenID Connect Issuer and an Azure AD workload identity
* Grant access to your Azure Key Vault * Create an Azure Active Directory (Azure AD) workload identity and Kubernetes service account * Configure the managed identity for token federation.
-This article assumes you have a basic understanding of Kubernetes concepts. For more information, see [Kubernetes core concepts for Azure Kubernetes Service (AKS)][kubernetes-concepts]. If you aren't familiar with Azure AD workload identity (preview), see the following [Overview][workload-identity-overview] article.
+This article assumes you have a basic understanding of Kubernetes concepts. For more information, see [Kubernetes core concepts for Azure Kubernetes Service (AKS)][kubernetes-concepts]. If you aren't familiar with Azure AD workload identity, see the following [Overview][workload-identity-overview] article.
- This article requires version 2.40.0 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.
This article assumes you have a basic understanding of Kubernetes concepts. For
- If you have multiple Azure subscriptions, select the appropriate subscription ID in which the resources should be billed using the [az account][az-account] command.
-## Install the aks-preview Azure CLI extension
--
-To install the aks-preview extension, run the following command:
-
-```azurecli
-az extension add --name aks-preview
-```
-
-Run the following command to update to the latest version of the extension released:
-
-```azurecli
-az extension update --name aks-preview
-```
-
-## Register the 'EnableWorkloadIdentityPreview' feature flag
-
-Register the `EnableWorkloadIdentityPreview` feature flag by using the [az feature register][az-feature-register] command, as shown in the following example:
-
-```azurecli-interactive
-az feature register --namespace "Microsoft.ContainerService" --name "EnableWorkloadIdentityPreview"
-```
-
-It takes a few minutes for the status to show *Registered*. Verify the registration status by using the [az feature show][az-feature-show] command:
-
-```azurecli-interactive
-az feature show --namespace "Microsoft.ContainerService" --name "EnableWorkloadIdentityPreview"
-```
-
-When the status reflects *Registered*, refresh the registration of the *Microsoft.ContainerService* resource provider by using the [az provider register][az-provider-register] command:
-
-```azurecli-interactive
-az provider register --namespace Microsoft.ContainerService
-```
- ## Create AKS cluster Create an AKS cluster using the [az aks create][az-aks-create] command with the `--enable-oidc-issuer` parameter to use the OIDC Issuer. The following example creates a cluster named *myAKSCluster* with one node in the *myResourceGroup*:
aks Workload Identity Migrate From Pod Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/workload-identity-migrate-from-pod-identity.md
Last updated 03/14/2023
-# Migrate from pod managed-identity to workload identity (preview)
+# Migrate from pod managed-identity to workload identity
This article focuses on migrating from a pod-managed identity to Azure Active Directory (Azure AD) workload identity for your Azure Kubernetes Service (AKS) cluster. It also provides guidance depending on the version of the [Azure Identity][azure-identity-supported-versions] client library used by your container-based application.

## Before you begin

- The Azure CLI version 2.40.0 or later. Run `az --version` to find the version, and run `az upgrade` to upgrade the version. If you need to install or upgrade, see [Install Azure CLI][install-azure-cli].
az identity federated-credential create --name federatedIdentityName --identity-
## Deploy the workload with migration sidecar
+> [!NOTE]
+> The migration sidecar is **not supported for production use**. It's designed to give you time to migrate your application SDKs to a supported version, not to serve as a long-running solution.
+ If your application is using managed identity and still relies on IMDS to get an access token, you can use the workload identity migration sidecar to start migrating to workload identity. The sidecar is a migration aid; in the long term, you should modify your applications' code to use the latest Azure Identity SDKs that support client assertion. To update or deploy the workload, add these pod annotations only if you want to use the migration sidecar. You inject the following [annotation][pod-annotations] values to use the sidecar in your pod specification, as sketched below:
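+
+A minimal sketch of a pod spec carrying these annotations follows; the annotation keys come from the Azure Workload Identity project, while the pod name, image, and service account name are hypothetical placeholders:
+
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+  name: sample-workload                                      # hypothetical
+  labels:
+    azure.workload.identity/use: "true"
+  annotations:
+    azure.workload.identity/inject-proxy-sidecar: "true"     # injects the migration proxy sidecar
+    azure.workload.identity/proxy-sidecar-port: "8000"       # port the proxy listens on (assumed default)
+spec:
+  serviceAccountName: workload-identity-sa                   # hypothetical federated service account
+  containers:
+  - name: app
+    image: <your-application-image>
+```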
aks Workload Identity Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/workload-identity-overview.md
Title: Use an Azure AD workload identity (preview) on Azure Kubernetes Service (AKS) description: Learn about Azure Active Directory workload identity (preview) for Azure Kubernetes Service (AKS) and how to migrate your application to authenticate using this identity. Previously updated : 03/27/2023 Last updated : 04/18/2023
-# Use Azure AD workload identity (preview) with Azure Kubernetes Service (AKS)
+# Use Azure AD workload identity with Azure Kubernetes Service (AKS)
-Today with Azure Kubernetes Service (AKS), you can assign [managed identities at the pod-level][use-azure-ad-pod-identity], which has been a preview feature. This pod-managed identity allows the hosted workload or application access to resources through Azure Active Directory (Azure AD). For example, a workload stores files in Azure Storage, and when it needs to access those files, the pod authenticates itself against the resource as an Azure managed identity. This authentication method has been replaced with [Azure Active Directory (Azure AD) workload identities][azure-ad-workload-identity] (preview), which integrate with the Kubernetes native capabilities to federate with any external identity providers. This approach is simpler to use and deploy, and overcomes several limitations in Azure AD pod-managed identity:
+Workloads deployed on an Azure Kubernetes Service (AKS) cluster require Azure Active Directory (Azure AD) application credentials or managed identities to access Azure AD protected resources, such as Azure Key Vault and Microsoft Graph. Azure AD workload identity integrates with the capabilities native to Kubernetes to federate with external identity providers.
-- Removes the scale and performance issues that existed for identity assignment
-- Supports Kubernetes clusters hosted in any cloud or on-premises
-- Supports both Linux and Windows workloads
-- Removes the need for Custom Resource Definitions and pods that intercept [Azure Instance Metadata Service][azure-instance-metadata-service] (IMDS) traffic
-- Avoids the complicated and error-prone installation steps such as cluster role assignment from the previous iteration
+[Azure AD workload identity][azure-ad-workload-identity] uses [Service Account Token Volume Projection][service-account-token-volume-projection] enabling pods to use a Kubernetes identity (that is, a service account). A Kubernetes token is issued and [OIDC federation][oidc-federation] enables Kubernetes applications to access Azure resources securely with Azure AD based on annotated service accounts.
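+
+For illustration, a minimal sketch of the Kubernetes side of this federation follows. The `azure.workload.identity/client-id` annotation and the `azure.workload.identity/use` pod label are the documented workload identity hooks; the service account name and client ID are hypothetical placeholders:
+
+```yaml
+apiVersion: v1
+kind: ServiceAccount
+metadata:
+  name: workload-identity-sa                  # hypothetical name
+  namespace: default
+  annotations:
+    # Client ID of the user-assigned managed identity (or app registration) to federate with.
+    azure.workload.identity/client-id: <identity-client-id>
+```
+
+Pods reference this service account through `serviceAccountName` and opt in with the `azure.workload.identity/use: "true"` pod label.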
Azure AD workload identity works especially well with the Azure Identity client library using the [Azure SDK][azure-sdk-download] and the [Microsoft Authentication Library][microsoft-authentication-library] (MSAL) if you're using [application registration][azure-ad-application-registration]. Your workload can use any of these libraries to seamlessly authenticate and access Azure cloud resources.
-This article helps you understand this new authentication feature, and reviews the options available to plan your migration phases and project strategy.
-
+This article helps you understand this new authentication feature, and reviews the options available to plan your project strategy and potential migration from Azure AD pod-managed identity.
## Dependencies

- AKS supports Azure AD workload identities on version 1.22 and higher.

-- The Azure CLI version 2.40.0 or later. Run `az --version` to find the version, and run `az upgrade` to upgrade the version. If you need to install or upgrade, see [Install Azure CLI][install-azure-cli].
+- The Azure CLI version 2.47.0 or later. Run `az --version` to find the version, and run `az upgrade` to upgrade the version. If you need to install or upgrade, see [Install Azure CLI][install-azure-cli].
+
+## Azure Identity SDK
+
+The following client libraries are the **minimum** versions required:
-- The `aks-preview` extension version 0.5.102 or later.
+| Language | Library | Minimum Version | Example |
+|--|--|-|-|
+| Go | [azure-sdk-for-go](https://github.com/Azure/azure-sdk-for-go) | [sdk/azidentity/v1.3.0-beta.1](https://github.com/Azure/azure-sdk-for-go/releases/tag/sdk/azidentity/v1.3.0-beta.1)| [Link](https://github.com/Azure/azure-workload-identity/tree/main/examples/azure-identity/go) |
+| C# | [azure-sdk-for-net](https://github.com/Azure/azure-sdk-for-net) | [Azure.Identity_1.5.0](https://github.com/Azure/azure-sdk-for-net/releases/tag/Azure.Identity_1.5.0)| [Link](https://github.com/Azure/azure-workload-identity/tree/main/examples/azure-identity/dotnet) |
+| JavaScript/TypeScript | [azure-sdk-for-js](https://github.com/Azure/azure-sdk-for-js) | [@azure/identity_2.0.0](https://github.com/Azure/azure-sdk-for-js/releases/tag/@azure/identity_2.0.0) | [Link](https://github.com/Azure/azure-workload-identity/tree/main/examples/azure-identity/node) |
+| Python | [azure-sdk-for-python](https://github.com/Azure/azure-sdk-for-python) | [azure-identity_1.7.0](https://github.com/Azure/azure-sdk-for-python/releases/tag/azure-identity_1.7.0) | [Link](https://github.com/Azure/azure-workload-identity/tree/main/examples/azure-identity/python) |
+| Java | [azure-sdk-for-java](https://github.com/Azure/azure-sdk-for-java) | [azure-identity_1.4.0](https://github.com/Azure/azure-sdk-for-java/releases/tag/azure-identity_1.4.0) | [Link](https://github.com/Azure/azure-workload-identity/tree/main/examples/azure-identity/java) |
-- The following are the minimum versions of the [Azure Identity][azure-identity-libraries] client library supported:
+## Microsoft Authentication Library (MSAL)
- * [.NET][dotnet-azure-identity-client-library] 1.5.0
- * [Java][java-azure-identity-client-library] 1.4.0
- * [JavaScript][javascript-azure-identity-client-library] 2.0.0
- * [Python][python-azure-identity-client-library] 1.7.0
+The following client libraries are the **minimum** versions required:
+
+| Language | Library | Image | Example | Has Windows |
+|--|--|-|-|-|
+| Go | [microsoft-authentication-library-for-go](https://github.com/AzureAD/microsoft-authentication-library-for-go) | ghcr.io/azure/azure-workload-identity/msal-go | [Link](https://github.com/Azure/azure-workload-identity/tree/main/examples/msal-go) | Yes |
+| C# | [microsoft-authentication-library-for-dotnet](https://github.com/AzureAD/microsoft-authentication-library-for-dotnet) | ghcr.io/azure/azure-workload-identity/msal-net | [Link](https://github.com/Azure/azure-workload-identity/tree/main/examples/msal-net/akvdotnet) | Yes |
+| JavaScript/TypeScript | [microsoft-authentication-library-for-js](https://github.com/AzureAD/microsoft-authentication-library-for-js) | ghcr.io/azure/azure-workload-identity/msal-node | [Link](https://github.com/Azure/azure-workload-identity/tree/main/examples/msal-node) | No |
+| Python | [microsoft-authentication-library-for-python](https://github.com/AzureAD/microsoft-authentication-library-for-python) | ghcr.io/azure/azure-workload-identity/msal-python | [Link](https://github.com/Azure/azure-workload-identity/tree/main/examples/msal-python) | No |
+| Java | [microsoft-authentication-library-for-java](https://github.com/AzureAD/microsoft-authentication-library-for-java) | ghcr.io/azure/azure-workload-identity/msal-java | [Link](https://github.com/Azure/azure-workload-identity/tree/main/examples/msal-java) | No |
## Limitations

- You can only have 20 federated identity credentials per managed identity.
- It takes a few seconds for the federated identity credential to be propagated after being initially added.
-## Language SDK examples
- - [Azure Identity SDK](https://azure.github.io/azure-workload-identity/docs/topics/language-specific-examples/azure-identity-sdk.html)
- - [MSAL](https://azure.github.io/azure-workload-identity/docs/topics/language-specific-examples/msal.html)
-
## How it works

In this security model, the AKS cluster acts as token issuer. Azure Active Directory uses OpenID Connect to discover public signing keys and verify the authenticity of the service account token before exchanging it for an Azure AD token. Your workload can exchange a service account token projected to its volume for an Azure AD token using the Azure Identity client library or the Microsoft Authentication Library.
The following diagram summarizes the authentication sequence using OpenID Connect.
### Webhook Certificate Auto Rotation
-Similar to other webhook addons, the certificate will be rotated by cluster certificate [auto rotation](https://learn.microsoft.com/azure/aks/certificate-rotation#certificate-auto-rotation) operation.
+Similar to other webhook add-ons, the certificate is rotated by the cluster certificate [auto rotation][auto-rotation] operation.
## Service account labels and annotations
The following table summarizes our migration or deployment recommendations for workload identity.
<!-- EXTERNAL LINKS -->
[azure-sdk-download]: https://azure.microsoft.com/downloads/
[custom-resource-definition]: https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/
+[service-account-token-volume-projection]: https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/#serviceaccount-token-volume-projection
+[oidc-federation]: https://kubernetes.io/docs/reference/access-authn-authz/authentication/#openid-connect-tokens
<!-- INTERNAL LINKS -->
[use-azure-ad-pod-identity]: use-azure-ad-pod-identity.md
[azure-ad-workload-identity]: ../active-directory/develop/workload-identities-overview.md
-[azure-instance-metadata-service]: ../virtual-machines/linux/instance-metadata-service.md
[microsoft-authentication-library]: ../active-directory/develop/msal-overview.md
[azure-ad-application-registration]: ../active-directory/develop/application-model.md#register-an-application
[install-azure-cli]: /cli/azure/install-azure-cli
The following table summarizes our migration or deployment recommendations for workload identity.
[deploy-configure-workload-identity-new-cluster]: workload-identity-deploy-cluster.md
[tutorial-use-workload-identity]: ./learn/tutorial-kubernetes-workload-identity.md
[workload-identity-migration-sidecar]: workload-identity-migrate-from-pod-identity.md
-[dotnet-azure-identity-client-library]: /dotnet/api/overview/azure/identity-readme
-[java-azure-identity-client-library]: /java/api/overview/azure/identity-readme
-[javascript-azure-identity-client-library]: /javascript/api/overview/azure/identity-readme
-[python-azure-identity-client-library]: /python/api/overview/azure/identity-readme
+[auto-rotation]: certificate-rotation.md#certificate-auto-rotation
analysis-services Analysis Services Addservprinc Admins https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/analysis-services/analysis-services-addservprinc-admins.md
The following Resource Manager template deploys an Analysis Services server with
## Using managed identities
-A managed identity can also be added to the Analysis Services Admins list. For example, you might have a [Logic App with a system-assigned managed identity](../logic-apps/create-managed-service-identity.md), and want to grant it the ability to administer your server.
-
-In most parts of the Azure portal and APIs, managed identities are identified using their service principal object ID. However, Analysis Services requires that they be identified using their client ID. To obtain the client ID for a service principal, you can use the Azure CLI:
-
-```azurecli
-az ad sp show --id <ManagedIdentityServicePrincipalObjectId> --query appId -o tsv
-```
-
-Alternatively you can use PowerShell:
-
-```powershell
-(Get-AzureADServicePrincipal -ObjectId <ManagedIdentityServicePrincipalObjectId>).AppId
-```
-
-You can then use this client ID in conjunction with the tenant ID to add the managed identity to the Analysis Services Admins list, as described above.
+Managed identities that are added to database or server roles can't sign in to the service or perform any operations. Managed identities for service principals aren't supported in Azure Analysis Services.
## Related information
analysis-services Analysis Services Service Principal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/analysis-services/analysis-services-service-principal.md
Service principals are an Azure Active Directory application resource you create
In Analysis Services, service principals are used with Azure Automation, PowerShell unattended mode, custom client applications, and web apps to automate common tasks. For example, provisioning servers, deploying models, data refresh, scale up/down, and pause/resume can all be automated by using service principals. Permissions are assigned to service principals through role membership, much like regular Azure AD UPN accounts.
-Analysis Services also supports operations performed by managed identities using service principals. To learn more, see [Managed identities for Azure resources](../active-directory/managed-identities-azure-resources/overview.md) and [Azure services that support Azure AD authentication](../active-directory/managed-identities-azure-resources/services-support-managed-identities.md#azure-analysis-services).
+Analysis Services does not support operations performed by managed identities using service principals. To learn more, see [Managed identities for Azure resources](../active-directory/managed-identities-azure-resources/overview.md) and [Azure services that support Azure AD authentication](../active-directory/managed-identities-azure-resources/services-support-managed-identities.md#azure-analysis-services).
## Create service principals
api-management Api Management Gateways Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-gateways-overview.md
Managed and self-hosted gateways support all available [policies](api-management
| Policy | Managed (Dedicated) | Managed (Consumption) | Self-hosted<sup>1</sup> |
| --- | --- | --- | --- |
| [Dapr integration](api-management-policies.md#dapr-integration-policies) | ❌ | ❌ | ✔️ |
-| [Get authorization context](get-authorization-context-policy.md) | ✔️ | ❌ | ❌ |
+| [Get authorization context](get-authorization-context-policy.md) | ✔️ | ✔️ | ❌ |
| [Quota and rate limit](api-management-policies.md#access-restriction-policies) | ✔️ | ✔️<sup>2</sup> | ✔️<sup>3</sup> |
| [Set GraphQL resolver](set-graphql-resolver-policy.md) | ✔️ | ❌ | ❌ |
api-management Authentication Authorization Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/authentication-authorization-overview.md
There are different reasons for wanting to do this. For example:
### Token management by API Management
-API Management also supports acquisition and secure storage of OAuth 2.0 tokens for certain downstream services using the [authorizations](authorizations-overview.md) (preview) feature, including through use of custom policies and caching.
+API Management also supports acquisition and secure storage of OAuth 2.0 tokens for certain downstream services using the [authorizations](authorizations-overview.md) feature, including through use of custom policies and caching.
-With authorizations, API Management manages the tokens for access to OAuth 2.0 backends, simplifying the development of client apps that access APIs.
+With authorizations, API Management manages the tokens for access to OAuth 2.0 backends, allowing you to delegate authentication to your API Management instance to simplify access by client apps to a given backend service or SaaS platform.
### Other options
api-management Authorizations Configure Common Providers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/authorizations-configure-common-providers.md
+
+ Title: Configure authorization providers - Azure API Management | Microsoft Docs
+description: Learn how to configure common identity providers for authorizations in Azure API Management. Example providers are Azure Active Directory and a generic OAuth 2.0 provider. An authorization manages authorization tokens to an OAuth 2.0 backend service.
++++ Last updated : 02/07/2023+++
+# Configure identity providers for API authorizations
+
+In this article, you learn about configuring identity providers for [authorizations](authorizations-overview.md) in your API Management instance. Settings for the following common providers are shown:
+
+* Azure AD provider
+* Generic OAuth 2.0 provider
+
+You add identity provider settings when configuring an authorization in your API Management instance. For a step-by-step example of configuring an Azure AD provider and authorization, see:
+
+* [Create an authorization with the Microsoft Graph API](authorizations-how-to-azure-ad.md)
+
+## Prerequisites
+
+To configure any of the supported providers in API Management, first configure an OAuth 2.0 app in the identity provider that will be used to authorize API access. For configuration details, see the provider's developer documentation.
+
+* If you're creating an authorization that uses the authorization code grant type, configure a **Redirect URL** (sometimes called Authorization Callback URL or a similar name) in the app. For the value, enter `https://authorization-manager.consent.azure-apim.net/redirect/apim/<YOUR-APIM-SERVICENAME>`.
+
+* Depending on your scenario, configure app settings such as scopes (API permissions).
+
+* Minimally, retrieve the following app credentials that will be configured in API Management: the app's **client ID** and **client secret**.
+
+* Depending on the provider and your scenario, you might need to retrieve other settings such as authorization endpoint URLs or scopes.
+
+## Azure AD provider
+
+Authorizations support the Azure AD identity provider, which is the identity service in Microsoft Azure that provides identity management and access control capabilities. It allows users to securely sign in using industry-standard protocols.
+
+* **Supported grant types**: authorization code, client credentials
+
+> [!NOTE]
+> Currently, the Azure AD authorization provider supports only the Azure AD v1.0 endpoints.
+
+
+### Azure AD provider settings
+
++
+## Generic OAuth 2.0 providers
+
+Authorizations support two generic providers:
+* Generic OAuth 2.0
+* Generic OAuth 2.0 with PKCE
+
+A generic provider allows you to use your own OAuth 2.0 identity provider based on your specific needs.
+
+> [!NOTE]
+> We recommend using the generic OAuth 2.0 with PKCE provider for improved security if your identity provider supports it. [Learn more](https://oauth.net/2/pkce/)
+
+* **Supported grant types**: authorization code, client credentials
+
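+Although API Management and the identity provider handle the PKCE exchange for you, the mechanics are simple to illustrate. The following sketch (an illustration only, not product code) derives a PKCE code challenge from a code verifier as defined in RFC 7636:
+
+```python
+import base64
+import hashlib
+import secrets
+
+# A high-entropy code verifier; RFC 7636 allows 43-128 characters.
+code_verifier = base64.urlsafe_b64encode(secrets.token_bytes(32)).rstrip(b"=").decode()
+
+# The code challenge is the unpadded base64url-encoded SHA-256 hash of the verifier.
+digest = hashlib.sha256(code_verifier.encode()).digest()
+code_challenge = base64.urlsafe_b64encode(digest).rstrip(b"=").decode()
+```
+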
+### Generic authorization provider settings
++
+## Other identity providers
+
+API Management supports several providers for popular SaaS offerings, such as GitHub. You can select from a list of these providers in the Azure portal when you create an authorization.
++
+**Supported grant types**: authorization code, client credentials (depends on provider)
+
+Required settings for these providers differ from provider to provider but are similar to those for the [generic OAuth 2.0 providers](#generic-oauth-20-providers). Consult the developer documentation for each provider.
+
+## Next steps
+
+* Learn more about [authorizations](authorizations-overview.md) in API Management.
+* Create an authorization for [Azure AD](authorizations-how-to-azure-ad.md) or [GitHub](authorizations-how-to-github.md).
api-management Authorizations How To Azure Ad https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/authorizations-how-to-azure-ad.md
+
+ Title: Create authorization with Microsoft Graph API - Azure API Management | Microsoft Docs
+description: Learn how to create and use an authorization to the Microsoft Graph API in Azure API Management. An authorization manages authorization tokens to an OAuth 2.0 backend service.
++++ Last updated : 04/10/2023+++
+# Create an authorization with the Microsoft Graph API
+
+This article guides you through the steps required to create an [authorization](authorizations-overview.md) with the Microsoft Graph API within Azure API Management. The authorization code grant type is used in this example.
+
+You learn how to:
+
+> [!div class="checklist"]
+> * Create an Azure AD application
+> * Create and configure an authorization in API Management
+> * Configure an access policy
+> * Create a Microsoft Graph API in API Management and configure a policy
+> * Test your Microsoft Graph API in API Management
+
+## Prerequisites
+
+- Access to an Azure Active Directory (Azure AD) tenant where you have permissions to create an app registration and to grant admin consent for the app's permissions. [Learn more](../active-directory/roles/delegate-app-roles.md#restrict-who-can-create-applications)
+
+ If you want to create your own developer tenant, you can sign up for the [Microsoft 365 Developer Program](https://developer.microsoft.com/microsoft-365/dev-program).
+- A running API Management instance. If you need to, [create an Azure API Management instance](get-started-create-service-instance.md).
+- Enable a [system-assigned managed identity](api-management-howto-use-managed-service-identity.md) for API Management in the API Management instance.
+
+## Step 1: Create an Azure AD application
+
+Create an Azure AD application for the API and give it the appropriate permissions for the requests that you want to call.
+
+1. Sign in to the [Azure portal](https://portal.azure.com/) with an account that has sufficient permissions in the tenant.
+1. Under **Azure Services**, search for **Azure Active Directory**.
+1. On the left menu, select **App registrations**, and then select **+ New registration**.
+ :::image type="content" source="media/authorizations-how-to-azure-ad/create-registration.png" alt-text="Screenshot of creating an Azure AD app registration in the portal.":::
+
+1. On the **Register an application** page, enter your application registration settings:
+ 1. In **Name**, enter a meaningful name that will be displayed to users of the app, such as *MicrosoftGraphAuth*.
+ 1. In **Supported account types**, select an option that suits your scenario, for example, **Accounts in this organizational directory only (Single tenant)**.
+ 1. Set the **Redirect URI** to **Web**, and enter `https://authorization-manager.consent.azure-apim.net/redirect/apim/<YOUR-APIM-SERVICENAME>`, substituting the name of the API Management service where you will configure the authorization provider.
+ 1. Select **Register**.
+1. On the left menu, select **API permissions**, and then select **+ Add a permission**.
+ :::image type="content" source="./media/authorizations-how-to-azure-ad/add-permission.png" alt-text="Screenshot of adding an API permission in the portal.":::
+
+ 1. Select **Microsoft Graph**, and then select **Delegated permissions**.
+ > [!NOTE]
+ > Make sure the permission **User.Read** with the type **Delegated** has already been added.
+ 1. Type **Team**, expand the **Team** options, and then select **Team.ReadBasic.All**. Select **Add permissions**.
+ 1. Next, select **Grant admin consent for Default Directory**. The status of the permissions will change to **Granted for Default Directory**.
+1. On the left menu, select **Overview**. On the **Overview** page, find the **Application (client) ID** value and record it for use in Step 2.
+1. On the left menu, select **Certificates & secrets**, and then select **+ New client secret**.
+ :::image type="content" source="media/authorizations-how-to-azure-ad/create-secret.png" alt-text="Screenshot of creating an app secret in the portal.":::
+
+ 1. Enter a **Description**.
+ 1. Select any option for **Expires**.
+ 1. Select **Add**.
+ 1. Copy the client secret's **Value** before leaving the page. You will need it in Step 2.
+
+## Step 2: Configure an authorization in API Management
+
+1. Sign in to the [portal](https://portal.azure.com) and go to your API Management instance.
+1. On the left menu, select **Authorizations**, and then select **+ Create**.
+ :::image type="content" source="media/authorizations-how-to-azure-ad/create-authorization.png" alt-text="Screenshot of creating an API authorization in the portal.":::
+1. On the **Create authorization** page, enter the following settings, and select **Create**:
+
+ |Settings |Value |
+ |||
+ |**Provider name** | A name of your choice, such as *aad-01* |
+ |**Identity provider** | Select **Azure Active Directory v1** |
+ |**Grant type** | Select **Authorization code** |
+ |**Client ID** | Paste the value you copied earlier from the app registration |
+ |**Client secret** | Paste the value you copied earlier from the app registration |
+ |**Resource URL** | `https://graph.microsoft.com` |
+ |**Tenant ID** | Optional for Azure AD identity provider. Default is *Common* |
+ |**Scopes** | Optional for Azure AD identity provider. Automatically configured from AD app's API permissions. |
+ |**Authorization name** | A name of your choice, such as *aad-auth-01* |
+
+1. After the authorization provider and authorization are created, select **Next**.
+
+## Step 3: Authorize with Azure AD and configure an access policy
+
+1. On the **Login** tab, select **Login with Azure Active Directory**. Before the authorization can be used, you need to sign in and grant consent.
+ :::image type="content" source="media/authorizations-how-to-azure-ad/login-azure-ad.png" alt-text="Screenshot of login with Azure AD in the portal.":::
+
+1. When prompted, sign in to your organizational account.
+1. On the confirmation page, select **Allow access**.
+1. After successful authorization, the browser is redirected to API Management and the window is closed. In API Management, select **Next**.
+1. On the **Access policy** page, create an access policy so that API Management has access to use the authorization. Ensure that a managed identity is configured for API Management. [Learn more about managed identities in API Management](api-management-howto-use-managed-service-identity.md#create-a-system-assigned-managed-identity).
+1. For this example, select **API Management service `<service name>`**.
+
+ :::image type="content" source="media/authorizations-how-to-azure-ad/create-access-policy.png" alt-text="Screenshot of selecting a managed identity to use the authorization.":::
+
+1. Select **Complete**.
+
+> [!NOTE]
+> If you update your Microsoft Graph permissions after this step, you will have to repeat Steps 2 and 3.
+
+## Step 4: Create a Microsoft Graph API in API Management and configure a policy
+
+1. Sign in to the [portal](https://portal.azure.com) and go to your API Management instance.
+1. On the left menu, select **APIs > + Add API**.
+1. Select **HTTP** and enter the following settings. Then select **Create**.
+
+ |Setting |Value |
+ |||
+ |**Display name** | *msgraph* |
+ |**Web service URL** | `https://graph.microsoft.com/v1.0` |
+ |**API URL suffix** | *msgraph* |
+
+1. Navigate to the newly created API and select **Add Operation**. Enter the following settings and select **Save**.
+
+ |Setting |Value |
+ |||
+ |**Display name** | *getprofile* |
+ |**URL** for GET | /me |
+
+1. Follow the preceding steps to add another operation with the following settings.
+
+ |Setting |Value |
+ |||
+ |**Display name** | *getJoinedTeams* |
+ |**URL** for GET | /me/joinedTeams |
+
+1. Select **All operations**. In the **Inbound processing** section, select the (**</>**) (code editor) icon.
+1. Copy the following, and paste in the policy editor. Make sure the `provider-id` and `authorization-id` correspond to the values you configured in Step 2. Select **Save**.
+
+ ```xml
+ <policies>
+ <inbound>
+ <base />
+ <get-authorization-context provider-id="aad-01" authorization-id="aad-auth-01" context-variable-name="auth-context" identity-type="managed" ignore-error="false" />
+ <set-header name="authorization" exists-action="override">
+ <value>@("Bearer " + ((Authorization)context.Variables.GetValueOrDefault("auth-context"))?.AccessToken)</value>
+ </set-header>
+ </inbound>
+ <backend>
+ <base />
+ </backend>
+ <outbound>
+ <base />
+ </outbound>
+ <on-error>
+ <base />
+ </on-error>
+ </policies>
+ ```
+The preceding policy definition consists of two parts:
+
+* The [get-authorization-context](get-authorization-context-policy.md) policy fetches an authorization token by referencing the authorization provider and authorization that were created earlier.
+* The [set-header](set-header-policy.md) policy creates an HTTP header with the fetched authorization token.
+
+## Step 5: Test the API
+1. On the **Test** tab, select one operation that you configured.
+1. Select **Send**.
+
+ :::image type="content" source="media/authorizations-how-to-azure-ad/graph-api-response.png" alt-text="Screenshot of testing the Graph API in the portal.":::
+
+ A successful response returns user data from the Microsoft Graph.
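+
+You can also call the operation from outside the portal with any HTTP client. The following sketch uses placeholder values for the service name and subscription key; the Authorization header for Microsoft Graph is attached by the policy, so the client never handles the Graph token.
+
+```python
+import requests
+
+# Placeholders: substitute your API Management service name and, if the API
+# requires one, a valid subscription key.
+url = "https://<YOUR-APIM-SERVICENAME>.azure-api.net/msgraph/me"
+headers = {"Ocp-Apim-Subscription-Key": "<YOUR-SUBSCRIPTION-KEY>"}
+
+# API Management resolves the cached Graph token via get-authorization-context
+# and sets the Authorization header before forwarding the request.
+response = requests.get(url, headers=headers, timeout=30)
+print(response.status_code, response.json())
+```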
+
+## Next steps
+
+* Learn more about [access restriction policies](api-management-access-restriction-policies.md)
+* Learn more about [scopes and permissions](../active-directory/develop/scopes-oidc.md) in Azure AD.
api-management Authorizations How To Github https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/authorizations-how-to-github.md
+
+ Title: Create authorization with GitHub API - Azure API Management | Microsoft Docs
+description: Learn how to create and use an authorization to the GitHub API in Azure API Management. An authorization manages authorization tokens to an OAuth 2.0 backend service.
++++ Last updated : 04/10/2023+++
+# Create an authorization with the GitHub API
+
+In this article, you learn how to create an [authorization](authorizations-overview.md) in API Management and call a GitHub API that requires an authorization token. The authorization code grant type is used in this example.
+
+You learn how to:
+
+> [!div class="checklist"]
+> * Register an application in GitHub
+> * Configure an authorization in API Management.
+> * Authorize with GitHub and configure access policies.
+> * Create an API in API Management and configure a policy.
+> * Test your GitHub API in API Management
+
+## Prerequisites
+
+- A GitHub account is required.
+- A running API Management instance. If you need to, [create an Azure API Management instance](get-started-create-service-instance.md).
+- Enable a [system-assigned managed identity](api-management-howto-use-managed-service-identity.md) for API Management in the API Management instance.
+
+## Step 1: Register an application in GitHub
+
+1. Sign in to GitHub.
+1. In your account profile, go to **Settings > Developer Settings > OAuth Apps > New OAuth app**.
+
+
+ :::image type="content" source="media/authorizations-how-to-github/register-application.png" alt-text="Screenshot of registering a new OAuth application in GitHub.":::
+ 1. Enter an **Application name** and **Homepage URL** for the application. For this example, you can supply a placeholder URL such as `http://localhost`.
+ 1. Optionally, add an **Application description**.
+ 1. In **Authorization callback URL** (the redirect URL), enter `https://authorization-manager.consent.azure-apim.net/redirect/apim/<YOUR-APIM-SERVICENAME>`, substituting the name of the API Management instance where you will configure the authorization provider.
+1. Select **Register application**.
+1. On the **General** page, copy the **Client ID**, which you'll use in Step 2.
+1. Select **Generate a new client secret**. Copy the secret, which won't be displayed again, and which you'll use in Step 2.
+
+ :::image type="content" source="media/authorizations-how-to-github/generate-secret.png" alt-text="Screenshot showing how to get client ID and client secret for the application in GitHub.":::
+
+## Step 2: Configure an authorization in API Management
+
+1. Sign in to the [portal](https://portal.azure.com) and go to your API Management instance.
+1. On the left menu, select **Authorizations** > **+ Create**.
+
+ :::image type="content" source="media/authorizations-how-to-azure-ad/create-authorization.png" alt-text="Screenshot of creating an API Management authorization in the Azure portal.":::
+1. On the **Create authorization** page, enter the following settings, and select **Create**:
+
+ |Settings |Value |
+ |||
+ |**Provider name** | A name of your choice, such as *github-01* |
+ |**Identity provider** | Select **GitHub** |
+ |**Grant type** | Select **Authorization code** |
+ |**Client ID** | Paste the value you copied earlier from the app registration |
+ |**Client secret** | Paste the value you copied earlier from the app registration |
+ |**Scope** | For this example, set the scope to *User* |
+ |**Authorization name** | A name of your choice, such as *github-auth-01* |
+
+1. After the authorization provider and authorization are created, select **Next**.
+
+## Step 3: Authorize with GitHub and configure access policies
+
+1. On the **Login** tab, select **Login with GitHub**. Before the authorization can be used, you need to sign in to GitHub and grant consent.
+
+ :::image type="content" source="media/authorizations-how-to-github/authorize-with-github.png" alt-text="Screenshot of logging into the GitHub authorization from the portal.":::
+
+1. If prompted, sign in to your GitHub account.
+1. Select **Authorize** so that the application can access the signed-in user's account.
+1. On the confirmation page, select **Allow access**.
+1. After successful authorization, the browser is redirected to API Management and the window is closed. If prompted during redirection, select **Allow access**. In API Management, select **Next**.
+1. On the **Access policy** page, create an access policy so that API Management has access to use the authorization. Ensure that a managed identity is configured for API Management. [Learn more about managed identities in API Management](api-management-howto-use-managed-service-identity.md#create-a-system-assigned-managed-identity).
+
+1. For this example, select **API Management service `<service name>`**.
+
+ :::image type="content" source="media/authorizations-how-to-azure-ad/create-access-policy.png" alt-text="Screenshot of selecting a managed identity to use the authorization.":::
+1. Select **Complete**.
+
+
+## Step 4: Create an API in API Management and configure a policy
+
+1. Sign in to the [portal](https://portal.azure.com) and go to your API Management instance.
+1. On the left menu, select **APIs > + Add API**.
+1. Select **HTTP** and enter the following settings. Then select **Create**.
+
+ |Setting |Value |
+ |||
+ |**Display name** | *githubuser* |
+ |**Web service URL** | `https://api.github.com` |
+ |**API URL suffix** | *githubuser* |
+
+1. Navigate to the newly created API and select **Add Operation**. Enter the following settings and select **Save**.
+
+ |Setting |Value |
+ |||
+ |**Display name** | *getauthdata* |
+ |**URL** for GET | /user |
+
+ :::image type="content" source="media/authorizations-how-to-github/add-operation.png" alt-text="Screenshot of adding a getauthdata operation to the API in the portal.":::
+
+1. Follow the preceding steps to add another operation with the following settings.
+
+ |Setting |Value |
+ |||
+ |**Display name** | *getauthfollowers* |
+ |**URL** for GET | /user/followers |
+
+1. Select **All operations**. In the **Inbound processing** section, select the (**</>**) (code editor) icon.
+1. Copy the following, and paste in the policy editor. Make sure the `provider-id` and `authorization-id` correspond to the values you configured in Step 2. Select **Save**.
+
+ ```xml
+ <policies>
+ <inbound>
+ <base />
+ <get-authorization-context provider-id="github-01" authorization-id="github-auth-01" context-variable-name="auth-context" identity-type="managed" ignore-error="false" />
+ <set-header name="Authorization" exists-action="override">
+ <value>@("Bearer " + ((Authorization)context.Variables.GetValueOrDefault("auth-context"))?.AccessToken)</value>
+ </set-header>
+ <set-header name="User-Agent" exists-action="override">
+ <value>API Management</value>
+ </set-header>
+ </inbound>
+ <backend>
+ <base />
+ </backend>
+ <outbound>
+ <base />
+ </outbound>
+ <on-error>
+ <base />
+ </on-error>
+ </policies>
+ ```
+
+The preceding policy definition consists of three parts:
+
+* The [get-authorization-context](get-authorization-context-policy.md) policy fetches an authorization token by referencing the authorization provider and authorization that were created earlier.
+* The first [set-header](set-header-policy.md) policy creates an HTTP header with the fetched authorization token.
+* The second [set-header](set-header-policy.md) policy creates a `User-Agent` header (GitHub API requirement).
+
+## Step 5: Test the API
+
+1. On the **Test** tab, select one operation that you configured.
+1. Select **Send**.
+
+ :::image type="content" source="media/authorizations-how-to-github/test-api.png" alt-text="Screenshot of testing the API successfully in the portal.":::
+
+ A successful response returns user data from the GitHub API.
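+
+As with the Microsoft Graph walkthrough, the API can be exercised from any HTTP client once published. A minimal sketch with placeholder service name and subscription key:
+
+```python
+import requests
+
+base = "https://<YOUR-APIM-SERVICENAME>.azure-api.net/githubuser"
+headers = {"Ocp-Apim-Subscription-Key": "<YOUR-SUBSCRIPTION-KEY>"}
+
+# The policy attaches the stored GitHub token and the required User-Agent header.
+for path in ("/user", "/user/followers"):
+    response = requests.get(base + path, headers=headers, timeout=30)
+    print(path, response.status_code)
+```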
+
+## Next steps
+
+* Learn more about [access restriction policies](api-management-access-restriction-policies.md).
+* Learn more about GitHub's [REST API](https://docs.github.com/en/rest?apiVersion=2022-11-28)
api-management Authorizations How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/authorizations-how-to.md
- Title: Create and use authorization in Azure API Management | Microsoft Docs
-description: Learn how to create and use an authorization in Azure API Management. An authorization manages authorization tokens to OAuth 2.0 backend services. The example uses GitHub as an identity provider.
---- Previously updated : 06/03/2022---
-# Configure and use an authorization
-
-In this article, you learn how to create an [authorization](authorizations-overview.md) (preview) in API Management and call a GitHub API that requires an authorization token. The authorization code grant type will be used.
-
-Four steps are needed to set up an authorization with the authorization code grant type:
-
-1. Register an application in the identity provider (in this case, GitHub).
-1. Configure an authorization in API Management.
-1. Authorize with GitHub and configure access policies.
-1. Create an API in API Management and configure a policy.
-
-## Prerequisites
-- A GitHub account is required.
-- Complete the following quickstart: [Create an Azure API Management instance](get-started-create-service-instance.md).
-- Enable a [managed identity](api-management-howto-use-managed-service-identity.md) for API Management in the API Management instance.
-
-## Step 1: Register an application in GitHub
-
-1. Sign in to GitHub.
-1. In your account profile, go to **Settings > Developer Settings > OAuth Apps > Register a new application**.
-
-
- :::image type="content" source="media/authorizations-how-to/register-application.png" alt-text="Screenshot of registering a new OAuth application in GitHub.":::
- 1. Enter an **Application name** and **Homepage URL** for the application.
- 1. Optionally, add an **Application description**.
- 1. In **Authorization callback URL** (the redirect URL), enter `https://authorization-manager.consent.azure-apim.net/redirect/apim/<YOUR-APIM-SERVICENAME>`, substituting the API Management service name that is used.
-1. Select **Register application**.
-1. In the **General** page, copy the **Client ID**, which you'll use in a later step.
-1. Select **Generate a new client secret**. Copy the secret, which won't be displayed again, and which you'll use in a later step.
-
- :::image type="content" source="media/authorizations-how-to/generate-secret.png" alt-text="Screenshot showing how to get client ID and client secret for the application in GitHub.":::
-
-## Step 2: Configure an authorization in API Management
-
-1. Sign into Azure portal and go to your API Management instance.
-1. In the left menu, select **Authorizations** > **+ Create**.
-
- :::image type="content" source="media/authorizations-how-to/create-authorization.png" alt-text="Screenshot of creating an API Management authorization in the Azure portal.":::
-1. In the **Create authorization** window, enter the following settings, and select **Create**:
-
- |Settings |Value |
- |||
- |**Provider name** | A name of your choice, such as *github-01* |
- |**Identity provider** | Select **GitHub** |
- |**Grant type** | Select **Authorization code** |
- |**Client id** | Paste the value you copied earlier from the app registration |
- |**Client secret** | Paste the value you copied earlier from the app registration |
- |**Scope** | Set the scope to `User` |
- |**Authorization name** | A name of your choice, such as *auth-01* |
-
-
-
-1. After the authorization provider and authorization are created, select **Next**.
-
-1. On the **Login** tab, select **Login with GitHub**. Before the authorization will work, it needs to be authorized at GitHub.
-
- :::image type="content" source="media/authorizations-how-to/authorize-with-github.png" alt-text="Screenshot of logging into the GitHub authorization from the portal.":::
-
-## Step 3: Authorize with GitHub and configure access policies
-
-1. Sign in to your GitHub account if you're prompted to do so.
-1. Select **Authorize** so that the application can access the signed-in user's account.
-
- :::image type="content" source="media/authorizations-how-to/consent-to-authorization.png" alt-text="Screenshot of consenting to authorize with GitHub.":::
-
- After authorization, the browser is redirected to API Management and the window is closed. If prompted during redirection, select **Allow access**. In API Management, select **Next**.
-1. On the **Access policy** page, create an access policy so that API Management has access to use the authorization. Ensure that a managed identity is configured for API Management. [Learn more about managed identities in API Management](api-management-howto-use-managed-service-identity.md#create-a-system-assigned-managed-identity).
-
-1. Select **Managed identity** **+ Add members** and then select your subscription.
-1. In **Managed identity**, select **API Management service**, and then select the API Management instance that is used. Click **Select** and then **Complete**.
-
- :::image type="content" source="media/authorizations-how-to/select-managed-identity.png" alt-text="Screenshot of selecting a managed identity to use the authorization.":::
-
-## Step 4: Create an API in API Management and configure a policy
-
-1. Sign into Azure portal and go to your API Management instance.
-1. In the left menu, select **APIs > + Add API**.
-1. Select **HTTP** and enter the following settings. Then select **Create**.
-
- |Setting |Value |
- |||
- |**Display name** | *github* |
- |**Web service URL** | https://api.github.com/users |
- |**API URL suffix** | *github* |
-
-2. Navigate to the newly created API and select **Add Operation**. Enter the following settings and select **Save**.
-
- |Setting |Value |
- |||
- |**Display name** | *getdata* |
- |**URL** | /data |
-
- :::image type="content" source="media/authorizations-how-to/add-operation.png" alt-text="Screenshot of adding a getdata operation to the API in the portal.":::
-
-1. In the **Inbound processing** section, select the (**</>**) (code editor) icon.
-1. Copy the following, and paste in the policy editor. Make sure the provider-id and authorization-id correspond to the names in step 2.3. Select **Save**.
-
- ```xml
- <policies>
- <inbound>
- <base />
- <get-authorization-context provider-id="github-01" authorization-id="auth-01" context-variable-name="auth-context" identity-type="managed" ignore-error="false" />
- <set-header name="Authorization" exists-action="override">
- <value>@("Bearer " + ((Authorization)context.Variables.GetValueOrDefault("auth-context"))?.AccessToken)</value>
- </set-header>
- <rewrite-uri template="@(context.Request.Url.Query.GetValueOrDefault("username",""))" copy-unmatched-params="false" />
- <set-header name="User-Agent" exists-action="override">
- <value>API Management</value>
- </set-header>
- </inbound>
- <backend>
- <base />
- </backend>
- <outbound>
- <base />
- </outbound>
- <on-error>
- <base />
- </on-error>
- </policies>
- ```
-
- The policy to be used consists of four parts.
-
- - Fetch an authorization token.
- - Create an HTTP header with the fetched authorization token.
- - Create an HTTP header with a `User-Agent` header (GitHub requirement). [Learn more](https://docs.github.com/rest/overview/resources-in-the-rest-api#user-agent-required)
- - Because the incoming request to API Management will consist of a query parameter called *username*, add the username to the backend call.
-
- > [!NOTE]
- > The `get-authorization-context` policy references the authorization provider and authorization that were created earlier. [Learn more](get-authorization-context-policy.md) about how to configure this policy.
-
- :::image type="content" source="media/authorizations-how-to/policy-configuration-cropped.png" lightbox="media/authorizations-how-to/policy-configuration.png" alt-text="Screenshot of configuring policy in the portal.":::
-1. Test the API.
- 1. On the **Test** tab, enter a query parameter with the name *username*.
- 1. As value, enter the username that was used to sign into GitHub, or another valid GitHub username.
- 1. Select **Send**.
- :::image type="content" source="media/authorizations-how-to/test-api.png" alt-text="Screenshot of testing the API successfully in the portal.":::
-
- A successful response returns user data from the GitHub API.
-
-## Next steps
-
-Learn more about [access restriction policies](api-management-access-restriction-policies.md).
api-management Authorizations Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/authorizations-overview.md
Title: About OAuth 2.0 authorizations in Azure API Management | Microsoft Docs
-description: Learn about authorizations in Azure API Management, a feature that simplifies the process of managing OAuth 2.0 authorization tokens to APIs
+ Title: About API authorizations in Azure API Management
+description: Learn about API authorizations in Azure API Management, a feature that simplifies the process of managing OAuth 2.0 authorization tokens to backend SaaS APIs
Previously updated : 06/03/2022 Last updated : 04/10/2023 +
-# Authorizations overview
+# What are API authorizations?
-API Management authorizations (preview) simplify the process of managing authorization tokens to OAuth 2.0 backend services.
-By configuring any of the supported identity providers and creating an authorization using the standardized OAuth 2.0 flow, API Management can retrieve and refresh access tokens to be used inside of API management or sent back to a client.
-This feature enables APIs to be exposed with or without a subscription key, and the authorization to the backend service uses OAuth 2.0.
+API Management *authorizations* provide a simple and reliable way to unbundle and abstract authorizations from web APIs. Authorizations greatly simplify the process of authenticating and authorizing users across one or more backend or SaaS services. With authorizations, you can easily configure OAuth 2.0 consent and acquire, cache, and refresh tokens without writing a single line of code. Use authorizations to delegate authentication to your API Management instance.
-Some example scenarios that will be possible through this feature are:
+This feature enables APIs to be exposed with or without a subscription key, uses OAuth 2.0 authorizations to the backend services, and reduces the development cost of ramping up, implementing, and maintaining security features with service integrations.
-- Citizen/low code developers using Power Apps or Power Automate can easily connect to SaaS providers that are using OAuth 2.0.
-- Unattended scenarios such as an Azure function using a timer trigger can utilize this feature to connect to a backend API using OAuth 2.0.
-- A marketing team in an enterprise company could use the same authorization for interacting with a social media platform using OAuth 2.0.
-- Exposing APIs in API Management as a custom connector in Logic Apps where the backend service requires OAuth 2.0 flow.
-- On behalf of a scenario where a service such as Dropbox or any other service protected by OAuth 2.0 flow is used by multiple clients.
-- Connect to different services that require OAuth 2.0 authorization using synthetic GraphQL in API Management.
-- Enterprise Application Integration (EAI) patterns using service-to-service authorization can use the client credentials grant type against backend APIs that use OAuth 2.0.
-- Single-page applications that only want to retrieve an access token to be used in a client's SDK against an API using OAuth 2.0.
-The feature consists of two parts, management and runtime:
+## Key scenarios
-* The **management** part takes care of configuring identity providers, enabling the consent flow for the identity provider, and managing access to the authorizations.
+Using authorizations in API Management, customers can enable different scenarios and easily connect to SaaS providers or backend services that are using OAuth 2.0. Here are some example scenarios where this feature could be used:
+* Easily connect to a SaaS backend by attaching the stored authorization token and proxying requests
-* The **runtime** part uses the [`get-authorization-context`](get-authorization-context-policy.md) policy to fetch and store access and refresh tokens. When a call comes into API Management, and the `get-authorization-context` policy is executed, it will first validate if the existing authorization token is valid. If the authorization token has expired, the refresh token is used to try to fetch a new authorization and refresh token from the configured identity provider. If the call to the backend provider is successful, the new authorization token will be used, and both the authorization token and refresh token will be stored encrypted.
+* Proxy requests to an Azure App Service web app or Azure Functions backend by attaching the authorization token, which can later send requests to a SaaS backend applying transformation logic
+* Proxy requests to GraphQL federation backends by attaching multiple access tokens to easily perform federation
+* Expose a token retrieval endpoint, acquire a cached token, and call a SaaS backend on behalf of a user from any compute, for example, a console app or a Kubernetes daemon, combining it with your favorite SaaS SDK in a supported language.
+
+* Azure Functions unattended scenarios when connecting to multiple SaaS backends.
+
+* Durable Functions gets a step closer to Logic Apps with SaaS connectivity.
+
+* With authorizations, every API in API Management can act as a Logic Apps custom connector.
+
+## How do authorizations work?
+
+Authorizations consist of two parts, **management** and **runtime**.
+
+* The **management** part takes care of configuring identity providers, enabling the consent flow for the identity provider, and managing access to the authorizations. For details, see [Process flow - management](#process-flowmanagement).
+
+* The **runtime** part uses the [`get-authorization-context` policy](get-authorization-context-policy.md) to fetch and store the authorization's access and refresh tokens. When a call comes into API Management, and the `get-authorization-context` policy is executed, it will first validate if the existing authorization token is valid. If the authorization token has expired, API Management uses an OAuth 2.0 flow to refresh the stored tokens from the identity provider. Then the access token is used to authorize access to the backend service. For details, see [Process flow - runtime](#process-flowruntime).
+
During the policy execution, access to the tokens is also validated using access policies.
+### Process flow - management
+
+The following image summarizes the process flow for creating an authorization in API Management that uses the authorization code grant type.
-### Requirements
-- Managed system-assigned identity must be enabled for the API Management instance.
-- API Management instance must have outbound connectivity to internet on port `443` (HTTPS).
+| Step | Description |
+|--|--|
+| 1 | Client sends a request to create an authorization provider |
+| 2 | Authorization provider is created, and a response is sent back |
+| 3| Client sends a request to create an authorization |
+| 4| Authorization is created, and a response is sent back with the information that the authorization isn't "connected"|
+|5| Client sends a request to retrieve a login URL to start the OAuth 2.0 consent at the identity provider. The request includes a post-redirect URL to be used in the last step|
+|6|Response is returned with a login URL that should be used to start the consent flow. |
+|7|Client opens a browser with the login URL that was provided in the previous step. The browser is redirected to the identity provider OAuth 2.0 consent flow |
+|8|After the consent is approved, the browser is redirected with an authorization code to the redirect URL configured at the identity provider|
+|9|API Management uses the authorization code to fetch access and refresh tokens|
+|10|API Management receives the tokens and encrypts them|
+|11 |API Management redirects to the provided URL from step 5|
-### Limitations
+### Process flow - runtime
-For public preview the following limitations exist:
-- Authorizations feature only supports Service Principal and Managed Identity as access policies.
-- Authorizations feature only supports /.default app-only scopes while acquire token for https://.../authorizationmanager audience.
-- Authorizations feature is not supported in the following regions: swedencentral, australiacentral, australiacentral2, jioindiacentral.
-- Authorizations feature is not supported in National Clouds.
-- Authorizations feature is not supported on self-hosted gateways.
-- Supported identity providers can be found in [this](https://github.com/Azure/APIManagement-Authorizations/blob/main/docs/identityproviders.md) GitHub repository.
-- Maximum configured number of authorization providers per API Management instance: 1,000
-- Maximum configured number of authorizations per authorization provider: 10,000
-- Maximum configured number of access policies per authorization: 100
-- Maximum requests per minute per service: 250
+The following image shows the process flow to fetch and store authorization and refresh tokens based on an authorization that uses the authorization code grant type. After the tokens have been retrieved, a call is made to the backend API.
-### Authorization providers
-
-Authorization provider configuration includes which identity provider and grant type are used. Each identity provider requires different configurations.
-* An authorization provider configuration can only have one grant type.
-* One authorization provider configuration can have multiple authorizations.
-* You can find the supported identity providers for public preview in [this](https://github.com/Azure/APIManagement-Authorizations/blob/main/docs/identityproviders.md) GitHub repository.
+| Step | Description |
+|--|--|
+| 1 |Client sends request to API Management instance|
+|2|The [`get-authorization-context`](get-authorization-context-policy.md) policy checks if the access token is valid for the current authorization|
+|3|If the access token has expired but the refresh token is valid, API Management tries to fetch new access and refresh tokens from the configured identity provider|
+|4|The identity provider returns both an access token and a refresh token, which are encrypted and saved to API Management|
+|5|After the tokens have been retrieved, the access token is attached using the `set-header` policy as an authorization header to the outgoing request to the backend API|
+|6| Response is returned to API Management|
+|7| Response is returned to the client|
-With the Generic OAuth 2.0 provider, other identity providers that support the standards of OAuth 2.0 flow can be used.
+## How to configure authorizations
-### Authorizations
+### Requirements
-To use an authorization provider, at least one *authorization* is required. The process of configuring an authorization differs based on the used grant type. Each authorization provider configuration only supports one grant type. For example, if you want to configure Azure AD to use both grant types, two authorization provider configurations are needed.
+* Managed system-assigned identity must be enabled for the API Management instance.
-**Authorization code grant type**
+* API Management instance must have outbound connectivity to internet on port 443 (HTTPS).
-Authorization code grant type is bound to a user context, meaning a user needs to consent to the authorization. As long as the refresh token is valid, API Management can retrieve new access and refresh tokens. If the refresh token becomes invalid, the user needs to reauthorize. All identity providers support authorization code. [Read more about Authorization code grant type](https://www.rfc-editor.org/rfc/rfc6749?msclkid=929b18b5d0e611ec82a764a7c26a9bea#section-1.3.1).
+### Availability
-**Client credentials grant type**
+* All API Management service tiers
-Client credentials grant type isn't bound to a user and is often used in application-to-application scenarios. No consent is required for client credentials grant type, and the authorization doesn't become invalid. [Read more about Client Credentials grant type](https://www.rfc-editor.org/rfc/rfc6749?msclkid=929b18b5d0e611ec82a764a7c26a9bea#section-1.3.4).
+* Not supported in the self-hosted gateway
+* Not supported in sovereign clouds or in the following regions: australiacentral, australiacentral2, jioindiacentral
-### Access policies
-Access policies determine which identities can use the authorization that the access policy is related to. The supported identities are managed identities, user identities, and service principals. The identities must belong to the same tenant as the API Management tenant.
+### Configuration steps
-- **Managed identities** - System- or user-assigned identity for the API Management instance that is being used.
-- **User identities** - Users in the same tenant as the API Management instance.
-- **Service principals** - Applications in the same Azure AD tenant as the API Management instance.
+Configuring an authorization in your API Management instance consists of three steps: configuring an authorization provider, consenting to access by logging in, and creating access policies.
-### Process flow for creating authorizations
-The following image shows the process flow for creating an authorization in API Management using the grant type authorization code. For public preview no API documentation is available.
+#### Step 1 - Authorization provider
+During Step 1, you configure your authorization provider. You can choose between different [identity providers](authorizations-configure-common-providers.md) and grant types (authorization code or client credentials). Each identity provider requires specific configurations. Important things to keep in mind:
+* An authorization provider configuration can only have one grant type.
+* One authorization provider configuration can have [multiple authorization connections](configure-authorization-connection.md).
-1. Client sends a request to create an authorization provider.
-1. Authorization provider is created, and a response is sent back.
-1. Client sends a request to create an authorization.
-1. Authorization is created, and a response is sent back with the information that the authorization is not "connected".
-1. Client sends a request to retrieve a login URL to start the OAuth 2.0 consent at the identity provider. The request includes a post-redirect URL to be used in the last step.
-1. Response is returned with a login URL that should be used to start the consent flow.
-1. Client opens a browser with the login URL that was provided in the previous step. The browser is redirected to the identity provider OAuth 2.0 consent flow.
-1. After the consent is approved, the browser is redirected with an authorization code to the redirect URL configured at the identity provider.
-1. API Management uses the authorization code to fetch access and refresh tokens.
-1. API Management receives the tokens and encrypts them.
-1. API Management redirects to the provided URL from step 5.
+> [!NOTE]
+> With the Generic OAuth 2.0 provider, other identity providers that support the standards of [OAuth 2.0 flow](https://oauth.net/2/) can be used.
+>
-### Process flow for runtime
+To use an authorization provider, at least one *authorization* is required. Each authorization is a separate connection to the authorization provider. The process of configuring an authorization differs based on the configured grant type. Each authorization provider configuration only supports one grant type. For example, if you want to configure Azure AD to use both grant types, two authorization provider configurations are needed. The following table summarizes the two grant types.
-The following image shows the process flow to fetch and store authorization and refresh tokens based on a configured authorization. After the tokens have been retrieved a call is made to the backend API.
+|Grant type |Description |
+|||
+|Authorization code | Bound to a user context, meaning a user needs to consent to the authorization. As long as the refresh token is valid, API Management can retrieve new access and refresh tokens. If the refresh token becomes invalid, the user needs to reauthorize. All identity providers support authorization code. [Learn more](https://www.rfc-editor.org/rfc/rfc6749?msclkid=929b18b5d0e611ec82a764a7c26a9bea#section-1.3.1) |
+|Client credentials | Isn't bound to a user and is often used in application-to-application scenarios. No consent is required for client credentials grant type, and the authorization doesn't become invalid. [Learn more](https://www.rfc-editor.org/rfc/rfc6749?msclkid=929b18b5d0e611ec82a764a7c26a9bea#section-1.3.4) |
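+
+To make the client credentials grant concrete, the following sketch shows the kind of token request API Management issues on your behalf at runtime. The endpoint, credentials, and scope are placeholders, not values from this article:
+
+```python
+import requests
+
+# Placeholder token endpoint and app credentials for a generic OAuth 2.0 provider.
+token_url = "https://<IDENTITY-PROVIDER>/oauth2/token"
+payload = {
+    "grant_type": "client_credentials",
+    "client_id": "<CLIENT-ID>",
+    "client_secret": "<CLIENT-SECRET>",
+    "scope": "<SCOPE>",
+}
+
+# No user consent is involved; the application authenticates as itself.
+response = requests.post(token_url, data=payload, timeout=30)
+access_token = response.json()["access_token"]
+```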
-1. Client sends request to API Management instance.
-1. The policy [`get-authorization-context`](get-authorization-context-policy.md) checks if the access token is valid for the current authorization.
-1. If the access token has expired but the refresh token is valid, API Management tries to fetch new access and refresh tokens from the configured identity provider.
-1. The identity provider returns both an access token and a refresh token, which are encrypted and saved to API Management.
-1. After the tokens have been retrieved, the access token is attached using the `set-header` policy as an authorization header to the outgoing request to the backend API.
-1. Response is returned to API Management.
-1. Response is returned to the client.
+### Step 2 - Log in
-### Error handling
+For authorizations based on the authorization code grant type, you must authenticate to the provider and *consent* to authorization. After successful login and authorization by the identity provider, the provider returns valid access and refresh tokens, which are encrypted and saved by API Management. For details, see [Process flow - runtime](#process-flowruntime).
-If acquiring the authorization context results in an error, the outcome depends on how the attribute `ignore-error` is configured in the policy `get-authorization-context`. If the value is set to `false` (default), an error with `500 Internal Server Error` will be returned. If the value is set to `true`, the error will be ignored and execution will proceed with the context variable set to `null`.
+### Step 3 - Access policy
-If the value is set to `false`, and the on-error section in the policy is configured, the error will be available in the property `context.LastError`. By using the on-error section, the error that is sent back to the client can be adjusted. Errors from API Management can be caught using standard Azure alerts. Read more about [handling errors in policies](api-management-error-handling-policies.md).
+You configure one or more *access policies* for each authorization. The access policies determine which [Azure AD identities](../active-directory/develop/app-objects-and-service-principals.md) can gain access to your authorizations at runtime. Authorizations currently support managed identities and service principals.
-### Authorizations FAQ
-##### How can I provide feedback and influence the roadmap for this feature?
+|Identity |Description | Benefits | Considerations |
+|||--|-|
+|Service principal | Identity whose tokens can be used to authenticate and grant access to specific Azure resources, when an organization is using Azure Active Directory (Azure AD). By using a service principal, organizations avoid creating fictitious users to manage authentication when they need to access a resource. A service principal is an Azure AD identity that represents a registered Azure AD application. | Permits more tightly scoped access to authorizations. Isn't tied to a specific API Management instance. Relies on Azure AD for permission enforcement. | Getting the [authorization context](get-authorization-context-policy.md) requires an Azure AD token. |
+|Managed identity | Service principal of a special type that represents an Azure AD identity for an Azure service. Managed identities are tied to, and can only be used with, an Azure resource. Managed identities eliminate the need for you to manually create and manage service principals directly.<br/><br/>When a system-assigned managed identity is enabled, a service principal representing that managed identity is created in your tenant automatically and tied to your resource's lifecycle.|No credentials are needed.|Identity is tied to specific Azure infrastructure. Anyone with Contributor access to the API Management instance can access any authorization granting managed identity permissions. |
+| Managed identity `<Your API Management instance name>` | This option corresponds to a managed identity tied to your API Management instance. | Quick selection of the system-assigned managed identity for the corresponding API Management instance. | Identity is tied to your API Management instance. Anyone with Contributor access to the API Management instance can access any authorization granting managed identity permissions. |
-Please use [this](https://aka.ms/apimauthorizations/feedback) form to provide feedback.
+## Security considerations
-##### How are the tokens stored in API Management?
+The access token and other authorization secrets (for example, client secrets) are encrypted with envelope encryption and stored in internal, multitenant storage. The data is encrypted with AES-128 using a key that is unique to each piece of data. Those keys are encrypted asymmetrically with a master certificate stored in Azure Key Vault and rotated every month.
-The access token and other secrets (for example, client secrets) are encrypted with an envelope encryption and stored in an internal, multitenant storage. The data are encrypted with AES-128 using a key that is unique per data; those keys are encrypted asymmetrically with a master certificate stored in Azure Key Vault and rotated every month.
+### Limits
-##### When are the access tokens refreshed?
+| Resource | Limit |
+| --| -|
+| Maximum number of authorization providers per service instance| 1,000 |
+| Maximum number of authorizations per authorization provider| 10,000 |
+| Maximum number of access policies per authorization | 100 |
+| Maximum number of authorization requests per minute per authorization | 250 |
-When the policy `get-authorization-context` is executed at runtime, API Management checks if the stored access token is valid. If the token has expired or is near expiry, API Management uses the refresh token to fetch a new access token and a new refresh token from the configured identity provider. If the refresh token has expired, an error is thrown, and the authorization needs to be reauthorized before it will work.
-##### What happens if the client secret expires at the identity provider?
-At runtime API Management can't fetch new tokens, and an error will occur.
+## Frequently asked questions (FAQ)
++
+### When are the access tokens refreshed?
+
+For an authorization of type authorization code, access tokens are refreshed as follows: When the `get-authorization-context` policy is executed at runtime, API Management checks if the stored access token is valid. If the token has expired or is near expiry, API Management uses the refresh token to fetch a new access token and a new refresh token from the configured identity provider. If the refresh token has expired, an error is thrown, and the authorization must be reauthorized before it works.
+
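A minimal policy sketch that triggers this refresh check at runtime; the provider and authorization IDs are placeholders for your own configuration:

```xml
<!-- Checks the stored access token and refreshes it if it's expired or near expiry -->
<get-authorization-context
    provider-id="github-01"
    authorization-id="auth-01"
    context-variable-name="auth-context"
    identity-type="managed"
    ignore-error="false" />
```
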
+### What happens if the client secret expires at the identity provider?
+
+At runtime API Management can't fetch new tokens, and an error occurs.
* If the authorization is of type authorization code, the client secret needs to be updated at the authorization provider level.
* If the authorization is of type client credentials, the client secret needs to be updated at the authorization level.
-##### Is this feature supported using API Management running inside a VNet?
+### Is this feature supported using API Management running inside a VNet?
-Yes, as long as API Management gateway has outbound internet connectivity on port `443`.
+Yes, as long as outbound connectivity on port 443 is enabled to the **AzureConnectors** service tag, as listed in the [Virtual network configuration reference](virtual-network-reference.md#required-ports).
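
If a network security group (NSG) controls outbound traffic from your API Management subnet, an outbound rule along the following lines allows that dependency. This is a sketch only; the resource group, NSG name, rule name, and priority are placeholders for your own environment:

```azurecli
az network nsg rule create \
  --resource-group my-apim-rg \
  --nsg-name my-apim-nsg \
  --name Allow-Authorizations-Outbound \
  --priority 200 \
  --direction Outbound \
  --access Allow \
  --protocol Tcp \
  --destination-address-prefixes AzureConnectors \
  --destination-port-ranges 443
```
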
-##### What happens when an authorization provider is deleted?
+### What happens when an authorization provider is deleted?
All underlying authorizations and access policies are also deleted.
-##### Are the access tokens cached by API Management?
+### Are the access tokens cached by API Management?
The access token is cached by API Management until 3 minutes before the token expiration time.
-##### What grant types are supported?
-
-For public preview, the Azure AD identity provider supports authorization code and client credentials.
-
-The other identity providers support authorization code. After public preview, more identity providers and grant types will be added.
-
-### Next steps
-- Learn how to [configure and use an authorization](authorizations-how-to.md).-- See [reference](authorizations-reference.md) for supported identity providers in authorizations.-- Use [policies]() together with authorizations. -- Authorizations [samples](https://github.com/Azure/APIManagement-Authorizations) GitHub repository. -- Learn more about OAuth 2.0:
+## Next steps
- * [OAuth 2.0 overview](https://aaronparecki.com/oauth-2-simplified/)
- * [OAuth 2.0 specification](https://oauth.net/2/)
+Learn how to:
+- Configure [identity providers](authorizations-configure-common-providers.md) for authorizations
+- Configure and use an authorization for the [Microsoft Graph API](authorizations-how-to-azure-ad.md) or the [GitHub API](authorizations-how-to-github.md)
+- Configure [multiple authorization connections](configure-authorization-connection.md) for a provider
api-management Authorizations Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/authorizations-reference.md
- Title: Reference for OAuth 2.0 authorizations - Azure API Management | Microsoft Docs
-description: Reference for identity providers supported in authorizations in Azure API Management. API Management authorizations manage OAuth 2.0 authorization tokens to APIs.
--- Previously updated : 05/02/2022---
-# Authorizations reference
-This article is a reference for the supported identity providers in API Management [authorizations](authorizations-overview.md) (preview) and their configuration options.
-
-## Azure Active Directory
--
-**Supported grant types**: authorization code and client credentials
--
-### Authorization provider - Authorization code grant type
-
-| Name | Required | Description | Default |
-|||||
-| Provider name | Yes | Name of Authorization provider. | |
-| Client id | Yes | The id used to identify this application with the service provider. | |
-| Client secret | Yes | The shared secret used to authenticate this application with the service provider. ||
-| Login URL | No | The Azure Active Directory login URL. | https://login.windows.net |
-| Tenant ID | No | The tenant ID of your Azure Active Directory application. | common |
-| Resource URL | Yes | The resource to get authorization for. | |
-| Scopes | No | Scopes used for the authorization. Multiple scopes could be defined separate with a space, for example, "User.Read User.ReadBasic.All" | |
--
-### Authorization - Authorization code grant type
-| Name | Required | Description | Default |
-|||||
-| Authorization name | Yes | Name of Authorization. | |
-
-
-
-### Authorization provider - Client credentials code grant type
-| Name | Required | Description | Default |
-|||||
-| Provider name | Yes | Name of Authorization provider. | |
-| Login URL | No | The Azure Active Directory login URL. | https://login.windows.net |
-| Tenant ID | No | The tenant ID of your Azure Active Directory application. | common |
-| Resource URL | Yes | The resource to get authorization for. | |
--
-### Authorization - Client credentials code grant type
-| Name | Required | Description | Default |
-|||||
-| Authorization name | Yes | Name of Authorization. | |
-| Client id | Yes | The id used to identify this application with the service provider. | |
-| Client secret | Yes | The shared secret used to authenticate this application with the service provider. ||
-
-
-
-## Google, LinkedIn, Spotify, Dropbox, GitHub
-
-**Supported grant types**: authorization code
-
-### Authorization provider - Authorization code grant type
-| Name | Required | Description | Default |
-|||||
-| Provider name | Yes | Name of Authorization provider. | |
-| Client id | Yes | The id used to identify this application with the service provider. | |
-| Client secret | Yes | The shared secret used to authenticate this application with the service provider. ||
-| Scopes | No | Scopes used for the authorization. Depending on the identity provider, multiple scopes are separated by space or comma. Default for most identity providers is space. | |
--
-### Authorization - Authorization code grant type
-| Name | Required | Description | Default |
-|||||
-| Authorization name | Yes | Name of Authorization. | |
-
-
-
-## Generic OAuth 2
-
-**Supported grant types**: authorization code
--
-### Authorization provider - Authorization code grant type
-| Name | Required | Description | Default |
-|||||
-| Provider name | Yes | Name of Authorization provider. | |
-| Client id | Yes | The id used to identify this application with the service provider. | |
-| Client secret | Yes | The shared secret used to authenticate this application with the service provider. ||
-| Authorization URL | No | The authorization endpoint URL. | |
-| Token URL | No | The token endpoint URL. | |
-| Refresh URL | No | The token refresh endpoint URL. | |
-| Scopes | No | Scopes used for the authorization. Depending on the identity provider, multiple scopes are separated by space or comma. Default for most identity providers is space. | |
--
-### Authorization - Authorization code grant type
-| Name | Required | Description | Default |
-|||||
-| Authorization name | Yes | Name of Authorization. | |
-
-## Next steps
-
-Learn more about [authorizations](authorizations-overview.md) and how to [create and use authorizations](authorizations-how-to.md)
api-management Configure Authorization Connection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/configure-authorization-connection.md
+
+ Title: Configure multiple authorization connections - Azure API Management
+description: Learn how to set up multiple authorization connections to a configured authorization provider using the portal.
++++ Last updated : 03/16/2023+++
+# Configure multiple authorization connections
+
+You can configure multiple authorizations (also called *authorization connections*) to an authorization provider in your API Management instance. For example, if you configured Azure AD as an authorization provider, you might need to create multiple authorizations for different scenarios and users.
+
+In this article, you learn how to add an authorization connection to an existing provider, using the portal. For an overview of configuration steps, see [How to configure authorizations?](authorizations-overview.md#how-to-configure-authorizations)
+
+## Prerequisites
+
+* An API Management instance. If you need to, [create one](get-started-create-service-instance.md).
+* A configured authorization provider. For example, see the steps to create a provider for [GitHub](authorizations-how-to-github.md) or [Azure AD](authorizations-how-to-azure-ad.md).
+
+## Create an authorization connection - portal
+
+1. Sign in to the [portal](https://portal.azure.com) and go to your API Management instance.
+1. In the left menu, select **Authorizations**.
+1. Select the authorization provider that you want to create multiple connections for (for example, *mygithub*).
+
+ :::image type="content" source="media/configure-authorization-connection/select-provider.png" alt-text="Screenshot of selecting an authorization provider in the portal.":::
+1. In the provider window, select **Authorization**, and then select **+ Create**.
+
+ :::image type="content" source="media/configure-authorization-connection/create-authorization.png" alt-text="Screenshot of creating an authorization connection in the portal.":::
+1. Complete the steps for your authorization connection.
+ 1. On the **Authorization** tab, enter an **Authorization name**. Select **Create**, then select **Next**.
+ 1. On the **Login** tab (for authorization code grant type), complete the steps to sign in to the authorization provider to allow access. Select **Next**.
+ 1. On the **Access policy** tab, assign access to the Azure AD identity or identities that can use the authorization. Select **Complete**.
+1. The new connection appears in the list of authorizations, and shows a status of **Connected**.
+
+ :::image type="content" source="media/configure-authorization-connection/list-authorizations.png" alt-text="Screenshot of list of authorization connections in the portal.":::
+
+If you want to create another authorization connection for the provider, complete the preceding steps.
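
If you need to automate connection creation instead of using the portal, you can target the Azure Resource Manager REST API, for example with `az rest`. Treat the following as a sketch only: the resource path reflects the preview `authorizationProviders` resource type, and the API version and property names are assumptions to verify against the current REST API reference:

```azurecli
az rest --method put \
  --url "https://management.azure.com/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.ApiManagement/service/<service-name>/authorizationProviders/mygithub/authorizations/auth-02?api-version=2022-08-01" \
  --body '{"properties": {"authorizationType": "OAuth2", "oauth2grantType": "AuthorizationCode"}}'
```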
+
+## Manage authorizations - portal
+
+You can manage authorization provider settings and authorization connections in the portal. For example, you might need to update client credentials for the authorization provider.
+
+To update provider settings:
+
+1. Sign in to the [portal](https://portal.azure.com) and go to your API Management instance.
+1. In the left menu, select **Authorizations**.
+1. Select the authorization provider that you want to manage.
+1. In the provider window, select **Settings**.
+1. In the provider settings, make updates, and select **Save**.
+
+ :::image type="content" source="media/configure-authorization-connection/update-provider.png" alt-text="Screenshot of updating authorization provider settings in the portal.":::
+
+To update an authorization connection:
+
+1. Sign in to the [portal](https://portal.azure.com) and go to your API Management instance.
+1. In the left menu, select **Authorizations**.
+1. Select the authorization provider (for example, *mygithub*).
+1. In the provider window, select **Authorization**.
+1. In the row for the authorization connection you want to update, select the context (...) menu, and select from the options. For example, to manage access policies, select **Access policies**.
+
+ :::image type="content" source="media/configure-authorization-connection/update-connection.png" alt-text="Screenshot of updating an authorization connection in the portal.":::
+
+## Next steps
+
+* Learn more about [configuring identity providers](authorizations-configure-common-providers.md) for authorizations.
+* Review [limits](authorizations-overview.md#limits) for authorization providers and authorizations.
++++
api-management Get Authorization Context Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/get-authorization-context-policy.md
Previously updated : 12/08/2022 Last updated : 03/20/2023 # Get authorization context
-Use the `get-authorization-context` policy to get the authorization context of a specified [authorization](authorizations-overview.md) (preview) configured in the API Management instance.
+Use the `get-authorization-context` policy to get the authorization context of a specified [authorization](authorizations-overview.md) configured in the API Management instance.
-The policy fetches and stores authorization and refresh tokens from the configured authorization provider.
-
-If `identity-type=jwt` is configured, a JWT token is required to be validated. The audience of this token must be `https://azure-api.net/authorization-manager`.
+The policy fetches and stores authorization and refresh tokens from the configured authorization provider.
[!INCLUDE [api-management-policy-generic-alert](../../includes/api-management-policy-generic-alert.md)]
If `identity-type=jwt` is configured, a JWT token is required to be validated. T
| Attribute | Description | Required | Default |
|||||
-| provider-id | The authorization provider resource identifier. | Yes | N/A |
-| authorization-id | The authorization resource identifier. | Yes | N/A |
-| context-variable-name | The name of the context variable to receive the [`Authorization` object](#authorization-object). | Yes | N/A |
-| identity-type | Type of identity to be checked against the authorization access policy. <br> - `managed`: managed identity of the API Management service. <br> - `jwt`: JWT bearer token specified in the `identity` attribute. | No | `managed` |
-| identity | An Azure AD JWT bearer token to be checked against the authorization permissions. Ignored for `identity-type` other than `jwt`. <br><br>Expected claims: <br> - audience: `https://azure-api.net/authorization-manager` <br> - `oid`: Permission object ID <br> - `tid`: Permission tenant ID | No | N/A |
-| ignore-error | Boolean. If acquiring the authorization context results in an error (for example, the authorization resource is not found or is in an error state): <br> - `true`: the context variable is assigned a value of null. <br> - `false`: return `500` | No | `false` |
+| provider-id | The authorization provider resource identifier. Policy expressions are allowed. | Yes | N/A |
+| authorization-id | The authorization resource identifier. Policy expressions are allowed. | Yes | N/A |
+| context-variable-name | The name of the context variable to receive the [`Authorization` object](#authorization-object). Policy expressions are allowed. | Yes | N/A |
+| identity-type | Type of identity to check against the authorization access policy. <br> - `managed`: managed identity of the API Management service. <br> - `jwt`: JWT bearer token specified in the `identity` attribute.<br/><br/>Policy expressions are allowed. | No | `managed` |
+| identity | An Azure AD JWT bearer token to check against the authorization permissions. Ignored for `identity-type` other than `jwt`. <br><br>Expected claims: <br> - audience: `https://azure-api.net/authorization-manager` <br> - `oid`: Permission object ID <br> - `tid`: Permission tenant ID<br/><br/>Policy expressions are allowed. | No | N/A |
+| ignore-error | Boolean. If acquiring the authorization context results in an error (for example, the authorization resource isn't found or is in an error state): <br> - `true`: the context variable is assigned a value of null. <br> - `false`: return `500`<br/><br/>If you set the value to `false`, and the policy configuration includes an `on-error` section, the error is available in the `context.LastError` property.<br/><br/>Policy expressions are allowed. | No | `false` |
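
Pulling the attributes together, here's a minimal sketch that fetches the stored token and attaches it to the backend call; the IDs are placeholders, and the `Authorization` object used in the expression is described in the next section:

```xml
<inbound>
    <!-- Fetch the stored token for the configured authorization -->
    <get-authorization-context
        provider-id="github-01"
        authorization-id="auth-01"
        context-variable-name="auth-context"
        identity-type="managed" />
    <!-- Attach the fetched access token to the outgoing request -->
    <set-header name="Authorization" exists-action="override">
        <value>@("Bearer " + ((Authorization)context.Variables.GetValueOrDefault("auth-context"))?.AccessToken)</value>
    </set-header>
</inbound>
```
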
### Authorization object
class Authorization
| Property Name | Description |
| -- | -- |
| AccessToken | Bearer access token to authorize a backend HTTP request. |
-| Claims | Claims returned from the authorization serverΓÇÖs token response API (see [RFC6749#section-5.1](https://datatracker.ietf.org/doc/html/rfc6749#section-5.1)). |
+| Claims | Claims returned from the authorization server's token response API (see [RFC6749#section-5.1](https://datatracker.ietf.org/doc/html/rfc6749#section-5.1)). |
## Usage

- [**Policy sections:**](./api-management-howto-policies.md#sections) inbound
- [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, product, API, operation
-- [**Gateways:**](api-management-gateways-overview.md) dedicated
+- [**Gateways:**](api-management-gateways-overview.md) dedicated, consumption
+
+### Usage notes
+
+* Configure `identity-type=jwt` when the [access policy](authorizations-overview.md#step-3access-policy) for the authorization is assigned to a service principal. Only `/.default` app-only scopes are supported for the JWT.
## Examples
api-management Virtual Network Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/virtual-network-reference.md
When an API Management service instance is hosted in a VNet, the ports in the fo
| * / 3443 | Inbound | TCP | ApiManagement / VirtualNetwork | **Management endpoint for Azure portal and PowerShell** | External & Internal |
| * / 443 | Outbound | TCP | VirtualNetwork / Storage | **Dependency on Azure Storage** | External & Internal |
| * / 443 | Outbound | TCP | VirtualNetwork / AzureActiveDirectory | [Azure Active Directory](api-management-howto-aad.md) and Azure Key Vault dependency (optional) | External & Internal |
+| * / 443 | Outbound | TCP | VirtualNetwork / AzureConnectors | [Authorizations](authorizations-overview.md) dependency (optional) | External & Internal |
| * / 1433 | Outbound | TCP | VirtualNetwork / Sql | **Access to Azure SQL endpoints** | External & Internal |
| * / 443 | Outbound | TCP | VirtualNetwork / AzureKeyVault | **Access to Azure Key Vault** | External & Internal |
| * / 5671, 5672, 443 | Outbound | TCP | VirtualNetwork / EventHub | Dependency for [Log to Azure Event Hubs policy](api-management-howto-log-event-hubs.md) and [Azure Monitor](api-management-howto-use-azure-monitor.md) (optional) | External & Internal |
When an API Management service instance is hosted in a VNet, the ports in the fo
| * / 443 | Outbound | TCP | VirtualNetwork / Storage | **Dependency on Azure Storage** | External & Internal |
| * / 443 | Outbound | TCP | VirtualNetwork / AzureActiveDirectory | [Azure Active Directory](api-management-howto-aad.md) and Azure Key Vault dependency (optional) | External & Internal |
| * / 443 | Outbound | TCP | VirtualNetwork / AzureKeyVault | Access to Azure Key Vault for [named values](api-management-howto-properties.md) integration (optional) | External & Internal |
+| * / 443 | Outbound | TCP | VirtualNetwork / AzureConnectors | [Authorizations](authorizations-overview.md) dependency (optional) | External & Internal |
| * / 1433 | Outbound | TCP | VirtualNetwork / Sql | **Access to Azure SQL endpoints** | External & Internal |
| * / 5671, 5672, 443 | Outbound | TCP | VirtualNetwork / Azure Event Hubs | Dependency for [Log to Azure Event Hubs policy](api-management-howto-log-event-hubs.md) and monitoring agent (optional) | External & Internal |
| * / 445 | Outbound | TCP | VirtualNetwork / Storage | Dependency on Azure File Share for [GIT](api-management-configuration-repository-git.md) (optional) | External & Internal |
automanage Automanage Hotpatch https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automanage/automanage-hotpatch.md
Title: Hotpatch for Windows Server Azure Edition
-description: Learn how Hotpatch for Windows Server Azure Edition works and how to enable it
+description: Learn how hotpatch for Windows Server Azure Edition works and how to enable it
Previously updated : 02/22/2021 Last updated : 04/18/2023 # Hotpatch for new virtual machines
-<!--
> [!IMPORTANT]
-> Hotpatch is currently in Public Preview. An opt-in procedure is needed to use the Hotpatch capability described below.
-> This preview is provided without a service level agreement, and is not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
-> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
>
+> Hotpatch is currently in Public Preview. An opt-in procedure is needed to use the hotpatch capability described below. This preview is provided without a service level agreement, and is not recommended for production workloads. Certain features might not be supported or might have constrained capabilities. For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
-> [!IMPORTANT]
-> Hotpatch is supported on _Windows Server 2022 Datacenter: Azure Edition (Server Core)_.
+> [!NOTE]
+> Hotpatch is supported on _Windows Server 2022 Datacenter: Azure Edition_.
-Hotpatching is a new way to install updates on supported _Windows Server Azure Edition_ virtual machines (VMs) that doesnΓÇÖt require a reboot after installation. This article covers information about Hotpatch for supported _Windows Server Azure Edition_ VMs, which has the following benefits:
+Hotpatching is a new way to install updates on supported _Windows Server Azure Edition_ virtual machines (VMs) that doesn't require a reboot after installation. This article covers information about hotpatch for supported _Windows Server Azure Edition_ VMs, which has the following benefits:
* Lower workload impact with fewer reboots
* Faster deployment of updates as the packages are smaller, install faster, and have easier patch orchestration with Azure Update Manager
-* Better protection, as the Hotpatch update packages are scoped to Windows security updates that install faster without rebooting
+* Better protection, as the hotpatch update packages are scoped to Windows security updates that install faster without rebooting
## How hotpatch works
-Hotpatch works by first establishing a baseline with a Windows Update Latest Cumulative Update. Hotpatches are periodically released (for example, on the second Tuesday of the month) that build on that baseline. Hotpatches will contain updates that don't require a reboot. Periodically (starting at every three months), the baseline is refreshed with a new Latest Cumulative Update.
+Hotpatch works by first establishing a baseline with a Windows Update Latest Cumulative Update. Hotpatches that build on that baseline are then released periodically (for example, on the second Tuesday of the month). Hotpatches contain updates that don't require a reboot. Periodically (starting at every three months), the baseline is refreshed with a new Latest Cumulative Update.
:::image type="content" source="media\automanage-hotpatch\hotpatch-sample-schedule.png" alt-text="Hotpatch Sample Schedule.":::
-There are two types of baselines: **Planned baselines** and **unplanned baselines**.
-* **Planned baselines** are released on a regular cadence, with hotpatch releases in between. Planned baselines include all the updates in a comparable _Latest Cumulative Update_ for that month, and require a reboot.
+There are two types of baselines: **Planned baselines** and **Unplanned baselines**.
+* **Planned baselines** are released on a regular cadence, with hotpatch releases in between. Planned baselines include all the updates in a comparable _Latest Cumulative Update_ for that month, and require a reboot.
* The sample schedule above illustrates four planned baseline releases in a calendar year (five total in the diagram), and eight hotpatch releases.
-* **Unplanned baselines** are released when an important update (such as a zero-day fix) is released, and that particular update can't be released as a Hotpatch. When unplanned baselines are released, a hotpatch release will be replaced with an unplanned baseline in that month. Unplanned baselines also include all the updates in a comparable _Latest Cumulative Update_ for that month, and also require a reboot.
+* **Unplanned baselines** are released when an important update (such as a zero-day fix) is released, and that particular update can't be released as a hotpatch. When unplanned baselines are released, a hotpatch release will be replaced with an unplanned baseline in that month. Unplanned baselines also include all the updates in a comparable _Latest Cumulative Update_ for that month, and also require a reboot.
* The sample schedule above illustrates two unplanned baselines that would replace the hotpatch releases for those months (the actual number of unplanned baselines in a year isn't known in advance). ## Regional availability
Hotpatch is available in all global Azure regions.
> [!NOTE] > You can preview onboarding Automanage machine best practices during VM creation in the Azure portal using [this link](https://aka.ms/AzureEdition).
-To start using Hotpatch on a new VM, follow these steps:
+To start using hotpatch on a new VM, follow these steps:
1. Start creating a new VM from the Azure portal
- * You can preview onboarding Automanage machine best practices during VM creation in the Azure portal using [this link](https://aka.ms/AzureEdition).
+ * You can preview onboarding Automanage machine best practices during VM creation in the Azure portal by visiting the [Azure Marketplace](https://aka.ms/AzureEdition).
1. Supply details during VM creation
- * Ensure that a supported _Windows Server Azure Edition_ image is selected in the Image dropdown. Use [this guide](automanage-windows-server-services-overview.md#getting-started-with-windows-server-azure-edition) to determine which images are supported.
- * On the Management tab under section ΓÇÿGuest OS updatesΓÇÖ, the checkbox for 'Enable hotpatch' will be selected. Patch orchestration options will be set to 'Azure-orchestrated'.
- * If you create a VM using [this link](https://aka.ms/AzureEdition), on the Management tab under section 'Azure Automanage', select 'Dev/Test' or 'Production' for 'Azure Automanage environment' to evaluate Automanage machine best practices while in preview.
+ * Ensure that a supported _Windows Server Azure Edition_ image is selected in the Image dropdown. See [automanage windows server services](automanage-windows-server-services-overview.md#getting-started-with-windows-server-azure-edition) to determine which images are supported.
+ * On the Management tab under section 'Guest OS updates', the checkbox for 'Enable hotpatch' will be selected. Patch orchestration options are set to 'Azure-orchestrated'.
+ * If you create a VM by visiting the [Azure Marketplace](https://aka.ms/AzureEdition), on the Management tab under section 'Azure Automanage', select 'Dev/Test' or 'Production' for 'Azure Automanage environment' to evaluate Automanage machine best practices while in preview.
1. Create your new VM
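
If you prefer to script the deployment, the following Azure CLI sketch creates a VM with hotpatch enabled. The resource names are placeholders, and the image URN is an assumption to verify with `az vm image list`:

```azurecli
az vm create \
  --resource-group myResourceGroup \
  --name myHotpatchVM \
  --image MicrosoftWindowsServer:WindowsServer:2022-datacenter-azure-edition-core:latest \
  --admin-username azureuser \
  --enable-hotpatching true \
  --patch-mode AutomaticByPlatform
```
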
az provider register --namespace Microsoft.Compute
When [Automatic VM Guest Patching](../virtual-machines/automatic-vm-guest-patching.md) is enabled on a VM, the available Critical and Security patches are downloaded and applied automatically. This process kicks off automatically every month when new patches are released. Patch assessment and installation are automatic, and the process includes rebooting the VM as required.
-With Hotpatch enabled on supported _Windows Server Azure Edition_ VMs, most monthly security updates are delivered as hotpatches that don't require reboots. Latest Cumulative Updates sent on planned or unplanned baseline months will require VM reboots. Additional Critical or Security patches may also be available periodically which may require VM reboots.
+With hotpatch enabled on supported _Windows Server Azure Edition_ VMs, most monthly security updates are delivered as hotpatches that don't require reboots. Latest Cumulative Updates sent on planned or unplanned baseline months require VM reboots. Additional Critical or Security patches may also be available periodically, which may require VM reboots.
The VM is assessed automatically every few days and multiple times within any 30-day period to determine the applicable patches for that VM. This automatic assessment ensures that any missing patches are discovered at the earliest possible opportunity.
-Patches are installed within 30 days of the monthly patch releases, following [availability-first principles](../virtual-machines/automatic-vm-guest-patching.md#availability-first-updates). Patches are installed only during off-peak hours for the VM, depending on the time zone of the VM. The VM must be running during the off-peak hours for patches to be automatically installed. If a VM is powered off during a periodic assessment, the VM will be assessed and applicable patches will be installed automatically during the next periodic assessment when the VM is powered on. The next periodic assessment usually happens within a few days.
+Patches are installed within 30 days of the monthly patch releases, following [availability-first principles](../virtual-machines/automatic-vm-guest-patching.md#availability-first-updates). Patches are installed only during off-peak hours for the VM, depending on the time zone of the VM. The VM must be running during the off-peak hours for patches to be automatically installed. If a VM is powered off during a periodic assessment, the VM is assessed and applicable patches are installed automatically during the next periodic assessment when the VM is powered on. The next periodic assessment usually happens within a few days.
Definition updates and other patches not classified as Critical or Security won't be installed through automatic VM guest patching.

## Understanding the patch status for your VM
-To view the patch status for your VM, navigate to the **Guest + host updates** section for your VM in the Azure portal. Under the **Guest OS updates** section, click on ΓÇÿGo to Hotpatch (Preview)ΓÇÖ to view the latest patch status for your VM.
+To view the patch status for your VM, navigate to the **Guest + host updates** section for your VM in the Azure portal. Under the **Guest OS updates** section, select 'Go to Hotpatch (Preview)' to view the latest patch status for your VM.
-On this screen, you'll see the Hotpatch status for your VM. You can also review if there any available patches for your VM that haven't been installed. As described in the ΓÇÿPatch installationΓÇÖ section above, all security and critical updates will be automatically installed on your VM using [Automatic VM Guest Patching](../virtual-machines/automatic-vm-guest-patching.md) and no extra actions are required. Patches with other update classifications aren't automatically installed. Instead, they're viewable in the list of available patches under the ΓÇÿUpdate complianceΓÇÖ tab. You can also view the history of update deployments on your VM through the ΓÇÿUpdate historyΓÇÖ. Update history from the past 30 days is displayed, along with patch installation details.
+On this screen, you'll see the hotpatch status for your VM. You can also review if there are any available patches for your VM that haven't been installed. As described in the 'Patch installation' section above, all security and critical updates are automatically installed on your VM using [Automatic VM Guest Patching](../virtual-machines/automatic-vm-guest-patching.md) and no extra actions are required. Patches with other update classifications aren't automatically installed. Instead, they're viewable in the list of available patches under the 'Update compliance' tab. You can also view the history of update deployments on your VM through the 'Update history' tab. Update history from the past 30 days is displayed, along with patch installation details.
:::image type="content" source="media\automanage-hotpatch\hotpatch-management-ui.png" alt-text="Hotpatch Management.":::
Similar to on-demand assessment, you can also install patches on-demand for your
## Supported updates
-Hotpatch covers Windows Security updates and maintains parity with the content of security updates issued to in the regular (non-Hotpatch) Windows update channel.
+Hotpatch covers Windows Security updates and maintains parity with the content of security updates issued in the regular (non-hotpatch) Windows update channel.
-There are some important considerations to running a supported _Windows Server Azure Edition_ VM with Hotpatch enabled. Reboots are still required to install updates that aren't included in the Hotpatch program. Reboots are also required periodically after a new baseline has been installed. These reboots keep the VM in sync with non-security patches included in the latest cumulative update.
-* Patches that are currently not included in the Hotpatch program include non-security updates released for Windows, and non-Windows updates (such as .NET patches). These types of patches need to be installed during a baseline month, and will require a reboot.
+There are some important considerations to running a supported _Windows Server Azure Edition_ VM with hotpatch enabled. Reboots are still required to install updates that aren't included in the hotpatch program. Reboots are also required periodically after a new baseline has been installed. These reboots keep the VM in sync with non-security patches included in the latest cumulative update.
+* Patches that are currently not included in the hotpatch program include non-security updates released for Windows, and non-Windows updates (such as .NET patches). These types of patches need to be installed during a baseline month, and will require a reboot.
## Frequently asked questions
There are some important considerations to running a supported _Windows Server A
* Hotpatching works by establishing a baseline with a Windows Update Latest Cumulative Update, then builds upon that baseline with updates that don't require a reboot to take effect. The baseline is updated periodically with a new cumulative update. The cumulative update includes all security and quality updates and requires a reboot.
-### Why should I use Hotpatch?
+### Why should I use hotpatch?
-* When you use Hotpatch on a supported _Windows Server Azure Edition_ image, your VM will have higher availability (fewer reboots), and faster updates (smaller packages that are installed faster without the need to restart processes). This process results in a VM that is always up to date and secure.
+* When you use hotpatch on a supported _Windows Server Azure Edition_ image, your VM will have higher availability (fewer reboots), and faster updates (smaller packages that are installed faster without the need to restart processes). This process results in a VM that is always up to date and secure.
-### What types of updates are covered by Hotpatch?
+### What types of updates are covered by hotpatch?
* Hotpatch currently covers Windows security updates.
-### When will I receive the first Hotpatch update?
+### When will I receive the first hotpatch update?
* Hotpatch updates are typically released on the second Tuesday of each month. For more information, see below.
-### What will the Hotpatch schedule look like?
+### What will the hotpatch schedule look like?
-* Hotpatching works by establishing a baseline with a Windows Update Latest Cumulative Update, then builds upon that baseline with Hotpatch updates released monthly. Baselines will be released starting out every three months. See the image below for an example of an annual three-month schedule (including example unplanned baselines due to zero-day fixes).
+* Hotpatching works by establishing a baseline with a Windows Update Latest Cumulative Update, then builds upon that baseline with hotpatch updates released monthly. Baselines will be released starting out every three months. See the image below for an example of an annual three-month schedule (including example unplanned baselines due to zero-day fixes).
:::image type="content" source="media\automanage-hotpatch\hotpatch-sample-schedule.png" alt-text="Hotpatch Sample Schedule.":::
-### Are reboots still needed for a VM enrolled in Hotpatch?
+### Are reboots still needed for a VM enrolled in hotpatch?
-* Reboots are still required to install updates not included in the Hotpatch program, and are required periodically after a baseline (Windows Update Latest Cumulative Update) has been installed. This reboot will keep your VM in sync with all the patches included in the cumulative update. Baselines (which require a reboot) will start out on a three-month cadence and increase over time.
+* Reboots are still required to install updates not included in the hotpatch program, and are required periodically after a baseline (Windows Update Latest Cumulative Update) has been installed. This reboot will keep your VM in sync with all the patches included in the cumulative update. Baselines (which require a reboot) will start out on a three-month cadence and increase over time.
-### Are my applications affected when a Hotpatch update is installed?
+### Are my applications affected when a hotpatch update is installed?
-* Because Hotpatch patches the in-memory code of running processes without the need to restart the process, your applications will be unaffected by the patching process. Note that this is separate from any potential performance and functionality implications of the patch itself.
+* Because hotpatch patches the in-memory code of running processes without the need to restart the process, your applications are unaffected by the patching process. This is separate from any potential performance and functionality implications of the patch itself.
-### Can I turn off Hotpatch on my VM?
+### Can I turn off hotpatch on my VM?
-* You can turn off Hotpatch on a VM via the Azure portal. Turning off Hotpatch will unenroll the VM from Hotpatch, which reverts the VM to typical update behavior for Windows Server. Once you unenroll from Hotpatch on a VM, you can re-enroll that VM when the next Hotpatch baseline is released.
+* You can turn off hotpatch on a VM via the Azure portal. Turning off hotpatch will unenroll the VM from hotpatch, which reverts the VM to typical update behavior for Windows Server. Once you unenroll from hotpatch on a VM, you can re-enroll that VM when the next hotpatch baseline is released.
### Can I upgrade from my existing Windows Server OS?

* Yes, upgrading from existing versions of Windows Server (such as Windows Server 2016 or Windows Server 2019) to _Windows Server 2022 Datacenter: Azure Edition_ is supported.
-### How can I get troubleshooting support for Hotpatching?
+### How can I get troubleshooting support for hotpatching?
* You can file a [technical support case ticket](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/newsupportrequest). For the Service option, search for and select **Virtual Machine running Windows** under Compute. Select **Azure Features** for the problem type and **Automatic VM Guest Patching** for the problem subtype.
automanage Automanage Windows Server Services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automanage/automanage-windows-server-services-overview.md
Previously updated : 02/13/2022 Last updated : 04/18/2023
Azure Automanage for Windows Server brings new capabilities specifically to _Win
- SMB over QUIC - Extended network for Azure
-<!--
> [!IMPORTANT]
-> Hotpatch is currently in Public Preview. An opt-in procedure is needed to use the Hotpatch capability described below.
-> This preview is provided without a service level agreement, and is not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
-> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
>
+> Hotpatch is currently in Public Preview. An opt-in procedure is needed to use the Hotpatch capability described below. This preview is provided without a service level agreement, and is not recommended for production workloads. Certain features might not be supported or might have constrained capabilities. For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
Automanage for Windows Server capabilities can be found in one or more of these _Windows Server Azure Edition_ images:
Capabilities vary by image, see [getting started](#getting-started-with-windows-
Hotpatch is available on the following images:
+- Windows Server 2022 Datacenter: Azure Edition (Desktop Experience)
- Windows Server 2022 Datacenter: Azure Edition (Core)
-Hotpatch gives you the ability to apply security updates on your VM without rebooting. Additionally, Automanage for Windows Server automates the onboarding, configuration, and orchestration of hot patching. To learn more, see [Hotpatch](automanage-hotpatch.md).
+Hotpatch gives you the ability to apply security updates on your VM without rebooting. Additionally, Automanage for Windows Server automates the onboarding, configuration, and orchestration of hot patching. To learn more, see [Hotpatch](automanage-hotpatch.md).
### SMB over QUIC
SMB over QUIC offers an "SMB VPN" for telecommuters, mobile device users, and br
SMB over QUIC is also integrated with [Automanage machine best practices for Windows Server](automanage-windows-server.md) to help make SMB over QUIC management easier. QUIC uses certificates to provide its encryption and organizations often struggle to maintain complex public key infrastructures. Automanage machine best practices ensure that certificates do not expire without warning and that SMB over QUIC stays enabled for maximum continuity of service. To learn more, see [SMB over QUIC](/windows-server/storage/file-server/smb-over-quic) and [SMB over QUIC management with Automanage machine best practices](automanage-smb-over-quic.md).
-
### Extended network for Azure
Extended Network for Azure is available on the following images:
Azure Extended Network enables you to stretch an on-premises subnet into Azure to let on-premises virtual machines keep their original on-premises private IP addresses when migrating to Azure. To learn more, see [Azure Extended Network](/windows-server/manage/windows-admin-center/azure/azure-extended-network). - ## Getting started with Windows Server Azure Edition
-It's important to consider up front, which Automanage for Windows Server capabilities you would like to use, then choose a corresponding VM image that supports all of those capabilities. Some of the _Windows Server Azure Edition_ images support only a subset of capabilities, see the table below for more details.
+It's important to consider up front which Automanage for Windows Server capabilities you would like to use, and then choose a corresponding VM image that supports all of those capabilities. Some of the _Windows Server Azure Edition_ images support only a subset of capabilities.
> [!NOTE] > If you would like to preview the upcoming version of **Windows Server Azure Edition**, see [Windows Server VNext Datacenter: Azure Edition](windows-server-azure-edition-vnext.md).
It's important to consider up front, which Automanage for Windows Server capabil
To start using Automanage for Windows Server capabilities on a new VM, use your preferred method to create an Azure VM, and select the _Windows Server Azure Edition_ image that corresponds to the set of [capabilities](#getting-started-with-windows-server-azure-edition) that you would like to use.
-<!--
> [!IMPORTANT]
> Some capabilities have specific configuration steps to perform during VM creation, and some capabilities that are in preview have specific opt-in and portal viewing requirements. See the individual capability topics above to learn more about using that capability with your VM.

## Next steps

> [!div class="nextstepaction"]
-> [Learn more about Azure Automanage](overview-about.md)
+> [Learn more about Azure Automanage](overview-about.md)
azure-functions Functions App Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-app-settings.md
Valid values:
| `node` | [JavaScript](functions-reference-node.md)<br/>[TypeScript](functions-reference-node.md#typescript) |
| `powershell` | [PowerShell](functions-reference-powershell.md) |
| `python` | [Python](functions-reference-python.md) |
+| `custom` | [Other](functions-custom-handlers.md) |
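
For example, when you run a function app locally, this setting typically lives in `local.settings.json`; the values shown here assume a JavaScript project using the local storage emulator:

```json
{
  "IsEncrypted": false,
  "Values": {
    "AzureWebJobsStorage": "UseDevelopmentStorage=true",
    "FUNCTIONS_WORKER_RUNTIME": "node"
  }
}
```
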
## FUNCTIONS\_WORKER\_SHARED\_MEMORY\_DATA\_TRANSFER\_ENABLED
azure-functions Functions Node Upgrade V4 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-node-upgrade-v4.md
The http request and response types are now a subset of the [fetch standard](htt
If you see the following error, make sure you [set the `EnableWorkerIndexing` flag](#enable-v4-programming-model) and you're using the minimum version of all [requirements](#requirements):

> No job functions found. Try making your job classes and methods public. If you're using binding extensions (e.g. Azure Storage, ServiceBus, Timers, etc.) make sure you've called the registration method for the extension(s) in your startup code (e.g. builder.AddAzureStorage(), builder.AddServiceBus(), builder.AddTimers(), etc.).
+
+If you see the following error, make sure you're using Node.js version 18.x:
+
+> System.Private.CoreLib: Exception while executing function: Functions.httpTrigger1. System.Private.CoreLib: Result: Failure
+> Exception: undici_1.Request is not a constructor
+
+For any other issues or feedback, feel free to file an issue on our [GitHub repo](https://github.com/Azure/azure-functions-nodejs-library/issues).
azure-maps About Azure Maps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/about-azure-maps.md
For more information, see the [Geolocation service] documentation.
### Render service
-[Render service V2] introduces a new version of the [Get Map Tile V2 API] that supports using Azure Maps tiles not only in the Azure Maps SDKs but other map controls as well. It includes raster and vector tile formats, 256x256 or 512x512 tile sizes (where applicable) and numerous map types such as road, weather, contour, or map tiles. For a complete list, see [TilesetID] in the REST API documentation. It's recommended that you use Render service V2 instead of Render service V1. You're required to display the appropriate copyright attribution on the map anytime you use the Azure Maps Render service V2, either as basemaps or layers, in any third-party map control. For more information, see [How to use the Get Map Attribution API].
+[Render V2 service] introduces a new version of the [Get Map Tile V2 API] that supports using Azure Maps tiles not only in the Azure Maps SDKs but also in other map controls. It includes raster and vector tile formats, 256x256 or 512x512 tile sizes (where applicable), and numerous map types such as road, weather, contour, or map tiles. For a complete list, see [TilesetID] in the REST API documentation. It's recommended that you use the Render V2 service instead of the Render V1 service. You're required to display the appropriate copyright attribution on the map anytime you use the Azure Maps Render V2 service, either as basemaps or layers, in any third-party map control. For more information, see [How to use the Get Map Attribution API].
### Route service
Stay up to date on Azure Maps:
[Geolocation service]: /rest/api/maps/geolocation
[Get Map Tile V2 API]: /rest/api/maps/render-v2/get-map-tile
[Get Weather along route API]: /rest/api/maps/weather/getweatheralongroute
-[Render service V2]: /rest/api/maps/render-v2
+[Render V2 service]: /rest/api/maps/render-v2
[REST APIs]: /rest/api/maps/
[Route service]: /rest/api/maps/route
[routeset API]: /rest/api/maps/v20220901preview/routeset
azure-maps How To Secure Device Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-secure-device-code.md
Title: How to secure input constrained device with Azure AD and Azure Maps REST APIs
+ Title: How to secure an input constrained device using Azure AD and Azure Maps REST API
-description: How to configure a browser-less application which supports sign-in to Azure AD and calls Azure Maps REST APIs.
+description: How to configure a browser-less application that supports sign-in to Azure AD and calls Azure Maps REST API.
Last updated 06/12/2020
-# Secure an input constrained device with Azure AD and Azure Maps REST APIs
+# Secure an input constrained device by using Azure Active Directory (Azure AD) and Azure Maps REST APIs
-This guide discusses how to secure public applications or devices that cannot securely store secrets or accept browser input. These types of applications fall under the category of IoT or internet of things. Some examples of these applications may include: Smart TV devices or sensor data emitting applications.
+This guide discusses how to secure public applications or devices that can't securely store secrets or accept browser input. These types of applications fall under the internet of things (IoT) category. Examples include Smart TVs and sensor data emitting applications.
[!INCLUDE [authentication details](./includes/view-authentication-details.md)] ## Create an application registration in Azure AD > [!NOTE]
-> * **Prerequisite Reading:** [Scenario: Desktop app that calls web APIs](../active-directory/develop/scenario-desktop-overview.md)
+>
+> * **Prerequisite Reading:** [Scenario: Desktop app that calls web APIs]
> * The following scenario uses the device code flow, which does not involve a web browser to acquire a token.
-Create the device based application in Azure AD to enable Azure AD sign in. This application will be granted access to Azure Maps REST APIs.
+Create the device-based application in Azure AD to enable Azure AD sign-in. This application is granted access to Azure Maps REST APIs.
1. In the Azure portal, in the list of Azure services, select **Azure Active Directory** > **App registrations** > **New registration**.
- > [!div class="mx-imgBorder"]
- > ![App registration](./media/how-to-manage-authentication/app-registration.png)
+ :::image type="content" source="./media/how-to-manage-authentication/app-registration.png" alt-text="A screenshot showing application registration in Azure AD":::
-2. Enter a **Name**, choose **Accounts in this organizational directory only** as the **Supported account type**. In **Redirect URIs**, specify **Public client / native (mobile & desktop)** then add `https://login.microsoftonline.com/common/oauth2/nativeclient` to the value. For more details please see Azure AD [Desktop app that calls web APIs: App registration](../active-directory/develop/scenario-desktop-app-registration.md). Then **Register** the application.
+2. Enter a **Name**, choose **Accounts in this organizational directory only** as the **Supported account type**. In **Redirect URIs**, specify **Public client / native (mobile & desktop)** then add `https://login.microsoftonline.com/common/oauth2/nativeclient` to the value. For more information, see Azure AD [Desktop app that calls web APIs: App registration]. Then **Register** the application.
- > [!div class="mx-imgBorder"]
- > ![Add app registration details for name and redirect uri](./media/azure-maps-authentication/devicecode-app-registration.png)
+ :::image type="content" source="./media/azure-maps-authentication/devicecode-app-registration.png" alt-text="A screenshot showing the settings used to register an application.":::
-3. Navigate to **Authentication** and enable **Treat application as a public client**. This will enable device code authentication with Azure AD.
+3. Navigate to **Authentication** and enable **Treat application as a public client** to enable device code authentication with Azure AD.
- > [!div class="mx-imgBorder"]
- > ![Enable app registration as public client](./media/azure-maps-authentication/devicecode-public-client.png)
+ :::image type="content" source="./media/azure-maps-authentication/devicecode-public-client.png" alt-text="A screenshot showing the advanced settings used to specify treating the application as a public client.":::
4. To assign delegated API permissions to Azure Maps, go to the application. Then select **API permissions** > **Add a permission**. Under **APIs my organization uses**, search for and select **Azure Maps**.
- > [!div class="mx-imgBorder"]
- > ![Add app API permissions](./media/how-to-manage-authentication/app-permissions.png)
+ :::image type="content" source="./media/how-to-manage-authentication/app-permissions.png" alt-text="A screenshot showing where you request API permissions.":::
5. Select the check box next to **Access Azure Maps**, and then select **Add permissions**.
- > [!div class="mx-imgBorder"]
- > ![Select app API permissions](./media/how-to-manage-authentication/select-app-permissions.png)
+ :::image type="content" source="./media/how-to-manage-authentication/select-app-permissions.png" alt-text="A screenshot showing where you specify the app permissions you require.":::
-6. Configure Azure role-based access control (Azure RBAC) for users or groups. See [Grant role-based access for users to Azure Maps](#grant-role-based-access-for-users-to-azure-maps).
+6. Configure Azure role-based access control (Azure RBAC) for users or groups. For more information, see [Grant role-based access for users to Azure Maps].
-7. Add code for acquiring token flow in the application, for implementation details see [Device code flow](../active-directory/develop/scenario-desktop-acquire-token-device-code-flow.md). When acquiring tokens, reference the scope: `user_impersonation` which was selected on earlier steps.
+7. Add code to acquire the token flow in the application. For implementation details, see [Device code flow]. When acquiring tokens, reference the `user_impersonation` scope that was selected in the earlier steps.
> [!Tip] > Use Microsoft Authentication Library (MSAL) to acquire access tokens.
- > See recommendations on [Desktop app that calls web APIs: Code configuration](../active-directory/develop/scenario-desktop-app-configuration.md)
+ > For more information, see [Desktop app that calls web APIs: Code configuration] in the Azure Active Directory documentation.
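As an illustration of step 7, here's a minimal sketch of the device code flow using MSAL for Node.js (`@azure/msal-node`). The client ID, tenant ID, and console output are placeholders to adapt to your own registration:

```js
// Minimal sketch: acquire an Azure Maps access token with the device code flow.
// <your-client-id> and <your-tenant-id> come from your app registration.
const msal = require("@azure/msal-node");

const app = new msal.PublicClientApplication({
  auth: {
    clientId: "<your-client-id>",
    authority: "https://login.microsoftonline.com/<your-tenant-id>"
  }
});

app.acquireTokenByDeviceCode({
  // The user_impersonation scope selected in the earlier steps.
  scopes: ["https://atlas.microsoft.com/user_impersonation"],
  // Shows the user the code and the URL where it must be entered.
  deviceCodeCallback: (info) => console.log(info.message)
}).then((result) => {
  console.log("Access token acquired:", result.accessToken);
});
```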
8. Compose the HTTP request with the acquired token from Azure AD, and send the request with a valid HTTP client.
x-ms-client-id: 30d7cc….9f55
Authorization: Bearer eyJ0e….HNIVN ```
- The sample request body below is in GeoJSON:
+ The following sample request body is in GeoJSON:
```json {
Operation-Location: https://us.atlas.microsoft.com/mapData/operations/{udid}?api
Access-Control-Expose-Headers: Operation-Location ``` - [!INCLUDE [grant role-based access to users](./includes/grant-rbac-users.md)] ## Next steps Find the API usage metrics for your Azure Maps account:+ > [!div class="nextstepaction"]
-> [View usage metrics](how-to-view-api-usage.md)
+> [View usage metrics]
+
+[Desktop app that calls web APIs: App registration]: ../active-directory/develop/scenario-desktop-app-registration.md
+[Desktop app that calls web APIs: Code configuration]: ../active-directory/develop/scenario-desktop-app-configuration.md
+[Device code flow]: ../active-directory/develop/scenario-desktop-acquire-token-device-code-flow.md
+[Grant role-based access for users to Azure Maps]: #grant-role-based-access-for-users-to-azure-maps
+[Scenario: Desktop app that calls web APIs]: ../active-directory/develop/scenario-desktop-overview.md
+[View usage metrics]: how-to-view-api-usage.md
azure-maps How To Secure Spa App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-secure-spa-app.md
# How to secure a single-page web application with non-interactive sign-in
-This article describes how to secure a single-page web application with Azure Active Directory (Azure AD), when the user isn't able to sign in to Azure AD.
+Secure a single-page web application with Azure Active Directory (Azure AD), even when the user isn't able to sign in to Azure AD.
-To create this non-interactive authentication flow, we'll create an Azure Function secure web service that's responsible for acquiring access tokens from Azure AD. This web service will be exclusively available only to your single-page web application.
+To create this non-interactive authentication flow, first create an Azure Function secure web service that's responsible for acquiring access tokens from Azure AD. This web service is available only to your single-page web application.
[!INCLUDE [authentication details](./includes/view-authentication-details.md)]
-> [!Tip]
+> [!TIP]
> Azure Maps can support access tokens from user sign-on or interactive flows. You can use interactive flows for a more restricted scope of access revocation and secret management. ## Create an Azure function To create a secured web service application that's responsible for authentication to Azure AD:
-1. Create a function in the Azure portal. For more information, see [Getting started with Azure Functions](../azure-functions/functions-get-started.md).
+1. Create a function in the Azure portal. For more information, see [Getting started with Azure Functions].
-2. Configure CORS policy on the Azure function to be accessible by the single-page web application. The CORS policy secures browser clients to the allowed origins of your web application. For more information, see [Add CORS functionality](../app-service/app-service-web-tutorial-rest-api.md#add-cors-functionality).
+2. Configure the CORS policy on the Azure function so that it's accessible by the single-page web application. The CORS policy restricts browser clients to the allowed origins of your web application. For more information, see [Add CORS functionality].
-3. [Add a system-assigned identity](../app-service/overview-managed-identity.md?tabs=dotnet#add-a-system-assigned-identity) on the Azure function to enable creation of a service principal to authenticate to Azure AD.
+3. [Add a system-assigned identity] on the Azure function to enable creation of a service principal to authenticate to Azure AD.
-4. Grant role-based access for the system-assigned identity to the Azure Maps account. For details, see [Grant role-based access](#grant-role-based-access-for-users-to-azure-maps).
+4. Grant role-based access for the system-assigned identity to the Azure Maps account. For more information, see [Grant role-based access].
-5. Write code for the Azure function to obtain Azure Maps access tokens using system-assigned identity with one of the supported mechanisms or the REST protocol. For more information, see [Obtain tokens for Azure resources](../app-service/overview-managed-identity.md?tabs=dotnet#add-a-system-assigned-identity)
+5. Write code for the Azure function to obtain Azure Maps access tokens using system-assigned identity with one of the supported mechanisms or the REST protocol. For more information, see [Obtain tokens for Azure resources].
Here's an example REST protocol:
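A minimal sketch of that protocol from a Node.js function body follows. It assumes the `IDENTITY_ENDPOINT` and `IDENTITY_HEADER` environment variables that App Service injects when a system-assigned identity is enabled:

```js
// Sketch: exchange the function's system-assigned identity for an Azure Maps
// access token via the App Service managed identity REST endpoint.
async function getAzureMapsToken() {
  const endpoint = process.env.IDENTITY_ENDPOINT; // injected by App Service
  const header = process.env.IDENTITY_HEADER;     // injected by App Service
  const resource = encodeURIComponent("https://atlas.microsoft.com/");
  const url = `${endpoint}?api-version=2019-08-01&resource=${resource}`;

  const response = await fetch(url, { headers: { "X-IDENTITY-HEADER": header } });
  const body = await response.json();
  return body.access_token;
}
```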
To create a secured web service application that's responsible for authenticatio
6. Configure security for the Azure function HttpTrigger:
- 1. [Create a function access key](../azure-functions/functions-bindings-http-webhook-trigger.md?tabs=csharp#authorization-keys)
+ 1. [Create a function access key]
   1. [Secure HTTP endpoint](../azure-functions/functions-bindings-http-webhook-trigger.md?tabs=csharp#secure-an-http-endpoint-in-production) for the Azure function in production. 7. Configure the web application to use the Azure Maps Web SDK.
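To illustrate steps 6 and 7, here's a sketch of how the single-page app might call the secured function to fetch a token. The function app name, route, and key are placeholders:

```js
// Sketch: request an Azure Maps access token from the secured Azure Function.
async function fetchMapsToken() {
  const response = await fetch(
    "https://<your-function-app>.azurewebsites.net/api/GetAzureMapsToken",
    { headers: { "x-functions-key": "<your-function-access-key>" } }
  );
  if (!response.ok) {
    throw new Error(`Token request failed: ${response.status}`);
  }
  return response.text(); // the function returns the raw access token
}
```

The Web SDK's anonymous authentication options can then call a helper like this whenever the map control needs a fresh token.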
Find the API usage metrics for your Azure Maps account:
Explore other samples that show how to integrate Azure AD with Azure Maps: > [!div class="nextstepaction"] > [Azure Maps Samples](https://github.com/Azure-Samples/Azure-Maps-AzureAD-Samples/tree/master/src/ClientGrant)+
+[Getting started with Azure Functions]: ../azure-functions/functions-get-started.md
+[Add CORS functionality]: ../app-service/app-service-web-tutorial-rest-api.md#add-cors-functionality
+[Add a system-assigned identity]: ../app-service/overview-managed-identity.md?tabs=dotnet#add-a-system-assigned-identity
+[Grant role-based access]: #grant-role-based-access-for-users-to-azure-maps
+[Obtain tokens for Azure resources]: ../app-service/overview-managed-identity.md?tabs=dotnet#add-a-system-assigned-identity
+[Create a function access key]: ../azure-functions/functions-bindings-http-webhook-trigger.md?tabs=csharp#authorization-keys
azure-maps How To Secure Spa Users https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-secure-spa-users.md
Title: How to secure a single page application with user sign-in
-description: How to configure a single page application which supports Azure AD single-sign-on with Azure Maps Web SDK.
+description: How to configure a single page application that supports Azure AD single-sign-on with Azure Maps Web SDK.
Last updated 06/12/2020
# Secure a single page application with user sign-in
-The following guide pertains to an application which is hosted on a content server or has minimal web server dependencies. The application provides protected resources secured only to Azure AD users. The objective of the scenario is to enable the web application to authenticate to Azure AD and call Azure Maps REST APIs on behalf of the user.
+The following guide pertains to an application that is hosted on a content server or has minimal web server dependencies. The application provides protected resources secured only to Azure AD users. The objective of the scenario is to enable the web application to authenticate to Azure AD and call Azure Maps REST APIs on behalf of the user.
[!INCLUDE [authentication details](./includes/view-authentication-details.md)]
Create the web application in Azure AD for users to sign in. The web application
1. In the Azure portal, in the list of Azure services, select **Azure Active Directory** > **App registrations** > **New registration**.
- > [!div class="mx-imgBorder"]
- > ![App registration](./media/how-to-manage-authentication/app-registration.png)
+ :::image type="content" source="./media/how-to-manage-authentication/app-registration.png" alt-text="Screenshot showing the new registration page in the App registrations blade in Azure Active Directory.":::
-2. Enter a **Name**, choose a **Support account type**, provide a redirect URI which will represent the url which Azure AD will issue the token and is the url where the map control is hosted. For a detailed sample please see [Azure Maps Azure AD samples](https://github.com/Azure-Samples/Azure-Maps-AzureAD-Samples/tree/master/src/ImplicitGrant). Then select **Register**.
+2. Enter a **Name** and choose a **Supported account type**. Provide a redirect URI that represents the URL to which Azure AD issues the token, which is also the URL where the map control is hosted. For a detailed sample, see [Azure Maps Azure AD samples](https://github.com/Azure-Samples/Azure-Maps-AzureAD-Samples/tree/master/src/ImplicitGrant). Then select **Register**.
3. To assign delegated API permissions to Azure Maps, go to the application. Then under **App registrations**, select **API permissions** > **Add a permission**. Under **APIs my organization uses**, search for and select **Azure Maps**.
- > [!div class="mx-imgBorder"]
- > ![Add app API permissions](./media/how-to-manage-authentication/app-permissions.png)
+ :::image type="content" source="./media/how-to-manage-authentication/app-permissions.png" alt-text="Screenshot showing a list of APIs my organization uses.":::
4. Select the check box next to **Access Azure Maps**, and then select **Add permissions**.
- > [!div class="mx-imgBorder"]
- > ![Select app API permissions](./media/how-to-manage-authentication/select-app-permissions.png)
+ :::image type="content" source="./media/how-to-manage-authentication/select-app-permissions.png" alt-text="Screenshot showing the request app API permissions screen.":::
5. Enable `oauth2AllowImplicitFlow`. To enable it, in the **Manifest** section of your app registration, set `oauth2AllowImplicitFlow` to `true`.
Create the web application in Azure AD for users to sign in. The web application
``` 7. Configure Azure role-based access control (Azure RBAC) for users or groups. See the [following sections to enable Azure RBAC](#grant-role-based-access-for-users-to-azure-maps).
-
+ [!INCLUDE [grant role access to users](./includes/grant-rbac-users.md)] ## Next steps
azure-maps How To Show Attribution https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-show-attribution.md
Title: Show the correct map copyright attribution information
-description: The map copyright attribution information must be displayed in any applications that use the Render V2 API, including web and mobile applications. In this article, you'll learn how to display the correct attribution every time you display or update a tile.
+description: The map copyright attribution information must be displayed in all applications that use the Render V2 API, including web and mobile applications. This article discusses how to display the correct attribution every time you display or update a tile.
Last updated 3/16/2022
# Show the correct copyright attribution
-When using the [Azure Maps Render service V2], either as a basemap or layer, you're required to display the appropriate data provider copyright attribution on the map. This information should be displayed in the lower right-hand corner of the map.
+When using the Azure Maps [Render V2 service], either as a basemap or layer, you're required to display the appropriate data provider copyright attribution on the map. This information should be displayed in the lower right-hand corner of the map.
-The above image is an example of a map from the Render service V2, displaying the road style. It shows the copyright attribution in the lower right-hand corner of the map.
+The above image is an example of a map from the Render V2 service, displaying the road style. It shows the copyright attribution in the lower right-hand corner of the map.
-The above image is an example of a map from the Render service V2, displaying the satellite style. note that there's another data provider listed.
+The above image is an example of a map from the Render V2 service, displaying the satellite style. Note that there's another data provider listed.
## The Get Map Attribution API
The [Get Map Attribution API] enables you to request map copyright attribution i
The map copyright attribution information must be displayed on the map in any applications that use the Render V2 API, including web and mobile applications.
-The attribution is automatically displayed and updated on the map When using any of the Azure Maps SDKs. This includes the [Web SDK], [Android SDK] and the [iOS SDK].
+The attribution is automatically displayed and updated on the map when using any of the Azure Maps SDKs, including the [Web], [Android], and [iOS] SDKs.
When using map tiles from the Render service in a third-party map, you must display and update the copyright attribution information on the map.
Since the data providers can differ depending on the *region* and *zoom* level,
### How to use the Get Map Attribution API
-You'll need the following information to run the `attribution` command:
+You need the following information to run the `attribution` command:
| Parameter | Type | Description |
| --------- | ---- | ----------- |
https://atlas.microsoft.com/map/attribution?subscription-key={Your-Azure-Maps-Su
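For instance, a sketch of calling the API from JavaScript; the tileset ID, zoom, and bounds values are illustrative only:

```js
// Sketch: request copyright attribution for the current map view.
const params = new URLSearchParams({
  "api-version": "2.1",
  "tilesetId": "microsoft.base",
  "zoom": "6",
  // bounds: minLon,minLat,maxLon,maxLat of the visible map area
  "bounds": "-122.414162,47.579490,-122.247157,47.668372",
  "subscription-key": "<Your-Azure-Maps-Subscription-key>"
});

fetch(`https://atlas.microsoft.com/map/attribution?${params}`)
  .then((response) => response.json())
  .then((data) => console.log(data.copyrights)); // attribution strings to display
```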
## Additional information
-* For more information, see the [Azure Maps Render service V2] documentation.
+* For more information, see the [Render V2 service] documentation.
-[Azure Maps Render service V2]: /rest/api/maps/render-v2
+[Android]: how-to-use-android-map-control-library.md
+[Authentication with Azure Maps]: azure-maps-authentication.md
[Get Map Attribution API]: /rest/api/maps/render-v2/get-map-attribution
-[Web SDK]: how-to-use-map-control.md
-[Android SDK]: how-to-use-android-map-control-library.md
-[iOS SDK]: how-to-use-ios-map-control-library.md
-[Tileset Create API]: /rest/api/maps/v2/tileset/create
[Get Map Attribution]: /rest/api/maps/render-v2/get-map-attribution#tilesetid
+[iOS]: how-to-use-ios-map-control-library.md
+[Render V2 service]: /rest/api/maps/render-v2
+[Tileset Create API]: /rest/api/maps/v2/tileset/create
[TilesetID]: /rest/api/maps/render-v2/get-map-attribution#tilesetid
-[Zoom levels and tile grid]: zoom-levels-and-tile-grid.md
-[Authentication with Azure Maps]: azure-maps-authentication.md
+[Web]: how-to-use-map-control.md
+[Zoom levels and tile grid]: zoom-levels-and-tile-grid.md
azure-maps How To Use Best Practices For Routing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-use-best-practices-for-routing.md
# Best practices for Azure Maps Route service
-The Route Directions and Route Matrix APIs in Azure Maps [Route service] can be used to calculate the estimated arrival times (ETAs) for each requested route. Route APIs consider factors such as real-time traffic information and historic traffic data, like the typical road speeds on the requested day of the week and time of day. The APIs return the shortest or fastest routes available to multiple destinations at a time in sequence or in optimized order, based on time or distance. Users can also request specialized routes and details for walkers, bicyclists, and commercial vehicles like trucks. In this article, we'll share the best practices to call Azure Maps [Route service], and you'll learn how-to:
+The Route Directions and Route Matrix APIs in Azure Maps [Route service] can be used to calculate the estimated arrival times (ETAs) for each requested route. Route APIs consider factors such as real-time traffic information and historic traffic data, like the typical road speeds on the requested date and time. The APIs return the shortest or fastest routes available to multiple destinations at a time in sequence or in optimized order, based on time or distance. Users can also request specialized routes and details for walkers, bicyclists, and commercial vehicles like trucks. This article discusses best practices for calling the Azure Maps [Route service], including how to:
* Choose between the Route Directions APIs and the Matrix Routing API * Request historic and predicted travel times, based on real-time and historical traffic data
This article uses the [Postman] application to build REST calls, but you can cho
## Choose between Route Directions and Matrix Routing
-The Route Directions APIs return instructions including the travel time and the coordinates for a route path. The Route Matrix API lets you calculate the travel time and distances for a set of routes that are defined by origin and destination locations. For every given origin, the Matrix API calculates the cost (travel time and distance) of routing from that origin to every given destination. These API allow you to specify parameters such as the desired departure time, arrival times, and the vehicle type, like car or truck. They all use real-time or predictive traffic data accordingly to return the most optimal routes.
+The Route Directions APIs return instructions including the travel time and the coordinates for a route path. The Route Matrix API lets you calculate the travel time and distances for a set of routes defined by origin and destination locations. For every given origin, the Matrix API calculates the cost (travel time and distance) of routing from that origin to every given destination. These APIs allow you to specify parameters such as the desired departure time, arrival times, and the vehicle type, like car or truck. They all use real-time or predictive traffic data, as appropriate, to return the most optimal routes.
Consider calling Route Directions APIs if your scenario is to:
Consider calling Matrix Routing API if your scenario is to:
* Sort potential routes by their actual travel distance or time. The Matrix API returns only travel times and distances for each origin and destination combination. * Cluster data based on travel time or distances. For example, your company has 50 employees; find all employees that live within a 20-minute drive time of your office.
-Here is a comparison to show some capabilities of the Route Directions and Matrix APIs:
+Here's a comparison to show some capabilities of the Route Directions and Matrix APIs:
| Azure Maps API | Max number of queries in the request | Avoid areas | Truck and electric vehicle routing | Waypoints and Traveling Salesman optimization | Supporting points |
| :--: | :--: | :--: | :--: | :--: | :--: |
To learn more about electric vehicle routing capabilities, see our tutorial on h
## Request historic and real-time data
-By default, the Route service assumes the traveling mode is a car and the departure time is now. It returns route based on real-time traffic conditions unless a route calculation request specifies otherwise. Fixed time-dependent traffic restrictions, like 'Left turns aren't allowed between 4:00 PM to 6:00 PM' are captured and will be considered by the routing engine. Road closures, like roadworks, will be considered unless you specifically request a route that ignores the current live traffic. To ignore the current traffic, set `traffic` to `false` in your API request.
+By default, the Route service assumes the traveling mode is a car and the departure time is now. It returns a route based on real-time traffic conditions unless a route calculation request specifies otherwise. The routing engine factors in fixed time-dependent traffic restrictions, like 'Left turns aren't allowed between 4:00 PM and 6:00 PM'. Road closures, like roadworks, are considered unless you specifically request a route that ignores the current live traffic. To ignore the current traffic, set `traffic` to `false` in your API request.
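For example, a sketch of a request that ignores live traffic; the coordinates and subscription key are placeholders:

```js
// Sketch: request a route that ignores current live traffic (traffic=false).
const url = "https://atlas.microsoft.com/route/directions/json" +
  "?api-version=1.0" +
  "&subscription-key=<Your-Azure-Maps-Subscription-key>" +
  "&query=51.368752,-0.118332:51.385426,-0.128929" + // origin:destination
  "&travelMode=car" +
  "&traffic=false"; // ignore current live traffic conditions

fetch(url)
  .then((response) => response.json())
  .then((data) => console.log(data.routes[0].summary));
```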
-The route calculation **travelTimeInSeconds** value includes the delay due to traffic. It's generated by leveraging the current and historic travel time data, when departure time is set to now. If your departure time is set in the future, the APIs return predicted travel times based on historical data.
+The route calculation **travelTimeInSeconds** value includes the delay due to traffic. It's generated by using the current and historic travel time data, when departure time is set to now. If your departure time is set in the future, the APIs return predicted travel times based on historical data.
-If you include the **computeTravelTimeFor=all** parameter in your request, then the summary element in the response will have the following additional fields including historical traffic conditions:
+If you include the **computeTravelTimeFor=all** parameter in your request, then the summary element in the response has the following additional fields, including historical traffic conditions:
| Element | Description |
| :-- | :-- |
In the first example below the departure time is set to the future, at the time
https://atlas.microsoft.com/route/directions/json?subscription-key={Your-Azure-Maps-Subscription-key}&api-version=1.0&query=51.368752,-0.118332:51.385426,-0.128929&travelMode=car&traffic=true&departAt=2025-03-29T08:00:20&computeTravelTimeFor=all ```
-The response contains a summary element, like the one below. Because the departure time is set to the future, the **trafficDelayInSeconds** value is zero. The **travelTimeInSeconds** value is calculated using time-dependent historic traffic data. So, in this case, the **travelTimeInSeconds** value is equal to the **historicTrafficTravelTimeInSeconds** value.
+The response contains a summary element, like the following example. Because the departure time is set to the future, the **trafficDelayInSeconds** value is zero. The **travelTimeInSeconds** value is calculated using time-dependent historic traffic data. So, in this case, the **travelTimeInSeconds** value is equal to the **historicTrafficTravelTimeInSeconds** value.
```json "summary": {
The response contains a summary element, like the one below. Because the departu
### Sample query
-In the second example below, we have a real-time routing request, where departure time is now. It's not explicitly specified in the URL because it's the default value.
+In the next example, we have a real-time routing request, where departure time is now. It's not explicitly specified in the URL because it's the default value.
```http https://atlas.microsoft.com/route/directions/json?subscription-key={Your-Azure-Maps-Subscription-key}&api-version=1.0&query=47.6422356,-122.1389797:47.6641142,-122.3011268&travelMode=car&traffic=true&computeTravelTimeFor=all ```
-The response contains a summary as shown below. Because of congestions, the **trafficDelaysInSeconds** value is greater than zero. It's also greater than **historicTrafficTravelTimeInSeconds**.
+The response contains a summary as shown in the following example. Because of congestion, the **trafficDelaysInSeconds** value is greater than zero. It's also greater than **historicTrafficTravelTimeInSeconds**.
```json "summary": {
The response contains a summary as shown below. Because of congestions, the **tr
## Request route and leg details
-By default, the Route service will return an array of coordinates. The response will contain the coordinates that make up the path in a list named `points`. Route response also includes the distance from the start of the route and the estimated elapsed time. These values can be used to calculate the average speed for the entire route.
+By default, the Route service returns an array of coordinates. The response contains the coordinates that make up the path in a list named `points`. Route response also includes the distance from the start of the route and the estimated elapsed time. These values can be used to calculate the average speed for the entire route.
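For instance, a small sketch that derives the average speed from the standard `summary` fields of a parsed route response:

```js
// Sketch: compute the average speed (km/h) from a route's summary fields.
// Assumes `route` is one element of the `routes` array in the response.
function averageSpeedKmh(route) {
  const { lengthInMeters, travelTimeInSeconds } = route.summary;
  return (lengthInMeters / 1000) / (travelTimeInSeconds / 3600);
}
```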
The following image shows the `points` element.
The Route API returns directions that accommodate the dimensions of the truck an
### Sample query
-Changing the US Hazmat Class, from the above query, will result in a different route to accommodate this change.
+Changing the US Hazmat Class, from the above query, results in a different route to accommodate this change.
```http https://atlas.microsoft.com/route/directions/json?subscription-key={Your-Azure-Maps-Subscription-key}&api-version=1.0&vehicleWidth=2&vehicleHeight=2&vehicleCommercial=true&vehicleLoadType=USHazmatClass9&travelMode=truck&instructionsType=text&query=51.368752,-0.118332:41.385426,-0.128929 ```
-The response below is for a truck carrying a class 9 hazardous material, which is less dangerous than a class 1 hazardous material. When you expand the `guidance` element to read the directions, you'll notice that the directions aren't the same. There are more route instructions for the truck carrying class 1 hazardous material.
+The following response is for a truck carrying a class 9 hazardous material, which is less dangerous than a class 1 hazardous material. When you expand the `guidance` element to read the directions, notice that the directions aren't the same. There are more route instructions for the truck carrying class 1 hazardous material.
![Truck with class 9 hazwaste](media/how-to-use-best-practices-for-routing/truck-with-hazwaste9-img.png)
The response contains the sections that are suitable for traffic along the given
![Traffic sections](media/how-to-use-best-practices-for-routing/traffic-section-type-img.png)
-This option can be used to color the sections when rendering the map, as in the image below:
+This option can be used to color the sections when rendering the map, as in the following image:
![Colored sections rendered on map](media/how-to-use-best-practices-for-routing/show-traffic-sections-img.png)
Azure Maps currently provides two forms of route optimizations:
* Traveling salesman optimization, which changes the order of the waypoints to obtain the best order to visit each stop
-For multi-stop routing, up to 150 waypoints may be specified in a single route request. The starting and ending coordinate locations can be the same, as would be the case with a round trip. But you need to provide at least one additional waypoint to make the route calculation. Waypoints can be added to the query in-between the origin and destination coordinates.
+For multi-stop routing, up to 150 waypoints may be specified in a single route request. The starting and ending coordinate locations can be the same, as would be the case with a round trip. But you need to provide at least one more waypoint to make the route calculation. Waypoints can be added to the query in-between the origin and destination coordinates.
If you want to optimize the best order to visit the given waypoints, then you need to specify **computeBestOrder=true**. This scenario is also known as the traveling salesman optimization problem.
The response describes the path length to be 140,851 meters, and that it would t
![Non-optimized response](media/how-to-use-best-practices-for-routing/non-optimized-response-img.png)
-The image below illustrates the path resulting from this query. This path is one possible route. It's not the optimal path based on time or distance.
+The following image illustrates the path resulting from this query. This path is one possible route. It's not the optimal path based on time or distance.
![Non-optimized image](media/how-to-use-best-practices-for-routing/non-optimized-image-img.png)
The response describes the path length to be 91,814 meters, and that it would ta
![Optimized response](media/how-to-use-best-practices-for-routing/optimized-response-img.png)
-The image below illustrates the path resulting from this query.
+The following image illustrates the path resulting from this query.
![Optimized image](media/how-to-use-best-practices-for-routing/optimized-image-img.png)
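As an illustration, here's a sketch of such an optimized request; the waypoint coordinates are placeholders, and the `optimizedWaypoints` element in the response reports the recommended visiting order:

```js
// Sketch: optimize the waypoint visiting order with computeBestOrder=true.
// The query lists origin, two waypoints, and destination separated by colons.
const url = "https://atlas.microsoft.com/route/directions/json" +
  "?api-version=1.0" +
  "&subscription-key=<Your-Azure-Maps-Subscription-key>" +
  "&query=47.606544,-122.335407:47.759892,-122.201965:47.670682,-122.120415:47.606544,-122.335407" +
  "&computeBestOrder=true" +
  "&travelMode=car";

fetch(url)
  .then((response) => response.json())
  .then((data) => console.log(data.optimizedWaypoints)); // recommended visit order
```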
You might have situations where you want to reconstruct a route to calculate zer
3. Order the locations based on the distance from the start of the route 4. Add these locations as supporting points in a new route request to [Post Route Directions]. To learn more about the supporting points, see the [Post Route Directions API documentation].
-When calling [Post Route Directions], you can set the minimum deviation time or the distance constraints, along with the supporting points. Use these parameters if you want to offer alternative routes, but you also want to limit the travel time. When these constraints are used, the alternative routes will follow the reference route from the origin point for the given time or distance. In other words, the other routes diverge from the reference route per the given constraints.
+When calling [Post Route Directions], you can set the minimum deviation time or the distance constraints, along with the supporting points. Use these parameters if you want to offer alternative routes, but you also want to limit the travel time. When these constraints are used, the alternative routes follow the reference route from the origin point for the given time or distance. In other words, the other routes diverge from the reference route per the given constraints.
-The image below is an example of rendering alternative routes with specified deviation limits for the time and the distance.
+The following image is an example of rendering alternative routes with specified deviation limits for the time and the distance.
![Alternative routes](media/how-to-use-best-practices-for-routing/alternative-routes-img.png)
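The following sketch shows one way to send such a request; the coordinates are placeholders, and the body shape follows the supporting points format described in the [Post Route Directions API documentation]:

```js
// Sketch: POST a reference route as supporting points and constrain how far
// alternative routes may deviate from it.
const url = "https://atlas.microsoft.com/route/directions/json" +
  "?api-version=1.0" +
  "&subscription-key=<Your-Azure-Maps-Subscription-key>" +
  "&query=47.606544,-122.335407:47.670682,-122.120415" +
  "&maxAlternatives=2" +
  "&minDeviationTime=600"; // alternatives follow the reference for >= 600 seconds

fetch(url, {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({
    supportingPoints: {
      type: "GeometryCollection",
      geometries: [{
        type: "LineString",
        coordinates: [ // lon,lat points of the reference route
          [-122.335407, 47.606544],
          [-122.120415, 47.670682]
        ]
      }]
    }
  })
})
  .then((response) => response.json())
  .then((data) => console.log(data.routes.length)); // reference route + alternatives
```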
The Azure Maps Web SDK provides a [Service module]. This module is a helper libr
To learn more, please see: > [!div class="nextstepaction"]
-> [Azure Maps Route service](/rest/api/maps/route)
+> [Azure Maps Route service]
> [!div class="nextstepaction"]
-> [How to use the Service module](./how-to-use-services-module.md)
+> [How to use the Service module]
> [!div class="nextstepaction"]
-> [Show route on the map](./map-route.md)
+> [Show route on the map]
> [!div class="nextstepaction"]
-> [Azure Maps npm Package](https://www.npmjs.com/package/azure-maps-rest )
+> [Azure Maps npm Package]
-[Route service]: /rest/api/maps/route
[Azure Maps account]: quick-demo-map-app.md#create-an-azure-maps-account
-[subscription key]: quick-demo-map-app.md#get-the-subscription-key-for-your-account
-[Routing Coverage]: routing-coverage.md
-[Postman]: https://www.postman.com/downloads/
-[RouteType]: /rest/api/maps/route/postroutedirections#routetype
+[Azure Maps npm Package]: https://www.npmjs.com/package/azure-maps-rest
+[Azure Maps Route service]: /rest/api/maps/route
+[How to use the Service module]: how-to-use-services-module.md
[Point of Interest]: /rest/api/maps/search/getsearchpoi
-[Post Route Directions]: /rest/api/maps/route/postroutedirections
[Post Route Directions API documentation]: /rest/api/maps/route/postroutedirections#supportingpoints
+[Post Route Directions]: /rest/api/maps/route/postroutedirections
+[Postman]: https://www.postman.com/downloads/
+[Route service]: /rest/api/maps/route
+[RouteType]: /rest/api/maps/route/postroutedirections#routetype
+[Routing Coverage]: routing-coverage.md
[Service module]: /javascript/api/azure-maps-rest/
+[Show route on the map]: map-route.md
+[subscription key]: quick-demo-map-app.md#get-the-subscription-key-for-your-account
azure-maps How To Use Feedback Tool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-use-feedback-tool.md
Title: Provide data feedback to Azure Maps | Microsoft Azure Maps
+ Title: Provide data feedback to Azure Maps
+ description: Provide data feedback using Microsoft Azure Maps feedback tool.
Azure Maps has been available since May 2018. Azure Maps has been providing fresh map data, easy-to-use REST APIs, and powerful SDKs to support our enterprise customers with different kinds of business use cases. The real world is changing every second, and it's crucial for us to provide a factual digital representation to our customers. Our customers that are planning to open or close facilities need our maps to update promptly so they can efficiently plan for delivery, maintenance, or customer service at the right facilities. We have created the Azure Maps data feedback site to empower our customers to provide direct data feedback. Customers' data feedback goes directly to our data providers and their map editors. They can quickly evaluate and incorporate feedback into our mapping products.
-[Azure Maps Data feedback site](https://feedback.azuremaps.com) provides an easy way for our customers to provide map data feedback, especially on business points of interest and residential addresses. This article guides you on how to provide different kinds of feedback using the Azure Maps feedback site.
+[Azure Maps Data feedback site] provides an easy way for our customers to provide map data feedback, especially on business points of interest and residential addresses. This article guides you on how to provide different kinds of feedback using the Azure Maps feedback site.
-## Add a business place or a residential address
+## Add a business place or a residential address
-You may want to provide feedback about a missing point of interest or a residential address. There are two ways to do so. Open the Azure Map data feedback site, search for the missing location's coordinates, and then click "Add a place"
+You may want to provide feedback about a missing point of interest or a residential address. There are two ways to do so. Open the Azure Maps data feedback site, search for the missing location's coordinates, and then select **Add a place**.
![search missing location](./media/how-to-use-feedback-tool/search-poi.png)
-Or, you can interact with the map. Click on the location to drop a pin at the coordinate and click "Add a place".
+Or, you can interact with the map. Select the location to drop a pin at the coordinate, and then select **Add a place**.
![add pin](./media/how-to-use-feedback-tool/add-poi.png)
-Upon clicking, you'll be directed to a form to provide the corresponding details for the place.
+Once selected, you're directed to a form to provide the corresponding details for the place.
![add a place](./media/how-to-use-feedback-tool/add-a-place.png)
-## Fix a business place or a residential address
+## Fix a business place or a residential address
-The feedback site also allows you to search and locate a business place or an address. You can provide feedback to fix the address or the pin location, if they aren't correct. To provide feedback to fix the address, use the search bar to search for a business place or residential address. Click on the location of your interest from the results list. Click on "Fix this place".
+The feedback site also allows you to search and locate a business place or an address. You can provide feedback to fix the address or the pin location, if they aren't correct. To provide feedback to fix the address, use the search bar to search for a business place or residential address. Select the location of your interest from the results list, and then select **Fix this place**.
![search place to fix](./media/how-to-use-feedback-tool/fix-place.png)
-To provide feedback to fix the address, fill out the "Fix a place" form, and then click on the "submit" button.
+To provide feedback to fix the address, fill out the **Fix a place** form, then select **Submit**.
![fix form](./media/how-to-use-feedback-tool/fix-form.png)
-If the pin location for the place is wrong, check the checkbox on the "Fix a place" form that says "The pin location is incorrect". Move the pin to the correct location, and then click the "submit" button.
+If the pin location for the place is wrong, select the **The pin location is incorrect** checkbox. Move the pin to the correct location, and then select **Submit**.
![move pin location](./media/how-to-use-feedback-tool/move-pin.png)
-## Add a comment
+## Add a comment
-In addition to letting you search for a location, the feedback tool also lets you add a free form text comment for details related to the location. To add a comment, search for the location or click on the location. Click "Add a comment", write a comment, and then click "Submit".
+In addition to letting you search for a location, the feedback tool also lets you add a free-form text comment for details related to the location. To add a comment, search for the location or select it on the map, write a comment in the **Add a comment** field, and then select **Submit**.
![add comment](./media/how-to-use-feedback-tool/add-comment.png)
-## Track status
+## Track status
-You can also track the status of your request by checking the "I want to track status" box and providing your email while making a request. You will receive a tracking link in the email that provides an up-to-date status of your request.
+You can also track the status of your request by selecting the **I want to track status** box and providing your email while making a request. You receive a tracking link in the email that provides an up-to-date status of your request.
![feedback status](./media/how-to-use-feedback-tool/feedback-status.png) - ## Next steps
-To post any technical questions related to Azure Maps, visit:
+For any technical questions related to Azure Maps, see [Microsoft Q & A].
-* [Microsoft Q & A](/answers/topics/azure-maps.html)
+[Azure Maps Data feedback site]: https://feedback.azuremaps.com
+[Microsoft Q & A]: /answers/topics/azure-maps.html
azure-maps Release Notes Indoor Module https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/release-notes-indoor-module.md
This document contains information about new features and other changes to the Azure Maps Indoor Module.
+## [0.2.1]
+
+### New features (0.2.1)
+
+- multiple statesets are now supported for map configurations with multiple tilesets. Instead of a single stateset ID, a mapping between tileset IDs and stateset IDs can be passed:
+
+ ```js
+ indoorManager.setOptions({
+ statesetId: {
+ 'tilesetId1': 'statesetId1',
+ 'tilesetId2': 'statesetId2'
+ }
+ });
+
+ indoorManager.setDynamicStyling(true)
+ ```
+
+- autofocus and autofocusOptions support: when you set autofocus on `IndoorManagerOptions`, the camera is focused on the indoor facilities once the indoor map is loaded. Camera options can be further customized via autofocus options:
+
+ ```js
+ indoorManager.setOptions({
+ autofocus: true,
+ autofocusOptions: {
+ padding: { top: 50, bottom: 50, left: 50, right: 50 }
+ }
+ });
+ ```
+
+- focusCamera support: instead of `autofocus`, you can call `focusCamera` directly. When an indoor map configuration is used, a tilesetId can be provided to focus on a specific facility only; otherwise, bounds that enclose all facilities are used:
+
+ ```js
+ indoorManager.focusCamera({
+ type: 'ease',
+ duration: 1000,
+ padding: { top: 50, bottom: 50, left: 50, right: 50 }
+ })
+ ```
+
+- level name labels in LevelControl (in addition to `ordinal`, LevelControl can now display level names derived from the 'name' property of level features):
+
+ ```js
+ indoorManager.setOptions({
+ levelControl: new LevelControl({ levelLabel: 'name' })
+ });
+ ```
+
+### Changes (0.2.1)
+
+- non level-bound features are now displayed
+
+### Bug fixes (0.2.1)
+
+- fix facility state not initialized when tile loads don't emit `sourcedata` event
+
+- level preference sorting fixed
+ ## [0.2.0] ### New features (0.2.0)
Stay up to date on Azure Maps:
> [Azure Maps Blog] [drawing package 2.0]: ./drawing-package-guide.md
+[0.2.1]: https://www.npmjs.com/package/azure-maps-indoor/v/0.2.1
[0.2.0]: https://www.npmjs.com/package/azure-maps-indoor/v/0.2.0 [Azure Maps Creator Samples]: https://samples.azuremaps.com/?search=creator [Azure Maps Blog]: https://techcommunity.microsoft.com/t5/azure-maps-blog/bg-p/AzureMapsBlog
azure-monitor Prometheus Alerts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/prometheus-alerts.md
Last updated 09/15/2022
# Prometheus alerts in Azure Monitor
-Prometheus alert rules allow you to define alert conditions, using queries which are written in Prometheus Query Language (Prom QL) that are applied on Prometheus metrics stored in [Azure Monitor managed services for Prometheus](../essentials/prometheus-metrics-overview.md). Whenever the alert query results in one or more time series meeting the condition, the alert counts as pending for these metric and label sets. A pending alert becomes active after a user-defined period of time during which all the consecutive query evaluations for the respective time series meet the alert condition. Once an alert becomes active, it is fired and would trigger your actions or notifications of choice, as defined in the Azure Action Groups configured in your alert rule.
+Prometheus alert rules allow you to define alert conditions, using queries written in Prometheus Query Language (PromQL). The rule queries are applied on Prometheus metrics stored in [Azure Monitor managed services for Prometheus](../essentials/prometheus-metrics-overview.md). Whenever the alert query results in one or more time series meeting the condition, the alert counts as pending for these metric and label sets. A pending alert becomes active after a user-defined period of time during which all the consecutive query evaluations for the respective time series meet the alert condition. Once an alert becomes active, it fires and triggers your actions or notifications of choice, as defined in the Azure Action Groups configured in your alert rule.
> [!NOTE] > Azure Monitor managed service for Prometheus, including Prometheus metrics, is currently in public preview and does not yet have all of its features enabled. Prometheus metrics are displayed with alerts generated by other types of alert rules, but they currently have a different experience for creating and managing them. ## Create Prometheus alert rule
-Prometheus alert rules are created as part of a Prometheus rule group which is stored in [Azure Monitor workspace](../essentials/azure-monitor-workspace-overview.md). See [Azure Monitor managed service for Prometheus rule groups](../essentials/prometheus-rule-groups.md) for details.
+Prometheus alert rules are created as part of a Prometheus rule group, which is applied on the [Azure Monitor workspace](../essentials/azure-monitor-workspace-overview.md). See [Azure Monitor managed service for Prometheus rule groups](../essentials/prometheus-rule-groups.md) for details.
## View Prometheus alerts View fired and resolved Prometheus alerts in the Azure portal with other alert types. Use the following steps to filter on only Prometheus alerts.
View fired and resolved Prometheus alerts in the Azure portal with other alert t
:::image type="content" source="media/prometheus-metric-alerts/view-alerts.png" lightbox="media/prometheus-metric-alerts/view-alerts.png" alt-text="Screenshot of a list of alerts in Azure Monitor with a filter for Prometheus alerts."::: 4. Click the alert name to view the details of a specific fired/resolved alert.
-## Next steps
+
+## Explore Prometheus alerts in Grafana
+1. In the fired alert's details pane, select the **View query in Grafana** link.
+2. A browser tab opens, taking you to the [Azure Managed Grafana](../../managed-grafan) instance connected to your Azure Monitor workspace.
+3. Grafana opens in Explore mode, presenting the chart for the alert rule expression query that triggered the alert, around the alert firing time. You can further explore the query in Grafana to identify the cause of the alert.
+> [!NOTE]
+> 1. If no Azure Managed Grafana instance is connected to your Azure Monitor workspace, a link to Grafana isn't available.
+> 2. To view the alert query in Explore mode, you must have either the Grafana Admin or Grafana Editor role. If you don't have the needed permissions, you get a Grafana error.
+
+## Next steps
- [Create a Prometheus rule group](../essentials/prometheus-rule-groups.md).
azure-monitor Asp Net Trace Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/asp-net-trace-logs.md
description: Search logs generated by Trace, NLog, or Log4Net.
ms.devlang: csharp Previously updated : 03/22/2023 Last updated : 04/18/2023 - # Explore .NET/.NET Core and Python trace logs in Application Insights
-Send diagnostic tracing logs for your ASP.NET/ASP.NET Core application from ILogger, NLog, log4Net, or System.Diagnostics.Trace to [Azure Application Insights][start]. For Python applications, send diagnostic tracing logs by using AzureLogHandler in OpenCensus Python for Azure Monitor. You can then explore and search for them. Those logs are merged with the other log files from your application. You can use them to identify traces that are associated with each user request and correlate them with other events and exception reports.
+Send diagnostic tracing logs for your ASP.NET/ASP.NET Core application from ILogger, NLog, log4Net, or System.Diagnostics.Trace to Azure Application Insights. For Python applications, send diagnostic tracing logs by using AzureLogHandler in OpenCensus Python for Azure Monitor. You can then explore and search for them. Those logs are merged with the other log files from your application. You can use them to identify traces that are associated with each user request and correlate them with other events and exception reports.
> [!NOTE] > Do you need the log-capture module? It's a useful adapter for third-party loggers. But if you aren't already using NLog, log4Net, or System.Diagnostics.Trace, consider calling [**Application Insights TrackTrace()**](./api-custom-events-metrics.md#tracktrace) directly.
Install your chosen logging framework in your project, which should result in an
## Configure Application Insights to collect logs
-[Add Application Insights to your project](./asp-net.md) if you haven't done that yet. You'll see an option to include the log collector.
+[Add Application Insights to your project](./asp-net.md) if you haven't done that yet. When you do, you see an option to include the log collector.
Or right-click your project in Solution Explorer to **Configure Application Insights**. Select the **Configure trace collection** option.
You can also add a severity level to your message. And, like other telemetry, yo
new Dictionary<string, string> { { "database", "db.ID" } }); ```
-Now you can easily filter out in [Search][diagnostic] all the messages of a particular severity level that relate to a particular database.
+Now, in **Transaction Search**, you can easily filter all the messages of a particular severity level that relate to a particular database.
## AzureLogHandler for OpenCensus Python
logger.warning('Hello, World!')
Run your app in debug mode or deploy it live.
-In your app's overview pane in the [Application Insights portal][portal], select [Search][diagnostic].
+In your app's overview pane in the Application Insights portal, select **Transaction Search**.
You can, for example:
The Application Insights Java agent collects logs from Log4j, Logback, and java.
### <a name="emptykey"></a>Why do I get the "Instrumentation key cannot be empty" error message?
-You probably installed the logging adapter NuGet package without installing Application Insights. In Solution Explorer, right-click *ApplicationInsights.config*, and select **Update Application Insights**. You'll be prompted to sign in to Azure and create an Application Insights resource or reuse an existing one. That should fix the problem.
+You probably installed the logging adapter NuGet package without installing Application Insights. In Solution Explorer, right-click *ApplicationInsights.config*, and select **Update Application Insights**. You're prompted to sign in to Azure and create an Application Insights resource or reuse an existing one. That should fix the problem.
### Why can I see traces but not other events in diagnostic search?
Perhaps your application sends voluminous amounts of data and you're using the A
## <a name="add"></a>Next steps
-* [Diagnose failures and exceptions in ASP.NET][exceptions]
-* [Learn more about Search][diagnostic]
-* [Set up availability and responsiveness tests][availability]
-* [Troubleshooting][qna]
+* [Diagnose failures and exceptions in ASP.NET](asp-net-exceptions.md)
+* [Learn more about Transaction Search](diagnostic-search.md)
+* [Set up availability and responsiveness tests](availability-overview.md)
<!--Link references-->
azure-monitor Azure Monitor Operations Manager https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/azure-monitor-operations-manager.md
If your monitoring of a business application is limited to functionality provide
- Collect detailed application usage and performance data such as response time, failure rates, and request rates. - Collect browser data such as page views and load performance. - Detect exceptions and drill into stack trace and related requests.-- Perform advanced analysis using features such as [distributed tracing](app/distributed-tracing.md) and [smart detection](alerts/proactive-diagnostics.md).
+- Perform advanced analysis using features such as [distributed tracing](app/distributed-tracing-telemetry-correlation.md) and [smart detection](alerts/proactive-diagnostics.md).
- Use [metrics explorer](essentials/metrics-getting-started.md) to interactively analyze performance data. - Use [log queries](logs/log-query-overview.md) to interactively analyze collected telemetry together with data collected for Azure services and VM insights.
azure-monitor Best Practices Cost https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/best-practices-cost.md
Title: Optimize costs in Azure Monitor
+ Title: Cost optimization in Azure Monitor
description: Recommendations for reducing costs in Azure Monitor.
Last updated 03/29/2023
-# Optimize costs in Azure Monitor
-You can significantly reduce your cost for Azure Monitor by understanding your different configuration options and opportunities to reduce the amount of data that it collects. Before you use this article, you should see [Azure Monitor cost and usage](usage-estimated-costs.md) to understand the different ways that Azure Monitor charges and how to view your monthly bill.
+# Cost optimization in Azure Monitor
+Cost optimization refers to ways to reduce unnecessary expenses and improve operational efficiencies. You can significantly reduce your cost for Azure Monitor by understanding your different configuration options and opportunities to reduce the amount of data that it collects. Before you use this article, you should see [Azure Monitor cost and usage](usage-estimated-costs.md) to understand the different ways that Azure Monitor charges and how to view your monthly bill.
-> [!NOTE]
-> This article describes [Cost optimization](/azure/architecture/framework/cost/) for Azure Monitor as part of the [Azure Well-Architected Framework](/azure/architecture/framework/). This is a set of guiding tenets that can be used to improve the quality of a workload. The framework consists of five pillars of architectural excellence:
->
-> - Reliability
-> - Security
-> - Cost Optimization
-> - Operational Excellence
-> - Performance Efficiency
+This article describes [Cost optimization](/azure/architecture/framework/cost/) for Azure Monitor as part of the [Azure Well-Architected Framework](/azure/architecture/framework/). This is a set of guiding tenets that can be used to improve the quality of a workload. The framework consists of five pillars of architectural excellence:
-## Design considerations
+- Reliability
+- Security
+- Cost Optimization
+- Operational Excellence
+- Performance Efficiency
-Azure Monitor includes the following design considerations related to cost:
-- Log Analytics workspace architecture<br><br>You can start using Azure Monitor with a single Log Analytics workspace by using default options. As your monitoring environment grows, you'll need to make decisions about whether to have multiple services share a single workspace or create multiple workspaces. There can be cost implications with your workspace design, most notably when you combine different services such as operational data from Azure Monitor and security data from Microsoft Sentinel. This may include trade-offs between functionality and cost depending on your particular priorities.<br><br>See [Design a Log Analytics workspace architecture](logs/workspace-design.md) for a list of criteria to consider when designing a workspace architecture.
+## Azure Monitor Logs
-## Checklist
-**Log Analytics workspace configuration**
+## Azure resources
-> [!div class="checklist"]
-> - Configure pricing tier or dedicated cluster to optimize your cost depending on your usage.
-> - Configure tables used for debugging, troubleshooting, and auditing as Basic Logs.
-> - Configure data retention and archiving.
-**Data collection**
+### Design checklist
> [!div class="checklist"]
-> - Use diagnostic settings and transformations to collect only critical resource log data from Azure resources.
-> - Configure VM agents to collect only critical events.
-> - Use transformations to filter resource logs for [supported tables](logs/tables-feature-support.md).
-> - Ensure that VMs aren't sending data to multiple workspaces.
+> - Collect only critical resource log data from Azure resources.
-**Monitor usage**
-> [!div class="checklist"]
-> - Send alert when data collection is high.
-> - Analyze your collected data at regular intervals to determine if there are opportunities to further reduce your cost.
-> - Consider a daily cap as a preventative measure to ensure that you don't exceed a particular budget.
+### Configuration recommendations
+| Recommendation | Benefit |
+|:|:|
+| Collect only critical resource log data from Azure resources. | When you create [diagnostic settings](essentials/diagnostic-settings.md) to send [resource logs](essentials/resource-logs.md) for your Azure resources to a Log Analytics database, only specify those categories that you require. Since diagnostic settings don't allow granular filtering of resource logs, you can use a [workspace transformation](essentials/data-collection-transformations.md?#workspace-transformation-dcr) to further filter unneeded data for those resources that use a [supported table](logs/tables-feature-support.md). See [Diagnostic settings in Azure Monitor](essentials/diagnostic-settings.md#controlling-costs) for details on how to configure diagnostic settings and using transformations to filter their data. |
-## Configuration recommendations
+## Virtual machines
+### Design checklist
-### Log Analytics workspace configuration
-You may be able to significantly reduce your costs by optimizing the configuration of your Log Analytics workspaces. You can commit to a minimum amount of data collection in exchange for a reduced rate, and optimize your costs for the functionality and retention of data in particular tables.
+> [!div class="checklist"]
+> - Configure VM agents to collect only important events.
+> - Ensure that VMs aren't sending data to multiple workspaces.
+> - Use transformations to filter unnecessary data from collected events.
-| Recommendation | Description |
-|:|:|
-| Configure pricing tier or dedicated cluster for your Log Analytics workspaces. | By default, Log Analytics workspaces will use pay-as-you-go pricing with no minimum data volume. If you collect enough amount of data, you can significantly decrease your cost by using a [commitment tier](logs/cost-logs.md#commitment-tiers) or [dedicated cluster](logs/logs-dedicated-clusters.md), which allows you to commit to a daily minimum of data collected in exchange for a lower rate.<br><br>See [Azure Monitor Logs cost calculations and options](logs/cost-logs.md) for details on commitment tiers and guidance on determining which is most appropriate for your level of usage. See [Usage and estimated costs](usage-estimated-costs.md#usage-and-estimated-costs) to view estimated costs for your usage at different pricing tiers.
-| Configure tables used for debugging, troubleshooting, and auditing as Basic Logs. | Tables in a Log Analytics workspace configured for [Basic Logs](logs/basic-logs-configure.md) have a lower ingestion cost in exchange for limited features and a charge for log queries. If you query these tables infrequently, this query cost can be more than offset by the reduced ingestion cost.<br><br>See [Configure Basic Logs in Azure Monitor](logs/basic-logs-configure.md) for more information about Basic Logs and [Query Basic Logs in Azure Monitor](.//logs/basic-logs-query.md) for details on query limitations. |
-| Configure data retention and archiving. | There is a charge for retaining data in a Log Analytics workspace beyond the default of 30 days (90 days in Sentinel if enabled on the workspace). If you need to retain data for compliance reasons or for occasional investigation or analysis of historical data, configure [Archived Logs](logs/data-retention-archive.md), which allows you to retain data for up to seven years at a reduced cost.<br><br>See [Configure data retention and archive policies in Azure Monitor Logs](logs/data-retention-archive.md) for details on how to configure your workspace and how to work with archived data. |
+### Configuration recommendations
+| Recommendation | Benefit |
+|:|:|
+| Configure VM agents to collect only important events. | Virtual machines can vary significantly in the amount of data they collect, depending on the amount of telemetry generated by the applications and services they have installed. See [Monitor virtual machines with Azure Monitor: Workloads](vm/monitor-virtual-machine-data-collection.md#controlling-costs) for guidance on data to collect and strategies for using [XPath queries](agents/data-collection-rule-azure-monitor-agent.md#filter-events-using-xpath-queries) to limit it.|
+| Ensure that VMs aren't sending duplicate data. | Any configuration that uses multiple agents on a single machine or where you multi-home agents to send data to multiple workspaces may incur charges for the same data multiple times. If you do multi-home agents, make sure you're sending unique data to each workspace. See [Analyze usage in Log Analytics workspace](logs/analyze-usage.md) for guidance on analyzing your collected data to make sure you aren't collecting duplicate data. If you're migrating between agents, continue to use the Log Analytics agent until you [migrate to the Azure Monitor agent](./agents/azure-monitor-agent-migration.md) rather than using both together unless you can ensure that each is collecting unique data. |
+| Use transformations to filter unnecessary data from collected events. | [Transformations](essentials/data-collection-transformations.md) can be used in data collection rules to remove unnecessary data or even entire columns from events collected from the virtual machine which can significantly reduce the cost for their ingestion and retention. |
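As a hedged illustration of the XPath and transformation recommendations in this table, the following sketch creates a data collection rule from a JSON file that collects only Critical, Error, and Warning Windows events, then drops everything below Error at ingestion time. The resource IDs and names are placeholders, the command may require the `monitor-control-service` CLI extension, and the exact rule-file shape can vary by CLI version, so verify it before relying on it.

```bash
# Sketch: a DCR that limits Windows event collection with an XPath query,
# then filters further with an ingestion-time transformation.
# All IDs and names are illustrative placeholders.
cat > dcr.json <<'EOF'
{
  "properties": {
    "dataSources": {
      "windowsEventLogs": [
        {
          "name": "importantEventsOnly",
          "streams": [ "Microsoft-Event" ],
          "xPathQueries": [ "System!*[System[(Level=1 or Level=2 or Level=3)]]" ]
        }
      ]
    },
    "destinations": {
      "logAnalytics": [
        {
          "name": "myWorkspace",
          "workspaceResourceId": "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.OperationalInsights/workspaces/<workspace>"
        }
      ]
    },
    "dataFlows": [
      {
        "streams": [ "Microsoft-Event" ],
        "destinations": [ "myWorkspace" ],
        "transformKql": "source | where EventLevelName in ('Error', 'Critical')"
      }
    ]
  }
}
EOF

az monitor data-collection rule create \
  --resource-group <rg> \
  --location <region> \
  --name "vm-cost-optimized-dcr" \
  --rule-file dcr.json
```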
+## Container insights
-### Data collection
-Since Azure Monitor charges for the collection of data, your goal should be to collect the minimal amount of data required to meet your monitoring requirements. You have an opportunity to reduce your monitoring costs by modifying your configuration to stop collecting data that you're not using for alerting or analysis.
+### Design checklist
-#### Azure resources
+> [!div class="checklist"]
+> - Configure agent collection to remove unneeded data.
+> - Modify settings for collection of metric data.
+> - Limit Prometheus metrics collected.
+> - Configure Basic Logs.
+### Configuration recommendations
-| Recommendation | Description |
+| Recommendation | Benefit |
|:|:|
-| Collect only critical resource log data from Azure resources. | When you create [diagnostic settings](essentials/diagnostic-settings.md) to send [resource logs](essentials/resource-logs.md) for your Azure resources to a Log Analytics database, only specify those categories that you require. Since diagnostic settings don't allow granular filtering of resource logs, you can use a [workspace transformation](essentials/data-collection-transformations.md?#workspace-transformation-dcr) to further filter unneeded data for those resources that use a [supported table](logs/tables-feature-support.md). See [Diagnostic settings in Azure Monitor](essentials/diagnostic-settings.md#controlling-costs) for details on how to configure diagnostic settings and using transformations to filter their data. |
+| Configure agent collection to remove unneeded data. | Analyze the data collected by Container insights as described in [Controlling ingestion to reduce cost](containers/container-insights-cost.md#control-ingestion-to-reduce-cost) and adjust your configuration to stop collection of data in ContainerLogs you don't need. |
+| Modify settings for collection of metric data. | You can reduce your costs by modifying the default collection settings Container insights uses for the collection of metric data. See [Enable cost optimization settings (preview)](containers/container-insights-cost-config.md) for details on modifying both the frequency that metric data is collected and the namespaces that are collected. |
+| Limit Prometheus metrics collected. | If you configured Prometheus metric scraping, then follow the recommendations at [Controlling ingestion to reduce cost](containers/container-insights-cost.md#prometheus-metrics-scraping) to optimize your data collection for cost. |
+| Configure Basic Logs. | [Convert your schema to ContainerLogV2](containers/container-insights-logging-v2.md) which is compatible with Basic logs and can provide significant cost savings as described in [Controlling ingestion to reduce cost](containers/container-insights-cost.md#configure-basic-logs). |
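As a rough sketch of the Basic Logs recommendation above, this CLI command switches the ContainerLogV2 table to the Basic table plan. The names are placeholders, the `--plan` parameter is assumed to be available in your CLI version, and the ContainerLogV2 schema must already be enabled.

```bash
# Sketch: move high-volume container logs to the cheaper Basic table plan.
# Names are illustrative placeholders.
az monitor log-analytics workspace table update \
  --resource-group <rg> \
  --workspace-name <workspace> \
  --name ContainerLogV2 \
  --plan Basic
```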
-#### Virtual machines
-| Recommendation | Description |
-|:|:|
-| Configure VM agents to collect only critical events. | Virtual machines can vary significantly in the amount of data they collect, depending on the amount of telemetry generated by the applications and services they have installed. See [Monitor virtual machines with Azure Monitor: Workloads](vm/monitor-virtual-machine-data-collection.md#controlling-costs) for guidance on data to collect and strategies for using XPath queries and transformations to limit it.|
-| Ensure that VMs aren't sending duplicate data. | Any configuration that uses multiple agents on a single machine or where you multi-home agents to send data to multiple workspaces may incur charges for the same data multiple times. If you do multi-home agents, make sure you're sending unique data to each workspace. See [Analyze usage in Log Analytics workspace](logs/analyze-usage.md) for guidance on analyzing your collected data to make sure you aren't collecting duplicate data. If you're migrating between agents, continue to use the Log Analytics agent until you [migrate to the Azure Monitor agent](./agents/azure-monitor-agent-migration.md) rather than using both together unless you can ensure that each is collecting unique data. |
+## Application Insights
-#### Container insights
-
-| Recommendation | Description |
-|:|:|
-| Configure agent collection to remove unneeded data. | Analyze the data collected by Container insights as described in [Controlling ingestion to reduce cost](containers/container-insights-cost.md#control-ingestion-to-reduce-cost) and adjust your configuration to stop collection of data in ContainerLogs you don't need. |
-| Modify settings for collection of metric data | You can reduce your costs by modifying the default collection settings Container insights uses for the collection of metric data. See [Enable cost optimization settings (preview)](containers/container-insights-cost-config.md) for details on modifying both the frequency that metric data is collected and the namespaces that are collected. |
-| Limit Prometheus metrics collected | If you configured Prometheus metric scraping, then follow the recommendations at [Controlling ingestion to reduce cost](containers/container-insights-cost.md#prometheus-metrics-scraping) to optimize your data collection for cost. |
-| Configure Basic Logs | [Convert your schema to ContainerLogV2](containers/container-insights-logging-v2.md) which is compatible with Basic logs and can provide significant cost savings as described in [Controlling ingestion to reduce cost](containers/container-insights-cost.md#configure-basic-logs). |
+### Design checklist
+> [!div class="checklist"]
+> - Change to Workspace-based Application Insights.
+> - Use sampling to tune the amount of data collected.
+> - Limit the number of Ajax calls.
+> - Disable unneeded modules.
+> - Pre-aggregate metrics from any calls to TrackMetric.
+> - Limit the use of custom metrics.
+> - Ensure use of updated SDKs.
-#### Application Insights
+### Configuration recommendations
-| Recommendation | Description |
+| Recommendation | Benefit |
|:|:|
-| Change to Workspace-based Application Insights | Ensure that your Application Insights resources are [Workspace-based](app/create-workspace-resource.md) so that they can leveage new cost savings tools such as [Basic Logs](logs/basic-logs-configure.md), [Commitment Tiers](logs/cost-logs.md#commitment-tiers), [Retention by data type and Data Archive](logs/data-retention-archive.md#set-retention-and-archive-policy-by-table). |
+| Change to Workspace-based Application Insights | Ensure that your Application Insights resources are [Workspace-based](app/create-workspace-resource.md) so that they can leverage new cost savings tools such as [Basic Logs](logs/basic-logs-configure.md), [Commitment Tiers](logs/cost-logs.md#commitment-tiers), [Retention by data type and Data Archive](logs/data-retention-archive.md#set-retention-and-archive-policy-by-table). An illustrative CLI sketch follows this table. |
| Use sampling to tune the amount of data collected. | [Sampling](app/sampling.md) is the primary tool you can use to tune the amount of data collected by Application Insights. Use sampling to reduce the amount of telemetry that's sent from your applications with minimal distortion of metrics. |
| Limit the number of Ajax calls. | [Limit the number of Ajax calls](app/javascript.md#configuration) that can be reported in every page view or disable Ajax reporting. If you disable Ajax calls, you'll be disabling [JavaScript correlation](app/javascript.md#enable-distributed-tracing) too. |
| Disable unneeded modules. | [Edit ApplicationInsights.config](app/configuration-with-applicationinsights-config.md) to turn off collection modules that you don't need. For example, you might decide that performance counters or dependency data aren't required. |
Since Azure Monitor charges for the collection of data, your goal should be to c
| Limit the use of custom metrics. | The Application Insights option to [Enable alerting on custom metric dimensions](app/pre-aggregated-metrics-log-metrics.md#custom-metrics-dimensions-and-pre-aggregation) can increase costs. Using this option can result in the creation of more pre-aggregation metrics. |
| Ensure use of updated SDKs. | Earlier versions of the ASP.NET Core SDK and Worker Service SDK [collect many counters by default](app/eventcounters.md#default-counters-collected), which were collected as custom metrics. Use later versions to specify [only required counters](app/eventcounters.md#customizing-counters-to-be-collected). |
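To illustrate the first recommendation in this table, here's a minimal CLI sketch that creates a workspace-based Application Insights resource by associating it with a Log Analytics workspace. It assumes the `application-insights` CLI extension, and all names are placeholders rather than a definitive procedure.

```bash
# Sketch: create a workspace-based Application Insights resource.
# Assumes the application-insights CLI extension; names are placeholders.
az extension add --name application-insights
az monitor app-insights component create \
  --app <app-name> \
  --location <region> \
  --resource-group <rg> \
  --workspace "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.OperationalInsights/workspaces/<workspace>"
```

An existing classic resource can similarly be associated with a workspace through `az monitor app-insights component update --workspace`, though you should verify the migration behavior for your resource first.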
-#### All log data collection
-
-| Recommendation | Description |
-|:|:|
-| Remove unnecessary data during data ingestion | After following all of the previous recommendations, consider using Azure Monitor [data collection transformations](essentials/data-collection-transformations.md) to reduce the size of your data during ingestion. |
--
-## Monitor workspace and analyze usage
-
-After you've configured your environment and data collection for cost optimization, you need to continue to monitor it to ensure that you don't experience unexpected increases in billable usage. You should also analyze your usage regularly to determine if you have other opportunities to further filter out collected data that hasn't proven to be useful.
--
-| Recommendation | Description |
-|:|:|
-| Send alert when data collection is high. | To avoid unexpected bills, you should be proactively notified anytime you experience excessive usage. Notification allows you to address any potential anomalies before the end of your billing period. See [Send alert when data collection is high](logs/analyze-usage.md#send-alert-when-data-collection-is-high) for details. |
-| Analyze collected data | Periodically analyze data collection using methods in [Analyze usage in Log Analytics workspace](logs/analyze-usage.md) to determine if there's additional configuration that can decrease your usage further. This is particularly important when you add a new set of data sources, such as a new set of virtual machines or onboard a new service. |
-| Consider a daily cap as a preventative measure to ensure that you don't exceed a particular budget. | A [daily cap](logs/daily-cap.md) disables data collection in a Log Analytics workspace for the rest of the day after your configured limit is reached. This shouldn't be used as a method to reduce costs as described in [When to use a daily cap](logs/daily-cap.md). See [Set daily cap on Log Analytics workspace](logs/daily-cap.md) for information on how the daily cap works and how to configure one. |
-- ## Next step
azure-monitor Best Practices Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/best-practices-logs.md
+
+ Title: Best practices for Azure Monitor Logs
+description: Provides a template for a Well-Architected Framework (WAF) article specific to Log Analytics workspaces in Azure Monitor.
+++ Last updated : 03/29/2023+++
+# Best practices for Azure Monitor Logs
+This article provides architectural best practices for Azure Monitor Logs. The guidance is based on the five pillars of architecture excellence described in [Azure Well-Architected Framework](/azure/architecture/framework/).
+++
+## Reliability
+In the cloud, we acknowledge that failures happen. Instead of trying to prevent failures altogether, the goal is to minimize the effects of a single failing component. Use the following information to minimize failure of your Log Analytics workspaces and to protect the data they collect.
+++
+## Security
+Security is one of the most important aspects of any architecture. Azure Monitor provides features to employ both the principle of least privilege and defense-in-depth. Use the following information to maximize the security of your Log Analytics workspaces and ensure that only authorized users access collected data.
+++
+## Cost optimization
+Cost optimization refers to ways to reduce unnecessary expenses and improve operational efficiencies. You can significantly reduce your cost for Azure Monitor by understanding your different configuration options and opportunities to reduce the amount of data that it collects. See [Azure Monitor cost and usage](usage-estimated-costs.md) to understand the different ways that Azure Monitor charges and how to view your monthly bill.
+
+> [!NOTE]
+> See [Optimize costs in Azure Monitor](best-practices-cost.md) for cost optimization recommendations across all features of Azure Monitor.
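As one hedged example of these levers, the following CLI sketch sets a shorter retention period and a daily ingestion cap on a workspace. The values and names are illustrative only; note that a daily cap stops data collection once it's reached, so treat it as a budget guardrail rather than a primary cost-reduction method.

```bash
# Sketch: workspace-level cost controls. Values and names are illustrative.
# --retention-time is retention in days; --quota is the daily ingestion cap in GB.
az monitor log-analytics workspace update \
  --resource-group <rg> \
  --workspace-name <workspace> \
  --retention-time 30 \
  --quota 10
```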
+++
+## Operational excellence
+Operational excellence refers to the operations processes required to keep a service running reliably in production. Use the following information to minimize the operational requirements for supporting Log Analytics workspaces.
+++
+## Performance efficiency
+Performance efficiency is the ability of your workload to scale to meet the demands placed on it by users in an efficient manner. Use the following information to ensure that your Log Analytics workspaces and log queries are configured for maximum performance.
++
+## Next step
+
+- [Get best practices for a complete deployment of Azure Monitor](best-practices.md).
azure-monitor Data Platform https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/data-platform.md
Read more about Azure Monitor logs including their sources of data in [Logs in A
Traces are a series of related events that follow a user request through a distributed system. They can be used to determine the behavior of application code and the performance of different transactions. While logs will often be created by individual components of a distributed system, a trace measures the operation and performance of your application across the entire set of components.
-Distributed tracing in Azure Monitor is enabled with the [Application Insights SDK](app/distributed-tracing.md). Trace data is stored with other application log data collected by Application Insights. This way it's available to the same analysis tools as other log data including log queries, dashboards, and alerts.
+Distributed tracing in Azure Monitor is enabled with the [Application Insights SDK](app/distributed-tracing-telemetry-correlation.md). Trace data is stored with other application log data collected by Application Insights. This way it's available to the same analysis tools as other log data including log queries, dashboards, and alerts.
-Read more about distributed tracing at [What is distributed tracing?](app/distributed-tracing.md).
+Read more about distributed tracing at [What is distributed tracing?](app/distributed-tracing-telemetry-correlation.md).
### Changes
azure-monitor Data Sources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/data-sources.md
When you enable Application Insights for an application by installing an instrum
| Destination | Description | Reference |
|:|:|:|
| Azure Monitor Logs | Operational data about your application including page views, application requests, exceptions, and traces. | [Analyze log data in Azure Monitor](logs/log-query-overview.md) |
-| | Dependency information between application components to support Application Map and telemetry correlation. | [Telemetry correlation in Application Insights](app/correlation.md) <br> [Application Map](app/app-map.md) |
+| | Dependency information between application components to support Application Map and telemetry correlation. | [Telemetry correlation in Application Insights](app/distributed-tracing-telemetry-correlation.md) <br> [Application Map](app/app-map.md) |
| | Results of availability tests that test the availability and responsiveness of your application from different locations on the public Internet. | [Monitor availability and responsiveness of any web site](/previous-versions/azure/azure-monitor/app/monitor-web-app-availability) |
| Azure Monitor Metrics | Application Insights collects metrics describing the performance and operation of the application in addition to custom metrics that you define in your application into the Azure Monitor metrics database. | [Log-based and pre-aggregated metrics in Application Insights](app/pre-aggregated-metrics-log-metrics.md)<br>[Application Insights API for custom events and metrics](app/api-custom-events-metrics.md) |
| Azure Monitor Change Analysis | Change Analysis detects and provides insights on various types of changes in your application. | [Use Change Analysis in Azure Monitor](./change/change-analysis.md) |
azure-monitor Stream Monitoring Data Event Hubs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/stream-monitoring-data-event-hubs.md
Before you configure streaming for any data source, you need to [create an Event
| [Operating system (guest)](../data-sources.md#operating-system-guest) | Azure virtual machines | Install the [Azure Diagnostics extension](../agents/diagnostics-extension-overview.md) on Windows and Linux virtual machines in Azure. For more information, see [Streaming Azure Diagnostics data in the hot path by using event hubs](../agents/diagnostics-extension-stream-event-hubs.md) for details on Windows VMs. See [Use Linux Diagnostic extension to monitor metrics and logs](../../virtual-machines/extensions/diagnostics-linux.md#protected-settings) for details on Linux VMs. |
| [Application code](../data-sources.md#application-code) | Application Insights | Use diagnostic settings to stream to event hubs. This tier is only available with workspace-based Application Insights resources. For help with setting up workspace-based Application Insights resources, see [Workspace-based Application Insights resources](../app/create-workspace-resource.md#workspace-based-application-insights-resources) and [Migrate to workspace-based Application Insights resources](../app/convert-classic-resource.md#migrate-to-workspace-based-application-insights-resources).|
+## Stream diagnostics data
+
+Use diagnostic settings to stream logs and metrics to Event Hubs.
+For information on how to set up diagnostic settings, see [Create diagnostic settings](./diagnostic-settings.md?tabs=portal#create-diagnostic-settings).
+
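For example, such a diagnostic setting can be created from the CLI. The following is a minimal sketch with placeholder resource IDs and an assumed `AuditEvent` log category; the authorization rule must grant send rights on the event hub namespace.

```bash
# Sketch: stream a resource's logs and metrics to an event hub.
# All IDs, names, and categories are illustrative placeholders.
az monitor diagnostic-settings create \
  --name "stream-to-event-hub" \
  --resource "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.KeyVault/vaults/<vault>" \
  --event-hub <event-hub-name> \
  --event-hub-rule "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.EventHub/namespaces/<namespace>/authorizationRules/RootManageSharedAccessKey" \
  --logs '[{"category": "AuditEvent", "enabled": true}]' \
  --metrics '[{"category": "AllMetrics", "enabled": true}]'
```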
+The following JSON is an example of metrics data sent to an event hub:
+
+```json
+[
+ {
+ "records": [
+ {
+ "count": 2,
+ "total": 0.217,
+ "minimum": 0.042,
+ "maximum": 0.175,
+ "average": 0.1085,
+ "resourceId": "/SUBSCRIPTIONS/ABCDEF12-3456-78AB-CD12-34567890ABCD/RESOURCEGROUPS/RG-001/PROVIDERS/MICROSOFT.WEB/SITES/SCALEABLEWEBAPP1",
+ "time": "2023-04-18T09:03:00.0000000Z",
+ "metricName": "CpuTime",
+ "timeGrain": "PT1M"
+ },
+ {
+ "count": 2,
+ "total": 0.284,
+ "minimum": 0.053,
+ "maximum": 0.231,
+ "average": 0.142,
+ "resourceId": "/SUBSCRIPTIONS/ABCDEF12-3456-78AB-CD12-34567890ABCD/RESOURCEGROUPS/RG-001/PROVIDERS/MICROSOFT.WEB/SITES/SCALEABLEWEBAPP1",
+ "time": "2023-04-18T09:04:00.0000000Z",
+ "metricName": "CpuTime",
+ "timeGrain": "PT1M"
+ },
+ {
+ "count": 1,
+ "total": 1,
+ "minimum": 1,
+ "maximum": 1,
+ "average": 1,
+ "resourceId": "/SUBSCRIPTIONS/ABCDEF12-3456-78AB-CD12-34567890ABCD/RESOURCEGROUPS/RG-001/PROVIDERS/MICROSOFT.WEB/SITES/SCALEABLEWEBAPP1",
+ "time": "2023-04-18T09:03:00.0000000Z",
+ "metricName": "Requests",
+ "timeGrain": "PT1M"
+ },
+ ...
+ ]
+ }
+]
+```
+
+The following JSON is an example of log data sent to an event hub:
++
+```json
+[
+ {
+ "records": [
+ {
+ "time": "2023-04-18T09:39:56.5027358Z",
+ "category": "AuditEvent",
+ "operationName": "VaultGet",
+ "resultType": "Success",
+ "correlationId": "12345678-abc-4bc5-9f31-950eaf3bfcb4",
+ "callerIpAddress": "10.0.0.10",
+ "identity": {
+ "claim": {
+ "http://schemas.microsoft.com/identity/claims/objectidentifier": "123abc12-abcd-9876-cdef-123abc456def",
+ "appid": "12345678-a1a1-b2b2-c3c3-9876543210ab"
+ }
+ },
+ "properties": {
+ "id": "https://mykeyvault.vault.azure.net/",
+ "clientInfo": "AzureResourceGraph.IngestionWorkerService.global/1.23.1.224",
+ "requestUri": "https://northeurope.management.azure.com/subscriptions/ABCDEF12-3456-78AB-CD12-34567890ABCD/resourceGroups/rg-001/providers/Microsoft.KeyVault/vaults/mykeyvault?api-version=2023-02-01&MaskCMKEnabledProperties=true",
+ "httpStatusCode": 200,
+ "properties": {
+ "sku": {
+ "Family": "A",
+ "Name": "Standard",
+ "Capacity": null
+ },
+ "tenantId": "12345678-abcd-1234-abcd-1234567890ab",
+ "networkAcls": null,
+ "enabledForDeployment": 0,
+ "enabledForDiskEncryption": 0,
+ "enabledForTemplateDeployment": 0,
+ "enableSoftDelete": 1,
+ "softDeleteRetentionInDays": 90,
+ "enableRbacAuthorization": 0,
+ "enablePurgeProtection": null
+ }
+ },
+ "resourceId": "/SUBSCRIPTIONS/ABCDEF12-3456-78AB-CD12-34567890ABCD/RESOURCEGROUPS/RG-001/PROVIDERS/MICROSOFT.KEYVAULT/VAULTS/mykeyvault",
+ "operationVersion": "2023-02-01",
+ "resultSignature": "OK",
+ "durationMs": "16"
+ }
+ ],
+ "EventProcessedUtcTime": "2023-04-18T09:42:07.0944007Z",
+ "PartitionId": 1,
+ "EventEnqueuedUtcTime": "2023-04-18T09:41:14.9410000Z"
+ },
+...
+```
+
## Manual streaming with a logic app
+
For data that you can't directly stream to an event hub, you can write to Azure Storage. Then you can use a time-triggered logic app that [pulls data from Azure Blob Storage](../../connectors/connectors-create-api-azureblobstorage.md#add-action) and [pushes it as a message to the event hub](../../connectors/connectors-create-api-azure-event-hubs.md#add-action).
+## Query events from your Event Hubs
+
+Use the process data query function to see the contents of monitoring events sent to your event hub.
+
+Follow the steps below to query your event data using the Azure portal:
+1. Select **Process data** from your event hub.
+1. Find the tile entitled **Enable real time insights from events** and select **Start**.
+1. Select **Refresh** in the **Input preview** section of the page to fetch events from your event hub.
++
## Partner tools with Azure Monitor integration

Routing your monitoring data to an event hub with Azure Monitor enables you to easily integrate with external SIEM and monitoring tools. The following table lists examples of tools with Azure Monitor integration.
azure-monitor Query Optimization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/query-optimization.md
Cross-region and cross-cluster execution of queries requires the system to seria
A query that spans more than five workspaces is considered a query that consumes excessive resources. Queries can't span more than 100 workspaces.

> [!IMPORTANT]
-> In some multi-workspace scenarios, the CPU and data measurements won't be accurate and will represent the measurement of only a few of the workspaces.
+> - In some multi-workspace scenarios, the CPU and data measurements won't be accurate and will represent the measurement of only a few of the workspaces.
+> - Cross-workspace queries that use an explicit identifier (workspace ID or workspace Azure Resource Manager resource ID) consume fewer resources and are more performant. See [Create a log query across multiple workspaces](./cross-workspace-query.md#identify-workspace-resources).
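As a hedged sketch of the difference, the following query references each workspace by an explicit GUID rather than by a resolvable name. The GUIDs are placeholders, and running it through the CLI assumes the `log-analytics` extension.

```bash
# Sketch: a cross-workspace query using explicit workspace GUIDs.
# The workspace IDs are illustrative placeholders.
az monitor log-analytics query \
  --workspace "<primary-workspace-guid>" \
  --analytics-query 'union workspace("<workspace-guid-1>").Heartbeat, workspace("<workspace-guid-2>").Heartbeat | summarize count() by TenantId'
```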
## Parallelism

Azure Monitor Logs uses large clusters of Azure Data Explorer to run queries. These clusters vary in scale and potentially get up to dozens of compute nodes. The system automatically scales the clusters according to workspace placement logic and capacity.
azure-monitor Workspace Design https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/workspace-design.md
Title: Design a Log Analytics workspace architecture description: The article describes the considerations and recommendations for customers preparing to deploy a workspace in Azure Monitor. Previously updated : 05/25/2022 Last updated : 04/05/2023
The following table presents criteria to consider when you design your workspace
| Criteria | Description |
|:|:|
-| [Segregate operational and security data](#segregate-operational-and-security-data) | Many customers will create separate workspaces for their operational and security data for data ownership and the extra cost from Microsoft Sentinel. In some cases, you might be able to save costs by consolidating into a single workspace to qualify for a commitment tier. |
+| [Operational and security data](#operational-and-security-data) | You may choose to combine operational data from Azure Monitor in the same workspace as security data from Microsoft Sentinel or separate each into their own workspace. Combining them gives you better visibility across all your data, while your security standards might require separating them so that your security team has a dedicated workspace. Each strategy may also have cost implications. |
| [Azure tenants](#azure-tenants) | If you have multiple Azure tenants, you'll usually create a workspace in each one. Several data sources can only send monitoring data to a workspace in the same Azure tenant. |
| [Azure regions](#azure-regions) | Each workspace resides in a particular Azure region. You might have regulatory or compliance requirements to store data in specific locations. |
| [Data ownership](#data-ownership) | You might choose to create separate workspaces to define data ownership. For example, you might create workspaces by subsidiaries or affiliated companies. |
The following table presents criteria to consider when you design your workspace
| [Legacy agent limitations](#legacy-agent-limitations) | Legacy virtual machine agents have limitations on the number of workspaces they can connect to. |
| [Data access control](#data-access-control) | Configure access to the workspace and to different tables and data from different resources. |
-### Segregate operational and security data
-Most customers who use both Azure Monitor and Microsoft Sentinel will create a dedicated workspace for each to segregate ownership of data between operational and security teams. This approach also helps to optimize costs. If Microsoft Sentinel is enabled in a workspace, all data in that workspace is subject to Microsoft Sentinel pricing, even if it's operational data collected by Azure Monitor.
+### Operational and security data
+The decision whether to combine your operational data from Azure Monitor in the same workspace as security data from Microsoft Sentinel or separate each into their own workspace depends on your security requirements and the potential cost implications for your environment.
+
+**Dedicated workspaces**
+Creating dedicated workspaces for Azure Monitor and Microsoft Sentinel will allow you to segregate ownership of data between operational and security teams. This approach may also help to optimize costs, because when Microsoft Sentinel is enabled in a workspace, all data in that workspace is subject to Microsoft Sentinel pricing, even if it's operational data collected by Azure Monitor.
A workspace with Microsoft Sentinel gets three months of free data retention instead of 31 days. This scenario typically results in higher costs for operational data in a workspace without Microsoft Sentinel. See [Azure Monitor Logs pricing details](cost-logs.md#workspaces-with-microsoft-sentinel).
-The exception is if combining data in the same workspace helps you reach a [commitment tier](#commitment-tiers), which provides a discount to your ingestion charges. For example, consider an organization that has operational data and security data each ingesting about 50 GB per day. Combining the data in the same workspace would allow a commitment tier at 100 GB per day. That scenario would provide a 15% discount for Azure Monitor and a 50% discount for Microsoft Sentinel.
+
+**Combined workspace**
+Combining your data from Azure Monitor and Microsoft Sentinel in the same workspace gives you better visibility across all of your data, allowing you to easily combine both in queries and workbooks. If access to the security data should be limited to a particular team, you can use [table level RBAC](../logs/manage-access.md#set-table-level-read-access) to block particular users from tables with security data or limit users to accessing the workspace using [resource-context](../logs/manage-access.md#access-mode).
+
+This configuration may result in cost savings if it helps you reach a [commitment tier](#commitment-tiers), which provides a discount to your ingestion charges. For example, consider an organization that has operational data and security data each ingesting about 50 GB per day. Combining the data in the same workspace would allow a commitment tier at 100 GB per day. That scenario would provide a 15% discount for Azure Monitor and a 50% discount for Microsoft Sentinel.
If you create separate workspaces for other criteria, you'll usually create more workspace pairs. For example, if you have two Azure tenants, you might create four workspaces with an operational and security workspace in each tenant.
-
-- **If you use both Azure Monitor and Microsoft Sentinel:** Create a separate workspace for each. Consider combining the two if it helps you reach a commitment tier.
+- **If you use both Azure Monitor and Microsoft Sentinel:** Consider separating each in a dedicated workspace if required by your security team or if it results in cost savings. Consider combining the two for better visibility of your combined monitoring data or if it helps you reach a commitment tier.
- **If you use both Microsoft Sentinel and Microsoft Defender for Cloud:** Consider using the same workspace for both solutions to keep security data in one place.

### Azure tenants
Most resources can only send monitoring data to a workspace in the same Azure te
- **If you have multiple Azure tenants:** Create a workspace for each tenant. For other options including strategies for service providers, see [Multiple tenant strategies](#multiple-tenant-strategies).

### Azure regions
-Each Log Analytics workspaces resides in a [particular Azure region](https://azure.microsoft.com/global-infrastructure/geographies/). You might have regulatory or compliance purposes for keeping data in a particular region. For example, an international company might locate a workspace in each major geographical region, such as the United States and Europe.
+Each Log Analytics workspace resides in a [particular Azure region](https://azure.microsoft.com/global-infrastructure/geographies/). You might have regulatory or compliance purposes for keeping data in a particular region. For example, an international company might locate a workspace in each major geographical region, such as the United States and Europe.
- **If you have requirements for keeping data in a particular geography:** Create a separate workspace for each region with such requirements.
- **If you don't have requirements for keeping data in a particular geography:** Use a single workspace for all regions.
azure-signalr Signalr Quickstart Azure Functions Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/signalr-quickstart-azure-functions-javascript.md
When you run the `func new` command from the root directory of the project, the
1. Run the following command to create the `index` function.
- ```bash
- func new -n index -t HttpTrigger
- ```
+ ```bash
+ func new -n index -t HttpTrigger
+ ```
1. Edit *index/function.json* and replace the contents with the following json code:
- ```json
- {
- "bindings": [
- {
- "authLevel": "anonymous",
- "type": "httpTrigger",
- "direction": "in",
- "name": "req",
- "methods": [
- "get",
- "post"
- ]
- },
- {
- "type": "http",
- "direction": "out",
- "name": "res"
- }
- ]
- }
- ```
+ ```json
+ {
+ "bindings": [
+ {
+ "authLevel": "anonymous",
+ "type": "httpTrigger",
+ "direction": "in",
+ "name": "req",
+ "methods": [
+ "get",
+ "post"
+ ]
+ },
+ {
+ "type": "http",
+ "direction": "out",
+ "name": "res"
+ }
+ ]
+ }
+ ```
1. Edit *index/index.js* and replace the contents with the following code:
- ```javascript
- var fs = require('fs').promises
-
- module.exports = async function (context, req) {
- const path = context.executionContext.functionDirectory + '/../content/index.html'
- try {
- var data = await fs.readFile(path);
- context.res = {
- headers: {
- 'Content-Type': 'text/html'
- },
- body: data
- }
- context.done()
- } catch (err) {
- context.log.error(err);
- context.done(err);
- }
- }
- ```
+ ```javascript
+ var fs = require('fs').promises
+
+ module.exports = async function (context, req) {
+ const path = context.executionContext.functionDirectory + '/../content/index.html'
+ try {
+ var data = await fs.readFile(path);
+ context.res = {
+ headers: {
+ 'Content-Type': 'text/html'
+ },
+ body: data
+ }
+ context.done()
+ } catch (err) {
+ context.log.error(err);
+ context.done(err);
+ }
+ }
+ ```
### Create the negotiate function

1. Run the following command to create the `negotiate` function.
- ```bash
- func new -n negotiate -t HttpTrigger
- ```
+ ```bash
+ func new -n negotiate -t HttpTrigger
+ ```
1. Edit *negotiate/function.json* and replace the contents with the following json code:
- ```json
- {
- "disabled": false,
- "bindings": [
- {
- "authLevel": "anonymous",
- "type": "httpTrigger",
- "direction": "in",
- "methods": [
- "post"
- ],
- "name": "req",
- "route": "negotiate"
- },
- {
- "type": "http",
- "direction": "out",
- "name": "res"
- },
- {
- "type": "signalRConnectionInfo",
- "name": "connectionInfo",
- "hubName": "serverless",
- "connectionStringSetting": "AzureSignalRConnectionString",
- "direction": "in"
- }
- ]
- }
- ```
-
+ ```json
+ {
+ "disabled": false,
+ "bindings": [
+ {
+ "authLevel": "anonymous",
+ "type": "httpTrigger",
+ "direction": "in",
+ "methods": [
+ "post"
+ ],
+ "name": "req",
+ "route": "negotiate"
+ },
+ {
+ "type": "http",
+ "direction": "out",
+ "name": "res"
+ },
+ {
+ "type": "signalRConnectionInfo",
+ "name": "connectionInfo",
+ "hubName": "serverless",
+ "connectionStringSetting": "AzureSignalRConnectionString",
+ "direction": "in"
+ }
+ ]
+ }
+ ```
+1. Edit *negotiate/index.js* and replace the content with the following JavaScript code:
+ ```js
+ module.exports = async function (context, req, connectionInfo) {
+ context.res.body = connectionInfo;
+ };
+ ```
### Create a broadcast function

1. Run the following command to create the `broadcast` function.
- ```bash
- func new -n broadcast -t TimerTrigger
- ```
+ ```bash
+ func new -n broadcast -t TimerTrigger
+ ```
1. Edit *broadcast/function.json* and replace the contents with the following code:
- ```json
- {
- "bindings": [
- {
- "name": "myTimer",
- "type": "timerTrigger",
- "direction": "in",
- "schedule": "*/5 * * * * *"
- },
- {
- "type": "signalR",
- "name": "signalRMessages",
- "hubName": "serverless",
- "connectionStringSetting": "AzureSignalRConnectionString",
- "direction": "out"
- }
- ]
- }
- ```
+ ```json
+ {
+ "bindings": [
+ {
+ "name": "myTimer",
+ "type": "timerTrigger",
+ "direction": "in",
+ "schedule": "*/5 * * * * *"
+ },
+ {
+ "type": "signalR",
+ "name": "signalRMessages",
+ "hubName": "serverless",
+ "connectionStringSetting": "AzureSignalRConnectionString",
+ "direction": "out"
+ }
+ ]
+ }
+ ```
1. Edit *broadcast/index.js* and replace the contents with the following code:
-
- ```javascript
- var https = require('https');
-
- var etag = '';
- var star = 0;
-
- module.exports = function (context) {
- var req = https.request("https://api.github.com/repos/azure/azure-signalr", {
- method: 'GET',
- headers: {'User-Agent': 'serverless', 'If-None-Match': etag}
- }, res => {
- if (res.headers['etag']) {
- etag = res.headers['etag']
- }
-
- var body = "";
-
- res.on('data', data => {
- body += data;
- });
- res.on("end", () => {
- if (res.statusCode === 200) {
- var jbody = JSON.parse(body);
- star = jbody['stargazers_count'];
- }
-
- context.bindings.signalRMessages = [{
- "target": "newMessage",
- "arguments": [ `Current star count of https://github.com/Azure/azure-signalr is: ${star}` ]
- }]
- context.done();
- });
- }).on("error", (error) => {
- context.log(error);
- context.res = {
- status: 500,
- body: error
- };
- context.done();
- });
- req.end();
- }
- ```
+
+ ```javascript
+ var https = require('https');
+
+ var etag = '';
+ var star = 0;
+
+ module.exports = function (context) {
+ var req = https.request("https://api.github.com/repos/azure/azure-signalr", {
+ method: 'GET',
+ headers: {'User-Agent': 'serverless', 'If-None-Match': etag}
+ }, res => {
+ if (res.headers['etag']) {
+ etag = res.headers['etag']
+ }
+
+ var body = "";
+
+ res.on('data', data => {
+ body += data;
+ });
+ res.on("end", () => {
+ if (res.statusCode === 200) {
+ var jbody = JSON.parse(body);
+ star = jbody['stargazers_count'];
+ }
+
+ context.bindings.signalRMessages = [{
+ "target": "newMessage",
+ "arguments": [ `Current star count of https://github.com/Azure/azure-signalr is: ${star}` ]
+ }]
+ context.done();
+ });
+ }).on("error", (error) => {
+ context.log(error);
+ context.res = {
+ status: 500,
+ body: error
+ };
+ context.done();
+ });
+ req.end();
+ }
+ ```
### Create the index.html file
azure-video-indexer Emotions Detection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/emotions-detection.md
Previously updated : 06/15/2022 Last updated : 04/17/2023 # Emotions detection
-Emotion detection is an Azure Video Indexer AI feature that automatically detects emotions a video's transcript lines. Each sentence can either be detected as "Anger", "Fear", "Joy", "Neutral", and "Sad". The model works on text only (labeling emotions in video transcripts.) This model doesn't infer the emotional state of people, may not perform where input is ambiguous or unclear, like sarcastic remarks. Thus, the model shouldn't be used for things like assessing employee performance or the emotional state of a person.
+Emotions detection is an Azure Video Indexer AI feature that automatically detects emotions in a video's transcript lines. Each sentence can be detected as "Anger", "Fear", "Joy", or "Sad", or as none of the above if no emotion is detected.
-The model doesn't have context of the input data, which can impact its accuracy. To increase the accuracy, it's recommended for the input data to be in a clear and unambiguous format.
+The model works on text only (labeling emotions in video transcripts). This model doesn't infer the emotional state of people and may not perform well where input is ambiguous or unclear, like sarcastic remarks. Thus, the model shouldn't be used for things like assessing employee performance or the emotional state of a person.
## Prerequisites
During the emotions detection procedure, the transcript of the video is processe
|Emotions detection |Each sentence is sent to the emotions detection model. The model produces the confidence level of each emotion. If the confidence level exceeds a specific threshold, and there is no ambiguity between positive and negative emotions, the emotion is detected. In any other case, the sentence is labeled as neutral.|
|Confidence level |The estimated confidence level of the detected emotions is calculated as a range of 0 to 1. The confidence score represents the certainty in the accuracy of the result. For example, an 82% certainty is represented as an 0.82 score. |
-## Example use cases
-
-* Personalization of keywords to match customer interests, for example websites about England posting promotions about English movies or festivals.
-* Deep-searching archives for insights on specific keywords to create feature stories about companies, personas or technologies, for example by a news agency.
-
## Considerations and limitations when choosing a use case

Below are some considerations to keep in mind when using emotions detection:
azure-video-indexer Monitor Video Indexer Data Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/monitor-video-indexer-data-reference.md
Previously updated : 05/10/2022 Last updated : 04/17/2023 <!-- VERSION 2.3 Template for monitoring data reference article for Azure services. This article is support for the main "Monitoring [servicename]" article for the service. -->
The following schemas are in use by Azure Video Indexer
"ExternalId": null, "Filename": "1 Second Video 1.mp4", "AnimationModelId": null,
- "BrandsCategories": null
+ "BrandsCategories": null,
+ "CustomLanguages": null,
+ "ExcludedAIs": "Face",
+ "LogoGroupId": "ea9d154d-0845-456c-857e-1c9d5d925d95"
}
}
}
```
-
## Next steps
<!-- replace below with the proper link to your main monitoring service article -->
azure-video-indexer Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/release-notes.md
Title: Azure Video Indexer release notes | Microsoft Docs
description: To stay up-to-date with the most recent developments, this article provides you with the latest updates on Azure Video Indexer. Previously updated : 04/06/2023 Last updated : 04/17/2023
To stay up-to-date with the most recent Azure Video Indexer developments, this a
## April 2023
-## Observed people tracing improvements
+### Excluding sensitive AI models
+
+Following the Microsoft Responsible AI agenda, Azure Video Indexer now allows you to exclude specific AI models when indexing media files. The list of sensitive AI models includes: face detection, observed people, emotions, labels identification.
+
+This feature is currently available through the API and applies to all presets except the Advanced preset.
+
+### Observed people tracing improvements
For more information, see [Considerations and limitations when choosing a use case](observed-matched-people.md#considerations-and-limitations-when-choosing-a-use-case).
backup Backup Azure Arm Restore Vms https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-arm-restore-vms.md
Title: Restore VMs by using the Azure portal
description: Restore an Azure virtual machine from a recovery point by using the Azure portal, including the Cross Region Restore feature. Previously updated : 04/06/2023 Last updated : 04/18/2023
In summary, the **Availability Zone** will only appear when
![Backup jobs filtered](./media/backup-azure-arm-restore-vms/secbackupjobs.png)
+## Cross Subscription Restore (preview)
+
+Azure Backup now allows you to perform Cross Subscription Restore (CSR), which helps you restore Azure VMs in a subscription that is different from the default one, that is, the subscription that contains the recovery points.
+
+This feature is enabled for Recovery Services vaults by default. However, there may be instances where you need to block Cross Subscription Restore based on your cloud infrastructure. So, you can enable, disable, or permanently disable Cross Subscription Restore for the existing vaults by going to *Vault* > **Properties** > **Cross Subscription Restore**.
++
+>[!Note]
+>- Once CSR is permanently disabled on a vault, it can't be re-enabled; the operation is irreversible.
+>- If CSR is disabled but not permanently disabled, then you can reverse the operation by selecting *Vault* > **Properties** > **Cross Subscription Restore** > **Enable**.
+>- If a Recovery Services vault is moved to a different subscription when CSR is disabled or permanently disabled, restore to the original subscription fails.
+
## Restoring unmanaged VMs and disks as managed

You're provided with an option to restore [unmanaged disks](../storage/common/storage-disaster-recovery-guidance.md#azure-unmanaged-disks) as [managed disks](../virtual-machines/managed-disks-overview.md) during restore. By default, the unmanaged VMs / disks are restored as unmanaged VMs / disks. However, if you choose to restore as managed VMs / disks, it's now possible to do so. These restore operations aren't triggered from the snapshot phase but only from the vault phase. This feature isn't available for unmanaged encrypted VMs.
backup Backup Azure Diagnostic Events https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-diagnostic-events.md
To send your vault diagnostics data to Log Analytics:
1. Select **Resource specific** in the toggle, and select the following five events: **CoreAzureBackup**, **AddonAzureBackupJobs**, **AddonAzureBackupPolicy**, **AddonAzureBackupStorage**, and **AddonAzureBackupProtectedInstance**.
1. Select **Save**. An illustrative CLI sketch follows the screenshot below.
-
+ :::image type="content" source="./media/backup-azure-configure-backup-reports/recovery-services-vault-diagnostics-settings-inline.png" alt-text="Screenshot shows the recovery services vault diagnostics settings." lightbox="./media/backup-azure-configure-backup-reports/recovery-services-vault-diagnostics-settings-expanded.png":::
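The same setting can be scripted. Here's a minimal CLI sketch that sends the five resource-specific Backup tables to a workspace; the vault and workspace IDs are placeholders, and the `--export-to-resource-specific` flag is assumed to be available in your CLI version.

```bash
# Sketch: resource-specific diagnostic setting for a Recovery Services vault.
# IDs are placeholders; verify flag support in your CLI version.
az monitor diagnostic-settings create \
  --name "vault-to-log-analytics" \
  --resource "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.RecoveryServices/vaults/<vault>" \
  --workspace "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.OperationalInsights/workspaces/<workspace>" \
  --export-to-resource-specific true \
  --logs '[
    {"category": "CoreAzureBackup", "enabled": true},
    {"category": "AddonAzureBackupJobs", "enabled": true},
    {"category": "AddonAzureBackupPolicy", "enabled": true},
    {"category": "AddonAzureBackupStorage", "enabled": true},
    {"category": "AddonAzureBackupProtectedInstance", "enabled": true}
  ]'
```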
# [Backup vaults](#tab/backup-vaults)
To send your vault diagnostics data to Log Analytics:
4. Select the following events: **CoreAzureBackup**, **AddonAzureBackupJobs**, **AddonAzureBackupPolicy**, and **AddonAzureBackupProtectedInstance**.
5. Select **Save**.
-
-
+ :::image type="content" source="./media/backup-azure-configure-backup-reports/backup-vault-diagnostics-settings.png" alt-text="Screenshot shows the backup vault diagnostics settings.":::
After data flows into the Log Analytics workspace, dedicated tables for each of these events are created in your workspace. You can query any of these tables directly. You can also perform joins or unions between these tables if necessary.
backup Configure Reports https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/configure-reports.md
Azure Resource Manager resources, such as Recovery Services vaults, record infor
In the monitoring section of your Recovery Services vault, select **Diagnostics settings** and specify the target for the Recovery Services vault's diagnostic data. To learn more about using diagnostic events, see [Use diagnostics settings for Recovery Services vaults](./backup-azure-diagnostic-events.md). - Azure Backup also provides a built-in Azure Policy definition, which automates the configuration of diagnostics settings for all Recovery Services vaults in a given scope. To learn how to use this policy, see [Configure vault diagnostics settings at scale](./azure-policy-configure-diagnostics.md).
Azure Backup also provides a built-in Azure Policy definition, which automates t
In the monitoring section of your Backup vault, select **Diagnostics settings** and specify the target for the Backup vault's diagnostic data. -
In the monitoring section of your Backup vault, select **Diagnostics settings**
After you've configured your vaults to send data to Log Analytics, view your Backup reports by going to the Backup center and selecting **Backup Reports**. Select the relevant workspace(s) on the **Get started** tab. - The report contains various tabs:
The report contains various tabs:
Use this tab to get a high-level overview of your backup estate. You can get a quick glance of the total number of backup items, total cloud storage consumed, the number of protected instances, and the job success rate per workload type. For more detailed information about a specific backup artifact type, go to the respective tabs.
-
##### Backup Items
chaos-studio Chaos Studio Limitations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/chaos-studio/chaos-studio-limitations.md
During the public preview of Azure Chaos Studio, there are a few limitations and
## Limitations
+* The target resources must be in [one of the regions supported by the Azure Chaos Studio Preview](https://azure.microsoft.com/global-infrastructure/services/?products=chaos-studio).
* For agent-based faults, the virtual machine must have outbound network access to the Chaos Studio agent service:
  * Regional endpoints to allowlist are listed [in this article](chaos-studio-permissions-security.md#network-security).
  * If sending telemetry data to Application Insights, the IPs [in this document](../azure-monitor/app/ip-addresses.md) are also required.
cloud-services-extended-support Certificates And Key Vault https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services-extended-support/certificates-and-key-vault.md
Key Vault is used to store certificates that are associated to Cloud Services (e
1. Sign in to the Azure portal and navigate to the Key Vault. If you do not have a Key Vault set up, you can opt to create one in this same window.
-2. Select **Access polices**
+2. Select **Access Configuration**
:::image type="content" source="media/certs-and-key-vault-1.png" alt-text="Image shows selecting access policies from the key vault blade.":::
-3. Ensure the access policies include the following property:
+3. Ensure the access configuration includes the following property:
- **Enable access to Azure Virtual Machines for deployment**

:::image type="content" source="media/certs-and-key-vault-2.png" alt-text="Image shows access policies window in the Azure portal.":::
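If you prefer scripting, the equivalent property can be set from the CLI. This is a minimal sketch; the vault name is a placeholder.

```bash
# Sketch: allow Azure Virtual Machines to retrieve certificates stored as
# secrets from this vault during deployment. The name is a placeholder.
az keyvault update \
  --name <vault-name> \
  --enabled-for-deployment true
```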
cloud-services-extended-support In Place Migration Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services-extended-support/in-place-migration-overview.md
These are top scenarios involving combinations of resources, features, and Cloud
| Service | Configuration | Comments |
||||
| [Azure AD Domain Services](../active-directory-domain-services/migrate-from-classic-vnet.md) | Virtual networks that contain Azure Active Directory Domain services. | Virtual network containing both Cloud Service deployment and Azure AD Domain services is supported. Customer first needs to separately migrate Azure AD Domain services and then migrate the virtual network left only with the Cloud Service deployment |
-| Cloud Service | Cloud Service with a deployment in a single slot only. | Cloud Services containing a prod slot deployment can be migrated. It is not reccomended to migrate staging slot as this can result in issues with retaining service FQDN |
+| Cloud Service | Cloud Service with a deployment in a single slot only. | Cloud Services containing a prod slot deployment can be migrated. It is not recommended to migrate the staging slot as this can result in issues with retaining the service FQDN. To migrate the staging slot, first promote the staging deployment to production and then migrate to ARM. |
| Cloud Service | Deployment not in a publicly visible virtual network (default virtual network deployment) | A Cloud Service can be in a publicly visible virtual network, in a hidden virtual network or not in any virtual network. Cloud Services in a hidden virtual network and publicly visible virtual networks are supported for migration. Customer can use the Validate API to tell if a deployment is inside a default virtual network or not and thus determine if it can be migrated. |
|Cloud Service | XML extensions (BGInfo, Visual Studio Debugger, Web Deploy, and Remote Debugging). | All xml extensions are supported for migration |
| Virtual Network | Virtual network containing multiple Cloud Services. | Virtual network containing multiple cloud services is supported for migration. The virtual network and all the Cloud Services within it will be migrated together to Azure Resource Manager. |
cloud-services-extended-support In Place Migration Technical Details https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services-extended-support/in-place-migration-technical-details.md
This article discusses the technical details regarding the migration tool as per
- Each Cloud Services (extended support) deployment is an independent Cloud Service. Deployments are no longer grouped into a cloud service using slots.
- If you have two slots in your Cloud Service (classic), you need to delete one slot (staging) and use the migration tool to move the other (production) slot to Azure Resource Manager.
- The public IP address on the Cloud Service deployment remains the same after migration to Azure Resource Manager and is exposed as a Basic SKU IP (dynamic or static) resource.
-- The DNS name and domain (cloudapp.azure.net) for the migrated cloud service remains the same.
+- The DNS name and domain (cloudapp.net) for the migrated cloud service remains the same.
### Virtual network migration
- If a Cloud Services deployment is in a virtual network, then during migration all Cloud Services and associated virtual network resources are migrated together.
cognitive-services Spatial Analysis Container https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/spatial-analysis-container.md
sudo apt-get install -y docker-ce docker-ce-cli containerd.io
Install the *nvidia-docker-2* software package.

```bash
-distribution=$(. /etc/os-release;echo $ID$VERSION_ID)
+DISTRIBUTION=$(. /etc/os-release;echo $ID$VERSION_ID)
```

```bash
curl -s -L https://nvidia.github.io/nvidia-docker/gpgkey | sudo apt-key add -
```

```bash
-curl -s -L https://nvidia.github.io/nvidia-docker/$distribution/nvidia-docker.list | sudo tee /etc/apt/sources.list.d/nvidia-docker.list
+curl -s -L https://nvidia.github.io/nvidia-docker/$DISTRIBUTION/nvidia-docker.list | sudo tee /etc/apt/sources.list.d/nvidia-docker.list
```

```bash
sudo apt-get update
```
cognitive-services Data Formats https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-text-analytics-for-health/concepts/data-formats.md
+
+ Title: Custom Text Analytics for health data formats
+
+description: Learn about the data formats accepted by custom text analytics for health.
++++++ Last updated : 04/14/2023++++
+# Accepted data formats in custom text analytics for health
+
+Use this article to learn about formatting your data to be imported into custom text analytics for health.
+
+If you are trying to [import your data](../how-to/create-project.md#import-project) into custom Text Analytics for health, it has to follow a specific format. If you don't have data to import, you can [create your project](../how-to/create-project.md) and use the Language Studio to [label your documents](../how-to/label-data.md).
+
+Your Labels file should be in the `json` format below to be used when importing your labels into a project.
+
+```json
+{
+ "projectFileVersion": "{API-VERSION}",
+ "stringIndexType": "Utf16CodeUnit",
+ "metadata": {
+ "projectName": "{PROJECT-NAME}",
+ "projectKind": "CustomHealthcare",
+ "description": "Trying out custom Text Analytics for health",
+ "language": "{LANGUAGE-CODE}",
+ "multilingual": true,
+ "storageInputContainerName": "{CONTAINER-NAME}",
+ "settings": {}
+ },
+ "assets": {
+ "projectKind": "CustomHealthcare",
+ "entities": [
+ {
+ "category": "Entity1",
+ "compositionSetting": "{COMPOSITION-SETTING}",
+ "list": {
+ "sublists": [
+ {
+ "listKey": "One",
+ "synonyms": [
+ {
+ "language": "en",
+ "values": [
+ "EntityNumberOne",
+ "FirstEntity"
+ ]
+ }
+ ]
+ }
+ ]
+ }
+ },
+ {
+ "category": "Entity2"
+ },
+ {
+ "category": "MedicationName",
+ "list": {
+ "sublists": [
+ {
+ "listKey": "research drugs",
+ "synonyms": [
+ {
+ "language": "en",
+ "values": [
+ "rdrug a",
+ "rdrug b"
+ ]
+ }
+ ]
+
+ }
+ ]
+                },
+ "prebuilts": "MedicationName"
+ }
+ ],
+ "documents": [
+ {
+ "location": "{DOCUMENT-NAME}",
+ "language": "{LANGUAGE-CODE}",
+ "dataset": "{DATASET}",
+ "entities": [
+ {
+ "regionOffset": 0,
+ "regionLength": 500,
+ "labels": [
+ {
+ "category": "Entity1",
+ "offset": 25,
+ "length": 10
+ },
+ {
+ "category": "Entity2",
+ "offset": 120,
+ "length": 8
+ }
+ ]
+ }
+ ]
+ },
+ {
+ "location": "{DOCUMENT-NAME}",
+ "language": "{LANGUAGE-CODE}",
+ "dataset": "{DATASET}",
+ "entities": [
+ {
+ "regionOffset": 0,
+ "regionLength": 100,
+ "labels": [
+ {
+ "category": "Entity2",
+ "offset": 20,
+ "length": 5
+ }
+ ]
+ }
+ ]
+ }
+ ]
+ }
+}
+
+```
+
+|Key |Placeholder |Value | Example |
+|||-|--|
+| `multilingual` | `true`| A boolean value that enables you to have documents in multiple languages in your dataset. When your model is deployed, you can query it in any supported language, not necessarily one included in your training documents. See [language support](../language-support.md) to learn more about multilingual support. | `true`|
+|`projectName`|`{PROJECT-NAME}`|Project name|`myproject`|
+| `storageInputContainerName` |`{CONTAINER-NAME}`|Container name|`mycontainer`|
+| `entities` | | Array containing all the entity types you have in the project. These are the entity types that are extracted from your documents.| |
+| `category` | | The name of the entity type, which can be user defined for new entity definitions, or predefined for prebuilt entities. For more information, see the entity naming rules below.| |
+|`compositionSetting`|`{COMPOSITION-SETTING}`|Rule that defines how to manage multiple components in your entity. Options are `combineComponents` or `separateComponents`. |`combineComponents`|
+| `list` | | Array containing all the sublists you have in the project for a specific entity. Lists can be added to prebuilt entities or new entities with learned components.| |
+|`sublists`|`[]`|Array containing sublists. Each sublist is a key and its associated values.|`[]`|
+| `listKey`| `One` | A normalized value for the list of synonyms to map back to in prediction. | `One` |
+|`synonyms`|`[]`|Array containing all the synonyms|synonym|
+| `language` | `{LANGUAGE-CODE}` | A string specifying the language code for the synonym in your sublist. If your project is a multilingual project and you want to support your list of synonyms for all the languages in your project, you have to explicitly add your synonyms to each language. See [Language support](../language-support.md) for more information about supported language codes. |`en`|
+| `values`| `"EntityNumberone"`, `"FirstEntity"` | A list of comma separated strings that will be matched exactly for extraction and map to the list key. | `"EntityNumberone"`, `"FirstEntity"` |
+| `prebuilts` | `MedicationName` | The name of the prebuilt component populating the prebuilt entity. [Prebuilt entities](../../text-analytics-for-health/concepts/health-entity-categories.md) are automatically loaded into your project by default but you can extend them with list components in your labels file. | `MedicationName` |
+| `documents` | | Array containing all the documents in your project and list of the entities labeled within each document. | [] |
+| `location` | `{DOCUMENT-NAME}` | The location of the documents in the storage container. Since all the documents are in the root of the container this should be the document name.|`doc1.txt`|
+| `dataset` | `{DATASET}` | The set this document is assigned to when the data is split before training. Learn more about data splitting [here](../how-to/train-model.md#data-splitting). Possible values for this field are `Train` and `Test`. |`Train`|
+| `regionOffset` | | The inclusive character position of the start of the text. |`0`|
+| `regionLength` | | The length of the region in terms of UTF16 characters. Training only considers the data in this region. |`500`|
+| `category` | | The type of entity associated with the span of text specified. | `Entity1`|
+| `offset` | | The start position for the entity text. | `25`|
+| `length` | | The length of the entity in terms of UTF16 characters. | `20`|
+| `language` | `{LANGUAGE-CODE}` | A string specifying the language code for the document used in your project. If your project is a multilingual project, choose the language code of the majority of the documents. See [Language support](../language-support.md) for more information about supported language codes. |`en`|
+
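+For a quick sanity check before you import, you can load the labels file and assert a few of the invariants described above. This is an illustrative sketch, not an exhaustive schema validation; the `labels.json` file name and the document-relative offset assumption are hypothetical.
+
+```python
+import json
+
+# Minimal sanity checks for a labels file, based on the format above.
+with open("labels.json", encoding="utf-8") as f:
+    labels = json.load(f)
+
+assert labels["metadata"]["projectKind"] == "CustomHealthcare"
+
+for doc in labels["assets"]["documents"]:
+    for region in doc["entities"]:
+        for label in region["labels"]:
+            # Assumes label offsets are document-relative; adjust the check
+            # if your offsets are relative to the region instead.
+            end = label["offset"] + label["length"]
+            assert end <= region["regionOffset"] + region["regionLength"], doc["location"]
+```
+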
+## Entity naming rules
+
+1. [Prebuilt entity names](../../text-analytics-for-health/concepts/health-entity-categories.md) are predefined. They must be populated with a prebuilt component and it must match the entity name.
+2. New user defined entities (entities with learned components or labeled text) can't use prebuilt entity names.
+3. New user defined entities can't be populated with prebuilt components, as prebuilt components must match their associated entity names and have no labeled data assigned to them in the documents array.
+++
+## Next steps
+* You can import your labeled data into your project directly. Learn how to [import project](../how-to/create-project.md#import-project)
+* See the [how-to article](../how-to/label-data.md) for more information about labeling your data.
+* When you're done labeling your data, you can [train your model](../how-to/train-model.md).
cognitive-services Entity Components https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-text-analytics-for-health/concepts/entity-components.md
+
+ Title: Entity components in custom Text Analytics for health
+
+description: Learn how custom Text Analytics for health extracts entities from text
+ Last updated : 04/14/2023
+# Entity components in custom text analytics for health
+
+In custom Text Analytics for health, entities are relevant pieces of information that are extracted from your unstructured input text. An entity can be extracted by different methods. They can be learned through context, matched from a list, or detected by a prebuilt recognized entity. Every entity in your project is composed of one or more of these methods, which are defined as your entity's components. When an entity is defined by more than one component, their predictions can overlap. You can determine the behavior of an entity prediction when its components overlap by using a fixed set of options in the **Entity options**.
+
+## Component types
+
+An entity component determines a way you can extract the entity. An entity can contain one component, which would determine the only method that would be used to extract the entity, or multiple components to expand the ways in which the entity is defined and extracted.
+
+The [Text Analytics for health entities](../../text-analytics-for-health/concepts/health-entity-categories.md) are automatically loaded into your project as entities with prebuilt components. You can define list components for entities with prebuilt components but you can't add learned components. Similarly, you can create new entities with learned and list components, but you can't populate them with additional prebuilt components.
+
+### Learned component
+
+The learned component uses the entity tags you label your text with to train a machine learned model. The model learns to predict where the entity is, based on the context within the text. Your labels provide examples of where the entity is expected to be present in text, based on the meaning of the words around it and the words that were labeled. This component is only defined if you add labels to your data for the entity. If you do not label any data, it will not have a learned component.
+
+The Text Analytics for health entities, which by default have prebuilt components, can't be extended with learned components, meaning they do not require or accept further labeling to function.
++
+### List component
+
+The list component represents a fixed, closed set of related words along with their synonyms. The component performs an exact text match against the list of values you provide as synonyms. Each synonym belongs to a "list key", which can be used as the normalized, standard value for the synonym that will return in the output if the list component is matched. List keys are **not** used for matching.
+
+In multilingual projects, you can specify a different set of synonyms for each language. While using the prediction API, you can specify the language in the input request, which will only match the synonyms associated with that language.
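+
+As a rough illustration of this matching behavior (not the service's implementation), the sketch below performs an exact match against per-language synonyms and returns the normalized list key, reusing the `One`/`EntityNumberOne` example from the data formats article:
+
+```python
+# Per-language synonym sublists, keyed by their normalized list key.
+sublists = {
+    "One": {"en": ["EntityNumberOne", "FirstEntity"]},
+}
+
+def match_list_component(text, language):
+    """Return the list key whose synonyms exactly match `text` in `language`."""
+    for list_key, synonyms_by_language in sublists.items():
+        if text in synonyms_by_language.get(language, []):
+            return list_key
+    return None
+
+print(match_list_component("FirstEntity", "en"))  # -> "One"
+print(match_list_component("FirstEntity", "fr"))  # -> None (no French synonyms)
+```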
+++
+### Prebuilt component
+
+The [Text Analytics for health entities](../../text-analytics-for-health/concepts/health-entity-categories.md) are automatically loaded into your project as entities with prebuilt components. You can define list components for entities with prebuilt components but you cannot add learned components. Similarly, you can create new entities with learned and list components, but you cannot populate them with additional prebuilt components. Entities with prebuilt components are pretrained and can extract information relating to their categories without any labels.
+++
+## Entity options
+
+When multiple components are defined for an entity, their predictions may overlap. When an overlap occurs, each entity's final prediction is determined by one of the following options.
+
+### Combine components
+
+Combine components as one entity when they overlap by taking the union of all the components.
+
+Use this to combine all components when they overlap. When components are combined, you get all the extra information that's tied to a list or prebuilt component when they are present.
+
+#### Example
+
+Suppose you have an entity called Software that has a list component, which contains "Proseware OS" as an entry. In your input data, you have "I want to buy Proseware OS 9" with "Proseware OS 9" tagged as Software:
++
+By using combine components, the entity will return with the full context as "Proseware OS 9" along with the key from the list component:
++
+Suppose you had the same utterance but only "OS 9" was predicted by the learned component:
++
+With combine components, the entity will still return as "Proseware OS 9" with the key from the list component:
+++
+### Don't combine components
+
+Each overlapping component will return as a separate instance of the entity. Apply your own logic after prediction with this option.
+
+#### Example
+
+Suppose you have an entity called Software that has a list component, which contains "Proseware Desktop" as an entry. In your labeled data, you have "I want to buy Proseware Desktop Pro" with "Proseware Desktop Pro" labeled as Software:
++
+When you do not combine components, the entity will return twice:
+++
+## How to use components and options
+
+Components give you the flexibility to define your entity in more than one way. When you combine components, you make sure that each component is represented and you reduce the number of entities returned in your predictions.
+
+A common practice is to extend a prebuilt component with a list of values that the prebuilt might not support. For example, if you have a **Medication Name** entity, which has a `Medication.Name` prebuilt component added to it, the entity may not predict all the medication names specific to your domain. You can use a list component to extend the values of the Medication Name entity, thereby extending the prebuilt component with your own medication names.
+
+Other times you may be interested in extracting an entity through context, such as a **medical device**. You would label for the learned component of the medical device to learn _where_ a medical device is based on its position within the sentence. You may also have a list of medical devices that you already know beforehand that you'd like to always extract. Combining both components in one entity allows you to get both options for the entity.
+
+When you do not combine components, you allow every component to act as an independent entity extractor. One way of using this option is to separate the entities extracted from a list from the ones extracted through the learned or prebuilt components, to handle and treat them differently.
++
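+A minimal sketch of that post-prediction routing, assuming each returned instance carries a field identifying which component produced it (the `component` field name here is hypothetical):
+
+```python
+# Hypothetical predictions when "don't combine components" is selected:
+# overlapping components each return their own instance of the entity.
+predictions = [
+    {"category": "Software", "text": "Proseware Desktop", "component": "list"},
+    {"category": "Software", "text": "Proseware Desktop Pro", "component": "learned"},
+]
+
+# Route list-matched instances differently from learned/prebuilt ones.
+list_hits = [p for p in predictions if p["component"] == "list"]
+other_hits = [p for p in predictions if p["component"] != "list"]
+```
+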
+## Next steps
+
+* [Entities with prebuilt components](../../text-analytics-for-health/concepts/health-entity-categories.md)
cognitive-services Evaluation Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-text-analytics-for-health/concepts/evaluation-metrics.md
+
+ Title: Custom text analytics for health evaluation metrics
+
+description: Learn about evaluation metrics in custom Text Analytics for health
+ Last updated : 04/14/2023
+# Evaluation metrics for custom Text Analytics for health models
+
+Your [dataset is split](../how-to/train-model.md#data-splitting) into two parts: a set for training, and a set for testing. The training set is used to train the model, while the testing set is used as a test for the model after training to calculate the model performance and evaluation. The testing set is not introduced to the model through the training process, to make sure that the model is tested on new data.
+
+Model evaluation is triggered automatically after training is completed successfully. The evaluation process starts by using the trained model to predict user defined entities for documents in the test set, and compares them with the provided data labels (which establishes a baseline of truth). The results are returned so you can review the model's performance. User defined entities are **included** in the evaluation factoring in Learned and List components; Text Analytics for health prebuilt entities are **not** factored in the model evaluation. For evaluation, custom Text Analytics for health uses the following metrics:
+
+* **Precision**: Measures how precise/accurate your model is. It is the ratio between the correctly identified positives (true positives) and all identified positives. The precision metric reveals how many of the predicted entities are correctly labeled.
+
+ `Precision = #True_Positive / (#True_Positive + #False_Positive)`
+
+* **Recall**: Measures the model's ability to predict actual positive classes. It is the ratio between the predicted true positives and what was actually tagged. The recall metric reveals how many of the actual entities are correctly predicted.
+
+ `Recall = #True_Positive / (#True_Positive + #False_Negatives)`
+
+* **F1 score**: The F1 score is a function of Precision and Recall. It's needed when you seek a balance between Precision and Recall.
+
+    `F1 Score = 2 * Precision * Recall / (Precision + Recall)`
+
+>[!NOTE]
+> Precision, recall and F1 score are calculated for each entity separately (*entity-level* evaluation) and for the model collectively (*model-level* evaluation).
+
+## Model-level and entity-level evaluation metrics
+
+Precision, recall, and F1 score are calculated for each entity separately (entity-level evaluation) and for the model collectively (model-level evaluation).
+
+The definitions of precision, recall, and F1 score are the same for both entity-level and model-level evaluations. However, the counts for *True Positives*, *False Positives*, and *False Negatives* can differ. For example, consider the following text.
+
+### Example
+
+*The first party of this contract is John Smith, resident of 5678 Main Rd., City of Frederick, state of Nebraska. And the second party is Forrest Ray, resident of 123-345 Integer Rd., City of Corona, state of New Mexico. There is also Fannie Thomas resident of 7890 River Road, city of Colorado Springs, State of Colorado.*
+
+The model extracting entities from this text could have the following predictions:
+
+| Entity | Predicted as | Actual type |
+|--|--|--|
+| John Smith | Person | Person |
+| Frederick | Person | City |
+| Forrest | City | Person |
+| Fannie Thomas | Person | Person |
+| Colorado Springs | City | City |
+
+### Entity-level evaluation for the *person* entity
+
+The model would have the following entity-level evaluation, for the *person* entity:
+
+| Key | Count | Explanation |
+|--|--|--|
+| True Positive | 2 | *John Smith* and *Fannie Thomas* were correctly predicted as *person*. |
+| False Positive | 1 | *Frederick* was incorrectly predicted as *person* while it should have been *city*. |
+| False Negative | 1 | *Forrest* was incorrectly predicted as *city* while it should have been *person*. |
+
+* **Precision**: `#True_Positive / (#True_Positive + #False_Positive)` = `2 / (2 + 1) = 0.67`
+* **Recall**: `#True_Positive / (#True_Positive + #False_Negatives)` = `2 / (2 + 1) = 0.67`
+* **F1 Score**: `2 * Precision * Recall / (Precision + Recall)` = `(2 * 0.67 * 0.67) / (0.67 + 0.67) = 0.67`
+
+### Entity-level evaluation for the *city* entity
+
+The model would have the following entity-level evaluation, for the *city* entity:
+
+| Key | Count | Explanation |
+|--|--|--|
+| True Positive | 1 | *Colorado Springs* was correctly predicted as *city*. |
+| False Positive | 1 | *Forrest* was incorrectly predicted as *city* while it should have been *person*. |
+| False Negative | 1 | *Frederick* was incorrectly predicted as *person* while it should have been *city*. |
+
+* **Precision** = `#True_Positive / (#True_Positive + #False_Positive)` = `1 / (1 + 1) = 0.5`
+* **Recall** = `#True_Positive / (#True_Positive + #False_Negatives)` = `1 / (1 + 1) = 0.5`
+* **F1 Score** = `2 * Precision * Recall / (Precision + Recall)` = `(2 * 0.5 * 0.5) / (0.5 + 0.5) = 0.5`
+
+### Model-level evaluation for the collective model
+
+The model would have the following evaluation for the model in its entirety:
+
+| Key | Count | Explanation |
+|--|--|--|
+| True Positive | 3 | *John Smith* and *Fannie Thomas* were correctly predicted as *person*. *Colorado Springs* was correctly predicted as *city*. This is the sum of true positives for all entities. |
+| False Positive | 2 | *Forrest* was incorrectly predicted as *city* while it should have been *person*. *Frederick* was incorrectly predicted as *person* while it should have been *city*. This is the sum of false positives for all entities. |
+| False Negative | 2 | *Forrest* was incorrectly predicted as *city* while it should have been *person*. *Frederick* was incorrectly predicted as *person* while it should have been *city*. This is the sum of false negatives for all entities. |
+
+* **Precision** = `#True_Positive / (#True_Positive + #False_Positive)` = `3 / (3 + 2) = 0.6`
+* **Recall** = `#True_Positive / (#True_Positive + #False_Negatives)` = `3 / (3 + 2) = 0.6`
+* **F1 Score** = `2 * Precision * Recall / (Precision + Recall)` = `(2 * 0.6 * 0.6) / (0.6 + 0.6) = 0.6`
+
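+These computations are easy to verify directly; the short sketch below reproduces the model-level numbers above (TP = 3, FP = 2, FN = 2):
+
+```python
+def precision(tp, fp):
+    return tp / (tp + fp)
+
+def recall(tp, fn):
+    return tp / (tp + fn)
+
+def f1(p, r):
+    return 2 * p * r / (p + r)
+
+# Model-level counts from the example above.
+p, r = precision(3, 2), recall(3, 2)
+print(round(p, 2), round(r, 2), round(f1(p, r), 2))  # 0.6 0.6 0.6
+```
+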
+## Interpreting entity-level evaluation metrics
+
+So what does it actually mean to have high precision or high recall for a certain entity?
+
+| Recall | Precision | Interpretation |
+|--|--|--|
+| High | High | This entity is handled well by the model. |
+| Low | High | The model cannot always extract this entity, but when it does it is with high confidence. |
+| High | Low | The model extracts this entity well, however it is with low confidence as it is sometimes extracted as another type. |
+| Low | Low | This entity type is poorly handled by the model, because it is not usually extracted. When it is, it is not with high confidence. |
+
+## Guidance
+
+After you train your model, you'll see guidance and recommendations on how to improve the model. It's recommended to have a model covering all points in the guidance section.
+
+* Training set has enough data: When an entity type has fewer than 15 labeled instances in the training data, it can lead to lower accuracy due to the model not being adequately trained on these cases. In this case, consider adding more labeled data in the training set. You can check the *data distribution* tab for more guidance.
+
+* All entity types are present in test set: When the testing data lacks labeled instances for an entity type, the model's test performance may become less comprehensive due to untested scenarios. You can check the *test set data distribution* tab for more guidance.
+
+* Entity types are balanced within training and test sets: When sampling bias causes an inaccurate representation of an entity type's frequency, it can lead to lower accuracy due to the model expecting that entity type to occur too often or too little. You can check the *data distribution* tab for more guidance.
+
+* Entity types are evenly distributed between training and test sets: When the mix of entity types doesn't match between training and test sets, it can lead to lower testing accuracy due to the model being trained differently from how it's being tested. You can check the *data distribution* tab for more guidance.
+
+* Unclear distinction between entity types in training set: When the training data is similar for multiple entity types, it can lead to lower accuracy because the entity types may be frequently misclassified as each other. Review the following entity types and consider merging them if they're similar. Otherwise, add more examples to better distinguish them from each other. You can check the *confusion matrix* tab for more guidance.
++
+## Confusion matrix
+
+A confusion matrix is an N x N matrix used for model performance evaluation, where N is the number of entities.
+The matrix compares the expected labels with the ones predicted by the model.
+This gives a holistic view of how well the model is performing and what kinds of errors it is making.
+
+You can use the Confusion matrix to identify entities that are too close to each other and often get mistaken (ambiguity). In this case consider merging these entity types together. If that isn't possible, consider adding more tagged examples of both entities to help the model differentiate between them.
+
+The highlighted diagonal of the matrix shows the correctly predicted entities, where the predicted tag is the same as the actual tag.
++
+You can calculate the entity-level and model-level evaluation metrics from the confusion matrix:
+
+* The values in the diagonal are the *true positive* values of each entity.
+* The sum of the values in each entity's row (excluding the diagonal) is the *false positive* count for that entity.
+* The sum of the values in each entity's column (excluding the diagonal) is the *false negative* count for that entity.
+
+Similarly,
+
+* The *true positive* of the model is the sum of *true positives* for all entities.
+* The *false positive* of the model is the sum of *false positives* for all entities.
+* The *false negative* of the model is the sum of *false negatives* for all entities.
+
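+The same bookkeeping can be written directly against a confusion matrix. This sketch follows the row/column convention described above and reproduces the per-entity counts from the earlier example:
+
+```python
+import numpy as np
+
+# Rows = predicted entity, columns = actual entity.
+entities = ["person", "city"]
+cm = np.array([[2, 1],   # predicted person: 2 correct, 1 was actually a city
+               [1, 1]])  # predicted city: 1 was actually a person, 1 correct
+
+for i, name in enumerate(entities):
+    tp = cm[i, i]
+    fp = cm[i, :].sum() - tp   # row sum off the diagonal: false positives
+    fn = cm[:, i].sum() - tp   # column sum off the diagonal: false negatives
+    print(name, tp, fp, fn)    # person: 2 1 1 / city: 1 1 1
+```
+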
+## Next steps
+
+* [Custom text analytics for health overview](../overview.md)
+* [View a model's performance in Language Studio](../how-to/view-model-evaluation.md)
+* [Train a model](../how-to/train-model.md)
cognitive-services Call Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-text-analytics-for-health/how-to/call-api.md
+
+ Title: Send a custom Text Analytics for health request to your custom model
+description: Learn how to send a request for custom text analytics for health.
+ Last updated : 04/14/2023
+ms.devlang: REST API
+++
+# Send queries to your custom Text Analytics for health model
+
+After the deployment is added successfully, you can query the deployment to extract entities from your text based on the model you assigned to the deployment.
+You can query the deployment programmatically using the [Prediction API](https://aka.ms/ct-runtime-api).
+
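+For orientation, a request might look like the sketch below. The route, API version, and `CustomHealthcare` task kind shown here are assumptions; confirm the exact request shape in the [Prediction API](https://aka.ms/ct-runtime-api) reference.
+
+```python
+import requests
+
+endpoint = "https://<your-resource>.cognitiveservices.azure.com"  # placeholder
+key = "<your-resource-key>"  # placeholder
+
+body = {
+    "displayName": "Extract custom health entities",
+    "analysisInput": {"documents": [
+        {"id": "1", "language": "en", "text": "Patient was given 100 mg of ibuprofen."}
+    ]},
+    "tasks": [{
+        "kind": "CustomHealthcare",  # assumed task kind
+        "parameters": {"projectName": "<project>", "deploymentName": "<deployment>"},
+    }],
+}
+
+resp = requests.post(
+    f"{endpoint}/language/analyze-text/jobs?api-version=2022-05-01",  # assumed route
+    headers={"Ocp-Apim-Subscription-Key": key},
+    json=body,
+)
+# A successful submission returns 202 with an operation-location header to poll.
+print(resp.status_code, resp.headers.get("operation-location"))
+```
+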
+## Test deployed model
+
+You can use Language Studio to submit the custom Text Analytics for health task and visualize the results.
++
+## Send a custom text analytics for health request to your model
+
+# [Language Studio](#tab/language-studio)
++
+# [REST API](#tab/rest-api)
+
+First you will need to get your resource key and endpoint:
++
+### Submit a custom Text Analytics for health task
++
+### Get task results
+++++
+## Next steps
+
+* [Custom text analytics for health](../overview.md)
cognitive-services Create Project https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-text-analytics-for-health/how-to/create-project.md
+
+ Title: Using Azure resources in custom Text Analytics for health
+
+description: Learn about the steps for using Azure resources with custom text analytics for health.
+ Last updated : 04/14/2023
+# How to create custom Text Analytics for health project
+
+Use this article to learn how to set up the requirements for starting with custom text analytics for health and create a project.
+
+## Prerequisites
+
+Before you start using custom text analytics for health, you need:
+
+* An Azure subscription - [Create one for free](https://azure.microsoft.com/free/cognitive-services).
+
+## Create a Language resource
+
+Before you start using custom text analytics for health, you'll need an Azure Language resource. It's recommended to create your Language resource and connect a storage account to it in the Azure portal. Creating a resource in the Azure portal lets you create an Azure storage account at the same time, with all of the required permissions preconfigured. You can also read further in the article to learn how to use a pre-existing resource, and configure it to work with custom text analytics for health.
+
+You also need an Azure storage account where you will upload your `.txt` documents that will be used to train a model to extract entities.
+
+> [!NOTE]
+> * You need to have an **owner** role assigned on the resource group to create a Language resource.
+> * If you will connect a pre-existing storage account, you should have an owner role assigned to it.
+
+## Create Language resource and connect storage account
+
+You can create a resource in the following ways:
+
+* The Azure portal
+* Language Studio
+* PowerShell
+
+> [!Note]
+> You shouldn't move the storage account to a different resource group or subscription once it's linked with the Language resource.
+++++
+> [!NOTE]
+> * The process of connecting a storage account to your Language resource is irreversible; it cannot be disconnected later.
+> * You can only connect your language resource to one storage account.
+
+## Using a pre-existing Language resource
++
+## Create a custom Text Analytics for health project
+
+Once your resource and storage container are configured, create a new custom text analytics for health project. A project is a work area for building your custom AI models based on your data. Your project can only be accessed by you and others who have access to the Azure resource being used. If you have labeled data, you can use it to get started by [importing a project](#import-project).
+
+### [Language Studio](#tab/language-studio)
++
+### [REST APIs](#tab/rest-api)
++++
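+For reference, creating a project programmatically might look like the sketch below. The route, verb, and body fields follow the labels-file metadata described in [data formats](../concepts/data-formats.md), but they are assumptions here; verify them against the authoring API reference.
+
+```python
+import requests
+
+endpoint = "https://<your-resource>.cognitiveservices.azure.com"  # placeholder
+key = "<your-resource-key>"  # placeholder
+
+project = {
+    "projectName": "<project>",
+    "projectKind": "CustomHealthcare",
+    "language": "en",
+    "multilingual": False,
+    "storageInputContainerName": "<container>",
+    "description": "Extract custom health entities",
+}
+
+resp = requests.patch(  # assumed verb and route
+    f"{endpoint}/language/authoring/analyze-text/projects/<project>?api-version=2022-05-01",
+    headers={"Ocp-Apim-Subscription-Key": key, "Content-Type": "application/json"},
+    json=project,
+)
+print(resp.status_code)
+```
+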
+## Import project
+
+If you have already labeled data, you can use it to get started with the service. Make sure that your labeled data follows the [accepted data formats](../concepts/data-formats.md).
+
+### [Language Studio](#tab/language-studio)
++
+### [REST APIs](#tab/rest-api)
++++
+## Get project details
+
+### [Language Studio](#tab/language-studio)
++
+### [REST APIs](#tab/rest-api)
++++
+## Delete project
+
+### [Language Studio](#tab/language-studio)
++
+### [REST APIs](#tab/rest-api)
++++
+## Next steps
+
+* You should have an idea of the [project schema](design-schema.md) you will use to label your data.
+
+* After you define your schema, you can start [labeling your data](label-data.md), which will be used for model training, evaluation, and finally making predictions.
cognitive-services Deploy Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-text-analytics-for-health/how-to/deploy-model.md
+
+ Title: Deploy a custom Text Analytics for health model
+
+description: Learn about deploying a model for custom Text Analytics for health.
+ Last updated : 04/14/2023
+# Deploy a custom text analytics for health model
+
+Once you're satisfied with how your model performs, it's ready to be deployed and used to recognize entities in text. Deploying a model makes it available for use through the [prediction API](https://aka.ms/ct-runtime-swagger).
+
+## Prerequisites
+
+* A successfully [created project](create-project.md) with a configured Azure storage account.
+* Text data that has [been uploaded](design-schema.md#data-preparation) to your storage account.
+* [Labeled data](label-data.md) and a successfully [trained model](train-model.md).
+* Reviewed the [model evaluation details](view-model-evaluation.md) to determine how your model is performing.
+
+For more information, see [project development lifecycle](../overview.md#project-development-lifecycle).
+
+## Deploy model
+
+After you've reviewed your model's performance and decided it can be used in your environment, you need to assign it to a deployment. Assigning the model to a deployment makes it available for use through the [prediction API](https://aka.ms/ct-runtime-swagger). It is recommended to create a deployment named *production* to which you assign the best model you have built so far and use it in your system. You can create another deployment called *staging* to which you can assign the model you're currently working on to be able to test it. You can have a maximum of 10 deployments in your project.
+
+# [Language Studio](#tab/language-studio)
+
+
+# [REST APIs](#tab/rest-api)
+
+### Submit deployment job
++
+### Get deployment job status
++++
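+As a sketch, submitting a deployment job might look like this; the route and body shape are assumptions based on the authoring API pattern, so confirm them in the API reference:
+
+```python
+import requests
+
+endpoint = "https://<your-resource>.cognitiveservices.azure.com"  # placeholder
+key = "<your-resource-key>"  # placeholder
+
+resp = requests.put(  # assumed verb and route
+    f"{endpoint}/language/authoring/analyze-text/projects/<project>"
+    f"/deployments/production?api-version=2022-05-01",
+    headers={"Ocp-Apim-Subscription-Key": key},
+    json={"trainedModelLabel": "<model-name>"},
+)
+# A 202 response returns an operation-location header to poll for status.
+print(resp.status_code, resp.headers.get("operation-location"))
+```
+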
+## Swap deployments
+
+After you are done testing a model assigned to one deployment and you want to assign this model to another deployment, you can swap these two deployments. Swapping deployments involves taking the model assigned to the first deployment and assigning it to the second deployment, then taking the model assigned to the second deployment and assigning it to the first deployment. You can use this process to swap your *production* and *staging* deployments when you want to take the model assigned to *staging* and assign it to *production*.
+
+# [Language Studio](#tab/language-studio)
++
+# [REST APIs](#tab/rest-api)
+++++
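+A hypothetical swap request is sketched below; the `:swap` action and route are assumptions, so check the authoring API reference for the exact path:
+
+```python
+import requests
+
+endpoint = "https://<your-resource>.cognitiveservices.azure.com"  # placeholder
+key = "<your-resource-key>"  # placeholder
+
+resp = requests.post(  # assumed route and action name
+    f"{endpoint}/language/authoring/analyze-text/projects/<project>"
+    f"/deployments/:swap?api-version=2022-05-01",
+    headers={"Ocp-Apim-Subscription-Key": key},
+    json={"firstDeploymentName": "production", "secondDeploymentName": "staging"},
+)
+print(resp.status_code)
+```
+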
+## Delete deployment
+
+# [Language Studio](#tab/language-studio)
++
+# [REST APIs](#tab/rest-api)
++++
+## Assign deployment resources
+
+You can [deploy your project to multiple regions](../../concepts/custom-features/multi-region-deployment.md) by assigning different Language resources that exist in different regions.
+
+# [Language Studio](#tab/language-studio)
++
+# [REST APIs](#tab/rest-api)
++++
+## Unassign deployment resources
+
+When unassigning or removing a deployment resource from a project, you will also delete all the deployments that have been deployed to that resource's region.
+
+# [Language Studio](#tab/language-studio)
++
+# [REST APIs](#tab/rest-api)
++++
+## Next steps
+
+After you have a deployment, you can use it to [extract entities](call-api.md) from text.
cognitive-services Design Schema https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-text-analytics-for-health/how-to/design-schema.md
+
+ Title: Preparing data and designing a schema for custom Text Analytics for health
+
+description: Learn about how to select and prepare data, to be successful in creating custom TA4H projects.
+ Last updated : 04/14/2023
+# How to prepare data and define a schema for custom Text Analytics for health
+
+In order to create a custom TA4H model, you will need quality data to train it. This article covers how you should select and prepare your data, along with defining a schema. Defining the schema is the first step in the [project development lifecycle](../overview.md#project-development-lifecycle), and it entails defining the entity types or categories that you need your model to extract from the text at runtime.
+
+## Schema design
+
+Custom Text Analytics for health allows you to extend and customize the Text Analytics for health entity map. The first step of the process is building your schema, which allows you to define the new entity types or categories that you need your model to extract from text in addition to the Text Analytics for health existing entities at runtime.
+
+* Review documents in your dataset to be familiar with their format and structure.
+
+* Identify the entities you want to extract from the data.
+
+ For example, if you are extracting entities from support emails, you might need to extract "Customer name", "Product name", "Request date", and "Contact information".
+
+* Avoid ambiguity between entity types.
+
+    **Ambiguity** happens when the entity types you select are similar to each other. The more ambiguous your schema is, the more labeled data you need to differentiate between different entity types.
+
+    For example, if you are extracting data from a legal contract, to extract "Name of first party" and "Name of second party" you will need to add more examples to overcome ambiguity since the names of both parties look similar. Avoiding ambiguity saves time and effort, and yields better results.
+
+* Avoid complex entities. Complex entities can be difficult to pick out precisely from text; consider breaking them down into multiple entities.
+
+    For example, extracting "Address" would be challenging if it's not broken down into smaller entities. There are so many variations of how addresses appear that it would take a large number of labeled entities to teach the model to extract an address, as a whole, without breaking it down. However, if you replace "Address" with "Street Name", "PO Box", "City", "State" and "Zip", the model will require fewer labels per entity.
++
+## Add entities
+
+To add entities to your project:
+
+1. Move to the **Entities** pivot from the top of the page.
+
+2. [Text Analytics for health entities](../../text-analytics-for-health/concepts/health-entity-categories.md) are automatically loaded into your project. To add additional entity categories, select **Add** from the top menu. You will be prompted to type in a name before the entity is created.
+
+3. After creating an entity, you'll be routed to the entity details page where you can define the composition settings for this entity.
+
+4. Entities are defined by [entity components](../concepts/entity-components.md): learned, list or prebuilt. Text Analytics for health entities are by default populated with the prebuilt component and cannot have learned components. Your newly defined entities can be populated with the learned component once you add labels for them in your data but cannot be populated with the prebuilt component.
+
+5. You can add a [list](../concepts/entity-components.md#list-component) component to any of your entities.
+
+
+### Add list component
+
+To add a **list** component, select **Add new list**. You can add multiple lists to each entity.
+
+1. To create a new list, enter the list key in the *Enter value* text box. This is the normalized value that will be returned when any of the synonym values is extracted.
+
+2. For multilingual projects, from the *language* drop-down menu, select the language of the synonyms list, start typing your synonyms, and press Enter after each one. It is recommended to have synonym lists in multiple languages.
+
+ <!--:::image type="content" source="../media/add-list-component.png" alt-text="A screenshot showing a list component in Language Studio." lightbox="../media/add-list-component.png":::-->
+
+### Define entity options
+
+Change to the **Entity options** pivot in the entity details page. When multiple components are defined for an entity, their predictions may overlap. When an overlap occurs, each entity's final prediction is determined based on the [entity option](../concepts/entity-components.md#entity-options) you select in this step. Select the one that you want to apply to this entity and click on the **Save** button at the top.
+
+ <!--:::image type="content" source="../media/entity-options.png" alt-text="A screenshot showing an entity option in Language Studio." lightbox="../media/entity-options.png":::-->
++
+After you create your entities, you can come back and edit them. You can **Edit entity components** or **delete** them by selecting this option from the top menu.
++
+## Data selection
+
+The quality of data you train your model with affects model performance greatly.
+
+* Use real-life data that reflects your domain's problem space to effectively train your model. You can use synthetic data to accelerate the initial model training process, but it will likely differ from your real-life data and make your model less effective when used.
+
+* Balance your data distribution as much as possible without deviating far from the distribution in real-life. For example, if you are training your model to extract entities from legal documents that may come in many different formats and languages, you should provide examples that reflect the diversity you would expect to see in real life.
+
+* Use diverse data whenever possible to avoid overfitting your model. Less diversity in training data may lead to your model learning spurious correlations that may not exist in real-life data.
+
+* Avoid duplicate documents in your data. Duplicate data has a negative effect on the training process, model metrics, and model performance.
+
+* Consider where your data comes from. If you are collecting data from one person, department, or part of your scenario, you are likely missing diversity that may be important for your model to learn about.
+
+> [!NOTE]
+> If your documents are in multiple languages, select the **enable multi-lingual** option during [project creation](../quickstart.md) and set the **language** option to the language of the majority of your documents.
+
+## Data preparation
+
+As a prerequisite for creating a project, your training data needs to be uploaded to a blob container in your storage account. You can create and upload training documents from Azure directly, or by using the Azure Storage Explorer tool. Using the Azure Storage Explorer tool allows you to upload more data quickly.
+
+* [Create and upload documents from Azure](../../../../storage/blobs/storage-quickstart-blobs-portal.md#create-a-container)
+* [Create and upload documents using Azure Storage Explorer](../../../../vs-azure-tools-storage-explorer-blobs.md)
+
+You can only use `.txt` documents. If your data is in another format, you can use the [CLUtils parse command](https://github.com/microsoft/CognitiveServicesLanguageUtilities/blob/main/CustomTextAnalytics.CLUtils/Solution/CogSLanguageUtilities.ViewLayer.CliCommands/Commands/ParseCommand/README.md) to change your document format.
+
+You can upload an annotated dataset, or you can upload an unannotated one and [label your data](../how-to/label-data.md) in Language studio.
+
+## Test set
+
+When defining the testing set, make sure to include example documents that are not present in the training set. Defining the testing set is an important step to calculate the [model performance](view-model-evaluation.md#model-details). Also, make sure that the testing set includes documents that represent all entities used in your project.
+
+## Next steps
+
+If you haven't already, create a custom Text Analytics for health project. If it's your first time using custom Text Analytics for health, consider following the [quickstart](../quickstart.md) to create an example project. You can also see the [how-to article](../how-to/create-project.md) for more details on what you need to create a project.
cognitive-services Fail Over https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-text-analytics-for-health/how-to/fail-over.md
+
+ Title: Back up and recover your custom Text Analytics for health models
+
+description: Learn how to save and recover your custom Text Analytics for health models.
+ Last updated : 04/14/2023
+# Back up and recover your custom Text Analytics for health models
+
+When you create a Language resource, you specify a region for it to be created in. From then on, your resource and all of the operations related to it take place in the specified Azure server region. It's rare, but not impossible, to encounter a network issue that affects an entire region. If your solution needs to always be available, then you should design it to fail over into another region. This requires two Azure Language resources in different regions and synchronizing custom models across them.
+
+If your app or business depends on the use of a custom Text Analytics for health model, we recommend that you create a replica of your project in an additional supported region. If a regional outage occurs, you can then access your model in the other fail-over region where you replicated your project.
+
+Replicating a project means that you export your project metadata and assets, and import them into a new project. This only makes a copy of your project settings and tagged data. You still need to [train](./train-model.md) and [deploy](./deploy-model.md) the models to be available for use with [prediction APIs](https://aka.ms/ct-runtime-swagger).
+
+In this article, you will learn how to use the export and import APIs to replicate your project from one resource to another in a different supported geographical region, along with guidance on keeping your projects in sync and the changes needed to your runtime consumption.
+
+## Prerequisites
+
+* Two Azure Language resources in different Azure regions. [Create your resources](./create-project.md#create-a-language-resource) and connect them to an Azure storage account. It's recommended that you connect each of your Language resources to different storage accounts. Each storage account should be located in the same respective regions that your separate Language resources are in. You can follow the [quickstart](../quickstart.md?pivots=rest-api#create-a-new-azure-language-resource-and-azure-storage-account) to create an additional Language resource and storage account.
++
+## Get your resource keys endpoint
+
+Use the following steps to get the keys and endpoint of your primary and secondary resources. These will be used in the following steps.
++
+> [!TIP]
+> Keep a note of keys and endpoints for both primary and secondary resources as well as the primary and secondary container names. Use these values to replace the following placeholders:
+`{PRIMARY-ENDPOINT}`, `{PRIMARY-RESOURCE-KEY}`, `{PRIMARY-CONTAINER-NAME}`, `{SECONDARY-ENDPOINT}`, `{SECONDARY-RESOURCE-KEY}`, and `{SECONDARY-CONTAINER-NAME}`.
+> Also take note of your project name, your model name and your deployment name. Use these values to replace the following placeholders: `{PROJECT-NAME}`, `{MODEL-NAME}` and `{DEPLOYMENT-NAME}`.
+
+## Export your primary project assets
+
+Start by exporting the project assets from the project in your primary resource.
+
+### Submit export job
+
+Replace the placeholders in the following request with your `{PRIMARY-ENDPOINT}` and `{PRIMARY-RESOURCE-KEY}` that you obtained in the first step.
++
+### Get export job status
+
+Replace the placeholders in the following request with your `{PRIMARY-ENDPOINT}` and `{PRIMARY-RESOURCE-KEY}` that you obtained in the first step.
+++
+Copy the response body as you will use it as the body for the next import job.
+
+## Import to a new project
+
+Now go ahead and import the exported project assets in your new project in the secondary region so you can replicate it.
+
+### Submit import job
+
+Replace the placeholders in the following request with your `{SECONDARY-ENDPOINT}`, `{SECONDARY-RESOURCE-KEY}`, and `{SECONDARY-CONTAINER-NAME}` that you obtained in the first step.
++
+### Get import job status
+
+Replace the placeholders in the following request with your `{SECONDARY-ENDPOINT}` and `{SECONDARY-RESOURCE-KEY}` that you obtained in the first step.
+++
+## Train your model
+
+After importing your project, you have only copied the project's metadata and assets. You still need to train your model, which will incur usage on your account.
+
+### Submit training job
+
+Replace the placeholders in the following request with your `{SECONDARY-ENDPOINT}` and `{SECONDARY-RESOURCE-KEY}` that you obtained in the first step.
+++
+### Get training status
+
+Replace the placeholders in the following request with your `{SECONDARY-ENDPOINT}` and `{SECONDARY-RESOURCE-KEY}` that you obtained in the first step.
++
+## Deploy your model
+
+This is the step where you make your trained model available for consumption via the [runtime prediction API](https://aka.ms/ct-runtime-swagger).
+
+> [!TIP]
+> Use the same deployment name as your primary project for easier maintenance and minimal changes to your system to handle redirecting your traffic.
+
+### Submit deployment job
+
+Replace the placeholders in the following request with your `{SECONDARY-ENDPOINT}` and `{SECONDARY-RESOURCE-KEY}` that you obtained in the first step.
++
+### Get the deployment status
+
+Replace the placeholders in the following request with your `{SECONDARY-ENDPOINT}` and `{SECONDARY-RESOURCE-KEY}` that you obtained in the first step.
++
+## Changes in calling the runtime
+
+Within your system, at the step where you call the [runtime prediction API](https://aka.ms/ct-runtime-swagger), check the response code returned from the submit task API. If you observe a **consistent** failure in submitting the request, this could indicate an outage in your primary region. A single failure doesn't mean an outage; it may be a transient issue. Retry submitting the job through the secondary resource you have created. For the second request use your `{SECONDARY-ENDPOINT}` and `{SECONDARY-RESOURCE-KEY}`; if you have followed the steps above, `{PROJECT-NAME}` and `{DEPLOYMENT-NAME}` would be the same, so no changes are required to the request body.
+
+In case you revert to using your secondary resource, you will observe a slight increase in latency because of the difference in regions where your model is deployed.
+
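+The retry-then-fail-over pattern described above can be sketched as follows; the route, retry counts, and `submit_job` helper are illustrative only:
+
+```python
+import requests
+
+PRIMARY = {"endpoint": "<primary-endpoint>", "key": "<primary-key>"}        # placeholders
+SECONDARY = {"endpoint": "<secondary-endpoint>", "key": "<secondary-key>"}  # placeholders
+
+def submit_job(resource, body, retries=3):
+    """Submit to one resource; treat repeated 5xx/network errors as an outage."""
+    for _ in range(retries):
+        try:
+            resp = requests.post(
+                f"{resource['endpoint']}/language/analyze-text/jobs?api-version=2022-05-01",  # assumed route
+                headers={"Ocp-Apim-Subscription-Key": resource["key"]},
+                json=body,
+                timeout=10,
+            )
+            if resp.status_code < 500:
+                return resp  # success, or a client error that a retry won't fix
+        except requests.RequestException:
+            pass  # transient network error; retry
+    return None
+
+# Same project and deployment names mean the body needs no changes on failover.
+resp = submit_job(PRIMARY, body={}) or submit_job(SECONDARY, body={})
+```
+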
+## Check if your projects are out of sync
+
+Maintaining the freshness of both projects is an important part of the process. You need to frequently check if any updates were made to your primary project so that you move them over to your secondary project. This way if your primary region fails and you move into the secondary region you should expect similar model performance since it already contains the latest updates. Setting the frequency of checking if your projects are in sync is an important choice. We recommend that you do this check daily in order to guarantee the freshness of data in your secondary model.
+
+### Get project details
+
+Use the following URL to get your project details; one of the keys returned in the body indicates the last modified date of the project.
+Repeat the following step twice, once for your primary project and again for your secondary project, and compare the timestamps returned for both of them to check if they are out of sync.
+
+ [!INCLUDE [get project details](../includes/rest-api/get-project-details.md)]
++
+Repeat the same steps for your replicated project using `{SECONDARY-ENDPOINT}` and `{SECONDARY-RESOURCE-KEY}`. Compare the returned `lastModifiedDateTime` from both projects. If your primary project was modified sooner than your secondary one, you need to repeat the steps of [exporting](#export-your-primary-project-assets), [importing](#import-to-a-new-project), [training](#train-your-model) and [deploying](#deploy-your-model).
++
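+A daily sync check can be scripted against the project details endpoint. The API version below is an assumption, and ISO 8601 timestamps compare correctly as strings:
+
+```python
+import requests
+
+def last_modified(endpoint, key, project):
+    """Return the project's lastModifiedDateTime from the project details API."""
+    resp = requests.get(
+        f"{endpoint}/language/authoring/analyze-text/projects/{project}?api-version=2022-05-01",
+        headers={"Ocp-Apim-Subscription-Key": key},
+    )
+    return resp.json()["lastModifiedDateTime"]
+
+primary = last_modified("<primary-endpoint>", "<primary-key>", "<project>")
+secondary = last_modified("<secondary-endpoint>", "<secondary-key>", "<project>")
+
+if primary > secondary:
+    print("Secondary project is stale; re-run export, import, train, and deploy.")
+```
+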
+## Next steps
+
+In this article, you have learned how to use the export and import APIs to replicate your project to a secondary Language resource in other region. Next, explore the API reference docs to see what else you can do with authoring APIs.
+
+* [Authoring REST API reference](https://aka.ms/ct-authoring-swagger)
+
+* [Runtime prediction REST API reference](https://aka.ms/ct-runtime-swagger)
cognitive-services Label Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-text-analytics-for-health/how-to/label-data.md
+
+ Title: How to label your data for custom Text Analytics for health
+
+description: Learn how to label your data for use with custom Text Analytics for health.
+ Last updated : 04/14/2023
+# Label your data using the Language Studio
+
+Data labeling is a crucial step in the development lifecycle. In this step, you label your documents with the new entities you defined in your schema to populate their learned components. This data will be used in the next step when training your model so that your model can learn from the labeled data to know which entities to extract. If you already have labeled data, you can directly [import](create-project.md#import-project) it into your project, but you need to make sure that your data follows the [accepted data format](../concepts/data-formats.md). See [create project](create-project.md#import-project) to learn more about importing labeled data into your project. If your data isn't labeled already, you can label it in the [Language Studio](https://aka.ms/languageStudio).
+
+## Prerequisites
+
+Before you can label your data, you need:
+
+* A successfully [created project](create-project.md) with a configured Azure blob storage account
+* Text data that [has been uploaded](design-schema.md#data-preparation) to your storage account.
+
+See the [project development lifecycle](../overview.md#project-development-lifecycle) for more information.
+
+## Data labeling guidelines
+
+After preparing your data, designing your schema and creating your project, you will need to label your data. Labeling your data is important so your model knows which words will be associated with the entity types you need to extract. When you label your data in [Language Studio](https://aka.ms/languageStudio) (or import labeled data), these labels are stored in the JSON document in your storage container that you have connected to this project.
+
+As you label your data, keep in mind:
+
+* You can't add labels for Text Analytics for health entities as they're pretrained prebuilt entities. You can only add labels to new entity categories that you defined during schema definition.
+
+    If you want to improve the recall for a prebuilt entity, you can extend it by adding a list component while you are [defining your schema](design-schema.md).
+
+* In general, more labeled data leads to better results, provided the data is labeled accurately.
+
+* The precision, consistency and completeness of your labeled data are key factors to determining model performance.
+
+    * **Label precisely**: Always label each entity with its correct type. Only include what you want extracted, and avoid unnecessary data in your labels.
+ * **Label consistently**: The same entity should have the same label across all the documents.
+ * **Label completely**: Label all the instances of the entity in all your documents.
+
+ > [!NOTE]
+ > There is no fixed number of labels that can guarantee your model will perform the best. Model performance is dependent on possible ambiguity in your schema, and the quality of your labeled data. Nevertheless, we recommend having around 50 labeled instances per entity type.
+
+## Label your data
+
+Use the following steps to label your data:
+
+1. Go to your project page in [Language Studio](https://aka.ms/languageStudio).
+
+2. From the left side menu, select **Data labeling**. You can find a list of all documents in your storage container.
+
+ <!--:::image type="content" source="../media/tagging-files-view.png" alt-text="A screenshot showing the Language Studio screen for labeling data." lightbox="../media/tagging-files-view.png":::-->
+
+ >[!TIP]
+ > You can use the filters in top menu to view the unlabeled documents so that you can start labeling them.
+ > You can also use the filters to view the documents that are labeled with a specific entity type.
+
+3. Change to a single document view from the left side in the top menu or select a specific document to start labeling. You can find a list of all `.txt` documents available in your project to the left. You can use the **Back** and **Next** button from the bottom of the page to navigate through your documents.
+
+ > [!NOTE]
+ > If you enabled multiple languages for your project, you will find a **Language** dropdown in the top menu, which lets you select the language of each document. Hebrew is not supported with multi-lingual projects.
+
+4. In the right side pane, you can use the **Add entity type** button to add additional entities to your project that you missed during schema definition.
+
+ <!--:::image type="content" source="../media/tag-1.png" alt-text="A screenshot showing complete data labeling." lightbox="../media/tag-1.png":::-->
+
+5. You have two options to label your document:
+
+ |Option |Description |
+ |||
+ |Label using a brush | Select the brush icon next to an entity type in the right pane, then highlight the text in the document you want to annotate with this entity type. |
+ |Label using a menu | Highlight the word you want to label as an entity, and a menu will appear. Select the entity type you want to assign for this entity. |
+
+ The below screenshot shows labeling using a brush.
+
+ :::image type="content" source="../media/tag-options.png" alt-text="A screenshot showing the labeling options offered in Custom NER." lightbox="../media/tag-options.png":::
+
+6. In the right side pane under the **Labels** pivot, you can find all the entity types in your project and the count of labeled instances for each. The prebuilt entities are shown for reference, but you can't label for these prebuilt entities as they are pretrained.
+
+7. In the bottom section of the right side pane you can add the current document you are viewing to the training set or the testing set. By default all the documents are added to your training set. See [training and testing sets](train-model.md#data-splitting) for information on how they are used for model training and evaluation.
+
+ > [!TIP]
+ > If you are planning on using **Automatic** data splitting, use the default option of assigning all the documents into your training set.
+
+8. Under the **Distribution** pivot you can view the distribution across training and testing sets. You have two options for viewing:
+ * *Total instances* where you can view count of all labeled instances of a specific entity type.
+ * *Documents with at least one label* where each document is counted if it contains at least one labeled instance of this entity.
+
+9. When you're labeling, your changes are synced periodically; if they haven't been saved yet, you will find a warning at the top of your page. If you want to save manually, select the **Save labels** button at the bottom of the page.
+
+## Remove labels
+
+To remove a label:
+
+1. Select the entity you want to remove a label from.
+2. Scroll through the menu that appears, and select **Remove label**.
+
+## Delete entities
+
+You cannot delete any of the Text Analytics for health pretrained entities because they have a prebuilt component. You are only permitted to delete newly defined entity categories. To delete an entity, select the delete icon next to the entity you want to remove. Deleting an entity removes all its labeled instances from your dataset.
+
+## Next steps
+
+After you've labeled your data, you can begin [training a model](train-model.md) that will learn based on your data.
cognitive-services Train Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-text-analytics-for-health/how-to/train-model.md
+
+ Title: How to train your custom Text Analytics for health model
+
+description: Learn about how to train your model for custom Text Analytics for health.
+ Last updated : 04/14/2023
+# Train your custom Text Analytics for health model
+
+Training is the process where the model learns from your [labeled data](label-data.md). After training is completed, you'll be able to view the [model's performance](view-model-evaluation.md) to determine if you need to improve your model.
+
+To train a model, you start a training job and only successfully completed jobs create a model. Training jobs expire after seven days, which means you won't be able to retrieve the job details after this time. If your training job completed successfully and a model was created, the model won't be affected. You can only have one training job running at a time, and you can't start other jobs in the same project.
+
+Training times can range from a few minutes for a small number of documents to several hours, depending on the dataset size and the complexity of your schema.
++
+## Prerequisites
+
+* A successfully [created project](create-project.md) with a configured Azure blob storage account
+* Text data that [has been uploaded](design-schema.md#data-preparation) to your storage account.
+* [Labeled data](label-data.md)
+
+See the [project development lifecycle](../overview.md#project-development-lifecycle) for more information.
+
+## Data splitting
+
+Before you start the training process, labeled documents in your project are divided into a training set and a testing set. Each one serves a different function.
+The **training set** is used in training the model; it's the set from which the model learns the labeled entities and which spans of text are to be extracted as entities.
+The **testing set** is a blind set that isn't introduced to the model during training, only during evaluation.
+After model training completes successfully, the model is used to make predictions on the documents in the testing set, and [evaluation metrics](../concepts/evaluation-metrics.md) are calculated based on these predictions. Model training and evaluation apply only to newly defined entities with learned components; Text Analytics for health entities are excluded because they're entities with prebuilt components. Make sure all your labeled entities are adequately represented in both the training and testing sets.
+
+Custom Text Analytics for health supports two methods for data splitting:
+
+* **Automatically splitting the testing set from training data**: The system splits your labeled data between the training and testing sets, according to the percentages you choose. The recommended percentage split is 80% for training and 20% for testing.
+
+ > [!NOTE]
+ > If you choose the **Automatically splitting the testing set from training data** option, only the data assigned to the training set is split according to the percentages provided.
+
+* **Use a manual split of training and testing data**: This method enables users to define which labeled documents should belong to which set. This step is only enabled if you have added documents to your testing set during [data labeling](label-data.md).
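Either method maps to the evaluation options you send when starting a training job. The following is a minimal sketch of the two request bodies, assuming the Language authoring REST API; the endpoint path, API version, and field names are modeled on other custom Language features and may differ, so treat them as assumptions to verify against the REST reference.

```python
import requests

# Hypothetical resource endpoint and key; replace with your own values.
endpoint = "https://<your-resource>.cognitiveservices.azure.com"
headers = {"Ocp-Apim-Subscription-Key": "<your-key>", "Content-Type": "application/json"}

# Automatic split: the service carves a testing set out of your training data.
auto_split_body = {
    "modelLabel": "model-v1",
    "trainingConfigVersion": "latest",
    "evaluationOptions": {
        "kind": "percentage",
        "trainingSplitPercentage": 80,  # recommended 80/20 split
        "testingSplitPercentage": 20,
    },
}

# Manual split: use the documents you assigned to the testing set while labeling.
manual_split_body = {
    "modelLabel": "model-v1",
    "trainingConfigVersion": "latest",
    "evaluationOptions": {"kind": "manual"},
}

response = requests.post(
    f"{endpoint}/language/authoring/analyze-text/projects/<project-name>/:train"
    "?api-version=2022-10-01-preview",
    headers=headers,
    json=auto_split_body,  # or manual_split_body
)
print(response.status_code, response.headers.get("operation-location"))
```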
+
+## Train model
+
+# [Language studio](#tab/Language-studio)
++
+# [REST APIs](#tab/REST-APIs)
+
+### Start training job
++
+### Get training job status
+
+Training can take some time, depending on the size of your training data and the complexity of your schema. You can use the following request to poll the status of the training job until it completes successfully.
+
+ [!INCLUDE [get training model status](../includes/rest-api/get-training-status.md)]
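As a convenience, here is a minimal polling sketch built on the request above. It assumes the job URL is returned in the `operation-location` header of the training request and that the job status lands in a `status` field; verify both against the REST reference for this feature.

```python
import time
import requests

# Hypothetical values; the job URL comes back in the "operation-location"
# header of the training request.
job_url = "<operation-location-url-from-training-request>"
headers = {"Ocp-Apim-Subscription-Key": "<your-key>"}

while True:
    job = requests.get(job_url, headers=headers).json()
    status = job.get("status")
    print("training status:", status)
    if status in ("succeeded", "failed", "cancelled"):
        break
    time.sleep(30)  # stay well under the authoring GET rate limit
```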
+++
+### Cancel training job
+
+# [Language Studio](#tab/language-studio)
++
+# [REST APIs](#tab/rest-api)
++++
+## Next steps
+
+After training is completed, you'll be able to view the [model's performance](view-model-evaluation.md) to optionally improve your model if needed. Once you're satisfied with your model, you can deploy it, making it available to use for [extracting entities](call-api.md) from text.
cognitive-services View Model Evaluation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-text-analytics-for-health/how-to/view-model-evaluation.md
+
+ Title: Evaluate a Custom Text Analytics for health model
+
+description: Learn how to evaluate and score your Custom Text Analytics for health model
++++++ Last updated : 04/14/2023+++++
+# View a custom text analytics for health model's evaluation and details
+
+After your model has finished training, you can view the model performance and see the extracted entities for the documents in the test set.
+
+> [!NOTE]
+> Using the **Automatically split the testing set from training data** option may result in a different model evaluation result each time you train a new model, because the test set is selected randomly from the data. To make sure the evaluation is calculated on the same test set every time you train a model, use the **Use a manual split of training and testing data** option when starting a training job, and define your **Test** documents when [labeling data](label-data.md).
+
+## Prerequisites
+
+Before viewing model evaluation, you need:
+
+* A successfully [created project](create-project.md) with a configured Azure blob storage account.
+* Text data that [has been uploaded](design-schema.md#data-preparation) to your storage account.
+* [Labeled data](label-data.md)
+* A [successfully trained model](train-model.md)
++
+## Model details
+
+There are several metrics you can use to evaluate your model. See the [performance metrics](../concepts/evaluation-metrics.md) article for more information on the model details described in this article.
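If you prefer to script the retrieval, the following is a hedged sketch of fetching a trained model's evaluation summary over REST. The path and API version are assumptions modeled on other custom Language features; check the tabs below and the REST reference for the authoritative shape.

```python
import requests

# Hypothetical endpoint, key, project, and model label.
endpoint = "https://<your-resource>.cognitiveservices.azure.com"
headers = {"Ocp-Apim-Subscription-Key": "<your-key>"}

url = (
    f"{endpoint}/language/authoring/analyze-text/projects/<project-name>"
    "/models/<trained-model-label>/evaluation/summary-result"
    "?api-version=2022-10-01-preview"
)
summary = requests.get(url, headers=headers).json()
print(summary)  # per-entity precision, recall, and F1 are reported here
```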
+
+### [Language studio](#tab/language-studio)
++
+### [REST APIs](#tab/rest-api)
++++
+## Load or export model data
+
+### [Language studio](#tab/Language-studio)
+++
+### [REST APIs](#tab/REST-APIs)
++++
+## Delete model
+
+### [Language studio](#tab/language-studio)
++
+### [REST APIs](#tab/rest-api)
++++
+## Next steps
+
+* [Deploy your model](deploy-model.md)
+* Learn about the [metrics used in evaluation](../concepts/evaluation-metrics.md).
cognitive-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-text-analytics-for-health/language-support.md
+
+ Title: Language and region support for custom Text Analytics for health
+
+description: Learn about the languages and regions supported by custom Text Analytics for health
++++++ Last updated : 04/14/2023++++
+# Language support for custom text analytics for health
+
+Use this article to learn about the languages currently supported by custom Text Analytics for health.
+
+## Multilingual option
+
+With custom Text Analytics for health, you can train a model in one language and use it to extract entities from documents in other languages. This feature saves you the trouble of building separate projects for each language; instead, you can combine your datasets in a single project, making it easy to scale your projects to multiple languages. You can train your project entirely with English documents and query it in French, German, Italian, and other languages. You can enable the multilingual option as part of the project creation process or later through the project settings.
+
+You aren't expected to add the same number of documents for every language. You should build the majority of your project in one language, and only add a few documents in languages that you observe aren't performing well. If you create a project that is primarily in English and start testing it in French, German, and Spanish, you might observe that German doesn't perform as well as the other two languages. In that case, consider adding 5% of your original English documents in German, training a new model, and testing in German again. On the [data labeling](how-to/label-data.md) page in Language Studio, you can select the language of the document you're adding. You should then see better results for German queries. The more labeled documents you add, the more likely the results are to improve. When you add data in another language, you shouldn't expect it to negatively affect other languages.
+
+Hebrew is not supported in multilingual projects. If the primary language of the project is Hebrew, you will not be able to add training data in other languages, or query the model with other languages. Similarly, if the primary language of the project is not Hebrew, you will not be able to add training data in Hebrew, or query the model in Hebrew.
+
+## Language support
+
+Custom Text Analytics for health supports `.txt` files in the following languages:
+
+| Language | Language code |
+| | |
+| English | `en` |
+| French | `fr` |
+| German | `de` |
+| Spanish | `es` |
+| Italian | `it` |
+| Portuguese (Portugal) | `pt-pt` |
+| Hebrew | `he` |
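The multilingual option is exercised at query time by setting the language code on each document you send. The following sketch assumes the asynchronous `analyze-text` runtime endpoint and a task kind for this preview feature; both are assumptions to confirm against the runtime REST reference before use.

```python
import requests

# Hypothetical endpoint and key.
endpoint = "https://<your-resource>.cognitiveservices.azure.com"
headers = {"Ocp-Apim-Subscription-Key": "<your-key>", "Content-Type": "application/json"}

body = {
    "displayName": "Extract health entities",
    "analysisInput": {
        "documents": [
            # An English-trained project can still be queried in French ("fr").
            {"id": "1", "language": "fr", "text": "Le patient a été admis à la clinique."}
        ]
    },
    "tasks": [
        {
            "kind": "CustomHealthcare",  # assumed task kind for this preview feature
            "parameters": {
                "projectName": "<project-name>",
                "deploymentName": "<deployment-name>",
            },
        }
    ],
}

resp = requests.post(
    f"{endpoint}/language/analyze-text/jobs?api-version=2022-10-01-preview",
    headers=headers,
    json=body,
)
print(resp.status_code, resp.headers.get("operation-location"))
```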
++
+## Next steps
+
+* [Custom Text Analytics for health overview](overview.md)
+* [Service limits](reference/service-limits.md)
cognitive-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-text-analytics-for-health/overview.md
+
+ Title: Custom Text Analytics for health - Azure Cognitive Services
+
+description: Customize an AI model to label and extract healthcare information from documents using Azure Cognitive Services.
++++++ Last updated : 04/14/2023++++
+# What is custom Text Analytics for health?
+
+Custom Text Analytics for health is one of the custom features offered by [Azure Cognitive Service for Language](../overview.md). It is a cloud-based API service that applies machine-learning intelligence to enable you to build custom models on top of [Text Analytics for health](../text-analytics-for-health/overview.md) for custom healthcare entity recognition tasks.
+
+Custom Text Analytics for health enables users to build custom AI models to extract healthcare-specific entities from unstructured text, such as clinical notes and reports. By creating a custom Text Analytics for health project, developers can iteratively define new vocabulary, label data, train, evaluate, and improve model performance before making it available for consumption. The quality of the labeled data greatly impacts model performance. To simplify building and customizing your model, the service offers a web portal that can be accessed through the [Language studio](https://aka.ms/languageStudio). You can easily get started with the service by following the steps in this [quickstart](quickstart.md).
+
+This documentation contains the following article types:
+
+* [Quickstarts](quickstart.md) are getting-started instructions to guide you through making requests to the service.
+* [Concepts](concepts/evaluation-metrics.md) provide explanations of the service functionality and features.
+* [How-to guides](how-to/label-data.md) contain instructions for using the service in more specific or customized ways.
+
+## Example usage scenarios
+
+Similar to Text Analytics for health, custom Text Analytics for health can be used in multiple [scenarios](../text-analytics-for-health/overview.md#example-use-cases) across a variety of healthcare industries. However, the main use of this feature is to provide a layer of customization on top of Text Analytics for health to extend its existing entity map.
++
+## Project development lifecycle
+
+Using custom Text Analytics for health typically involves several different steps.
++
+* **Define your schema**: Know your data and define the new entities you want extracted on top of the existing Text Analytics for health entity map. Avoid ambiguity.
+
+* **Label your data**: Labeling data is a key factor in determining model performance. Label precisely, consistently and completely.
+ * **Label precisely**: Always label each entity with its correct type. Include only what you want extracted, and avoid unnecessary data in your labels.
+ * **Label consistently**: The same entity should have the same label across all the files.
+ * **Label completely**: Label all the instances of the entity in all your files.
+
+* **Train the model**: Your model starts learning from your labeled data.
+
+* **View the model's performance**: After training is completed, view the model's evaluation details, its performance and guidance on how to improve it.
+
+* **Deploy the model**: Deploying a model makes it available for use via an API.
+
+* **Extract entities**: Use your custom models for entity extraction tasks.
+
+## Reference documentation and code samples
+
+As you use custom Text Analytics for health, see the following reference documentation for Azure Cognitive Services for Language:
+
+|APIs| Reference documentation|
+|--|--|
+|REST APIs (Authoring) | [REST API documentation](/rest/api/language/2022-10-01-preview/text-analysis-authoring) |
+|REST APIs (Runtime) | [REST API documentation](/rest/api/language/2022-10-01-preview/text-analysis-runtime/submit-job) |
++
+## Responsible AI
+
+An AI system includes not only the technology, but also the people who will use it, the people who will be affected by it, and the environment in which it is deployed. Read the [transparency note for Text Analytics for health](/legal/cognitive-services/language-service/transparency-note-health?context=/azure/cognitive-services/language-service/context/context) to learn about responsible AI use and deployment in your systems. You can also see the following articles for more information:
+++
+## Next steps
+
+* Use the [quickstart article](quickstart.md) to start using custom Text Analytics for health.
+
+* As you go through the project development lifecycle, review the [glossary](reference/glossary.md) to learn more about the terms used throughout the documentation for this feature.
+
+* Remember to view the [service limits](reference/service-limits.md) for information such as [regional availability](reference/service-limits.md#regional-availability).
cognitive-services Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-text-analytics-for-health/quickstart.md
+
+ Title: Quickstart - Custom Text Analytics for health (Custom TA4H)
+
+description: Quickly start building an AI model to categorize and extract information from healthcare unstructured text.
++++++ Last updated : 04/14/2023++
+zone_pivot_groups: usage-custom-language-features
++
+# Quickstart: custom Text Analytics for health
+
+Use this article to get started with creating a custom Text Analytics for health project where you can train custom models on top of Text Analytics for health for custom entity recognition. A model is artificial intelligence software that's trained to do a certain task. For this system, the models extract healthcare related named entities and are trained by learning from labeled data.
+
+In this article, we use Language Studio to demonstrate key concepts of custom Text Analytics for health. As an example, we'll build a custom Text Analytics for health model to extract the facility or treatment location from short discharge notes.
+++++++
+## Next steps
+
+* [Text analytics for health overview](./overview.md)
+
+After you've created an entity extraction model, you can:
+
+* [Use the runtime API to extract entities](how-to/call-api.md)
+
+When you start to create your own custom Text Analytics for health projects, use the how-to articles to learn more about data labeling, training and consuming your model in greater detail:
+
+* [Data selection and schema design](how-to/design-schema.md)
+* [Tag data](how-to/label-data.md)
+* [Train a model](how-to/train-model.md)
+* [Model evaluation](how-to/view-model-evaluation.md)
+
cognitive-services Glossary https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-text-analytics-for-health/reference/glossary.md
+
+ Title: Definitions used in custom Text Analytics for health
+
+description: Learn about definitions used in custom Text Analytics for health
++++++ Last updated : 04/14/2023++++
+# Terms and definitions used in custom Text Analytics for health
+
+Use this article to learn about some of the definitions and terms you may encounter when using custom Text Analytics for health.
+
+## Entity
+Entities are words in input data that describe information relating to a specific category or concept. If your entity is complex and you would like your model to identify specific parts, you can break your entity into subentities. For example, you might want your model to predict an address, but also the subentities of street, city, state, and zip code.
+
+## F1 score
+The F1 score is a function of [precision](#precision) and [recall](#recall). It's needed when you seek a balance between the two.
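For reference, the standard definition is the harmonic mean of the two metrics:

$$
F_1 = 2 \cdot \frac{\text{precision} \cdot \text{recall}}{\text{precision} + \text{recall}}
$$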
+
+## Prebuilt entity component
+
+Prebuilt entity components represent pretrained entity components that belong to the [Text Analytics for health entity map](../../text-analytics-for-health/concepts/health-entity-categories.md). These entities are automatically loaded into your project as entities with prebuilt components. You can define list components for entities with prebuilt components but you cannot add learned components. Similarly, you can create new entities with learned and list components, but you cannot populate them with additional prebuilt components.
++
+## Learned entity component
+
+The learned entity component uses the entity tags you label your text with to train a machine-learned model. The model learns to predict where the entity is, based on the context within the text. Your labels provide examples of where the entity is expected to be present in text, based on the meaning of the surrounding words and the words that were labeled. This component is only defined if you add labels by labeling your data for the entity. If you don't label any data with the entity, it won't have a learned component. Learned components can't be added to entities with prebuilt components.
+
+## List entity component
+A list entity component represents a fixed, closed set of related words along with their synonyms. List entities are exact matches, unlike machine-learned entities.
+
+The entity is predicted whenever a word in its list appears in the input. For example, if you have a list entity called "clinics" and you have the words "clinic a, clinic b, clinic c" in the list, the clinics entity is predicted for all instances of the input data where "clinic a", "clinic b", or "clinic c" are used, regardless of the context. List components can be added to all entities, regardless of whether they're prebuilt or newly defined.
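To make the exact-match behavior concrete, here is an illustrative sketch (not service code, and simplified to substring lookup) of how a list component fires on every occurrence of a listed term:

```python
# Simplified illustration of list-component matching: every listed term found
# in the text is predicted as the entity, regardless of surrounding context.
clinics_list = {"clinic a", "clinic b", "clinic c"}

def match_list_component(text, terms, entity):
    """Return (term, entity) pairs for every listed term found in the text."""
    lowered = text.lower()
    return [(term, entity) for term in sorted(terms) if term in lowered]

print(match_list_component("Referred from clinic a to clinic c.", clinics_list, "clinics"))
# [('clinic a', 'clinics'), ('clinic c', 'clinics')]
```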
+
+## Model
+A model is an object that's trained to do a certain task. In this case, custom Text Analytics for health models perform all the features of Text Analytics for health in addition to custom entity extraction for the user's defined entities. Models are trained by providing labeled data to learn from so they can later be used to understand context from the input text.
+
+* **Model evaluation** is the process that happens right after training to determine how well your model performs.
+* **Deployment** is the process of assigning your model to a deployment to make it available for use via the [prediction API](https://aka.ms/ct-runtime-swagger).
+
+## Overfitting
+
+Overfitting happens when the model fixates on specific examples and isn't able to generalize well.
+
+## Precision
+Measures how precise/accurate your model is. It's the ratio between the correctly identified positives (true positives) and all identified positives. The precision metric reveals how many of the predicted entities are correctly labeled.
+
+## Project
+A project is a work area for building your custom ML models based on your data. Your project can only be accessed by you and others who have access to the Azure resource being used.
+
+## Recall
+Measures the model's ability to predict actual positive entities. It's the ratio between the predicted true positives and what was actually labeled. The recall metric reveals how many of the labeled entities are correctly predicted.
++
+## Schema
+Schema is defined as the combination of entities within your project. Schema design is a crucial part of your project's success. When creating a schema, think about which new entities you should add to your project to extend the existing [Text Analytics for health entity map](../../text-analytics-for-health/concepts/health-entity-categories.md), and which new vocabulary you should add to the prebuilt entities using list components to enhance their recall. For example, you might add a new entity for patient name, or extend the prebuilt entity "Medication Name" with a new research drug (for example, research drug A).
+
+## Training data
+Training data is the set of information that is needed to train a model.
++
+## Next steps
+
+* [Data and service limits](service-limits.md).
+
cognitive-services Service Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-text-analytics-for-health/reference/service-limits.md
+
+ Title: Custom Text Analytics for health service limits
+
+description: Learn about the data and service limits when using Custom Text Analytics for health.
++++++ Last updated : 04/14/2023++++
+# Custom Text Analytics for health service limits
+
+Use this article to learn about the data and service limits when using custom Text Analytics for health.
+
+## Language resource limits
+
+* Your Language resource has to be created in one of the [supported regions](#regional-availability).
+
+* Your resource must be one of the supported pricing tiers:
+
+ |Tier|Description|Limit|
+ |--|--|--|
+ |S |Paid tier|You can have unlimited Language S tier resources per subscription. |
+
+
+* You can only connect one storage account per resource. This process is irreversible. If you connect a storage account to your resource, you cannot unlink it later. Learn more about [connecting a storage account](../how-to/create-project.md#create-language-resource-and-connect-storage-account).
+
+* You can have up to 500 projects per resource.
+
+* Project names have to be unique within the same resource across all custom features.
+
+## Regional availability
+
+Custom Text Analytics for health is only available in some Azure regions because it's a preview service. Some regions may be available for **both authoring and prediction**, while other regions may be for **prediction only**. Language resources in authoring regions allow you to create, edit, train, and deploy your projects. Language resources in prediction regions allow you to get predictions from a deployment.
+
+| Region | Authoring | Prediction |
+|--|--|-|
+| East US | ✓ | ✓ |
+| UK South | ✓ | ✓ |
+| North Europe | ✓ | ✓ |
+
+## API limits
+
+|Item|Request type| Maximum limit|
+|:-|:-|:-|
+|Authoring API|POST|10 per minute|
+|Authoring API|GET|100 per minute|
+|Prediction API|GET/POST|1,000 per minute|
+|Document size|--|125,000 characters. You can send up to 20 documents as long as they collectively do not exceed 125,000 characters|
+
+> [!TIP]
+> If you need to send files larger than the limit allows, you can break the text into smaller chunks before sending them to the API. You can use the [chunk command from CLUtils](https://github.com/microsoft/CognitiveServicesLanguageUtilities/blob/main/CustomTextAnalytics.CLUtils/Solution/CogSLanguageUtilities.ViewLayer.CliCommands/Commands/ChunkCommand/README.md) for this process.
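If you'd rather chunk in your own code, a simple sketch like the following keeps each piece under the 125,000-character document limit; the CLUtils command linked above is a more complete implementation.

```python
def chunk_text(text, max_chars=125_000):
    """Split text into pieces no longer than max_chars, preferring to break
    on newlines so sentences are less likely to be cut mid-way."""
    chunks = []
    while len(text) > max_chars:
        split_at = text.rfind("\n", 0, max_chars)
        if split_at <= 0:          # no newline in range; hard split
            split_at = max_chars
        chunks.append(text[:split_at])
        text = text[split_at:]
    if text:
        chunks.append(text)
    return chunks
```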
+
+## Quota limits
+
+|Pricing tier |Item |Limit |
+| | | |
+|S|Training time| Unlimited, free |
+|S|Prediction Calls| 5,000 text records for free per language resource|
+
+## Document limits
+
+* You can only use `.txt` files. If your data is in another format, you can use the [CLUtils parse command](https://github.com/microsoft/CognitiveServicesLanguageUtilities/blob/main/CustomTextAnalytics.CLUtils/Solution/CogSLanguageUtilities.ViewLayer.CliCommands/Commands/ParseCommand/README.md) to open your document and extract the text.
+
+* All files uploaded in your container must contain data. Empty files are not allowed for training.
+
+* All files should be available at the root of your container.
+
+## Data limits
+
+The following limits are observed for authoring.
+
+|Item|Lower Limit| Upper Limit |
+| | | |
+|Documents count | 10 | 100,000 |
+|Document length in characters | 1 | 128,000 characters; approximately 28,000 words or 56 pages. |
+|Count of entity types | 1 | 200 |
+|Entity length in characters | 1 | 500 |
+|Count of trained models per project| 0 | 10 |
+|Count of deployments per project| 0 | 10 |
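Before uploading, you can sanity-check your documents against these authoring limits with a short script such as the following sketch; the thresholds come from the table above, and the script itself is illustrative.

```python
from pathlib import Path

# Limits from the table above.
MIN_DOCS, MAX_DOCS = 10, 100_000
MIN_CHARS, MAX_CHARS = 1, 128_000

def validate_container_files(folder):
    files = list(Path(folder).glob("*.txt"))
    assert MIN_DOCS <= len(files) <= MAX_DOCS, f"{len(files)} documents is out of range"
    for f in files:
        length = len(f.read_text(encoding="utf-8"))
        assert MIN_CHARS <= length <= MAX_CHARS, f"{f.name} has {length} characters"
    print(f"{len(files)} documents look within limits")
```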
+
+## Naming limits
+
+| Item | Limits |
+|--|--|
+| Project name | You can only use letters `(a-z, A-Z)`, numbers `(0-9)`, and the symbols `_ . -`, with no spaces. Maximum allowed length is 50 characters. |
+| Model name | You can only use letters `(a-z, A-Z)`, numbers `(0-9)`, and the symbols `_ . -`. Maximum allowed length is 50 characters. |
+| Deployment name | You can only use letters `(a-z, A-Z)`, numbers `(0-9)`, and the symbols `_ . -`. Maximum allowed length is 50 characters. |
+| Entity name| You can only use letters `(a-z, A-Z)`, numbers `(0-9)`, and all symbols except `:`, `$ & % * ( ) + ~ # / ?`. Maximum allowed length is 50 characters. See the supported [data format](../concepts/data-formats.md#entity-naming-rules) for more information on entity names when importing a labels file. |
+| Document name | You can only use letters `(a-z, A-Z)` and numbers `(0-9)`, with no spaces. |
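A client-side check of these naming rules can catch errors before a request is rejected. The following regex sketch restates the project, model, and deployment name rule from the table; it's illustrative, not an official validator.

```python
import re

# Letters, numbers, and the symbols _ . - with no spaces; up to 50 characters.
NAME_PATTERN = re.compile(r"^[A-Za-z0-9_.-]{1,50}$")

def is_valid_name(name):
    return bool(NAME_PATTERN.match(name))

print(is_valid_name("my-health_project.v1"))  # True
print(is_valid_name("has spaces"))            # False
```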
++
+## Next steps
+
+* [Custom text analytics for health overview](../overview.md)
cognitive-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/overview.md
Previously updated : 12/09/2022 Last updated : 04/14/2023
This Language service unifies the following previously available Cognitive Servi
The Language service also provides several new features as well, which can either be:
-* Pre-configured, which means the AI models that the feature uses are not customizable. You just send your data, and use the feature's output in your applications.
+* Preconfigured, which means the AI models that the feature uses are not customizable. You just send your data, and use the feature's output in your applications.
* Customizable, which means you'll train an AI model using our tools to fit your data specifically. > [!TIP]
The Language service also provides several new features as well, which can eithe
:::image type="content" source="media/studio-examples/named-entity-recognition.png" alt-text="A screenshot of a named entity recognition example." lightbox="media/studio-examples/named-entity-recognition.png"::: :::column-end::: :::column span="":::
- [Named entity recognition](./named-entity-recognition/overview.md) is a pre-configured feature that categorizes entities (words or phrases) in unstructured text across several pre-defined category groups. For example: people, events, places, dates, [and more](./named-entity-recognition/concepts/named-entity-categories.md).
+ [Named entity recognition](./named-entity-recognition/overview.md) is a preconfigured feature that categorizes entities (words or phrases) in unstructured text across several predefined category groups. For example: people, events, places, dates, [and more](./named-entity-recognition/concepts/named-entity-categories.md).
:::column-end::: :::row-end:::
The Language service also provides several new features as well, which can eithe
:::image type="content" source="media/studio-examples/personal-information-detection.png" alt-text="A screenshot of a PII detection example." lightbox="media/studio-examples/personal-information-detection.png"::: :::column-end::: :::column span="":::
- [PII detection](./personally-identifiable-information/overview.md) is a pre-configured feature that identifies, categorizes, and redacts sensitive information in both [unstructured text documents](./personally-identifiable-information/how-to-call.md), and [conversation transcripts](./personally-identifiable-information/how-to-call-for-conversations.md). For example: phone numbers, email addresses, forms of identification, [and more](./personally-identifiable-information/concepts/entity-categories.md).
+ [PII detection](./personally-identifiable-information/overview.md) is a preconfigured feature that identifies, categorizes, and redacts sensitive information in both [unstructured text documents](./personally-identifiable-information/how-to-call.md), and [conversation transcripts](./personally-identifiable-information/how-to-call-for-conversations.md). For example: phone numbers, email addresses, forms of identification, [and more](./personally-identifiable-information/concepts/entity-categories.md).
:::column-end::: :::row-end:::
The Language service also provides several new features as well, which can eithe
:::image type="content" source="media/studio-examples/language-detection.png" alt-text="A screenshot of a language detection example." lightbox="media/studio-examples/language-detection.png"::: :::column-end::: :::column span="":::
- [Language detection](./language-detection/overview.md) is a pre-configured feature that can detect the language a document is written in, and returns a language code for a wide range of languages, variants, dialects, and some regional/cultural languages.
+ [Language detection](./language-detection/overview.md) is a preconfigured feature that can detect the language a document is written in, and returns a language code for a wide range of languages, variants, dialects, and some regional/cultural languages.
:::column-end::: :::row-end:::
The Language service also provides several new features as well, which can eithe
:::image type="content" source="media/studio-examples/sentiment-analysis-example.png" alt-text="A screenshot of a sentiment analysis example." lightbox="media/studio-examples/sentiment-analysis-example.png"::: :::column-end::: :::column span="":::
- [Sentiment analysis and opinion mining](./sentiment-opinion-mining/overview.md) are pre-configured features that help you find out what people think of your brand or topic by mining text for clues about positive or negative sentiment, and can associate them with specific aspects of the text.
+ [Sentiment analysis and opinion mining](./sentiment-opinion-mining/overview.md) are preconfigured features that help you find out what people think of your brand or topic by mining text for clues about positive or negative sentiment, and can associate them with specific aspects of the text.
:::column-end::: :::row-end:::
The Language service also provides several new features as well, which can eithe
:::image type="content" source="media/studio-examples/summarization-example.png" alt-text="A screenshot of a summarization example." lightbox="media/studio-examples/summarization-example.png"::: :::column-end::: :::column span="":::
- [Summarization](./summarization/overview.md) is a pre-configured feature that uses extractive text summarization to produce a summary of documents and conversation transcriptions. It extracts sentences that collectively represent the most important or relevant information within the original content.
+ [Summarization](./summarization/overview.md) is a preconfigured feature that uses extractive text summarization to produce a summary of documents and conversation transcriptions. It extracts sentences that collectively represent the most important or relevant information within the original content.
:::column-end::: :::row-end:::
The Language service also provides several new features as well, which can eithe
:::image type="content" source="media/studio-examples/key-phrases.png" alt-text="A screenshot of a key phrase extraction example." lightbox="media/studio-examples/key-phrases.png"::: :::column-end::: :::column span="":::
- [Key phrase extraction](./key-phrase-extraction/overview.md) is a pre-configured feature that evaluates and returns the main concepts in unstructured text, and returns them as a list.
+ [Key phrase extraction](./key-phrase-extraction/overview.md) is a preconfigured feature that evaluates and returns the main concepts in unstructured text, and returns them as a list.
:::column-end::: :::row-end:::
The Language service also provides several new features as well, which can eithe
:::image type="content" source="media/studio-examples/entity-linking.png" alt-text="A screenshot of an entity linking example." lightbox="media/studio-examples/entity-linking.png"::: :::column-end::: :::column span="":::
- [Entity linking](./entity-linking/overview.md) is a pre-configured feature that disambiguates the identity of entities (words or phrases) found in unstructured text and returns links to Wikipedia.
+ [Entity linking](./entity-linking/overview.md) is a preconfigured feature that disambiguates the identity of entities (words or phrases) found in unstructured text and returns links to Wikipedia.
:::column-end::: :::row-end:::
The Language service also provides several new features as well, which can eithe
:::image type="content" source="text-analytics-for-health/media/call-api/health-named-entity-recognition.png" alt-text="A screenshot of a text analytics for health example." lightbox="text-analytics-for-health/media/call-api/health-named-entity-recognition.png"::: :::column-end::: :::column span="":::
- [Text analytics for health](./text-analytics-for-health/overview.md) is a pre-configured feature that extracts and labels relevant medical information from unstructured texts such as doctor's notes, discharge summaries, clinical documents, and electronic health records.
+ [Text analytics for health](./text-analytics-for-health/overview.md) is a preconfigured feature that extracts and labels relevant medical information from unstructured texts such as doctor's notes, discharge summaries, clinical documents, and electronic health records.
:::column-end::: :::row-end:::
The Language service also provides several new features as well, which can eithe
:::column-end::: :::row-end:::
+### Custom text analytics for health
+
+ :::column span="":::
+ :::image type="content" source="text-analytics-for-health/media/call-api/health-named-entity-recognition.png" alt-text="A screenshot of a custom text analytics for health example." lightbox="text-analytics-for-health/media/call-api/health-named-entity-recognition.png":::
+ :::column-end:::
+ :::column span="":::
+ [Custom text analytics for health](./custom-text-analytics-for-health/overview.md) is a custom feature that extracts healthcare-specific entities from unstructured text, using a model you create.
+ :::column-end:::
+ ## Which Language service feature should I use? This section will help you decide which Language service feature you should use for your application:
This section will help you decide which Language service feature you should use
|What do you want to do? |Document format |Your best solution | Is this solution customizable?* |
|--|--|--|--|
| Detect and/or redact sensitive information such as PII and PHI. | Unstructured text, <br> transcribed conversations | [PII detection](./personally-identifiable-information/overview.md) | |
-| Extract categories of information without creating a custom model. | Unstructured text | The [pre-configured NER feature](./named-entity-recognition/overview.md) | |
+| Extract categories of information without creating a custom model. | Unstructured text | The [preconfigured NER feature](./named-entity-recognition/overview.md) | |
| Extract categories of information using a model specific to your data. | Unstructured text | [Custom NER](./custom-named-entity-recognition/overview.md) | ✓ |
|Extract main topics and important phrases. | Unstructured text | [Key phrase extraction](./key-phrase-extraction/overview.md) | |
| Determine the sentiment and opinions expressed in text. | Unstructured text | [Sentiment analysis and opinion mining](./sentiment-opinion-mining/overview.md) | |
| Summarize long chunks of text or conversations. | Unstructured text, <br> transcribed conversations. | [Summarization](./summarization/overview.md) | |
| Disambiguate entities and get links to Wikipedia. | Unstructured text | [Entity linking](./entity-linking/overview.md) | |
| Classify documents into one or more categories. | Unstructured text | [Custom text classification](./custom-text-classification/overview.md) | ✓|
-| Extract medical information from clinical/medical documents. | Unstructured text | [Text analytics for health](./text-analytics-for-health/overview.md) | |
-| Build an conversational application that responds to user inputs. | Unstructured user inputs | [Question answering](./question-answering/overview.md) | ✓ |
+| Extract medical information from clinical/medical documents, without building a model. | Unstructured text | [Text analytics for health](./text-analytics-for-health/overview.md) | |
+| Extract medical information from clinical/medical documents using a model that's trained on your data. | Unstructured text | [Custom text analytics for health](./custom-text-analytics-for-health/overview.md) | |
+| Build a conversational application that responds to user inputs. | Unstructured user inputs | [Question answering](./question-answering/overview.md) | ✓ |
| Detect the language that a text was written in. | Unstructured text | [Language detection](./language-detection/overview.md) | |
| Predict the intention of user inputs and extract information from them. | Unstructured user inputs | [Conversational language understanding](./conversational-language-understanding/overview.md) | ✓ |
| Connect apps from conversational language understanding, LUIS, and question answering. | Unstructured user inputs | [Orchestration workflow](./orchestration-workflow/overview.md) | ✓ |
-\* If a feature is customizable, you can train an AI model using our tools to fit your data specifically. Otherwise a feature is pre-configured, meaning the AI models it uses cannot be changed. You just send your data, and use the feature's output in your applications.
+\* If a feature is customizable, you can train an AI model using our tools to fit your data specifically. Otherwise a feature is preconfigured, meaning the AI models it uses cannot be changed. You just send your data, and use the feature's output in your applications.
## Migrate from Text Analytics, QnA Maker, or Language Understanding (LUIS)
cognitive-services Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/whats-new.md
Previously updated : 03/09/2023 Last updated : 04/14/2023
Azure Cognitive Service for Language is updated on an ongoing basis. To stay up-
## April 2023
+* [Custom Text Analytics for health](./custom-text-analytics-for-health/overview.md) is available in public preview, which enables you to build custom AI models to extract healthcare-specific entities from unstructured text.
* You can now use Azure OpenAI to automatically label or generate data during authoring. Learn more with the links below. * Auto-label your documents in [Custom text classification](./custom-text-classification/how-to/use-autolabeling.md) or [Custom named entity recognition](./custom-named-entity-recognition/how-to/use-autolabeling.md). * Generate suggested utterances in [Conversational language understanding](./conversational-language-understanding/how-to/tag-utterances.md#suggest-utterances-with-azure-openai).
cognitive-services Content Filter https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/openai/concepts/content-filter.md
The table below outlines the various ways content filtering can appear:
As part of your application design you'll need to think carefully on how to maximize the benefits of your applications while minimizing the harms. Consider the following best practices: -- How you want to handle scenarios where your users send in-appropriate or miss-use your application. Check the finish_reason to see if the generation is filtered.
+- How you want to handle scenarios where your users send inappropriate input or misuse your application. Check the finish_reason to see if the generation is filtered.
- If it's critical that the content filters run on your generations, check that there's no `error` object in the `content_filter_result`. - To help with monitoring for possible misuse, applications serving multiple end-users should pass the `user` parameter with each API call. The `user` should be a unique identifier for the end-user. Don't send any actual user identifiable information as the value.
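A minimal sketch of the first check, assuming a parsed completion response body; the `content_filter` finish reason and field names follow the documented response shape, but verify the exact structure for your API version.

```python
def handle_completion(response):
    """Return the generated text, or a fallback if the generation was filtered."""
    choice = response["choices"][0]
    if choice.get("finish_reason") == "content_filter":
        # The content filtering system cut off or suppressed the generation.
        return "Sorry, this response was filtered. Please rephrase your request."
    # Completions return "text"; chat completions return message content.
    return choice.get("text") or choice.get("message", {}).get("content", "")
```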
communication-services Advisor Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/advisor-overview.md
The following SDKs are supported for this feature, along with all their supporte
The following documents may be interesting to you: -- [Logging and diagnostics](./logging-and-diagnostics.md)
+- [Logging and diagnostics](./analytics/enable-logging.md)
+- Access logs for [voice and video](./analytics/logs/voice-and-video-logs.md), [chat](./analytics/logs/chat-logs.md), [email](./analytics/logs/email-logs.md), [network traversal](./analytics/logs/network-traversal-logs.md), [recording](./analytics/logs/recording-logs.md), [SMS](./analytics/logs/sms-logs.md) and [call automation](./analytics/logs/call-automation-logs.md).
- [Metrics](./metrics.md)
communication-services Call Logs Azure Monitor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/analytics/call-logs-azure-monitor.md
- Title: Azure Communication Services - Call Logs -
-description: Learn about Call Summary and Call Diagnostic Logs in Azure Monitor
---- Previously updated : 10/25/2021-----
-# Call Summary and Call Diagnostic Logs
-
-> [!IMPORTANT]
-> The following refers to logs enabled through [Azure Monitor](../../../azure-monitor/overview.md) (see also [FAQ](../../../azure-monitor/faq.yml)). To enable these logs for your Communications Services, see: [Enable logging in Diagnostic Settings](./enable-logging.md)
--
-## Data Concepts
-The following are high level descriptions of data concepts specific to Voice and Video calling within your Communications Services that are important to review in order to understand the meaning of the data captured in the logs.
-
-### Entities and IDs
-
-A *Call*, as it relates to the entities represented in the data, is an abstraction represented by the `correlationId`. `CorrelationId`s are unique per Call, and are time-bound by `callStartTime` and `callDuration`. Every Call is an event that contains data from two or more *Endpoints*, which represent the various human, bot, or server participants in the Call.
-
-A *Participant* (`participantId`) is present only when the Call is a *Group* Call, as it represents the connection between an Endpoint and the server.
-
-An *Endpoint* is the most unique entity, represented by `endpointId`. `EndpointType` tells you whether the Endpoint represents a human user (PSTN, VoIP), a Bot (Bot), or the server that is managing multiple Participants within a Call. When an `endpointType` is `"Server"`, the Endpoint will not be assigned a unique ID. By analyzing endpointType and the number of `endpointIds`, you can determine how many users and other non-human Participants (bots, servers) join a Call. Our native SDKs (Android, iOS) reuse the same `endpointId` for a user across multiple Calls, thus enabling an understanding of experience across sessions. This differs from web-based Endpoints, which will always generate a new `endpointId` for each new Call.
-
-A *Stream* is the most granular entity, as there is one Stream per direction (inbound/outbound) and `mediaType` (e.g. audio, video).
---
-## Data Definitions
-
-### Call Summary Log
-The Call Summary Log contains data to help you identify key properties of all Calls. A different Call Summary Log will be created per each `participantId` (`endpointId` in the case of P2P calls) in the Call.
-
-> [!IMPORTANT]
-> Participant information in the call summary log will vary based on the participant tenant. The SDK and OS version will be redacted if the participant is not within the same tenant (also referred to as cross-tenant) as the ACS resource. Cross-tenant participants are classified as external users invited by a resource tenant to join and collaborate during a call.
-
-| Property | Description |
-|-||
-| time | The timestamp (UTC) of when the log was generated. |
-| operationName | The operation associated with log record. |
-| operationVersion | The api-version associated with the operation, if the `operationName` was performed using an API. If there is no API that corresponds to this operation, the version represents the version of that operation in case the properties associated with the operation change in the future. |
-| category | The log category of the event. Category is the granularity at which you can enable or disable logs on a particular resource. The properties that appear within the `properties` blob of an event are the same within a particular log category and resource type. |
-| correlationId | `correlationId` is the unique ID for a Call. The `correlationId` identifies correlated events from all of the participants and endpoints that connect during a single Call, and it can be used to join data from different logs. If you ever need to open a support case with Microsoft, the `correlationId` will be used to easily identify the Call you're troubleshooting. |
-| identifier | This is the unique ID for the user. The identity can be an Azure Communications Services user, Azure AD user ID, Teams anonymous user ID or Teams bot ID. You can use this ID to correlate user events across different logs. |
-| callStartTime | A timestamp for the start of the call, based on the first attempted connection from any Endpoint. |
-| callDuration | The duration of the Call expressed in seconds, based on the first attempted connection and end of the last connection between two endpoints. |
-| callType | Will contain either `"P2P"` or `"Group"`. A `"P2P"` Call is a direct 1:1 connection between only two, non-server endpoints. A `"Group"` Call is a Call that has more than two endpoints or is created as `"Group"` Call prior to the connection. |
-| teamsThreadId | This ID is only relevant when the Call is organized as a Microsoft Teams meeting, representing the Microsoft Teams - Azure Communication Services interoperability use-case. This ID is exposed in operational logs. You can also get this ID through the Chat APIs. |
-| participantId | This ID is generated to represent the two-way connection between a `"Participant"` Endpoint (`endpointType` = `"Server"`) and the server. When `callType` = `"P2P"`, there is a direct connection between two endpoints, and no `participantId` is generated. |
-| participantStartTime | Timestamp for beginning of the first connection attempt by the participant. |
-| participantDuration | The duration of each Participant connection in seconds, from `participantStartTime` to the timestamp when the connection is ended. |
-| participantEndReason | Contains Calling SDK error codes emitted by the SDK when relevant for each `participantId`. See Calling SDK error codes below. |
-| endpointId | Unique ID that represents each Endpoint connected to the call, where the Endpoint type is defined by `endpointType`. When the value is `null`, the connected entity is the Communication Services server (`endpointType`= `"Server"`). `EndpointId` can sometimes persist for the same user across multiple calls (`correlationId`) for native clients. The number of `endpointId`s will determine the number of Call Summary Logs. A distinct Summary Log is created for each `endpointId`. |
-| endpointType | This value describes the properties of each Endpoint connected to the Call. Can contain `"Server"`, `"VOIP"`, `"PSTN"`, `"BOT"`, or `"Unknown"`. |
-| sdkVersion | Version string for the Communication Services Calling SDK version used by each relevant Endpoint. (Example: `"1.1.00.20212500"`) |
-| osVersion | String that represents the operating system and version of each Endpoint device. |
-| participantTenantId | The ID of the Microsoft tenant associated with the participant. This field is used to guide cross-tenant redaction.
--
-### Call Diagnostic Log
-Call Diagnostic Logs provide important information about the Endpoints and the media transfers for each Participant, as well as measurements that help to understand quality issues.
-For each Endpoint within a Call, a distinct Call Diagnostic Log is created for outbound media streams (audio, video, etc.) between Endpoints.
-In a P2P Call, each log contains data relating to each of the outbound stream(s) associated with each Endpoint. In Group Calls the participantId serves as key identifier to join the related outbound logs into a distinct Participant connection. Please note that Call diagnostic logs will remain intact and will be the same regardless of the participant tenant.
-> Note: In this document P2P and group calls are by default within the same tenant, for all call scenarios that are cross-tenant they will be specified accordingly throughout the document.
-
-| Property | Description |
-||-|
-| operationName | The operation associated with log record. |
-| operationVersion | The `api-version` associated with the operation, if the `operationName` was performed using an API. If there is no API that corresponds to this operation, the version represents the version of that operation in case the properties associated with the operation change in the future. |
-| category | The log category of the event. Category is the granularity at which you can enable or disable logs on a particular resource. The properties that appear within the `properties` blob of an event are the same within a particular log category and resource type. |
-| correlationId | The `correlationId` identifies correlated events from all of the participants and endpoints that connect during a single Call. `correlationId` is the unique ID for a Call. If you ever need to open a support case with Microsoft, the `correlationId` will be used to easily identify the Call you're troubleshooting. |
-| participantId | This ID is generated to represent the two-way connection between a "Participant" Endpoint (`endpointType` = `"Server"`) and the server. When `callType` = `"P2P"`, there is a direct connection between two endpoints, and no `participantId` is generated. |
-| identifier | This is the unique ID for the user. The identity can be an Azure Communications Services user, Azure AD user ID, Teams object ID or Teams bot ID. You can use this ID to correlate user events across different logs. |
-| endpointId | Unique ID that represents each Endpoint connected to the call, with Endpoint type defined by `endpointType`. When the value is `null`, it means that the connected entity is the Communication Services server. `EndpointId` can persist for the same user across multiple calls (`correlationId`) for native clients but will be unique for every Call when the client is a web browser. |
-| endpointType | This value describes the properties of each `endpointId`. Can contain `"Server"`, `"VOIP"`, `"PSTN"`, `"BOT"`, `"Voicemail"`, `"Anonymous"`, or `"Unknown"`. |
-| mediaType | This string value describes the type of media being transmitted between endpoints within each stream. Possible values include `"Audio"`, `"Video"`, `"VBSS"` (Video-Based Screen Sharing), and `"AppSharing"`. |
-| streamId | Non-unique integer which, together with `mediaType`, can be used to uniquely identify streams of the same `participantId`. |
-| transportType | String value which describes the network transport protocol per `participantId`. Can contain `"UDP"`, `"TCP"`, or `"Unrecognized"`. `"Unrecognized"` indicates that the system could not determine if the `transportType` was TCP or UDP. |
-| roundTripTimeAvg | This is the average time it takes to get an IP packet from one Endpoint to another within a `participantDuration`. This network propagation delay is essentially tied to physical distance between the two points and the speed of light, including additional overhead taken by the various routers in between. The latency is measured as one-way or Round-trip Time (RTT). Its value expressed in milliseconds, and an RTT greater than 500ms should be considered as negatively impacting the Call quality. |
-| roundTripTimeMax | The maximum RTT (ms) measured per media stream during a `participantDuration` in a group Call or `callDuration` in a P2P Call. |
-| jitterAvg | This is the average change in delay between successive packets. Azure Communication Services can adapt to some levels of jitter through buffering. It's only when the jitter exceeds the buffering, which is approximately at `jitterAvg` >30 ms, that a negative quality impact is likely occurring. The packets arriving at different speeds cause a speaker's voice to sound robotic. This is measured per media stream over the `participantDuration` in a group Call or `callDuration` in a P2P Call. |
-| jitterMax | This is the maximum jitter value measured between packets per media stream. Bursts in network conditions can cause issues in the audio/video traffic flow. |
-| packetLossRateAvg | This is the average percentage of packets that are lost. Packet loss directly affects audio quality, from small, individual lost packets that have almost no impact to back-to-back burst losses that cause audio to cut out completely. The packets being dropped and not arriving at their intended destination cause gaps in the media, resulting in missed syllables and words, and choppy video and sharing. A packet loss rate of greater than 10% (0.1) should be considered a rate that's likely having a negative quality impact. This is measured per media stream over the `participantDuration` in a group Call or `callDuration` in a P2P Call. |
-| packetLossRateMax | This value represents the maximum packet loss rate (%) per media stream over the `participantDuration` in a group Call or `callDuration` in a P2P Call. Bursts in network conditions can cause issues in the audio/video traffic flow.
-### P2P vs. Group Calls
-
-There are two types of Calls (represented by `callType`): P2P and Group.
-
-**P2P** calls are a connection between only two Endpoints, with no server Endpoint. P2P calls are initiated as a Call between those Endpoints and are not created as a group Call event prior to the connection.
-
- :::image type="content" source="media\call-logs-azure-monitor\p2p-diagram.png" alt-text="Screenshot displays P2P call across 2 endpoints.":::
-
-**Group** Calls include any Call that has more than 2 Endpoints connected. Group Calls will include a server Endpoint, and the connection between each Endpoint and the server. P2P Calls that add an additional Endpoint during the Call cease to be P2P, and they become a Group Call. By viewing the `participantStartTime` and `participantDuration`, the timeline of when each Endpoint joined the Call can be determined.
--
- :::image type="content" source="media\call-logs-azure-monitor\group-call-version-a.png" alt-text="Screenshot displays group call across multiple endpoints.":::
--
-## Log Structure
-
-Two types of logs are created: **Call Summary** logs and **Call Diagnostic** logs.
-
-Call Summary Logs contain basic information about the Call, including all the relevant IDs, timestamps, Endpoint and SDK information. For each participant within a call, a distinct call summary log is created (if someone rejoins a call, they will have the same EndpointId, but a different ParticipantId, so there will be two Call Summary logs for that endpoint).
-
-Call Diagnostic Logs contain information about the Stream as well as a set of metrics that indicate quality of experience measurements. For each Endpoint within a Call (including the server), a distinct Call Diagnostic Log is created for each media stream (audio, video, etc.) between Endpoints. In a P2P Call, each log contains data relating to each of the outbound stream(s) associated with each Endpoint. In a Group Call, each stream associated with `endpointType`= `"Server"` will create a log containing data for the inbound streams, and all other streams will create logs containing data for the outbound streams for all non-sever endpoints. In Group Calls, use the `participantId` as the key to join the related inbound/outbound logs into a distinct Participant connection.
-
-### Example 1: P2P Call
-
-The below diagram represents two endpoints connected directly in a P2P Call. In this example, 2 Call Summary Logs would be created (one per `participantID`) and four Call Diagnostic Logs would be created (one per media stream). Each log will contain data relating to the outbound stream of the `participantID`.
---
-### Example 2: Group Call
-
-The below diagram represents a Group Call example with three `participantIDs`, which means three `participantIDs` (`endpointIds` can potentially appear in multiple Participants, e.g. when rejoining a Call from the same device) and a Server Endpoint. One Call Summary Logs would be created per `participantID`, and four Call Diagnostic Logs would be created relating to each `participantID`, one for each media stream.
-
-
-### Example 3: P2P Call cross-tenant
-The below diagram represents two participants across multiple tenants that are connected directly in a P2P Call. In this example, one Call Summary Logs would be created (one per participant) with redacted OS and SDK versioning and four Call Diagnostic Logs would be created (one per media stream). Each log will contain data relating to the outbound stream of the `participantID`.
-
--
-### Example 4: Group Call cross-tenant
-The below diagram represents a Group Call example with three `participantIds` across multiple tenants. One Call Summary Log would be created per participant with redacted OS and SDK versions, and four Call Diagnostic Logs would be created relating to each `participantId`, one for each media stream.
---
-> [!NOTE]
-> Only outbound diagnostic logs will be supported in this release.
-> Participant and bot identities are treated the same way; as a result, the OS and SDK versions associated with a bot and with a participant are both redacted.
--
-
-## Sample Data
-
-### P2P Call
--
-Shared fields for all logs in the call:
-
-```json
-"time": "2021-07-19T18:46:50.188Z",
-"resourceId": "SUBSCRIPTIONS/XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX/RESOURCEGROUPS/ACS-TEST-RG/PROVIDERS/MICROSOFT.COMMUNICATION/COMMUNICATIONSERVICES/ACS-PROD-CCTS-TESTS",
-"correlationId": "8d1a8374-344d-4502-b54b-ba2d6daaf0ae",
-```
-
-#### Call Summary Logs
-Call Summary Logs have shared operation and category information:
-
-```json
-"operationName": "CallSummary",
-"operationVersion": "1.0",
-"category": "CallSummary",
-
-```
-Call Summary for VoIP user 1
-```json
-"properties": {
- "identifier": "acs:61fddbe3-0003-4066-97bc-6aaf143bbb84_0000000b-4fee-66cf-ac00-343a0d003158",
- "callStartTime": "2021-07-19T17:54:05.113Z",
- "callDuration": 6,
- "callType": "P2P",
- "teamsThreadId": "null",
- "participantId": "null",
- "participantStartTime": "2021-07-19T17:54:06.758Z",
- "participantDuration": "5",
- "participantEndReason": "0",
- "endpointId": "570ea078-74e9-4430-9c67-464ba1fa5859",
- "endpointType": "VoIP",
- "sdkVersion": "1.0.1.0",
- "osVersion": "Windows 10.0.17763 Arch: x64"
-}
-```
-
-Call summary for VoIP user 2
-```json
-"properties": {
- "identifier": "acs:7af14122-9ac7-4b81-80a8-4bf3582b42d0_06f9276d-8efe-4bdd-8c22-ebc5434903f0",
- "callStartTime": "2021-07-19T17:54:05.335Z",
- "callDuration": 6,
- "callType": "P2P",
- "teamsThreadId": "null",
- "participantId": "null",
- "participantStartTime": "2021-07-19T17:54:06.335Z",
- "participantDuration": "5",
- "participantEndReason": "0",
- "endpointId": "a5bd82f9-ac38-4f4a-a0fa-bb3467cdcc64",
- "endpointType": "VoIP",
- "sdkVersion": "1.1.0.0",
- "osVersion": "null"
-}
-```
-Cross-tenant Call Summary Logs: call summary for VoIP user 1
-```json
-"properties": {
- "identifier": "1e4c59e1-r1rr-49bc-893d-990dsds8f9f5",
- "callStartTime": "2022-08-14T06:18:27.010Z",
- "callDuration": 520,
- "callType": "P2P",
- "teamsThreadId": "null",
- "participantId": "null",
- "participantTenantId": "02cbdb3c-155a-4b95-b829-6d56a45787ca",
- "participantStartTime": "2022-08-14T06:18:27.010Z",
- "participantDuration": "520",
- "participantEndReason": "0",
- "endpointId": "02cbdb3c-155a-4d98-b829-aaaaa61d44ea",
- "endpointType": "VoIP",
- "sdkVersion": "Redacted",
- "osVersion": "Redacted"
-}
-```
-Call summary for a PSTN call (**Note:** P2P and Group Call logs have the OS and SDK versions redacted regardless of the participant's or bot's tenant):
-```json
-"properties": {
- "identifier": "b1999c3e-bbbb-4650-9b23-9999bdabab47",
- "callStartTime": "2022-08-07T13:53:12Z",
- "callDuration": 1470,
- "callType": "Group",
- "teamsThreadId": "19:36ec5177126fff000aaa521670c804a3@thread.v2",
- "participantId": " b25cf111-73df-4e0a-a888-640000abe34d",
- "participantStartTime": "2022-08-07T13:56:45Z",
- "participantDuration": 960,
- "participantEndReason": "0",
- "endpointId": "8731d003-6c1e-4808-8159-effff000aaa2",
- "endpointType": "PSTN",
- "sdkVersion": "Redacted",
- "osVersion": "Redacted"
-}
-```
-
-#### Call Diagnostic Logs
-Call diagnostics logs share operation information:
-```json
-"operationName": "CallDiagnostics",
-"operationVersion": "1.0",
-"category": "CallDiagnostics",
-```
-Diagnostic log for audio stream from VoIP Endpoint 1 to VoIP Endpoint 2:
-```json
-"properties": {
- "identifier": "acs:61fddbe3-0003-4066-97bc-6aaf143bbb84_0000000b-4fee-66cf-ac00-343a0d003158",
- "participantId": "null",
- "endpointId": "570ea078-74e9-4430-9c67-464ba1fa5859",
- "endpointType": "VoIP",
- "mediaType": "Audio",
- "streamId": "1000",
- "transportType": "UDP",
- "roundTripTimeAvg": "82",
- "roundTripTimeMax": "88",
- "jitterAvg": "1",
- "jitterMax": "1",
- "packetLossRateAvg": "0",
- "packetLossRateMax": "0"
-}
-```
-Diagnostic log for audio stream from VoIP Endpoint 2 to VoIP Endpoint 1:
-```json
-"properties": {
- "identifier": "acs:7af14122-9ac7-4b81-80a8-4bf3582b42d0_06f9276d-8efe-4bdd-8c22-ebc5434903f0",
- "participantId": "null",
- "endpointId": "a5bd82f9-ac38-4f4a-a0fa-bb3467cdcc64",
- "endpointType": "VoIP",
- "mediaType": "Audio",
- "streamId": "1363841599",
- "transportType": "UDP",
- "roundTripTimeAvg": "78",
- "roundTripTimeMax": "84",
- "jitterAvg": "1",
- "jitterMax": "1",
- "packetLossRateAvg": "0",
- "packetLossRateMax": "0"
-}
-```
-Diagnostic log for video stream from VoIP Endpoint 1 to VoIP Endpoint 2:
-```json
-"properties": {
- "identifier": "acs:61fddbe3-0003-4066-97bc-6aaf143bbb84_0000000b-4fee-66cf-ac00-343a0d003158",
- "participantId": "null",
- "endpointId": "570ea078-74e9-4430-9c67-464ba1fa5859",
- "endpointType": "VoIP",
- "mediaType": "Video",
- "streamId": "2804",
- "transportType": "UDP",
- "roundTripTimeAvg": "103",
- "roundTripTimeMax": "143",
- "jitterAvg": "0",
- "jitterMax": "4",
- "packetLossRateAvg": "3.146336E-05",
- "packetLossRateMax": "0.001769911"
-}
-```
-### Group Call
-
-The following data would be generated in three Call Summary Logs and six Call Diagnostic Logs. Shared fields for all logs in the Call:
-```json
-"time": "2021-07-05T06:30:06.402Z",
-"resourceId": "SUBSCRIPTIONS/XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX/RESOURCEGROUPS/ACS-TEST-RG/PROVIDERS/MICROSOFT.COMMUNICATION/COMMUNICATIONSERVICES/ACS-PROD-CCTS-TESTS",
-"correlationId": "341acde7-8aa5-445b-a3da-2ddadca47d22",
-```
-
-#### Call Summary Logs
-Call Summary Logs have shared operation and category information:
-```json
-"operationName": "CallSummary",
-"operationVersion": "1.0",
-"category": "CallSummary",
-```
-
-Call summary for VoIP Endpoint 1:
-```json
-"properties": {
- "identifier": "acs:1797dbb3-f982-47b0-b98e-6a76084454f1_0000000b-1531-729f-ac00-343a0d00d975",
- "callStartTime": "2021-07-05T06:16:40.240Z",
- "callDuration": 87,
- "callType": "Group",
- "teamsThreadId": "19:meeting_MjZiOTAyN2YtZWU1Yi00ZTZiLT77777OOOOO99999jgxOTkw@thread.v2",
- "participantId": "04cc26f5-a86d-481c-b9f9-7a40be4d6fba",
- "participantStartTime": "2021-07-05T06:16:44.235Z",
- "participantDuration": "82",
- "participantEndReason": "0",
- "endpointId": "5ebd55df-ffff-ffff-89e6-4f3f0453b1a6",
- "endpointType": "VoIP",
- "sdkVersion": "1.0.0.3",
- "osVersion": "Darwin Kernel Version 18.7.0: Mon Nov 9 15:07:15 PST 2020; root:xnu-4903.272.3~3/RELEASE_ARM64_S5L8960X"
-}
-```
-Call summary for VoIP Endpoint 3:
-```json
-"properties": {
- "identifier": "acs:1797dbb3-f982-47b0-b98e-6a76084454f1_0000000b-1531-57c6-ac00-343a0d00d972",
- "callStartTime": "2021-07-05T06:16:40.240Z",
- "callDuration": 87,
- "callType": "Group",
- "teamsThreadId": "19:meeting_MjZiOTAyN2YtZWU1Yi00ZTZiLTk2ZDUtYTZlM2I2ZjgxOTkw@thread.v2",
- "participantId": "1a9cb3d1-7898-4063-b3d2-26c1630ecf03",
- "participantStartTime": "2021-07-05T06:16:40.240Z",
- "participantDuration": "87",
- "participantEndReason": "0",
- "endpointId": "5ebd55df-ffff-ffff-ab89-19ff584890b7",
- "endpointType": "VoIP",
- "sdkVersion": "1.0.0.3",
- "osVersion": "Android 11.0; Manufacturer: Google; Product: redfin; Model: Pixel 5; Hardware: redfin"
-}
-```
-Call summary for PSTN Endpoint 2:
-```json
-"properties": {
- "identifier": "null",
- "callStartTime": "2021-07-05T06:16:40.240Z",
- "callDuration": 87,
- "callType": "Group",
- "teamsThreadId": "19:meeting_MjZiOTAyN2YtZWU1Yi00ZTZiLT77777OOOOO99999jgxOTkw@thread.v2",
- "participantId": "515650f7-8204-4079-ac9d-d8f4bf07b04c",
- "participantStartTime": "2021-07-05T06:17:10.447Z",
- "participantDuration": "52",
- "participantEndReason": "0",
- "endpointId": "46387150-692a-47be-8c9d-1237efe6c48b",
- "endpointType": "PSTN",
- "sdkVersion": "null",
- "osVersion": "null"
-}
-```
-Cross-tenant Call Summary Log:
-```json
-"properties": {
- "identifier": "1e4c59e1-r1rr-49bc-893d-990dsds8f9f5",
- "callStartTime": "2022-08-14T06:18:27.010Z",
- "callDuration": 912,
- "callType": "Group",
- "teamsThreadId": "19:meeting_MjZiOTAyN2YtZWU1Yi00ZTZiLT77777OOOOO99999jgxOTkw@thread.v2",
- "participantId": "aa1dd7da-5922-4bb1-a4fa-e350a111fd9c",
- "participantTenantId": "02cbdb3c-155a-4b95-b829-6d56a45787ca",
- "participantStartTime": "2022-08-14T06:18:27.010Z",
- "participantDuration": "902",
- "participantEndReason": "0",
- "endpointId": "02cbdb3c-155a-4d98-b829-aaaaa61d44ea",
- "endpointType": "VoIP",
- "sdkVersion": "Redacted",
- "osVersion": "Redacted"
-}
-```
-Cross-tenant call summary log with a bot as a participant. Call summary for the bot:
-```json
-
-"properties": {
- "identifier": "b1902c3e-b9f7-4650-9b23-9999bdabab47",
- "callStartTime": "2022-08-09T16:00:32Z",
- "callDuration": 1470,
- "callType": "Group",
- "teamsThreadId": "19:meeting_MmQwZDcwYTQtZ000HWE6NzI4LTg1YTAtNXXXXX99999ZZZZZ@thread.v2",
- "participantId": "66e9d9a7-a434-4663-d91d-fb1ea73ff31e",
- "participantStartTime": "2022-08-09T16:14:18Z",
- "participantDuration": 644,
- "participantEndReason": "0",
- "endpointId": "69680ec2-5ac0-4a3c-9574-eaaa77720b82",
- "endpointType": "Bot",
- "sdkVersion": "Redacted",
- "osVersion": "Redacted"
-}
-```
-#### Call Diagnostic Logs
-Call diagnostics logs share operation information:
-```json
-"operationName": "CallDiagnostics",
-"operationVersion": "1.0",
-"category": "CallDiagnostics",
-```
-Diagnostic log for audio stream from VoIP Endpoint 1 to Server Endpoint:
-```json
-"properties": {
- "identifier": "acs:1797dbb3-f982-47b0-b98e-6a76084454f1_0000000b-1531-729f-ac00-343a0d00d975",
- "participantId": "04cc26f5-a86d-481c-b9f9-7a40be4d6fba",
- "endpointId": "5ebd55df-ffff-ffff-89e6-4f3f0453b1a6",
- "endpointType": "VoIP",
- "mediaType": "Audio",
- "streamId": "14884",
- "transportType": "UDP",
- "roundTripTimeAvg": "46",
- "roundTripTimeMax": "48",
- "jitterAvg": "0",
- "jitterMax": "1",
- "packetLossRateAvg": "0",
- "packetLossRateMax": "0"
-}
-```
-Diagnostic log for audio stream from Server Endpoint to VoIP Endpoint 1:
-```json
-"properties": {
- "identifier": null,
- "participantId": "04cc26f5-a86d-481c-b9f9-7a40be4d6fba",
- "endpointId": null,
- "endpointType": "Server",
- "mediaType": "Audio",
- "streamId": "2001",
- "transportType": "UDP",
- "roundTripTimeAvg": "42",
- "roundTripTimeMax": "44",
- "jitterAvg": "1",
- "jitterMax": "1",
- "packetLossRateAvg": "0",
- "packetLossRateMax": "0"
-}
-```
-Diagnostic log for audio stream from VoIP Endpoint 3 to Server Endpoint:
-```json
-"properties": {
- "identifier": "acs:1797dbb3-f982-47b0-b98e-6a76084454f1_0000000b-1531-57c6-ac00-343a0d00d972",
- "participantId": "1a9cb3d1-7898-4063-b3d2-26c1630ecf03",
- "endpointId": "5ebd55df-ffff-ffff-ab89-19ff584890b7",
- "endpointType": "VoIP",
- "mediaType": "Audio",
- "streamId": "13783",
- "transportType": "UDP",
- "roundTripTimeAvg": "45",
- "roundTripTimeMax": "46",
- "jitterAvg": "1",
- "jitterMax": "2",
- "packetLossRateAvg": "0",
- "packetLossRateMax": "0"
-}
-```
-Diagnostic log for audio stream from Server Endpoint to VoIP Endpoint 3:
-```json
-"properties": {
- "identifier": "null",
- "participantId": "1a9cb3d1-7898-4063-b3d2-26c1630ecf03",
- "endpointId": null,
- "endpointType": "Server"
- "mediaType": "Audio",
- "streamId": "1000",
- "transportType": "UDP",
- "roundTripTimeAvg": "45",
- "roundTripTimeMax": "46",
- "jitterAvg": "1",
- "jitterMax": "4",
- "packetLossRateAvg": "0",
-```
-### Error Codes
-The `participantEndReason` contains a value from the set of Calling SDK error codes. You can refer to these codes to troubleshoot issues during the call, per Endpoint. See [troubleshooting in the Azure Communication Services Calling SDK error codes](../troubleshooting-info.md?tabs=csharp%2cios%2cdotnet#calling-sdk-error-codes).
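-As a hedged example, the following Python sketch uses the `azure-monitor-query` and `azure-identity` packages to pull participants that ended with a non-zero `participantEndReason` from a Log Analytics workspace. The workspace ID is a placeholder, and the `ACSCallSummary` table and column names are assumptions about how these logs surface in Log Analytics; adjust them to match your workspace.
-
-```python
-from datetime import timedelta
-
-from azure.identity import DefaultAzureCredential
-from azure.monitor.query import LogsQueryClient
-
-# Assumes Call Summary logs flow into a Log Analytics workspace as an
-# ACSCallSummary table (an assumption; verify the name in your workspace).
-client = LogsQueryClient(DefaultAzureCredential())
-
-query = """
-ACSCallSummary
-| where ParticipantEndReason != '0'
-| project TimeGenerated, CorrelationId, ParticipantId, EndpointType, ParticipantEndReason
-| order by TimeGenerated desc
-"""
-
-response = client.query_workspace(
-    workspace_id="<your-workspace-id>",  # placeholder
-    query=query,
-    timespan=timedelta(days=1),
-)
-
-for table in response.tables:
-    for row in table.rows:
-        print(dict(zip(table.columns, row)))
-```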
communication-services Enable Logging https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/analytics/enable-logging.md
The following are instructions for configuring your Azure Monitor resource to start creating logs and metrics for your Communications Services.
These instructions apply to the following Communications Services logs: -- [Call Summary and Call Diagnostic logs](call-logs-azure-monitor.md)
+- [Call Summary and Call Diagnostic logs](logs/voice-and-video-logs.md)
## Access Diagnostic Settings

To access Diagnostic Settings for your Communications Services, start by navigating to your Communications Services home page within Azure portal:
They're all viable and flexible options that can adapt to your specific storage
By choosing to send your logs to a [Log Analytics workspace](../../../azure-monitor/logs/log-analytics-overview.md) destination, you enable more features within Azure Monitor generally and for your Communications Services. Log Analytics is a tool within the Azure portal used to create, edit, and run [queries](../../../azure-monitor/logs/queries.md) against data in your Azure Monitor logs and metrics; it also enables [Workbooks](../../../azure-monitor/visualize/workbooks-overview.md), [alerts](../../../azure-monitor/alerts/alerts-log.md), [notification actions](../../../azure-monitor/alerts/action-groups.md), [REST API access](/rest/api/loganalytics/), and many other capabilities.
-For your Communications Services logs, we've provided a useful [default query pack](../../../azure-monitor/logs/query-packs.md#default-query-pack) to provide an initial set of insights to quickly analyze and understand your data. These query packs are described here: [Log Analytics for Communications Services](log-analytics.md).
+For your Communications Services logs, we've provided a useful [default query pack](../../../azure-monitor/logs/query-packs.md#default-query-pack) to provide an initial set of insights to quickly analyze and understand your data. These query packs are described here: [Log Analytics for Communications Services](query-call-logs.md).
communication-services Call Automation Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/analytics/logs/call-automation-logs.md
+
+ Title: Azure Communication Services Call Automation logs
+
+description: Learn about logging for Azure Communication Services Call Automation.
++++ Last updated : 03/21/2023+++++
+# Azure Communication Services Call Automation Logs
+
+Azure Communication Services offers logging capabilities that you can use to monitor and debug your Communication Services solution. These capabilities can be configured through the Azure portal.
+
+> [!IMPORTANT]
+> The following refers to logs enabled through [Azure Monitor](../../../../azure-monitor/overview.md) (see also [FAQ](../../../../azure-monitor/faq.yml)). To enable these logs for your Communications Services, see: [Enable logging in Diagnostic Settings](../enable-logging.md)
+
+## Resource log categories
+
+Communication Services offers the following types of logs that you can enable:
+
+* **Usage logs** - provides usage data associated with each billed service offering
+* **Call Automation operational logs** - provides operational information on Call Automation API requests. These logs can be used to identify failure points, query all requests made in a call (using Correlation ID or Server Call ID), or query all requests made by a specific service application in the call (using Participant ID), as the sketch after this list illustrates.
+
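+As a small illustration of the Correlation ID use case, the following Python sketch rebuilds a per-call request timeline from operational log records. The records and operation names are invented; only the field names come from the schema tables below.
+
+```python
+from collections import defaultdict
+from operator import itemgetter
+
+# Hypothetical Call Automation operational log records (values invented).
+records = [
+    {"TimeGenerated": "2023-03-21T10:00:02Z", "CorrelationId": "call-1",
+     "OperationName": "AnswerCall", "ResultSignature": 200},
+    {"TimeGenerated": "2023-03-21T10:00:09Z", "CorrelationId": "call-1",
+     "OperationName": "Play", "ResultSignature": 202},
+    {"TimeGenerated": "2023-03-21T10:00:01Z", "CorrelationId": "call-1",
+     "OperationName": "CreateCall", "ResultSignature": 201},
+]
+
+# Group requests by call, then sort each call's requests chronologically.
+timeline_by_call = defaultdict(list)
+for record in records:
+    timeline_by_call[record["CorrelationId"]].append(record)
+
+for call_id, events in timeline_by_call.items():
+    for event in sorted(events, key=itemgetter("TimeGenerated")):
+        print(call_id, event["TimeGenerated"], event["OperationName"], event["ResultSignature"])
+```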
+## Usage logs schema
+
+| Property | Description |
+| -- | |
+| `Timestamp` | The timestamp (UTC) of when the log was generated. |
+| `Operation Name` | The operation associated with log record. |
+| `Operation Version` | The `api-version` associated with the operation, if the operationName was performed using an API. If there's no API that corresponds to this operation, the version represents the version of that operation in case the properties associated with the operation change in the future. |
+| `Category` | The log category of the event. Category is the granularity at which you can enable or disable logs on a particular resource. The properties that appear within the properties blob of an event are the same within a particular log category and resource type. |
+| `Correlation ID` | The ID for correlated events. Can be used to identify correlated events between multiple tables. |
+| `Properties` | Other data applicable to various modes of Communication Services. |
+| `Record ID` | The unique ID for a given usage record. |
+| `Usage Type` | The mode of usage (for example, Chat, PSTN, or NAT). |
+| `Unit Type` | The type of unit that usage is based on for a given mode of usage (for example, minutes, megabytes, or messages). |
+| `Quantity` | The number of units used or consumed for this record. |
+
+## Call Automation operational logs
+
+| Property | Description |
+| -- | |
+| `TimeGenerated` | The timestamp (UTC) of when the log was generated. |
+| `OperationName` | The operation associated with log record. |
+| `CorrelationID` | The identifier to identify a call and correlate events for a unique call. |
+| `OperationVersion` | The `api-version` associated with the operation, if the `operationName` was performed using an API. If there's no API that corresponds to this operation, the version represents the version of that operation in case the properties associated with the operation change in the future. |
+| `Category` | The log category of the event. Category is the granularity at which you can enable or disable logs on a particular resource. The properties that appear within the properties blob of an event are the same within a particular log category and resource type. |
+| `ResultType` | The status of the operation. |
+| `ResultSignature` | The sub status of the operation. If this operation corresponds to a REST API call, this field is the HTTP status code of the corresponding REST call. |
+| `DurationMs` | The duration of the operation in milliseconds. |
+| `CallerIpAddress` | The caller IP address, if the operation corresponds to an API call that would come from an entity with a publicly available IP address. |
+| `Level` | The severity level of the event. |
+| `URI` | The URI of the request. |
+| `CallConnectionId` | ID representing the call connection, if available. This ID is different for each participant and is used to identify their connection to the call. |
+| `ServerCallId` | A unique ID to identify a call. |
+| `SDKVersion` | SDK version used for the request. |
+| `SDKType` | The SDK type used for the request. |
+| `ParticipantId` | ID to identify the call participant that made the request. |
+| `SubOperationName` | Used to identify the subtype of a media operation (for example, play or recognize). |
communication-services Chat Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/analytics/logs/chat-logs.md
+
+ Title: Azure Communication Services chat logs
+
+description: Learn about logging for Azure Communication Services chat.
++++ Last updated : 03/21/2023+++++
+# Azure Communication Services chat logs
+
+Azure Communication Services offers logging capabilities that you can use to monitor and debug your Communication Services solution. These capabilities can be configured through the Azure portal.
+
+> [!IMPORTANT]
+> The following refers to logs enabled through [Azure Monitor](../../../../azure-monitor/overview.md) (see also [FAQ](../../../../azure-monitor/faq.yml)). To enable these logs for your Communications Services, see: [Enable logging in Diagnostic Settings](../enable-logging.md)
+
+## Resource log categories
+
+Communication Services offers the following types of logs that you can enable:
+
+* **Usage logs** - provides usage data associated with each billed service offering
+* **Authentication operational logs** - provides basic information related to the Authentication service
+* **Chat operational logs** - provides basic information related to the chat service
+
+## Usage logs schema
+
+| Property | Description |
+| -- | |
+| `Timestamp` | The timestamp (UTC) of when the log was generated. |
+| `Operation Name` | The operation associated with log record. |
+| `Operation Version` | The `api-version` associated with the operation, if the operationName was performed using an API. If there's no API that corresponds to this operation, the version represents the version of that operation in case the properties associated with the operation change in the future. |
+| `Category` | The log category of the event. Category is the granularity at which you can enable or disable logs on a particular resource. The properties that appear within the properties blob of an event are the same within a particular log category and resource type. |
+| `Correlation ID` | The ID for correlated events. Can be used to identify correlated events between multiple tables. |
+| `Properties` | Other data applicable to various modes of Communication Services. |
+| `Record ID` | The unique ID for a given usage record. |
+| `Usage Type` | The mode of usage (for example, Chat, PSTN, or NAT). |
+| `Unit Type` | The type of unit that usage is based on for a given mode of usage (for example, minutes, megabytes, or messages). |
+| `Quantity` | The number of units used or consumed for this record. |
+
+## Authentication operational logs
+
+| Property | Description |
+| -- | |
+| `TimeGenerated` | The timestamp (UTC) of when the log was generated. |
+| `OperationName` | The operation associated with log record. |
+| `CorrelationID` | The ID for correlated events. Can be used to identify correlated events between multiple tables. |
+| `OperationVersion` | The `api-version` associated with the operation, if the `operationName` was performed using an API. If there's no API that corresponds to this operation, the version represents the version of that operation in case the properties associated with the operation change in the future. |
+| `Category` | The log category of the event. Category is the granularity at which you can enable or disable logs on a particular resource. The properties that appear within the properties blob of an event are the same within a particular log category and resource type. |
+| `ResultType` | The status of the operation. |
+| `ResultSignature` | The sub-status of the operation. If this operation corresponds to a REST API call, this field is the HTTP status code of the corresponding REST call. |
+| `DurationMs` | The duration of the operation in milliseconds. |
+| `CallerIpAddress` | The caller IP address, if the operation corresponds to an API call that would come from an entity with a publicly available IP address. |
+| `Level` | The severity level of the event. |
+| `URI` | The URI of the request. |
+| `SdkType` | The SDK type used in the request. |
+| `PlatformType` | The platform type used in the request. |
+| `Identity` | The identity of the Azure Communication Services or Teams user related to the operation. |
+| `Scopes` | The Communication Services scopes present in the access token. |
+
+## Chat operational logs
+
+| Property | Description |
+| -- | |
+| `TimeGenerated` | The timestamp (UTC) of when the log was generated. |
+| `OperationName` | The operation associated with log record. |
+| `CorrelationID` | The ID for correlated events. Can be used to identify correlated events between multiple tables. |
+| `OperationVersion` | The `api-version` associated with the operation, if the `operationName` was performed using an API. If there's no API that corresponds to this operation, the version represents the version of that operation in case the properties associated with the operation change in the future. |
+| `Category` | The log category of the event. Category is the granularity at which you can enable or disable logs on a particular resource. The properties that appear within the properties blob of an event are the same within a particular log category and resource type. |
+| `ResultType` | The status of the operation. |
+| `ResultSignature` | The sub status of the operation. If this operation corresponds to a REST API call, this field is the HTTP status code of the corresponding REST call. |
+| `ResultDescription` | The static text description of this operation. |
+| `DurationMs` | The duration of the operation in milliseconds. |
+| `CallerIpAddress` | The caller IP address, if the operation corresponds to an API call that would come from an entity with a publicly available IP address. |
+| `Level` | The severity level of the event. |
+| `URI` | The URI of the request. |
+| `UserId` | The request sender's user ID. |
+| `ChatThreadId` | The chat thread ID associated with the request. |
+| `ChatMessageId` | The chat message ID associated with the request. |
+| `SdkType` | The SDK type used in the request. |
+| `PlatformType` | The platform type used in the request. |
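+For instance, a short Python sketch can aggregate these records per chat thread. The records and the `SendMessage` operation name are invented; only the field names come from the table above.
+
+```python
+from collections import Counter
+
+# Hypothetical chat operational records, trimmed to the fields used here.
+chat_logs = [
+    {"OperationName": "SendMessage", "ChatThreadId": "thread-1", "UserId": "user-a"},
+    {"OperationName": "SendMessage", "ChatThreadId": "thread-1", "UserId": "user-b"},
+    {"OperationName": "SendMessage", "ChatThreadId": "thread-2", "UserId": "user-a"},
+]
+
+# Count messages per chat thread to spot unusually busy threads.
+messages_per_thread = Counter(log["ChatThreadId"] for log in chat_logs)
+for thread_id, count in messages_per_thread.most_common():
+    print(thread_id, count)
+```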
communication-services Email Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/analytics/logs/email-logs.md
+
+ Title: Azure Communication Services email logs
+
+description: Learn about logging for Azure Communication Services email.
++++ Last updated : 03/21/2023+++++
+# Azure Communication Services email logs
+
+Azure Communication Services offers logging capabilities that you can use to monitor and debug your Communication Services solution. These capabilities can be configured through the Azure portal.
+
+> [!IMPORTANT]
+> The following refers to logs enabled through [Azure Monitor](../../../../azure-monitor/overview.md) (see also [FAQ](../../../../azure-monitor/faq.yml)). To enable these logs for your Communications Services, see: [Enable logging in Diagnostic Settings](../enable-logging.md)
+
+## Resource log categories
+
+Communication Services offers the following types of logs that you can enable:
+
+* **Usage logs** - provides usage data associated with each billed service offering
+* **Email Send Mail operational logs** - provides detailed information related to the Email service send mail requests.
+* **Email Status Update operational logs** - provides message and recipient level delivery status updates related to the Email service send mail requests.
+* **Email User Engagement operational logs** - provides information related to 'open' and 'click' user engagement metrics for messages sent from the Email service.
+
+## Usage logs schema
+
+| Property | Description |
+| -- | |
+| `Timestamp` | The timestamp (UTC) of when the log was generated. |
+| `Operation Name` | The operation associated with log record. |
+| `Operation Version` | The `api-version` associated with the operation, if the operationName was performed using an API. If there's no API that corresponds to this operation, the version represents the version of that operation in case the properties associated with the operation change in the future. |
+| `Category` | The log category of the event. Category is the granularity at which you can enable or disable logs on a particular resource. The properties that appear within the properties blob of an event are the same within a particular log category and resource type. |
+| `Correlation ID` | The ID for correlated events. Can be used to identify correlated events between multiple tables. |
+| `Properties` | Other data applicable to various modes of Communication Services. |
+| `Record ID` | The unique ID for a given usage record. |
+| `Usage Type` | The mode of usage (for example, Chat, PSTN, or NAT). |
+| `Unit Type` | The type of unit that usage is based on for a given mode of usage (for example, minutes, megabytes, or messages). |
+| `Quantity` | The number of units used or consumed for this record. |
+
+## Email Send Mail operational logs
+
+| Property | Description |
+| -- | |
+| `TimeGenerated` | The timestamp (UTC) of when the log was generated. |
+| `Location` | The region where the operation was processed. |
+| `OperationName` | The operation associated with log record. |
+| `OperationVersion` | The `api-version` associated with the operation, if the `operationName` was performed using an API. If there's no API that corresponds to this operation, the version represents the version of that operation in case the properties associated with the operation change in the future. |
+| `Category` | The log category of the event. Category is the granularity at which you can enable or disable logs on a particular resource. The properties that appear within the properties blob of an event are the same within a particular log category and resource type. |
+| `CorrelationID` | The ID for correlated events. Can be used to identify correlated events between multiple tables. For all Email operational logs, the CorrelationId is mapped to the MessageId, which is returned from a successful SendMail request. |
+| `Size` | Represents the total size in megabytes of the email body, subject, headers and attachments. |
+| `ToRecipientsCount` | The total # of unique email addresses on the To line. |
+| `CcRecipientsCount` | The total # of unique email addresses on the Cc line. |
+| `BccRecipientsCount` | The total # of unique email addresses on the Bcc line. |
+| `UniqueRecipientsCount` | This is the deduplicated total recipient count for the To, Cc and Bcc address fields. |
+| `AttachmentsCount` | The total # of attachments. |
+
+## Email Status Update operational logs
+
+| Property | Description |
+| -- | |
+| `TimeGenerated` | The timestamp (UTC) of when the log was generated. |
+| `Location` | The region where the operation was processed. |
+| `OperationName` | The operation associated with log record. |
+| `OperationVersion` | The `api-version` associated with the operation, if the `operationName` was performed using an API. If there's no API that corresponds to this operation, the version represents the version of that operation in case the properties associated with the operation change in the future. |
+| `Category` | The log category of the event. Category is the granularity at which you can enable or disable logs on a particular resource. The properties that appear within the properties blob of an event are the same within a particular log category and resource type. |
+| `CorrelationID` | The ID for correlated events. Can be used to identify correlated events between multiple tables. For all Email operational logs, the CorrelationId is mapped to the MessageId, which is returned from a successful SendMail request. |
+| `RecipientId` | The email address for the targeted recipient. If this is a message-level event, the property will be empty. |
+| `DeliveryStatus` | The terminal status of the message. |
+
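+Because every Email operational log for a given message shares the same `CorrelationID` (the `MessageId` returned by a successful SendMail request), send and status records can be stitched together. A minimal Python sketch with invented records, using only field names from the tables above:
+
+```python
+# Hypothetical records; CorrelationID maps to the MessageId returned by SendMail.
+send_logs = [
+    {"CorrelationID": "msg-1", "UniqueRecipientsCount": 2},
+]
+status_logs = [
+    {"CorrelationID": "msg-1", "RecipientId": "alice@contoso.com", "DeliveryStatus": "Delivered"},
+    {"CorrelationID": "msg-1", "RecipientId": "bob@contoso.com", "DeliveryStatus": "Failed"},
+]
+
+# Index per-recipient delivery outcomes by message ID.
+status_by_message: dict[str, list[dict]] = {}
+for status in status_logs:
+    status_by_message.setdefault(status["CorrelationID"], []).append(status)
+
+# Report per-message delivery against the original send request.
+for send in send_logs:
+    outcomes = status_by_message.get(send["CorrelationID"], [])
+    delivered = sum(1 for s in outcomes if s["DeliveryStatus"] == "Delivered")
+    print(send["CorrelationID"], f"{delivered}/{send['UniqueRecipientsCount']} delivered")
+```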
+## Email User Engagement operational logs
+
+| Property | Description |
+| -- | |
+| `TimeGenerated` | The timestamp (UTC) of when the log was generated. |
+| `Location` | The region where the operation was processed. |
+| `OperationName` | The operation associated with log record. |
+| `OperationVersion` | The `api-version` associated with the operation, if the `operationName` was performed using an API. If there's no API that corresponds to this operation, the version represents the version of that operation in case the properties associated with the operation change in the future. |
+| `Category` | The log category of the event. Category is the granularity at which you can enable or disable logs on a particular resource. The properties that appear within the properties blob of an event are the same within a particular log category and resource type. |
+| `CorrelationID` | The ID for correlated events. Can be used to identify correlated events between multiple tables. For all Email operational logs, the CorrelationId is mapped to the MessageId, which is returned from a successful SendMail request. |
+| `RecipientId` | The email address for the targeted recipient. If this is a message-level event, the property will be empty. |
+| `EngagementType` | The type of user engagement being tracked. |
+| `EngagementContext` | The context represents what the user interacted with. |
+| `UserAgent` | The user agent string from the client. |
communication-services Network Traversal Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/analytics/logs/network-traversal-logs.md
+
+ Title: Azure Communication Services Network Traversal logs
+
+description: Learn about logging for Azure Communication Services Network Traversal.
++++ Last updated : 03/21/2023+++++
+# Azure Communication Services Network Traversal Logs
+
+Azure Communication Services offers logging capabilities that you can use to monitor and debug your Communication Services solution. These capabilities can be configured through the Azure portal.
+
+> [!IMPORTANT]
+> The following refers to logs enabled through [Azure Monitor](../../../../azure-monitor/overview.md) (see also [FAQ](../../../../azure-monitor/faq.yml)). To enable these logs for your Communications Services, see: [Enable logging in Diagnostic Settings](../enable-logging.md)
+
+## Resource log categories
+
+Communication Services offers the following types of logs that you can enable:
+
+* **Usage logs** - provides usage data associated with each billed service offering
+* **Network Traversal operational logs** - provides basic information related to the Network Traversal service
+
+## Usage logs schema
+
+| Property | Description |
+| -- | |
+| `Timestamp` | The timestamp (UTC) of when the log was generated. |
+| `Operation Name` | The operation associated with log record. |
+| `Operation Version` | The `api-version` associated with the operation, if the operationName was performed using an API. If there's no API that corresponds to this operation, the version represents the version of that operation in case the properties associated with the operation change in the future. |
+| `Category` | The log category of the event. Category is the granularity at which you can enable or disable logs on a particular resource. The properties that appear within the properties blob of an event are the same within a particular log category and resource type. |
+| `Correlation ID` | The ID for correlated events. Can be used to identify correlated events between multiple tables. |
+| `Properties` | Other data applicable to various modes of Communication Services. |
+| `Record ID` | The unique ID for a given usage record. |
+| `Usage Type` | The mode of usage (for example, Chat, PSTN, or NAT). |
+| `Unit Type` | The type of unit that usage is based on for a given mode of usage (for example, minutes, megabytes, or messages). |
+| `Quantity` | The number of units used or consumed for this record. |
+
+## Network Traversal operational logs
+
+| Property | Description |
+||--|
+| `TimeGenerated` | The timestamp (UTC) of when the log was generated. |
+| `OperationName` | The operation associated with log record. |
+| `CorrelationId` | The ID for correlated events. Can be used to identify correlated events between multiple tables. |
+| `OperationVersion` | The API-version associated with the operation or version of the operation (if there's no API version). |
+| `Category` | The log category of the event. Logs with the same log category and resource type will have the same properties fields. |
+| `ResultType` | The status of the operation (for example, Succeeded or Failed). |
+| `ResultSignature` | The sub status of the operation. If this operation corresponds to a REST API call, this field is the HTTP status code of the corresponding REST call. |
+| `DurationMs` | The duration of the operation in milliseconds. |
+| `Level` | The severity level of the operation. |
+| `URI` | The URI of the request. |
+| `Identity` | The request sender's identity, if provided. |
+| `SdkType` | The SDK type being used in the request. |
+| `PlatformType` | The platform type being used in the request. |
+| `RouteType` | The routing methodology used to select the ICE server relative to the client (for example, Any or Nearest). |
+
communication-services Recording Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/analytics/logs/recording-logs.md
+
+ Title: Azure Communication Services - Call Recording summary logs
+
+description: Learn about logging for Azure Communication Services Recording.
++++ Last updated : 10/27/2021+++++
+# Azure Communication Services Call Recording Logs
+
+Azure Communication Services offers logging capabilities that you can use to monitor and debug your Communication Services solution. These capabilities can be configured through the Azure portal.
+
+> [!IMPORTANT]
+> The following refers to logs enabled through [Azure Monitor](../../../../azure-monitor/overview.md) (see also [FAQ](../../../../azure-monitor/faq.yml)). To enable these logs for your Communications Services, see: [Enable logging in Diagnostic Settings](../enable-logging.md)
+
+## Resource log categories
+
+Communication Services offers the following types of logs that you can enable:
+
+* **Usage logs** - provides usage data associated with each billed service offering
+* **Call Recording Summary Logs** - provides summary information for call recordings like:
+ - Call duration.
+ - Media content (for example, audio/video, unmixed, or transcription).
+ - Format types used for the recording (for example, WAV or MP4).
+ - The reason why the recording ended.
+
+A recording file is generated at the end of a call or meeting. The recording can be initiated and stopped by either a user or an app (bot). It can also end because of a system failure.
+
+Summary logs are published after a recording is ready to be downloaded. The logs are published within the standard latency time for Azure Monitor resource logs. See [Log data ingestion time in Azure Monitor](../../../../azure-monitor/logs/data-ingestion-time.md#azure-metrics-resource-logs-activity-log).
+
+### Usage logs schema
+
+| Property | Description |
+| -- | |
+| `Timestamp` | The timestamp (UTC) of when the log was generated. |
+| `Operation Name` | The operation associated with log record. |
+| `Operation Version` | The `api-version` associated with the operation, if the operationName was performed using an API. If there's no API that corresponds to this operation, the version represents the version of that operation in case the properties associated with the operation change in the future. |
+| `Category` | The log category of the event. Category is the granularity at which you can enable or disable logs on a particular resource. The properties that appear within the properties blob of an event are the same within a particular log category and resource type. |
+| `Correlation ID` | The ID for correlated events. Can be used to identify correlated events between multiple tables. |
+| `Properties` | Other data applicable to various modes of Communication Services. |
+| `Record ID` | The unique ID for a given usage record. |
+| `Usage Type` | The mode of usage (for example, Chat, PSTN, or NAT). |
+| `Unit Type` | The type of unit that usage is based on for a given mode of usage (for example, minutes, megabytes, or messages). |
+| `Quantity` | The number of units used or consumed for this record. |
+
+### Call Recording summary logs schema
+
+| Property name | Data type | Description |
+|- |--|--|
+|`timeGenerated`|DateTime|Time stamp (UTC) of when the log was generated.|
+|`operationName`|String|Operation associated with a log record.|
+|`correlationId`|String|ID that's used to correlate events between tables.|
+|`recordingID`|String|ID for the recording that this log refers to.|
+|`category`|String|Log category of the event. Logs with the same log category and resource type have the same property fields.|
+|`resultType`|String| Status of the operation.|
+|`level`|String |Severity level of the operation.|
+|`chunkCount`|Integer|Total number of chunks created for the recording.|
+|`channelType`|String|Channel type of the recording, such as mixed or unmixed.|
+|`recordingStartTime`|DateTime|Time that the recording started.|
+|`contentType`|String|Content of the recording, such as audio only, audio/video, or transcription.|
+|`formatType`|String|File format of the recording.|
+|`recordingLength`|Double|Duration of the recording in seconds.|
+|`audioChannelsCount`|Integer|Total number of audio channels in the recording.|
+|`recordingEndReason`|String|Reason why the recording ended.|
+
+### Call Recording and example data
+
+```json
+"operationName": "Call Recording Summary",
+"operationVersion": "1.0",
+"category": "RecordingSummaryPUBLICPREVIEW",
+
+```
+A call can have one recording or many recordings, depending on how many times a recording event is triggered.
+
+For example, if an agent initiates an outbound call on a recorded line and the call drops because of a poor network signal, `callid` will have one `recordingid` value. If the agent calls back the customer, the system generates a new `callid` instance and a new `recordingid` value.
++
+#### Example: Call Recording for one call to one recording
+
+```json
+"properties"
+{
+ "TimeGenerated":"2022-08-17T23:18:26.4332392Z",
+ "OperationName": "RecordingSummary",
+ "Category": "CallRecordingSummary",
+ "CorrelationId": "zzzzzz-cada-4164-be10-0000000000",
+ "ResultType": "Succeeded",
+ "Level": "Informational",
+ "RecordingId": "eyJQbGF0Zm9ybUVuZHBvaW5xxxxxxxxFmNjkwxxxxxxxxxxxxSZXNvdXJjZVNwZWNpZmljSWQiOiJiZGU5YzE3Ni05M2Q3LTRkMWYtYmYwNS0yMTMwZTRiNWNlOTgifQ",
+ "RecordingEndReason": "CallEnded",
+ "RecordingStartTime": "2022-08-16T09:07:54.0000000Z",
+ "RecordingLength": "73872.94",
+ "ChunkCount": 6,
+ "ContentType": "Audio - Video",
+ "ChannelType": "mixed",
+ "FormatType": "mp4",
+ "AudioChannelsCount": 1
+}
+```
+
+If the agent initiates a recording and then stops and restarts the recording multiple times while the call is still in progress, `callid` will have many `recordingid` values, depending on how many times the recording events were triggered.
+
+#### Example: Call Recording for one call to many recordings
+
+```json
+
+{
+ "TimeGenerated": "2022-08-17T23:55:46.6304762Z",
+ "OperationName": "RecordingSummary",
+ "Category": "CallRecordingSummary",
+ "CorrelationId": "xxxxxxx-cf78-4156-zzzz-0000000fa29cc",
+ "ResultType": "Succeeded",
+ "Level": "Informational",
+ "RecordingId": "eyJQbGF0Zm9ybUVuZHBxxxxxxxxxxxxjkwMC05MmEwLTRlZDYtOTcxYS1kYzZlZTkzNjU0NzciLCJSxxxxxNwZWNpZmljSWQiOiI5ZmY2ZTY2Ny04YmQyLTQ0NzAtYmRkYy00ZTVhMmUwYmNmOTYifQ",
+ "RecordingEndReason": "CallEnded",
+ "RecordingStartTime": "2022-08-17T23:55:43.3304762Z",
+ "RecordingLength": 3.34,
+ "ChunkCount": 1,
+ "ContentType": "Audio - Video",
+ "ChannelType": "mixed",
+ "FormatType": "mp4",
+ "AudioChannelsCount": 1
+}
+{
+ "TimeGenerated": "2022-08-17T23:55:56.7664976Z",
+ "OperationName": "RecordingSummary",
+ "Category": "CallRecordingSummary",
+ "CorrelationId": "xxxxxxx-cf78-4156-zzzz-0000000fa29cc",
+ "ResultType": "Succeeded",
+ "Level": "Informational",
+ "RecordingId": "eyJQbGF0Zm9ybUVuxxxxxxiOiI4NDFmNjkwMC1mMjBiLTQzNmQtYTg0Mi1hODY2YzE4M2Y0YTEiLCJSZXNvdXJjZVNwZWNpZmljSWQiOiI2YzRlZDI4NC0wOGQ1LTQxNjEtOTExMy1jYWIxNTc3YjM1ODYifQ",
+ "RecordingEndReason": "CallEnded",
+ "RecordingStartTime": "2022-08-17T23:55:54.0664976Z",
+ "RecordingLength": 2.7,
+ "ChunkCount": 1,
+ "ContentType": "Audio - Video",
+ "ChannelType": "mixed",
+ "FormatType": "mp4",
+ "AudioChannelsCount": 1
+}
+```
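+To make the one-call-to-many-recordings relationship concrete, this Python sketch groups records like the two above by `CorrelationId` and totals the recorded time per call. The records are hypothetical and trimmed to the relevant fields.
+
+```python
+from collections import defaultdict
+
+# Hypothetical RecordingSummary records, trimmed to the fields used here.
+summaries = [
+    {"CorrelationId": "xxxxxxx-cf78", "RecordingId": "rec-a", "RecordingLength": 3.34},
+    {"CorrelationId": "xxxxxxx-cf78", "RecordingId": "rec-b", "RecordingLength": 2.7},
+]
+
+# One call (CorrelationId) can own any number of RecordingIds.
+recordings_by_call = defaultdict(list)
+for summary in summaries:
+    recordings_by_call[summary["CorrelationId"]].append(summary)
+
+for call_id, recs in recordings_by_call.items():
+    total_seconds = sum(r["RecordingLength"] for r in recs)
+    print(f"{call_id}: {len(recs)} recording(s), {total_seconds:.2f}s recorded")
+```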
+
+## Next steps
+
+- Get [Call Recording insights](../insights/call-recording-insights.md)
+- Learn more about [Call Recording](../../voice-video-calling/call-recording.md).
+
communication-services Sms Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/analytics/logs/sms-logs.md
+
+ Title: Azure Communication Services SMS logs
+
+description: Learn about logging for Azure Communication Services SMS.
++++ Last updated : 04/14/2023+++++
+# Azure Communication Services SMS Logs
+
+Azure Communication Services offers logging capabilities that you can use to monitor and debug your Communication Services solution. These capabilities can be configured through the Azure portal.
+
+> [!IMPORTANT]
+> The following refers to logs enabled through [Azure Monitor](../../../../azure-monitor/overview.md) (see also [FAQ](../../../../azure-monitor/faq.yml)). To enable these logs for your Communications Services, see: [Enable logging in Diagnostic Settings](../enable-logging.md)
+
+## Prerequisites
+
+Azure Communication Services provides monitoring and analytics features via [Azure Monitor Logs overview](../../../../azure-monitor/logs/data-platform-logs.md) and [Azure Monitor Metrics](../../../../azure-monitor/essentials/data-platform-metrics.md). Each Azure resource requires its own diagnostic setting, which defines the following criteria:
+ * Categories of logs and metric data sent to the destinations defined in the setting. The available categories will vary for different resource types.
+ * One or more destinations to send the logs. Current destinations include Log Analytics workspace, Event Hubs, and Azure Storage.
+ * A single diagnostic setting can define no more than one of each of the destinations. If you want to send data to more than one of a particular destination type (for example, two different Log Analytics workspaces), then create multiple settings. Each resource can have up to five diagnostic settings.
+
+The following are instructions for configuring your Azure Monitor resource to start creating logs and metrics for your Communications Services. For detailed documentation about using Diagnostic Settings across all Azure resources, see: [Enable logging in Diagnostic Settings](../enable-logging.md)
+
+> [!NOTE]
+> Under the diagnostic setting name, select "SMS Operational" to enable the logs for SMS.
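+For illustration, the following hedged Python sketch uses the `azure-mgmt-monitor` package to create such a diagnostic setting programmatically. All resource IDs are placeholders, and the `SMSOperational` category name is taken from the sample records later in this article; verify the exact category names exposed by your resource.
+
+```python
+from azure.identity import DefaultAzureCredential
+from azure.mgmt.monitor import MonitorManagementClient
+
+# Placeholders throughout; this only sketches the shape of the call.
+client = MonitorManagementClient(DefaultAzureCredential(), "<subscription-id>")
+
+client.diagnostic_settings.create_or_update(
+    resource_uri=(
+        "/subscriptions/<subscription-id>/resourceGroups/<rg>"
+        "/providers/Microsoft.Communication/communicationServices/<acs-resource>"
+    ),
+    name="sms-logs-to-workspace",
+    parameters={
+        # Send the SMS operational category to a Log Analytics workspace.
+        "workspace_id": (
+            "/subscriptions/<subscription-id>/resourceGroups/<rg>"
+            "/providers/Microsoft.OperationalInsights/workspaces/<workspace>"
+        ),
+        "logs": [{"category": "SMSOperational", "enabled": True}],
+    },
+)
+```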
+
+## Overview
+
+SMS operational logs are records of events and activities that provide insights into your SMS API requests. They capture details about the performance and functionality of the SMS primitive, including the status of each message: whether it was successfully delivered, blocked, or failed to send.
+SMS operational logs contain information that helps identify trends and patterns and resolve issues that might be impacting performance, such as failed message deliveries or server issues. The logs include the following events:
+ * Messages sent.
+ * Messages received.
+ * Messages delivered.
+ * Message opt-ins and opt-outs.
+
+## Resource log categories
+
+Communication Services offers the following types of logs that you can enable:
+
+* **Usage logs** - provides usage data associated with each billed service offering
+* **SMS operational logs** - provides basic information related to the SMS service
++
+### Usage logs schema
+
+| Property | Description |
+| -- | |
+| `Timestamp` | The timestamp (UTC) of when the log was generated. |
+| `Operation Name` | The operation associated with log record. |
+| `Operation Version` | The `api-version` associated with the operation, if the operationName was performed using an API. If there's no API that corresponds to this operation, the version represents the version of that operation in case the properties associated with the operation change in the future. |
+| `Category` | The log category of the event. Category is the granularity at which you can enable or disable logs on a particular resource. The properties that appear within the properties blob of an event are the same within a particular log category and resource type. |
+| `Correlation ID` | The ID for correlated events. Can be used to identify correlated events between multiple tables. |
+| `Properties` | Other data applicable to various modes of Communication Services. |
+| `Record ID` | The unique ID for a given usage record. |
+| `Usage Type` | The mode of usage (for example, Chat, PSTN, or NAT). |
+| `Unit Type` | The type of unit that usage is based on for a given mode of usage (for example, minutes, megabytes, or messages). |
+| `Quantity` | The number of units used or consumed for this record. |
+
+### SMS operational logs
+
+| Property | Description |
+| -- | |
+| `TimeGenerated` | The timestamp (UTC) of when the log was generated. |
+| `OperationName` | The operation associated with log record. |
+| `CorrelationID` | The ID for correlated events. Can be used to identify correlated events between multiple tables. |
+| `OperationVersion` | The api-version associated with the operation, if the operationName was performed using an API. If there's no API that corresponds to this operation, the version represents the version of that operation in case the properties associated with the operation change in the future. |
+| `Category` | The log category of the event. Category is the granularity at which you can enable or disable logs on a particular resource. The properties that appear within the properties blob of an event are the same within a particular log category and resource type. |
+| `ResultType` | The status of the operation. |
+| `ResultSignature` | The sub status of the operation. If this operation corresponds to a REST API call, this field is the HTTP status code of the corresponding REST call. |
+| `ResultDescription` | The static text description of this operation. |
+| `DurationMs` | The duration of the operation in milliseconds. |
+| `CallerIpAddress` | The caller IP address, if the operation corresponds to an API call that would come from an entity with a publicly available IP address. |
+| `Level` | The severity level of the event. |
+| `URI` | The URI of the request. |
+| `OutgoingMessageLength` | The number of characters in the outgoing message. |
+| `IncomingMessageLength` | The number of characters in the incoming message. |
+| `DeliveryAttempts` | The number of attempts made to deliver this message. |
+| `PhoneNumber` | The phone number the SMS message is being sent from. |
+| `SdkType` | The SDK type used in the request. |
+| `PlatformType` | The platform type used in the request. |
+| `Method` | The method used in the request. |
+|`NumberType`| The type of number the SMS message is sent from. It can be **LongCodeNumber**, **ShortCodeNumber**, or **DynamicAlphaSenderID**. |
+|`MessageID`| Represents the unique message ID generated for every outgoing and incoming message. It can be found in the SMS API response object. |
+|`Country`| Represents the country that the SMS message was sent to or received from. |
+
+#### Example SMS sent log
+
+```json
+
+ [
+ {
+ "TimeGenerated": "2022-09-26T15:58:30.100Z",
+ "OperationName": "SMSMessagesSent",
+ "CorrelationId": "dDRmubfpNZZZZZnxBtw3Q.0",
+ "OperationVersion": "2020-07-20-preview1",
+ "Category":"SMSOperational",
+ "ResultType": "Succeeded",
+ "ResultSignature": 202,
+ "DurationMs": 130,
+ "CallerIpAddress": "127.0.0.1",
+ "Level": "Informational",
+ "URI": "https://sms-e2e-prod.communication.azure.com/sms?api-version=2020-07-20-preview1",
+ "OutgoingMessageLength": 151,
+ "IncomingMessageLength": 0,
+ "DeliveryAttempts": 0,
+ "PhoneNumber": "+18445791704",
+ "NumberType": "LongCodeNumber",
+ "SdkType": "azsdk-net-Communication.Sms",
+ "PlatformType": "Microsoft Windows 10.0.17763",
+ "Method": "POST",
+ "MessageId": "Outgoing_20230118181300ff00e5c9-876d-4958-86e3-4637484fe5bd_noam",
+ "Country": "US"
+ }
+ ]
+
+```
+
+#### Example SMS delivery report log
+```json
+
+ [
+ {
+ "TimeGenerated": "2022-09-26T15:58:30.200Z",
+ "OperationName": "SMSDeliveryReportsReceived",
+ "CorrelationId": "tl8WpUTESTSTSTccYadXJm.0",
+ "Category":"SMSOperational",
+ "ResultType": "Succeeded",
+ "ResultSignature": 200,
+ "DurationMs": 130,
+ "CallerIpAddress": "127.0.0.1",
+ "Level": "Informational",
+ "URI": "https://global.smsgw.prod.communication.microsoft.com/rtc/telephony/sms/DeliveryReport",
+ "OutgoingMessageLength": 0,
+ "IncomingMessageLength": 0,
+ "DeliveryAttempts": 1,
+ "PhoneNumber": "+18445791704",
+ "NumberType": "LongCodeNumber",
+ "SdkType": "",
+ "PlatformType": "",
+ "Method": "POST",
+ "MessageId": "Outgoing_20230118181300ff00e5c9-876d-4958-86e3-4637484fe5bd_noam",
+ "Country": "US"
+ }
+ ]
+
+```
+
+#### Example SMS received log
+```json
+
+ [
+ {
+ "TimeGenerated": "2022-09-27T15:58:30.200Z",
+ "OperationName": "SMSMessagesReceived",
+ "CorrelationId": "e2KFTSTSTI/5PTx4ZZB.0",
+ "Category":"SMSOperational",
+ "ResultType": "Succeeded",
+ "ResultSignature": 200,
+ "DurationMs": 130,
+ "CallerIpAddress": "127.0.0.1",
+ "Level": "Informational",
+ "URI": "https://global.smsgw.prod.communication.microsoft.com/rtc/telephony/sms/inbound",
+ "OutgoingMessageLength": 0,
+ "IncomingMessageLength": 110,
+ "DeliveryAttempts": 0,
+ "PhoneNumber": "+18445791704",
+ "NumberType": "LongCodeNumber",
+ "SdkType": "",
+ "PlatformType": "",
+ "Method": "POST",
+ "MessageId": "Incoming_2023011818121211c6ee31-63fe-477c-8d51-f800543c6694",
+ "Country": "US"
+ }
+ ]
+
+```
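+The three operation types above share the `MessageId` field, so a sent message can be matched to its delivery report. A minimal Python sketch, assuming records shaped like the examples above:
+
+```python
+# Hypothetical SMS operational records, trimmed to the fields used here.
+sms_logs = [
+    {"OperationName": "SMSMessagesSent", "MessageId": "Outgoing_001"},
+    {"OperationName": "SMSDeliveryReportsReceived", "MessageId": "Outgoing_001"},
+    {"OperationName": "SMSMessagesSent", "MessageId": "Outgoing_002"},
+]
+
+sent = {log["MessageId"] for log in sms_logs if log["OperationName"] == "SMSMessagesSent"}
+reported = {log["MessageId"] for log in sms_logs if log["OperationName"] == "SMSDeliveryReportsReceived"}
+
+# Messages accepted by the SMS API that have no delivery report yet.
+for message_id in sorted(sent - reported):
+    print("awaiting delivery report:", message_id)
+```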
+
communication-services Voice And Video Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/analytics/logs/voice-and-video-logs.md
+
+ Title: Azure Communication Services - voice and video logs
+
+description: Learn about logging for Azure Communication Services Voice and Video.
++++ Last updated : 03/21/2023+++++
+# Azure Communication Services voice and video Logs
+
+Azure Communication Services offers logging capabilities that you can use to monitor and debug your Communication Services solution. These capabilities can be configured through the Azure portal.
+
+> [!IMPORTANT]
+> The following refers to logs enabled through [Azure Monitor](../../../../azure-monitor/overview.md) (see also [FAQ](../../../../azure-monitor/faq.yml)). To enable these logs for your Communications Services, see: [Enable logging in Diagnostic Settings](../enable-logging.md)
+
+## Data Concepts
+The following are high-level descriptions of data concepts specific to Voice and Video calling. These concepts are important to review in order to understand the meaning of the data captured in the logs.
+
+### Entities and IDs
+
+A *Call*, as represented in the data, is an abstraction depicted by the `correlationId`. `CorrelationId`s are unique per Call, and are time-bound by `callStartTime` and `callDuration`. Every Call is an event that contains data from two or more *Endpoints*, which represent the various human, bot, or server participants in the Call.
+
+A *Participant* (`participantId`) is present only when the Call is a *Group* Call, as it represents the connection between an Endpoint and the server.
+
+An *Endpoint* is the most unique entity, represented by `endpointId`. `EndpointType` tells you whether the Endpoint represents a human user (PSTN, VoIP), a Bot (Bot), or the server that is managing multiple Participants within a Call. When an `endpointType` is `"Server"`, the Endpoint is not assigned a unique ID. By analyzing `endpointType` and the number of `endpointId`s, you can determine how many users and other non-human Participants (bots, servers) join a Call. Our native SDKs (Android, iOS) reuse the same `endpointId` for a user across multiple Calls, thus enabling an understanding of experience across sessions. This differs from web-based Endpoints, which always generate a new `endpointId` for each new Call.
+
+A *Stream* is the most granular entity, as there is one Stream per direction (inbound/outbound) and `mediaType` (for example, audio and video).
+
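+To make the hierarchy concrete, the following Python sketch counts the distinct entities behind one `correlationId` from Call Summary rows. The rows are invented; field names come from the Call Summary schema below.
+
+```python
+# Hypothetical Call Summary rows for one Group Call.
+rows = [
+    {"correlationId": "c1", "participantId": "p1", "endpointId": "e1", "endpointType": "VoIP"},
+    {"correlationId": "c1", "participantId": "p2", "endpointId": "e1", "endpointType": "VoIP"},  # same device, rejoined
+    {"correlationId": "c1", "participantId": "p3", "endpointId": "e2", "endpointType": "PSTN"},
+]
+
+participants = {row["participantId"] for row in rows}
+endpoints = {row["endpointId"] for row in rows}
+print(f"{len(participants)} participant connections across {len(endpoints)} endpoints")
+# Endpoint e1 appears under two participantIds: that device left and rejoined.
+```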
+## Data Definitions
+
+### Usage logs schema
+
+| Property | Description |
+| -- | |
+| `Timestamp` | The timestamp (UTC) of when the log was generated. |
+| `Operation Name` | The operation associated with log record. |
+| `Operation Version` | The `api-version` associated with the operation, if the operationName was performed using an API. If there's no API that corresponds to this operation, the version represents the version of that operation in case the properties change in the future. |
+| `Category` | The log category of the event. Category is the granularity at which you can enable or disable logs on a particular resource. The properties that appear within the properties blob of an event are the same within a particular log category and resource type. |
+| `Correlation ID` | The ID for correlated events. Can be used to identify correlated events between multiple tables. |
+| `Properties` | Other data applicable to various modes of Communication Services. |
+| `Record ID` | The unique ID for a given usage record. |
+| `Usage Type` | The mode of usage (for example, Chat, PSTN, or NAT). |
+| `Unit Type` | The type of unit that usage is based on for a given mode of usage (for example, minutes, megabytes, or messages). |
+| `Quantity` | The number of units used or consumed for this record. |
+
+### Call Summary log schema
+The Call Summary Log contains data to help you identify key properties of all Calls. A distinct Call Summary Log is created for each `participantId` (or `endpointId`, in the case of P2P calls) in the Call.
+
+> [!IMPORTANT]
+> Participant information in the call summary log varies based on the participant tenant. The SDK and OS versions are redacted if the participant is not within the same tenant as the ACS resource (also referred to as cross-tenant). Cross-tenant participants are classified as external users invited by a resource tenant to join and collaborate during a call.
+
+| Property | Description |
+|-|-|
+| `time` | The timestamp (UTC) of when the log was generated. |
+| `operationName` | The operation associated with the log record. |
+| `operationVersion` | The api-version associated with the operation, if the `operationName` was performed using an API. If there is no API that corresponds to this operation, the version represents the version of that operation in case the properties associated with the operation change in the future. |
+| `category` | The log category of the event. Category is the granularity at which you can enable or disable logs on a particular resource. The properties that appear within the `properties` blob of an event are the same within a particular log category and resource type. |
+| `correlationId` | `correlationId` is the unique ID for a Call. The `correlationId` identifies correlated events from all of the participants and endpoints that connect during a single Call, and it can be used to join data from different logs. If you ever need to open a support case with Microsoft, the `correlationId` is used to easily identify the Call you're troubleshooting. |
+| `identifier` | This value is the unique ID for the user. The identity can be an Azure Communication Services user ID, Azure AD user ID, Teams anonymous user ID, or Teams bot ID. You can use this ID to correlate user events across different logs. |
+| `callStartTime` | A timestamp for the start of the call, based on the first attempted connection from any Endpoint. |
+| `callDuration` | The duration of the Call expressed in seconds, based on the first attempted connection and end of the last connection between two endpoints. |
+| `callType` | Contains either `"P2P"` or `"Group"`. A `"P2P"` Call is a direct 1:1 connection between only two, non-server endpoints. A `"Group"` Call is a Call that has more than two endpoints or is created as `"Group"` Call prior to the connection. |
+| `teamsThreadId` | This ID is only relevant when the Call is organized as a Microsoft Teams meeting, representing the Microsoft Teams and Azure Communication Services interoperability use case. This ID is exposed in operational logs. You can also get this ID through the Chat APIs. |
+| `participantId` | This ID is generated to represent the two-way connection between a `"Participant"` Endpoint (`endpointType` = `"Server"`) and the server. When `callType` = `"P2P"`, there is a direct connection between two endpoints, and no `participantId` is generated. |
+| `participantStartTime` | Timestamp for beginning of the first connection attempt by the participant. |
+| `participantDuration` | The duration of each Participant connection in seconds, from `participantStartTime` to the timestamp when the connection is ended. |
+| `participantEndReason` | Contains Calling SDK error codes emitted by the SDK when relevant for each `participantId`. See Calling SDK error codes. |
+| `endpointId` | Unique ID that represents each Endpoint connected to the call, where the Endpoint type is defined by `endpointType`. When the value is `null`, the connected entity is the Communication Services server (`endpointType` = `"Server"`). `EndpointId` can sometimes persist for the same user across multiple calls (`correlationId`) for native clients. The number of `endpointId`s determines the number of Call Summary Logs. A distinct Summary Log is created for each `endpointId`. |
+| `endpointType` | This value describes the properties of each Endpoint connected to the Call. Can contain `"Server"`, `"VOIP"`, `"PSTN"`, `"BOT"`, or `"Unknown"`. |
+| `sdkVersion` | Version string for the Communication Services Calling SDK version used by each relevant Endpoint. (Example: `"1.1.00.20212500"`) |
+| `osVersion` | String that represents the operating system and version of each Endpoint device. |
+| `participantTenantId` | The ID of the Microsoft tenant associated with the participant. This field is used to guide cross-tenant redaction. |
++
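+For example, since every `participantId` (or `endpointId`, for P2P calls) in a Call produces its own summary record, you can list all of the legs of a single Call by filtering on its `correlationId`. The following is a minimal sketch assuming a Log Analytics workspace with the `ACSCallSummary` table; the `correlationId` value comes from the sample data later in this article:
+
+```
+// List every Call Summary record (one per participant/endpoint)
+// for a single call; replace the correlationId with your own.
+ACSCallSummary
+| where CorrelationId == "8d1a8374-344d-4502-b54b-ba2d6daaf0ae"
+| project TimeGenerated, ParticipantId, EndpointId, EndpointType, SdkVersion, OsVersion
+```
+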
+### Call Diagnostic log schema
+Call Diagnostic Logs provide important information about the Endpoints and the media transfers for each Participant, as well as measurements that help you understand quality issues.
+For each Endpoint within a Call, a distinct Call Diagnostic Log is created for outbound media streams (audio, video, and so on) between Endpoints.
+In a P2P Call, each log contains data relating to each of the outbound streams associated with each Endpoint. In Group Calls, the `participantId` serves as the key identifier for joining the related outbound logs into a distinct Participant connection. Note that Call Diagnostic Logs remain intact and are the same regardless of the participant tenant.
+
+> [!NOTE]
+> In this document, P2P and group calls are within the same tenant by default. All call scenarios that are cross-tenant are specified accordingly throughout the document.
+
+| Property | Description |
+||-|
+| `operationName` | The operation associated with the log record. |
+| `operationVersion` | The `api-version` associated with the operation, if the `operationName` was performed using an API. If there is no API that corresponds to this operation, the version represents the version of that operation in case the properties associated with the operation change in the future. |
+| `category` | The log category of the event. Category is the granularity at which you can enable or disable logs on a particular resource. The properties that appear within the `properties` blob of an event are the same within a particular log category and resource type. |
+| `correlationId` | The `correlationId` identifies correlated events from all of the participants and endpoints that connect during a single Call. `correlationId` is the unique ID for a Call. If you ever need to open a support case with Microsoft, the `correlationId` can be used to easily identify the Call you're troubleshooting. |
+| `participantId` | This ID is generated to represent the two-way connection between a "Participant" Endpoint (`endpointType` = `"Server"`) and the server. When `callType` = `"P2P"`, there is a direct connection between two endpoints, and no `participantId` is generated. |
+| `identifier` | This value is the unique ID for the user. The identity can be an Azure Communication Services user ID, Azure AD user ID, Teams object ID, or Teams bot ID. You can use this ID to correlate user events across different logs. |
+| `endpointId` | Unique ID that represents each Endpoint connected to the call, with the Endpoint type defined by `endpointType`. When the value is `null`, it means that the connected entity is the Communication Services server. `EndpointId` can persist for the same user across multiple calls (`correlationId`) for native clients, but is unique for every Call when the client is a web browser. |
+| `endpointType` | This value describes the properties of each `endpointId`. Can contain `"Server"`, `"VOIP"`, `"PSTN"`, `"BOT"`, `"Voicemail"`, `"Anonymous"`, or `"Unknown"`. |
+| `mediaType` | This string value describes the type of media being transmitted between endpoints within each stream. Possible values include `"Audio"`, `"Video"`, `"VBSS"` (Video-Based Screen Sharing), and `"AppSharing"`. |
+| `streamId` | Non-unique integer which, together with `mediaType`, can be used to uniquely identify streams of the same `participantId`.|
+| `transportType` | String value that describes the network transport protocol per `participantId`. Can contain `"UDP"`, `"TCP"`, or `"Unrecognized"`. `"Unrecognized"` indicates that the system could not determine whether the `transportType` was TCP or UDP. |
+| `roundTripTimeAvg` | This metric is the average time it takes to get an IP packet from one Endpoint to another within a `participantDuration`. This network propagation delay is related to the physical distance between the two points, the speed of light, and any overhead taken by the various routers in between. The latency is measured as one-way or round-trip time (RTT). Its value is expressed in milliseconds, and an RTT greater than 500 ms should be considered as negatively impacting the Call quality. |
+| `roundTripTimeMax` | The maximum RTT (ms) measured per media stream during a `participantDuration` in a group Call or `callDuration` in a P2P Call. |
+| `jitterAvg` | This metric is the average change in delay between successive packets. Azure Communication Services can adapt to some levels of jitter through buffering. It's only when the jitter exceeds the buffering, which is approximately at `jitterAvg` >30 ms, that a negative quality impact is likely occurring. The packets arriving at different speeds cause a speaker's voice to sound robotic. This metric is measured per media stream over the `participantDuration` in a group Call or `callDuration` in a P2P Call. |
+| `jitterMax` | This metric is the maximum jitter value measured between packets per media stream. Bursts in network conditions can cause issues in the audio/video traffic flow. |
+| `packetLossRateAvg` | This metric is the average percentage of packets that are lost. Packet loss directly affects audio quality, from small, individual lost packets that have almost no impact to back-to-back burst losses that cause audio to cut out completely. The packets being dropped and not arriving at their intended destination cause gaps in the media, resulting in missed syllables and words, and choppy video and sharing. A packet loss rate of greater than 10% (0.1) should be considered a rate that's likely having a negative quality impact. This metric is measured per media stream over the `participantDuration` in a group Call or `callDuration` in a P2P Call. |
+| `packetLossRateMax` | This value represents the maximum packet loss rate (%) per media stream over the `participantDuration` in a group Call or `callDuration` in a P2P Call. Bursts in network conditions can cause issues in the audio/video traffic flow. |
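+
+As a worked example of the thresholds called out above (average RTT above 500 ms, average jitter above roughly 30 ms, average packet loss above 10%), the following sketch flags each diagnostic record that crosses any of them. It assumes the diagnostic records land in the `ACSCallDiagnostics` Log Analytics table:
+
+```
+// Flag streams that cross the quality thresholds described above.
+ACSCallDiagnostics
+| extend poorStream = RoundTripTimeAvg > 500 or JitterAvg > 30 or PacketLossRateAvg > 0.1
+| summarize streams=count() by poorStream
+```
+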
+### P2P vs. Group Calls
+
+There are two types of Calls (represented by `callType`): P2P and Group.
+
+**P2P** calls are a connection between only two Endpoints, with no server Endpoint. P2P calls are initiated as a Call between those Endpoints and are not created as a group Call event prior to the connection.
+
+ :::image type="content" source="../media/call-logs-azure-monitor/p2p-diagram.png" alt-text="Screenshot displays P2P call across 2 endpoints.":::
+
+**Group** Calls include any Call that has more than two Endpoints connected. Group Calls include a server Endpoint, and the connection between each Endpoint and the server. P2P Calls that add another Endpoint during the Call cease to be P2P and become a Group Call. You can determine the timeline of when each Endpoint joined the call by using the `participantStartTime` and `participantDuration` metrics, as the sketch after the diagram shows.
++
+ :::image type="content" source="../media/call-logs-azure-monitor/group-call-version-a.png" alt-text="Screenshot displays group call across multiple endpoints.":::
++
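+As a sketch of that timeline idea, the following query lists when each connection in a single Group Call started and how long it lasted. It assumes Log Analytics column names that mirror the schema above (`ParticipantStartTime`, `ParticipantDuration`); the `correlationId` comes from the group-call sample later in this article:
+
+```
+// When did each participant join the call, and for how long?
+ACSCallSummary
+| where CorrelationId == "341acde7-8aa5-445b-a3da-2ddadca47d22"
+| project ParticipantId, ParticipantStartTime, ParticipantDuration, EndpointType
+| order by ParticipantStartTime asc
+```
+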
+## Log Structure
+
+Two types of logs are created: **Call Summary** logs and **Call Diagnostic** logs.
+
+Call Summary Logs contain basic information about the Call, including all the relevant IDs, timestamps, and Endpoint and SDK information. For each participant within a call, a distinct Call Summary Log is created (if someone rejoins a call, they have the same `endpointId` but a different `participantId`, so there can be two Call Summary Logs for that Endpoint).
+
+Call Diagnostic Logs contain information about the Stream as well as a set of metrics that indicate quality of experience measurements. For each Endpoint within a Call (including the server), a distinct Call Diagnostic Log is created for each media stream (audio, video, etc.) between Endpoints. In a P2P Call, each log contains data relating to each of the outbound streams associated with each Endpoint. In a Group Call, each stream associated with `endpointType` = `"Server"` creates a log containing data for the inbound streams, and all other streams create logs containing data for the outbound streams for all non-server endpoints. In Group Calls, use the `participantId` as the key to join the related inbound/outbound logs into a distinct Participant connection, as the sketch below illustrates.
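+
+A minimal sketch of that join, assuming both tables live in the same Log Analytics workspace:
+
+```
+// Pair each Group Call participant's summary with the quality
+// metrics of their streams, keyed on CorrelationId + ParticipantId.
+ACSCallSummary
+| where CallType == "Group"
+| join kind=inner (ACSCallDiagnostics) on CorrelationId, ParticipantId
+| project CorrelationId, ParticipantId, EndpointType, MediaType, RoundTripTimeAvg, JitterAvg, PacketLossRateAvg
+```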
+
+### Example 1: P2P Call
+
+The following diagram represents two endpoints connected directly in a P2P Call. In this example, two Call Summary Logs would be created (one per Endpoint, since no `participantId` is generated for P2P Calls) and four Call Diagnostic Logs would be created (one per media stream). Each log contains data relating to the outbound stream of its Endpoint.
+++
+### Example 2: Group Call
+
+The following diagram represents a Group Call example with three `participantId`s (`endpointId`s can potentially appear in multiple Participants, for example when rejoining a Call from the same device) and a Server Endpoint. One Call Summary Log would be created per `participantId`, and four Call Diagnostic Logs would be created relating to each `participantId`, one for each media stream.
+
+
+### Example 3: P2P Call cross-tenant
+The following diagram represents two participants across multiple tenants that are connected directly in a P2P Call. In this example, two Call Summary Logs would be created (one per participant) with redacted OS and SDK versions, and four Call Diagnostic Logs would be created (one per media stream). Each log contains data relating to the outbound stream of its Endpoint.
+
++
+### Example 4: Group Call cross-tenant
+The following diagram represents a Group Call example with three `participantId`s across multiple tenants. One Call Summary Log would be created per participant with redacted OS and SDK versions, and four Call Diagnostic Logs would be created relating to each `participantId`, one for each media stream.
+++
+> [!NOTE]
+> Only outbound diagnostic logs are supported in this release.
+> Participant and bot identities are treated the same way; as a result, the OS and SDK versions associated with a bot and a participant can both be redacted.
+
+## Sample Data
+
+### P2P Call
+
+Shared fields for all logs in the call:
+
+```json
+"time": "2021-07-19T18:46:50.188Z",
+"resourceId": "SUBSCRIPTIONS/XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX/RESOURCEGROUPS/ACS-TEST-RG/PROVIDERS/MICROSOFT.COMMUNICATION/COMMUNICATIONSERVICES/ACS-PROD-CCTS-TESTS",
+"correlationId": "8d1a8374-344d-4502-b54b-ba2d6daaf0ae",
+```
+
+#### Call Summary Logs
+Call Summary Logs have shared operation and category information:
+
+```json
+"operationName": "CallSummary",
+"operationVersion": "1.0",
+"category": "CallSummary",
+
+```
+Call summary for VoIP user 1:
+```json
+"properties": {
+ "identifier": "acs:61fddbe3-0003-4066-97bc-6aaf143bbb84_0000000b-4fee-66cf-ac00-343a0d003158",
+ "callStartTime": "2021-07-19T17:54:05.113Z",
+ "callDuration": 6,
+ "callType": "P2P",
+ "teamsThreadId": "null",
+ "participantId": "null",
+ "participantStartTime": "2021-07-19T17:54:06.758Z",
+ "participantDuration": "5",
+ "participantEndReason": "0",
+ "endpointId": "570ea078-74e9-4430-9c67-464ba1fa5859",
+ "endpointType": "VoIP",
+ "sdkVersion": "1.0.1.0",
+ "osVersion": "Windows 10.0.17763 Arch: x64"
+}
+```
+
+Call summary for VoIP user 2:
+```json
+"properties": {
+ "identifier": "acs:7af14122-9ac7-4b81-80a8-4bf3582b42d0_06f9276d-8efe-4bdd-8c22-ebc5434903f0",
+ "callStartTime": "2021-07-19T17:54:05.335Z",
+ "callDuration": 6,
+ "callType": "P2P",
+ "teamsThreadId": "null",
+ "participantId": "null",
+ "participantStartTime": "2021-07-19T17:54:06.335Z",
+ "participantDuration": "5",
+ "participantEndReason": "0",
+ "endpointId": "a5bd82f9-ac38-4f4a-a0fa-bb3467cdcc64",
+ "endpointType": "VoIP",
+ "sdkVersion": "1.1.0.0",
+ "osVersion": "null"
+}
+```
+Cross-tenant Call Summary Log for VoIP user 1:
+```json
+"properties": {
+ "identifier": "1e4c59e1-r1rr-49bc-893d-990dsds8f9f5",
+ "callStartTime": "2022-08-14T06:18:27.010Z",
+ "callDuration": 520,
+ "callType": "P2P",
+ "teamsThreadId": "null",
+ "participantId": "null",
+ "participantTenantId": "02cbdb3c-155a-4b95-b829-6d56a45787ca",
+ "participantStartTime": "2022-08-14T06:18:27.010Z",
+ "participantDuration": "520",
+ "participantEndReason": "0",
+ "endpointId": "02cbdb3c-155a-4d98-b829-aaaaa61d44ea",
+ "endpointType": "VoIP",
+ "sdkVersion": "Redacted",
+ "osVersion": "Redacted"
+}
+```
+Call summary for PSTN call:
+
+> [!NOTE]
+> Emitted P2P or group call logs have the OS and SDK versions redacted, regardless of whether the tenant is the participant's or the bot's.
+
+```json
+"properties": {
+ "identifier": "b1999c3e-bbbb-4650-9b23-9999bdabab47",
+ "callStartTime": "2022-08-07T13:53:12Z",
+ "callDuration": 1470,
+ "callType": "Group",
+ "teamsThreadId": "19:36ec5177126fff000aaa521670c804a3@thread.v2",
+ "participantId": " b25cf111-73df-4e0a-a888-640000abe34d",
+ "participantStartTime": "2022-08-07T13:56:45Z",
+ "participantDuration": 960,
+ "participantEndReason": "0",
+ "endpointId": "8731d003-6c1e-4808-8159-effff000aaa2",
+ "endpointType": "PSTN",
+ "sdkVersion": "Redacted",
+ "osVersion": "Redacted"
+}
+```
+
+#### Call Diagnostic Logs
+Call diagnostics logs share operation information:
+```json
+"operationName": "CallDiagnostics",
+"operationVersion": "1.0",
+"category": "CallDiagnostics",
+```
+Diagnostic log for audio stream from VoIP Endpoint 1 to VoIP Endpoint 2:
+```json
+"properties": {
+ "identifier": "acs:61fddbe3-0003-4066-97bc-6aaf143bbb84_0000000b-4fee-66cf-ac00-343a0d003158",
+ "participantId": "null",
+ "endpointId": "570ea078-74e9-4430-9c67-464ba1fa5859",
+ "endpointType": "VoIP",
+ "mediaType": "Audio",
+ "streamId": "1000",
+ "transportType": "UDP",
+ "roundTripTimeAvg": "82",
+ "roundTripTimeMax": "88",
+ "jitterAvg": "1",
+ "jitterMax": "1",
+ "packetLossRateAvg": "0",
+ "packetLossRateMax": "0"
+}
+```
+Diagnostic log for audio stream from VoIP Endpoint 2 to VoIP Endpoint 1:
+```json
+"properties": {
+ "identifier": "acs:7af14122-9ac7-4b81-80a8-4bf3582b42d0_06f9276d-8efe-4bdd-8c22-ebc5434903f0",
+ "participantId": "null",
+ "endpointId": "a5bd82f9-ac38-4f4a-a0fa-bb3467cdcc64",
+ "endpointType": "VoIP",
+ "mediaType": "Audio",
+ "streamId": "1363841599",
+ "transportType": "UDP",
+ "roundTripTimeAvg": "78",
+ "roundTripTimeMax": "84",
+ "jitterAvg": "1",
+ "jitterMax": "1",
+ "packetLossRateAvg": "0",
+ "packetLossRateMax": "0"
+}
+```
+Diagnostic log for video stream from VoIP Endpoint 1 to VoIP Endpoint 2:
+```json
+"properties": {
+ "identifier": "acs:61fddbe3-0003-4066-97bc-6aaf143bbb84_0000000b-4fee-66cf-ac00-343a0d003158",
+ "participantId": "null",
+ "endpointId": "570ea078-74e9-4430-9c67-464ba1fa5859",
+ "endpointType": "VoIP",
+ "mediaType": "Video",
+ "streamId": "2804",
+ "transportType": "UDP",
+ "roundTripTimeAvg": "103",
+ "roundTripTimeMax": "143",
+ "jitterAvg": "0",
+ "jitterMax": "4",
+ "packetLossRateAvg": "3.146336E-05",
+ "packetLossRateMax": "0.001769911"
+}
+```
+### Group Call
+
+The data would be generated in three Call Summary Logs and six Call Diagnostic Logs. Shared fields for all logs in the Call:
+```json
+"time": "2021-07-05T06:30:06.402Z",
+"resourceId": "SUBSCRIPTIONS/XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX/RESOURCEGROUPS/ACS-TEST-RG/PROVIDERS/MICROSOFT.COMMUNICATION/COMMUNICATIONSERVICES/ACS-PROD-CCTS-TESTS",
+"correlationId": "341acde7-8aa5-445b-a3da-2ddadca47d22",
+```
+
+#### Call Summary Logs
+Call Summary Logs have shared operation and category information:
+```json
+"operationName": "CallSummary",
+"operationVersion": "1.0",
+"category": "CallSummary",
+```
+
+Call summary for VoIP Endpoint 1:
+```json
+"properties": {
+ "identifier": "acs:1797dbb3-f982-47b0-b98e-6a76084454f1_0000000b-1531-729f-ac00-343a0d00d975",
+ "callStartTime": "2021-07-05T06:16:40.240Z",
+ "callDuration": 87,
+ "callType": "Group",
+ "teamsThreadId": "19:meeting_MjZiOTAyN2YtZWU1Yi00ZTZiLT77777OOOOO99999jgxOTkw@thread.v2",
+ "participantId": "04cc26f5-a86d-481c-b9f9-7a40be4d6fba",
+ "participantStartTime": "2021-07-05T06:16:44.235Z",
+ "participantDuration": "82",
+ "participantEndReason": "0",
+ "endpointId": "5ebd55df-ffff-ffff-89e6-4f3f0453b1a6",
+ "endpointType": "VoIP",
+ "sdkVersion": "1.0.0.3",
+ "osVersion": "Darwin Kernel Version 18.7.0: Mon Nov 9 15:07:15 PST 2020; root:xnu-4903.272.3~3/RELEASE_ARM64_S5L8960X"
+}
+```
+Call summary for VoIP Endpoint 3:
+```json
+"properties": {
+ "identifier": "acs:1797dbb3-f982-47b0-b98e-6a76084454f1_0000000b-1531-57c6-ac00-343a0d00d972",
+ "callStartTime": "2021-07-05T06:16:40.240Z",
+ "callDuration": 87,
+ "callType": "Group",
+ "teamsThreadId": "19:meeting_MjZiOTAyN2YtZWU1Yi00ZTZiLTk2ZDUtYTZlM2I2ZjgxOTkw@thread.v2",
+ "participantId": "1a9cb3d1-7898-4063-b3d2-26c1630ecf03",
+ "participantStartTime": "2021-07-05T06:16:40.240Z",
+ "participantDuration": "87",
+ "participantEndReason": "0",
+ "endpointId": "5ebd55df-ffff-ffff-ab89-19ff584890b7",
+ "endpointType": "VoIP",
+ "sdkVersion": "1.0.0.3",
+ "osVersion": "Android 11.0; Manufacturer: Google; Product: redfin; Model: Pixel 5; Hardware: redfin"
+}
+```
+Call summary for PSTN Endpoint 2:
+```json
+"properties": {
+ "identifier": "null",
+ "callStartTime": "2021-07-05T06:16:40.240Z",
+ "callDuration": 87,
+ "callType": "Group",
+ "teamsThreadId": "19:meeting_MjZiOTAyN2YtZWU1Yi00ZTZiLT77777OOOOO99999jgxOTkw@thread.v2",
+ "participantId": "515650f7-8204-4079-ac9d-d8f4bf07b04c",
+ "participantStartTime": "2021-07-05T06:17:10.447Z",
+ "participantDuration": "52",
+ "participantEndReason": "0",
+ "endpointId": "46387150-692a-47be-8c9d-1237efe6c48b",
+ "endpointType": "PSTN",
+ "sdkVersion": "null",
+ "osVersion": "null"
+}
+```
+Cross-tenant Call Summary Log:
+```json
+"properties": {
+ "identifier": "1e4c59e1-r1rr-49bc-893d-990dsds8f9f5",
+ "callStartTime": "2022-08-14T06:18:27.010Z",
+ "callDuration": 912,
+ "callType": "Group",
+ "teamsThreadId": "19:meeting_MjZiOTAyN2YtZWU1Yi00ZTZiLT77777OOOOO99999jgxOTkw@thread.v2",
+ "participantId": "aa1dd7da-5922-4bb1-a4fa-e350a111fd9c",
+ "participantTenantId": "02cbdb3c-155a-4b95-b829-6d56a45787ca",
+ "participantStartTime": "2022-08-14T06:18:27.010Z",
+ "participantDuration": "902",
+ "participantEndReason": "0",
+ "endpointId": "02cbdb3c-155a-4d98-b829-aaaaa61d44ea",
+ "endpointType": "VoIP",
+ "sdkVersion": "Redacted",
+ "osVersion": "Redacted"
+}
+```
+Cross-tenant call summary log with a bot as a participant:
+```json
+
+"properties": {
+ "identifier": "b1902c3e-b9f7-4650-9b23-9999bdabab47",
+ "callStartTime": "2022-08-09T16:00:32Z",
+ "callDuration": 1470,
+ "callType": "Group",
+ "teamsThreadId": "19:meeting_MmQwZDcwYTQtZ000HWE6NzI4LTg1YTAtNXXXXX99999ZZZZZ@thread.v2",
+ "participantId": "66e9d9a7-a434-4663-d91d-fb1ea73ff31e",
+ "participantStartTime": "2022-08-09T16:14:18Z",
+ "participantDuration": 644,
+ "participantEndReason": "0",
+ "endpointId": "69680ec2-5ac0-4a3c-9574-eaaa77720b82",
+ "endpointType": "Bot",
+ "sdkVersion": "Redacted",
+ "osVersion": "Redacted"
+}
+```
+#### Call Diagnostic Logs
+Call diagnostics logs share operation information:
+```json
+"operationName": "CallDiagnostics",
+"operationVersion": "1.0",
+"category": "CallDiagnostics",
+```
+Diagnostic log for audio stream from VoIP Endpoint 1 to Server Endpoint:
+```json
+"properties": {
+ "identifier": "acs:1797dbb3-f982-47b0-b98e-6a76084454f1_0000000b-1531-729f-ac00-343a0d00d975",
+ "participantId": "04cc26f5-a86d-481c-b9f9-7a40be4d6fba",
+ "endpointId": "5ebd55df-ffff-ffff-89e6-4f3f0453b1a6",
+ "endpointType": "VoIP",
+ "mediaType": "Audio",
+ "streamId": "14884",
+ "transportType": "UDP",
+ "roundTripTimeAvg": "46",
+ "roundTripTimeMax": "48",
+ "jitterAvg": "0",
+ "jitterMax": "1",
+ "packetLossRateAvg": "0",
+ "packetLossRateMax": "0"
+}
+```
+Diagnostic log for audio stream from Server Endpoint to VoIP Endpoint 1:
+```json
+"properties": {
+ "identifier": null,
+ "participantId": "04cc26f5-a86d-481c-b9f9-7a40be4d6fba",
+ "endpointId": null,
+ "endpointType": "Server",
+ "mediaType": "Audio",
+ "streamId": "2001",
+ "transportType": "UDP",
+ "roundTripTimeAvg": "42",
+ "roundTripTimeMax": "44",
+ "jitterAvg": "1",
+ "jitterMax": "1",
+ "packetLossRateAvg": "0",
+ "packetLossRateMax": "0"
+}
+```
+Diagnostic log for audio stream from VoIP Endpoint 3 to Server Endpoint:
+```json
+"properties": {
+ "identifier": "acs:1797dbb3-f982-47b0-b98e-6a76084454f1_0000000b-1531-57c6-ac00-343a0d00d972",
+ "participantId": "1a9cb3d1-7898-4063-b3d2-26c1630ecf03",
+ "endpointId": "5ebd55df-ffff-ffff-ab89-19ff584890b7",
+ "endpointType": "VoIP",
+ "mediaType": "Audio",
+ "streamId": "13783",
+ "transportType": "UDP",
+ "roundTripTimeAvg": "45",
+ "roundTripTimeMax": "46",
+ "jitterAvg": "1",
+ "jitterMax": "2",
+ "packetLossRateAvg": "0",
+ "packetLossRateMax": "0"
+}
+```
+Diagnostic log for audio stream from Server Endpoint to VoIP Endpoint 3:
+```json
+"properties": {
+ "identifier": "null",
+ "participantId": "1a9cb3d1-7898-4063-b3d2-26c1630ecf03",
+ "endpointId": null,
+    "endpointType": "Server",
+ "mediaType": "Audio",
+ "streamId": "1000",
+ "transportType": "UDP",
+ "roundTripTimeAvg": "45",
+ "roundTripTimeMax": "46",
+ "jitterAvg": "1",
+ "jitterMax": "4",
+    "packetLossRateAvg": "0"
+}
+```
+### Error Codes
+The `participantEndReason` contains a value from the set of Calling SDK error codes. You can refer to these codes to troubleshoot issues during the call, per Endpoint. See [Calling SDK error codes](../../troubleshooting-info.md?tabs=csharp%2cios%2cdotnet#calling-sdk-error-codes) in the Azure Communication Services troubleshooting article.
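+
+For instance, to see which end reasons dominate across your calls, a minimal sketch (assuming the Log Analytics table exposes the field as `ParticipantEndReason`):
+
+```
+// Count how often each Calling SDK end reason occurs.
+ACSCallSummary
+| summarize occurrences=count() by ParticipantEndReason
+| order by occurrences desc
+```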
communication-services Query Call Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/analytics/query-call-logs.md
+
+ Title: Azure Communication Services - query call logs
+
+description: About using Log Analytics for Call Summary and Call Diagnostic logs
++++ Last updated : 10/25/2021+++++
+# Query call logs
+
+## Overview and access
+
+Before you can take advantage of [Log Analytics](../../../azure-monitor/logs/log-analytics-overview.md) for your Communication Services logs, you must first follow the steps outlined in [Enable logging in Diagnostic Settings](enable-logging.md). Once you've enabled your logs and a [Log Analytics Workspace](../../../azure-monitor/logs/workspace-design.md), you have access to many helpful [default query packs](../../../azure-monitor/logs/query-packs.md#default-query-pack) that help you quickly visualize and understand the data available in your logs, as described below. Through Log Analytics, you also get access to more Communication Services Insights via Azure Monitor Workbooks, the ability to create your own queries and Workbooks, and programmatic access to any query through the [Log Analytics API](../../../azure-monitor/logs/api/overview.md).
+
+### Access
+You can access the queries by starting on your Communication Services resource page, and then selecting "Logs" in the left navigation within the Monitor section:
++
+From there, you're presented with a modal screen that contains all of the [default query packs](../../../azure-monitor/logs/query-packs.md#default-query-pack) available for your Communication Services resource, with a list of query packs to navigate on the left.
++
+If you close the modal screen, you can still navigate to the various query packs and directly access data in the form of tables based on the schemas of the logs and metrics you've enabled in your Diagnostic Settings. Here, you can create your own queries from the data by using [KQL (Kusto)](/azure/data-explorer/kusto/query/). Learn more about using, editing, and creating queries in [Log Analytics Queries](../../../azure-monitor/logs/queries.md).
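+
+For example, a minimal starting point for a custom query, a sketch that simply returns the ten most recent call summary records:
+
+```
+// The ten most recent call summary records in the workspace.
+ACSCallSummary
+| top 10 by TimeGenerated desc
+```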
+++
+## Default query packs for call summary and call diagnostic logs
+The following are descriptions of each query in the [default query pack](../../../azure-monitor/logs/query-packs.md#default-query-pack) for the [Call Summary and Call Diagnostic logs](logs/voice-and-video-logs.md), including code samples and example outputs for each available query:
+### Call Overview Queries
+#### Number of participants per call
+
+```
+// Count number of calls and participants,
+// and print average participants per call
+ACSCallSummary
+| distinct CorrelationId, ParticipantId, EndpointId
+| summarize num_participants=count(), num_calls=dcount(CorrelationId)
+| extend avg_participants = todecimal(num_participants) / todecimal(num_calls)
+```
+
+Sample output:
++
+#### Number of participants per group call
+
+```
+// Count number of participants per group call
+ACSCallSummary
+| where CallType == 'Group'
+| distinct CorrelationId, ParticipantId
+| summarize num_participants=count() by CorrelationId
+| summarize participant_counts=count() by num_participants
+| order by num_participants asc
+| render columnchart with (xcolumn = num_participants, title="Number of participants per group call")
+```
+
+Sample output:
++
+#### Ratio of call types
+
+```
+// Ratio of call types
+ACSCallSummary
+| summarize call_types=dcount(CorrelationId) by CallType
+| render piechart title="Call Type Ratio"
+
+```
+
+Sample output:
++
+#### Call duration distribution
+
+```
+// Call duration histogram
+ACSCallSummary
+| distinct CorrelationId, CallDuration
+| summarize duration_counts=count() by CallDuration
+| order by CallDuration asc
+| render columnchart with (xcolumn = CallDuration, title="Call duration histogram")
+```
+
+Sample output:
++
+#### Call duration percentiles
+
+```
+// Call duration percentiles
+ACSCallSummary
+| distinct CorrelationId, CallDuration
+| summarize avg(CallDuration), percentiles(CallDuration, 50, 90, 99)
+```
+
+Sample output:
++
+### Endpoint information queries
+
+#### Number of endpoints per call
+
+```
+// Count number of calls and endpoints,
+// and print average endpoints per call
+ACSCallSummary
+| distinct CorrelationId, EndpointId
+| summarize num_endpoints=count(), num_calls=dcount(CorrelationId)
+| extend avg_endpoints = todecimal(num_endpoints) / todecimal(num_calls)
+```
+
+Sample output:
++
+#### Ratio of SDK versions
+
+```
+// Ratio of SDK Versions
+ACSCallSummary
+| distinct CorrelationId, ParticipantId, EndpointId, SdkVersion
+| summarize sdk_counts=count() by SdkVersion
+| order by SdkVersion asc
+| render piechart title="SDK Version Ratio"
+```
+
+Sample output:
++
+#### Ratio of OS versions (simplified OS name)
+
+```
+// Ratio of OS Versions (simplified OS name)
+ACSCallSummary
+| distinct CorrelationId, ParticipantId, EndpointId, OsVersion
+| extend simple_os = case( indexof(OsVersion, "Android") != -1, tostring(split(OsVersion, ";")[0]),
+ indexof(OsVersion, "Darwin") != -1, tostring(split(OsVersion, ":")[0]),
+ indexof(OsVersion, "Windows") != -1, tostring(split(OsVersion, ".")[0]),
+ OsVersion
+ )
+| summarize os_counts=count() by simple_os
+| order by simple_os asc
+| render piechart title="OS Version Ratio"
+```
+
+Sample output:
++
+### Media stream queries
+#### Streams per call
+
+```
+// Count number of calls and streams,
+// and print average streams per call
+ACSCallDiagnostics
+| summarize num_streams=count(), num_calls=dcount(CorrelationId)
+| extend avg_streams = todecimal(num_streams) / todecimal(num_calls)
+```
+Sample output:
++
+#### Streams per call histogram
+
+```
+// Distribution of streams per call
+ACSCallDiagnostics
+| summarize streams_per_call=count() by CorrelationId
+| summarize stream_counts=count() by streams_per_call
+| order by streams_per_call asc
+| render columnchart title="Streams per call histogram"
+```
++
+#### Ratio of media types
+
+```
+// Ratio of media types by call
+ACSCallDiagnostics
+| summarize media_types=count() by MediaType
+| render piechart title="Media Type Ratio"
+```
++
+### Quality metrics queries
+
+#### Average telemetry values
+
+```
+// Average telemetry values over all streams
+ACSCallDiagnostics
+| summarize Avg_JitterAvg=avg(JitterAvg),
+ Avg_JitterMax=avg(JitterMax),
+ Avg_RoundTripTimeAvg=avg(RoundTripTimeAvg),
+ Avg_RoundTripTimeMax=avg(RoundTripTimeMax),
+ Avg_PacketLossRateAvg=avg(PacketLossRateAvg),
+ Avg_PacketLossRateMax=avg(PacketLossRateMax)
+```
++
+#### JitterAvg histogram
+
+```
+// Jitter Average Histogram
+ACSCallDiagnostics
+| where isnotnull(JitterAvg)
+| summarize JitterAvg_counts=count() by JitterAvg
+| order by JitterAvg asc
+| render columnchart with (xcolumn = JitterAvg, title="JitterAvg histogram")
+```
++
+#### JitterMax histogram
+
+```
+// Jitter Max Histogram
+ACSCallDiagnostics
+| where isnotnull(JitterMax)
+| summarize JitterMax_counts=count() by JitterMax
+| order by JitterMax asc
+| render columnchart with (xcolumn = JitterMax, title="JitterMax histogram")
+```
++
+#### PacketLossRateAvg histogram
+```
+// PacketLossRate Average Histogram
+ACSCallDiagnostics
+| where isnotnull(PacketLossRateAvg)
+| summarize PacketLossRateAvg_counts=count() by bin(PacketLossRateAvg, 0.01)
+| order by PacketLossRateAvg asc
+| render columnchart with (xcolumn = PacketLossRateAvg, title="PacketLossRateAvg histogram")
+```
++
+#### PacketLossRateMax histogram
+```
+// PacketLossRate Max Histogram
+ACSCallDiagnostics
+| where isnotnull(PacketLossRateMax)
+| summarize PacketLossRateMax_counts=count() by bin(PacketLossRateMax, 0.01)
+| order by PacketLossRateMax asc
+| render columnchart with (xcolumn = PacketLossRateMax, title="PacketLossRateMax histogram")
+```
++
+#### RoundTripTimeAvg histogram
+```
+// RoundTripTime Average Histogram
+ACSCallDiagnostics
+| where isnotnull(RoundTripTimeAvg)
+| summarize RoundTripTimeAvg_counts=count() by RoundTripTimeAvg
+| order by RoundTripTimeAvg asc
+| render columnchart with (xcolumn = RoundTripTimeAvg, title="RoundTripTimeAvg histogram")
+```
++
+#### RoundTripTimeMax histogram
+```
+// RoundTripTime Max Histogram
+ACSCallDiagnostics
+| where isnotnull(RoundTripTimeMax)
+| summarize RoundTripTimeMax_counts=count() by RoundTripTimeMax
+| order by RoundTripTimeMax asc
+| render columnchart with (xcolumn = RoundTripTimeMax, title="RoundTripTimeMax histogram")
+```
++
+#### Poor Jitter Quality
+```
+// Get proportion of calls with poor quality jitter
+// (defined as jitter being higher than 30ms)
+ACSCallDiagnostics
+| extend JitterQuality = iff(JitterAvg > 30, "Poor", "Good")
+| summarize count() by JitterQuality
+| render piechart title="Jitter Quality"
+```
+++
+#### PacketLossRate Quality
+```
+// Get proportion of calls with poor quality packet loss
+// rate (defined as packet loss being higher than 10%)
+ACSCallDiagnostics
+| extend PacketLossRateQuality = iff(PacketLossRateAvg > 0.1, "Poor", "Good")
+| summarize count() by PacketLossRateQuality
+| render piechart title="Packet Loss Rate Quality"
+```
++
+#### RoundTripTime Quality
+```
+// Get proportion of calls with poor quality round-trip time
+// (defined as average round-trip time being higher than 500ms)
+ACSCallDiagnostics
+| extend RoundTripTimeQuality = iff(RoundTripTimeAvg > 500, "Poor", "Good")
+| summarize count() by RoundTripTimeQuality
+| render piechart title="Round Trip Time Quality"
+```
++
+### Parameterizable Queries
+
+#### Daily calls in the last week
+```
+// Histogram of daily calls over the last week
+ACSCallSummary
+| where CallStartTime > now() - 7d
+| distinct CorrelationId, CallStartTime
+| extend day = floor(CallStartTime, 1d)
+| summarize event_count=count() by day
+| sort by day asc
+| render columnchart title="Number of calls in last week"
+```
++
+#### Calls per hour in last day
+```
+// Histogram of calls per hour in the last day
+ACSCallSummary
+| where CallStartTime > now() - 1d
+| distinct CorrelationId, CallStartTime
+| extend hour = floor(CallStartTime, 1h)
+| summarize event_count=count() by hour
+| sort by hour asc
+| render columnchart title="Number of calls per hour in last day"
+```
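+
+These queries can also be parameterized by lifting literals into `let` bindings. A sketch, with the look-back window pulled out as a variable in place of the hard-coded `1d`:
+
+```
+// Same per-hour histogram, with the look-back window as a parameter.
+let lookback = 3d;
+ACSCallSummary
+| where CallStartTime > now() - lookback
+| distinct CorrelationId, CallStartTime
+| extend hour = floor(CallStartTime, 1h)
+| summarize event_count=count() by hour
+| sort by hour asc
+| render columnchart title="Number of calls per hour"
+```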
+
communication-services Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/authentication.md
User access tokens are generated using the Identity SDK and are associated with
## Using identity for monitoring and metrics
-The user identity is intended to act as a primary key for logs and metrics collected through Azure Monitor. If you'd like to get a view of all of a specific user's calls, for example, you should set up your authentication in a way that maps a specific Azure Communication Services identity (or identities) to a singular user. Learn more about [log analytics](../concepts/analytics/log-analytics.md), and [metrics](../concepts/metrics.md) available to you.
+The user identity is intended to act as a primary key for logs and metrics collected through Azure Monitor. If you'd like to get a view of all of a specific user's calls, for example, you should set up your authentication in a way that maps a specific Azure Communication Services identity (or identities) to a singular user. Learn more about [log analytics](../concepts/analytics/query-call-logs.md), and [metrics](../concepts/metrics.md) available to you.
## Next steps
communication-services Call Logs Azure Monitor Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/call-logs-azure-monitor-access.md
To access telemetry for Azure Communication Services Voice & Video resources, follow these steps. ## Enable logging
-1. First, you will need to create a storage account for your logs. Go to [Create a storage account](../../storage/common/storage-account-create.md?tabs=azure-portal) for instructions to complete this step. See also [Storage account overview](../../storage/common/storage-account-overview.md) for more information on the types and features of different storage options. If you already have an Azure storage account go to Step 2.
+1. First, you need to create a storage account for your logs. Go to [Create a storage account](../../storage/common/storage-account-create.md?tabs=azure-portal) for instructions to complete this step. For more information, see [Storage account overview](../../storage/common/storage-account-overview.md) on the types and features of different storage options. If you already have an Azure storage account, go to Step 2.
-1. When you've created your storage account, next you need to enable logging by following the instructions in [Enable diagnostic logs in your resource](./logging-and-diagnostics.md#enable-diagnostic-logs-in-your-resource). You will select the check boxes for the logs "CallSummaryPRIVATEPREVIEW" and "CallDiagnosticPRIVATEPREVIEW".
+2. When you've created your storage account, next you need to enable logging by following the instructions in [Enable diagnostic logs in your resource](./analytics/enable-logging.md). You select the check boxes for the logs "CallSummaryPRIVATEPREVIEW" and "CallDiagnosticPRIVATEPREVIEW".
-1. Next, select the "Archive to a storage account" box and then select the storage account for your logs in the drop-down menu below. The "Send to Analytics workspace" option isn't currently available for Private Preview of this feature, but it will be made available when this feature is made public.
+3. Next, select the "Archive to a storage account" box and then select the storage account for your logs in the drop-down menu. The "Send to Analytics workspace" option isn't currently available for Private Preview of this feature, but it is made available when this feature is made public.
:::image type="content" source="media\call-logs-images\call-logs-access-diagnostic-setting.png" alt-text="Azure Monitor Diagnostic setting"::: -- ## Access Your Logs To access your logs, go to the storage account you designated in Step 3 above by navigating to [Storage Accounts](https://portal.azure.com/#blade/HubsExtension/BrowseResource/resourceType/Microsoft.Storage%2FStorageAccounts) in the Azure portal.
From there, you can download all logs or individual logs.
## Next Steps -- Learn more about [Logging and Diagnostics](./logging-and-diagnostics.md)
+- Access logs for [voice and video](./analytics/logs/voice-and-video-logs.md), [chat](./analytics/logs/chat-logs.md), [email](./analytics/logs/email-logs.md), [network traversal](./analytics/logs/network-traversal-logs.md), [recording](./analytics/logs/recording-logs.md), [SMS](./analytics/logs/sms-logs.md) and [call automation](./analytics/logs/call-automation-logs.md).
communication-services Network Diagnostic https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/developer-tools/network-diagnostic.md
The test provides a **unique identifier** for your test, which you can provide o
- [Use Pre-Call Diagnostic APIs to build your own tech check](../voice-video-calling/pre-call-diagnostics.md) - [Explore User-Facing Diagnostic APIs](../voice-video-calling/user-facing-diagnostics.md) - [Enable Media Quality Statistics in your application](../voice-video-calling/media-quality-sdk.md)-- [Consume call logs with Azure Monitor](../analytics/call-logs-azure-monitor.md)
+- [Consume call logs with Azure Monitor](../analytics/logs/voice-and-video-logs.md)
communication-services Real Time Inspection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/developer-tools/real-time-inspection.md
The tool includes the ability to download the logs captured using the `Download
- [Explore User-Facing Diagnostic APIs](../voice-video-calling/user-facing-diagnostics.md) - [Enable Media Quality Statistics in your application](../voice-video-calling/media-quality-sdk.md) - [Leverage Network Diagnostic Tool](./network-diagnostic.md)-- [Consume call logs with Azure Monitor](../analytics/call-logs-azure-monitor.md)
+- [Consume call logs with Azure Monitor](../analytics/logs/voice-and-video-logs.md)
communication-services Capabilities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/interop/guest/capabilities.md
In this article, you will learn which capabilities are supported for Teams exter
| | Honor setting "Teams Q&A" | No API available | | | Honor setting "Meeting reactions" | No API available | | DevOps | [Azure Metrics](../../metrics.md) | ✔️ |
-| | [Azure Monitor](../../logging-and-diagnostics.md) | ✔️ |
+| | [Azure Monitor](../../analytics/logs/voice-and-video-logs.md) | ✔️ |
| | [Azure Communication Services Insights](../../analytics/insights/voice-and-video-insights.md) | ✔️ | | | [Azure Communication Services Voice and video calling events](../../../../event-grid/communication-services-voice-video-events.md) | ❌ | | | [Teams Call Analytics](/MicrosoftTeams/use-call-analytics-to-troubleshoot-poor-call-quality) | ✔️ |
The following table shows supported server-side capabilities available in Azure
| | | | [Manage ACS call recording](../../voice-video-calling/call-recording.md) | ❌ | | [Azure Metrics](../../metrics.md) | ✔️ |
-| [Azure Monitor](../../logging-and-diagnostics.md) | ✔️ |
+| [Azure Monitor](../../analytics/logs/voice-and-video-logs.md) | ✔️ |
| [Azure Communication Services Insights](../../analytics/insights/voice-and-video-insights.md) | ✔️ | | [Azure Communication Services Voice and video calling events](../../../../event-grid/communication-services-voice-video-events.md) | ❌ |
communication-services Monitor Logs Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/interop/guest/monitor-logs-metrics.md
# Monitor logs for Teams external users
-In this article, you will learn which Azure logs, Azure metrics & Teams logs are emitted for Teams external users when joining Teams meetings. Azure Communication Services user joining Teams meeting emits the following metrics: [Authentication API](../../metrics.md) and [Chat API](../../metrics.md). Communication Services resource additionally tracks the following logs: [Call Summary](../../analytics/call-logs-azure-monitor.md) and [Call Diagnostic](../../analytics/call-logs-azure-monitor.md) Log. Teams administrator can use [Teams Admin Center](https://aka.ms/teamsadmincenter) and [Teams Call Quality Dashboard](https://cqd.teams.microsoft.com) to review logs stored for Teams external users joining Teams meetings organized by the tenant.
+In this article, you will learn which Azure logs, Azure metrics & Teams logs are emitted for Teams external users when joining Teams meetings. Azure Communication Services user joining Teams meeting emits the following metrics: [Authentication API](../../metrics.md) and [Chat API](../../metrics.md). Communication Services resource additionally tracks the following logs: [Call Summary](../../analytics/logs/voice-and-video-logs.md) and [Call Diagnostic](../../analytics/logs/voice-and-video-logs.md) Log. Teams administrator can use [Teams Admin Center](https://aka.ms/teamsadmincenter) and [Teams Call Quality Dashboard](https://cqd.teams.microsoft.com) to review logs stored for Teams external users joining Teams meetings organized by the tenant.
## Azure logs & metrics
Call summary and call diagnostics logs are emitted only for the following partic
- Azure Communication Services users joining the meeting from the same tenant. This includes users rejected in the lobby and Azure Communication Services users from different resources but in the same tenant. - Additional Teams users, phone users and bots joining the meeting only if the organizer and current Azure Communication Services resource are in the same tenant.
-If Azure Communication Services resource and Teams meeting organizer tenants are different, then some fields of the logs are redacted. You can find more information in the call summary & diagnostics logs [documentation](../../analytics/call-logs-azure-monitor.md). Bots indicate service logic provided during the meeting. Here is a list of frequently used bots:
+If Azure Communication Services resource and Teams meeting organizer tenants are different, then some fields of the logs are redacted. You can find more information in the call summary & diagnostics logs [documentation](../../analytics/logs/voice-and-video-logs.md). Bots indicate service logic provided during the meeting. Here is a list of frequently used bots:
- b1902c3e-b9f7-4650-9b23-5772bd429747 - Teams convenient recording ## Microsoft Teams logs
Teams administrator can see Teams external users in the overview of the meeting
- [Enable logs and metrics](../../analytics/enable-logging.md) - [Metrics](../../metrics.md)-- [Call summary and call diagnostics](../../analytics/call-logs-azure-monitor.md)
+- [Call summary and call diagnostics](../../analytics/logs/voice-and-video-logs.md)
communication-services Teams User Calling https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/interop/teams-user-calling.md
The following list presents the set of features that are currently available in
| | Honor setting "Spam filtering" | ✔️ | | | Honor setting "SIP devices can be used for calls" | ✔️ | | DevOps | [Azure Metrics](../metrics.md) | ✔️ |
-| | [Azure Monitor](../logging-and-diagnostics.md) | ✔️ |
+| | [Azure Monitor](../analytics/logs/voice-and-video-logs.md) | ✔️ |
| | [Azure Communication Services Insights](../analytics/insights/voice-and-video-insights.md) | ✔️ | | | [Azure Communication Services Voice and video calling events](../../../event-grid/communication-services-voice-video-events.md) | ❌ | | | [Teams Call Analytics](/MicrosoftTeams/use-call-analytics-to-troubleshoot-poor-call-quality) | ✔️ |
communication-services Meeting Capabilities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/interop/teams-user/meeting-capabilities.md
The following list of capabilities is allowed when Teams user participates in Te
| | Honor setting "Mode for IP video" | ❌ | | | Honor setting "IP video" | ❌ | | | Honor setting "Local broadcasting" | ❌ |
-| | Honor setting "Media bit rate (Kbs)" | ❌ |
+| | Honor setting "Media bit rate (kBps)" | ❌ |
| | Honor setting "Network configuration lookup" | ❌ | | | Honor setting "Transcription" | No API available | | | Honor setting "Cloud recording" | No API available |
The following list of capabilities is allowed when Teams user participates in Te
| | Honor setting "Teams Q&A" | No API available | | | Honor setting "Meeting reactions" | No API available | | DevOps | [Azure Metrics](../../metrics.md) | ✔️ |
-| | [Azure Monitor](../../logging-and-diagnostics.md) | ✔️ |
+| | [Azure Monitor](../../analytics/logs/voice-and-video-logs.md) | ✔️ |
| | [Azure Communication Services Insights](../../analytics/insights/voice-and-video-insights.md) | ✔️ | | | [Azure Communication Services Voice and video calling events](../../../../event-grid/communication-services-voice-video-events.md) | ❌ | | | [Teams Call Analytics](/MicrosoftTeams/use-call-analytics-to-troubleshoot-poor-call-quality) | ✔️ |
Teams meeting organizers can configure the Teams meeting options to adjust the e
|[Allow camera for attendees](https://support.microsoft.com/office/manage-attendee-audio-and-video-permissions-in-teams-meetings-f9db15e1-f46f-46da-95c6-34f9f39e671a)|If Teams user is attendee, then this option controls whether Teams user can send local video |✔️| |[Record automatically](/graph/api/resources/onlinemeeting)|Records meeting when anyone starts the meeting. The user in the lobby does not start a recording.|✔️| |Allow meeting chat|If enabled, Teams users can use the chat associated with the Teams meeting.|✔️|
-|[Allow reactions](/microsoftteams/meeting-policies-in-teams-general#meeting-reactions)|If enabled, Teams users can use reactions in the Teams meeting. Azure Communication Services don't support reactions. |❌|
+|[Allow reactions](/microsoftteams/meeting-policies-in-teams-general#meeting-reactions)|If enabled, Teams users can use reactions in the Teams meeting. Azure Communication Services doesn't support reactions. |❌|
|[RTMP-IN](/microsoftteams/stream-teams-meetings)|If enabled, organizers can stream meetings and webinars to external endpoints by providing a Real-Time Messaging Protocol (RTMP) URL and key to the built-in Custom Streaming app in Teams. |Not applicable| |[Provide CART Captions](https://support.microsoft.com/office/use-cart-captions-in-a-microsoft-teams-meeting-human-generated-captions-2dd889e8-32a8-4582-98b8-6c96cf14eb47)|Communication access real-time translation (CART) is a service in which a trained CART captioner listens to the speech and instantaneously translates all speech to text. As a meeting organizer, you can set up and offer CART captioning to your audience instead of the Microsoft Teams built-in live captions that are automatically generated.|❌|
communication-services Phone Capabilities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/interop/teams-user/phone-capabilities.md
The following list of capabilities is supported for scenarios where at least one
| | Replace the caller ID with this service number | ❌ | | Teams dial out plan policies | Start a phone call honoring dial plan policy | ❌ | | DevOps | [Azure Metrics](../../metrics.md) | ✔️ |
-| | [Azure Monitor](../../logging-and-diagnostics.md) | ✔️ |
-| | [Azure Communication Services Insights](../../analytics/insights/voice-and-video-insights.md) | ✔️ |
+| | [Azure Monitor](../../analytics/logs/voice-and-video-logs.md) | ✔️ |
+| | [Azure Communication Services Insights](../../analytics/logs/voice-and-video-logs.md) | ✔️ |
| | [Azure Communication Services Voice and video calling events](../../../../event-grid/communication-services-voice-video-events.md) | ❌ | | | [Teams Call Analytics](/MicrosoftTeams/use-call-analytics-to-troubleshoot-poor-call-quality) | ✔️ | | | [Teams real-time Analytics](/microsoftteams/use-real-time-telemetry-to-troubleshoot-poor-meeting-quality) | ❌ |
communication-services Join Teams Meeting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/join-teams-meeting.md
During a Teams meeting, all chat messages sent by Teams users or Communication S
If the hosting Microsoft 365 organization has defined a retention policy that deletes chat messages for any of the Teams users in the meeting, then all copies of the most recently sent message that have been stored for Communication Services users will also be deleted in accordance with the policy. If there is not a retention policy defined, then the copies of the most recently sent message for all Communication Services users will be deleted after 30 days. For more information about Teams retention policies, review the article [Learn about retention for Microsoft Teams](/microsoft-365/compliance/retention-policies-teams). ## Diagnostics and call analytics
-After a Teams meeting ends, diagnostic information about the meeting is available using the [Communication Services logging and diagnostics](./logging-and-diagnostics.md) and using the [Teams Call Analytics](/MicrosoftTeams/use-call-analytics-to-troubleshoot-poor-call-quality) in the Teams admin center. Communication Services users will appear as "Anonymous" in Call Analytics screens. Communication Services users aren't included in the [Teams real-time Analytics](/microsoftteams/use-real-time-telemetry-to-troubleshoot-poor-meeting-quality).
+After a Teams meeting ends, diagnostic information about the meeting is available using the [Communication Services logging and diagnostics](./analytics/logs/voice-and-video-logs.md) and using the [Teams Call Analytics](/MicrosoftTeams/use-call-analytics-to-troubleshoot-poor-call-quality) in the Teams admin center. Communication Services users will appear as "Anonymous" in Call Analytics screens. Communication Services users aren't included in the [Teams real-time Analytics](/microsoftteams/use-real-time-telemetry-to-troubleshoot-poor-meeting-quality).
## Privacy Interoperability between Azure Communication Services and Microsoft Teams enables your applications and users to participate in Teams calls, meetings, and chat. It is your responsibility to ensure that the users of your application are notified when recording or transcription are enabled in a Teams call or meeting.
communication-services Logging And Diagnostics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/logging-and-diagnostics.md
- Title: Communication Services Logs-
-description: Learn about logging in Azure Communication Services
----- Previously updated : 06/30/2021-----
-# Communication Services logs
-
-Azure Communication Services offers logging capabilities that you can use to monitor and debug your Communication Services solution. These capabilities can be configured through the Azure portal.
-
- >[!IMPORTANT]
- > For Audio/Video/Telephony call data refer to [Call Summary and Call Diagnostic Logs](../concepts/analytics/call-logs-azure-monitor.md)
-
-## Enable diagnostic logs in your resource
-
-Logging is turned off by default when a resource is created. To enable logging, navigate to the **Diagnostic settings** tab in the resource menu under the **Monitoring** section. Then select **Add diagnostic setting**.
-
-Next, select the archive target you want. Currently, we support storage accounts and Log Analytics as archive targets. After selecting the types of logs that you'd like to capture, save the diagnostic settings.
-
-New settings take effect in about 10 minutes. Logs will begin appearing in the configured archival target within the Logs pane of your Communication Services resource.
--
-For more information about configuring diagnostics, see the overview of [Azure resource logs](../../azure-monitor/essentials/platform-logs-overview.md).
-
-## Resource log categories
-
-Communication Services offers the following types of logs that you can enable:
-
-* **Usage logs** - provides usage data associated with each billed service offering
-* **Chat operational logs** - provides basic information related to the chat service
-* **SMS operational logs** - provides basic information related to the SMS service
-* **Authentication operational logs** - provides basic information related to the Authentication service
-* **Network Traversal operational logs** - provides basic information related to the Network Traversal service
-* **Email Send Mail operational logs** - provides detailed information related to the Email service send mail requests.
-* **Email Status Update operational logs** - provides message and recipient level delivery status updates related to the Email service send mail requests.
-* **Email User Engagement operational logs** - provides information related to 'open' and 'click' user engagement metrics for messages sent from the Email service.
-* **Call Automation operational logs** - provides operational information on Call Automation API requests. These logs can be used to identify failure points, query all requests made in a call (using Correlation ID or Server Call ID) or query all requests made by a specific service application in the call (using Participant ID).
-
-### Usage logs schema
-
-| Property | Description |
-| -- | |
-| Timestamp | The timestamp (UTC) of when the log was generated. |
-| Operation Name | The operation associated with the log record. |
-| Operation Version | The `api-version` associated with the operation, if the operationName was performed using an API. If there's no API that corresponds to this operation, the version represents the version of that operation in case the properties associated with the operation change in the future. |
-| Category | The log category of the event. Category is the granularity at which you can enable or disable logs on a particular resource. The properties that appear within the properties blob of an event are the same within a particular log category and resource type. |
-| Correlation ID | The ID for correlated events. Can be used to identify correlated events between multiple tables. |
-| Properties | Other data applicable to various modes of Communication Services. |
-| Record ID | The unique ID for a given usage record. |
-| Usage Type | The mode of usage (for example, Chat, PSTN, NAT, etc.). |
-| Unit Type | The type of unit that usage is based on for a given mode of usage (for example, minutes, megabytes, messages, etc.). |
-| Quantity | The number of units used or consumed for this record. |
-
-### Chat operational logs
-
-| Property | Description |
-| -- | |
-| TimeGenerated | The timestamp (UTC) of when the log was generated. |
-| OperationName | The operation associated with the log record. |
-| CorrelationID | The ID for correlated events. Can be used to identify correlated events between multiple tables. |
-| OperationVersion | The api-version associated with the operation, if the operationName was performed using an API. If there's no API that corresponds to this operation, the version represents the version of that operation in case the properties associated with the operation change in the future. |
-| Category | The log category of the event. Category is the granularity at which you can enable or disable logs on a particular resource. The properties that appear within the properties blob of an event are the same within a particular log category and resource type. |
-| ResultType | The status of the operation. |
-| ResultSignature | The sub status of the operation. If this operation corresponds to a REST API call, this field is the HTTP status code of the corresponding REST call. |
-| ResultDescription | The static text description of this operation. |
-| DurationMs | The duration of the operation in milliseconds. |
-| CallerIpAddress | The caller IP address, if the operation corresponds to an API call that would come from an entity with a publicly available IP address. |
-| Level | The severity level of the event. |
-| URI | The URI of the request. |
-| UserId | The request sender's user ID. |
-| ChatThreadId | The chat thread ID associated with the request. |
-| ChatMessageId | The chat message ID associated with the request. |
-| SdkType | The SDK type used in the request. |
-| PlatformType | The platform type used in the request. |
-
-### SMS operational logs
-
-| Property | Description |
-| -- | |
-| TimeGenerated | The timestamp (UTC) of when the log was generated. |
-| OperationName | The operation associated with the log record. |
-| CorrelationID | The ID for correlated events. Can be used to identify correlated events between multiple tables. |
-| OperationVersion | The api-version associated with the operation, if the operationName was performed using an API. If there's no API that corresponds to this operation, the version represents the version of that operation in case the properties associated with the operation change in the future. |
-| Category | The log category of the event. Category is the granularity at which you can enable or disable logs on a particular resource. The properties that appear within the properties blob of an event are the same within a particular log category and resource type. |
-| ResultType | The status of the operation. |
-| ResultSignature | The sub status of the operation. If this operation corresponds to a REST API call, this field is the HTTP status code of the corresponding REST call. |
-| ResultDescription | The static text description of this operation. |
-| DurationMs | The duration of the operation in milliseconds. |
-| CallerIpAddress | The caller IP address, if the operation corresponds to an API call that would come from an entity with a publicly available IP address. |
-| Level | The severity level of the event. |
-| URI | The URI of the request. |
-| OutgoingMessageLength | The number of characters in the outgoing message. |
-| IncomingMessageLength | The number of characters in the incoming message. |
-| DeliveryAttempts | The number of attempts made to deliver this message. |
-| PhoneNumber | The phone number the SMS message is being sent from. |
-| SdkType | The SDK type used in the request. |
-| PlatformType | The platform type used in the request. |
-| Method | The method used in the request. |
-|NumberType| The type of number the SMS message is being sent from. It can be either **LongCodeNumber** or **ShortCodeNumber**. |
-
-### Authentication operational logs
-
-| Property | Description |
-| -- | |
-| TimeGenerated | The timestamp (UTC) of when the log was generated. |
-| OperationName | The operation associated with the log record. |
-| CorrelationID | The ID for correlated events. Can be used to identify correlated events between multiple tables. |
-| OperationVersion | The `api-version` associated with the operation, if the `operationName` was performed using an API. If there's no API that corresponds to this operation, the version represents the version of that operation in case the properties associated with the operation change in the future. |
-| Category | The log category of the event. Category is the granularity at which you can enable or disable logs on a particular resource. The properties that appear within the properties blob of an event are the same within a particular log category and resource type. |
-| ResultType | The status of the operation. |
-| ResultSignature | The sub-status of the operation. If this operation corresponds to a REST API call, this field is the HTTP status code of the corresponding REST call. |
-| DurationMs | The duration of the operation in milliseconds. |
-| CallerIpAddress | The caller IP address, if the operation corresponds to an API call that would come from an entity with a publicly available IP address. |
-| Level | The severity level of the event. |
-| URI | The URI of the request. |
-| SdkType | The SDK type used in the request. |
-| PlatformType | The platform type used in the request. |
-| Identity | The identity of Azure Communication Services or Teams user related to the operation. |
-| Scopes | The Communication Services scopes present in the access token. |
-
-### Network Traversal operational logs
-
-| Dimension | Description |
-||-|
-| TimeGenerated | The timestamp (UTC) of when the log was generated. |
-| OperationName | The operation associated with the log record. |
-| CorrelationId | The ID for correlated events. Can be used to identify correlated events between multiple tables. |
-| OperationVersion | The API-version associated with the operation or version of the operation (if there's no API version). |
-| Category | The log category of the event. Logs with the same log category and resource type will have the same properties fields. |
-| ResultType | The status of the operation (for example, Succeeded or Failed). |
-| ResultSignature | The sub status of the operation. If this operation corresponds to a REST API call, this field is the HTTP status code of the corresponding REST call. |
-| DurationMs | The duration of the operation in milliseconds. |
-| Level | The severity level of the operation. |
-| URI | The URI of the request. |
-| Identity | The request sender's identity, if provided. |
-| SdkType | The SDK type being used in the request. |
-| PlatformType | The platform type being used in the request. |
-| RouteType | The routing methodology used to select the ICE server location relative to the client (for example, Any or Nearest). |
--
-### Email Send Mail operational logs
-
-| Property | Description |
-| -- | |
-| TimeGenerated | The timestamp (UTC) of when the log was generated. |
-| Location | The region where the operation was processed. |
-| OperationName | The operation associated with the log record. |
-| OperationVersion | The `api-version` associated with the operation, if the `operationName` was performed using an API. If there's no API that corresponds to this operation, the version represents the version of that operation in case the properties associated with the operation change in the future. |
-| Category | The log category of the event. Category is the granularity at which you can enable or disable logs on a particular resource. The properties that appear within the properties blob of an event are the same within a particular log category and resource type. |
-| CorrelationID | The ID for correlated events. Can be used to identify correlated events between multiple tables. For all Email operational logs, the CorrelationId is mapped to the MessageId, which is returned from a successful SendMail request. |
-| Size | Represents the total size in megabytes of the email body, subject, headers and attachments. |
-| ToRecipientsCount | The total # of unique email addresses on the To line. |
-| CcRecipientsCount | The total # of unique email addresses on the Cc line. |
-| BccRecipientsCount | The total # of unique email addresses on the Bcc line. |
-| UniqueRecipientsCount | This is the deduplicated total recipient count for the To, Cc and Bcc address fields. |
-| AttachmentsCount | The total # of attachments. |
--
-### Email Status Update operational logs
-
-| Property | Description |
-| -- | |
-| TimeGenerated | The timestamp (UTC) of when the log was generated. |
-| Location | The region where the operation was processed. |
-| OperationName | The operation associated with the log record. |
-| OperationVersion | The `api-version` associated with the operation, if the `operationName` was performed using an API. If there's no API that corresponds to this operation, the version represents the version of that operation in case the properties associated with the operation change in the future. |
-| Category | The log category of the event. Category is the granularity at which you can enable or disable logs on a particular resource. The properties that appear within the properties blob of an event are the same within a particular log category and resource type. |
-| CorrelationID | The ID for correlated events. Can be used to identify correlated events between multiple tables. For all Email operational logs, the CorrelationId is mapped to the MessageId, which is returned from a successful SendMail request. |
-| RecipientId | The email address for the targeted recipient. If this is a message-level event, the property will be empty. |
-| DeliveryStatus | The terminal status of the message. |
-
-### Email User Engagement operational logs
-
-| Property | Description |
-| -- | |
-| TimeGenerated | The timestamp (UTC) of when the log was generated. |
-| Location | The region where the operation was processed. |
-| OperationName | The operation associated with the log record. |
-| OperationVersion | The `api-version` associated with the operation, if the `operationName` was performed using an API. If there's no API that corresponds to this operation, the version represents the version of that operation in case the properties associated with the operation change in the future. |
-| Category | The log category of the event. Category is the granularity at which you can enable or disable logs on a particular resource. The properties that appear within the properties blob of an event are the same within a particular log category and resource type. |
-| CorrelationID | The ID for correlated events. Can be used to identify correlated events between multiple tables. For all Email operational logs, the CorrelationId is mapped to the MessageId, which is returned from a successful SendMail request. |
-| RecipientId | The email address for the targeted recipient. If this is a message-level event, the property will be empty. |
-| EngagementType | The type of user engagement being tracked. |
-| EngagementContext | The context represents what the user interacted with. |
-| UserAgent | The user agent string from the client. |
--
-### Call Automation operational logs
-
-| Property | Description |
-| -- | |
-| TimeGenerated | The timestamp (UTC) of when the log was generated. |
-| OperationName | The operation associated with the log record. |
-| CorrelationID | The identifier to identify a call and correlate events for a unique call. |
-| OperationVersion | The `api-version` associated with the operation, if the `operationName` was performed using an API. If there's no API that corresponds to this operation, the version represents the version of that operation in case the properties associated with the operation change in the future. |
-| Category | The log category of the event. Category is the granularity at which you can enable or disable logs on a particular resource. The properties that appear within the properties blob of an event are the same within a particular log category and resource type. |
-| ResultType | The status of the operation. |
-| ResultSignature | The sub status of the operation. If this operation corresponds to a REST API call, this field is the HTTP status code of the corresponding REST call. |
-| DurationMs | The duration of the operation in milliseconds. |
-| CallerIpAddress | The caller IP address, if the operation corresponds to an API call that would come from an entity with a publicly available IP address. |
-| Level | The severity level of the event. |
-| URI | The URI of the request. |
-| CallConnectionId | ID representing the call connection, if available. This ID is different for each participant and is used to identify their connection to the call. |
-| ServerCallId | A unique ID to identify a call. |
-| SDKVersion | SDK version used for the request. |
-| SDKType | The SDK type used for the request. |
-| ParticipantId | ID to identify the call participant that made the request. |
-| SubOperationName | Used to identify the sub type of media operation (play, recognize) |
communication-services Sub Eligibility Number Capability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/numbers/sub-eligibility-number-capability.md
The following tables summarize current availability:
| USA & Puerto Rico | Toll-Free | General Availability | General Availability | General Availability | General Availability\* |
| USA & Puerto Rico | Local | - | - | General Availability | General Availability\* |
| USA | Short-Codes\** | General Availability | General Availability | - | - |
+| UK | Toll-Free | - | - | General Availability | General Availability\* |
+| UK | Local | - | - |
+| Canada | Toll-Free | General Availability | General Availability | General Availability | General Availability\* |
+| Canada | Local | - | - | General Availability | General Availability\* |
| Germany, Netherlands, United Kingdom, Australia, France, Switzerland, Sweden, Italy, Spain, Denmark, Ireland, Portugal, Poland, Austria, Lithuania, Latvia, Estonia | Alphanumeric Sender ID\** | Public Preview | - | - | - |

\* Please refer to [Inbound calling capabilities page](../telephony/inbound-calling-capabilities.md) for details.
The following tables summarize current availability:
| Number | Type | Send SMS | Receive SMS | Make Calls | Receive Calls |
| :-- | :- | :- | :- | :- | :- |
| UK | Toll-Free | - | - | General Availability | General Availability\* |
-| UK | Local | - | - |
+| UK | Local | - | - | General Availability | General Availability\* |
| USA & Puerto Rico | Toll-Free | General Availability | General Availability | General Availability | General Availability\* |
| USA & Puerto Rico | Local | - | - | General Availability | General Availability\* |
| Canada | Toll-Free | General Availability | General Availability | General Availability | General Availability\* |
The following tables summarize current availability:
| Number | Type | Send SMS | Receive SMS | Make Calls | Receive Calls |
| :- | :-- | :- | :- | :- | :- |
| Slovakia | Local | - | - | Public Preview | Public Preview\* |
+| Slovakia | Toll-Free | - | - | Public Preview | Public Preview\* |
\* Please refer to [Inbound calling capabilities page](../telephony/inbound-calling-capabilities.md) for details.
The following tables summarize current availability:
| Number | Type | Send SMS | Receive SMS | Make Calls | Receive Calls |
| :- | :-- | :- | :- | :- | :- |
| Germany | Local | - | - | Public Preview | Public Preview\* |
+| Germany | Toll-Free | - | - | Public Preview | Public Preview\* |
| Germany, Netherlands, United Kingdom, Australia, France, Switzerland, Sweden, Italy, Spain, Denmark, Ireland, Portugal, Poland, Austria, Lithuania, Latvia, Estonia | Alphanumeric Sender ID \** | Public Preview | - | - | - |

\* Please refer to [Inbound calling capabilities page](../telephony/inbound-calling-capabilities.md) for details.
communication-services Pstn Pricing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/pstn-pricing.md
All prices shown below are in USD.
|Number type |Monthly fee |
|--|--|
|Geographic |USD 1.00/mo |
+|Toll-Free |USD 18.00/mo |
### Usage charges

|Number type |To make calls* |To receive calls|
|--|--|--|
|Geographic |Starting at USD 0.0234/min |USD 0.0100/min |
+|Toll-free |Starting at USD 0.0234/min |Starting at USD 0.0401/min |
\* For destination-specific pricing for making outbound calls, refer to details [here](https://github.com/Azure/Communication/blob/master/pricing/communication-services-pstn-rates.csv)
All prices shown below are in USD.
|Number type |Monthly fee |
|--|--|
|Geographic |USD 1.00/mo |
+|Toll-Free |USD 25.00/mo |
### Usage charges

|Number type |To make calls* |To receive calls|
|--|--|--|
|Geographic |Starting at USD 0.0270/min |USD 0.0100/min |
+|Toll-free |Starting at USD 0.0270/min |Starting at USD 0.1151/min |
\* For destination-specific pricing for making outbound calls, refer to details [here](https://github.com/Azure/Communication/blob/master/pricing/communication-services-pstn-rates.csv)
communication-services Troubleshooting Info https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/troubleshooting-info.md
# Troubleshooting in Azure Communication Services
-This document will help you troubleshoot issues that you may experience within your Communication Services solution. If you're troubleshooting SMS, you can [enable delivery reporting with Event Grid](../quickstarts/sms/handle-sms-events.md) to capture SMS delivery details.
+This document helps you troubleshoot issues that you may experience within your Communication Services solution. If you're troubleshooting SMS, you can [enable delivery reporting with Event Grid](../quickstarts/sms/handle-sms-events.md) to capture SMS delivery details.
## Getting help
-We encourage developers to submit questions, suggest features, and report problems as issues. To aid in doing this we have a [dedicated support and help options page](../support.md) which lists your options for support.
+We encourage developers to submit questions, suggest features, and report problems as issues. To aid in doing this, we have a [dedicated support and help options page](../support.md) that lists your support options.
To help you troubleshoot certain types of issues, you may be asked for any of the following pieces of information:
To help you troubleshoot certain types of issues, you may be asked for any of th
* **Short Code Program Brief ID**: This ID is used to identify a short code program brief application.
* **Email message ID**: This ID is used to identify Send Email requests.
* **Correlation ID**: This ID is used to identify requests made using Call Automation.
-* **Call logs**: These logs contain detailed information that can be used to troubleshoot calling and network issues.
+* **Call logs**: These logs contain detailed information that you can use to troubleshoot calling and network issues.
Also take a look at our [service limits](service-limits.md) documentation for more information on throttling and limitations.
The MS-CV ID can be accessed by configuring diagnostics in the `clientOptions` o
### Client options example
-The following code snippets demonstrate diagnostics configuration. When the SDKs are used with diagnostics enabled, diagnostics details will be emitted to the configured event listener:
+The following code snippets demonstrate diagnostics configuration. When the SDKs are used with diagnostics enabled, diagnostic details are emitted to the configured event listener:
# [C#](#tab/csharp)

```
chat_client = ChatClient(
## Access IDs required for Call Automation
-When troubleshooting issues with the Call Automation SDK, like call management or recording problems, you'll need to collect the IDs that help identify the failing call or operation. You can provide either of the two IDs mentioned here.
+When troubleshooting issues with the Call Automation SDK, like call management or recording problems, you need to collect the IDs that help identify the failing call or operation. You can provide either of the two IDs mentioned here.
- From the header of API response, locate the field `X-Ms-Skype-Chain-Id`.

  ![Screenshot of response header showing X-Ms-Skype-Chain-Id.](media/troubleshooting/response-header.png)
In addition to one of these IDs, please provide the details on the failing use c
## Access your client call ID
-When troubleshooting voice or video calls, you may be asked to provide a `call ID`. This can be accessed via the `id` property of the `call` object:
+When troubleshooting voice or video calls, you may be asked to provide a `call ID`. This value can be accessed via the `id` property of the `call` object:
# [JavaScript](#tab/javascript)

```javascript
async function main() {
}, {
    enableDeliveryReport: true // Optional parameter
});
-console.log(result); // your message ID will be in the result
+console.log(result); // your message ID is in the result
}
```
The program brief ID can be found on the [Azure portal](https://portal.azure.com
## Access your email operation ID
-When troubleshooting send email or email message status requests, you may be asked to provide an `operation ID`. This can be accessed in the response:
+When troubleshooting send email or email message status requests, you may be asked to provide an `operation ID`. This value can be accessed in the response:
# [.NET](#tab/dotnet)

```csharp
const callClient = new CallClient();
```

You can use AzureLogger to redirect the logging output from Azure SDKs by overriding the `AzureLogger.log` method:
-This may be useful if you want to redirect logs to a location other than console.
+This approach may be useful if you want to redirect logs to a location other than the console.
```javascript import { AzureLogger } from '@azure/logger';
When developing for iOS, your logs are stored in `.blog` files. Note that you ca
These files can be accessed by opening Xcode. Go to Window > Devices and Simulators > Devices. Select your device. Under Installed Apps, select your application and click on "Download container".
-This will give you a `xcappdata` file. Right-click on this file and select ΓÇ£Show package contentsΓÇ¥. You'll then see the `.blog` files that you can then attach to your Azure support request.
+This process gives you an `xcappdata` file. Right-click on this file and select "Show package contents". You'll then see the `.blog` files that you can attach to your Azure support request.
# [Android](#tab/android)

When developing for Android, your logs are stored in `.blog` files. Note that you can't view the logs directly because they're encrypted.
-On Android Studio, navigate to the Device File Explorer by selecting View > Tool Windows > Device File Explorer from both the simulator and the device. The `.blog` file will be located within your application's directory, which should look something like `/data/data/[app_name_space:com.contoso.com.acsquickstartapp]/files/acs_sdk.blog`. You can attach this file to your support request.
+On Android Studio, navigate to the Device File Explorer by selecting View > Tool Windows > Device File Explorer from both the simulator and the device. The `.blog` file is located within your application's directory, which should look something like `/data/data/[app_name_space:com.contoso.com.acsquickstartapp]/files/acs_sdk.blog`. You can attach this file to your support request.
On Android Studio, navigate to the Device File Explorer by selecting View > Tool
When developing for Windows, your logs are stored in `.blog` files. Note that you can't view the logs directly because they're encrypted.
-These can be accessed by looking at where your app is keeping its local data. There are many ways to figure out where a UWP app keeps its local data, the following steps are just one of these ways:
+These files can be accessed by looking at where your app keeps its local data. There are many ways to figure out where a UWP app keeps its local data; the following steps are just one way:
1. Open a Windows Command Prompt (Windows Key + R)
2. Type `cmd.exe`
3. Type `where /r %USERPROFILE%\AppData acs*.blog`
To verify your Teams License eligibility via Teams web client, follow the steps
1. If the authentication is successful and you remain in the https://teams.microsoft.com/ domain, then your Teams License is eligible. If authentication fails or you're redirected to the https://teams.live.com/v2/ domain, then your Teams License isn't eligible to use Azure Communication Services support for Teams users.

#### Checking your current Teams license via Microsoft Graph API
-You can find your current Teams license using [licenseDetails](/graph/api/resources/licensedetails) Microsoft Graph API that returns licenses assigned to a user. Follow the steps below to use the Graph Explorer tool to view licenses assigned to a user:
+You can find your current Teams license using the [licenseDetails](/graph/api/resources/licensedetails) Microsoft Graph API, which returns the licenses assigned to a user. Follow the steps below to use the Graph Explorer tool to view licenses assigned to a user:
1. Open your browser and navigate to [Graph Explorer](https://developer.microsoft.com/graph/graph-explorer)
1. Sign in to Graph Explorer using your credentials.
The below error codes are exposed by Call Automation SDK.
|--|--|--|
| 400 | Bad request | The input request is invalid. Look at the error message to determine which input is incorrect. |
| 400 | Play Failed | Ensure your audio file is WAV, 16 KHz, mono, and make sure the file URL is publicly accessible. |
-| 400 | Recognize Failed | Check the error message. The message will highlight if this is due to timeout being reached or if operation was canceled. For more information about the error codes and messages you can check our how-to guide for [gathering user input](../how-tos/call-automation/recognize-action.md#event-codes).
+| 400 | Recognize Failed | Check the error message. The message highlights whether this failure is due to a timeout being reached or the operation being canceled. For more information about the error codes and messages, you can check our how-to guide for [gathering user input](../how-tos/call-automation/recognize-action.md#event-codes). |
| 401 | Unauthorized | HMAC authentication failed. Verify whether the connection string used to create CallAutomationClient is correct. |
| 403 | Forbidden | Request is forbidden. Make sure that you have access to the resource you are trying to access. |
| 404 | Resource not found | The call you are trying to act on doesn't exist. For example, transferring a call that has already disconnected. |
The below error codes are exposed by Call Automation SDK.
| 502 | Bad gateway | Retry after a delay with a fresh HTTP client. |

Consider the below tips when troubleshooting certain issues.

-- Your application is not getting IncomingCall Event Grid event: Make sure the application endpoint has been [validated with Event Grid](../../event-grid/webhook-event-delivery.md) at the time of creating event subscription. The provisioning status for your event subscription will be marked as succeeded if the validation was successful.
+- Your application is not getting the IncomingCall Event Grid event: Make sure the application endpoint has been [validated with Event Grid](../../event-grid/webhook-event-delivery.md) at the time of creating the event subscription. The provisioning status for your event subscription is marked as succeeded if the validation was successful.
- Getting the error 'The field CallbackUri is invalid': Call Automation does not support HTTP endpoints. Make sure the callback URL you provide supports HTTPS.
- PlayAudio action does not play anything: Currently only Wave file (.wav) format is supported for audio files. The audio content in the wave file must be mono (single-channel), 16-bit samples with a 16,000 (16 KHz) sampling rate.
- Actions on PSTN endpoints aren't working: CreateCall, Transfer, AddParticipant, and Redirect to phone numbers require you to set the SourceCallerId in the action request. Unless you are using Direct Routing, the source caller ID should be a phone number owned by your Communication Services resource for the action to succeed.
The Azure Communication Services SMS SDK uses the following error codes to help
## Related information
-- [Logs and diagnostics](logging-and-diagnostics.md)
+- Access logs for [voice and video](./analytics/logs/voice-and-video-logs.md), [chat](./analytics/logs/chat-logs.md), [email](./analytics/logs/email-logs.md), [network traversal](./analytics/logs/network-traversal-logs.md), [recording](./analytics/logs/recording-logs.md), [SMS](./analytics/logs/sms-logs.md) and [call automation](./analytics/logs/call-automation-logs.md).
- [Metrics](metrics.md)
- [Service limits](service-limits.md)
communication-services Pre Call Diagnostics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/voice-video-calling/pre-call-diagnostics.md
When the Pre-Call diagnostic test runs, behind the scenes it uses calling minute
- [Check your network condition with the diagnostics tool](../developer-tools/network-diagnostic.md)
- [Explore User-Facing Diagnostic APIs](../voice-video-calling/user-facing-diagnostics.md)
- [Enable Media Quality Statistics in your application](../voice-video-calling/media-quality-sdk.md)
-- [Consume call logs with Azure Monitor](../analytics/call-logs-azure-monitor.md)
+- [Consume call logs with Azure Monitor](../analytics/logs/voice-and-video-logs.md)
communication-services Spotlight https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/how-tos/calling-sdk/spotlight.md
+
+ Title: Spotlight states
+
+description: Use Azure Communication Services SDKs to send spotlight state.
+ Last updated : 03/01/2023
+# Spotlight states
++
+In this article, you'll learn how to implement the Microsoft Teams spotlight capability with Azure Communication Services Calling SDKs. This capability allows users in a call or meeting to pin and unpin videos for everyone.
+
+Because the video stream resolution of a participant is increased when they're spotlighted, note that the settings made on [Video Constraints](../../concepts/voice-video-calling/video-constraints.md) also apply to spotlighted video.
++
+## Prerequisites
+
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+- A deployed Communication Services resource. [Create a Communication Services resource](../../quickstarts/create-communication-resource.md).
+- A user access token to enable the calling client. For more information, see [Create and manage access tokens](../../quickstarts/identity/access-tokens.md).
+- Optional: Complete the quickstart to [add voice calling to your application](../../quickstarts/voice-video-calling/getting-started-with-calling.md)
+
+Communication Services or Microsoft 365 users can call the spotlight APIs based on role type and conversation type, as summarized in the following tables. A code sketch follows the tables.
+
+**In a one-to-one call or group call scenario, the following APIs are supported for both Communication Services and Microsoft 365 users**
+
+|APIs| Organizer | Presenter | Attendee |
+|-|--|--|--|
+| startSpotlight | ✔️ | ✔️ | ✔️ |
+| stopSpotlight | ✔️ | ✔️ | ✔️ |
+| stopAllSpotlight | ✔️ | ✔️ | ✔️ |
+| getSpotlightedParticipants | ✔️ | ✔️ | ✔️ |
+
+**For the meeting scenario, the following APIs are supported for both Communication Services and Microsoft 365 users**
+
+|APIs| Organizer | Presenter | Attendee |
+|-|--|--|--|
+| startSpotlight | ✔️ | ✔️ | |
+| stopSpotlight | ✔️ | ✔️ | ✔️ |
+| stopAllSpotlight | ✔️ | ✔️ | |
+| getSpotlightedParticipants | ✔️ | ✔️ | ✔️ |
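+
+As a rough illustration, the following sketch shows how these APIs might be called from the JavaScript Calling SDK. The feature name (`Features.Spotlight`), the event name, and the participant identifier shape are assumptions for illustration; consult the Calling SDK reference for the authoritative API surface.
+
+``` javascript
+
+    import { Features } from '@azure/communication-calling';
+
+    // Assumes `call` is an established Call object from the Calling SDK.
+    const spotlightFeature = call.feature(Features.Spotlight);
+
+    // Pin a participant's video for everyone in the call or meeting.
+    await spotlightFeature.startSpotlight([{ communicationUserId: '<USER_ID>' }]);
+
+    // React when the set of spotlighted participants changes.
+    spotlightFeature.on('spotlightChanged', () => {
+        const spotlighted = spotlightFeature.getSpotlightedParticipants();
+        console.log(`Spotlighted participants: ${spotlighted.length}`);
+    });
+
+    // Unpin the participant, or clear all spotlights where your role allows it.
+    await spotlightFeature.stopSpotlight([{ communicationUserId: '<USER_ID>' }]);
+    await spotlightFeature.stopAllSpotlight();
+
+```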
++
+## Next steps
+- [Learn how to manage calls](./manage-calls.md)
+- [Learn how to manage video](./manage-video.md)
+- [Learn how to record calls](./record-calls.md)
+- [Learn how to transcribe calls](./call-transcription.md)
communication-services Archive Chat Threads https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/how-tos/chat-sdk/archive-chat-threads.md
+
+ Title: Archive your chat threads
+
+description: Learn how to archive chat threads and messages with your own storage.
+ Last updated : 03/24/2023
+# Archiving chat threads into your preferred storage solution
+
+In this guide, learn how to move chat messages into your own storage in near real time, or move entire chat threads once conversations are complete. You can maintain an archive of chat threads or messages for compliance reasons, to integrate with Azure OpenAI, or both.
+
+## Prerequisites
+
+- An Azure account with an active subscription.
+- An active Communication Services resource and connection string. [Create a Communication Services resource](../../quickstarts/create-communication-resource.md).
+- A storage account. This guide uses Azure Blob Storage as an example; you can use the portal to set up an [account](../../../event-grid/blob-event-quickstart-portal.md), or use any other storage option that you prefer.
+- If you would like to archive messages in near real time, enable Azure Event Grid, which is a paid service (this prerequisite is only for option 2).
+
+## About Event Grid
+
+[Event Grid](../../../event-grid/overview.md) is a cloud-based eventing service. You need to subscribe to [communication service events](../../../event-grid/event-schema-communication-services.md), and trigger an event in order to archive the messages in near real time. Typically, you send events to an endpoint that processes the event data and takes actions.
+
+## Set up the environment
+
+To set up the environment that you use to generate and receive events, take the steps in the following sections.
+
+### Register an Event Grid resource provider
+
+If you haven't previously used Event Grid in your Azure subscription, you might need to register your Event Grid resource provider. To register the provider, follow these steps:
+
+1. Go to the Azure portal.
+1. On the left menu, select **Subscriptions**.
+1. Select the subscription that you use for Event Grid.
+1. On the left menu, under **Settings**, select **Resource providers**.
+1. Find **Microsoft.EventGrid**.
+1. If your resource provider isn't registered, select **Register**.
+
+It might take a moment for the registration to finish. Select **Refresh** to update the status. When **Registered** appears under **Status**, you're ready to continue.
+
+### Deploy the Event Grid viewer
+
+You need to use an Event Grid viewer to view events in near-real time. The viewer provides the user with the experience of a real-time feed.
+
+There are two methods for archiving chat threads. You can choose to archive messages when the thread is inactive or in near real time.
+
+## Option 1: Archiving inactive conversations using a back end application
+
+This option is suited when your chat volume is high and multiple parties are involved.
+
+Create a backend application that runs jobs to move chat threads into your own storage. We recommend archiving when the thread is no longer active, that is, when the conversation with the customer is complete.
+
+The backend application would run a job to do the following steps (a sketch follows the list):
+
+1. [List](../../quickstarts/chat/get-started.md?tabs=windows&pivots=platform-azcli#list-chat-messages-in-a-chat-thread) the messages in the chat thread you wish to archive
+2. Write the chat thread in the desired format you wish to store it in, for example, JSON or CSV
+3. Copy the thread in that format as a blob into Azure Blob Storage
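+
+A minimal sketch of such a job using the JavaScript SDKs (`@azure/communication-chat` and `@azure/storage-blob`) is shown below; the endpoint, token, connection string, and container name are placeholders you supply:
+
+``` javascript
+
+    import { ChatClient } from '@azure/communication-chat';
+    import { AzureCommunicationTokenCredential } from '@azure/communication-common';
+    import { BlobServiceClient } from '@azure/storage-blob';
+
+    async function archiveThread(threadId) {
+        const chatClient = new ChatClient(
+            'https://<RESOURCE_NAME>.communication.azure.com',
+            new AzureCommunicationTokenCredential('<USER_ACCESS_TOKEN>')
+        );
+
+        // Step 1: list the messages in the chat thread.
+        const threadClient = chatClient.getChatThreadClient(threadId);
+        const messages = [];
+        for await (const message of threadClient.listMessages()) {
+            messages.push(message);
+        }
+
+        // Step 2: write the thread in the desired format (JSON here).
+        const payload = JSON.stringify(messages, null, 2);
+
+        // Step 3: copy the thread as a blob into Azure Blob Storage.
+        const blobService = BlobServiceClient.fromConnectionString('<STORAGE_CONNECTION_STRING>');
+        const container = blobService.getContainerClient('chat-archive');
+        await container.createIfNotExists();
+        await container.getBlockBlobClient(`${threadId}.json`).upload(payload, Buffer.byteLength(payload));
+    }
+
+```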
+
+## Option 2: Archiving chat messages in real-time
+
+This option is suited if the chat volume is low, because conversations are archived as they happen in real time.
++
+Follow these steps for archiving messages:
+
+- Subscribe to Event Grid events which come with Azure Event Grid through web hooks. Azure Communications Chat service supports the following [events](../../concepts/chat/concepts.md#real-time-notifications) for real-time notifications. The following events are recommended: Message Received [event](../../../event-grid/communication-services-chat-events.md#microsoftcommunicationchatmessagereceived-event), Message Edited [event](../../../event-grid/communication-services-chat-events.md#microsoftcommunicationchatmessageedited-event), and Message Deleted [event](../../../event-grid/communication-services-chat-events.md#microsoftcommunicationchatmessagedeleted-event).
+- Validate the [events](../../how-tos/event-grid/view-events-request-bin.md) by configuring your resource to receive these events
+- Test your Event Grid handler [locally](../../how-tos/event-grid/local-testing-event-grid.md) to ensure that you're receiving the events that you need for archiving. A minimal handler sketch follows this list.
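+
+For illustration, a handler for these events might look like the following Express-based sketch; it answers the Event Grid subscription validation handshake and archives each received chat message as its own blob. The route, container name, and connection string are assumptions:
+
+``` javascript
+
+    import express from 'express';
+    import { BlobServiceClient } from '@azure/storage-blob';
+
+    const app = express();
+    app.use(express.json());
+
+    const container = BlobServiceClient
+        .fromConnectionString('<STORAGE_CONNECTION_STRING>')
+        .getContainerClient('chat-archive');
+
+    app.post('/api/chat-events', async (req, res) => {
+        for (const event of req.body) {
+            // Complete the Event Grid webhook validation handshake.
+            if (event.eventType === 'Microsoft.EventGrid.SubscriptionValidationEvent') {
+                return res.json({ validationResponse: event.data.validationCode });
+            }
+            // Archive each received chat message as its own blob.
+            if (event.eventType === 'Microsoft.Communication.ChatMessageReceived') {
+                const body = JSON.stringify(event.data);
+                await container
+                    .getBlockBlobClient(`${event.data.threadId}/${event.data.messageId}.json`)
+                    .upload(body, Buffer.byteLength(body));
+            }
+        }
+        res.sendStatus(200);
+    });
+
+    app.listen(3000);
+
+```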
+
+> [!Note]
+> You would have to pay for [events](https://azure.microsoft.com/pricing/details/event-grid/).
+
+## Next steps
+
+* For an introduction to Azure Event Grid Concepts, see [Concepts in Event Grid](../../../event-grid/concepts.md)
+* Service [Limits](../../concepts/service-limits.md)
+* [Troubleshooting](../../concepts/troubleshooting-info.md)
+* Help and support [options](../../support.md)
+
communication-services Enable User Engagement Tracking https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/email/enable-user-engagement-tracking.md
You can now subscribe to Email User Engagement operational logs - provides infor
## Next steps
-* [Get started with log analytics in Azure Communication Service](../../concepts/logging-and-diagnostics.md)
-
+- Access logs for [Email Communication Service](../../concepts/analytics/logs/email-logs.md).
The following documents may be interesting to you:
communication-services Click To Call Widget https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/tutorials/widgets/click-to-call-widget.md
+
+ Title: Tutorial - Embed a Teams call widget into your web application
+
+description: Learn how to use Azure Communication Services to embed a calling widget into your web application.
+ Last updated : 04/17/2023
+# Embed a Teams call widget into your web application
+
+Enable your customers to talk with your support agent on Teams through a call interface directly embedded into your web application.
+
+## Architecture overview
+
+## Prerequisites
+- An Azure account with an active subscription. For details, see [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+- [Visual Studio Code](https://code.visualstudio.com/) on one of the [supported platforms](https://code.visualstudio.com/docs/supporting/requirements#_platforms).
+- An active Communication Services resource and connection string. [Create a Communication Services resource](../../quickstarts/create-communication-resource.md).
+
+## Set up an Azure Function to provide access tokens
+
+Follow instructions from our [trusted user access service tutorial](../trusted-service-tutorial.md) to deploy your Azure Function for access tokens. This service returns an access token that our widget uses to authenticate to Azure Communication Services and start the call to the Teams user we define.
+
+## Set up boilerplate vanilla web application
+
+1. Create an HTML file named `index.html` and add the following code to it:
+
+``` html
+
+ <!DOCTYPE html>
+ <html>
+ <head>
+ <meta charset="utf-8">
+ <title>Call Widget App - Vanilla</title>
+ <link rel="stylesheet" href="style.css">
+ </head>
+ <body>
+ <div id="call-widget">
+ <div id="call-widget-header">
+ <div id="call-widget-header-title">Call Widget App</div>
+ <button class='widget'> ? </button>
+ <div class='callWidget'></div>
+ </div>
+ </div>
+ </body>
+ </html>
+
+```
+
+2. Create a CSS file named `style.css` and add the following code to it:
+
+``` css
+
+ .widget {
+ height: 75px;
+ width: 75px;
+ position: absolute;
+ right: 0;
+ bottom: 0;
+ background-color: blue;
+ margin-bottom: 35px;
+ margin-right: 35px;
+ border-radius: 50%;
+ text-align: center;
+ vertical-align: middle;
+ line-height: 75px;
+ color: white;
+ font-size: 30px;
+ }
+
+ .callWidget {
+ height: 400px;
+ width: 600px;
+ background-color: blue;
+ position: absolute;
+ right: 35px;
+ bottom: 120px;
+ z-index: 10;
+ display: none;
+ border-radius: 5px;
+ border-style: solid;
+ border-width: 5px;
+ }
+
+```
+
+3. Configure the call window to be hidden by default. We show it when the user clicks the button.
+
+``` html
+
+ <script>
+ var open = false;
+ const button = document.querySelector('.widget');
+ const content = document.querySelector('.callWidget');
+ button.addEventListener('click', async function() {
+ if(!open){
+ open = !open;
+ content.style.display = 'block';
+ button.innerHTML = 'X';
+ //Add code to initialize call widget here
+ } else if (open) {
+ open = !open;
+ content.style.display = 'none';
+ button.innerHTML = '?';
+ }
+ });
+
+ async function getAccessToken(){
+ //Add code to get access token here
+ }
+ </script>
+
+```
+
+At this point, we have set up a static HTML page with a button that opens a call widget when clicked. Next, we add the widget script code. It makes a call to our Azure Function to get the access token and then uses it to initialize our call client for Azure Communication Services and start the call to the Teams user we define.
+
+## Fetch an access token from your Azure Function
+
+Add the following code to the `getAccessToken()` function:
+
+``` javascript
+
+ async function getAccessToken(){
+ const response = await fetch('https://<your-function-name>.azurewebsites.net/api/GetAccessToken?code=<your-function-key>');
+ const data = await response.json();
+ return data; // the widget needs both response.user and response.userToken
+ }
+
+```
+You need to add the URL and function key of your Azure Function. You can find these values in the Azure portal under your Azure Function resource.
++
+## Initialize the call widget
+
+1. Add a script tag to load the call widget script:
+
+``` html
+
+ <script src="https://github.com/ddematheu2/ACS-UI-Library-Widget/releases/download/widget/callComposite.js"></script>
+
+```
+
+We provide a test script hosted on GitHub that you can use for testing. For production scenarios, we recommend hosting the script on your own CDN. For more information on how to build your own bundle, see [this article](https://azure.github.io/communication-ui-library/?path=/docs/use-composite-in-non-react-environment--page#build-your-own-composite-js-bundle-files).
+
+2. Add the following code under the button event listener:
+
+``` javascript
+
+ button.addEventListener('click', async function() {
+ if(!open){
+ open = !open;
+ content.style.display = 'block';
+ button.innerHTML = 'X';
+ let response = await getAccessToken();
+ console.log(response);
+ const callAdapter = await callComposite.loadCallComposite(
+ {
+ displayName: "Test User",
+ locator: { participantIds: ['INSERT USER UNIQUE IDENTIFIER FROM MICROSOFT GRAPH']},
+ userId: response.user,
+ token: response.userToken
+ },
+ content,
+ {
+ formFactor: 'mobile',
+ key: new Date()
+ }
+ );
+ } else if (open) {
+ open = !open;
+ content.style.display = 'none';
+ button.innerHTML = '?';
+ }
+ });
+
+```
+
+Add a Microsoft Graph [User](https://learn.microsoft.com/graph/api/resources/user?view=graph-rest-1.0) ID to the `participantIds` array. You can find this value through [Microsoft Graph](https://learn.microsoft.com/graph/api/user-get?view=graph-rest-1.0&tabs=http) or through [Microsoft Graph explorer](https://developer.microsoft.com/graph/graph-explorer) for testing purposes. There you can grab the `id` value from the response. A short lookup sketch follows.
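+
+As a quick illustration, you could also look up the ID from code, assuming you already have a Microsoft Graph access token with sufficient permissions (the UPN and token are placeholders):
+
+``` javascript
+
+    // Look up a user's Microsoft Graph ID by user principal name.
+    async function getGraphUserId(upn, graphToken) {
+        const response = await fetch(`https://graph.microsoft.com/v1.0/users/${upn}`, {
+            headers: { Authorization: `Bearer ${graphToken}` }
+        });
+        const user = await response.json();
+        return user.id; // use this value in the participantIds array
+    }
+
+```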
+
+## Run code
+
+Open `index.html` in a browser. This code initializes the call widget when the button is clicked. It makes a call to our Azure Function to get the access token and then uses it to initialize our call client for Azure Communication Services and start the call to the Teams user we define.
cosmos-db Database Encryption At Rest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/database-encryption-at-rest.md
A: Microsoft has a set of internal guidelines for encryption key rotation, which
### Q: Can I use my own encryption keys?

A: Yes, this feature is available for new Azure Cosmos DB accounts, and you should configure it at the time of account creation. For more information, go through the [Customer-managed Keys](./how-to-setup-cmk.md) document.
+> [!WARNING]
+> The following field names are reserved on Cassandra API tables in accounts using Customer-managed Keys:
+>
+> - `id`
+> - `ttl`
+> - `_ts`
+> - `_etag`
+> - `_rid`
+> - `_self`
+> - `_attachments`
+> - `_epk`
+>
+> When Customer-managed Keys are not enabled, only field names beginning with `__sys_` are reserved.
+
### Q: What regions have encryption turned on?

A: All Azure Cosmos DB regions have encryption turned on for all user data.
cosmos-db How To Setup Customer Managed Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/how-to-setup-customer-managed-keys.md
You must store customer-managed keys in [Azure Key Vault](../key-vault/general/o
> [!NOTE]
> Currently, customer-managed keys are available only for new Azure Cosmos DB accounts. You should configure them during account creation.
+> [!WARNING]
+> The following field names are reserved on Cassandra API tables in accounts using Customer-managed Keys:
+>
+> - `id`
+> - `ttl`
+> - `_ts`
+> - `_etag`
+> - `_rid`
+> - `_self`
+> - `_attachments`
+> - `_epk`
+>
+> When Customer-managed Keys are not enabled, only field names beginning with `__sys_` are reserved.
+
## Prerequisites

- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
cosmos-db Optimize Cost Reads Writes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/optimize-cost-reads-writes.md
The only factor affecting the RU charge of a point read (besides the consistency
| 1 KB | 1 RU |
| 100 KB | 10 RUs |
-Because point reads (key/value lookups on the item ID) are the most efficient kind of read, you should make sure your item ID has a meaningful value so you can fetch your items with a point read (instead of a query) when possible.
+Because point reads (key/value lookups on the item ID and partition key) are the most efficient kind of read, you should make sure your item ID has a meaningful value so you can fetch your items with a point read (instead of a query) when possible.
+
+> [!NOTE]
+> In the API for NoSQL, point reads can only be made using the REST API or SDKs. Queries that filter on one item's ID and partition key aren't considered a point read. To see an example using the .NET SDK, see [read an item in Azure Cosmos DB for NoSQL.](./nosql/how-to-dotnet-read-item.md)
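+
+To make the distinction concrete, here's a minimal sketch of a point read using the JavaScript SDK (`@azure/cosmos`); the account, database, container, and item values are placeholders:
+
+``` javascript
+
+    import { CosmosClient } from '@azure/cosmos';
+
+    const client = new CosmosClient({ endpoint: '<ACCOUNT_ENDPOINT>', key: '<ACCOUNT_KEY>' });
+    const container = client.database('<DATABASE_ID>').container('<CONTAINER_ID>');
+
+    // Point read: item ID plus partition key value, served without the query engine.
+    const { resource: item, requestCharge } = await container.item('<ITEM_ID>', '<PARTITION_KEY_VALUE>').read();
+    console.log(`Read item ${item.id} for ${requestCharge} RUs`); // about 1 RU for a 1-KB item
+
+```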
### Queries
cosmos-db Request Units https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/request-units.md
Azure Cosmos DB supports many APIs, such as SQL, MongoDB, Cassandra, Gremlin, an
Azure Cosmos DB normalizes the cost of all database operations using Request Units (or RUs, for short). Request unit is a performance currency abstracting the system resources such as CPU, IOPS, and memory that are required to perform the database operations supported by Azure Cosmos DB.
-The cost to do a point read (fetching a single item by its ID and partition key value) for a 1-KB item is one Request Unit (or one RU). All other database operations are similarly assigned a cost using RUs. No matter which API you use to interact with your Azure Cosmos DB container, RUs measure the actual costs of using that API. Whether the database operation is a write, point read, or query, costs are always measured in RUs.
+The cost to do a [point read](optimize-cost-reads-writes.md#point-reads) (fetching a single item by its ID and partition key value) for a 1-KB item is one Request Unit (or one RU). All other database operations are similarly assigned a cost using RUs. No matter which API you use to interact with your Azure Cosmos DB container, RUs measure the actual costs of using that API. Whether the database operation is a write, point read, or query, costs are always measured in RUs.
> [!VIDEO https://learn.microsoft.com/_themes/docs.theme/master/en-us/_themes/global/video-embed.html?id=772fba63-62c7-488c-acdb-a8f686a3b5f4]
cost-management-billing Create Enterprise Subscription https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/create-enterprise-subscription.md
Previously updated : 03/29/2023 Last updated : 04/18/2023
You need the following permissions to create subscriptions for an EA:
## Create an EA subscription
-Use the following information to create an EA subscription.
+An account owner uses the following information to create an EA subscription.
+
+>[!NOTE]
+> If you want to create an Enterprise Dev/Test subscription, an enterprise administrator must enable account owners to create them. Otherwise, the option to create them isn't available. To enable the dev/test offer for an enrollment, see [Enable the enterprise dev/test offer](direct-ea-administration.md#enable-the-enterprise-devtest-offer).
1. Sign in to the [Azure portal](https://portal.azure.com).
1. Navigate to **Subscriptions** and then select **Add**.
cost-management-billing Direct Ea Administration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/direct-ea-administration.md
Title: EA Billing administration on the Azure portal
description: This article explains the common tasks that an enterprise administrator accomplishes in the Azure portal.
Previously updated : 04/06/2023 Last updated : 04/18/2023
Enterprise agreements and the customers accessing the agreements can have multip
1. Select **Billing scopes** from the navigation menu and then select the billing account that you want to work with.

   :::image type="content" source="./media/direct-ea-administration/select-billing-scope.png" alt-text="Screenshot showing select a billing account." lightbox="./media/direct-ea-administration/select-billing-scope.png" :::
+## Activate your enrollment
+
+To activate your enrollment, the initial enterprise administrator signs in to the Azure portal using their work, school, or Microsoft account.
+If you've been set up as the enterprise administrator, you don't need to receive the activation email. You can sign in to the Azure portal and activate the enrollment.
+
+### To activate an enrollment
+
+1. Sign in to the [Azure portal](https://portal.azure.com/#blade/Microsoft_Azure_GTM/ModernBillingMenuBlade/AllBillingScopes).
+1. Search for **Cost Management + Billing** and select it.
+ :::image type="content" source="./media/direct-ea-administration/search-cost-management.png" alt-text="Screenshot showing search for Cost Management + Billing." lightbox="./media/direct-ea-administration/search-cost-management.png" :::
+1. Select the enrollment that you want to activate.
+ :::image type="content" source="./media/direct-ea-administration/select-billing-scope.png" alt-text="Screenshot showing select a billing account." lightbox="./media/direct-ea-administration/select-billing-scope.png" :::
+1. Once the enrollment is selected, its status changes to active.
+1. You can view the enrollment status under **Essentials** in the Summary view.
## View enrollment details

An Azure enterprise administrator (EA admin) can view and manage enrollment properties and policy to ensure that enrollment settings are correctly configured.
data-factory Connector Sap Change Data Capture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-sap-change-data-capture.md
Previously updated : 11/17/2022 Last updated : 04/14/2023

# Transform data from an SAP ODP source using the SAP CDC connector in Azure Data Factory or Azure Synapse Analytics
To prepare an SAP CDC dataset, follow [Prepare the SAP CDC source dataset](sap-c
SAP CDC datasets can be used as a source in mapping data flow. Since the raw SAP ODP change feed is difficult to interpret and to correctly update to a sink, mapping data flow takes care of this by automatically evaluating technical attributes provided by the ODP framework (e.g., ODQ_CHANGEMODE). This allows users to concentrate on the required transformation logic without having to bother with the internals of the SAP ODP change feed, the right order of changes, etc.
+To get started, create a pipeline with a mapping data flow.
++
+Next, specify a staging folder in Azure Data Lake Gen2, which serves as intermediate storage for data extracted from SAP.
++
### Mapping data flow properties

To create a mapping data flow using the SAP CDC connector as a source, complete the following steps:
defender-for-cloud Alerts Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/alerts-reference.md
Title: Reference table for all security alerts in Microsoft Defender for Cloud
description: This article lists the security alerts visible in Microsoft Defender for Cloud
-- Previously updated : 03/29/2023 Last updated : 04/18/2023

# Security alerts - a reference guide
Microsoft Defender for Containers provides security alerts on the cluster level
| **PowerZure exploitation toolkit used to enumerate storage containers, shares, and tables**<br>(ARM_PowerZure.ShowStorageContent) | PowerZure exploitation toolkit was used to enumerate storage shares, tables, and containers. This was detected by analyzing Azure Resource Manager operations in your subscription. | - | High |
| **PowerZure exploitation toolkit used to execute a Runbook in your subscription**<br>(ARM_PowerZure.StartRunbook) | PowerZure exploitation toolkit was used to execute a Runbook. This was detected by analyzing Azure Resource Manager operations in your subscription. | - | High |
| **PowerZure exploitation toolkit used to extract Runbooks content**<br>(ARM_PowerZure.AzureRunbookContent) | PowerZure exploitation toolkit was used to extract Runbook content. This was detected by analyzing Azure Resource Manager operations in your subscription. | Collection | High |
-| **PREVIEW - Activity from a risky IP address**<br>(ARM.MCAS_ActivityFromAnonymousIPAddresses) | Users activity from an IP address that has been identified as an anonymous proxy IP address has been detected.<br>These proxies are used by people who want to hide their device's IP address, and can be used for malicious intent. This detection uses a machine learning algorithm that reduces false positives, such as mis-tagged IP addresses that are widely used by users in the organization.<br>Requires an active Microsoft Defender for Cloud Apps license. | - | Medium |
-| **PREVIEW - Activity from infrequent country**<br>(ARM.MCAS_ActivityFromInfrequentCountry) | Activity from a location that wasn't recently or ever visited by any user in the organization has occurred.<br>This detection considers past activity locations to determine new and infrequent locations. The anomaly detection engine stores information about previous locations used by users in the organization.<br>Requires an active Microsoft Defender for Cloud Apps license. | - | Medium |
| **PREVIEW - Azurite toolkit run detected**<br>(ARM_Azurite) | A known cloud-environment reconnaissance toolkit run has been detected in your environment. The tool [Azurite](https://github.com/mwrlabs/Azurite) can be used by an attacker (or penetration tester) to map your subscriptions' resources and identify insecure configurations. | Collection | High |
-| **PREVIEW - Impossible travel activity**<br>(ARM.MCAS_ImpossibleTravelActivity) | Two user activities (in a single or multiple sessions) have occurred, originating from geographically distant locations. This occurs within a time period shorter than the time it would have taken the user to travel from the first location to the second. This indicates that a different user is using the same credentials.<br>This detection uses a machine learning algorithm that ignores obvious false positives contributing to the impossible travel conditions, such as VPNs and locations regularly used by other users in the organization. The detection has an initial learning period of seven days, during which it learns a new user's activity pattern.<br>Requires an active Microsoft Defender for Cloud Apps license. | - | Medium |
-| **PREVIEW - Suspicious creation of compute resources detected**<br>(ARM_SuspiciousComputeCreation) | Microsoft Defender for Resource Manager identified a suspicious creation of compute resources in your subscription utilizing Virtual Machines/Azure Scale Set. The identified operations are designed to allow administrators to efficiently manage their environments by deploying new resources when needed. While this activity may be legitimate, a threat actor might utilize such operations to conduct crypto mining.<br> The activity is deemed suspicious as the compute resources scale is higher than previously observed in the subscription. <br> This can indicate that the principal is compromised and is being used with malicious intent. | Impact | Medium |
| **PREVIEW - Suspicious key vault recovery detected**<br>(Arm_Suspicious_Vault_Recovering) | Microsoft Defender for Resource Manager detected a suspicious recovery operation for a soft-deleted key vault resource.<br> The user recovering the resource is different from the user that deleted it. This is highly suspicious because the user rarely invokes such an operation. In addition, the user logged on without multi-factor authentication (MFA).<br> This might indicate that the user is compromised and is attempting to discover secrets and keys to gain access to sensitive resources, or to perform lateral movement across your network. | Lateral movement | Medium/high | | **PREVIEW - Suspicious management session using an inactive account detected**<br>(ARM_UnusedAccountPersistence) | Subscription activity logs analysis has detected suspicious behavior. A principal not in use for a long period of time is now performing actions that can secure persistence for an attacker. | Persistence | Medium | | **PREVIEW - Suspicious invocation of a high-risk 'Credential Access' operation by a service principal detected**<br>(ARM_AnomalousServiceOperation.CredentialAccess) | Microsoft Defender for Resource Manager identified a suspicious invocation of a high-risk operation in your subscription which might indicate an attempt to access credentials. The identified operations are designed to allow administrators to efficiently manage their environments. While this activity may be legitimate, a threat actor might utilize such operations to access restricted credentials and compromise resources in your environment. This can indicate that the service principal is compromised and is being used with malicious intent. | Credential access | Medium |
Defender for Cloud's supported kill chain intents are based on [version 9 of the
## Defender for Servers alerts to be deprecated
-The following tables include the Defender for Servers security alerts [to be deprecated in April, 2023](upcoming-changes.md#deprecation-and-improvement-of-selected-alerts-for-windows-and-linux-servers).
+The following tables include the Defender for Servers security alerts [to be deprecated in April, 2023](release-notes.md#deprecation-and-improvement-of-selected-alerts-for-windows-and-linux-servers).
### Linux alerts to be deprecated
defender-for-cloud Attack Path Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/attack-path-reference.md
This section lists all of the cloud security graph components (connections and
| DEASM findings | Microsoft Defender External Attack Surface Management (DEASM) internet scanning findings | Public IP | | Privileged container | Indicates that a Kubernetes container runs in a privileged mode | Kubernetes container | | Uses host network | Indicates that a Kubernetes pod uses the network namespace of its host machine | Kubernetes pod |
-| Has high severity vulnerabilities | Indicates that a resource has high severity vulnerabilities | Azure VM, AWS EC2, Kubernetes image |
-| Vulnerable to remote code execution | Indicates that a resource has vulnerabilities allowing remote code execution | Azure VM, AWS EC2, Kubernetes image |
+| Has high severity vulnerabilities | Indicates that a resource has high severity vulnerabilities | Azure VM, AWS EC2, Container image |
+| Vulnerable to remote code execution | Indicates that a resource has vulnerabilities allowing remote code execution | Azure VM, AWS EC2, Container image |
| Public IP metadata | Lists the metadata of an Public IP | Public IP | | Identity metadata | Lists the metadata of an identity | Azure AD Identity |
This section lists all of the cloud security graph components (connections and
| Has permission to | Indicates that an identity has permissions to a resource or a group of resources | Azure AD user account, Managed Identity, IAM user, EC2 instance | All Azure & AWS resources| | Contains | Indicates that the source entity contains the target entity | Azure subscription, Azure resource group, AWS account, Kubernetes namespace, Kubernetes pod, Kubernetes cluster, GitHub owner, Azure DevOps project, Azure DevOps organization, Azure SQL server | All Azure & AWS resources, All Kubernetes entities, All DevOps entities, Azure SQL database | | Routes traffic to | Indicates that the source entity can route network traffic to the target entity | Public IP, Load Balancer, VNET, Subnet, VPC, Internet Gateway, Kubernetes service, Kubernetes pod| Azure VM, Azure VMSS, AWS EC2, Subnet, Load Balancer, Internet gateway, Kubernetes pod, Kubernetes service |
-| Is running | Indicates that the source entity is running the target entity as a process | Azure VM, EC2, Kubernetes container | SQL, Arc-Enabled SQL, Hosted MongoDB, Hosted MySQL, Hosted Oracle, Hosted PostgreSQL, Hosted SQL Server, Kubernetes image, Kubernetes pod |
+| Is running | Indicates that the source entity is running the target entity as a process | Azure VM, EC2, Kubernetes container | SQL, Arc-Enabled SQL, Hosted MongoDB, Hosted MySQL, Hosted Oracle, Hosted PostgreSQL, Hosted SQL Server, Container image, Kubernetes pod |
| Member of | Indicates that the source identity is a member of the target identities group | Azure AD group, Azure AD user | Azure AD group | | Maintains | Indicates that the source Kubernetes entity manages the life cycle of the target Kubernetes entity | Kubernetes workload controller, Kubernetes replica set, Kubernetes stateful set, Kubernetes daemon set, Kubernetes jobs, Kubernetes cron job | Kubernetes pod |
defender-for-cloud Auto Deploy Vulnerability Assessment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/auto-deploy-vulnerability-assessment.md
Previously updated : 03/21/2023 Last updated : 04/18/2023 # Automatically configure vulnerability assessment for your machines
To assess your machines for vulnerabilities, you can use one of the following so
:::image type="content" source="media/auto-deploy-vulnerability-assessment/turn-on-deploy-vulnerability-assessment.png" alt-text="Screenshot showing where to turn on deployment of vulnerability assessment for machines." lightbox="media/auto-deploy-vulnerability-assessment/turn-on-deploy-vulnerability-assessment.png"::: > [!TIP]
- > If you select the "Microsoft Defender for Cloud built-in Qualys solution" solution, Defender for Cloud enables the following policy: [(Preview) Configure machines to receive a vulnerability assessment provider](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f13ce0167-8ca6-4048-8e6b-f996402e3c1b).
+ > If you select the "Microsoft Defender for Cloud built-in Qualys solution" solution, Defender for Cloud enables the following policy: [Configure machines to receive a vulnerability assessment provider](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f13ce0167-8ca6-4048-8e6b-f996402e3c1b).
1. Select **Apply** and then select **Save**.
defender-for-cloud Concept Agentless Containers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/concept-agentless-containers.md
+
+ Title: Agentless Container Posture for Microsoft Defender for Cloud
+description: Learn how Agentless Container Posture offers discovery and visibility for Containers without installing an agent on your machines.
++ Last updated : 04/16/2023+++
+# Agentless Container Posture (Preview)
+
+With agentless discovery and visibility across the SDLC and runtime, you can identify security risks that exist in containers and Kubernetes realms.
+
+You can maximize the coverage of your container posture issues and extend your protection beyond the reach of agent-based assessments, taking a holistic approach to improving your posture. Examples include container vulnerability assessment insights as part of [Cloud Security Explorer](how-to-manage-cloud-security-explorer.md) and Kubernetes [Attack Path](attack-path-reference.md#azure-containers) analysis.
+
+Learn more about [Cloud Security Posture Management](concept-cloud-security-posture-management.md).
+
+> [!IMPORTANT]
+> The Agentless Container Posture preview features are available on a self-service, opt-in basis. Previews are provided "as is" and "as available" and are excluded from the service-level agreements and limited warranty. Agentless Container Posture previews are partially covered by customer support on a best-effort basis. As such, these features aren't meant for production use.
+
+## Capabilities
+
+Agentless Container Posture provides the following capabilities:
+
+- Using Kubernetes Attack Path analysis to visualize risks and threats to Kubernetes environments.
+- Using Cloud Security Explorer for risk hunting by querying various risk scenarios.
+- Viewing security insights, such as internet exposure, and other pre-defined security scenarios. For more information, search for `Kubernetes` in the [list of Insights](attack-path-reference.md#insights).
+- Agentless discovery and visibility within Kubernetes components.
+- Agentless container registry vulnerability assessment, using the image scanning results of your Azure Container Registry (ACR) with Cloud Security Explorer.
+
+ [Vulnerability assessment](defender-for-containers-vulnerability-assessment-azure.md) for Containers in Defender Cloud Security Posture Management (CSPM) gives you frictionless, wide, and instant visibility on actionable posture issues without the need for installed agents, network connectivity requirements, or container performance impact.
+
+All of these capabilities are available as part of the [Defender Cloud Security Posture Management](concept-cloud-security-posture-management.md) plan.
+
+## Availability
+
+| Aspect | Details |
+|||
+|Release state:|Preview|
+|Pricing:|Requires [Defender Cloud Security Posture Management (CSPM)](concept-cloud-security-posture-management.md) and is billed as shown on the [pricing page](https://azure.microsoft.com/pricing/details/defender-for-cloud/) |
+| Clouds: | :::image type="icon" source="./media/icons/yes-icon.png"::: Azure Commercial clouds<br> :::image type="icon" source="./media/icons/no-icon.png"::: Azure Government<br>:::image type="icon" source="./media/icons/no-icon.png"::: Azure China 21Vianet<br>:::image type="icon" source="./media/icons/no-icon.png"::: Connected AWS accounts<br>:::image type="icon" source="./media/icons/no-icon.png"::: Connected GCP accounts |
+| Permissions | You need Subscription Owner access, or User Access Admin together with Security Admin permissions, on the Azure subscription used for onboarding |
+
+## Prerequisites
+
+You need to have the Defender CSPM plan enabled. There's no dependency on Defender for Containers.
+
+This feature uses trusted access. Learn more about [AKS trusted access prerequisites](/azure/aks/trusted-access-feature#prerequisites).
+
+## Onboard Agentless Containers for CSPM
+
+Onboarding Agentless Containers for CSPM gives you wide visibility into your Kubernetes environments and container registries across the SDLC and runtime.
+
+**To onboard Agentless Containers for CSPM:**
+
+1. In the Azure portal, navigate to the Defender for Cloud's **Environment Settings** page.
+
+1. Select the subscription that's onboarded to the Defender CSPM plan, then select **Settings**.
+
+1. Ensure the **Agentless discovery for Kubernetes** and **Container registries vulnerability assessments** extensions are toggled to **On**.
+
+1. Select **Continue**.
+
+ :::image type="content" source="media/concept-agentless-containers/settings-continue.png" alt-text="Screenshot of selecting agentless discovery for Kubernetes and Container registries vulnerability assessments." lightbox="media/concept-agentless-containers/settings-continue.png":::
+
+1. Select **Save**.
+
+A notification message in the top-right corner confirms that the settings were saved successfully.
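If you prefer to script the onboarding rather than use the portal, a hedged sketch of the equivalent call against the Microsoft.Security pricings REST API follows. The api-version, extension names, and the string-typed `isEnabled` values are assumptions based on the preview and may change; the portal flow above remains the documented path.

```python
# Hedged sketch: enable the Defender CSPM plan with the two Agentless Container
# Posture extensions via the Microsoft.Security pricings REST API. The
# api-version, extension names, and string-typed isEnabled values are
# assumptions based on the preview and may change.
import requests
from azure.identity import DefaultAzureCredential

subscription_id = "<subscription-id>"  # placeholder
token = DefaultAzureCredential().get_token("https://management.azure.com/.default").token

url = (
    f"https://management.azure.com/subscriptions/{subscription_id}"
    "/providers/Microsoft.Security/pricings/CloudPosture?api-version=2023-01-01"
)
body = {
    "properties": {
        "pricingTier": "Standard",
        "extensions": [
            {"name": "AgentlessDiscoveryForKubernetes", "isEnabled": "True"},
            {"name": "ContainerRegistriesVulnerabilityAssessments", "isEnabled": "True"},
        ],
    }
}
response = requests.put(url, json=body, headers={"Authorization": f"Bearer {token}"})
response.raise_for_status()
print(response.json()["properties"]["extensions"])
```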
+
+## Agentless Container Posture extensions
+
+### Container registries vulnerability assessments
+
+For container registries vulnerability assessments, recommendations are available based on the vulnerability assessment timeline.
+
+Learn more about [image scanning](defender-for-containers-vulnerability-assessment-azure.md).
+
+### Agentless discovery for Kubernetes
+
+The system's architecture is based on a snapshot mechanism that runs at regular intervals.
+By enabling the Agentless discovery for Kubernetes extension, the following process occurs:
+
+- **Create**: MDC (Microsoft Defender for Cloud) creates an identity in customer environments called CloudPosture/securityOperator/DefenderCSPMSecurityOperator.
+
+- **Assign**: MDC assigns a built-in role called **Kubernetes Agentless Operator** to that identity at subscription scope.
+
+ The role contains the following permissions:
+ - AKS read (Microsoft.ContainerService/managedClusters/read)
+ - AKS Trusted Access with the following permissions:
+ - Microsoft.ContainerService/managedClusters/trustedAccessRoleBindings/write
+ - Microsoft.ContainerService/managedClusters/trustedAccessRoleBindings/read
+ - Microsoft.ContainerService/managedClusters/trustedAccessRoleBindings/delete
+
+ Learn more about [AKS Trusted Access](/azure/aks/trusted-access-feature).
+
+- **Discover**: Using the system-assigned identity, MDC discovers the AKS clusters in your environment through API calls to the AKS API server.
+
+- **Bind**: Upon discovery of an AKS cluster, MDC performs an AKS bind operation between the created identity and the Kubernetes role "Microsoft.Security/pricings/microsoft-defender-operator". The role is visible via API and gives MDC data plane read permission inside the cluster.
+
+### Refresh intervals
+
+Agentless information in Defender CSPM is updated once an hour through a snapshot mechanism. It can take up to **24 hours** to see results in Cloud Security Explorer and Attack Path.
+
+## FAQs
+
+### Why don't I see results from my clusters?
+
+If you don't see results from your clusters, check the following:
+
+- Do you have [stopped clusters](#what-do-i-do-if-i-have-stopped-clusters)?
+- Are your clusters [Read only (locked)](#what-do-i-do-if-i-have-read-only-clusters-locked)?
+
+### What do I do if I have stopped clusters?
+
+We suggest that you restart the stopped cluster to solve this issue.
+
+### What do I do if I have Read only clusters (locked)?
+
+We suggest that you do one of the following:
+
+- Remove the lock.
+- Perform the bind operation manually with an API request, as sketched after this list.
+
+Learn more about [locked resources](/azure/azure-resource-manager/management/lock-resources?tabs=json).
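For illustration, here's a hedged sketch of what that manual bind request could look like, using the trusted access role binding endpoint and the Kubernetes role named earlier. The api-version, binding name, and source resource ID format are assumptions; check the AKS trusted access documentation for current values.

```python
# Hedged sketch of the manual bind: create an AKS trusted access role binding
# that grants the Defender CSPM operator identity the Kubernetes role named
# above. The api-version, binding name, and source resource ID format are
# assumptions; verify against the AKS trusted access docs.
import requests
from azure.identity import DefaultAzureCredential

cluster_id = (
    "/subscriptions/<sub>/resourceGroups/<rg>"
    "/providers/Microsoft.ContainerService/managedClusters/<cluster>"
)  # placeholder resource ID
binding_name = "defender-cspm-binding"  # hypothetical name
source_id = "<resource ID of the DefenderCSPMSecurityOperator identity>"  # placeholder

token = DefaultAzureCredential().get_token("https://management.azure.com/.default").token
url = (
    f"https://management.azure.com{cluster_id}"
    f"/trustedAccessRoleBindings/{binding_name}?api-version=2022-04-02-preview"
)
body = {
    "properties": {
        "sourceResourceId": source_id,
        "roles": ["Microsoft.Security/pricings/microsoft-defender-operator"],
    }
}
requests.put(url, json=body, headers={"Authorization": f"Bearer {token}"}).raise_for_status()
```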
+
+## Next steps
+
+Learn more about [Cloud Security Posture Management](concept-cloud-security-posture-management.md).
defender-for-cloud Concept Cloud Security Posture Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/concept-cloud-security-posture-management.md
For commercial and national cloud coverage, see the [features supported in diffe
Defender for Cloud offers foundational multicloud CSPM capabilities for free. These capabilities are automatically enabled by default on any subscription or account that has onboarded to Defender for Cloud. The foundational CSPM includes asset discovery, continuous assessment and security recommendations for posture hardening, compliance with Microsoft Cloud Security Benchmark (MCSB), and a [Secure score](secure-score-access-and-track.md) which measure the current status of your organization's posture.
-The optional Defender CSPM plan, provides advanced posture management capabilities such as [Attack path analysis](how-to-manage-attack-path.md), [Cloud security explorer](how-to-manage-cloud-security-explorer.md), advanced threat hunting, [security governance capabilities](concept-regulatory-compliance.md), and also tools to assess your [security compliance](review-security-recommendations.md) with a wide range of benchmarks, regulatory standards, and any custom security policies required in your organization, industry, or region.
+The optional Defender CSPM plan, provides advanced posture management capabilities such as [Attack path analysis](how-to-manage-attack-path.md), [Cloud security explorer](how-to-manage-cloud-security-explorer.md), advanced threat hunting, [security governance capabilities](governance-rules.md), and also tools to assess your [security compliance](review-security-recommendations.md) with a wide range of benchmarks, regulatory standards, and any custom security policies required in your organization, industry, or region.
### Plan pricing
The following table summarizes each plan and their cloud availability.
| Workflow automation | :::image type="icon" source="./media/icons/yes-icon.png"::: | :::image type="icon" source="./media/icons/yes-icon.png"::: | Azure, AWS, GCP, on-premises | | Remediation tracking | :::image type="icon" source="./media/icons/yes-icon.png"::: | :::image type="icon" source="./media/icons/yes-icon.png"::: | Azure, AWS, GCP, on-premises | | Microsoft Cloud Security Benchmark | :::image type="icon" source="./media/icons/yes-icon.png"::: | :::image type="icon" source="./media/icons/yes-icon.png"::: | Azure, AWS |
-| [Governance](concept-regulatory-compliance.md) | - | :::image type="icon" source="./media/icons/yes-icon.png"::: | Azure, AWS, GCP, on-premises |
+| [Governance](governance-rules.md) | - | :::image type="icon" source="./media/icons/yes-icon.png"::: | Azure, AWS, GCP, on-premises |
| [Regulatory compliance](concept-regulatory-compliance.md) | - | :::image type="icon" source="./media/icons/yes-icon.png"::: | Azure, AWS, GCP, on-premises | | [Cloud security explorer](how-to-manage-cloud-security-explorer.md) | - | :::image type="icon" source="./media/icons/yes-icon.png"::: | Azure, AWS | | [Attack path analysis](how-to-manage-attack-path.md) | - | :::image type="icon" source="./media/icons/yes-icon.png"::: | Azure, AWS | | [Agentless scanning for machines](concept-agentless-data-collection.md) | - | :::image type="icon" source="./media/icons/yes-icon.png"::: | Azure, AWS |
-| Agentless discovery for Kubernetes | - | :::image type="icon" source="./media/icons/yes-icon.png"::: | Azure |
-| Agentless vulnerability assessments for container images, including registry scanning (\* Up to 20 unique images per billable resource) | - | :::image type="icon" source="./media/icons/yes-icon.png"::: | Azure |
+| [Agentless discovery for Kubernetes](concept-agentless-containers.md) | - | :::image type="icon" source="./media/icons/yes-icon.png"::: | Azure |
+| [Agentless vulnerability assessments for container images](defender-for-containers-vulnerability-assessment-azure.md), including registry scanning (\* Up to 20 unique images per billable resource) | - | :::image type="icon" source="./media/icons/yes-icon.png"::: | Azure |
| Sensitive data discovery | - | :::image type="icon" source="./media/icons/yes-icon.png"::: | Azure, AWS | | Data flows discovery | - | :::image type="icon" source="./media/icons/yes-icon.png"::: | Azure, AWS | | EASM insights in network exposure | - | :::image type="icon" source="./media/icons/yes-icon.png"::: | Azure, AWS |
defender-for-cloud Defender For Containers Vulnerability Assessment Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-containers-vulnerability-assessment-azure.md
-# Use Defender for Containers to scan your Azure Container Registry images for vulnerabilities
+# Scan your Azure Container Registry images for vulnerabilities
-This article explains how to use Defender for Containers to scan the container images stored in your Azure Resource Manager-based Azure Container Registry, as part of the protections provided within Microsoft Defender for Cloud.
+As part of the protections provided within Microsoft Defender for Cloud, you can scan the container images that are stored in your Azure Resource Manager-based Azure Container Registry.
-To enable scanning of vulnerabilities in containers, you have to [enable Defender for Containers](defender-for-containers-enable.md). When the scanner, powered by Qualys, reports vulnerabilities, Defender for Cloud presents the findings and related information as recommendations. In addition, the findings include related information such as remediation steps, relevant CVEs, CVSS scores, and more. You can view the identified vulnerabilities for one or more subscriptions, or for a specific registry.
+When the scanner, powered by Qualys, reports vulnerabilities, Defender for Cloud presents the findings and related information as recommendations. In addition, the findings include related information such as remediation steps, relevant CVEs, CVSS scores, and more. You can view the identified vulnerabilities for one or more subscriptions, or for a specific registry.
Defender for Cloud filters and classifies findings from the scanner. Images without vulnerabilities are marked as healthy and Defender for Cloud doesn't send notifications about healthy images to keep you from getting unwanted informational alerts.
The triggers for an image scan are:
- A continuous scan based on an image pull. This scan is performed every seven days after an image was pulled, and only for 30 days after the image was pulled. This mode doesn't require the security profile, or extension. - Continuous scan for running images. This scan is performed every seven days for as long as the image runs. This mode runs instead of the above mode when the Defender profile, or extension is running on the cluster.
-
+ When a scan is triggered, findings are available as Defender for Cloud recommendations from 2 minutes up to 15 minutes after the scan is complete.

## Prerequisites

Before you can scan your ACR images:

-- [Enable Defender for Containers](defender-for-containers-enable.md) for your subscription. Defender for Containers is now ready to scan images in your registries.
+- You must enable one of the following plans on your subscription:
+
+ - [Defender CSPM](concept-cloud-security-posture-management.md). When you enable this plan, ensure you enable the **Container registries vulnerability assessments (preview)** extension.
+ - [Defender for Containers](defender-for-containers-enable.md).
- >[!NOTE]
- > This feature is charged per image.
+ >[!NOTE]
+ > This feature is charged per image. Learn more about [pricing](https://azure.microsoft.com/pricing/details/defender-for-cloud/).
-- If you want to find vulnerabilities in images stored in other container registries, you can import the images into ACR and scan them.
+To find vulnerabilities in images stored in other container registries, you can import the images into ACR and scan them.
- Use the ACR tools to bring images to your registry from Docker Hub or Microsoft Container Registry. When the import completes, the imported images are scanned by the built-in vulnerability assessment solution.
+Use the ACR tools to bring images to your registry from Docker Hub or Microsoft Container Registry. When the import completes, the imported images are scanned by the built-in vulnerability assessment solution.
- Learn more in [Import container images to a container registry](../container-registry/container-registry-import-images.md)
+Learn more in [Import container images to a container registry](../container-registry/container-registry-import-images.md).
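As a hedged sketch (placeholder names throughout), the same import can be scripted with the `azure-mgmt-containerregistry` SDK; once the import completes, the on-push trigger described earlier picks the image up for scanning.

```python
# Hedged sketch (placeholder names): import an image into ACR with the
# azure-mgmt-containerregistry SDK so the built-in vulnerability assessment
# scans it on push.
from azure.identity import DefaultAzureCredential
from azure.mgmt.containerregistry import ContainerRegistryManagementClient
from azure.mgmt.containerregistry.models import ImportImageParameters, ImportSource

client = ContainerRegistryManagementClient(DefaultAzureCredential(), "<subscription-id>")

poller = client.registries.begin_import_image(
    resource_group_name="<resource-group>",
    registry_name="<registry>",
    parameters=ImportImageParameters(
        source=ImportSource(registry_uri="docker.io", source_image="library/nginx:latest"),
        target_tags=["nginx:latest"],  # tag the image lands under in ACR
    ),
)
poller.result()  # wait for the import; scanning then follows automatically
```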
- You can also [scan images in Amazon AWS Elastic Container Registry](defender-for-containers-vulnerability-assessment-elastic.md) directly from the Azure portal.
+You can also [scan images in Amazon AWS Elastic Container Registry](defender-for-containers-vulnerability-assessment-elastic.md) directly from the Azure portal.
For a list of the types of images and container registries supported by Microsoft Defender for Containers, see [Availability](supported-machines-endpoint-solutions-clouds-containers.md?tabs=azure-aks#registries-and-images).
defender-for-cloud Devops Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/devops-faq.md
Title: Defender for DevOps FAQ description: If you're having issues with Defender for DevOps perhaps, you can solve it with these frequently asked questions. Previously updated : 02/23/2023 Last updated : 04/18/2023 # Defender for DevOps frequently asked questions (FAQ)
The ability to block developers from committing code with exposed secrets isn't
### I'm not able to configure Pull Request Annotations
-Make sure you have write (owner/contributor) access to the subscription.
+Make sure you have write (owner/contributor) access to the subscription. If you don't have this type of access today, you can get it by [activating an Azure Active Directory role in PIM](/azure/active-directory/privileged-identity-management/pim-how-to-activate-role).
### What programming languages are supported by Defender for DevOps?
defender-for-cloud Github Action https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/github-action.md
Security DevOps uses the following Open Source tools:
```yml
name: MSDO windows-latest
on:
  push:
    branches: [ main ]
  pull_request:
```
defender-for-cloud How To Manage Cloud Security Explorer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/how-to-manage-cloud-security-explorer.md
Learn more about [the cloud security graph, attack path analysis, and the cloud
## Prerequisites

-- You must [enable agentless scanning](enable-vulnerability-assessment-agentless.md).
-
- You must [enable Defender CSPM](enable-enhanced-security.md).
+ - For Agentless Container Posture, you must enable the following extensions:
+ - Agentless discovery for Kubernetes (preview)
+ - Container registries vulnerability assessments (preview)
-- You must [enable Defender for Containers](defender-for-containers-enable.md), and install the relevant agents in order to view attack paths that are related to containers. -
- When you enable Defender for Containers, you also gain the ability to [query](how-to-manage-cloud-security-explorer.md#build-a-query-with-the-cloud-security-explorer) containers data plane workloads in the security explorer.
+- You must [enable agentless scanning](enable-vulnerability-assessment-agentless.md).
- Required roles and permissions:
  - Security Reader
defender-for-cloud Iac Vulnerabilities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/iac-vulnerabilities.md
Once you have set up the Microsoft Security DevOps GitHub action or Azure DevOps
The IaC scanning tools that are included with Microsoft Security DevOps, are [Template Analyzer](https://github.com/Azure/template-analyzer) (which contains [PSRule](https://aka.ms/ps-rule-azure)) and [Terrascan](https://github.com/tenable/terrascan).
-Template Analyzer runs rules on ARM and Bicep templates. You can learn more about [Template Analyzer's rules and remediation details](https://github.com/Azure/template-analyzer/blob/main/docs/built-in-bpa-rules.md#built-in-rules).
+Template Analyzer runs rules on ARM and Bicep templates. You can learn more about [Template Analyzer's rules and remediation details](https://github.com/Azure/template-analyzer/blob/main/docs/built-in-rules.md#built-in-rules).
Terrascan runs rules on ARM, CloudFormation, Docker, Helm, Kubernetes, Kustomize, and Terraform templates. You can learn more about the [Terrascan rules](https://runterrascan.io/docs/policies/).
defender-for-cloud Protect Network Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/protect-network-resources.md
The network map can show you your Azure resources in a **Topology** view and a *
In the **Topology** view of the networking map, you can view the following insights about your networking resources: -- In the inner circle, you can see all the Vnets within your selected subscriptions, the next circle is all the subnets, the outer circle is all the virtual machines.
+- In the inner circle, you can see all the VNets within your selected subscriptions, the next circle is all the subnets, the outer circle is all the virtual machines.
- The lines connecting the resources in the map let you know which resources are associated with each other, and how your Azure network is structured. - Use the severity indicators to quickly get an overview of which resources have open recommendations from Defender for Cloud. - You can click any of the resources to drill down into them and view the details of that resource and its recommendations directly, and in the context of the Network map.
defender-for-cloud Regulatory Compliance Dashboard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/regulatory-compliance-dashboard.md
For other policies, you can create an exemption directly in the policy itself, b
### What Microsoft Defender plans or licenses do I need to use the regulatory compliance dashboard?
-If you've got *any* of the Microsoft Defender plan (except for Defender for Servers Plan 1) enabled on *any* of your Azure resources, you can access Defender for Cloud's regulatory compliance dashboard and all of its data.
+If you've got *any* of the Microsoft Defender plans (except for Defender for Servers Plan 1) enabled on *any* of your Azure resources, you can access Defender for Cloud's regulatory compliance dashboard and all of its data.
+
+> [!NOTE]
+> For Defender for Servers, you get regulatory compliance only with Plan 2.
## Next steps
defender-for-cloud Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/release-notes.md
Title: Release notes for Microsoft Defender for Cloud description: A description of what's new and changed in Microsoft Defender for Cloud Previously updated : 04/17/2023 Last updated : 04/18/2023 # What's new in Microsoft Defender for Cloud?
To learn about *planned* changes that are coming soon to Defender for Cloud, see
Updates in April include:
+- [Agentless Container Posture in Defender CSPM (Preview)](#agentless-container-posture-in-defender-cspm-preview)
- [New preview Unified Disk Encryption recommendation](#unified-disk-encryption-recommendation-preview)-- [Changes in the recommendation "Machines should be configured securely"](#changes-in-the-recommendation-machines-should-be-configured-securely)
+- [Changes in the recommendation Machines should be configured securely](#changes-in-the-recommendation-machines-should-be-configured-securely)
- [Deprecation of App Service language monitoring policies](#deprecation-of-app-service-language-monitoring-policies)
+- [New alert in Defender for Resource Manager](#new-alert-in-defender-for-resource-manager)
+- [Three alerts in the Defender for Resource Manager plan have been deprecated](#three-alerts-in-the-defender-for-resource-manager-plan-have-been-deprecated)
+- [Alerts automatic export to Log Analytics workspace have been deprecated](#alerts-automatic-export-to-log-analytics-workspace-have-been-deprecated)
+- [Deprecation and improvement of selected alerts for Windows and Linux Servers](#deprecation-and-improvement-of-selected-alerts-for-windows-and-linux-servers)
+
+### Agentless Container Posture in Defender CSPM (Preview)
+
+The new Agentless Container Posture (Preview) capabilities are available as part of the Defender CSPM (Cloud Security Posture Management) plan.
+
+Agentless Container Posture allows security teams to identify security risks in containers and Kubernetes realms. An agentless approach gives security teams visibility into their Kubernetes environments and container registries across the SDLC and runtime, removing friction and footprint from the workloads.
+
+Agentless Container Posture offers container vulnerability assessments that, combined with attack path analysis, enable security teams to prioritize and zoom into specific container vulnerabilities. You can also use cloud security explorer to uncover risks and hunt for container posture insights, such as discovering applications that run vulnerable images or are exposed to the internet.
+
+Learn more at [Agentless Container Posture (Preview)](concept-agentless-containers.md).
### Unified Disk Encryption recommendation (preview) We have introduced a unified disk encryption recommendation in public preview, `Windows virtual machines should enable Azure Disk Encryption or EncryptionAtHost` and `Linux virtual machines should enable Azure Disk Encryption or EncryptionAtHost`.
-These recommendations replace `Virtual machines should encrypt temp disks, caches, and data flows between Compute and Storage resources` which detected Azure Disk Encryption and the policy `Virtual machines and virtual machine scale sets should have encryption at host enabled` which detected EncryptionAtHost. ADE and EncryptionAtHost provide comparable encryption at rest coverage, and we recommend enabling one of them on every virtual machine. The new recommendations detect whether either ADE or EncryptionAtHost are enabled and only warn if neither are enabled. We also warn if ADE is enabled on some, but not all disks of a VM (this condition isn't applicable to EncryptionAtHost).
+These recommendations replace `Virtual machines should encrypt temp disks, caches, and data flows between Compute and Storage resources`, which detected Azure Disk Encryption and the policy `Virtual machines and virtual machine scale sets should have encryption at host enabled`, which detected EncryptionAtHost. ADE and EncryptionAtHost provide comparable encryption at rest coverage, and we recommend enabling one of them on every virtual machine. The new recommendations detect whether either ADE or EncryptionAtHost are enabled and only warn if neither are enabled. We also warn if ADE is enabled on some, but not all disks of a VM (this condition isn't applicable to EncryptionAtHost).
The new recommendations require [Azure Automanage Machine Configuration](https://aka.ms/gcpol).
These recommendations are based on the following policies:
Learn more about [ADE and EncryptionAtHost and how to enable one of them](../virtual-machines/disk-encryption-overview.md).
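As a hedged sketch of the logic the new recommendations describe, the check below flags a VM only when neither EncryptionAtHost nor ADE is detected. Detecting ADE through the extension type string is an assumption for illustration; the real evaluation runs in Azure Automanage Machine Configuration, not in code like this.

```python
# Hedged sketch: warn only when neither EncryptionAtHost nor Azure Disk
# Encryption (ADE) is enabled on a VM. Detecting ADE via the extension type
# string is an assumption for illustration.
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient

client = ComputeManagementClient(DefaultAzureCredential(), "<subscription-id>")

def needs_encryption_warning(resource_group: str, vm_name: str) -> bool:
    vm = client.virtual_machines.get(resource_group, vm_name)
    host_encrypted = bool(vm.security_profile and vm.security_profile.encryption_at_host)
    extensions = client.virtual_machine_extensions.list(resource_group, vm_name).value or []
    ade_enabled = any(
        "AzureDiskEncryption" in (ext.type_properties_type or "")  # Windows or Linux ADE
        for ext in extensions
    )
    return not (host_encrypted or ade_enabled)  # True -> the recommendation would warn
```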
-### Changes in the recommendation "Machines should be configured securely"
+### Changes in the recommendation Machines should be configured securely
The recommendation `Machines should be configured securely` was updated. The update improves the performance and stability of the recommendation and aligns its experience with the generic behavior of Defender for Cloud's recommendations. As part of this update, the recommendation's ID was changed from `181ac480-f7c4-544b-9865-11b8ffe87f47` to `c476dc48-8110-4139-91af-c8d940896b98`.
-No action is required on the customer side, and there's no expected impact on the secure score.
+No action is required on the customer side, and there's no expected effect on the secure score.
### Deprecation of App Service language monitoring policies
Customers can use alternative built-in policies to monitor any specified languag
These policies are no longer available in Defender for Cloud's built-in recommendations. You can [add them as custom recommendations](create-custom-recommendations.md) to have Defender for Cloud monitor them.
+### New alert in Defender for Resource Manager
+
+Defender for Resource Manager has the following new alert:
+
+| Alert (alert type) | Description | MITRE tactics | Severity |
+|||:-:||
+| **PREVIEW - Suspicious creation of compute resources detected**<br>(ARM_SuspiciousComputeCreation) | Microsoft Defender for Resource Manager identified a suspicious creation of compute resources in your subscription utilizing Virtual Machines/Azure Scale Set. The identified operations are designed to allow administrators to efficiently manage their environments by deploying new resources when needed. While this activity may be legitimate, a threat actor might utilize such operations to conduct crypto mining.<br> The activity is deemed suspicious as the compute resources scale is higher than previously observed in the subscription. <br> This can indicate that the principal is compromised and is being used with malicious intent. | Impact | Medium |
+
+You can see a list of all of the [alerts available for Resource Manager](alerts-reference.md#alerts-resourcemanager).
+
+### Three alerts in the Defender for Resource Manager plan have been deprecated
+
+**Estimated date for change: March 2023**
+
+The following three alerts for the Defender for Resource Manager plan have been deprecated:
+
+- `Activity from a risky IP address (ARM.MCAS_ActivityFromAnonymousIPAddresses)`
+- `Activity from infrequent country (ARM.MCAS_ActivityFromInfrequentCountry)`
+- `Impossible travel activity (ARM.MCAS_ImpossibleTravelActivity)`
+
+In a scenario where activity from a suspicious IP address is detected, one of the following Defender for Resource Manager plan alerts will be present: `Azure Resource Manager operation from suspicious IP address` or `Azure Resource Manager operation from suspicious proxy IP address`.
+### Alerts automatic export to Log Analytics workspace have been deprecated
+
+Defender for Cloud security alerts are automatically exported to a default Log Analytics workspace on the resource level. This causes nondeterministic behavior, and therefore we have deprecated this feature.
+
+Instead, you can export your security alerts to a dedicated Log Analytics workspace with [Continuous Export](continuous-export.md#set-up-a-continuous-export).
+
+If you have already configured continuous export of your alerts to a Log Analytics workspace, no further action is required.
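For reference, continuous export is backed by a `Microsoft.Security/automations` resource, so it can also be set up programmatically. The sketch below is hedged: the api-version and payload shape are assumptions drawn from the preview automations API, and the resource and workspace names are placeholders.

```python
# Hedged sketch: set up continuous export of alerts to a dedicated Log
# Analytics workspace through a Microsoft.Security/automations resource.
# The api-version and payload shape are assumptions; verify against the
# continuous export documentation.
import requests
from azure.identity import DefaultAzureCredential

sub, rg = "<subscription-id>", "<resource-group>"
workspace_id = (
    "/subscriptions/<sub>/resourceGroups/<rg>/providers"
    "/Microsoft.OperationalInsights/workspaces/<workspace>"
)  # placeholder workspace resource ID

token = DefaultAzureCredential().get_token("https://management.azure.com/.default").token
url = (
    f"https://management.azure.com/subscriptions/{sub}/resourceGroups/{rg}"
    "/providers/Microsoft.Security/automations/ExportToWorkspace"
    "?api-version=2019-01-01-preview"
)
body = {
    "location": "westeurope",
    "properties": {
        "isEnabled": True,
        "scopes": [{"scopePath": f"/subscriptions/{sub}"}],
        "sources": [{"eventSource": "Alerts"}],          # export security alerts
        "actions": [{"actionType": "Workspace",          # send them to Log Analytics
                     "workspaceResourceId": workspace_id}],
    },
}
requests.put(url, json=body, headers={"Authorization": f"Bearer {token}"}).raise_for_status()
```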
+
+### Deprecation and improvement of selected alerts for Windows and Linux Servers
+
+The security alert quality improvement process for Defender for Servers includes the deprecation of some alerts for both Windows and Linux servers. The deprecated alerts are now sourced from and covered by Defender for Endpoint threat alerts.
+
+If you already have the Defender for Endpoint integration enabled, no further action is required. You may experience a decrease in your alerts volume in April 2023.
+
+If you don't have the Defender for Endpoint integration enabled in Defender for Servers, you'll need to enable the Defender for Endpoint integration to maintain and improve your alert coverage.
+
+All Defender for Servers customers have full access to the Defender for Endpoint integration as a part of the [Defender for Servers plan](plan-defender-for-servers-select-plan.md#plan-features).
+
+You can learn more about [Microsoft Defender for Endpoint onboarding options](integration-defender-for-endpoint.md#enable-the-microsoft-defender-for-endpoint-integration).
+
+You can also view the [full list of alerts](alerts-reference.md#defender-for-servers-alerts-to-be-deprecated) that are set to be deprecated.
+
+Read the [Microsoft Defender for Cloud blog](https://techcommunity.microsoft.com/t5/microsoft-defender-for-cloud/defender-for-servers-security-alerts-improvements/ba-p/3714175).
+ ## March 2023 Updates in March include: -- [New alert in Defender for Resource Manager](#new-alert-in-defender-for-resource-manager) - [A new Defender for Storage plan is available, including near-real time malware scanning and sensitive data threat detection](#a-new-defender-for-storage-plan-is-available-including-near-real-time-malware-scanning-and-sensitive-data-threat-detection) - [Data-aware security posture (preview)](#data-aware-security-posture-preview) - [Improved experience for managing the default Azure security policies](#improved-experience-for-managing-the-default-azure-security-policies)
Updates in March include:
- [New preview recommendation for Azure SQL Servers](#new-preview-recommendation-for-azure-sql-servers) - [New alert in Defender for Key Vault](#new-alert-in-defender-for-key-vault)
-### New alert in Defender for Resource Manager
-
-Defender for Resource Manager has the following new alert:
-
-| Alert (alert type) | Description | MITRE tactics | Severity |
-|||:-:||
-| **PREVIEW - Suspicious creation of compute resources detected**<br>(ARM_SuspiciousComputeCreation) | Microsoft Defender for Resource Manager identified a suspicious creation of compute resources in your subscription utilizing Virtual Machines/Azure Scale Set. The identified operations are designed to allow administrators to efficiently manage their environments by deploying new resources when needed. While this activity may be legitimate, a threat actor might utilize such operations to conduct crypto mining.<br> The activity is deemed suspicious as the compute resources scale is higher than previously observed in the subscription. <br> This can indicate that the principal is compromised and is being used with malicious intent. | Impact | Medium |
-
-You can see a list of all of the [alerts available for Resource Manager](alerts-reference.md#alerts-resourcemanager).
- ### A new Defender for Storage plan is available, including near-real time malware scanning and sensitive data threat detection
-Cloud storage plays a key role in the organization and stores large volumes of valuable and sensitive data. Today we are announcing a new Defender for Storage plan. If youΓÇÖre using the previous plan (now renamed to "Defender for Storage (classic)"), you will need to proactively [migrate to the new plan](defender-for-storage-classic-migrate.md) in order to use the new features and benefits.
+Cloud storage plays a key role in the organization and stores large volumes of valuable and sensitive data. Today we're announcing a new Defender for Storage plan. If you're using the previous plan (now renamed to "Defender for Storage (classic)"), you'll need to proactively [migrate to the new plan](defender-for-storage-classic-migrate.md) in order to use the new features and benefits.
The new plan includes advanced security capabilities to help protect against malicious file uploads, sensitive data exfiltration, and data corruption. It also provides a more predictable and flexible pricing structure for better control over coverage and costs.
The new plan has new capabilities now in public preview:
- Detecting entities with no identities using SAS tokens
-These capabilities will enhance the existing Activity Monitoring capability, based on control and data plane log analysis and behavioral modeling to identify early signs of breach.
+These capabilities enhance the existing Activity Monitoring capability, based on control and data plane log analysis and behavioral modeling to identify early signs of breach.
All these capabilities are available in a new predictable and flexible pricing plan that provides granular control over data protection at both the subscription and resource levels.
Microsoft Defender for Cloud helps security teams to be more productive at reduc
We introduce an improved Azure security policy management experience for built-in recommendations that simplifies the way Defender for Cloud customers fine tune their security requirements. The new experience includes the following new capabilities: -- A simple interface allows better performance and fewer clicks when managing default security policies within Defender for Cloud, including enabling/disabling, denying, setting parameters and managing exemptions.
+- A simple interface allows better performance and fewer selections when managing default security policies within Defender for Cloud, including enabling/disabling, denying, setting parameters, and managing exemptions.
- A single view of all built-in security recommendations offered by the Microsoft cloud security benchmark (formerly the Azure security benchmark). Recommendations are organized into logical groups, making it easier to understand the types of resources covered, and the relationship between parameters and recommendations. - New features such as filters and search have been added.
Read the [Microsoft Defender for Cloud blog](https://techcommunity.microsoft.com
### Defender CSPM (Cloud Security Posture Management) is now Generally Available (GA)
-We are announcing that Defender CSPM is now Generally Available (GA). Defender CSPM offers all of the services available under the Foundational CSPM capabilities and adds the following benefits:
+We're announcing that Defender CSPM is now Generally Available (GA). Defender CSPM offers all of the services available under the Foundational CSPM capabilities and adds the following benefits:
-- **Attack path analysis and ARG API** - Attack path analysis uses a graph-based algorithm that scans the cloud security graph to expose attack paths and suggests recommendations as to how best remediate issues that will break the attack path and prevent successful breach. You can also consume attack paths programmatically by querying Azure Resource Graph (ARG) API. Learn how to use [attack path analysis](how-to-manage-attack-path.md)
+- **Attack path analysis and ARG API** - Attack path analysis uses a graph-based algorithm that scans the cloud security graph to expose attack paths and suggests recommendations for how best to remediate issues that break the attack path and prevent a successful breach. You can also consume attack paths programmatically by querying the Azure Resource Graph (ARG) API, as shown in the sketch after this list. Learn how to use [attack path analysis](how-to-manage-attack-path.md)
- **Cloud Security explorer** - Use the Cloud Security Explorer to run graph-based queries on the cloud security graph, to proactively identify security risks in your multicloud environments. Learn more about [cloud security explorer](concept-attack-path.md#what-is-cloud-security-explorer). Learn more about [Defender CSPM](overview-page.md).
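As mentioned in the first bullet above, attack paths can be consumed programmatically through ARG. A hedged sketch with the `azure-mgmt-resourcegraph` SDK follows; the `securityresources` table and the `microsoft.security/attackpaths` type string are assumptions to verify against the ARG schema in your tenant.

```python
# Hedged sketch: consume attack paths programmatically through Azure Resource
# Graph. The securityresources table and the 'microsoft.security/attackpaths'
# type string are assumptions to verify against the ARG schema in your tenant.
from azure.identity import DefaultAzureCredential
from azure.mgmt.resourcegraph import ResourceGraphClient
from azure.mgmt.resourcegraph.models import QueryRequest, QueryRequestOptions, ResultFormat

client = ResourceGraphClient(DefaultAzureCredential())
request = QueryRequest(
    subscriptions=["<subscription-id>"],  # placeholder
    query=(
        "securityresources"
        " | where type == 'microsoft.security/attackpaths'"
        " | project id, displayName = tostring(properties.displayName)"
    ),
    options=QueryRequestOptions(result_format=ResultFormat.OBJECT_ARRAY),
)
for row in client.resources(request).data:  # one row per attack path
    print(row["id"], row.get("displayName"))
```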
We've added a new recommendation for Azure SQL Servers, `Azure SQL Server authen
The recommendation is based on the existing policy [`Azure SQL Database should have Azure Active Directory Only Authentication enabled`](https://ms.portal.azure.com/#view/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2fabda6d70-9778-44e7-84a8-06713e6db027)
-This recommendation disables local authentication methods and allows only Azure Active Directory Authentication which improves security by ensuring that Azure SQL Databases can exclusively be accessed by Azure Active Directory identities.
+This recommendation disables local authentication methods and allows only Azure Active Directory Authentication, which improves security by ensuring that Azure SQL Databases can exclusively be accessed by Azure Active Directory identities.
Learn how to [create servers with Azure AD-only authentication enabled in Azure SQL](/azure/azure-sql/database/authentication-azure-ad-only-authentication-create-server).
You can see a list of all of the [alerts available for Key Vault](alerts-referen
Updates in February include: - [Enhanced Cloud Security Explorer](#enhanced-cloud-security-explorer)-- [Recommendation to find vulnerabilities in running container images for Linux released for General Availability (GA)](#recommendation-to-find-vulnerabilities-in-running-container-images-released-for-general-availability-ga)
+- [Defender for Containers' vulnerability scans of running Linux images now GA](#defender-for-containers-vulnerability-scans-of-running-linux-images-now-ga)
- [Announcing support for the AWS CIS 1.5.0 compliance standard](#announcing-support-for-the-aws-cis-150-compliance-standard) - [Microsoft Defender for DevOps (preview) is now available in other regions](#microsoft-defender-for-devops-preview-is-now-available-in-other-regions) - [The built-in policy [Preview]: Private endpoint should be configured for Key Vault has been deprecated](#the-built-in-policy-preview-private-endpoint-should-be-configured-for-key-vault-has-been-deprecated)
Updates in February include:
An improved version of the cloud security explorer includes a refreshed user experience that removes query friction dramatically, added the ability to run multicloud and multi-resource queries, and embedded documentation for each query option.
-The Cloud Security Explorer now allows you to run cloud-abstract queries across resources. You can use either the pre-built query templates or use the custom search to apply filters to build your query. Learn [how to manage Cloud Security Explorer](how-to-manage-cloud-security-explorer.md).
+The Cloud Security Explorer now allows you to run cloud-abstract queries across resources. You can use either the prebuilt query templates or use the custom search to apply filters to build your query. Learn [how to manage Cloud Security Explorer](how-to-manage-cloud-security-explorer.md).
+
+### Defender for Containers' vulnerability scans of running Linux images now GA
+
+Defender for Containers detects vulnerabilities in running containers. Both Windows and Linux containers are supported.
-### Recommendation to find vulnerabilities in running container images released for General Availability (GA)
+In August 2022, this capability was [released in preview](release-notes-archive.md) for Windows and Linux. It's now released for general availability (GA) for Linux.
-The [Running container images should have vulnerability findings resolved](defender-for-containers-vulnerability-assessment-azure.md#view-vulnerabilities-for-images-running-on-your-aks-clusters) recommendation for Linux is now GA. The recommendation is used to identify unhealthy resources and is included in the calculations of your secure score.
+When vulnerabilities are detected, Defender for Cloud generates the following security recommendation listing the scan's findings: [Running container images should have vulnerability findings resolved](https://portal.azure.com/#view/Microsoft_Azure_Security_CloudNativeCompute/KubernetesRuntimeVisibilityRecommendationDetailsBlade/assessmentKey/41503391-efa5-47ee-9282-4eff6131462c/showSecurityCenterCommandBar~/false).
-We recommend that you use the recommendation to remediate vulnerabilities in your Linux containers. Learn about [recommendation remediation](implement-security-recommendations.md).
+Learn more about [viewing vulnerabilities for running images](defender-for-containers-vulnerability-assessment-azure.md).
### Announcing support for the AWS CIS 1.5.0 compliance standard
defender-for-cloud Secure Score Security Controls https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/secure-score-security-controls.md
In this example:
### Which recommendations are included in the secure score calculations?
-Only built-in recommendations have an impact on the secure score.
-
+Only built-in recommendations that are part of the default initiative, Azure Security Benchmark, have an impact on the secure score.
Recommendations flagged as **Preview** aren't included in the calculations of your secure score. They should still be remediated wherever possible, so that when the preview period ends they'll contribute towards your score. Preview recommendations are marked with: :::image type="icon" source="media/secure-score-security-controls/preview-icon.png" border="false":::
For related material, see the following articles:
- [Learn about the different elements of a recommendation](review-security-recommendations.md) - [Learn how to remediate recommendations](implement-security-recommendations.md) - [View the GitHub-based tools for working programmatically with secure score](https://github.com/Azure/Azure-Security-Center/tree/master/Secure%20Score)++
defender-for-cloud Support Matrix Defender For Cloud https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/support-matrix-defender-for-cloud.md
Microsoft Defender for Cloud is available in the following Azure cloud environme
| - [Microsoft Defender for Servers](./defender-for-servers-introduction.md) | GA | GA | GA | | - [Microsoft Defender for App Service](./defender-for-app-service-introduction.md) | GA | Not Available | Not Available | | - [Microsoft Defender CSPM](./concept-cloud-security-posture-management.md) | GA | Not Available | Not Available |
+| - [Agentless discovery for Kubernetes](concept-agentless-containers.md) | Public Preview | Not Available | Not Available |
+| - [Agentless vulnerability assessments for container images](defender-for-containers-vulnerability-assessment-azure.md), including registry scanning (\* Up to 20 unique images per billable resource) | Public Preview | Not Available | Not Available |
| - [Microsoft Defender for DNS](./defender-for-dns-introduction.md) | GA | GA | GA | | - [Microsoft Defender for Kubernetes](./defender-for-kubernetes-introduction.md) <sup>[1](#footnote1)</sup> | GA | GA | GA | | - [Microsoft Defender for Containers](./defender-for-containers-introduction.md) <sup>[7](#footnote7)</sup> | GA | GA | GA |
defender-for-cloud Upcoming Changes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/upcoming-changes.md
Title: Important changes coming to Microsoft Defender for Cloud description: Upcoming changes to Microsoft Defender for Cloud that you might need to be aware of and for which you might need to plan Previously updated : 04/16/2023 Last updated : 04/18/2023 # Important upcoming changes to Microsoft Defender for Cloud
If you're looking for the latest release notes, you'll find them in the [What's
| Planned change | Estimated date for change | |--|--|
-| [Changes in the recommendation "Machines should be configured securely"](#changes-in-the-recommendation-machines-should-be-configured-securely) | March 2023 |
-| [Three alerts in the Defender for Azure Resource Manager plan will be deprecated](#three-alerts-in-the-defender-for-resource-manager-plan-will-be-deprecated) | March 2023 |
-| [Alerts automatic export to Log Analytics workspace will be deprecated](#alerts-automatic-export-to-log-analytics-workspace-will-be-deprecated) | March 2023 |
-| [Deprecation and improvement of selected alerts for Windows and Linux Servers](#deprecation-and-improvement-of-selected-alerts-for-windows-and-linux-servers) | April 2023 |
| [Deprecation of legacy compliance standards across cloud environments](#deprecation-of-legacy-compliance-standards-across-cloud-environments) | April 2023 |
-| [Multiple changes to identity recommendations](#multiple-changes-to-identity-recommendations) | May 2023 |
| [New Azure Active Directory authentication-related recommendations for Azure Data Services](#new-azure-active-directory-authentication-related-recommendations-for-azure-data-services) | April 2023 |
+| [Multiple changes to identity recommendations](#multiple-changes-to-identity-recommendations) | May 2023 |
| [DevOps Resource Deduplication for Defender for DevOps](#devops-resource-deduplication-for-defender-for-devops) | June 2023 |
-### Changes in the recommendation "Machines should be configured securely"
-
-**Estimated date for change: March 2023**
-
-The recommendation `Machines should be configured securely` will be updated. The update will improve the performance and stability of the recommendation and align its experience with the generic behavior of Defender for Cloud's recommendations.
-
-As part of this update, the recommendation's ID will be changed from `181ac480-f7c4-544b-9865-11b8ffe87f47` to `c476dc48-8110-4139-91af-c8d940896b98`.
-
-No action is required on the customer side, and there's no expected downtime or impact on the secure score.
--
-### Three alerts in the Defender for Resource Manager plan will be deprecated
-
-**Estimated date for change: March 2023**
-
-As we continue to improve the quality of our alerts, the following three alerts from the Defender for Resource Manager plan will be deprecated:
-1. `Activity from a risky IP address (ARM.MCAS_ActivityFromAnonymousIPAddresses)`
-1. `Activity from infrequent country (ARM.MCAS_ActivityFromInfrequentCountry)`
-1. `Impossible travel activity (ARM.MCAS_ImpossibleTravelActivity)`
-
-You can learn more details about each of these alerts from the [alerts reference list](alerts-reference.md#alerts-resourcemanager).
-
-In the scenario where an activity from a suspicious IP address is detected, one of the following Defender for Resource Manager plan alerts will be present: `Azure Resource Manager operation from suspicious IP address` or `Azure Resource Manager operation from suspicious proxy IP address`.
-
-### Alerts automatic export to Log Analytics workspace will be deprecated
-
-**Estimated date for change: March 2023**
-
-Currently, Defender for Cloud security alerts are automatically exported to a default Log Analytics workspace on the resource level. This causes nondeterministic behavior, and therefore this feature is set to be deprecated.
-
-You can export your security alerts to a dedicated Log Analytics workspace with the [Continuous Export](continuous-export.md#set-up-a-continuous-export) feature.
-If you have already configured continuous export of your alerts to a Log Analytics workspace, no further action is required.
-
-### Deprecation and improvement of selected alerts for Windows and Linux Servers
-
-**Estimated date for change: April 2023**
-
-The security alert quality improvement process for Defender for Servers includes the deprecation of some alerts for both Windows and Linux servers. The deprecated alerts will now be sourced from and covered by Defender for Endpoint threat alerts.
-
-If you already have the Defender for Endpoint integration enabled, no further action is required. You may experience a decrease in your alerts volume in April 2023.
-
-If you don't have the Defender for Endpoint integration enabled in Defender for Servers, you'll need to enable the Defender for Endpoint integration to maintain and improve your alert coverage.
-
-All Defender for Servers customers have full access to the Defender for Endpoint integration as a part of the [Defender for Servers plan](plan-defender-for-servers-select-plan.md#plan-features).
-
-You can learn more about [Microsoft Defender for Endpoint onboarding options](integration-defender-for-endpoint.md#enable-the-microsoft-defender-for-endpoint-integration).
-
-You can also view the [full list of alerts](alerts-reference.md#defender-for-servers-alerts-to-be-deprecated) that are set to be deprecated.
-
-Read the [Microsoft Defender for Cloud blog](https://techcommunity.microsoft.com/t5/microsoft-defender-for-cloud/defender-for-servers-security-alerts-improvements/ba-p/3714175).
-
### Deprecation of legacy compliance standards across cloud environments

**Estimated date for change: April 2023**
-We are announcing the full deprecation of support of [`PCI DSS`](/azure/compliance/offerings/offering-pci-dss) standard/initiative in Azure China 21Vianet.
+We're announcing the full deprecation of support of [`PCI DSS`](/azure/compliance/offerings/offering-pci-dss) standard/initiative in Azure China 21Vianet.
Legacy PCI DSS v3.2.1 and legacy SOC TSP are set to be fully deprecated and replaced by [SOC 2 Type 2](/azure/compliance/offerings/offering-soc-2) initiative and [PCI DSS v4](/azure/compliance/offerings/offering-pci-dss) initiative. Learn how to [Customize the set of standards in your regulatory compliance dashboard](update-regulatory-compliance-packages.md).
-### Multiple changes to identity recommendations
-
-**Estimated date for change: May 2023**
-
-We announced previously the [availability of identity recommendations V2 (preview)](release-notes-archive.md#extra-recommendations-added-to-identity), which included enhanced capabilities.
-
-As part of these changes, the following recommendations will be released as General Availability (GA) and replace the V1 recommendations that are set to be deprecated.
-
-#### General Availability (GA) release of identity recommendations V2
-
-The following security recommendations will be released as GA and replace the V1 recommendations:
-
-|Recommendation | Assessment Key|
-|--|--|
-|Accounts with owner permissions on Azure resources should be MFA enabled | 6240402e-f77c-46fa-9060-a7ce53997754 |
-|Accounts with write permissions on Azure resources should be MFA enabled | c0cb17b2-0607-48a7-b0e0-903ed22de39b |
-| Accounts with read permissions on Azure resources should be MFA enabled | dabc9bc4-b8a8-45bd-9a5a-43000df8aa1c |
-| Guest accounts with owner permissions on Azure resources should be removed | 20606e75-05c4-48c0-9d97-add6daa2109a |
-| Guest accounts with write permissions on Azure resources should be removed | 0354476c-a12a-4fcc-a79d-f0ab7ffffdbb |
-| Guest accounts with read permissions on Azure resources should be removed | fde1c0c9-0fd2-4ecc-87b5-98956cbc1095 |
-| Blocked accounts with owner permissions on Azure resources should be removed | 050ac097-3dda-4d24-ab6d-82568e7a50cf |
-| Blocked accounts with read and write permissions on Azure resources should be removed | 1ff0b4c9-ed56-4de6-be9c-d7ab39645926 |
-
#### Deprecation of identity recommendations V1

The following security recommendations will be deprecated as part of this change:
We've improved the coverage of the V2 identity recommendations by scanning all A
**Estimated date for change: April 2023**
-We are announcing the full deprecation of support of [`PCI DSS`](/azure/compliance/offerings/offering-pci-dss) standard/initiative in Azure China 21Vianet.
+We're announcing the full deprecation of support of [`PCI DSS`](/azure/compliance/offerings/offering-pci-dss) standard/initiative in Azure China 21Vianet.
Legacy PCI DSS v3.2.1 and legacy SOC TSP are set to be fully deprecated and replaced by [SOC 2 Type 2](/azure/compliance/offerings/offering-soc-2) initiative and [`PCI DSS v4`](/azure/compliance/offerings/offering-pci-dss) initiative. Learn how to [Customize the set of standards in your regulatory compliance dashboard](update-regulatory-compliance-packages.md).
Learn how to [Customize the set of standards in your regulatory compliance dashboard](update-regulatory-compliance-packages.md).
| Recommendation Name | Recommendation Description | Policy |
|--|--|--|
| Azure SQL Managed Instance authentication mode should be Azure Active Directory Only | Disabling local authentication methods and allowing only Azure Active Directory Authentication improves security by ensuring that Azure SQL Managed Instances can exclusively be accessed by Azure Active Directory identities. Learn more at: aka.ms/adonlycreate | [Azure SQL Managed Instance should have Azure Active Directory Only Authentication enabled](https://portal.azure.com/#view/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f78215662-041e-49ed-a9dd-5385911b3a1f) |
-| Azure Synapse Workspace authentication mode should be Azure Active Directory Only | Azure Active Directory (AAD) only authentication methods improves security by ensuring that Synapse Workspaces exclusively require AAD identities for authentication. Learn more at: https://aka.ms/Synapse | [Synapse Workspaces should use only Azure Active Directory identities for authentication](https://portal.azure.com/#view/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f2158ddbe-fefa-408e-b43f-d4faef8ff3b8) |
+| Azure Synapse Workspace authentication mode should be Azure Active Directory Only | Azure Active Directory only authentication methods improve security by ensuring that Synapse Workspaces exclusively require Azure AD identities for authentication. Learn more at: https://aka.ms/Synapse | [Synapse Workspaces should use only Azure Active Directory identities for authentication](https://portal.azure.com/#view/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f2158ddbe-fefa-408e-b43f-d4faef8ff3b8) |
| Azure Database for MySQL should have an Azure Active Directory administrator provisioned | Provision an Azure AD administrator for your Azure Database for MySQL to enable Azure AD authentication. Azure AD authentication enables simplified permission management and centralized identity management of database users and other Microsoft services | Based on policy: [An Azure Active Directory administrator should be provisioned for MySQL servers](https://portal.azure.com/#view/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f146412e9-005c-472b-9e48-c87b72ac229e) |
| Azure Database for PostgreSQL should have an Azure Active Directory administrator provisioned | Provision an Azure AD administrator for your Azure Database for PostgreSQL to enable Azure AD authentication. Azure AD authentication enables simplified permission management and centralized identity management of database users and other Microsoft services | Based on policy: [An Azure Active Directory administrator should be provisioned for PostgreSQL servers](https://portal.azure.com/#view/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2fb4dec045-250a-48c2-b5cc-e0c4eec8b5b4) |
+### Multiple changes to identity recommendations
+
+**Estimated date for change: May 2023**
+
+We previously announced the [availability of identity recommendations V2 (preview)](release-notes-archive.md#extra-recommendations-added-to-identity), which included enhanced capabilities.
+
+As part of these changes, the following recommendations will be released as General Availability (GA) and replace the V1 recommendations that are set to be deprecated.
+
+#### General Availability (GA) release of identity recommendations V2
+
+The following security recommendations will be released as GA and replace the V1 recommendations:
+
+|Recommendation | Assessment Key|
+|--|--|
+|Accounts with owner permissions on Azure resources should be MFA enabled | 6240402e-f77c-46fa-9060-a7ce53997754 |
+|Accounts with write permissions on Azure resources should be MFA enabled | c0cb17b2-0607-48a7-b0e0-903ed22de39b |
+| Accounts with read permissions on Azure resources should be MFA enabled | dabc9bc4-b8a8-45bd-9a5a-43000df8aa1c |
+| Guest accounts with owner permissions on Azure resources should be removed | 20606e75-05c4-48c0-9d97-add6daa2109a |
+| Guest accounts with write permissions on Azure resources should be removed | 0354476c-a12a-4fcc-a79d-f0ab7ffffdbb |
+| Guest accounts with read permissions on Azure resources should be removed | fde1c0c9-0fd2-4ecc-87b5-98956cbc1095 |
+| Blocked accounts with owner permissions on Azure resources should be removed | 050ac097-3dda-4d24-ab6d-82568e7a50cf |
+| Blocked accounts with read and write permissions on Azure resources should be removed | 1ff0b4c9-ed56-4de6-be9c-d7ab39645926 |
+
### DevOps Resource Deduplication for Defender for DevOps

**Estimated date for change: June 2023**
-To improve the Defender for DevOps user experience and enable further integration with Defender for Coud's rich set of capabilities, Defender for DevOps will no longer support duplicate instances of a DevOps organization to be onboarded to an Azure tenant.
+To improve the Defender for DevOps user experience and enable further integration with Defender for Cloud's rich set of capabilities, Defender for DevOps will no longer support duplicate instances of a DevOps organization to be onboarded to an Azure tenant.
-If you do not have an instance of a DevOps organization onboarded more than once to your organization, no further action is required. If you do have more than one instance of a DevOps organization onboarded to your tenant, the subscription owner will be notified and will need to delete the DevOps Connector(s) they do not want to keep by navigating to Defender for Cloud Environment Settings.
+If you don't have an instance of a DevOps organization onboarded more than once to your organization, no further action is required. If you do have more than one instance of a DevOps organization onboarded to your tenant, the subscription owner will be notified and will need to delete the DevOps Connector(s) they don't want to keep by navigating to Defender for Cloud Environment Settings.
-Customers will have until June 30, 2023 to resolve this issue. After this date, only the most recent DevOps Connector created where an instance of the DevOps organization exists will remain onboarded to Defender for DevOps.
+Customers will have until June 30, 2023 to resolve this issue. After this date, only the most recent DevOps Connector created where an instance of the DevOps organization exists will remain onboarded to Defender for DevOps.
## Next steps
defender-for-iot Tutorial Create Micro Agent Module Twin https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/device-builders/tutorial-create-micro-agent-module-twin.md
This tutorial will help you learn how to create an individual `DefenderIotMicroAgent` module twin.
## Device twins
-For IoT solutions built in Azure, device twins play a key role in both device management and process automation.
-
-Defender for IoT fully integrates with your existing IoT device management platform. Full integration enables you to manage your device's security status and allows you to make use of all existing device control capabilities. Integration is achieved by making use of the IoT Hub twin mechanism.
-
-Learn more about the concept of [Understand and use device twins in IoT Hub](../../iot-hub/iot-hub-devguide-device-twins.md).
-
-## Defender-IoT-micro-agent twin
-
-Defender for IoT uses a Defender-IoT-micro-agent twin for each device. The Defender-IoT-micro-agent twin holds all of the information that is relevant to device security, for each specific device in your solution. Device security properties are configured through a dedicated Defender-IoT-micro-agent twin for safer communication, to enable updates, and maintenance that requires fewer resources.
-
-## Understanding DefenderIotMicroAgent module twins
-
Device twins play a key role in both device management and process automation for IoT solutions built in Azure. Defender for IoT offers the capability to fully integrate with your existing IoT device management platform, enabling you to manage your device security status and make use of the existing device control capabilities. You can integrate Defender for IoT by using the IoT Hub twin mechanism.
To learn more about the general concept of module twins in Azure IoT Hub, see [U
Defender for IoT uses the module twin mechanism, and maintains a Defender-IoT-micro-agent twin named `DefenderIotMicroAgent` for each of your devices.
-To take full advantage of all Defender for IoT feature's, you need to create, configure, and use the Defender-IoT-micro-agent twins for every device in the service.
+To take full advantage of all Defender for IoT features, you need to create, configure, and use the Defender-IoT-micro-agent twins for every device in the service.
+
+## Defender-IoT-micro-agent twin
+
+Defender for IoT uses a Defender-IoT-micro-agent twin for each device. The Defender-IoT-micro-agent twin holds all of the information that's relevant to device security for each specific device in your solution. Device security properties are configured through a dedicated Defender-IoT-micro-agent twin for safer communication, and to enable updates and maintenance that require fewer resources.
In this tutorial you'll learn how to:
In this tutorial you'll learn how to:
- You must have [enabled Microsoft Defender for IoT on your Azure IoT Hub](quickstart-onboard-iot-hub.md).
-- You must have [added a resource group to your IoT solution](quickstart-configure-your-solution.md)
+- You must have [added a resource group to your IoT solution](quickstart-configure-your-solution.md).
## Create a DefenderIotMicroAgent module twin
A `DefenderIotMicroAgent` module twin can be created by manually editing each mo
1. Select **Add module identity**.
-1. In the Module Identity Name field, enter `DefenderIotMicroAgent`.
+1. In the **Module Identity Name** field, enter `DefenderIotMicroAgent`.
1. Select **Save**.
A `DefenderIotMicroAgent` module twin can be created by manually editing each mo
1. Select your device.
-1. Under the Module identities menu, confirm the existence of the `DefenderIotMicroAgent` module in the list of module identities associated with the device.
+1. On the **Module Identities** tab, confirm that the `DefenderIotMicroAgent` module appears in the list of module identities associated with the device.
:::image type="content" source="media/quickstart-create-micro-agent-module-twin/device-details-module.png" alt-text="Select module identities from the tab.":::
defender-for-iot Billing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/billing.md
If the number of actual devices detected by Defender for IoT exceeds the number
This message indicates that you need to update the number of committed devices on the relevant subscription to match the actual number of devices being monitored.
-**To update the number of committed devices**:
-
-1. In the warning message, select **Get more device coverage**, which will open the pane to edit your plan for the relevant subscription.
-
-1. In the **Number of devices** field, update the number of committed devices to the actual number of devices being monitored by Defender for IoT for this subscription.
-
- For example:
-
- :::image type="content" source="media/billing/update-number-of-devices.png" alt-text="Screenshot of updating the number of committed devices on a subscription when there is a device coverage warning." lightbox="media/billing/update-number-of-devices.png":::
-
-1. Select **Next**.
-
-1. Select the **I accept the terms and conditions** option, and then select **Purchase**. Billing changes will be updated accordingly.
+To update the number of committed devices, edit your plan from the **Plans and pricing** page. For more information, see [Manage OT plans on Azure subscriptions](how-to-manage-subscriptions.md#edit-a-plan-for-ot-networks).
> [!NOTE]
> This warning is a reminder for you to update the number of committed devices for your subscription, and does not affect Defender for IoT functionality.
defender-for-iot Faqs Ot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/faqs-ot.md
For more information, see [Troubleshoot the sensor](how-to-troubleshoot-sensor.m
## I am seeing a warning that we have exceeded the maximum number of devices for the subscription. How do I resolve this?
-If the number of actual devices detected by Defender for IoT exceeds the number of committed devices currently listed on your subscription, a warning message will appear in Defender for IoT in the Azure portal, and you will need to update the number of committed devices on the relevant subscription. For more information, see [Defender for IoT committed devices](billing.md#defender-for-iot-committed-devices).
+If the number of actual devices detected by Defender for IoT exceeds the number of committed devices currently listed on your subscription, a warning message will appear in Defender for IoT in the Azure portal, and you will need to edit your plan and update the number of committed devices on the relevant subscription. For more information, see [Edit a plan for OT networks](how-to-manage-subscriptions.md#edit-a-plan-for-ot-networks).
## Next steps
defender-for-iot How To Manage Device Inventory For Organizations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-manage-device-inventory-for-organizations.md
The device details page displays comprehensive device information, including the
|||
| **Attributes** | Displays full device details such as class, data source, firmware details, activity, type, protocols, Purdue level, sensor, site, zone, and more. |
| **Backplane** | Displays the backplane hardware configuration, including slot and rack information. Select a slot in the backplane view to see the details of the underlying devices. The backplane tab is usually visible for Purdue level 1 devices that have slots in use, such as PLC, RTU, and DCS devices. |
-|**Vulnerabilities** | Displays current vulnerabilities specific to the device. Vulnerability data is based on the repository of standards based vulnerability data documented at the US government National Vulnerability Database (NVD). Select the CVE name to see the CVE details and description. You can also view vulnerability data across your network with the [Defender for IoT Vulnerability workbook](workbooks.md#view-workbooks). |
+|**Vulnerabilities** | Displays current vulnerabilities specific to the device. Defender for IoT provides vulnerability coverage for [supported OT vendors](resources-manage-proprietary-protocols.md) where Defender for IoT can detect firmware models and firmware versions.<br><br>Vulnerability data is based on the repository of standards-based vulnerability data documented in the US government National Vulnerability Database (NVD). Select the CVE name to see the CVE details and description. <br><br>**Tip**: View vulnerability data across your network with the [Defender for IoT Vulnerability workbook](workbooks.md#view-workbooks).|
|**Alerts** | Displays current open alerts related to the device. Select any alert to view more details, and then select **View full details** to open the alert page to view the full alert information and take action. For more information on the alerts page, see [View alerts on the Azure portal](how-to-manage-cloud-alerts.md#view-alerts-on-the-azure-portal). |
|**Recommendations** | Displays current recommendations for the device, such as Review PLC operating mode and Review unauthorized devices. For more information on recommendations, see [Enhance security posture with security recommendations](recommendations.md). |
defender-for-iot How To Manage Subscriptions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-manage-subscriptions.md
Your new plan is listed under the relevant subscription in the **Plans** grid.
Edit your Defender for IoT plans for OT networks if you need to change your plan commitment or update the number of committed devices or sites.
-For example, you may have more devices that require monitoring if you're increasing existing site coverage, or there are network changes such as adding switches. If the number of actual devices detected by Defender for IoT exceeds the number of committed devices currently listed on your subscription, you may see a warning message reminding you to update the number of committed devices on the relevant subscription.
+For example, you may have more devices that require monitoring if you're increasing existing site coverage, or there are network changes such as adding switches.
+
+> [!NOTE]
+> If the number of actual devices detected by Defender for IoT exceeds the number of committed devices currently listed on your subscription, you may see a warning message that you've exceeded the maximum number of devices for your subscription. This indicates that you need to update the number of committed devices on the relevant subscription to the actual number of devices being monitored. Select the link in the warning message to go to the **Plans and pricing** page, where the **Edit plan** pane is already open.
**To edit a plan:**
event-hubs Event Hubs Go Get Started Send https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/event-hubs-go-get-started-send.md
import (
func main() {
+ // create a container client using a connection string and container name
+ checkClient, err := container.NewClientFromConnectionString("AZURE STORAGE CONNECTION STRING", "CONTAINER NAME", nil)
+
// create a checkpoint store that will be used by the event hub
- checkpointStore, err := checkpoints.NewBlobStoreFromConnectionString("AZURE STORAGE CONNECTION STRING", "BLOB CONTAINER NAME", nil)
+ checkpointStore, err := checkpoints.NewBlobStore(checkClient, nil)
if err != nil { panic(err)
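For orientation, a minimal self-contained sketch of the updated pattern in this diff follows. The placeholder connection strings and names mirror the snippet above; the consumer client and processor lines are assumptions about the rest of the sample (based on the `azeventhubs` package), not a copy of it:

```go
package main

import (
	"context"

	"github.com/Azure/azure-sdk-for-go/sdk/messaging/azeventhubs"
	"github.com/Azure/azure-sdk-for-go/sdk/messaging/azeventhubs/checkpoints"
	"github.com/Azure/azure-sdk-for-go/sdk/storage/azblob/container"
)

func main() {
	// create a container client using a connection string and container name
	checkClient, err := container.NewClientFromConnectionString("AZURE STORAGE CONNECTION STRING", "CONTAINER NAME", nil)
	if err != nil {
		panic(err)
	}

	// create a checkpoint store backed by the blob container client
	checkpointStore, err := checkpoints.NewBlobStore(checkClient, nil)
	if err != nil {
		panic(err)
	}

	// create a consumer client for the event hub (placeholder values)
	consumerClient, err := azeventhubs.NewConsumerClientFromConnectionString(
		"EVENT HUBS CONNECTION STRING", "EVENT HUB NAME", azeventhubs.DefaultConsumerGroup, nil)
	if err != nil {
		panic(err)
	}
	defer consumerClient.Close(context.TODO())

	// a Processor would normally be created with the consumer client and
	// checkpoint store, then started with processor.Run(ctx)
	processor, err := azeventhubs.NewProcessor(consumerClient, checkpointStore, nil)
	if err != nil {
		panic(err)
	}
	_ = processor
}
```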
external-attack-surface-management Understanding Billable Assets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/external-attack-surface-management/understanding-billable-assets.md
When customers create their first Microsoft Defender External Attack Surface Man
The following kinds of assets are considered billable: -- Approved hosts
+- Approved host : IP combinations
- Approved domains
- Approved IP addresses
Assets are only categorized as billable if they have been placed in the Approved
## Calculating billable assets
-This section describes the conditions that the three asset types listed above must meet to be deemed billable. The sum of these billable asset counts comprises your total number of billable assets and thus determines the cost of your subscription.
+This section describes the conditions that the three aforementioned asset types must meet to be deemed billable. The sum of these billable asset counts comprises your total number of billable assets and thus determines the cost of your subscription.
-### Approved hosts
+### Approved host : IP combinations
-Hosts are considered billable if the Defender EASM system has observed resolutions within the last 30 days. All host-IP combinations from Approved Inventory will be identified as potential billable assets. All hosts in the Approved Inventory state are considered billable, regardless of the state of the coinciding IP address.
+Hosts are considered billable if the Defender EASM system has observed resolutions within the last 30 days. If the host is in the Approved Inventory state, the host : IP combination is identified as a billable asset. All hosts in the Approved Inventory state are considered billable, regardless of the state of the coinciding IP address. The IP address does not need to be in the Approved Inventory state for the host : IP combination to be included in your billable asset count.
-For example: if www.contoso.com has resolved to 1.2.3.4 and 5.6.7.8 in the past 30 days, both combinations will be added to the host count list:
+For example: if www.contoso.com has resolved to 1.2.3.4 and 5.6.7.8 in the past 30 days, both combinations are added to the host count list:
- www.contoso.com / 1.2.3.4
- www.contoso.com / 5.6.7.8
-The list is then analyzed to identify duplicate entries and eliminate duplicate hosts. If a host is a subdomain of a parent host that resolves to the same IP address, we'll exclude the child from the billable host count. For example, if both www.contoso.com and contoso.com resolve to 1.2.3.4, then we'll exclude www.contoso.com/ 1.2.3.4 from our Host Count list.
+The list is then analyzed to identify duplicate entries and eliminate duplicate hosts. If a host is a subdomain of a parent host that resolves to the same IP address, we exclude the child from the billable host count. For example, if both www.contoso.com and contoso.com resolve to 1.2.3.4, then we exclude www.contoso.com / 1.2.3.4 from our Host Count list.
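To make the exclusion rule concrete, here's a toy Go sketch of the subdomain deduplication described above. The `combo` type and `billableCombos` helper are illustrative assumptions for this example only; they aren't part of any Defender EASM API:

```go
package main

import (
	"fmt"
	"strings"
)

// combo is an approved host : IP pair observed to resolve in the last 30 days.
type combo struct {
	host string
	ip   string
}

// billableCombos drops a host : IP pair when the host is a subdomain of
// another approved host that resolves to the same IP address.
func billableCombos(observed []combo) []combo {
	seen := make(map[combo]bool, len(observed))
	for _, c := range observed {
		seen[c] = true
	}

	var billable []combo
	for _, c := range observed {
		excluded := false
		// walk up the labels: www.contoso.com -> contoso.com -> com
		rest := c.host
		for {
			i := strings.Index(rest, ".")
			if i < 0 {
				break
			}
			rest = rest[i+1:]
			if seen[combo{host: rest, ip: c.ip}] {
				excluded = true
				break
			}
		}
		if !excluded {
			billable = append(billable, c)
		}
	}
	return billable
}

func main() {
	observed := []combo{
		{"www.contoso.com", "1.2.3.4"},
		{"contoso.com", "1.2.3.4"},
		{"www.contoso.com", "5.6.7.8"},
	}
	// prints contoso.com / 1.2.3.4 and www.contoso.com / 5.6.7.8;
	// www.contoso.com / 1.2.3.4 is excluded as a child of contoso.com.
	for _, c := range billableCombos(observed) {
		fmt.Printf("%s / %s\n", c.host, c.ip)
	}
}
```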
### Approved IP addresses
-Excluding the IP addresses that resolve to a billable resolving host, all active IP addresses in the Approved Inventory state will be part of the billable IP address count.
+Excluding the IP addresses that resolve to a billable resolving host, all active IP addresses in the Approved Inventory state are part of the billable IP address count.
For an IP address to be considered active and therefore billable, it must have one of the following:
These values are all considered ΓÇ£recentΓÇ¥ if observed within the last 30 days
### Approved domains
-Excluding the domains associated with a billable resolving host, all domains in the Approved Inventory state will be part of the billable domain count. If a billable host is registered to the domain in question, the domain will not be included in the billable asset count.
+Excluding the domains associated with a billable resolving host, all domains in the Approved Inventory state are part of the billable domain count. If a billable host is registered to the domain in question, the domain is not included in the billable asset count.
-For example: if server1.contoso.com has recently resolved to an IP address and is therefore included in your billable asset count, then contoso.com will not be added to this count.
+For example: if server1.contoso.com has recently resolved to an IP address and is therefore included in your billable asset count, then contoso.com is not added to this count.
## Viewing billable asset data
firewall Rule Processing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/rule-processing.md
If still no match is found within application rules, then the packet is evaluate
### DNAT rules and Network rules
-Inbound Internet connectivity can be enabled by configuring Destination Network Address Translation (DNAT) as described in [Tutorial: Filter inbound traffic with Azure Firewall DNAT using the Azure portal](tutorial-firewall-dnat.md). NAT rules are applied in priority before network rules. If a match is found, an implicit corresponding network rule to allow the translated traffic is added. For security reasons, the recommended approach is to add a specific internet source to allow DNAT access to the network and avoid using wildcards.
+Inbound Internet connectivity can be enabled by configuring Destination Network Address Translation (DNAT) as described in [Tutorial: Filter inbound traffic with Azure Firewall DNAT using the Azure portal](tutorial-firewall-dnat.md). NAT rules are applied in priority before network rules. If a match is found, an implicit corresponding network rule to allow the translated traffic is added. This means that the traffic won't be subject to any further processing by other network rules. For security reasons, the recommended approach is to add a specific internet source to allow DNAT access to the network and avoid using wildcards.
Application rules aren't applied for inbound connections. So if you want to filter inbound HTTP/S traffic, you should use Web Application Firewall (WAF). For more information, see [What is Azure Web Application Firewall?](../web-application-firewall/overview.md)
governance Assignment Structure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/concepts/assignment-structure.md
Title: Details of the policy assignment structure
description: Describes the policy assignment definition used by Azure Policy to relate policy definitions and parameters to resources for evaluation. Last updated 10/03/2022 --++ # Azure Policy assignment structure
governance Attestation Structure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/concepts/attestation-structure.md
Title: Details of the Azure Policy attestation structure
description: Describes the components of the Azure Policy attestation JSON object. Last updated 09/23/2022 --++ # Azure Policy attestation structure
governance Definition Structure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/concepts/definition-structure.md
Title: Details of the policy definition structure
description: Describes how policy definitions are used to establish conventions for Azure resources in your organization. Last updated 08/29/2022 --++ # Azure Policy definition structure
governance Effects https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/concepts/effects.md
Title: Understand how effects work description: Azure Policy definitions have various effects that determine how compliance is managed and reported.-+ Last updated 02/22/2023 -+ # Understand Azure Policy effects
governance Event Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/concepts/event-overview.md
Title: Reacting to Azure Policy state change events
description: Use Azure Event Grid to subscribe to Azure Policy events, which allow applications to react to state changes without the need for complicated code. Last updated 07/12/2022 --++ # Reacting to Azure Policy state change events
governance Exemption Structure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/concepts/exemption-structure.md
Title: Details of the policy exemption structure
description: Describes the policy exemption definition used by Azure Policy to exempt resources from evaluation of initiatives or definitions. Last updated 11/03/2022 --++ # Azure Policy exemption structure
governance Policy Applicability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/concepts/policy-applicability.md
Title: Azure Policy applicability logic
description: Describes the rules Azure Policy uses to determine whether the policy is applied to its assigned resources. Last updated 09/22/2022 --++ # What is applicability in Azure Policy?
governance Policy For Kubernetes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/concepts/policy-for-kubernetes.md
description: Learn how Azure Policy uses Rego and Open Policy Agent to manage cl
Last updated 06/17/2022 --++ # Understand Azure Policy for Kubernetes clusters
governance Determine Non Compliance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/how-to/determine-non-compliance.md
Title: Determine causes of non-compliance
description: When a resource is non-compliant, there are many possible reasons. Discover what caused the non-compliance quickly and easily. Last updated 06/09/2022 --++ # Determine causes of non-compliance
governance Export Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/how-to/export-resources.md
Last updated 04/18/2022
ms.devlang: azurecli--++ # Export Azure Policy resources
governance Get Compliance Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/how-to/get-compliance-data.md
Title: Get policy compliance data description: Azure Policy evaluations and effects determine compliance. Learn how to get the compliance details of your Azure resources.-+ Last updated 11/03/2022 -+ # Get compliance data of Azure resources
governance Remediate Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/how-to/remediate-resources.md
description: This guide walks you through the remediation of resources that are
Last updated 07/29/2022 --++ # Remediate non-compliant resources with Azure Policy
governance Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/overview.md
Title: Overview of Azure Policy
description: Azure Policy is a service in Azure, that you use to create, assign and, manage policy definitions in your Azure environment. Last updated 12/02/2022 --++ # What is Azure Policy?
governance Policy Glossary https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/policy-glossary.md
Title: Azure Policy glossary description: A glossary defining the terminology used throughout Azure Policy--++ Last updated 07/13/2022
governance Gov Dod Impact Level 4 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/gov-dod-impact-level-4.md
description: Details of the DoD Impact Level 4 (Azure Government) Regulatory Com
Last updated 08/02/2022 --++ # Details of the DoD Impact Level 4 (Azure Government) Regulatory Compliance built-in initiative
governance Gov Dod Impact Level 5 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/gov-dod-impact-level-5.md
description: Details of the DoD Impact Level 5 (Azure Government) Regulatory Com
Last updated 08/02/2022 --++ # Details of the DoD Impact Level 5 (Azure Government) Regulatory Compliance built-in initiative
governance Gov Nist Sp 800 53 R4 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/gov-nist-sp-800-53-r4.md
description: Details of the NIST SP 800-53 Rev. 4 (Azure Government) Regulatory
Last updated 08/02/2022 --++ # Details of the NIST SP 800-53 Rev. 4 (Azure Government) Regulatory Compliance built-in initiative
governance Index https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/index.md
Title: Index of policy samples
description: Index of built-ins for Azure Policy. Categories include Tags, Regulatory Compliance, Key Vault, Kubernetes, Guest Configuration, and more. Last updated 05/11/2022 --++ # Azure Policy Samples
governance Nist Sp 800 53 R4 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/nist-sp-800-53-r4.md
description: Details of the NIST SP 800-53 Rev. 4 Regulatory Compliance built-in
Last updated 08/02/2022 --++ # Details of the NIST SP 800-53 Rev. 4 Regulatory Compliance built-in initiative
governance Pattern Deploy Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/pattern-deploy-resources.md
Title: "Pattern: Deploy resources with a policy definition"
description: This Azure Policy pattern provides an example of how to deploy resources with a deployIfNotExists policy definition. Last updated 05/16/2022 --++ # Azure Policy pattern: deploy resources
governance Pci Dss 3 2 1 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/pci-dss-3-2-1.md
Title: Regulatory Compliance details for PCI DSS 3.2.1 description: Details of the PCI DSS 3.2.1 Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment.--++ Last updated 10/13/2022
governance Swift Cscf V2021 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/swift-cscf-v2021.md
description: "Details of the [Preview]: SWIFT CSCF v2021 Regulatory Compliance b
Last updated 05/12/2022 --++ # Details of the SWIFT CSP v2021 Regulatory Compliance built-in initiative
governance Route State Change Events https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/tutorials/route-state-change-events.md
Title: "Tutorial: Route policy state change events to Event Grid with Azure CLI" description: In this tutorial, you configure Event Grid to listen for policy state change events and call a webhook.-+ Last updated 07/19/2022 -+ # Tutorial: Route policy state change events to Event Grid with Azure CLI
governance Guidance For Throttled Requests https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/resource-graph/concepts/guidance-for-throttled-requests.md
Title: Guidance for throttled requests description: Learn to group, stagger, paginate, and query in parallel to avoid requests being throttled by Azure Resource Graph.--++ Last updated 08/18/2022
governance Query Language https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/resource-graph/concepts/query-language.md
Title: Understand the query language
description: Describes Resource Graph tables and the available Kusto data types, operators, and functions usable with Azure Resource Graph. Last updated 06/15/2022 --++ # Understanding the Azure Resource Graph query language
governance Work With Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/resource-graph/concepts/work-with-data.md
Title: Work with large data sets description: Understand how to get, format, page, and skip records in large data sets while working with Azure Resource Graph.-+ Last updated 11/04/2022 -+ # Working with large Azure resource data sets
governance First Query Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/resource-graph/first-query-dotnet.md
description: In this quickstart, you follow the steps to enable the Resource Gra
Last updated 01/20/2023 -+ # Quickstart: Run your first Resource Graph query using .NET
governance First Query Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/resource-graph/first-query-portal.md
Title: 'Quickstart: Your first portal query' description: In this quickstart, you follow the steps to run your first query from Azure portal using Azure Resource Graph Explorer.--++ Last updated 10/12/2022
governance First Query Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/resource-graph/first-query-powershell.md
description: In this quickstart, you follow the steps to enable the Resource Gra
Last updated 06/15/2022 --++ # Quickstart: Run your first Resource Graph query using Azure PowerShell
governance Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/resource-graph/overview.md
description: Understand how the Azure Resource Graph service enables complex que
Last updated 06/15/2022 --++ # What is Azure Resource Graph?
governance Paginate Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/resource-graph/paginate-powershell.md
Title: 'Paginate Azure Resource Graph query results using Azure PowerShell'
description: In this quickstart, you control the volume Azure Resource Graph query output by using pagination in Azure PowerShell. Last updated 11/11/2022 --++ # Quickstart: Paginate Azure Resource Graph query results using Azure PowerShell
governance Supported Tables Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/resource-graph/reference/supported-tables-resources.md
description: Provide a list of the Azure Resource Manager resource types support
Last updated 10/26/2022 --++ # Azure Resource Graph table and resource type reference
governance Advanced https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/resource-graph/samples/advanced.md
description: Use Azure Resource Graph to run some advanced queries, including wo
Last updated 06/15/2022 --++ # Advanced Resource Graph query samples
governance Starter https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/resource-graph/samples/starter.md
Title: Starter query samples description: Use Azure Resource Graph to run some starter queries, including counting resources, ordering resources, or by a specific tag.--++ Last updated 07/19/2022
governance Create Share Query https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/resource-graph/tutorials/create-share-query.md
Title: "Tutorial: Manage queries in the Azure portal" description: In this tutorial, you create a Resource Graph Query and share the new query with others in the Azure portal.--++ Last updated 10/06/2022
load-balancer Components https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/components.md
Previously updated : 12/27/2021 Last updated : 3/27/2023 -+ # Azure Load Balancer components
The nature of the IP address determines the **type** of load balancer created. P
| | Public load balancer | Internal load balancer |
| - | - | - |
| **Frontend IP configuration**| Public IP address | Private IP address|
-| **Description** | A public load balancer maps the public IP and port of incoming traffic to the private IP and port of the VM. Load balancer maps traffic the other way around for the response traffic from the VM. You can distribute specific types of traffic across multiple VMs or services by applying load-balancing rules. For example, you can spread the load of web request traffic across multiple web servers.| An internal load balancer distributes traffic to resources that are inside a virtual network. Azure restricts access to the frontend IP addresses of a virtual network that are load balanced. Front-end IP addresses and virtual networks are never directly exposed to an internet endpoint, meaning an internal load balancer cannot accept incoming traffic from the internet. Internal line-of-business applications run in Azure and are accessed from within Azure or from on-premises resources. |
+| **Description** | A public load balancer maps the public IP and port of incoming traffic to the private IP and port of the VM. Load balancer maps traffic the other way around for the response traffic from the VM. You can distribute specific types of traffic across multiple VMs or services by applying load-balancing rules. For example, you can spread the load of web request traffic across multiple web servers.| An internal load balancer distributes traffic to resources that are inside a virtual network. Azure restricts access to the frontend IP addresses of a virtual network that are load balanced. Front-end IP addresses and virtual networks are never directly exposed to an internet endpoint, meaning an internal load balancer can't accept incoming traffic from the internet. Internal line-of-business applications run in Azure and are accessed from within Azure or from on-premises resources. |
| **SKUs supported** | Basic, Standard | Basic, Standard |

![Tiered load balancer example](./media/load-balancer-overview/load-balancer.png)
Load balancer can have multiple frontend IPs. Learn more about [multiple fronten
The group of virtual machines or instances in a virtual machine scale set that is serving the incoming request. To scale cost-effectively to meet high volumes of incoming traffic, computing guidelines generally recommend adding more instances to the backend pool.
-Load balancer instantly reconfigures itself via automatic reconfiguration when you scale instances up or down. Adding or removing VMs from the backend pool reconfigures the load balancer without additional operations. The scope of the backend pool is any virtual machine in a single virtual network.
+Load balancer instantly reconfigures itself via automatic reconfiguration when you scale instances up or down. Adding or removing VMs from the backend pool reconfigures the load balancer without other operations. The scope of the backend pool is any virtual machine in a single virtual network.
Backend pools support addition of instances via [network interface or IP addresses](backend-pool-management.md).
When considering how to design your backend pool, design for the least number of
## Health probes
-A health probe is used to determine the health status of the instances in the backend pool. During load balancer creation, configure a health probe for the load balancer to use. This health probe will determine if an instance is healthy and can receive traffic.
+A health probe is used to determine the health status of the instances in the backend pool. During load balancer creation, configure a health probe for the load balancer to use. This health probe determines if an instance is healthy and can receive traffic.
You can define the unhealthy threshold for your health probes. When a probe fails to respond, the load balancer stops sending new connections to the unhealthy instances. A probe failure doesn't affect existing connections. The connection continues until the application:
For example, use a load balancer rule for port 80 to route traffic from your fro
## High Availability Ports
-A load balancer rule configured with **'protocol - all and port - 0'** is known as an High Availability (HA) port rule. This rule enables a single rule to load-balance all TCP and UDP flows that arrive on all ports of an internal Standard Load Balancer.
+A load balancer rule configured with **'protocol - all and port - 0'** is known as a High Availability (HA) port rule. This rule enables a single rule to load-balance all TCP and UDP flows that arrive on all ports of an internal Standard Load Balancer.
The load-balancing decision is made per flow. This action is based on the following five-tuple connection:
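The five tuple itself is elided in this excerpt; for Azure Load Balancer it consists of source IP, source port, destination IP, destination port, and protocol. As a rough illustration of per-flow behavior, here's a toy Go sketch; the FNV-based hash is an assumption for demonstration, not the platform's actual hashing algorithm:

```go
package main

import (
	"fmt"
	"hash/fnv"
)

// fiveTuple captures the fields used for per-flow load-balancing decisions.
type fiveTuple struct {
	srcIP, dstIP     string
	srcPort, dstPort uint16
	protocol         string // "tcp" or "udp"
}

// pickBackend maps a flow to one of n backend instances using a toy hash.
func pickBackend(t fiveTuple, n int) int {
	h := fnv.New32a()
	fmt.Fprintf(h, "%s|%d|%s|%d|%s", t.srcIP, t.srcPort, t.dstIP, t.dstPort, t.protocol)
	return int(h.Sum32()) % n
}

func main() {
	flow := fiveTuple{"10.0.0.4", 50123, "10.1.0.10", 443, "tcp"}
	// every packet of the same flow hashes to the same backend instance
	fmt.Println("backend:", pickBackend(flow, 3))
}
```

Because the hash input is identical for every packet of a given flow, all packets of that flow land on the same backend instance.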
Basic load balancer doesn't support outbound rules.
- Learn about load balancer [limits](../azure-resource-manager/management/azure-subscription-service-limits.md)
- Load balancer provides load balancing and port forwarding for specific TCP or UDP protocols. Load-balancing rules and inbound NAT rules support TCP and UDP, but not other IP protocols including ICMP.
-- Load Balancer backend pool cannot consist of a [Private Endpoint](../private-link/private-endpoint-overview.md).
+- Load Balancer backend pool can't consist of a [Private Endpoint](../private-link/private-endpoint-overview.md).
- Outbound flow from a backend VM to a frontend of an internal Load Balancer will fail.
-- A load balancer rule cannot span two virtual networks. All load balancer frontends and their backend instances must be in a single virtual network.
+- A load balancer rule can't span two virtual networks. All load balancer frontends and their backend instances must be in a single virtual network.
- Forwarding IP fragments isn't supported on load-balancing rules. IP fragmentation of UDP and TCP packets isn't supported on load-balancing rules.
-- You can only have 1 Public Load Balancer (NIC based) and 1 internal Load Balancer (NIC based) per availability set. However, this constraint doesn't apply to IP-based load balancers.
+- You can only have one Public Load Balancer (NIC based) and one internal Load Balancer (NIC based) per availability set. However, this constraint doesn't apply to IP-based load balancers.
## Next steps
load-balancer Load Balancer Insights https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/load-balancer-insights.md
Last updated 10/27/2020 -+ # Using Insights to monitor and configure your Azure Load Balancer
-Through Azure Monitor for networks, you're provided functional dependency visualizations and pre-configured metrics dashboard for your Load Balancers. These visuals help empower you to make informed design decisions and rapidly localize, diagnose, and resolve any faults.
+Through Azure Monitor for networks, you're provided functional dependency visualizations and preconfigured metrics dashboard for your Load Balancers. These visuals help empower you to make informed design decisions and rapidly localize, diagnose, and resolve any faults.
>[!NOTE]
>This feature is in Preview, and the functional dependency view and preconfigured dashboard may change to improve this experience.
Through Azure Monitor for networks, you're provided functional dependency visual
## Functional dependency view
-The functional dependency view will enable you to picture even the most complex load balancer setups. With visual feedback on your latest Load Balancer configuration, you can make updates while keeping your configuration in mind.
+The functional dependency view enables you to picture even the most complex load balancer setups. With visual feedback on your latest Load Balancer configuration, you can make updates while keeping your configuration in mind.
-You can access this view by visiting the Insights blade of your Load Balancer resource in Azure.
+You can access this view by visiting the Insights page of your Load Balancer resource in Azure.
For Standard Load Balancers, your backend pool resources are color-coded with Health Probe status indicating the current availability of your backend pool to serve traffic. Alongside the above topology, you're presented with a time-wise graph of health status, giving a snapshot view of the health of your application.

## Metrics dashboard
-From the Insights blade of your Load Balancer, you can select More Detailed Metrics to view a pre-configured [Azure Monitor Workbook](../azure-monitor/visualize/workbooks-overview.md) containing metrics visuals relevant to specific aspects of your Load Balancer. This dashboard will show the Load Balancer status and links to relevant documentation at the top of the page.
+From the Insights page of your Load Balancer, you can select More Detailed Metrics to view a preconfigured [Azure Monitor Workbook](../azure-monitor/visualize/workbooks-overview.md) containing metrics visuals relevant to specific aspects of your Load Balancer. This dashboard shows the Load Balancer status and links to relevant documentation at the top of the Overview tab.
-At first you'll be presented with the Overview tab. You can navigate through the available tabs each of which contain visuals relevant to a specific aspect of your Load Balancer. Explicit guidance for each is available in the dashboard at the bottom of each tab.
+You can navigate through the available tabs, each of which contains visuals relevant to a specific aspect of your Load Balancer. Explicit guidance for each is available in the dashboard at the bottom of each tab.
The dashboard tabs currently available are: * Overview
The dashboard tabs currently available are:
### Overview tab The Overview tab contains a searchable grid with the overall Data Path Availability and Health Probe Status for each of the Frontend IPs attached to your Load Balancer. These metrics indicate whether the Frontend IP is responsive and the compute instances in your Backend Pool are individually responsive to inbound connections.
-You can also view the overall data throughput for each Frontend IP on this page to get a sense of whether you are producing and receive expected traffic levels. The guidance at the bottom of the page will direct you to the appropriate tab should you see any irregular values.
+You can also view the overall data throughput for each Frontend IP on this page to get a sense of whether you're producing and receiving expected traffic levels. The guidance at the bottom of the page directs you to the appropriate tab should you see any irregular values.
### Frontend and Backend Availability tab

The Frontend and Backend Availability tabs show the Data Path Throughput and Health Probe Status metrics presented in a few useful views. The first graph shows the aggregate value so you can determine whether there's an issue. The rest of the graphs show these metrics split by various dimensions so that you can troubleshoot and identify the sources of any inbound availability issues.
The Frontend and Backend Availability tabs show the Data Path Throughput and Hea
A workflow for viewing these graphs is provided at the bottom of the page with common causes for various symptoms.

### Data Throughput tab
-The Data Throughput tab allows you to review your inbound and outbound throughput to identify if your traffic patterns are as expected. It will show the inbound and outbound data throughput split by Frontend IP and Frontend Port so that you can identify if how the services you have running are performing individually.
+The Data Throughput tab allows you to review your inbound and outbound throughput to identify if your traffic patterns are as expected. It shows the inbound and outbound data throughput split by Frontend IP and Frontend Port so that you can identify how the services you're running are performing individually.
### Flow Distribution
-The Flow Distribution Tab will help you visualize and manage the number of flows your backend instances are receiving and producing. It shows the Flow Creation Rate and Flow Count for inbound and outbound traffic as well as the Network Traffic each VM and virtual machine scale set instance is receiving.
+The Flow Distribution Tab helps you visualize and manage the number of flows your backend instances are receiving and producing. It shows the Flow Creation Rate and Flow Count for inbound and outbound traffic as well as the Network Traffic each VM and Virtual Machine Scale Set instance is receiving.
-These views can give you feedback on whether your Load Balancer configuration or traffic patterns are leading to imbalanced traffic. For example, if you have session affinity configured and a single client is making a disproportionate number of requests. It will also let you know if you are approaching the [per VM flow limit](../virtual-network/virtual-machine-network-throughput.md#flow-limits-and-active-connections-recommendations) for your machine size.
+These views can give you feedback on whether your Load Balancer configuration or traffic patterns are leading to imbalanced traffic, for example, if you have session affinity configured and a single client is making a disproportionate number of requests. They also let you know if you're approaching the [per VM flow limit](../virtual-network/virtual-machine-network-throughput.md#flow-limits-and-active-connections-recommendations) for your machine size.
### Connection Monitors
-The Connection Monitors tab will show you the round-trip latency on a global map for all of the [Connection Monitors](../network-watcher/connection-monitor.md) you've configured. These visuals provide useful information for services with strict latency requirements. To meet your requirements you may need to add additional regional deployments or move to a [cross-regional load balancing](./cross-region-overview.md) model
+The Connection Monitors tab shows you the round-trip latency on a global map for all of the [Connection Monitors](../network-watcher/connection-monitor.md) you've configured. These visuals provide useful information for services with strict latency requirements. To meet your requirements, you may need to add other regional deployments or move to a [cross-regional load balancing](./cross-region-overview.md) model.
### Metric Definitions The Metric Definitions tab contains all the information shown in the [Multi-dimensional Metrics article](./load-balancer-standard-diagnostics.md#multi-dimensional-metrics). ## Next steps
-* Review the dashboard and provide feedback using the below link if there is anything that can be improved
+* Review the dashboard and provide feedback using the link below if there's anything that can be improved
* [Review the metrics documentation to ensure you understand how each metric is calculated](./load-balancer-standard-diagnostics.md#multi-dimensional-metrics) * [Create Connection Monitors for your Load Balancer](../network-watcher/connection-monitor.md) * [Create your own workbooks](../azure-monitor/visualize/workbooks-overview.md), you can take inspiration by clicking on the edit button in your detailed metrics dashboard
machine-learning How To Manage Environments V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-manage-environments-v2.md
Title: 'Manage Azure Machine Learning environments with the CLI & SDK (v2)' description: Learn how to manage Azure Machine Learning environments using Python SDK and Azure CLI extension for Machine Learning.-
machine-learning How To R Interactive Development https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-r-interactive-development.md
ms.devlang: r
# Interactive R development-
+
[!INCLUDE [dev v2](../../includes/machine-learning-dev-v2.md)] This article shows you how to use R on a compute instance in Azure Machine Learning studio, running an R kernel in a Jupyter notebook.
Your notebook is now ready for you to run R commands.
You can upload files to your workspace file storage and access them in R. But for files stored in Azure [_data assets_ or data from _datastores_](concept-data.md), you first need to install a few packages.
-This section describes how to use Python and the `reticulate` package to load your data assets and datastores into R from an interactive session. You'll read tabular data as Pandas DataFrames using the [`azureml-fsspec`](/python/api/azureml-fsspec/?view=azure-ml-py&preserve-view=true) Python package and the `reticulate` R package.
+This section describes how to use Python and the `reticulate` package to load your data assets and datastores into R from an interactive session. You'll read tabular data as Pandas DataFrames using the [`azureml-fsspec`](/python/api/azureml-fsspec/?view=azure-ml-py&preserve-view=true) Python package and the `reticulate` R package. There's also an example of reading the data into an R `data.frame`.
To install these packages:
The install script performs the following steps:
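In outline, the installation can follow this minimal R sketch; it assumes `reticulate` is installed from CRAN and then used to pip-install `azureml-fsspec`, and the actual install script may differ:

```r
# Assumption: install reticulate from CRAN, then use it to pip-install
# the azureml-fsspec Python package into the active Python environment.
install.packages("reticulate")
reticulate::py_install("azureml-fsspec", pip = TRUE)
```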
### Read tabular data from registered data assets or datastores
-When your data is stored in a data asset [created in Azure Machine Learning](how-to-create-data-assets.md?tabs=cli#create-a-file-asset), use these steps to read that tabular file into an R `data.frame`:
+When your data is stored in a data asset [created in Azure Machine Learning](how-to-create-data-assets.md?tabs=cli#create-a-file-asset), use these steps to read that tabular file into a Pandas DataFrame or an R `data.frame`:
> [!NOTE] > Reading a file with `reticulate` only works with tabular data.
When your data is stored in a data asset [created in Azure Machine Learning](how
[!Notebook-r[](~/azureml-examples-mavaisma-r-azureml/tutorials/using-r-with-azureml/02-develop-in-interactive-r/work-with-data-assets.ipynb?name=read-uri)]
+Alternatively, you can use a Datastore URI to access different files on a registered Datastore, and read this into an R `data.frame`.
+
+ 1. Create a Datastore URI, using your own values in the following format:
+
+ ```r
+ subscription <- '<subscription_id>'
+ resource_group <- '<resource_group>'
+ workspace <- '<workspace>'
+ datastore_name <- '<datastore>'
+ path_on_datastore <- '<path>'
+
+ uri <- paste0("azureml://subscriptions/", subscription, "/resourcegroups/", resource_group, "/workspaces/", workspace, "/datastores/", datastore_name, "/paths/", path_on_datastore)
+ ```
+
+ > [!TIP]
+ > Rather than remembering the datastore URI format, you can copy and paste the datastore URI from the Studio UI, if you know the datastore where your file is located:
+ > 1. Navigate to the file/folder you want to read into R
+ > 1. Select the ellipsis (**...**) next to it.
+ > 1. Select from the menu **Copy URI**.
+ > 1. Select the **Datastore URI** to copy into your notebook/script.
+ > Note that you still need to create a variable for `<path>` in the code.
+ > :::image type="content" source="media/how-to-access-data-ci/datastore_uri_copy.png" alt-text="Screenshot highlighting the copy of the datastore URI.":::
+
+ 2. Create a filesystem object using the URI from the previous step:
+ ```r
+ fs <- azureml.fsspec$AzureMachineLearningFileSystem(uri, sep = "")
+ ```
+
+ 3. Read into an R `data.frame`:
+ ```r
+ df <- with(fs$open("<path>", "r") %as% f, {
+ x <- as.character(f$read(), encoding = "utf-8")
+ read.csv(textConnection(x), header = TRUE, sep = ",", stringsAsFactors = FALSE)
+ })
+ print(df)
+ ```
+
## Install R packages There are many R packages pre-installed on the compute instance.
Other than the above issues, use R as you would in any other environment, such a
## Next steps
-* [Adapt your R script to run in production](how-to-r-modify-script-for-production.md)
+* [Adapt your R script to run in production](how-to-r-modify-script-for-production.md)
machine-learning How To R Modify Script For Production https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-r-modify-script-for-production.md
RUN pip install MLflow
RUN ln -f /usr/bin/python3 /usr/bin/python # Install R packages required for logging with MLflow (these are necessary)
-RUN R -e "install.packages('MLflow', dependencies = TRUE, repos = 'https://cloud.r-project.org/')"
+RUN R -e "install.packages('mlflow', dependencies = TRUE, repos = 'https://cloud.r-project.org/')"
RUN R -e "install.packages('carrier', dependencies = TRUE, repos = 'https://cloud.r-project.org/')" RUN R -e "install.packages('optparse', dependencies = TRUE, repos = 'https://cloud.r-project.org/')" RUN R -e "install.packages('tcltk2', dependencies = TRUE, repos = 'https://cloud.r-project.org/')"
machine-learning How To Use Mlflow Cli Runs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-mlflow-cli-runs.md
Title: Track ML experiments and models with MLflow description: Use MLflow to log metrics and artifacts from machine learning runs-
Use MLflow SDK to track any metric, parameter, artifacts, or models. For detaile
All Azure Machine Learning environments already have MLflow installed for you, so no action is required if you're using a curated environment. If you want to use a custom environment:
-1. Create a `conda.yml` file with the dependencies you need:
+1. Create a `conda.yaml` file with the dependencies you need:
:::code language="yaml" source="~/azureml-examples-main//sdk/python/using-mlflow/deploy/environment/conda.yaml" highlight="7-8" range="1-12":::
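As a hypothetical minimal example, such a `conda.yaml` could look like the following; the package list is an assumption, and the referenced sample may pin different versions:

```yaml
# Hypothetical minimal conda.yaml; the actual sample may differ.
name: mlflow-env
channels:
  - conda-forge
dependencies:
  - python=3.8
  - pip
  - pip:
      - mlflow
      - azureml-mlflow
```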
machine-learning Reference Yaml Environment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-yaml-environment.md
Title: 'CLI (v2) environment YAML schema' description: Reference documentation for the CLI (v2) environment YAML schema.- - Last updated 03/31/2022
migrate How To Create Azure Vmware Solution Assessment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/how-to-create-azure-vmware-solution-assessment.md
There are two types of sizing criteria that you can use to create Azure VMware S
- If you select to use a reserved instance, you can't specify '**Discount (%)** - [Learn more](../azure-vmware/reserved-instance.md) 1. In **VM Size**:
- - The **Node type** is defaulted to **AV36**. Azure Migrate recommends the node of nodes needed to migrate the servers to Azure VMware Solution.
+ - The **Node type** is defaulted to **AV36**. Azure Migrate recommends the number of nodes needed to migrate the servers to Azure VMware Solution.
 - In **FTT setting, RAID level**, select the Failures to Tolerate and RAID combination. The selected FTT option, combined with the on-premises server disk requirement, determines the total vSAN storage required in AVS. - In **CPU Oversubscription**, specify the ratio of virtual cores associated with one physical core in the AVS node. Oversubscription of greater than 4:1 might cause performance degradation, but can be used for web server type workloads. - In **Memory overcommit factor**, specify the ratio of memory overcommit on the cluster. A value of 1 represents 100% memory use, 0.5 for example is 50%, and 2 would be using 200% of available memory. You can only add values from 0.5 to 10 up to one decimal place.
mysql How To Connect Tls Ssl https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/how-to-connect-tls-ssl.md
define('MYSQL_SSL_CERT','/FULLPATH/on-client/to/DigiCertGlobalRootCA.crt.pem');
$conn = mysqli_init(); mysqli_ssl_set($conn,NULL,NULL, "/var/www/html/DigiCertGlobalRootCA.crt.pem", NULL, NULL); mysqli_real_connect($conn, 'mydemoserver.mysql.database.azure.com', 'myadmin', 'yourpassword', 'quickstartdb', 3306, MYSQLI_CLIENT_SSL);
-if (mysqli_connect_errno($conn)) {
+if (mysqli_connect_errno()) {
die('Failed to connect to MySQL: '.mysqli_connect_error()); } ```
mysql Quickstart Create Server Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/quickstart-create-server-portal.md
Complete these steps to create a flexible server:
High Availability| Unchecked | For production servers, choose between [zone redundant high availability](concepts-high-availability.md#zone-redundant-ha-architecture) and [same-zone high availability](concepts-high-availability.md#same-zone-ha-architecture). This is highly recommended for business continuity and protection against VM failures| |Standby availability zone| No preference| Choose the standby server zone location and colocate it with the application standby server in case of zone failure | MySQL version|**5.7**| A MySQL major version.|
- Admin username |**mydemouser**| Your own sign-in account to use when you connect to the server. The admin user name can't be **azure_superuser**, **admin**, **administrator**, **root**, **guest**, or **public**.|
+ Admin username |**mydemouser**| Your own sign-in account to use when you connect to the server. The admin user name can't be **azure_superuser**, **admin**, **administrator**, **root**, **guest**, **sa**, or **public**.|
Password |Your password| A new password for the server admin account. It must contain between 8 and 128 characters. It must also contain characters from three of the following categories: English uppercase letters, English lowercase letters, numbers (0 through 9), and non-alphanumeric characters (!, $, #, %, and so on).| Compute + storage | **Burstable**, **Standard_B1ms**, **10 GiB**, **100 iops**, **7 days** | The compute, storage, IOPS, and backup configurations for your new server. Select **Configure server**. **Burstable**, **Standard_B1ms**, **10 GiB**, **100 iops**, and **7 days** are the default values for **Compute tier**, **Compute size**, **Storage size**, **iops**, and backup **Retention period**. You can leave those values as is or adjust them. For faster data loads during migration, it is recommended to increase the IOPS to the maximum size supported by compute size and later scale it back to save cost. To save the compute and storage selection, select **Save** to continue with the configuration. The following screenshot shows the compute and storage options.|
mysql How To Configure Ssl https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-configure-ssl.md
Refer to the list of [compatible drivers](concepts-compatibility.md) supported b
$conn = mysqli_init(); mysqli_ssl_set($conn,NULL,NULL, "/var/www/html/BaltimoreCyberTrustRoot.crt.pem", NULL, NULL); mysqli_real_connect($conn, 'mydemoserver.mysql.database.azure.com', 'myadmin@mydemoserver', 'yourpassword', 'quickstartdb', 3306, MYSQLI_CLIENT_SSL);
-if (mysqli_connect_errno($conn)) {
+if (mysqli_connect_errno()) {
die('Failed to connect to MySQL: '.mysqli_connect_error()); } ```
network-watcher Traffic Analytics Policy Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/traffic-analytics-policy-portal.md
Title: Deploy and manage traffic analytics using Azure Policy
+ Title: Manage traffic analytics using Azure Policy
-description: This article explains how to use Azure built-in policies to manage the deployment of traffic analytics.
+description: Learn how to use Azure built-in policies to manage the deployment of Azure Network Watcher traffic analytics.
- Previously updated : 02/09/2022 Last updated : 04/18/2023 -+
-# Deploy and manage Azure Network Watcher traffic analytics using Azure Policy
+# Manage Azure Network Watcher traffic analytics using Azure Policy
-Azure Policy helps to enforce organizational standards and to assess compliance at-scale. Common use cases for Azure Policy include implementing governance for resource consistency, regulatory compliance, security, cost, and management. In this article, we will cover three built-in policies available for [Traffic Analytics](./traffic-analytics.md) to manage your setup.
+Azure Policy helps to enforce organizational standards and to assess compliance at scale. Common use cases for Azure Policy include implementing governance for resource consistency, regulatory compliance, security, cost, and management. In this article, you learn how to use three built-in policies available for [traffic analytics](./traffic-analytics.md) to manage your setup.
-If you are creating an Azure Policy definition for the first time, you can read through:
-- [Azure Policy overview](../governance/policy/overview.md) -- [Tutorial for creating an Azure Policy assignment](../governance/policy/assign-policy-portal.md#create-a-policy-assignment).
+To learn more about Azure Policy, see [What is Azure Policy?](../governance/policy/overview.md) and [Quickstart: Create a policy assignment to identify non-compliant resources](../governance/policy/assign-policy-portal.md).
+## <a name="audit"></a>Audit flow logs using a built-in policy
-## Locate the policies
-1. Go to the Azure portal ΓÇô [portal.azure.com](https://portal.azure.com)
+The **Network Watcher flow logs should have traffic analytics enabled** policy audits all existing Azure Resource Manager objects of type `Microsoft.Network/networkWatchers/flowLogs` and checks whether traffic analytics is enabled via the `networkWatcherFlowAnalyticsConfiguration.enabled` property of the flow logs resource. It flags the flow logs resources that have the property set to `false`.
-Navigate to Azure Policy page by searching for Policy in the top search bar
-![Policy Home Page](./media/network-watcher-builtin-policy/1_policy-search.png)
+To assign the policy and audit your flow logs, follow these steps:
-2. Head over to the **Assignments** tab from the left pane
+1. Sign in to the [Azure portal](https://portal.azure.com).
-![Assignments Tab](./media/network-watcher-builtin-policy/2_assignments-tab.png)
+1. In the search box at the top of the portal, enter *policy*. Select **Policy** in the search results.
-3. Click on **Assign Policy** button
+ :::image type="content" source="./media/traffic-analytics-policy-portal/azure-portal.png" alt-text="Screenshot of searching for policy in the Azure portal." lightbox="./media/traffic-analytics-policy-portal/azure-portal.png":::
-![Assign Policy Button](./media/network-watcher-builtin-policy/3_assign-policy-button.png)
+1. Select **Assignments**, then select **Assign policy**.
-4. Click the three dots menu under "Policy Definitions" to see available policies
+ :::image type="content" source="./media/traffic-analytics-policy-portal/assign-policy.png" alt-text="Screenshot of selecting Assign policy button in the Azure portal.":::
-5. Use the Type filter and choose "Built-in". Then search for "traffic analytics "
+1. Select the ellipsis **...** next to **Scope** to choose the Azure subscription that has the flow logs that you want the policy to audit. You can also choose the resource group that has the flow logs. After you make your selections, choose the **Select** button.
-You should see the three built-in policies
-![Policy List for traffic analytics](./media/traffic-analytics/policy-filtered-view.png)
+ :::image type="content" source="./media/traffic-analytics-policy-portal/policy-scope.png" alt-text="Screenshot of selecting the scope of the policy in the Azure portal." lightbox="./media/traffic-analytics-policy-portal/policy-scope.png":::
-6. Choose the policy you want to assign
+1. Select the ellipsis **...** next to **Policy definition** to choose the built-in policy that you want to assign. Enter *traffic analytics* in the search box, and select the **Built-in** filter. From the search results, select **Network Watcher flow logs should have traffic analytics enabled** and then select **Add**.
-- *"Network Watcher flow logs should have traffic analytics enabled"* is the audit policy that flags non-compliant flow logs, that is flow logs without traffic analytics enabled-- *"Configure network security groups to use specific workspace for traffic analytics"* and *"Configure network security groups to enable Traffic Analytics"* are the policies with a deployment action. They enable traffic analytics on all the NSGs overwriting/not overwriting already configured settings depending on the policy enabled.
+ :::image type="content" source="./media/traffic-analytics-policy-portal/audit-policy.png" alt-text="Screenshot of selecting the audit policy in the Azure portal." lightbox="./media/traffic-analytics-policy-portal/audit-policy.png":::
-There are separate instructions for each policy below.
+1. Enter a name in **Assignment name** and your name in **Assigned by**. This policy doesn't require any parameters.
-## Audit Policy
+1. Select **Review + create** and then **Create**.
-### Network Watcher flow logs should have traffic analytics enabled
+ :::image type="content" source="./media/traffic-analytics-policy-portal/assign-audit-policy.png" alt-text="Screenshot of Basics tab to assign an audit policy in the Azure portal.":::
-The policy audits all existing Azure Resource Manager objects of type "Microsoft.Network/networkWatchers/flowLogs" and checks if Traffic Analytics is enabled via the "networkWatcherFlowAnalyticsConfiguration.enabled" property of the flow logs resource. It flags the flow logs resource which have the property set to false.
+ > [!NOTE]
+ > This policy doesn't require any parameters. It also doesn't contain any role definitions, so you don't need to create role assignments for the managed identity in the **Remediation** tab.
-If you want to see the full definition of the policy, you can visit the [Definitions tab](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyMenuBlade/Definitions) and search for "traffic analytics" to find the policy
+1. Select **Compliance**. Search for the name of your assignment and then select it.
-### Assignment
+ :::image type="content" source="./media/traffic-analytics-policy-portal/audit-policy-compliance.png" alt-text="Screenshot of Compliance page of Azure Policy in the Azure portal." lightbox="./media/traffic-analytics-policy-portal/audit-policy-compliance.png":::
-1. Fill in your policy details
+1. **Resource compliance** lists all non-compliant flow logs.
-- Scope: It can be a subscription or a resource group. In latter case, select resource group that contains flow logs resource (and not network security group)-- Policy Definition: Should be chosen as shown in the "Locate the policies" section.-- AssignmentName: Choose a descriptive name
+ :::image type="content" source="./media/traffic-analytics-policy-portal/audit-policy-compliance-details.png" alt-text="Screenshot of the audit policy compliance page in the Azure portal." lightbox="./media/traffic-analytics-policy-portal/audit-policy-compliance-details.png":::
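+If you prefer scripting over the portal, a hedged Azure CLI sketch like the following can create the same assignment; the assignment name and scope are placeholder assumptions:
+
+```azurecli
+# Look up the built-in definition by its display name, then assign it
+# at subscription scope (placeholder values are assumptions).
+definitionName=$(az policy definition list \
+  --query "[?displayName=='Network Watcher flow logs should have traffic analytics enabled'].name" \
+  --output tsv)
+
+az policy assignment create \
+  --name "audit-traffic-analytics" \
+  --policy "$definitionName" \
+  --scope "/subscriptions/<subscription-id>"
+```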
-2. Click on "Review + Create" to review your assignment
+## Deploy and configure traffic analytics using *deployIfNotExists* policies
-The policy does not require any parameters. As you are assigning an audit policy, you do not need to fill the details in the "Remediation" tab.
+There are two *deployIfNotExists* policies available to configure NSG flow logs:
-![Audit Policy Review Traffic Analytics](./media/traffic-analytics/policy-one-assign.png)
-
-### Results
-
-To check the results, open the Compliance tab and search for the name of your Assignment.
-You should see something similar to the following screenshot once your policy runs. In case your policy hasn't run, wait for some time.
-
-![Audit Policy Results traffic analytics](./media/traffic-analytics/policy-one-results.png)
-
-## Deploy-If-not-exists Policy
-
-### Configure network security groups to use specific workspace for traffic analytics
-
-It flags the NSG that do not have Traffic Analytics enabled. It means that for the flagged NSG, either the corresponding flow logs resource does not exist or flow logs resource exist but traffic analytics is not enabled on it. You can create Remediation task if you want the policy to affect existing resources.
-Network Watcher is a regional service so this policy will apply to NSGs belonging to particular region only in the selected scope. (For a different region, create another policy assignment.)
+- **Configure network security groups to use specific workspace, storage account and flow log retention policy for traffic analytics**: This policy flags the network security group that doesn't have traffic analytics enabled. For a flagged network security group, either the corresponding NSG flow logs resource doesn't exist or the NSG flow logs resource exists but traffic analytics isn't enabled on it. You can create a *remediation* task if you want the policy to affect existing resources.
-Remediation can be assigned while assigning policy or after policy is assigned and evaluated. Remediation will enable Traffic Analytics on all the flagged resources with the provided parameters. Note that if an NSG already has flow Logs enabled into a particular storage ID but it does not have Traffic Analytics enabled, then remediation will enable Traffic Analytics on this NSG with the provided parameters. If for the flagged NSG, the storage ID provided in the parameters is different from the one already enabled for flow logs, then the latter gets overwritten with the provided storage ID in the remediation task. If you don't want to overwrite, use policy *"Configure network security groups to enable Traffic Analytics"* described below.
+ Remediation can be assigned while assigning the policy or after the policy is assigned and evaluated. Remediation enables traffic analytics on all the flagged resources with the provided parameters. If a network security group already has flow logs enabled with a particular storage ID but it doesn't have traffic analytics enabled, then remediation enables traffic analytics on this network security group with the provided parameters. If the storage ID provided in the parameters is different from the one enabled for flow logs, then the latter gets overwritten with the provided storage ID in the remediation task. If you don't want to overwrite, use the **Configure network security groups to enable traffic analytics** policy.
-If you want to see the full definition of the policy, you can visit the [Definitions tab](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyMenuBlade/Definitions) and search for "Traffic Analytics" to find the policy.
+- **Configure network security groups to enable traffic analytics**: This policy is similar to the previous policy except that during remediation, it doesn't overwrite flow logs settings on the flagged network security groups that have flow logs enabled but traffic analytics disabled with the parameter provided in the policy assignment.
-### Configure network security groups to enable Traffic Analytics
+> [!NOTE]
+> Network Watcher is a regional service so the two *deployIfNotExists* policies will apply to network security groups that exist in a particular region. For network security groups in a different region, create another policy assignment in that region.
-It is same as the above policy except that during remediation, it does not overwrite flow logs settings on the flagged NSGs that have flow logs enabled but Traffic Analytics disabled with the parameter provided in the policy assignment.
+To assign either of the two *deployIfNotExists* policies, repeat steps 1-4 from the [previous section](#audit) and then continue with the following steps:
-### Assignment
+1. Select the ellipsis **...** next to **Policy definition** to choose the built-in policy that you want to assign. Enter *traffic analytics* in the search box, and select the **Built-in** filter. From the search results, select the *deployIfNotExists* policy that you want to assign and then select **Add**.
-1. Fill in your policy details
+ :::image type="content" source="./media/traffic-analytics-policy-portal/deploy-policy.png" alt-text="Screenshot of selecting a deployIfNotExists policy in the Azure portal." lightbox="./media/traffic-analytics-policy-portal/deploy-policy.png":::
-- Scope: It can be a subscription or a resource group -- Policy Definition: Should be chosen as shown in the "Locate the policies" section.-- AssignmentName: Choose a descriptive name
+1. Enter a name in **Assignment name** and your name in **Assigned by**.
-2. Add policy parameters
+ :::image type="content" source="./media/traffic-analytics-policy-portal/assign-deploy-policy-basics.png" alt-text="Screenshot of the Basics tab of assigning a deploy policy in the Azure portal." lightbox="./media/traffic-analytics-policy-portal/assign-deploy-policy-basics.png":::
-- NSG Region: Azure regions at which the policy is targeted-- Storage ID: Full resource ID of the storage account. This storage account should be in the same region as the NSG.-- Network Watchers RG: Name of the resource group containing your Network Watcher resource. If you have not renamed it, you can enter 'NetworkWatcherRG' which is the default.-- Network Watcher name: Name of the regional network watcher service. Format: NetworkWatcher_RegionName. Example: NetworkWatcher_centralus.-- Workspace resource ID: Resource ID of the workspace where Traffic Analytics has to be enabled. Format is `/subscriptions/<SubscriptionID>/resourceGroups/<ResouceGroupName>/providers/Microsoft.Storage/storageAccounts/<StorageAccountName>`-- WorkspaceID: Workspace guid-- WorkspaceRegion: Region of the workspace (note that it need not be same as the region of NSG)-- TimeInterval: Frequency at which processed logs will be pushed into workspace. Currently allowed values are 60 mins and 10 mins. Default value is 60 mins.-- Effect: DeployIfNotExists (already assigned value)
+1. Select **Next** twice, or select the **Parameters** tab. Enter or select the following values:
-3. Add Remediation details
+ | Setting | Value |
+ | | |
+ | Effect | Select **DeployIfNotExists**. |
+ | Network security group region | Select the region of your network security group that you're targeting with the policy. |
+ | Storage resource ID | Enter the full resource ID of the storage account. The storage account must be in the same region as the network security group. The format of the storage resource ID is: `/subscriptions/<SubscriptionID>/resourceGroups/<ResourceGroupName>/providers/Microsoft.Storage/storageAccounts/<StorageAccountName>`. |
+ | Traffic analytics processing interval in minutes | Select the frequency at which processed logs are pushed into the workspace. Currently available values are 10 and 60 minutes. Default value is 60 minutes. |
+ | Workspace resource ID | Enter the full resource ID of the workspace where traffic analytics has to be enabled. The format of the workspace resource ID is: `/subscriptions/<SubscriptionID>/resourcegroups/<ResourceGroupName>/providers/microsoft.operationalinsights/workspaces/<WorkspaceName>`. |
+ | Workspace region | Select the region of your traffic analytics workspace. |
+ | Workspace ID | Enter your traffic analytics workspace ID. |
+ | Network Watcher resource group | Select the resource group of your Network Watcher. |
+ | Network Watcher name | Enter the name of your Network Watcher. |
+ | Number of days to retain flow logs | Enter the number of days for which flow logs data will be retained in the storage account. If you want to retain data forever, enter *0*.|
-- Check mark on *"Create Remediation task"* if you want the policy to affect existing resources-- *"Create a Managed Identity"* should be already checked-- Selected the same location as previous for your Managed Identity-- You will need Contributor or Owner permissions to use this policy. If you have these permissions, you should not see any errors.
+ > [!NOTE]
+ > The region of traffic analytics workspace doesn't have to be the same as the region of targeted network security group.
-4. Click on "Review + Create" to review your assignment
-You should see something similar to the following screenshot.
+ :::image type="content" source="./media/traffic-analytics-policy-portal/assign-deploy-policy-parameters.png" alt-text="Screenshot of the Parameters tab of assigning a deploy policy in the Azure portal." lightbox="./media/traffic-analytics-policy-portal/assign-deploy-policy-parameters.png":::
-![DINE Policy review traffic analytics](./media/traffic-analytics/policy-two-review.png)
+1. Select **Next** or select the **Remediation** tab. Enter or select the following values:
+ | Setting | Value |
+ | | |
+ | Create Remediation task | Check the box if you want the policy to affect existing resources. |
+ | Create a Managed Identity | Check the box. |
+ | Type of Managed Identity | Select the type of Managed Identity that you want to use. |
+ | System assigned identity location | Select the region of your Managed Identity. |
-### Results
+ :::image type="content" source="./media/traffic-analytics-policy-portal/assign-deploy-policy-remediation.png" alt-text="Screenshot of the Remediation tab of assigning a deploy policy in the Azure portal." lightbox="./media/traffic-analytics-policy-portal/assign-deploy-policy-remediation.png":::
-To check the results, open the Compliance tab and search for the name of your Assignment.
-You should see something like following screenshot once your policy. In case your policy hasn't run, wait for some time.
+1. Select **Review + create** and then **Create**.
-![DINE Policy results traffic analytics](./media/traffic-analytics/policy-two-results.png)
+1. Select **Compliance**. Search for the name of your assignment and then select it.
-### Remediation
+ :::image type="content" source="./media/traffic-analytics-policy-portal/audit-policy-compliance.png" alt-text="Screenshot of Compliance page of Azure Policy." lightbox="./media/traffic-analytics-policy-portal/audit-policy-compliance.png":::
-To manually remediate, select *"Create Remediation task"* on the compliance tab shown above
-
-![DINE Policy remediate traffic analytics](./media/traffic-analytics/policy-two-remediate.png)
+1. **Resource compliance** lists all non-compliant flow logs.
+ :::image type="content" source="./media/traffic-analytics-policy-portal/audit-policy-compliance-details.png" alt-text="Screenshot of the audit policy compliance page in the Azure portal." lightbox="./media/traffic-analytics-policy-portal/audit-policy-compliance-details.png":::
## Troubleshooting
-### Remediation task fails with "PolicyAuthorizationFailed" error code.
-
-Sample error example "The policy assignment '/subscriptions/123ds-fdf3657-fdjjjskms638/resourceGroups/DummyRG/providers/Microsoft.Authorization/policyAssignments/b67334e8770a4afc92e7a929/' resource identity does not have the necessary permissions to create deployment."
+The remediation task fails with the `PolicyAuthorizationFailed` error code. Sample error: *The policy assignment `/subscriptions/abcdef01-2345-6789-0abc-def012345678/resourceGroups/DummyRG/providers/Microsoft.Authorization/policyAssignments/b67334e8770a4afc92e7a929/` resource identity doesn't have the necessary permissions to create deployment.*
-In such scenarios, the assignment's managed identity must be manually granted access. Go to the appropriate subscription/resource group (containing the resources provided in the policy parameters) and grant contributor access to the managed identity create by the policy. In the above example, "b67334e8770a4afc92e7a929" has to be as the contributor.
+In this scenario, the managed identity must be granted access manually. Go to the appropriate subscription/resource group (containing the resources provided in the policy parameters) and grant Contributor access to the managed identity created by the policy. In the previous example, *b67334e8770a4afc92e7a929* has to be assigned the Contributor role. This can also be scripted, as shown in the following sketch.
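+A hedged sketch, assuming the example assignment above and placeholder scopes:
+
+```azurecli
+# Find the principal ID of the assignment's managed identity, then grant
+# it Contributor on the target scope (placeholder values are assumptions).
+principalId=$(az policy assignment show \
+  --name "b67334e8770a4afc92e7a929" \
+  --scope "/subscriptions/<subscription-id>" \
+  --query identity.principalId --output tsv)
+
+az role assignment create \
+  --assignee-object-id "$principalId" \
+  --assignee-principal-type ServicePrincipal \
+  --role Contributor \
+  --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>"
+```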
## Next steps -- Learn about [NSG Flow Logs Built-in Policies](./nsg-flow-logs-policy-portal.md)-- Learn more about [Traffic Analytics](./traffic-analytics.md)-- Learn more about [Network Watcher](./index.yml)
+- Learn about [NSG flow logs built-in policies](./nsg-flow-logs-policy-portal.md)
+- Learn more about [traffic analytics](./traffic-analytics.md)
operator-nexus Howto Baremetal Run Read https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/howto-baremetal-run-read.md
In the response, an HTTP status code of 202 is returned as the operation is perf
Sample output looks something like the following. It prints the top 4K characters of the result to the screen for convenience and provides a short-lived link to the storage blob containing the command execution result. You can use the link to download the zipped output file (tar.gz).
-```azurecli
+```output
====Action Command Output==== + hostname rack1compute01
partner-solutions Qumulo Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/qumulo/qumulo-create.md
Title: Get started with Azure Native Qumulo Scalable File Service Preview
+ Title: Get started with Azure Native Qumulo Scalable File Service
description: In this quickstart, learn how to create an instance of Azure Native Qumulo Scalable File Service.
Last updated 01/18/2023
-# Quickstart: Get started with Azure Native Qumulo Scalable File Service Preview
+# Quickstart: Get started with Azure Native Qumulo Scalable File Service
-In this quickstart, you create an instance of Azure Native Qumulo Scalable File Service Preview. When you create the service instance, the following entities are also created and mapped to a Qumulo file system namespace:
+In this quickstart, you create an instance of Azure Native Qumulo Scalable File Service. When you create the service instance, the following entities are also created and mapped to a Qumulo file system namespace:
- A delegated subnet that enables the Qumulo service to inject service endpoints (eNICs) into your virtual network. - A managed resource group that has internal networking and other resources required for the Qumulo service.
partner-solutions Qumulo How To Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/qumulo/qumulo-how-to-manage.md
Title: Manage Azure Native Qumulo Scalable File Service Preview
+ Title: Manage Azure Native Qumulo Scalable File Service
description: This article describes how to manage Azure Native Qumulo Scalable File Service in the Azure portal.
Last updated 01/18/2023
-# Manage Azure Native Qumulo Scalable File Service Preview
+# Manage Azure Native Qumulo Scalable File Service
-This article describes how to manage your instance of Azure Native Qumulo Scalable File Service Preview.
+This article describes how to manage your instance of Azure Native Qumulo Scalable File Service.
## Manage the Qumulo resource
partner-solutions Qumulo Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/qumulo/qumulo-overview.md
Title: Azure Native Qumulo Scalable File Service Preview overview
+ Title: Azure Native Qumulo Scalable File Service overview
description: Learn about what Azure Native Qumulo Scalable File Service offers you.
Last updated 01/18/2023
-# What is Azure Native Qumulo Scalable File Service Preview?
+# What is Azure Native Qumulo Scalable File Service?
Qumulo is an industry leader in distributed file system and object storage. Qumulo provides a scalable, performant, and simple-to-use cloud-native file system that can support a wide variety of data workloads. The file system uses standard file-sharing protocols, such as NFS, SMB, FTP, and S3.
partner-solutions Qumulo Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/qumulo/qumulo-troubleshoot.md
Title: Troubleshoot Azure Native Qumulo Scalable File Service Preview
+ Title: Troubleshoot Azure Native Qumulo Scalable File Service
description: This article provides information about troubleshooting Azure Native Qumulo Scalable File Service.
Last updated 01/18/2023
-# Troubleshoot Azure Native Qumulo Scalable File Service Preview
+# Troubleshoot Azure Native Qumulo Scalable File Service
-This article describes how to fix common problems when you're working with Azure Native Qumulo Scalable File Service Preview.
+This article describes how to fix common problems when you're working with Azure Native Qumulo Scalable File Service.
Try the troubleshooting information in this article first. If that doesn't work, you can use one of the following methods to open a request form for Qumulo support:
For successful creation of a Qumulo service, custom role-based access control (R
## Next steps -- [Manage Azure Native Qumulo Scalable File Service Preview](qumulo-how-to-manage.md)
+- [Manage Azure Native Qumulo Scalable File Service](qumulo-how-to-manage.md)
payment-hsm Inspect Traffic https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/payment-hsm/inspect-traffic.md
Last updated 04/06/2023
# Azure Payment HSM traffic inspection
-Azure Payment Hardware Security Module (Payment HSM or PHSM) is a [bare-metal service](overview.md) providing cryptographic key operations for real-time and critical payment transactions in the Azure cloud. For more information, see [What is Azure Payment HSM?](overview.md).
+Azure Payment Hardware Security Module (Payment HSM or PHSM) is a [bare-metal service](overview.md) providing cryptographic key operations for real-time and critical payment transactions in the Azure cloud. For more information, see [What is Azure Payment HSM?](overview.md).
When Payment HSM is deployed, it comes with a host network interface and a management network interface. There are several deployment scenarios:
This solution requires a reverse proxy, such as:
- Reverse proxy Server using NGINX (VM-based) - Reverse proxy Server using HAProxy (VM-based)
-Example of reverse proxy Server using NGINX (VM-based) configuration:
+Example of a reverse proxy server using NGINX (VM-based) configured to load balance TCP traffic:
```conf # Nginx.conf  
postgresql How To Cost Optimization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-cost-optimization.md
+
+ Title: How to optimize costs in Azure Database for Postgres Flexible Server
+description: This article provides a list of cost optimization recommendations
+++++ Last updated : 4/13/2023++
+# How to optimize costs in Azure Database for Postgres Flexible Server
+
+Azure Database for PostgreSQL is a relational database service in the Microsoft cloud based on the [PostgreSQL Community Edition](https://www.postgresql.org/). It's a fully managed database-as-a-service offering that can handle mission-critical workloads with predictable performance and dynamic scalability.
+
+This article provides a list of recommendations for optimizing Azure Postgres Flexible Server cost. The list includes design considerations, a configuration checklist, and recommended database settings to help you optimize your workload.
+
+>[!div class="checklist"]
+> * Leverage reserved capacity pricing.
+> * Scale compute up or down.
+> * Use Azure Advisor recommendations.
+> * Evaluate HA (high availability) and DR (disaster recovery) requirements.
+> * Consolidate databases and servers.
+> * Place test servers in cost-efficient geo-regions.
+> * Start and stop servers.
+> * Archive old data for cold storage.
+
+## 1. Use reserved capacity pricing
+
+Azure Postgres reserved capacity pricing allows you to commit to a specific capacity for one to three years, reducing costs for customers using the Azure Database for PostgreSQL service. The cost savings compared to pay-as-you-go pricing can be significant, depending on the amount of capacity reserved and the length of the term. Customers can purchase reserved capacity in increments of vCores and storage. Reserved capacity can cover costs for Azure Database for PostgreSQL servers in the same region, applied to the customer's Azure subscription. Reserved pricing for Azure Postgres Flexible Server offers cost savings of up to 40% for a one-year commitment and up to 60% for a three-year commitment. For more details, see the [Azure pricing calculator](https://azure.microsoft.com/pricing/calculator/).
+
+To learn more, see [What are Azure Reservations?](../../cost-management-billing/reservations/save-compute-costs-reservations.md)
+
+## 2. Scale compute up/down
+
+Scaling the resources of an Azure Database for PostgreSQL server up or down can help you optimize costs. Adjust the vCores and storage as needed to pay only for the resources you need. Scaling can be done through the Azure portal, Azure CLI, or Azure REST API. Compute scaling can be done at any time and requires a server restart. It's good practice to monitor your database usage patterns and adjust the resources accordingly to optimize costs and ensure performance.
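+As a hedged sketch, scaling with the Azure CLI can look like the following; the SKU values are placeholder assumptions:
+
+```azurecli
+# Hypothetical example: scale an existing server to a larger
+# General Purpose SKU (compute scaling restarts the server).
+az postgres flexible-server update \
+  --resource-group <resource-group> \
+  --name <server-name> \
+  --tier GeneralPurpose \
+  --sku-name Standard_D4ds_v4
+```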
+
+Configure non-production environments conservatively: configure idle dev/test/stage environments with cost-efficient SKUs. Burstable SKUs are ideal for workloads that don't need continuous full capacity.
+
+To learn more, see [Scale operations in Flexible Server](how-to-scale-compute-storage-portal.md)
+
+## 3. Use Azure Advisor recommendations
+
+Azure Advisor is a free service that provides recommendations to help optimize your Azure resources. It analyzes your resource configuration and usage patterns and provides recommendations on how to improve the performance, security, high availability, and cost-effectiveness of your Azure resources. The recommendations cover various Azure services including compute, storage, networking, and databases.
+
+For Azure Database for PostgreSQL, Azure Advisor can provide recommendations on how to improve the performance, availability, and cost-effectiveness of your database. For example, it can suggest scaling the database up or down, using read-replicas to offload read-intensive workloads, or switching to reserved capacity pricing to reduce costs. Azure Advisor can also recommend security best practices, such as enabling encryption at rest, or enabling network security rules to limit incoming traffic to the database.
+
+You can access the recommendations provided by Azure Advisor through the Azure portal, where you can view and implement the recommendations with just a few clicks. Implementing Azure Advisor recommendations can help you optimize your Azure resources and reduce costs.
+
+To learn more, see [Azure Advisor for PostgreSQL](concepts-azure-advisor-recommendations.md)
+
+## 4. Evaluate HA (high availability) and DR (disaster recovery) requirements
+
+Azure Database for PostgreSQL - Flexible Server has **built-in** node and storage resiliency at no extra cost to you. Node resiliency allows your Flexible Server to automatically fail over to a healthy VM with no data loss (that is, RPO zero) and with no connection string changes, although your application must reconnect. Similarly, the data and transaction logs are stored in three synchronous copies, and storage corruption is automatically detected and corrected. For most Dev/Test workloads, and for many production workloads, this configuration should suffice.
+
+If your workload requires AZ resiliency and a lower RTO, you can enable high availability (HA) with an in-zone or cross-AZ standby. This doubles your deployment costs, but it also provides a higher SLA. To achieve geo-resiliency for your application, you can set up GeoBackup for a lower cost but with a higher RTO. Alternatively, you can set up a GeoReadReplica for double the cost, which offers an RTO in minutes in the event of a geo-disaster.
+
+The key takeaway is to evaluate the requirements of your full application stack and then choose the right configuration for the Flexible Server. For example, if your application isn't AZ resilient, there's nothing to be gained by configuring Flexible Server in an AZ-resilient configuration.
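+As a hedged sketch, enabling zone-redundant high availability at server creation with the Azure CLI can look like this; all values are placeholder assumptions:
+
+```azurecli
+# Hypothetical example: zone-redundant HA doubles compute cost
+# but provides a higher SLA.
+az postgres flexible-server create \
+  --resource-group <resource-group> \
+  --name <server-name> \
+  --high-availability ZoneRedundant
+```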
+
+To learn more, see [High availability architecture in Flexible Server](concepts-high-availability.md)
+
+## 5. Consolidate databases and servers
+
+Consolidating databases can be a cost-saving strategy for Azure Database for PostgreSQL Flexible Server. Consolidating multiple databases into a single Flexible Server instance can reduce the number of instances and overall cost of running Azure Database for PostgreSQL. Follow these steps to consolidate your databases and save costs:
+
+1. Assess your servers: Identify the servers that can be consolidated, considering each database's size, geo-region, configuration (CPU, memory, IOPS), performance requirements, workload type, and data consistency needs.
+1. Create a new Flexible Server instance: Create a new Flexible Server instance with enough vCPUs, memory, and storage to support the consolidated databases.
+1. Reuse an existing Flexible Server instance: In case you already have an existing server, make sure it has enough vCPUs, memory, and storage to support the consolidated databases.
+1. Migrate the databases: Migrate the databases to the new Flexible Server instance. You can use tools such as pg_dump and pg_restore to export and import databases, as sketched after this list.
+1. Monitor performance: Monitor the performance of the consolidated Flexible Server instance and adjust the resources as needed to ensure optimal performance.
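+The following is a hedged sketch of migrating one database with `pg_dump` and `pg_restore`; server names and credentials are placeholder assumptions, and the target database is assumed to already exist on the consolidated server:
+
+```bash
+# Export from the old server in custom format (supports parallel restore).
+pg_dump --host=<old-server>.postgres.database.azure.com \
+        --username=<admin-user> --dbname=<database> \
+        --format=custom --file=<database>.dump
+
+# Restore into the pre-created database on the consolidated server.
+pg_restore --host=<new-server>.postgres.database.azure.com \
+           --username=<admin-user> --dbname=<database> \
+           --no-owner <database>.dump
+```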
+
+Consolidating databases can help you save costs by reducing the number of Flexible Server instances you need to run and by enabling you to use larger instances that are more cost-effective than smaller instances. It is important to evaluate the impact of consolidation on your databases' performance and ensure that the consolidated Flexible Server instance is appropriately sized to meet all database needs.
+
+To learn more, see [Improve the performance of Azure applications by using Azure Advisor](../../advisor/advisor-reference-performance-recommendations.md#postgresql)
+
+## 6. Place test servers in cost-efficient geo-regions
+
+Creating a test server in a cost-efficient Azure region can be a cost-saving strategy for Azure Database for PostgreSQL Flexible Server. By creating a test server in a region with lower cost of computing resources, you can reduce the cost of running your test server and minimize the cost of running Azure Database for PostgreSQL. Here are a few steps to help you create a test server in a cost-efficient Azure region:
+
+1. Identify a cost-efficient region: Identify an Azure region with lower cost of computing resources.
+1. Create a new Flexible Server instance: Create a new Flexible Server instance in the cost-efficient region with the right configuration for your test environment (see the sketch after this list).
+1. Migrate test data: Migrate the test data to the new Flexible Server instance. You can use tools such as pg_dump and pg_restore to export and import databases.
+1. Monitor performance: Monitor the performance of the test server and adjust the resources as needed to ensure optimal performance.
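+As a hedged sketch, creating a Burstable test server in a lower-cost region with the Azure CLI can look like this; all values are placeholder assumptions:
+
+```azurecli
+# Hypothetical example: a small Burstable server for test workloads.
+az postgres flexible-server create \
+  --resource-group <resource-group> \
+  --name <test-server-name> \
+  --location <cost-efficient-region> \
+  --tier Burstable \
+  --sku-name Standard_B1ms
+```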
+
+By creating a test server in a cost-efficient Azure region, you can reduce the cost of running your test server and minimize the cost of running Azure Database for PostgreSQL. It is important to evaluate the impact of the region on your test server's performance and your organization's specific regional requirements. This ensures that network latency and data transfer costs are acceptable for your use case.
+
+To learn more, see [Azure regions](/azure/architecture/framework/cost/design-regions)
+
+## 7. Start and stop servers
+
+Starting and stopping servers can be a cost-saving strategy for Azure Database for PostgreSQL Flexible Server. By only running the server when you need it, you can reduce the cost of running Azure Database for PostgreSQL. Here are a few steps to help you start and stop servers and save costs:
+
+1. Identify the server: Identify the Flexible Server instance that you want to start and stop.
+1. Start the server: Start the Flexible Server instance when you need it. You can start the server using the Azure portal, Azure CLI, or Azure REST API.
+1. Stop the server: Stop the Flexible Server instance when you don't need it. You can stop the server using the Azure portal, Azure CLI, or Azure REST API (see the sketch after this list).
+1. Also, if a server has been in a stopped (or idle) state for several continuous weeks, you can consider dropping the server after the required due diligence.
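+A hedged sketch of stopping and starting a server with the Azure CLI; the resource group and server names are placeholder assumptions:
+
+```azurecli
+# Stop a dev/test server when it's not needed, and start it again later.
+az postgres flexible-server stop --resource-group <resource-group> --name <server-name>
+az postgres flexible-server start --resource-group <resource-group> --name <server-name>
+```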
+
+By starting and stopping the server as needed, you can reduce the cost of running Azure Database for PostgreSQL. To ensure smooth database performance, it's crucial to evaluate the impact of starting and stopping the server and have a reliable process in place for these actions as required.
+
+To learn more, see [Stop/Start Flexible Server Instance](how-to-stop-start-server-portal.md)
+
+## 8. Archive old data for cold storage
+
+Archiving infrequently accessed data to the Azure Storage archive tier (while still keeping access) can help reduce costs. Export data from PostgreSQL to Azure Blob Storage and store it in a lower-cost access tier.
+
+1. Set up an Azure Blob Storage account and create a container for your database backups.
+1. Use `pg_dump` to export the old data to a file.
+1. Use the Azure CLI or PowerShell to upload the exported file to your Blob Storage container (see the sketch after this list).
+1. Set up a retention policy on the Blob Storage container to automatically delete old backups.
+1. Modify the backup script to export the old data to Blob Storage instead of local storage.
+1. Test the backup and restore process to ensure that the archived data can be restored if needed.
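+A hedged sketch of exporting old data and uploading it to Blob Storage; the table, container, and account names are placeholder assumptions:
+
+```bash
+# Export the old data with pg_dump, then upload the dump with the Azure CLI.
+pg_dump --host=<server-name>.postgres.database.azure.com \
+        --username=<admin-user> --dbname=<database> \
+        --table=<old-data-table> --format=custom --file=archive.dump
+
+az storage blob upload \
+  --account-name <storage-account> \
+  --container-name <backup-container> \
+  --name archive.dump \
+  --file archive.dump \
+  --auth-mode login
+```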
+
+You can also use Azure Data Factory to automate this process.
+
+To learn more, see [Migrate your PostgreSQL database by using dump and restore](../migrate/how-to-migrate-using-dump-and-restore.md)
+
+## Tradeoffs for cost
+
+As you design your application database on Azure Database for PostgreSQL Flexible Server, consider tradeoffs between cost optimization and other aspects of the design, such as security, scalability, resilience, and operability.
+
+**Cost vs reliability**
+> Cost has a direct correlation with reliability.
+
+**Cost vs performance efficiency**
+> Boosting performance will lead to higher cost.
+
+**Cost vs security**
+> Increasing security of the workload will increase cost.
+
+**Cost vs operational excellence**
+> Investing in systems monitoring and automation might increase the cost initially but over time will reduce cost.
+
+## Next steps
+
+To learn more about cost optimization, see:
+
+* [Overview of the cost optimization pillar](/azure/architecture/framework/cost/overview)
+* [Tradeoffs for cost](/azure/architecture/framework/cost/tradeoffs)
+* [Checklist - Optimize cost](/azure/architecture/framework/cost/optimize-checklist)
+* [Checklist - Monitor cost](/azure/architecture/framework/cost/monitor-checklist)
private-5g-core Modify Local Access Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/modify-local-access-configuration.md
If you switched from Azure AD to local usernames and passwords:
1. Sign in to [Azure Cloud Shell](../cloud-shell/overview.md) and select **PowerShell**. If this is your first time accessing your cluster via Azure Cloud Shell, follow [Access your cluster](../azure-arc/kubernetes/cluster-connect.md?tabs=azure-cli) to configure kubectl access. 1. Delete the Kubernetes Secret Objects:
- `kubectl delete secrets sas-auth-secrets grafana-auth-secrets --kubeconfig=<core kubeconfig>`
+ `kubectl delete secrets sas-auth-secrets grafana-auth-secrets --kubeconfig=<core kubeconfig> -n core`
1. Restart the distributed tracing and packet core dashboards pods. 1. Obtain the name of your packet core dashboards pod:
- `kubectl get pods -n core --kubeconfig=<core kubeconfig>" | grep "grafana"`
+ `kubectl get pods -n core --kubeconfig=<core kubeconfig> | grep "grafana"`
1. Copy the output of the previous step and replace it into the following command to restart your pods.
reliability Availability Zones Migration Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/availability-zones-migration-overview.md
The table below lists each product that offers migration guidance and/or informa
| | | [Azure Application Gateway (V2)](migrate-app-gateway-v2.md) | | [Azure Backup and Azure Site Recovery](migrate-recovery-services-vault.md) |
+| [Azure Service Fabric](migrate-service-fabric.md) |
| [Azure Storage account: Blob Storage, Azure Data Lake Storage, Files Storage](migrate-storage.md) | | [Azure Storage: Managed Disks](migrate-vm.md)| | [Azure Virtual Machines and Azure Virtual Machine Scale Sets](migrate-vm.md)|
reliability Migrate Service Fabric https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/migrate-service-fabric.md
+
+ Title: Migrate an Azure Service Fabric cluster to availability zone support
+description: Learn how to migrate both managed and non-managed Azure Service Fabric clusters to availability zone support.
+++ Last updated : 03/23/2023+++++
+# Migrate your Service Fabric cluster to availability zone support
+
+This guide describes how to migrate Service Fabric clusters from non-availability zone support to availability zone support. It takes you through the different options for migration. A Service Fabric cluster distributed across availability zones ensures high availability of the cluster state.
+
+You can migrate both managed and non-managed clusters. Both are covered in this article.
+
+For non-managed clusters, we discuss two different scenarios:
+
+ * Migrating a cluster with a Standard SKU load balancer and IP resource. This configuration supports availability zones without needing to create new resources.
+ * Migrating a cluster with a Basic SKU load balancer and IP resource. This configuration doesn't support availability zones and requires the creation of new resources.
+
+See the appropriate sections under each header for your Service Fabric cluster scenario.
+
+> [!NOTE]
+> The benefit of spanning the primary node type across availability zones is only seen for three zones and not just two. This is true for both managed and non-managed clusters.
+
+Sample templates are available at [Service Fabric cross availability zone templates](https://github.com/Azure-Samples/service-fabric-cluster-templates).
+
+## Prerequisites
+
+### Service Fabric managed clusters
+
+Required:
+
+* Standard SKU cluster.
+* Three [availability zones in the region](availability-zones-service-support.md#azure-regions-with-availability-zone-support).
++
+Recommended:
+
+* Primary node type should have at least nine nodes for best resiliency, but a minimum of six is supported.
+* Secondary node type(s) should have at least six nodes for best resiliency, but a minimum of three is supported.
+
+### Service Fabric non-managed clusters
+
+Required: N/A.
+
+Recommended:
+
+* The cluster reliability level set to `Platinum`.
+* A single public IP resource using Standard SKU.
+* A single load balancer resource using Standard SKU.
+* A network security group (NSG) referenced by the subnet in which you deploy your Virtual Machine Scale Sets.
+
+#### Existing Standard SKU load balancer and IP resource
+
+There are no prerequisites for this scenario, as it assumes you have the existing required resources.
+
+#### Basic SKU load balancer and IP resource
+
+* A new load balancer using the Standard SKU, distinct from your existing Basic SKU load balancer.
+* A new IP resource using the Standard SKU, distinct from your existing Basic SKU IP resource.
+
+> [!NOTE]
+> It isn't possible to upgrade your existing resources from a Basic SKU to a Standard SKU, so new resources are required.
+
+## Downtime requirements
+
+### Service Fabric managed cluster
+
+Migration to a zone resilient configuration can cause a brief loss of external connectivity through the load balancer, but won't affect cluster health. The loss of external connectivity occurs when a new Public IP needs to be created in order to make the networking resilient to zone failures. Plan the migration accordingly.
+
+### Service Fabric non-managed cluster
+
+Downtime for migrating Service Fabric non-managed clusters varies widely based on the number of VMs and Upgrade Domains (UDs) in your cluster. UDs are logical groupings of VMs that determine the order in which upgrades are pushed to the VMs in your cluster. The downtime is also affected by the upgrade mode of your cluster, which controls how upgrade tasks for the UDs in your cluster are processed. The `sfZonalUpgradeMode` property, which controls the upgrade mode, is covered in more detail in the following sections.
+
+## Migration for Service Fabric managed clusters
+
+### Create new primary and secondary node types that span availability zones
+
+There's only one method for migrating a non-availability zone enabled Service Fabric managed cluster to an availability zone enabled state.
+
+**To migrate your Service Fabric managed cluster:**
+
+1. Determine whether a new IP is required and what resources need to be migrated to become zone resilient. To get the current availability zone resiliency state for the resources of the managed cluster, use the following API call:
+
+ ```http
+ POST https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.ServiceFabric/managedClusters/{clusterName}/getazresiliencystatus?api-version=2022-02-01-preview
+ ```
+ Or, you can use the Az Module as follows:
+ ```powershell
+ Select-AzSubscription -SubscriptionId {subscriptionId}
+ Invoke-AzResourceAction -ResourceId /subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.ServiceFabric/managedClusters/{clusterName} -Action getazresiliencystatus -ApiVersion 2022-02-01-preview
+ ```
+ This should return a response similar to:
+ ```json
+ {
+   "baseResourceStatus": [
+     {
+       "resourceName": "sfmccluster1",
+       "resourceType": "Microsoft.Storage/storageAccounts",
+       "isZoneResilient": false
+     },
+     {
+       "resourceName": "PublicIP-sfmccluster1",
+       "resourceType": "Microsoft.Network/publicIPAddresses",
+       "isZoneResilient": false
+     },
+     {
+       "resourceName": "primary",
+       "resourceType": "Microsoft.Compute/virtualMachineScaleSets",
+       "isZoneResilient": false
+     }
+   ],
+   "isClusterZoneResilient": false
+ }
+ ```
+
+ If the Public IP resource isn't zone resilient, migration of the cluster causes a brief loss of external connectivity. The loss of connectivity is due to the migration setting up new Public IP and updating the cluster FQDN to the new IP. If the Public IP resource is zone resilient, migration will not modify the Public IP resource or FQDN and there will be no external connectivity impact.
+
+1. Initiate conversion of the underlying storage account created for the managed cluster from LRS to ZRS using [customer-initiated conversion](../storage/common/redundancy-migration.md#customer-initiated-conversion). The storage account that needs to be migrated is in a resource group named in the form `SFC_<ClusterId>` (for example, `SFC_9240df2f-71ab-4733-a641-53a8464d992d`) under the same subscription as the managed cluster resource.
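+
+   Before starting the conversion, you can confirm which storage account is affected and its current redundancy SKU with the Az module. This is a sketch; the resource group name is the example from this step:
+
+   ```powershell
+   # Sketch: list the managed cluster's support storage account and its redundancy SKU.
+   # The resource group name follows the SFC_<ClusterId> pattern described above.
+   Get-AzStorageAccount -ResourceGroupName "SFC_9240df2f-71ab-4733-a641-53a8464d992d" |
+       Select-Object StorageAccountName, @{ Name = "Sku"; Expression = { $_.Sku.Name } }
+   ```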
+
+1. Add a new primary node type, which spans across availability zones.
+
+ This step triggers the resource provider to perform the migration of the primary node type and Public IP, along with a cluster FQDN DNS update if needed, to become zone resilient. Use the API shown above to understand the implications of this step.
+
+ * Use apiVersion 2022-02-01-preview or higher.
+ * Add a new primary node type to the cluster with the `zones` parameter set to ["1", "2", "3"], as shown below:
+
+ ```json
+ {
+ "apiVersion": "2022-02-01-preview",
+ "type": "Microsoft.ServiceFabric/managedclusters/nodetypes",
+ "name": "[concat(parameters('clusterName'), '/', parameters('nodeTypeName'))]",
+ "location": "[resourcegroup().location]",
+ "dependsOn": [
+ "[concat('Microsoft.ServiceFabric/managedclusters/', parameters('clusterName'))]"
+ ],
+ "properties": {
+ ...
+ "isPrimary": true,
+ "zones": ["1", "2", "3"]
+ ...
+ }
+ }
+ ```
+
+1. Add a secondary node type, which spans across availability zones.
+   This step adds a secondary node type that spans availability zones, similar to the primary node type. Once it's created, migrate your existing services from the old node types to the new ones by [using placement properties](../service-fabric/service-fabric-cluster-resource-manager-cluster-description.md); a sketch follows the template below.
+
+ * Use apiVersion 2022-02-01-preview or higher.
+ * Add a new secondary node type to the cluster with the `zones` parameter set to ["1", "2", "3"], as shown below:
+
+ ```json
+ {
+ "apiVersion": "2022-02-01-preview",
+ "type": "Microsoft.ServiceFabric/managedclusters/nodetypes",
+ "name": "[concat(parameters('clusterName'), '/', parameters('nodeTypeName'))]",
+ "location": "[resourcegroup().location]",
+ "dependsOn": [
+ "[concat('Microsoft.ServiceFabric/managedclusters/', parameters('clusterName'))]"
+ ],
+ "properties": {
+ ...
+ "isPrimary": false,
+ "zones": ["1", "2", "3"]
+ ...
+ }
+ }
+ ```
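+
+   To move existing services onto the new zone-spanning node types, you can update each service's placement constraints. This is a hedged sketch using the Service Fabric PowerShell module (assuming a version that supports `-PlacementConstraints` on `Update-ServiceFabricService`); the endpoint, service name, and node type name are placeholders:
+
+   ```powershell
+   # Sketch: constrain an existing stateful service to the new zone-spanning node type.
+   # 'fabric:/MyApp/MyService' and 'ntzones' are placeholder names.
+   Connect-ServiceFabricCluster -ConnectionEndpoint "<cluster-endpoint>:19000"
+   Update-ServiceFabricService -Stateful -ServiceName fabric:/MyApp/MyService `
+       -PlacementConstraints "(NodeTypeName == ntzones)"
+   ```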
+
+1. Start removing the older node types that don't span availability zones from the cluster
+
+   Once none of your services remain on the old node types, remove those node types. Start by [removing the old node types from the cluster](../service-fabric/how-to-managed-cluster-modify-node-type.md) using the portal or a cmdlet, as sketched below. As a last step, remove any old node types from your template.
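+
+   A minimal sketch of the cmdlet route, assuming the Az.ServiceFabric module's managed-cluster cmdlets; all names are placeholders:
+
+   ```powershell
+   # Sketch: remove an old node type that doesn't span zones from the managed cluster.
+   Remove-AzServiceFabricManagedNodeType -ResourceGroupName "myResourceGroup" `
+       -ClusterName "mysfcluster" -Name "nt1"
+   ```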
+
+1. Mark the cluster resilient to zone failures
+
+ This step helps in future deployments, since it ensures that all future deployments of node types span availability zones and the cluster remains tolerant to zone failures. Set `zonalResiliency: true` in the cluster resource's properties in the ARM template and deploy to mark the cluster as zone resilient and ensure all new node type deployments span availability zones.
+
+ ```json
+ {
+   "apiVersion": "2022-02-01-preview",
+   "type": "Microsoft.ServiceFabric/managedclusters",
+   "properties": {
+     "zonalResiliency": true
+   }
+ }
+ ```
+ Once complete, you can also see the updated status in the portal under **Overview > Properties**, shown as `Zonal resiliency True`.
+
+1. Validate all the resources are zone resilient
+
+ To validate the availability zone resiliency state for the resources of the managed cluster, use the following API call:
+
+ ```http
+ POST https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.ServiceFabric/managedClusters/{clusterName}/getazresiliencystatus?api-version=2022-02-01-preview
+ ```
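+ As with the status check at the start of the migration, the same action can be invoked through the Az module:
+ ```powershell
+ # Same getazresiliencystatus action as above, invoked via the Az module.
+ Select-AzSubscription -SubscriptionId {subscriptionId}
+ Invoke-AzResourceAction -ResourceId /subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.ServiceFabric/managedClusters/{clusterName} -Action getazresiliencystatus -ApiVersion 2022-02-01-preview
+ ```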
+ This should return a response similar to:
+ ```json
+ {
+   "baseResourceStatus": [
+     {
+       "resourceName": "sfmccluster1",
+       "resourceType": "Microsoft.Storage/storageAccounts",
+       "isZoneResilient": true
+     },
+     {
+       "resourceName": "PublicIP-sfmccluster1",
+       "resourceType": "Microsoft.Network/publicIPAddresses",
+       "isZoneResilient": true
+     },
+     {
+       "resourceName": "primary",
+       "resourceType": "Microsoft.Compute/virtualMachineScaleSets",
+       "isZoneResilient": true
+     }
+   ],
+   "isClusterZoneResilient": true
+ }
+ ```
+
+ If you run into any problems, reach out to support for assistance.
+++
+## Migration options for Service Fabric non-managed clusters
+
+### Migration option 1: enable multiple Availability Zones in a single Virtual Machine Scale Set
+
+#### When to use this option
+
+This solution allows users to span three Availability Zones in the same node type. This is the recommended deployment topology as it enables you to deploy across availability zones while maintaining a single Virtual Machine Scale Set.
+
+A full sample template is available on [GitHub](https://github.com/Azure-Samples/service-fabric-cluster-templates/tree/master/15-VM-Windows-Multiple-AZ-Secure).
+
+You should use this option when you have an existing Service Fabric non-managed cluster with the Standard SKU load balancer and IP Resources that you want to migrate. If your existing non-managed cluster has Basic SKU resources, you should see the Basic SKU migration option below.
+
+#### How to migrate your Service Fabric non-managed cluster with existing Standard SKU load balancer and IP resources
+
+**To enable zones on a Virtual Machine Scale Set:**
+
+Include the following three values in the Virtual Machine Scale Set resource:
+
+* The first value is the `zones` property, which specifies the Availability Zones that are in the Virtual Machine Scale Set.
+* The second value is the `singlePlacementGroup` property, which must be set to `true`. The scale set that's spanned across three Availability Zones can scale up to 300 VMs even with `singlePlacementGroup = true`.
+* The third value is `zoneBalance`, which ensures strict zone balancing. This value should be `true`. This ensures that the VM distributions across zones are not unbalanced, which means that when one zone goes down, the other two zones have enough VMs to keep the cluster running.
+
+ A cluster with an unbalanced VM distribution might not survive a zone-down scenario because that zone might have the majority of the VMs. Unbalanced VM distribution across zones also leads to service placement issues and infrastructure updates getting stuck. Read more about [zoneBalancing](../virtual-machine-scale-sets/virtual-machine-scale-sets-use-availability-zones.md#zone-balancing).
+
+You don't need to configure the `FaultDomain` and `UpgradeDomain` overrides.
+
+```json
+{
+ "apiVersion": "2018-10-01",
+ "type": "Microsoft.Compute/virtualMachineScaleSets",
+ "name": "[parameters('vmNodeType1Name')]",
+ "location": "[parameters('computeLocation')]",
+ "zones": [ "1", "2", "3" ],
+ "properties": {
+ "singlePlacementGroup": true,
+ "zoneBalance": true
+ }
+}
+```
+
+>[!NOTE]
+>
+> * Service Fabric clusters should have at least one primary node type. The durability level of primary node types should be Silver or higher.
+> * An availability zone spanning Virtual Machine Scale Set should be configured with at least three Availability Zones, no matter the durability level.
+> * An availability zone spanning Virtual Machine Scale Set with Silver or higher durability should have at least 15 VMs.
+> * An availability zone spanning Virtual Machine Scale Set with Bronze durability should have at least six VMs.
+
+##### Enable support for multiple zones in the Service Fabric node type
+
+To support multiple Availability Zones, the following values must be set on the Service Fabric node type.
+
+* The first value is `multipleAvailabilityZones`, which should be set to `true` for the node type.
+
+* The second value, `sfZonalUpgradeMode`, is optional. This property can't be modified if a node type with multiple availability zones is already present in the cluster. The property controls the logical grouping of VMs in UDs.
+
+  * If this value is set to `Parallel`: VMs under the node type are grouped into five UDs, ignoring the zone information. This setting causes UDs across all zones to be upgraded at the same time. Although this deployment mode is faster for upgrades, we don't recommend it because it goes against the SDP guidelines, which state that updates should be applied to one zone at a time.
+
+  * If this value is omitted or set to `Hierarchical`: VMs are grouped to reflect the zonal distribution in up to 15 UDs. Each of the three zones has five UDs. This ensures that the zones are updated one at a time, moving to the next zone only after completing five UDs within the first zone. The update process is safer for the cluster and the user application.
+
+ This property only defines the upgrade behavior for Service Fabric application and code upgrades. The underlying Virtual Machine Scale Set upgrades are still parallel in all Availability Zones. This property doesn't affect the UD distribution for node types that don't have multiple zones enabled.
+
+* The third value, `vmssZonalUpgradeMode`, is optional and can be updated at any time. This property defines whether the Virtual Machine Scale Set upgrades happen in parallel or sequentially across Availability Zones.
+  * If this value is set to `Parallel`: All scale set updates happen in parallel in all zones. Although this deployment mode is faster for upgrades, we don't recommend it because it goes against the SDP guidelines, which state that updates should be applied to one zone at a time.
+  * If this value is omitted or set to `Hierarchical`: The zones are updated one at a time, moving to the next zone only after completing five UDs within the first zone. This update process is safer for the cluster and the user application.
+
+>[!IMPORTANT]
+>The Service Fabric cluster resource API version should be 2020-12-01-preview or later.
+>
+>The cluster code version should be at least 8.1.321 or later.
+
+```json
+{
+ "apiVersion": "2020-12-01-preview",
+ "type": "Microsoft.ServiceFabric/clusters",
+ "name": "[parameters('clusterName')]",
+ "location": "[parameters('clusterLocation')]",
+ "dependsOn": [
+ "[concat('Microsoft.Storage/storageAccounts/', parameters('supportLogStorageAccountName'))]"
+ ],
+ "properties": {
+ "reliabilityLevel": "Platinum",
+ "sfZonalUpgradeMode": "Hierarchical",
+ "vmssZonalUpgradeMode": "Parallel",
+ "nodeTypes": [
+ {
+ "name": "[parameters('vmNodeType0Name')]",
+ "multipleAvailabilityZones": true
+ }
+ ]
+ }
+}
+```
+
+>[!NOTE]
+>
+> * Public IP and load balancer resources should use the Standard SKU described earlier in the article.
+> * The `multipleAvailabilityZones` property on the node type can only be defined when the node type is created and can't be modified later. Existing node types can't be configured with this property.
+> * When `sfZonalUpgradeMode` is omitted or set to `Hierarchical`, the cluster and application deployments will be slower because there are more upgrade domains in the cluster. It's important to correctly adjust the upgrade policy timeouts to account for the upgrade time required for 15 upgrade domains. The upgrade policy for both the app and the cluster should be updated to ensure that the deployment doesn't exceed the Azure Resource Service deployment time limit of 12 hours. This means that deployment shouldn't take more than 12 hours for 15 UDs (that is, shouldn't take more than 40 minutes for each UD).
+> * Set the cluster reliability level to `Platinum` to ensure that the cluster survives the one zone-down scenario.
+> * Upgrading the durability level of a node type that has `multipleAvailabilityZones` enabled isn't supported. Create a new node type with the higher durability instead.
+> * Service Fabric supports only three Availability Zones. Higher numbers aren't supported right now.
+
+>[!TIP]
+> We recommend setting `sfZonalUpgradeMode` to `Hierarchical` or omitting it. Deployment will follow the zonal distribution of VMs and affect a smaller number of replicas or instances at a time, making the upgrade safer.
+> Use `sfZonalUpgradeMode` set to `Parallel` if deployment speed is a priority or only stateless workloads run on the node type with multiple Availability Zones. This causes the UD walk to happen in parallel in all Availability Zones.
+
+##### Migrate to the node type with multiple Availability Zones
+
+For all migration scenarios, you need to add a new node type that supports multiple Availability Zones. An existing node type can't be migrated to support multiple zones.
+The [Scale up a Service Fabric cluster primary node type](../service-fabric/service-fabric-scale-up-primary-node-type.md) article includes detailed steps to add a new node type and the other resources required for the new node type, such as IP and load balancer resources. That article also describes how to retire the existing node type after a new node type with multiple Availability Zones is added to the cluster.
+
+* Migration from a node type that uses basic load balancer and IP resources: This process is already described in [a sub-section below](#how-to-migrate-your-service-fabric-non-managed-cluster-with-basic-sku-load-balancer-and-ip-resources) for the solution with one node type per Availability Zone.
+
+ For the new node type, the only difference is that there's only one Virtual Machine Scale Set and one node type for all Availability Zones instead of one each per Availability Zone.
+* Migration from a node type that uses the Standard SKU load balancer and IP resources with an NSG: Follow the same procedure described previously. However, there's no need to add new load balancer, IP, and NSG resources. The same resources can be reused in the new node type.
+
+If you run into any problems, reach out to support for assistance.
+
+### Migration option 2: deploy zones by pinning one Virtual Machine Scale Set to each zone
+
+#### When to use this option
+
+This is the generally available configuration right now.
+
+To span a Service Fabric cluster across Availability Zones, you must create a primary node type in each Availability Zone supported by the region. This distributes seed nodes evenly across each of the primary node types.
+
+The recommended topology for the primary node type requires the following:
+* Three node types marked as primary
+ * Each node type should be mapped to its own Virtual Machine Scale Set located in a different zone.
+ * Each Virtual Machine Scale Set should have at least five nodes (Silver Durability).
+
+You should use this option when you have an existing Service Fabric non-managed cluster with the Standard SKU load balancer and IP Resources that you want to migrate. If your existing non-managed cluster has Basic SKU resources, you should see the Basic SKU migration option below.
+
+#### How to migrate your Service Fabric non-managed cluster with existing Standard SKU load balancer and IP resources
+
+##### Enable zones on a Virtual Machine Scale Set
+
+To enable a zone on a Virtual Machine Scale Set, include the following three values in the Virtual Machine Scale Set resource:
+
+* The first value is the `zones` property, which specifies which Availability Zone the Virtual Machine Scale Set is deployed to.
+* The second value is the `singlePlacementGroup` property, which must be set to `true`.
+* The third value is the `faultDomainOverride` property in the Service Fabric Virtual Machine Scale Set extension. This property should include only the zone in which this Virtual Machine Scale Set will be placed. Example: `"faultDomainOverride": "az1"`. All Virtual Machine Scale Set resources must be placed in the same region because Azure Service Fabric clusters don't have cross-region support.
+
+```json
+{
+ "apiVersion": "2018-10-01",
+ "type": "Microsoft.Compute/virtualMachineScaleSets",
+ "name": "[parameters('vmNodeType1Name')]",
+ "location": "[parameters('computeLocation')]",
+ "zones": [
+ "1"
+ ],
+ "properties": {
+ "singlePlacementGroup": true
+ },
+ "virtualMachineProfile": {
+ "extensionProfile": {
+ "extensions": [
+ {
+ "name": "[concat(parameters('vmNodeType1Name'),'_ServiceFabricNode')]",
+ "properties": {
+ "type": "ServiceFabricNode",
+ "autoUpgradeMinorVersion": false,
+ "publisher": "Microsoft.Azure.ServiceFabric",
+ "settings": {
+ "clusterEndpoint": "[reference(parameters('clusterName')).clusterEndpoint]",
+ "nodeTypeRef": "[parameters('vmNodeType1Name')]",
+ "dataPath": "D:\\\\SvcFab",
+ "durabilityLevel": "Silver",
+ "certificate": {
+ "thumbprint": "[parameters('certificateThumbprint')]",
+ "x509StoreName": "[parameters('certificateStoreValue')]"
+ },
+ "systemLogUploadSettings": {
+ "Enabled": true
+ },
+ "faultDomainOverride": "az1"
+ },
+ "typeHandlerVersion": "1.0"
+ }
+ }
+ ]
+ }
+ }
+}
+```
+
+##### Enable multiple primary node types in the Service Fabric cluster resource
+
+To set one or more node types as primary in a cluster resource, set the `isPrimary` property to `true`. When you deploy a Service Fabric cluster across Availability Zones, you should have three node types in distinct zones.
+
+```json
+{
+ "reliabilityLevel": "Platinum",
+ "nodeTypes": [
+ {
+ "name": "[parameters('vmNodeType0Name')]",
+ "applicationPorts": {
+ "endPort": "[parameters('nt0applicationEndPort')]",
+ "startPort": "[parameters('nt0applicationStartPort')]"
+ },
+ "clientConnectionEndpointPort": "[parameters('nt0fabricTcpGatewayPort')]",
+ "durabilityLevel": "Silver",
+ "ephemeralPorts": {
+ "endPort": "[parameters('nt0ephemeralEndPort')]",
+ "startPort": "[parameters('nt0ephemeralStartPort')]"
+ },
+ "httpGatewayEndpointPort": "[parameters('nt0fabricHttpGatewayPort')]",
+ "isPrimary": true,
+ "vmInstanceCount": "[parameters('nt0InstanceCount')]"
+ },
+ {
+ "name": "[parameters('vmNodeType1Name')]",
+ "applicationPorts": {
+ "endPort": "[parameters('nt1applicationEndPort')]",
+ "startPort": "[parameters('nt1applicationStartPort')]"
+ },
+ "clientConnectionEndpointPort": "[parameters('nt1fabricTcpGatewayPort')]",
+ "durabilityLevel": "Silver",
+ "ephemeralPorts": {
+ "endPort": "[parameters('nt1ephemeralEndPort')]",
+ "startPort": "[parameters('nt1ephemeralStartPort')]"
+ },
+ "httpGatewayEndpointPort": "[parameters('nt1fabricHttpGatewayPort')]",
+ "isPrimary": true,
+ "vmInstanceCount": "[parameters('nt1InstanceCount')]"
+ },
+ {
+ "name": "[parameters('vmNodeType2Name')]",
+ "applicationPorts": {
+ "endPort": "[parameters('nt2applicationEndPort')]",
+ "startPort": "[parameters('nt2applicationStartPort')]"
+ },
+ "clientConnectionEndpointPort": "[parameters('nt2fabricTcpGatewayPort')]",
+ "durabilityLevel": "Silver",
+ "ephemeralPorts": {
+ "endPort": "[parameters('nt2ephemeralEndPort')]",
+ "startPort": "[parameters('nt2ephemeralStartPort')]"
+ },
+ "httpGatewayEndpointPort": "[parameters('nt2fabricHttpGatewayPort')]",
+ "isPrimary": true,
+ "vmInstanceCount": "[parameters('nt2InstanceCount')]"
+ }
+ ]
+}
+```
+
+If you run into any problems, reach out to support for assistance.
+
+### Migration option: Service Fabric non-managed cluster with Basic SKU load balancer and IP resources
+
+#### When to use this option
+
+You should use this option when you have an existing Service Fabric non-managed cluster with the Basic SKU load balancer and IP Resources that you want to migrate. If your existing non-managed cluster has Standard SKU resources, you should see the migration options above. If you have not yet created your non-managed cluster but know you will want it to be AZ-enabled, create it with Standard SKU resources.
+
+#### How to migrate your Service Fabric non-managed cluster with Basic SKU load balancer and IP resources
+
+To migrate a cluster that's using a load balancer and IP with a basic SKU, you must first create an entirely new load balancer and IP resource using the standard SKU. It isn't possible to update these resources.
+
+Reference the new load balancer and IP in the new cross-Availability Zone node types that you want to use. In the previous example, three new Virtual Machine Scale Set resources were added in zones 1, 2, and 3. These Virtual Machine Scale Sets reference the newly created load balancer and IP and are marked as primary node types in the Service Fabric cluster resource.
+
+1. To begin, add the new resources to your existing Azure Resource Manager template. These resources include:
+
+ * A public IP resource using Standard SKU
+ * A load balancer resource using Standard SKU
+ * An NSG referenced by the subnet in which you deploy your Virtual Machine Scale Sets
+ * Three node types marked as primary
+ * Each node type should be mapped to its own Virtual Machine Scale Set located in a different zone.
+ * Each Virtual Machine Scale Set should have at least five nodes (Silver Durability).
+
+ An example of these resources can be found in the [sample template](https://github.com/Azure-Samples/service-fabric-cluster-templates/tree/master/10-VM-Ubuntu-2-NodeType-Secure).
+
+ ```powershell
+ New-AzureRmResourceGroupDeployment `
+ -ResourceGroupName $ResourceGroupName `
+ -TemplateFile $Template `
+ -TemplateParameterFile $Parameters
+ ```
+
+1. When the resources finish deploying, you can disable the nodes in the primary node type from the original cluster. When the nodes are disabled, the system services migrate to the new primary node type that you deployed previously.
+
+ ```powershell
+ Connect-ServiceFabricCluster -ConnectionEndpoint $ClusterName `
+ -KeepAliveIntervalInSec 10 `
+ -X509Credential `
+ -ServerCertThumbprint $thumb `
+ -FindType FindByThumbprint `
+ -FindValue $thumb `
+ -StoreLocation CurrentUser `
+ -StoreName My
+
+ Write-Host "Connected to cluster"
+
+ $nodeNames = @("_nt0_0", "_nt0_1", "_nt0_2", "_nt0_3", "_nt0_4")
+
+ Write-Host "Disabling nodes..."
+ foreach($name in $nodeNames) {
+ Disable-ServiceFabricNode -NodeName $name -Intent RemoveNode -Force
+ }
+ ```
+
+1. After the nodes are all disabled, the system services run on the new primary node types, which are spread across zones. You can then remove the disabled nodes from the cluster and delete the original Virtual Machine Scale Set. The original load balancer and IP resources are removed in the final step, after their DNS settings are captured.
+
+ ```powershell
+ foreach($name in $nodeNames){
+ # Remove the node from the cluster
+ Remove-ServiceFabricNodeState -NodeName $name -TimeoutSec 300 -Force
+ Write-Host "Removed node state for node $name"
+ }
+
+ $scaleSetName="nt0"
+ Remove-AzureRmVmss -ResourceGroupName $groupname -VMScaleSetName $scaleSetName -Force
+
+ $lbname="LB-cluster-nt0"
+ $oldPublicIpName="LBIP-cluster-0"
+ $newPublicIpName="LBIP-cluster-1"
+ ```
+
+1. Next, remove the references to these resources from the Resource Manager template that you deployed.
+
+1. Finally, update the DNS name and public IP.
+
+ ```powershell
+ $oldprimaryPublicIP = Get-AzureRmPublicIpAddress -Name $oldPublicIpName -ResourceGroupName $groupname
+ $primaryDNSName = $oldprimaryPublicIP.DnsSettings.DomainNameLabel
+ $primaryDNSFqdn = $oldprimaryPublicIP.DnsSettings.Fqdn
+
+ Remove-AzureRmLoadBalancer -Name $lbname -ResourceGroupName $groupname -Force
+ Remove-AzureRmPublicIpAddress -Name $oldPublicIpName -ResourceGroupName $groupname -Force
+
+ $PublicIP = Get-AzureRmPublicIpAddress -Name $newPublicIpName -ResourceGroupName $groupname
+ $PublicIP.DnsSettings.DomainNameLabel = $primaryDNSName
+ $PublicIP.DnsSettings.Fqdn = $primaryDNSFqdn
+ Set-AzureRmPublicIpAddress -PublicIpAddress $PublicIP
+ ```
+
+If you run into any problems, reach out to support for assistance.
+
+## Next steps
+
+- [Scale up a Service Fabric non-managed cluster primary node type](../service-fabric/service-fabric-scale-up-primary-node-type.md)
+
+- [Add, remove, or scale Service Fabric managed cluster node types](../service-fabric/how-to-managed-cluster-modify-node-type.md)
security Management Monitoring Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/management-monitoring-overview.md
Symantec Endpoint Protection (SEP) is also supported on Azure. Through portal in
Learn more: * [Microsoft Antimalware for Azure Cloud Services and Virtual Machines](antimalware.md)
-* [How to install and configure Symantec Endpoint Protection on a Windows VM](../../virtual-machines/extensions/symantec.md)
* [New Antimalware Options for Protecting Azure Virtual Machines](https://azure.microsoft.com/blog/new-antimalware-options-for-protecting-azure-virtual-machines/) ## Multi-Factor Authentication
security Virtual Machines Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/virtual-machines-overview.md
Learn more about antimalware software to help protect your virtual machines:
* [Deploying Antimalware Solutions on Azure Virtual Machines](https://azure.microsoft.com/blog/deploying-antimalware-solutions-on-azure-virtual-machines/) * [How to install and configure Trend Micro Deep Security as a service on a Windows VM](/previous-versions/azure/virtual-machines/extensions/trend)
-* [How to install and configure Symantec Endpoint Protection on a Windows VM](../../virtual-machines/extensions/symantec.md)
* [Security solutions in the Azure Marketplace](https://azure.microsoft.com/marketplace/?term=security) For even more powerful protection, consider using [Microsoft Defender for Endpoint](/mem/configmgr/protect/deploy-use/defender-advanced-threat-protection). With Defender for Endpoint, you get:
sentinel Data Connectors Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors-reference.md
Title: Find your Microsoft Sentinel data connector | Microsoft Docs
description: Learn about specific configuration steps for Microsoft Sentinel data connectors. Previously updated : 03/25/2023 Last updated : 04/18/2023
Data connectors are available as part of the following offerings:
## Atlassian - [Atlassian Confluence Audit (using Azure Function)](data-connectors/atlassian-confluence-audit-using-azure-function.md)
+- [Atlassian Jira Audit (using Azure Function)](data-connectors/atlassian-jira-audit-using-azure-function.md)
## Auth0
Data connectors are available as part of the following offerings:
- [Cisco ASA](data-connectors/cisco-asa.md) - [Cisco Duo Security (using Azure Function)](data-connectors/cisco-duo-security-using-azure-function.md)
+- [Cisco Identity Services Engine](data-connectors/cisco-identity-services-engine.md)
- [Cisco Meraki](data-connectors/cisco-meraki.md) - [Cisco Secure Email Gateway](data-connectors/cisco-secure-email-gateway.md) - [Cisco Secure Endpoint (AMP) (using Azure Function)](data-connectors/cisco-secure-endpoint-amp-using-azure-function.md)
Data connectors are available as part of the following offerings:
- [ExtraHop Reveal(x)](data-connectors/extrahop-reveal-x.md)
-## F5 Networks
+## F5, Inc.
- [F5 BIG-IP](data-connectors/f5-big-ip.md) - [F5 Networks](data-connectors/f5-networks.md)
Data connectors are available as part of the following offerings:
- [Imperva Cloud WAF (using Azure Function)](data-connectors/imperva-cloud-waf-using-azure-function.md)
+## Infoblox
+
+- [Infoblox NIOS](data-connectors/infoblox-nios.md)
+ ## Infoblox Inc. - [Infoblox Cloud Data Connector](data-connectors/infoblox-cloud-data-connector.md)
sentinel Atlassian Jira Audit Using Azure Function https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/atlassian-jira-audit-using-azure-function.md
+
+ Title: "Atlassian Jira Audit (using Azure Function) connector for Microsoft Sentinel"
+description: "Learn how to install the connector Atlassian Jira Audit (using Azure Function) to connect your data source to Microsoft Sentinel."
++ Last updated : 04/18/2023++++
+# Atlassian Jira Audit (using Azure Function) connector for Microsoft Sentinel
+
+The [Atlassian Jira](https://www.atlassian.com/software/jira) Audit data connector provides the capability to ingest [Jira Audit Records](https://support.atlassian.com/jira-cloud-administration/docs/audit-activities-in-jira-applications/) events into Microsoft Sentinel through the REST API. Refer to [API documentation](https://developer.atlassian.com/cloud/jira/platform/rest/v3/api-group-audit-records/) for more information. The connector provides the ability to get events, which helps you examine potential security risks, analyze your team's use of collaboration, diagnose configuration problems, and more.
+
+## Connector attributes
+
+| Connector attribute | Description |
+| | |
+| **Application settings** | JiraUsername<br/>JiraAccessToken<br/>JiraHomeSiteName<br/>WorkspaceID<br/>WorkspaceKey<br/>logAnalyticsUri (optional) |
+| **Azure function app code** | https://aka.ms/sentinel-jiraauditapi-functionapp |
+| **Kusto function alias** | JiraAudit |
+| **Kusto function url** | https://aka.ms/sentinel-jiraauditapi-parser |
+| **Log Analytics table(s)** | Jira_Audit_CL<br/> |
+| **Data collection rules support** | Not currently supported |
+| **Supported by** | [Microsoft Corporation](https://support.microsoft.com) |
+
+## Query samples
+
+**Jira Audit Events - All Activities**
+ ```kusto
+JiraAudit
+
+ | sort by TimeGenerated desc
+ ```
+++
+## Prerequisites
+
+To integrate with Atlassian Jira Audit (using Azure Function) make sure you have:
+
+- **Microsoft.Web/sites permissions**: Read and write permissions to Azure Functions to create a Function App are required. [See the documentation to learn more about Azure Functions](https://learn.microsoft.com/azure/azure-functions/).
+- **REST API Credentials/permissions**: **JiraAccessToken** and **JiraUsername** are required for the REST API. [See the documentation to learn more about the API](https://developer.atlassian.com/cloud/jira/platform/rest/v3/api-group-audit-records/). Check all [requirements and follow the instructions](https://developer.atlassian.com/cloud/jira/platform/rest/v3/intro/#authentication) for obtaining credentials.
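+
+To sanity-check these credentials before deploying, you can call the audit records endpoint directly. The following is a hedged sketch: the email, token, and site name are placeholders, and the endpoint is the v3 audit records API linked above.
+
+```powershell
+# Sketch: query the Jira Cloud audit records endpoint with basic authentication.
+# Replace the placeholder email, API token, and site name with your own values.
+$pair  = "user@example.com:<JiraAccessToken>"
+$basic = [Convert]::ToBase64String([System.Text.Encoding]::ASCII.GetBytes($pair))
+Invoke-RestMethod -Uri "https://<JiraHomeSiteName>.atlassian.net/rest/api/3/auditing/record" `
+    -Headers @{ Authorization = "Basic $basic" }
+```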
++
+## Vendor installation instructions
++
+> [!NOTE]
+ > This connector uses Azure Functions to connect to the Jira REST API to pull its logs into Microsoft Sentinel. This might result in additional data ingestion costs. Check the [Azure Functions pricing page](https://azure.microsoft.com/pricing/details/functions/) for details.
++
+>**(Optional Step)** Securely store workspace and API authorization key(s) or token(s) in Azure Key Vault. Azure Key Vault provides a secure mechanism to store and retrieve key values. [Follow these instructions](https://learn.microsoft.com/azure/app-service/app-service-key-vault-references) to use Azure Key Vault with an Azure Function App.
++
+> [!NOTE]
+ > This data connector depends on a parser based on a Kusto Function to work as expected. [Follow these steps](https://aka.ms/sentinel-jiraauditapi-parser) to create the Kusto functions alias, **JiraAudit**
++
+**STEP 1 - Configuration steps for the Jira API**
+
+ [Follow the instructions](https://developer.atlassian.com/cloud/jira/platform/rest/v3/intro/#authentication) to obtain the credentials.
+++
+**STEP 2 - Choose ONE from the following two deployment options to deploy the connector and the associated Azure Function**
+
+>**IMPORTANT:** Before deploying the Workspace data connector, have the Workspace ID and Workspace Primary Key (can be copied from the following).
+++
+Option 1 - Azure Resource Manager (ARM) Template
+
+Use this method for automated deployment of the Jira Audit data connector using an ARM Template.
+
+1. Click the **Deploy to Azure** button below.
+
+ [![Deploy To Azure](https://aka.ms/deploytoazurebutton)](https://aka.ms/sentineljiraauditazuredeploy)
+2. Select the preferred **Subscription**, **Resource Group** and **Location**.
+> **NOTE:** Within the same resource group, you can't mix Windows and Linux apps in the same region. Select an existing resource group without Windows apps in it, or create a new resource group.
+3. Enter the **JiraAccessToken**, **JiraUsername**, **JiraHomeSiteName** (the short site name, for example HOMESITENAME from https://HOMESITENAME.atlassian.net), and deploy.
+4. Mark the checkbox labeled **I agree to the terms and conditions stated above**.
+5. Click **Purchase** to deploy.
+
+Option 2 - Manual Deployment of Azure Functions
+
+Use the following step-by-step instructions to deploy the Jira Audit data connector manually with Azure Functions (Deployment via Visual Studio Code).
++
+**1. Deploy a Function App**
+
+> **NOTE:** You will need to [prepare VS code](https://learn.microsoft.com/azure/azure-functions/functions-create-first-function-python#prerequisites) for Azure function development.
+
+1. Download the [Azure Function App](https://aka.ms/sentinel-jiraauditapi-functionapp) file. Extract archive to your local development computer.
+2. Start VS Code. Choose File in the main menu and select Open Folder.
+3. Select the top level folder from extracted files.
+4. Choose the Azure icon in the Activity bar, then in the **Azure: Functions** area, choose the **Deploy to function app** button.
+If you aren't already signed in, choose the Azure icon in the Activity bar, then in the **Azure: Functions** area, choose **Sign in to Azure**.
+If you're already signed in, go to the next step.
+5. Provide the following information at the prompts:
+
+ a. **Select folder:** Choose a folder from your workspace or browse to one that contains your function app.
+
+ b. **Select Subscription:** Choose the subscription to use.
+
+ c. Select **Create new Function App in Azure** (Don't choose the Advanced option)
+
+ d. **Enter a globally unique name for the function app:** Type a name that is valid in a URL path. The name you type is validated to make sure that it's unique in Azure Functions. (e.g. JiraAuditXXXXX).
+
+ e. **Select a runtime:** Choose Python 3.8.
+
+ f. Select a location for new resources. For better performance and lower costs choose the same [region](https://azure.microsoft.com/regions/) where Microsoft Sentinel is located.
+
+6. Deployment will begin. A notification is displayed after your function app is created and the deployment package is applied.
+7. Go to Azure Portal for the Function App configuration.
++
+**2. Configure the Function App**
+
+1. In the Function App, select the Function App Name and select **Configuration**.
+2. In the **Application settings** tab, select **New application setting**.
+3. Add each of the following application settings individually, with their respective string values (case-sensitive):
+ JiraUsername
+ JiraAccessToken
+ JiraHomeSiteName
+ WorkspaceID
+ WorkspaceKey
+ logAnalyticsUri (optional)
+> - Use logAnalyticsUri to override the log analytics API endpoint for dedicated cloud. For example, for public cloud, leave the value empty; for Azure GovUS cloud environment, specify the value in the following format: `https://<CustomerId>.ods.opinsights.azure.us`.
+4. Once all application settings have been entered, click **Save**.
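+
+Alternatively, you can apply the same settings from the command line. The following is a sketch; the function app and resource group names are placeholders:
+
+```powershell
+# Sketch: set the connector's application settings with the Azure CLI (run from PowerShell).
+# 'JiraAuditXXXXX' and 'my-resource-group' are placeholder names.
+az functionapp config appsettings set `
+    --name JiraAuditXXXXX `
+    --resource-group my-resource-group `
+    --settings JiraUsername=<user> JiraAccessToken=<token> JiraHomeSiteName=<site> WorkspaceID=<workspace-id> WorkspaceKey=<workspace-key>
+```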
+++
+## Next steps
+
+For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/azuresentinel.azure-sentinel-solution-atlassianjiraaudit?tab=Overview) in the Azure Marketplace.
sentinel Cisco Identity Services Engine https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/cisco-identity-services-engine.md
+
+ Title: "Cisco Identity Services Engine connector for Microsoft Sentinel"
+description: "Learn how to install the connector Cisco Identity Services Engine to connect your data source to Microsoft Sentinel."
++ Last updated : 04/18/2023++++
+# Cisco Identity Services Engine connector for Microsoft Sentinel
+
+The Cisco Identity Services Engine (ISE) data connector provides the capability to ingest [Cisco ISE](https://www.cisco.com/c/en/us/products/security/identity-services-engine/index.html) events with Microsoft Sentinel. It helps you gain visibility into what is happening in your network, such as who is connected, which applications are installed and running, and much more. Refer to [Cisco ISE logging mechanism documentation](https://www.cisco.com/c/en/us/td/docs/security/ise/2-7/admin_guide/b_ise_27_admin_guide/b_ISE_admin_27_maintain_monitor.html#reference_BAFBA5FA046A45938810A5DF04C00591) for more information.
+
+## Connector attributes
+
+| Connector attribute | Description |
+| | |
+| **Kusto function alias** | CiscoISEEvent |
+| **Kusto function url** | https://aka.ms/sentinel-ciscoise-parser |
+| **Log Analytics table(s)** | Syslog(CiscoISE)<br/> |
+| **Data collection rules support** | [Workspace transform DCR](/azure/azure-monitor/logs/tutorial-workspace-transformations-portal) |
+| **Supported by** | [Microsoft Corporation](https://support.microsoft.com) |
+
+## Query samples
+
+**Top 10 Reporting Devices**
+ ```kusto
+CiscoISEEvent
+
+ | summarize count() by DvcHostname
+
+ | top 10 by count_
+ ```
+++
+## Vendor installation instructions
++
+> [!NOTE]
+ > This data connector depends on a parser based on a Kusto Function to work as expected. [Follow these steps](https://aka.ms/sentinel-ciscoise-parser) to create the Kusto Functions alias, **CiscoISEEvent**
+
+1. Install and onboard the agent for Linux
+
+Typically, you should install the agent on a different computer from the one on which the logs are generated.
+
+> Syslog logs are collected only from **Linux** agents.
++
+2. Configure the logs to be collected
+
+Configure the facilities you want to collect and their severities.
+
+1. Under workspace advanced settings **Configuration**, select **Data** and then **Syslog**.
+2. Select **Apply below configuration to my machines** and select the facilities and severities.
+3. Click **Save**.
++
+3. Configure Cisco ISE Remote Syslog Collection Locations
+
+[Follow these instructions](https://www.cisco.com/c/en/us/td/docs/security/ise/2-7/admin_guide/b_ise_27_admin_guide/b_ISE_admin_27_maintain_monitor.html#ID58) to configure remote syslog collection locations in your Cisco ISE deployment.
+++
+## Next steps
+
+For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/azuresentinel.azure-sentinel-solution-ciscoise?tab=Overview) in the Azure Marketplace.
sentinel Infoblox Nios https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/infoblox-nios.md
+
+ Title: "Infoblox NIOS connector for Microsoft Sentinel"
+description: "Learn how to install the connector Infoblox NIOS to connect your data source to Microsoft Sentinel."
++ Last updated : 04/18/2023++++
+# Infoblox NIOS connector for Microsoft Sentinel
+
+The [Infoblox Network Identity Operating System (NIOS)](https://www.infoblox.com/glossary/network-identity-operating-system-nios/) connector allows you to easily connect your Infoblox NIOS logs with Microsoft Sentinel, to view dashboards, create custom alerts, and improve investigation. This gives you more insight into your organization's network and improves your security operation capabilities.
+
+## Connector attributes
+
+| Connector attribute | Description |
+| | |
+| **Log Analytics table(s)** | Syslog (InfobloxNIOS)<br/> |
+| **Data collection rules support** | [Workspace transform DCR](/azure/azure-monitor/logs/tutorial-workspace-transformations-portal) |
+| **Supported by** | [Infoblox](https://www.infoblox.com/support/) |
+
+## Query samples
+
+**Total Count by DHCP Request Message Types**
+ ```kusto
+union isfuzzy=true
+ Infoblox_dhcpdiscover,Infoblox_dhcprequest,Infoblox_dhcpinform
+
+ | summarize count() by Log_Type
+ ```
+
+**Top 10 Source IP addresses**
+ ```kusto
+Infoblox_dnsclient
+
+ | summarize count() by SrcIpAddr
+
+ | top 10 by count_ desc
+ ```
+++
+## Prerequisites
+
+To integrate with Infoblox NIOS make sure you have:
+
+- **Infoblox NIOS**: must be configured to export logs via Syslog
++
+## Vendor installation instructions
++
+**NOTE:** This data connector depends on a parser based on a Kusto Function to work as expected, which is deployed as part of the solution. To view the function code in Log Analytics, open the Log Analytics/Microsoft Sentinel Logs blade, click **Functions**, search for the alias Infoblox, and load the function code, or click [here](https://github.com/Azure/Azure-Sentinel/blob/master/Solutions/Infoblox%20NIOS/Parser/Infoblox.txt). On the second line of the query, enter the hostname(s) of your Infoblox device(s) and any other unique identifiers for the logstream. The function usually takes 10-15 minutes to activate after solution installation or update.
+
+1. Install and onboard the agent for Linux
+
+Typically, you should install the agent on a different computer from the one on which the logs are generated.
+
+> Syslog logs are collected only from **Linux** agents.
++
+2. Configure the logs to be collected
+
+Configure the facilities you want to collect and their severities.
+ 1. Under workspace advanced settings **Configuration**, select **Data** and then **Syslog**.
+ 2. Select **Apply below configuration to my machines** and select the facilities and severities.
+ 3. Click **Save**.
++
+3. Configure and connect the Infoblox NIOS
+
+[Follow these instructions](https://www.infoblox.com/wp-content/uploads/infoblox-deployment-guide-slog-and-snmp-configuration-for-nios.pdf) to enable syslog forwarding of Infoblox NIOS Logs. Use the IP address or hostname for the Linux device with the Linux agent installed as the Destination IP address.
+++
+## Next steps
+
+For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/azuresentinel.azure-sentinel-solution-infobloxnios?tab=Overview) in the Azure Marketplace.
sentinel Detect Threats Custom https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/detect-threats-custom.md
You can easily determine the presence of any auto-disabled rules, by sorting the
SOC managers should be sure to check the rule list regularly for the presence of auto-disabled rules.
+#### Permanent failure due to resource drain
+
+Another kind of permanent failure occurs due to an **improperly built query** that causes the rule to consume **excessive computing resources** and risks being a performance drain on your systems. When Microsoft Sentinel identifies such a rule, it takes the same three steps mentioned above for the other permanent failures&mdash;disables the rule, prepends **"AUTO DISABLED"** to the rule name, and adds the reason for the failure to the description.
+
+To re-enable the rule, you must address the issues in the query that cause it to use too many resources. See the following articles for best practices to optimize your Kusto queries:
+
+- [Query best practices - Azure Data Explorer](/azure/data-explorer/kusto/query/best-practices)
+- [Optimize log queries in Azure Monitor](../azure-monitor/logs/query-optimization.md)
+
+Also see [Useful resources for working with Kusto Query Language in Microsoft Sentinel](kusto-resources.md) for further assistance.
+ ## Next steps When using analytics rules to detect threats from Microsoft Sentinel, make sure that you enable all rules associated with your connected data sources in order to ensure full security coverage for your environment. The most efficient way to enable analytics rules is directly from the data connector page, which lists any related rules. For more information, see [Connect data sources](connect-data-sources.md).
You can also push rules to Microsoft Sentinel via [API](/rest/api/securityinsigh
For more information, see: - [Tutorial: Investigate incidents with Microsoft Sentinel](investigate-cases.md)
+- [Navigate and investigate incidents in Microsoft Sentinel - Preview](investigate-incidents.md)
- [Classify and analyze data using entities in Microsoft Sentinel](entities.md) - [Tutorial: Use playbooks with automation rules in Microsoft Sentinel](tutorial-respond-threats-playbook.md)
service-bus-messaging Service Bus Migrate Azure Credentials https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/service-bus-migrate-azure-credentials.md
Last updated 04/12/2023-+ - devx-track-csharp
service-bus-messaging Service Bus Performance Improvements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/service-bus-performance-improvements.md
Goal: Maximize the throughput of a single queue. The number of senders and recei
* To increase the overall send rate into the queue, use multiple message factories to create senders. For each sender, use asynchronous operations or multiple threads. * To increase the overall receive rate from the queue, use multiple message factories to create receivers. * Use asynchronous operations to take advantage of client-side batching.
-* Set the batching interval to 50 ms to reduce the number of Service Bus client protocol transmissions. If multiple senders are used, increase the batching interval to 100 ms.
* Leave batched store access enabled. This access increases the overall rate at which messages can be written into the queue. * Set the prefetch count to 20 times the maximum processing rates of all receivers of a factory. This count reduces the number of Service Bus client protocol transmissions.
To maximize throughput, follow these steps:
* If each sender is in a different process, use only a single factory per process. * Use asynchronous operations to take advantage of client-side batching.
-* Use the default batching interval of 20 ms to reduce the number of Service Bus client protocol transmissions.
* Leave batched store access enabled. This access increases the overall rate at which messages can be written into the queue or topic. * Set the prefetch count to 20 times the maximum processing rates of all receivers of a factory. This count reduces the number of Service Bus client protocol transmissions.
To maximize throughput, follow these guidelines:
* To increase the overall send rate into the topic, use multiple message factories to create senders. For each sender, use asynchronous operations or multiple threads. * To increase the overall receive rate from a subscription, use multiple message factories to create receivers. For each receiver, use asynchronous operations or multiple threads. * Use asynchronous operations to take advantage of client-side batching.
-* Use the default batching interval of 20 ms to reduce the number of Service Bus client protocol transmissions.
* Leave batched store access enabled. This access increases the overall rate at which messages can be written into the topic. * Set the prefetch count to 20 times the maximum processing rates of all receivers of a factory. This count reduces the number of Service Bus client protocol transmissions.
Topics with a large number of subscriptions typically expose a low overall throu
To maximize throughput, try the following steps: * Use asynchronous operations to take advantage of client-side batching.
-* Use the default batching interval of 20 ms to reduce the number of Service Bus client protocol transmissions.
* Leave batched store access enabled. This access increases the overall rate at which messages can be written into the topic. * Set the prefetch count to 20 times the expected receive rate in seconds. This count reduces the number of Service Bus client protocol transmissions.
service-fabric Service Fabric Diagnostics Oms Setup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-diagnostics-oms-setup.md
If you are using Windows, continue with the following steps to connect Azure Mon
1. The workspace needs to be connected to the diagnostics data coming from your cluster. Go to the resource group in which you created the Service Fabric Analytics solution. Select **ServiceFabric\<nameOfWorkspace\>** and go to its overview page. From there, you can change solution settings, workspace settings, and access the Log Analytics workspace.
-2. On the left navigation menu, under **Workspace Data Sources**, select **Storage accounts logs**.
+2. On the left navigation menu, select the **Overview** tab, and then under **Connect a Data Source**, select **Storage accounts logs**.
3. On the **Storage account logs** page, select **Add** at the top to add your cluster's logs to the workspace.
spring-apps How To Enable System Assigned Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-enable-system-assigned-managed-identity.md
az spring app identity remove \
--system-assigned ``` ++ ## Get the client ID from the object ID (principal ID)
-Use the following command to get the client ID from the object/principle ID value:
+Use the following command to get the client ID from the object/principal ID value:
```azurecli az ad sp show --id <object-ID> --query appId
spring-apps How To Enterprise Marketplace Offer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-enterprise-marketplace-offer.md
To provide the best customer experience to manage the Tanzu component license pu
Under this implicit Azure Marketplace third-party offer purchase from VMware, your personal data and application vCPU usage data is shared with VMware. You agree to this data sharing when you agree to the marketplace terms upon creating the service instance.
-To purchase the Tanzu component license successfully, the billing account of your subscription must be included in one of the locations listed in the [Supported geographic locations of billing account](#supported-geographic-locations-of-billing-account) section. Because of tax management restrictions from VMware in some countries/regions, not all countries/regions are supported.
+To purchase the Tanzu component license successfully, the [billing account](../cost-management-billing/manage/view-all-accounts.md) of your subscription must be included in one of the locations listed in the [Supported geographic locations of billing account](#supported-geographic-locations-of-billing-account) section. Because of tax management restrictions from VMware in some countries/regions, not all countries/regions are supported.
The extra license fees apply only to the Enterprise tier. In the Azure Spring Apps Standard tier, there are no extra license fees because the managed Spring components use the OSS config server and Eureka server. No other third-party license fees are required.
You must understand and fulfill the following requirements to successfully creat
- Your Azure subscription must have an associated payment method. Azure credits or free MSDN subscriptions aren't supported. For more information, see the [Purchasing requirements](/marketplace/azure-marketplace-overview#purchasing-requirements) section of [What is Azure Marketplace?](/marketplace/azure-marketplace-overview) -- Your Azure subscription must belong to a billing account in a supported geographic location defined in the [Azure Spring Apps Enterprise](https://aka.ms/ascmpoffer) offer in Azure Marketplace. For more information, see the [Supported geographic locations of billing account](#supported-geographic-locations-of-billing-account) section.
+- Your Azure subscription must belong to a [billing account](../cost-management-billing/manage/view-all-accounts.md) in a supported geographic location defined in the [Azure Spring Apps Enterprise](https://aka.ms/ascmpoffer) offer in Azure Marketplace. For more information, see the [Supported geographic locations of billing account](#supported-geographic-locations-of-billing-account) section.
- Your region must be available. Choose an Azure region currently available. For more information, see [In which regions is Azure Spring Apps Enterprise tier available?](./faq.md#in-which-regions-is-azure-spring-apps-enterprise-tier-available) in the [Azure Spring Apps FAQ](faq.md).
You must understand and fulfill the following requirements to successfully creat
## Supported geographic locations of billing account
-To successfully purchase the [Azure Spring Apps Enterprise](https://aka.ms/ascmpoffer) offer on Azure Marketplace, your Azure subscription must belong to a billing account in a supported geographic location defined in the offer.
+To successfully purchase the [Azure Spring Apps Enterprise](https://aka.ms/ascmpoffer) offer on Azure Marketplace, your Azure subscription must belong to a [billing account](../cost-management-billing/manage/view-all-accounts.md) in a supported geographic location defined in the offer.
The following table lists each supported geographic location and its [ISO 3166 two-digit alpha code](https://en.wikipedia.org/wiki/List_of_ISO_3166_country_codes).
spring-apps Tutorial Managed Identities Key Vault https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/tutorial-managed-identities-key-vault.md
az spring app create \
    --assign-endpoint true \
    --runtime-version Java_17 \
    --system-assigned
-export SERVICE_IDENTITY=$(az spring app show \
+export MANAGED_IDENTITY_PRINCIPAL_ID=$(az spring app show \
    --resource-group "<your-resource-group-name>" \
    --service "<your-Azure-Spring-Apps-instance-name>" \
    --name "springapp" \
First, create a user-assigned managed identity in advance with its resource ID s
:::image type="content" source="media/tutorial-managed-identities-key-vault/app-user-managed-identity-key-vault.png" alt-text="Screenshot of Azure portal showing the Managed Identity Properties screen with 'Resource ID', 'Principal ID' and 'Client ID' highlighted." lightbox="media/tutorial-managed-identities-key-vault/app-user-managed-identity-key-vault.png":::

```bash
-export SERVICE_IDENTITY=<principal-ID-of-user-assigned-managed-identity>
+export MANAGED_IDENTITY_PRINCIPAL_ID=<principal-ID-of-user-assigned-managed-identity>
export USER_IDENTITY_RESOURCE_ID=<resource-ID-of-user-assigned-managed-identity>
```
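As a quick sanity check that isn't part of the original tutorial, you can echo both variables to confirm they're set before granting access:

```bash
# Both values should print non-empty (a GUID and a full resource ID)
echo $MANAGED_IDENTITY_PRINCIPAL_ID
echo $USER_IDENTITY_RESOURCE_ID
```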
Use the following command to grant proper access in Key Vault for your app:
```azurecli
az keyvault set-policy \
    --name "<your-keyvault-name>" \
- --object-id ${SERVICE_IDENTITY} \
+ --object-id ${MANAGED_IDENTITY_PRINCIPAL_ID} \
    --secret-permissions set get list
```

> [!NOTE]
-> For system-assigned managed identity case, use `az keyvault delete-policy --name "<your-keyvault-name>" --object-id ${SERVICE_IDENTITY}` to remove the access for your app after system-assigned managed identity is disabled.
+> For the system-assigned managed identity case, use `az keyvault delete-policy --name "<your-keyvault-name>" --object-id ${MANAGED_IDENTITY_PRINCIPAL_ID}` to remove access for your app after the system-assigned managed identity is disabled.
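To confirm the policy took effect, one option (assuming the vault uses access policies rather than Azure RBAC) is to filter the vault's access policies by the principal ID. The JMESPath query below is an illustrative sketch:

```azurecli
# Lists the secret permissions granted to the managed identity, if any
az keyvault show \
    --name "<your-keyvault-name>" \
    --query "properties.accessPolicies[?objectId=='${MANAGED_IDENTITY_PRINCIPAL_ID}'].permissions.secrets"
```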
## Build a sample Spring Boot app with Spring Boot starter
static-web-apps Bitbucket https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/static-web-apps/bitbucket.md
Now that the repository is created, you can create a static web app from the Azu
|--|--|--|--|
| `app_location` | Location of your application code. | Enter `/` if your application source code is at the root of the repository, or `/app` if your application code is in a directory named `app`. | Yes |
| `api_location` | Location of your Azure Functions code. | Enter `/api` if your api code is in a folder named `api`. If no Azure Functions app is detected in the folder, the build doesn't fail; the workflow assumes you don't want an API. | No |
- | `output_location` | Location of the build output directory relative to the `app_location`. | If your application source code is located at `/app`, and the build script outputs files to the `/app/build` folder, then set build as the `output_location` value. | No |
+ | `output_location` | Location of the build output directory relative to the `app_location`. | If your application source code is located at `/app`, and the build script outputs files to the `/app/build` folder, then set `build` as the `output_location` value. | No |
Next, define a value for the `API_TOKEN` variable.
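To see how these values fit together in practice, here's a minimal sketch of a `bitbucket-pipelines.yml` using the Azure Static Web Apps deploy pipe. The pipe reference, branch name, and variable wiring below are illustrative assumptions to verify against your repository setup:

```yaml
pipelines:
  branches:
    main:
      - step:
          name: Deploy to Azure Static Web Apps
          script:
            # Pipe name/version and variable values are assumptions for illustration
            - pipe: microsoft/azure-static-web-apps-deploy:main
              variables:
                APP_LOCATION: '$BITBUCKET_CLONE_DIR'          # app source at the repository root
                OUTPUT_LOCATION: '$BITBUCKET_CLONE_DIR/build' # build output folder
                API_TOKEN: $API_TOKEN                         # repository variable holding the deployment token
```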
The process to delete the resource group may take a few minutes to complete.
## Next steps

> [!div class="nextstepaction"]
-> [Add an API](add-api.md)
+> [Add an API](add-api.md)
storage Storage Quickstart Blobs Go https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-quickstart-blobs-go.md
Get started with the Azure Blob Storage client library for Go to manage blobs an
This section walks you through preparing a project to work with the Azure Blob Storage client library for Go.
-### Install the packages
+### Download the sample application
-To work with blob and container resources in a storage account, install the [azblob](https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/storage/azblob/) package using the following command:
+The [sample application](https://github.com/Azure-Samples/storage-blobs-go-quickstart.git) used in this quickstart is a basic Go application.
-```console
-go get github.com/Azure/azure-sdk-for-go/sdk/storage/azblob
-```
-To authenticate with Azure Active Directory (recommended), install the [azidentity](https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/azidentity) module using the following command:
+Use [git](https://git-scm.com/) to download a copy of the application to your development environment.
```console
-go get github.com/Azure/azure-sdk-for-go/sdk/azidentity
+git clone https://github.com/Azure-Samples/storage-blobs-go-quickstart
```
-### Download the sample application
+This command clones the repository to your local git folder. To open the Go sample for Blob Storage, look for the file named `storage-quickstart.go`.
-The [sample application](https://github.com/Azure-Samples/storage-blobs-go-quickstart.git) used in this quickstart is a basic Go application.
+### Install the packages
-Use [git](https://git-scm.com/) to download a copy of the application to your development environment.
+To work with blob and container resources in a storage account, install the [azblob](https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/storage/azblob/) package using the following command:
```console
-git clone https://github.com/Azure-Samples/storage-blobs-go-quickstart
+go get github.com/Azure/azure-sdk-for-go/sdk/storage/azblob
```
+To authenticate with Azure Active Directory (recommended), install the [azidentity](https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/azidentity) module using the following command:
-This command clones the repository to your local git folder. To open the Go sample for Blob Storage, look for the file named `storage-quickstart.go`.
+```console
+go get github.com/Azure/azure-sdk-for-go/sdk/azidentity
+```
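With both packages installed, you can then run the sample from the cloned folder. This assumes any configuration the article describes later, such as authentication, is already in place:

```console
go run storage-quickstart.go
```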
## Authenticate to Azure and authorize access to blob data
storage File Sync Disaster Recovery Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/file-sync/file-sync-disaster-recovery-best-practices.md
Title: Best practices for disaster recovery with Azure File Sync
-description: Learn about best practices for disaster recovery with Azure File Sync. Specifically, high availability, data protection, and data redundancy.
+description: Learn about best practices for disaster recovery with Azure File Sync, including high availability, data protection, and data redundancy.
Previously updated : 05/24/2022 Last updated : 04/18/2023
In an Azure File Sync deployment, the cloud endpoint always contains a full copy
Due to its hybrid nature, some traditional server backup and disaster recovery strategies won't work with Azure File Sync. For any registered server, Azure File Sync doesn't support:

> [!WARNING]
-> Taking any of these actions may lead to issues with sync or broken tiered files that result in eventual data loss. If you have taken one of these actions, contact Azure support to ensure your deployment is healthy.
+> Taking any of these actions may lead to issues with sync or broken tiered files that result in eventual data loss. If you've taken one of these actions, contact Azure support to ensure your deployment is healthy.
-- Transferring disk drives from one server to another
+- Transferring/cloning disk drives (volume) from one server to another while the server endpoint is still active
- Restoring from an operating system backup
- Cloning a server's operating system to another server
- Reverting to a previous virtual machine checkpoint
-- Restoring files from on-premises backup if cloud tiering is enabled
+- Restoring tiered files from on-premises (third party) backup to the server endpoint
## High availability
Although you can manually request a failover of your Storage Sync Service to you
> [!WARNING]
> You must contact support to request that your Storage Sync Service be failed over if you're initiating this process manually. If you attempt to create a new Storage Sync Service using the same server endpoints in the secondary region, extra data might remain in your storage account because the previous installation of Azure File Sync won't be cleaned up.
-Once a failover occurs, server endpoints will switch over to sync with the cloud endpoint in the secondary region automatically. However, the server endpoints must reconcile with the cloud endpoints. This may result in file conflicts as the data in the secondary region may not be caught up to the primary.
+Once a failover occurs, server endpoints will switch over to sync with the cloud endpoint in the secondary region automatically. However, the server endpoints must reconcile with the cloud endpoints. This might result in file conflicts as the data in the secondary region might not be caught up to the primary.
## Next steps
virtual-desktop Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/authentication.md
To use a smart card to authenticate to Azure AD, you must first [configure AD FS
If you haven't already enabled [single sign-on](#single-sign-on-sso) or saved your credentials locally, you'll also need to authenticate to the session host when launching a connection. The following list describes which types of authentication each Azure Virtual Desktop client currently supports.

-- The Windows Desktop client supports the following authentication methods:
+- The Windows Desktop client and Azure Virtual Desktop Store app both support the following authentication methods:
  - Username and password
  - Smart card
  - [Windows Hello for Business certificate trust](/windows/security/identity-protection/hello-for-business/hello-hybrid-cert-trust)
  - [Windows Hello for Business key trust with certificates](/windows/security/identity-protection/hello-for-business/hello-deployment-rdp-certs)
  - [Azure AD authentication](configure-single-sign-on.md)
-- The Windows Store client supports the following authentication method:
+- The Remote Desktop app supports the following authentication method:
  - Username and password
- The web client supports the following authentication method:
  - Username and password
virtual-desktop Compare Remote Desktop Clients https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/compare-remote-desktop-clients.md
There are some differences between the features of each of the Remote Desktop cl
The following table compares the features of each Remote Desktop client when connecting to Azure Virtual Desktop.
-| Feature | Windows Desktop | Microsoft Store | Android or Chrome OS | iOS or iPadOS | macOS | Web | Description |
+| Feature | Windows Desktop and Azure Virtual Desktop Store app | Remote Desktop app | Android or Chrome OS | iOS or iPadOS | macOS | Web | Description |
|--|--|--|--|--|--|--|--|
| Remote Desktop sessions | X | X | X | X | X | X | Desktop of a remote computer presented in a full screen or windowed mode. |
| Integrated RemoteApp sessions | X | | | | X | | Individual remote apps integrated into the local desktop as if they are running locally. |
The following tables compare support for device and other redirections across th
The following table shows which input methods are available for each Remote Desktop client:
-| Input | Windows Desktop | Microsoft Store client | Android or Chrome OS | iOS or iPadOS | macOS | Web client |
+| Input | Windows Desktop and Azure Virtual Desktop Store app | Remote Desktop app | Android or Chrome OS | iOS or iPadOS | macOS | Web client |
|--|--|--|--|--|--|--|
| Keyboard | X | X | X | X | X | X |
| Mouse | X | X | X | X | X | X |
The following table shows which input methods are available for each Remote Desk
| Multi-touch | X | X | X | X | | |
| Pen | X | | X (as touch) | X\* | | |
-\* Pen input redirection is not supported when connecting to Windows 8, Windows 8.1, Windows Server 2012, or Windows Server 2012 R2.
+\* Pen input redirection is not supported when connecting to Windows Server 2012 or Windows Server 2012 R2.
### Port redirection

The following table shows which ports can be redirected for each Remote Desktop client:
-| Redirection | Windows Desktop | Microsoft Store client | Android or Chrome OS | iOS or iPadOS | macOS | Web client |
+| Redirection | Windows Desktop and Azure Virtual Desktop Store app for Windows | Remote Desktop app | Android or Chrome OS | iOS or iPadOS | macOS | Web client |
|--|--|--|--|--|--|--|
| Serial port | X | | | | | |
| USB | X | | | | | |
When you enable USB port redirection, all USB devices attached to USB ports are
The following table shows which other devices can be redirected with each Remote Desktop client:
-| Redirection | Windows Desktop | Microsoft Store client | Android or Chrome OS | iOS or iPadOS | macOS | Web client |
+| Redirection | Windows Desktop and Azure Virtual Desktop Store app | Remote Desktop app | Android or Chrome OS | iOS or iPadOS | macOS | Web client |
|--|--|--|--|--|--|--|
| Cameras | X | | X | X | X | X (preview) |
| Clipboard | X | X | Text | Text, images | X | Text |
virtual-desktop Prerequisites https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/prerequisites.md
Consider the following when managing session hosts:
Your users will need a [Remote Desktop client](/windows-server/remote/remote-desktop-services/clients/remote-desktop-clients) to connect to virtual desktops and remote apps. The following clients support Azure Virtual Desktop:

- [Windows Desktop client](./users/connect-windows.md)
+- [Azure Virtual Desktop Store app for Windows](./users/connect-windows-azure-virtual-desktop-app.md)
- [Web client](./users/connect-web.md)
- [macOS client](./users/connect-macos.md)
- [iOS and iPadOS client](./users/connect-ios-ipados.md)
- [Android and Chrome OS client](./users/connect-android-chrome-os.md)
-- [Microsoft Store client](./users/connect-microsoft-store.md)
+- [Remote Desktop app for Windows](./users/connect-microsoft-store.md)
> [!IMPORTANT]
> Azure Virtual Desktop doesn't support connections from the RemoteApp and Desktop Connections (RADC) client or the Remote Desktop Connection (MSTSC) client.
virtual-desktop Screen Capture Protection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/screen-capture-protection.md
# Screen capture protection in Azure Virtual Desktop
-Screen capture protection, alongside [watermarking](watermarking.md), helps prevent sensitive information from being captured on client endpoints. When you enable screen capture protection, remote content will be automatically blocked or hidden in screenshots and screen shares. Also, the Remote Desktop client will hide content from malicious software that may be capturing the screen.
+Screen capture protection, alongside [watermarking](watermarking.md), helps prevent sensitive information from being captured on client endpoints. When you enable screen capture protection, remote content will be automatically blocked or hidden in screenshots and screen sharing. Also, the Remote Desktop client will hide content from malicious software that may be capturing the screen.
In Windows 11, version 22H2 or later, you can enable screen capture protection on session host VMs as well as remote clients. Protection on session host VMs works just like protection for remote clients.
Screen capture protection is configured on the session host level and enforced o
You must connect to Azure Virtual Desktop with one of the following clients to use screen capture protection:

-- The Windows Desktop client supports screen capture protection for full desktops.
-- The macOS client (version 10.7.0 or later) supports screen capture protection for both RemoteApps and full desktops.
-- The Windows Desktop client (running Windows 11, Version 22H2 or later) supports screen capture protection for RemoteApps.
+- The Remote Desktop client for Windows and the Azure Virtual Desktop Store app support screen capture protection for full desktops. You can also use them with RemoteApps when using the client on Windows 11, version 22H2 or later.
+- The Remote Desktop client for macOS (version 10.7.0 or later) supports screen capture protection for both RemoteApps and full desktops.
## Configure screen capture protection
virtual-desktop Set Up Scaling Script https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/set-up-scaling-script.md
description: How to automatically scale Azure Virtual Desktop session hosts with
Previously updated : 04/29/2022 Last updated : 04/17/2023
Before you start setting up the scaling tool, make sure you have the following t
- An [Azure Virtual Desktop host pool](create-host-pools-azure-marketplace.md).
- Session host pool VMs configured and registered with the Azure Virtual Desktop service.
-- A user with the [*Contributor*](../role-based-access-control/role-assignments-portal.md) role-based access control (RBAC) role assigned on the Azure subscription to create the resources. You'll also need the *Application administrator* and/or *Owner* RBAC role to create a Run As account.
+- A user with the [*Contributor*](../role-based-access-control/role-assignments-portal.md) role-based access control (RBAC) role assigned on the Azure subscription to create the resources. You'll also need the *Application administrator* and/or *Owner* RBAC role to create a managed identity.
- A Log Analytics workspace (optional).

The machine you use to deploy the tool must have:
First, you'll need an Azure Automation account to run the PowerShell runbook. Th
To check if your webhook is where it should be, select the name of your runbook. Next, go to your runbook's Resources section and select **Webhooks**.
-## Create an Azure Automation Run As account
+## Create a managed identity
-Now that you have an Azure Automation account, you'll also need to create an Azure Automation Run As account if you don't have one already. This account will let the tool access your Azure resources.
+Now that you have an Azure Automation account, you'll also need to set up a [managed identity](../automation/automation-security-overview.md#managed-identities) if you haven't already. Managed identities will help your runbook access other Azure AD-related resources as well as authenticate important automation processes.
+
+To set up a managed identity, follow the directions in [Using a system-assigned managed identity for an Azure Automation account](../automation/enable-managed-identity-for-automation.md). Once you're done, return to this article and [Create the Azure Logic App and execution schedule](#create-the-azure-logic-app-and-execution-schedule) to finish the initial setup process.
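If you'd rather script the portal steps, newer versions of the Az.Automation module can enable the system-assigned identity directly. This is a hedged sketch; confirm that the `-AssignSystemIdentity` switch is available in your installed module version:

```powershell
# Enable the system-assigned managed identity on an existing Automation account
Set-AzAutomationAccount `
    -ResourceGroupName "<resource-group-name>" `
    -Name "<automation-account-name>" `
    -AssignSystemIdentity
```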
> [!IMPORTANT]
-> This scaling tool uses a Run As account with Azure Automation. Azure Automation Run As accounts will retire on September 30, 2023. Microsoft won't provide support beyond that date. From now through September 30, 2023, you can continue to use Azure Automation Run As accounts. This scaling tool won't be updated to create the resources using managed identities, however, you can transition to use [managed identities](../automation/automation-security-overview.md#managed-identities) and will need to before then. For more information, see [Migrate from an existing Run As account to a managed identity](../automation/migrate-run-as-accounts-managed-identity.md).
+> As of April 1, 2023, Run As accounts no longer work. We recommend you use [managed identities](../automation/automation-security-overview.md#managed-identities) instead. If you need help switching from your Run As account to a managed identity, see [Migrate from an existing Run As account to a managed identity](../automation/migrate-run-as-accounts-managed-identity.md).
> > Autoscale is an alternative way to scale session host VMs and is a native feature of Azure Virtual Desktop. We recommend you use Autoscale instead. For more information, see [Autoscale scaling plans](autoscale-scenarios.md).
-An [Azure Automation Run As account](../automation/manage-runas-account.md) provides authentication for managing resources in Azure with Azure cmdlets. When you create a Run As account, it creates a new service principal user in Azure Active Directory and assigns the Contributor role to the service principal user at the subscription level. An Azure Run As account is a great way to authenticate securely with certificates and a service principal name without needing to store a username and password in a credential object. To learn more about Run As account authentication, see [Limit Run As account permissions](../automation/manage-runas-account.md#limit-run-as-account-permissions).
-
-Any user who's assigned the *Application administrator* and/or *Owner* RBAC role on the subscription can create a Run As account.
-
-To create a Run As account in your Azure Automation account:
-
-1. In the Azure portal, select **All services**. In the list of resources, enter and select **Automation accounts**.
-
-1. On the **Automation accounts** page, select the name of your Azure Automation account.
-
-1. In the pane on the left side of the window, select **Run As accounts** under the **Account Settings** section.
-
-1. Select **Azure Run As account**. When the **Add Azure Run As account** pane appears, review the overview information, and then select **Create** to start the account creation process.
-
-1. Wait a few minutes for Azure to create the Run As account. You can track the creation progress in the menu under Notifications.
-
-1. When the process finishes, it will create an asset named **AzureRunAsConnection** in the specified Azure Automation account. Select **Azure Run As account**. The connection asset holds the application ID, tenant ID, subscription ID, and certificate thumbprint. You can also find the same information on the **Connections** page. To go to this page, in the pane on the left side of the window, select **Connections** under the **Shared Resources** section and select the connection asset named **AzureRunAsConnection**.
-
## Create the Azure Logic App and execution schedule

Finally, you'll need to create the Azure Logic App and set up an execution schedule for your new scaling tool. First, download and import the [Desktop Virtualization PowerShell module](powershell-module.md) to use in your PowerShell session if you haven't already.
When you report an issue, you'll need to provide the following information to he
- OMSIngestionAPI
- Az.DesktopVirtualization
-- The expiration date for your [Run As account](#create-an-azure-automation-run-as-account). To find this, open your Azure Automation account, then select **Run As accounts** under **Account Settings** in the pane on the left side of the window. The expiration date should be under **Azure Run As account**.
-
virtual-desktop Teams On Avd https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/teams-on-avd.md
Before you can use Microsoft Teams on Azure Virtual Desktop, you'll need to do t
- [Prepare your network](/microsoftteams/prepare-network/) for Microsoft Teams.
- Install the [Remote Desktop client](./users/connect-windows.md) on a Windows 10, Windows 10 IoT Enterprise, Windows 11, or macOS 10.14 or later device that meets the [hardware requirements for Microsoft Teams](/microsoftteams/hardware-requirements-for-the-teams-app#hardware-requirements-for-teams-on-a-windows-pc/).
-- Connect to an Azure Virtual Desktop session host running Windows 10 or 11 Multi-session or Windows 10 or 11 Enterprise.
+- Connect to an Azure Virtual Desktop session host running Windows 10 or 11 multi-session or Windows 10 or 11 Enterprise.
- The latest version of the [Microsoft Visual C++ Redistributable](https://support.microsoft.com/help/2977003/the-latest-supported-visual-c-downloads).

Media optimization for Microsoft Teams is only available for the following two clients:

-- Windows Desktop client for Windows 10 or 11 machines, version 1.2.1026.0 or later.
-- macOS Remote Desktop client, version 10.7.7 or later.
+- [Remote Desktop client for Windows](users/connect-windows.md) or the [Azure Virtual Desktop app](users/connect-windows-azure-virtual-desktop-app.md), version 1.2.1026.0 or later.
+- [Remote Desktop client for macOS](users/connect-macos.md), version 10.7.7 or later.
For more information about which features Teams on Azure Virtual Desktop supports and minimum required client versions, see [Supported features for Teams on Azure Virtual Desktop](teams-supported-features.md).
virtual-desktop Teams Supported Features https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/teams-supported-features.md
This article lists the features of Microsoft Teams that Azure Virtual Desktop cu
The following table lists whether the Windows Desktop client or macOS client supports specific features for Teams on Azure Virtual Desktop.
-|Feature|Windows Desktop client|macOS client|
+|Feature|Windows Desktop client and Azure Virtual Desktop app|macOS client|
||||
|Audio/video call|Yes|Yes|
|Screen share|Yes|Yes|
The following table lists whether the Windows Desktop client or macOS client sup
The following table lists the minimum required versions for each Teams feature. For optimal user experience on Teams for Azure Virtual Desktop, we recommend using the latest supported versions of each client and the WebRTC Redirector Service, which you can find in the following list:

-- [Windows Desktop client](/windows-server/remote/remote-desktop-services/clients/windowsdesktop-whatsnew)
-- [macOS client](/windows-server/remote/remote-desktop-services/clients/mac-whatsnew)
+- [Windows Desktop client](whats-new-client-windows.md)
+- [Azure Virtual Desktop app](whats-new-client-windows-azure-virtual-desktop-app.md)
+- [macOS client](whats-new-client-macos.md)
- [Teams WebRTC Redirector Service](https://aka.ms/msrdcwebrtcsvc/msi)
- [Teams desktop app](/microsoftteams/teams-for-vdi#deploy-the-teams-desktop-app-to-the-vm)
-|Supported features|Windows Desktop client version |macOS client version|WebRTC Redirector Service version|Teams version|
+|Supported features|Windows Desktop client and Azure Virtual Desktop Store app version |macOS client version|WebRTC Redirector Service version|Teams version|
||||||
|Audio/video call|1.2.1755 and later|10.7.7 and later|1.0.2006.11001 and later|Updates within 90 days of the current version|
|Screen share|1.2.1755 and later|10.7.7 and later|1.0.2006.11001 and later|Updates within 90 days of the current version|
virtual-desktop Terminology https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/terminology.md
To learn how to set up your Azure Virtual Desktop host pool, see [Create a host
To learn how to connect to Azure Virtual Desktop, see one of the following articles:

- [Connect with Windows](./users/connect-windows.md)
+- [Connect with the Azure Virtual Desktop Store app for Windows](./users/connect-windows-azure-virtual-desktop-app.md)
- [Connect with a web browser](./users/connect-web.md)
- [Connect with the Android client](./users/connect-android-chrome-os.md)
- [Connect with the macOS client](./users/connect-macos.md)
- [Connect with the iOS client](./users/connect-ios-ipados.md)
+- [Connect with the Remote Desktop app for Windows](./users/connect-microsoft-store.md)
virtual-desktop Troubleshoot Client Microsoft Store https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/troubleshoot-client-microsoft-store.md
Title: Troubleshoot the Remote Desktop client for Windows (Microsoft Store) - Azure Virtual Desktop
-description: Troubleshoot issues you may experience with the Remote Desktop client for Windows (Microsoft Store) when connecting to Azure Virtual Desktop.
+ Title: Troubleshoot the Remote Desktop app for Windows - Azure Virtual Desktop
+description: Troubleshoot issues you may experience with the Remote Desktop app for Windows when connecting to Azure Virtual Desktop.
Last updated 11/01/2022
-# Troubleshoot the Remote Desktop client for Windows (Microsoft Store) when connecting to Azure Virtual Desktop
+# Troubleshoot the Remote Desktop app for Windows when connecting to Azure Virtual Desktop
-This article describes issues you may experience with the [Remote Desktop client for Windows (Microsoft Store)](users/connect-microsoft-store.md?toc=%2Fazure%2Fvirtual-desktop%2Ftoc.json) when connecting to Azure Virtual Desktop and how to fix them.
+This article describes issues you may experience with the [Remote Desktop app for Windows](users/connect-microsoft-store.md?toc=%2Fazure%2Fvirtual-desktop%2Ftoc.json) when connecting to Azure Virtual Desktop and how to fix them.
## General
-In this section you'll find troubleshooting guidance for general issues with the Remote Desktop client.
+In this section you'll find troubleshooting guidance for general issues with the Remote Desktop app.
[!INCLUDE [troubleshoot-remote-desktop-client-doesnt-show-resources](includes/include-troubleshoot-remote-desktop-client-doesnt-show-resources.md)]
virtual-desktop Troubleshoot Client Windows Azure Virtual Desktop App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/troubleshoot-client-windows-azure-virtual-desktop-app.md
+
+ Title: Troubleshoot the Azure Virtual Desktop Store app for Windows - Azure Virtual Desktop
+description: Troubleshoot issues you may experience with the Azure Virtual Desktop Store app for Windows when connecting to Azure Virtual Desktop.
++ Last updated : 11/01/2022+++
+# Troubleshoot the Azure Virtual Desktop Store app for Windows
+
+This article describes issues you may experience with the [Azure Virtual Desktop Store app for Windows](users/connect-windows-azure-virtual-desktop-app.md?toc=%2Fazure%2Fvirtual-desktop%2Ftoc.json) when connecting to Azure Virtual Desktop and how to fix them.
+
+## Azure Virtual Desktop Store app is not updating
+
+The Azure Virtual Desktop Store app is downloaded and automatically updated through the Microsoft Store. It relies on the dependency app *Azure Virtual Desktop (HostApp)*, which is also automatically downloaded and updated. For more information, see [Azure Virtual Desktop (HostApp)](users/client-features-windows-azure-virtual-desktop-app.md#azure-virtual-desktop-hostapp).
+
+You can also manually search for new updates for the app. For more information, see [Update the Azure Virtual Desktop app](users/client-features-windows-azure-virtual-desktop-app.md#update-the-azure-virtual-desktop-app).
+
+## General
+
+In this section you'll find troubleshooting guidance for general issues with the Azure Virtual Desktop app.
++++
+## Issue isn't listed here
+
+If your issue isn't listed here, see [Troubleshooting overview, feedback, and support for Azure Virtual Desktop](troubleshoot-set-up-overview.md) for information about how to open an Azure support case for Azure Virtual Desktop.
virtual-desktop Troubleshoot Client Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/troubleshoot-client-windows.md
In this section you'll find troubleshooting guidance for general issues with the
[!INCLUDE [troubleshoot-aadj-connections-all](includes/include-troubleshoot-azure-ad-joined-connections-all.md)]
-### Retrieve and open client logs
-
-You might need the client logs when investigating a problem.
-
-To retrieve the client logs:
-
-1. Ensure no sessions are active and the client process isn't running in the background by right-clicking on the **Remote Desktop** icon in the system tray and selecting **Disconnect all sessions**.
-1. Open **File Explorer**.
-1. Navigate to the **%temp%\DiagOutputDir\RdClientAutoTrace** folder.
-
-The logs are in the .ETL file format. You can convert these to .CSV or .XML to make them easily readable by using the `tracerpt` command. Find the name of the file you want to convert and make a note of it.
--- To convert the .ETL file to .CSV, open PowerShell and run the following, replacing the value for `$filename` with the name of the file you want to convert (without the extension) and `$outputFolder` with the directory in which to create the .CSV file.-
- ```powershell
- $filename = "<filename>"
- $outputFolder = "C:\Temp"
- cd $env:TEMP\DiagOutputDir\RdClientAutoTrace
- tracerpt "$filename.etl" -o "$outputFolder\$filename.csv" -of csv
- ```
-- To convert the .ETL file to .XML, open Command Prompt or PowerShell and run the following, replacing `<filename>` with the name of the file you want to convert and `$outputFolder` with the directory in which to create the .XML file.
-
- ```powershell
- $filename = "<filename>"
- $outputFolder = "C:\Temp"
- cd $env:TEMP\DiagOutputDir\RdClientAutoTrace
- tracerpt "$filename.etl" -o "$outputFolder\$filename.xml"
- ```
-
-### Client stops responding or can't be opened
-
-If the Remote Desktop client for Windows stops responding or can't be opened, you may need to reset user data. If you can open the client, you can reset user data from the **About** menu, or if you can't open the client, you can reset user data from the command line. The default settings for the client will be restored and you'll be unsubscribed from all workspaces.
-
-To reset user data from the client:
-
-1. Open the **Remote Desktop** app on your device.
-
-1. Select the three dots at the top right-hand corner to show the menu, then select **About**.
-
-1. In the section **Reset user data**, select **Reset**. To confirm you want to reset your user data, select **Continue**.
-
-To reset user data from the command line:
-
-1. Open PowerShell.
-
-1. Change the directory to where the Remote Desktop client is installed, by default this is `C:\Program Files\Remote Desktop`.
-
-1. Run the following command to reset user data. You'll be prompted to confirm you want to reset your user data.
-
- ```powershell
- .\msrdcw.exe /reset
- ```
-
- You can also add the `/f` option, where your user data will be reset without confirmation:
-
- ```powershell
- .\msrdcw.exe /reset /f
- ```
-
-## Authentication and identity
-
-In this section you'll find troubleshooting guidance for authentication and identity issues with the Remote Desktop client.
--
-### Authentication issues while using an N SKU of Windows
-
-Authentication issues can happen because you're using an *N* SKU of Windows on your local device without the *Media Feature Pack*. For more information and to learn how to install the Media Feature Pack, see [Media Feature Pack list for Windows N editions](https://support.microsoft.com/topic/media-feature-pack-list-for-windows-n-editions-c1c6fffa-d052-8338-7a79-a4bb980a700a).
-
-### Authentication issues when TLS 1.2 not enabled
-
-Authentication issues can happen when your local Windows device doesn't have TLS 1.2 enabled. To enable TLS 1.2, you need to set the following registry values:
-- **Key**: `HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.2\Client`
-
- | Value Name | Type | Value Data |
- |--|--|--|
- | DisabledByDefault | DWORD | 0 |
- | Enabled | DWORD | 1 |
-- **Key**: `HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.2\Server`
-
- | Value Name | Type | Value Data |
- |--|--|--|
- | DisabledByDefault | DWORD | 0 |
- | Enabled | DWORD | 1 |
-- **Key**: `HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\.NETFramework\v4.0.30319`
-
- | Value Name | Type | Value Data |
- |--|--|--|
- | SystemDefaultTlsVersions | DWORD | 1 |
- | SchUseStrongCrypto | DWORD | 1 |
-
-You can configure these registry values by opening PowerShell as an administrator and running the following commands:
-
-```powershell
-New-Item 'HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.2\Server' -Force
-New-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.2\Server' -Name 'Enabled' -Value '1' -PropertyType 'DWORD' -Force
-New-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.2\Server' -Name 'DisabledByDefault' -Value '0' -PropertyType 'DWORD' -Force
-
-New-Item 'HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.2\Client' -Force
-New-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.2\Client' -Name 'Enabled' -Value '1' -PropertyType 'DWORD' -Force
-New-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.2\Client' -Name 'DisabledByDefault' -Value '0' -PropertyType 'DWORD' -Force
-
-New-Item 'HKLM:\SOFTWARE\Microsoft\.NETFramework\v4.0.30319' -Force
-New-ItemProperty -Path 'HKLM:\SOFTWARE\Microsoft\.NETFramework\v4.0.30319' -Name 'SystemDefaultTlsVersions' -Value '1' -PropertyType 'DWORD' -Force
-New-ItemProperty -Path 'HKLM:\SOFTWARE\Microsoft\.NETFramework\v4.0.30319' -Name 'SchUseStrongCrypto' -Value '1' -PropertyType 'DWORD' -Force
-```
## Issue isn't listed here
virtual-desktop Client Features Microsoft Store https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/users/client-features-microsoft-store.md
Title: Use features of the Remote Desktop Microsoft Store client - Azure Virtual Desktop
-description: Learn how to use features of the Remote Desktop Microsoft Store client when connecting to Azure Virtual Desktop.
+ Title: Use features of the Remote Desktop app for Windows - Azure Virtual Desktop
+description: Learn how to use features of the Remote Desktop app for Windows when connecting to Azure Virtual Desktop.
Last updated 10/04/2022
-# Use features of the Remote Desktop Microsoft Store client when connecting to Azure Virtual Desktop
+# Use features of the Remote Desktop app for Windows when connecting to Azure Virtual Desktop
-Once you've connected to Azure Virtual Desktop using the Remote Desktop client, it's important to know how to use the features. This article shows you how to use the features available in the Remote Desktop Microsoft Store client. If you want to learn how to connect to Azure Virtual Desktop, see [Connect to Azure Virtual Desktop with the Remote Desktop Microsoft Store client](connect-microsoft-store.md).
+Once you've connected to Azure Virtual Desktop using the Remote Desktop app for Windows, it's important to know how to use the features. This article shows you how to use the features available in the Remote Desktop app for Windows. If you want to learn how to connect to Azure Virtual Desktop, see [Connect to Azure Virtual Desktop with the Remote Desktop app for Windows](connect-microsoft-store.md).
+
+> [!IMPORTANT]
+> We're no longer updating the Remote Desktop app for Windows with new features.
+>
+> For the best Azure Virtual Desktop experience that includes the latest features and updates, we recommend you download the [Azure Virtual Desktop Store app for Windows](connect-windows-azure-virtual-desktop-app.md) instead.
You can find a list of all the Remote Desktop clients at [Remote Desktop clients overview](remote-desktop-clients-overview.md). For more information about the differences between the clients, see [Compare the Remote Desktop clients](../compare-remote-desktop-clients.md).
You can find a list of all the Remote Desktop clients at [Remote Desktop clients
To refresh or unsubscribe from a workspace or see its details:
-1. Open the **Remote Desktop** application on your device.
+1. Open the **Remote Desktop** app on your device.
1. Select the three dots to the right-hand side of the name of a workspace where you'll see a menu with options for **Details**, **Refresh**, and **Unsubscribe**.
To refresh or unsubscribe from a workspace or see its details:
- The date and time of the last refresh. - The status of the last refresh. - **Refresh** makes sure you have the latest desktops and apps and their settings provided by your admin.
- - **Unsubscribe** removes the workspace from the Remote Desktop client.
+ - **Unsubscribe** removes the workspace from the Remote Desktop app.
+
+## Pin desktops and applications to the Start Menu
+
+You can pin your Azure Virtual Desktop desktops and applications to the Start Menu on your local device to make them easier to find and launch:
+
+1. Open the **Remote Desktop** app on your device.
+
+1. Right-click on a desktop or application, select **Pin to Start**, then confirm the prompt.
## User accounts
To refresh or unsubscribe from a workspace or see its details:
You can save a user account and associate it with workspaces to simplify the connection sequence, as the sign-in credentials will be used automatically.
-1. Open the **Remote Desktop** application on your device, then select **Workspaces**.
+1. Open the **Remote Desktop** app on your device, then select **Workspaces**.
1. Select one of the icons to launch a session to Azure Virtual Desktop.
You can save a user account and associate it with workspaces to simplify the con
To save a user account:
-1. Open the **Remote Desktop** application on your device.
+1. Open the **Remote Desktop** app on your device.
1. Select **Settings**.
To save a user account:
To remove an account you no longer want to use:
-1. Open the **Remote Desktop** application on your device.
+1. Open the **Remote Desktop** app on your device.
1. Select **Settings**.
To change the user account a remote session is using, you'll need to remove the
If you want to use different display settings to those specified by your admin, you can configure custom settings. Display settings apply to all workspaces.
-1. Open the **Remote Desktop** application on your device.
+1. Open the **Remote Desktop** app on your device.
1. Select **Settings**.
There are several keyboard shortcuts you can use to help use some of the feature
|--|--|--|
| <kbd>CTRL</kbd>+<kbd>ALT</kbd>+<kbd>DELETE</kbd> | <kbd>CTRL</kbd>+<kbd>ALT</kbd>+<kbd>END</kbd> | Shows the Windows Security dialog box. |
-You can configure the Remote Desktop client whether to send keyboard commands to the remote session:
+You can configure whether the Remote Desktop app sends keyboard commands to the remote session:
-1. Open the **Remote Desktop** application on your device.
+1. Open the **Remote Desktop** app on your device.
1. Select **Settings**.
You can configure the Remote Desktop client whether to send keyboard commands to
By default, remote desktops and apps will use the same keyboard language, also known as *locale*, as your Windows PC. For example, if your Windows PC uses **en-GB** for *English (United Kingdom)*, that will also be used by Windows in the remote session.
-You can manually set which keyboard language to use in the remote session by following the steps at [Managing display language settings in Windows](https://support.microsoft.com/windows/manage-display-language-settings-in-windows-219f28b0-9881-cd4c-75ca-dba919c52321). You might need to close and restart the application you're currently using for the keyboard changes to take effect.
+You can manually set which keyboard language to use in the remote session by following the steps at [Managing display language settings in Windows](https://support.microsoft.com/windows/manage-display-language-settings-in-windows-219f28b0-9881-cd4c-75ca-dba919c52321). You might need to close and restart the app you're currently using for the keyboard changes to take effect.
## Redirections
-The Remote Desktop client can make your local clipboard and microphone available in your remote session where you can copy and paste text, images, and files. The audio from the remote session can also be redirected to your local device. However, redirection can't be configured using the Remote Desktop client for Windows. This behavior is configured by your admin in Azure Virtual Desktop.
+The Remote Desktop app can make your local clipboard and microphone available in your remote session where you can copy and paste text, images, and files. The audio from the remote session can also be redirected to your local device. However, redirection can't be configured using the Remote Desktop app for Windows. This behavior is configured by your admin in Azure Virtual Desktop.
-## Update the client
+## Update the app
-Updates for the Remote Desktop client are delivered through the Microsoft Store. Use the Microsoft Store to check for and download updates.
+Updates for the Remote Desktop app are delivered through the Microsoft Store. Use the Microsoft Store to check for and download updates.
## App display modes
-You can configure the Remote Desktop client to be displayed in light or dark mode, or match the mode of your system:
+You can configure the Remote Desktop app to be displayed in light or dark mode, or match the mode of your system:
-1. Open the **Remote Desktop** application on your device.
+1. Open the **Remote Desktop** app on your device.
1. Select **Settings**.
You can configure the Remote Desktop client to be displayed in light or dark mod
You can pin your remote desktops to the Start menu on your local device to make them easier to launch:
-1. Open the **Remote Desktop** application on your device.
+1. Open the **Remote Desktop** app on your device.
1. Right-click a resource, then select **Pin to Start**.

## Admin link to subscribe to a workspace
-The Remote Desktop client for Windows supports the *ms-rd* Uniform Resource Identifier (URI) scheme. This enables you to use a link that users can help to automatically subscribe to a workspace, rather than them having to manually add the workspace in the Remote Desktop client.
+The Remote Desktop app for Windows supports the *ms-rd* Uniform Resource Identifier (URI) scheme. This enables you to create a link that users can use to automatically subscribe to a workspace, rather than having to manually add the workspace in the Remote Desktop app.
To subscribe to a workspace with a link:

1. Open the following link in a web browser: `ms-rd:subscribe?url=https://rdweb.wvd.microsoft.com`.
-1. If you see the prompt **This site is trying to open Remote Desktop**, select **Open**. The **Remote Desktop** application should open and automatically show a sign-in prompt.
+1. If you see the prompt **This site is trying to open Remote Desktop**, select **Open**. The **Remote Desktop** app should open and automatically show a sign-in prompt.
1. Enter your user account, then select **Sign in**. After a few seconds, your workspaces should show the desktops and applications that have been made available to you by your admin.

## Provide feedback
-If you want to provide feedback to us on the Remote Desktop client for Windows, you can do so by selecting the button that looks like a smiley face emoji in the client app, as shown in the following image. This will open the **Feedback Hub**.
+If you want to provide feedback to us on the Remote Desktop app for Windows, you can do so by selecting the button that looks like a smiley face emoji in the app, as shown in the following image. This will open the **Feedback Hub**.
:::image type="content" source="../media/smiley-face-icon-store.png" alt-text="A screenshot highlighting the feedback button in a red box":::
To best help you, we need you to give us as detailed information as possible. Al
## Next steps
-If you're having trouble with the Remote Desktop client, see [Troubleshoot the Remote Desktop client](../troubleshoot-client-microsoft-store.md).
+If you're having trouble with the Remote Desktop app for Windows, see [Troubleshoot the Remote Desktop app for Windows](../troubleshoot-client-microsoft-store.md).
virtual-desktop Client Features Windows Azure Virtual Desktop App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/users/client-features-windows-azure-virtual-desktop-app.md
+
+ Title: Use features of the Azure Virtual Desktop Store app for Windows (preview) - Azure Virtual Desktop
+description: Learn how to use features of the Azure Virtual Desktop Store app for Windows (preview) when connecting to Azure Virtual Desktop.
++ Last updated : 10/04/2022+++
+# Use features of the Azure Virtual Desktop Store app for Windows (preview)
+
+> [!IMPORTANT]
+> The Azure Virtual Desktop Store app for Windows is currently in PREVIEW.
+> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+
+Once you've connected to Azure Virtual Desktop using the Azure Virtual Desktop Store app for Windows (preview), it's important to know how to use the features. This article shows you how to use the features available in the Azure Virtual Desktop Store app for Windows. If you want to learn how to connect to Azure Virtual Desktop, see [Connect to Azure Virtual Desktop with the Azure Virtual Desktop Store app for Windows](connect-windows-azure-virtual-desktop-app.md).
+
+You can find a list of all the Remote Desktop clients at [Remote Desktop clients overview](remote-desktop-clients-overview.md). For more information about the differences between the clients, see [Compare the Remote Desktop clients](../compare-remote-desktop-clients.md).
+
+> [!NOTE]
+> Your admin can choose to override some of these settings in Azure Virtual Desktop, such as being able to copy and paste between your local device and your remote session. If some of these settings are disabled, please contact your admin.
+
+## Refresh or unsubscribe from a workspace or see its details
+
+To refresh or unsubscribe from a workspace or see its details:
+
+1. Open the **Azure Virtual Desktop** app on your device.
+
+1. Select the three dots to the right-hand side of the name of a workspace where you'll see a menu with options for **Details**, **Refresh**, and **Unsubscribe**.
+
+ - **Details** shows you details about the workspace, such as:
+ - The name of the workspace.
+ - The URL and username used to subscribe.
+ - The number of desktops and apps.
+ - The date and time of the last refresh.
+ - The status of the last refresh.
+ - **Refresh** makes sure you have the latest desktops and apps and their settings provided by your admin.
+ - **Unsubscribe** removes the workspace from the Azure Virtual Desktop app.
+
+## Pin desktops and applications to the Start Menu
+
+You can pin your Azure Virtual Desktop desktops and applications to the Start Menu on your local device to make them easier to find and launch:
+
+1. Open the **Azure Virtual Desktop** app on your device.
+
+1. Right-click on a desktop or application, select **Pin to Start Menu**, then confirm the prompt.
+
+## User accounts
+
+### Manage user accounts
+
+You can save a user account and associate it with workspaces to simplify the connection sequence, as the sign-in credentials will be used automatically. You can also edit a saved account or remove accounts you no longer want to use.
+
+User accounts are stored and managed in *Credential Manager* in Windows as a *generic credential*.
+
+To save a user account:
+
+1. Open the **Azure Virtual Desktop** app on your device.
+
+1. Double-click one of the icons to launch a session to Azure Virtual Desktop. If you're prompted to enter the password for your user account again, enter the password and check the box **Remember me**, then select **OK**.
+
+To edit or remove a saved user account:
+
+1. Open **Credential Manager** from the Control Panel. You can also open Credential Manager by searching the Start menu.
+
+1. Select **Windows Credentials**.
+
+1. Under **Generic Credentials**, find your saved user account and expand its details. It will begin with **RDPClient**.
+
+1. To edit the user account, select **Edit**. You can update the username and password. Once you're done, select **Save**.
+
+1. To remove the user account, select **Remove** and confirm that you want to delete it.
+
+## Display preferences
+
+### Display settings for each remote desktop
+
+If you want to use different display settings to those specified by your admin, you can configure custom settings.
+
+1. Open the **Azure Virtual Desktop** app on your device.
+
+1. Right-click the name of a desktop or app, for example **SessionDesktop**, then select **Settings**.
+
+1. Toggle **Use default settings** to off.
+
+1. On the **Display** tab, you can select from the following options:
+
+ | Display configuration | Description |
+ |--|--|
+ | All displays | Automatically use all displays for the desktop. If you have multiple displays, all of them will be used. <br /><br />For information on limits, see [Compare the features of the Remote Desktop clients](../compare-remote-desktop-clients.md).|
+ | Single display | Only a single display will be used for the remote desktop. |
+ | Select displays | Only select displays will be used for the remote desktop. |
+
+ Each display configuration in the table above has its own settings. Use the following table to understand each setting:
+
+ | Setting | Display configurations | Description |
+ |--|--|--|
+ | Single display when in windowed mode | All displays<br />Select displays | Only use a single display when running in windows mode, rather than full screen. |
+ | Start in full screen | Single display | The desktop will be displayed full screen. |
+ | Fit session to window | All displays<br />Single display<br />Select displays | When you resize the window, the scaling of the desktop will automatically adjust to fit the new window size. The resolution will stay the same. |
+ | Update the resolution on resize | Single display | When you resize the window, the resolution of the desktop will automatically change to match.<br /><br />If this is disabled, a new option for **Resolution** is displayed where you can select from a pre-defined list of resolutions. |
+ | Choose which display to use for this session | Select displays | Select which displays you want to use. All selected displays must be next to each other. |
+ | Maximize to current displays | Select displays | The remote desktop will show full screen on the current display(s) the window is on, even if this isn't the display selected in the settings. If this is off, the remote desktop will show full screen on the same display(s) regardless of the current display the window is on. If your window overlaps multiple displays, those displays will be used when maximizing the remote desktop. |
+
+## Input methods
+
+You can use touch input, or a built-in or external PC keyboard, trackpad and mouse to control desktops or apps.
+
+### Use touch gestures and mouse modes in a remote session
+
+You can use touch gestures to replicate mouse actions in your remote session. If you connect to Windows 10 or later with Azure Virtual Desktop, native Windows touch and multi-touch gestures are supported.
+
+The following table shows which mouse operations map to which gestures:
+
+| Mouse operation | Gesture |
+|:|:-|
+| Left-click | Tap with one finger |
+| Right-click | Tap and hold with one finger |
+| Left-click and drag | Double-tap and hold with one finger, then drag |
+| Right-click | Tap with two fingers |
+| Right-click and drag | Double-tap and hold with two fingers, then drag |
+| Mouse wheel | Tap and hold with two fingers, then drag up or down |
+| Zoom | With two fingers, pinch to zoom out and move fingers apart to zoom in |
+
+### Keyboard
+
+There are several keyboard shortcuts you can use to help use some of the features. Some of these are for controlling how the Azure Virtual Desktop Store app displays the session. These are:
+
+| Key combination | Description |
+|--|--|
+| <kbd>CTRL</kbd>+<kbd>ALT</kbd>+<kbd>HOME</kbd> | Activates the connection bar when in full-screen mode and the connection bar isn't pinned. |
+| <kbd>CTRL</kbd>+<kbd>ALT</kbd>+<kbd>PAUSE</kbd> | Switches the client between full-screen mode and window mode. |
+
+Most common Windows keyboard shortcuts, such as <kbd>CTRL</kbd>+<kbd>C</kbd> for copy and <kbd>CTRL</kbd>+<kbd>Z</kbd> for undo, are the same when using Azure Virtual Desktop. There are some keyboard shortcuts that are different so Windows knows when to use them in Azure Virtual Desktop or on your local device. These are:
+
+| Windows shortcut | Azure Virtual Desktop shortcut | Description |
+|--|--|--|
+| <kbd>CTRL</kbd>+<kbd>ALT</kbd>+<kbd>DELETE</kbd> | <kbd>CTRL</kbd>+<kbd>ALT</kbd>+<kbd>END</kbd> | Shows the Windows Security dialog box. |
+| <kbd>ALT</kbd>+<kbd>TAB</kbd> | <kbd>ALT</kbd>+<kbd>PAGE UP</kbd> | Switches between programs from left to right. |
+| <kbd>ALT</kbd>+<kbd>SHIFT</kbd>+<kbd>TAB</kbd> | <kbd>ALT</kbd>+<kbd>PAGE DOWN</kbd> | Switches between programs from right to left. |
+| <kbd>WINDOWS</kbd> key, or <br /><kbd>CTRL</kbd>+<kbd>ESC</kbd> | <kbd>ALT</kbd>+<kbd>HOME</kbd> | Shows the Start menu. |
+| <kbd>ALT</kbd>+<kbd>SPACE BAR</kbd> | <kbd>ALT</kbd>+<kbd>DELETE</kbd> | Shows the system menu. |
+| <kbd>PRINT SCREEN</kbd> | <kbd>CTRL</kbd>+<kbd>ALT</kbd>+<kbd>+</kbd> (plus sign) | Takes a snapshot of the entire remote session, and places it in the clipboard. |
+| <kbd>ALT</kbd>+<kbd>PRINT SCREEN</kbd> | <kbd>CTRL</kbd>+<kbd>ALT</kbd>+<kbd>-</kbd> (minus sign) | Takes a snapshot of the active window in the remote session, and places it in the clipboard. |
+
+> [!NOTE]
+> Keyboard shortcuts will not work when using remote desktop or RemoteApp sessions that are nested.
+
+### Keyboard language
+
+By default, remote desktops and apps will use the same keyboard language, also known as *locale*, as your Windows PC. For example, if your Windows PC uses **en-GB** for *English (United Kingdom)*, that will also be used by Windows in the remote session.
+
+You can manually set which keyboard language to use in the remote session by following the steps at [Managing display language settings in Windows](https://support.microsoft.com/windows/manage-display-language-settings-in-windows-219f28b0-9881-cd4c-75ca-dba919c52321). You might need to close and restart the application you're currently using for the keyboard changes to take effect.
+
+## Redirections
+
+### Folder redirection
+
+The Azure Virtual Desktop Store app can make local folders available in your remote session. This is known as *folder redirection*. This means you can open files from and save files to your Windows PC with your remote session. Redirected folders appear as a network drive in Windows Explorer.
+
+Folder redirection can't be configured using the Azure Virtual Desktop app. This behavior is configured by your admin in Azure Virtual Desktop. By default, all local drives are redirected to a remote session.
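+
+For example, an admin could allow all local drives to be redirected by setting the host pool's custom RDP properties. A minimal sketch using the Az.DesktopVirtualization PowerShell module (the resource group and host pool names are illustrative):
+
+```powershell
+# Admin-side sketch: redirect all local drives for sessions in a host pool.
+# 'drivestoredirect:s:*' is the RDP property that controls drive redirection.
+# Note: -CustomRdpProperty replaces the host pool's existing property string.
+Update-AzWvdHostPool -ResourceGroupName 'myResourceGroup' `
+    -Name 'myHostPool' `
+    -CustomRdpProperty 'drivestoredirect:s:*'
+```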
+
+### Redirect devices, audio, and clipboard
+
+The Azure Virtual Desktop Store app can make your local clipboard and local devices available in your remote session, where you can copy and paste text, images, and files. The audio from the remote session can also be redirected to your local device. However, redirection can't be configured using the Azure Virtual Desktop app; this behavior is configured by your admin in Azure Virtual Desktop (a configuration sketch follows the list below). Here's a list of some of the devices and resources that can be redirected. For the full list, see [Compare the features of the Remote Desktop clients when connecting to Azure Virtual Desktop](../compare-remote-desktop-clients.md?toc=%2Fazure%2Fvirtual-desktop%2Fusers%2Ftoc.json#redirections-comparison).
+
+- Printers
+- USB devices
+- Audio output
+- Smart cards
+- Clipboard
+- Microphones
+- Cameras
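+
+As with folder redirection, these redirections are typically controlled through custom RDP properties on the host pool. A short admin-side sketch in the same style as the earlier example (the property values shown are illustrative):
+
+```powershell
+# Admin-side sketch: enable clipboard, microphone, and camera redirection.
+Update-AzWvdHostPool -ResourceGroupName 'myResourceGroup' `
+    -Name 'myHostPool' `
+    -CustomRdpProperty 'redirectclipboard:i:1;audiocapturemode:i:1;camerastoredirect:s:*'
+```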
+
+## Update the Azure Virtual Desktop app
+
+Updates for the Azure Virtual Desktop Store app are available in the Microsoft Store. You can also check for updates through the app directly.
+
+To check for updates through the app:
+
+1. Open the **Azure Virtual Desktop** app on your device.
+
+1. Select the three dots in the top right-hand corner of the app to show a menu, then select **About**.
+
+1. In the **App Update** section, select each link to open the Microsoft Store, where you'll be prompted to install any available updates.
+
+1. If there's an update available to either app, select **Update**. The apps can be updated in any order.
+
+While installing an update, the Azure Virtual Desktop Store app may close. Once the update is complete, you can reopen the app and continue where you left off. For more information about how to get updates in the Microsoft Store, see [Get updates for apps and games in Microsoft Store](https://support.microsoft.com/account-billing/get-updates-for-apps-and-games-in-microsoft-store-a1fe19c0-532d-ec47-7035-d1c5a1dd464f) and [Turn on automatic app updates](https://support.microsoft.com/windows/turn-on-automatic-app-updates-70634d32-4657-dc76-632b-66048978e51b).
+
+## App display modes
+
+You can configure the Azure Virtual Desktop Store app to be displayed in light or dark mode, or match the mode of your system:
+
+1. Open the **Azure Virtual Desktop** app on your device.
+
+1. Select **Settings**.
+
+1. Under **App mode**, select **Light**, **Dark**, or **Use System Mode**. The change is applied instantly.
+
+## Views
+
+You can view your remote desktops and apps as either a tile view (default) or list view:
+
+1. Open the **Azure Virtual Desktop** app on your device.
+
+1. If you want to switch to List view, select **Tile**, then select **List view**.
+
+1. If you want to switch to Tile view, select **List**, then select **Tile view**.
+
+## Enable Windows Insider releases
+
+If you want to help us test new builds before they're released, you should download our Insider releases. Organizations can use the Insider releases to validate new versions for their users before they're generally available.
+
+> [!NOTE]
+> Insider releases shouldn't be used in production.
+
+Insider releases are made available in the Azure Virtual Desktop Store app once you've configured it to use them. To do so:
+
+1. Open the **Azure Virtual Desktop** app on your device.
+
+1. Select the three dots in the top right-hand corner of the app to show a menu.
+
+1. Select **Join the insider group**, then wait while the app is configured.
+
+1. Restart your local device.
+
+1. Open the **Azure Virtual Desktop** app. The title in the top left-hand corner should be **Azure Virtual Desktop (Insider)**:
+
+ :::image type="content" source="../media/client-features-windows-azure-virtual-desktop-app/azure-virtual-desktop-app-windows-insider.png" alt-text="A screenshot of the Azure Virtual Desktop Store app with Insider features enabled. The title is highlighted in a red box.":::
+
+If you've already configured the Azure Virtual Desktop Store app to use Insider releases, you can make sure you have the latest Insider release by checking for updates in the normal way. For more information, see [Update the Azure Virtual Desktop app](#update-the-azure-virtual-desktop-app).
+
+## Admin management
+
+### Enterprise deployment
+
+To deploy the Azure Virtual Desktop Store app in an enterprise, you can use Microsoft Intune or Configuration Manager. For more information, see:
+
+- [Add Microsoft Store apps to Microsoft Intune](/mem/intune/apps/store-apps-microsoft).
+
+- [Manage apps from the Microsoft Store for Business and Education with Configuration Manager](/mem/configmgr/apps/deploy-use/manage-apps-from-the-windows-store-for-business)
+
+### URI to subscribe to a workspace
+
+The Azure Virtual Desktop Store app supports Uniform Resource Identifier (URI) schemes to invoke the Remote Desktop client with specific commands, parameters, and values for use with Azure Virtual Desktop. For example, you can subscribe to a workspace or connect to a particular desktop or Remote App.
+
+For more information, see [Uniform Resource Identifier schemes with the Remote Desktop client for Azure Virtual Desktop](../uri-scheme.md).
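+
+As a hypothetical illustration of the general shape of these URIs (the supported actions and parameters are defined in the linked article), a subscribe request might look like this:
+
+```
+ms-avd:subscribe?url=https://rdweb.wvd.microsoft.com
+```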
+
+## Azure Virtual Desktop (HostApp)
+
+The Azure Virtual Desktop (HostApp) is a platform component containing a set of predefined user interfaces and APIs that Azure Virtual Desktop developers can use to deploy and manage Remote Desktop connections to their Azure Virtual Desktop resources. If this application is required on a device for another application to work correctly, it will automatically be downloaded by the Azure Virtual Desktop app. There should be no need for user interaction.
+
+The purpose of the Azure Virtual Desktop (HostApp) is to provide core functionality to the Azure Virtual Desktop Store app in the Microsoft Store. This is known as the *Hosted App Model*. For more information, see [Hosted App Model](https://blogs.windows.com/windowsdeveloper/2020/03/19/hosted-app-model/).
+
+## Provide feedback
+
+If you want to provide feedback on the Azure Virtual Desktop app, you can do so by selecting the button that looks like a smiley face emoji in the app. This opens the **Feedback Hub**.
+
+To best help you, give us as much detail as possible. Along with a detailed description, you can include screenshots, attach a file, or make a recording. For more tips about how to provide helpful feedback, see [Feedback](/windows-insider/feedback#add-new-feedback).
+
+## Next steps
+
+If you're having trouble with the Azure Virtual Desktop app, see [Troubleshoot the Azure Virtual Desktop app](../troubleshoot-client-windows-azure-virtual-desktop-app.md).
virtual-desktop Client Features Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/users/client-features-windows.md
Most common Windows keyboard shortcuts, such as <kbd>CTRL</kbd>+<kbd>C</kbd> for
| <kbd>ALT</kbd>+<kbd>PRINT SCREEN</kbd> | <kbd>CTRL</kbd>+<kbd>ALT</kbd>+<kbd>-</kbd> (minus sign) | Takes a snapshot of the active window in the remote session, and places it in the clipboard. |
> [!NOTE]
-> Keyboard shortcuts will not work when using Remote Desktop or RemoteApp sessions that are nested.
+> Keyboard shortcuts will not work when using remote desktop or RemoteApp sessions that are nested.
### Keyboard language
virtual-desktop Connect Microsoft Store https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/users/connect-microsoft-store.md
Title: Connect to Azure Virtual Desktop with the Remote Desktop Microsoft Store client - Azure Virtual Desktop
-description: Learn how to connect to Azure Virtual Desktop using the Remote Desktop Microsoft Store client.
+ Title: Connect to Azure Virtual Desktop with the Remote Desktop app for Windows - Azure Virtual Desktop
+description: Learn how to connect to Azure Virtual Desktop using the Remote Desktop app for Windows.
Last updated 01/04/2023
-# Connect to Azure Virtual Desktop with the Remote Desktop Microsoft Store client
+# Connect to Azure Virtual Desktop with the Remote Desktop app for Windows
-The Microsoft Remote Desktop client is used to connect to Azure Virtual Desktop to access your desktops and applications. This article shows you how to connect to Azure Virtual Desktop with the Remote Desktop Microsoft Store client.
+The Microsoft Remote Desktop app is used to connect to Azure Virtual Desktop to access your desktops and applications. This article shows you how to connect to Azure Virtual Desktop with the Remote Desktop app for Windows.
> [!IMPORTANT]
-> We're no longer updating the Microsoft Store client with new features.
+> We're no longer updating the Remote Desktop app for Windows with new features.
>
-> For the best Azure Virtual Desktop experience that includes the latest features and fixes, we recommend you download the [Remote Desktop client for Windows](connect-windows.md) instead.
+> For the best Azure Virtual Desktop experience that includes the latest features and updates, we recommend you download the [Azure Virtual Desktop Store app for Windows](connect-windows-azure-virtual-desktop-app.md) instead.
You can find a list of all the Remote Desktop clients at [Remote Desktop clients overview](remote-desktop-clients-overview.md).
-If you want to connect to Remote Desktop Services or a remote PC instead of Azure Virtual Desktop, see [Connect to Remote Desktop Services with the Remote Desktop Microsoft Store client](/windows-server/remote/remote-desktop-services/clients/windows).
+If you want to connect to Remote Desktop Services or a remote PC instead of Azure Virtual Desktop, see [Connect to Remote Desktop Services with the Remote Desktop app for Windows](/windows-server/remote/remote-desktop-services/clients/windows).
## Prerequisites
Before you can access your resources, you'll need to meet the prerequisites:
- A device running Windows 11 or Windows 10.
-- Download and install the Remote Desktop client from the [Microsoft Store](https://go.microsoft.com/fwlink/?LinkID=616709).
+- Download and install the Remote Desktop app from the [Microsoft Store](https://go.microsoft.com/fwlink/?LinkID=616709).
## Subscribe to a workspace
-A workspace combines all the desktops and applications that have been made available to you by your admin. To be able to see these in the Remote Desktop client, you need to subscribe to the workspace by following these steps:
+A workspace combines all the desktops and applications that have been made available to you by your admin. To be able to see these in the Remote Desktop app, you need to subscribe to the workspace by following these steps:
1. Open the **Remote Desktop** app on your device.
Once you've subscribed to a workspace, its content will update automatically reg
## Next steps
-To learn more about the features of the Remote Desktop client for Windows from the Microsoft Store, check out [Use features of the Remote Desktop client for Windows (Microsoft Store) when connecting to Azure Virtual Desktop](client-features-microsoft-store.md).
+To learn more about the features of the Remote Desktop app for Windows, check out [Use features of the Remote Desktop app for Windows when connecting to Azure Virtual Desktop](client-features-microsoft-store.md).
virtual-desktop Connect Windows Azure Virtual Desktop App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/users/connect-windows-azure-virtual-desktop-app.md
+
+ Title: Connect to Azure Virtual Desktop with the Azure Virtual Desktop Store app for Windows (preview) - Azure Virtual Desktop
+description: Learn how to connect to Azure Virtual Desktop using the Azure Virtual Desktop Store app for Windows from the Microsoft Store.
++ Last updated : 03/09/2023+++
+# Connect to Azure Virtual Desktop with the Azure Virtual Desktop Store app for Windows (preview)
+
+> [!IMPORTANT]
+> The Azure Virtual Desktop Store app for Windows is currently in PREVIEW.
+> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+
+The Azure Virtual Desktop Store app is used to connect to Azure Virtual Desktop to access your desktops and applications. This article shows you how to connect to Azure Virtual Desktop with the Azure Virtual Desktop Store app (preview) for Windows from the Microsoft Store.
+
+You can find a list of all the Remote Desktop clients you can use to connect to Azure Virtual Desktop at [Remote Desktop clients overview](remote-desktop-clients-overview.md).
+
+## Prerequisites
+
+Before you can access your resources, you'll need to meet the prerequisites:
+
+- Internet access.
+
+- A device running one of the following supported versions of Windows:
+ - Windows 11
+ - Windows 10
+
+## Download and install the Azure Virtual Desktop app
+
+The Azure Virtual Desktop Store app is available from the Microsoft Store. To download and install it, follow these steps:
+
+1. Go to the [Azure Virtual Desktop Store app in the Microsoft Store](https://aka.ms/AVDStoreClient).
+
+1. Select **Install** to start downloading the app and installing it.
+
+1. Once the app has finished downloading and installing, select **Open**. The first time the app runs, it will install the *Azure Virtual Desktop (HostApp)* dependency automatically.
+
+## Subscribe to a workspace
+
+A workspace combines all the desktops and applications that have been made available to you by your admin. To be able to see these in the Azure Virtual Desktop app, you need to subscribe to the workspace by following these steps:
+
+1. Open the **Azure Virtual Desktop** app on your device, if you have not already done so.
+
+2. The first time you subscribe to a workspace, from the **Let's get started** screen, select **Subscribe** or **Subscribe with URL**. Use the tabs below for your scenario.
+
+# [Subscribe](#tab/subscribe)
+
+3. If you selected **Subscribe**, sign in with your user account when prompted, for example `user@contoso.com`. After a few seconds, your workspaces should show the desktops and applications that have been made available to you by your admin.
+
+ > [!TIP]
+ > If you see the message **No workspace is associated with this email address**, your admin might not have set up email discovery. Try the steps in the **Subscribe with URL** tab instead.
+
+# [Subscribe with URL](#tab/subscribe-with-url)
+
+3. If you selected **Subscribe with URL**, in the **Email or Workspace URL** box, enter the relevant URL from the following table. After a few seconds, the message **We found Workspaces at the following URLs** should be displayed.
+
+ | Azure environment | Workspace URL |
+ |--|--|
+ | Azure cloud *(most common)* | `https://rdweb.wvd.microsoft.com` |
+ | Azure US Gov | `https://rdweb.wvd.azure.us/api/arm/feeddiscovery` |
+ | Azure China 21Vianet | `https://rdweb.wvd.azure.cn/api/arm/feeddiscovery` |
+
+4. Select **Next**.
+
+5. Sign in with your user account when prompted. After a few seconds, the workspace should show the desktops and applications that have been made available to you by your admin.
+
+Once you've subscribed to a workspace, its content will update automatically on a regular basis and each time you start the app. Resources may be added, changed, or removed based on changes made by your admin.
+++
+## Connect to your desktops and applications and pin to the Start Menu
+
+Once you've subscribed to a workspace, here's how to connect:
+
+1. Open the **Azure Virtual Desktop** app on your device.
+
+1. Double-click one of the icons to launch a session to Azure Virtual Desktop. You may be prompted to enter the password for your user account again, depending on how your admin has configured Azure Virtual Desktop.
+
+1. To pin your desktops and applications to the Start Menu, right-click one of the icons and select **Pin to Start Menu**, then confirm the prompt.
+
+## Windows Insider
+
+If you want to help us test new builds before they're released, you should download our Insider releases. Organizations can use the Insider releases to validate new versions for their users before they're generally available. For more information, see [Enable Windows Insider releases](client-features-windows.md#enable-windows-insider-releases).
+
+## Next steps
+
+To learn more about the features of the **Azure Virtual Desktop** app, check out [Use features of the Azure Virtual Desktop Store app when connecting to Azure Virtual Desktop](client-features-windows.md).
virtual-desktop Connect Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/users/connect-windows.md
Once you've subscribed to a workspace, its content will update automatically reg
## Connect to your desktops and applications
-1. Open the **Remote Desktop** app on your device.
+1. Open the **Remote Desktop** client on your device.
1. Double-click one of the icons to launch a session to Azure Virtual Desktop. You may be prompted to enter the password for your user account again, depending on how your admin has configured Azure Virtual Desktop.
virtual-desktop Remote Desktop Clients Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/users/remote-desktop-clients-overview.md
Title: Remote Desktop clients for Azure Virtual Desktop - Azure Virtual Desktop
description: Overview of the Remote Desktop clients you can use to connect to Azure Virtual Desktop. Previously updated : 11/22/2022 Last updated : 04/13/2023 # Remote Desktop clients for Azure Virtual Desktop
-With Microsoft Remote Desktop clients, you can connect to Azure Virtual Desktop and use and control desktops and apps that your admin has made available to you. There are clients available for many different types of devices on different platforms and form factors, such as desktops and laptops, tablets, smartphones, and through a web browser. Using your web browser on desktops and laptops, you can connect without having to download and install any software.
+With the Microsoft Remote Desktop clients, you can connect to Azure Virtual Desktop and use and control desktops and apps that your admin has made available to you. There are clients available for many different types of devices on different platforms and form factors, such as desktops and laptops, tablets, smartphones, and through a web browser. Using your web browser on desktops and laptops, you can connect without having to download and install any software.
There are many features you can use to enhance your remote experience, such as:
Here's a list of the Remote Desktop client apps and our documentation for connec
| Remote Desktop client | Documentation and download links | Version information | |--|--|--| | Windows Desktop | [Connect to Azure Virtual Desktop with the Remote Desktop client for Windows](connect-windows.md) | [What's new](../whats-new-client-windows.md) |
+| Azure Virtual Desktop Store app for Windows | [Connect to Azure Virtual Desktop with the Azure Virtual Desktop Store app for Windows](connect-windows-azure-virtual-desktop-app.md) | [What's new](../whats-new-client-windows-azure-virtual-desktop-app.md) |
| Web | [Connect to Azure Virtual Desktop with the Remote Desktop client for Web](connect-web.md) | [What's new](/windows-server/remote/remote-desktop-services/clients/web-client-whatsnew?context=/azure/virtual-desktop/context/context) | | macOS | [Connect to Azure Virtual Desktop with the Remote Desktop client for macOS](connect-macos.md) | [What's new](/windows-server/remote/remote-desktop-services/clients/mac-whatsnew?context=/azure/virtual-desktop/context/context) | | iOS/iPadOS | [Connect to Azure Virtual Desktop with the Remote Desktop client for iOS and iPadOS](connect-ios-ipados.md) | [What's new](/windows-server/remote/remote-desktop-services/clients/ios-whatsnew?context=/azure/virtual-desktop/context/context) | | Android/Chrome OS | [Connect to Azure Virtual Desktop with the Remote Desktop client for Android and Chrome OS](connect-android-chrome-os.md) | [What's new](/windows-server/remote/remote-desktop-services/clients/android-whatsnew?context=/azure/virtual-desktop/context/context) |
-| Microsoft Store | [Connect to Azure Virtual Desktop with the Remote Desktop client for Windows (Microsoft Store)](connect-microsoft-store.md) | [What's new](/windows-server/remote/remote-desktop-services/clients/windows-whatsnew?context=/azure/virtual-desktop/context/context) |
+| Remote Desktop app for Windows | [Connect to Azure Virtual Desktop with the Remote Desktop app for Windows](connect-microsoft-store.md) | [What's new](../whats-new-client-microsoft-store.md) |
virtual-desktop Whats New Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/whats-new-agent.md
Title: What's new in the Azure Virtual Desktop Agent? - Azure description: New features and product updates for the Azure Virtual Desktop Agent. -+ Last updated 04/11/2023
virtual-desktop Whats New Client Android Chrome Os https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/whats-new-client-android-chrome-os.md
Title: What's new in the Remote Desktop client for Android and Chrome OS - Azure Virtual Desktop description: Learn about recent changes to the Remote Desktop client for Android and Chrome OS-+ Last updated 01/04/2023
virtual-desktop Whats New Client Ios Ipados https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/whats-new-client-ios-ipados.md
Title: What's new in the Remote Desktop client for iOS and iPadOS - Azure Virtual Desktop description: Learn about recent changes to the Remote Desktop client for iOS and iPadOS-+ Last updated 01/04/2023
virtual-desktop Whats New Client Macos https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/whats-new-client-macos.md
Title: What's new in the Remote Desktop client for macOS - Azure Virtual Desktop description: Learn about recent changes to the Remote Desktop client for macOS-+ Last updated 01/04/2023
virtual-desktop Whats New Client Microsoft Store https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/whats-new-client-microsoft-store.md
Title: What's new in the Remote Desktop Microsoft Store client - Azure Virtual Desktop
-description: Learn about recent changes to the Remote Desktop Microsoft Store client
-
+ Title: What's new in the Remote Desktop app for Windows - Azure Virtual Desktop
+description: Learn about recent changes to the Remote Desktop app for Windows.
+ Previously updated : 01/04/2023 Last updated : 04/14/2023
-# What's new in the Remote Desktop Microsoft Store client
+# What's new in the Remote Desktop app for Windows
-In this article you'll learn about the latest updates for the Remote Desktop Microsoft Store client. To learn more about using the Remote Desktop Microsoft Store client with Azure Virtual Desktop, see [Connect to Azure Virtual Desktop with the Remote Desktop Microsoft Store client](users/connect-microsoft-store.md) and [Use features of the Remote Desktop Microsoft Store client when connecting to Azure Virtual Desktop](users/client-features-microsoft-store.md).
+In this article you'll learn about the latest updates for the Remote Desktop app for Windows. To learn more about using the Remote Desktop app for Windows with Azure Virtual Desktop, see [Connect to Azure Virtual Desktop with the Remote Desktop app for Windows](users/connect-microsoft-store.md) and [Use features of the Remote Desktop app for Windows when connecting to Azure Virtual Desktop](users/client-features-microsoft-store.md).
> [!IMPORTANT]
-> We're no longer updating the Microsoft Store client with new features.
+> We're no longer updating the Remote Desktop app for Windows with new features.
>
-> For the best Azure Virtual Desktop experience that includes the latest features and fixes, we recommend you download the [Remote Desktop client for Windows](users/connect-windows.md) instead.
+> For the best Azure Virtual Desktop experience that includes the latest features and updates, we recommend you download the [Azure Virtual Desktop Store app for Windows](users/connect-windows-azure-virtual-desktop-app.md) instead.
[!INCLUDE [include-whats-new-client-microsoft-store](includes/include-whats-new-client-microsoft-store.md)]
virtual-desktop Whats New Client Web https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/whats-new-client-web.md
Title: What's new in the Remote Desktop Web client for Azure Virtual Desktop description: Learn about recent changes to the Remote Desktop Web client for Azure Virtual Desktop-+ Last updated 01/25/2023
virtual-desktop Whats New Client Windows Azure Virtual Desktop App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/whats-new-client-windows-azure-virtual-desktop-app.md
+
+ Title: What's new in the Azure Virtual Desktop Store app for Windows (preview) - Azure Virtual Desktop
+description: Learn about recent changes to the Azure Virtual Desktop Store app for Windows.
+++ Last updated : 04/05/2023++
+# What's new in the Azure Virtual Desktop Store app for Windows (preview)
+
+> [!IMPORTANT]
+> The Azure Virtual Desktop Store app for Windows is currently in PREVIEW.
+> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+
+In this article, you'll learn about the latest updates for the Azure Virtual Desktop Store app for Windows. To learn more about using the Azure Virtual Desktop Store app for Windows with Azure Virtual Desktop, see [Connect to Azure Virtual Desktop with the Azure Virtual Desktop Store app for Windows](users/connect-windows-azure-virtual-desktop-app.md) and [Use features of the Azure Virtual Desktop Store app for Windows when connecting to Azure Virtual Desktop](users/client-features-windows.md).
+
+## March 7, 2023*
+
+In this release, we've made the following changes:
+
+- General improvements to the Narrator experience.
+- Fixed a bug that caused the client to stop responding when disconnecting from the session early.
+- Fixed a bug that caused duplicate error messages to appear while connected to an Azure Active Directory-joined host using the new Remote Desktop Services (RDS) Azure Active Directory (Azure AD) Auth protocol.
+- Fixed a bug that caused scale resolution options to not display in display settings for session desktops.
+- Added support for Universal Plug and Play (UPnP) for improved User Datagram Protocol (UDP) connectivity.
+- Improved client logging, diagnostics, and error classification to help admins troubleshoot connection and feed issues.
+- Updates to MMR for Azure Virtual Desktop, including the following:
+ - Fixed an issue that caused multimedia redirection (MMR) for Azure Virtual Desktop to not load for the ARM64 version of the client.
+- Updates to Teams for Azure Virtual Desktop, including the following:
+ - Fixed an issue that caused the application window sharing to freeze or show a black screen in scenarios with Topmost window occlusions.
+ - Fixed an issue that caused Teams media optimizations for Azure Virtual Desktop to not load for the ARM64 version of the client.
virtual-desktop Whats New Client Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/whats-new-client-windows.md
Title: What's new in the Remote Desktop client for Windows - Azure Virtual Desktop description: Learn about recent changes to the Remote Desktop client for Windows-+ Last updated 04/11/2023
virtual-desktop Whats New Documentation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/whats-new-documentation.md
+
+ Title: What's new in documentation - Azure Virtual Desktop
+description: Learn about new and updated articles to the Azure Virtual Desktop documentation
+++ Last updated : 04/18/2023++
+# What's new in documentation for Azure Virtual Desktop
+
+We update documentation for Azure Virtual Desktop on a regular basis. In this article, we highlight new articles for new features and important updates to existing articles.
+
+## March 2023
+
+In March 2023, we published the following changes:
+
+- A new article for the public preview of [Uniform Resource Identifier (URI) schemes with the Remote Desktop client](uri-scheme.md).
+- An update showing you how to [give session hosts in a personal host pool a friendly name](configure-host-pool-personal-desktop-assignment-type.md#give-session-hosts-in-a-personal-host-pool-a-friendly-name).
+
+## February 2023
+
+In February 2023, we published the following changes:
+
+- Updated [RDP Shortpath](rdp-shortpath.md?tabs=public-networks) and [Configure RDP Shortpath](configure-rdp-shortpath.md?tabs=public-networks) articles with the public preview information for an indirect UDP connection using the Traversal Using Relay NAT (TURN) protocol with a relay between a client and session host.
+- Reorganized the table of contents.
+- Published the following articles for deploying Azure Virtual Desktop:
+ - [Tutorial to create and connect to a Windows 11 desktop with Azure Virtual Desktop](tutorial-create-connect-personal-desktop.md)
+ - [Create a host pool](create-host-pool.md)
+ - [Create an application group, a workspace, and assign users](create-application-group-workspace.md)
+ - [Add session hosts to a host pool](add-session-hosts-host-pool.md)
+
+## January 2023
+
+In January 2023, we published the following change:
+
+- A new article for the public preview of [Watermarking](watermarking.md).
+
+## Next steps
+
+- Learn [what's new for Azure Virtual Desktop](whats-new.md).
virtual-desktop Whats New Insights https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/whats-new-insights.md
Title: What's new in Azure Virtual Desktop Insights? description: New features and product updates in Azure Virtual Desktop Insights. -+ Last updated 03/16/2023
virtual-desktop Whats New Msixmgr https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/whats-new-msixmgr.md
+
+ Title: What's new in the MSIXMGR tool - Azure Virtual Desktop
+description: Learn about what's new in the release notes for the MSIXMGR tool.
+++ Last updated : 04/18/2023++
+# What's new in the MSIXMGR tool
+
+This article provides the release notes for the latest updates to the MSIXMGR tool, which you use for expanding MSIX-packaged applications into MSIX images for use with Azure Virtual Desktop.
+
+## Version 1.1.122.0
+
+In this release, we've made the following changes:
+
+- MSIXMGR now supports the expansion of MSIX bundles.
+- Support for creating a VHD image without the size parameter.
+- Improved support for creating MSIX images as *CIM* files without the need to provide the VHD size parameter. A sketch follows this list.
+- Support for apps with long package paths (over 128 characters).
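+
+For example, with this release you can expand a package into a *CIM* image without passing the VHD size parameter. A sketch based on the documented unpack syntax (the package path, destination, and root directory are illustrative):
+
+```powershell
+# Sketch: expand an MSIX package into a CIM image; the size parameter can now be omitted.
+msixmgr.exe -Unpack -packagePath "C:\msix\myapp.msix" -destination "C:\images\myapp.cim" -applyACLs -create -fileType cim -rootDirectory apps
+```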
+
+## Next steps
+
+- Learn how to [use the MSIXMGR tool](app-attach-msixmgr.md).
virtual-desktop Whats New Multimedia Redirection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/whats-new-multimedia-redirection.md
Title: What's new in multimedia redirection MMR? - Azure Virtual Desktop description: New features and product updates for multimedia redirection for Azure Virtual Desktop. -+ Last updated 02/07/2023
virtual-desktop Whats New Webrtc https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/whats-new-webrtc.md
Title: What's new in the Remote Desktop WebRTC Redirector Service? description: New features and product updates the Remote Desktop WebRTC Redirector Service for Azure Virtual Desktop. -+ Last updated 03/01/2023
virtual-desktop Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/whats-new.md
Title: What's new in Azure Virtual Desktop? - Azure description: New features and product updates for Azure Virtual Desktop. -+ Last updated 04/11/2023
virtual-machine-scale-sets Disk Encryption Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/disk-encryption-cli.md
az group create --name myResourceGroup --location eastus
Now create a Virtual Machine Scale Set with [az vmss create](/cli/azure/vmss). The following example creates a scale set named *myScaleSet* that is set to automatically update as changes are applied, and generates SSH keys if they don't exist in *~/.ssh/id_rsa*. A 32-Gb data disk is attached to each VM instance, and the Azure [Custom Script Extension](../virtual-machines/extensions/custom-script-linux.md) is used to prepare the data disks with [az vmss extension set](/cli/azure/vmss/extension):
+> [!IMPORTANT]
+> Make sure to select an operating system that's supported by Azure Disk Encryption (ADE). For more information, see
+> [Supported operating systems for ADE](/azure/virtual-machines/linux/disk-encryption-overview#supported-operating-systems).
+ ```azurecli-interactive # Create a scale set with attached data disk az vmss create \ --resource-group myResourceGroup \ --name myScaleSet \ --orchestration-mode Flexible \
- --image UbuntuLTS \
+ --image <SKU Linux Image> \
--upgrade-policy-mode automatic \ --admin-username azureuser \ --generate-ssh-keys \
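
Once the scale set is deployed, encryption is enabled before you check its status. A minimal sketch, assuming a key vault named *myKeyVault* exists in the same resource group and is enabled for disk encryption:

```azurecli-interactive
# Enable Azure Disk Encryption on the scale set's VM instances
az vmss encryption enable \
    --resource-group myResourceGroup \
    --name myScaleSet \
    --disk-encryption-keyvault myKeyVault
```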
az vmss encryption show --resource-group myResourceGroup --name myScaleSet
When VM instances are encrypted, the status code reports *EncryptionState/encrypted*, as shown in the following example output:
-```console
+```output
[ { "disks": [
virtual-machines Oms Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/extensions/oms-windows.md
Title: Log Analytics virtual machine extension for Windows
-description: Deploy the Log Analytics agent on Windows virtual machine using a virtual machine extension.
+ Title: Log Analytics agent VM extension for Windows
+description: Learn how to deploy the Log Analytics agent on a Windows virtual machine by using a virtual machine extension.
Previously updated : 11/02/2021 Last updated : 04/14/2023
-# Log Analytics virtual machine extension for Windows
-Azure Monitor Logs provides monitoring capabilities across cloud and on-premises assets. The Log Analytics agent virtual machine extension for Windows is published and supported by Microsoft. The extension installs the Log Analytics agent on Azure virtual machines, and enrolls virtual machines into an existing Log Analytics workspace. This document details the supported platforms, configurations, and deployment options for the Log Analytics virtual machine extension for Windows.
+# Log Analytics agent virtual machine extension for Windows
-> [!NOTE]
-> Azure Arc-enabled servers enables you to deploy, remove, and update the Log Analytics agent VM extension to non-Azure Windows and Linux machines, simplifying the management of your hybrid machine through their lifecycle. For more information, see [VM extension management with Azure Arc-enabled servers](../../azure-arc/servers/manage-vm-extensions.md).
+Azure Monitor Logs provides monitoring capabilities across cloud and on-premises assets. Microsoft publishes and supports the Log Analytics agent virtual machine (VM) extension for Windows. The extension installs the Log Analytics agent on Azure VMs, and enrolls VMs into an existing Log Analytics workspace. This article describes the supported platforms, configurations, and deployment options for the Log Analytics agent VM extension for Windows.
+
+> [!IMPORTANT]
+> The Log Analytics agent is on a **deprecation path** and won't be supported after **August 31, 2024**. If you use the Log Analytics agent to ingest data to Azure Monitor, [migrate to the new Azure Monitor agent](/azure/azure-monitor/agents/azure-monitor-agent-migration) prior to that date.
## Prerequisites
+Review the following prerequisites for using the Log Analytics agent VM extension for Windows.
+ ### Operating system
-For details about the supported Windows operating systems, refer to the [Overview of Azure Monitor agents](../../azure-monitor/agents/agents-overview.md#supported-operating-systems) article.
+For details about the supported Windows operating systems, see the [Overview of Azure Monitor agents](/azure/azure-monitor/agents/agents-overview#supported-operating-systems) article.
-### Agent and VM Extension version
+### Agent and VM extension version
The following table provides a mapping of the version of the Windows Log Analytics VM extension and Log Analytics agent for each release.
-| Log Analytics Windows agent version | Log Analytics Windows VM extension version | Release Date | Release Notes |
-|--|--|--|--|
-| 10.20.18067.0|1.0.18067 | March 2022 | <ul><li>Bug fix for perf counters</li><li>Enhancements to Agent Troubleshooter</li></ul> |
-| 10.20.18064.0|1.0.18064 | December 2021 | <ul><li>Bug fix for intermittent crashes</li></ul> |
-| 10.20.18062.0| 1.0.18062 | November 2021 | <ul><li>Minor bug fixes and stabilization improvements</li></ul> |
-| 10.20.18053| 1.0.18053.0 | October 2020 | <ul><li>New Agent Troubleshooter</li><li>Updates to how the agent handles certificate changes to Azure services</li></ul> |
-| 10.20.18040 | 1.0.18040.2 | August 2020 | <ul><li>Resolves an issue on Azure Arc</li></ul> |
-| 10.20.18038 | 1.0.18038 | April 2020 | <ul><li>Enables connectivity over Private Link using Azure Monitor Private Link Scopes</li><li>Adds ingestion throttling to avoid a sudden, accidental influx in ingestion to a workspace</li><li>Adds support for additional Azure Government clouds and regions</li><li>Resolves a bug where HealthService.exe crashed</li></ul> |
-| 10.20.18029 | 1.0.18029 | March 2020 | <ul><li>Adds SHA-2 code signing support</li><li>Improves VM extension installation and management</li><li>Resolves a bug with Azure Arc-enabled servers integration</li><li>Adds a built-in troubleshooting tool for customer support</li><li>Adds support for additional Azure Government regions</li> |
-| 10.20.18018 | 1.0.18018 | October 2019 | <ul><li> Minor bug fixes and stabilization improvements </li></ul> |
-| 10.20.18011 | 1.0.18011 | July 2019 | <ul><li> Minor bug fixes and stabilization improvements </li><li> Increased MaxExpressionDepth to 10000 </li></ul> |
-| 10.20.18001 | 1.0.18001 | June 2019 | <ul><li> Minor bug fixes and stabilization improvements </li><li> Added ability to disable default credentials when making proxy connection (support for WINHTTP_AUTOLOGON_SECURITY_LEVEL_HIGH) </li></ul>|
-| 10.19.13515 | 1.0.13515 | March 2019 | <ul><li>Minor stabilization fixes </li></ul> |
-| 10.19.10006 | n/a | Dec 2018 | <ul><li> Minor stabilization fixes </li></ul> |
-| 8.0.11136 | n/a | Sept 2018 | <ul><li> Added support for detecting resource ID change on VM move </li><li> Added Support for reporting resource ID when using non-extension install </li></ul>|
-| 8.0.11103 | n/a | April 2018 | |
-| 8.0.11081 | 1.0.11081 | Nov 2017 | |
-| 8.0.11072 | 1.0.11072 | Sept 2017 | |
-| 8.0.11049 | 1.0.11049 | Feb 2017 | |
+| Agent version | VM extension version | Release date | Release notes |
+|--|--|--|--|
+| 10.20.18067.0 | 1.0.18067 | March 2022 | - Bug fix for performance counters <br> - Enhancements to Agent Troubleshooter |
+| 10.20.18064.0 | 1.0.18064 | December 2021 | - Bug fix for intermittent crashes |
+| 10.20.18062.0 | 1.0.18062 | November 2021 | - Minor bug fixes and stabilization improvements |
+| 10.20.18053 | 1.0.18053.0 | October 2020 | - New Agent Troubleshooter <br> - Updates how the agent handles certificate changes to Azure services |
+| 10.20.18040 | 1.0.18040.2 | August 2020 | - Resolves an issue on Azure Arc |
+| 10.20.18038 | 1.0.18038 | April 2020 | - Enables connectivity over Azure Private Link by using Azure Monitor Private Link Scopes <br> - Adds ingestion throttling to avoid a sudden, accidental influx in ingestion to a workspace <br> - Adds support for more Azure Government clouds and regions <br> - Resolves a bug where HealthService.exe crashed |
+| 10.20.18029 | 1.0.18029 | March 2020 | - Adds SHA-2 code signing support <br> - Improves VM extension installation and management <br> - Resolves a bug with Azure Arc-enabled servers integration <br> - Adds built-in troubleshooting tool for customer support <br> - Adds support for more Azure Government regions |
+| 10.20.18018 | 1.0.18018 | October 2019 | - Minor bug fixes and stabilization improvements |
+| 10.20.18011 | 1.0.18011 | July 2019 | - Minor bug fixes and stabilization improvements <br> - Increases `MaxExpressionDepth` to 10,000 |
+| 10.20.18001 | 1.0.18001 | June 2019 | - Minor bug fixes and stabilization improvements <br> - Adds ability to disable default credentials when making proxy connection (support for `WINHTTP_AUTOLOGON_SECURITY_LEVEL_HIGH`) |
+| 10.19.13515 | 1.0.13515 | March 2019 | - Minor stabilization fixes |
+| 10.19.10006 | n/a | December 2018 | - Minor stabilization fixes |
+| 8.0.11136 | n/a | September 2018 | - Adds support for detecting resource ID change on VM move <br> - Adds support for reporting resource ID when using nonextension install |
+| 8.0.11103 | n/a | April 2018 | |
+| 8.0.11081 | 1.0.11081 | November 2017 | |
+| 8.0.11072 | 1.0.11072 | September 2017 | |
+| 8.0.11049 | 1.0.11049 | February 2017 | |
### Microsoft Defender for Cloud
-Microsoft Defender for Cloud automatically provisions the Log Analytics agent and connects it with the default Log Analytics workspace of the Azure subscription. If you are using Microsoft Defender for Cloud, do not run through the steps in this document. Doing so overwrites the configured workspace and break the connection with Microsoft Defender for Cloud.
+Microsoft Defender for Cloud automatically provisions the Log Analytics agent and connects it with the default Log Analytics workspace of the Azure subscription.
+
+> [!IMPORTANT]
+> If you're using Microsoft Defender for Cloud, don't follow the extension deployment methods described in this article. These deployment processes overwrite the configured Log Analytics workspace and break the connection with Microsoft Defender for Cloud.
+
+### Azure Arc
+
+You can use Azure Arc-enabled servers to deploy, remove, and update the Log Analytics agent VM extension to non-Azure Windows and Linux machines. This approach simplifies the management of your hybrid machine through their lifecycle. For more information, see [VM extension management with Azure Arc-enabled servers](/azure/azure-arc/servers/manage-vm-extensions).
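+
+As a sketch of deploying this extension to an Arc-enabled server with the Azure CLI (assumes the `connectedmachine` CLI extension is installed; the machine name, location, and workspace values are illustrative placeholders):
+
+```azurecli
+# Deploy the Log Analytics agent extension to an Azure Arc-enabled server
+az connectedmachine extension create \
+    --machine-name myArcServer \
+    --resource-group myResourceGroup \
+    --location eastus \
+    --name MicrosoftMonitoringAgent \
+    --publisher Microsoft.EnterpriseCloud.Monitoring \
+    --type MicrosoftMonitoringAgent \
+    --settings '{"workspaceId":"<workspace ID>"}' \
+    --protected-settings '{"workspaceKey":"<workspace key>"}'
+```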
### Internet connectivity
-The Log Analytics agent extension for Windows requires that the target virtual machine is connected to the internet.
+The Log Analytics agent VM extension for Windows requires that the target VM is connected to the internet.
## Extension schema
-The following JSON shows the schema for the Log Analytics agent extension. The extension requires the workspace ID and workspace key from the target Log Analytics workspace. These can be found in the settings for the workspace in the Azure portal. Because the workspace key should be treated as sensitive data, it should be stored in a protected setting configuration. Azure VM extension protected setting data is encrypted, and only decrypted on the target virtual machine. Note that **workspaceId** and **workspaceKey** are case-sensitive.
+The following JSON shows the schema for the Log Analytics agent VM extension for Windows. The extension requires the workspace ID and workspace key from the target Log Analytics workspace. These items can be found in the settings for the workspace in the Azure portal.
+
+Because the workspace key should be treated as sensitive data, it should be stored in a protected setting configuration. Azure VM extension protected-setting data is encrypted, and it's only decrypted on the target VM.
+
+> [!NOTE]
+> The values for `workspaceId` and `workspaceKey` are case-sensitive.
```json {
The following JSON shows the schema for the Log Analytics agent extension. The e
### Property values
-| Name | Value / Example |
-| - | - |
-| apiVersion | 2015-06-15 |
-| publisher | Microsoft.EnterpriseCloud.Monitoring |
-| type | MicrosoftMonitoringAgent |
-| typeHandlerVersion | 1.0 |
-| workspaceId (e.g)* | 6f680a37-00c6-41c7-a93f-1437e3462574 |
-| workspaceKey (e.g) | z4bU3p1/GrnWpQkky4gdabWXAhbWSTz70hm4m2Xt92XI+rSRgE8qVvRhsGo9TXffbrTahyrwv35W0pOqQAU7uQ== |
+The JSON schema includes the following properties.
-\* The workspaceId is called the consumerId in the Log Analytics API.
+| Name | Value/Example |
+|--|--|
+| `apiVersion` | 2015-06-15 |
+| `publisher` | Microsoft.EnterpriseCloud.Monitoring |
+| `type` | MicrosoftMonitoringAgent |
+| `typeHandlerVersion` | 1.0 |
+| `workspaceId`__*__ | 6f680a37-00c6-41c7-a93f-1437e3462574 (example) |
+| `workspaceKey` | z4bU3p1/GrnWpQkky4gdabWXAhbWSTz70hm4m2Xt92XI+rSRgE8qVvRhsGo9TXffbrTahyrwv35W0pOqQAU7uQ== (example) |
-> [!NOTE]
-> For additional properties see Azure [Connect Windows Computers to Azure Monitor](../../azure-monitor/agents/agent-windows.md).
+__\*__ The `workspaceId` schema property is specified as the `consumerId` property in the Log Analytics API.
## Template deployment
-Azure VM extensions can be deployed with Azure Resource Manager templates. The JSON schema detailed in the previous section can be used in an Azure Resource Manager template to run the Log Analytics agent extension during an Azure Resource Manager template deployment. A sample template that includes the Log Analytics agent VM extension can be found on the [Azure Quickstart Gallery](https://github.com/Azure/azure-quickstart-templates/tree/master/demos/oms-extension-windows-vm).
+Azure VM extensions can be deployed with Azure Resource Manager (ARM) templates. The JSON schema detailed in the previous section can be used in an ARM template to run the Log Analytics agent VM extension during an ARM template deployment. A sample template that includes the Log Analytics agent VM extension can be found on the [Azure Quickstart Gallery](https://github.com/Azure/azure-quickstart-templates/tree/master/demos/oms-extension-windows-vm).
->[!NOTE]
->The template does not support specifying more than one workspace ID and workspace key when you want to configure the agent to report to multiple workspaces. To configure the agent to report to multiple workspaces, see [Add or remove a workspace](../../azure-monitor/agents/agent-manage.md#add-or-remove-a-workspace).
+> [!NOTE]
+> The ARM template doesn't support specifying more than one workspace ID and workspace key when you want to configure the Log Analytics agent to report to multiple workspaces. To configure the Log Analytics agent VM extension to report to multiple workspaces, see [Add or remove a workspace](/azure/azure-monitor/agents/agent-manage?tabs=PowerShellLinux#add-or-remove-a-workspace).
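+
+As a sketch of what multihoming looks like on the agent itself, you can use the agent's COM configuration object from PowerShell on the target VM (the workspace values are placeholders):
+
+```powershell
+# Run inside the VM: register an additional workspace with the installed
+# Log Analytics agent, then reload its configuration.
+$mma = New-Object -ComObject 'AgentConfigManager.MgmtSvcCfg'
+$mma.AddCloudWorkspace('<workspace ID>', '<workspace key>')
+$mma.ReloadConfiguration()
+```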
-The JSON for a virtual machine extension can be nested inside the virtual machine resource, or placed at the root or top level of a Resource Manager JSON template. The placement of the JSON affects the value of the resource name and type. For more information, see [Set name and type for child resources](../../azure-resource-manager/templates/child-resource-name-type.md).
+The JSON for a VM extension can be nested inside the VM resource, or placed at the root or top level of a JSON ARM template. The placement of the JSON affects the value of the resource name and type. For more information, see [Set name and type for child resources](/azure/azure-resource-manager/templates/child-resource-name-type).
-The following example assumes the Log Analytics extension is nested inside the virtual machine resource. When nesting the extension resource, the JSON is placed in the `"resources": []` object of the virtual machine.
+The following example assumes the Log Analytics agent VM extension is nested inside the VM resource. When you nest the extension resource, the JSON is placed in the `"resources": []` object of the VM.
```json {
The following example assumes the Log Analytics extension is nested inside the v
} ```
-When placing the extension JSON at the root of the template, the resource name includes a reference to the parent virtual machine, and the type reflects the nested configuration.
+When you place the extension JSON at the root of the ARM template, the resource `name` includes a reference to the parent VM, and the `type` reflects the nested configuration.
```json {
When placing the extension JSON at the root of the template, the resource name i
## PowerShell deployment
-The `Set-AzVMExtension` command can be used to deploy the Log Analytics agent virtual machine extension to an existing virtual machine. Before running the command, the public and private configurations need to be stored in a PowerShell hash table.
+The `Set-AzVMExtension` command can be used to deploy the Log Analytics agent VM extension to an existing VM. Before you run the command, store the public and private configurations in a [PowerShell hashtable](/powershell/scripting/learn/deep-dives/everything-about-hashtable).
```powershell $PublicSettings = @{"workspaceId" = "myWorkspaceId"}
Set-AzVMExtension -ExtensionName "MicrosoftMonitoringAgent" `
-Location WestUS ```
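
A complete invocation might look like the following sketch (the resource group, VM name, and location are illustrative):

```powershell
# Sketch: deploy the Log Analytics agent VM extension to an existing Windows VM.
$PublicSettings = @{"workspaceId" = "myWorkspaceId"}
$ProtectedSettings = @{"workspaceKey" = "myWorkspaceKey"}

Set-AzVMExtension -ExtensionName "MicrosoftMonitoringAgent" `
    -ResourceGroupName "myResourceGroup" `
    -VMName "myVM" `
    -Publisher "Microsoft.EnterpriseCloud.Monitoring" `
    -ExtensionType "MicrosoftMonitoringAgent" `
    -TypeHandlerVersion 1.0 `
    -Settings $PublicSettings `
    -ProtectedSettings $ProtectedSettings `
    -Location WestUS
```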
-## Troubleshoot and support
+## <a name="troubleshoot-and-support"></a> Troubleshoot issues
-### Troubleshoot
+Here are some suggestions for how to troubleshoot deployment issues.
-Data about the state of extension deployments can be retrieved from the Azure portal, and by using the Azure PowerShell module. To see the deployment state of extensions for a given VM, run the following command using the Azure PowerShell module.
+### View extension status
-```powershell
-Get-AzVMExtension -ResourceGroupName myResourceGroup -VMName myVM -Name myExtensionName
-```
+Check the status of your extension deployment in the Azure portal, or by using PowerShell or the Azure CLI.
-Extension execution output is logged to files found in the following directory:
+To see the deployment state of extensions for a given VM, run the following commands.
-```cmd
-C:\WindowsAzure\Logs\Plugins\Microsoft.EnterpriseCloud.Monitoring.MicrosoftMonitoringAgent\
-```
+- Azure PowerShell:
+
+ ```powershell
+ Get-AzVMExtension -ResourceGroupName <myResourceGroup> -VMName <myVM> -Name <myExtensionName>
+ ```
+
+- The Azure CLI:
+
+ ```azurecli
+ az vm get-instance-view --resource-group <myResourceGroup> --name <myVM> --query "instanceView.extensions"
+ ```
+
+### Review output logs
+
+View output logs for the Log Analytics agent VM extension for Windows under
+`C:\WindowsAzure\Logs\Plugins\Microsoft.EnterpriseCloud.Monitoring.MicrosoftMonitoringAgent\`.
+
+### Get support
+
+Here are some other options to help you resolve deployment issues:
+
+- For assistance, contact the Azure experts on the [Q&A and Stack Overflow forums](https://azure.microsoft.com/support/community/).
-### Support
-If you need more help at any point in this article, you can contact the Azure experts on the [MSDN Azure and Stack Overflow forums](https://azure.microsoft.com/support/forums/). Alternatively, you can file an Azure support incident. Go to the [Azure support site](https://azure.microsoft.com/support/options/) and select Get support. For information about using Azure Support, read the [Microsoft Azure support FAQ](https://azure.microsoft.com/support/faq/).
+- You can also [Contact Microsoft Support](https://support.microsoft.com/contactus/). For information about using Azure support, read the [Azure support FAQ](https://azure.microsoft.com/support/legal/faq/).
virtual-machines Symantec https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/extensions/symantec.md
- Title: Install Symantec Endpoint Protection on a Windows VM in Azure
-description: Learn how to install and configure the Symantec Endpoint Protection security extension on a new or existing Azure VM created with the Classic deployment model.
------- Previously updated : 03/06/2023---
-# How to install and configure Symantec Endpoint Protection on a Windows VM
-
-Azure has two different deployment models for creating and working with resources: [Resource Manager and Classic](../../azure-resource-manager/management/deployment-models.md). This article covers using the Classic deployment model. Microsoft recommends that most new deployments use the Resource Manager model.
-
-This article shows you how to install and configure the Symantec Endpoint Protection client on an existing virtual machine (VM) running Windows Server. This full client includes services such as virus and spyware protection, firewall, and intrusion prevention. The client is installed as a security extension by using the VM Agent.
-
-If you have an existing subscription from Symantec for an on-premises solution, you can use it to protect your Azure virtual machines. If you're not a customer yet, you can sign up for a trial subscription. For more information about this solution, see [Symantec Endpoint Protection on Microsoft's Azure platform][Symantec]. This page also has links to licensing information and instructions for installing the client if you're already a Symantec customer.
-
-## Install Symantec Endpoint Protection on an existing VM
-Before you begin, you need the following:
-
-* The Azure PowerShell module, version 0.8.2 or later, on your work computer. You can check the version of Azure PowerShell that you have installed with the **Get-Module azure | format-table version** command. For instructions and a link to the latest version, see [How to Install and Configure Azure PowerShell][PS]. Log in to your Azure subscription using `Add-AzureAccount`.
-* The VM Agent running on the Azure Virtual Machine.
-
-First, verify that the VM Agent is already installed on the virtual machine. Fill in the cloud service name and virtual machine name, and then run the following commands at an administrator-level Azure PowerShell command prompt. Replace everything within the quotes, including the < and > characters.
-
-> [!TIP]
-> If you don't know the cloud service and virtual machine names, run **Get-AzureVM** to list the names for all virtual machines in your current subscription.
-
-```powershell
-$CSName = "<cloud service name>"
-$VMName = "<virtual machine name>"
-$vm = Get-AzureVM -ServiceName $CSName -Name $VMName
-write-host $vm.VM.ProvisionGuestAgent
-```
-
-If the **write-host** command displays **True**, the VM Agent is installed. If it displays **False**, see the instructions and a link to the download in the Azure blog post [VM Agent and Extensions - Part 2][Agent].
-
-If the VM Agent is installed, run these commands to install the Symantec Endpoint Protection agent.
-
-```powershell
-$Agent = Get-AzureVMAvailableExtension -Publisher Symantec -ExtensionName SymantecEndpointProtection
-
-Set-AzureVMExtension -Publisher Symantec -Version $Agent.Version -ExtensionName SymantecEndpointProtection `
- -VM $vm | Update-AzureVM
-```
-
-To verify that the Symantec security extension has been installed and is up-to-date:
-
-1. Log on to the virtual machine. For instructions, see [How to Log on to a Virtual Machine Running Windows Server][Logon].
-2. For Windows Server 2008 R2, click **Start > Symantec Endpoint Protection**. For Windows Server 2012 or Windows Server 2012 R2, from the start screen, type **Symantec**, and then click **Symantec Endpoint Protection**.
-3. From the **Status** tab of the **Status-Symantec Endpoint Protection** window, apply updates or restart if needed.
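
If you prefer to stay in PowerShell instead of logging on to the VM, the extension state can also be read from the VM object. This is a minimal sketch using the same classic (ASM) module as above; the exact properties surfaced under `ResourceExtensionStatusList` can vary by module version, so treat the property names as assumptions to verify:

```powershell
# Re-read the VM so the extension status reflects the install above.
$vm = Get-AzureVM -ServiceName $CSName -Name $VMName

# List the status entries for all resource extensions on the VM; look for the
# Symantec handler and confirm it reports an installed/ready state.
$vm.ResourceExtensionStatusList | Format-List
```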
-
-## Additional resources
-[How to Log on to a Virtual Machine Running Windows Server][Logon]
-
-[Azure VM Extensions and Features][Ext]
-
-<!--Link references-->
-[Symantec]: https://www.symantec.com/connect/blogs/symantec-endpoint-protection-now-microsoft-azure
-
-[Create]:../windows/classic/tutorial.md
-
-[PS]: /powershell/azure/
-
-[Agent]: https://go.microsoft.com/fwlink/p/?LinkId=403947
-
-[Logon]:../windows/classic/connect-logon.md
-
-[Ext]: features-windows.md
virtual-machines Create Vm Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/create-vm-rest-api.md
- Title: Create a Linux VM with the REST API
-description: Learn how to create a Linux virtual machine in Azure that uses Managed Disks and SSH authentication with Azure REST API.
- Previously updated : 06/05/2018
-# Create a Linux virtual machine that uses SSH authentication with the REST API
-
-**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Flexible scale sets
-
-A Linux virtual machine (VM) in Azure consists of various resources, such as disks and network interfaces, and is defined by parameters such as location, size, operating system image, and authentication settings.
-
-You can create a Linux VM via the Azure portal, the Azure CLI 2.0, the Azure SDKs, Azure Resource Manager templates, and third-party tools such as Ansible or Terraform. All these tools ultimately use the REST API to create the Linux VM.
-
-This article shows you how to use the REST API to create a Linux VM running Ubuntu 18.04-LTS with managed disks and SSH authentication.
-
-## Before you start
-
-Before you create and submit the request, you will need:
-
-* The `{subscription-id}` for your subscription
- * If you have multiple subscriptions, see [Working with multiple subscriptions](/cli/azure/manage-azure-subscriptions-azure-cli)
-* A `{resourceGroupName}` you've created ahead of time
-* A [virtual network interface](../../virtual-network/virtual-network-network-interface.md) in the same resource group
-* An SSH key pair (you can [generate a new one](mac-create-ssh-keys.md) if you don't have one)
-
-## Request basics
-
-To create or update a virtual machine, use the following *PUT* operation:
-
-``` http
-PUT https://management.azure.com/subscriptions/{subscription-id}/resourceGroups/{resourceGroupName}/providers/Microsoft.Compute/virtualMachines/{vmName}?api-version=2017-12-01
-```
-
-In addition to the `{subscription-id}` and `{resourceGroupName}` parameters, you'll need to specify the `{vmName}`. The `api-version` parameter is optional, but this article was tested with `api-version=2017-12-01`.
-
-The following headers are required:
-
-| Request header | Description |
-||--|
-| *Content-Type:* | Required. Set to `application/json`. |
-| *Authorization:* | Required. Set to a valid `Bearer` [access token](/rest/api/azure/#authorization-code-grant-interactive-clients). |
-
-For general information about working with REST API requests, see [Components of a REST API request/response](/rest/api/azure/#components-of-a-rest-api-requestresponse).
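
As an illustration of assembling these headers, here's a minimal PowerShell sketch. It assumes you're signed in with the Az module (`Connect-AzAccount`); note that `Get-AzAccessToken` returns the token as a plain string in older Az.Accounts versions and as a `SecureString` in newer ones, so adjust accordingly:

```powershell
# Acquire a bearer token for Azure Resource Manager (assumes Connect-AzAccount has run).
$token = (Get-AzAccessToken -ResourceUrl "https://management.azure.com/").Token

# Authorization header for the Create Or Update call; Content-Type is passed
# separately via Invoke-RestMethod's -ContentType parameter later on.
$headers = @{ Authorization = "Bearer $token" }
```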
-
-## Create the request body
-
-The following common definitions are used to build a request body:
-
-| Name | Required | Type | Description |
-|-|-|-|--|
-| location | True | string | Resource location. |
-| name | | string | Name for the virtual machine. |
-| properties.hardwareProfile | | [HardwareProfile](/rest/api/compute/virtualmachines/createorupdate#hardwareprofile) | Specifies the hardware settings for the virtual machine. |
-| properties.storageProfile | | [StorageProfile](/rest/api/compute/virtualmachines/createorupdate#storageprofile) | Specifies the storage settings for the virtual machine disks. |
-| properties.osProfile | | [OSProfile](/rest/api/compute/virtualmachines/createorupdate#osprofile) | Specifies the operating system settings for the virtual machine. |
-| properties.networkProfile | | [NetworkProfile](/rest/api/compute/virtualmachines/createorupdate#networkprofile) | Specifies the network interfaces of the virtual machine. |
-
-An example request body is below. Make sure you specify the VM name in the `{computerName}` and `{name}` parameters, the name of the network interface you've created under `networkInterfaces`, your username in `adminUsername` and `path`, and the *public* portion of your SSH keypair (located in, for example, `~/.ssh/id_rsa.pub`) in `keyData`. Other parameters you might want to modify include `location` and `vmSize`.
-
-```json
-{
- "location": "eastus",
- "name": "{vmName}",
- "properties": {
- "hardwareProfile": {
- "vmSize": "Standard_DS1_v2"
- },
- "storageProfile": {
- "imageReference": {
- "sku": "18.04-LTS",
- "publisher": "Canonical",
- "version": "latest",
- "offer": "UbuntuServer"
- },
- "osDisk": {
- "caching": "ReadWrite",
- "managedDisk": {
- "storageAccountType": "Premium_LRS"
- },
- "name": "myVMosdisk",
- "createOption": "FromImage"
- }
- },
- "osProfile": {
- "adminUsername": "{your-username}",
- "computerName": "{vmName}",
- "linuxConfiguration": {
- "ssh": {
- "publicKeys": [
- {
- "path": "/home/{your-username}/.ssh/authorized_keys",
- "keyData": "ssh-rsa AAAAB3NzaC1{snip}mf69/J1"
- }
- ]
- },
- "disablePasswordAuthentication": true
- }
- },
- "networkProfile": {
- "networkInterfaces": [
- {
- "id": "/subscriptions/{subscription-id}/resourceGroups/{resourceGroupName}/providers/Microsoft.Network/networkInterfaces/{existing-nic-name}",
- "properties": {
- "primary": true
- }
- }
- ]
- }
- }
-}
-```
-
-For a complete list of the available definitions in the request body, see [Virtual machines create or update request body definitions](/rest/api/compute/virtualmachines/createorupdate#definitions).
-
-## Sending the request
-
-You may use the client of your preference for sending this HTTP request. You may also use an [in-browser tool](/rest/api/compute/virtualmachines/createorupdate) by clicking the **Try it** button.
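
For example, here's a hedged sketch of submitting the call with PowerShell's `Invoke-RestMethod`, reusing the `$headers` built earlier; the placeholder values and the `vm.json` file name are illustrative, not part of the API:

```powershell
$subscriptionId    = "<subscription-id>"
$resourceGroupName = "<resource-group-name>"
$vmName            = "<vm-name>"

$uri = "https://management.azure.com/subscriptions/$subscriptionId" +
       "/resourceGroups/$resourceGroupName" +
       "/providers/Microsoft.Compute/virtualMachines/$vmName" +
       "?api-version=2017-12-01"

# vm.json holds the JSON request body shown above, with the placeholders filled in.
$body = Get-Content -Path ./vm.json -Raw

$response = Invoke-RestMethod -Method Put -Uri $uri -Headers $headers `
    -ContentType "application/json" -Body $body

# In the full (non-condensed) response, provisioningState is nested under properties.
$response.properties.provisioningState
```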
-
-### Responses
-
-There are two successful responses for the operation to create or update a virtual machine:
-
-| Name | Type | Description |
-|-|--|-|
-| 200 OK | [VirtualMachine](/rest/api/compute/virtualmachines/createorupdate#virtualmachine) | OK |
-| 201 Created | [VirtualMachine](/rest/api/compute/virtualmachines/createorupdate#virtualmachine) | Created |
-
-A condensed *201 Created* response from the previous example request body that creates a VM shows a *vmId* has been assigned and the *provisioningState* is *Creating*:
-
-```json
-{
- "vmId": "e0de9b84-a506-4b95-9623-00a425d05c90",
- "provisioningState": "Creating"
-}
-```
-
-For more information about REST API responses, see [Process the response message](/rest/api/azure/#process-the-response-message).
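
Because provisioning continues asynchronously after the *201 Created* response, one simple (assumed) pattern is to poll the same URI with a *GET* until *provisioningState* moves past *Creating*:

```powershell
# Fixed-interval polling loop; reuses $uri and $headers from the sketches above.
do {
    Start-Sleep -Seconds 10
    $vm = Invoke-RestMethod -Method Get -Uri $uri -Headers $headers
    Write-Host "provisioningState: $($vm.properties.provisioningState)"
} while ($vm.properties.provisioningState -eq "Creating")
```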
-
-## Next steps
-
-For more information on the Azure REST APIs or other management tools such as Azure CLI or Azure PowerShell, see the following:
-- [Azure Compute provider REST API](/rest/api/compute/)
-- [Get started with Azure REST API](/rest/api/azure/)
-- [Azure CLI](/cli/azure/)
-- [Azure PowerShell module](/powershell/azure/)
virtual-machines Using Cloud Init https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/using-cloud-init.md
There are two stages to making cloud-init available to the supported Linux distributions:
|: |: |: |: |: |: |
|RedHat 7 |RHEL |7.7, 7.8, 7_9 |latest |yes | yes |
|RedHat 8 |RHEL |8.1, 8.2, 8_3, 8_4 |latest |yes | yes |
+|RedHat 9 |RHEL |9_0, 9_1 |latest |yes | yes |
* All other RedHat SKUs starting from RHEL 7 (version 7.7) and RHEL 8 (version 8.1), including both Gen1 and Gen2 images, are provisioned using cloud-init. Cloud-init is not supported on RHEL 6.
virtual-machines Managed Disks Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/managed-disks-overview.md
This disk has a maximum capacity of 4,095 GiB; however, many operating systems are partitioned with a master boot record (MBR) by default, which limits the usable size to 2 TiB.
### Temporary disk
-Most VMs contain a temporary disk, which is not a managed disk. The temporary disk provides short-term storage for applications and processes, and is intended to only store data such as page or swap files. Data on the temporary disk may be lost during a [maintenance event](./understand-vm-reboots.md) or when you [redeploy a VM](/troubleshoot/azure/virtual-machines/redeploy-to-new-node-windows?toc=%2fazure%2fvirtual-machines%2fwindows%2ftoc.json). During a successful standard reboot of the VM, data on the temporary disk will persist. For more information about VMs without temporary disks, see [Azure VM sizes with no local temporary disk](azure-vms-no-temp-disk.yml).
+Most VMs contain a temporary disk, which is not a managed disk. The temporary disk provides short-term storage for applications and processes, and is intended to only store data such as page files, swap files, or SQL Server tempdb. Data on the temporary disk may be lost during a [maintenance event](./understand-vm-reboots.md) or when you [redeploy a VM](/troubleshoot/azure/virtual-machines/redeploy-to-new-node-windows?toc=%2fazure%2fvirtual-machines%2fwindows%2ftoc.json). During a successful standard reboot of the VM, data on the temporary disk will persist. For more information about VMs without temporary disks, see [Azure VM sizes with no local temporary disk](azure-vms-no-temp-disk.yml).
On Azure Linux VMs, the temporary disk is typically /dev/sdb, and on Windows VMs it's D: by default. The temporary disk isn't encrypted unless you enable encryption at host (for server-side encryption) or, for Azure Disk Encryption, set the [VolumeType parameter to All on Windows](./windows/disk-encryption-windows.md#enable-encryption-on-a-newly-added-data-disk) or use [EncryptFormatAll on Linux](./linux/disk-encryption-linux.md#use-encryptformatall-feature-for-data-disks-on-linux-vms).
virtual-network Create Custom Ip Address Prefix Ipv6 Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/create-custom-ip-address-prefix-ipv6-powershell.md
$prefix =@{
 AuthorizationMessage = 'xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx|2a05:f500:2::/48|yyyymmdd'
 SignedMessage = $byoipauthsigned
}
-$myCustomIpPrefix = New-AzCustomIPPrefix @prefix
+$myCustomIPv6GlobalPrefix = New-AzCustomIPPrefix @prefix
```

### Provision a regional custom IPv6 address prefix
$prefix =@{
 Location = 'EastUS2'
 CIDR = '2a05:f500:2:1::/64'
}
-$myCustomIpPrefix = New-AzCustomIPPrefix @prefix -Zone 1,2,3
+$myCustomIPv6RegionalPrefix = New-AzCustomIPPrefix @prefix -Zone 1,2,3
```

Similar to IPv4 custom IP prefixes, after the regional custom IP prefix is in a **Provisioned** state, public IP prefixes can be derived from the regional custom IP prefix. These public IP prefixes and any public IP addresses derived from them can be attached to networking resources, though they are not yet being advertised.
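
As a sketch of that derivation step, once the regional prefix reports **Provisioned**, a public IP prefix can be created from it with `New-AzPublicIpPrefix` and its `-CustomIpPrefix` parameter. The resource names and prefix length below are illustrative; confirm the parameters against your Az.Network version:

```powershell
# Derive a public IP prefix from the provisioned regional custom IPv6 prefix.
$publicIpPrefix = New-AzPublicIpPrefix -Name 'myDerivedPublicIpPrefix' `
    -ResourceGroupName 'myResourceGroup' `
    -Location 'EastUS2' `
    -IpAddressVersion IPv6 `
    -PrefixLength 124 `
    -CustomIpPrefix $myCustomIPv6RegionalPrefix `
    -Zone 1,2,3
```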
vpn-gateway Vpn Gateway Certificates Point To Site Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/vpn-gateway-certificates-point-to-site-linux.md
The following steps help you install strongSwan.
## <a name="cli"></a>Linux CLI instructions (strongSwan)

The following steps help you generate and export certificates using the Linux CLI (strongSwan).
+For more information, see [Additional instructions to install the Azure CLI](/cli/azure/install-azure-cli-apt).
[!INCLUDE [strongSwan certificates](../../includes/vpn-gateway-strongswan-certificates-include.md)]