Updates from: 04/19/2023 01:10:51
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Whats New Docs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/whats-new-docs.md
Welcome to what's new in Azure Active Directory B2C documentation. This article lists new docs that have been added and those that have had significant updates in the last three months. To learn what's new with the B2C service, see [What's new in Azure Active Directory](../active-directory/fundamentals/whats-new.md) and [Azure AD B2C developer release notes](custom-policy-developer-notes.md).
+## March 2023
+
+### Updated articles
+
+- [Configure SAML identity provider options with Azure Active Directory B2C](identity-provider-generic-saml-options.md)
+- [Tutorial: Configure BioCatch with Azure Active Directory B2C](partner-biocatch.md)
+- [Tutorial: Configure Nok Nok Passport with Azure Active Directory B2C for passwordless FIDO2 authentication](partner-nok-nok.md)
+- [Pass an identity provider access token to your application in Azure Active Directory B2C](idp-pass-through-user-flow.md)
+- [Tutorial: Configure Haventec Authenticate with Azure Active Directory B2C for single-step, multi-factor passwordless authentication](partner-haventec.md)
+- [Configure Trusona Authentication Cloud with Azure Active Directory B2C](partner-trusona.md)
+- [Tutorial: Configure IDEMIA Mobile ID with Azure Active Directory B2C](partner-idemia.md)
+- [Configure Azure Active Directory B2C with Bluink eID-Me for identity verification](partner-eid-me.md)
+- [Tutorial: Configure Azure Active Directory B2C with BlokSec for passwordless authentication](partner-bloksec.md)
+- [Tutorial: Configure Azure Active Directory B2C with Azure Web Application Firewall](partner-azure-web-application-firewall.md)
+- [Tutorial to configure Saviynt with Azure Active Directory B2C](partner-saviynt.md)
+- [Tutorial: Configure Keyless with Azure Active Directory B2C](partner-keyless.md)
+- [Tutorial: Configure security analytics for Azure Active Directory B2C data with Microsoft Sentinel](azure-sentinel.md)
+- [Configure authentication in a sample Python web app by using Azure AD B2C](configure-authentication-sample-python-web-app.md)
+- [Billing model for Azure Active Directory B2C](billing.md)
+- [Azure Active Directory B2C: Region availability & data residency](data-residency.md)
+- [Azure AD B2C: Frequently asked questions (FAQ)](faq.yml)
+- [Tutorial: Create an Azure Active Directory B2C tenant](tutorial-create-tenant.md)
+
## February 2023

### Updated articles
active-directory-domain-services Migrate From Classic Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/migrate-from-classic-vnet.md
Title: Migrate Azure AD Domain Services from a Classic virtual network | Microsoft Docs
description: Learn how to migrate an existing Azure AD Domain Services managed domain from the Classic virtual network model to a Resource Manager-based virtual network.
+ Previously updated : 03/14/2023 Last updated : 04/17/2023

# Migrate Azure Active Directory Domain Services from the Classic virtual network model to Resource Manager
-Azure Active Directory Domain Services (Azure AD DS) supports a one-time move for customers currently using the Classic virtual network model to the Resource Manager virtual network model. Azure AD DS managed domains that use the Resource Manager deployment model provide additional features such as fine-grained password policy, audit logs, and account lockout protection.
+As of April 1, 2023, Azure Active Directory Domain Services (Azure AD DS) has shut down all IaaS virtual machines that host domain controller services for customers who use the Classic virtual network model. Azure AD Domain Services offers a best-effort offline migration solution for customers currently using the Classic virtual network model to the Resource Manager virtual network model. Azure AD DS managed domains that use the Resource Manager deployment model have more features, such as fine-grained password policy, audit logs, and account lockout protection.
-This article outlines considerations for migration, then the required steps to successfully migrate an existing managed domain. For some of the benefits, see [Benefits of migration from the Classic to Resource Manager deployment model in Azure AD DS][migration-benefits].
+This article outlines considerations for migration, followed by the required steps to successfully migrate an existing managed domain. For some of the benefits, see [Benefits of migration from the Classic to Resource Manager deployment model in Azure AD DS][migration-benefits].
> [!NOTE]
> In 2017, Azure AD Domain Services became available to host in an Azure Resource Manager network. Since then, we have been able to build a more secure service using the Azure Resource Manager's modern capabilities. Because Azure Resource Manager deployments fully replace classic deployments, Azure AD DS classic virtual network deployments will be retired on March 1, 2023.
This article outlines considerations for migration, then the required steps to s
## Overview of the migration process
-The migration process takes an existing managed domain that runs in a Classic virtual network and moves it to an existing Resource Manager virtual network. The migration is performed using PowerShell, and has two main stages of execution: *preparation* and *migration*.
-
-![Overview of the migration process for Azure AD DS](media/migrate-from-classic-vnet/migration-overview.png)
-
-In the *preparation* stage, Azure AD DS takes a backup of the domain to get the latest snapshot of users, groups, and passwords synchronized to the managed domain. Synchronization is then disabled, and the cloud service that hosts the managed domain is deleted. During the preparation stage, the managed domain is unable to authenticate users.
-
-![Preparation stage for migrating Azure AD DS](media/migrate-from-classic-vnet/migration-preparation.png)
-
-In the *migration* stage, the underlying virtual disks for the domain controllers from the Classic managed domain are copied to create the VMs using the Resource Manager deployment model. The managed domain is then recreated, which includes the LDAPS and DNS configuration. Synchronization to Azure AD is restarted, and LDAP certificates are restored. There's no need to rejoin any machines to a managed domain; they continue to be joined to the managed domain and run without changes.
-
-![Migration of Azure AD DS](media/migrate-from-classic-vnet/migration-process.png)
-
-## Example scenarios for migration
-
-Some common scenarios for migrating a managed domain include the following examples.
-
-> [!NOTE]
-> Don't convert the Classic virtual network until you have confirmed a successful migration. Converting the virtual network removes the option to roll back or restore the managed domain if there are any problems during the migration and verification stages.
-
-### Migrate Azure AD DS to an existing Resource Manager virtual network (recommended)
-
-A common scenario is where you've already moved other existing Classic resources to a Resource Manager deployment model and virtual network. Peering is then used from the Resource Manager virtual network to the Classic virtual network that continues to run Azure AD DS. This approach lets the Resource Manager applications and services use the authentication and management functionality of the managed domain in the Classic virtual network. Once migrated, all resources run using the Resource Manager deployment model and virtual network.
-
-![Migrate Azure AD DS to an existing Resource Manager virtual network](media/migrate-from-classic-vnet/migrate-to-existing-vnet.png)
-
-High-level steps involved in this example migration scenario include the following parts:
-
-1. Remove existing VPN gateways or virtual network peering configured on the Classic virtual network.
-1. Migrate the managed domain using the steps outlined in this article.
-1. Test and confirm a successful migration, then delete the Classic virtual network.
-
-### Migrate multiple resources including Azure AD DS
-
-In this example scenario, you migrate Azure AD DS and other associated resources from the Classic deployment model to the Resource Manager deployment model. If some resources continued to run in the Classic virtual network alongside the managed domain, they can all benefit from migrating to the Resource Manager deployment model.
-
-![Migrate multiple resources to the Resource Manager deployment model](media/migrate-from-classic-vnet/migrate-multiple-resources.png)
-
-High-level steps involved in this example migration scenario include the following parts:
-
-1. Remove existing VPN gateways or virtual network peering configured on the Classic virtual network.
-1. Migrate the managed domain using the steps outlined in this article.
-1. Set up virtual network peering between the Classic virtual network and Resource Manager network.
-1. Test and confirm a successful migration.
-1. [Move additional Classic resources like VMs][migrate-iaas].
-
-### Migrate Azure AD DS but keep other resources on the Classic virtual network
-
-With this example scenario, you have the minimum amount of downtime in one session. You only migrate Azure AD DS to a Resource Manager virtual network, and keep existing resources on the Classic deployment model and virtual network. In a following maintenance period, you can migrate the additional resources from the Classic deployment model and virtual network as desired.
-
-![Migrate only Azure AD DS to the Resource Manager deployment model](media/migrate-from-classic-vnet/migrate-only-azure-ad-ds.png)
-
-High-level steps involved in this example migration scenario include the following parts:
-
-1. Remove existing VPN gateways or virtual network peering configured on the Classic virtual network.
-1. Migrate the managed domain using the steps outlined in this article.
-1. Set up virtual network peering between the Classic virtual network and the new Resource Manager virtual network.
-1. Later, [migrate the additional resources][migrate-iaas] from the Classic virtual network as needed.
+The offline migration process copies the underlying virtual disks for the domain controllers from the Classic managed domain to create the VMs using the Resource Manager deployment model. The managed domain is then recreated, which includes the LDAPS and DNS configuration. Synchronization to Azure AD is restarted, and LDAP certificates are restored. There's no need to rejoin any machines to a managed domain; they continue to be joined to the managed domain and run without changes.
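As an optional spot check after migration, you can confirm from a domain-joined Windows VM that its secure channel to the managed domain is still healthy. A minimal sketch, assuming local administrator rights on the VM:

```powershell
# Returns True when the machine's secure channel to a domain controller is healthy.
# No rejoin should be needed after migration.
Test-ComputerSecureChannel -Verbose
```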
## Before you begin
-As you prepare and then migrate a managed domain, there are some considerations around the availability of authentication and management services. The managed domain is unavailable for a period of time during migration. Applications and services that rely on Azure AD DS experience downtime during migration.
+As you prepare for migration, there are some considerations around the availability of authentication and management services. The managed domain remains unavailable until the migration completes successfully.
> [!IMPORTANT]
> Read all of this migration article and guidance before you start the migration process. The migration process affects the availability of the Azure AD DS domain controllers for periods of time. Users, services, and applications can't authenticate against the managed domain during the migration process.
As you prepare and then migrate a managed domain, there are some considerations
The domain controller IP addresses for a managed domain change after migration. This change includes the public IP address for the secure LDAP endpoint. The new IP addresses are inside the address range for the new subnet in the Resource Manager virtual network.
-If you need to roll back, the IP addresses may change after rolling back.
- Azure AD DS typically uses the first two available IP addresses in the address range, but this isn't guaranteed. You can't currently specify the IP addresses to use after migration.
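Because you can't choose the new addresses, one way to discover the domain controller IP addresses after migration is a DNS lookup against the managed domain. A minimal sketch, assuming the Windows DnsClient module and using *aaddscontoso.com* as a placeholder domain name:

```powershell
# Placeholder FQDN; replace with your managed domain name.
Resolve-DnsName -Name aaddscontoso.com -Type A |
    Select-Object Name, IPAddress
```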
-### Downtime
-
-The migration process involves the domain controllers being offline for a period of time. Domain controllers are inaccessible while Azure AD DS is migrated to the Resource Manager deployment model and virtual network.
-
-On average, the downtime is around 1 to 3 hours. This time period is from when the domain controllers are taken offline to the moment the first domain controller comes back online. This average doesn't include the time it takes for the second domain controller to replicate, or the time it may take to migrate additional resources to the Resource Manager deployment model.
### Account lockout

Managed domains that run on Classic virtual networks don't have AD account lockout policies in place. If VMs are exposed to the internet, attackers could use password-spray methods to brute-force their way into accounts. There's no account lockout policy to stop those attempts. For managed domains that use the Resource Manager deployment model and virtual networks, AD account lockout policies protect against these password-spray attacks.
-By default, 5 bad password attempts in 2 minutes lock out an account for 30 minutes.
+By default, five bad password attempts in two minutes lock out an account for 30 minutes.
A locked out account can't be used to sign in, which may interfere with the ability to manage the managed domain or applications managed by the account. After a managed domain is migrated, accounts can experience what feels like a permanent lockout due to repeated failed attempts to sign in.

Two common scenarios after migration include the following:
A locked out account can't be used to sign in, which may interfere with the abil
If you suspect that some accounts may be locked out after migration, the final migration steps outline how to enable auditing or change the fine-grained password policy settings.
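To see which lockout values are in effect before you change anything, you can inspect the fine-grained password policies from a domain-joined management VM. A minimal sketch, assuming the Active Directory PowerShell module (part of RSAT) is installed:

```powershell
# List fine-grained password policies and their lockout settings.
Get-ADFineGrainedPasswordPolicy -Filter * |
    Select-Object Name, LockoutThreshold, LockoutObservationWindow, LockoutDuration
```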
-### Roll back and restore
-
-If the migration isn't successful, there's process to roll back or restore a managed domain. Rollback is a self-service option to immediately return the state of the managed domain to before the migration attempt. Azure support engineers can also restore a managed domain from backup as a last resort. For more information, see [how to roll back or restore from a failed migration](#roll-back-and-restore-from-migration).
### Restrictions on available virtual networks

There are some restrictions on the virtual networks that a managed domain can be migrated to. The destination Resource Manager virtual network must meet the following requirements:
You must also create a network security group to restrict traffic in the virtual
For more information on what rules are required, see [Azure AD DS network security groups and required ports](network-considerations.md#network-security-groups-and-required-ports).
-### LDAPS and TLS/SSL certificate expiration
-
-If your managed domain is configured for LDAPS, confirm that your current TLS/SSL certificate is valid for more than 30 days. A certificate that expires within the next 30 days causes the migration processes to fail. If needed, renew the certificate and apply it to your managed domain, then begin the migration process.
- ## Migration steps
-The migration to the Resource Manager deployment model and virtual network is split into 5 main steps:
+The migration to the Resource Manager deployment model and virtual network is split into four main steps:
-| Step | Performed through | Estimated time | Downtime | Roll back/Restore? |
-||--|--|--|-|
-| [Step 1 - Update and locate the new virtual network](#update-and-verify-virtual-network-settings) | Azure portal | 15 minutes | No downtime required | N/A |
| [Step 2 - Prepare the managed domain for migration](#prepare-the-managed-domain-for-migration) | PowerShell | 15–30 minutes on average | Downtime of Azure AD DS starts after this command is completed. | Roll back and restore available. |
| [Step 3 - Move the managed domain to an existing virtual network](#migrate-the-managed-domain) | PowerShell | 1–3 hours on average | One domain controller is available once this command is completed. | On failure, both rollback (self-service) and restore are available. |
-| [Step 4 - Test and wait for the replica domain controller](#test-and-verify-connectivity-after-the-migration)| PowerShell and Azure portal | 1 hour or more, depending on the number of tests | Both domain controllers are available and should function normally, downtime ends. | N/A. Once the first VM is successfully migrated, there's no option for rollback or restore. |
-| [Step 5 - Optional configuration steps](#optional-post-migration-configuration-steps) | Azure portal and VMs | N/A | No downtime required | N/A |
+| Step | Performed through | Estimated time | Downtime |
+||--|--|--|
+| [Step 1 - Update and locate the new virtual network](#update-and-verify-virtual-network-settings) | Azure portal | 15 minutes | |
+| [Step 2 - Perform offline migration](#perform-offline-migration) | PowerShell | 1–3 hours on average | One domain controller is available once this command is completed. |
+| [Step 3 - Test and wait for the replica domain controller](#test-and-verify-connectivity-after-the-migration)| PowerShell and Azure portal | 1 hour or more, depending on the number of tests | Both domain controllers are available and should function normally, downtime ends. |
+| [Step 4 - Optional configuration steps](#optional-post-migration-configuration-steps) | Azure portal and VMs | N/A | |
> [!IMPORTANT]
> To avoid additional downtime, read all of this migration article and guidance before you start the migration process. The migration process affects the availability of the Azure AD DS domain controllers for a period of time. Users, services, and applications can't authenticate against the managed domain during the migration process.
Before you begin the migration process, complete the following initial checks an
1. Update your local Azure PowerShell environment to the latest version. To complete the migration steps, you need at least version *2.3.2*.
- For information on how to check and update your PowerShell version, see [Azure PowerShell overview][azure-powershell].
+ For information about how to check and update your PowerShell version, see [Azure PowerShell overview][azure-powershell].
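A quick way to confirm your local module version meets that minimum, assuming you installed the Az module from the PowerShell Gallery:

```powershell
# Check the installed Az module version; the migration steps need at least 2.3.2.
Get-InstalledModule -Name Az | Select-Object Name, Version

# Update in place if you're below the minimum.
Update-Module -Name Az
```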
1. Create, or choose an existing, Resource Manager virtual network.
- Make sure that network settings don't block necessary ports required for Azure AD DS. Ports must be open on both the Classic virtual network and the Resource Manager virtual network. These settings include route tables (although it's not recommended to use route tables) and network security groups.
+ Make sure that network settings don't block ports required for Azure AD DS. Ports must be open on both the Classic virtual network and the Resource Manager virtual network. These settings include route tables (although it's not recommended to use route tables) and network security groups.
Azure AD DS needs a network security group to secure the ports needed for the managed domain and block all other incoming traffic. This network security group acts as an extra layer of protection to lock down access to the managed domain.
Before you begin the migration process, complete the following initial checks an
| Source | Source service tag | Source port ranges | Destination | Service | Destination port ranges | Protocol | Action | Required | Purpose |
|:--:|:-:|::|:-:|:-:|:--:|:--:|::|:--:|:--|
| Service tag | AzureActiveDirectoryDomainServices | * | Any | WinRM | 5986 | TCP | Allow | Yes | Management of your domain |
- | Service tag | CorpNetSaw | * | Any | RDP | 3389 | TCP | Allow | Optional | Debugging for support |
+ | Service tag | CorpNetSaw | * | Any | RDP | 3389 | TCP | Allow | Optional | Debugging for support |
Make a note of the target resource group, target virtual network, and target virtual network subnet. These resource names are used during the migration process.
- Note that the **CorpNetSaw** service tag isn't available by using Azure portal, and the network security group rule for **CorpNetSaw** has to be added by using [PowerShell](powershell-create-instance.md#create-a-network-security-group).
+ > [!NOTE]
+ > The **CorpNetSaw** service tag isn't available by using Azure portal, and the network security group rule for **CorpNetSaw** has to be added by using [PowerShell](powershell-create-instance.md#create-a-network-security-group).
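For illustration only (the resource group, location, NSG name, and rule priority are placeholders), the following sketch creates a network security group with the required WinRM management rule by using Az PowerShell:

```powershell
# Allow Azure AD DS management traffic (WinRM) from the service tag.
$rule = New-AzNetworkSecurityRuleConfig -Name "AllowPSRemoting" `
    -Description "Management of your domain" `
    -Access Allow -Protocol Tcp -Direction Inbound -Priority 301 `
    -SourceAddressPrefix AzureActiveDirectoryDomainServices -SourcePortRange * `
    -DestinationAddressPrefix * -DestinationPortRange 5986

# Placeholder resource group, location, and NSG name.
New-AzNetworkSecurityGroup -ResourceGroupName myResourceGroup `
    -Location westus2 -Name aadds-nsg -SecurityRule $rule
```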
1. Check the managed domain health in the Azure portal. If you have any alerts for the managed domain, resolve them before you start the migration process.
1. Optionally, if you plan to move other resources to the Resource Manager deployment model and virtual network, confirm that those resources can be migrated. For more information, see [Platform-supported migration of IaaS resources from Classic to Resource Manager][migrate-iaas].
Before you begin the migration process, complete the following initial checks an
> [!NOTE]
> Don't convert the Classic virtual network to a Resource Manager virtual network. If you do, there's no option to roll back or restore the managed domain.
-## Prepare the managed domain for migration
-
-Azure PowerShell is used to prepare the managed domain for migration. These steps include taking a backup, pausing synchronization, and deleting the cloud service that hosts Azure AD DS. When this step completes, Azure AD DS is taken offline for a period of time. If the preparation step fails, you can [roll back to the previous state](#roll-back).
+## Perform offline migration
-To prepare the managed domain for migration, complete the following steps:
+Azure PowerShell is used to perform the offline migration of the managed domain:
1. Install the `Migrate-Aadds` script from the [PowerShell Gallery][powershell-script]. This PowerShell migration script is digitally signed by the Azure AD engineering team.
To prepare the managed domain for migration, complete the following steps:
```powershell
Install-Script -Name Migrate-Aadds
```
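If you want to verify that signature yourself, here's a small sketch; it assumes PowerShellGet metadata is available and that the installed script file is named *Migrate-Aadds.ps1*:

```powershell
# Confirm the installed migration script carries a valid Authenticode signature.
$script = Get-InstalledScript -Name Migrate-Aadds
Get-AuthenticodeSignature -FilePath (Join-Path $script.InstalledLocation 'Migrate-Aadds.ps1')
```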
-1. Create a variable to hold the credentials for by the migration script using the [Get-Credential][get-credential] cmdlet.
+2. Create a variable to hold the credentials used by the migration script, by using the [Get-Credential][get-credential] cmdlet.
The user account you specify needs [Application Administrator](../active-directory/roles/permissions-reference.md#application-administrator) and [Groups Administrator](../active-directory/roles/permissions-reference.md#groups-administrator) Azure AD roles in your tenant to enable Azure AD DS and [Domain Services Contributor](../role-based-access-control/built-in-roles.md#contributor) Azure role to create the required Azure AD DS resources.
To prepare the managed domain for migration, complete the following steps:
```powershell
$creds = Get-Credential
```
-1. Define a variable for your Azure subscription ID. If needed, you can use the [Get-AzSubscription](/powershell/module/az.accounts/get-azsubscription) cmdlet to list and view your subscription IDs. Provide your own subscription ID in the following command:
+3. Define a variable for your Azure subscription ID. If needed, you can use the [Get-AzSubscription](/powershell/module/az.accounts/get-azsubscription) cmdlet to list and view your subscription IDs. Provide your own subscription ID in the following command:
```powershell
$subscriptionId = 'yourSubscriptionId'
```
-1. Now run the `Migrate-Aadds` cmdlet using the *-Prepare* parameter. Provide the *-ManagedDomainFqdn* for your own managed domain, such as *aaddscontoso.com*:
+4. Now run the `Migrate-Aadds` cmdlet using the *-Offline* parameter. Provide the *-ManagedDomainFqdn* for your own managed domain, such as *aaddscontoso.com*. Specify the target resource group that contains the virtual network you want to migrate Azure AD DS to, such as *myResourceGroup*. Provide the target virtual network, such as *myVnet*, and the subnet, such as *DomainServices*. This step can take 1 to 3 hours to complete.
```powershell
Migrate-Aadds `
- -Prepare `
+ -Offline `
-ManagedDomainFqdn aaddscontoso.com `
+ -VirtualNetworkResourceGroupName myResourceGroup `
+ -VirtualNetworkName myVnet `
+ -VirtualSubnetName DomainServices `
   -Credentials $creds `
   -SubscriptionId $subscriptionId
```
-## Migrate the managed domain
-
-With the managed domain prepared and backed up, the domain can be migrated. This step recreates the Azure AD DS domain controller VMs using the Resource Manager deployment model. This step can take 1 to 3 hours to complete.
-
-Run the `Migrate-Aadds` cmdlet using the *-Commit* parameter. Provide the *-ManagedDomainFqdn* for your own managed domain prepared in the previous section, such as *aaddscontoso.com*.
-
-Specify the target resource group that contains the virtual network you want to migrate Azure AD DS to, such as *myResourceGroup*. Provide the target virtual network, such as *myVnet*, and the subnet, such as *DomainServices*.
-
-After this command runs, you can't then roll back:
-
-```powershell
-Migrate-Aadds `
- -Commit `
- -ManagedDomainFqdn aaddscontoso.com `
- -VirtualNetworkResourceGroupName myResourceGroup `
- -VirtualNetworkName myVnet `
- -VirtualSubnetName DomainServices `
- -Credentials $creds `
- -SubscriptionId $subscriptionId
-```
-
-After the script validates the managed domain is prepared for migration, enter *Y* to start the migration process.
- > [!IMPORTANT]
-> Don't convert the Classic virtual network to a Resource Manager virtual network during the migration process. If you convert the virtual network, you can't then rollback or restore the managed domain as the original virtual network won't exist anymore.
+> As part of the offline migration workflow, you cannot convert the Classic virtual network to a Resource Manager virtual network.
Every two minutes during the migration process, a progress indicator reports the current status, as shown in the following example output:
If needed, you can update the fine-grained password policy to be less restrictiv
1. Use a network trace on the VM to locate the source of the attacks and block those IP addresses from being able to attempt sign-ins.
1. When there are minimal lockout issues, update the fine-grained password policy to be as restrictive as necessary.
-## Roll back and restore from migration
-
-Up to a certain point in the migration process, you can choose to roll back or restore the managed domain.
-
-### Roll back
-
-If there's an error when you run the PowerShell cmdlet to prepare for migration in step 2 or for the migration itself in step 3, the managed domain can roll back to the original configuration. This roll back requires the original Classic virtual network. The IP addresses may still change after rollback.
-
-Run the `Migrate-Aadds` cmdlet using the *-Abort* parameter. Provide the *-ManagedDomainFqdn* for your own managed domain prepared in a previous section, such as *aaddscontoso.com*, and the Classic virtual network name, such as *myClassicVnet*:
-
-```powershell
-Migrate-Aadds `
- -Abort `
- -ManagedDomainFqdn aaddscontoso.com `
- -ClassicVirtualNetworkName myClassicVnet `
- -Credentials $creds `
- -SubscriptionId $subscriptionId
-```
-
-### Restore
-
-As a last resort, Azure AD Domain Services can be restored from the last available backup. A backup is taken in step 1 of the migration to make sure that the most current backup is available. This backup is stored for 30 days.
-
-To restore the managed domain from backup, [open a support case ticket using the Azure portal][azure-support]. Provide your directory ID, domain name, and reason for restore. The support and restore process may take multiple days to complete.
## Troubleshooting

If you have problems after migration to the Resource Manager deployment model, review some of the following common troubleshooting areas:
active-directory On Premises Application Provisioning Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/on-premises-application-provisioning-architecture.md
This article lists the versions and features of Azure Active Directory Connect P
Microsoft provides direct support for the latest agent version and one version before.

### Download link
-You can download the latest version of the agent using [this link](https://aka.ms/onpremprovisioningagent).
+On-premises app provisioning has been rolled into the provisioning agent and is available from the portal. See [installing the provisioning agent](../cloud-sync/how-to-install.md).
### 1.1.892.0
active-directory Plan Cloud Hr Provision https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/plan-cloud-hr-provision.md
Previously updated : 04/17/2023 Last updated : 04/18/2023
Run the initial configuration in a [pilot environment](../fundamentals/active-di
To facilitate Azure AD provisioning workflows between the cloud HR app and Active Directory, you can add multiple provisioning connector apps from the Azure AD app gallery:

- **Cloud HR app to Active Directory user provisioning**: This provisioning connector app facilitates user account provisioning from the cloud HR app to a single Active Directory domain. If you have multiple domains, you can add one instance of this app from the Azure AD app gallery for each Active Directory domain you need to provision to.
-- **Cloud HR app to Azure AD user provisioning**: While Azure AD Connect is the tool that should be used to synchronize Active Directory users to Azure AD, this provisioning connector app can be used to facilitate the provisioning of cloud-only users from the cloud HR app to a single Azure AD tenant.
+- **Cloud HR app to Azure AD user provisioning**: Azure AD Connect is the tool used to synchronize on-premises Active Directory users to Azure Active Directory. The Cloud HR app to Azure AD user provisioning connector app provisions cloud-only users from the cloud HR app to a single Azure AD tenant.
- **Cloud HR app write-back**: This provisioning connector app facilitates the write-back of the user's email addresses from Azure AD to the cloud HR app.

For example, the following image lists the Workday connector apps that are available in the Azure AD app gallery.
We recommend the following production configuration:
|Requirement|Recommendation|
|:-|:-|
-|Number of Azure AD Connect provisioning agents to deploy|Two (for high availability and failover)
-|Number of provisioning connector apps to configure|One app per child domain|
-|Server host for Azure AD Connect provisioning agent|Windows Server 2016 with line of sight to geolocated Active Directory domain controllers</br>Can coexist with Azure AD Connect service|
+|Number of Azure AD Connect provisioning agents to deploy.|Two (for high availability and failover).|
+|Number of provisioning connector apps to configure.|One app per child domain.|
+|Server host for Azure AD Connect provisioning agent.|Windows Server 2016 with line of sight to geolocated Active Directory domain controllers. </br>Can coexist with Azure AD Connect service.|
![Flow to on-premises agents](media/plan-cloud-hr-provision/plan-cloudhr-provisioning-img4.png)
We recommend the following production configuration:
|Requirement|Recommendation|
|:-|:-|
-|Number of Azure AD Connect provisioning agents to deploy on-premises|Two per disjoint Active Directory forest|
-|Number of provisioning connector apps to configure|One app per child domain|
-|Server host for Azure AD Connect provisioning agent|Windows Server 2016 with line of sight to geolocated Active Directory domain controllers</br>Can coexist with Azure AD Connect service|
+|Number of Azure AD Connect provisioning agents to deploy on-premises|Two per disjoint Active Directory forest.|
+|Number of provisioning connector apps to configure|One app per child domain.|
+|Server host for Azure AD Connect provisioning agent.|Windows Server 2016 with line of sight to geolocated Active Directory domain controllers. </br>Can coexist with Azure AD Connect service.|
![Single cloud HR app tenant disjoint Active Directory forest](media/plan-cloud-hr-provision/plan-cloudhr-provisioning-img5.png)

### Azure AD Connect provisioning agent requirements
-The cloud HR app to Active Directory user provisioning solution requires that you deploy one or more Azure AD Connect provisioning agents on servers that run Windows Server 2016 or greater. The servers must have a minimum of 4-GB RAM and .NET 4.7.1+ runtime. Ensure that the host server has network access to the target Active Directory domain.
+The cloud HR app to Active Directory user provisioning solution requires the deployment of one or more Azure AD Connect provisioning agents. These agents must be deployed on servers that run Windows Server 2016 or greater. The servers must have a minimum of 4-GB RAM and .NET 4.7.1+ runtime. Ensure that the host server has network access to the target Active Directory domain.
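As a quick host readiness check, the following sketch reports the values that matter; it isn't an official validator, and the registry value shown for .NET Framework 4.7.1 (461308 or higher) is the commonly documented threshold:

```powershell
# Report OS caption, installed RAM, and .NET Framework release of the agent host.
$os  = (Get-CimInstance Win32_OperatingSystem).Caption
$ram = [math]::Round((Get-CimInstance Win32_ComputerSystem).TotalPhysicalMemory / 1GB, 1)
$net = (Get-ItemProperty 'HKLM:\SOFTWARE\Microsoft\NET Framework Setup\NDP\v4\Full').Release

# .NET Framework 4.7.1 corresponds to a Release value of 461308 or higher.
"$os | $ram GB RAM | .NET release $net"
```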
To prepare the on-premises environment, the Azure AD Connect provisioning agent configuration wizard registers the agent with your Azure AD tenant, [opens ports](../app-proxy/application-proxy-add-on-premises-application.md#open-ports), [allows access to URLs](../app-proxy/application-proxy-add-on-premises-application.md#allow-access-to-urls), and supports [outbound HTTPS proxy configuration](../saas-apps/workday-inbound-tutorial.md#how-do-i-configure-the-provisioning-agent-to-use-a-proxy-server-for-outbound-http-communication).
This is the most common deployment topology. Use this topology, if you need to p
* Set up two provisioning agent nodes for high availability and failover.
* Use the [provisioning agent configuration wizard](../cloud-sync/how-to-install.md#install-the-agent) to register your AD domain with your Azure AD tenant.
* When configuring the provisioning app, select the AD domain from the dropdown of registered domains.
-* If you are using scoping filters, configure [skip out of scope deletions flag](skip-out-of-scope-deletions.md) to prevent accidental account deactivations.
+* If you're using scoping filters, configure [skip out of scope deletions flag](skip-out-of-scope-deletions.md) to prevent accidental account deactivations.
### Deployment topology 2: Separate apps to provision distinct user sets from Cloud HR to single on-premises Active Directory domain
This topology supports business requirements where attribute mapping and provisi
### Deployment topology 3: Separate apps to provision distinct user sets from Cloud HR to multiple on-premises Active Directory domains (no cross-domain visibility)
-Use this topology to manage multiple independent child AD domains belonging to the same forest, if managers always exist in the same domain as the user and your unique ID generation rules for attributes like *userPrincipalName*, *samAccountName* and *mail* does not require a forest-wide lookup. It also offers the flexibility of delegating the administration of each provisioning job by domain boundary.
+Use this topology to manage multiple independent child AD domains belonging to the same forest, if managers always exist in the same domain as the user and your unique ID generation rules for attributes like *userPrincipalName*, *samAccountName*, and *mail* don't require a forest-wide lookup. It also offers the flexibility of delegating the administration of each provisioning job by domain boundary.
For example: In the diagram below, the provisioning apps are set up for each geographic region: North America (NA), Europe, Middle East and Africa (EMEA) and Asia Pacific (APAC). Depending on the location, users are provisioned to the respective AD domain. Delegated administration of the provisioning app is possible so that *EMEA administrators* can independently manage the provisioning configuration of users belonging to the EMEA region.
For example: In the diagram below, the provisioning apps are set up for each geo
### Deployment topology 5: Single app to provision all users from Cloud HR to multiple on-premises Active Directory domains (with cross-domain visibility)
-Use this topology if you want to use a single provisioning app to manage users belonging to all your parent and child AD domains. This topology is recommended if provisioning rules are consistent across all domains and there is no requirement for delegated administration of provisioning jobs. This topology supports resolving cross-domain manager references and can perform forest-wide uniqueness check.
+Use this topology if you want to use a single provisioning app to manage users belonging to all your parent and child AD domains. This topology is recommended if provisioning rules are consistent across all domains and there's no requirement for delegated administration of provisioning jobs. This topology supports resolving cross-domain manager references and can perform forest-wide uniqueness check.
For example: In the diagram below, a single provisioning app manages users present in three different child domains grouped by region: North America (NA), Europe, Middle East and Africa (EMEA) and Asia Pacific (APAC). The attribute mapping for *parentDistinguishedName* is used to dynamically create a user in the appropriate child domain. Cross-domain manager references and forest-wide lookup are handled by enabling referral chasing on the provisioning agent.
For example: In the diagram below, a single provisioning app manages users prese
* Create a single HR2AD provisioning app for the entire forest. * When configuring the provisioning app, select the parent AD domain from the dropdown of available AD domains. This ensures forest-wide lookup while generating unique values for attributes like *userPrincipalName*, *samAccountName* and *mail*. * Use *parentDistinguishedName* with expression mapping to dynamically create user in the correct child domain and [OU container](#configure-active-directory-ou-container-assignment).
-* If you are using scoping filters, configure [skip out of scope deletions flag](skip-out-of-scope-deletions.md) to prevent accidental account deactivations.
+* If you're using scoping filters, configure [skip out of scope deletions flag](skip-out-of-scope-deletions.md) to prevent accidental account deactivations.
### Deployment topology 6: Separate apps to provision distinct users from Cloud HR to disconnected on-premises Active Directory forests
Use this topology if your IT infrastructure has disconnected/disjoint AD forests
### Deployment topology 7: Separate apps to provision distinct users from multiple Cloud HR to disconnected on-premises Active Directory forests
-In large organizations, it is not uncommon to have multiple HR systems. During business M&A (mergers and acquisitions) scenarios, you may come across a need to connect your on-premises Active Directory to multiple HR sources. We recommend the topology below if you have multiple HR sources and would like to channel the identity data from these HR sources to either the same or different on-premises Active Directory domains.
+In large organizations, it isn't uncommon to have multiple HR systems. During business M&A (mergers and acquisitions) scenarios, you may come across a need to connect your on-premises Active Directory to multiple HR sources. We recommend the topology below if you have multiple HR sources and would like to channel the identity data from these HR sources to either the same or different on-premises Active Directory domains.
:::image type="content" source="media/plan-cloud-hr-provision/topology-7-separate-apps-from-multiple-hr-to-disconnected-ad-forests.png" alt-text="Screenshot of separate apps to provision users from multiple Cloud HR to disconnected AD forests" lightbox="media/plan-cloud-hr-provision/topology-7-separate-apps-from-multiple-hr-to-disconnected-ad-forests.png":::
active-directory Concept Authentication Phone Options https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-authentication-phone-options.md
Previously updated : 01/29/2023 Last updated : 04/17/2023
Microsoft doesn't guarantee consistent SMS or voice-based Azure AD Multi-Factor
### Text message verification
-With text message verification during SSPR or Azure AD Multi-Factor Authentication, an SMS is sent to the mobile phone number containing a verification code. To complete the sign-in process, the verification code provided is entered into the sign-in interface.
+With text message verification during SSPR or Azure AD Multi-Factor Authentication, a Short Message Service (SMS) text is sent to the mobile phone number containing a verification code. To complete the sign-in process, the verification code provided is entered into the sign-in interface.
+
+Android users can enable Rich Communication Services (RCS) on their devices. RCS offers encryption and other improvements over SMS. For Android, MFA text messages may be sent over RCS rather than SMS. The MFA text message is similar to SMS, but RCS messages have more Microsoft branding and a verified checkmark so users know they can trust the message.
+
### Phone call verification
active-directory Concept Conditional Access Conditions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/concept-conditional-access-conditions.md
These browsers support device authentication, allowing the device to be identifi
> [!NOTE]
> Edge 85+ requires the user to be signed in to the browser to properly pass device identity. Otherwise, it behaves like Chrome without the accounts extension. This sign-in might not occur automatically in a Hybrid Azure AD Join scenario.
>
-> Safari is supported for device-based Conditional Access, but it can not satisfy the **Require approved client app** or **Require app protection policy** conditions. A managed browser like Microsoft Edge will satisfy approved client app and app protection policy requirements.
+> Safari is supported for device-based Conditional Access on a managed device, but it cannot satisfy the **Require approved client app** or **Require app protection policy** conditions. A managed browser like Microsoft Edge will satisfy approved client app and app protection policy requirements.
> On iOS with a third-party MDM solution, only the Microsoft Edge browser supports device policy.
>
> [Firefox 91+](https://support.mozilla.org/kb/windows-sso) is supported for device-based Conditional Access, but "Allow Windows single sign-on for Microsoft, work, and school accounts" needs to be enabled.
active-directory How To Connect Fed Group Claims https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-fed-group-claims.md
Some applications require the group membership information to appear in the role
Group filtering allows for fine control of the list of groups that's included as part of the group claim. When a filter is configured, only groups that match the filter will be included in the group's claim that's sent to that application. The filter will be applied against all groups regardless of the group hierarchy. > [!NOTE]
-> Group filtering applies to tokens emitted for apps where group claims and filtering was configured in the **Enterprise apps** blade in the portal.
+> Group filtering applies to tokens emitted for apps where group claims and filtering was configured in the **Enterprise apps** blade in the portal.
+> Group filtering does not apply to Azure AD Roles.
You can configure filters to be applied to the group's display name or `SAMAccountName` attribute. The following filtering operations are supported:
You can also configure group claims in the [optional claims](../../active-direct
| Selection | Description |
|-|-|
| `All` | Emits security groups, distribution lists, and roles. |
- | `SecurityGroup` | Emits security groups that the user is a member of in the group claim. |
+ | `SecurityGroup` | Emits security groups and Azure AD roles that the user is a member of in the group claim. |
| `DirectoryRole` | If the user is assigned directory roles, they're emitted as a `wids` claim. (A group claim won't be emitted.) |
| `ApplicationGroup` | Emits only the groups that are explicitly assigned to the application and that the user is a member of. |
| `None` | No groups are returned. (It's not case-sensitive, so `none` also works. It can be set directly in the application manifest.) |
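Besides the portal, one way to set this option on the application manifest is Microsoft Graph PowerShell. A minimal sketch, assuming the Microsoft.Graph module is installed; the application object ID is a placeholder:

```powershell
# Requires consent to update applications in the tenant.
Connect-MgGraph -Scopes "Application.ReadWrite.All"

# Placeholder application object ID; emit security groups in the group claim.
Update-MgApplication -ApplicationId "00000000-0000-0000-0000-000000000000" `
    -GroupMembershipClaims "SecurityGroup"
```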
active-directory How To Connect Health Data Retrieval https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-health-data-retrieval.md
To retrieve the email addresses for all of your users that are configured in Azu
4. On the **Notification Setting** blade, you will find the list of email addresses that have been enabled as recipients for health alert notifications.

   ![Emails](./media/how-to-connect-health-data-retrieval/retrieve5a.png)
-## Retrieve accounts that were flagged with AD FS Bad Password attempts
+## Retrieve all sync errors
-To retrieve accounts that were flagged with AD FS Bad Password attempts, use the following steps.
+To retrieve a list of all sync errors, use the following steps.
1. Starting on the Azure Active Directory Health blade, select **Sync Errors**.

   ![Sync errors](./media/how-to-connect-health-data-retrieval/retrieve6.png)
active-directory Protect Against Consent Phishing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/protect-against-consent-phishing.md
Administrators, users, or Microsoft security researchers may flag OAuth applicat
When Azure AD disables an OAuth application, the following actions occur:

- The malicious application and related service principals are placed into a fully disabled state. Any new token requests or requests for refresh tokens are denied, but existing access tokens are still valid until their expiration.
-- The disabled state is surfaced through an exposed property called *disabledByMicrosoftStatus* on the related [application](/graph/api/resources/application) and [service principal](/graph/api/resources/serviceprincipal) resource types in Microsoft Graph.
+- These applications will show `DisabledDueToViolationOfServicesAgreement` on the `disabledByMicrosoftStatus` property on the related [application](/graph/api/resources/application) and [service principal](/graph/api/resources/serviceprincipal) resource types in Microsoft Graph. These objects can't be deleted, which prevents them from being instantiated in your organization again in the future (see the sketch after this list).
- An email is sent to a global administrator when a user in an organization consented to an application before it was disabled. The email specifies the action taken and recommended steps they can do to investigate and improve their security posture. ## Recommended response and remediation
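To check whether any such applications exist in your tenant, here's a minimal sketch using Microsoft Graph PowerShell; client-side filtering is used for simplicity, and the module plus read consent are assumed:

```powershell
# Surface any service principals that Microsoft has disabled.
Connect-MgGraph -Scopes "Application.Read.All"

Get-MgServicePrincipal -All |
    Where-Object { $_.DisabledByMicrosoftStatus } |
    Select-Object DisplayName, AppId, DisabledByMicrosoftStatus
```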
Administrators should be in control of application use by providing the right in
- [Managing access to applications](./what-is-access-management.md) - [Restrict user consent operations in Azure AD](../../security/fundamentals/steps-secure-identity.md#restrict-user-consent-operations) - [Compromised and malicious applications investigation](/security/compass/incident-response-playbook-compromised-malicious-app)+
active-directory Concept Audit Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/concept-audit-logs.md
With an application-centric view, you can get answers to questions such as:
## How do I access it?
-The audit activity report is available in all editions of Azure AD. To access the audit logs, you need to have one of the following roles:
+To access the audit log for a tenant, you must have one of the following roles:
- Reports Reader
- Security Reader
The audit activity report is available in all editions of Azure AD. To access th
Sign in to the Azure portal and go to **Azure AD** and select **Audit log** from the **Monitoring** section.
-You can also access the audit log through the [Microsoft Graph API](/graph/api/resources/azure-ad-auditlog-overview).
+The audit activity report is available in [all editions of Azure AD](reference-reports-data-retention.md#how-long-does-azure-ad-store-the-data). If you have an Azure Active Directory P1 or P2 license, you can access the audit log through the [Microsoft Graph API](/graph/api/resources/azure-ad-auditlog-overview). See [Getting started with Azure Active Directory Premium](../fundamentals/active-directory-get-started-premium.md) to upgrade your Azure Active Directory edition. If there was no data activity before the upgrade, it can take a couple of days for data to appear in Microsoft Graph after you upgrade to a premium license.
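As an alternative to calling the API directly, here's a minimal sketch with Microsoft Graph PowerShell; it assumes a P1/P2 tenant, the Microsoft.Graph module, and consent to read audit logs:

```powershell
# Fetch the ten most recent directory audit events.
Connect-MgGraph -Scopes "AuditLog.Read.All"

Get-MgAuditLogDirectoryAudit -Top 10 |
    Select-Object ActivityDateTime, ActivityDisplayName, Result
```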
## What do the logs show?
active-directory Delegate By Task https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/delegate-by-task.md
You can further restrict permissions by assigning roles at smaller scopes or by
> | - | - | - |
> | Manage identity providers | [External Identity Provider Administrator](permissions-reference.md#external-identity-provider-administrator) | |
> | Manage settings | [Global Administrator](permissions-reference.md#global-administrator) | |
-> | Manage terms of use | [Global Administrator](permissions-reference.md#global-administrator) | |
+> | Manage privacy statement and contact | [Global Administrator](permissions-reference.md#global-administrator) | |
> | Read all configuration | [Global Reader](permissions-reference.md#global-reader) | |

## Password reset
aks Azure Cni Overlay https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-cni-overlay.md
Azure CNI Overlay has the following limitations:
- Windows Server 2019 node pools are **not** supported for Overlay
- Traffic from host network pods is not able to reach Windows Overlay pods.
- Sovereign Clouds are not supported
-- Virtual Machine Scale Sets (VMAS) are not supported for Overlay
+- Virtual Machine Availability Sets (VMAS) are not supported for Overlay
- Dualstack networking is not supported in Overlay - You can't use [DCsv2-series](/azure/virtual-machines/dcv2-series) virtual machines in node pools. To meet Confidential Computing requirements, consider using [DCasv5 or DCadsv5-series confidential VMs](/azure/virtual-machines/dcasv5-dcadsv5-series) instead.
aks Azure Netapp Files https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-netapp-files.md
Title: Provision Azure NetApp Files volumes on Azure Kubernetes Service
description: Learn how to provision Azure NetApp Files volumes on an Azure Kubernetes Service cluster.
Previously updated : 02/08/2023 Last updated : 04/18/2023

# Provision Azure NetApp Files volumes on Azure Kubernetes Service
Before proceeding to the next section, you need to:
This section walks you through the installation of Astra Trident using the operator.
-1. Download Astra Trident from its [GitHub repository](https://github.com/NetApp/trident/releases). Choose from the desired version and download the installer bundle.
-
- ```bash
- wget https://github.com/NetApp/trident/releases/download/v21.07.1/trident-installer-21.07.1.tar.gz
- tar xzvf trident-installer-21.07.1.tar.gz
- ```
-
-2. Run the [kubectl create][kubectl-create] command to create the *trident* namespace:
+1. Run the [kubectl create][kubectl-create] command to create the *trident* namespace:
```bash
kubectl create ns trident
This section walks you through the installation of Astra Trident using the opera
namespace/trident created
```
-3. Run the [kubectl apply][kubectl-apply] command to deploy the Trident operator using the bundle file:
+2. Run the [kubectl apply][kubectl-apply] command to deploy the Trident operator using the bundle file:
+ - For AKS cluster versions earlier than 1.25, run the following command:
+ ```bash
+ kubectl apply -f https://raw.githubusercontent.com/NetApp/trident/v23.01.1/deploy/bundle_pre_1_25.yaml -n trident
+ ```
+ - For AKS cluster version 1.25 or later, run the following command:
```bash
- kubectl apply -f trident-installer/deploy/bundle.yaml -n trident
+ kubectl apply -f https://raw.githubusercontent.com/NetApp/trident/v23.01.1/deploy/bundle_post_1_25.yaml -n trident
```

The output of the command resembles the following example:
This section walks you through the installation of Astra Trident using the opera
podsecuritypolicy.policy/tridentoperatorpods created
```
-4. Run the following command to create a `TridentOrchestrator` to install Astra Trident.
+3. Run the following command to create a `TridentOrchestrator` to install Astra Trident.
```bash
- kubectl apply -f trident-installer/deploy/crds/tridentorchestrator_cr.yaml
+ kubectl apply -f https://raw.githubusercontent.com/NetApp/trident/v23.01.1/deploy/crds/tridentorchestrator_cr.yaml
```

The output of the command resembles the following example:
This section walks you through the installation of Astra Trident using the opera
The operator installs by using the parameters provided in the `TridentOrchestrator` spec. You can learn about the configuration parameters and example backends from the [Trident install guide][trident-install-guide] and [backend guide][trident-backend-install-guide].
-5. To confirm Astra Trident was installed successfully, run the following [kubectl describe][kubectl-describe] command:
+4. To confirm Astra Trident was installed successfully, run the following [kubectl describe][kubectl-describe] command:
```bash
kubectl describe torc trident
This section walks you through the installation of Astra Trident using the opera
Current Installation Params:
    IPv6: false
    Autosupport Hostname:
- Autosupport Image: netapp/trident-autosupport:21.01
+ Autosupport Image: netapp/trident-autosupport:23.01
    Autosupport Proxy:
    Autosupport Serial Number:
    Debug: true
This section walks you through the installation of Astra Trident using the opera
    Kubelet Dir: /var/lib/kubelet
    Log Format: text
    Silence Autosupport: false
- Trident Image: netapp/trident:21.07.1
+ Trident Image: netapp/trident:23.01.1
    Message: Trident installed
    Namespace: trident
    Status: Installed
- Version: v21.07.1
+ Version: v23.01.1
    Events:
      Type  Reason  Age  From  Message
      ----  ------  ---  ----  -------
This section walks you through the installation of Astra Trident using the opera
### Create a backend
-1. Before creating a backend, you need to update `backend-anf.yaml` to include details about the Azure NetApp Files subscription, such as:
+1. Before creating a backend, you need to update [backend-anf.yaml][backend-anf.yaml] to include details about the Azure NetApp Files subscription, such as:
* `subscriptionID` for the Azure subscription where Azure NetApp Files will be enabled.
* `tenantID`, `clientID`, and `clientSecret` from an [App Registration][azure-ad-app-registration] in Azure Active Directory (AD) with sufficient permissions for the Azure NetApp Files service. The App Registration includes the `Owner` or `Contributor` role that's predefined by Azure.
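If you still need to create such an App Registration, one way is sketched below with Az PowerShell; the display name is a placeholder and the role assignment defaults to the current subscription scope. Treat it as a sketch, not the only supported path:

```powershell
# Create a service principal and collect the values backend-anf.yaml expects.
$sp = New-AzADServicePrincipal -DisplayName "trident-anf-sp" -Role "Contributor"

[pscustomobject]@{
    subscriptionID = (Get-AzContext).Subscription.Id
    tenantID       = (Get-AzContext).Tenant.Id
    clientID       = $sp.AppId
    clientSecret   = $sp.PasswordCredentials.SecretText
}
```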
This section walks you through the installation of Astra Trident using the opera
2. After Astra Trident is installed, create a backend that points to your Azure NetApp Files subscription by running the following command. ```bash
- kubectl apply -f trident-installer/sample-input/backends-samples/azure-netapp-files/backend-anf.yaml -n trident
+ kubectl apply -f backend-anf.yaml -n trident
```

The output of the command resembles the following example:
After the PVC is created, a pod can be spun up to access the Azure NetApp Files
  spec:
    containers:
    - name: nginx
- image: mcr.microsoft.com/oss/nginx/nginx:latest1.15.5-alpine
+ image: mcr.microsoft.com/oss/nginx/nginx:1.15.5-alpine
      resources:
        requests:
          cpu: 100m
Astra Trident supports many features with Azure NetApp Files. For more informati
<!-- EXTERNAL LINKS -->
[astra-trident]: https://docs.netapp.com/us-en/trident/index.html
+[kubectl-create]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#create
[kubectl-apply]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#apply
[kubectl-describe]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#describe
[kubectl-exec]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#exec
Astra Trident supports many features with Azure NetApp Files. For more informati
[expand-trident-volumes]: https://docs.netapp.com/us-en/trident/trident-use/vol-expansion.html
[on-demand-trident-volume-snapshots]: https://docs.netapp.com/us-en/trident/trident-use/vol-snapshots.html
[importing-trident-volumes]: https://docs.netapp.com/us-en/trident/trident-use/vol-import.html
+[backend-anf.yaml]: https://raw.githubusercontent.com/NetApp/trident/v23.01.1/trident-installer/sample-input/backends-samples/azure-netapp-files/backend-anf.yaml
<!-- INTERNAL LINKS --> [aks-quickstart-cli]: ./learn/quick-kubernetes-deploy-cli.md
aks Certificate Rotation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/certificate-rotation.md
This article shows you how certificate rotation works in your AKS cluster.
This article requires that you are running the Azure CLI version 2.0.77 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][azure-cli-install].
-## Limitation
-
-Certificate rotation is not supported for stopped AKS clusters.
- ## AKS certificates, Certificate Authorities, and Service Accounts AKS generates and uses the following certificates, Certificate Authorities, and Service Accounts:
aks Cilium Enterprise Marketplace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/cilium-enterprise-marketplace.md
+
+ Title: Isovalent Cilium Enterprise on Azure Marketplace (Preview)
+
+description: Learn about Isovalent Cilium Enterprise on Azure Marketplace and how to deploy it on Azure.
+++++ Last updated : 04/18/2023+++
+# Isovalent Cilium Enterprise on Azure Marketplace (Preview)
+
+Isovalent Cilium Enterprise on Azure Marketplace is a powerful tool for securing and managing Kubernetes workloads on Azure. Cilium Enterprise's range of features and easy deployment make it an ideal solution for organizations of all sizes looking to secure their cloud-native applications.
+
+Isovalent Cilium Enterprise is a network security platform for modern cloud-native workloads that provides visibility, security, and compliance across Kubernetes clusters. It uses eBPF technology to deliver network and application-layer security, while also providing observability and tracing for Kubernetes workloads. Azure Marketplace is an online store for buying and selling cloud computing solutions that allows you to deploy Isovalent Cilium Enterprise to Azure with ease.
++
+> [!IMPORTANT]
+> Isovalent Cilium Enterprise is currently in PREVIEW.
+> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+
+Designed for platform teams and using the power of eBPF, Isovalent Cilium Enterprise:
+
+* Combines network and runtime behavior with Kubernetes identity to provide a single source of data for cloud native forensics, audit, compliance monitoring, and threat detection. Isovalent Cilium Enterprise is integrated into your SIEM/Log aggregation platform of choice.
+
+* Scales effortlessly for any deployment size, with capabilities such as traffic management, load balancing, and infrastructure monitoring.
+
+* Fully backported and tested, and available with 24x7 support.
+
+* Enables self-service for monitoring, troubleshooting, and security workflows in Kubernetes. Teams can access current and historical views of flow data, metrics, and visualizations for their specific namespaces.
+
+> [!NOTE]
+> If you're upgrading an existing AKS cluster, it must have been created with Azure CNI powered by Cilium. For more information, see [Configure Azure CNI Powered by Cilium in Azure Kubernetes Service (AKS) (Preview)](azure-cni-powered-by-cilium.md).
+
+## Prerequisites
+
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+
+- An existing Azure Kubernetes Service (AKS) cluster running Azure CNI powered by Cilium. If you don't have an existing AKS cluster, you can create one from the Azure portal. For more information, see [Configure Azure CNI Powered by Cilium in Azure Kubernetes Service (AKS) (Preview)](azure-cni-powered-by-cilium.md).
+
+## Deploy Isovalent Cilium Enterprise on Azure Marketplace
+
+1. Sign in to the [Azure portal](https://portal.azure.com/).
+
+1. In the search box at the top of the portal, enter **Cilium Enterprise** and select **Isovalent Cilium Enterprise** from the results.
+
+1. In the **Basics** tab of **Create Isovalent Cilium Enterprise**, enter or select the following information:
+
+| Setting | Value |
+| | |
+| **Project details** | |
+| Subscription | Select your subscription |
+| Resource group | Select **Create new** </br> Enter **test-rg** in **Name**. </br> Select **OK**. </br> Or, select an existing resource group that contains your AKS cluster. |
+| **Instance details** | |
+| Supported Regions | Select **West US 2**. |
+| Create new dev cluster? | Leave the default of **No**. |
+
+1. Select **Next: Cluster Details**.
+
+1. Select your AKS cluster from the **AKS Cluster Name** dropdown.
+
+1. Select **Review + create**.
+
+1. Select **Create**.
+
+Azure deploys Isovalent Cilium Enterprise to your selected subscription and resource group. This process may take some time to complete.
+
+> [!IMPORTANT]
+> Marketplace applications are deployed as AKS extensions onto AKS clusters. If you're upgrading an existing AKS cluster, AKS replaces the Cilium OSS images with Isovalent Cilium Enterprise images seamlessly, without any downtime.
+
+When the deployment is complete, you can access Isovalent Cilium Enterprise by navigating to the resource group that contains the **Cilium Enterprise** resource in the Azure portal.
+
+Cilium can be reconfigured after deployment by updating the Helm values with Azure CLI:
+
+```azurecli
+az k8s-extension update -c <cluster> -t managedClusters -g <resource-group> -n cilium --configuration-settings debug.enabled=true
+```
+
+You can uninstall an Isovalent Cilium Enterprise offer using the AKS extension delete command. A per-cluster uninstall flow isn't available in Marketplace yet until ISVs stop selling the whole offer. For more information about AKS extension delete, see [az k8s-extension delete](/cli/azure/k8s-extension#az-k8s-extension-delete).
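+
+For example, a hedged sketch of removing the extension (the cluster, resource group, and extension names are placeholders):
+
+```azurecli
+az k8s-extension delete --cluster-name <cluster> --cluster-type managedClusters --resource-group <resource-group> --name cilium
+```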
+
+## Next steps
+
+- [Configure Azure CNI Powered by Cilium in Azure Kubernetes Service (AKS) (Preview)](azure-cni-powered-by-cilium.md)
+
+- [What is Azure Kubernetes Service?](intro-kubernetes.md)
aks Istio About https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/istio-about.md
+
+ Title: Istio-based service mesh add-on for Azure Kubernetes Service (preview)
+description: Istio-based service mesh add-on for Azure Kubernetes Service.
+ Last updated : 04/09/2023+++
+# Istio-based service mesh add-on for Azure Kubernetes Service (preview)
+
+[Istio][istio-overview] addresses the challenges developers and operators face with a distributed or microservices architecture. The Istio-based service mesh add-on provides an officially supported and tested integration for Azure Kubernetes Service (AKS).
++
+## What is a Service Mesh?
+
+Modern applications are typically architected as distributed collections of microservices, with each collection of microservices performing some discrete business function. A service mesh is a dedicated infrastructure layer that you can add to your applications. It allows you to transparently add capabilities like observability, traffic management, and security, without adding them to your own code. The term **service mesh** describes both the type of software you use to implement this pattern, and the security or network domain that is created when you use that software.
+
+As the deployment of distributed services, such as in a Kubernetes-based system, grows in size and complexity, it can become harder to understand and manage. You may need to implement capabilities such as discovery, load balancing, failure recovery, metrics, and monitoring. A service mesh can also address more complex operational requirements like A/B testing, canary deployments, rate limiting, access control, encryption, and end-to-end authentication.
+
+Service-to-service communication is what makes a distributed application possible. Routing this communication, both within and across application clusters, becomes increasingly complex as the number of services grows. Istio helps reduce this complexity while easing the strain on development teams.
+
+## What is Istio?
+
+Istio is an open-source service mesh that layers transparently onto existing distributed applications. Istio's powerful features provide a uniform and more efficient way to secure, connect, and monitor services. Istio enables load balancing, service-to-service authentication, and monitoring, with few or no service code changes. Its powerful control plane brings vital features, including:
+
+* Secure service-to-service communication in a cluster with TLS encryption, strong identity-based authentication and authorization.
+* Automatic load balancing for HTTP, gRPC, WebSocket, and TCP traffic.
+* Fine-grained control of traffic behavior with rich routing rules, retries, failovers, and fault injection.
+* A pluggable policy layer and configuration API supporting access controls, rate limits and quotas.
+* Automatic metrics, logs, and traces for all traffic within a cluster, including cluster ingress and egress.
+
+## How is the add-on different from open-source Istio?
+
+This service mesh add-on uses and builds on top of open-source Istio. The add-on flavor provides the following extra benefits:
+
+* Istio versions are tested and verified to be compatible with supported versions of Azure Kubernetes Service.
+* Microsoft handles scaling and configuration of the Istio control plane.
+* Microsoft adjusts scaling of AKS components like `coredns` when Istio is enabled.
+* Microsoft provides a managed lifecycle (upgrades) for Istio components when triggered by the user.
+* Verified external and internal ingress set-up.
+* Verified to work with [Azure Monitor managed service for Prometheus][managed-prometheus-overview] and [Azure Managed Grafana][managed-grafana-overview].
+* Official Azure support provided for the add-on.
+
+## Limitations
+
+Istio-based service mesh add-on for AKS has the following limitations:
+
+* The add-on currently doesn't work on AKS clusters using [Azure CNI Powered by Cilium][azure-cni-cilium].
+* The add-on doesn't work on AKS clusters that are using [Open Service Mesh addon for AKS][open-service-mesh-about].
+* The add-on doesn't work on AKS clusters that have Istio installed on them already outside the add-on installation.
+* The mesh lifecycle is managed: AKS controls how Istio versions are installed and when they're later made available for upgrades.
+* Istio doesn't support Windows Server containers.
+* Customization of the mesh based on the following custom resources is blocked for now: `EnvoyFilter`, `ProxyConfig`, `WorkloadEntry`, `WorkloadGroup`, `Telemetry`, `IstioOperator`, `WasmPlugin`.
+
+## Next steps
+
+* [Deploy Istio-based service mesh add-on][istio-deploy-addon]
+
+[istio-overview]: https://istio.io/latest/
+[managed-prometheus-overview]: ../azure-monitor/essentials/prometheus-metrics-overview.md
+[managed-grafana-overview]: ../managed-grafan
+[azure-cni-cilium]: azure-cni-powered-by-cilium.md
+[open-service-mesh-about]: open-service-mesh-about.md
+
+[istio-deploy-addon]: istio-deploy-addon.md
aks Istio Deploy Addon https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/istio-deploy-addon.md
+
+ Title: Deploy Istio-based service mesh add-on for Azure Kubernetes Service (preview)
+description: Deploy Istio-based service mesh add-on for Azure Kubernetes Service (preview)
++ Last updated : 04/09/2023+++
+# Deploy Istio-based service mesh add-on for Azure Kubernetes Service (preview)
+
+This article shows you how to install the Istio-based service mesh add-on for an Azure Kubernetes Service (AKS) cluster.
+
+For more information on Istio and the service mesh add-on, see [Istio-based service mesh add-on for Azure Kubernetes Service][istio-about].
++
+## Before you begin
+
+### Set environment variables
+
+```bash
+export CLUSTER=<cluster-name>
+export RESOURCE_GROUP=<resource-group-name>
+export LOCATION=<location>
+```
+
+### Verify Azure CLI and aks-preview extension versions
+The add-on requires:
+* Azure CLI version 2.44.0 or later installed. To install or upgrade, see [Install Azure CLI][install-azure-cli].
+* `aks-preview` Azure CLI extension version 0.5.133 or later installed
+
+You can run `az --version` to verify these versions.
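+
+For example, a quick way to check both at once (a minimal sketch; `az --version` lists the CLI core version and any installed extensions):
+
+```azurecli-interactive
+az --version | grep -E 'azure-cli|aks-preview'
+```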
+
+To install the aks-preview extension, run the following command:
+
+```azurecli-interactive
+az extension add --name aks-preview
+```
+
+Run the following command to update to the latest version of the extension released:
+
+```azurecli-interactive
+az extension update --name aks-preview
+```
+
+### Register the _AzureServiceMeshPreview_ feature flag
+
+Register the `AzureServiceMeshPreview` feature flag by using the [az feature register][az-feature-register] command:
+
+```azurecli-interactive
+az feature register --namespace "Microsoft.ContainerService" --name "AzureServiceMeshPreview"
+```
+
+It takes a few minutes for the feature to register. Verify the registration status by using the [az feature show][az-feature-show] command:
+
+```azurecli-interactive
+az feature show --namespace "Microsoft.ContainerService" --name "AzureServiceMeshPreview"
+```
+
+When the status reflects *Registered*, refresh the registration of the *Microsoft.ContainerService* resource provider by using the [az provider register][az-provider-register] command:
+
+```azurecli-interactive
+az provider register --namespace Microsoft.ContainerService
+```
+
+## Install Istio add-on at the time of cluster creation
+
+To install the Istio add-on when creating the cluster, use the `--enable-azure-service-mesh` or `--enable-asm` parameter.
+
+```azurecli-interactive
+az group create --name ${RESOURCE_GROUP} --location ${LOCATION}
+
+az aks create \
+--resource-group ${RESOURCE_GROUP} \
+--name ${CLUSTER} \
+--enable-asm
+```
+
+## Install Istio add-on for existing cluster
+
+The following example enables Istio add-on for an existing AKS cluster:
+
+> [!IMPORTANT]
+> You can't enable the Istio add-on on an existing cluster if an OSM add-on is already on your cluster. Uninstall the OSM add-on before installing the Istio add-on.
+> For more information, see [uninstall the OSM add-on from your AKS cluster][uninstall-osm-addon].
+> The Istio add-on can only be enabled on AKS clusters running Kubernetes version 1.23 or later.
+
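+Before enabling the add-on, you can check the cluster's current Kubernetes version against that 1.23 minimum (a quick check, using the variables set earlier):
+
+```azurecli-interactive
+az aks show --resource-group ${RESOURCE_GROUP} --name ${CLUSTER} --query kubernetesVersion -o tsv
+```
+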
+```azurecli-interactive
+az aks mesh enable --resource-group ${RESOURCE_GROUP} --name ${CLUSTER}
+```
+
+## Verify successful installation
+
+To verify the Istio add-on is installed on your cluster, run the following command:
+
+```azurecli-interactive
+az aks show --resource-group ${RESOURCE_GROUP} --name ${CLUSTER} --query 'serviceMeshProfile.mode'
+```
+
+Confirm the output shows `Istio`.
+
+Use `az aks get-credentials` to download the credentials for your AKS cluster:
+
+```azurecli-interactive
+az aks get-credentials --resource-group ${RESOURCE_GROUP} --name ${CLUSTER}
+```
+
+Use `kubectl` to verify that `istiod` (Istio control plane) pods are running successfully:
+
+```bash
+kubectl get pods -n aks-istio-system
+```
+
+Confirm the `istiod` pod has a status of `Running`. For example:
+
+```
+NAME READY STATUS RESTARTS AGE
+istiod-asm-1-17-74f7f7c46c-xfdtl 2/2 Running 0 2m
+```
+
+## Enable sidecar injection
+
+To automatically inject a sidecar into any new pods, label your namespaces:
+
+```bash
+kubectl label namespace default istio.io/rev=asm-1-17
+```
+
+> [!IMPORTANT]
+> The default `istio-injection=enabled` labeling doesn't work. Explicit versioning (`istio.io/rev=asm-1-17`) is required.
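+
+You can confirm the label was applied (a quick check):
+
+```bash
+kubectl get namespace default --show-labels
+```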
++
+For manual injection of sidecar using `istioctl kube-inject`, you need to specify extra parameters for `istioNamespace` (`-i`) and `revision` (`-r`). Example:
+
+```bash
+kubectl apply -f <(istioctl kube-inject -f sample.yaml -i aks-istio-system -r asm-1-17) -n foo
+```
+
+## Deploy sample application
+
+Use `kubectl apply` to deploy the sample application on the cluster:
+
+```bash
+kubectl apply -f https://raw.githubusercontent.com/istio/istio/release-1.17/samples/bookinfo/platform/kube/bookinfo.yaml
+```
+
+Confirm several deployments and services are created on your cluster. For example:
+
+```
+service/details created
+serviceaccount/bookinfo-details created
+deployment.apps/details-v1 created
+service/ratings created
+serviceaccount/bookinfo-ratings created
+deployment.apps/ratings-v1 created
+service/reviews created
+serviceaccount/bookinfo-reviews created
+deployment.apps/reviews-v1 created
+deployment.apps/reviews-v2 created
+deployment.apps/reviews-v3 created
+service/productpage created
+serviceaccount/bookinfo-productpage created
+deployment.apps/productpage-v1 created
+```
+
+Use `kubectl get services` to verify that the services were created successfully:
+
+```bash
+kubectl get services
+```
+
+Confirm the following services were deployed:
+
+```
+NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
+details ClusterIP 10.0.180.193 <none> 9080/TCP 87s
+kubernetes ClusterIP 10.0.0.1 <none> 443/TCP 15m
+productpage ClusterIP 10.0.112.238 <none> 9080/TCP 86s
+ratings ClusterIP 10.0.15.201 <none> 9080/TCP 86s
+reviews ClusterIP 10.0.73.95 <none> 9080/TCP 86s
+```
+
+```bash
+kubectl get pods
+```
+
+Confirm that all the pods have status of `Running`.
+
+```
+NAME READY STATUS RESTARTS AGE
+details-v1-558b8b4b76-2llld 2/2 Running 0 2m41s
+productpage-v1-6987489c74-lpkgl 2/2 Running 0 2m40s
+ratings-v1-7dc98c7588-vzftc 2/2 Running 0 2m41s
+reviews-v1-7f99cc4496-gdxfn 2/2 Running 0 2m41s
+reviews-v2-7d79d5bd5d-8zzqd 2/2 Running 0 2m41s
+reviews-v3-7dbcdcbc56-m8dph 2/2 Running 0 2m41s
+```
+
+> [!NOTE]
+> Each pod has two containers, one of which is the Envoy sidecar injected by Istio and the other is the application container.
+
+To test this sample application against ingress, check out [next-steps](#next-steps).
+
+## Delete resources
+
+Use `kubectl delete` to delete the sample application:
+
+```bash
+kubectl delete -f https://raw.githubusercontent.com/istio/istio/release-1.17/samples/bookinfo/platform/kube/bookinfo.yaml
+```
+
+If you don't intend to enable Istio ingress on your cluster and want to disable the Istio add-on, run the following command:
+
+```azurecli-interactive
+az aks mesh disable --resource-group ${RESOURCE_GROUP} --name ${CLUSTER}
+```
+
+> [!CAUTION]
+> Disabling the service mesh addon will completely remove the Istio control plane from the cluster.
+
+Istio `CustomResourceDefinition`s (CRDs) aren't deleted by default. To clean them up, use:
+
+```bash
+kubectl delete crd $(kubectl get crd -A | grep "istio.io" | awk '{print $1}')
+```
+
+Use `az group delete` to delete your cluster and the associated resources:
+
+```azurecli-interactive
+az group delete --name ${RESOURCE_GROUP} --yes --no-wait
+```
+
+## Next steps
+
+* [Deploy external or internal ingresses for Istio service mesh add-on][istio-deploy-ingress]
+
+[istio-about]: istio-about.md
+
+[azure-cli-install]: /cli/azure/install-azure-cli
+[az-feature-register]: /cli/azure/feature#az-feature-register
+[az-feature-show]: /cli/azure/feature#az-feature-show
+[az-provider-register]: /cli/azure/provider#az-provider-register
+
+[uninstall-osm-addon]: open-service-mesh-uninstall-add-on.md
+[uninstall-istio-oss]: https://istio.io/latest/docs/setup/install/istioctl/#uninstall-istio
+
+[istio-deploy-ingress]: istio-deploy-ingress.md
aks Istio Deploy Ingress https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/istio-deploy-ingress.md
+
+ Title: Deploy external or internal ingresses for Istio service mesh add-on for Azure Kubernetes Service (preview)
+description: Deploy external or internal ingresses for Istio service mesh add-on for Azure Kubernetes Service (preview)
++ Last updated : 04/09/2023+++
+# Deploy external or internal ingresses for Istio service mesh add-on for Azure Kubernetes Service (preview)
+
+This article shows you how to deploy external or internal ingresses for the Istio service mesh add-on on an Azure Kubernetes Service (AKS) cluster.
++
+## Prerequisites
+
+This guide assumes you followed the [documentation][istio-deploy-addon] to enable the Istio add-on on an AKS cluster, deploy a sample application, and set environment variables.
+
+## Enable external ingress gateway
+
+Use `az aks mesh enable-ingress-gateway` to enable an externally accessible Istio ingress on your AKS cluster:
+
+```azurecli-interactive
+az aks mesh enable-ingress-gateway --resource-group $RESOURCE_GROUP --name $CLUSTER --ingress-gateway-type external
+```
+
+Use `kubectl get svc` to check the service mapped to the ingress gateway:
+
+```bash
+kubectl get svc aks-istio-ingressgateway-external -n aks-istio-ingress
+```
+
+Observe from the output that the external IP address of the service is a publicly accessible one:
+
+```
+NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
+aks-istio-ingressgateway-external LoadBalancer 10.0.10.249 <EXTERNAL_IP> 15021:30705/TCP,80:32444/TCP,443:31728/TCP 4m21s
+```
+
+Applications aren't accessible from outside the cluster by default after enabling the ingress gateway. To make an application accessible, map the sample deployment's ingress to the Istio ingress gateway using the following manifest:
+
+```bash
+kubectl apply -f - <<EOF
+apiVersion: networking.istio.io/v1alpha3
+kind: Gateway
+metadata:
+ name: bookinfo-gateway-external
+spec:
+ selector:
+ istio: aks-istio-ingressgateway-external
+ servers:
+ - port:
+ number: 80
+ name: http
+ protocol: HTTP
+ hosts:
+ - "*"
+---
+apiVersion: networking.istio.io/v1alpha3
+kind: VirtualService
+metadata:
+ name: bookinfo-vs-external
+spec:
+ hosts:
+ - "*"
+ gateways:
+ - bookinfo-gateway-external
+ http:
+ - match:
+ - uri:
+ exact: /productpage
+ - uri:
+ prefix: /static
+ - uri:
+ exact: /login
+ - uri:
+ exact: /logout
+ - uri:
+ prefix: /api/v1/products
+ route:
+ - destination:
+ host: productpage
+ port:
+ number: 9080
+EOF
+```
+
+> [!NOTE]
+> The selector used in the Gateway object points to `istio: aks-istio-ingressgateway-external`, which can be found as a label on the service mapped to the external ingress that was enabled earlier.
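+
+You can verify that label directly (a quick check):
+
+```bash
+kubectl get svc aks-istio-ingressgateway-external -n aks-istio-ingress --show-labels
+```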
+
+Set environment variables for external ingress host and ports:
+
+```bash
+export INGRESS_HOST_EXTERNAL=$(kubectl -n aks-istio-ingress get service aks-istio-ingressgateway-external -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
+export INGRESS_PORT_EXTERNAL=$(kubectl -n aks-istio-ingress get service aks-istio-ingressgateway-external -o jsonpath='{.spec.ports[?(@.name=="http2")].port}')
+export GATEWAY_URL_EXTERNAL=$INGRESS_HOST_EXTERNAL:$INGRESS_PORT_EXTERNAL
+```
+
+Retrieve the external address of the sample application:
+
+```bash
+echo "http://$GATEWAY_URL_EXTERNAL/productpage"
+```
+
+Navigate to the URL from the output of the previous command and confirm that the sample application's product page is displayed. Alternatively, you can also use `curl` to confirm the sample application is accessible. For example:
+
+```bash
+curl -s "http://${GATEWAY_URL_EXTERNAL}/productpage" | grep -o "<title>.*</title>"
+```
+
+Confirm that the sample application's product page is accessible. The expected output is:
+
+```html
+<title>Simple Bookstore App</title>
+```
+
+## Enable internal ingress gateway
+
+Use `az aks mesh enable-ingress-gateway` to enable an internal Istio ingress on your AKS cluster:
+
+```azurecli-interactive
+az aks mesh enable-ingress-gateway --resource-group $RESOURCE_GROUP --name $CLUSTER --ingress-gateway-type internal
+```
++
+Use `kubectl get svc` to check the service mapped to the ingress gateway:
+
+```bash
+kubectl get svc aks-istio-ingressgateway-internal -n aks-istio-ingress
+```
+
+Observe from the output that the external IP address of the service isn't publicly accessible and is instead only reachable from within the cluster's virtual network:
+
+```
+NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
+aks-istio-ingressgateway-internal LoadBalancer 10.0.182.240 <IP> 15021:30764/TCP,80:32186/TCP,443:31713/TCP 87s
+```
+
+Applications aren't mapped to the Istio ingress gateway after enabling the ingress gateway. Use the following manifest to map the sample deployment's ingress to the Istio ingress gateway:
+
+```bash
+kubectl apply -f - <<EOF
+apiVersion: networking.istio.io/v1alpha3
+kind: Gateway
+metadata:
+ name: bookinfo-internal-gateway
+spec:
+ selector:
+ istio: aks-istio-ingressgateway-internal
+ servers:
+ - port:
+ number: 80
+ name: http
+ protocol: HTTP
+ hosts:
+ - "*"
+---
+apiVersion: networking.istio.io/v1alpha3
+kind: VirtualService
+metadata:
+ name: bookinfo-vs-internal
+spec:
+ hosts:
+ - "*"
+ gateways:
+ - bookinfo-internal-gateway
+ http:
+ - match:
+ - uri:
+ exact: /productpage
+ - uri:
+ prefix: /static
+ - uri:
+ exact: /login
+ - uri:
+ exact: /logout
+ - uri:
+ prefix: /api/v1/products
+ route:
+ - destination:
+ host: productpage
+ port:
+ number: 9080
+EOF
+```
+
+> [!NOTE]
+> The selector used in the Gateway object points to `istio: aks-istio-ingressgateway-internal`, which can be found as a label on the service mapped to the internal ingress that was enabled earlier.
+
+Set environment variables for internal ingress host and ports:
+
+```bash
+export INGRESS_HOST_INTERNAL=$(kubectl -n aks-istio-ingress get service aks-istio-ingressgateway-internal -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
+export INGRESS_PORT_INTERNAL=$(kubectl -n aks-istio-ingress get service aks-istio-ingressgateway-internal -o jsonpath='{.spec.ports[?(@.name=="http2")].port}')
+export GATEWAY_URL_INTERNAL=$INGRESS_HOST_INTERNAL:$INGRESS_PORT_INTERNAL
+```
+
+Retrieve the address of the sample application:
+
+```bash
+echo "http://$GATEWAY_URL_INTERNAL/productpage"
+```
+
+Navigate to the URL from the output of the previous command and confirm that the sample application's product page is **NOT** displayed. Alternatively, you can also use `curl` to confirm the sample application is **NOT** accessible. For example:
+
+```bash
+curl -s "http://${GATEWAY_URL_INTERNAL}/productpage" | grep -o "<title>.*</title>"
+```
+
Use `kubectl exec` to confirm the application is accessible from inside the cluster's virtual network:
+
+```bash
+kubectl exec "$(kubectl get pod -l app=ratings -o jsonpath='{.items[0].metadata.name}')" -c ratings -- curl -sS "http://$GATEWAY_URL_INTERNAL/productpage" | grep -o "<title>.*</title>"
+```
+
+Confirm that the sample application's product page is accessible. The expected output is:
+
+```html
+<title>Simple Bookstore App</title>
+```
+
+## Delete resources
+
+If you want to clean up the Istio service mesh and the ingresses (leaving behind the cluster), run the following command:
+
+```azurecli-interactive
+az aks mesh disable --resource-group ${RESOURCE_GROUP} --name ${CLUSTER}
+```
+
+If you want to clean up all the resources created from the Istio how-to guidance documents, run the following command:
+
+```azurecli-interactive
+az group delete --name ${RESOURCE_GROUP} --yes --no-wait
+```
+
+[istio-deploy-addon]: istio-deploy-addon.md
aks Open Service Mesh About https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/open-service-mesh-about.md
Last updated 04/06/2023
-# Open Service Mesh (OSM) add-on in Azure Kubernetes Service (OSM)
+# Open Service Mesh (OSM) add-on in Azure Kubernetes Service (AKS)
[Open Service Mesh (OSM)](https://docs.openservicemesh.io/) is a lightweight, extensible, cloud native service mesh that allows you to uniformly manage, secure, and get out-of-the-box observability features for highly dynamic microservice environments.
The OSM AKS add-on has the following limitations:
- After installation, you must enable Iptables redirection for port IP address and port range exclusion using `kubectl patch`. For more information, see [iptables redirection][ip-tables-redirection]. - Any pods that need access to IMDS, Azure DNS, or the Kubernetes API server must have their IP addresses added to the global list of excluded outbound IP ranges using [Global outbound IP range exclusions][global-exclusion].
+- The add-on doesn't work on AKS clusters that are using the [Istio-based service mesh add-on for AKS][istio-about].
- OSM doesn't support Windows Server containers. ## Next steps
After enabling the OSM add-on using the [Azure CLI][osm-azure-cli] or a [Bicep t
[osm-contour]: https://release-v1-2.docs.openservicemesh.io/docs/demos/ingress_contour [osm-nginx]: https://release-v1-2.docs.openservicemesh.io/docs/demos/ingress_k8s_nginx [web-app-routing]: web-app-routing.md
+[istio-about]: istio-about.md
aks Supported Kubernetes Versions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/supported-kubernetes-versions.md
Title: Supported Kubernetes versions in Azure Kubernetes Service
-description: Understand the Kubernetes version support policy and lifecycle of clusters in Azure Kubernetes Service (AKS)
+ Title: Supported Kubernetes versions in Azure Kubernetes Service (AKS)
+description: Learn the Kubernetes version support policy and lifecycle of clusters in Azure Kubernetes Service (AKS).
Last updated 11/21/2022
Aim to run the latest patch release of the minor version you're running. For exa
View the upcoming version releases on the AKS Kubernetes release calendar. To see real-time updates of region release status and version release notes, visit the [AKS release status webpage][aks-release]. To learn more about the release status webpage, see [AKS release tracker][aks-tracker]. > [!NOTE]
-> AKS follows 12 months of support for a generally available (GA) Kubernetes version. To read more about our support policy for Kubernetes versioning, please read our [FAQ](https://learn.microsoft.com/azure/aks/supported-kubernetes-versions?tabs=azure-cli#faq).
+> AKS follows 12 months of support for a generally available (GA) Kubernetes version. To read more about our support policy for Kubernetes versioning, please read our [FAQ](./supported-kubernetes-versions.md#faq).
For the past release history, see [Kubernetes history](https://en.wikipedia.org/wiki/Kubernetes#History).
For the past release history, see [Kubernetes history](https://en.wikipedia.org/
With AKS, you can create a cluster without specifying the exact patch version. When you create a cluster without designating a patch, the cluster will run the minor version's latest GA patch. For example, if you create a cluster with **`1.21`**, your cluster will run **`1.21.7`**, which is the latest GA patch version of *1.21*.
-When you upgrade by alias minor version, only a higher minor version is supported. For example, upgrading from `1.14.x` to `1.14` won't trigger an upgrade to the latest GA `1.14` patch, but upgrading to `1.15` will trigger an upgrade to the latest GA `1.15` patch. If you wish to upgrade your patch version in the same minor version, please use [auto-upgrade](https://learn.microsoft.com/azure/aks/auto-upgrade-cluster#using-cluster-auto-upgrade).
+When you upgrade by alias minor version, only a higher minor version is supported. For example, upgrading from `1.14.x` to `1.14` won't trigger an upgrade to the latest GA `1.14` patch, but upgrading to `1.15` will trigger an upgrade to the latest GA `1.15` patch. If you wish to upgrade your patch version in the same minor version, please use [auto-upgrade](./auto-upgrade-cluster.md#using-cluster-auto-upgrade).
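+
+For example, a minimal sketch of creating by alias minor version and later upgrading by alias (resource names and versions are illustrative):
+
+```azurecli-interactive
+# Creating with an alias minor version runs that minor's latest GA patch
+az aks create --resource-group myResourceGroup --name myAKSCluster --kubernetes-version 1.21
+# Upgrading by alias to the next minor moves to its latest GA patch
+az aks upgrade --resource-group myResourceGroup --name myAKSCluster --kubernetes-version 1.22
+```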
To see what patch you're on, run the `az aks show --resource-group myResourceGroup --name myAKSCluster` command. The `currentKubernetesVersion` property shows the whole Kubernetes version.
To see what patch you're on, run the `az aks show --resource-group myResourceGro
## Kubernetes version support policy
-AKS defines a GA version as a version enabled in all SLO or SLA measurements and available in all regions. AKS supports three GA minor versions of Kubernetes:
+AKS defines a generally available (GA) version as a version available in all regions and enabled in all SLO or SLA measurements. AKS supports three GA minor versions of Kubernetes:
-* The latest GA minor version released in AKS (which we'll refer to as N).
+* The latest GA minor version released in AKS (which we'll refer to as *N*).
* Two previous minor versions.
- * Each supported minor version also supports a maximum of two (2) stable patches.
+ * Each supported minor version also supports a maximum of two stable patches.
AKS may also support preview versions, which are explicitly labeled and subject to [preview terms and conditions][preview-terms]. > [!NOTE] > AKS uses safe deployment practices which involve gradual region deployment. This means it may take up to 10 business days for a new release or a new version to be available in all regions.
-The supported window of Kubernetes versions on AKS is known as "N-2": (N (Latest release) - 2 (minor versions)).
+The supported window of Kubernetes versions on AKS is known as "N-2": (N (latest release) - 2 (minor versions)), where ".letter" represents patch versions.
For example, if AKS introduces *1.17.a* today, support is provided for the following versions:
When a new minor version is introduced, the oldest minor version and patch relea
When AKS releases 1.18.\*, all the 1.15.\* versions go out of support 30 days later. > [!NOTE]
-> If customers are running an unsupported Kubernetes version, they'll be asked to upgrade when requesting support for the cluster. Clusters running unsupported Kubernetes releases aren't covered by the [AKS support policies](./support-policies.md).
+> If you're running an unsupported Kubernetes version, you'll be asked to upgrade when requesting support for the cluster. Clusters running unsupported Kubernetes releases aren't covered by the [AKS support policies](./support-policies.md).
-In addition to the above, AKS supports a maximum of two **patch** releases of a given minor version. So given the following supported versions:
+AKS also supports a maximum of two **patch** releases of a given minor version. For example, given the following supported versions:
``` Current Supported Version List
Install-AzAksKubectl -Version latest
+## Long Term Support (LTS)
+
+AKS provides a Long Term Support (LTS) version of Kubernetes for a two-year period. There's only a single minor version of Kubernetes deemed LTS at any one time.
+
+| | Community Support |Long Term Support |
+||||
+| **When to use** | When you can keep up with upstream Kubernetes releases | When you need control over when to migrate from one version to another |
| **Supported versions** | Three GA minor versions | One Kubernetes version (currently *1.27*) for two years |
+| **Pricing** | Included | Per hour cluster cost |
+
+The upstream community maintains a minor release of Kubernetes for one year from release. After this period, Microsoft creates and applies security updates to the LTS version of Kubernetes to provide a total of two years of support on AKS.
+
+> [!IMPORTANT]
+> AKS will begin its support for the LTS version of Kubernetes upon the release of Kubernetes version 1.27.
+ ## Release and deprecation process You can reference upcoming version releases and deprecations on the [AKS Kubernetes release calendar](#aks-kubernetes-release-calendar). For new **minor** versions of Kubernetes:
-* AKS publishes a pre-announcement with the planned date of the new version release and respective old version deprecation. This announcement is published on the [AKS release notes](https://aka.ms/aks/releasenotes) at least 30 days before removal.
-* AKS uses [Azure Advisor](../advisor/advisor-overview.md) to alert users if a new version will cause issues in their cluster because of deprecated APIs. Azure Advisor is also used to alert the user if they're currently out of support.
+* AKS publishes an announcement with the planned date of a new version release and the respective old version's deprecation on the [AKS release notes](https://aka.ms/aks/releasenotes) at least 30 days prior to removal.
+* AKS uses [Azure Advisor](../advisor/advisor-overview.md) to alert you if a new version could cause issues in your cluster because of deprecated APIs. Azure Advisor also alerts you if you're out of support.
* AKS publishes a [service health notification](../service-health/service-health-overview.md) available to all users with AKS and portal access and sends an email to the subscription administrators with the planned version removal dates.- > [!NOTE]
- > Visit [manage Azure subscriptions](../cost-management-billing/manage/add-change-subscription-administrator.md#assign-a-subscription-administrator) to determine who your subscription administrators are and make any necessary changes.
-
-* Users have **30 days** from version removal to upgrade to a supported minor version release to continue receiving support.
+ > To find out who your subscription administrators are, or to change them, see [manage Azure subscriptions](../cost-management-billing/manage/add-change-subscription-administrator.md#assign-a-subscription-administrator).
+* You have **30 days** from version removal to upgrade to a supported minor version release to continue receiving support.
For new **patch** versions of Kubernetes:
-* Because of the urgent nature of patch versions, they can be introduced into the service as they become available. Once available, patches will have a two month minimum lifecycle.
-* In general, AKS doesn't broadly communicate the release of new patch versions. However, AKS constantly monitors and validates available CVE patches to support them in AKS in a timely manner. If a critical patch is found or user action is required, AKS will notify users to upgrade to the newly available patch.
-* Users have **30 days** from a patch release's removal from AKS to upgrade into a supported patch and continue receiving support. However, you'll **no longer be able to create clusters or node pools once the version is deprecated/removed.**
+* Because of the urgent nature of patch versions, they can be introduced into the service as they become available. Once available, patches have a two month minimum lifecycle.
+* In general, AKS doesn't broadly communicate the release of new patch versions. However, AKS constantly monitors and validates available CVE patches to support them in AKS in a timely manner. If a critical patch is found or user action is required, AKS will notify you to upgrade to the newly available patch.
+* You have **30 days** from a patch release's removal from AKS to upgrade into a supported patch and continue receiving support. However, you'll **no longer be able to create clusters or node pools once the version is deprecated/removed.**
### Supported versions policy exceptions
When you deploy an AKS cluster with Azure portal, Azure CLI, Azure PowerShell, t
### [Azure CLI](#tab/azure-cli) To find out what versions are currently available for your subscription and region, use the
-[az aks get-versions][az-aks-get-versions] command. The following example lists available Kubernetes versions for the *EastUS* region:
+[`az aks get-versions`][az-aks-get-versions] command. The following example lists the available Kubernetes versions for the *EastUS* region:
```azurecli-interactive az aks get-versions --location eastus --output table
Get-AzAksVersion -Location eastus
### How does Microsoft notify me of new Kubernetes versions?
-The AKS team publishes pre-announcements with planned dates of the new Kubernetes versions in the AKS docs, our [GitHub](https://github.com/Azure/AKS/releases), and emails to subscription administrators who own clusters that are going to fall out of support. AKS also uses [Azure Advisor](../advisor/advisor-overview.md) to alert customers in the Azure portal to notify users if they're out of support. It also alerts them of deprecated APIs that will affect their application or development processes.
+The AKS team publishes announcements with planned dates of the new Kubernetes versions in our documentation, our [GitHub](https://github.com/Azure/AKS/releases), and in emails to subscription administrators who own clusters that are going to fall out of support. AKS also uses [Azure Advisor](../advisor/advisor-overview.md) to alert you inside the Azure portal if you're out of support and inform you of deprecated APIs that will affect your application or development process.
### How often should I expect to upgrade Kubernetes versions to stay in support?
-Starting with Kubernetes 1.19, the [open source community has expanded support to one year](https://kubernetes.io/blog/2020/08/31/kubernetes-1-19-feature-one-year-support/). AKS commits to enabling patches and support matching the upstream commitments. For AKS clusters on 1.19 and greater, you'll be able to upgrade at a minimum of once a year to stay on a supported version.
+Starting with Kubernetes 1.19, the [open source community has expanded support to one year](https://kubernetes.io/blog/2020/08/31/kubernetes-1-19-feature-one-year-support/). AKS commits to enabling patches and support matching the upstream commitments. For AKS clusters on 1.19 and greater, you can upgrade at a minimum of once a year to stay on a supported version.
-### What happens when a user upgrades a Kubernetes cluster with a minor version that isn't supported?
+### What happens when you upgrade a Kubernetes cluster with a minor version that isn't supported?
If you're on the *n-3* version or older, it means you're outside of support and will be asked to upgrade. When your upgrade from version n-3 to n-2 succeeds, you're back within our support policies. For example:
For information on how to upgrade your cluster, see [Upgrade an Azure Kubernetes
<!-- LINKS - Internal --> [aks-upgrade]: upgrade-cluster.md
+[az-aks-create]: /cli/azure/aks#az_aks_create
+[az-aks-update]: /cli/azure/aks#az_aks_update
[az-extension-add]: /cli/azure/extension#az_extension_add [az-extension-update]: /cli/azure/extension#az-extension-update [az-aks-get-versions]: /cli/azure/aks#az_aks_get_versions
aks Use Azure Ad Pod Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-azure-ad-pod-identity.md
Title: Use Azure Active Directory pod-managed identities in Azure Kubernetes Ser
description: Learn how to use Azure AD pod-managed identities in Azure Kubernetes Service (AKS) Previously updated : 11/01/2022 Last updated : 03/23/2023 # Use Azure Active Directory pod-managed identities in Azure Kubernetes Service (Preview)
Last updated 11/01/2022
Azure Active Directory (Azure AD) pod-managed identities use Kubernetes primitives to associate [managed identities for Azure resources][az-managed-identities] and identities in Azure AD with pods. Administrators create identities and bindings as Kubernetes primitives that allow pods to access Azure resources that rely on Azure AD as an identity provider. > [!NOTE]
-> We recommend you review [Azure AD workload identity][workload-identity-overview] (preview).
+> We recommend you review [Azure AD workload identity][workload-identity-overview].
> This authentication method replaces pod-managed identity (preview), which integrates with the > Kubernetes native capabilities to federate with any external identity providers on behalf of the > application.
aks Workload Identity Deploy Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/workload-identity-deploy-cluster.md
Title: Deploy and configure an Azure Kubernetes Service (AKS) cluster with workload identity (preview)
-description: In this Azure Kubernetes Service (AKS) article, you deploy an Azure Kubernetes Service cluster and configure it with an Azure AD workload identity (preview).
+ Title: Deploy and configure an Azure Kubernetes Service (AKS) cluster with workload identity
+description: In this Azure Kubernetes Service (AKS) article, you deploy an Azure Kubernetes Service cluster and configure it with an Azure AD workload identity.
- Previously updated : 04/12/2023 Last updated : 04/18/2023+
-# Deploy and configure workload identity (preview) on an Azure Kubernetes Service (AKS) cluster
+# Deploy and configure workload identity on an Azure Kubernetes Service (AKS) cluster
Azure Kubernetes Service (AKS) is a managed Kubernetes service that lets you quickly deploy and manage Kubernetes clusters. In this article, you will:
-* Deploy an AKS cluster using the Azure CLI that includes the OpenID Connect Issuer and an Azure AD workload identity (preview)
+* Deploy an AKS cluster using the Azure CLI that includes the OpenID Connect Issuer and an Azure AD workload identity
* Grant access to your Azure Key Vault * Create an Azure Active Directory (Azure AD) workload identity and Kubernetes service account * Configure the managed identity for token federation.
-This article assumes you have a basic understanding of Kubernetes concepts. For more information, see [Kubernetes core concepts for Azure Kubernetes Service (AKS)][kubernetes-concepts]. If you aren't familiar with Azure AD workload identity (preview), see the following [Overview][workload-identity-overview] article.
+This article assumes you have a basic understanding of Kubernetes concepts. For more information, see [Kubernetes core concepts for Azure Kubernetes Service (AKS)][kubernetes-concepts]. If you aren't familiar with Azure AD workload identity, see the following [Overview][workload-identity-overview] article.
- This article requires version 2.40.0 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.
This article assumes you have a basic understanding of Kubernetes concepts. For
- If you have multiple Azure subscriptions, select the appropriate subscription ID in which the resources should be billed using the [az account][az-account] command.
-## Install the aks-preview Azure CLI extension
--
-To install the aks-preview extension, run the following command:
-
-```azurecli
-az extension add --name aks-preview
-```
-
-Run the following command to update to the latest version of the extension released:
-
-```azurecli
-az extension update --name aks-preview
-```
-
-## Register the 'EnableWorkloadIdentityPreview' feature flag
-
-Register the `EnableWorkloadIdentityPreview` feature flag by using the [az feature register][az-feature-register] command, as shown in the following example:
-
-```azurecli-interactive
-az feature register --namespace "Microsoft.ContainerService" --name "EnableWorkloadIdentityPreview"
-```
-
-It takes a few minutes for the status to show *Registered*. Verify the registration status by using the [az feature show][az-feature-show] command:
-
-```azurecli-interactive
-az feature show --namespace "Microsoft.ContainerService" --name "EnableWorkloadIdentityPreview"
-```
-
-When the status reflects *Registered*, refresh the registration of the *Microsoft.ContainerService* resource provider by using the [az provider register][az-provider-register] command:
-
-```azurecli-interactive
-az provider register --namespace Microsoft.ContainerService
-```
- ## Create AKS cluster Create an AKS cluster using the [az aks create][az-aks-create] command with the `--enable-oidc-issuer` parameter to use the OIDC Issuer. The following example creates a cluster named *myAKSCluster* with one node in the *myResourceGroup*:
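A minimal sketch of such a command (resource names follow the surrounding text; `--enable-workload-identity` is assumed to be available in your CLI version):

```azurecli
az aks create --resource-group myResourceGroup --name myAKSCluster --node-count 1 --enable-oidc-issuer --enable-workload-identity
```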
aks Workload Identity Migrate From Pod Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/workload-identity-migrate-from-pod-identity.md
Last updated 03/14/2023
-# Migrate from pod managed-identity to workload identity (preview)
+# Migrate from pod managed-identity to workload identity
This article focuses on migrating from a pod-managed identity to Azure Active Directory (Azure AD) workload identity (preview) for your Azure Kubernetes Service (AKS) cluster. It also provides guidance depending on the version of the [Azure Identity][azure-identity-supported-versions] client library used by your container-based application. - ## Before you begin - The Azure CLI version 2.40.0 or later. Run `az --version` to find the version, and run `az upgrade` to upgrade the version. If you need to install or upgrade, see [Install Azure CLI][install-azure-cli].
az identity federated-credential create --name federatedIdentityName --identity-
## Deploy the workload with migration sidecar
+> [!NOTE]
+> The migration sidecar is **not supported for production usage**. This feature is designed to give customers time to migrate their application SDKs to a supported version, not to serve as a long-running solution.
+ If your application is using managed identity and still relies on IMDS to get an access token, you can use the workload identity migration sidecar to start migrating to workload identity. This sidecar is a migration solution; in the long term, you should modify your applications' code to use the latest Azure Identity SDKs that support client assertion. To update or deploy the workload, add these pod annotations only if you want to use the migration sidecar. You inject the following [annotation][pod-annotations] values to use the sidecar in your pod specification:
aks Workload Identity Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/workload-identity-overview.md
Title: Use an Azure AD workload identities (preview) on Azure Kubernetes Service (AKS) description: Learn about Azure Active Directory workload identity (preview) for Azure Kubernetes Service (AKS) and how to migrate your application to authenticate using this identity. Previously updated : 03/27/2023 Last updated : 04/18/2023
-# Use Azure AD workload identity (preview) with Azure Kubernetes Service (AKS)
+# Use Azure AD workload identity with Azure Kubernetes Service (AKS)
-Today with Azure Kubernetes Service (AKS), you can assign [managed identities at the pod-level][use-azure-ad-pod-identity], which has been a preview feature. This pod-managed identity allows the hosted workload or application access to resources through Azure Active Directory (Azure AD). For example, a workload stores files in Azure Storage, and when it needs to access those files, the pod authenticates itself against the resource as an Azure managed identity. This authentication method has been replaced with [Azure Active Directory (Azure AD) workload identities][azure-ad-workload-identity] (preview), which integrate with the Kubernetes native capabilities to federate with any external identity providers. This approach is simpler to use and deploy, and overcomes several limitations in Azure AD pod-managed identity:
+Workloads deployed on an Azure Kubernetes Service (AKS) cluster require Azure Active Directory (Azure AD) application credentials or managed identities to access Azure AD protected resources, such as Azure Key Vault and Microsoft Graph. Azure AD workload identity integrates with the capabilities native to Kubernetes to federate with external identity providers.
-- Removes the scale and performance issues that existed for identity assignment-- Supports Kubernetes clusters hosted in any cloud or on-premises-- Supports both Linux and Windows workloads-- Removes the need for Custom Resource Definitions and pods that intercept [Azure Instance Metadata Service][azure-instance-metadata-service] (IMDS) traffic-- Avoids the complicated and error-prone installation steps such as cluster role assignment from the previous iteration
+[Azure AD workload identity][azure-ad-workload-identity] uses [Service Account Token Volume Projection][service-account-token-volume-projection] to enable pods to use a Kubernetes identity (that is, a service account). A Kubernetes token is issued, and [OIDC federation][oidc-federation] enables Kubernetes applications to access Azure resources securely with Azure AD, based on annotated service accounts.
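+
+In practice, the federation hinges on an annotated Kubernetes service account. A minimal sketch (the service account name is a placeholder, and the client ID comes from your managed identity or app registration):
+
+```bash
+# Create a service account and annotate it with the Azure identity's client ID
+kubectl create serviceaccount workload-identity-sa
+kubectl annotate serviceaccount workload-identity-sa azure.workload.identity/client-id=<IDENTITY_CLIENT_ID>
+```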
Azure AD workload identity works especially well with the Azure Identity client library using the [Azure SDK][azure-sdk-download] and the [Microsoft Authentication Library][microsoft-authentication-library] (MSAL) if you're using [application registration][azure-ad-application-registration]. Your workload can use any of these libraries to seamlessly authenticate and access Azure cloud resources.
-This article helps you understand this new authentication feature, and reviews the options available to plan your migration phases and project strategy.
-
+This article helps you understand this new authentication feature, and reviews the options available to plan your project strategy and potential migration from Azure AD pod-managed identity.
## Dependencies - AKS supports Azure AD workload identities on version 1.22 and higher. -- The Azure CLI version 2.40.0 or later. Run `az --version` to find the version, and run `az upgrade` to upgrade the version. If you need to install or upgrade, see [Install Azure CLI][install-azure-cli].
+- The Azure CLI version 2.47.0 or later. Run `az --version` to find the version, and run `az upgrade` to upgrade the version. If you need to install or upgrade, see [Install Azure CLI][install-azure-cli].
+
+## Azure Identity SDK
+
+The following client libraries are the **minimum** versions required:
-- The `aks-preview` extension version 0.5.102 or later.
+| Language | Library | Minimum Version | Example |
+|--|--|-|-|
+| Go | [azure-sdk-for-go](https://github.com/Azure/azure-sdk-for-go) | [sdk/azidentity/v1.3.0-beta.1](https://github.com/Azure/azure-sdk-for-go/releases/tag/sdk/azidentity/v1.3.0-beta.1)| [Link](https://github.com/Azure/azure-workload-identity/tree/main/examples/azure-identity/go) |
+| C# | [azure-sdk-for-net](https://github.com/Azure/azure-sdk-for-net) | [Azure.Identity_1.5.0](https://github.com/Azure/azure-sdk-for-net/releases/tag/Azure.Identity_1.5.0)| [Link](https://github.com/Azure/azure-workload-identity/tree/main/examples/azure-identity/dotnet) |
+| JavaScript/TypeScript | [azure-sdk-for-js](https://github.com/Azure/azure-sdk-for-js) | [@azure/identity_2.0.0](https://github.com/Azure/azure-sdk-for-js/releases/tag/@azure/identity_2.0.0) | [Link](https://github.com/Azure/azure-workload-identity/tree/main/examples/azure-identity/node) |
+| Python | [azure-sdk-for-python](https://github.com/Azure/azure-sdk-for-python) | [azure-identity_1.7.0](https://github.com/Azure/azure-sdk-for-python/releases/tag/azure-identity_1.7.0) | [Link](https://github.com/Azure/azure-workload-identity/tree/main/examples/azure-identity/python) |
+| Java | [azure-sdk-for-java](https://github.com/Azure/azure-sdk-for-java) | [azure-identity_1.4.0](https://github.com/Azure/azure-sdk-for-java/releases/tag/azure-identity_1.4.0) | [Link](https://github.com/Azure/azure-workload-identity/tree/main/examples/azure-identity/java) |
-- The following are the minimum versions of the [Azure Identity][azure-identity-libraries] client library supported:
+## Microsoft Authentication Library (MSAL)
- * [.NET][dotnet-azure-identity-client-library] 1.5.0
- * [Java][java-azure-identity-client-library] 1.4.0
- * [JavaScript][javascript-azure-identity-client-library] 2.0.0
- * [Python][python-azure-identity-client-library] 1.7.0
+The following client libraries are the **minimum** versions required:
+
+| Language | Library | Image | Example | Has Windows |
+|--|--|-|-|-|
+| Go | [microsoft-authentication-library-for-go](https://github.com/AzureAD/microsoft-authentication-library-for-go) | ghcr.io/azure/azure-workload-identity/msal-go | [Link](https://github.com/Azure/azure-workload-identity/tree/main/examples/msal-go) | Yes |
+| C# | [microsoft-authentication-library-for-dotnet](https://github.com/AzureAD/microsoft-authentication-library-for-dotnet) | ghcr.io/azure/azure-workload-identity/msal-net | [Link](https://github.com/Azure/azure-workload-identity/tree/main/examples/msal-net/akvdotnet) | Yes |
+| JavaScript/TypeScript | [microsoft-authentication-library-for-js](https://github.com/AzureAD/microsoft-authentication-library-for-js) | ghcr.io/azure/azure-workload-identity/msal-node | [Link](https://github.com/Azure/azure-workload-identity/tree/main/examples/msal-node) | No |
+| Python | [microsoft-authentication-library-for-python](https://github.com/AzureAD/microsoft-authentication-library-for-python) | ghcr.io/azure/azure-workload-identity/msal-python | [Link](https://github.com/Azure/azure-workload-identity/tree/main/examples/msal-python) | No |
+| Java | [microsoft-authentication-library-for-java](https://github.com/AzureAD/microsoft-authentication-library-for-java) | ghcr.io/azure/azure-workload-identity/msal-java | [Link](https://github.com/Azure/azure-workload-identity/tree/main/examples/msal-java) | No |
## Limitations - You can only have 20 federated identity credentials per managed identity. - It takes a few seconds for the federated identity credential to be propagated after being initially added.
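For reference, creating one such federated credential looks like the following hedged sketch (names, issuer URL, and subject are placeholders):

```azurecli
az identity federated-credential create --name myFederatedCredential --identity-name myManagedIdentity --resource-group myResourceGroup --issuer <oidc-issuer-url> --subject system:serviceaccount:<namespace>:<service-account>
```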
-## Language SDK examples
- - [Azure Identity SDK](https://azure.github.io/azure-workload-identity/docs/topics/language-specific-examples/azure-identity-sdk.html)
- - [MSAL](https://azure.github.io/azure-workload-identity/docs/topics/language-specific-examples/msal.html)
- ## How it works In this security model, the AKS cluster acts as token issuer, Azure Active Directory uses OpenID Connect to discover public signing keys and verify the authenticity of the service account token before exchanging it for an Azure AD token. Your workload can exchange a service account token projected to its volume for an Azure AD token using the Azure Identity client library or the Microsoft Authentication Library.
The following diagram summarizes the authentication sequence using OpenID Connect.
### Webhook Certificate Auto Rotation
-Similar to other webhook addons, the certificate will be rotated by cluster certificate [auto rotation](https://learn.microsoft.com/azure/aks/certificate-rotation#certificate-auto-rotation) operation.
+Similar to other webhook add-ons, the certificate is rotated by the cluster certificate [auto rotation][auto-rotation] operation.
## Service account labels and annotations
The following table summarizes our migration or deployment recommendations for workload identity.
<!-- EXTERNAL LINKS -->
[azure-sdk-download]: https://azure.microsoft.com/downloads/
[custom-resource-definition]: https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/
+[service-account-token-volume-projection]: https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/#serviceaccount-token-volume-projection
+[oidc-federation]: https://kubernetes.io/docs/reference/access-authn-authz/authentication/#openid-connect-tokens
<!-- INTERNAL LINKS -->
[use-azure-ad-pod-identity]: use-azure-ad-pod-identity.md
[azure-ad-workload-identity]: ../active-directory/develop/workload-identities-overview.md
-[azure-instance-metadata-service]: ../virtual-machines/linux/instance-metadata-service.md
[microsoft-authentication-library]: ../active-directory/develop/msal-overview.md
[azure-ad-application-registration]: ../active-directory/develop/application-model.md#register-an-application
[install-azure-cli]: /cli/azure/install-azure-cli
[deploy-configure-workload-identity-new-cluster]: workload-identity-deploy-cluster.md
[tutorial-use-workload-identity]: ./learn/tutorial-kubernetes-workload-identity.md
[workload-identity-migration-sidecar]: workload-identity-migrate-from-pod-identity.md
-[dotnet-azure-identity-client-library]: /dotnet/api/overview/azure/identity-readme
-[java-azure-identity-client-library]: /java/api/overview/azure/identity-readme
-[javascript-azure-identity-client-library]: /javascript/api/overview/azure/identity-readme
-[python-azure-identity-client-library]: /python/api/overview/azure/identity-readme
+[auto-rotation]: certificate-rotation.md#certificate-auto-rotation
analysis-services Analysis Services Addservprinc Admins https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/analysis-services/analysis-services-addservprinc-admins.md
The following Resource Manager template deploys an Analysis Services server with
## Using managed identities
-A managed identity can also be added to the Analysis Services Admins list. For example, you might have a [Logic App with a system-assigned managed identity](../logic-apps/create-managed-service-identity.md), and want to grant it the ability to administer your server.
-
-In most parts of the Azure portal and APIs, managed identities are identified using their service principal object ID. However, Analysis Services requires that they be identified using their client ID. To obtain the client ID for a service principal, you can use the Azure CLI:
-
-```azurecli
-az ad sp show --id <ManagedIdentityServicePrincipalObjectId> --query appId -o tsv
-```
-
-Alternatively you can use PowerShell:
-
-```powershell
-(Get-AzureADServicePrincipal -ObjectId <ManagedIdentityServicePrincipalObjectId>).AppId
-```
-
-You can then use this client ID in conjunction with the tenant ID to add the managed identity to the Analysis Services Admins list, as described above.
+Managed identities that are added to database or server roles can't sign in to the service or perform any operations. Managed identities for service principals aren't supported in Azure Analysis Services.
## Related information
analysis-services Analysis Services Service Principal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/analysis-services/analysis-services-service-principal.md
Service principals are an Azure Active Directory application resource you create
In Analysis Services, service principals are used with Azure Automation, PowerShell unattended mode, custom client applications, and web apps to automate common tasks. For example, provisioning servers, deploying models, data refresh, scale up/down, and pause/resume can all be automated by using service principals. Permissions are assigned to service principals through role membership, much like regular Azure AD UPN accounts.
-Analysis Services also supports operations performed by managed identities using service principals. To learn more, see [Managed identities for Azure resources](../active-directory/managed-identities-azure-resources/overview.md) and [Azure services that support Azure AD authentication](../active-directory/managed-identities-azure-resources/services-support-managed-identities.md#azure-analysis-services).
+Analysis Services does not support operations performed by managed identities using service principals. To learn more, see [Managed identities for Azure resources](../active-directory/managed-identities-azure-resources/overview.md) and [Azure services that support Azure AD authentication](../active-directory/managed-identities-azure-resources/services-support-managed-identities.md#azure-analysis-services).
## Create service principals
api-management Api Management Gateways Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-gateways-overview.md
Managed and self-hosted gateways support all available [policies](api-management
| Policy | Managed (Dedicated) | Managed (Consumption) | Self-hosted<sup>1</sup> |
| --- | --- | --- | --- |
| [Dapr integration](api-management-policies.md#dapr-integration-policies) | ❌ | ❌ | ✔️ |
-| [Get authorization context](get-authorization-context-policy.md) | ✔️ | ❌ | ❌ |
+| [Get authorization context](get-authorization-context-policy.md) | ✔️ | ✔️ | ❌ |
| [Quota and rate limit](api-management-policies.md#access-restriction-policies) | ✔️ | ✔️<sup>2</sup> | ✔️<sup>3</sup> |
| [Set GraphQL resolver](set-graphql-resolver-policy.md) | ✔️ | ❌ | ❌ |
api-management Authentication Authorization Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/authentication-authorization-overview.md
There are different reasons for wanting to do this. For example:
### Token management by API Management
-API Management also supports acquisition and secure storage of OAuth 2.0 tokens for certain downstream services using the [authorizations](authorizations-overview.md) (preview) feature, including through use of custom policies and caching.
+API Management also supports acquisition and secure storage of OAuth 2.0 tokens for certain downstream services using the [authorizations](authorizations-overview.md) feature, including through use of custom policies and caching.
-With authorizations, API Management manages the tokens for access to OAuth 2.0 backends, simplifying the development of client apps that access APIs.
+With authorizations, API Management manages the tokens for access to OAuth 2.0 backends, allowing you to delegate authentication to your API Management instance and simplify client app access to a backend service or SaaS platform.
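+
+For example, a client can then call an API fronted by API Management with nothing more than a subscription key, while the gateway attaches the stored OAuth 2.0 token to the backend request. A minimal Python sketch, with a hypothetical gateway URL, API path, and key:
+
+```python
+# Minimal sketch: the client never touches the OAuth 2.0 token; the gateway's
+# get-authorization-context policy attaches it on the way to the backend.
+import requests
+
+resp = requests.get(
+    "https://contoso.azure-api.net/myapi/resource",  # hypothetical endpoint
+    headers={"Ocp-Apim-Subscription-Key": "<your-subscription-key>"},
+)
+resp.raise_for_status()
+print(resp.json())
+```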
### Other options
api-management Authorizations Configure Common Providers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/authorizations-configure-common-providers.md
+
+ Title: Configure authorization providers - Azure API Management | Microsoft Docs
+description: Learn how to configure common identity providers for authorizations in Azure API Management. Example providers are Azure Active Directory and a generic OAuth 2.0 provider. An authorization manages authorization tokens to an OAuth 2.0 backend service.
+Last updated : 02/07/2023
+# Configure identity providers for API authorizations
+
+In this article, you learn about configuring identity providers for [authorizations](authorizations-overview.md) in your API Management instance. Settings for the following common providers are shown:
+
+* Azure AD provider
+* Generic OAuth 2.0 provider
+
+You add identity provider settings when configuring an authorization in your API Management instance. For a step-by-step example of configuring an Azure AD provider and authorization, see:
+
+* [Create an authorization with the Microsoft Graph API](authorizations-how-to-azure-ad.md)
+
+## Prerequisites
+
+To configure any of the supported providers in API Management, first configure an OAuth 2.0 app in the identity provider that will be used to authorize API access. For configuration details, see the provider's developer documentation.
+
+* If you're creating an authorization that uses the authorization code grant type, configure a **Redirect URL** (sometimes called Authorization Callback URL or a similar name) in the app. For the value, enter `https://authorization-manager.consent.azure-apim.net/redirect/apim/<YOUR-APIM-SERVICENAME>`.
+
+* Depending on your scenario, configure app settings such as scopes (API permissions).
+
+* Minimally, retrieve the following app credentials that will be configured in API Management: the app's **client ID** and **client secret**.
+
+* Depending on the provider and your scenario, you might need to retrieve other settings such as authorization endpoint URLs or scopes.
+
+## Azure AD provider
+
+Authorizations support the Azure AD identity provider, which is the identity service in Microsoft Azure that provides identity management and access control capabilities. It allows users to securely sign in using industry-standard protocols.
+
+* **Supported grant types**: authorization code, client credentials
+
+> [!NOTE]
+> Currently, the Azure AD authorization provider supports only the Azure AD v1.0 endpoints.
+
+
+### Azure AD provider settings
+
++
+## Generic OAuth 2.0 providers
+
+Authorizations support two generic providers:
+* Generic OAuth 2.0
+* Generic OAuth 2.0 with PKCE
+
+A generic provider allows you to use your own OAuth 2.0 identity provider based on your specific needs.
+
+> [!NOTE]
+> We recommend using the generic OAuth 2.0 with PKCE provider for improved security if your identity provider supports it. [Learn more](https://oauth.net/2/pkce/)
+
+* **Supported grant types**: authorization code, client credentials
+
+### Generic authorization provider settings
++
+## Other identity providers
+
+API Management supports several providers for popular SaaS offerings, such as GitHub. You can select from a list of these providers in the Azure portal when you create an authorization.
++
+**Supported grant types**: authorization code, client credentials (depends on provider)
+
+Required settings for these providers differ from provider to provider but are similar to those for the [generic OAuth 2.0 providers](#generic-oauth-20-providers). Consult the developer documentation for each provider.
+
+## Next steps
+
+* Learn more about [authorizations](authorizations-overview.md) in API Management.
+* Create an authorization for [Azure AD](authorizations-how-to-azure-ad.md) or [GitHub](authorizations-how-to-github.md).
api-management Authorizations How To Azure Ad https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/authorizations-how-to-azure-ad.md
+
+ Title: Create authorization with Microsoft Graph API - Azure API Management | Microsoft Docs
+description: Learn how to create and use an authorization to the Microsoft Graph API in Azure API Management. An authorization manages authorization tokens to an OAuth 2.0 backend service.
+Last updated : 04/10/2023
+# Create an authorization with the Microsoft Graph API
+
+This article guides you through the steps required to create an [authorization](authorizations-overview.md) with the Microsoft Graph API within Azure API Management. The authorization code grant type is used in this example.
+
+You learn how to:
+
+> [!div class="checklist"]
+> * Create an Azure AD application
+> * Create and configure an authorization in API Management
+> * Configure an access policy
+> * Create a Microsoft Graph API in API Management and configure a policy
+> * Test your Microsoft Graph API in API Management
+
+## Prerequisites
+
+- Access to an Azure Active Directory (Azure AD) tenant where you have permissions to create an app registration and to grant admin consent for the app's permissions. [Learn more](../active-directory/roles/delegate-app-roles.md#restrict-who-can-create-applications)
+
+ If you want to create your own developer tenant, you can sign up for the [Microsoft 365 Developer Program](https://developer.microsoft.com/microsoft-365/dev-program).
+- A running API Management instance. If you need to, [create an Azure API Management instance](get-started-create-service-instance.md).
+- Enable a [system-assigned managed identity](api-management-howto-use-managed-service-identity.md) for API Management in the API Management instance.
+
+## Step 1: Create an Azure AD application
+
+Create an Azure AD application for the API and give it the appropriate permissions for the requests that you want to call.
+
+1. Sign in to the [Azure portal](https://portal.azure.com/) with an account with sufficient permissions in the tenant.
+1. Under **Azure Services**, search for **Azure Active Directory**.
+1. On the left menu, select **App registrations**, and then select **+ New registration**.
+ :::image type="content" source="media/authorizations-how-to-azure-ad/create-registration.png" alt-text="Screenshot of creating an Azure AD app registration in the portal.":::
+
+1. On the **Register an application** page, enter your application registration settings:
+ 1. In **Name**, enter a meaningful name that will be displayed to users of the app, such as *MicrosoftGraphAuth*.
+ 1. In **Supported account types**, select an option that suits your scenario, for example, **Accounts in this organizational directory only (Single tenant)**.
+ 1. Set the **Redirect URI** to **Web**, and enter `https://authorization-manager.consent.azure-apim.net/redirect/apim/<YOUR-APIM-SERVICENAME>`, substituting the name of the API Management service where you will configure the authorization provider.
+ 1. Select **Register**.
+1. On the left menu, select **API permissions**, and then select **+ Add a permission**.
+ :::image type="content" source="./media/authorizations-how-to-azure-ad/add-permission.png" alt-text="Screenshot of adding an API permission in the portal.":::
+
+ 1. Select **Microsoft Graph**, and then select **Delegated permissions**.
+ > [!NOTE]
+ > Make sure the permission **User.Read** with the type **Delegated** has already been added.
+ 1. Type **Team**, expand the **Team** options, and then select **Team.ReadBasic.All**. Select **Add permissions**.
+ 1. Next, select **Grant admin consent for Default Directory**. The status of the permissions will change to **Granted for Default Directory**.
+1. On the left menu, select **Overview**. On the **Overview** page, find the **Application (client) ID** value and record it for use in Step 2.
+1. On the left menu, select **Certificates & secrets**, and then select **+ New client secret**.
+ :::image type="content" source="media/authorizations-how-to-azure-ad/create-secret.png" alt-text="Screenshot of creating an app secret in the portal.":::
+
+ 1. Enter a **Description**.
+ 1. Select any option for **Expires**.
+ 1. Select **Add**.
+ 1. Copy the client secret's **Value** before leaving the page. You will need it in Step 2.
+
+## Step 2: Configure an authorization in API Management
+
+1. Sign in to the [portal](https://portal.azure.com) and go to your API Management instance.
+1. On the left menu, select **Authorizations**, and then select **+ Create**.
+ :::image type="content" source="media/authorizations-how-to-azure-ad/create-authorization.png" alt-text="Screenshot of creating an API authorization in the portal.":::
+1. On the **Create authorization** page, enter the following settings, and select **Create**:
+
+ |Settings |Value |
+ |||
+ |**Provider name** | A name of your choice, such as *aad-01* |
+ |**Identity provider** | Select **Azure Active Directory v1** |
+ |**Grant type** | Select **Authorization code** |
+ |**Client id** | Paste the value you copied earlier from the app registration |
+ |**Client secret** | Paste the value you copied earlier from the app registration |
+ |**Resource URL** | `https://graph.microsoft.com` |
+ |**Tenant ID** | Optional for Azure AD identity provider. Default is *Common* |
+ |**Scopes** | Optional for Azure AD identity provider. Automatically configured from AD app's API permissions. |
+ |**Authorization name** | A name of your choice, such as *aad-auth-01* |
+
+1. After the authorization provider and authorization are created, select **Next**.
+
+## Step 3: Authorize with Azure AD and configure an access policy
+
+1. On the **Login** tab, select **Login with Azure Active Directory**. The authorization must be granted consent before it can be used.
+ :::image type="content" source="media/authorizations-how-to-azure-ad/login-azure-ad.png" alt-text="Screenshot of login with Azure AD in the portal.":::
+
+1. When prompted, sign in to your organizational account.
+1. On the confirmation page, select **Allow access**.
+1. After successful authorization, the browser is redirected to API Management and the window is closed. In API Management, select **Next**.
+1. On the **Access policy** page, create an access policy so that API Management has access to use the authorization. Ensure that a managed identity is configured for API Management. [Learn more about managed identities in API Management](api-management-howto-use-managed-service-identity.md#create-a-system-assigned-managed-identity).
+1. For this example, select **API Management service `<service name>`**.
+
+ :::image type="content" source="media/authorizations-how-to-azure-ad/create-access-policy.png" alt-text="Screenshot of selecting a managed identity to use the authorization.":::
+
+1. Select **Complete**.
+
+> [!NOTE]
+> If you update your Microsoft Graph permissions after this step, you will have to repeat Steps 2 and 3.
+
+## Step 4: Create a Microsoft Graph API in API Management and configure a policy
+
+1. Sign into the [portal](https://portal.azure.com) and go to your API Management instance.
+1. On the left menu, select **APIs > + Add API**.
+1. Select **HTTP** and enter the following settings. Then select **Create**.
+
+ |Setting |Value |
+ |||
+ |**Display name** | *msgraph* |
+ |**Web service URL** | `https://graph.microsoft.com/v1.0` |
+ |**API URL suffix** | *msgraph* |
+
+1. Navigate to the newly created API and select **Add Operation**. Enter the following settings and select **Save**.
+
+ |Setting |Value |
+ |||
+ |**Display name** | *getprofile* |
+ |**URL** for GET | /me |
+
+1. Follow the preceding steps to add another operation with the following settings.
+
+ |Setting |Value |
+ |||
+ |**Display name** | *getJoinedTeams* |
+ |**URL** for GET | /me/joinedTeams |
+
+1. Select **All operations**. In the **Inbound processing** section, select the (**</>**) (code editor) icon.
+1. Copy the following, and paste in the policy editor. Make sure the `provider-id` and `authorization-id` correspond to the values you configured in Step 2. Select **Save**.
+
+ ```xml
+ <policies>
+ <inbound>
+ <base />
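+        <!-- Fetch the stored Graph token for this provider/authorization (refreshing
+             it if expired), then expose it in the auth-context variable used below -->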
+ <get-authorization-context provider-id="aad-01" authorization-id="aad-auth-01" context-variable-name="auth-context" identity-type="managed" ignore-error="false" />
+ <set-header name="authorization" exists-action="override">
+ <value>@("Bearer " + ((Authorization)context.Variables.GetValueOrDefault("auth-context"))?.AccessToken)</value>
+ </set-header>
+ </inbound>
+ <backend>
+ <base />
+ </backend>
+ <outbound>
+ <base />
+ </outbound>
+ <on-error>
+ <base />
+ </on-error>
+ </policies>
+ ```
+The preceding policy definition consists of two parts:
+
+* The [get-authorization-context](get-authorization-context-policy.md) policy fetches an authorization token by referencing the authorization provider and authorization that were created earlier.
+* The [set-header](set-header-policy.md) policy creates an HTTP header with the fetched authorization token.
+
+## Step 5: Test the API
+1. On the **Test** tab, select one operation that you configured.
+1. Select **Send**.
+
+ :::image type="content" source="media/authorizations-how-to-azure-ad/graph-api-response.png" alt-text="Screenshot of testing the Graph API in the portal.":::
+
+ A successful response returns user data from the Microsoft Graph.
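+
+Outside the portal, any HTTP client can call the published operations. The following is a minimal Python sketch; the gateway hostname and subscription key are hypothetical placeholders, and the policy from Step 4 attaches the Graph token server-side:
+
+```python
+# Minimal sketch: call the two msgraph operations defined in Step 4.
+import requests
+
+BASE = "https://<your-apim-name>.azure-api.net/msgraph"  # hypothetical gateway
+HEADERS = {"Ocp-Apim-Subscription-Key": "<your-subscription-key>"}
+
+for path in ("/me", "/me/joinedTeams"):
+    resp = requests.get(BASE + path, headers=HEADERS)
+    resp.raise_for_status()
+    print(path, "->", resp.json())
+```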
+
+## Next steps
+
+* Learn more about [access restriction policies](api-management-access-restriction-policies.md)
+* Learn more about [scopes and permissions](../active-directory/develop/scopes-oidc.md) in Azure AD.
api-management Authorizations How To Github https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/authorizations-how-to-github.md
+
+ Title: Create authorization with GitHub API - Azure API Management | Microsoft Docs
+description: Learn how to create and use an authorization to the GitHub API in Azure API Management. An authorization manages authorization tokens to an OAuth 2.0 backend service.
+Last updated : 04/10/2023
+# Create an authorization with the GitHub API
+
+In this article, you learn how to create an [authorization](authorizations-overview.md) in API Management and call a GitHub API that requires an authorization token. The authorization code grant type is used in this example.
+
+You learn how to:
+
+> [!div class="checklist"]
+> * Register an application in GitHub
+> * Configure an authorization in API Management
+> * Authorize with GitHub and configure access policies
+> * Create an API in API Management and configure a policy
+> * Test your GitHub API in API Management
+
+## Prerequisites
+
+- A GitHub account is required.
+- A running API Management instance. If you need to, [create an Azure API Management instance](get-started-create-service-instance.md).
+- Enable a [system-assigned managed identity](api-management-howto-use-managed-service-identity.md) for API Management in the API Management instance.
+
+## Step 1: Register an application in GitHub
+
+1. Sign in to GitHub.
+1. In your account profile, go to **Settings > Developer Settings > OAuth Apps > New OAuth app**.
+
+
+ :::image type="content" source="media/authorizations-how-to-github/register-application.png" alt-text="Screenshot of registering a new OAuth application in GitHub.":::
+ 1. Enter an **Application name** and **Homepage URL** for the application. For this example, you can supply a placeholder URL such as `http://localhost`.
+ 1. Optionally, add an **Application description**.
+ 1. In **Authorization callback URL** (the redirect URL), enter `https://authorization-manager.consent.azure-apim.net/redirect/apim/<YOUR-APIM-SERVICENAME>`, substituting the name of the API Management instance where you will configure the authorization provider.
+1. Select **Register application**.
+1. On the **General** page, copy the **Client ID**, which you'll use in Step 2.
+1. Select **Generate a new client secret**. Copy the secret, which won't be displayed again, and which you'll use in Step 2.
+
+ :::image type="content" source="media/authorizations-how-to-github/generate-secret.png" alt-text="Screenshot showing how to get client ID and client secret for the application in GitHub.":::
+
+## Step 2: Configure an authorization in API Management
+
+1. Sign in to the [portal](https://portal.azure.com) and go to your API Management instance.
+1. On the left menu, select **Authorizations** > **+ Create**.
+
+ :::image type="content" source="media/authorizations-how-to-azure-ad/create-authorization.png" alt-text="Screenshot of creating an API Management authorization in the Azure portal.":::
+1. On the **Create authorization** page, enter the following settings, and select **Create**:
+
+ |Settings |Value |
+ |||
+ |**Provider name** | A name of your choice, such as *github-01* |
+ |**Identity provider** | Select **GitHub** |
+ |**Grant type** | Select **Authorization code** |
+ |**Client ID** | Paste the value you copied earlier from the app registration |
+ |**Client secret** | Paste the value you copied earlier from the app registration |
+ |**Scope** | For this example, set the scope to *User* |
+ |**Authorization name** | A name of your choice, such as *github-auth-01* |
+
+1. After the authorization provider and authorization are created, select **Next**.
+
+## Step 3: Authorize with GitHub and configure access policies
+
+1. On the **Login** tab, select **Login with GitHub**. The authorization must be granted consent at GitHub before it can be used.
+
+ :::image type="content" source="media/authorizations-how-to-github/authorize-with-github.png" alt-text="Screenshot of logging into the GitHub authorization from the portal.":::
+
+1. If prompted, sign in to your GitHub account.
+1. Select **Authorize** so that the application can access the signed-in user's account.
+1. On the confirmation page, select **Allow access**.
+1. After successful authorization, the browser is redirected to API Management and the window is closed. In API Management, select **Next**.
+1. On the **Access policy** page, create an access policy so that API Management has access to use the authorization. Ensure that a managed identity is configured for API Management. [Learn more about managed identities in API Management](api-management-howto-use-managed-service-identity.md#create-a-system-assigned-managed-identity).
+
+1. For this example, select **API Management service `<service name>`**.
+
+ :::image type="content" source="media/authorizations-how-to-azure-ad/create-access-policy.png" alt-text="Screenshot of selecting a managed identity to use the authorization.":::
+1. Select **Complete**.
+
+
+## Step 4: Create an API in API Management and configure a policy
+
+1. Sign in to the [portal](https://portal.azure.com) and go to your API Management instance.
+1. On the left menu, select **APIs > + Add API**.
+1. Select **HTTP** and enter the following settings. Then select **Create**.
+
+ |Setting |Value |
+ |||
+ |**Display name** | *githubuser* |
+ |**Web service URL** | `https://api.github.com` |
+ |**API URL suffix** | *githubuser* |
+
+2. Navigate to the newly created API and select **Add Operation**. Enter the following settings and select **Save**.
+
+ |Setting |Value |
+ |||
+ |**Display name** | *getauthdata* |
+ |**URL** for GET | /user |
+
+ :::image type="content" source="media/authorizations-how-to-github/add-operation.png" alt-text="Screenshot of adding a getauthdata operation to the API in the portal.":::
+
+1. Follow the preceding steps to add another operation with the following settings.
+
+ |Setting |Value |
+ |||
+ |**Display name** | *getauthfollowers* |
+ |**URL** for GET | /user/followers |
+
+1. Select **All operations**. In the **Inbound processing** section, select the (**</>**) (code editor) icon.
+1. Copy the following, and paste in the policy editor. Make sure the `provider-id` and `authorization-id` correspond to the values you configured in Step 2. Select **Save**.
+
+ ```xml
+ <policies>
+ <inbound>
+ <base />
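+        <!-- Fetch the stored GitHub token (refreshing it if needed); the headers below
+             attach it as a bearer token and set the User-Agent that GitHub requires -->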
+ <get-authorization-context provider-id="github-01" authorization-id="github-auth-01" context-variable-name="auth-context" identity-type="managed" ignore-error="false" />
+ <set-header name="Authorization" exists-action="override">
+ <value>@("Bearer " + ((Authorization)context.Variables.GetValueOrDefault("auth-context"))?.AccessToken)</value>
+ </set-header>
+ <set-header name="User-Agent" exists-action="override">
+ <value>API Management</value>
+ </set-header>
+ </inbound>
+ <backend>
+ <base />
+ </backend>
+ <outbound>
+ <base />
+ </outbound>
+ <on-error>
+ <base />
+ </on-error>
+ </policies>
+ ```
+
+The preceding policy definition consists of three parts:
+
+* The [get-authorization-context](get-authorization-context-policy.md) policy fetches an authorization token by referencing the authorization provider and authorization that were created earlier.
+* The first [set-header](set-header-policy.md) policy creates an HTTP header with the fetched authorization token.
+* The second [set-header](set-header-policy.md) policy creates a `User-Agent` header (GitHub API requirement).
+
+## Step 5: Test the API
+
+1. On the **Test** tab, select one operation that you configured.
+1. Select **Send**.
+
+ :::image type="content" source="media/authorizations-how-to-github/test-api.png" alt-text="Screenshot of testing the API successfully in the portal.":::
+
+ A successful response returns user data from the GitHub API.
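+
+As in the preceding tutorial, you can also call the operations from any HTTP client. A minimal Python sketch with a hypothetical gateway hostname and subscription key; the policy from Step 4 adds both the GitHub token and the required `User-Agent` header at the gateway:
+
+```python
+# Minimal sketch: call the getauthdata operation defined in Step 4.
+import requests
+
+resp = requests.get(
+    "https://<your-apim-name>.azure-api.net/githubuser/user",  # hypothetical gateway
+    headers={"Ocp-Apim-Subscription-Key": "<your-subscription-key>"},
+)
+resp.raise_for_status()
+print(resp.json())
+```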
+
+## Next steps
+
+* Learn more about [access restriction policies](api-management-access-restriction-policies.md).
+* Learn more about GitHub's [REST API](https://docs.github.com/en/rest?apiVersion=2022-11-28).
api-management Authorizations How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/authorizations-how-to.md
- Title: Create and use authorization in Azure API Management | Microsoft Docs
-description: Learn how to create and use an authorization in Azure API Management. An authorization manages authorization tokens to OAuth 2.0 backend services. The example uses GitHub as an identity provider.
-Previously updated : 06/03/2022
-# Configure and use an authorization
-
-In this article, you learn how to create an [authorization](authorizations-overview.md) (preview) in API Management and call a GitHub API that requires an authorization token. The authorization code grant type will be used.
-
-Four steps are needed to set up an authorization with the authorization code grant type:
-
-1. Register an application in the identity provider (in this case, GitHub).
-1. Configure an authorization in API Management.
-1. Authorize with GitHub and configure access policies.
-1. Create an API in API Management and configure a policy.
-
-## Prerequisites
-- A GitHub account is required.
-- Complete the following quickstart: [Create an Azure API Management instance](get-started-create-service-instance.md).
-- Enable a [managed identity](api-management-howto-use-managed-service-identity.md) for API Management in the API Management instance.
-## Step 1: Register an application in GitHub
-
-1. Sign in to GitHub.
-1. In your account profile, go to **Settings > Developer Settings > OAuth Apps > Register a new application**.
-
-
- :::image type="content" source="media/authorizations-how-to/register-application.png" alt-text="Screenshot of registering a new OAuth application in GitHub.":::
- 1. Enter an **Application name** and **Homepage URL** for the application.
- 1. Optionally, add an **Application description**.
- 1. In **Authorization callback URL** (the redirect URL), enter `https://authorization-manager.consent.azure-apim.net/redirect/apim/<YOUR-APIM-SERVICENAME>`, substituting the API Management service name that is used.
-1. Select **Register application**.
-1. In the **General** page, copy the **Client ID**, which you'll use in a later step.
-1. Select **Generate a new client secret**. Copy the secret, which won't be displayed again, and which you'll use in a later step.
-
- :::image type="content" source="media/authorizations-how-to/generate-secret.png" alt-text="Screenshot showing how to get client ID and client secret for the application in GitHub.":::
-
-## Step 2: Configure an authorization in API Management
-
-1. Sign into Azure portal and go to your API Management instance.
-1. In the left menu, select **Authorizations** > **+ Create**.
-
- :::image type="content" source="media/authorizations-how-to/create-authorization.png" alt-text="Screenshot of creating an API Management authorization in the Azure portal.":::
-1. In the **Create authorization** window, enter the following settings, and select **Create**:
-
- |Settings |Value |
- |||
- |**Provider name** | A name of your choice, such as *github-01* |
- |**Identity provider** | Select **GitHub** |
- |**Grant type** | Select **Authorization code** |
- |**Client id** | Paste the value you copied earlier from the app registration |
- |**Client secret** | Paste the value you copied earlier from the app registration |
- |**Scope** | Set the scope to `User` |
- |**Authorization name** | A name of your choice, such as *auth-01* |
-
-
-
-1. After the authorization provider and authorization are created, select **Next**.
-
-1. On the **Login** tab, select **Login with GitHub**. Before the authorization will work, it needs to be authorized at GitHub.
-
- :::image type="content" source="media/authorizations-how-to/authorize-with-github.png" alt-text="Screenshot of logging into the GitHub authorization from the portal.":::
-
-## Step 3: Authorize with GitHub and configure access policies
-
-1. Sign in to your GitHub account if you're prompted to do so.
-1. Select **Authorize** so that the application can access the signed-in user's account.
-
- :::image type="content" source="media/authorizations-how-to/consent-to-authorization.png" alt-text="Screenshot of consenting to authorize with GitHub.":::
-
- After authorization, the browser is redirected to API Management and the window is closed. If prompted during redirection, select **Allow access**. In API Management, select **Next**.
-1. On the **Access policy** page, create an access policy so that API Management has access to use the authorization. Ensure that a managed identity is configured for API Management. [Learn more about managed identities in API Management](api-management-howto-use-managed-service-identity.md#create-a-system-assigned-managed-identity).
-
-1. Select **Managed identity** **+ Add members** and then select your subscription.
-1. In **Managed identity**, select **API Management service**, and then select the API Management instance that is used. Click **Select** and then **Complete**.
-
- :::image type="content" source="media/authorizations-how-to/select-managed-identity.png" alt-text="Screenshot of selecting a managed identity to use the authorization.":::
-
-## Step 4: Create an API in API Management and configure a policy
-
-1. Sign into Azure portal and go to your API Management instance.
-1. In the left menu, select **APIs > + Add API**.
-1. Select **HTTP** and enter the following settings. Then select **Create**.
-
- |Setting |Value |
- |||
- |**Display name** | *github* |
- |**Web service URL** | https://api.github.com/users |
- |**API URL suffix** | *github* |
-
-2. Navigate to the newly created API and select **Add Operation**. Enter the following settings and select **Save**.
-
- |Setting |Value |
- |||
- |**Display name** | *getdata* |
- |**URL** | /data |
-
- :::image type="content" source="media/authorizations-how-to/add-operation.png" alt-text="Screenshot of adding a getdata operation to the API in the portal.":::
-
-1. In the **Inbound processing** section, select the (**</>**) (code editor) icon.
-1. Copy the following, and paste in the policy editor. Make sure the provider-id and authorization-id correspond to the names in step 2.3. Select **Save**.
-
- ```xml
- <policies>
- <inbound>
- <base />
- <get-authorization-context provider-id="github-01" authorization-id="auth-01" context-variable-name="auth-context" identity-type="managed" ignore-error="false" />
- <set-header name="Authorization" exists-action="override">
- <value>@("Bearer " + ((Authorization)context.Variables.GetValueOrDefault("auth-context"))?.AccessToken)</value>
- </set-header>
- <rewrite-uri template="@(context.Request.Url.Query.GetValueOrDefault("username",""))" copy-unmatched-params="false" />
- <set-header name="User-Agent" exists-action="override">
- <value>API Management</value>
- </set-header>
- </inbound>
- <backend>
- <base />
- </backend>
- <outbound>
- <base />
- </outbound>
- <on-error>
- <base />
- </on-error>
- </policies>
- ```
-
- The policy to be used consists of four parts.
-
- - Fetch an authorization token.
- - Create an HTTP header with the fetched authorization token.
- - Create an HTTP header with a `User-Agent` header (GitHub requirement). [Learn more](https://docs.github.com/rest/overview/resources-in-the-rest-api#user-agent-required)
- - Because the incoming request to API Management will consist of a query parameter called *username*, add the username to the backend call.
-
- > [!NOTE]
- > The `get-authorization-context` policy references the authorization provider and authorization that were created earlier. [Learn more](get-authorization-context-policy.md) about how to configure this policy.
-
- :::image type="content" source="media/authorizations-how-to/policy-configuration-cropped.png" lightbox="media/authorizations-how-to/policy-configuration.png" alt-text="Screenshot of configuring policy in the portal.":::
-1. Test the API.
- 1. On the **Test** tab, enter a query parameter with the name *username*.
- 1. As value, enter the username that was used to sign into GitHub, or another valid GitHub username.
- 1. Select **Send**.
- :::image type="content" source="media/authorizations-how-to/test-api.png" alt-text="Screenshot of testing the API successfully in the portal.":::
-
- A successful response returns user data from the GitHub API.
-
-## Next steps
-
-Learn more about [access restriction policies](api-management-access-restriction-policies.md).
api-management Authorizations Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/authorizations-overview.md
Title: About OAuth 2.0 authorizations in Azure API Management | Microsoft Docs
-description: Learn about authorizations in Azure API Management, a feature that simplifies the process of managing OAuth 2.0 authorization tokens to APIs
+ Title: About API authorizations in Azure API Management
+description: Learn about API authorizations in Azure API Management, a feature that simplifies the process of managing OAuth 2.0 authorization tokens to backend SaaS APIs
Previously updated : 06/03/2022 Last updated : 04/10/2023
-# Authorizations overview
+# What are API authorizations?
-API Management authorizations (preview) simplify the process of managing authorization tokens to OAuth 2.0 backend services.
-By configuring any of the supported identity providers and creating an authorization using the standardized OAuth 2.0 flow, API Management can retrieve and refresh access tokens to be used inside of API management or sent back to a client.
-This feature enables APIs to be exposed with or without a subscription key, and the authorization to the backend service uses OAuth 2.0.
+API Management *authorizations* provide a simple and reliable way to unbundle and abstract authorizations from web APIs. Authorizations greatly simplify the process of authenticating and authorizing users across one or more backend or SaaS services. With authorizations, you can configure OAuth 2.0 consent, acquire tokens, cache tokens, and refresh tokens without writing a single line of code. Use authorizations to delegate authentication to your API Management instance.
-Some example scenarios that will be possible through this feature are:
+This feature enables APIs to be exposed with or without a subscription key, uses OAuth 2.0 authorizations to the backend services, and reduces the development cost of ramping up on, implementing, and maintaining security features with service integrations.
-- Citizen/low code developers using Power Apps or Power Automate can easily connect to SaaS providers that are using OAuth 2.0.
-- Unattended scenarios such as an Azure function using a timer trigger can utilize this feature to connect to a backend API using OAuth 2.0.
-- A marketing team in an enterprise company could use the same authorization for interacting with a social media platform using OAuth 2.0.
-- Exposing APIs in API Management as a custom connector in Logic Apps where the backend service requires OAuth 2.0 flow.
-- On behalf of a scenario where a service such as Dropbox or any other service protected by OAuth 2.0 flow is used by multiple clients.
-- Connect to different services that require OAuth 2.0 authorization using synthetic GraphQL in API Management.
-- Enterprise Application Integration (EAI) patterns using service-to-service authorization can use the client credentials grant type against backend APIs that use OAuth 2.0.
-- Single-page applications that only want to retrieve an access token to be used in a client's SDK against an API using OAuth 2.0.
-The feature consists of two parts, management and runtime:
+## Key scenarios
-* The **management** part takes care of configuring identity providers, enabling the consent flow for the identity provider, and managing access to the authorizations.
+Using authorizations in API Management, customers can enable different scenarios and easily connect to SaaS providers or backend services that are using OAuth 2.0. Here are some example scenarios where this feature could be used:
+* Easily connect to a SaaS backend by attaching the stored authorization token and proxying requests
-* The **runtime** part uses the [`get-authorization-context`](get-authorization-context-policy.md) policy to fetch and store access and refresh tokens. When a call comes into API Management, and the `get-authorization-context` policy is executed, it will first validate if the existing authorization token is valid. If the authorization token has expired, the refresh token is used to try to fetch a new authorization and refresh token from the configured identity provider. If the call to the backend provider is successful, the new authorization token will be used, and both the authorization token and refresh token will be stored encrypted.
+* Proxy requests to an Azure App Service web app or Azure Functions backend by attaching the authorization token, which can later send requests to a SaaS backend applying transformation logic
+* Proxy requests to GraphQL federation backends by attaching multiple access tokens to easily perform federation
+* Expose a retrieve token endpoint, acquire a cached token, and call a SaaS backend on behalf of a user from any compute, for example, a console app or Kubernetes daemon. Combine it with your favorite SaaS SDK in a supported language.
+
+* Azure Functions unattended scenarios when connecting to multiple SaaS backends.
+
+* Durable Functions gets a step closer to Logic Apps with SaaS connectivity.
+
+* With authorizations every API in API Management can act as a Logic Apps custom connector.
+
+## How do authorizations work?
+
+Authorizations consist of two parts, **management** and **runtime**.
+
+* The **management** part takes care of configuring identity providers, enabling the consent flow for the identity provider, and managing access to the authorizations. For details, see [Process flow - management](#process-flowmanagement).
+
+* The **runtime** part uses the [`get-authorization-context` policy](get-authorization-context-policy.md) to fetch and store the authorization's access and refresh tokens. When a call comes into API Management, and the `get-authorization-context` policy is executed, it will first validate if the existing authorization token is valid. If the authorization token has expired, API Management uses an OAuth 2.0 flow to refresh the stored tokens from the identity provider. Then the access token is used to authorize access to the backend service. For details, see [Process flow - runtime](#process-flowruntime).
+
During the policy execution, access to the tokens is also validated using access policies.
+### Process flow - management
+
+The following image summarizes the process flow for creating an authorization in API Management that uses the authorization code grant type.
-### Requirements
-- Managed system-assigned identity must be enabled for the API Management instance.
-- API Management instance must have outbound connectivity to internet on port `443` (HTTPS).
+| Step | Description |
+| --- | --- |
+| 1 | Client sends a request to create an authorization provider |
+| 2 | Authorization provider is created, and a response is sent back |
+| 3| Client sends a request to create an authorization |
+| 4| Authorization is created, and a response is sent back with the information that the authorization isn't "connected"|
+|5| Client sends a request to retrieve a login URL to start the OAuth 2.0 consent at the identity provider. The request includes a post-redirect URL to be used in the last step|
+|6|Response is returned with a login URL that should be used to start the consent flow. |
+|7|Client opens a browser with the login URL that was provided in the previous step. The browser is redirected to the identity provider OAuth 2.0 consent flow |
+|8|After the consent is approved, the browser is redirected with an authorization code to the redirect URL configured at the identity provider|
+|9|API Management uses the authorization code to fetch access and refresh tokens|
+|10|API Management receives the tokens and encrypts them|
+|11 |API Management redirects to the provided URL from step 5|
-### Limitations
+### Process flow - runtime
-For public preview the following limitations exist:
-- Authorizations feature only supports Service Principal and Managed Identity as access policies.
-- Authorizations feature only supports /.default app-only scopes while acquire token for https://.../authorizationmanager audience.
-- Authorizations feature is not supported in the following regions: swedencentral, australiacentral, australiacentral2, jioindiacentral.
-- Authorizations feature is not supported in National Clouds.
-- Authorizations feature is not supported on self-hosted gateways.
-- Supported identity providers can be found in [this](https://github.com/Azure/APIManagement-Authorizations/blob/main/docs/identityproviders.md) GitHub repository.
-- Maximum configured number of authorization providers per API Management instance: 1,000
-- Maximum configured number of authorizations per authorization provider: 10,000
-- Maximum configured number of access policies per authorization: 100
-- Maximum requests per minute per service: 250
+The following image shows the process flow to fetch and store authorization and refresh tokens based on an authorization that uses the authorization code grant type. After the tokens have been retrieved, a call is made to the backend API.
-### Authorization providers
-
-Authorization provider configuration includes which identity provider and grant type are used. Each identity provider requires different configurations.
-* An authorization provider configuration can only have one grant type.
-* One authorization provider configuration can have multiple authorizations.
-* You can find the supported identity providers for public preview in [this](https://github.com/Azure/APIManagement-Authorizations/blob/main/docs/identityproviders.md) GitHub repository.
+| Step | Description |
+| --- | --- |
+| 1 |Client sends request to API Management instance|
+|2|The [`get-authorization-context`](get-authorization-context-policy.md) policy checks if the access token is valid for the current authorization|
+|3|If the access token has expired but the refresh token is valid, API Management tries to fetch new access and refresh tokens from the configured identity provider|
+|4|The identity provider returns both an access token and a refresh token, which are encrypted and saved to API Management|
+|5|After the tokens have been retrieved, the access token is attached using the `set-header` policy as an authorization header to the outgoing request to the backend API|
+|6| Response is returned to API Management|
+|7| Response is returned to the client|
-With the Generic OAuth 2.0 provider, other identity providers that support the standards of OAuth 2.0 flow can be used.
+## How to configure authorizations
-### Authorizations
+### Requirements
-To use an authorization provider, at least one *authorization* is required. The process of configuring an authorization differs based on the used grant type. Each authorization provider configuration only supports one grant type. For example, if you want to configure Azure AD to use both grant types, two authorization provider configurations are needed.
+* Managed system-assigned identity must be enabled for the API Management instance.
-**Authorization code grant type**
+* API Management instance must have outbound connectivity to internet on port 443 (HTTPS).
-Authorization code grant type is bound to a user context, meaning a user needs to consent to the authorization. As long as the refresh token is valid, API Management can retrieve new access and refresh tokens. If the refresh token becomes invalid, the user needs to reauthorize. All identity providers support authorization code. [Read more about Authorization code grant type](https://www.rfc-editor.org/rfc/rfc6749?msclkid=929b18b5d0e611ec82a764a7c26a9bea#section-1.3.1).
+### Availability
-**Client credentials grant type**
+* All API Management service tiers
-Client credentials grant type isn't bound to a user and is often used in application-to-application scenarios. No consent is required for client credentials grant type, and the authorization doesn't become invalid. [Read more about Client Credentials grant type](https://www.rfc-editor.org/rfc/rfc6749?msclkid=929b18b5d0e611ec82a764a7c26a9bea#section-1.3.4).
+* Not supported in self-hosted gateway
+* Not supported in sovereign clouds or in the following regions: australiacentral, australiacentral2, jioindiacentral
-### Access policies
-Access policies determine which identities can use the authorization that the access policy is related to. The supported identities are managed identities, user identities, and service principals. The identities must belong to the same tenant as the API Management tenant.
+### Configuration steps
-- **Managed identities** - System- or user-assigned identity for the API Management instance that is being used.
-- **User identities** - Users in the same tenant as the API Management instance.
-- **Service principals** - Applications in the same Azure AD tenant as the API Management instance.
+Configuring an authorization in your API Management instance consists of three steps: configuring an authorization provider, consenting to access by logging in, and creating access policies.
-### Process flow for creating authorizations
-The following image shows the process flow for creating an authorization in API Management using the grant type authorization code. For public preview no API documentation is available.
+#### Step 1 - Authorization provider
+During Step 1, you configure your authorization provider. You can choose between different [identity providers](authorizations-configure-common-providers.md) and grant types (authorization code or client credential). Each identity provider requires specific configurations. Important things to keep in mind:
+* An authorization provider configuration can only have one grant type.
+* One authorization provider configuration can have [multiple authorization connections](configure-authorization-connection.md).
-1. Client sends a request to create an authorization provider.
-1. Authorization provider is created, and a response is sent back.
-1. Client sends a request to create an authorization.
-1. Authorization is created, and a response is sent back with the information that the authorization is not "connected".
-1. Client sends a request to retrieve a login URL to start the OAuth 2.0 consent at the identity provider. The request includes a post-redirect URL to be used in the last step.
-1. Response is returned with a login URL that should be used to start the consent flow.
-1. Client opens a browser with the login URL that was provided in the previous step. The browser is redirected to the identity provider OAuth 2.0 consent flow.
-1. After the consent is approved, the browser is redirected with an authorization code to the redirect URL configured at the identity provider.
-1. API Management uses the authorization code to fetch access and refresh tokens.
-1. API Management receives the tokens and encrypts them.
-1. API Management redirects to the provided URL from step 5.
+> [!NOTE]
+> With the Generic OAuth 2.0 provider, other identity providers that support the standards of [OAuth 2.0 flow](https://oauth.net/2/) can be used.
+>
-### Process flow for runtime
+To use an authorization provider, at least one *authorization* is required. Each authorization is a separate connection to the authorization provider. The process of configuring an authorization differs based on the configured grant type. Each authorization provider configuration only supports one grant type. For example, if you want to configure Azure AD to use both grant types, two authorization provider configurations are needed. The following table summarizes the two grant types.
-The following image shows the process flow to fetch and store authorization and refresh tokens based on a configured authorization. After the tokens have been retrieved a call is made to the backend API.
+|Grant type |Description |
+|||
+|Authorization code | Bound to a user context, meaning a user needs to consent to the authorization. As long as the refresh token is valid, API Management can retrieve new access and refresh tokens. If the refresh token becomes invalid, the user needs to reauthorize. All identity providers support authorization code. [Learn more](https://www.rfc-editor.org/rfc/rfc6749?msclkid=929b18b5d0e611ec82a764a7c26a9bea#section-1.3.1) |
+|Client credentials | Isn't bound to a user and is often used in application-to-application scenarios. No consent is required for client credentials grant type, and the authorization doesn't become invalid. [Learn more](https://www.rfc-editor.org/rfc/rfc6749?msclkid=929b18b5d0e611ec82a764a7c26a9bea#section-1.3.4) |
-1. Client sends request to API Management instance.
-1. The policy [`get-authorization-context`](get-authorization-context-policy.md) checks if the access token is valid for the current authorization.
-1. If the access token has expired but the refresh token is valid, API Management tries to fetch new access and refresh tokens from the configured identity provider.
-1. The identity provider returns both an access token and a refresh token, which are encrypted and saved to API Management.
-1. After the tokens have been retrieved, the access token is attached using the `set-header` policy as an authorization header to the outgoing request to the backend API.
-1. Response is returned to API Management.
-1. Response is returned to the client.
+### Step 2 - Log in
-### Error handling
+For authorizations based on the authorization code grant type, you must authenticate to the provider and *consent* to authorization. After successful login and authorization by the identity provider, the provider returns valid access and refresh tokens, which are encrypted and saved by API Management. For details, see [Process flow - runtime](#process-flowruntime).
-If acquiring the authorization context results in an error, the outcome depends on how the attribute `ignore-error` is configured in the policy `get-authorization-context`. If the value is set to `false` (default), an error with `500 Internal Server Error` will be returned. If the value is set to `true`, the error will be ignored and execution will proceed with the context variable set to `null`.
+### Step 3 - Access policy
-If the value is set to `false`, and the on-error section in the policy is configured, the error will be available in the property `context.LastError`. By using the on-error section, the error that is sent back to the client can be adjusted. Errors from API Management can be caught using standard Azure alerts. Read more about [handling errors in policies](api-management-error-handling-policies.md).
+You configure one or more *access policies* for each authorization. The access policies determine which [Azure AD identities](../active-directory/develop/app-objects-and-service-principals.md) can gain access to your authorizations at runtime. Authorizations currently support managed identities and service principals.
-### Authorizations FAQ
-##### How can I provide feedback and influence the roadmap for this feature?
+|Identity |Description | Benefits | Considerations |
+|---|---|---|---|
+|Service principal | Identity whose tokens can be used to authenticate and grant access to specific Azure resources, when an organization is using Azure Active Directory (Azure AD). By using a service principal, organizations avoid creating fictitious users to manage authentication when they need to access a resource. A service principal is an Azure AD identity that represents a registered Azure AD application. | Permits more tightly scoped access to authorization. Isn't tied to a specific API Management instance. Relies on Azure AD for permission enforcement. | Getting the [authorization context](get-authorization-context-policy.md) requires an Azure AD token. |
+|Managed identity | Service principal of a special type that represents an Azure AD identity for an Azure service. Managed identities are tied to, and can only be used with, an Azure resource. Managed identities eliminate the need for you to manually create and manage service principals directly.<br/><br/>When a system-assigned managed identity is enabled, a service principal representing that managed identity is created in your tenant automatically and tied to your resource's lifecycle.|No credentials are needed.|Identity is tied to specific Azure infrastructure. Anyone with Contributor access to the API Management instance can access any authorization granting managed identity permissions. |
+| Managed identity `<Your API Management instance name>` | This option corresponds to a managed identity tied to your API Management instance. | Quick selection of the system-assigned managed identity for the corresponding API Management instance. | Identity is tied to your API Management instance. Anyone with Contributor access to the API Management instance can access any authorization granting managed identity permissions. |
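When the [`get-authorization-context`](get-authorization-context-policy.md) policy runs with `identity-type=jwt`, the caller must present an Azure AD token whose audience is `https://azure-api.net/authorization-manager`. As a minimal sketch, assuming the Azure Identity library for Python and an ambient service principal or managed identity:

```python
# Sketch: obtain the Azure AD JWT checked against an authorization's access policy.
# The /.default form matches the app-only scope requirement noted for the policy.
from azure.identity import DefaultAzureCredential

credential = DefaultAzureCredential()  # resolves a managed identity or service principal
token = credential.get_token("https://azure-api.net/authorization-manager/.default")
# Pass token.token as the bearer JWT when calling API Management.
```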
-Please use [this](https://aka.ms/apimauthorizations/feedback) form to provide feedback.
+## Security considerations
-##### How are the tokens stored in API Management?
+The access token and other authorization secrets (for example, client secrets) are encrypted with envelope encryption and stored in an internal, multitenant storage. The data is encrypted with AES-128 using a key that's unique to each piece of data. Those keys are encrypted asymmetrically with a master certificate stored in Azure Key Vault and rotated every month.
-The access token and other secrets (for example, client secrets) are encrypted with an envelope encryption and stored in an internal, multitenant storage. The data are encrypted with AES-128 using a key that is unique per data; those keys are encrypted asymmetrically with a master certificate stored in Azure Key Vault and rotated every month.
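Purely as an illustration of the envelope-encryption pattern described above, and not API Management's actual implementation, a sketch with Python's `cryptography` package might look like this:

```python
# Illustrative only: each secret gets its own AES data key, and data keys are
# wrapped asymmetrically with a master key that can be rotated independently.
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

master_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

def encrypt_secret(plaintext: bytes):
    data_key = AESGCM.generate_key(bit_length=128)        # unique key per secret
    nonce = os.urandom(12)
    ciphertext = AESGCM(data_key).encrypt(nonce, plaintext, None)
    wrapped_key = master_key.public_key().encrypt(data_key, oaep)  # asymmetric wrap
    return ciphertext, nonce, wrapped_key

def decrypt_secret(ciphertext: bytes, nonce: bytes, wrapped_key: bytes) -> bytes:
    data_key = master_key.decrypt(wrapped_key, oaep)
    return AESGCM(data_key).decrypt(nonce, ciphertext, None)
```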
+### Limits
-##### When are the access tokens refreshed?
+| Resource | Limit |
+| --| -|
+| Maximum number of authorization providers per service instance| 1,000 |
+| Maximum number of authorizations per authorization provider| 10,000 |
+| Maximum number of access policies per authorization | 100 |
+| Maximum number of authorization requests per minute per authorization | 250 |
-When the policy `get-authorization-context` is executed at runtime, API Management checks if the stored access token is valid. If the token has expired or is near expiry, API Management uses the refresh token to fetch a new access token and a new refresh token from the configured identity provider. If the refresh token has expired, an error is thrown, and the authorization needs to be reauthorized before it will work.
-##### What happens if the client secret expires at the identity provider?
-At runtime API Management can't fetch new tokens, and an error will occur.
+## Frequently asked questions (FAQ)
++
+### When are the access tokens refreshed?
+
+For an authorization of type authorization code, access tokens are refreshed as follows: When the `get-authorization-context` policy is executed at runtime, API Management checks if the stored access token is valid. If the token has expired or is near expiry, API Management uses the refresh token to fetch a new access token and a new refresh token from the configured identity provider. If the refresh token has expired, an error is thrown, and the authorization needs to be reauthorized before it will work.
+
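Conceptually, the refresh that API Management performs against the identity provider is the standard OAuth 2.0 refresh token grant ([RFC 6749, section 6](https://datatracker.ietf.org/doc/html/rfc6749#section-6)). A hedged sketch of that exchange, with a placeholder token endpoint and credentials:

```python
# Sketch of the refresh token grant a client performs when an access token
# nears expiry. The endpoint and credential values are placeholders.
import requests

response = requests.post(
    "https://idp.example.com/oauth2/token",
    data={
        "grant_type": "refresh_token",
        "refresh_token": "<stored-refresh-token>",
        "client_id": "<client-id>",
        "client_secret": "<client-secret>",
    },
)
response.raise_for_status()
tokens = response.json()
new_access_token = tokens["access_token"]
# Providers usually rotate the refresh token too; fall back to the stored one.
new_refresh_token = tokens.get("refresh_token", "<stored-refresh-token>")
```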
+### What happens if the client secret expires at the identity provider?
+
+At runtime API Management can't fetch new tokens, and an error occurs.
* If the authorization is of type authorization code, the client secret needs to be updated at the authorization provider level.
* If the authorization is of type client credentials, the client secret needs to be updated at the authorization level.
-##### Is this feature supported using API Management running inside a VNet?
+### Is this feature supported using API Management running inside a VNet?
-Yes, as long as API Management gateway has outbound internet connectivity on port `443`.
+Yes, as long as outbound connectivity on port 443 is enabled to the **AzureConnectors** service tag. For more information, see [Virtual network configuration reference](virtual-network-reference.md#required-ports).
-##### What happens when an authorization provider is deleted?
+### What happens when an authorization provider is deleted?
All underlying authorizations and access policies are also deleted.
-##### Are the access tokens cached by API Management?
+### Are the access tokens cached by API Management?
The access token is cached by the API Management instance until 3 minutes before the token's expiration time.
-##### What grant types are supported?
-
-For public preview, the Azure AD identity provider supports authorization code and client credentials.
-
-The other identity providers support authorization code. After public preview, more identity providers and grant types will be added.
-
-### Next steps
-- Learn how to [configure and use an authorization](authorizations-how-to.md).
-- See [reference](authorizations-reference.md) for supported identity providers in authorizations.
-- Use [policies]() together with authorizations.
-- Authorizations [samples](https://github.com/Azure/APIManagement-Authorizations) GitHub repository.
-- Learn more about OAuth 2.0:
+## Next steps
- * [OAuth 2.0 overview](https://aaronparecki.com/oauth-2-simplified/)
- * [OAuth 2.0 specification](https://oauth.net/2/)
+Learn how to:
+- Configure [identity providers](authorizations-configure-common-providers.md) for authorizations
+- Configure and use an authorization for the [Microsoft Graph API](authorizations-how-to-azure-ad.md) or the [GitHub API](authorizations-how-to-github.md)
+- Configure [multiple authorization connections](configure-authorization-connection.md) for a provider
api-management Authorizations Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/authorizations-reference.md
- Title: Reference for OAuth 2.0 authorizations - Azure API Management | Microsoft Docs
-description: Reference for identity providers supported in authorizations in Azure API Management. API Management authorizations manage OAuth 2.0 authorization tokens to APIs.
--- Previously updated : 05/02/2022---
-# Authorizations reference
-This article is a reference for the supported identity providers in API Management [authorizations](authorizations-overview.md) (preview) and their configuration options.
-
-## Azure Active Directory
--
-**Supported grant types**: authorization code and client credentials
--
-### Authorization provider - Authorization code grant type
-
-| Name | Required | Description | Default |
-|||||
-| Provider name | Yes | Name of Authorization provider. | |
-| Client id | Yes | The id used to identify this application with the service provider. | |
-| Client secret | Yes | The shared secret used to authenticate this application with the service provider. ||
-| Login URL | No | The Azure Active Directory login URL. | https://login.windows.net |
-| Tenant ID | No | The tenant ID of your Azure Active Directory application. | common |
-| Resource URL | Yes | The resource to get authorization for. | |
-| Scopes | No | Scopes used for the authorization. Multiple scopes could be defined separate with a space, for example, "User.Read User.ReadBasic.All" | |
--
-### Authorization - Authorization code grant type
-| Name | Required | Description | Default |
-|||||
-| Authorization name | Yes | Name of Authorization. | |
-
-
-
-### Authorization provider - Client credentials code grant type
-| Name | Required | Description | Default |
-|||||
-| Provider name | Yes | Name of Authorization provider. | |
-| Login URL | No | The Azure Active Directory login URL. | https://login.windows.net |
-| Tenant ID | No | The tenant ID of your Azure Active Directory application. | common |
-| Resource URL | Yes | The resource to get authorization for. | |
--
-### Authorization - Client credentials code grant type
-| Name | Required | Description | Default |
-|||||
-| Authorization name | Yes | Name of Authorization. | |
-| Client id | Yes | The id used to identify this application with the service provider. | |
-| Client secret | Yes | The shared secret used to authenticate this application with the service provider. ||
-
-
-
-## Google, LinkedIn, Spotify, Dropbox, GitHub
-
-**Supported grant types**: authorization code
-
-### Authorization provider - Authorization code grant type
-| Name | Required | Description | Default |
-|||||
-| Provider name | Yes | Name of Authorization provider. | |
-| Client id | Yes | The id used to identify this application with the service provider. | |
-| Client secret | Yes | The shared secret used to authenticate this application with the service provider. ||
-| Scopes | No | Scopes used for the authorization. Depending on the identity provider, multiple scopes are separated by space or comma. Default for most identity providers is space. | |
--
-### Authorization - Authorization code grant type
-| Name | Required | Description | Default |
-|||||
-| Authorization name | Yes | Name of Authorization. | |
-
-
-
-## Generic OAuth 2
-
-**Supported grant types**: authorization code
--
-### Authorization provider - Authorization code grant type
-| Name | Required | Description | Default |
-|||||
-| Provider name | Yes | Name of Authorization provider. | |
-| Client id | Yes | The id used to identify this application with the service provider. | |
-| Client secret | Yes | The shared secret used to authenticate this application with the service provider. ||
-| Authorization URL | No | The authorization endpoint URL. | |
-| Token URL | No | The token endpoint URL. | |
-| Refresh URL | No | The token refresh endpoint URL. | |
-| Scopes | No | Scopes used for the authorization. Depending on the identity provider, multiple scopes are separated by space or comma. Default for most identity providers is space. | |
--
-### Authorization - Authorization code grant type
-| Name | Required | Description | Default |
-|||||
-| Authorization name | Yes | Name of Authorization. | |
-
-## Next steps
-
-Learn more about [authorizations](authorizations-overview.md) and how to [create and use authorizations](authorizations-how-to.md)
api-management Configure Authorization Connection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/configure-authorization-connection.md
+
+ Title: Configure multiple authorization connections - Azure API Management
+description: Learn how to set up multiple authorization connections to a configured authorization provider using the portal.
++++ Last updated : 03/16/2023+++
+# Configure multiple authorization connections
+
+You can configure multiple authorizations (also called *authorization connections*) to an authorization provider in your API Management instance. For example, if you configured Azure AD as an authorization provider, you might need to create multiple authorizations for different scenarios and users.
+
+In this article, you learn how to add an authorization connection to an existing provider, using the portal. For an overview of configuration steps, see [How to configure authorizations?](authorizations-overview.md#how-to-configure-authorizations)
+
+## Prerequisites
+
+* An API Management instance. If you need to, [create one](get-started-create-service-instance.md).
+* A configured authorization provider. For example, see the steps to create a provider for [GitHub](authorizations-how-to-github.md) or [Azure AD](authorizations-how-to-azure-ad.md).
+
+## Create an authorization connection - portal
+
+1. Sign in to the [portal](https://portal.azure.com) and go to your API Management instance.
+1. In the left menu, select **Authorizations**.
+1. Select the authorization provider that you want to create multiple connections for (for example, *mygithub*).
+
+ :::image type="content" source="media/configure-authorization-connection/select-provider.png" alt-text="Screenshot of selecting an authorization provider in the portal.":::
+1. In the provider window, select **Authorization**, and then select **+ Create**.
+
+ :::image type="content" source="media/configure-authorization-connection/create-authorization.png" alt-text="Screenshot of creating an authorization connection in the portal.":::
+1. Complete the steps for your authorization connection.
+ 1. On the **Authorization** tab, enter an **Authorization name**. Select **Create**, then select **Next**.
+ 1. On the **Login** tab (for authorization code grant type), complete the steps to sign in to the authorization provider and allow access. Select **Next**.
+ 1. On the **Access policy** tab, assign access to the Azure AD identity or identities that can use the authorization. Select **Complete**.
+1. The new connection appears in the list of authorizations, and shows a status of **Connected**.
+
+ :::image type="content" source="media/configure-authorization-connection/list-authorizations.png" alt-text="Screenshot of list of authorization connections in the portal.":::
+
+If you want to create another authorization connection for the provider, repeat the preceding steps.
+
+## Manage authorizations - portal
+
+You can manage authorization provider settings and authorization connections in the portal. For example, you might need to update client credentials for the authorization provider.
+
+To update provider settings:
+
+1. Sign in to the [portal](https://portal.azure.com) and go to your API Management instance.
+1. In the left menu, select **Authorizations**.
+1. Select the authorization provider that you want to manage.
+1. In the provider window, select **Settings**.
+1. In the provider settings, make updates, and select **Save**.
+
+ :::image type="content" source="media/configure-authorization-connection/update-provider.png" alt-text="Screenshot of updating authorization provider settings in the portal.":::
+
+To update an authorization connection:
+
+1. Sign in to the [portal](https://portal.azure.com) and go to your API Management instance.
+1. In the left menu, select **Authorizations**.
+1. Select the authorization provider (for example, *mygithub*).
+1. In the provider window, select **Authorization**.
+1. In the row for the authorization connection you want to update, select the context (...) menu, and select from the options. For example, to manage access policies, select **Access policies**.
+
+ :::image type="content" source="media/configure-authorization-connection/update-connection.png" alt-text="Screenshot of updating an authorization connection in the portal.":::
+
+## Next steps
+
+* Learn more about [configuring identity providers](authorizations-configure-common-providers.md) for authorizations.
+* Review [limits](authorizations-overview.md#limits) for authorization providers and authorizations.
++++
api-management Get Authorization Context Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/get-authorization-context-policy.md
Previously updated : 12/08/2022 Last updated : 03/20/2023 # Get authorization context
-Use the `get-authorization-context` policy to get the authorization context of a specified [authorization](authorizations-overview.md) (preview) configured in the API Management instance.
+Use the `get-authorization-context` policy to get the authorization context of a specified [authorization](authorizations-overview.md) configured in the API Management instance.
-The policy fetches and stores authorization and refresh tokens from the configured authorization provider.
-
-If `identity-type=jwt` is configured, a JWT token is required to be validated. The audience of this token must be `https://azure-api.net/authorization-manager`.
+The policy fetches and stores authorization and refresh tokens from the configured authorization provider.
[!INCLUDE [api-management-policy-generic-alert](../../includes/api-management-policy-generic-alert.md)]
If `identity-type=jwt` is configured, a JWT token is required to be validated. T
| Attribute | Description | Required | Default | |||||
-| provider-id | The authorization provider resource identifier. | Yes | N/A |
-| authorization-id | The authorization resource identifier. | Yes | N/A |
-| context-variable-name | The name of the context variable to receive the [`Authorization` object](#authorization-object). | Yes | N/A |
-| identity-type | Type of identity to be checked against the authorization access policy. <br> - `managed`: managed identity of the API Management service. <br> - `jwt`: JWT bearer token specified in the `identity` attribute. | No | `managed` |
-| identity | An Azure AD JWT bearer token to be checked against the authorization permissions. Ignored for `identity-type` other than `jwt`. <br><br>Expected claims: <br> - audience: `https://azure-api.net/authorization-manager` <br> - `oid`: Permission object ID <br> - `tid`: Permission tenant ID | No | N/A |
-| ignore-error | Boolean. If acquiring the authorization context results in an error (for example, the authorization resource is not found or is in an error state): <br> - `true`: the context variable is assigned a value of null. <br> - `false`: return `500` | No | `false` |
+| provider-id | The authorization provider resource identifier. Policy expressions are allowed. | Yes | N/A |
+| authorization-id | The authorization resource identifier. Policy expressions are allowed. | Yes | N/A |
+| context-variable-name | The name of the context variable to receive the [`Authorization` object](#authorization-object). Policy expressions are allowed. | Yes | N/A |
+| identity-type | Type of identity to check against the authorization access policy. <br> - `managed`: managed identity of the API Management service. <br> - `jwt`: JWT bearer token specified in the `identity` attribute.<br/><br/>Policy expressions are allowed. | No | `managed` |
+| identity | An Azure AD JWT bearer token to check against the authorization permissions. Ignored for `identity-type` other than `jwt`. <br><br>Expected claims: <br> - audience: `https://azure-api.net/authorization-manager` <br> - `oid`: Permission object ID <br> - `tid`: Permission tenant ID<br/><br/>Policy expressions are allowed. | No | N/A |
+| ignore-error | Boolean. If acquiring the authorization context results in an error (for example, the authorization resource isn't found or is in an error state): <br> - `true`: the context variable is assigned a value of null. <br> - `false`: return `500`<br/><br/>If you set the value to `false`, and the policy configuration includes an `on-error` section, the error is available in the `context.LastError` property.<br/><br/>Policy expressions are allowed. | No | `false` |
### Authorization object
class Authorization
| Property Name | Description | | -- | -- | | AccessToken | Bearer access token to authorize a backend HTTP request. |
-| Claims | Claims returned from the authorization serverΓÇÖs token response API (see [RFC6749#section-5.1](https://datatracker.ietf.org/doc/html/rfc6749#section-5.1)). |
+| Claims | Claims returned from the authorization server's token response API (see [RFC6749#section-5.1](https://datatracker.ietf.org/doc/html/rfc6749#section-5.1)). |
## Usage - [**Policy sections:**](./api-management-howto-policies.md#sections) inbound - [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, product, API, operation-- [**Gateways:**](api-management-gateways-overview.md) dedicated
+- [**Gateways:**](api-management-gateways-overview.md) dedicated, consumption
+
+### Usage notes
+
+* Configure `identity-type=jwt` when the [access policy](authorizations-overview.md#step-3access-policy) for the authorization is assigned to a service principal. Only `/.default` app-only scopes are supported for the JWT.
## Examples
api-management Virtual Network Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/virtual-network-reference.md
When an API Management service instance is hosted in a VNet, the ports in the fo
| * / 3443 | Inbound | TCP | ApiManagement / VirtualNetwork | **Management endpoint for Azure portal and PowerShell** | External & Internal | | * / 443 | Outbound | TCP | VirtualNetwork / Storage | **Dependency on Azure Storage** | External & Internal | | * / 443 | Outbound | TCP | VirtualNetwork / AzureActiveDirectory | [Azure Active Directory](api-management-howto-aad.md) and Azure Key Vault dependency (optional) | External & Internal |
+| * / 443 | Outbound | TCP | VirtualNetwork / AzureConnectors | [Authorizations](authorizations-overview.md) dependency (optional) | External & Internal |
| * / 1433 | Outbound | TCP | VirtualNetwork / Sql | **Access to Azure SQL endpoints** | External & Internal | | * / 443 | Outbound | TCP | VirtualNetwork / AzureKeyVault | **Access to Azure Key Vault** | External & Internal | | * / 5671, 5672, 443 | Outbound | TCP | VirtualNetwork / EventHub | Dependency for [Log to Azure Event Hubs policy](api-management-howto-log-event-hubs.md) and [Azure Monitor](api-management-howto-use-azure-monitor.md) (optional) | External & Internal |
When an API Management service instance is hosted in a VNet, the ports in the fo
| * / 443 | Outbound | TCP | VirtualNetwork / Storage | **Dependency on Azure Storage** | External & Internal | | * / 443 | Outbound | TCP | VirtualNetwork / AzureActiveDirectory | [Azure Active Directory](api-management-howto-aad.md) and Azure Key Vault dependency (optional) | External & Internal | | * / 443 | Outbound | TCP | VirtualNetwork / AzureKeyVault | Access to Azure Key Vault for [named values](api-management-howto-properties.md) integration (optional) | External & Internal |
+| * / 443 | Outbound | TCP | VirtualNetwork / AzureConnectors | [Authorizations](authorizations-overview.md) dependency (optional) | External & Internal |
| * / 1433 | Outbound | TCP | VirtualNetwork / Sql | **Access to Azure SQL endpoints** | External & Internal | | * / 5671, 5672, 443 | Outbound | TCP | VirtualNetwork / Azure Event Hubs | Dependency for [Log to Azure Event Hubs policy](api-management-howto-log-event-hubs.md) and monitoring agent (optional)| External & Internal | | * / 445 | Outbound | TCP | VirtualNetwork / Storage | Dependency on Azure File Share for [GIT](api-management-configuration-repository-git.md) (optional) | External & Internal |
automanage Automanage Hotpatch https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automanage/automanage-hotpatch.md
Title: Hotpatch for Windows Server Azure Edition
-description: Learn how Hotpatch for Windows Server Azure Edition works and how to enable it
+description: Learn how hotpatch for Windows Server Azure Edition works and how to enable it
Previously updated : 02/22/2021 Last updated : 04/18/2023 # Hotpatch for new virtual machines
-<!--
> [!IMPORTANT]
-> Hotpatch is currently in Public Preview. An opt-in procedure is needed to use the Hotpatch capability described below.
-> This preview is provided without a service level agreement, and is not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
-> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
>
+> Hotpatch is currently in Public Preview. An opt-in procedure is needed to use the hotpatch capability described below. This preview is provided without a service level agreement, and is not recommended for production workloads. Certain features might not be supported or might have constrained capabilities. For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
-> [!IMPORTANT]
-> Hotpatch is supported on _Windows Server 2022 Datacenter: Azure Edition (Server Core)_.
+> [!NOTE]
+> Hotpatch is supported on _Windows Server 2022 Datacenter: Azure Edition_.
-Hotpatching is a new way to install updates on supported _Windows Server Azure Edition_ virtual machines (VMs) that doesnΓÇÖt require a reboot after installation. This article covers information about Hotpatch for supported _Windows Server Azure Edition_ VMs, which has the following benefits:
+Hotpatching is a new way to install updates on supported _Windows Server Azure Edition_ virtual machines (VMs) that doesn't require a reboot after installation. This article covers information about hotpatch for supported _Windows Server Azure Edition_ VMs, which has the following benefits:
* Lower workload impact with fewer reboots
* Faster deployment of updates as the packages are smaller, install faster, and have easier patch orchestration with Azure Update Manager
-* Better protection, as the Hotpatch update packages are scoped to Windows security updates that install faster without rebooting
+* Better protection, as the hotpatch update packages are scoped to Windows security updates that install faster without rebooting
## How hotpatch works
-Hotpatch works by first establishing a baseline with a Windows Update Latest Cumulative Update. Hotpatches are periodically released (for example, on the second Tuesday of the month) that build on that baseline. Hotpatches will contain updates that don't require a reboot. Periodically (starting at every three months), the baseline is refreshed with a new Latest Cumulative Update.
+Hotpatch works by first establishing a baseline with a Windows Update Latest Cumulative Update. Hotpatches are periodically released (for example, on the second Tuesday of the month) that build on that baseline. Hotpatches will contain updates that don't require a reboot. Periodically (starting at every three months), the baseline is refreshed with a new Latest Cumulative Update.
:::image type="content" source="media\automanage-hotpatch\hotpatch-sample-schedule.png" alt-text="Hotpatch Sample Schedule.":::
-There are two types of baselines: **Planned baselines** and **unplanned baselines**.
-* **Planned baselines** are released on a regular cadence, with hotpatch releases in between. Planned baselines include all the updates in a comparable _Latest Cumulative Update_ for that month, and require a reboot.
+There are two types of baselines: **Planned baselines** and **Unplanned baselines**.
+* **Planned baselines** are released on a regular cadence, with hotpatch releases in between. Planned baselines include all the updates in a comparable _Latest Cumulative Update_ for that month, and require a reboot.
* The sample schedule above illustrates four planned baseline releases in a calendar year (five total in the diagram), and eight hotpatch releases.
-* **Unplanned baselines** are released when an important update (such as a zero-day fix) is released, and that particular update can't be released as a Hotpatch. When unplanned baselines are released, a hotpatch release will be replaced with an unplanned baseline in that month. Unplanned baselines also include all the updates in a comparable _Latest Cumulative Update_ for that month, and also require a reboot.
+* **Unplanned baselines** are released when an important update (such as a zero-day fix) is released, and that particular update can't be released as a hotpatch. When unplanned baselines are released, a hotpatch release will be replaced with an unplanned baseline in that month. Unplanned baselines also include all the updates in a comparable _Latest Cumulative Update_ for that month, and also require a reboot.
* The sample schedule above illustrates two unplanned baselines that would replace the hotpatch releases for those months (the actual number of unplanned baselines in a year isn't known in advance). ## Regional availability
Hotpatch is available in all global Azure regions.
> [!NOTE] > You can preview onboarding Automanage machine best practices during VM creation in the Azure portal using [this link](https://aka.ms/AzureEdition).
-To start using Hotpatch on a new VM, follow these steps:
+To start using hotpatch on a new VM, follow these steps:
1. Start creating a new VM from the Azure portal
- * You can preview onboarding Automanage machine best practices during VM creation in the Azure portal using [this link](https://aka.ms/AzureEdition).
+ * You can preview onboarding Automanage machine best practices during VM creation in the Azure portal by visiting the [Azure Marketplace](https://aka.ms/AzureEdition).
1. Supply details during VM creation
- * Ensure that a supported _Windows Server Azure Edition_ image is selected in the Image dropdown. Use [this guide](automanage-windows-server-services-overview.md#getting-started-with-windows-server-azure-edition) to determine which images are supported.
- * On the Management tab under section ΓÇÿGuest OS updatesΓÇÖ, the checkbox for 'Enable hotpatch' will be selected. Patch orchestration options will be set to 'Azure-orchestrated'.
- * If you create a VM using [this link](https://aka.ms/AzureEdition), on the Management tab under section 'Azure Automanage', select 'Dev/Test' or 'Production' for 'Azure Automanage environment' to evaluate Automanage machine best practices while in preview.
+ * Ensure that a supported _Windows Server Azure Edition_ image is selected in the Image dropdown. See [Automanage for Windows Server services](automanage-windows-server-services-overview.md#getting-started-with-windows-server-azure-edition) to determine which images are supported.
+ * On the Management tab under section 'Guest OS updates', the checkbox for 'Enable hotpatch' will be selected. Patch orchestration options are set to 'Azure-orchestrated'.
+ * If you create a VM by visiting the [Azure Marketplace](https://aka.ms/AzureEdition), on the Management tab under section 'Azure Automanage', select 'Dev/Test' or 'Production' for 'Azure Automanage environment' to evaluate Automanage machine best practices while in preview.
1. Create your new VM
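If you create the VM programmatically instead of through the portal, the 'Enable hotpatch' checkbox corresponds to the guest patch settings on the VM resource. A hedged sketch using the Azure SDK for Python, with placeholder resource names:

```python
# Sketch: enable Azure-orchestrated patching with hotpatch on an existing
# Windows Server Azure Edition VM. Resource names are placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient
from azure.mgmt.compute.models import PatchSettings

client = ComputeManagementClient(DefaultAzureCredential(), "<subscription-id>")
vm = client.virtual_machines.get("<resource-group>", "<vm-name>")
vm.os_profile.windows_configuration.patch_settings = PatchSettings(
    patch_mode="AutomaticByPlatform",  # Azure-orchestrated patch orchestration
    enable_hotpatching=True,           # only valid on supported Azure Edition images
)
client.virtual_machines.begin_create_or_update(
    "<resource-group>", "<vm-name>", vm
).result()
```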
az provider register --namespace Microsoft.Compute
When [Automatic VM Guest Patching](../virtual-machines/automatic-vm-guest-patching.md) is enabled on a VM, the available Critical and Security patches are downloaded and applied automatically. This process kicks off automatically every month when new patches are released. Patch assessment and installation are automatic, and the process includes rebooting the VM as required.
-With Hotpatch enabled on supported _Windows Server Azure Edition_ VMs, most monthly security updates are delivered as hotpatches that don't require reboots. Latest Cumulative Updates sent on planned or unplanned baseline months will require VM reboots. Additional Critical or Security patches may also be available periodically which may require VM reboots.
+With hotpatch enabled on supported _Windows Server Azure Edition_ VMs, most monthly security updates are delivered as hotpatches that don't require reboots. Latest Cumulative Updates sent on planned or unplanned baseline months require VM reboots. Additional Critical or Security patches may also be available periodically, which may require VM reboots.
The VM is assessed automatically every few days and multiple times within any 30-day period to determine the applicable patches for that VM. This automatic assessment ensures that any missing patches are discovered at the earliest possible opportunity.
-Patches are installed within 30 days of the monthly patch releases, following [availability-first principles](../virtual-machines/automatic-vm-guest-patching.md#availability-first-updates). Patches are installed only during off-peak hours for the VM, depending on the time zone of the VM. The VM must be running during the off-peak hours for patches to be automatically installed. If a VM is powered off during a periodic assessment, the VM will be assessed and applicable patches will be installed automatically during the next periodic assessment when the VM is powered on. The next periodic assessment usually happens within a few days.
+Patches are installed within 30 days of the monthly patch releases, following [availability-first principles](../virtual-machines/automatic-vm-guest-patching.md#availability-first-updates). Patches are installed only during off-peak hours for the VM, depending on the time zone of the VM. The VM must be running during the off-peak hours for patches to be automatically installed. If a VM is powered off during a periodic assessment, the VM is assessed and applicable patches are installed automatically during the next periodic assessment when the VM is powered on. The next periodic assessment usually happens within a few days.
Definition updates and other patches not classified as Critical or Security won't be installed through automatic VM guest patching. ## Understanding the patch status for your VM
-To view the patch status for your VM, navigate to the **Guest + host updates** section for your VM in the Azure portal. Under the **Guest OS updates** section, click on ΓÇÿGo to Hotpatch (Preview)ΓÇÖ to view the latest patch status for your VM.
+To view the patch status for your VM, navigate to the **Guest + host updates** section for your VM in the Azure portal. Under the **Guest OS updates** section, select 'Go to Hotpatch (Preview)' to view the latest patch status for your VM.
-On this screen, you'll see the Hotpatch status for your VM. You can also review if there any available patches for your VM that haven't been installed. As described in the ΓÇÿPatch installationΓÇÖ section above, all security and critical updates will be automatically installed on your VM using [Automatic VM Guest Patching](../virtual-machines/automatic-vm-guest-patching.md) and no extra actions are required. Patches with other update classifications aren't automatically installed. Instead, they're viewable in the list of available patches under the ΓÇÿUpdate complianceΓÇÖ tab. You can also view the history of update deployments on your VM through the ΓÇÿUpdate historyΓÇÖ. Update history from the past 30 days is displayed, along with patch installation details.
+On this screen, you'll see the hotpatch status for your VM. You can also review if there are any available patches for your VM that haven't been installed. As described in the 'Patch installation' section above, all security and critical updates are automatically installed on your VM using [Automatic VM Guest Patching](../virtual-machines/automatic-vm-guest-patching.md) and no extra actions are required. Patches with other update classifications aren't automatically installed. Instead, they're viewable in the list of available patches under the 'Update compliance' tab. You can also view the history of update deployments on your VM through the 'Update history'. Update history from the past 30 days is displayed, along with patch installation details.
:::image type="content" source="media\automanage-hotpatch\hotpatch-management-ui.png" alt-text="Hotpatch Management.":::
Similar to on-demand assessment, you can also install patches on-demand for your
## Supported updates
-Hotpatch covers Windows Security updates and maintains parity with the content of security updates issued to in the regular (non-Hotpatch) Windows update channel.
+Hotpatch covers Windows Security updates and maintains parity with the content of security updates issued in the regular (non-hotpatch) Windows update channel.
-There are some important considerations to running a supported _Windows Server Azure Edition_ VM with Hotpatch enabled. Reboots are still required to install updates that aren't included in the Hotpatch program. Reboots are also required periodically after a new baseline has been installed. These reboots keep the VM in sync with non-security patches included in the latest cumulative update.
-* Patches that are currently not included in the Hotpatch program include non-security updates released for Windows, and non-Windows updates (such as .NET patches). These types of patches need to be installed during a baseline month, and will require a reboot.
+There are some important considerations to running a supported _Windows Server Azure Edition_ VM with hotpatch enabled. Reboots are still required to install updates that aren't included in the hotpatch program. Reboots are also required periodically after a new baseline has been installed. These reboots keep the VM in sync with non-security patches included in the latest cumulative update.
+* Patches that are currently not included in the hotpatch program include non-security updates released for Windows, and non-Windows updates (such as .NET patches). These types of patches need to be installed during a baseline month, and will require a reboot.
## Frequently asked questions
There are some important considerations to running a supported _Windows Server A
* Hotpatching works by establishing a baseline with a Windows Update Latest Cumulative Update, then builds upon that baseline with updates that don't require a reboot to take effect. The baseline is updated periodically with a new cumulative update. The cumulative update includes all security and quality updates and requires a reboot.
-### Why should I use Hotpatch?
+### Why should I use hotpatch?
-* When you use Hotpatch on a supported _Windows Server Azure Edition_ image, your VM will have higher availability (fewer reboots), and faster updates (smaller packages that are installed faster without the need to restart processes). This process results in a VM that is always up to date and secure.
+* When you use hotpatch on a supported _Windows Server Azure Edition_ image, your VM will have higher availability (fewer reboots), and faster updates (smaller packages that are installed faster without the need to restart processes). This process results in a VM that is always up to date and secure.
-### What types of updates are covered by Hotpatch?
+### What types of updates are covered by hotpatch?
* Hotpatch currently covers Windows security updates.
-### When will I receive the first Hotpatch update?
+### When will I receive the first hotpatch update?
* Hotpatch updates are typically released on the second Tuesday of each month. For more information, see below.
-### What will the Hotpatch schedule look like?
+### What will the hotpatch schedule look like?
-* Hotpatching works by establishing a baseline with a Windows Update Latest Cumulative Update, then builds upon that baseline with Hotpatch updates released monthly. Baselines will be released starting out every three months. See the image below for an example of an annual three-month schedule (including example unplanned baselines due to zero-day fixes).
+* Hotpatching works by establishing a baseline with a Windows Update Latest Cumulative Update, then builds upon that baseline with hotpatch updates released monthly. Baselines will initially be released every three months. See the image below for an example of an annual three-month schedule (including example unplanned baselines due to zero-day fixes).
:::image type="content" source="media\automanage-hotpatch\hotpatch-sample-schedule.png" alt-text="Hotpatch Sample Schedule.":::
-### Are reboots still needed for a VM enrolled in Hotpatch?
+### Are reboots still needed for a VM enrolled in hotpatch?
-* Reboots are still required to install updates not included in the Hotpatch program, and are required periodically after a baseline (Windows Update Latest Cumulative Update) has been installed. This reboot will keep your VM in sync with all the patches included in the cumulative update. Baselines (which require a reboot) will start out on a three-month cadence and increase over time.
+* Reboots are still required to install updates not included in the hotpatch program, and are required periodically after a baseline (Windows Update Latest Cumulative Update) has been installed. This reboot will keep your VM in sync with all the patches included in the cumulative update. Baselines (which require a reboot) will start out on a three-month cadence and increase over time.
-### Are my applications affected when a Hotpatch update is installed?
+### Are my applications affected when a hotpatch update is installed?
-* Because Hotpatch patches the in-memory code of running processes without the need to restart the process, your applications will be unaffected by the patching process. Note that this is separate from any potential performance and functionality implications of the patch itself.
+* Because hotpatch patches the in-memory code of running processes without the need to restart the process, your applications are unaffected by the patching process. This is separate from any potential performance and functionality implications of the patch itself.
-### Can I turn off Hotpatch on my VM?
+### Can I turn off hotpatch on my VM?
-* You can turn off Hotpatch on a VM via the Azure portal. Turning off Hotpatch will unenroll the VM from Hotpatch, which reverts the VM to typical update behavior for Windows Server. Once you unenroll from Hotpatch on a VM, you can re-enroll that VM when the next Hotpatch baseline is released.
+* You can turn off hotpatch on a VM via the Azure portal. Turning off hotpatch will unenroll the VM from hotpatch, which reverts the VM to typical update behavior for Windows Server. Once you unenroll from hotpatch on a VM, you can re-enroll that VM when the next hotpatch baseline is released.
### Can I upgrade from my existing Windows Server OS? * Yes, upgrading from existing versions of Windows Server (such as Windows Server 2016 or Windows Server 2019) to _Windows Server 2022 Datacenter: Azure Edition_ is supported.
-### How can I get troubleshooting support for Hotpatching?
+### How can I get troubleshooting support for hotpatching?
* You can file a [technical support case ticket](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/newsupportrequest). For the Service option, search for and select **Virtual Machine running Windows** under Compute. Select **Azure Features** for the problem type and **Automatic VM Guest Patching** for the problem subtype.
automanage Automanage Windows Server Services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automanage/automanage-windows-server-services-overview.md
Previously updated : 02/13/2022 Last updated : 04/18/2023
Azure Automanage for Windows Server brings new capabilities specifically to _Win
- SMB over QUIC - Extended network for Azure
-<!--
> [!IMPORTANT]
-> Hotpatch is currently in Public Preview. An opt-in procedure is needed to use the Hotpatch capability described below.
-> This preview is provided without a service level agreement, and is not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
-> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
>
+> Hotpatch is currently in Public Preview. An opt-in procedure is needed to use the Hotpatch capability described below. This preview is provided without a service level agreement, and is not recommended for production workloads. Certain features might not be supported or might have constrained capabilities. For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
Automanage for Windows Server capabilities can be found in one or more of these _Windows Server Azure Edition_ images:
Capabilities vary by image, see [getting started](#getting-started-with-windows-
Hotpatch is available on the following images:
+- Windows Server 2022 Datacenter: Azure Edition (Desktop Experience)
- Windows Server 2022 Datacenter: Azure Edition (Core)
-Hotpatch gives you the ability to apply security updates on your VM without rebooting. Additionally, Automanage for Windows Server automates the onboarding, configuration, and orchestration of hot patching. To learn more, see [Hotpatch](automanage-hotpatch.md).
+Hotpatch gives you the ability to apply security updates on your VM without rebooting. Additionally, Automanage for Windows Server automates the onboarding, configuration, and orchestration of hotpatching. To learn more, see [Hotpatch](automanage-hotpatch.md).
### SMB over QUIC
SMB over QUIC offers an "SMB VPN" for telecommuters, mobile device users, and br
SMB over QUIC is also integrated with [Automanage machine best practices for Windows Server](automanage-windows-server.md) to help make SMB over QUIC management easier. QUIC uses certificates to provide its encryption and organizations often struggle to maintain complex public key infrastructures. Automanage machine best practices ensure that certificates do not expire without warning and that SMB over QUIC stays enabled for maximum continuity of service. To learn more, see [SMB over QUIC](/windows-server/storage/file-server/smb-over-quic) and [SMB over QUIC management with Automanage machine best practices](automanage-smb-over-quic.md).
-
### Extended network for Azure
Extended Network for Azure is available on the following images:
Azure Extended Network enables you to stretch an on-premises subnet into Azure to let on-premises virtual machines keep their original on-premises private IP addresses when migrating to Azure. To learn more, see [Azure Extended Network](/windows-server/manage/windows-admin-center/azure/azure-extended-network). - ## Getting started with Windows Server Azure Edition
-It's important to consider up front, which Automanage for Windows Server capabilities you would like to use, then choose a corresponding VM image that supports all of those capabilities. Some of the _Windows Server Azure Edition_ images support only a subset of capabilities, see the table below for more details.
+It's important to consider up front which Automanage for Windows Server capabilities you would like to use, and then choose a corresponding VM image that supports all of those capabilities. Some of the _Windows Server Azure Edition_ images support only a subset of capabilities.
> [!NOTE] > If you would like to preview the upcoming version of **Windows Server Azure Edition**, see [Windows Server VNext Datacenter: Azure Edition](windows-server-azure-edition-vnext.md).
It's important to consider up front, which Automanage for Windows Server capabil
To start using Automanage for Windows Server capabilities on a new VM, use your preferred method to create an Azure VM, and select the _Windows Server Azure Edition_ image that corresponds to the set of [capabilities](#getting-started-with-windows-server-azure-edition) that you would like to use.
-<!--
> [!IMPORTANT] > Some capabilities have specific configuration steps to perform during VM creation, and some capabilities that are in preview have specific opt-in and portal viewing requirements. See the individual capability topics above to learn more about using that capability with your VM.> ## Next steps > [!div class="nextstepaction"]
-> [Learn more about Azure Automanage](overview-about.md)
+> [Learn more about Azure Automanage](overview-about.md)
azure-functions Functions App Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-app-settings.md
Valid values:
| `node` | [JavaScript](functions-reference-node.md)<br/>[TypeScript](functions-reference-node.md#typescript) | | `powershell` | [PowerShell](functions-reference-powershell.md) | | `python` | [Python](functions-reference-python.md) |
+| `custom` | [Other](functions-custom-handlers.md) |
## FUNCTIONS\_WORKER\_SHARED\_MEMORY\_DATA\_TRANSFER\_ENABLED
azure-functions Functions Node Upgrade V4 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-node-upgrade-v4.md
The http request and response types are now a subset of the [fetch standard](htt
If you see the following error, make sure you [set the `EnableWorkerIndexing` flag](#enable-v4-programming-model) and you're using the minimum version of all [requirements](#requirements): > No job functions found. Try making your job classes and methods public. If you're using binding extensions (e.g. Azure Storage, ServiceBus, Timers, etc.) make sure you've called the registration method for the extension(s) in your startup code (e.g. builder.AddAzureStorage(), builder.AddServiceBus(), builder.AddTimers(), etc.).+
+If you see the following error, make sure you're using Node.js version 18.x:
+
+> System.Private.CoreLib: Exception while executing function: Functions.httpTrigger1. System.Private.CoreLib: Result: Failure
+> Exception: undici_1.Request is not a constructor
+
+For any other issues or feedback, feel free to file an issue on our [GitHub repo](https://github.com/Azure/azure-functions-nodejs-library/issues).
azure-maps About Azure Maps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/about-azure-maps.md
For more information, see the [Geolocation service] documentation.
### Render service
-[Render service V2] introduces a new version of the [Get Map Tile V2 API] that supports using Azure Maps tiles not only in the Azure Maps SDKs but other map controls as well. It includes raster and vector tile formats, 256x256 or 512x512 tile sizes (where applicable) and numerous map types such as road, weather, contour, or map tiles. For a complete list, see [TilesetID] in the REST API documentation. It's recommended that you use Render service V2 instead of Render service V1. You're required to display the appropriate copyright attribution on the map anytime you use the Azure Maps Render service V2, either as basemaps or layers, in any third-party map control. For more information, see [How to use the Get Map Attribution API].
+[Render V2 service] introduces a new version of the [Get Map Tile V2 API] that supports using Azure Maps tiles not only in the Azure Maps SDKs but in other map controls as well. It includes raster and vector tile formats, 256x256 or 512x512 tile sizes (where applicable), and numerous map types such as road, weather, contour, or map tiles. For a complete list, see [TilesetID] in the REST API documentation. It's recommended that you use the Render V2 service instead of the Render V1 service. You're required to display the appropriate copyright attribution on the map anytime you use the Azure Maps Render V2 service, either as basemaps or layers, in any third-party map control. For more information, see [How to use the Get Map Attribution API].
### Route service
Stay up to date on Azure Maps:
[Geolocation service]: /rest/api/maps/geolocation [Get Map Tile V2 API]: /rest/api/maps/render-v2/get-map-tile [Get Weather along route API]: /rest/api/maps/weather/getweatheralongroute
-[Render service V2]: /rest/api/maps/render-v2
+[Render V2 service]: /rest/api/maps/render-v2
[REST APIs]: /rest/api/maps/ [Route service]: /rest/api/maps/route [routeset API]: /rest/api/maps/v20220901preview/routeset
azure-maps How To Secure Device Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-secure-device-code.md
Title: How to secure input constrained device with Azure AD and Azure Maps REST APIs
+ Title: How to secure an input constrained device using Azure AD and Azure Maps REST API
-description: How to configure a browser-less application which supports sign-in to Azure AD and calls Azure Maps REST APIs.
+description: How to configure a browser-less application that supports sign-in to Azure AD and calls Azure Maps REST API.
Last updated 06/12/2020
-# Secure an input constrained device with Azure AD and Azure Maps REST APIs
+# Secure an input constrained device by using Azure Active Directory (Azure AD) and Azure Maps REST APIs
-This guide discusses how to secure public applications or devices that cannot securely store secrets or accept browser input. These types of applications fall under the category of IoT or internet of things. Some examples of these applications may include: Smart TV devices or sensor data emitting applications.
+This guide discusses how to secure public applications or devices that can't securely store secrets or accept browser input. These types of applications fall under the internet of things (IoT) category. Examples include Smart TVs and sensor data emitting applications.
[!INCLUDE [authentication details](./includes/view-authentication-details.md)] ## Create an application registration in Azure AD > [!NOTE]
-> * **Prerequisite Reading:** [Scenario: Desktop app that calls web APIs](../active-directory/develop/scenario-desktop-overview.md)
+>
+> * **Prerequisite Reading:** [Scenario: Desktop app that calls web APIs]
> * The following scenario uses the device code flow, which does not involve a web browser to acquire a token.
-Create the device based application in Azure AD to enable Azure AD sign in. This application will be granted access to Azure Maps REST APIs.
+Create the device-based application in Azure AD to enable Azure AD sign-in. This application is granted access to Azure Maps REST APIs.
1. In the Azure portal, in the list of Azure services, select **Azure Active Directory** > **App registrations** > **New registration**.
- > [!div class="mx-imgBorder"]
- > ![App registration](./media/how-to-manage-authentication/app-registration.png)
+ :::image type="content" source="./media/how-to-manage-authentication/app-registration.png" alt-text="A screenshot showing application registration in Azure AD.":::
-2. Enter a **Name**, choose **Accounts in this organizational directory only** as the **Supported account type**. In **Redirect URIs**, specify **Public client / native (mobile & desktop)** then add `https://login.microsoftonline.com/common/oauth2/nativeclient` to the value. For more details please see Azure AD [Desktop app that calls web APIs: App registration](../active-directory/develop/scenario-desktop-app-registration.md). Then **Register** the application.
+2. Enter a **Name**, choose **Accounts in this organizational directory only** as the **Supported account type**. In **Redirect URIs**, specify **Public client / native (mobile & desktop)** then add `https://login.microsoftonline.com/common/oauth2/nativeclient` to the value. For more information, see Azure AD [Desktop app that calls web APIs: App registration]. Then **Register** the application.
- > [!div class="mx-imgBorder"]
- > ![Add app registration details for name and redirect uri](./media/azure-maps-authentication/devicecode-app-registration.png)
+ :::image type="content" source="./media/azure-maps-authentication/devicecode-app-registration.png" alt-text="A screenshot showing the settings used to register an application.":::
-3. Navigate to **Authentication** and enable **Treat application as a public client**. This will enable device code authentication with Azure AD.
+3. Navigate to **Authentication** and turn on **Treat application as a public client** to enable device code authentication with Azure AD.
- > [!div class="mx-imgBorder"]
- > ![Enable app registration as public client](./media/azure-maps-authentication/devicecode-public-client.png)
+ :::image type="content" source="./media/azure-maps-authentication/devicecode-public-client.png" alt-text="A screenshot showing the advanced settings used to specify treating the application as a public client.":::
4. To assign delegated API permissions to Azure Maps, go to the application. Then select **API permissions** > **Add a permission**. Under **APIs my organization uses**, search for and select **Azure Maps**.
- > [!div class="mx-imgBorder"]
- > ![Add app API permissions](./media/how-to-manage-authentication/app-permissions.png)
+ :::image type="content" source="./media/how-to-manage-authentication/app-permissions.png" alt-text="A screenshot showing where you request API permissions.":::
5. Select the check box next to **Access Azure Maps**, and then select **Add permissions**.
- > [!div class="mx-imgBorder"]
- > ![Select app API permissions](./media/how-to-manage-authentication/select-app-permissions.png)
+ :::image type="content" source="./media/how-to-manage-authentication/select-app-permissions.png" alt-text="A screenshot showing where you specify the app permissions you require.":::
-6. Configure Azure role-based access control (Azure RBAC) for users or groups. See [Grant role-based access for users to Azure Maps](#grant-role-based-access-for-users-to-azure-maps).
+6. Configure Azure role-based access control (Azure RBAC) for users or groups. For more information, see [Grant role-based access for users to Azure Maps].
-7. Add code for acquiring token flow in the application, for implementation details see [Device code flow](../active-directory/develop/scenario-desktop-acquire-token-device-code-flow.md). When acquiring tokens, reference the scope: `user_impersonation` which was selected on earlier steps.
+7. Add code to acquire tokens by using the device code flow in the application. For implementation details, see [Device code flow]. When acquiring tokens, reference the `user_impersonation` scope that was selected in the earlier steps.
> [!Tip] > Use Microsoft Authentication Library (MSAL) to acquire access tokens.
- > See recommendations on [Desktop app that calls web APIs: Code configuration](../active-directory/develop/scenario-desktop-app-configuration.md)
+ > For more information, see [Desktop app that calls web APIs: Code configuration] in the Azure Active Directory documentation.
8. Compose the HTTP request with the acquired token from Azure AD, and send the request with a valid HTTP client.
x-ms-client-id: 30d7cc….9f55
Authorization: Bearer eyJ0e….HNIVN ```
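As a minimal sketch only (not from the article), steps 7 and 8 together might look like the following when using MSAL for Node.js; the client ID, tenant ID, Azure Maps client ID, and search query are placeholder values:

```js
const msal = require('@azure/msal-node');

const pca = new msal.PublicClientApplication({
  auth: {
    clientId: '<app-registration-client-id>',                   // placeholder
    authority: 'https://login.microsoftonline.com/<tenant-id>'  // placeholder
  }
});

async function callAzureMaps() {
  // Step 7: acquire a token with the device code flow; MSAL passes the
  // user-facing message (code + verification URL) to the callback.
  const result = await pca.acquireTokenByDeviceCode({
    scopes: ['https://atlas.microsoft.com/user_impersonation'],
    deviceCodeCallback: (response) => console.log(response.message)
  });

  // Step 8: compose the HTTP request with the acquired token.
  const url = 'https://atlas.microsoft.com/search/address/json?api-version=1.0&query='
    + encodeURIComponent('400 Broad St, Seattle, WA'); // placeholder query
  const res = await fetch(url, {
    headers: {
      'x-ms-client-id': '<azure-maps-client-id>', // placeholder
      'Authorization': `Bearer ${result.accessToken}`
    }
  });
  console.log(await res.json());
}

callAzureMaps();
```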
- The sample request body below is in GeoJSON:
+ The following sample request body is in GeoJSON:
```json {
Operation-Location: https://us.atlas.microsoft.com/mapData/operations/{udid}?api
Access-Control-Expose-Headers: Operation-Location ``` - [!INCLUDE [grant role-based access to users](./includes/grant-rbac-users.md)] ## Next steps Find the API usage metrics for your Azure Maps account:+ > [!div class="nextstepaction"]
-> [View usage metrics](how-to-view-api-usage.md)
+> [View usage metrics]
+
+[Desktop app that calls web APIs: App registration]: ../active-directory/develop/scenario-desktop-app-registration.md
+[Desktop app that calls web APIs: Code configuration]: ../active-directory/develop/scenario-desktop-app-configuration.md
+[Device code flow]: ../active-directory/develop/scenario-desktop-acquire-token-device-code-flow.md
+[Grant role-based access for users to Azure Maps]: #grant-role-based-access-for-users-to-azure-maps
+[Scenario: Desktop app that calls web APIs]: ../active-directory/develop/scenario-desktop-overview.md
+[View usage metrics]: how-to-view-api-usage.md
azure-maps How To Secure Spa App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-secure-spa-app.md
# How to secure a single-page web application with non-interactive sign-in
-This article describes how to secure a single-page web application with Azure Active Directory (Azure AD), when the user isn't able to sign in to Azure AD.
+Secure a single-page web application with Azure Active Directory (Azure AD), even when the user isn't able to sign in to Azure AD.
-To create this non-interactive authentication flow, we'll create an Azure Function secure web service that's responsible for acquiring access tokens from Azure AD. This web service will be exclusively available only to your single-page web application.
+To create this non-interactive authentication flow, first create an Azure Function secure web service that's responsible for acquiring access tokens from Azure AD. This web service is available exclusively to your single-page web application.
[!INCLUDE [authentication details](./includes/view-authentication-details.md)]
-> [!Tip]
+> [!TIP]
> Azure Maps can support access tokens from user sign-on or interactive flows. You can use interactive flows for a more restricted scope of access revocation and secret management. ## Create an Azure function To create a secured web service application that's responsible for authentication to Azure AD:
-1. Create a function in the Azure portal. For more information, see [Getting started with Azure Functions](../azure-functions/functions-get-started.md).
+1. Create a function in the Azure portal. For more information, see [Getting started with Azure Functions].
-2. Configure CORS policy on the Azure function to be accessible by the single-page web application. The CORS policy secures browser clients to the allowed origins of your web application. For more information, see [Add CORS functionality](../app-service/app-service-web-tutorial-rest-api.md#add-cors-functionality).
+2. Configure the CORS policy on the Azure function so that it's accessible from the single-page web application. The CORS policy restricts browser clients to the allowed origins of your web application. For more information, see [Add CORS functionality].
-3. [Add a system-assigned identity](../app-service/overview-managed-identity.md?tabs=dotnet#add-a-system-assigned-identity) on the Azure function to enable creation of a service principal to authenticate to Azure AD.
+3. [Add a system-assigned identity] on the Azure function to enable creation of a service principal to authenticate to Azure AD.
-4. Grant role-based access for the system-assigned identity to the Azure Maps account. For details, see [Grant role-based access](#grant-role-based-access-for-users-to-azure-maps).
+4. Grant role-based access for the system-assigned identity to the Azure Maps account. For more information, see [Grant role-based access].
-5. Write code for the Azure function to obtain Azure Maps access tokens using system-assigned identity with one of the supported mechanisms or the REST protocol. For more information, see [Obtain tokens for Azure resources](../app-service/overview-managed-identity.md?tabs=dotnet#add-a-system-assigned-identity)
+5. Write code for the Azure function to obtain Azure Maps access tokens by using the system-assigned identity with one of the supported mechanisms or the REST protocol. For more information, see [Obtain tokens for Azure resources].
Here's an example REST protocol:
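A minimal sketch of such a request, assuming the function uses the App Service managed identity endpoint (`IDENTITY_ENDPOINT` and `IDENTITY_HEADER` are environment variables injected by the platform):

```js
// Sketch: exchange the function's system-assigned identity for an
// Azure Maps access token using the App Service identity endpoint.
const tokenUrl = `${process.env.IDENTITY_ENDPOINT}?resource=${encodeURIComponent('https://atlas.microsoft.com/')}&api-version=2019-08-01`;

const tokenResponse = await fetch(tokenUrl, {
  headers: { 'X-IDENTITY-HEADER': process.env.IDENTITY_HEADER }
});
const { access_token } = await tokenResponse.json();
// access_token can now be returned to the single-page application.
```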
To create a secured web service application that's responsible for authenticatio
6. Configure security for the Azure function HttpTrigger:
- 1. [Create a function access key](../azure-functions/functions-bindings-http-webhook-trigger.md?tabs=csharp#authorization-keys)
+ 1. [Create a function access key]
1. [Secure HTTP endpoint](../azure-functions/functions-bindings-http-webhook-trigger.md?tabs=csharp#secure-an-http-endpoint-in-production) for the Azure function in production. 7. Configure the web application with the Azure Maps Web SDK.
Find the API usage metrics for your Azure Maps account:
Explore other samples that show how to integrate Azure AD with Azure Maps: > [!div class="nextstepaction"] > [Azure Maps Samples](https://github.com/Azure-Samples/Azure-Maps-AzureAD-Samples/tree/master/src/ClientGrant)+
+[Getting started with Azure Functions]: ../azure-functions/functions-get-started.md
+[Add CORS functionality]: ../app-service/app-service-web-tutorial-rest-api.md#add-cors-functionality
+[Add a system-assigned identity]: ../app-service/overview-managed-identity.md?tabs=dotnet#add-a-system-assigned-identity
+[Grant role-based access]: #grant-role-based-access-for-users-to-azure-maps
+[Obtain tokens for Azure resources]: ../app-service/overview-managed-identity.md?tabs=dotnet#add-a-system-assigned-identity
+[Create a function access key]: ../azure-functions/functions-bindings-http-webhook-trigger.md?tabs=csharp#authorization-keys
azure-maps How To Secure Spa Users https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-secure-spa-users.md
Title: How to secure a single page application with user sign-in
-description: How to configure a single page application which supports Azure AD single-sign-on with Azure Maps Web SDK.
+description: How to configure a single page application that supports Azure AD single-sign-on with Azure Maps Web SDK.
Last updated 06/12/2020
# Secure a single page application with user sign-in
-The following guide pertains to an application which is hosted on a content server or has minimal web server dependencies. The application provides protected resources secured only to Azure AD users. The objective of the scenario is to enable the web application to authenticate to Azure AD and call Azure Maps REST APIs on behalf of the user.
+The following guide pertains to an application that is hosted on a content server or has minimal web server dependencies. The application provides protected resources secured only to Azure AD users. The objective of the scenario is to enable the web application to authenticate to Azure AD and call Azure Maps REST APIs on behalf of the user.
[!INCLUDE [authentication details](./includes/view-authentication-details.md)]
Create the web application in Azure AD for users to sign in. The web application
1. In the Azure portal, in the list of Azure services, select **Azure Active Directory** > **App registrations** > **New registration**.
- > [!div class="mx-imgBorder"]
- > ![App registration](./media/how-to-manage-authentication/app-registration.png)
+ :::image type="content" source="./media/how-to-manage-authentication/app-registration.png" alt-text="Screenshot showing the new registration page in the App registrations blade in Azure Active Directory.":::
-2. Enter a **Name**, choose a **Support account type**, provide a redirect URI which will represent the url which Azure AD will issue the token and is the url where the map control is hosted. For a detailed sample please see [Azure Maps Azure AD samples](https://github.com/Azure-Samples/Azure-Maps-AzureAD-Samples/tree/master/src/ImplicitGrant). Then select **Register**.
+2. Enter a **Name**, choose a **Supported account type**, and provide a redirect URI that represents the URL to which Azure AD issues the token; this is also the URL where the map control is hosted. For a detailed sample, see [Azure Maps Azure AD samples](https://github.com/Azure-Samples/Azure-Maps-AzureAD-Samples/tree/master/src/ImplicitGrant). Then select **Register**.
3. To assign delegated API permissions to Azure Maps, go to the application. Then under **App registrations**, select **API permissions** > **Add a permission**. Under **APIs my organization uses**, search for and select **Azure Maps**.
- > [!div class="mx-imgBorder"]
- > ![Add app API permissions](./media/how-to-manage-authentication/app-permissions.png)
+ :::image type="content" source="./media/how-to-manage-authentication/app-permissions.png" alt-text="Screenshot showing a list of APIs my organization uses.":::
4. Select the check box next to **Access Azure Maps**, and then select **Add permissions**.
- > [!div class="mx-imgBorder"]
- > ![Select app API permissions](./media/how-to-manage-authentication/select-app-permissions.png)
+ :::image type="content" source="./media/how-to-manage-authentication/select-app-permissions.png" alt-text="Screenshot showing the request app API permissions screen.":::
5. Enable `oauth2AllowImplicitFlow`. To enable it, in the **Manifest** section of your app registration, set `oauth2AllowImplicitFlow` to `true`.
Create the web application in Azure AD for users to sign in. The web application
``` 7. Configure Azure role-based access control (Azure RBAC) for users or groups. See the [following sections to enable Azure RBAC](#grant-role-based-access-for-users-to-azure-maps).
-
+ [!INCLUDE [grant role access to users](./includes/grant-rbac-users.md)] ## Next steps
azure-maps How To Show Attribution https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-show-attribution.md
Title: Show the correct map copyright attribution information
-description: The map copyright attribution information must be displayed in any applications that use the Render V2 API, including web and mobile applications. In this article, you'll learn how to display the correct attribution every time you display or update a tile.
+description: The map copyright attribution information must be displayed in all applications that use the Render V2 API, including web and mobile applications. This article discusses how to display the correct attribution every time you display or update a tile.
Last updated 3/16/2022
# Show the correct copyright attribution
-When using the [Azure Maps Render service V2], either as a basemap or layer, you're required to display the appropriate data provider copyright attribution on the map. This information should be displayed in the lower right-hand corner of the map.
+When using the Azure Maps [Render V2 service], either as a basemap or layer, you're required to display the appropriate data provider copyright attribution on the map. This information should be displayed in the lower right-hand corner of the map.
-The above image is an example of a map from the Render service V2, displaying the road style. It shows the copyright attribution in the lower right-hand corner of the map.
+The above image is an example of a map from the Render V2 service, displaying the road style. It shows the copyright attribution in the lower right-hand corner of the map.
-The above image is an example of a map from the Render service V2, displaying the satellite style. note that there's another data provider listed.
+The above image is an example of a map from the Render V2 service, displaying the satellite style. Note that there's another data provider listed.
## The Get Map Attribution API
The [Get Map Attribution API] enables you to request map copyright attribution i
The map copyright attribution information must be displayed on the map in any applications that use the Render V2 API, including web and mobile applications.
-The attribution is automatically displayed and updated on the map When using any of the Azure Maps SDKs. This includes the [Web SDK], [Android SDK] and the [iOS SDK].
+The attribution is automatically displayed and updated on the map when using any of the Azure Maps SDKs, including the [Web], [Android], and [iOS] SDKs.
When using map tiles from the Render service in a third-party map, you must display and update the copyright attribution information on the map.
Since the data providers can differ depending on the *region* and *zoom* level,
### How to use the Get Map Attribution API
-You'll need the following information to run the `attribution` command:
+You need the following information to run the `attribution` command:
| Parameter | Type | Description | | -- | | -- |
https://atlas.microsoft.com/map/attribution?subscription-key={Your-Azure-Maps-Su
## Additional information
-* For more information, see the [Azure Maps Render service V2] documentation.
+* For more information, see the [Render V2 service] documentation.
-[Azure Maps Render service V2]: /rest/api/maps/render-v2
+[Android]: how-to-use-android-map-control-library.md
+[Authentication with Azure Maps]: azure-maps-authentication.md
[Get Map Attribution API]: /rest/api/maps/render-v2/get-map-attribution
-[Web SDK]: how-to-use-map-control.md
-[Android SDK]: how-to-use-android-map-control-library.md
-[iOS SDK]: how-to-use-ios-map-control-library.md
-[Tileset Create API]: /rest/api/maps/v2/tileset/create
[Get Map Attribution]: /rest/api/maps/render-v2/get-map-attribution#tilesetid
+[iOS]: how-to-use-ios-map-control-library.md
+[Render V2 service]: /rest/api/maps/render-v2
+[Tileset Create API]: /rest/api/maps/v2/tileset/create
[TilesetID]: /rest/api/maps/render-v2/get-map-attribution#tilesetid
-[Zoom levels and tile grid]: zoom-levels-and-tile-grid.md
-[Authentication with Azure Maps]: azure-maps-authentication.md
+[Web]: how-to-use-map-control.md
+[Zoom levels and tile grid]: zoom-levels-and-tile-grid.md
azure-maps How To Use Best Practices For Routing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-use-best-practices-for-routing.md
# Best practices for Azure Maps Route service
-The Route Directions and Route Matrix APIs in Azure Maps [Route service] can be used to calculate the estimated arrival times (ETAs) for each requested route. Route APIs consider factors such as real-time traffic information and historic traffic data, like the typical road speeds on the requested day of the week and time of day. The APIs return the shortest or fastest routes available to multiple destinations at a time in sequence or in optimized order, based on time or distance. Users can also request specialized routes and details for walkers, bicyclists, and commercial vehicles like trucks. In this article, we'll share the best practices to call Azure Maps [Route service], and you'll learn how-to:
+The Route Directions and Route Matrix APIs in Azure Maps [Route service] can be used to calculate the estimated arrival times (ETAs) for each requested route. Route APIs consider factors such as real-time traffic information and historic traffic data, like the typical road speeds on the requested date and time. The APIs return the shortest or fastest routes available to multiple destinations at a time in sequence or in optimized order, based on time or distance. Users can also request specialized routes and details for walkers, bicyclists, and commercial vehicles like trucks. This article discusses best practices for calling the Azure Maps [Route service], including how to:
* Choose between the Route Directions APIs and the Matrix Routing API * Request historic and predicted travel times, based on real-time and historical traffic data
This article uses the [Postman] application to build REST calls, but you can cho
## Choose between Route Directions and Matrix Routing
-The Route Directions APIs return instructions including the travel time and the coordinates for a route path. The Route Matrix API lets you calculate the travel time and distances for a set of routes that are defined by origin and destination locations. For every given origin, the Matrix API calculates the cost (travel time and distance) of routing from that origin to every given destination. These API allow you to specify parameters such as the desired departure time, arrival times, and the vehicle type, like car or truck. They all use real-time or predictive traffic data accordingly to return the most optimal routes.
+The Route Directions APIs return instructions including the travel time and the coordinates for a route path. The Route Matrix API lets you calculate the travel time and distances for a set of routes defined by origin and destination locations. For every given origin, the Matrix API calculates the cost (travel time and distance) of routing from that origin to every given destination. These APIs allow you to specify parameters such as the desired departure time, arrival times, and the vehicle type, like car or truck. They all use real-time or predictive traffic data accordingly to return the most optimal routes.
Consider calling Route Directions APIs if your scenario is to:
Consider calling Matrix Routing API if your scenario is to:
* Sort potential routes by their actual travel distance or time. The Matrix API returns only travel times and distances for each origin and destination combination. * Cluster data based on travel time or distances. For example, your company has 50 employees; find all employees that live within a 20-minute drive time from your office.
-Here is a comparison to show some capabilities of the Route Directions and Matrix APIs:
+Here's a comparison to show some capabilities of the Route Directions and Matrix APIs:
| Azure Maps API | Max number of queries in the request | Avoid areas | Truck and electric vehicle routing | Waypoints and Traveling Salesman optimization | Supporting points | | :--: | :--: | :--: | :--: | :--: | :--: |
To learn more about electric vehicle routing capabilities, see our tutorial on h
## Request historic and real-time data
-By default, the Route service assumes the traveling mode is a car and the departure time is now. It returns route based on real-time traffic conditions unless a route calculation request specifies otherwise. Fixed time-dependent traffic restrictions, like 'Left turns aren't allowed between 4:00 PM to 6:00 PM' are captured and will be considered by the routing engine. Road closures, like roadworks, will be considered unless you specifically request a route that ignores the current live traffic. To ignore the current traffic, set `traffic` to `false` in your API request.
+By default, the Route service assumes the traveling mode is a car and the departure time is now. It returns a route based on real-time traffic conditions unless a route calculation request specifies otherwise. The routing engine factors in fixed time-dependent traffic restrictions, like 'Left turns aren't allowed between 4:00 PM and 6:00 PM'. Road closures, like roadworks, are considered unless you specifically request a route that ignores the current live traffic. To ignore the current traffic, set `traffic` to `false` in your API request.
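For example, a request that ignores live traffic might look like the following (an illustrative variant of the sample queries later in this article, with a placeholder subscription key):

```http
https://atlas.microsoft.com/route/directions/json?subscription-key={Your-Azure-Maps-Subscription-key}&api-version=1.0&query=51.368752,-0.118332:51.385426,-0.128929&travelMode=car&traffic=false
```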
-The route calculation **travelTimeInSeconds** value includes the delay due to traffic. It's generated by leveraging the current and historic travel time data, when departure time is set to now. If your departure time is set in the future, the APIs return predicted travel times based on historical data.
+The route calculation **travelTimeInSeconds** value includes the delay due to traffic. It's generated by using the current and historic travel time data when the departure time is set to now. If your departure time is set in the future, the APIs return predicted travel times based on historical data.
-If you include the **computeTravelTimeFor=all** parameter in your request, then the summary element in the response will have the following additional fields including historical traffic conditions:
+If you include the **computeTravelTimeFor=all** parameter in your request, then the summary element in the response has the following fields, including historical traffic conditions:
| Element | Description| | : | : |
In the first example below the departure time is set to the future, at the time
https://atlas.microsoft.com/route/directions/json?subscription-key={Your-Azure-Maps-Subscription-key}&api-version=1.0&query=51.368752,-0.118332:51.385426,-0.128929&travelMode=car&traffic=true&departAt=2025-03-29T08:00:20&computeTravelTimeFor=all ```
-The response contains a summary element, like the one below. Because the departure time is set to the future, the **trafficDelayInSeconds** value is zero. The **travelTimeInSeconds** value is calculated using time-dependent historic traffic data. So, in this case, the **travelTimeInSeconds** value is equal to the **historicTrafficTravelTimeInSeconds** value.
+The response contains a summary element, like the following example. Because the departure time is set to the future, the **trafficDelayInSeconds** value is zero. The **travelTimeInSeconds** value is calculated using time-dependent historic traffic data. So, in this case, the **travelTimeInSeconds** value is equal to the **historicTrafficTravelTimeInSeconds** value.
```json "summary": {
The response contains a summary element, like the one below. Because the departu
### Sample query
-In the second example below, we have a real-time routing request, where departure time is now. It's not explicitly specified in the URL because it's the default value.
+In the next example, we have a real-time routing request, where departure time is now. It's not explicitly specified in the URL because it's the default value.
```http https://atlas.microsoft.com/route/directions/json?subscription-key={Your-Azure-Maps-Subscription-key}&api-version=1.0&query=47.6422356,-122.1389797:47.6641142,-122.3011268&travelMode=car&traffic=true&computeTravelTimeFor=all ```
-The response contains a summary as shown below. Because of congestions, the **trafficDelaysInSeconds** value is greater than zero. It's also greater than **historicTrafficTravelTimeInSeconds**.
+The response contains a summary as shown in the following example. Because of congestion, the **trafficDelaysInSeconds** value is greater than zero. It's also greater than **historicTrafficTravelTimeInSeconds**.
```json "summary": {
The response contains a summary as shown below. Because of congestions, the **tr
## Request route and leg details
-By default, the Route service will return an array of coordinates. The response will contain the coordinates that make up the path in a list named `points`. Route response also includes the distance from the start of the route and the estimated elapsed time. These values can be used to calculate the average speed for the entire route.
+By default, the Route service returns an array of coordinates. The response contains the coordinates that make up the path in a list named `points`. Route response also includes the distance from the start of the route and the estimated elapsed time. These values can be used to calculate the average speed for the entire route.
The following image shows the `points` element.
The Route API returns directions that accommodate the dimensions of the truck an
### Sample query
-Changing the US Hazmat Class, from the above query, will result in a different route to accommodate this change.
+Changing the US Hazmat Class from the above query results in a different route to accommodate this change.
```http https://atlas.microsoft.com/route/directions/json?subscription-key={Your-Azure-Maps-Subscription-key}&api-version=1.0&vehicleWidth=2&vehicleHeight=2&vehicleCommercial=true&vehicleLoadType=USHazmatClass9&travelMode=truck&instructionsType=text&query=51.368752,-0.118332:41.385426,-0.128929 ```
-The response below is for a truck carrying a class 9 hazardous material, which is less dangerous than a class 1 hazardous material. When you expand the `guidance` element to read the directions, you'll notice that the directions aren't the same. There are more route instructions for the truck carrying class 1 hazardous material.
+The following response is for a truck carrying a class 9 hazardous material, which is less dangerous than a class 1 hazardous material. When you expand the `guidance` element to read the directions, notice that the directions aren't the same. There are more route instructions for the truck carrying class 1 hazardous material.
![Truck with class 9 hazwaste](media/how-to-use-best-practices-for-routing/truck-with-hazwaste9-img.png)
The response contains the sections that are suitable for traffic along the given
![Traffic sections](media/how-to-use-best-practices-for-routing/traffic-section-type-img.png)
-This option can be used to color the sections when rendering the map, as in the image below:
+This option can be used to color the sections when rendering the map, as in the following image:
![Colored sections rendered on map](media/how-to-use-best-practices-for-routing/show-traffic-sections-img.png)
Azure Maps currently provides two forms of route optimizations:
* Traveling salesman optimization, which changes the order of the waypoints to obtain the best order to visit each stop
-For multi-stop routing, up to 150 waypoints may be specified in a single route request. The starting and ending coordinate locations can be the same, as would be the case with a round trip. But you need to provide at least one additional waypoint to make the route calculation. Waypoints can be added to the query in-between the origin and destination coordinates.
+For multi-stop routing, up to 150 waypoints may be specified in a single route request. The starting and ending coordinate locations can be the same, as would be the case with a round trip. But you need to provide at least one more waypoint to make the route calculation. Waypoints can be added to the query in-between the origin and destination coordinates.
If you want to optimize the best order to visit the given waypoints, then you need to specify **computeBestOrder=true**. This scenario is also known as the traveling salesman optimization problem.
The response describes the path length to be 140,851 meters, and that it would t
![Non-optimized response](media/how-to-use-best-practices-for-routing/non-optimized-response-img.png)
-The image below illustrates the path resulting from this query. This path is one possible route. It's not the optimal path based on time or distance.
+The following image illustrates the path resulting from this query. This path is one possible route. It's not the optimal path based on time or distance.
![Non-optimized image](media/how-to-use-best-practices-for-routing/non-optimized-image-img.png)
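A query with **computeBestOrder=true** might look like the following sketch (placeholder key and illustrative round-trip coordinates with two waypoints in between):

```http
https://atlas.microsoft.com/route/directions/json?subscription-key={Your-Azure-Maps-Subscription-key}&api-version=1.0&query=47.606544,-122.336502:47.759892,-122.204821:47.670682,-122.120415:47.606544,-122.336502&computeBestOrder=true&travelMode=car
```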
The response describes the path length to be 91,814 meters, and that it would ta
![Optimized response](media/how-to-use-best-practices-for-routing/optimized-response-img.png)
-The image below illustrates the path resulting from this query.
+The following image illustrates the path resulting from this query.
![Optimized image](media/how-to-use-best-practices-for-routing/optimized-image-img.png)
You might have situations where you want to reconstruct a route to calculate zer
3. Order the locations based on the distance from the start of the route 4. Add these locations as supporting points in a new route request to [Post Route Directions]. To learn more about the supporting points, see the [Post Route Directions API documentation].
-When calling [Post Route Directions], you can set the minimum deviation time or the distance constraints, along with the supporting points. Use these parameters if you want to offer alternative routes, but you also want to limit the travel time. When these constraints are used, the alternative routes will follow the reference route from the origin point for the given time or distance. In other words, the other routes diverge from the reference route per the given constraints.
+When calling [Post Route Directions], you can set the minimum deviation time or the distance constraints, along with the supporting points. Use these parameters if you want to offer alternative routes, but you also want to limit the travel time. When these constraints are used, the alternative routes follow the reference route from the origin point for the given time or distance. In other words, the other routes diverge from the reference route per the given constraints.
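As a sketch of the request body shape, assuming the GeometryCollection format described in the [Post Route Directions API documentation] (the coordinates are placeholders):

```json
{
  "supportingPoints": {
    "type": "GeometryCollection",
    "geometries": [
      { "type": "Point", "coordinates": [-122.33028, 47.60323] },
      { "type": "Point", "coordinates": [-122.20330, 47.61020] }
    ]
  }
}
```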
-The image below is an example of rendering alternative routes with specified deviation limits for the time and the distance.
+The following image is an example of rendering alternative routes with specified deviation limits for the time and the distance.
![Alternative routes](media/how-to-use-best-practices-for-routing/alternative-routes-img.png)
The Azure Maps Web SDK provides a [Service module]. This module is a helper libr
To learn more, see: > [!div class="nextstepaction"]
-> [Azure Maps Route service](/rest/api/maps/route)
+> [Azure Maps Route service]
> [!div class="nextstepaction"]
-> [How to use the Service module](./how-to-use-services-module.md)
+> [How to use the Service module]
> [!div class="nextstepaction"]
-> [Show route on the map](./map-route.md)
+> [Show route on the map]
> [!div class="nextstepaction"]
-> [Azure Maps npm Package](https://www.npmjs.com/package/azure-maps-rest )
+> [Azure Maps npm Package]
-[Route service]: /rest/api/maps/route
[Azure Maps account]: quick-demo-map-app.md#create-an-azure-maps-account
-[subscription key]: quick-demo-map-app.md#get-the-subscription-key-for-your-account
-[Routing Coverage]: routing-coverage.md
-[Postman]: https://www.postman.com/downloads/
-[RouteType]: /rest/api/maps/route/postroutedirections#routetype
+[Azure Maps npm Package]: https://www.npmjs.com/package/azure-maps-rest
+[Azure Maps Route service]: /rest/api/maps/route
+[How to use the Service module]: how-to-use-services-module.md
[Point of Interest]: /rest/api/maps/search/getsearchpoi
-[Post Route Directions]: /rest/api/maps/route/postroutedirections
[Post Route Directions API documentation]: /rest/api/maps/route/postroutedirections#supportingpoints
+[Post Route Directions]: /rest/api/maps/route/postroutedirections
+[Postman]: https://www.postman.com/downloads/
+[Route service]: /rest/api/maps/route
+[RouteType]: /rest/api/maps/route/postroutedirections#routetype
+[Routing Coverage]: routing-coverage.md
[Service module]: /javascript/api/azure-maps-rest/
+[Show route on the map]: map-route.md
+[subscription key]: quick-demo-map-app.md#get-the-subscription-key-for-your-account
azure-maps How To Use Feedback Tool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-use-feedback-tool.md
Title: Provide data feedback to Azure Maps | Microsoft Azure Maps
+ Title: Provide data feedback to Azure Maps
+ description: Provide data feedback using Microsoft Azure Maps feedback tool.
Azure Maps has been available since May 2018, providing fresh map data, easy-to-use REST APIs, and powerful SDKs to support our enterprise customers with different kinds of business use cases. The real world is changing every second, and it's crucial for us to provide a factual digital representation to our customers. Our customers that are planning to open or close facilities need our maps to update promptly so they can efficiently plan for delivery, maintenance, or customer service at the right facilities. We created the Azure Maps data feedback site to empower our customers to provide direct data feedback. Customers' data feedback goes directly to our data providers and their map editors, who can quickly evaluate and incorporate feedback into our mapping products.
-[Azure Maps Data feedback site](https://feedback.azuremaps.com) provides an easy way for our customers to provide map data feedback, especially on business points of interest and residential addresses. This article guides you on how to provide different kinds of feedback using the Azure Maps feedback site.
+[Azure Maps Data feedback site] provides an easy way for our customers to provide map data feedback, especially on business points of interest and residential addresses. This article guides you on how to provide different kinds of feedback using the Azure Maps feedback site.
-## Add a business place or a residential address
+## Add a business place or a residential address
-You may want to provide feedback about a missing point of interest or a residential address. There are two ways to do so. Open the Azure Map data feedback site, search for the missing location's coordinates, and then click "Add a place"
+You may want to provide feedback about a missing point of interest or a residential address. There are two ways to do so. Open the Azure Maps data feedback site, search for the missing location's coordinates, and then select **Add a place**.
![search missing location](./media/how-to-use-feedback-tool/search-poi.png)
-Or, you can interact with the map. Click on the location to drop a pin at the coordinate and click "Add a place".
+Or, you can interact with the map. Select the location to drop a pin at the coordinate, and then select **Add a place**.
![add pin](./media/how-to-use-feedback-tool/add-poi.png)
-Upon clicking, you'll be directed to a form to provide the corresponding details for the place.
+Once selected, you're directed to a form to provide the corresponding details for the place.
![add a place](./media/how-to-use-feedback-tool/add-a-place.png)
-## Fix a business place or a residential address
+## Fix a business place or a residential address
-The feedback site also allows you to search and locate a business place or an address. You can provide feedback to fix the address or the pin location, if they aren't correct. To provide feedback to fix the address, use the search bar to search for a business place or residential address. Click on the location of your interest from the results list. Click on "Fix this place".
+The feedback site also allows you to search and locate a business place or an address. You can provide feedback to fix the address or the pin location, if they aren't correct. To provide feedback to fix the address, use the search bar to search for a business place or residential address. Select the location of your interest from the results list, and then select **Fix this place**.
![search place to fix](./media/how-to-use-feedback-tool/fix-place.png)
-To provide feedback to fix the address, fill out the "Fix a place" form, and then click on the "submit" button.
+To provide feedback to fix the address, fill out the **Fix a place** form, then select **Submit**.
![fix form](./media/how-to-use-feedback-tool/fix-form.png)
-If the pin location for the place is wrong, check the checkbox on the "Fix a place" form that says "The pin location is incorrect". Move the pin to the correct location, and then click the "submit" button.
+If the pin location for the place is wrong, select the **The pin location is incorrect** checkbox. Move the pin to the correct location, and then select **Submit**.
![move pin location](./media/how-to-use-feedback-tool/move-pin.png)
-## Add a comment
+## Add a comment
-In addition to letting you search for a location, the feedback tool also lets you add a free form text comment for details related to the location. To add a comment, search for the location or click on the location. Click "Add a comment", write a comment, and then click "Submit".
+In addition to letting you search for a location, the feedback tool also lets you add a free form text comment for details related to the location. To add a comment, search for the location or select it, write a comment in the **Add a comment** field, and then select **Submit**.
![add comment](./media/how-to-use-feedback-tool/add-comment.png)
-## Track status
+## Track status
-You can also track the status of your request by checking the "I want to track status" box and providing your email while making a request. You will receive a tracking link in the email that provides an up-to-date status of your request.
+You can also track the status of your request by selecting the **I want to track status** box and providing your email while making a request. You receive a tracking link in the email that provides an up-to-date status of your request.
![feedback status](./media/how-to-use-feedback-tool/feedback-status.png) - ## Next steps
-To post any technical questions related to Azure Maps, visit:
+For any technical questions related to Azure Maps, see [Microsoft Q & A].
-* [Microsoft Q & A](/answers/topics/azure-maps.html)
+[Azure Maps Data feedback site]: https://feedback.azuremaps.com
+[Microsoft Q & A]: /answers/topics/azure-maps.html
azure-maps Release Notes Indoor Module https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/release-notes-indoor-module.md
This document contains information about new features and other changes to the Azure Maps Indoor Module.
+## [0.2.1]
+
+### New features (0.2.1)
+
+- multiple statesets are now supported for map configurations with multiple tilesets; instead of a single stateset ID, a mapping between tileset IDs and stateset IDs can be passed:
+
+ ```js
+ indoorManager.setOptions({
+ statesetId: {
+ 'tilesetId1': 'statesetId1',
+ 'tilesetId2': 'statesetId2'
+ }
+ });
+
+ indoorManager.setDynamicStyling(true)
+ ```
+
+- autofocus and autofocusOptions support: when you set autofocus on `IndoorManagerOptions`, the camera is focused on the indoor facilities once the indoor map is loaded. Camera options can be further customized via autofocus options:
+
+ ```js
+ indoorManager.setOptions({
+ autofocus: true,
+ autofocusOptions: {
+ padding: { top: 50, bottom: 50, left: 50, right: 50 }
+ }
+ });
+ ```
+
+- focusCamera support: instead of `autofocus`, you can call `focusCamera` directly. When an indoor map configuration is used, a tilesetId can be provided to focus on a specific facility only; otherwise, bounds that enclose all facilities are used:
+
+ ```js
+ indoorManager.focusCamera({
+ type: 'ease',
+ duration: 1000,
+ padding: { top: 50, bottom: 50, left: 50, right: 50 }
+ })
+ ```
+
+- level name labels in LevelControl (in addition to `ordinal`, LevelControl can now display level names derived from the 'name' property of level features):
+
+ ```js
+ indoorManager.setOptions({
+ levelControl: new LevelControl({ levelLabel: 'name' })
+ });
+ ```
+### Changes (0.2.1)
+
+- non-level-bound features are now displayed
+
+### Bug fixes (0.2.1)
+
+- fixed facility state not being initialized when tile loads don't emit the `sourcedata` event
+
+- level preference sorting fixed
+ ## [0.2.0] ### New features (0.2.0)
Stay up to date on Azure Maps:
> [Azure Maps Blog] [drawing package 2.0]: ./drawing-package-guide.md
+[0.2.1]: https://www.npmjs.com/package/azure-maps-indoor/v/0.2.1
[0.2.0]: https://www.npmjs.com/package/azure-maps-indoor/v/0.2.0 [Azure Maps Creator Samples]: https://samples.azuremaps.com/?search=creator [Azure Maps Blog]: https://techcommunity.microsoft.com/t5/azure-maps-blog/bg-p/AzureMapsBlog
azure-monitor Prometheus Alerts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/prometheus-alerts.md
Last updated 09/15/2022
# Prometheus alerts in Azure Monitor
-Prometheus alert rules allow you to define alert conditions, using queries which are written in Prometheus Query Language (Prom QL) that are applied on Prometheus metrics stored in [Azure Monitor managed services for Prometheus](../essentials/prometheus-metrics-overview.md). Whenever the alert query results in one or more time series meeting the condition, the alert counts as pending for these metric and label sets. A pending alert becomes active after a user-defined period of time during which all the consecutive query evaluations for the respective time series meet the alert condition. Once an alert becomes active, it is fired and would trigger your actions or notifications of choice, as defined in the Azure Action Groups configured in your alert rule.
+Prometheus alert rules allow you to define alert conditions, using queries written in Prometheus Query Language (PromQL). The rule queries are applied to Prometheus metrics stored in [Azure Monitor managed service for Prometheus](../essentials/prometheus-metrics-overview.md). Whenever the alert query results in one or more time series meeting the condition, the alert counts as pending for those metric and label sets. A pending alert becomes active after a user-defined period of time during which all the consecutive query evaluations for the respective time series meet the alert condition. Once an alert becomes active, it's fired and triggers the actions or notifications of your choice, as defined in the Azure Action Groups configured in your alert rule.
> [!NOTE] > Azure Monitor managed service for Prometheus, including Prometheus metrics, is currently in public preview and does not yet have all of its features enabled. Prometheus metrics are displayed with alerts generated by other types of alert rules, but they currently have a difference experience for creating and managing them. ## Create Prometheus alert rule
-Prometheus alert rules are created as part of a Prometheus rule group which is stored in [Azure Monitor workspace](../essentials/azure-monitor-workspace-overview.md). See [Azure Monitor managed service for Prometheus rule groups](../essentials/prometheus-rule-groups.md) for details.
+Prometheus alert rules are created as part of a Prometheus rule group, which is applied to the [Azure Monitor workspace](../essentials/azure-monitor-workspace-overview.md). See [Azure Monitor managed service for Prometheus rule groups](../essentials/prometheus-rule-groups.md) for details.
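As a hedged illustration of the concepts above, a single rule inside a rule group resembles the following fragment; the alert name, PromQL expression, and thresholds are placeholders, and the linked rule groups article has the authoritative schema:

```json
{
  "alert": "HighRequestLatency",
  "expression": "histogram_quantile(0.99, sum by (le) (rate(request_duration_seconds_bucket[5m]))) > 0.5",
  "for": "PT5M",
  "severity": 3
}
```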
## View Prometheus alerts View fired and resolved Prometheus alerts in the Azure portal with other alert types. Use the following steps to filter on only Prometheus alerts.
View fired and resolved Prometheus alerts in the Azure portal with other alert t
:::image type="content" source="media/prometheus-metric-alerts/view-alerts.png" lightbox="media/prometheus-metric-alerts/view-alerts.png" alt-text="Screenshot of a list of alerts in Azure Monitor with a filter for Prometheus alerts."::: 4. Click the alert name to view the details of a specific fired/resolved alert.
-## Next steps
+
+## Explore Prometheus alerts in Grafana
+1. In the fired alert details pane, select the **View query in Grafana** link.
+2. A browser tab opens, taking you to the [Azure Managed Grafana](../../managed-grafana/overview.md) instance connected to your Azure Monitor workspace.
+3. Grafana opens in Explore mode, presenting the chart for the alert rule expression query that triggered the alert, around the alert firing time. You can further explore the query in Grafana to identify the reason the alert fired.
+> [!NOTE]
+> 1. If there's no Azure Managed Grafana instance connected to your Azure Monitor workspace, a link to Grafana isn't available.
+> 2. To view the alert query in Explore mode, you must have either the Grafana Admin or the Grafana Editor role. If you don't have the needed permissions, you get a corresponding Grafana error.
+
+## Next steps
- [Create a Prometheus rule group](../essentials/prometheus-rule-groups.md).
azure-monitor Asp Net Trace Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/asp-net-trace-logs.md
description: Search logs generated by Trace, NLog, or Log4Net.
ms.devlang: csharp Previously updated : 03/22/2023 Last updated : 04/18/2023 - # Explore .NET/.NET Core and Python trace logs in Application Insights
-Send diagnostic tracing logs for your ASP.NET/ASP.NET Core application from ILogger, NLog, log4Net, or System.Diagnostics.Trace to [Azure Application Insights][start]. For Python applications, send diagnostic tracing logs by using AzureLogHandler in OpenCensus Python for Azure Monitor. You can then explore and search for them. Those logs are merged with the other log files from your application. You can use them to identify traces that are associated with each user request and correlate them with other events and exception reports.
+Send diagnostic tracing logs for your ASP.NET/ASP.NET Core application from ILogger, NLog, log4Net, or System.Diagnostics.Trace to Azure Application Insights. For Python applications, send diagnostic tracing logs by using AzureLogHandler in OpenCensus Python for Azure Monitor. You can then explore and search for them. Those logs are merged with the other log files from your application. You can use them to identify traces that are associated with each user request and correlate them with other events and exception reports.
> [!NOTE] > Do you need the log-capture module? It's a useful adapter for third-party loggers. But if you aren't already using NLog, log4Net, or System.Diagnostics.Trace, consider calling [**Application Insights TrackTrace()**](./api-custom-events-metrics.md#tracktrace) directly.
Install your chosen logging framework in your project, which should result in an
## Configure Application Insights to collect logs
-[Add Application Insights to your project](./asp-net.md) if you haven't done that yet. You'll see an option to include the log collector.
+[Add Application Insights to your project](./asp-net.md) if you haven't done that yet. When you do, there's an option to include the log collector.
Or right-click your project in Solution Explorer to **Configure Application Insights**. Select the **Configure trace collection** option.
You can also add a severity level to your message. And, like other telemetry, yo
new Dictionary<string, string> { { "database", "db.ID" } }); ```
-Now you can easily filter out in [Search][diagnostic] all the messages of a particular severity level that relate to a particular database.
+Now, in **Transaction Search**, you can easily filter for all the messages of a particular severity level that relate to a particular database.
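For reference, a complete call along those lines might look like the following sketch; the message text and property values are placeholders:

```csharp
using System.Collections.Generic;
using Microsoft.ApplicationInsights;
using Microsoft.ApplicationInsights.DataContracts;

var telemetry = new TelemetryClient();

// Send a trace with a severity level and a custom property for later filtering.
telemetry.TrackTrace(
    "Slow response from database",
    SeverityLevel.Warning,
    new Dictionary<string, string> { { "database", "db.ID" } });
```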
## AzureLogHandler for OpenCensus Python
logger.warning('Hello, World!')
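For context, the handler setup behind a call like that might look like this sketch, assuming the opencensus-ext-azure package; the connection string is a placeholder:

```python
import logging

from opencensus.ext.azure.log_exporter import AzureLogHandler

logger = logging.getLogger(__name__)
# Routes log records to Application Insights; replace with your connection string.
logger.addHandler(AzureLogHandler(
    connection_string='InstrumentationKey=00000000-0000-0000-0000-000000000000'))
logger.warning('Hello, World!')
```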
Run your app in debug mode or deploy it live.
-In your app's overview pane in the [Application Insights portal][portal], select [Search][diagnostic].
+In your app's overview pane in the Application Insights portal, select **Transaction Search**.
You can, for example:
The Application Insights Java agent collects logs from Log4j, Logback, and java.
### <a name="emptykey"></a>Why do I get the "Instrumentation key cannot be empty" error message?
-You probably installed the logging adapter NuGet package without installing Application Insights. In Solution Explorer, right-click *ApplicationInsights.config*, and select **Update Application Insights**. You'll be prompted to sign in to Azure and create an Application Insights resource or reuse an existing one. That should fix the problem.
+You probably installed the logging adapter NuGet package without installing Application Insights. In Solution Explorer, right-click *ApplicationInsights.config*, and select **Update Application Insights**. You're prompted to sign in to Azure and create an Application Insights resource or reuse an existing one. That should fix the problem.
### Why can I see traces but not other events in diagnostic search?
Perhaps your application sends voluminous amounts of data and you're using the A
## <a name="add"></a>Next steps
-* [Diagnose failures and exceptions in ASP.NET][exceptions]
-* [Learn more about Search][diagnostic]
-* [Set up availability and responsiveness tests][availability]
-* [Troubleshooting][qna]
+* [Diagnose failures and exceptions in ASP.NET](asp-net-exceptions.md)
+* [Learn more about Transaction Search](diagnostic-search.md)
+* [Set up availability and responsiveness tests](availability-overview.md)
<!--Link references-->
azure-monitor Azure Monitor Operations Manager https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/azure-monitor-operations-manager.md
If your monitoring of a business application is limited to functionality provide
- Collect detailed application usage and performance data such as response time, failure rates, and request rates. - Collect browser data such as page views and load performance. - Detect exceptions and drill into stack trace and related requests.-- Perform advanced analysis using features such as [distributed tracing](app/distributed-tracing.md) and [smart detection](alerts/proactive-diagnostics.md).
+- Perform advanced analysis using features such as [distributed tracing](app/distributed-tracing-telemetry-correlation.md) and [smart detection](alerts/proactive-diagnostics.md).
- Use [metrics explorer](essentials/metrics-getting-started.md) to interactively analyze performance data. - Use [log queries](logs/log-query-overview.md) to interactively analyze collected telemetry together with data collected for Azure services and VM insights.
azure-monitor Best Practices Cost https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/best-practices-cost.md
Title: Optimize costs in Azure Monitor
+ Title: Cost optimization in Azure Monitor
description: Recommendations for reducing costs in Azure Monitor.
Last updated 03/29/2023
-# Optimize costs in Azure Monitor
-You can significantly reduce your cost for Azure Monitor by understanding your different configuration options and opportunities to reduce the amount of data that it collects. Before you use this article, you should see [Azure Monitor cost and usage](usage-estimated-costs.md) to understand the different ways that Azure Monitor charges and how to view your monthly bill.
+# Cost optimization in Azure Monitor
+Cost optimization refers to ways to reduce unnecessary expenses and improve operational efficiencies. You can significantly reduce your cost for Azure Monitor by understanding your different configuration options and opportunities to reduce the amount of data that it collects. Before you use this article, you should see [Azure Monitor cost and usage](usage-estimated-costs.md) to understand the different ways that Azure Monitor charges and how to view your monthly bill.
-> [!NOTE]
-> This article describes [Cost optimization](/azure/architecture/framework/cost/) for Azure Monitor as part of the [Azure Well-Architected Framework](/azure/architecture/framework/). This is a set of guiding tenets that can be used to improve the quality of a workload. The framework consists of five pillars of architectural excellence:
->
-> - Reliability
-> - Security
-> - Cost Optimization
-> - Operational Excellence
-> - Performance Efficiency
+This article describes [Cost optimization](/azure/architecture/framework/cost/) for Azure Monitor as part of the [Azure Well-Architected Framework](/azure/architecture/framework/). This is a set of guiding tenets that can be used to improve the quality of a workload. The framework consists of five pillars of architectural excellence:
-## Design considerations
+- Reliability
+- Security
+- Cost Optimization
+- Operational Excellence
+- Performance Efficiency
-Azure Monitor includes the following design considerations related to cost:
-- Log Analytics workspace architecture<br><br>You can start using Azure Monitor with a single Log Analytics workspace by using default options. As your monitoring environment grows, you'll need to make decisions about whether to have multiple services share a single workspace or create multiple workspaces. There can be cost implications with your workspace design, most notably when you combine different services such as operational data from Azure Monitor and security data from Microsoft Sentinel. This may include trade-offs between functionality and cost depending on your particular priorities.<br><br>See [Design a Log Analytics workspace architecture](logs/workspace-design.md) for a list of criteria to consider when designing a workspace architecture.
+## Azure Monitor Logs
-## Checklist
-**Log Analytics workspace configuration**
+## Azure resources
-> [!div class="checklist"]
-> - Configure pricing tier or dedicated cluster to optimize your cost depending on your usage.
-> - Configure tables used for debugging, troubleshooting, and auditing as Basic Logs.
-> - Configure data retention and archiving.
-**Data collection**
+### Design checklist
> [!div class="checklist"]
-> - Use diagnostic settings and transformations to collect only critical resource log data from Azure resources.
-> - Configure VM agents to collect only critical events.
-> - Use transformations to filter resource logs for [supported tables](logs/tables-feature-support.md).
-> - Ensure that VMs aren't sending data to multiple workspaces.
+> - Collect only critical resource log data from Azure resources.
-**Monitor usage**
-> [!div class="checklist"]
-> - Send alert when data collection is high.
-> - Analyze your collected data at regular intervals to determine if there are opportunities to further reduce your cost.
-> - Consider a daily cap as a preventative measure to ensure that you don't exceed a particular budget.
+### Configuration recommendations
+| Recommendation | Benefit |
+|:|:|
+| Collect only critical resource log data from Azure resources. | When you create [diagnostic settings](essentials/diagnostic-settings.md) to send [resource logs](essentials/resource-logs.md) for your Azure resources to a Log Analytics database, only specify those categories that you require. Since diagnostic settings don't allow granular filtering of resource logs, you can use a [workspace transformation](essentials/data-collection-transformations.md?#workspace-transformation-dcr) to further filter unneeded data for those resources that use a [supported table](logs/tables-feature-support.md). See [Diagnostic settings in Azure Monitor](essentials/diagnostic-settings.md#controlling-costs) for details on how to configure diagnostic settings and using transformations to filter their data. |
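As a hedged sketch of such a workspace transformation (the column names are hypothetical and depend on the target table):

```kusto
// Keep only error-level entries and drop a verbose column before ingestion.
source
| where Level == "Error"
| project-away AdditionalContext
```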
-## Configuration recommendations
+## Virtual machines
+### Design checklist
-### Log Analytics workspace configuration
-You may be able to significantly reduce your costs by optimizing the configuration of your Log Analytics workspaces. You can commit to a minimum amount of data collection in exchange for a reduced rate, and optimize your costs for the functionality and retention of data in particular tables.
+> [!div class="checklist"]
+> - Configure VM agents to collect only important events.
+> - Ensure that VMs aren't sending data to multiple workspaces.
+> - Use transformations to filter unnecessary data from collected events.
-| Recommendation | Description |
-|:|:|
-| Configure pricing tier or dedicated cluster for your Log Analytics workspaces. | By default, Log Analytics workspaces will use pay-as-you-go pricing with no minimum data volume. If you collect enough amount of data, you can significantly decrease your cost by using a [commitment tier](logs/cost-logs.md#commitment-tiers) or [dedicated cluster](logs/logs-dedicated-clusters.md), which allows you to commit to a daily minimum of data collected in exchange for a lower rate.<br><br>See [Azure Monitor Logs cost calculations and options](logs/cost-logs.md) for details on commitment tiers and guidance on determining which is most appropriate for your level of usage. See [Usage and estimated costs](usage-estimated-costs.md#usage-and-estimated-costs) to view estimated costs for your usage at different pricing tiers.
-| Configure tables used for debugging, troubleshooting, and auditing as Basic Logs. | Tables in a Log Analytics workspace configured for [Basic Logs](logs/basic-logs-configure.md) have a lower ingestion cost in exchange for limited features and a charge for log queries. If you query these tables infrequently, this query cost can be more than offset by the reduced ingestion cost.<br><br>See [Configure Basic Logs in Azure Monitor](logs/basic-logs-configure.md) for more information about Basic Logs and [Query Basic Logs in Azure Monitor](.//logs/basic-logs-query.md) for details on query limitations. |
-| Configure data retention and archiving. | There is a charge for retaining data in a Log Analytics workspace beyond the default of 30 days (90 days in Sentinel if enabled on the workspace). If you need to retain data for compliance reasons or for occasional investigation or analysis of historical data, configure [Archived Logs](logs/data-retention-archive.md), which allows you to retain data for up to seven years at a reduced cost.<br><br>See [Configure data retention and archive policies in Azure Monitor Logs](logs/data-retention-archive.md) for details on how to configure your workspace and how to work with archived data. |
+### Configuration recommendations
+| Recommendation | Benefit |
+|:|:|
+| Configure VM agents to collect only important events. | Virtual machines can vary significantly in the amount of data they collect, depending on the amount of telemetry generated by the applications and services they have installed. See [Monitor virtual machines with Azure Monitor: Workloads](vm/monitor-virtual-machine-data-collection.md#controlling-costs) for guidance on data to collect and strategies for using [XPath queries](agents/data-collection-rule-azure-monitor-agent.md#filter-events-using-xpath-queries) to limit it.|
+| Ensure that VMs aren't sending duplicate data. | Any configuration that uses multiple agents on a single machine or where you multi-home agents to send data to multiple workspaces may incur charges for the same data multiple times. If you do multi-home agents, make sure you're sending unique data to each workspace. See [Analyze usage in Log Analytics workspace](logs/analyze-usage.md) for guidance on analyzing your collected data to make sure you aren't collecting duplicate data. If you're migrating between agents, continue to use the Log Analytics agent until you [migrate to the Azure Monitor agent](./agents/azure-monitor-agent-migration.md) rather than using both together unless you can ensure that each is collecting unique data. |
+| Use transformations to filter unnecessary data from collected events. | [Transformations](essentials/data-collection-transformations.md) can be used in data collection rules to remove unnecessary data or even entire columns from events collected from the virtual machine which can significantly reduce the cost for their ingestion and retention. |
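To make the XPath row concrete, here's a hedged sketch of a minimal data collection rule that keeps only error-level Application log events. The rule name, file name, resource group, and region are hypothetical; a complete rule also needs `destinations` and `dataFlows` sections; and the `az monitor data-collection rule create` command assumes the monitor-control-service CLI extension is installed.

```bash
# Sketch of a DCR payload that collects only error events (Level=2)
# from the Application event log instead of the entire log.
# NOTE: destinations and dataFlows are omitted for brevity;
# a working rule must define both.
cat > dcr.json <<'EOF'
{
  "location": "eastus",
  "properties": {
    "dataSources": {
      "windowsEventLogs": [
        {
          "name": "appErrorsOnly",
          "streams": [ "Microsoft-Event" ],
          "xPathQueries": [ "Application!*[System[(Level=2)]]" ]
        }
      ]
    }
  }
}
EOF

az monitor data-collection rule create \
  --resource-group rg-001 \
  --location eastus \
  --name "dcr-vm-errors-only" \
  --rule-file dcr.json
```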
+## Container insights
-### Data collection
-Since Azure Monitor charges for the collection of data, your goal should be to collect the minimal amount of data required to meet your monitoring requirements. You have an opportunity to reduce your monitoring costs by modifying your configuration to stop collecting data that you're not using for alerting or analysis.
+### Design checklist
-#### Azure resources
+> [!div class="checklist"]
+> - Configure agent collection to remove unneeded data.
+> - Modify settings for collection of metric data.
+> - Limit Prometheus metrics collected.
+> - Configure Basic Logs.
+### Configuration recommendations
-| Recommendation | Description |
+| Recommendation | Benefit |
|:|:|
-| Collect only critical resource log data from Azure resources. | When you create [diagnostic settings](essentials/diagnostic-settings.md) to send [resource logs](essentials/resource-logs.md) for your Azure resources to a Log Analytics database, only specify those categories that you require. Since diagnostic settings don't allow granular filtering of resource logs, you can use a [workspace transformation](essentials/data-collection-transformations.md?#workspace-transformation-dcr) to further filter unneeded data for those resources that use a [supported table](logs/tables-feature-support.md). See [Diagnostic settings in Azure Monitor](essentials/diagnostic-settings.md#controlling-costs) for details on how to configure diagnostic settings and using transformations to filter their data. |
+| Configure agent collection to remove unneeded data. | Analyze the data collected by Container insights as described in [Controlling ingestion to reduce cost](containers/container-insights-cost.md#control-ingestion-to-reduce-cost) and adjust your configuration to stop collection of data in ContainerLogs you don't need. |
+| Modify settings for collection of metric data. | You can reduce your costs by modifying the default collection settings Container insights uses for the collection of metric data. See [Enable cost optimization settings (preview)](containers/container-insights-cost-config.md) for details on modifying both the frequency that metric data is collected and the namespaces that are collected. |
+| Limit Prometheus metrics collected. | If you configured Prometheus metric scraping, then follow the recommendations at [Controlling ingestion to reduce cost](containers/container-insights-cost.md#prometheus-metrics-scraping) to optimize your data collection for cost. |
+| Configure Basic Logs. | [Convert your schema to ContainerLogV2](containers/container-insights-logging-v2.md) which is compatible with Basic logs and can provide significant cost savings as described in [Controlling ingestion to reduce cost](containers/container-insights-cost.md#configure-basic-logs). |
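As one way to apply the Basic Logs row above, the following CLI sketch switches the ContainerLogV2 table to the Basic table plan; the resource group and workspace names are placeholders.

```bash
# Move ContainerLogV2 to the Basic table plan: lower ingestion cost
# in exchange for limited query features and a per-query charge.
az monitor log-analytics workspace table update \
  --resource-group rg-001 \
  --workspace-name myworkspace \
  --name ContainerLogV2 \
  --plan Basic
```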
-#### Virtual machines
-| Recommendation | Description |
-|:|:|
-| Configure VM agents to collect only critical events. | Virtual machines can vary significantly in the amount of data they collect, depending on the amount of telemetry generated by the applications and services they have installed. See [Monitor virtual machines with Azure Monitor: Workloads](vm/monitor-virtual-machine-data-collection.md#controlling-costs) for guidance on data to collect and strategies for using XPath queries and transformations to limit it.|
-| Ensure that VMs aren't sending duplicate data. | Any configuration that uses multiple agents on a single machine or where you multi-home agents to send data to multiple workspaces may incur charges for the same data multiple times. If you do multi-home agents, make sure you're sending unique data to each workspace. See [Analyze usage in Log Analytics workspace](logs/analyze-usage.md) for guidance on analyzing your collected data to make sure you aren't collecting duplicate data. If you're migrating between agents, continue to use the Log Analytics agent until you [migrate to the Azure Monitor agent](./agents/azure-monitor-agent-migration.md) rather than using both together unless you can ensure that each is collecting unique data. |
+## Application Insights
-#### Container insights
-
-| Recommendation | Description |
-|:|:|
-| Configure agent collection to remove unneeded data. | Analyze the data collected by Container insights as described in [Controlling ingestion to reduce cost](containers/container-insights-cost.md#control-ingestion-to-reduce-cost) and adjust your configuration to stop collection of data in ContainerLogs you don't need. |
-| Modify settings for collection of metric data | You can reduce your costs by modifying the default collection settings Container insights uses for the collection of metric data. See [Enable cost optimization settings (preview)](containers/container-insights-cost-config.md) for details on modifying both the frequency that metric data is collected and the namespaces that are collected. |
-| Limit Prometheus metrics collected | If you configured Prometheus metric scraping, then follow the recommendations at [Controlling ingestion to reduce cost](containers/container-insights-cost.md#prometheus-metrics-scraping) to optimize your data collection for cost. |
-| Configure Basic Logs | [Convert your schema to ContainerLogV2](containers/container-insights-logging-v2.md) which is compatible with Basic logs and can provide significant cost savings as described in [Controlling ingestion to reduce cost](containers/container-insights-cost.md#configure-basic-logs). |
+### Design checklist
+> [!div class="checklist"]
+> - Change to workspace-based Application Insights.
+> - Use sampling to tune the amount of data collected.
+> - Limit the number of Ajax calls.
+> - Disable unneeded modules.
+> - Pre-aggregate metrics from any calls to TrackMetric.
+> - Limit the use of custom metrics.
+> - Ensure use of updated SDKs.
-#### Application Insights
+### Configuration recommendations
-| Recommendation | Description |
+| Recommendation | Benefit |
|:|:|
-| Change to Workspace-based Application Insights | Ensure that your Application Insights resources are [Workspace-based](app/create-workspace-resource.md) so that they can leveage new cost savings tools such as [Basic Logs](logs/basic-logs-configure.md), [Commitment Tiers](logs/cost-logs.md#commitment-tiers), [Retention by data type and Data Archive](logs/data-retention-archive.md#set-retention-and-archive-policy-by-table). |
+| Change to Workspace-based Application Insights | Ensure that your Application Insights resources are [Workspace-based](app/create-workspace-resource.md) so that they can leverage new cost savings tools such as [Basic Logs](logs/basic-logs-configure.md), [Commitment Tiers](logs/cost-logs.md#commitment-tiers), and [Retention by data type and Data Archive](logs/data-retention-archive.md#set-retention-and-archive-policy-by-table). |
| Use sampling to tune the amount of data collected. | [Sampling](app/sampling.md) is the primary tool you can use to tune the amount of data collected by Application Insights. Use sampling to reduce the amount of telemetry that's sent from your applications with minimal distortion of metrics. | | Limit the number of Ajax calls. | [Limit the number of Ajax calls](app/javascript.md#configuration) that can be reported in every page view or disable Ajax reporting. If you disable Ajax calls, you'll be disabling [JavaScript correlation](app/javascript.md#enable-distributed-tracing) too. | | Disable unneeded modules. | [Edit ApplicationInsights.config](app/configuration-with-applicationinsights-config.md) to turn off collection modules that you don't need. For example, you might decide that performance counters or dependency data aren't required. |
Since Azure Monitor charges for the collection of data, your goal should be to c
| Limit the use of custom metrics. | The Application Insights option to [Enable alerting on custom metric dimensions](app/pre-aggregated-metrics-log-metrics.md#custom-metrics-dimensions-and-pre-aggregation) can increase costs. Using this option can result in the creation of more pre-aggregation metrics. | | Ensure use of updated SDKs. | Earlier versions of the ASP.NET Core SDK and Worker Service SDK [collect many counters by default](app/eventcounters.md#default-counters-collected), which were collected as custom metrics. Use later versions to specify [only required counters](app/eventcounters.md#customizing-counters-to-be-collected). |
-#### All log data collection
-
-| Recommendation | Description |
-|:|:|
-| Remove unnecssary data during data ingestion | After following all of the preveious recommendations, consider using Azure Monitor [data collection transformations](essentials/data-collection-transformations.md) to reduce the size of your data during ingestion. |
--
-## Monitor workspace and analyze usage
-
-After you've configured your environment and data collection for cost optimization, you need to continue to monitor it to ensure that you don't experience unexpected increases in billable usage. You should also analyze your usage regularly to determine if you have other opportunities to further filter out collected data that hasn't proven to be useful.
--
-| Recommendation | Description |
-|:|:|
-| Send alert when data collection is high. | To avoid unexpected bills, you should be proactively notified anytime you experience excessive usage. Notification allows you to address any potential anomalies before the end of your billing period. See [Send alert when data collection is high](logs/analyze-usage.md#send-alert-when-data-collection-is-high) for details. |
-| Analyze collected data | Periodically analyze data collection using methods in [Analyze usage in Log Analytics workspace](logs/analyze-usage.md) to determine if there's additional configuration that can decrease your usage further. This is particularly important when you add a new set of data sources, such as a new set of virtual machines or onboard a new service. |
-| Consider a daily cap as a preventative measure to ensure that you don't exceed a particular budget. | A [daily cap](logs/daily-cap.md) disables data collection in a Log Analytics workspace for the rest of the day after your configured limit is reached. This shouldn't be used as a method to reduce costs as described in [When to use a daily cap](logs/daily-cap.md). See [Set daily cap on Log Analytics workspace](logs/daily-cap.md) for information on how the daily cap works and how to configure one. |
-- ## Next step
azure-monitor Best Practices Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/best-practices-logs.md
+
+ Title: Best practices for Azure Monitor Logs
+description: Provides a template for a Well-Architected Framework (WAF) article specific to Log Analytics workspaces in Azure Monitor.
+++ Last updated : 03/29/2023+++
+# Best practices for Azure Monitor Logs
+This article provides architectural best practices for Azure Monitor Logs. The guidance is based on the five pillars of architecture excellence described in [Azure Well-Architected Framework](/azure/architecture/framework/).
+++
+## Reliability
+In the cloud, we acknowledge that failures happen. Instead of trying to prevent failures altogether, the goal is to minimize the effects of a single failing component. Use the following information to minimize failure of your Log Analytics workspaces and to protect the data they collect.
+++
+## Security
+Security is one of the most important aspects of any architecture. Azure Monitor provides features to employ both the principle of least privilege and defense-in-depth. Use the following information to maximize the security of your Log Analytics workspaces and ensure that only authorized users access collected data.
+++
+## Cost optimization
+Cost optimization refers to ways to reduce unnecessary expenses and improve operational efficiencies. You can significantly reduce your cost for Azure Monitor by understanding your different configuration options and opportunities to reduce the amount of data that it collects. See [Azure Monitor cost and usage](usage-estimated-costs.md) to understand the different ways that Azure Monitor charges and how to view your monthly bill.
+
+> [!NOTE]
+> See [Optimize costs in Azure Monitor](best-practices-cost.md) for cost optimization recommendations across all features of Azure Monitor.
+++
+## Operational excellence
+## Operational excellence
+Operational excellence refers to the operations processes required to keep a service running reliably in production. Use the following information to minimize the operational requirements for supporting Log Analytics workspaces.
+++
+## Performance efficiency
+Performance efficiency is the ability of your workload to scale to meet the demands placed on it by users in an efficient manner. Use the following information to ensure that your Log Analytics workspaces and log queries are configured for maximum performance.
++
+## Next step
+
+- [Get best practices for a complete deployment of Azure Monitor](best-practices.md).
azure-monitor Data Platform https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/data-platform.md
Read more about Azure Monitor logs including their sources of data in [Logs in A
Traces are series of related events that follow a user request through a distributed system. They can be used to determine the behavior of application code and the performance of different transactions. While logs will often be created by individual components of a distributed system, a trace measures the operation and performance of your application across the entire set of components.
-Distributed tracing in Azure Monitor is enabled with the [Application Insights SDK](app/distributed-tracing.md). Trace data is stored with other application log data collected by Application Insights. This way it's available to the same analysis tools as other log data including log queries, dashboards, and alerts.
+Distributed tracing in Azure Monitor is enabled with the [Application Insights SDK](app/distributed-tracing-telemetry-correlation.md). Trace data is stored with other application log data collected by Application Insights. This way it's available to the same analysis tools as other log data including log queries, dashboards, and alerts.
-Read more about distributed tracing at [What is distributed tracing?](app/distributed-tracing.md).
+Read more about distributed tracing at [What is distributed tracing?](app/distributed-tracing-telemetry-correlation.md).
### Changes
azure-monitor Data Sources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/data-sources.md
When you enable Application Insights for an application by installing an instrum
| Destination | Description | Reference | |:|:|:| | Azure Monitor Logs | Operational data about your application including page views, application requests, exceptions, and traces. | [Analyze log data in Azure Monitor](logs/log-query-overview.md) |
-| | Dependency information between application components to support Application Map and telemetry correlation. | [Telemetry correlation in Application Insights](app/correlation.md) <br> [Application Map](app/app-map.md) |
+| | Dependency information between application components to support Application Map and telemetry correlation. | [Telemetry correlation in Application Insights](app/distributed-tracing-telemetry-correlation.md) <br> [Application Map](app/app-map.md) |
| | Results of availability tests that test the availability and responsiveness of your application from different locations on the public Internet. | [Monitor availability and responsiveness of any web site](/previous-versions/azure/azure-monitor/app/monitor-web-app-availability) | | Azure Monitor Metrics | Application Insights collects metrics describing the performance and operation of the application in addition to custom metrics that you define in your application into the Azure Monitor metrics database. | [Log-based and pre-aggregated metrics in Application Insights](app/pre-aggregated-metrics-log-metrics.md)<br>[Application Insights API for custom events and metrics](app/api-custom-events-metrics.md) | | Azure Monitor Change Analysis | Change Analysis detects and provides insights on various types of changes in your application. | [Use Change Analysis in Azure Monitor](./change/change-analysis.md) |
azure-monitor Stream Monitoring Data Event Hubs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/stream-monitoring-data-event-hubs.md
Before you configure streaming for any data source, you need to [create an Event
| [Operating system (guest)](../data-sources.md#operating-system-guest) | Azure virtual machines | Install the [Azure Diagnostics extension](../agents/diagnostics-extension-overview.md) on Windows and Linux virtual machines in Azure. For more information, see [Streaming Azure Diagnostics data in the hot path by using event hubs](../agents/diagnostics-extension-stream-event-hubs.md) for details on Windows VMs. See [Use Linux Diagnostic extension to monitor metrics and logs](../../virtual-machines/extensions/diagnostics-linux.md#protected-settings) for details on Linux VMs. | | [Application code](../data-sources.md#application-code) | Application Insights | Use diagnostic settings to stream to event hubs. This tier is only available with workspace-based Application Insights resources. For help with setting up workspace-based Application Insights resources, see [Workspace-based Application Insights resources](../app/create-workspace-resource.md#workspace-based-application-insights-resources) and [Migrate to workspace-based Application Insights resources](../app/convert-classic-resource.md#migrate-to-workspace-based-application-insights-resources).|
+## Stream diagnostics data
+
+Use diagnostic settings to stream logs and metrics to Event Hubs.
+For information on how to set up diagnostic settings, see [Create diagnostic settings](./diagnostic-settings.md?tabs=portal#create-diagnostic-settings).
+
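For example, a diagnostic setting that streams logs and metrics to an event hub can also be created from the CLI; in this hedged sketch the resource ID, namespace authorization rule, event hub name, and `AuditEvent` category are all placeholders.

```bash
# Hypothetical source resource and Event Hubs authorization rule.
RESOURCE_ID="/subscriptions/<sub-id>/resourceGroups/rg-001/providers/Microsoft.KeyVault/vaults/mykeyvault"
EH_RULE_ID="/subscriptions/<sub-id>/resourceGroups/rg-001/providers/Microsoft.EventHub/namespaces/myns/authorizationRules/RootManageSharedAccessKey"

# Stream one log category and all platform metrics to an event hub.
az monitor diagnostic-settings create \
  --name "stream-to-eventhub" \
  --resource "$RESOURCE_ID" \
  --event-hub my-event-hub \
  --event-hub-rule "$EH_RULE_ID" \
  --logs '[{"category": "AuditEvent", "enabled": true}]' \
  --metrics '[{"category": "AllMetrics", "enabled": true}]'
```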
+The following JSON is an example of metrics data sent to an event hub:
+
+```json
+[
+ {
+ "records": [
+ {
+ "count": 2,
+ "total": 0.217,
+ "minimum": 0.042,
+ "maximum": 0.175,
+ "average": 0.1085,
+ "resourceId": "/SUBSCRIPTIONS/ABCDEF12-3456-78AB-CD12-34567890ABCD/RESOURCEGROUPS/RG-001/PROVIDERS/MICROSOFT.WEB/SITES/SCALEABLEWEBAPP1",
+ "time": "2023-04-18T09:03:00.0000000Z",
+ "metricName": "CpuTime",
+ "timeGrain": "PT1M"
+ },
+ {
+ "count": 2,
+ "total": 0.284,
+ "minimum": 0.053,
+ "maximum": 0.231,
+ "average": 0.142,
+ "resourceId": "/SUBSCRIPTIONS/ABCDEF12-3456-78AB-CD12-34567890ABCD/RESOURCEGROUPS/RG-001/PROVIDERS/MICROSOFT.WEB/SITES/SCALEABLEWEBAPP1",
+ "time": "2023-04-18T09:04:00.0000000Z",
+ "metricName": "CpuTime",
+ "timeGrain": "PT1M"
+ },
+ {
+ "count": 1,
+ "total": 1,
+ "minimum": 1,
+ "maximum": 1,
+ "average": 1,
+ "resourceId": "/SUBSCRIPTIONS/ABCDEF12-3456-78AB-CD12-34567890ABCD/RESOURCEGROUPS/RG-001/PROVIDERS/MICROSOFT.WEB/SITES/SCALEABLEWEBAPP1",
+ "time": "2023-04-18T09:03:00.0000000Z",
+ "metricName": "Requests",
+ "timeGrain": "PT1M"
+ },
+ ...
+ ]
+ }
+]
+```
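If you capture a payload like the one above to a file (after removing the trailing `...` placeholder so the JSON is valid), a quick `jq` filter can pull out an individual series; the file name below is hypothetical.

```bash
# Print the timestamp and per-minute average for each CpuTime record.
jq '.[0].records[] | select(.metricName == "CpuTime") | {time, average}' metrics-sample.json
```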
+
+The following JSON is an example of log data sent to an event hub:
++
+```json
+[
+ {
+ "records": [
+ {
+ "time": "2023-04-18T09:39:56.5027358Z",
+ "category": "AuditEvent",
+ "operationName": "VaultGet",
+ "resultType": "Success",
+ "correlationId": "12345678-abc-4bc5-9f31-950eaf3bfcb4",
+ "callerIpAddress": "10.0.0.10",
+ "identity": {
+ "claim": {
+ "http://schemas.microsoft.com/identity/claims/objectidentifier": "123abc12-abcd-9876-cdef-123abc456def",
+ "appid": "12345678-a1a1-b2b2-c3c3-9876543210ab"
+ }
+ },
+ "properties": {
+ "id": "https://mykeyvault.vault.azure.net/",
+ "clientInfo": "AzureResourceGraph.IngestionWorkerService.global/1.23.1.224",
+ "requestUri": "https://northeurope.management.azure.com/subscriptions/ABCDEF12-3456-78AB-CD12-34567890ABCD/resourceGroups/rg-001/providers/Microsoft.KeyVault/vaults/mykeyvault?api-version=2023-02-01&MaskCMKEnabledProperties=true",
+ "httpStatusCode": 200,
+ "properties": {
+ "sku": {
+ "Family": "A",
+ "Name": "Standard",
+ "Capacity": null
+ },
+ "tenantId": "12345678-abcd-1234-abcd-1234567890ab",
+ "networkAcls": null,
+ "enabledForDeployment": 0,
+ "enabledForDiskEncryption": 0,
+ "enabledForTemplateDeployment": 0,
+ "enableSoftDelete": 1,
+ "softDeleteRetentionInDays": 90,
+ "enableRbacAuthorization": 0,
+ "enablePurgeProtection": null
+ }
+ },
+ "resourceId": "/SUBSCRIPTIONS/ABCDEF12-3456-78AB-CD12-34567890ABCD/RESOURCEGROUPS/RG-001/PROVIDERS/MICROSOFT.KEYVAULT/VAULTS/mykeyvault",
+ "operationVersion": "2023-02-01",
+ "resultSignature": "OK",
+ "durationMs": "16"
+ }
+ ],
+ "EventProcessedUtcTime": "2023-04-18T09:42:07.0944007Z",
+ "PartitionId": 1,
+ "EventEnqueuedUtcTime": "2023-04-18T09:41:14.9410000Z"
+ },
+...
+```
## Manual streaming with a logic app+ For data that you can't directly stream to an event hub, you can write to Azure Storage. Then you can use a time-triggered logic app that [pulls data from Azure Blob Storage](../../connectors/connectors-create-api-azureblobstorage.md#add-action) and [pushes it as a message to the event hub](../../connectors/connectors-create-api-azure-event-hubs.md#add-action).
+## Query events from your Event Hubs
+
+Use the process data query function to see the contents of monitoring events sent to your event hub.
+
+Follow the steps below to query your event data using the Azure portal:
+1. Select **Process data** from your event hub.
+1. Find the tile titled **Enable real time insights from events** and select **Start**.
+1. Select **Refresh** in the **Input preview** section of the page to fetch events from your event hub.
++ ## Partner tools with Azure Monitor integration Routing your monitoring data to an event hub with Azure Monitor enables you to easily integrate with external SIEM and monitoring tools. The following table lists examples of tools with Azure Monitor integration.
azure-monitor Query Optimization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/query-optimization.md
Cross-region and cross-cluster execution of queries requires the system to seria
A query that spans more than five workspaces is considered a query that consumes excessive resources. Queries can't span more than 100 workspaces. > [!IMPORTANT]
-> In some multi-workspace scenarios, the CPU and data measurements won't be accurate and will represent the measurement of only a few of the workspaces.
+> - In some multi-workspace scenarios, the CPU and data measurements won't be accurate and will represent the measurement of only a few of the workspaces.
+> - Cross-workspace queries that use an explicit identifier (workspace ID or workspace Azure Resource Manager resource ID) consume fewer resources and are more performant. See [Create a log query across multiple workspaces](./cross-workspace-query.md#identify-workspace-resources).
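As a hedged illustration of the explicit-identifier guidance above, this sketch queries a second workspace by its workspace ID from within a query against a first workspace; the GUIDs are placeholders, and the command assumes the log-analytics CLI extension is installed.

```bash
# Cross-workspace query using explicit workspace IDs (placeholders).
az monitor log-analytics query \
  --workspace "11111111-1111-1111-1111-111111111111" \
  --analytics-query 'union Heartbeat, workspace("22222222-2222-2222-2222-222222222222").Heartbeat | summarize count() by TenantId'
```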
## Parallelism Azure Monitor Logs uses large clusters of Azure Data Explorer to run queries. These clusters vary in scale and can grow to dozens of compute nodes. The system automatically scales the clusters according to workspace placement logic and capacity.
azure-monitor Workspace Design https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/workspace-design.md
Title: Design a Log Analytics workspace architecture description: The article describes the considerations and recommendations for customers preparing to deploy a workspace in Azure Monitor. Previously updated : 05/25/2022 Last updated : 04/05/2023
The following table presents criteria to consider when you design your workspace
| Criteria | Description | |:|:|
-| [Segregate operational and security data](#segregate-operational-and-security-data) | Many customers will create separate workspaces for their operational and security data for data ownership and the extra cost from Microsoft Sentinel. In some cases, you might be able to save costs by consolidating into a single workspace to qualify for a commitment tier. |
+| [Operational and security data](#operational-and-security-data) | You may choose to combine operational data from Azure Monitor in the same workspace as security data from Microsoft Sentinel or separate each into their own workspace. Combining them gives you better visibility across all your data, while your security standards might require separating them so that your security team has a dedicated workspace. Each strategy also has cost implications. |
| [Azure tenants](#azure-tenants) | If you have multiple Azure tenants, you'll usually create a workspace in each one. Several data sources can only send monitoring data to a workspace in the same Azure tenant. | | [Azure regions](#azure-regions) | Each workspace resides in a particular Azure region. You might have regulatory or compliance requirements to store data in specific locations. | | [Data ownership](#data-ownership) | You might choose to create separate workspaces to define data ownership. For example, you might create workspaces by subsidiaries or affiliated companies. |
The following table presents criteria to consider when you design your workspace
| [Legacy agent limitations](#legacy-agent-limitations) | Legacy virtual machine agents have limitations on the number of workspaces they can connect to. | | [Data access control](#data-access-control) | Configure access to the workspace and to different tables and data from different resources. |
-### Segregate operational and security data
-Most customers who use both Azure Monitor and Microsoft Sentinel will create a dedicated workspace for each to segregate ownership of data between operational and security teams. This approach also helps to optimize costs. If Microsoft Sentinel is enabled in a workspace, all data in that workspace is subject to Microsoft Sentinel pricing, even if it's operational data collected by Azure Monitor.
+### Operational and security data
+The decision whether to combine your operational data from Azure Monitor in the same workspace as security data from Microsoft Sentinel or separate each into their own workspace depends on your security requirements and the potential cost implications for your environment.
+
+**Dedicated workspaces**
+Creating dedicated workspaces for Azure Monitor and Microsoft Sentinel allows you to segregate ownership of data between operational and security teams. This approach may also help to optimize costs: when Microsoft Sentinel is enabled in a workspace, all data in that workspace is subject to Microsoft Sentinel pricing, even if it's operational data collected by Azure Monitor.
A workspace with Microsoft Sentinel gets three months of free data retention instead of 31 days. This scenario typically results in higher costs for operational data in a workspace without Microsoft Sentinel. See [Azure Monitor Logs pricing details](cost-logs.md#workspaces-with-microsoft-sentinel).
-The exception is if combining data in the same workspace helps you reach a [commitment tier](#commitment-tiers), which provides a discount to your ingestion charges. For example, consider an organization that has operational data and security data each ingesting about 50 GB per day. Combining the data in the same workspace would allow a commitment tier at 100 GB per day. That scenario would provide a 15% discount for Azure Monitor and a 50% discount for Microsoft Sentinel.
+
+**Combined workspace**
+Combining your data from Azure Monitor and Microsoft Sentinel in the same workspace gives you better visibility across all of your data, allowing you to easily combine both in queries and workbooks. If access to the security data should be limited to a particular team, you can use [table-level RBAC](../logs/manage-access.md#set-table-level-read-access) to block particular users from tables with security data or limit users to accessing the workspace using [resource-context](../logs/manage-access.md#access-mode).
+
+This configuration may result in cost savings if it helps you reach a [commitment tier](#commitment-tiers), which provides a discount to your ingestion charges. For example, consider an organization that has operational data and security data each ingesting about 50 GB per day. Combining the data in the same workspace would allow a commitment tier at 100 GB per day. That scenario would provide a 15% discount for Azure Monitor and a 50% discount for Microsoft Sentinel.
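To sketch the arithmetic (rates are left symbolic because prices vary by region and over time), with $r_M$ and $r_S$ denoting the pay-as-you-go per-GB rates for Azure Monitor and Microsoft Sentinel, the approximate daily savings from combining in this example would be:

$$
\text{daily savings} \approx 50\,\text{GB} \cdot r_M \cdot 0.15 \;+\; 50\,\text{GB} \cdot r_S \cdot 0.50
$$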
If you create separate workspaces for other criteria, you'll usually create more workspace pairs. For example, if you have two Azure tenants, you might create four workspaces with an operational and security workspace in each tenant. -- **If you use both Azure Monitor and Microsoft Sentinel:** Create a separate workspace for each. Consider combining the two if it helps you reach a commitment tier.
+- **If you use both Azure Monitor and Microsoft Sentinel:** Consider separating them into dedicated workspaces if your security team requires it or if it results in cost savings. Consider combining the two for better visibility of your combined monitoring data or if it helps you reach a commitment tier.
- **If you use both Microsoft Sentinel and Microsoft Defender for Cloud:** Consider using the same workspace for both solutions to keep security data in one place. ### Azure tenants
Most resources can only send monitoring data to a workspace in the same Azure te
- **If you have multiple Azure tenants:** Create a workspace for each tenant. For other options including strategies for service providers, see [Multiple tenant strategies](#multiple-tenant-strategies). ### Azure regions
-Each Log Analytics workspaces resides in a [particular Azure region](https://azure.microsoft.com/global-infrastructure/geographies/). You might have regulatory or compliance purposes for keeping data in a particular region. For example, an international company might locate a workspace in each major geographical region, such as the United States and Europe.
+Each Log Analytics workspace resides in a [particular Azure region](https://azure.microsoft.com/global-infrastructure/geographies/). You might have regulatory or compliance purposes for keeping data in a particular region. For example, an international company might locate a workspace in each major geographical region, such as the United States and Europe.
- **If you have requirements for keeping data in a particular geography:** Create a separate workspace for each region with such requirements. - **If you don't have requirements for keeping data in a particular geography:** Use a single workspace for all regions.
azure-signalr Signalr Quickstart Azure Functions Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/signalr-quickstart-azure-functions-javascript.md
When you run the `func new` command from the root directory of the project, the
1. Run the following command to create the `index` function.
- ```bash
- func new -n index -t HttpTrigger
- ```
+ ```bash
+ func new -n index -t HttpTrigger
+ ```
1. Edit *index/function.json* and replace the contents with the following json code:
- ```json
- {
- "bindings": [
- {
- "authLevel": "anonymous",
- "type": "httpTrigger",
- "direction": "in",
- "name": "req",
- "methods": [
- "get",
- "post"
- ]
- },
- {
- "type": "http",
- "direction": "out",
- "name": "res"
- }
- ]
- }
- ```
+ ```json
+ {
+ "bindings": [
+ {
+ "authLevel": "anonymous",
+ "type": "httpTrigger",
+ "direction": "in",
+ "name": "req",
+ "methods": [
+ "get",
+ "post"
+ ]
+ },
+ {
+ "type": "http",
+ "direction": "out",
+ "name": "res"
+ }
+ ]
+ }
+ ```
1. Edit *index/index.js* and replace the contents with the following code:
- ```javascript
- var fs = require('fs').promises
-
- module.exports = async function (context, req) {
- const path = context.executionContext.functionDirectory + '/../content/index.html'
- try {
- var data = await fs.readFile(path);
- context.res = {
- headers: {
- 'Content-Type': 'text/html'
- },
- body: data
- }
- context.done()
- } catch (err) {
- context.log.error(err);
- context.done(err);
- }
- }
- ```
+ ```javascript
+ var fs = require('fs').promises
+
+ module.exports = async function (context, req) {
+ const path = context.executionContext.functionDirectory + '/../content/index.html'
+ try {
+ var data = await fs.readFile(path);
+ context.res = {
+ headers: {
+ 'Content-Type': 'text/html'
+ },
+ body: data
+ }
+ context.done()
+ } catch (err) {
+ context.log.error(err);
+ context.done(err);
+ }
+ }
+ ```
### Create the negotiate function 1. Run the following command to create the `negotiate` function.
- ```bash
- func new -n negotiate -t HttpTrigger
- ```
+ ```bash
+ func new -n negotiate -t HttpTrigger
+ ```
1. Edit *negotiate/function.json* and replace the contents with the following json code:-
- ```json
- {
- "disabled": false,
- "bindings": [
- {
- "authLevel": "anonymous",
- "type": "httpTrigger",
- "direction": "in",
- "methods": [
- "post"
- ],
- "name": "req",
- "route": "negotiate"
- },
- {
- "type": "http",
- "direction": "out",
- "name": "res"
- },
- {
- "type": "signalRConnectionInfo",
- "name": "connectionInfo",
- "hubName": "serverless",
- "connectionStringSetting": "AzureSignalRConnectionString",
- "direction": "in"
- }
- ]
- }
- ```
-
+ ```json
+ {
+ "disabled": false,
+ "bindings": [
+ {
+ "authLevel": "anonymous",
+ "type": "httpTrigger",
+ "direction": "in",
+ "methods": [
+ "post"
+ ],
+ "name": "req",
+ "route": "negotiate"
+ },
+ {
+ "type": "http",
+ "direction": "out",
+ "name": "res"
+ },
+ {
+ "type": "signalRConnectionInfo",
+ "name": "connectionInfo",
+ "hubName": "serverless",
+ "connectionStringSetting": "AzureSignalRConnectionString",
+ "direction": "in"
+ }
+ ]
+ }
+ ```
+1. Edit *negotiate/index.js* and replace the content with the following JavaScript code:
+ ```js
+ module.exports = async function (context, req, connectionInfo) {
+ context.res.body = connectionInfo;
+ };
+ ```
### Create a broadcast function. 1. Run the following command to create the `broadcast` function.
- ```bash
- func new -n broadcast -t TimerTrigger
- ```
+ ```bash
+ func new -n broadcast -t TimerTrigger
+ ```
1. Edit *broadcast/function.json* and replace the contents with the following code: -
- ```json
- {
- "bindings": [
- {
- "name": "myTimer",
- "type": "timerTrigger",
- "direction": "in",
- "schedule": "*/5 * * * * *"
- },
- {
- "type": "signalR",
- "name": "signalRMessages",
- "hubName": "serverless",
- "connectionStringSetting": "AzureSignalRConnectionString",
- "direction": "out"
- }
- ]
- }
- ```
+ ```json
+ {
+ "bindings": [
+ {
+ "name": "myTimer",
+ "type": "timerTrigger",
+ "direction": "in",
+ "schedule": "*/5 * * * * *"
+ },
+ {
+ "type": "signalR",
+ "name": "signalRMessages",
+ "hubName": "serverless",
+ "connectionStringSetting": "AzureSignalRConnectionString",
+ "direction": "out"
+ }
+ ]
+ }
+ ```
1. Edit *broadcast/index.js* and replace the contents with the following code:
-
- ```javascript
- var https = require('https');
-
- var etag = '';
- var star = 0;
-
- module.exports = function (context) {
- var req = https.request("https://api.github.com/repos/azure/azure-signalr", {
- method: 'GET',
- headers: {'User-Agent': 'serverless', 'If-None-Match': etag}
- }, res => {
- if (res.headers['etag']) {
- etag = res.headers['etag']
- }
-
- var body = "";
-
- res.on('data', data => {
- body += data;
- });
- res.on("end", () => {
- if (res.statusCode === 200) {
- var jbody = JSON.parse(body);
- star = jbody['stargazers_count'];
- }
-
- context.bindings.signalRMessages = [{
- "target": "newMessage",
- "arguments": [ `Current star count of https://github.com/Azure/azure-signalr is: ${star}` ]
- }]
- context.done();
- });
- }).on("error", (error) => {
- context.log(error);
- context.res = {
- status: 500,
- body: error
- };
- context.done();
- });
- req.end();
- }
- ```
+
+ ```javascript
+ var https = require('https');
+
+ var etag = '';
+ var star = 0;
+
+ module.exports = function (context) {
+ var req = https.request("https://api.github.com/repos/azure/azure-signalr", {
+ method: 'GET',
+ headers: {'User-Agent': 'serverless', 'If-None-Match': etag}
+ }, res => {
+ if (res.headers['etag']) {
+ etag = res.headers['etag']
+ }
+
+ var body = "";
+
+ res.on('data', data => {
+ body += data;
+ });
+ res.on("end", () => {
+ if (res.statusCode === 200) {
+ var jbody = JSON.parse(body);
+ star = jbody['stargazers_count'];
+ }
+
+ context.bindings.signalRMessages = [{
+ "target": "newMessage",
+ "arguments": [ `Current star count of https://github.com/Azure/azure-signalr is: ${star}` ]
+ }]
+ context.done();
+ });
+ }).on("error", (error) => {
+ context.log(error);
+ context.res = {
+ status: 500,
+ body: error
+ };
+ context.done();
+ });
+ req.end();
+ }
+ ```
### Create the index.html file
azure-video-indexer Emotions Detection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/emotions-detection.md
Previously updated : 06/15/2022 Last updated : 04/17/2023 # Emotions detection
-Emotion detection is an Azure Video Indexer AI feature that automatically detects emotions a video's transcript lines. Each sentence can either be detected as "Anger", "Fear", "Joy", "Neutral", and "Sad". The model works on text only (labeling emotions in video transcripts.) This model doesn't infer the emotional state of people, may not perform where input is ambiguous or unclear, like sarcastic remarks. Thus, the model shouldn't be used for things like assessing employee performance or the emotional state of a person.
+Emotions detection is an Azure Video Indexer AI feature that automatically detects emotions in a video's transcript lines. Each sentence is detected as "Anger", "Fear", "Joy", or "Sad", or as none of these if no emotion is detected.
-The model doesn't have context of the input data, which can impact its accuracy. To increase the accuracy, it's recommended for the input data to be in a clear and unambiguous format.
+The model works on text only (labeling emotions in video transcripts). It doesn't infer the emotional state of people and may not perform well where input is ambiguous or unclear, such as sarcastic remarks. Thus, the model shouldn't be used for things like assessing employee performance or the emotional state of a person.
## Prerequisites
During the emotions detection procedure, the transcript of the video is processe
|Emotions detection |Each sentence is sent to the emotions detection model. The model produces the confidence level of each emotion. If the confidence level exceeds a specific threshold, and there is no ambiguity between positive and negative emotions, the emotion is detected. In any other case, the sentence is labeled as neutral.| |Confidence level |The estimated confidence level of the detected emotions is calculated as a range of 0 to 1. The confidence score represents the certainty in the accuracy of the result. For example, an 82% certainty is represented as an 0.82 score. |
-## Example use cases
-
-* Personalization of keywords to match customer interests, for example websites about England posting promotions about English movies or festivals.
-* Deep-searching archives for insights on specific keywords to create feature stories about companies, personas or technologies, for example by a news agency.
- ## Considerations and limitations when choosing a use case Below are some considerations to keep in mind when using emotions detection:
azure-video-indexer Monitor Video Indexer Data Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/monitor-video-indexer-data-reference.md
Previously updated : 05/10/2022 Last updated : 04/17/2023 <!-- VERSION 2.3 Template for monitoring data reference article for Azure services. This article is support for the main "Monitoring [servicename]" article for the service. -->
The following schemas are in use by Azure Video Indexer
"ExternalId": null, "Filename": "1 Second Video 1.mp4", "AnimationModelId": null,
- "BrandsCategories": null
+ "BrandsCategories": null,
+ "CustomLanguages": null,
+ "ExcludedAIs": "Face",
+ "LogoGroupId": "ea9d154d-0845-456c-857e-1c9d5d925d95"
} } } ``` - ## Next steps <!-- replace below with the proper link to your main monitoring service article -->
azure-video-indexer Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/release-notes.md
Title: Azure Video Indexer release notes | Microsoft Docs
description: To stay up-to-date with the most recent developments, this article provides you with the latest updates on Azure Video Indexer. Previously updated : 04/06/2023 Last updated : 04/17/2023
To stay up-to-date with the most recent Azure Video Indexer developments, this a
## April 2023
-## Observed people tracing improvements
+### Excluding sensitive AI models
+
+Following the Microsoft Responsible AI agenda, Azure Video Indexer now allows you to exclude specific AI models when indexing media files. The list of sensitive AI models includes face detection, observed people, emotions, and labels identification.
+
+This feature is currently available through the API, and is available in all presets except the Advanced preset.
+
+### Observed people tracing improvements
For more information, see [Considerations and limitations when choosing a use case](observed-matched-people.md#considerations-and-limitations-when-choosing-a-use-case).
backup Backup Azure Arm Restore Vms https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-arm-restore-vms.md
Title: Restore VMs by using the Azure portal
description: Restore an Azure virtual machine from a recovery point by using the Azure portal, including the Cross Region Restore feature. Previously updated : 04/06/2023 Last updated : 04/18/2023
In summary, the **Availability Zone** will only appear when
![Backup jobs filtered](./media/backup-azure-arm-restore-vms/secbackupjobs.png)
+## Cross Subscription Restore (preview)
+
+Azure Backup now allows you to perform Cross Subscription Restore (CSR), which helps you restore Azure VMs in a subscription that's different from the default subscription, that is, the subscription that contains the recovery points.
+
+This feature is enabled for Recovery Services vaults by default. However, there may be instances when you need to block Cross Subscription Restore based on your cloud infrastructure. You can enable, disable, or permanently disable Cross Subscription Restore for existing vaults by going to *Vault* > **Properties** > **Cross Subscription Restore**.
++
+>[!Note]
>- Once CSR is permanently disabled on a vault, it can't be re-enabled; the operation is irreversible.
+>- If CSR is disabled but not permanently disabled, then you can reverse the operation by selecting *Vault* > **Properties** > **Cross Subscription Restore** > **Enable**.
+>- If a Recovery Services vault is moved to a different subscription when CSR is disabled or permanently disabled, restore to the original subscription fails.
+ ## Restoring unmanaged VMs and disks as managed You're provided with an option to restore [unmanaged disks](../storage/common/storage-disaster-recovery-guidance.md#azure-unmanaged-disks) as [managed disks](../virtual-machines/managed-disks-overview.md) during restore. By default, the unmanaged VMs / disks are restored as unmanaged VMs / disks. However, if you choose to restore as managed VMs / disks, it's now possible to do so. These restore operations aren't triggered from the snapshot phase but only from the vault phase. This feature isn't available for unmanaged encrypted VMs.
backup Backup Azure Diagnostic Events https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-diagnostic-events.md
To send your vault diagnostics data to Log Analytics:
1. Select **Resource specific** in the toggle, and select the following five events: **CoreAzureBackup**, **AddonAzureBackupJobs**, **AddonAzureBackupPolicy**, **AddonAzureBackupStorage**, and **AddonAzureBackupProtectedInstance**. 1. Select **Save**.
-
+ :::image type="content" source="./media/backup-azure-configure-backup-reports/recovery-services-vault-diagnostics-settings-inline.png" alt-text="Screenshot shows the recovery services vault diagnostics settings." lightbox="./media/backup-azure-configure-backup-reports/recovery-services-vault-diagnostics-settings-expanded.png":::
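The same setting can be scripted; this hedged sketch creates the resource-specific diagnostic setting for a Recovery Services vault (the vault ID, workspace ID, and setting name are placeholders, and the `--export-to-resource-specific` flag is assumed to be available in your CLI version).

```bash
VAULT_ID="/subscriptions/<sub-id>/resourceGroups/rg-001/providers/Microsoft.RecoveryServices/vaults/myvault"
WORKSPACE_ID="/subscriptions/<sub-id>/resourceGroups/rg-001/providers/Microsoft.OperationalInsights/workspaces/myworkspace"

# Send the five backup events to Log Analytics as resource-specific tables.
az monitor diagnostic-settings create \
  --name "backup-reports" \
  --resource "$VAULT_ID" \
  --workspace "$WORKSPACE_ID" \
  --export-to-resource-specific true \
  --logs '[
    {"category": "CoreAzureBackup", "enabled": true},
    {"category": "AddonAzureBackupJobs", "enabled": true},
    {"category": "AddonAzureBackupPolicy", "enabled": true},
    {"category": "AddonAzureBackupStorage", "enabled": true},
    {"category": "AddonAzureBackupProtectedInstance", "enabled": true}
  ]'
```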
# [Backup vaults](#tab/backup-vaults)
To send your vault diagnostics data to Log Analytics:
4. Select the following events: **CoreAzureBackup**, **AddonAzureBackupJobs**, **AddonAzureBackupPolicy**, and **AddonAzureBackupProtectedInstance**. 5. Select **Save**.
-
-
+ :::image type="content" source="./media/backup-azure-configure-backup-reports/backup-vault-diagnostics-settings.png" alt-text="Screenshot shows the backup vault diagnostics settings.":::
After data flows into the Log Analytics workspace, dedicated tables for each of these events are created in your workspace. You can query any of these tables directly. You can also perform joins or unions between these tables if necessary.
backup Configure Reports https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/configure-reports.md
Azure Resource Manager resources, such as Recovery Services vaults, record infor
In the monitoring section of your Recovery Services vault, select **Diagnostics settings** and specify the target for the Recovery Services vault's diagnostic data. To learn more about using diagnostic events, see [Use diagnostics settings for Recovery Services vaults](./backup-azure-diagnostic-events.md). - Azure Backup also provides a built-in Azure Policy definition, which automates the configuration of diagnostics settings for all Recovery Services vaults in a given scope. To learn how to use this policy, see [Configure vault diagnostics settings at scale](./azure-policy-configure-diagnostics.md).
Azure Backup also provides a built-in Azure Policy definition, which automates t
In the monitoring section of your Backup vault, select **Diagnostics settings** and specify the target for the Backup vault's diagnostic data. -
In the monitoring section of your Backup vault, select **Diagnostics settings**
After you've configured your vaults to send data to Log Analytics, view your Backup reports by going to the Backup center and selecting **Backup Reports**. Select the relevant workspace(s) on the **Get started** tab. - The report contains various tabs:
The report contains various tabs:
Use this tab to get a high-level overview of your backup estate. You can get a quick glance of the total number of backup items, total cloud storage consumed, the number of protected instances, and the job success rate per workload type. For more detailed information about a specific backup artifact type, go to the respective tabs.
-
##### Backup Items
chaos-studio Chaos Studio Limitations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/chaos-studio/chaos-studio-limitations.md
During the public preview of Azure Chaos Studio, there are a few limitations and
## Limitations
+* The target resources must be in [one of the regions supported by the Azure Chaos Studio Preview](https://azure.microsoft.com/global-infrastructure/services/?products=chaos-studio).
* For agent-based faults, the virtual machine must have outbound network access to the Chaos Studio agent service: * Regional endpoints to allowlist are listed [in this article](chaos-studio-permissions-security.md#network-security). * If sending telemetry data to Application Insights, the IPs [in this document](../azure-monitor/app/ip-addresses.md) are also required.
cloud-services-extended-support Certificates And Key Vault https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services-extended-support/certificates-and-key-vault.md
Key Vault is used to store certificates that are associated to Cloud Services (e
1. Sign in to the Azure portal and navigate to the Key Vault. If you do not have a Key Vault set up, you can opt to create one in this same window.
-2. Select **Access polices**
+2. Select **Access Configuration**
:::image type="content" source="media/certs-and-key-vault-1.png" alt-text="Image shows selecting access policies from the key vault blade.":::
-3. Ensure the access policies include the following property:
+3. Ensure the access configuration includes the following property:
- **Enable access to Azure Virtual Machines for deployment** :::image type="content" source="media/certs-and-key-vault-2.png" alt-text="Image shows access policies window in the Azure portal.":::
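If you prefer scripting over the portal, the same property can be set from the CLI; this minimal sketch assumes a vault named `mykeyvault`.

```bash
# Allow Azure Virtual Machines to retrieve certificates stored as
# secrets in this vault during deployment.
az keyvault update --name mykeyvault --enabled-for-deployment true
```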
cloud-services-extended-support In Place Migration Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services-extended-support/in-place-migration-overview.md
These are top scenarios involving combinations of resources, features, and Cloud
| Service | Configuration | Comments | |||| | [Azure AD Domain Services](../active-directory-domain-services/migrate-from-classic-vnet.md) | Virtual networks that contain Azure Active Directory Domain services. | Virtual network containing both Cloud Service deployment and Azure AD Domain services is supported. Customer first needs to separately migrate Azure AD Domain services and then migrate the virtual network left only with the Cloud Service deployment |
-| Cloud Service | Cloud Service with a deployment in a single slot only. | Cloud Services containing a prod slot deployment can be migrated. It is not reccomended to migrate staging slot as this can result in issues with retaining service FQDN |
+| Cloud Service | Cloud Service with a deployment in a single slot only. | Cloud Services containing a prod slot deployment can be migrated. It isn't recommended to migrate the staging slot, as this can result in issues with retaining the service FQDN. To migrate the staging slot, first promote the staging deployment to production and then migrate to ARM. |
| Cloud Service | Deployment not in a publicly visible virtual network (default virtual network deployment) | A Cloud Service can be in a publicly visible virtual network, in a hidden virtual network, or not in any virtual network. Cloud Services in a hidden virtual network and publicly visible virtual networks are supported for migration. Customers can use the Validate API to tell if a deployment is inside a default virtual network or not and thus determine if it can be migrated. | |Cloud Service | XML extensions (BGInfo, Visual Studio Debugger, Web Deploy, and Remote Debugging). | All XML extensions are supported for migration | Virtual Network | Virtual network containing multiple Cloud Services. | A virtual network containing multiple Cloud Services is supported for migration. The virtual network and all the Cloud Services within it will be migrated together to Azure Resource Manager. |
cloud-services-extended-support In Place Migration Technical Details https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services-extended-support/in-place-migration-technical-details.md
This article discusses the technical details regarding the migration tool as per
- Each Cloud Services (extended support) deployment is an independent Cloud Service. Deployments are no longer grouped into a cloud service using slots. - If you have two slots in your Cloud Service (classic), you need to delete one slot (staging) and use the migration tool to move the other (production) slot to Azure Resource Manager. - The public IP address on the Cloud Service deployment remains the same after migration to Azure Resource Manager and is exposed as a Basic SKU IP (dynamic or static) resource. -- The DNS name and domain (cloudapp.azure.net) for the migrated cloud service remains the same.
+- The DNS name and domain (cloudapp.net) for the migrated cloud service remains the same.
### Virtual network migration - If a Cloud Services deployment is in a virtual network, then during migration all Cloud Services and associated virtual network resources are migrated together.
cognitive-services Spatial Analysis Container https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/spatial-analysis-container.md
sudo apt-get install -y docker-ce docker-ce-cli containerd.io
Install the *nvidia-docker-2* software package. ```bash
-distribution=$(. /etc/os-release;echo $ID$VERSION_ID)
+DISTRIBUTION=$(. /etc/os-release;echo $ID$VERSION_ID)
``` ```bash curl -s -L https://nvidia.github.io/nvidia-docker/gpgkey | sudo apt-key add - ``` ```bash
-curl -s -L https://nvidia.github.io/nvidia-docker/$distribution/nvidia-docker.list | sudo tee /etc/apt/sources.list.d/nvidia-docker.list
+curl -s -L https://nvidia.github.io/nvidia-docker/$DISTRIBUTION/nvidia-docker.list | sudo tee /etc/apt/sources.list.d/nvidia-docker.list
``` ```bash sudo apt-get update
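# Hedged editor's sketch: after refreshing the package lists, the usual
# next steps are installing the runtime package and verifying that Docker
# can see the GPU. The CUDA image tag below is an assumption; adjust it
# to a tag available in your environment.
sudo apt-get install -y nvidia-docker2
sudo systemctl restart docker
sudo docker run --rm --gpus all nvidia/cuda:11.4.2-base-ubuntu20.04 nvidia-smi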
cognitive-services Data Formats https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-text-analytics-for-health/concepts/data-formats.md
+
+ Title: Custom Text Analytics for health data formats
+
+description: Learn about the data formats accepted by custom text analytics for health.
++++++ Last updated : 04/14/2023++++
+# Accepted data formats in custom text analytics for health
+
+Use this article to learn about formatting your data to be imported into custom text analytics for health.
+
+If you are trying to [import your data](../how-to/create-project.md#import-project) into custom Text Analytics for health, it has to follow a specific format. If you don't have data to import, you can [create your project](../how-to/create-project.md) and use the Language Studio to [label your documents](../how-to/label-data.md).
+
+Your Labels file should be in the `json` format below to be used when importing your labels into a project.
+
+```json
+{
+ "projectFileVersion": "{API-VERSION}",
+ "stringIndexType": "Utf16CodeUnit",
+ "metadata": {
+ "projectName": "{PROJECT-NAME}",
+ "projectKind": "CustomHealthcare",
+ "description": "Trying out custom Text Analytics for health",
+ "language": "{LANGUAGE-CODE}",
+ "multilingual": true,
+ "storageInputContainerName": "{CONTAINER-NAME}",
+ "settings": {}
+ },
+ "assets": {
+ "projectKind": "CustomHealthcare",
+ "entities": [
+ {
+ "category": "Entity1",
+ "compositionSetting": "{COMPOSITION-SETTING}",
+ "list": {
+ "sublists": [
+ {
+ "listKey": "One",
+ "synonyms": [
+ {
+ "language": "en",
+ "values": [
+ "EntityNumberOne",
+ "FirstEntity"
+ ]
+ }
+ ]
+ }
+ ]
+ }
+ },
+ {
+ "category": "Entity2"
+ },
+ {
+ "category": "MedicationName",
+ "list": {
+ "sublists": [
+ {
+ "listKey": "research drugs",
+ "synonyms": [
+ {
+ "language": "en",
+ "values": [
+ "rdrug a",
+ "rdrug b"
+ ]
+ }
+ ]
+
+ }
+ ]
+      },
+ "prebuilts": "MedicationName"
+ }
+ ],
+ "documents": [
+ {
+ "location": "{DOCUMENT-NAME}",
+ "language": "{LANGUAGE-CODE}",
+ "dataset": "{DATASET}",
+ "entities": [
+ {
+ "regionOffset": 0,
+ "regionLength": 500,
+ "labels": [
+ {
+ "category": "Entity1",
+ "offset": 25,
+ "length": 10
+ },
+ {
+ "category": "Entity2",
+ "offset": 120,
+ "length": 8
+ }
+ ]
+ }
+ ]
+ },
+ {
+ "location": "{DOCUMENT-NAME}",
+ "language": "{LANGUAGE-CODE}",
+ "dataset": "{DATASET}",
+ "entities": [
+ {
+ "regionOffset": 0,
+ "regionLength": 100,
+ "labels": [
+ {
+ "category": "Entity2",
+ "offset": 20,
+ "length": 5
+ }
+ ]
+ }
+ ]
+ }
+ ]
+ }
+}
+
+```
+
+|Key |Placeholder |Value | Example |
+|||-|--|
+| `multilingual` | `true`| A boolean value that enables you to have documents in multiple languages in your dataset. When your model is deployed, you can query it in any supported language, not necessarily one included in your training documents. See [language support](../language-support.md) to learn more about multilingual support. | `true`|
+|`projectName`|`{PROJECT-NAME}`|Project name|`myproject`|
+| `storageInputContainerName` |`{CONTAINER-NAME}`|Container name|`mycontainer`|
+| `entities` | | Array containing all the entity types you have in the project. These are the entity types that will be extracted from your documents.| |
+| `category` | | The name of the entity type, which can be user defined for new entity definitions, or predefined for prebuilt entities. For more information, see the entity naming rules below.| |
+|`compositionSetting`|`{COMPOSITION-SETTING}`|Rule that defines how to manage multiple components in your entity. Options are `combineComponents` or `separateComponents`. |`combineComponents`|
+| `list` | | Array containing all the sublists you have in the project for a specific entity. Lists can be added to prebuilt entities or new entities with learned components.| |
+|`sublists`|`[]`|Array containing sublists. Each sublist is a key and its associated values.|`[]`|
+| `listKey`| `One` | A normalized value for the list of synonyms to map back to in prediction. | `One` |
+|`synonyms`|`[]`|Array containing all the synonyms|synonym|
+| `language` | `{LANGUAGE-CODE}` | A string specifying the language code for the synonym in your sublist. If your project is a multilingual project and you want to support your list of synonyms for all the languages in your project, you have to explicitly add your synonyms to each language. See [Language support](../language-support.md) for more information about supported language codes. |`en`|
+| `values`| `"EntityNumberOne"`, `"FirstEntity"` | A list of comma-separated strings that are matched exactly for extraction and map to the list key. | `"EntityNumberOne"`, `"FirstEntity"` |
+| `prebuilts` | `MedicationName` | The name of the prebuilt component populating the prebuilt entity. [Prebuilt entities](../../text-analytics-for-health/concepts/health-entity-categories.md) are automatically loaded into your project by default but you can extend them with list components in your labels file. | `MedicationName` |
+| `documents` | | Array containing all the documents in your project and a list of the entities labeled within each document. | [] |
+| `location` | `{DOCUMENT-NAME}` | The location of the documents in the storage container. Since all the documents are in the root of the container this should be the document name.|`doc1.txt`|
+| `dataset` | `{DATASET}` | The set this document is assigned to when the data is split before training. Learn more about [data splitting](../how-to/train-model.md#data-splitting). Possible values for this field are `Train` and `Test`. |`Train`|
+| `regionOffset` | | The inclusive character position of the start of the text. |`0`|
+| `regionLength` | | The length of the bounding box in terms of UTF16 characters. Training only considers the data in this region. |`500`|
+| `category` | | The type of entity associated with the span of text specified. | `Entity1`|
+| `offset` | | The start position for the entity text. | `25`|
+| `length` | | The length of the entity in terms of UTF16 characters. | `10`|
+| `language` | `{LANGUAGE-CODE}` | A string specifying the language code for the document used in your project. If your project is a multilingual project, choose the language code of the majority of the documents. See [Language support](../language-support.md) for more information about supported language codes. |`en`|
+
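+If you generate your labels file programmatically, a minimal sketch like the following can help keep the structure consistent with the format above. It's a Python illustration with placeholder project, container, and document names, not an official tool; adjust the API version and values to match your project.
+
+```python
+import json
+
+# A sketch that assembles a labels file matching the format above.
+# Project, container, and document names are placeholder values.
+labels = {
+    "projectFileVersion": "2022-05-01",  # replace with your API version
+    "stringIndexType": "Utf16CodeUnit",
+    "metadata": {
+        "projectName": "myproject",
+        "projectKind": "CustomHealthcare",
+        "language": "en",
+        "multilingual": True,
+        "storageInputContainerName": "mycontainer",
+        "settings": {},
+    },
+    "assets": {
+        "projectKind": "CustomHealthcare",
+        "entities": [{"category": "Entity1"}],
+        "documents": [{
+            "location": "doc1.txt",
+            "language": "en",
+            "dataset": "Train",
+            "entities": [{
+                "regionOffset": 0,
+                "regionLength": 500,
+                "labels": [{"category": "Entity1", "offset": 25, "length": 10}],
+            }],
+        }],
+    },
+}
+
+with open("labels.json", "w", encoding="utf-8") as f:
+    json.dump(labels, f, indent=2)
+```
+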
+## Entity naming rules
+
+1. [Prebuilt entity names](../../text-analytics-for-health/concepts/health-entity-categories.md) are predefined. They must be populated with a prebuilt component and it must match the entity name.
+2. New user defined entities (entities with learned components or labeled text) can't use prebuilt entity names.
+3. New user defined entities can't be populated with prebuilt components as prebuilt components must match their associated entities names and have no labeled data assigned to them in the documents array.
+++
+## Next steps
+* You can import your labeled data into your project directly. Learn how to [import project](../how-to/create-project.md#import-project)
+* See the [how-to article](../how-to/label-data.md) more information about labeling your data.
+* When you're done labeling your data, you can [train your model](../how-to/train-model.md).
cognitive-services Entity Components https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-text-analytics-for-health/concepts/entity-components.md
+
+ Title: Entity components in custom Text Analytics for health
+
+description: Learn how custom Text Analytics for health extracts entities from text
++++++ Last updated : 04/14/2023++++
+# Entity components in custom text analytics for health
+
+In custom Text Analytics for health, entities are relevant pieces of information that are extracted from your unstructured input text. An entity can be extracted by different methods. They can be learned through context, matched from a list, or detected by a prebuilt recognized entity. Every entity in your project is composed of one or more of these methods, which are defined as your entity's components. When an entity is defined by more than one component, their predictions can overlap. You can determine the behavior of an entity prediction when its components overlap by using a fixed set of options in the **Entity options**.
+
+## Component types
+
+An entity component determines a way you can extract the entity. An entity can contain one component, which would determine the only method that would be used to extract the entity, or multiple components to expand the ways in which the entity is defined and extracted.
+
+The [Text Analytics for health entities](../../text-analytics-for-health/concepts/health-entity-categories.md) are automatically loaded into your project as entities with prebuilt components. You can define list components for entities with prebuilt components but you can't add learned components. Similarly, you can create new entities with learned and list components, but you can't populate them with additional prebuilt components.
+
+### Learned component
+
+The learned component uses the entity tags you label your text with to train a machine learned model. The model learns to predict where the entity is, based on the context within the text. Your labels provide examples of where the entity is expected to be present in text, based on the meaning of the words around it and the words that were labeled. This component is only defined if you add labels to your data for the entity. If you don't label any data, it won't have a learned component.
+
+The Text Analytics for health entities, which by default have prebuilt components, can't be extended with learned components, meaning they don't require or accept further labeling to function.
++
+### List component
+
+The list component represents a fixed, closed set of related words along with their synonyms. The component performs an exact text match against the list of values you provide as synonyms. Each synonym belongs to a "list key", which can be used as the normalized, standard value for the synonym and is returned in the output if the list component is matched. List keys are **not** used for matching.
+
+In multilingual projects, you can specify a different set of synonyms for each language. While using the prediction API, you can specify the language in the input request, which will only match the synonyms associated with that language.
+++
+### Prebuilt component
+
+The [Text Analytics for health entities](../../text-analytics-for-health/concepts/health-entity-categories.md) are automatically loaded into your project as entities with prebuilt components. You can define list components for entities with prebuilt components but you cannot add learned components. Similarly, you can create new entities with learned and list components, but you cannot populate them with additional prebuilt components. Entities with prebuilt components are pretrained and can extract information relating to their categories without any labels.
+++
+## Entity options
+
+When multiple components are defined for an entity, their predictions may overlap. When an overlap occurs, each entity's final prediction is determined by one of the following options.
+
+### Combine components
+
+Combine components as one entity when they overlap by taking the union of all the components.
+
+Use this to combine all components when they overlap. When components are combined, you get all the extra information that's tied to a list or prebuilt component when they are present.
+
+#### Example
+
+Suppose you have an entity called Software that has a list component, which contains "Proseware OS" as an entry. In your input data, you have "I want to buy Proseware OS 9" with "Proseware OS 9" tagged as Software:
++
+By using combine components, the entity will return with the full context as "Proseware OS 9" along with the key from the list component:
++
+Suppose you had the same utterance but only "OS 9" was predicted by the learned component:
++
+With combine components, the entity will still return as "Proseware OS 9" with the key from the list component:
+++
+### Don't combine components
+
+Each overlapping component will return as a separate instance of the entity. Apply your own logic after prediction with this option.
+
+#### Example
+
+Suppose you have an entity called Software that has a list component, which contains "Proseware Desktop" as an entry. In your labeled data, you have "I want to buy Proseware Desktop Pro" with "Proseware Desktop Pro" labeled as Software:
++
+When you do not combine components, the entity will return twice:
+++
+## How to use components and options
+
+Components give you the flexibility to define your entity in more than one way. When you combine components, you make sure that each component is represented and you reduce the number of entities returned in your predictions.
+
+A common practice is to extend a prebuilt component with a list of values that the prebuilt might not support. For example, if you have a **Medication Name** entity, which has a `Medication.Name` prebuilt component added to it, the entity may not predict all the medication names specific to your domain. You can use a list component to extend the values of the Medication Name entity, thereby extending the prebuilt component with your own medication names.
+
+Other times you may be interested in extracting an entity through context, such as a **medical device**. You would label data for the learned component of the medical device entity so the model learns _where_ a medical device appears based on its position within the sentence. You may also have a list of medical devices that you already know beforehand that you'd like to always extract. Combining both components in one entity allows you to get both options for the entity.
+
+When you don't combine components, you allow every component to act as an independent entity extractor. One way of using this option is to separate the entities extracted from a list from the ones extracted through the learned or prebuilt components, so you can handle and treat them differently.
++
+## Next steps
+
+* [Entities with prebuilt components](../../text-analytics-for-health/concepts/health-entity-categories.md)
cognitive-services Evaluation Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-text-analytics-for-health/concepts/evaluation-metrics.md
+
+ Title: Custom text analytics for health evaluation metrics
+
+description: Learn about evaluation metrics in custom Text Analytics for health
++++++ Last updated : 04/14/2023++++
+# Evaluation metrics for custom Text Analytics for health models
+
+Your [dataset is split](../how-to/train-model.md#data-splitting) into two parts: a set for training and a set for testing. The training set is used to train the model, while the testing set is used to test the model after training and calculate its performance. The testing set isn't introduced to the model during the training process, to make sure that the model is tested on new data.
+
+Model evaluation is triggered automatically after training is completed successfully. The evaluation process starts by using the trained model to predict user defined entities for documents in the test set, and compares them with the provided data labels (which establishes a baseline of truth). The results are returned so you can review the model's performance. User defined entities are **included** in the evaluation factoring in Learned and List components; Text Analytics for health prebuilt entities are **not** factored in the model evaluation. For evaluation, custom Text Analytics for health uses the following metrics:
+
+* **Precision**: Measures how precise/accurate your model is. It is the ratio between the correctly identified positives (true positives) and all identified positives. The precision metric reveals how many of the predicted entities are correctly labeled.
+
+ `Precision = #True_Positive / (#True_Positive + #False_Positive)`
+
+* **Recall**: Measures the model's ability to predict actual positive classes. It is the ratio between the predicted true positives and what was actually tagged. The recall metric reveals how many of the actual entities are correctly predicted.
+
+ `Recall = #True_Positive / (#True_Positive + #False_Negatives)`
+
+* **F1 score**: The F1 score is a function of Precision and Recall. It's needed when you seek a balance between Precision and Recall.
+
+ `F1 Score = 2 * Precision * Recall / (Precision + Recall)` <br>
+
+>[!NOTE]
+> Precision, recall and F1 score are calculated for each entity separately (*entity-level* evaluation) and for the model collectively (*model-level* evaluation).
+
+## Model-level and entity-level evaluation metrics
+
+Precision, recall, and F1 score are calculated for each entity separately (entity-level evaluation) and for the model collectively (model-level evaluation).
+
+The definitions of precision, recall, and F1 score are the same for both entity-level and model-level evaluations. However, the counts for *True Positives*, *False Positives*, and *False Negatives* can differ. For example, consider the following text.
+
+### Example
+
+*The first party of this contract is John Smith, resident of 5678 Main Rd., City of Frederick, state of Nebraska. And the second party is Forrest Ray, resident of 123-345 Integer Rd., City of Corona, state of New Mexico. There is also Fannie Thomas resident of 7890 River Road, city of Colorado Springs, State of Colorado.*
+
+The model extracting entities from this text could have the following predictions:
+
+| Entity | Predicted as | Actual type |
+|--|--|--|
+| John Smith | Person | Person |
+| Frederick | Person | City |
+| Forrest | City | Person |
+| Fannie Thomas | Person | Person |
+| Colorado Springs | City | City |
+
+### Entity-level evaluation for the *person* entity
+
+The model would have the following entity-level evaluation, for the *person* entity:
+
+| Key | Count | Explanation |
+|--|--|--|
+| True Positive | 2 | *John Smith* and *Fannie Thomas* were correctly predicted as *person*. |
+| False Positive | 1 | *Frederick* was incorrectly predicted as *person* while it should have been *city*. |
+| False Negative | 1 | *Forrest* was incorrectly predicted as *city* while it should have been *person*. |
+
+* **Precision**: `#True_Positive / (#True_Positive + #False_Positive)` = `2 / (2 + 1) = 0.67`
+* **Recall**: `#True_Positive / (#True_Positive + #False_Negatives)` = `2 / (2 + 1) = 0.67`
+* **F1 Score**: `2 * Precision * Recall / (Precision + Recall)` = `(2 * 0.67 * 0.67) / (0.67 + 0.67) = 0.67`
+
+### Entity-level evaluation for the *city* entity
+
+The model would have the following entity-level evaluation, for the *city* entity:
+
+| Key | Count | Explanation |
+|--|--|--|
+| True Positive | 1 | *Colorado Springs* was correctly predicted as *city*. |
+| False Positive | 1 | *Forrest* was incorrectly predicted as *city* while it should have been *person*. |
+| False Negative | 1 | *Frederick* was incorrectly predicted as *person* while it should have been *city*. |
+
+* **Precision** = `#True_Positive / (#True_Positive + #False_Positive)` = `1 / (1 + 1) = 0.5`
+* **Recall** = `#True_Positive / (#True_Positive + #False_Negatives)` = `1 / (1 + 1) = 0.5`
+* **F1 Score** = `2 * Precision * Recall / (Precision + Recall)` = `(2 * 0.5 * 0.5) / (0.5 + 0.5) = 0.5`
+
+### Model-level evaluation for the collective model
+
+The model would have the following evaluation for the model in its entirety:
+
+| Key | Count | Explanation |
+|--|--|--|
+| True Positive | 3 | *John Smith* and *Fannie Thomas* were correctly predicted as *person*. *Colorado Springs* was correctly predicted as *city*. This is the sum of true positives for all entities. |
+| False Positive | 2 | *Forrest* was incorrectly predicted as *city* while it should have been *person*. *Frederick* was incorrectly predicted as *person* while it should have been *city*. This is the sum of false positives for all entities. |
+| False Negative | 2 | *Forrest* was incorrectly predicted as *city* while it should have been *person*. *Frederick* was incorrectly predicted as *person* while it should have been *city*. This is the sum of false negatives for all entities. |
+
+* **Precision** = `#True_Positive / (#True_Positive + #False_Positive)` = `3 / (3 + 2) = 0.6`
+* **Recall** = `#True_Positive / (#True_Positive + #False_Negatives)` = `3 / (3 + 2) = 0.6`
+* **F1 Score** = `2 * Precision * Recall / (Precision + Recall)` = `(2 * 0.6 * 0.6) / (0.6 + 0.6) = 0.6`
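+
+The metric definitions are easy to verify in code. The following is a small Python sketch that reproduces the model-level numbers from this example:
+
+```python
+def precision(tp: int, fp: int) -> float:
+    return tp / (tp + fp)
+
+def recall(tp: int, fn: int) -> float:
+    return tp / (tp + fn)
+
+def f1(p: float, r: float) -> float:
+    return 2 * p * r / (p + r)
+
+# Model-level counts from the example above: TP = 3, FP = 2, FN = 2
+p, r = precision(3, 2), recall(3, 2)
+print(round(p, 2), round(r, 2), round(f1(p, r), 2))  # 0.6 0.6 0.6
+```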
+
+## Interpreting entity-level evaluation metrics
+
+So what does it actually mean to have high precision or high recall for a certain entity?
+
+| Recall | Precision | Interpretation |
+|--|--|--|
+| High | High | This entity is handled well by the model. |
+| Low | High | The model cannot always extract this entity, but when it does it is with high confidence. |
+| High | Low | The model extracts this entity well, but with low precision: other entity types are sometimes incorrectly extracted as this one. |
+| Low | Low | This entity type is poorly handled by the model, because it is not usually extracted. When it is, it is not with high confidence. |
+
+## Guidance
+
+After you train your model, you'll see guidance and recommendations on how to improve it. It's recommended to have a model that covers all the points in the guidance section.
+
+* Training set has enough data: When an entity type has fewer than 15 labeled instances in the training data, it can lead to lower accuracy due to the model not being adequately trained on these cases. In this case, consider adding more labeled data in the training set. You can check the *data distribution* tab for more guidance.
+
+* All entity types are present in test set: When the testing data lacks labeled instances for an entity type, the model's test performance may become less comprehensive due to untested scenarios. You can check the *test set data distribution* tab for more guidance.
+
+* Entity types are balanced within training and test sets: When sampling bias causes an inaccurate representation of an entity type's frequency, it can lead to lower accuracy due to the model expecting that entity type to occur too often or too little. You can check the *data distribution* tab for more guidance.
+
+* Entity types are evenly distributed between training and test sets: When the mix of entity types doesn't match between training and test sets, it can lead to lower testing accuracy due to the model being trained differently from how it's being tested. You can check the *data distribution* tab for more guidance.
+
+* Unclear distinction between entity types in training set: When the training data is similar for multiple entity types, it can lead to lower accuracy because the entity types may be frequently misclassified as each other. Review the following entity types and consider merging them if they're similar. Otherwise, add more examples to better distinguish them from each other. You can check the *confusion matrix* tab for more guidance.
++
+## Confusion matrix
+
+A confusion matrix is an N x N matrix used for model performance evaluation, where N is the number of entities.
+The matrix compares the expected labels with the ones predicted by the model.
+This gives a holistic view of how well the model is performing and what kinds of errors it's making.
+
+You can use the confusion matrix to identify entities that are too close to each other and often get mistaken for each other (ambiguity). In this case, consider merging these entity types. If that isn't possible, consider adding more tagged examples of both entities to help the model differentiate between them.
+
+The highlighted diagonal in the image below is the correctly predicted entities, where the predicted tag is the same as the actual tag.
++
+You can calculate the entity-level and model-level evaluation metrics from the confusion matrix:
+
+* The values in the diagonal are the *true positive* values of each entity.
+* The sum of the values in the entity rows (excluding the diagonal) is the *false positive* count of that entity.
+* The sum of the values in the entity columns (excluding the diagonal) is the *false negative* count of that entity.
+
+Similarly,
+
+* The *true positive* of the model is the sum of *true positives* for all entities.
+* The *false positive* of the model is the sum of *false positives* for all entities.
+* The *false negative* of the model is the sum of *false negatives* for all entities.
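+
+The following Python sketch illustrates these relationships, assuming rows correspond to predicted entities and columns to actual entities (confirm the axis convention in your own evaluation view). The sample matrix reproduces the counts from the earlier person/city example:
+
+```python
+import numpy as np
+
+# Confusion matrix for the earlier person/city example, assuming rows are
+# predicted entities and columns are actual entities.
+cm = np.array([
+    [2, 1],  # predicted person: 2 actually person, 1 actually city (Frederick)
+    [1, 1],  # predicted city: 1 actually person (Forrest), 1 actually city
+])
+
+true_positives = np.diag(cm)                       # diagonal values
+false_positives = cm.sum(axis=1) - true_positives  # row sums minus diagonal
+false_negatives = cm.sum(axis=0) - true_positives  # column sums minus diagonal
+
+# Model-level totals match the earlier example: TP = 3, FP = 2, FN = 2
+print(true_positives.sum(), false_positives.sum(), false_negatives.sum())
+```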
+
+## Next steps
+
+* [Custom text analytics for health overview](../overview.md)
+* [View a model's performance in Language Studio](../how-to/view-model-evaluation.md)
+* [Train a model](../how-to/train-model.md)
cognitive-services Call Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-text-analytics-for-health/how-to/call-api.md
+
+ Title: Send a custom Text Analytics for health request to your custom model
+description: Learn how to send a request for custom text analytics for health.
+++++++ Last updated : 04/14/2023+
+ms.devlang: REST API
+++
+# Send queries to your custom Text Analytics for health model
+
+After the deployment is added successfully, you can query the deployment to extract entities from your text based on the model you assigned to the deployment.
+You can query the deployment programmatically using the [Prediction API](https://aka.ms/ct-runtime-api).
+
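+As an illustration, the following Python sketch submits an asynchronous job with the `requests` library. The task `kind`, API version, and placeholder names are assumptions for illustration; use the exact values from the [Prediction API](https://aka.ms/ct-runtime-api) reference and the REST API tab later in this article.
+
+```python
+import requests
+
+endpoint = "https://<your-resource>.cognitiveservices.azure.com"
+key = "<your-resource-key>"
+
+body = {
+    "displayName": "Extracting entities",
+    "analysisInput": {"documents": [
+        {"id": "1", "language": "en", "text": "Patient was prescribed 100 mg ibuprofen."}
+    ]},
+    "tasks": [{
+        "kind": "CustomHealthcare",  # assumed task kind; verify in the API reference
+        "taskName": "Custom TA4H task",
+        "parameters": {
+            "projectName": "<your-project-name>",
+            "deploymentName": "<your-deployment-name>",
+        },
+    }],
+}
+
+resp = requests.post(
+    f"{endpoint}/language/analyze-text/jobs",
+    params={"api-version": "2022-10-01-preview"},  # assumed version
+    headers={"Ocp-Apim-Subscription-Key": key},
+    json=body,
+)
+# The job is asynchronous; poll the URL in the operation-location header.
+print(resp.headers.get("operation-location"))
+```
+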
+## Test deployed model
+
+You can use Language Studio to submit the custom Text Analytics for health task and visualize the results.
++
+## Send a custom text analytics for health request to your model
+
+# [Language Studio](#tab/language-studio)
++
+# [REST API](#tab/rest-api)
+
+First you will need to get your resource key and endpoint:
++
+### Submit a custom Text Analytics for health task
++
+### Get task results
+++++
+## Next steps
+
+* [Custom text analytics for health](../overview.md)
cognitive-services Create Project https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-text-analytics-for-health/how-to/create-project.md
+
+ Title: Using Azure resources in custom Text Analytics for health
+
+description: Learn about the steps for using Azure resources with custom text analytics for health.
++++++ Last updated : 04/14/2023++++
+# How to create custom Text Analytics for health project
+
+Use this article to learn how to set up the requirements for starting with custom text analytics for health and create a project.
+
+## Prerequisites
+
+Before you start using custom text analytics for health, you need:
+
+* An Azure subscription - [Create one for free](https://azure.microsoft.com/free/cognitive-services).
+
+## Create a Language resource
+
+Before you start using custom text analytics for health, you'll need an Azure Language resource. It's recommended to create your Language resource and connect a storage account to it in the Azure portal. Creating a resource in the Azure portal lets you create an Azure storage account at the same time, with all of the required permissions preconfigured. You can also read further in the article to learn how to use a pre-existing resource, and configure it to work with custom text analytics for health.
+
+You'll also need an Azure storage account where you'll upload your `.txt` documents that will be used to train a model to extract entities.
+
+> [!NOTE]
+> * You need to have an **owner** role assigned on the resource group to create a Language resource.
+> * If you connect a pre-existing storage account, you should have an owner role assigned to it.
+
+## Create Language resource and connect storage account
+
+You can create a resource in the following ways:
+
+* The Azure portal
+* Language Studio
+* PowerShell
+
+> [!Note]
+> You shouldn't move the storage account to a different resource group or subscription once it's linked with the Language resource.
+++++
+> [!NOTE]
+> * The process of connecting a storage account to your Language resource is irreversible; it cannot be disconnected later.
+> * You can only connect your language resource to one storage account.
+
+## Using a pre-existing Language resource
++
+## Create a custom Text Analytics for health project
+
+Once your resource and storage container are configured, create a new custom text analytics for health project. A project is a work area for building your custom AI models based on your data. Your project can only be accessed by you and others who have access to the Azure resource being used. If you have labeled data, you can use it to get started by [importing a project](#import-project).
+
+### [Language Studio](#tab/language-studio)
++
+### [REST APIs](#tab/rest-api)
++++
+## Import project
+
+If you have already labeled data, you can use it to get started with the service. Make sure that your labeled data follows the [accepted data formats](../concepts/data-formats.md).
+
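+As an illustration, a Python sketch for submitting an import request might look like the following. The authoring route, API version, and placeholder names are assumptions; the REST APIs tab below shows the authoritative request.
+
+```python
+import json
+import requests
+
+endpoint = "https://<your-resource>.cognitiveservices.azure.com"
+key = "<your-resource-key>"
+project = "<your-project-name>"
+
+with open("labels.json", encoding="utf-8") as f:
+    assets = json.load(f)  # labels file in the accepted data format
+
+resp = requests.post(
+    f"{endpoint}/language/authoring/analyze-text/projects/{project}/:import",
+    params={"api-version": "2022-05-01"},  # assumed version
+    headers={"Ocp-Apim-Subscription-Key": key},
+    json=assets,
+)
+# Import runs asynchronously; poll the operation-location URL for status.
+print(resp.status_code, resp.headers.get("operation-location"))
+```
+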
+### [Language Studio](#tab/language-studio)
++
+### [REST APIs](#tab/rest-api)
++++
+## Get project details
+
+### [Language Studio](#tab/language-studio)
++
+### [REST APIs](#tab/rest-api)
++++
+## Delete project
+
+### [Language Studio](#tab/language-studio)
++
+### [REST APIs](#tab/rest-api)
++++
+## Next steps
+
+* You should have an idea of the [project schema](design-schema.md) you will use to label your data.
+
+* After you define your schema, you can start [labeling your data](label-data.md), which will be used for model training, evaluation, and finally making predictions.
cognitive-services Deploy Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-text-analytics-for-health/how-to/deploy-model.md
+
+ Title: Deploy a custom Text Analytics for health model
+
+description: Learn about deploying a model for custom Text Analytics for health.
++++++ Last updated : 04/14/2023++++
+# Deploy a custom text analytics for health model
+
+Once you're satisfied with how your model performs, it's ready to be deployed and used to recognize entities in text. Deploying a model makes it available for use through the [prediction API](https://aka.ms/ct-runtime-swagger).
+
+## Prerequisites
+
+* A successfully [created project](create-project.md) with a configured Azure storage account.
+* Text data that has [been uploaded](design-schema.md#data-preparation) to your storage account.
+* [Labeled data](label-data.md) and a successfully [trained model](train-model.md).
+* Reviewed the [model evaluation details](view-model-evaluation.md) to determine how your model is performing.
+
+For more information, see [project development lifecycle](../overview.md#project-development-lifecycle).
+
+## Deploy model
+
+After you've reviewed your model's performance and decided it can be used in your environment, you need to assign it to a deployment. Assigning the model to a deployment makes it available for use through the [prediction API](https://aka.ms/ct-runtime-swagger). It is recommended to create a deployment named *production* to which you assign the best model you have built so far and use it in your system. You can create another deployment called *staging* to which you can assign the model you're currently working on to be able to test it. You can have a maximum of 10 deployments in your project.
+
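+As a rough illustration, assigning a trained model to a deployment through the authoring REST API can be sketched in Python as follows. The route, API version, and placeholder names are assumptions; see the REST APIs tab below for the authoritative request.
+
+```python
+import requests
+
+endpoint = "https://<your-resource>.cognitiveservices.azure.com"
+key = "<your-resource-key>"
+project, deployment = "<your-project-name>", "production"
+
+resp = requests.put(
+    f"{endpoint}/language/authoring/analyze-text/projects/{project}"
+    f"/deployments/{deployment}",
+    params={"api-version": "2022-05-01"},  # assumed version
+    headers={"Ocp-Apim-Subscription-Key": key},
+    json={"trainedModelLabel": "<your-model-name>"},
+)
+# Deployment runs asynchronously; poll the operation-location URL for status.
+print(resp.status_code, resp.headers.get("operation-location"))
+```
+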
+# [Language Studio](#tab/language-studio)
+
+
+# [REST APIs](#tab/rest-api)
+
+### Submit deployment job
++
+### Get deployment job status
++++
+## Swap deployments
+
+After you're done testing a model assigned to one deployment and you want to assign this model to another deployment, you can swap the two deployments. Swapping deployments involves taking the model assigned to the first deployment and assigning it to the second deployment, then taking the model assigned to the second deployment and assigning it to the first deployment. You can use this process to swap your *production* and *staging* deployments when you want to take the model assigned to *staging* and assign it to *production*.
+
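+A Python sketch of the swap request might look like the following; the `:swap` route, body shape, and API version are assumptions, so confirm them against the REST APIs tab below.
+
+```python
+import requests
+
+endpoint = "https://<your-resource>.cognitiveservices.azure.com"
+key = "<your-resource-key>"
+project = "<your-project-name>"
+
+resp = requests.post(
+    f"{endpoint}/language/authoring/analyze-text/projects/{project}"
+    f"/deployments:swap",
+    params={"api-version": "2022-05-01"},  # assumed version
+    headers={"Ocp-Apim-Subscription-Key": key},
+    json={"firstDeploymentName": "production", "secondDeploymentName": "staging"},
+)
+print(resp.status_code)
+```
+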
+# [Language Studio](#tab/language-studio)
++
+# [REST APIs](#tab/rest-api)
+++++
+## Delete deployment
+
+# [Language Studio](#tab/language-studio)
++
+# [REST APIs](#tab/rest-api)
++++
+## Assign deployment resources
+
+You can [deploy your project to multiple regions](../../concepts/custom-features/multi-region-deployment.md) by assigning different Language resources that exist in different regions.
+
+# [Language Studio](#tab/language-studio)
++
+# [REST APIs](#tab/rest-api)
++++
+## Unassign deployment resources
+
+When unassigning or removing a deployment resource from a project, you will also delete all the deployments that have been deployed to that resource's region.
+
+# [Language Studio](#tab/language-studio)
++
+# [REST APIs](#tab/rest-api)
++++
+## Next steps
+
+After you have a deployment, you can use it to [extract entities](call-api.md) from text.
cognitive-services Design Schema https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-text-analytics-for-health/how-to/design-schema.md
+
+ Title: Preparing data and designing a schema for custom Text Analytics for health
+
+description: Learn about how to select and prepare data, to be successful in creating custom TA4H projects.
++++++ Last updated : 04/14/2023++++
+# How to prepare data and define a schema for custom Text Analytics for health
+
+In order to create a custom TA4H model, you'll need quality data to train it. This article covers how you should select and prepare your data, along with defining a schema. Defining the schema is the first step in the [project development lifecycle](../overview.md#project-development-lifecycle), and it entails defining the entity types or categories that you need your model to extract from the text at runtime.
+
+## Schema design
+
+Custom Text Analytics for health allows you to extend and customize the Text Analytics for health entity map. The first step of the process is building your schema, which allows you to define the new entity types or categories that you need your model to extract from text in addition to the Text Analytics for health existing entities at runtime.
+
+* Review documents in your dataset to be familiar with their format and structure.
+
+* Identify the entities you want to extract from the data.
+
+ For example, if you are extracting entities from support emails, you might need to extract "Customer name", "Product name", "Request date", and "Contact information".
+
+* Avoid entity type ambiguity.
+
+  **Ambiguity** happens when the entity types you select are similar to each other. The more ambiguous your schema, the more labeled data you'll need to differentiate between different entity types.
+
+  For example, if you are extracting data from a legal contract, to extract "Name of first party" and "Name of second party" you will need to add more examples to overcome ambiguity, since the names of both parties look similar. Avoiding ambiguity saves time and effort and yields better results.
+
+* Avoid complex entities. Complex entities can be difficult to pick out precisely from text; consider breaking them down into multiple entities.
+
+  For example, extracting "Address" would be challenging if it's not broken down into smaller entities. There are so many variations of how addresses appear that it would take a large number of labeled entities to teach the model to extract an address, as a whole, without breaking it down. However, if you replace "Address" with "Street Name", "PO Box", "City", "State" and "Zip", the model will require fewer labels per entity.
++
+## Add entities
+
+To add entities to your project:
+
+1. Move to **Entities** pivot from the top of the page.
+
+2. [Text Analytics for health entities](../../text-analytics-for-health/concepts/health-entity-categories.md) are automatically loaded into your project. To add additional entity categories, select **Add** from the top menu. You're prompted to type in a name to finish creating the entity.
+
+3. After creating an entity, you'll be routed to the entity details page where you can define the composition settings for this entity.
+
+4. Entities are defined by [entity components](../concepts/entity-components.md): learned, list or prebuilt. Text Analytics for health entities are by default populated with the prebuilt component and cannot have learned components. Your newly defined entities can be populated with the learned component once you add labels for them in your data but cannot be populated with the prebuilt component.
+
+5. You can add a [list](../concepts/entity-components.md#list-component) component to any of your entities.
+
+
+### Add list component
+
+To add a **list** component, select **Add new list**. You can add multiple lists to each entity.
+
+1. To create a new list, enter the list key in the *Enter value* text box. This is the normalized value that will be returned when any of the synonym values is extracted.
+
+2. For multilingual projects, from the *language* drop-down menu, select the language of the synonyms list and start typing in your synonyms, hitting enter after each one. It's recommended to have synonym lists in multiple languages.
+
+ <!--:::image type="content" source="../media/add-list-component.png" alt-text="A screenshot showing a list component in Language Studio." lightbox="../media/add-list-component.png":::-->
+
+### Define entity options
+
+Change to the **Entity options** pivot in the entity details page. When multiple components are defined for an entity, their predictions may overlap. When an overlap occurs, each entity's final prediction is determined based on the [entity option](../concepts/entity-components.md#entity-options) you select in this step. Select the one that you want to apply to this entity and click on the **Save** button at the top.
+
+ <!--:::image type="content" source="../media/entity-options.png" alt-text="A screenshot showing an entity option in Language Studio." lightbox="../media/entity-options.png":::-->
++
+After you create your entities, you can come back and edit them. You can **Edit entity components** or **delete** them by selecting this option from the top menu.
++
+## Data selection
+
+The quality of data you train your model with affects model performance greatly.
+
+* Use real-life data that reflects your domain's problem space to effectively train your model. You can use synthetic data to accelerate the initial model training process, but it will likely differ from your real-life data and make your model less effective when used.
+
+* Balance your data distribution as much as possible without deviating far from the distribution in real life. For example, if you're training your model to extract entities from legal documents that may come in many different formats and languages, you should provide examples that reflect the diversity you expect to see in real life.
+
+* Use diverse data whenever possible to avoid overfitting your model. Less diversity in training data may lead to your model learning spurious correlations that may not exist in real-life data.
+
+* Avoid duplicate documents in your data. Duplicate data has a negative effect on the training process, model metrics, and model performance.
+
+* Consider where your data comes from. If you are collecting data from one person, department, or part of your scenario, you are likely missing diversity that may be important for your model to learn about.
+
+> [!NOTE]
+> If your documents are in multiple languages, select the **enable multi-lingual** option during [project creation](../quickstart.md) and set the **language** option to the language of the majority of your documents.
+
+## Data preparation
+
+As a prerequisite for creating a project, your training data needs to be uploaded to a blob container in your storage account. You can create and upload training documents from Azure directly, or by using the Azure Storage Explorer tool, which allows you to upload data more quickly.
+
+* [Create and upload documents from Azure](../../../../storage/blobs/storage-quickstart-blobs-portal.md#create-a-container)
+* [Create and upload documents using Azure Storage Explorer](../../../../vs-azure-tools-storage-explorer-blobs.md)
+
+You can only use `.txt` documents. If your data is in another format, you can use the [CLUtils parse command](https://github.com/microsoft/CognitiveServicesLanguageUtilities/blob/main/CustomTextAnalytics.CLUtils/Solution/CogSLanguageUtilities.ViewLayer.CliCommands/Commands/ParseCommand/README.md) to change your document format.
+
+You can upload an annotated dataset, or you can upload an unannotated one and [label your data](../how-to/label-data.md) in Language studio.
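+
+If you prefer to script the upload, the following Python sketch uses the Azure Storage Blob SDK (`azure-storage-blob`) to push local `.txt` documents into your container. The connection string, container name, and file names are placeholders:
+
+```python
+from azure.storage.blob import BlobServiceClient
+
+# Upload local .txt documents to the container connected to your
+# Language resource. Connection string and names are placeholders.
+client = BlobServiceClient.from_connection_string("<your-connection-string>")
+container = client.get_container_client("<your-container-name>")
+
+for name in ["doc1.txt", "doc2.txt"]:
+    with open(name, "rb") as data:
+        container.upload_blob(name=name, data=data, overwrite=True)
+```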
+
+## Test set
+
+When defining the testing set, make sure to include example documents that are not present in the training set. Defining the testing set is an important step to calculate the [model performance](view-model-evaluation.md#model-details). Also, make sure that the testing set includes documents that represent all entities used in your project.
+
+## Next steps
+
+If you haven't already, create a custom Text Analytics for health project. If it's your first time using custom Text Analytics for health, consider following the [quickstart](../quickstart.md) to create an example project. You can also see the [how-to article](../how-to/create-project.md) for more details on what you need to create a project.
cognitive-services Fail Over https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-text-analytics-for-health/how-to/fail-over.md
+
+ Title: Back up and recover your custom Text Analytics for health models
+
+description: Learn how to save and recover your custom Text Analytics for health models.
++++++ Last updated : 04/14/2023++++
+# Back up and recover your custom Text Analytics for health models
+
+When you create a Language resource, you specify a region for it to be created in. From then on, your resource and all of the operations related to it take place in the specified Azure server region. It's rare, but not impossible, to encounter a network issue that affects an entire region. If your solution needs to always be available, then you should design it to fail over into another region. This requires two Azure Language resources in different regions and synchronizing custom models across them.
+
+If your app or business depends on the use of a custom Text Analytics for health model, we recommend that you create a replica of your project in an additional supported region. If a regional outage occurs, you can then access your model in the other fail-over region where you replicated your project.
+
+Replicating a project means that you export your project metadata and assets, and import them into a new project. This only makes a copy of your project settings and tagged data. You still need to [train](./train-model.md) and [deploy](./deploy-model.md) the models to be available for use with [prediction APIs](https://aka.ms/ct-runtime-swagger).
+
+In this article, you will learn how to use the export and import APIs to replicate your project from one resource to another in a different supported region, along with guidance on keeping your projects in sync and the changes needed to your runtime consumption.
+
+## Prerequisites
+
+* Two Azure Language resources in different Azure regions. [Create your resources](./create-project.md#create-a-language-resource) and connect them to an Azure storage account. It's recommended that you connect each of your Language resources to different storage accounts. Each storage account should be located in the same respective regions that your separate Language resources are in. You can follow the [quickstart](../quickstart.md?pivots=rest-api#create-a-new-azure-language-resource-and-azure-storage-account) to create an additional Language resource and storage account.
++
+## Get your resource keys and endpoint
+
+Use the following steps to get the keys and endpoint of your primary and secondary resources. These will be used in the following steps.
++
+> [!TIP]
+> Keep a note of keys and endpoints for both primary and secondary resources as well as the primary and secondary container names. Use these values to replace the following placeholders:
+`{PRIMARY-ENDPOINT}`, `{PRIMARY-RESOURCE-KEY}`, `{PRIMARY-CONTAINER-NAME}`, `{SECONDARY-ENDPOINT}`, `{SECONDARY-RESOURCE-KEY}`, and `{SECONDARY-CONTAINER-NAME}`.
+> Also take note of your project name, your model name and your deployment name. Use these values to replace the following placeholders: `{PROJECT-NAME}`, `{MODEL-NAME}` and `{DEPLOYMENT-NAME}`.
+
+## Export your primary project assets
+
+Start by exporting the project assets from the project in your primary resource.
+
+### Submit export job
+
+Replace the placeholders in the following request with your `{PRIMARY-ENDPOINT}` and `{PRIMARY-RESOURCE-KEY}` that you obtained in the first step.
++
+### Get export job status
+
+Replace the placeholders in the following request with your `{PRIMARY-ENDPOINT}` and `{PRIMARY-RESOURCE-KEY}` that you obtained in the first step.
+++
+Copy the response body as you will use it as the body for the next import job.
+
+## Import to a new project
+
+Now go ahead and import the exported project assets in your new project in the secondary region so you can replicate it.
+
+### Submit import job
+
+Replace the placeholders in the following request with your `{SECONDARY-ENDPOINT}`, `{SECONDARY-RESOURCE-KEY}`, and `{SECONDARY-CONTAINER-NAME}` that you obtained in the first step.
++
+### Get import job status
+
+Replace the placeholders in the following request with your `{SECONDARY-ENDPOINT}` and `{SECONDARY-RESOURCE-KEY}` that you obtained in the first step.
+++
+## Train your model
+
+After importing your project, you have only copied the project's metadata and assets. You still need to train your model, which will incur usage on your account.
+
+### Submit training job
+
+Replace the placeholders in the following request with your `{SECONDARY-ENDPOINT}` and `{SECONDARY-RESOURCE-KEY}` that you obtained in the first step.
+++
+### Get training status
+
+Replace the placeholders in the following request with your `{SECONDARY-ENDPOINT}` and `{SECONDARY-RESOURCE-KEY}` that you obtained in the first step.
++
+## Deploy your model
+
+This is the step where you make your trained model available for consumption via the [runtime prediction API](https://aka.ms/ct-runtime-swagger).
+
+> [!TIP]
+> Use the same deployment name as your primary project for easier maintenance and minimal changes to your system to handle redirecting your traffic.
+
+### Submit deployment job
+
+Replace the placeholders in the following request with your `{SECONDARY-ENDPOINT}` and `{SECONDARY-RESOURCE-KEY}` that you obtained in the first step.
++
+### Get the deployment status
+
+Replace the placeholders in the following request with your `{SECONDARY-ENDPOINT}` and `{SECONDARY-RESOURCE-KEY}` that you obtained in the first step.
++
+## Changes in calling the runtime
+
+Within your system, at the step where you call the [runtime prediction API](https://aka.ms/ct-runtime-swagger), check the response code returned from the submit task API. If you observe a **consistent** failure in submitting the request, this could indicate an outage in your primary region. A single failure doesn't mean an outage; it may be a transient issue. Retry submitting the job through the secondary resource you've created. For the second request, use your `{SECONDARY-ENDPOINT}` and `{SECONDARY-RESOURCE-KEY}`; if you've followed the steps above, `{PROJECT-NAME}` and `{DEPLOYMENT-NAME}` are the same, so no changes are required to the request body.
+
+If you revert to using your secondary resource, you'll observe a slight increase in latency because of the difference in regions where your model is deployed.
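+
+A simplified Python sketch of this failover logic is shown below. It falls back to the secondary resource on any request failure for brevity; in practice you would only fail over after observing consistent failures. The route and API version are assumptions:
+
+```python
+import requests
+
+PRIMARY = ("https://<primary-endpoint>", "<primary-key>")
+SECONDARY = ("https://<secondary-endpoint>", "<secondary-key>")
+
+def submit_task(body: dict) -> requests.Response:
+    """Try the primary resource; fall back to the secondary region."""
+    for endpoint, key in (PRIMARY, SECONDARY):
+        try:
+            resp = requests.post(
+                f"{endpoint}/language/analyze-text/jobs",
+                params={"api-version": "2022-05-01"},  # assumed version
+                headers={"Ocp-Apim-Subscription-Key": key},
+                json=body,
+                timeout=10,
+            )
+            if resp.status_code < 500:
+                return resp  # accepted, or a client error worth surfacing
+        except requests.RequestException:
+            continue  # transient or regional failure; try the next resource
+    raise RuntimeError("Both primary and secondary resources failed")
+```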
+
+## Check if your projects are out of sync
+
+Maintaining the freshness of both projects is an important part of the process. You need to frequently check if any updates were made to your primary project so that you move them over to your secondary project. This way if your primary region fails and you move into the secondary region you should expect similar model performance since it already contains the latest updates. Setting the frequency of checking if your projects are in sync is an important choice. We recommend that you do this check daily in order to guarantee the freshness of data in your secondary model.
+
+### Get project details
+
+Use the following URL to get your project details; one of the keys returned in the body indicates the last modified date of the project.
+Repeat the following step twice, once for your primary project and once for your secondary project, and compare the timestamps returned for both to check whether they're out of sync.
+
+ [!INCLUDE [get project details](../includes/rest-api/get-project-details.md)]
++
+Repeat the same steps for your replicated project using `{SECONDARY-ENDPOINT}` and `{SECONDARY-RESOURCE-KEY}`. Compare the returned `lastModifiedDateTime` from both projects. If your primary project was modified sooner than your secondary one, you need to repeat the steps of [exporting](#export-your-primary-project-assets), [importing](#import-to-a-new-project), [training](#train-your-model) and [deploying](#deploy-your-model).
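+
+A Python sketch of this comparison might look like the following; the project-details route and API version are assumptions, so align them with the request shown above:
+
+```python
+import requests
+
+def last_modified(endpoint: str, key: str, project: str) -> str:
+    resp = requests.get(
+        f"{endpoint}/language/authoring/analyze-text/projects/{project}",
+        params={"api-version": "2022-05-01"},  # assumed version
+        headers={"Ocp-Apim-Subscription-Key": key},
+    )
+    resp.raise_for_status()
+    return resp.json()["lastModifiedDateTime"]
+
+primary = last_modified("https://<primary-endpoint>", "<primary-key>", "<project>")
+secondary = last_modified("https://<secondary-endpoint>", "<secondary-key>", "<project>")
+# ISO-8601 timestamps in the same format compare lexicographically.
+if primary > secondary:
+    print("Out of sync: re-export, import, train, and deploy.")
+```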
++
+## Next steps
+
+In this article, you have learned how to use the export and import APIs to replicate your project to a secondary Language resource in other region. Next, explore the API reference docs to see what else you can do with authoring APIs.
+
+* [Authoring REST API reference](https://aka.ms/ct-authoring-swagger)
+
+* [Runtime prediction REST API reference](https://aka.ms/ct-runtime-swagger)
cognitive-services Label Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-text-analytics-for-health/how-to/label-data.md
+
+ Title: How to label your data for custom Text Analytics for health
+
+description: Learn how to label your data for use with custom Text Analytics for health.
++++++ Last updated : 04/14/2023++++
+# Label your data using the Language Studio
+
+Data labeling is a crucial step in the development lifecycle. In this step, you label your documents with the new entities you defined in your schema to populate their learned components. This data is used in the next step when training your model, so that your model can learn from the labeled data which entities to extract. If you already have labeled data, you can directly [import](create-project.md#import-project) it into your project, but you need to make sure that your data follows the [accepted data format](../concepts/data-formats.md). See [create project](create-project.md#import-project) to learn more about importing labeled data into your project. If your data isn't labeled already, you can label it in the [Language Studio](https://aka.ms/languageStudio).
+
+## Prerequisites
+
+Before you can label your data, you need:
+
+* A successfully [created project](create-project.md) with a configured Azure blob storage account
+* Text data that [has been uploaded](design-schema.md#data-preparation) to your storage account.
+
+See the [project development lifecycle](../overview.md#project-development-lifecycle) for more information.
+
+## Data labeling guidelines
+
+After preparing your data, designing your schema and creating your project, you will need to label your data. Labeling your data is important so your model knows which words will be associated with the entity types you need to extract. When you label your data in [Language Studio](https://aka.ms/languageStudio) (or import labeled data), these labels are stored in the JSON document in your storage container that you have connected to this project.
+
+As you label your data, keep in mind:
+
+* You can't add labels for Text Analytics for health entities as they're pretrained prebuilt entities. You can only add labels to new entity categories that you defined during schema definition.
+
+  If you want to improve the recall for a prebuilt entity, you can extend it by adding a list component while you are [defining your schema](design-schema.md).
+
+* In general, more labeled data leads to better results, provided the data is labeled accurately.
+
+* The precision, consistency, and completeness of your labeled data are key factors in determining model performance.
+
+    * **Label precisely**: Always label each entity with its correct type. Only include what you want extracted, and avoid unnecessary data in your labels.
+ * **Label consistently**: The same entity should have the same label across all the documents.
+ * **Label completely**: Label all the instances of the entity in all your documents.
+
+ > [!NOTE]
+ > There is no fixed number of labels that can guarantee your model will perform the best. Model performance is dependent on possible ambiguity in your schema, and the quality of your labeled data. Nevertheless, we recommend having around 50 labeled instances per entity type.
+
+## Label your data
+
+Use the following steps to label your data:
+
+1. Go to your project page in [Language Studio](https://aka.ms/languageStudio).
+
+2. From the left side menu, select **Data labeling**. You can find a list of all documents in your storage container.
+
+ <!--:::image type="content" source="../media/tagging-files-view.png" alt-text="A screenshot showing the Language Studio screen for labeling data." lightbox="../media/tagging-files-view.png":::-->
+
+ >[!TIP]
+ > You can use the filters in top menu to view the unlabeled documents so that you can start labeling them.
+ > You can also use the filters to view the documents that are labeled with a specific entity type.
+
+3. Change to a single document view from the left side in the top menu or select a specific document to start labeling. You can find a list of all `.txt` documents available in your project to the left. You can use the **Back** and **Next** button from the bottom of the page to navigate through your documents.
+
+ > [!NOTE]
+ > If you enabled multiple languages for your project, you will find a **Language** dropdown in the top menu, which lets you select the language of each document. Hebrew is not supported with multi-lingual projects.
+
+4. In the right side pane, you can use the **Add entity type** button to add additional entities to your project that you missed during schema definition.
+
+ <!--:::image type="content" source="../media/tag-1.png" alt-text="A screenshot showing complete data labeling." lightbox="../media/tag-1.png":::-->
+
+5. You have two options to label your document:
+
+ |Option |Description |
+ |||
+ |Label using a brush | Select the brush icon next to an entity type in the right pane, then highlight the text in the document you want to annotate with this entity type. |
+ |Label using a menu | Highlight the word you want to label as an entity, and a menu will appear. Select the entity type you want to assign for this entity. |
+
+ The below screenshot shows labeling using a brush.
+
+ :::image type="content" source="../media/tag-options.png" alt-text="A screenshot showing the labeling options offered in Custom NER." lightbox="../media/tag-options.png":::
+
+6. In the right side pane under the **Labels** pivot, you can find all the entity types in your project and the count of labeled instances for each. The prebuilt entities are shown for reference, but you can't label them because they're pretrained.
+
+7. In the bottom section of the right side pane you can add the current document you are viewing to the training set or the testing set. By default all the documents are added to your training set. See [training and testing sets](train-model.md#data-splitting) for information on how they are used for model training and evaluation.
+
+ > [!TIP]
+ > If you are planning on using **Automatic** data splitting, use the default option of assigning all the documents into your training set.
+
+8. Under the **Distribution** pivot you can view the distribution across training and testing sets. You have two options for viewing:
+ * *Total instances* where you can view count of all labeled instances of a specific entity type.
+ * *Documents with at least one label* where each document is counted if it contains at least one labeled instance of this entity.
+
+9. When you're labeling, your changes are synced periodically; if they haven't been saved yet, you'll see a warning at the top of the page. To save manually, select the **Save labels** button at the bottom of the page.
+
+## Remove labels
+
+To remove a label:
+
+1. Select the entity you want to remove a label from.
+2. Scroll through the menu that appears, and select **Remove label**.
+
+## Delete entities
+
+You cannot delete any of the Text Analytics for health pretrained entities because they have a prebuilt component. You are only permitted to delete newly defined entity categories. To delete an entity, select the delete icon next to the entity you want to remove. Deleting an entity removes all its labeled instances from your dataset.
+
+## Next steps
+
+After you've labeled your data, you can begin [training a model](train-model.md) that will learn based on your data.
cognitive-services Train Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-text-analytics-for-health/how-to/train-model.md
+
+ Title: How to train your custom Text Analytics for health model
+
+description: Learn about how to train your model for custom Text Analytics for health.
++++++ Last updated : 04/14/2023++++
+# Train your custom Text Analytics for health model
+
+Training is the process where the model learns from your [labeled data](label-data.md). After training is completed, you'll be able to view the [model's performance](view-model-evaluation.md) to determine if you need to improve your model.
+
+To train a model, you start a training job and only successfully completed jobs create a model. Training jobs expire after seven days, which means you won't be able to retrieve the job details after this time. If your training job completed successfully and a model was created, the model won't be affected. You can only have one training job running at a time, and you can't start other jobs in the same project.
+
+Training can take anywhere from a few minutes for a small number of documents up to several hours, depending on the size of your dataset and the complexity of your schema.
++
+## Prerequisites
+
+* A successfully [created project](create-project.md) with a configured Azure blob storage account
+* Text data that [has been uploaded](design-schema.md#data-preparation) to your storage account.
+* [Labeled data](label-data.md)
+
+See the [project development lifecycle](../overview.md#project-development-lifecycle) for more information.
+
+## Data splitting
+
+Before you start the training process, labeled documents in your project are divided into a training set and a testing set. Each one of them serves a different function.
+The **training set** is used in training the model. This is the set from which the model learns the labeled entities and what spans of text are to be extracted as entities.
+The **testing set** is a blind set that isn't introduced to the model during training, only during evaluation.
+After model training completes successfully, the model is used to make predictions on the documents in the testing set, and [evaluation metrics](../concepts/evaluation-metrics.md) are calculated based on these predictions. Model training and evaluation apply only to newly defined entities with learned components; Text Analytics for health entities are excluded because they're entities with prebuilt components. Make sure that all your labeled entities are adequately represented in both the training and testing sets.
+
+Custom Text Analytics for health supports two methods for data splitting:
+
+* **Automatically splitting the testing set from training data**: The system splits your labeled data between the training and testing sets, according to the percentages you choose. The recommended percentage split is 80% for training and 20% for testing (a minimal sketch of this split appears after this list).
+
+ > [!NOTE]
+ > If you choose the **Automatically splitting the testing set from training data** option, only the data assigned to training set will be split according to the percentages provided.
+
+* **Use a manual split of training and testing data**: This method enables users to define which labeled documents should belong to which set. This step is only enabled if you have added documents to your testing set during [data labeling](label-data.md).
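
To make the automatic option concrete, here's a minimal Python sketch of an 80/20 split. The function and file names are illustrative assumptions; the service performs its own sampling internally, so treat this only as a mental model.

```python
import random

def split_documents(documents, training_ratio=0.8, seed=None):
    """Split labeled documents into training and testing sets.

    Mirrors the idea of the automatic 80/20 split; the service's
    internal sampling logic may differ.
    """
    docs = list(documents)
    random.Random(seed).shuffle(docs)  # randomize order before splitting
    cutoff = int(len(docs) * training_ratio)
    return docs[:cutoff], docs[cutoff:]

training_set, testing_set = split_documents([f"note_{i}.txt" for i in range(100)])
print(len(training_set), len(testing_set))  # 80 20
```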
+
+## Train model
+
+# [Language studio](#tab/Language-studio)
++
+# [REST APIs](#tab/REST-APIs)
+
+### Start training job
++
+### Get training job status
+
+Training could take some time depending on the size of your training data and the complexity of your schema. You can use the following request to keep polling the status of the training job until it's successfully completed.
+
+ [!INCLUDE [get training model status](../includes/rest-api/get-training-status.md)]
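
If you're scripting this step, the following is a minimal Python polling sketch. The endpoint, key, project name, and job ID are placeholders, and the URL shape and `api-version` are assumptions based on the preview authoring REST reference linked from the overview; confirm the exact request against the include above.

```python
import time

import requests

ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com"  # placeholder
PROJECT_NAME = "<project-name>"                                   # placeholder
JOB_ID = "<training-job-id>"                                      # placeholder
HEADERS = {"Ocp-Apim-Subscription-Key": "<your-key>"}             # placeholder

# URL shape assumed from the 2022-10-01-preview authoring REST reference.
status_url = (
    f"{ENDPOINT}/language/authoring/analyze-text/projects/{PROJECT_NAME}"
    f"/train/jobs/{JOB_ID}?api-version=2022-10-01-preview"
)

while True:
    job = requests.get(status_url, headers=HEADERS, timeout=30).json()
    status = job.get("status")
    print(f"Training job status: {status}")
    if status in ("succeeded", "failed", "cancelled"):
        break
    time.sleep(30)  # poll at a modest interval to stay within GET rate limits
```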
+++
+### Cancel training job
+
+# [Language Studio](#tab/language-studio)
++
+# [REST APIs](#tab/rest-api)
++++
+## Next steps
+
+After training is completed, you'll be able to view the [model's performance](view-model-evaluation.md) to optionally improve your model if needed. Once you're satisfied with your model, you can deploy it, making it available to use for [extracting entities](call-api.md) from text.
cognitive-services View Model Evaluation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-text-analytics-for-health/how-to/view-model-evaluation.md
+
+ Title: Evaluate a Custom Text Analytics for health model
+
+description: Learn how to evaluate and score your Custom Text Analytics for health model
++++++ Last updated : 04/14/2023+++++
+# View a custom text analytics for health model's evaluation and details
+
+After your model has finished training, you can view the model performance and see the extracted entities for the documents in the test set.
+
+> [!NOTE]
+> Using the **Automatically split the testing set from training data** option may result in different model evaluation results every time you train a new model, as the test set is selected randomly from the data. To make sure that the evaluation is calculated on the same test set every time you train a model, make sure to use the **Use a manual split of training and testing data** option when starting a training job and define your **Test** documents when [labeling data](label-data.md).
+
+## Prerequisites
+
+Before viewing model evaluation, you need:
+
+* A successfully [created project](create-project.md) with a configured Azure blob storage account.
+* Text data that [has been uploaded](design-schema.md#data-preparation) to your storage account.
+* [Labeled data](label-data.md)
+* A [successfully trained model](train-model.md)
++
+## Model details
+
+There are several metrics you can use to evaluate your model. See the [performance metrics](../concepts/evaluation-metrics.md) article for more information on the model details described in this article.
+
+### [Language studio](#tab/language-studio)
++
+### [REST APIs](#tab/rest-api)
++++
+## Load or export model data
+
+### [Language studio](#tab/Language-studio)
+++
+### [REST APIs](#tab/REST-APIs)
++++
+## Delete model
+
+### [Language studio](#tab/language-studio)
++
+### [REST APIs](#tab/rest-api)
++++
+## Next steps
+
+* [Deploy your model](deploy-model.md)
+* Learn about the [metrics used in evaluation](../concepts/evaluation-metrics.md).
cognitive-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-text-analytics-for-health/language-support.md
+
+ Title: Language and region support for custom Text Analytics for health
+
+description: Learn about the languages and regions supported by custom Text Analytics for health
++++++ Last updated : 04/14/2023++++
+# Language support for custom text analytics for health
+
+Use this article to learn about the languages currently supported by custom Text Analytics for health.
+
+## Multilingual option
+
+With custom Text Analytics for health, you can train a model in one language and use it to extract entities from documents in other languages. This feature saves you the trouble of building separate projects for each language; instead, you can combine your datasets in a single project, making it easy to scale your projects to multiple languages. You can train your project entirely with English documents and query it in French, German, Italian, and others. You can enable the multilingual option as part of the project creation process or later through the project settings.
+
+You aren't expected to add the same number of documents for every language. You should build the majority of your project in one language, and only add a few documents in languages you observe aren't performing well. If you create a project that is primarily in English and start testing it in French, German, and Spanish, you might observe that German doesn't perform as well as the other two languages. In that case, consider adding 5% of your original English documents in German, training a new model, and testing in German again. In the [data labeling](how-to/label-data.md) page in Language Studio, you can select the language of the document you're adding. You should then see better results for German queries. The more labeled documents you add, the more likely it is that results will improve. When you add data in another language, you shouldn't expect it to negatively affect other languages.
+
+Hebrew is not supported in multilingual projects. If the primary language of the project is Hebrew, you will not be able to add training data in other languages, or query the model with other languages. Similarly, if the primary language of the project is not Hebrew, you will not be able to add training data in Hebrew, or query the model in Hebrew.
+
+## Language support
+
+Custom Text Analytics for health supports `.txt` files in the following languages:
+
+| Language | Language code |
+|--|--|
+| English | `en` |
+| French | `fr` |
+| German | `de` |
+| Spanish | `es` |
+| Italian | `it` |
+| Portuguese (Portugal) | `pt-pt` |
+| Hebrew | `he` |
++
+## Next steps
+
+* [Custom Text Analytics for health overview](overview.md)
+* [Service limits](reference/service-limits.md)
cognitive-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-text-analytics-for-health/overview.md
+
+ Title: Custom Text Analytics for health - Azure Cognitive Services
+
+description: Customize an AI model to label and extract healthcare information from documents using Azure Cognitive Services.
++++++ Last updated : 04/14/2023++++
+# What is custom Text Analytics for health?
+
+Custom Text Analytics for health is one of the custom features offered by [Azure Cognitive Service for Language](../overview.md). It is a cloud-based API service that applies machine-learning intelligence to enable you to build custom models on top of [Text Analytics for health](../text-analytics-for-health/overview.md) for custom healthcare entity recognition tasks.
+
+Custom Text Analytics for health enables users to build custom AI models to extract healthcare-specific entities from unstructured text, such as clinical notes and reports. By creating a custom Text Analytics for health project, developers can iteratively define new vocabulary, label data, train, evaluate, and improve model performance before making it available for consumption. The quality of the labeled data greatly impacts model performance. To simplify building and customizing your model, the service offers a web portal that can be accessed through the [Language studio](https://aka.ms/languageStudio). You can easily get started with the service by following the steps in this [quickstart](quickstart.md).
+
+This documentation contains the following article types:
+
+* [Quickstarts](quickstart.md) are getting-started instructions to guide you through creating a project and making requests to the service.
+* [Concepts](concepts/evaluation-metrics.md) provide explanations of the service functionality and features.
+* [How-to guides](how-to/label-data.md) contain instructions for using the service in more specific or customized ways.
+
+## Example usage scenarios
+
+Similarly to Text Analytics for health, custom Text Analytics for health can be used in multiple [scenarios](../text-analytics-for-health/overview.md#example-use-cases) across a variety of healthcare industries. However, the main usage of this feature is to provide a layer of customization on top of Text Analytics for health to extend its existing entity map.
++
+## Project development lifecycle
+
+Using custom Text Analytics for health typically involves several different steps.
++
+* **Define your schema**: Know your data and define the new entities you want extracted on top of the existing Text Analytics for health entity map. Avoid ambiguity.
+
+* **Label your data**: Labeling data is a key factor in determining model performance. Label precisely, consistently and completely.
+ * **Label precisely**: Always label each entity with the correct type. Only include what you want extracted, and avoid unnecessary data in your labels.
+ * **Label consistently**: The same entity should have the same label across all the files.
+ * **Label completely**: Label all the instances of the entity in all your files.
+
+* **Train the model**: Your model starts learning from your labeled data.
+
+* **View the model's performance**: After training is completed, view the model's evaluation details, its performance and guidance on how to improve it.
+
+* **Deploy the model**: Deploying a model makes it available for use via an API.
+
+* **Extract entities**: Use your custom models for entity extraction tasks.
+
+## Reference documentation and code samples
+
+As you use custom Text Analytics for health, see the following reference documentation for Azure Cognitive Services for Language:
+
+|APIs| Reference documentation|
+|--|--|
+|REST APIs (Authoring) | [REST API documentation](/rest/api/language/2022-10-01-preview/text-analysis-authoring) |
+|REST APIs (Runtime) | [REST API documentation](/rest/api/language/2022-10-01-preview/text-analysis-runtime/submit-job) |
++
+## Responsible AI
+
+An AI system includes not only the technology, but also the people who will use it, the people who will be affected by it, and the environment in which it is deployed. Read the [transparency note for Text Analytics for health](/legal/cognitive-services/language-service/transparency-note-health?context=/azure/cognitive-services/language-service/context/context) to learn about responsible AI use and deployment in your systems. You can also see the following articles for more information:
+++
+## Next steps
+
+* Use the [quickstart article](quickstart.md) to start using custom Text Analytics for health.
+
+* As you go through the project development lifecycle, review the glossary to learn more about the terms used throughout the documentation for this feature.
+
+* Remember to view the [service limits](reference/service-limits.md) for information such as [regional availability](reference/service-limits.md#regional-availability).
cognitive-services Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-text-analytics-for-health/quickstart.md
+
+ Title: Quickstart - Custom Text Analytics for health (Custom TA4H)
+
+description: Quickly start building an AI model to categorize and extract information from healthcare unstructured text.
++++++ Last updated : 04/14/2023++
+zone_pivot_groups: usage-custom-language-features
++
+# Quickstart: custom Text Analytics for health
+
+Use this article to get started with creating a custom Text Analytics for health project where you can train custom models on top of Text Analytics for health for custom entity recognition. A model is artificial intelligence software that's trained to do a certain task. For this system, the models extract healthcare related named entities and are trained by learning from labeled data.
+
+In this article, we use Language Studio to demonstrate key concepts of custom Text Analytics for health. As an example, we'll build a custom Text Analytics for health model to extract the Facility or treatment location from short discharge notes.
+++++++
+## Next steps
+
+* [Text analytics for health overview](./overview.md)
+
+After you've created an entity extraction model, you can:
+
+* [Use the runtime API to extract entities](how-to/call-api.md)
+
+When you start to create your own custom Text Analytics for health projects, use the how-to articles to learn more about data labeling, training and consuming your model in greater detail:
+
+* [Data selection and schema design](how-to/design-schema.md)
+* [Tag data](how-to/label-data.md)
+* [Train a model](how-to/train-model.md)
+* [Model evaluation](how-to/view-model-evaluation.md)
+
cognitive-services Glossary https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-text-analytics-for-health/reference/glossary.md
+
+ Title: Definitions used in custom Text Analytics for health
+
+description: Learn about definitions used in custom Text Analytics for health
++++++ Last updated : 04/14/2023++++
+# Terms and definitions used in custom Text Analytics for health
+
+Use this article to learn about some of the definitions and terms you may encounter when using Custom Text Analytics for health.
+
+## Entity
+Entities are words in input data that describe information relating to a specific category or concept. If your entity is complex and you would like your model to identify specific parts, you can break your entity into subentities. For example, you might want your model to predict an address, but also the subentities of street, city, state, and zipcode.
+
+## F1 score
+The F1 score is a function of Precision and Recall. It's needed when you seek a balance between [precision](#precision) and [recall](#recall).
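
Expressed as a formula, the F1 score is the harmonic mean of the two metrics:

$$
F_1 = 2 \cdot \frac{\text{precision} \cdot \text{recall}}{\text{precision} + \text{recall}}
$$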
+
+## Prebuilt entity component
+
+Prebuilt entity components represent pretrained entity components that belong to the [Text Analytics for health entity map](../../text-analytics-for-health/concepts/health-entity-categories.md). These entities are automatically loaded into your project as entities with prebuilt components. You can define list components for entities with prebuilt components but you cannot add learned components. Similarly, you can create new entities with learned and list components, but you cannot populate them with additional prebuilt components.
++
+## Learned entity component
+
+The learned entity component uses the entity tags you label your text with to train a machine learned model. The model learns to predict where the entity is, based on the context within the text. Your labels provide examples of where the entity is expected to be present in text, based on the meaning of the words around it and the words that were labeled. This component is only defined if you add labels by labeling your data for the entity. If you don't label any data with the entity, it won't have a learned component. Learned components can't be added to entities with prebuilt components.
+
+## List entity component
+A list entity component represents a fixed, closed set of related words along with their synonyms. List entities are exact matches, unlike machine-learned entities.
+
+The entity is predicted if a word in the input matches an entry in the list. For example, if you have a list entity called "clinics" and you have the words "clinic a, clinic b, clinic c" in the list, then the clinics entity is predicted for all instances of the input data where "clinic a, clinic b, clinic c" are used, regardless of the context. List components can be added to all entities regardless of whether they are prebuilt or newly defined.
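
As a rough illustration of this exact-match behavior (not the service's actual implementation), consider the following Python sketch; the term list and function name are hypothetical:

```python
CLINIC_TERMS = {"clinic a", "clinic b", "clinic c"}  # a hypothetical "clinics" list component

def match_list_entity(text, terms=CLINIC_TERMS):
    """Return the list-component terms found in the text (exact, case-insensitive match)."""
    lowered = text.lower()
    return [term for term in terms if term in lowered]

print(match_list_entity("Patient was transferred from Clinic B yesterday."))
# ['clinic b']
```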
+
+## Model
+A model is an object that's trained to do a certain task, in this case custom Text Analytics for health models perform all the features of Text Analytics for health in addition to custom entity extraction for the user's defined entities. Models are trained by providing labeled data to learn from so they can later be used to understand context from the input text.
+
+* **Model evaluation** is the process that happens right after training to determine how well your model performs.
+* **Deployment** is the process of assigning your model to a deployment to make it available for use via the [prediction API](https://aka.ms/ct-runtime-swagger).
+
+## Overfitting
+
+Overfitting happens when the model is fixated on the specific examples and is not able to generalize well.
+
+## Precision
+Measures how precise/accurate your model is. It's the ratio between the correctly identified positives (true positives) and all identified positives. The precision metric reveals how many of the predicted entities are correctly labeled.
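
In terms of true positives (TP) and false positives (FP):

$$
\text{precision} = \frac{TP}{TP + FP}
$$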
+
+## Project
+A project is a work area for building your custom ML models based on your data. Your project can only be accessed by you and others who have access to the Azure resource being used.
+
+## Recall
+Measures the model's ability to predict actual positive entities. It's the ratio between the predicted true positives and what was actually labeled. The recall metric reveals how many of the labeled entities are predicted correctly.
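
In terms of true positives (TP) and false negatives (FN):

$$
\text{recall} = \frac{TP}{TP + FN}
$$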
++
+## Schema
+Schema is defined as the combination of entities within your project. Schema design is a crucial part of your project's success. When creating a schema, think about which new entities you should add to your project to extend the existing [Text Analytics for health entity map](../../text-analytics-for-health/concepts/health-entity-categories.md), and which new vocabulary you should add to the prebuilt entities using list components to enhance their recall. For example, you might add a new entity for patient name, or extend the prebuilt entity "Medication Name" with a new research drug (for example, research drug A).
+
+## Training data
+Training data is the set of information that is needed to train a model.
++
+## Next steps
+
+* [Data and service limits](service-limits.md).
+
cognitive-services Service Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-text-analytics-for-health/reference/service-limits.md
+
+ Title: Custom Text Analytics for health service limits
+
+description: Learn about the data and service limits when using Custom Text Analytics for health.
++++++ Last updated : 04/14/2023++++
+# Custom Text Analytics for health service limits
+
+Use this article to learn about the data and service limits when using custom Text Analytics for health.
+
+## Language resource limits
+
+* Your Language resource has to be created in one of the [supported regions](#regional-availability).
+
+* Your resource must be one of the supported pricing tiers:
+
+ |Tier|Description|Limit|
+ |--|--|--|
+ |S |Paid tier|You can have unlimited Language S tier resources per subscription. |
+
+
+* You can only connect one storage account per resource. This process is irreversible. If you connect a storage account to your resource, you cannot unlink it later. Learn more about [connecting a storage account](../how-to/create-project.md#create-language-resource-and-connect-storage-account).
+
+* You can have up to 500 projects per resource.
+
+* Project names have to be unique within the same resource across all custom features.
+
+## Regional availability
+
+Custom Text Analytics for health is only available in some Azure regions since it is a preview service. Some regions may be available for **both authoring and prediction**, while other regions may be for **prediction only**. Language resources in authoring regions allow you to create, edit, train, and deploy your projects. Language resources in prediction regions allow you to get predictions from a deployment.
+
+| Region | Authoring | Prediction |
+|--|--|-|
+| East US | ✓ | ✓ |
+| UK South | ✓ | ✓ |
+| North Europe | ✓ | ✓ |
+
+## API limits
+
+|Item|Request type| Maximum limit|
+|:-|:-|:-|
+|Authoring API|POST|10 per minute|
+|Authoring API|GET|100 per minute|
+|Prediction API|GET/POST|1,000 per minute|
+|Document size|--|125,000 characters. You can send up to 20 documents as long as they collectively do not exceed 125,000 characters|
+
+> [!TIP]
+> If you need to send larger files than the limit allows, you can break the text into smaller chunks of text before sending them to the API. You can use the [chunk command from CLUtils](https://github.com/microsoft/CognitiveServicesLanguageUtilities/blob/main/CustomTextAnalytics.CLUtils/Solution/CogSLanguageUtilities.ViewLayer.CliCommands/Commands/ChunkCommand/README.md) for this process.
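
The CLUtils chunk command is the supported route, but if you prefer to chunk in your own code, the following Python sketch shows one simple approach. The function name and the newline-based splitting strategy are illustrative assumptions, not part of the service.

```python
MAX_CHARS = 125_000  # per-request character limit from the table above

def chunk_text(text, max_chars=MAX_CHARS):
    """Split text into chunks under the limit, breaking on newlines where possible."""
    chunks = []
    while len(text) > max_chars:
        split_at = text.rfind("\n", 0, max_chars)  # prefer a natural break point
        if split_at <= 0:
            split_at = max_chars  # no newline found; hard-split at the limit
        chunks.append(text[:split_at])
        text = text[split_at:]
    chunks.append(text)
    return chunks
```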
+
+## Quota limits
+
+|Pricing tier |Item |Limit |
+|--|--|--|
+|S|Training time| Unlimited, free |
+|S|Prediction Calls| 5,000 text records for free per language resource|
+
+## Document limits
+
+* You can only use `.txt` files. If your data is in another format, you can use the [CLUtils parse command](https://github.com/microsoft/CognitiveServicesLanguageUtilities/blob/main/CustomTextAnalytics.CLUtils/Solution/CogSLanguageUtilities.ViewLayer.CliCommands/Commands/ParseCommand/README.md) to open your document and extract the text.
+
+* All files uploaded in your container must contain data. Empty files are not allowed for training.
+
+* All files should be available at the root of your container.
+
+## Data limits
+
+The following limits are observed for authoring.
+
+|Item|Lower Limit| Upper Limit |
+|--|--|--|
+|Documents count | 10 | 100,000 |
+|Document length in characters | 1 | 128,000 characters; approximately 28,000 words or 56 pages. |
+|Count of entity types | 1 | 200 |
+|Entity length in characters | 1 | 500 |
+|Count of trained models per project| 0 | 10 |
+|Count of deployments per project| 0 | 10 |
+
+## Naming limits
+
+| Item | Limits |
+|--|--|
+| Project name | You can only use letters `(a-z, A-Z)`, numbers `(0-9)`, and symbols `_ . -`, with no spaces. Maximum allowed length is 50 characters. |
+| Model name | You can only use letters `(a-z, A-Z)`, numbers `(0-9)` and symbols `_ . -`. Maximum allowed length is 50 characters. |
+| Deployment name | You can only use letters `(a-z, A-Z)`, numbers `(0-9)` and symbols `_ . -`. Maximum allowed length is 50 characters. |
+| Entity name| You can only use letters `(a-z, A-Z)`, numbers `(0-9)` and all symbols except ":", `$ & % * ( ) + ~ # / ?`. Maximum allowed length is 50 characters. See the supported [data format](../concepts/data-formats.md#entity-naming-rules) for more information on entity names when importing a labels file. |
+| Document name | You can only use letters `(a-z, A-Z)`, and numbers `(0-9)` with no spaces. |
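
If you generate names programmatically, a client-side sanity check like the following Python sketch can catch violations early. The regular expressions are derived from the table above (the more permissive entity-name rule is omitted), and they're a convenience check, not the service's authoritative validation.

```python
import re

# Patterns derived from the naming table above; treat as a convenience check only.
NAME_RULES = {
    "project": re.compile(r"[A-Za-z0-9_.-]{1,50}"),     # letters, numbers, _ . -, no spaces
    "model": re.compile(r"[A-Za-z0-9_.-]{1,50}"),
    "deployment": re.compile(r"[A-Za-z0-9_.-]{1,50}"),
    "document": re.compile(r"[A-Za-z0-9]+"),            # letters and numbers, no spaces
}

def is_valid_name(kind, name):
    """Return True if the name fully matches the rule for the given kind."""
    return bool(NAME_RULES[kind].fullmatch(name))

print(is_valid_name("project", "ta4h_facility-extraction"))  # True
print(is_valid_name("project", "has spaces"))                # False
```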
++
+## Next steps
+
+* [Custom text analytics for health overview](../overview.md)
cognitive-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/overview.md
Previously updated : 12/09/2022 Last updated : 04/14/2023
This Language service unifies the following previously available Cognitive Servi
The Language service also provides several new features as well, which can either be:
-* Pre-configured, which means the AI models that the feature uses are not customizable. You just send your data, and use the feature's output in your applications.
+* Preconfigured, which means the AI models that the feature uses are not customizable. You just send your data, and use the feature's output in your applications.
* Customizable, which means you'll train an AI model using our tools to fit your data specifically. > [!TIP]
The Language service also provides several new features as well, which can eithe
:::image type="content" source="media/studio-examples/named-entity-recognition.png" alt-text="A screenshot of a named entity recognition example." lightbox="media/studio-examples/named-entity-recognition.png"::: :::column-end::: :::column span="":::
- [Named entity recognition](./named-entity-recognition/overview.md) is a pre-configured feature that categorizes entities (words or phrases) in unstructured text across several pre-defined category groups. For example: people, events, places, dates, [and more](./named-entity-recognition/concepts/named-entity-categories.md).
+ [Named entity recognition](./named-entity-recognition/overview.md) is a preconfigured feature that categorizes entities (words or phrases) in unstructured text across several predefined category groups. For example: people, events, places, dates, [and more](./named-entity-recognition/concepts/named-entity-categories.md).
:::column-end::: :::row-end:::
The Language service also provides several new features as well, which can eithe
:::image type="content" source="media/studio-examples/personal-information-detection.png" alt-text="A screenshot of a PII detection example." lightbox="media/studio-examples/personal-information-detection.png"::: :::column-end::: :::column span="":::
- [PII detection](./personally-identifiable-information/overview.md) is a pre-configured feature that identifies, categorizes, and redacts sensitive information in both [unstructured text documents](./personally-identifiable-information/how-to-call.md), and [conversation transcripts](./personally-identifiable-information/how-to-call-for-conversations.md). For example: phone numbers, email addresses, forms of identification, [and more](./personally-identifiable-information/concepts/entity-categories.md).
+ [PII detection](./personally-identifiable-information/overview.md) is a preconfigured feature that identifies, categorizes, and redacts sensitive information in both [unstructured text documents](./personally-identifiable-information/how-to-call.md), and [conversation transcripts](./personally-identifiable-information/how-to-call-for-conversations.md). For example: phone numbers, email addresses, forms of identification, [and more](./personally-identifiable-information/concepts/entity-categories.md).
:::column-end::: :::row-end:::
The Language service also provides several new features as well, which can eithe
:::image type="content" source="media/studio-examples/language-detection.png" alt-text="A screenshot of a language detection example." lightbox="media/studio-examples/language-detection.png"::: :::column-end::: :::column span="":::
- [Language detection](./language-detection/overview.md) is a pre-configured feature that can detect the language a document is written in, and returns a language code for a wide range of languages, variants, dialects, and some regional/cultural languages.
+ [Language detection](./language-detection/overview.md) is a preconfigured feature that can detect the language a document is written in, and returns a language code for a wide range of languages, variants, dialects, and some regional/cultural languages.
:::column-end::: :::row-end:::
The Language service also provides several new features as well, which can eithe
:::image type="content" source="media/studio-examples/sentiment-analysis-example.png" alt-text="A screenshot of a sentiment analysis example." lightbox="media/studio-examples/sentiment-analysis-example.png"::: :::column-end::: :::column span="":::
- [Sentiment analysis and opinion mining](./sentiment-opinion-mining/overview.md) are pre-configured features that help you find out what people think of your brand or topic by mining text for clues about positive or negative sentiment, and can associate them with specific aspects of the text.
+ [Sentiment analysis and opinion mining](./sentiment-opinion-mining/overview.md) are preconfigured features that help you find out what people think of your brand or topic by mining text for clues about positive or negative sentiment, and can associate them with specific aspects of the text.
:::column-end::: :::row-end:::
The Language service also provides several new features as well, which can eithe
:::image type="content" source="media/studio-examples/summarization-example.png" alt-text="A screenshot of a summarization example." lightbox="media/studio-examples/summarization-example.png"::: :::column-end::: :::column span="":::
- [Summarization](./summarization/overview.md) is a pre-configured feature that uses extractive text summarization to produce a summary of documents and conversation transcriptions. It extracts sentences that collectively represent the most important or relevant information within the original content.
+ [Summarization](./summarization/overview.md) is a preconfigured feature that uses extractive text summarization to produce a summary of documents and conversation transcriptions. It extracts sentences that collectively represent the most important or relevant information within the original content.
:::column-end::: :::row-end:::
The Language service also provides several new features as well, which can eithe
:::image type="content" source="media/studio-examples/key-phrases.png" alt-text="A screenshot of a key phrase extraction example." lightbox="media/studio-examples/key-phrases.png"::: :::column-end::: :::column span="":::
- [Key phrase extraction](./key-phrase-extraction/overview.md) is a pre-configured feature that evaluates and returns the main concepts in unstructured text, and returns them as a list.
+ [Key phrase extraction](./key-phrase-extraction/overview.md) is a preconfigured feature that evaluates and returns the main concepts in unstructured text, and returns them as a list.
:::column-end::: :::row-end:::
The Language service also provides several new features as well, which can eithe
:::image type="content" source="media/studio-examples/entity-linking.png" alt-text="A screenshot of an entity linking example." lightbox="media/studio-examples/entity-linking.png"::: :::column-end::: :::column span="":::
- [Entity linking](./entity-linking/overview.md) is a pre-configured feature that disambiguates the identity of entities (words or phrases) found in unstructured text and returns links to Wikipedia.
+ [Entity linking](./entity-linking/overview.md) is a preconfigured feature that disambiguates the identity of entities (words or phrases) found in unstructured text and returns links to Wikipedia.
:::column-end::: :::row-end:::
The Language service also provides several new features as well, which can eithe
:::image type="content" source="text-analytics-for-health/media/call-api/health-named-entity-recognition.png" alt-text="A screenshot of a text analytics for health example." lightbox="text-analytics-for-health/media/call-api/health-named-entity-recognition.png"::: :::column-end::: :::column span="":::
- [Text analytics for health](./text-analytics-for-health/overview.md) is a pre-configured feature that extracts and labels relevant medical information from unstructured texts such as doctor's notes, discharge summaries, clinical documents, and electronic health records.
+ [Text analytics for health](./text-analytics-for-health/overview.md) is a preconfigured feature that extracts and labels relevant medical information from unstructured texts such as doctor's notes, discharge summaries, clinical documents, and electronic health records.
:::column-end::: :::row-end:::
The Language service also provides several new features as well, which can eithe
:::column-end::: :::row-end:::
+### Custom text analytics for health
+
+ :::column span="":::
+ :::image type="content" source="text-analytics-for-health/media/call-api/health-named-entity-recognition.png" alt-text="A screenshot of a custom text analytics for health example." lightbox="text-analytics-for-health/media/call-api/health-named-entity-recognition.png":::
+ :::column-end:::
+ :::column span="":::
+ [Custom text analytics for health](./custom-text-analytics-for-health/overview.md) is a custom feature that extracts healthcare-specific entities from unstructured text, using a model you create.
+ :::column-end:::
+ ## Which Language service feature should I use? This section will help you decide which Language service feature you should use for your application:
This section will help you decide which Language service feature you should use
|What do you want to do? |Document format |Your best solution | Is this solution customizable?* |
|--|--|--|--|
| Detect and/or redact sensitive information such as PII and PHI. | Unstructured text, <br> transcribed conversations | [PII detection](./personally-identifiable-information/overview.md) | |
-| Extract categories of information without creating a custom model. | Unstructured text | The [pre-configured NER feature](./named-entity-recognition/overview.md) | |
+| Extract categories of information without creating a custom model. | Unstructured text | The [preconfigured NER feature](./named-entity-recognition/overview.md) | |
| Extract categories of information using a model specific to your data. | Unstructured text | [Custom NER](./custom-named-entity-recognition/overview.md) | ✓ |
|Extract main topics and important phrases. | Unstructured text | [Key phrase extraction](./key-phrase-extraction/overview.md) | |
| Determine the sentiment and opinions expressed in text. | Unstructured text | [Sentiment analysis and opinion mining](./sentiment-opinion-mining/overview.md) | |
| Summarize long chunks of text or conversations. | Unstructured text, <br> transcribed conversations. | [Summarization](./summarization/overview.md) | |
| Disambiguate entities and get links to Wikipedia. | Unstructured text | [Entity linking](./entity-linking/overview.md) | |
| Classify documents into one or more categories. | Unstructured text | [Custom text classification](./custom-text-classification/overview.md) | ✓|
-| Extract medical information from clinical/medical documents. | Unstructured text | [Text analytics for health](./text-analytics-for-health/overview.md) | |
-| Build an conversational application that responds to user inputs. | Unstructured user inputs | [Question answering](./question-answering/overview.md) | ✓ |
+| Extract medical information from clinical/medical documents, without building a model. | Unstructured text | [Text analytics for health](./text-analytics-for-health/overview.md) | |
+| Extract medical information from clinical/medical documents using a model that's trained on your data. | Unstructured text | [Custom text analytics for health](./custom-text-analytics-for-health/overview.md) | |
+| Build a conversational application that responds to user inputs. | Unstructured user inputs | [Question answering](./question-answering/overview.md) | ✓ |
| Detect the language that a text was written in. | Unstructured text | [Language detection](./language-detection/overview.md) | |
| Predict the intention of user inputs and extract information from them. | Unstructured user inputs | [Conversational language understanding](./conversational-language-understanding/overview.md) | ✓ |
| Connect apps from conversational language understanding, LUIS, and question answering. | Unstructured user inputs | [Orchestration workflow](./orchestration-workflow/overview.md) | ✓ |
-\* If a feature is customizable, you can train an AI model using our tools to fit your data specifically. Otherwise a feature is pre-configured, meaning the AI models it uses cannot be changed. You just send your data, and use the feature's output in your applications.
+\* If a feature is customizable, you can train an AI model using our tools to fit your data specifically. Otherwise, a feature is preconfigured, meaning the AI models it uses cannot be changed. You just send your data, and use the feature's output in your applications.
## Migrate from Text Analytics, QnA Maker, or Language Understanding (LUIS)
cognitive-services Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/whats-new.md
Previously updated : 03/09/2023 Last updated : 04/14/2023
Azure Cognitive Service for Language is updated on an ongoing basis. To stay up-
## April 2023
+* [Custom Text analytics for health](./custom-text-analytics-for-health/overview.md) is available in public preview, which enables you to build custom AI models to extract healthcare-specific entities from unstructured text.
* You can now use Azure OpenAI to automatically label or generate data during authoring. Learn more with the links below. * Auto-label your documents in [Custom text classification](./custom-text-classification/how-to/use-autolabeling.md) or [Custom named entity recognition](./custom-named-entity-recognition/how-to/use-autolabeling.md). * Generate suggested utterances in [Conversational language understanding](./conversational-language-understanding/how-to/tag-utterances.md#suggest-utterances-with-azure-openai).
cognitive-services Content Filter https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/openai/concepts/content-filter.md
The table below outlines the various ways content filtering can appear:
As part of your application design you'll need to think carefully on how to maximize the benefits of your applications while minimizing the harms. Consider the following best practices: -- How you want to handle scenarios where your users send in-appropriate or miss-use your application. Check the finish_reason to see if the generation is filtered.
+- How you want to handle scenarios where your users send inappropriate input or misuse your application. Check the finish_reason to see if the generation is filtered.
- If it's critical that the content filters run on your generations, check that there's no `error` object in the `content_filter_result`. - To help with monitoring for possible misuse, applications serving multiple end-users should pass the `user` parameter with each API call. The `user` should be a unique identifier for the end-user. Don't send any actual user identifiable information as the value.
communication-services Advisor Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/advisor-overview.md
The following SDKs are supported for this feature, along with all their supporte
The following documents may be interesting to you: -- [Logging and diagnostics](./logging-and-diagnostics.md)
+- [Logging and diagnostics](./analytics/enable-logging.md)
+- Access logs for [voice and video](./analytics/logs/voice-and-video-logs.md), [chat](./analytics/logs/chat-logs.md), [email](./analytics/logs/email-logs.md), [network traversal](./analytics/logs/network-traversal-logs.md), [recording](./analytics/logs/recording-logs.md), [SMS](./analytics/logs/sms-logs.md) and [call automation](./analytics/logs/call-automation-logs.md).
- [Metrics](./metrics.md)
communication-services Call Logs Azure Monitor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/analytics/call-logs-azure-monitor.md
- Title: Azure Communication Services - Call Logs -
-description: Learn about Call Summary and Call Diagnostic Logs in Azure Monitor
---- Previously updated : 10/25/2021-----
-# Call Summary and Call Diagnostic Logs
-
-> [!IMPORTANT]
-> The following refers to logs enabled through [Azure Monitor](../../../azure-monitor/overview.md) (see also [FAQ](../../../azure-monitor/faq.yml)). To enable these logs for your Communications Services, see: [Enable logging in Diagnostic Settings](./enable-logging.md)
--
-## Data Concepts
-The following are high level descriptions of data concepts specific to Voice and Video calling within your Communications Services that are important to review in order to understand the meaning of the data captured in the logs.
-
-### Entities and IDs
-
-A *Call*, as it relates to the entities represented in the data, is an abstraction represented by the `correlationId`. `CorrelationId`s are unique per Call, and are time-bound by `callStartTime` and `callDuration`. Every Call is an event that contains data from two or more *Endpoints*, which represent the various human, bot, or server participants in the Call.
-
-A *Participant* (`participantId`) is present only when the Call is a *Group* Call, as it represents the connection between an Endpoint and the server.
-
-An *Endpoint* is the most unique entity, represented by `endpointId`. `EndpointType` tells you whether the Endpoint represents a human user (PSTN, VoIP), a Bot (Bot), or the server that is managing multiple Participants within a Call. When an `endpointType` is `"Server"`, the Endpoint will not be assigned a unique ID. By analyzing endpointType and the number of `endpointIds`, you can determine how many users and other non-human Participants (bots, servers) join a Call. Our native SDKs (Android, iOS) reuse the same `endpointId` for a user across multiple Calls, thus enabling an understanding of experience across sessions. This differs from web-based Endpoints, which will always generate a new `endpointId` for each new Call.
-
-A *Stream* is the most granular entity, as there is one Stream per direction (inbound/outbound) and `mediaType` (e.g. audio, video).
---
-## Data Definitions
-
-### Call Summary Log
-The Call Summary Log contains data to help you identify key properties of all Calls. A different Call Summary Log will be created for each `participantId` (`endpointId` in the case of P2P calls) in the Call.
-
-> [!IMPORTANT]
-> Participant information in the call summary log will vary based on the participant tenant. The SDK and OS version will be redacted if the participant is not within the same tenant (also referred to as cross-tenant) as the ACS resource. Cross-tenant participants are classified as external users invited by a resource tenant to join and collaborate during a call.
-
-| Property | Description |
-|-||
-| time | The timestamp (UTC) of when the log was generated. |
-| operationName | The operation associated with log record. |
-| operationVersion | The api-version associated with the operation, if the `operationName` was performed using an API. If there is no API that corresponds to this operation, the version represents the version of that operation in case the properties associated with the operation change in the future. |
-| category | The log category of the event. Category is the granularity at which you can enable or disable logs on a particular resource. The properties that appear within the `properties` blob of an event are the same within a particular log category and resource type. |
-| correlationId | `correlationId` is the unique ID for a Call. The `correlationId` identifies correlated events from all of the participants and endpoints that connect during a single Call, and it can be used to join data from different logs. If you ever need to open a support case with Microsoft, the `correlationId` will be used to easily identify the Call you're troubleshooting. |
-| identifier | This is the unique ID for the user. The identity can be an Azure Communications Services user, Azure AD user ID, Teams anonymous user ID or Teams bot ID. You can use this ID to correlate user events across different logs. |
-| callStartTime | A timestamp for the start of the call, based on the first attempted connection from any Endpoint. |
-| callDuration | The duration of the Call expressed in seconds, based on the first attempted connection and end of the last connection between two endpoints. |
-| callType | Will contain either `"P2P"` or `"Group"`. A `"P2P"` Call is a direct 1:1 connection between only two, non-server endpoints. A `"Group"` Call is a Call that has more than two endpoints or is created as `"Group"` Call prior to the connection. |
-| teamsThreadId | This ID is only relevant when the Call is organized as a Microsoft Teams meeting, representing the Microsoft Teams – Azure Communication Services interoperability use-case. This ID is exposed in operational logs. You can also get this ID through the Chat APIs. |
-| participantId | This ID is generated to represent the two-way connection between a `"Participant"` Endpoint (`endpointType` = `"Server"`) and the server. When `callType` = `"P2P"`, there is a direct connection between two endpoints, and no `participantId` is generated. |
-| participantStartTime | Timestamp for beginning of the first connection attempt by the participant. |
-| participantDuration | The duration of each Participant connection in seconds, from `participantStartTime` to the timestamp when the connection is ended. |
-| participantEndReason | Contains Calling SDK error codes emitted by the SDK when relevant for each `participantId`. See Calling SDK error codes below. |
-| endpointId | Unique ID that represents each Endpoint connected to the call, where the Endpoint type is defined by `endpointType`. When the value is `null`, the connected entity is the Communication Services server (`endpointType`= `"Server"`). `EndpointId` can sometimes persist for the same user across multiple calls (`correlationId`) for native clients. The number of `endpointId`s will determine the number of Call Summary Logs. A distinct Summary Log is created for each `endpointId`. |
-| endpointType | This value describes the properties of each Endpoint connected to the Call. Can contain `"Server"`, `"VOIP"`, `"PSTN"`, `"BOT"`, or `"Unknown"`. |
-| sdkVersion | Version string for the Communication Services Calling SDK version used by each relevant Endpoint. (Example: `"1.1.00.20212500"`) |
-| osVersion | String that represents the operating system and version of each Endpoint device. |
-| participantTenantId | The ID of the Microsoft tenant associated with the participant. This field is used to guide cross-tenant redaction.
--
-### Call Diagnostic Log
-Call Diagnostic Logs provide important information about the Endpoints and the media transfers for each Participant, as well as measurements that help to understand quality issues.
-For each Endpoint within a Call, a distinct Call Diagnostic Log is created for outbound media streams (audio, video, etc.) between Endpoints.
-In a P2P Call, each log contains data relating to each of the outbound stream(s) associated with each Endpoint. In Group Calls the participantId serves as key identifier to join the related outbound logs into a distinct Participant connection. Please note that Call diagnostic logs will remain intact and will be the same regardless of the participant tenant.
-> Note: In this document, P2P and group calls are by default within the same tenant. All call scenarios that are cross-tenant are specified accordingly throughout the document.
-
-| Property | Description |
-||-|
-| operationName | The operation associated with log record. |
-| operationVersion | The `api-version` associated with the operation, if the `operationName` was performed using an API. If there is no API that corresponds to this operation, the version represents the version of that operation in case the properties associated with the operation change in the future. |
-| category | The log category of the event. Category is the granularity at which you can enable or disable logs on a particular resource. The properties that appear within the `properties` blob of an event are the same within a particular log category and resource type. |
-| correlationId | The `correlationId` identifies correlated events from all of the participants and endpoints that connect during a single Call. `correlationId` is the unique ID for a Call. If you ever need to open a support case with Microsoft, the `correlationId` will be used to easily identify the Call you're troubleshooting. |
-| participantId | This ID is generated to represent the two-way connection between a "Participant" Endpoint (`endpointType` = `"Server"`) and the server. When `callType` = `"P2P"`, there is a direct connection between two endpoints, and no `participantId` is generated. |
-| identifier | This is the unique ID for the user. The identity can be an Azure Communications Services user, Azure AD user ID, Teams object ID or Teams bot ID. You can use this ID to correlate user events across different logs. |
-| endpointId | Unique ID that represents each Endpoint connected to the call, with Endpoint type defined by `endpointType`. When the value is `null`, it means that the connected entity is the Communication Services server. `EndpointId` can persist for the same user across multiple calls (`correlationId`) for native clients but will be unique for every Call when the client is a web browser. |
-| endpointType | This value describes the properties of each `endpointId`. Can contain `"Server"`, `"VOIP"`, `"PSTN"`, `"BOT"`, `"Voicemail"`, `"Anonymous"`, or `"Unknown"`. |
-| mediaType | This string value describes the type of media being transmitted between endpoints within each stream. Possible values include `"Audio"`, `"Video"`, `"VBSS"` (Video-Based Screen Sharing), and `"AppSharing"`. |
-| streamId | Non-unique integer which, together with `mediaType`, can be used to uniquely identify streams of the same `participantId`. |
-| transportType | String value which describes the network transport protocol per `participantId`. Can contain `"UDP"`, `"TCP"`, or `"Unrecognized"`. `"Unrecognized"` indicates that the system could not determine if the `transportType` was TCP or UDP. |
-| roundTripTimeAvg | This is the average time it takes to get an IP packet from one Endpoint to another within a `participantDuration`. This network propagation delay is essentially tied to physical distance between the two points and the speed of light, including additional overhead taken by the various routers in between. The latency is measured as one-way or Round-trip Time (RTT). Its value is expressed in milliseconds, and an RTT greater than 500ms should be considered as negatively impacting the Call quality. |
-| roundTripTimeMax | The maximum RTT (ms) measured per media stream during a `participantDuration` in a group Call or `callDuration` in a P2P Call. |
-| jitterAvg | This is the average change in delay between successive packets. Azure Communication Services can adapt to some levels of jitter through buffering. It's only when the jitter exceeds the buffering, which is approximately at `jitterAvg` >30 ms, that a negative quality impact is likely occurring. The packets arriving at different speeds cause a speaker's voice to sound robotic. This is measured per media stream over the `participantDuration` in a group Call or `callDuration` in a P2P Call. |
-| jitterMax | The is the maximum jitter value measured between packets per media stream. Bursts in network conditions can cause issues in the audio/video traffic flow. |
-| packetLossRateAvg | This is the average percentage of packets that are lost. Packet loss directly affects audio quality, from small, individual lost packets that have almost no impact to back-to-back burst losses that cause audio to cut out completely. The packets being dropped and not arriving at their intended destination cause gaps in the media, resulting in missed syllables and words, and choppy video and sharing. A packet loss rate of greater than 10% (0.1) should be considered a rate that's likely having a negative quality impact. This is measured per media stream over the `participantDuration` in a group Call or `callDuration` in a P2P Call. |
-| packetLossRateMax | This value represents the maximum packet loss rate (%) per media stream over the `participantDuration` in a group Call or `callDuration` in a P2P Call. Bursts in network conditions can cause issues in the audio/video traffic flow.
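
As a rough illustration (not an official rule set), the following Python sketch flags a Call Diagnostic Log record against the thresholds called out above: average RTT above 500 ms, average jitter above 30 ms, or average packet loss above 10%. The function name and sample record are hypothetical.

```python
def flag_quality_issues(record):
    """Flag a Call Diagnostic Log record against the thresholds described above."""
    issues = []
    if record.get("roundTripTimeAvg", 0) > 500:    # milliseconds
        issues.append("high round-trip time")
    if record.get("jitterAvg", 0) > 30:            # milliseconds
        issues.append("high jitter")
    if record.get("packetLossRateAvg", 0) > 0.1:   # 10% packet loss
        issues.append("high packet loss")
    return issues

sample = {"roundTripTimeAvg": 620, "jitterAvg": 12, "packetLossRateAvg": 0.02}
print(flag_quality_issues(sample))  # ['high round-trip time']
```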
-### P2P vs. Group Calls
-
-There are two types of Calls (represented by `callType`): P2P and Group.
-
-**P2P** calls are a connection between only two Endpoints, with no server Endpoint. P2P calls are initiated as a Call between those Endpoints and are not created as a group Call event prior to the connection.
-
- :::image type="content" source="media\call-logs-azure-monitor\p2p-diagram.png" alt-text="Screenshot displays P2P call across 2 endpoints.":::
-
-**Group** Calls include any Call that has more than 2 Endpoints connected. Group Calls will include a server Endpoint, and the connection between each Endpoint and the server. P2P Calls that add an additional Endpoint during the Call cease to be P2P, and they become a Group Call. By viewing the `participantStartTime` and `participantDuration`, the timeline of when each Endpoint joined the Call can be determined.
--
- :::image type="content" source="media\call-logs-azure-monitor\group-call-version-a.png" alt-text="Screenshot displays group call across multiple endpoints.":::
--
-## Log Structure
-
-Two types of logs are created: **Call Summary** logs and **Call Diagnostic** logs.
-
-Call Summary Logs contain basic information about the Call, including all the relevant IDs, timestamps, Endpoint and SDK information. For each participant within a call, a distinct call summary log is created (if someone rejoins a call, they will have the same EndpointId, but a different ParticipantId, so there will be two Call Summary logs for that endpoint).
-
-Call Diagnostic Logs contain information about the Stream as well as a set of metrics that indicate quality of experience measurements. For each Endpoint within a Call (including the server), a distinct Call Diagnostic Log is created for each media stream (audio, video, etc.) between Endpoints. In a P2P Call, each log contains data relating to each of the outbound stream(s) associated with each Endpoint. In a Group Call, each stream associated with `endpointType`= `"Server"` will create a log containing data for the inbound streams, and all other streams will create logs containing data for the outbound streams for all non-server endpoints. In Group Calls, use the `participantId` as the key to join the related inbound/outbound logs into a distinct Participant connection.
-
-### Example 1: P2P Call
-
-The below diagram represents two endpoints connected directly in a P2P Call. In this example, 2 Call Summary Logs would be created (one per `participantID`) and four Call Diagnostic Logs would be created (one per media stream). Each log will contain data relating to the outbound stream of the `participantID`.
---
-### Example 2: Group Call
-
-The below diagram represents a Group Call example with three `participantIDs` (`endpointIds` can potentially appear in multiple Participants, for example, when rejoining a Call from the same device) and a Server Endpoint. One Call Summary Log would be created per `participantID`, and four Call Diagnostic Logs would be created relating to each `participantID`, one for each media stream.
-
-
-### Example 3: P2P Call cross-tenant
-The below diagram represents two participants across multiple tenants that are connected directly in a P2P Call. In this example, one Call Summary Log would be created per participant, with redacted OS and SDK versions, and four Call Diagnostic Logs would be created (one per media stream). Each log will contain data relating to the outbound stream of the `participantID`.
-
--
-### Example 4: Group Call cross-tenant
-The following diagram represents a Group Call example with three `participantIds` across multiple tenants. One Call Summary Log would be created per participant with the OS and SDK versions redacted, and four Call Diagnostic Logs would be created relating to each `participantId`, one for each media stream.
---
-> [!NOTE]
-> Only outbound diagnostic logs are supported in this release.
-> Participant and bot identities are treated the same way; as a result, the OS and SDK versions associated with a bot and a participant are both redacted.
--
-
-## Sample Data
-
-### P2P Call
--
-Shared fields for all logs in the call:
-
-```json
-"time": "2021-07-19T18:46:50.188Z",
-"resourceId": "SUBSCRIPTIONS/XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX/RESOURCEGROUPS/ACS-TEST-RG/PROVIDERS/MICROSOFT.COMMUNICATION/COMMUNICATIONSERVICES/ACS-PROD-CCTS-TESTS",
-"correlationId": "8d1a8374-344d-4502-b54b-ba2d6daaf0ae",
-```
-
-#### Call Summary Logs
-Call Summary Logs have shared operation and category information:
-
-```json
-"operationName": "CallSummary",
-"operationVersion": "1.0",
-"category": "CallSummary",
-
-```
-Call summary for VoIP user 1:
-```json
-"properties": {
- "identifier": "acs:61fddbe3-0003-4066-97bc-6aaf143bbb84_0000000b-4fee-66cf-ac00-343a0d003158",
- "callStartTime": "2021-07-19T17:54:05.113Z",
- "callDuration": 6,
- "callType": "P2P",
- "teamsThreadId": "null",
- "participantId": "null",
- "participantStartTime": "2021-07-19T17:54:06.758Z",
- "participantDuration": "5",
- "participantEndReason": "0",
- "endpointId": "570ea078-74e9-4430-9c67-464ba1fa5859",
- "endpointType": "VoIP",
- "sdkVersion": "1.0.1.0",
- "osVersion": "Windows 10.0.17763 Arch: x64"
-}
-```
-
-Call summary for VoIP user 2:
-```json
-"properties": {
- "identifier": "acs:7af14122-9ac7-4b81-80a8-4bf3582b42d0_06f9276d-8efe-4bdd-8c22-ebc5434903f0",
- "callStartTime": "2021-07-19T17:54:05.335Z",
- "callDuration": 6,
- "callType": "P2P",
- "teamsThreadId": "null",
- "participantId": "null",
- "participantStartTime": "2021-07-19T17:54:06.335Z",
- "participantDuration": "5",
- "participantEndReason": "0",
- "endpointId": "a5bd82f9-ac38-4f4a-a0fa-bb3467cdcc64",
- "endpointType": "VoIP",
- "sdkVersion": "1.1.0.0",
- "osVersion": "null"
-}
-```
-Cross-tenant Call Summary Logs: call summary for VoIP user 1:
-```json
-"properties": {
- "identifier": "1e4c59e1-r1rr-49bc-893d-990dsds8f9f5",
- "callStartTime": "2022-08-14T06:18:27.010Z",
- "callDuration": 520,
- "callType": "P2P",
- "teamsThreadId": "null",
- "participantId": "null",
- "participantTenantId": "02cbdb3c-155a-4b95-b829-6d56a45787ca",
- "participantStartTime": "2022-08-14T06:18:27.010Z",
- "participantDuration": "520",
- "participantEndReason": "0",
- "endpointId": "02cbdb3c-155a-4d98-b829-aaaaa61d44ea",
- "endpointType": "VoIP",
- "sdkVersion": "Redacted",
- "osVersion": "Redacted"
-}
-```
-Call summary for PSTN call (**Note:** P2P or group call logs emitted will have the OS and SDK version redacted regardless of the participant's or bot's tenant):
-```json
-"properties": {
- "identifier": "b1999c3e-bbbb-4650-9b23-9999bdabab47",
- "callStartTime": "2022-08-07T13:53:12Z",
- "callDuration": 1470,
- "callType": "Group",
- "teamsThreadId": "19:36ec5177126fff000aaa521670c804a3@thread.v2",
- "participantId": " b25cf111-73df-4e0a-a888-640000abe34d",
- "participantStartTime": "2022-08-07T13:56:45Z",
- "participantDuration": 960,
- "participantEndReason": "0",
- "endpointId": "8731d003-6c1e-4808-8159-effff000aaa2",
- "endpointType": "PSTN",
- "sdkVersion": "Redacted",
- "osVersion": "Redacted"
-}
-```
-
-#### Call Diagnostic Logs
-Call diagnostics logs share operation information:
-```json
-"operationName": "CallDiagnostics",
-"operationVersion": "1.0",
-"category": "CallDiagnostics",
-```
-Diagnostic log for audio stream from VoIP Endpoint 1 to VoIP Endpoint 2:
-```json
-"properties": {
- "identifier": "acs:61fddbe3-0003-4066-97bc-6aaf143bbb84_0000000b-4fee-66cf-ac00-343a0d003158",
- "participantId": "null",
- "endpointId": "570ea078-74e9-4430-9c67-464ba1fa5859",
- "endpointType": "VoIP",
- "mediaType": "Audio",
- "streamId": "1000",
- "transportType": "UDP",
- "roundTripTimeAvg": "82",
- "roundTripTimeMax": "88",
- "jitterAvg": "1",
- "jitterMax": "1",
- "packetLossRateAvg": "0",
- "packetLossRateMax": "0"
-}
-```
-Diagnostic log for audio stream from VoIP Endpoint 2 to VoIP Endpoint 1:
-```json
-"properties": {
- "identifier": "acs:7af14122-9ac7-4b81-80a8-4bf3582b42d0_06f9276d-8efe-4bdd-8c22-ebc5434903f0",
- "participantId": "null",
- "endpointId": "a5bd82f9-ac38-4f4a-a0fa-bb3467cdcc64",
- "endpointType": "VoIP",
- "mediaType": "Audio",
- "streamId": "1363841599",
- "transportType": "UDP",
- "roundTripTimeAvg": "78",
- "roundTripTimeMax": "84",
- "jitterAvg": "1",
- "jitterMax": "1",
- "packetLossRateAvg": "0",
- "packetLossRateMax": "0"
-}
-```
-Diagnostic log for video stream from VoIP Endpoint 1 to VoIP Endpoint 2:
-```json
-"properties": {
- "identifier": "acs:61fddbe3-0003-4066-97bc-6aaf143bbb84_0000000b-4fee-66cf-ac00-343a0d003158",
- "participantId": "null",
- "endpointId": "570ea078-74e9-4430-9c67-464ba1fa5859",
- "endpointType": "VoIP",
- "mediaType": "Video",
- "streamId": "2804",
- "transportType": "UDP",
- "roundTripTimeAvg": "103",
- "roundTripTimeMax": "143",
- "jitterAvg": "0",
- "jitterMax": "4",
- "packetLossRateAvg": "3.146336E-05",
- "packetLossRateMax": "0.001769911"
-}
-```
-### Group Call
-
-The data would be generated in three Call Summary Logs and six Call Diagnostic Logs. Shared fields for all logs in the Call:
-```json
-"time": "2021-07-05T06:30:06.402Z",
-"resourceId": "SUBSCRIPTIONS/XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX/RESOURCEGROUPS/ACS-TEST-RG/PROVIDERS/MICROSOFT.COMMUNICATION/COMMUNICATIONSERVICES/ACS-PROD-CCTS-TESTS",
-"correlationId": "341acde7-8aa5-445b-a3da-2ddadca47d22",
-```
-
-#### Call Summary Logs
-Call Summary Logs have shared operation and category information:
-```json
-"operationName": "CallSummary",
-"operationVersion": "1.0",
-"category": "CallSummary",
-```
-
-Call summary for VoIP Endpoint 1:
-```json
-"properties": {
- "identifier": "acs:1797dbb3-f982-47b0-b98e-6a76084454f1_0000000b-1531-729f-ac00-343a0d00d975",
- "callStartTime": "2021-07-05T06:16:40.240Z",
- "callDuration": 87,
- "callType": "Group",
- "teamsThreadId": "19:meeting_MjZiOTAyN2YtZWU1Yi00ZTZiLT77777OOOOO99999jgxOTkw@thread.v2",
- "participantId": "04cc26f5-a86d-481c-b9f9-7a40be4d6fba",
- "participantStartTime": "2021-07-05T06:16:44.235Z",
- "participantDuration": "82",
- "participantEndReason": "0",
- "endpointId": "5ebd55df-ffff-ffff-89e6-4f3f0453b1a6",
- "endpointType": "VoIP",
- "sdkVersion": "1.0.0.3",
- "osVersion": "Darwin Kernel Version 18.7.0: Mon Nov 9 15:07:15 PST 2020; root:xnu-4903.272.3~3/RELEASE_ARM64_S5L8960X"
-}
-```
-Call summary for VoIP Endpoint 3:
-```json
-"properties": {
- "identifier": "acs:1797dbb3-f982-47b0-b98e-6a76084454f1_0000000b-1531-57c6-ac00-343a0d00d972",
- "callStartTime": "2021-07-05T06:16:40.240Z",
- "callDuration": 87,
- "callType": "Group",
- "teamsThreadId": "19:meeting_MjZiOTAyN2YtZWU1Yi00ZTZiLTk2ZDUtYTZlM2I2ZjgxOTkw@thread.v2",
- "participantId": "1a9cb3d1-7898-4063-b3d2-26c1630ecf03",
- "participantStartTime": "2021-07-05T06:16:40.240Z",
- "participantDuration": "87",
- "participantEndReason": "0",
- "endpointId": "5ebd55df-ffff-ffff-ab89-19ff584890b7",
- "endpointType": "VoIP",
- "sdkVersion": "1.0.0.3",
- "osVersion": "Android 11.0; Manufacturer: Google; Product: redfin; Model: Pixel 5; Hardware: redfin"
-}
-```
-Call summary for PSTN Endpoint 2:
-```json
-"properties": {
- "identifier": "null",
- "callStartTime": "2021-07-05T06:16:40.240Z",
- "callDuration": 87,
- "callType": "Group",
- "teamsThreadId": "19:meeting_MjZiOTAyN2YtZWU1Yi00ZTZiLT77777OOOOO99999jgxOTkw@thread.v2",
- "participantId": "515650f7-8204-4079-ac9d-d8f4bf07b04c",
- "participantStartTime": "2021-07-05T06:17:10.447Z",
- "participantDuration": "52",
- "participantEndReason": "0",
- "endpointId": "46387150-692a-47be-8c9d-1237efe6c48b",
- "endpointType": "PSTN",
- "sdkVersion": "null",
- "osVersion": "null"
-}
-```
-Cross-tenant call summary:
-```json
-"properties": {
- "identifier": "1e4c59e1-r1rr-49bc-893d-990dsds8f9f5",
- "callStartTime": "2022-08-14T06:18:27.010Z",
- "callDuration": 912,
- "callType": "Group",
- "teamsThreadId": "19:meeting_MjZiOTAyN2YtZWU1Yi00ZTZiLT77777OOOOO99999jgxOTkw@thread.v2",
- "participantId": "aa1dd7da-5922-4bb1-a4fa-e350a111fd9c",
- "participantTenantId": "02cbdb3c-155a-4b95-b829-6d56a45787ca",
- "participantStartTime": "2022-08-14T06:18:27.010Z",
- "participantDuration": "902",
- "participantEndReason": "0",
- "endpointId": "02cbdb3c-155a-4d98-b829-aaaaa61d44ea",
- "endpointType": "VoIP",
- "sdkVersion": "Redacted",
- "osVersion": "Redacted"
-}
-```
-Cross-tenant call summary log with a bot as a participant.
-Call summary for the bot:
-```json
-
-"properties": {
- "identifier": "b1902c3e-b9f7-4650-9b23-9999bdabab47",
- "callStartTime": "2022-08-09T16:00:32Z",
- "callDuration": 1470,
- "callType": "Group",
- "teamsThreadId": "19:meeting_MmQwZDcwYTQtZ000HWE6NzI4LTg1YTAtNXXXXX99999ZZZZZ@thread.v2",
- "participantId": "66e9d9a7-a434-4663-d91d-fb1ea73ff31e",
- "participantStartTime": "2022-08-09T16:14:18Z",
- "participantDuration": 644,
- "participantEndReason": "0",
- "endpointId": "69680ec2-5ac0-4a3c-9574-eaaa77720b82",
- "endpointType": "Bot",
- "sdkVersion": "Redacted",
- "osVersion": "Redacted"
-}
-```
-#### Call Diagnostic Logs
-Call diagnostics logs share operation information:
-```json
-"operationName": "CallDiagnostics",
-"operationVersion": "1.0",
-"category": "CallDiagnostics",
-```
-Diagnostic log for audio stream from VoIP Endpoint 1 to Server Endpoint:
-```json
-"properties": {
- "identifier": "acs:1797dbb3-f982-47b0-b98e-6a76084454f1_0000000b-1531-729f-ac00-343a0d00d975",
- "participantId": "04cc26f5-a86d-481c-b9f9-7a40be4d6fba",
- "endpointId": "5ebd55df-ffff-ffff-89e6-4f3f0453b1a6",
- "endpointType": "VoIP",
- "mediaType": "Audio",
- "streamId": "14884",
- "transportType": "UDP",
- "roundTripTimeAvg": "46",
- "roundTripTimeMax": "48",
- "jitterAvg": "0",
- "jitterMax": "1",
- "packetLossRateAvg": "0",
- "packetLossRateMax": "0"
-}
-```
-Diagnostic log for audio stream from Server Endpoint to VoIP Endpoint 1:
-```json
-"properties": {
- "identifier": null,
- "participantId": "04cc26f5-a86d-481c-b9f9-7a40be4d6fba",
- "endpointId": null,
- "endpointType": "Server",
- "mediaType": "Audio",
- "streamId": "2001",
- "transportType": "UDP",
- "roundTripTimeAvg": "42",
- "roundTripTimeMax": "44",
- "jitterAvg": "1",
- "jitterMax": "1",
- "packetLossRateAvg": "0",
- "packetLossRateMax": "0"
-}
-```
-Diagnostic log for audio stream from VoIP Endpoint 3 to Server Endpoint:
-```json
-"properties": {
- "identifier": "acs:1797dbb3-f982-47b0-b98e-6a76084454f1_0000000b-1531-57c6-ac00-343a0d00d972",
- "participantId": "1a9cb3d1-7898-4063-b3d2-26c1630ecf03",
- "endpointId": "5ebd55df-ffff-ffff-ab89-19ff584890b7",
- "endpointType": "VoIP",
- "mediaType": "Audio",
- "streamId": "13783",
- "transportType": "UDP",
- "roundTripTimeAvg": "45",
- "roundTripTimeMax": "46",
- "jitterAvg": "1",
- "jitterMax": "2",
- "packetLossRateAvg": "0",
- "packetLossRateMax": "0"
-}
-```
-Diagnostic log for audio stream from Server Endpoint to VoIP Endpoint 3:
-```json
-"properties": {
- "identifier": "null",
- "participantId": "1a9cb3d1-7898-4063-b3d2-26c1630ecf03",
- "endpointId": null,
- "endpointType": "Server"
- "mediaType": "Audio",
- "streamId": "1000",
- "transportType": "UDP",
- "roundTripTimeAvg": "45",
- "roundTripTimeMax": "46",
- "jitterAvg": "1",
- "jitterMax": "4",
- "packetLossRateAvg": "0",
-```
-### Error Codes
-The `participantEndReason` will contain a value from the set of Calling SDK error codes. You can refer to these codes to troubleshoot issues during the call, per Endpoint. See [troubleshooting in Azure Communication Services Calling SDK error codes](../troubleshooting-info.md?tabs=csharp%2cios%2cdotnet#calling-sdk-error-codes).
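-
-To see which end reasons occur most often across your calls, a query along these lines can help. This is a sketch assuming the logs land in a Log Analytics workspace; the `ACSCallSummary` table name is an assumption to verify there.
-
-```kusto
-// Sketch: rank the non-zero participant end reasons, which map to
-// Calling SDK error codes, to spot recurring failure modes.
-ACSCallSummary
-| where ParticipantEndReason != "0"
-| summarize Occurrences = count() by ParticipantEndReason, EndpointType
-| order by Occurrences desc
-```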
communication-services Enable Logging https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/analytics/enable-logging.md
The following are instructions for configuring your Azure Monitor resource to st
These instructions apply to the following Communications Services logs: -- [Call Summary and Call Diagnostic logs](call-logs-azure-monitor.md)
+- [Call Summary and Call Diagnostic logs](logs/voice-and-video-logs.md)
## Access Diagnostic Settings To access Diagnostic Settings for your Communications Services, start by navigating to your Communications Services home page within Azure portal:
They're all viable and flexible options that can adapt to your specific storage
By choosing to send your logs to a [Log Analytics workspace](../../../azure-monitor/logs/log-analytics-overview.md) destination, you enable more features within Azure Monitor generally and for your Communications Services. Log Analytics is a tool within Azure portal used to create, edit, and run [queries](../../../azure-monitor/logs/queries.md) with data in your Azure Monitor logs and metrics and [Workbooks](../../../azure-monitor/visualize/workbooks-overview.md), [alerts](../../../azure-monitor/alerts/alerts-log.md), [notification actions](../../../azure-monitor/alerts/action-groups.md), [REST API access](/rest/api/loganalytics/), and many others.
-For your Communications Services logs, we've provided a useful [default query pack](../../../azure-monitor/logs/query-packs.md#default-query-pack) to provide an initial set of insights to quickly analyze and understand your data. These query packs are described here: [Log Analytics for Communications Services](log-analytics.md).
+For your Communications Services logs, we've provided a useful [default query pack](../../../azure-monitor/logs/query-packs.md#default-query-pack) to provide an initial set of insights to quickly analyze and understand your data. These query packs are described here: [Log Analytics for Communications Services](query-call-logs.md).
communication-services Call Automation Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/analytics/logs/call-automation-logs.md
+
+ Title: Azure Communication Services Call Automation logs
+
+description: Learn about logging for Azure Communication Services Call Automation.
++++ Last updated : 03/21/2023+++++
+# Azure Communication Services Call Automation Logs
+
+Azure Communication Services offers logging capabilities that you can use to monitor and debug your Communication Services solution. These capabilities can be configured through the Azure portal.
+
+> [!IMPORTANT]
+> The following refers to logs enabled through [Azure Monitor](../../../../azure-monitor/overview.md) (see also [FAQ](../../../../azure-monitor/faq.yml)). To enable these logs for your Communications Services, see: [Enable logging in Diagnostic Settings](../enable-logging.md)
+
+## Resource log categories
+
+Communication Services offers the following types of logs that you can enable:
+
+* **Usage logs** - provides usage data associated with each billed service offering
+* **Call Automation operational logs** - provides operational information on Call Automation API requests. These logs can be used to identify failure points, query all requests made in a call (using Correlation ID or Server Call ID), or query all requests made by a specific service application in the call (using Participant ID), as the sketch following this list shows.
+
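+To trace all requests made in a single call, a query along the following lines can serve as a starting point. This is a sketch assuming a Log Analytics workspace destination; the `ACSCallAutomationIncomingOperations` table name is an assumption to verify in your workspace.
+
+```kusto
+// Sketch: list every Call Automation API request in one call, in order,
+// keyed on the correlation ID.
+ACSCallAutomationIncomingOperations
+| where CorrelationId == "<correlation-id>"
+| project TimeGenerated, OperationName, SubOperationName, ResultType, ResultSignature, DurationMs, ParticipantId
+| order by TimeGenerated asc
+```
+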
+## Usage logs schema
+
+| Property | Description |
+| -- | |
+| `Timestamp` | The timestamp (UTC) of when the log was generated. |
+| `Operation Name` | The operation associated with log record. |
+| `Operation Version` | The `api-version` associated with the operation, if the operationName was performed using an API. If there's no API that corresponds to this operation, the version represents the version of that operation in case the properties associated with the operation change in the future. |
+| `Category` | The log category of the event. Category is the granularity at which you can enable or disable logs on a particular resource. The properties that appear within the properties blob of an event are the same within a particular log category and resource type. |
+| `Correlation ID` | The ID for correlated events. Can be used to identify correlated events between multiple tables. |
+| `Properties` | Other data applicable to various modes of Communication Services. |
+| `Record ID` | The unique ID for a given usage record. |
+| `Usage Type` | The mode of usage. (for example, Chat, PSTN, NAT, etc.) |
+| `Unit Type` | The type of unit that usage is based off for a given mode of usage. (for example, minutes, megabytes, messages, etc.). |
+| `Quantity` | The number of units used or consumed for this record. |
+
+## Call Automation operational logs
+
+| Property | Description |
+| -- | |
+| `TimeGenerated` | The timestamp (UTC) of when the log was generated. |
+| `OperationName` | The operation associated with log record. |
+| `CorrelationID` | The identifier to identify a call and correlate events for a unique call. |
+| `OperationVersion` | The `api-version` associated with the operation, if the `operationName` was performed using an API. If there's no API that corresponds to this operation, the version represents the version of that operation in case the properties associated with the operation change in the future. |
+| `Category` | The log category of the event. Category is the granularity at which you can enable or disable logs on a particular resource. The properties that appear within the properties blob of an event are the same within a particular log category and resource type. |
+| `ResultType` | The status of the operation. |
+| `ResultSignature` | The sub status of the operation. If this operation corresponds to a REST API call, this field is the HTTP status code of the corresponding REST call. |
+| `DurationMs` | The duration of the operation in milliseconds. |
+| `CallerIpAddress` | The caller IP address, if the operation corresponds to an API call that would come from an entity with a publicly available IP address. |
+| `Level` | The severity level of the event. |
+| `URI` | The URI of the request. |
+| `CallConnectionId` | ID representing the call connection, if available. This ID is different for each participant and is used to identify their connection to the call. |
+| `ServerCallId` | A unique ID to identify a call. |
+| `SDKVersion` | SDK version used for the request. |
+| `SDKType` | The SDK type used for the request. |
+| `ParticipantId` | ID to identify the call participant that made the request. |
+| `SubOperationName` | Used to identify the subtype of a media operation (play, recognize). |
communication-services Chat Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/analytics/logs/chat-logs.md
+
+ Title: Azure Communication Services chat logs
+
+description: Learn about logging for Azure Communication Services chat.
++++ Last updated : 03/21/2023+++++
+# Azure Communication Services chat logs
+
+Azure Communication Services offers logging capabilities that you can use to monitor and debug your Communication Services solution. These capabilities can be configured through the Azure portal.
+
+> [!IMPORTANT]
+> The following refers to logs enabled through [Azure Monitor](../../../../azure-monitor/overview.md) (see also [FAQ](../../../../azure-monitor/faq.yml)). To enable these logs for your Communications Services, see: [Enable logging in Diagnostic Settings](../enable-logging.md)
+
+## Resource log categories
+
+Communication Services offers the following types of logs that you can enable:
+
+* **Usage logs** - provides usage data associated with each billed service offering
+* **Authentication operational logs** - provides basic information related to the Authentication service
+* **Chat operational logs** - provides basic information related to the chat service
+
+## Usage logs schema
+
+| Property | Description |
+| -- | |
+| `Timestamp` | The timestamp (UTC) of when the log was generated. |
+| `Operation Name` | The operation associated with log record. |
+| `Operation Version` | The `api-version` associated with the operation, if the operationName was performed using an API. If there's no API that corresponds to this operation, the version represents the version of that operation in case the properties associated with the operation change in the future. |
+| `Category` | The log category of the event. Category is the granularity at which you can enable or disable logs on a particular resource. The properties that appear within the properties blob of an event are the same within a particular log category and resource type. |
+| `Correlation ID` | The ID for correlated events. Can be used to identify correlated events between multiple tables. |
+| `Properties` | Other data applicable to various modes of Communication Services. |
+| `Record ID` | The unique ID for a given usage record. |
+| `Usage Type` | The mode of usage. (for example, Chat, PSTN, NAT, etc.) |
+| `Unit Type` | The type of unit that usage is based off for a given mode of usage. (for example, minutes, megabytes, messages, etc.). |
+| `Quantity` | The number of units used or consumed for this record. |
+
+## Authentication operational logs
+
+| Property | Description |
+| -- | |
+| `TimeGenerated` | The timestamp (UTC) of when the log was generated. |
+| `OperationName` | The operation associated with log record. |
+| `CorrelationID` | The ID for correlated events. Can be used to identify correlated events between multiple tables. |
+| `OperationVersion` | The `api-version` associated with the operation, if the `operationName` was performed using an API. If there's no API that corresponds to this operation, the version represents the version of that operation in case the properties associated with the operation change in the future. |
+| `Category` | The log category of the event. Category is the granularity at which you can enable or disable logs on a particular resource. The properties that appear within the properties blob of an event are the same within a particular log category and resource type. |
+| `ResultType` | The status of the operation. |
+| `ResultSignature` | The sub-status of the operation. If this operation corresponds to a REST API call, this field is the HTTP status code of the corresponding REST call. |
+| `DurationMs` | The duration of the operation in milliseconds. |
+| `CallerIpAddress` | The caller IP address, if the operation corresponds to an API call that would come from an entity with a publicly available IP address. |
+| `Level` | The severity level of the event. |
+| `URI` | The URI of the request. |
+| `SdkType` | The SDK type used in the request. |
+| `PlatformType` | The platform type used in the request. |
+| `Identity` | The identity of Azure Communication Services or Teams user related to the operation. |
+| `Scopes` | The Communication Services scopes present in the access token. |
+
+## Chat operational logs
+
+| Property | Description |
+| -- | |
+| `TimeGenerated` | The timestamp (UTC) of when the log was generated. |
+| `OperationName` | The operation associated with log record. |
+| `CorrelationID` | The ID for correlated events. Can be used to identify correlated events between multiple tables. |
+| `OperationVersion` | The `api-version` associated with the operation, if the `operationName` was performed using an API. If there's no API that corresponds to this operation, the version represents the version of that operation in case the properties associated with the operation change in the future. |
+| `Category` | The log category of the event. Category is the granularity at which you can enable or disable logs on a particular resource. The properties that appear within the properties blob of an event are the same within a particular log category and resource type. |
+| `ResultType` | The status of the operation. |
+| `ResultSignature` | The sub status of the operation. If this operation corresponds to a REST API call, this field is the HTTP status code of the corresponding REST call. |
+| `ResultDescription` | The static text description of this operation. |
+| `DurationMs` | The duration of the operation in milliseconds. |
+| `CallerIpAddress` | The caller IP address, if the operation corresponds to an API call that would come from an entity with a publicly available IP address. |
+| `Level` | The severity level of the event. |
+| `URI` | The URI of the request. |
+| `UserId` | The request sender's user ID. |
+| `ChatThreadId` | The chat thread ID associated with the request. |
+| `ChatMessageId` | The chat message ID associated with the request. |
+| `SdkType` | The SDK type used in the request. |
+| `PlatformType` | The platform type used in the request. |
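+
+As a hedged example of how these fields can be put to work, the following Kusto sketch surfaces failing chat operations and the HTTP status codes behind them. The `ACSChatIncomingOperations` table name is an assumption; verify it in your Log Analytics workspace.
+
+```kusto
+// Sketch: rank failed chat operations by operation name and HTTP status.
+ACSChatIncomingOperations
+| where ResultType == "Failed"
+| summarize FailedRequests = count() by OperationName, ResultSignature
+| order by FailedRequests desc
+```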
communication-services Email Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/analytics/logs/email-logs.md
+
+ Title: Azure Communication Services email logs
+
+description: Learn about logging for Azure Communication Services email.
++++ Last updated : 03/21/2023+++++
+# Azure Communication Services email logs
+
+Azure Communication Services offers logging capabilities that you can use to monitor and debug your Communication Services solution. These capabilities can be configured through the Azure portal.
+
+> [!IMPORTANT]
+> The following refers to logs enabled through [Azure Monitor](../../../../azure-monitor/overview.md) (see also [FAQ](../../../../azure-monitor/faq.yml)). To enable these logs for your Communications Services, see: [Enable logging in Diagnostic Settings](../enable-logging.md)
+
+## Resource log categories
+
+Communication Services offers the following types of logs that you can enable:
+
+* **Usage logs** - provides usage data associated with each billed service offering
+* **Email Send Mail operational logs** - provides detailed information related to the Email service send mail requests.
+* **Email Status Update operational logs** - provides message and recipient level delivery status updates related to the Email service send mail requests.
+* **Email User Engagement operational logs** - provides information related to 'open' and 'click' user engagement metrics for messages sent from the Email service.
+
+## Usage logs schema
+
+| Property | Description |
+| -- | |
+| `Timestamp` | The timestamp (UTC) of when the log was generated. |
+| `Operation Name` | The operation associated with log record. |
+| `Operation Version` | The `api-version` associated with the operation, if the operationName was performed using an API. If there's no API that corresponds to this operation, the version represents the version of that operation in case the properties associated with the operation change in the future. |
+| `Category` | The log category of the event. Category is the granularity at which you can enable or disable logs on a particular resource. The properties that appear within the properties blob of an event are the same within a particular log category and resource type. |
+| `Correlation ID` | The ID for correlated events. Can be used to identify correlated events between multiple tables. |
+| `Properties` | Other data applicable to various modes of Communication Services. |
+| `Record ID` | The unique ID for a given usage record. |
+| `Usage Type` | The mode of usage. (for example, Chat, PSTN, NAT, etc.) |
+| `Unit Type` | The type of unit that usage is based off for a given mode of usage. (for example, minutes, megabytes, messages, etc.). |
+| `Quantity` | The number of units used or consumed for this record. |
+
+## Email Send Mail operational logs
+
+| Property | Description |
+| -- | |
+| `TimeGenerated` | The timestamp (UTC) of when the log was generated. |
+| `Location` | The region where the operation was processed. |
+| `OperationName` | The operation associated with log record. |
+| `OperationVersion` | The `api-version` associated with the operation, if the `operationName` was performed using an API. If there's no API that corresponds to this operation, the version represents the version of that operation in case the properties associated with the operation change in the future. |
+| `Category` | The log category of the event. Category is the granularity at which you can enable or disable logs on a particular resource. The properties that appear within the properties blob of an event are the same within a particular log category and resource type. |
+| `CorrelationID` | The ID for correlated events. Can be used to identify correlated events between multiple tables. For all Email operational logs, the CorrelationId is mapped to the MessageId, which is returned from a successful SendMail request. |
+| `Size` | Represents the total size in megabytes of the email body, subject, headers and attachments. |
+| `ToRecipientsCount` | The total number of unique email addresses on the To line. |
+| `CcRecipientsCount` | The total number of unique email addresses on the Cc line. |
+| `BccRecipientsCount` | The total number of unique email addresses on the Bcc line. |
+| `UniqueRecipientsCount` | The deduplicated total recipient count for the To, Cc, and Bcc address fields. |
+| `AttachmentsCount` | The total number of attachments. |
+
+## Email Status Update operational logs
+
+| Property | Description |
+| -- | |
+| `TimeGenerated` | The timestamp (UTC) of when the log was generated. |
+| `Location` | The region where the operation was processed. |
+| `OperationName` | The operation associated with log record. |
+| `OperationVersion` | The `api-version` associated with the operation, if the `operationName` was performed using an API. If there's no API that corresponds to this operation, the version represents the version of that operation in case the properties associated with the operation change in the future. |
+| `Category` | The log category of the event. Category is the granularity at which you can enable or disable logs on a particular resource. The properties that appear within the properties blob of an event are the same within a particular log category and resource type. |
+| `CorrelationID` | The ID for correlated events. Can be used to identify correlated events between multiple tables. For all Email operational logs, the CorrelationId is mapped to the MessageId, which is returned from a successful SendMail request. |
+| `RecipientId` | The email address for the targeted recipient. If this is a message-level event, the property will be empty. |
+| `DeliveryStatus` | The terminal status of the message. |
+
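+Because the `CorrelationID` in these logs maps to the `MessageId` returned by a send request, the send and status logs can be stitched together per message. The following Kusto sketch assumes the logs flow to a Log Analytics workspace; the `ACSEmailSendMailOperational` and `ACSEmailStatusUpdateOperational` table names are assumptions to verify there.
+
+```kusto
+// Sketch: follow each send-mail request to its terminal delivery status,
+// joining the two log categories on the correlation ID (the message ID).
+ACSEmailSendMailOperational
+| join kind=leftouter (
+    ACSEmailStatusUpdateOperational
+    | project CorrelationId, RecipientId, DeliveryStatus
+) on CorrelationId
+| project TimeGenerated, CorrelationId, UniqueRecipientsCount, RecipientId, DeliveryStatus
+```
+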
+## Email User Engagement operational logs
+
+| Property | Description |
+| -- | |
+| `TimeGenerated` | The timestamp (UTC) of when the log was generated. |
+| `Location` | The region where the operation was processed. |
+| `OperationName` | The operation associated with log record. |
+| `OperationVersion` | The `api-version` associated with the operation, if the `operationName` was performed using an API. If there's no API that corresponds to this operation, the version represents the version of that operation in case the properties associated with the operation change in the future. |
+| `Category` | The log category of the event. Category is the granularity at which you can enable or disable logs on a particular resource. The properties that appear within the properties blob of an event are the same within a particular log category and resource type. |
+| `CorrelationID` | The ID for correlated events. Can be used to identify correlated events between multiple tables. For all Email operational logs, the CorrelationId is mapped to the MessageId, which is returned from a successful SendMail request. |
+| `RecipientId` | The email address for the targeted recipient. If this is a message-level event, the property will be empty. |
+| `EngagementType` | The type of user engagement being tracked. |
+| `EngagementContext` | The context represents what the user interacted with. |
+| `UserAgent` | The user agent string from the client. |
communication-services Network Traversal Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/analytics/logs/network-traversal-logs.md
+
+ Title: Azure Communication Services Network Traversal logs
+
+description: Learn about logging for Azure Communication Services Network Traversal.
++++ Last updated : 03/21/2023+++++
+# Azure Communication Services Network Traversal Logs
+
+Azure Communication Services offers logging capabilities that you can use to monitor and debug your Communication Services solution. These capabilities can be configured through the Azure portal.
+
+> [!IMPORTANT]
+> The following refers to logs enabled through [Azure Monitor](../../../../azure-monitor/overview.md) (see also [FAQ](../../../../azure-monitor/faq.yml)). To enable these logs for your Communications Services, see: [Enable logging in Diagnostic Settings](../enable-logging.md)
+
+## Resource log categories
+
+Communication Services offers the following types of logs that you can enable:
+
+* **Usage logs** - provides usage data associated with each billed service offering
+* **Network Traversal operational logs** - provides basic information related to the Network Traversal service
+
+## Usage logs schema
+
+| Property | Description |
+| -- | |
+| `Timestamp` | The timestamp (UTC) of when the log was generated. |
+| `Operation Name` | The operation associated with log record. |
+| `Operation Version` | The `api-version` associated with the operation, if the operationName was performed using an API. If there's no API that corresponds to this operation, the version represents the version of that operation in case the properties associated with the operation change in the future. |
+| `Category` | The log category of the event. Category is the granularity at which you can enable or disable logs on a particular resource. The properties that appear within the properties blob of an event are the same within a particular log category and resource type. |
+| `Correlation ID` | The ID for correlated events. Can be used to identify correlated events between multiple tables. |
+| `Properties` | Other data applicable to various modes of Communication Services. |
+| `Record ID` | The unique ID for a given usage record. |
+| `Usage Type` | The mode of usage. (for example, Chat, PSTN, NAT, etc.) |
+| `Unit Type` | The type of unit that usage is based off for a given mode of usage. (for example, minutes, megabytes, messages, etc.). |
+| `Quantity` | The number of units used or consumed for this record. |
+
+## Network Traversal operational logs
+
+| Dimension | Description|
+||--|
+| `TimeGenerated` | The timestamp (UTC) of when the log was generated. |
+| `OperationName` | The operation associated with log record. |
+| `CorrelationId` | The ID for correlated events. Can be used to identify correlated events between multiple tables. |
+| `OperationVersion` | The API-version associated with the operation or version of the operation (if there's no API version). |
+| `Category` | The log category of the event. Logs with the same log category and resource type will have the same properties fields. |
+| `ResultType` | The status of the operation (for example, Succeeded or Failed). |
+| `ResultSignature` | The sub status of the operation. If this operation corresponds to a REST API call, this field is the HTTP status code of the corresponding REST call. |
+| `DurationMs` | The duration of the operation in milliseconds. |
+| `Level` | The severity level of the operation. |
+| `URI` | The URI of the request. |
+| `Identity` | The request sender's identity, if provided. |
+| `SdkType` | The SDK type being used in the request. |
+| `PlatformType` | The platform type being used in the request. |
+| `RouteType` | The routing methodology used to select the ICE server relative to the client (for example, Any or Nearest). |
+
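+As an illustration, the following Kusto sketch compares request volume and latency by routing method. The `ACSNetworkTraversalIncomingOperations` table name is an assumption; verify it in your Log Analytics workspace.
+
+```kusto
+// Sketch: compare request counts and average duration by ICE routing method.
+ACSNetworkTraversalIncomingOperations
+| summarize Requests = count(), AvgDurationMs = avg(DurationMs) by RouteType, ResultType
+| order by Requests desc
+```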
communication-services Recording Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/analytics/logs/recording-logs.md
+
+ Title: Azure Communication Services - Call Recording summary logs
+
+description: Learn about logging for Azure Communication Services Recording.
++++ Last updated : 10/27/2021+++++
+# Azure Communication Services Call Recording Logs
+
+Azure Communication Services offers logging capabilities that you can use to monitor and debug your Communication Services solution. These capabilities can be configured through the Azure portal.
+
+> [!IMPORTANT]
+> The following refers to logs enabled through [Azure Monitor](../../../../azure-monitor/overview.md) (see also [FAQ](../../../../azure-monitor/faq.yml)). To enable these logs for your Communications Services, see: [Enable logging in Diagnostic Settings](../enable-logging.md)
+
+## Resource log categories
+
+Communication Services offers the following types of logs that you can enable:
+
+* **Usage logs** - provides usage data associated with each billed service offering
+* **Call Recording Summary Logs** - provides summary information for call recordings like:
+ - Call duration.
+ - Media content (for example, audio/video, unmixed, or transcription).
+ - Format types used for the recording (for example, WAV or MP4).
+ - The reason why the recording ended.
+
+A recording file is generated at the end of a call or meeting. The recording can be initiated and stopped by either a user or an app (bot). It can also end because of a system failure.
+
+Summary logs are published after a recording is ready to be downloaded. The logs are published within the standard latency time for Azure Monitor resource logs. See [Log data ingestion time in Azure Monitor](../../../../azure-monitor/logs/data-ingestion-time.md#azure-metrics-resource-logs-activity-log).
+
+### Usage logs schema
+
+| Property | Description |
+| -- | |
+| `Timestamp` | The timestamp (UTC) of when the log was generated. |
+| `Operation Name` | The operation associated with log record. |
+| `Operation Version` | The `api-version` associated with the operation, if the operationName was performed using an API. If there's no API that corresponds to this operation, the version represents the version of that operation in case the properties associated with the operation change in the future. |
+| `Category` | The log category of the event. Category is the granularity at which you can enable or disable logs on a particular resource. The properties that appear within the properties blob of an event are the same within a particular log category and resource type. |
+| `Correlation ID` | The ID for correlated events. Can be used to identify correlated events between multiple tables. |
+| `Properties` | Other data applicable to various modes of Communication Services. |
+| `Record ID` | The unique ID for a given usage record. |
+| `Usage Type` | The mode of usage. (for example, Chat, PSTN, NAT, etc.) |
+| `Unit Type` | The type of unit that usage is based off for a given mode of usage. (for example, minutes, megabytes, messages, etc.). |
+| `Quantity` | The number of units used or consumed for this record. |
+
+### Call Recording summary logs schema
+
+| Property name | Data type | Description |
+|- |--|--|
+|`timeGenerated`|DateTime|Time stamp (UTC) of when the log was generated.|
+|`operationName`|String|Operation associated with a log record.|
+|`correlationId`|String|ID that's used to correlate events between tables.|
+|`recordingID`|String|ID for the recording that this log refers to.|
+|`category`|String|Log category of the event. Logs with the same log category and resource type have the same property fields.|
+|`resultType`|String| Status of the operation.|
+|`level`|String |Severity level of the operation.|
+|`chunkCount`|Integer|Total number of chunks created for the recording.|
+|`channelType`|String|Channel type of the recording, such as mixed or unmixed.|
+|`recordingStartTime`|DateTime|Time that the recording started.|
+|`contentType`|String|Content of the recording, such as audio only, audio/video, or transcription.|
+|`formatType`|String|File format of the recording.|
+|`recordingLength`|Double|Duration of the recording in seconds.|
+|`audioChannelsCount`|Integer|Total number of audio channels in the recording.|
+|`recordingEndReason`|String|Reason why the recording ended.|
+
+### Call Recording and example data
+
+```json
+"operationName": "Call Recording Summary",
+"operationVersion": "1.0",
+"category": "RecordingSummaryPUBLICPREVIEW",
+
+```
+A call can have one recording or many recordings, depending on how many times a recording event is triggered.
+
+For example, if an agent initiates an outbound call on a recorded line and the call drops because of a poor network signal, `callid` will have one `recordingid` value. If the agent calls back the customer, the system generates a new `callid` instance and a new `recordingid` value.
++
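+To check how many recordings each call produced, a query along these lines can help. This is a sketch assuming a Log Analytics workspace destination; the `ACSCallRecordingSummary` table name is an assumption.
+
+```kusto
+// Sketch: count distinct recordings per call and flag calls with many.
+ACSCallRecordingSummary
+| summarize Recordings = dcount(RecordingId), TotalLengthSeconds = sum(todouble(RecordingLength)) by CorrelationId
+| where Recordings > 1
+```
+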
+#### Example: Call Recording for one call to one recording
+
+```json
+"properties"
+{
+ "TimeGenerated":"2022-08-17T23:18:26.4332392Z",
+ "OperationName": "RecordingSummary",
+ "Category": "CallRecordingSummary",
+ "CorrelationId": "zzzzzz-cada-4164-be10-0000000000",
+ "ResultType": "Succeeded",
+ "Level": "Informational",
+ "RecordingId": "eyJQbGF0Zm9ybUVuZHBvaW5xxxxxxxxFmNjkwxxxxxxxxxxxxSZXNvdXJjZVNwZWNpZmljSWQiOiJiZGU5YzE3Ni05M2Q3LTRkMWYtYmYwNS0yMTMwZTRiNWNlOTgifQ",
+ "RecordingEndReason": "CallEnded",
+ "RecordingStartTime": "2022-08-16T09:07:54.0000000Z",
+ "RecordingLength": "73872.94",
+ "ChunkCount": 6,
+ "ContentType": "Audio - Video",
+ "ChannelType": "mixed",
+ "FormatType": "mp4",
+ "AudioChannelsCount": 1
+}
+```
+
+If the agent initiates a recording and then stops and restarts the recording multiple times while the call is still on, `callid` will have many `recordingid` values, depending on how many times the recording events were triggered.
+
+#### Example: Call Recording for one call to many recordings
+
+```json
+
+{
+ "TimeGenerated": "2022-08-17T23:55:46.6304762Z",
+ "OperationName": "RecordingSummary",
+ "Category": "CallRecordingSummary",
+ "CorrelationId": "xxxxxxx-cf78-4156-zzzz-0000000fa29cc",
+ "ResultType": "Succeeded",
+ "Level": "Informational",
+ "RecordingId": "eyJQbGF0Zm9ybUVuZHBxxxxxxxxxxxxjkwMC05MmEwLTRlZDYtOTcxYS1kYzZlZTkzNjU0NzciLCJSxxxxxNwZWNpZmljSWQiOiI5ZmY2ZTY2Ny04YmQyLTQ0NzAtYmRkYy00ZTVhMmUwYmNmOTYifQ",
+ "RecordingEndReason": "CallEnded",
+ "RecordingStartTime": "2022-08-17T23:55:43.3304762Z",
+ "RecordingLength": 3.34,
+ "ChunkCount": 1,
+ "ContentType": "Audio - Video",
+ "ChannelType": "mixed",
+ "FormatType": "mp4",
+ "AudioChannelsCount": 1
+}
+{
+ "TimeGenerated": "2022-08-17T23:55:56.7664976Z",
+ "OperationName": "RecordingSummary",
+ "Category": "CallRecordingSummary",
+ "CorrelationId": "xxxxxxx-cf78-4156-zzzz-0000000fa29cc",
+ "ResultType": "Succeeded",
+ "Level": "Informational",
+ "RecordingId": "eyJQbGF0Zm9ybUVuxxxxxxiOiI4NDFmNjkwMC1mMjBiLTQzNmQtYTg0Mi1hODY2YzE4M2Y0YTEiLCJSZXNvdXJjZVNwZWNpZmljSWQiOiI2YzRlZDI4NC0wOGQ1LTQxNjEtOTExMy1jYWIxNTc3YjM1ODYifQ",
+ "RecordingEndReason": "CallEnded",
+ "RecordingStartTime": "2022-08-17T23:55:54.0664976Z",
+ "RecordingLength": 2.7,
+ "ChunkCount": 1,
+ "ContentType": "Audio - Video",
+ "ChannelType": "mixed",
+ "FormatType": "mp4",
+ "AudioChannelsCount": 1
+}
+```
+
+## Next steps
+
+- Get [Call Recording insights](../insights/call-recording-insights.md)
+- Learn more about [Call Recording](../../voice-video-calling/call-recording.md).
+
communication-services Sms Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/analytics/logs/sms-logs.md
+
+ Title: Azure Communication Services SMS logs
+
+description: Learn about logging for Azure Communication Services SMS.
++++ Last updated : 04/14/2023+++++
+# Azure Communication Services SMS Logs
+
+Azure Communication Services offers logging capabilities that you can use to monitor and debug your Communication Services solution. These capabilities can be configured through the Azure portal.
+
+> [!IMPORTANT]
+> The following refers to logs enabled through [Azure Monitor](../../../../azure-monitor/overview.md) (see also [FAQ](../../../../azure-monitor/faq.yml)). To enable these logs for your Communications Services, see: [Enable logging in Diagnostic Settings](../enable-logging.md)
+
+## Prerequisites
+
+Azure Communications Services provides monitoring and analytics features via [Azure Monitor Logs overview](../../../../azure-monitor/logs/data-platform-logs.md) and [Azure Monitor Metrics](../../../../azure-monitor/essentials/data-platform-metrics.md). Each Azure resource requires its own diagnostic setting, which defines the following criteria:
+ * Categories of logs and metric data sent to the destinations defined in the setting. The available categories will vary for different resource types.
+ * One or more destinations to send the logs. Current destinations include Log Analytics workspace, Event Hubs, and Azure Storage.
+ * A single diagnostic setting can define no more than one of each of the destinations. If you want to send data to more than one of a particular destination type (for example, two different Log Analytics workspaces), then create multiple settings. Each resource can have up to five diagnostic settings.
+
+The following are instructions for configuring your Azure Monitor resource to start creating logs and metrics for your Communications Services. For detailed documentation about using Diagnostic Settings across all Azure resources, see: [Enable logging in Diagnostic Settings](../enable-logging.md)
+
+> [!NOTE]
+> Under the diagnostic setting name, select "SMS Operational" to enable the logs for SMS.
+
+## Overview
+
+SMS operational logs are records of events and activities that provide insights into your SMS API requests. They capture details about the performance and functionality of the SMS primitive, including the status of each message: whether it was successfully delivered, blocked, or failed to send.
+SMS operational logs contain information that helps identify trends and patterns and resolve issues that might be impacting performance, such as failed message deliveries or service issues. The logs include the following details (the sketch after this list shows one way to break these events down):
+ * Messages sent.
+ * Messages received.
+ * Messages delivered.
+ * Message opt-ins and opt-outs.
+
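+The following Kusto sketch breaks SMS traffic down by these event types. It assumes a Log Analytics workspace destination; the `ACSSMSIncomingOperations` table name is an assumption to verify in your workspace.
+
+```kusto
+// Sketch: count SMS events (sent, received, delivery reports) by outcome.
+ACSSMSIncomingOperations
+| summarize Events = count() by OperationName, ResultType
+| order by Events desc
+```
+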
+## Resource log categories
+
+Communication Services offers the following types of logs that you can enable:
+
+* **Usage logs** - provides usage data associated with each billed service offering
+* **SMS operational logs** - provides basic information related to the SMS service
++
+### Usage logs schema
+
+| Property | Description |
+| -- | |
+| `Timestamp` | The timestamp (UTC) of when the log was generated. |
+| `Operation Name` | The operation associated with log record. |
+| `Operation Version` | The `api-version` associated with the operation, if the operationName was performed using an API. If there's no API that corresponds to this operation, the version represents the version of that operation in case the properties associated with the operation change in the future. |
+| `Category` | The log category of the event. Category is the granularity at which you can enable or disable logs on a particular resource. The properties that appear within the properties blob of an event are the same within a particular log category and resource type. |
+| `Correlation ID` | The ID for correlated events. Can be used to identify correlated events between multiple tables. |
+| `Properties` | Other data applicable to various modes of Communication Services. |
+| `Record ID` | The unique ID for a given usage record. |
+| `Usage Type` | The mode of usage. (for example, Chat, PSTN, NAT, etc.) |
+| `Unit Type` | The type of unit that usage is based off for a given mode of usage. (for example, minutes, megabytes, messages, etc.). |
+| `Quantity` | The number of units used or consumed for this record. |
+
+### SMS operational logs
+
+| Property | Description |
+| -- | |
+| `TimeGenerated` | The timestamp (UTC) of when the log was generated. |
+| `OperationName` | The operation associated with log record. |
+| `CorrelationID` | The ID for correlated events. Can be used to identify correlated events between multiple tables. |
+| `OperationVersion` | The api-version associated with the operation, if the operationName was performed using an API. If there's no API that corresponds to this operation, the version represents the version of that operation in case the properties associated with the operation change in the future. |
+| `Category` | The log category of the event. Category is the granularity at which you can enable or disable logs on a particular resource. The properties that appear within the properties blob of an event are the same within a particular log category and resource type. |
+| `ResultType` | The status of the operation. |
+| `ResultSignature` | The sub status of the operation. If this operation corresponds to a REST API call, this field is the HTTP status code of the corresponding REST call. |
+| `ResultDescription` | The static text description of this operation. |
+| `DurationMs` | The duration of the operation in milliseconds. |
+| `CallerIpAddress` | The caller IP address, if the operation corresponds to an API call that would come from an entity with a publicly available IP address. |
+| `Level` | The severity level of the event. |
+| `URI` | The URI of the request. |
+| `OutgoingMessageLength` | The number of characters in the outgoing message. |
+| `IncomingMessageLength` | The number of characters in the incoming message. |
+| `DeliveryAttempts` | The number of attempts made to deliver this message. |
+| `PhoneNumber` | The phone number the SMS message is being sent from. |
+| `SdkType` | The SDK type used in the request. |
+| `PlatformType` | The platform type used in the request. |
+| `Method` | The method used in the request. |
+|`NumberType`| The type of number the SMS message is being sent from. It can be **LongCodeNumber**, **ShortCodeNumber**, or **DynamicAlphaSenderID**.|
+|`MessageID`|Represents the unique message ID generated for every outgoing and incoming message. It can be found in the SMS API response object.|
+|`Country`|Represents the country or countries where the SMS messages were sent to or received from.|
+
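+Building on the fields above, the following sketch flags messages that needed more than one delivery attempt, grouped by destination country. The `ACSSMSIncomingOperations` table name is an assumption.
+
+```kusto
+// Sketch: spot delivery reports where multiple attempts were needed.
+ACSSMSIncomingOperations
+| where OperationName == "SMSDeliveryReportsReceived" and DeliveryAttempts > 1
+| summarize Messages = count() by Country, NumberType
+```
+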
+#### Example SMS sent log
+
+```json
+
+ [
+ {
+ "TimeGenerated": "2022-09-26T15:58:30.100Z",
+ "OperationName": "SMSMessagesSent",
+ "CorrelationId": "dDRmubfpNZZZZZnxBtw3Q.0",
+ "OperationVersion": "2020-07-20-preview1",
+ "Category":"SMSOperational",
+ "ResultType": "Succeeded",
+ "ResultSignature": 202,
+ "DurationMs": 130,
+ "CallerIpAddress": "127.0.0.1",
+ "Level": "Informational",
+ "URI": "https://sms-e2e-prod.communication.azure.com/sms?api-version=2020-07-20-preview1",
+ "OutgoingMessageLength": 151,
+ "IncomingMessageLength": 0,
+ "DeliveryAttempts": 0,
+ "PhoneNumber": "+18445791704",
+ "NumberType": "LongCodeNumber",
+ "SdkType": "azsdk-net-Communication.Sms",
+ "PlatformType": "Microsoft Windows 10.0.17763",
+ "Method": "POST",
+ "MessageId": "Outgoing_20230118181300ff00e5c9-876d-4958-86e3-4637484fe5bd_noam",
+ "Country": "US"
+ }
+ ]
+
+```
+
+#### Example SMS delivery report log
+```json
+
+ [
+ {
+ "TimeGenerated": "2022-09-26T15:58:30.200Z",
+ "OperationName": "SMSDeliveryReportsReceived",
+ "CorrelationId": "tl8WpUTESTSTSTccYadXJm.0",
+ "Category":"SMSOperational",
+ "ResultType": "Succeeded",
+ "ResultSignature": 200,
+ "DurationMs": 130,
+ "CallerIpAddress": "127.0.0.1",
+ "Level": "Informational",
+ "URI": "https://global.smsgw.prod.communication.microsoft.com/rtc/telephony/sms/DeliveryReport",
+ "OutgoingMessageLength": 0,
+ "IncomingMessageLength": 0,
+ "DeliveryAttempts": 1,
+ "PhoneNumber": "+18445791704",
+ "NumberType": "LongCodeNumber",
+ "SdkType": "",
+ "PlatformType": "",
+ "Method": "POST",
+ "MessageId": "Outgoing_20230118181300ff00e5c9-876d-4958-86e3-4637484fe5bd_noam",
+ "Country": "US"
+ }
+ ]
+
+```
+
+#### Example SMS received log
+```json
+
+ [
+ {
+ "TimeGenerated": "2022-09-27T15:58:30.200Z",
+ "OperationName": "SMSMessagesReceived",
+ "CorrelationId": "e2KFTSTSTI/5PTx4ZZB.0",
+ "Category":"SMSOperational",
+ "ResultType": "Succeeded",
+ "ResultSignature": 200,
+ "DurationMs": 130,
+ "CallerIpAddress": "127.0.0.1",
+ "Level": "Informational",
+ "URI": "https://global.smsgw.prod.communication.microsoft.com/rtc/telephony/sms/inbound",
+ "OutgoingMessageLength": 0,
+ "IncomingMessageLength": 110,
+ "DeliveryAttempts": 0,
+ "PhoneNumber": "+18445791704",
+ "NumberType": "LongCodeNumber",
+ "SdkType": "",
+ "PlatformType": "",
+ "Method": "POST",
+ "MessageId": "Incoming_2023011818121211c6ee31-63fe-477c-8d51-f800543c6694",
+ "Country": "US"
+ }
+ ]
+
+```
+
+For more information, see the [Azure Monitor FAQ](../../../../azure-monitor/faq.yml).
communication-services Voice And Video Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/analytics/logs/voice-and-video-logs.md
+
+ Title: Azure Communication Services - voice and video logs
+
+description: Learn about logging for Azure Communication Services Voice and Video.
++++ Last updated : 03/21/2023+++++
+# Azure Communication Services Voice and Video Logs
+
+Azure Communication Services offers logging capabilities that you can use to monitor and debug your Communication Services solution. These capabilities can be configured through the Azure portal.
+
+> [!IMPORTANT]
+> The following refers to logs enabled through [Azure Monitor](../../../../azure-monitor/overview.md) (see also [FAQ](../../../../azure-monitor/faq.yml)). To enable these logs for your Communications Services, see: [Enable logging in Diagnostic Settings](../enable-logging.md)
+
+## Data Concepts
+The following are high-level descriptions of data concepts specific to voice and video calling. These concepts are important to review in order to understand the meaning of the data captured in the logs.
+
+### Entities and IDs
+
+A *Call*, as represented in the data, is an abstraction depicted by the `correlationId`. `CorrelationId`s are unique per Call, and are time-bound by `callStartTime` and `callDuration`. Every Call is an event that contains data from two or more *Endpoints*, which represent the various human, bot, or server participants in the Call.
+
+A *Participant* (`participantId`) is present only when the Call is a *Group* Call, as it represents the connection between an Endpoint and the server.
+
+An *Endpoint* is the most unique entity, represented by `endpointId`. `EndpointType` tells you whether the Endpoint represents a human user (PSTN, VoIP), a Bot (Bot), or the server that is managing multiple Participants within a Call. When an `endpointType` is `"Server"`, the Endpoint is not assigned a unique ID. By analyzing `endpointType` and the number of `endpointIds`, you can determine how many users and other non-human Participants (bots, servers) join a Call. Our native SDKs (Android, iOS) reuse the same `endpointId` for a user across multiple Calls, thus enabling an understanding of experience across sessions. This differs from web-based Endpoints, which always generate a new `endpointId` for each new Call.
+
+A *Stream* is the most granular entity, as there is one Stream per direction (inbound/outbound) and `mediaType` (for example, audio and video).
+
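+Putting these entities together, the following Kusto sketch profiles each call by its participant, endpoint, and stream counts. It assumes a Log Analytics workspace destination; the `ACSCallSummary` and `ACSCallDiagnostics` table names are assumptions to verify there.
+
+```kusto
+// Sketch: one row per call with counts of the entities described above.
+ACSCallSummary
+| summarize Participants = dcountif(ParticipantId, isnotempty(ParticipantId)),
+    Endpoints = dcount(EndpointId) by CorrelationId, CallType
+| join kind=leftouter (
+    ACSCallDiagnostics
+    | summarize Streams = count() by CorrelationId
+) on CorrelationId
+| project CorrelationId, CallType, Participants, Endpoints, Streams
+```
+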
+## Data Definitions
+
+### Usage logs schema
+
+| Property | Description |
+| -- | |
+| `Timestamp` | The timestamp (UTC) of when the log was generated. |
+| `Operation Name` | The operation associated with log record. |
+| `Operation Version` | The `api-version` associated with the operation, if the operationName was performed using an API. If there's no API that corresponds to this operation, the version represents the version of that operation in case the properties change in the future. |
+| `Category` | The log category of the event. Category is the granularity at which you can enable or disable logs on a particular resource. The properties that appear within the properties blob of an event are the same within a particular log category and resource type. |
+| `Correlation ID` | The ID for correlated events. Can be used to identify correlated events between multiple tables. |
+| `Properties` | Other data applicable to various modes of Communication Services. |
+| `Record ID` | The unique ID for a given usage record. |
+| `Usage Type` | The mode of usage. (for example, Chat, PSTN, NAT, etc.) |
+| `Unit Type` | The type of unit that usage is based off for a given mode of usage. (for example, minutes, megabytes, messages, etc.). |
+| `Quantity` | The number of units used or consumed for this record. |
+
+### Call Summary log schema
+The Call Summary Log contains data to help you identify key properties of all Calls. A different Call Summary Log is created for each `participantId` (`endpointId` in the case of P2P calls) in the Call.
+
+> [!IMPORTANT]
+> Participant information in the call summary log varies based on the participant's tenant. The SDK and OS versions are redacted if the participant isn't within the same tenant as the Azure Communication Services resource (also referred to as cross-tenant). Cross-tenant participants are classified as external users invited by a resource tenant to join and collaborate during a call.
+
+| Property | Description |
+|-|-|
+| `time` | The timestamp (UTC) of when the log was generated. |
+| `operationName` | The operation associated with the log record. |
+| `operationVersion` | The api-version associated with the operation, if the `operationName` was performed using an API. If there is no API that corresponds to this operation, the version represents the version of that operation in case the properties associated with the operation change in the future. |
+| `category` | The log category of the event. Category is the granularity at which you can enable or disable logs on a particular resource. The properties that appear within the `properties` blob of an event are the same within a particular log category and resource type. |
+| `correlationId` | `correlationId` is the unique ID for a Call. The `correlationId` identifies correlated events from all of the participants and endpoints that connect during a single Call, and it can be used to join data from different logs. If you ever need to open a support case with Microsoft, the `correlationId` is used to easily identify the Call you're troubleshooting. |
+| `identifier` | This value is the unique ID for the user. The identity can be an Azure Communication Services user, Azure AD user ID, Teams anonymous user ID, or Teams bot ID. You can use this ID to correlate user events across different logs. |
+| `callStartTime` | A timestamp for the start of the call, based on the first attempted connection from any Endpoint. |
+| `callDuration` | The duration of the Call expressed in seconds, based on the first attempted connection and end of the last connection between two endpoints. |
+| `callType` | Contains either `"P2P"` or `"Group"`. A `"P2P"` Call is a direct 1:1 connection between only two, non-server endpoints. A `"Group"` Call is a Call that has more than two endpoints or is created as `"Group"` Call prior to the connection. |
+| `teamsThreadId` | This ID is only relevant when the Call is organized as a Microsoft Teams meeting, representing the Microsoft Teams and Azure Communication Services interoperability use case. This ID is exposed in operational logs. You can also get this ID through the Chat APIs. |
+| `participantId` | This ID is generated to represent the two-way connection between a `"Participant"` Endpoint (`endpointType` = `"Server"`) and the server. When `callType` = `"P2P"`, there is a direct connection between two endpoints, and no `participantId` is generated. |
+| `participantStartTime` | Timestamp for beginning of the first connection attempt by the participant. |
+| `participantDuration` | The duration of each Participant connection in seconds, from `participantStartTime` to the timestamp when the connection is ended. |
+| `participantEndReason` | Contains Calling SDK error codes emitted by the SDK when relevant for each `participantId`. See Calling SDK error codes. |
+| `endpointId` | Unique ID that represents each Endpoint connected to the call, where the Endpoint type is defined by `endpointType`. When the value is `null`, the connected entity is the Communication Services server (`endpointType`= `"Server"`). `EndpointId` can sometimes persist for the same user across multiple calls (`correlationId`) for native clients. The number of `endpointId`s determines the number of Call Summary Logs. A distinct Summary Log is created for each `endpointId`. |
+| `endpointType` | This value describes the properties of each Endpoint connected to the Call. Can contain `"Server"`, `"VOIP"`, `"PSTN"`, `"BOT"`, or `"Unknown"`. |
+| `sdkVersion` | Version string for the Communication Services Calling SDK version used by each relevant Endpoint. (Example: `"1.1.00.20212500"`) |
+| `osVersion` | String that represents the operating system and version of each Endpoint device. |
+| `participantTenantId` | The ID of the Microsoft tenant associated with the participant. This field is used to guide cross-tenant redaction. |
++
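+Putting the summary schema to work, a sketch like the following lists every participant connection for a single Call. The `ACSCallSummary` table name matches the queries later in this article; the `correlationId` value is a placeholder, and column names are assumed to mirror the schema above:
+
+```
+// Sketch: list all participant connections for a single call
+ACSCallSummary
+| where CorrelationId == "<correlation-id>"
+| project TimeGenerated, ParticipantId, EndpointId, EndpointType, SdkVersion, OsVersion
+| order by TimeGenerated asc
+```
+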
+### Call Diagnostic log schema
+Call Diagnostic Logs provide important information about the Endpoints and the media transfers for each Participant, as well as measurements that help you understand quality issues.
+For each Endpoint within a Call, a distinct Call Diagnostic Log is created for outbound media streams (audio, video, and so on) between Endpoints.
+In a P2P Call, each log contains data relating to each of the outbound streams associated with each Endpoint. In Group Calls, `participantId` serves as the key identifier to join the related outbound logs into a distinct Participant connection. Call Diagnostic Logs remain intact and are the same regardless of the participant's tenant.
+
+> [!NOTE]
+> In this document, P2P and Group Calls are within the same tenant by default. All cross-tenant call scenarios are specified accordingly throughout the document.
+
+| Property | Description |
+||-|
+| `operationName` | The operation associated with the log record. |
+| `operationVersion` | The `api-version` associated with the operation, if the `operationName` was performed using an API. If there is no API that corresponds to this operation, the version represents the version of that operation in case the properties associated with the operation change in the future. |
+| `category` | The log category of the event. Category is the granularity at which you can enable or disable logs on a particular resource. The properties that appear within the `properties` blob of an event are the same within a particular log category and resource type. |
+| `correlationId` | The `correlationId` identifies correlated events from all of the participants and endpoints that connect during a single Call. `correlationId` is the unique ID for a Call. If you ever need to open a support case with Microsoft, the `correlationId` can be used to identify the Call you're troubleshooting. |
+| `participantId` | This ID is generated to represent the two-way connection between a "Participant" Endpoint (`endpointType` = `"Server"`) and the server. When `callType` = `"P2P"`, there is a direct connection between two endpoints, and no `participantId` is generated. |
+| `identifier` | This value is the unique ID for the user. The identity can be an Azure Communication Services user, Azure AD user ID, Teams object ID, or Teams bot ID. You can use this ID to correlate user events across different logs. |
+| `endpointId` | Unique ID that represents each Endpoint connected to the call, with Endpoint type defined by `endpointType`. When the value is `null`, it means that the connected entity is the Communication Services server. `EndpointId` can persist for the same user across multiple calls (`correlationId`) for native clients but are unique for every Call when the client is a web browser. |
+| `endpointType` | This value describes the properties of each `endpointId`. Can contain `"Server"`, `"VOIP"`, `"PSTN"`, `"BOT"`, `"Voicemail"`, `"Anonymous"`, or `"Unknown"`. |
+| `mediaType` | This string value describes the type of media being transmitted between endpoints within each stream. Possible values include `"Audio"`, `"Video"`, `"VBSS"` (Video-Based Screen Sharing), and `"AppSharing"`. |
+| `streamId` | Non-unique integer which, together with `mediaType`, can be used to uniquely identify streams of the same `participantId`.|
+| `transportType` | String value which describes the network transport protocol per `participantId`. Can contain `"UDP"`, `"TCP"`, or `"Unrecognized"`. `"Unrecognized"` indicates that the system could not determine if the `transportType` was TCP or UDP. |
+| `roundTripTimeAvg` | This metric is the average time it takes to get an IP packet from one Endpoint to another within a `participantDuration`. This network propagation delay is related to the physical distance between the two points, the speed of light, and any overhead taken by the various routers in between. The latency is measured as one-way or round-trip time (RTT). Its value is expressed in milliseconds, and an RTT greater than 500 ms should be considered as negatively impacting Call quality. |
+| `roundTripTimeMax` | The maximum RTT (ms) measured per media stream during a `participantDuration` in a group Call or `callDuration` in a P2P Call. |
+| `jitterAvg` | This metric is the average change in delay between successive packets. Azure Communication Services can adapt to some levels of jitter through buffering. It's only when the jitter exceeds the buffering, which is approximately at `jitterAvg` >30 ms, that a negative quality impact is likely occurring. The packets arriving at different speeds cause a speaker's voice to sound robotic. This metric is measured per media stream over the `participantDuration` in a group Call or `callDuration` in a P2P Call. |
+| `jitterMax` | This metric is the maximum jitter value measured between packets per media stream. Bursts in network conditions can cause issues in the audio/video traffic flow. |
+| `packetLossRateAvg` | This metric is the average percentage of packets that are lost. Packet loss directly affects audio quality, from small, individual lost packets that have almost no impact, to back-to-back burst losses that cause audio to cut out completely. The packets being dropped and not arriving at their intended destination cause gaps in the media, resulting in missed syllables and words, and choppy video and sharing. A packet loss rate of greater than 10% (0.1) should be considered a rate that's likely having a negative quality impact. This metric is measured per media stream over the `participantDuration` in a group Call or `callDuration` in a P2P Call. |
+| `packetLossRateMax` | This value represents the maximum packet loss rate (%) per media stream over the `participantDuration` in a group Call or `callDuration` in a P2P Call. Bursts in network conditions can cause issues in the audio/video traffic flow. |
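+
+The thresholds called out above (RTT above 500 ms, jitter above 30 ms, packet loss above 10%) can be turned into a simple per-stream quality flag. The following is a sketch against the `ACSCallDiagnostics` table used in the queries later in this article:
+
+```
+// Sketch: flag each stream by the first quality threshold it exceeds
+ACSCallDiagnostics
+| extend QualityFlag = case(
+    RoundTripTimeAvg > 500, "High RTT",
+    JitterAvg > 30, "High jitter",
+    PacketLossRateAvg > 0.1, "High packet loss",
+    "OK")
+| summarize streams = count() by QualityFlag
+```
+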
+### P2P vs. Group Calls
+
+There are two types of Calls (represented by `callType`): P2P and Group.
+
+**P2P** calls are a connection between only two Endpoints, with no server Endpoint. P2P calls are initiated as a Call between those Endpoints and are not created as a group Call event prior to the connection.
+
+ :::image type="content" source="../media/call-logs-azure-monitor/p2p-diagram.png" alt-text="Screenshot displays P2P call across 2 endpoints.":::
+
+**Group** Calls include any Call that has more than two Endpoints connected. Group Calls include a server Endpoint, and the connection between each Endpoint and the server. P2P Calls that add an additional Endpoint during the Call cease to be P2P, and they become a Group Call. You can determine the timeline of when each Endpoint joined the Call by using the `participantStartTime` and `participantDuration` metrics.
++
+ :::image type="content" source="../media/call-logs-azure-monitor/group-call-version-a.png" alt-text="Screenshot displays group call across multiple endpoints.":::
++
+## Log Structure
+
+Two types of logs are created: **Call Summary** logs and **Call Diagnostic** logs.
+
+Call Summary Logs contain basic information about the Call, including all the relevant IDs, timestamps, Endpoint and SDK information. For each Participant within a Call, a distinct Call Summary Log is created (if someone rejoins a Call, they have the same `endpointId` but a different `participantId`, so there can be two Call Summary Logs for that Endpoint).
+
+Call Diagnostic Logs contain information about the Stream as well as a set of metrics that indicate quality of experience measurements. For each Endpoint within a Call (including the server), a distinct Call Diagnostic Log is created for each media stream (audio, video, etc.) between Endpoints. In a P2P Call, each log contains data relating to each of the outbound stream(s) associated with each Endpoint. In a Group Call, each stream associated with `endpointType`= `"Server"` creates a log containing data for the inbound streams, and all other streams create logs containing data for the outbound streams for all non-server Endpoints. In Group Calls, use the `participantId` as the key to join the related inbound/outbound logs into a distinct Participant connection.
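+
+As a sketch of that join, assuming the `ACSCallSummary` and `ACSCallDiagnostics` tables described in this article:
+
+```
+// Sketch: attach each participant's summary record to its
+// diagnostic streams, keyed on correlationId + participantId
+ACSCallSummary
+| where CallType == "Group"
+| join kind=inner (ACSCallDiagnostics) on CorrelationId, ParticipantId
+| project CorrelationId, ParticipantId, MediaType, RoundTripTimeAvg, PacketLossRateAvg
+```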
+
+### Example 1: P2P Call
+
+The following diagram represents two Endpoints connected directly in a P2P Call. In this example, two Call Summary Logs would be created (one per `endpointId`, because no `participantId` is generated for P2P Calls) and four Call Diagnostic Logs would be created (one per media stream). Each log contains data relating to the outbound stream of its Endpoint.
+++
+### Example 2: Group Call
+
+The following diagram represents a Group Call example with three `participantId`s (the same `endpointId` can potentially appear in multiple Participants, for example, when rejoining a Call from the same device) and a Server Endpoint. One Call Summary Log would be created per `participantId`, and four Call Diagnostic Logs would be created relating to each `participantId`, one for each media stream.
+
+
+### Example 3: P2P Call cross-tenant
+The following diagram represents two participants across multiple tenants that are connected directly in a P2P Call. In this example, two Call Summary Logs would be created (one per participant) with redacted OS and SDK versions, and four Call Diagnostic Logs would be created (one per media stream). Each log contains data relating to the outbound stream of its Endpoint.
+
++
+### Example 4: Group Call cross-tenant
+The following diagram represents a Group Call example with three `participantId`s across multiple tenants. One Call Summary Log would be created per participant with redacted OS and SDK versions, and four Call Diagnostic Logs would be created relating to each `participantId`, one for each media stream.
+++
+> [!NOTE]
+> Only outbound diagnostic logs are supported in this release.
+> Participant and bot identities are treated the same way. As a result, the OS and SDK versions associated with a bot or a participant can be redacted.
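+
+As a sketch, you can find participant records where versioning was redacted by filtering on the literal `"Redacted"` value shown in the samples below (table and column names match the queries later in this article):
+
+```
+// Sketch: count redacted participant records per call
+ACSCallSummary
+| where SdkVersion == "Redacted" or OsVersion == "Redacted"
+| summarize redacted_participants = dcount(ParticipantId) by CorrelationId
+```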
+
+## Sample Data
+
+### P2P Call
+
+Shared fields for all logs in the call:
+
+```json
+"time": "2021-07-19T18:46:50.188Z",
+"resourceId": "SUBSCRIPTIONS/XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX/RESOURCEGROUPS/ACS-TEST-RG/PROVIDERS/MICROSOFT.COMMUNICATION/COMMUNICATIONSERVICES/ACS-PROD-CCTS-TESTS",
+"correlationId": "8d1a8374-344d-4502-b54b-ba2d6daaf0ae",
+```
+
+#### Call Summary Logs
+Call Summary Logs have shared operation and category information:
+
+```json
+"operationName": "CallSummary",
+"operationVersion": "1.0",
+"category": "CallSummary",
+
+```
+Call summary for VoIP user 1:
+```json
+"properties": {
+ "identifier": "acs:61fddbe3-0003-4066-97bc-6aaf143bbb84_0000000b-4fee-66cf-ac00-343a0d003158",
+ "callStartTime": "2021-07-19T17:54:05.113Z",
+ "callDuration": 6,
+ "callType": "P2P",
+ "teamsThreadId": "null",
+ "participantId": "null",
+ "participantStartTime": "2021-07-19T17:54:06.758Z",
+ "participantDuration": "5",
+ "participantEndReason": "0",
+ "endpointId": "570ea078-74e9-4430-9c67-464ba1fa5859",
+ "endpointType": "VoIP",
+ "sdkVersion": "1.0.1.0",
+ "osVersion": "Windows 10.0.17763 Arch: x64"
+}
+```
+
+Call summary for VoIP user 2:
+```json
+"properties": {
+ "identifier": "acs:7af14122-9ac7-4b81-80a8-4bf3582b42d0_06f9276d-8efe-4bdd-8c22-ebc5434903f0",
+ "callStartTime": "2021-07-19T17:54:05.335Z",
+ "callDuration": 6,
+ "callType": "P2P",
+ "teamsThreadId": "null",
+ "participantId": "null",
+ "participantStartTime": "2021-07-19T17:54:06.335Z",
+ "participantDuration": "5",
+ "participantEndReason": "0",
+ "endpointId": "a5bd82f9-ac38-4f4a-a0fa-bb3467cdcc64",
+ "endpointType": "VoIP",
+ "sdkVersion": "1.1.0.0",
+ "osVersion": "null"
+}
+```
+Cross-tenant call summary for VoIP user 1:
+```json
+"properties": {
+ "identifier": "1e4c59e1-r1rr-49bc-893d-990dsds8f9f5",
+ "callStartTime": "2022-08-14T06:18:27.010Z",
+ "callDuration": 520,
+ "callType": "P2P",
+ "teamsThreadId": "null",
+ "participantId": "null",
+ "participantTenantId": "02cbdb3c-155a-4b95-b829-6d56a45787ca",
+ "participantStartTime": "2022-08-14T06:18:27.010Z",
+ "participantDuration": "520",
+ "participantEndReason": "0",
+ "endpointId": "02cbdb3c-155a-4d98-b829-aaaaa61d44ea",
+ "endpointType": "VoIP",
+ "sdkVersion": "Redacted",
+ "osVersion": "Redacted"
+}
+```
+Call summary for a PSTN call:
+
+> [!NOTE]
+> P2P or group call logs emitted for PSTN participants have the OS and SDK versions redacted regardless of the participant's or bot's tenant.
+
+```json
+"properties": {
+ "identifier": "b1999c3e-bbbb-4650-9b23-9999bdabab47",
+ "callStartTime": "2022-08-07T13:53:12Z",
+ "callDuration": 1470,
+ "callType": "Group",
+ "teamsThreadId": "19:36ec5177126fff000aaa521670c804a3@thread.v2",
+ "participantId": " b25cf111-73df-4e0a-a888-640000abe34d",
+ "participantStartTime": "2022-08-07T13:56:45Z",
+ "participantDuration": 960,
+ "participantEndReason": "0",
+ "endpointId": "8731d003-6c1e-4808-8159-effff000aaa2",
+ "endpointType": "PSTN",
+ "sdkVersion": "Redacted",
+ "osVersion": "Redacted"
+}
+```
+
+#### Call Diagnostic Logs
+Call diagnostics logs share operation information:
+```json
+"operationName": "CallDiagnostics",
+"operationVersion": "1.0",
+"category": "CallDiagnostics",
+```
+Diagnostic log for audio stream from VoIP Endpoint 1 to VoIP Endpoint 2:
+```json
+"properties": {
+ "identifier": "acs:61fddbe3-0003-4066-97bc-6aaf143bbb84_0000000b-4fee-66cf-ac00-343a0d003158",
+ "participantId": "null",
+ "endpointId": "570ea078-74e9-4430-9c67-464ba1fa5859",
+ "endpointType": "VoIP",
+ "mediaType": "Audio",
+ "streamId": "1000",
+ "transportType": "UDP",
+ "roundTripTimeAvg": "82",
+ "roundTripTimeMax": "88",
+ "jitterAvg": "1",
+ "jitterMax": "1",
+ "packetLossRateAvg": "0",
+ "packetLossRateMax": "0"
+}
+```
+Diagnostic log for audio stream from VoIP Endpoint 2 to VoIP Endpoint 1:
+```json
+"properties": {
+ "identifier": "acs:7af14122-9ac7-4b81-80a8-4bf3582b42d0_06f9276d-8efe-4bdd-8c22-ebc5434903f0",
+ "participantId": "null",
+ "endpointId": "a5bd82f9-ac38-4f4a-a0fa-bb3467cdcc64",
+ "endpointType": "VoIP",
+ "mediaType": "Audio",
+ "streamId": "1363841599",
+ "transportType": "UDP",
+ "roundTripTimeAvg": "78",
+ "roundTripTimeMax": "84",
+ "jitterAvg": "1",
+ "jitterMax": "1",
+ "packetLossRateAvg": "0",
+ "packetLossRateMax": "0"
+}
+```
+Diagnostic log for video stream from VoIP Endpoint 1 to VoIP Endpoint 2:
+```json
+"properties": {
+ "identifier": "acs:61fddbe3-0003-4066-97bc-6aaf143bbb84_0000000b-4fee-66cf-ac00-343a0d003158",
+ "participantId": "null",
+ "endpointId": "570ea078-74e9-4430-9c67-464ba1fa5859",
+ "endpointType": "VoIP",
+ "mediaType": "Video",
+ "streamId": "2804",
+ "transportType": "UDP",
+ "roundTripTimeAvg": "103",
+ "roundTripTimeMax": "143",
+ "jitterAvg": "0",
+ "jitterMax": "4",
+ "packetLossRateAvg": "3.146336E-05",
+ "packetLossRateMax": "0.001769911"
+}
+```
+### Group Call
+
+The data would be generated in three Call Summary Logs and six Call Diagnostic Logs. Shared fields for all logs in the Call:
+```json
+"time": "2021-07-05T06:30:06.402Z",
+"resourceId": "SUBSCRIPTIONS/XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX/RESOURCEGROUPS/ACS-TEST-RG/PROVIDERS/MICROSOFT.COMMUNICATION/COMMUNICATIONSERVICES/ACS-PROD-CCTS-TESTS",
+"correlationId": "341acde7-8aa5-445b-a3da-2ddadca47d22",
+```
+
+#### Call Summary Logs
+Call Summary Logs have shared operation and category information:
+```json
+"operationName": "CallSummary",
+"operationVersion": "1.0",
+"category": "CallSummary",
+```
+
+Call summary for VoIP Endpoint 1:
+```json
+"properties": {
+ "identifier": "acs:1797dbb3-f982-47b0-b98e-6a76084454f1_0000000b-1531-729f-ac00-343a0d00d975",
+ "callStartTime": "2021-07-05T06:16:40.240Z",
+ "callDuration": 87,
+ "callType": "Group",
+ "teamsThreadId": "19:meeting_MjZiOTAyN2YtZWU1Yi00ZTZiLT77777OOOOO99999jgxOTkw@thread.v2",
+ "participantId": "04cc26f5-a86d-481c-b9f9-7a40be4d6fba",
+ "participantStartTime": "2021-07-05T06:16:44.235Z",
+ "participantDuration": "82",
+ "participantEndReason": "0",
+ "endpointId": "5ebd55df-ffff-ffff-89e6-4f3f0453b1a6",
+ "endpointType": "VoIP",
+ "sdkVersion": "1.0.0.3",
+ "osVersion": "Darwin Kernel Version 18.7.0: Mon Nov 9 15:07:15 PST 2020; root:xnu-4903.272.3~3/RELEASE_ARM64_S5L8960X"
+}
+```
+Call summary for VoIP Endpoint 3:
+```json
+"properties": {
+ "identifier": "acs:1797dbb3-f982-47b0-b98e-6a76084454f1_0000000b-1531-57c6-ac00-343a0d00d972",
+ "callStartTime": "2021-07-05T06:16:40.240Z",
+ "callDuration": 87,
+ "callType": "Group",
+ "teamsThreadId": "19:meeting_MjZiOTAyN2YtZWU1Yi00ZTZiLTk2ZDUtYTZlM2I2ZjgxOTkw@thread.v2",
+ "participantId": "1a9cb3d1-7898-4063-b3d2-26c1630ecf03",
+ "participantStartTime": "2021-07-05T06:16:40.240Z",
+ "participantDuration": "87",
+ "participantEndReason": "0",
+ "endpointId": "5ebd55df-ffff-ffff-ab89-19ff584890b7",
+ "endpointType": "VoIP",
+ "sdkVersion": "1.0.0.3",
+ "osVersion": "Android 11.0; Manufacturer: Google; Product: redfin; Model: Pixel 5; Hardware: redfin"
+}
+```
+Call summary for PSTN Endpoint 2:
+```json
+"properties": {
+ "identifier": "null",
+ "callStartTime": "2021-07-05T06:16:40.240Z",
+ "callDuration": 87,
+ "callType": "Group",
+ "teamsThreadId": "19:meeting_MjZiOTAyN2YtZWU1Yi00ZTZiLT77777OOOOO99999jgxOTkw@thread.v2",
+ "participantId": "515650f7-8204-4079-ac9d-d8f4bf07b04c",
+ "participantStartTime": "2021-07-05T06:17:10.447Z",
+ "participantDuration": "52",
+ "participantEndReason": "0",
+ "endpointId": "46387150-692a-47be-8c9d-1237efe6c48b",
+ "endpointType": "PSTN",
+ "sdkVersion": "null",
+ "osVersion": "null"
+}
+```
+Cross-tenant call summary log:
+```json
+"properties": {
+ "identifier": "1e4c59e1-r1rr-49bc-893d-990dsds8f9f5",
+ "callStartTime": "2022-08-14T06:18:27.010Z",
+ "callDuration": 912,
+ "callType": "Group",
+ "teamsThreadId": "19:meeting_MjZiOTAyN2YtZWU1Yi00ZTZiLT77777OOOOO99999jgxOTkw@thread.v2",
+ "participantId": "aa1dd7da-5922-4bb1-a4fa-e350a111fd9c",
+ "participantTenantId": "02cbdb3c-155a-4b95-b829-6d56a45787ca",
+ "participantStartTime": "2022-08-14T06:18:27.010Z",
+ "participantDuration": "902",
+ "participantEndReason": "0",
+ "endpointId": "02cbdb3c-155a-4d98-b829-aaaaa61d44ea",
+ "endpointType": "VoIP",
+ "sdkVersion": "Redacted",
+ "osVersion": "Redacted"
+}
+```
+Cross-tenant call summary log with a bot as a participant:
+```json
+
+"properties": {
+ "identifier": "b1902c3e-b9f7-4650-9b23-9999bdabab47",
+ "callStartTime": "2022-08-09T16:00:32Z",
+ "callDuration": 1470,
+ "callType": "Group",
+ "teamsThreadId": "19:meeting_MmQwZDcwYTQtZ000HWE6NzI4LTg1YTAtNXXXXX99999ZZZZZ@thread.v2",
+ "participantId": "66e9d9a7-a434-4663-d91d-fb1ea73ff31e",
+ "participantStartTime": "2022-08-09T16:14:18Z",
+ "participantDuration": 644,
+ "participantEndReason": "0",
+ "endpointId": "69680ec2-5ac0-4a3c-9574-eaaa77720b82",
+ "endpointType": "Bot",
+ "sdkVersion": "Redacted",
+ "osVersion": "Redacted"
+}
+```
+#### Call Diagnostic Logs
+Call diagnostics logs share operation information:
+```json
+"operationName": "CallDiagnostics",
+"operationVersion": "1.0",
+"category": "CallDiagnostics",
+```
+Diagnostic log for audio stream from VoIP Endpoint 1 to Server Endpoint:
+```json
+"properties": {
+ "identifier": "acs:1797dbb3-f982-47b0-b98e-6a76084454f1_0000000b-1531-729f-ac00-343a0d00d975",
+ "participantId": "04cc26f5-a86d-481c-b9f9-7a40be4d6fba",
+ "endpointId": "5ebd55df-ffff-ffff-89e6-4f3f0453b1a6",
+ "endpointType": "VoIP",
+ "mediaType": "Audio",
+ "streamId": "14884",
+ "transportType": "UDP",
+ "roundTripTimeAvg": "46",
+ "roundTripTimeMax": "48",
+ "jitterAvg": "0",
+ "jitterMax": "1",
+ "packetLossRateAvg": "0",
+ "packetLossRateMax": "0"
+}
+```
+Diagnostic log for audio stream from Server Endpoint to VoIP Endpoint 1:
+```json
+"properties": {
+ "identifier": null,
+ "participantId": "04cc26f5-a86d-481c-b9f9-7a40be4d6fba",
+ "endpointId": null,
+ "endpointType": "Server",
+ "mediaType": "Audio",
+ "streamId": "2001",
+ "transportType": "UDP",
+ "roundTripTimeAvg": "42",
+ "roundTripTimeMax": "44",
+ "jitterAvg": "1",
+ "jitterMax": "1",
+ "packetLossRateAvg": "0",
+ "packetLossRateMax": "0"
+}
+```
+Diagnostic log for audio stream from VoIP Endpoint 3 to Server Endpoint:
+```json
+"properties": {
+ "identifier": "acs:1797dbb3-f982-47b0-b98e-6a76084454f1_0000000b-1531-57c6-ac00-343a0d00d972",
+ "participantId": "1a9cb3d1-7898-4063-b3d2-26c1630ecf03",
+ "endpointId": "5ebd55df-ffff-ffff-ab89-19ff584890b7",
+ "endpointType": "VoIP",
+ "mediaType": "Audio",
+ "streamId": "13783",
+ "transportType": "UDP",
+ "roundTripTimeAvg": "45",
+ "roundTripTimeMax": "46",
+ "jitterAvg": "1",
+ "jitterMax": "2",
+ "packetLossRateAvg": "0",
+ "packetLossRateMax": "0"
+}
+```
+Diagnostic log for audio stream from Server Endpoint to VoIP Endpoint 3:
+```json
+"properties": {
+ "identifier": "null",
+ "participantId": "1a9cb3d1-7898-4063-b3d2-26c1630ecf03",
+ "endpointId": null,
+    "endpointType": "Server",
+ "mediaType": "Audio",
+ "streamId": "1000",
+ "transportType": "UDP",
+ "roundTripTimeAvg": "45",
+ "roundTripTimeMax": "46",
+ "jitterAvg": "1",
+ "jitterMax": "4",
+    "packetLossRateAvg": "0"
+}
+```
+### Error Codes
+The `participantEndReason` contains a value from the set of Calling SDK error codes. You can refer to these codes to troubleshoot issues during the call, per Endpoint. See [Calling SDK error codes in the Azure Communication Services troubleshooting guide](../../troubleshooting-info.md?tabs=csharp%2cios%2cdotnet#calling-sdk-error-codes).
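+
+As a sketch, the distribution of end reasons across calls can be summarized as follows (the `ParticipantEndReason` column name is an assumption, inferred from the schema naming convention above):
+
+```
+// Sketch: distribution of participant end reasons
+// (ParticipantEndReason column name assumed from the schema)
+ACSCallSummary
+| summarize occurrences = count() by ParticipantEndReason
+| order by occurrences desc
+```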
communication-services Query Call Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/analytics/query-call-logs.md
+
+ Title: Azure Communication Services - query call logs
+
+description: About using Log Analytics for Call Summary and Call Diagnostic logs
++++ Last updated : 10/25/2021+++++
+# Query call logs
+
+## Overview and access
+
+Before you can take advantage of [Log Analytics](../../../azure-monitor/logs/log-analytics-overview.md) for your Communications Services logs, you must first follow the steps outlined in [Enable logging in Diagnostic Settings](enable-logging.md). Once you've enabled your logs and created a [Log Analytics workspace](../../../azure-monitor/logs/workspace-design.md), you have access to many helpful [default query packs](../../../azure-monitor/logs/query-packs.md#default-query-pack) that help you quickly visualize and understand the data available in your logs, as described below. Through Log Analytics, you also get access to more Communication Services Insights via Azure Monitor Workbooks, the ability to create your own queries and Workbooks, and programmatic access to any query through the [Log Analytics API](../../../azure-monitor/logs/api/overview.md).
+
+### Access
+You can access the queries by starting on your Communication Services resource page, and then selecting "Logs" in the left navigation within the Monitor section:
++
+From there, you're presented with a modal screen that contains all of the [default query packs](../../../azure-monitor/logs/query-packs.md#default-query-pack) available for your Communication Services, with a list of the available query packs to navigate on the left.
++
+If you close the modal screen, you can still navigate to the various query packs and directly access data in the form of tables based on the schema of the logs and metrics you've enabled in your diagnostic setting. Here, you can create your own queries from the data using [KQL (Kusto)](/azure/data-explorer/kusto/query/). Learn more about using, editing, and creating queries in [Log Analytics queries](../../../azure-monitor/logs/queries.md).
+++
+## Default query packs for call summary and call diagnostic logs
+The following sections describe each query in the [default query pack](../../../azure-monitor/logs/query-packs.md#default-query-pack) for the [Call Summary and Call Diagnostic logs](logs/voice-and-video-logs.md), including code samples and example output for each available query:
+### Call Overview Queries
+#### Number of participants per call
+
+```
+// Count number of calls and participants,
+// and print average participants per call
+ACSCallSummary
+| distinct CorrelationId, ParticipantId, EndpointId
+| summarize num_participants=count(), num_calls=dcount(CorrelationId)
+| extend avg_participants = todecimal(num_participants) / todecimal(num_calls)
+```
+
+Sample output:
++
+#### Number of participants per group call
+
+```
+// Count number of participants per group call
+ACSCallSummary
+| where CallType == 'Group'
+| distinct CorrelationId, ParticipantId
+| summarize num_participants=count() by CorrelationId
+| summarize participant_counts=count() by num_participants
+| order by num_participants asc
+| render columnchart with (xcolumn = num_participants, title="Number of participants per group call")
+```
+
+Sample output:
++
+#### Ratio of call types
+
+```
+// Ratio of call types
+ACSCallSummary
+| summarize call_types=dcount(CorrelationId) by CallType
+| render piechart title="Call Type Ratio"
+
+```
+
+Sample output:
++
+#### Call duration distribution
+
+```
+// Call duration histogram
+ACSCallSummary
+| distinct CorrelationId, CallDuration
+| summarize duration_counts=count() by CallDuration
+| order by CallDuration asc
+| render columnchart with (xcolumn = CallDuration, title="Call duration histogram")
+```
+
+Sample output:
++
+#### Call duration percentiles
+
+```
+// Call duration percentiles
+ACSCallSummary
+| distinct CorrelationId, CallDuration
+| summarize avg(CallDuration), percentiles(CallDuration, 50, 90, 99)
+```
+
+Sample output:
++
+### Endpoint information queries
+
+#### Number of endpoints per call
+
+```
+// Count number of calls and endpoints,
+// and print average endpoints per call
+ACSCallSummary
+| distinct CorrelationId, EndpointId
+| summarize num_endpoints=count(), num_calls=dcount(CorrelationId)
+| extend avg_endpoints = todecimal(num_endpoints) / todecimal(num_calls)
+```
+
+Sample output:
++
+#### Ratio of SDK versions
+
+```
+// Ratio of SDK Versions
+ACSCallSummary
+| distinct CorrelationId, ParticipantId, EndpointId, SdkVersion
+| summarize sdk_counts=count() by SdkVersion
+| order by SdkVersion asc
+| render piechart title="SDK Version Ratio"
+```
+
+Sample output:
++
+#### Ratio of OS versions (simplified OS name)
+
+```
+// Ratio of OS Versions (simplified OS name)
+ACSCallSummary
+| distinct CorrelationId, ParticipantId, EndpointId, OsVersion
+| extend simple_os = case( indexof(OsVersion, "Android") != -1, tostring(split(OsVersion, ";")[0]),
+ indexof(OsVersion, "Darwin") != -1, tostring(split(OsVersion, ":")[0]),
+ indexof(OsVersion, "Windows") != -1, tostring(split(OsVersion, ".")[0]),
+ OsVersion
+ )
+| summarize os_counts=count() by simple_os
+| order by simple_os asc
+| render piechart title="OS Version Ratio"
+```
+
+Sample output:
++
+### Media stream queries
+#### Streams per call
+
+```
+// Count number of calls and streams,
+// and print average streams per call
+ACSCallDiagnostics
+| summarize num_streams=count(), num_calls=dcount(CorrelationId)
+| extend avg_streams = todecimal(num_streams) / todecimal(num_calls)
+```
+Sample output:
++
+#### Streams per call histogram
+
+```
+// Distribution of streams per call
+ACSCallDiagnostics
+| summarize streams_per_call=count() by CorrelationId
+| summarize stream_counts=count() by streams_per_call
+| order by streams_per_call asc
+| render columnchart title="Streams per call histogram"
+```
++
+#### Ratio of media types
+
+```
+// Ratio of media types by call
+ACSCallDiagnostics
+| summarize media_types=count() by MediaType
+| render piechart title="Media Type Ratio"
+```
++
+### Quality metrics queries
+
+#### Average telemetry values
+
+```
+// Average telemetry values over all streams
+ACSCallDiagnostics
+| summarize Avg_JitterAvg=avg(JitterAvg),
+ Avg_JitterMax=avg(JitterMax),
+ Avg_RoundTripTimeAvg=avg(RoundTripTimeAvg),
+ Avg_RoundTripTimeMax=avg(RoundTripTimeMax),
+ Avg_PacketLossRateAvg=avg(PacketLossRateAvg),
+ Avg_PacketLossRateMax=avg(PacketLossRateMax)
+```
++
+#### JitterAvg histogram
+
+```
+// Jitter Average Histogram
+ACSCallDiagnostics
+| where isnotnull(JitterAvg)
+| summarize JitterAvg_counts=count() by JitterAvg
+| order by JitterAvg asc
+| render columnchart with (xcolumn = JitterAvg, title="JitterAvg histogram")
+```
++
+#### JitterMax histogram
+
+```
+// Jitter Max Histogram
+ACSCallDiagnostics
+| where isnotnull(JitterMax)
+| summarize JitterMax_counts=count() by JitterMax
+| order by JitterMax asc
+| render columnchart with (xcolumn = JitterMax, title="JitterMax histogram")
+```
++
+#### PacketLossRateAvg histogram
+```
+// PacketLossRate Average Histogram
+ACSCallDiagnostics
+| where isnotnull(PacketLossRateAvg)
+| summarize PacketLossRateAvg_counts=count() by bin(PacketLossRateAvg, 0.01)
+| order by PacketLossRateAvg asc
+| render columnchart with (xcolumn = PacketLossRateAvg, title="PacketLossRateAvg histogram")
+```
++
+#### PacketLossRateMax histogram
+```
+// PacketLossRate Max Histogram
+ACSCallDiagnostics
+| where isnotnull(PacketLossRateMax)
+| summarize PacketLossRateMax_counts=count() by bin(PacketLossRateMax, 0.01)
+| order by PacketLossRateMax asc
+| render columnchart with (xcolumn = PacketLossRateMax, title="PacketLossRateMax histogram")
+```
++
+#### RoundTripTimeAvg histogram
+```
+// RoundTripTime Average Histogram
+ACSCallDiagnostics
+| where isnotnull(RoundTripTimeAvg)
+| summarize RoundTripTimeAvg_counts=count() by RoundTripTimeAvg
+| order by RoundTripTimeAvg asc
+| render columnchart with (xcolumn = RoundTripTimeAvg, title="RoundTripTimeAvg histogram")
+```
++
+#### RoundTripTimeMax histogram
+```
+// RoundTripTime Max Histogram
+ACSCallDiagnostics
+| where isnotnull(RoundTripTimeMax)
+| summarize RoundTripTimeMax_counts=count() by RoundTripTimeMax
+| order by RoundTripTimeMax asc
+| render columnchart with (xcolumn = RoundTripTimeMax, title="RoundTripTimeMax histogram")
+```
++
+#### Poor Jitter Quality
+```
+// Get proportion of calls with poor quality jitter
+// (defined as jitter being higher than 30ms)
+ACSCallDiagnostics
+| extend JitterQuality = iff(JitterAvg > 30, "Poor", "Good")
+| summarize count() by JitterQuality
+| render piechart title="Jitter Quality"
+```
+++
+#### PacketLossRate Quality
+```
+// Get proportion of calls with poor quality packet loss
+// rate (defined as packet loss being higher than 10%)
+ACSCallDiagnostics
+| extend PacketLossRateQuality = iff(PacketLossRateAvg > 0.1, "Poor", "Good")
+| summarize count() by PacketLossRateQuality
+| render piechart title="Packet Loss Rate Quality"
+```
++
+#### RoundTripTime Quality
+```
+// Get proportion of calls with poor quality round trip time
+// (defined as average round trip time being higher than 500 ms,
+// the threshold described in the voice and video logs schema)
+ACSCallDiagnostics
+| extend RoundTripTimeQuality = iff(RoundTripTimeAvg > 500, "Poor", "Good")
+| summarize count() by RoundTripTimeQuality
+| render piechart title="Round Trip Time Quality"
+```
++
+### Parameterizable Queries
+
+#### Daily calls in the last week
+```
+// Histogram of daily calls over the last week
+ACSCallSummary
+| where CallStartTime > now() - 7d
+| distinct CorrelationId, CallStartTime
+| extend day = floor(CallStartTime, 1d)
+| summarize event_count=count() by day
+| sort by day asc
+| render columnchart title="Number of calls in last week"
+```
++
+#### Calls per hour in last day
+```
+// Histogram of calls per hour in the last day
+ACSCallSummary
+| where CallStartTime > now() - 1d
+| distinct CorrelationId, CallStartTime
+| extend hour = floor(CallStartTime, 1h)
+| summarize event_count=count() by hour
+| sort by hour asc
+| render columnchart title="Number of calls per hour in last day"
+```
+
communication-services Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/authentication.md
User access tokens are generated using the Identity SDK and are associated with
## Using identity for monitoring and metrics
-The user identity is intended to act as a primary key for logs and metrics collected through Azure Monitor. If you'd like to get a view of all of a specific user's calls, for example, you should set up your authentication in a way that maps a specific Azure Communication Services identity (or identities) to a singular user. Learn more about [log analytics](../concepts/analytics/log-analytics.md), and [metrics](../concepts/metrics.md) available to you.
+The user identity is intended to act as a primary key for logs and metrics collected through Azure Monitor. If you'd like to get a view of all of a specific user's calls, for example, you should set up your authentication in a way that maps a specific Azure Communication Services identity (or identities) to a singular user. Learn more about [log analytics](../concepts/analytics/query-call-logs.md), and [metrics](../concepts/metrics.md) available to you.
## Next steps
communication-services Call Logs Azure Monitor Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/call-logs-azure-monitor-access.md
To access telemetry for Azure Communication Services Voice & Video resources, follow these steps. ## Enable logging
-1. First, you will need to create a storage account for your logs. Go to [Create a storage account](../../storage/common/storage-account-create.md?tabs=azure-portal) for instructions to complete this step. See also [Storage account overview](../../storage/common/storage-account-overview.md) for more information on the types and features of different storage options. If you already have an Azure storage account go to Step 2.
+1. First, you need to create a storage account for your logs. Go to [Create a storage account](../../storage/common/storage-account-create.md?tabs=azure-portal) for instructions to complete this step. For more information about the types and features of different storage options, see [Storage account overview](../../storage/common/storage-account-overview.md). If you already have an Azure storage account, go to Step 2.
-1. When you've created your storage account, next you need to enable logging by following the instructions in [Enable diagnostic logs in your resource](./logging-and-diagnostics.md#enable-diagnostic-logs-in-your-resource). You will select the check boxes for the logs "CallSummaryPRIVATEPREVIEW" and "CallDiagnosticPRIVATEPREVIEW".
+2. When you've created your storage account, next you need to enable logging by following the instructions in [Enable diagnostic logs in your resource](./analytics/enable-logging.md). You select the check boxes for the logs "CallSummaryPRIVATEPREVIEW" and "CallDiagnosticPRIVATEPREVIEW".
-1. Next, select the "Archive to a storage account" box and then select the storage account for your logs in the drop-down menu below. The "Send to Analytics workspace" option isn't currently available for Private Preview of this feature, but it will be made available when this feature is made public.
+3. Next, select the "Archive to a storage account" box and then select the storage account for your logs in the drop-down menu. The "Send to Analytics workspace" option isn't currently available for the Private Preview of this feature, but it will become available when the feature is made public.
:::image type="content" source="media\call-logs-images\call-logs-access-diagnostic-setting.png" alt-text="Azure Monitor Diagnostic setting"::: -- ## Access Your Logs To access your logs, go to the storage account you designated in Step 3 above by navigating to [Storage Accounts](https://portal.azure.com/#blade/HubsExtension/BrowseResource/resourceType/Microsoft.Storage%2FStorageAccounts) in the Azure portal.
From there, you can download all logs or individual logs.
## Next Steps -- Learn more about [Logging and Diagnostics](./logging-and-diagnostics.md)
+- Access logs for [voice and video](./analytics/logs/voice-and-video-logs.md), [chat](./analytics/logs/chat-logs.md), [email](./analytics/logs/email-logs.md), [network traversal](./analytics/logs/network-traversal-logs.md), [recording](./analytics/logs/recording-logs.md), [SMS](./analytics/logs/sms-logs.md) and [call automation](./analytics/logs/call-automation-logs.md).
communication-services Network Diagnostic https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/developer-tools/network-diagnostic.md
The test provides a **unique identifier** for your test, which you can provide o
- [Use Pre-Call Diagnostic APIs to build your own tech check](../voice-video-calling/pre-call-diagnostics.md) - [Explore User-Facing Diagnostic APIs](../voice-video-calling/user-facing-diagnostics.md) - [Enable Media Quality Statistics in your application](../voice-video-calling/media-quality-sdk.md)-- [Consume call logs with Azure Monitor](../analytics/call-logs-azure-monitor.md)
+- [Consume call logs with Azure Monitor](../analytics/logs/voice-and-video-logs.md)
communication-services Real Time Inspection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/developer-tools/real-time-inspection.md
The tool includes the ability to download the logs captured using the `Download
- [Explore User-Facing Diagnostic APIs](../voice-video-calling/user-facing-diagnostics.md) - [Enable Media Quality Statistics in your application](../voice-video-calling/media-quality-sdk.md) - [Leverage Network Diagnostic Tool](./network-diagnostic.md)-- [Consume call logs with Azure Monitor](../analytics/call-logs-azure-monitor.md)
+- [Consume call logs with Azure Monitor](../analytics/logs/voice-and-video-logs.md)
communication-services Capabilities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/interop/guest/capabilities.md
In this article, you will learn which capabilities are supported for Teams exter
| | Honor setting "Teams Q&A" | No API available | | | Honor setting "Meeting reactions" | No API available | | DevOps | [Azure Metrics](../../metrics.md) | ✔️ |
-| | [Azure Monitor](../../logging-and-diagnostics.md) | ✔️ |
+| | [Azure Monitor](../../analytics/logs/voice-and-video-logs.md) | ✔️ |
| | [Azure Communication Services Insights](../../analytics/insights/voice-and-video-insights.md) | ✔️ | | | [Azure Communication Services Voice and video calling events](../../../../event-grid/communication-services-voice-video-events.md) | ❌ | | | [Teams Call Analytics](/MicrosoftTeams/use-call-analytics-to-troubleshoot-poor-call-quality) | ✔️ |
The following table shows supported server-side capabilities available in Azure
| | | | [Manage ACS call recording](../../voice-video-calling/call-recording.md) | ❌ | | [Azure Metrics](../../metrics.md) | ✔️ |
-| [Azure Monitor](../../logging-and-diagnostics.md) | ✔️ |
+| [Azure Monitor](../../analytics/logs/voice-and-video-logs.md) | ✔️ |
| [Azure Communication Services Insights](../../analytics/insights/voice-and-video-insights.md) | ✔️ | | [Azure Communication Services Voice and video calling events](../../../../event-grid/communication-services-voice-video-events.md) | ❌ |
communication-services Monitor Logs Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/interop/guest/monitor-logs-metrics.md
# Monitor logs for Teams external users
-In this article, you will learn which Azure logs, Azure metrics & Teams logs are emitted for Teams external users when joining Teams meetings. Azure Communication Services user joining Teams meeting emits the following metrics: [Authentication API](../../metrics.md) and [Chat API](../../metrics.md). Communication Services resource additionally tracks the following logs: [Call Summary](../../analytics/call-logs-azure-monitor.md) and [Call Diagnostic](../../analytics/call-logs-azure-monitor.md) Log. Teams administrator can use [Teams Admin Center](https://aka.ms/teamsadmincenter) and [Teams Call Quality Dashboard](https://cqd.teams.microsoft.com) to review logs stored for Teams external users joining Teams meetings organized by the tenant.
+In this article, you learn which Azure logs, Azure metrics, and Teams logs are emitted for Teams external users when joining Teams meetings. An Azure Communication Services user joining a Teams meeting emits the following metrics: [Authentication API](../../metrics.md) and [Chat API](../../metrics.md). The Communication Services resource additionally tracks the following logs: [Call Summary](../../analytics/logs/voice-and-video-logs.md) and [Call Diagnostic](../../analytics/logs/voice-and-video-logs.md) Log. Teams administrators can use the [Teams Admin Center](https://aka.ms/teamsadmincenter) and [Teams Call Quality Dashboard](https://cqd.teams.microsoft.com) to review logs stored for Teams external users joining Teams meetings organized by the tenant.
## Azure logs & metrics
Call summary and call diagnostics logs are emitted only for the following partic
- Azure Communication Services users joining the meeting from the same tenant. This includes users rejected in the lobby and Azure Communication Services users from different resources but in the same tenant. - Additional Teams users, phone users and bots joining the meeting only if the organizer and current Azure Communication Services resource are in the same tenant.
-If Azure Communication Services resource and Teams meeting organizer tenants are different, then some fields of the logs are redacted. You can find more information in the call summary & diagnostics logs [documentation](../../analytics/call-logs-azure-monitor.md). Bots indicate service logic provided during the meeting. Here is a list of frequently used bots:
+If Azure Communication Services resource and Teams meeting organizer tenants are different, then some fields of the logs are redacted. You can find more information in the call summary & diagnostics logs [documentation](../../analytics/logs/voice-and-video-logs.md). Bots indicate service logic provided during the meeting. Here is a list of frequently used bots:
- b1902c3e-b9f7-4650-9b23-5772bd429747 - Teams convenient recording ## Microsoft Teams logs
Teams administrator can see Teams external users in the overview of the meeting
- [Enable logs and metrics](../../analytics/enable-logging.md) - [Metrics](../../metrics.md)-- [Call summary and call diagnostics](../../analytics/call-logs-azure-monitor.md)
+- [Call summary and call diagnostics](../../analytics/logs/voice-and-video-logs.md)
communication-services Teams User Calling https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/interop/teams-user-calling.md
The following list presents the set of features that are currently available in
| | Honor setting "Spam filtering" | ✔️ | | | Honor setting "SIP devices can be used for calls" | ✔️ | | DevOps | [Azure Metrics](../metrics.md) | ✔️ |
-| | [Azure Monitor](../logging-and-diagnostics.md) | ✔️ |
+| | [Azure Monitor](../analytics/logs/voice-and-video-logs.md) | ✔️ |
| | [Azure Communication Services Insights](../analytics/insights/voice-and-video-insights.md) | ✔️ | | | [Azure Communication Services Voice and video calling events](../../../event-grid/communication-services-voice-video-events.md) | ❌ | | | [Teams Call Analytics](/MicrosoftTeams/use-call-analytics-to-troubleshoot-poor-call-quality) | ✔️ |
communication-services Meeting Capabilities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/interop/teams-user/meeting-capabilities.md
The following list of capabilities is allowed when Teams user participates in Te
| | Honor setting "Mode for IP video" | ❌ | | | Honor setting "IP video" | ❌ | | | Honor setting "Local broadcasting" | ❌ |
-| | Honor setting "Media bit rate (Kbs)" | ❌ |
+| | Honor setting "Media bit rate (kBps)" | ❌ |
| | Honor setting "Network configuration lookup" | ❌ | | | Honor setting "Transcription" | No API available | | | Honor setting "Cloud recording" | No API available |
The following list of capabilities is allowed when Teams user participates in Te
| | Honor setting "Teams Q&A" | No API available | | | Honor setting "Meeting reactions" | No API available | | DevOps | [Azure Metrics](../../metrics.md) | ✔️ |
-| | [Azure Monitor](../../logging-and-diagnostics.md) | ✔️ |
+| | [Azure Monitor](../../analytics/logs/voice-and-video-logs.md) | ✔️ |
| | [Azure Communication Services Insights](../../analytics/insights/voice-and-video-insights.md) | ✔️ | | | [Azure Communication Services Voice and video calling events](../../../../event-grid/communication-services-voice-video-events.md) | ❌ | | | [Teams Call Analytics](/MicrosoftTeams/use-call-analytics-to-troubleshoot-poor-call-quality) | ✔️ |
Teams meeting organizers can configure the Teams meeting options to adjust the e
|[Allow camera for attendees](https://support.microsoft.com/office/manage-attendee-audio-and-video-permissions-in-teams-meetings-f9db15e1-f46f-46da-95c6-34f9f39e671a)|If Teams user is attendee, then this option controls whether Teams user can send local video |✔️| |[Record automatically](/graph/api/resources/onlinemeeting)|Records meeting when anyone starts the meeting. The user in the lobby does not start a recording.|✔️| |Allow meeting chat|If enabled, Teams users can use the chat associated with the Teams meeting.|✔️|
-|[Allow reactions](/microsoftteams/meeting-policies-in-teams-general#meeting-reactions)|If enabled, Teams users can use reactions in the Teams meeting. Azure Communication Services don't support reactions. |❌|
+|[Allow reactions](/microsoftteams/meeting-policies-in-teams-general#meeting-reactions)|If enabled, Teams users can use reactions in the Teams meeting. Azure Communication Services doesn't support reactions. |❌|
|[RTMP-IN](/microsoftteams/stream-teams-meetings)|If enabled, organizers can stream meetings and webinars to external endpoints by providing a Real-Time Messaging Protocol (RTMP) URL and key to the built-in Custom Streaming app in Teams. |Not applicable| |[Provide CART Captions](https://support.microsoft.com/office/use-cart-captions-in-a-microsoft-teams-meeting-human-generated-captions-2dd889e8-32a8-4582-98b8-6c96cf14eb47)|Communication access real-time translation (CART) is a service in which a trained CART captioner listens to the speech and instantaneously translates all speech to text. As a meeting organizer, you can set up and offer CART captioning to your audience instead of the Microsoft Teams built-in live captions that are automatically generated.|❌|
communication-services Phone Capabilities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/interop/teams-user/phone-capabilities.md
The following list of capabilities is supported for scenarios where at least one
| | Replace the caller ID with this service number | ❌ | | Teams dial out plan policies | Start a phone call honoring dial plan policy | ❌ | | DevOps | [Azure Metrics](../../metrics.md) | ✔️ |
-| | [Azure Monitor](../../logging-and-diagnostics.md) | ✔️ |
-| | [Azure Communication Services Insights](../../analytics/insights/voice-and-video-insights.md) | ✔️ |
+| | [Azure Monitor](../../analytics/logs/voice-and-video-logs.md) | ✔️ |
+| [Azure Communication Services Insights](../../analytics/insights/voice-and-video-insights.md) | ✔️ |
| | [Azure Communication Services Voice and video calling events](../../../../event-grid/communication-services-voice-video-events.md) | ❌ | | | [Teams Call Analytics](/MicrosoftTeams/use-call-analytics-to-troubleshoot-poor-call-quality) | ✔️ | | | [Teams real-time Analytics](/microsoftteams/use-real-time-telemetry-to-troubleshoot-poor-meeting-quality) | ❌ |
communication-services Join Teams Meeting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/join-teams-meeting.md
During a Teams meeting, all chat messages sent by Teams users or Communication S
If the hosting Microsoft 365 organization has defined a retention policy that deletes chat messages for any of the Teams users in the meeting, then all copies of the most recently sent message that have been stored for Communication Services users will also be deleted in accordance with the policy. If there is not a retention policy defined, then the copies of the most recently sent message for all Communication Services users will be deleted after 30 days. For more information about Teams retention policies, review the article [Learn about retention for Microsoft Teams](/microsoft-365/compliance/retention-policies-teams). ## Diagnostics and call analytics
-After a Teams meeting ends, diagnostic information about the meeting is available using the [Communication Services logging and diagnostics](./logging-and-diagnostics.md) and using the [Teams Call Analytics](/MicrosoftTeams/use-call-analytics-to-troubleshoot-poor-call-quality) in the Teams admin center. Communication Services users will appear as "Anonymous" in Call Analytics screens. Communication Services users aren't included in the [Teams real-time Analytics](/microsoftteams/use-real-time-telemetry-to-troubleshoot-poor-meeting-quality).
+After a Teams meeting ends, diagnostic information about the meeting is available using the [Communication Services logging and diagnostics](./analytics/logs/voice-and-video-logs.md) and using the [Teams Call Analytics](/MicrosoftTeams/use-call-analytics-to-troubleshoot-poor-call-quality) in the Teams admin center. Communication Services users will appear as "Anonymous" in Call Analytics screens. Communication Services users aren't included in the [Teams real-time Analytics](/microsoftteams/use-real-time-telemetry-to-troubleshoot-poor-meeting-quality).
## Privacy Interoperability between Azure Communication Services and Microsoft Teams enables your applications and users to participate in Teams calls, meetings, and chat. It is your responsibility to ensure that the users of your application are notified when recording or transcription are enabled in a Teams call or meeting.
communication-services Logging And Diagnostics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/logging-and-diagnostics.md
- Title: Communication Services Logs-
-description: Learn about logging in Azure Communication Services
----- Previously updated : 06/30/2021-----
-# Communication Services logs
-
-Azure Communication Services offers logging capabilities that you can use to monitor and debug your Communication Services solution. These capabilities can be configured through the Azure portal.
-
- >[!IMPORTANT]
- > For Audio/Video/Telephony call data refer to [Call Summary and Call Diagnostic Logs](../concepts/analytics/call-logs-azure-monitor.md)
-
-## Enable diagnostic logs in your resource
-
-Logging is turned off by default when a resource is created. To enable logging, navigate to the **Diagnostic settings** tab in the resource menu under the **Monitoring** section. Then select **Add diagnostic setting**.
-
-Next, select the archive target you want. Currently, we support storage accounts and Log Analytics as archive targets. After selecting the types of logs that you'd like to capture, save the diagnostic settings.
-
-New settings take effect in about 10 minutes. Logs will begin appearing in the configured archival target within the Logs pane of your Communication Services resource.
--
-For more information about configuring diagnostics, see the overview of [Azure resource logs](../../azure-monitor/essentials/platform-logs-overview.md).
-
-## Resource log categories
-
-Communication Services offers the following types of logs that you can enable:
-
-* **Usage logs** - provides usage data associated with each billed service offering
-* **Chat operational logs** - provides basic information related to the chat service
-* **SMS operational logs** - provides basic information related to the SMS service
-* **Authentication operational logs** - provides basic information related to the Authentication service
-* **Network Traversal operational logs** - provides basic information related to the Network Traversal service
-* **Email Send Mail operational logs** - provides detailed information related to the Email service send mail requests.
-* **Email Status Update operational logs** - provides message and recipient level delivery status updates related to the Email service send mail requests.
-* **Email User Engagement operational logs** - provides information related to 'open' and 'click' user engagement metrics for messages sent from the Email service.
-* **Call Automation operational logs** - provides operational information on Call Automation API requests. These logs can be used to identify failure points, query all requests made in a call (using Correlation ID or Server Call ID) or query all requests made by a specific service application in the call (using Participant ID).
-
-### Usage logs schema
-
-| Property | Description |
-| -- | |
-| Timestamp | The timestamp (UTC) of when the log was generated. |
-| Operation Name | The operation associated with log record. |
-| Operation Version | The `api-version` associated with the operation, if the operationName was performed using an API. If there's no API that corresponds to this operation, the version represents the version of that operation in case the properties associated with the operation change in the future. |
-| Category | The log category of the event. Category is the granularity at which you can enable or disable logs on a particular resource. The properties that appear within the properties blob of an event are the same within a particular log category and resource type. |
-| Correlation ID | The ID for correlated events. Can be used to identify correlated events between multiple tables. |
-| Properties | Other data applicable to various modes of Communication Services. |
-| Record ID | The unique ID for a given usage record. |
-| Usage Type | The mode of usage. (for example, Chat, PSTN, NAT, etc.) |
-| Unit Type | The type of unit that usage is based off for a given mode of usage. (for example, minutes, megabytes, messages, etc.). |
-| Quantity | The number of units used or consumed for this record. |
-
-### Chat operational logs
-
-| Property | Description |
-| -- | |
-| TimeGenerated | The timestamp (UTC) of when the log was generated. |
-| OperationName | The operation associated with log record. |
-| CorrelationID | The ID for correlated events. Can be used to identify correlated events between multiple tables. |
-| OperationVersion | The api-version associated with the operation, if the operationName was performed using an API. If there's no API that corresponds to this operation, the version represents the version of that operation in case the properties associated with the operation change in the future. |
-| Category | The log category of the event. Category is the granularity at which you can enable or disable logs on a particular resource. The properties that appear within the properties blob of an event are the same within a particular log category and resource type. |
-| ResultType | The status of the operation. |
-| ResultSignature | The sub status of the operation. If this operation corresponds to a REST API call, this field is the HTTP status code of the corresponding REST call. |
-| ResultDescription | The static text description of this operation. |
-| DurationMs | The duration of the operation in milliseconds. |
-| CallerIpAddress | The caller IP address, if the operation corresponds to an API call that would come from an entity with a publicly available IP address. |
-| Level | The severity level of the event. |
-| URI | The URI of the request. |
-| UserId | The request sender's user ID. |
-| ChatThreadId | The chat thread ID associated with the request. |
-| ChatMessageId | The chat message ID associated with the request. |
-| SdkType | The Sdk type used in the request. |
-| PlatformType | The platform type used in the request. |
-
-### SMS operational logs
-
-| Property | Description |
-| -- | |
-| TimeGenerated | The timestamp (UTC) of when the log was generated. |
-| OperationName | The operation associated with log record. |
-| CorrelationID | The ID for correlated events. Can be used to identify correlated events between multiple tables. |
-| OperationVersion | The api-version associated with the operation, if the operationName was performed using an API. If there's no API that corresponds to this operation, the version represents the version of that operation in case the properties associated with the operation change in the future. |
-| Category | The log category of the event. Category is the granularity at which you can enable or disable logs on a particular resource. The properties that appear within the properties blob of an event are the same within a particular log category and resource type. |
-| ResultType | The status of the operation. |
-| ResultSignature | The sub status of the operation. If this operation corresponds to a REST API call, this field is the HTTP status code of the corresponding REST call. |
-| ResultDescription | The static text description of this operation. |
-| DurationMs | The duration of the operation in milliseconds. |
-| CallerIpAddress | The caller IP address, if the operation corresponds to an API call that would come from an entity with a publicly available IP address. |
-| Level | The severity level of the event. |
-| URI | The URI of the request. |
-| OutgoingMessageLength | The number of characters in the outgoing message. |
-| IncomingMessageLength | The number of characters in the incoming message. |
-| DeliveryAttempts | The number of attempts made to deliver this message. |
-| PhoneNumber | The phone number the SMS message is being sent from. |
-| SdkType | The SDK type used in the request. |
-| PlatformType | The platform type used in the request. |
-| Method | The method used in the request. |
-|NumberType| The type of number, the SMS message is being sent from. It can be either **LongCodeNumber** or **ShortCodeNumber** |
-
-### Authentication operational logs
-
-| Property | Description |
-| -- | |
-| TimeGenerated | The timestamp (UTC) of when the log was generated. |
-| OperationName | The operation associated with log record. |
-| CorrelationID | The ID for correlated events. Can be used to identify correlated events between multiple tables. |
-| OperationVersion | The `api-version` associated with the operation, if the `operationName` was performed using an API. If there's no API that corresponds to this operation, the version represents the version of that operation in case the properties associated with the operation change in the future. |
-| Category | The log category of the event. Category is the granularity at which you can enable or disable logs on a particular resource. The properties that appear within the properties blob of an event are the same within a particular log category and resource type. |
-| ResultType | The status of the operation. |
-| ResultSignature | The sub-status of the operation. If this operation corresponds to a REST API call, this field is the HTTP status code of the corresponding REST call. |
-| DurationMs | The duration of the operation in milliseconds. |
-| CallerIpAddress | The caller IP address, if the operation corresponds to an API call that would come from an entity with a publicly available IP address. |
-| Level | The severity level of the event. |
-| URI | The URI of the request. |
-| SdkType | The SDK type used in the request. |
-| PlatformType | The platform type used in the request. |
-| Identity | The identity of Azure Communication Services or Teams user related to the operation. |
-| Scopes | The Communication Services scopes present in the access token. |
-
-### Network Traversal operational logs
-
-| Dimension | Description |
-||-|
-| TimeGenerated | The timestamp (UTC) of when the log was generated. |
-| OperationName | The operation associated with log record. |
-| CorrelationId | The ID for correlated events. Can be used to identify correlated events between multiple tables. |
-| OperationVersion | The API-version associated with the operation or version of the operation (if there's no API version). |
-| Category | The log category of the event. Logs with the same log category and resource type will have the same properties fields. |
-| ResultType | The status of the operation (for example, Succeeded or Failed). |
-| ResultSignature | The sub status of the operation. If this operation corresponds to a REST API call, this field is the HTTP status code of the corresponding REST call. |
-| DurationMs | The duration of the operation in milliseconds. |
-| Level | The severity level of the operation. |
-| URI | The URI of the request. |
-| Identity | The request sender's identity, if provided. |
-| SdkType | The SDK type being used in the request. |
-| PlatformType | The platform type being used in the request. |
-| RouteType | The routing methodology to where the ICE server will be located from the client (for example, Any or Nearest). |
--
-### Email Send Mail operational logs
-
-| Property | Description |
-| -- | |
-| TimeGenerated | The timestamp (UTC) of when the log was generated. |
-| Location | The region where the operation was processed. |
-| OperationName | The operation associated with log record. |
-| OperationVersion | The `api-version` associated with the operation, if the `operationName` was performed using an API. If there's no API that corresponds to this operation, the version represents the version of that operation in case the properties associated with the operation change in the future. |
-| Category | The log category of the event. Category is the granularity at which you can enable or disable logs on a particular resource. The properties that appear within the properties blob of an event are the same within a particular log category and resource type. |
-| CorrelationID | The ID for correlated events. Can be used to identify correlated events between multiple tables. For all Email operational logs, the CorrelationId is mapped to the MessageId, which is returned from a successful SendMail request. |
-| Size | Represents the total size in megabytes of the email body, subject, headers and attachments. |
-| ToRecipientsCount | The total # of unique email addresses on the To line. |
-| CcRecipientsCount | The total # of unique email addresses on the Cc line. |
-| BccRecipientsCount | The total # of unique email addresses on the Bcc line. |
-| UniqueRecipientsCount | This is the deduplicated total recipient count for the To, Cc and Bcc address fields. |
-| AttachmentsCount | The total # of attachments. |
--
-### Email Status Update operational logs
-
-| Property | Description |
-| -- | |
-| TimeGenerated | The timestamp (UTC) of when the log was generated. |
-| Location | The region where the operation was processed. |
-| OperationName | The operation associated with log record. |
-| OperationVersion | The `api-version` associated with the operation, if the `operationName` was performed using an API. If there's no API that corresponds to this operation, the version represents the version of that operation in case the properties associated with the operation change in the future. |
-| Category | The log category of the event. Category is the granularity at which you can enable or disable logs on a particular resource. The properties that appear within the properties blob of an event are the same within a particular log category and resource type. |
-| CorrelationID | The ID for correlated events. Can be used to identify correlated events between multiple tables. For all Email operational logs, the CorrelationId is mapped to the MessageId, which is returned from a successful SendMail request. |
-| RecipientId | The email address for the targeted recipient. If this is a message-level event, the property will be empty. |
-| DeliveryStatus | The terminal status of the message. |
-
-### Email User Engagement operational logs
-
-| Property | Description |
-| -- | |
-| TimeGenerated | The timestamp (UTC) of when the log was generated. |
-| Location | The region where the operation was processed. |
-| OperationName | The operation associated with log record. |
-| OperationVersion | The `api-version` associated with the operation, if the `operationName` was performed using an API. If there's no API that corresponds to this operation, the version represents the version of that operation in case the properties associated with the operation change in the future. |
-| Category | The log category of the event. Category is the granularity at which you can enable or disable logs on a particular resource. The properties that appear within the properties blob of an event are the same within a particular log category and resource type. |
-| CorrelationID | The ID for correlated events. Can be used to identify correlated events between multiple tables. For all Email operational logs, the CorrelationId is mapped to the MessageId, which is returned from a successful SendMail request. |
-| RecipientId | The email address for the targeted recipient. If this is a message-level event, the property will be empty. |
-| EngagementType | The type of user engagement being tracked. |
-| EngagementContext | The context represents what the user interacted with. |
-| UserAgent | The user agent string from the client. |
--
-### Call Automation operational logs
-
-| Property | Description |
-| -- | |
-| TimeGenerated | The timestamp (UTC) of when the log was generated. |
-| OperationName | The operation associated with log record. |
-| CorrelationID | The identifier to identify a call and correlate events for a unique call. |
-| OperationVersion | The `api-version` associated with the operation, if the `operationName` was performed using an API. If there's no API that corresponds to this operation, the version represents the version of that operation in case the properties associated with the operation change in the future. |
-| Category | The log category of the event. Category is the granularity at which you can enable or disable logs on a particular resource. The properties that appear within the properties blob of an event are the same within a particular log category and resource type. |
-| ResultType | The status of the operation. |
-| ResultSignature | The sub status of the operation. If this operation corresponds to a REST API call, this field is the HTTP status code of the corresponding REST call. |
-| DurationMs | The duration of the operation in milliseconds. |
-| CallerIpAddress | The caller IP address, if the operation corresponds to an API call that would come from an entity with a publicly available IP address. |
-| Level | The severity level of the event. |
-| URI | The URI of the request. |
-| CallConnectionId | ID representing the call connection, if available. This ID is different for each participant and is used to identify their connection to the call. |
-| ServerCallId | A unique ID to identify a call. |
-| SDKVersion | SDK version used for the request. |
-| SDKType | The SDK type used for the request. |
-| ParticipantId | ID to identify the call participant that made the request. |
-| SubOperationName | Used to identify the sub type of media operation (play, recognize) |
communication-services Sub Eligibility Number Capability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/numbers/sub-eligibility-number-capability.md
The following tables summarize current availability:
| USA & Puerto Rico | Toll-Free | General Availability | General Availability | General Availability | General Availability\* | | USA & Puerto Rico | Local | - | - | General Availability | General Availability\* | | USA | Short-Codes\** | General Availability | General Availability | - | - |
+| UK | Toll-Free | - | - | General Availability | General Availability\* |
+| UK | Local | - | - |
+| Canada | Toll-Free | General Availability | General Availability | General Availability | General Availability\* |
+| Canada | Local | - | - | General Availability | General Availability\* |
| Germany, Netherlands, United Kingdom, Australia, France, Switzerland, Sweden, Italy, Spain, Denmark, Ireland, Portugal, Poland, Austria, Lithuania, Latvia, Estonia | Alphanumeric Sender ID\** | Public Preview | - | - | - | \* Please refer to [Inbound calling capabilities page](../telephony/inbound-calling-capabilities.md) for details.
The following tables summarize current availability:
| Number | Type | Send SMS | Receive SMS | Make Calls | Receive Calls | | :-- | :- | :- | :- | : | : | | UK | Toll-Free | - | - | General Availability | General Availability\* |
-| UK | Local | - | - |
+| UK | Local | - | - | General Availability | General Availability\* |
| USA & Puerto Rico | Toll-Free | General Availability | General Availability | General Availability | General Availability\* | | USA & Puerto Rico | Local | - | - | General Availability | General Availability\* | | Canada | Toll-Free | General Availability | General Availability | General Availability | General Availability\* |
The following tables summarize current availability:
| Number | Type | Send SMS | Receive SMS | Make Calls | Receive Calls | | :- | :-- | :- | :- | :- | : | | Slovakia | Local | - | - | Public Preview | Public Preview\* |
+| Slovakia | Toll-Free | - | - | Public Preview | Public Preview\* |
\* Please refer to [Inbound calling capabilities page](../telephony/inbound-calling-capabilities.md) for details.
The following tables summarize current availability:
| Number | Type | Send SMS | Receive SMS | Make Calls | Receive Calls | | :- | :-- | :- | :- | :- | : | | Germany | Local | - | - | Public Preview | Public Preview\* |
+| Germany | Toll-Free | - | - | Public Preview | Public Preview\* |
| Germany, Netherlands, United Kingdom, Australia, France, Switzerland, Sweden, Italy, Spain, Denmark, Ireland, Portugal, Poland, Austria, Lithuania, Latvia, Estonia | Alphanumeric Sender ID \** | Public Preview | - | - | - | \* Please refer to [Inbound calling capabilities page](../telephony/inbound-calling-capabilities.md) for details.
communication-services Pstn Pricing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/pstn-pricing.md
All prices shown below are in USD.
|Number type |Monthly fee | |--|--| |Geographic |USD 1.00/mo |
+|Toll-Free |USD 18.00/mo |
### Usage charges |Number type |To make calls* |To receive calls| |--|--|| |Geographic |Starting at USD 0.0234/min |USD 0.0100/min |
+|Toll-free |Starting at USD 0.0234/min |Starting at USD 0.0401/min |
\* For destination-specific pricing for making outbound calls, refer to details [here](https://github.com/Azure/Communication/blob/master/pricing/communication-services-pstn-rates.csv)
All prices shown below are in USD.
|Number type |Monthly fee | |--|--| |Geographic |USD 1.00/mo |
+|Toll-Free |USD 25.00/mo |
### Usage charges |Number type |To make calls* |To receive calls| |--|--|| |Geographic |Starting at USD 0.0270/min |USD 0.0100/min |
+|Toll-free |Starting at USD 0.0270/min |Starting at USD 0.1151/min |
\* For destination-specific pricing for making outbound calls, refer to details [here](https://github.com/Azure/Communication/blob/master/pricing/communication-services-pstn-rates.csv)
communication-services Troubleshooting Info https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/troubleshooting-info.md
# Troubleshooting in Azure Communication Services
-This document will help you troubleshoot issues that you may experience within your Communication Services solution. If you're troubleshooting SMS, you can [enable delivery reporting with Event Grid](../quickstarts/sms/handle-sms-events.md) to capture SMS delivery details.
+This document helps you troubleshoot issues that you may experience within your Communication Services solution. If you're troubleshooting SMS, you can [enable delivery reporting with Event Grid](../quickstarts/sms/handle-sms-events.md) to capture SMS delivery details.
## Getting help
-We encourage developers to submit questions, suggest features, and report problems as issues. To aid in doing this we have a [dedicated support and help options page](../support.md) which lists your options for support.
+We encourage developers to submit questions, suggest features, and report problems as issues. To aid in doing this, we have a [dedicated support and help options page](../support.md) that lists your support options.
To help you troubleshoot certain types of issues, you may be asked for any of the following pieces of information:
To help you troubleshoot certain types of issues, you may be asked for any of th
* **Short Code Program Brief ID**: This ID is used to identify a short code program brief application. * **Email message ID**: This ID is used to identify Send Email requests. * **Correlation ID**: This ID is used to identify requests made using Call Automation.
-* **Call logs**: These logs contain detailed information that can be used to troubleshoot calling and network issues.
+* **Call logs**: These logs contain detailed information that's used to troubleshoot calling and network issues.
Also take a look at our [service limits](service-limits.md) documentation for more information on throttling and limitations.
The MS-CV ID can be accessed by configuring diagnostics in the `clientOptions` o
### Client options example
-The following code snippets demonstrate diagnostics configuration. When the SDKs are used with diagnostics enabled, diagnostics details will be emitted to the configured event listener:
+The following code snippets demonstrate diagnostics configuration. When the SDKs are used with diagnostics enabled, diagnostic details are emitted to the configured event listener:
# [C#](#tab/csharp) ```
chat_client = ChatClient(
## Access IDs required for Call Automation
-When troubleshooting issues with the Call Automation SDK, like call management or recording problems, you'll need to collect the IDs that help identify the failing call or operation. You can provide either of the two IDs mentioned here.
+When troubleshooting issues with the Call Automation SDK, like call management or recording problems, you need to collect the IDs that help identify the failing call or operation. You can provide either of the two IDs mentioned here.
- From the header of API response, locate the field `X-Ms-Skype-Chain-Id`. ![Screenshot of response header showing X-Ms-Skype-Chain-Id.](media/troubleshooting/response-header.png)
In addition to one of these IDs, please provide the details on the failing use c
## Access your client call ID
-When troubleshooting voice or video calls, you may be asked to provide a `call ID`. This can be accessed via the `id` property of the `call` object:
+When troubleshooting voice or video calls, you may be asked to provide a `call ID`. This value can be accessed via the `id` property of the `call` object:
# [JavaScript](#tab/javascript) ```javascript
async function main() {
}, { enableDeliveryReport: true // Optional parameter });
-console.log(result); // your message ID will be in the result
+console.log(result); // your message ID is in the result
} ```
The program brief ID can be found on the [Azure portal](https://portal.azure.com
## Access your email operation ID
-When troubleshooting send email or email message status requests, you may be asked to provide an `operation ID`. This can be accessed in the response:
+When troubleshooting send email or email message status requests, you may be asked to provide an `operation ID`. This value can be accessed in the response:
# [.NET](#tab/dotnet) ```csharp
const callClient = new CallClient();
``` You can use AzureLogger to redirect the logging output from Azure SDKs by overriding the `AzureLogger.log` method:
-This may be useful if you want to redirect logs to a location other than console.
+This approach may be useful if you want to redirect logs to a location other than the console.
```javascript import { AzureLogger } from '@azure/logger';
When developing for iOS, your logs are stored in `.blog` files. Note that you ca
These can be accessed by opening Xcode. Go to Windows > Devices and Simulators > Devices. Select your device. Under Installed Apps, select your application and click on "Download container".
-This will give you a `xcappdata` file. Right-click on this file and select ΓÇ£Show package contentsΓÇ¥. You'll then see the `.blog` files that you can then attach to your Azure support request.
+This process gives you an `.xcappdata` file. Right-click the file and select "Show package contents". You'll then see the `.blog` files, which you can attach to your Azure support request.
# [Android](#tab/android) When developing for Android, your logs are stored in `.blog` files. Note that you can't view the logs directly because they're encrypted.
-On Android Studio, navigate to the Device File Explorer by selecting View > Tool Windows > Device File Explorer from both the simulator and the device. The `.blog` file will be located within your application's directory, which should look something like `/data/data/[app_name_space:com.contoso.com.acsquickstartapp]/files/acs_sdk.blog`. You can attach this file to your support request.
+On Android Studio, navigate to the Device File Explorer by selecting View > Tool Windows > Device File Explorer from both the simulator and the device. The `.blog` file is located within your application's directory, which should look something like `/data/data/[app_name_space:com.contoso.com.acsquickstartapp]/files/acs_sdk.blog`. You can attach this file to your support request.
On Android Studio, navigate to the Device File Explorer by selecting View > Tool
When developing for Windows, your logs are stored in `.blog` files. Note that you can't view the logs directly because they're encrypted.
-These can be accessed by looking at where your app is keeping its local data. There are many ways to figure out where a UWP app keeps its local data, the following steps are just one of these ways:
+These files are accessed by looking at where your app keeps its local data. There are many ways to figure out where a UWP app keeps its local data; the following steps are one of them:
1. Open a Windows Command Prompt (Windows Key + R) 2. Type `cmd.exe` 3. Type `where /r %USERPROFILE%\AppData acs*.blog`
To verify your Teams License eligibility via Teams web client, follow the steps
1. If the authentication is successful and you remain in the https://teams.microsoft.com/ domain, then your Teams License is eligible. If authentication fails or you're redirected to the https://teams.live.com/v2/ domain, then your Teams License isn't eligible to use Azure Communication Services support for Teams users. #### Checking your current Teams license via Microsoft Graph API
-You can find your current Teams license using [licenseDetails](/graph/api/resources/licensedetails) Microsoft Graph API that returns licenses assigned to a user. Follow the steps below to use the Graph Explorer tool to view licenses assigned to a user:
+You can find your current Teams license using the [licenseDetails](/graph/api/resources/licensedetails) Microsoft Graph API, which returns the licenses assigned to a user. Follow the steps below to use the Graph Explorer tool to view licenses assigned to a user:
1. Open your browser and navigate to [Graph Explorer](https://developer.microsoft.com/graph/graph-explorer) 1. Sign in to Graph Explorer using the credentials.
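For illustration, the following sketch calls the same API directly with `fetch`; it assumes you already hold a Microsoft Graph access token (the `graphToken` variable is a placeholder you supply):

```javascript
// Sketch: list a user's assigned licenses via Microsoft Graph.
// Assumes graphToken is a valid Graph access token for the signed-in user.
async function getLicenseDetails(graphToken) {
    const response = await fetch('https://graph.microsoft.com/v1.0/me/licenseDetails', {
        headers: { 'Authorization': `Bearer ${graphToken}` }
    });
    const { value } = await response.json();
    return value; // each entry includes a skuPartNumber and its servicePlans
}
```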
The below error codes are exposed by Call Automation SDK.
|--|--|--| | 400 | Bad request | The input request is invalid. Look at the error message to determine which input is incorrect. | 400 | Play Failed | Ensure your audio file is WAV, 16KHz, Mono and make sure the file url is publicly accessible. |
-| 400 | Recognize Failed | Check the error message. The message will highlight if this is due to timeout being reached or if operation was canceled. For more information about the error codes and messages you can check our how-to guide for [gathering user input](../how-tos/call-automation/recognize-action.md#event-codes).
+| 400 | Recognize Failed | Check the error message. The message indicates whether the failure was due to the timeout being reached or the operation being canceled. For more information about the error codes and messages, check our how-to guide for [gathering user input](../how-tos/call-automation/recognize-action.md#event-codes).
| 401 | Unauthorized | HMAC authentication failed. Verify whether the connection string used to create CallAutomationClient is correct. | 403 | Forbidden | Request is forbidden. Make sure that you can have access to the resource you are trying to access. | 404 | Resource not found | The call you are trying to act on doesn't exist. For example, transferring a call that has already disconnected.
The below error codes are exposed by Call Automation SDK.
| 502 | Bad gateway | Retry after a delay with a fresh http client. Consider the below tips when troubleshooting certain issues. -- Your application is not getting IncomingCall Event Grid event: Make sure the application endpoint has been [validated with Event Grid](../../event-grid/webhook-event-delivery.md) at the time of creating event subscription. The provisioning status for your event subscription will be marked as succeeded if the validation was successful.
+- Your application isn't getting the IncomingCall Event Grid event: Make sure the application endpoint has been [validated with Event Grid](../../event-grid/webhook-event-delivery.md) at the time of creating the event subscription. The provisioning status for your event subscription is marked as succeeded if the validation was successful.
- Getting the error 'The field CallbackUri is invalid': Call Automation does not support HTTP endpoints. Make sure the callback url you provide supports HTTPS. - PlayAudio action does not play anything: Currently only Wave file (.wav) format is supported for audio files. The audio content in the wave file must be mono (single-channel), 16-bit samples with a 16,000 (16KHz) sampling rate. - Actions on PSTN endpoints aren't working: CreateCall, Transfer, AddParticipant and Redirect to phone numbers require you to set the SourceCallerId in the action request. Unless you are using Direct Routing, the source caller ID should be a phone number owned by your Communication Services resource for the action to succeed.
The Azure Communication Services SMS SDK uses the following error codes to help
## Related information-- [Logs and diagnostics](logging-and-diagnostics.md)
+- Access logs for [voice and video](./analytics/logs/voice-and-video-logs.md), [chat](./analytics/logs/chat-logs.md), [email](./analytics/logs/email-logs.md), [network traversal](./analytics/logs/network-traversal-logs.md), [recording](./analytics/logs/recording-logs.md), [SMS](./analytics/logs/sms-logs.md) and [call automation](./analytics/logs/call-automation-logs.md).
- [Metrics](metrics.md) - [Service limits](service-limits.md)
communication-services Pre Call Diagnostics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/voice-video-calling/pre-call-diagnostics.md
When the Pre-Call diagnostic test runs, behind the scenes it uses calling minute
- [Check your network condition with the diagnostics tool](../developer-tools/network-diagnostic.md) - [Explore User-Facing Diagnostic APIs](../voice-video-calling/user-facing-diagnostics.md) - [Enable Media Quality Statistics in your application](../voice-video-calling/media-quality-sdk.md)-- [Consume call logs with Azure Monitor](../analytics/call-logs-azure-monitor.md)
+- [Consume call logs with Azure Monitor](../analytics/logs/voice-and-video-logs.md)
communication-services Spotlight https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/how-tos/calling-sdk/spotlight.md
+
+ Title: Spotlight states
+
+description: Use Azure Communication Services SDKs to send spotlight state.
+++++ Last updated : 03/01/2023++++
+# Spotlight states
++
+In this article, you'll learn how to implement the Microsoft Teams spotlight capability with Azure Communication Services Calling SDKs. This capability allows users in a call or meeting to pin and unpin videos for everyone.
+
+Because a participant's video stream resolution increases when they're spotlighted, note that the settings made on [Video Constraints](../../concepts/voice-video-calling/video-constraints.md) also apply to spotlight.
++
+## Prerequisites
+
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+- A deployed Communication Services resource. [Create a Communication Services resource](../../quickstarts/create-communication-resource.md).
+- A user access token to enable the calling client. For more information, see [Create and manage access tokens](../../quickstarts/identity/access-tokens.md).
+- Optional: Complete the quickstart to [add voice calling to your application](../../quickstarts/voice-video-calling/getting-started-with-calling.md)
+
+Communication Services or Microsoft 365 users can call the spotlight APIs based on role type and conversation type.
+
+**In a one-to-one call or group call scenario, the following APIs are supported for both Communication Services and Microsoft 365 users**
+
+|APIs| Organizer | Presenter | Attendee |
+|-|--|--|--|
+| startSpotlight | ✔️ | ✔️ | ✔️ |
+| stopSpotlight | ✔️ | ✔️ | ✔️ |
+| stopAllSpotlight | ✔️ | ✔️ | ✔️ |
+| getSpotlightedParticipants | ✔️ | ✔️ | ✔️ |
+
+**In a meeting scenario, the following APIs are supported for both Communication Services and Microsoft 365 users**
+
+|APIs| Organizer | Presenter | Attendee |
+|-|--|--|--|
+| startSpotlight | ✔️ | ✔️ | |
+| stopSpotlight | ✔️ | ✔️ | ✔️ |
+| stopAllSpotlight | ✔️ | ✔️ | |
+| getSpotlightedParticipants | ✔️ | ✔️ | ✔️ |
++
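+The following sketch shows how these APIs can be called from the JavaScript Calling SDK; it assumes an established `call` object and a `remoteParticipant` taken from `call.remoteParticipants`, and the exact signatures may vary by SDK version:
+
+```javascript
+// Sketch: spotlight APIs through the Calling SDK feature API.
+const { Features } = require('@azure/communication-calling');
+
+async function spotlightDemo(call, remoteParticipant) {
+    const spotlightFeature = call.feature(Features.Spotlight);
+
+    // Pin a participant's video for everyone in the call or meeting.
+    await spotlightFeature.startSpotlight([remoteParticipant.identifier]);
+
+    // See who is currently spotlighted.
+    console.log(spotlightFeature.getSpotlightedParticipants());
+
+    // Unpin a specific participant, or clear all spotlights where your role allows it.
+    await spotlightFeature.stopSpotlight([remoteParticipant.identifier]);
+    await spotlightFeature.stopAllSpotlight();
+}
+```
+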
+## Next steps
+- [Learn how to manage calls](./manage-calls.md)
+- [Learn how to manage video](./manage-video.md)
+- [Learn how to record calls](./record-calls.md)
+- [Learn how to transcribe calls](./call-transcription.md)
communication-services Archive Chat Threads https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/how-tos/chat-sdk/archive-chat-threads.md
+
+ Title: Archive your chat threads
+
+description: Learn how to archive chat threads and messages with your own storage.
+++++ Last updated : 03/24/2023+++++
+# Archiving chat threads into your preferred storage solution
+
+In this guide, learn how to move chat messages into your own storage in near real time, or move entire chat threads once conversations are complete. You can maintain an archive of chat threads or messages for compliance reasons, to integrate with Azure OpenAI, or both.
+
+## Prerequisites
+
+- An Azure account with an active subscription.
+- An active Communication Services resource and connection string. [Create a Communication Services resource](../../quickstarts/create-communication-resource.md).
+- A storage account. This guide uses Azure Blob Storage as an example; you can use the portal to set up an [account](../../../event-grid/blob-event-quickstart-portal.md), or use any other storage option that you prefer.
+- If you want to archive messages in near real time, enable Azure Event Grid, which is a paid service (this prerequisite applies only to option 2).
+
+## About Event Grid
+
+[Event Grid](../../../event-grid/overview.md) is a cloud-based eventing service. To archive messages in near real time, you subscribe to [communication service events](../../../event-grid/event-schema-communication-services.md), which are triggered as messages are sent. Typically, you send events to an endpoint that processes the event data and takes actions.
+
+## Set up the environment
+
+To set up the environment that you use to generate and receive events, take the steps in the following sections.
+
+### Register an Event Grid resource provider
+
+If you haven't previously used Event Grid in your Azure subscription, you might need to register your Event Grid resource provider. To register the provider, follow these steps:
+
+1. Go to the Azure portal.
+1. On the left menu, select **Subscriptions**.
+1. Select the subscription that you use for Event Grid.
+1. On the left menu, under **Settings**, select **Resource providers**.
+1. Find **Microsoft.EventGrid**.
+1. If your resource provider isn't registered, select **Register**.
+
+It might take a moment for the registration to finish. Select **Refresh** to update the status. When **Registered** appears under **Status**, you're ready to continue.
+
+### Deploy the Event Grid viewer
+
+You need to use an Event Grid viewer to view events in near real time. The viewer provides the user with the experience of a real-time feed.
+
+There are two methods for archiving chat threads. You can choose to archive messages when the thread is inactive or in near real time.
+
+## Option 1: Archiving inactive conversations using a backend application
+
+This option is best suited when your chat volume is high and multiple parties are involved.
+
+Create a backend application that runs jobs to move chat threads into your own storage. We recommend archiving when a thread is no longer active, that is, when the conversation with the customer is complete.
+
+The backend application runs a job that performs the following steps (a sketch follows the list):
+
+1. [List](../../quickstarts/chat/get-started.md?tabs=windows&pivots=platform-azcli#list-chat-messages-in-a-chat-thread) the messages in the chat thread that you wish to archive
+2. Write the chat thread in the format that you wish to store it in, such as JSON or CSV
+3. Copy the thread as a blob into Azure Blob Storage
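+
+A minimal sketch of such a job, assuming the JavaScript Chat and Blob Storage SDKs (the container name `chat-archive` and the `STORAGE_CONNECTION_STRING` environment variable are placeholders):
+
+```javascript
+// Sketch: archive an inactive chat thread as a JSON blob.
+const { ChatClient } = require('@azure/communication-chat');
+const { AzureCommunicationTokenCredential } = require('@azure/communication-common');
+const { BlobServiceClient } = require('@azure/storage-blob');
+
+async function archiveThread(endpoint, userToken, threadId) {
+    const chatClient = new ChatClient(endpoint, new AzureCommunicationTokenCredential(userToken));
+    const threadClient = chatClient.getChatThreadClient(threadId);
+
+    // 1. List every message in the thread.
+    const messages = [];
+    for await (const message of threadClient.listMessages()) {
+        messages.push(message);
+    }
+
+    // 2. Serialize the thread in the format you want to store (JSON here).
+    const archive = JSON.stringify(messages, null, 2);
+
+    // 3. Copy the serialized thread as a blob into Azure Blob Storage.
+    const blobService = BlobServiceClient.fromConnectionString(process.env.STORAGE_CONNECTION_STRING);
+    const blob = blobService.getContainerClient('chat-archive').getBlockBlobClient(`${threadId}.json`);
+    await blob.upload(archive, Buffer.byteLength(archive));
+}
+```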
+
+## Option 2: Archiving chat messages in real-time
+
+This option is suited when chat volume is low, because messages are archived as conversations happen in real time.
++
+Follow these steps for archiving messages:
+
+- Subscribe to chat events delivered through Azure Event Grid webhooks. The Azure Communication Services Chat service supports the following [events](../../concepts/chat/concepts.md#real-time-notifications) for real-time notifications. The following events are recommended: the Message Received [event](../../../event-grid/communication-services-chat-events.md#microsoftcommunicationchatmessagereceived-event), the Message Edited [event](../../../event-grid/communication-services-chat-events.md#microsoftcommunicationchatmessageedited-event), and the Message Deleted [event](../../../event-grid/communication-services-chat-events.md#microsoftcommunicationchatmessagedeleted-event).
+- Validate the [events](../../how-tos/event-grid/view-events-request-bin.md) by configuring your resource to receive them.
+- Test your Event Grid handler [locally](../../how-tos/event-grid/local-testing-event-grid.md) to ensure that you're receiving the events that you need for archiving. A sketch of such a handler follows the note below.
+
+> [!Note]
+> Event Grid is a paid service; you're charged for [events](https://azure.microsoft.com/pricing/details/event-grid/).
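+
+For illustration, here's a hedged sketch of an Express webhook handler that answers the Event Grid subscription validation handshake and archives `ChatMessageReceived` events to Blob Storage. The endpoint path, container name, and environment variable are placeholders, and the event data fields should be verified against the chat event schema:
+
+```javascript
+// Sketch: Event Grid webhook that archives incoming chat messages.
+const express = require('express');
+const { BlobServiceClient } = require('@azure/storage-blob');
+
+const app = express();
+app.use(express.json());
+
+const blobService = BlobServiceClient.fromConnectionString(process.env.STORAGE_CONNECTION_STRING);
+const container = blobService.getContainerClient('chat-archive'); // placeholder container name
+
+app.post('/api/chat-events', async (req, res) => {
+    for (const event of req.body) {
+        // Event Grid sends a validation event when the subscription is created.
+        if (event.eventType === 'Microsoft.EventGrid.SubscriptionValidationEvent') {
+            return res.json({ validationResponse: event.data.validationCode });
+        }
+        if (event.eventType === 'Microsoft.Communication.ChatMessageReceived') {
+            const blobName = `${event.data.threadId}/${event.data.messageId}.json`;
+            const body = JSON.stringify(event.data);
+            await container.getBlockBlobClient(blobName).upload(body, Buffer.byteLength(body));
+        }
+    }
+    res.sendStatus(200);
+});
+
+app.listen(3000);
+```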
+
+## Next steps
+
+* For an introduction to Azure Event Grid Concepts, see [Concepts in Event Grid](../../../event-grid/concepts.md)
+* Service [Limits](../../concepts/service-limits.md)
+* [Troubleshooting](../../concepts/troubleshooting-info.md)
+* Help and support [options](../../support.md)
+++
+
communication-services Enable User Engagement Tracking https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/email/enable-user-engagement-tracking.md
You can now subscribe to Email User Engagement operational logs - provides infor
## Next steps
-* [Get started with log analytics in Azure Communication Service](../../concepts/logging-and-diagnostics.md)
-
+- Access logs for [Email Communication Service](../../concepts/analytics/logs/email-logs.md).
The following documents may be interesting to you:
communication-services Click To Call Widget https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/tutorials/widgets/click-to-call-widget.md
+
+ Title: Tutorial - Embed a Teams call widget into your web application
+
+description: Learn how to use Azure Communication Services to embed a calling widget into your web application.
+++++ Last updated : 04/17/2023+++++
+# Embed a Teams call widget into your web application
+
+Enable your customers to talk with your support agent on Teams through a call interface directly embedded into your web application.
+
+## Architecture overview
+
+## Prerequisites
+- An Azure account with an active subscription. For details, see [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+- [Visual Studio Code](https://code.visualstudio.com/) on one of the [supported platforms](https://code.visualstudio.com/docs/supporting/requirements#_platforms).
+- An active Communication Services resource and connection string. [Create a Communication Services resource](../../quickstarts/create-communication-resource.md).
+
+## Set up an Azure Function to provide access tokens
+
+Follow instructions from our [trusted user access service tutorial](../trusted-service-tutorial.md) to deploy your Azure Function for access tokens. This service returns an access token that our widget uses to authenticate to Azure Communication Services and start the call to the Teams user we define.
+
+## Set up boilerplate vanilla web application
+
+1. Create an HTML file named `index.html` and add the following code to it:
+
+``` html
+
+ <!DOCTYPE html>
+ <html>
+ <head>
+ <meta charset="utf-8">
+ <title>Call Widget App - Vanilla</title>
+ <link rel="stylesheet" href="style.css">
+ </head>
+ <body>
+ <div id="call-widget">
+ <div id="call-widget-header">
+ <div id="call-widget-header-title">Call Widget App</div>
+            <button class='widget'> ? </button>
+ <div class='callWidget'></div>
+ </div>
+ </div>
+ </body>
+ </html>
+
+```
+
+2. Create a CSS file named `style.css` and add the following code to it:
+
+``` css
+
+ .widget {
+ height: 75px;
+ width: 75px;
+ position: absolute;
+ right: 0;
+ bottom: 0;
+ background-color: blue;
+ margin-bottom: 35px;
+ margin-right: 35px;
+ border-radius: 50%;
+ text-align: center;
+ vertical-align: middle;
+ line-height: 75px;
+ color: white;
+ font-size: 30px;
+ }
+
+ .callWidget {
+ height: 400px;
+ width: 600px;
+ background-color: blue;
+ position: absolute;
+ right: 35px;
+ bottom: 120px;
+ z-index: 10;
+ display: none;
+ border-radius: 5px;
+ border-style: solid;
+ border-width: 5px;
+ }
+
+```
+
+3. Configure the call window to be hidden by default. We show it when the user clicks the button.
+
+``` html
+
+ <script>
+ var open = false;
+ const button = document.querySelector('.widget');
+ const content = document.querySelector('.callWidget');
+ button.addEventListener('click', async function() {
+ if(!open){
+ open = !open;
+ content.style.display = 'block';
+ button.innerHTML = 'X';
+ //Add code to initialize call widget here
+ } else if (open) {
+ open = !open;
+ content.style.display = 'none';
+ button.innerHTML = '?';
+ }
+ });
+
+ async function getAccessToken(){
+ //Add code to get access token here
+ }
+ </script>
+
+```
+
+At this point, we have set up a static HTML page with a button that opens a call widget when clicked. Next, we add the widget script code. It calls our Azure Function to get the access token, then uses it to initialize our Azure Communication Services call client and start the call to the Teams user we define.
+
+## Fetch an access token from your Azure Function
+
+Add the following code to the `getAccessToken()` function:
+
+``` javascript
+
+ async function getAccessToken(){
+ const response = await fetch('https://<your-function-name>.azurewebsites.net/api/GetAccessToken?code=<your-function-key>');
+        const data = await response.json();
+        return data; // return the full payload; the widget code below reads response.user and response.userToken
+ }
+
+```
+You need to add the URL of your Azure Function and its function key. You can find these values in the Azure portal under your Azure Function resource.
++
+## Initialize the call widget
+
+1. Add a script tag to load the call widget script:
+
+``` html
+
+ <script src="https://github.com/ddematheu2/ACS-UI-Library-Widget/releases/download/widget/callComposite.js"></script>
+
+```
+
+We provide a test script hosted on GitHub. For production scenarios, we recommend hosting the script on your own CDN. For more information on how to build your own bundle, see [this article](https://azure.github.io/communication-ui-library/?path=/docs/use-composite-in-non-react-environment--page#build-your-own-composite-js-bundle-files).
+
+2. Add the following code under the button event listener:
+
+``` javascript
+
+ button.addEventListener('click', async function() {
+ if(!open){
+ open = !open;
+ content.style.display = 'block';
+ button.innerHTML = 'X';
+        let response = await getAccessToken();
+ console.log(response);
+ const callAdapter = await callComposite.loadCallComposite(
+ {
+ displayName: "Test User",
+ locator: { participantIds: ['INSERT USER UNIQUE IDENTIFIER FROM MICROSOFT GRAPH']},
+ userId: response.user,
+ token: response.userToken
+ },
+ content,
+ {
+ formFactor: 'mobile',
+ key: new Date()
+ }
+ );
+ } else if (open) {
+ open = !open;
+ content.style.display = 'none';
+ button.innerHTML = '?';
+ }
+ });
+
+```
+
+Add a Microsoft Graph [User](https://learn.microsoft.com/graph/api/resources/user?view=graph-rest-1.0) ID to the `participantIds` array. You can find this value through [Microsoft Graph](https://learn.microsoft.com/graph/api/user-get?view=graph-rest-1.0&tabs=http) or through [Microsoft Graph explorer](https://developer.microsoft.com/graph/graph-explorer) for testing purposes. There you can grab the `id` value from the response.
+
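+If you prefer to look up the ID programmatically, the following sketch queries Microsoft Graph with a plain `fetch` call; it assumes you already hold a Graph access token (`graphToken` is a placeholder):
+
+```javascript
+// Sketch: look up a Teams user's Microsoft Graph id by user principal name.
+async function getGraphUserId(userPrincipalName, graphToken) {
+    const response = await fetch(
+        `https://graph.microsoft.com/v1.0/users/${encodeURIComponent(userPrincipalName)}`,
+        { headers: { 'Authorization': `Bearer ${graphToken}` } }
+    );
+    const user = await response.json();
+    return user.id; // use this value in the participantIds array
+}
+```
+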
+## Run code
+
+Open `index.html` in a browser. When the button is clicked, the code calls our Azure Function to get the access token, then uses it to initialize the Azure Communication Services call client and start the call to the Teams user we define.
cosmos-db Database Encryption At Rest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/database-encryption-at-rest.md
A: Microsoft has a set of internal guidelines for encryption key rotation, which
### Q: Can I use my own encryption keys? A: Yes, this feature is now available for new Azure Cosmos DB accounts, and it must be configured at the time of account creation. See the [Customer-managed Keys](./how-to-setup-cmk.md) document for more information.
+> [!WARNING]
+> The following field names are reserved on Cassandra API tables in accounts using Customer-managed Keys:
+>
+> - `id`
+> - `ttl`
+> - `_ts`
+> - `_etag`
+> - `_rid`
+> - `_self`
+> - `_attachments`
+> - `_epk`
+>
+> When Customer-managed Keys are not enabled, only field names beginning with `__sys_` are reserved.
+ ### Q: What regions have encryption turned on? A: All Azure Cosmos DB regions have encryption turned on for all user data.
cosmos-db How To Setup Customer Managed Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/how-to-setup-customer-managed-keys.md
You must store customer-managed keys in [Azure Key Vault](../key-vault/general/o
> [!NOTE] > Currently, customer-managed keys are available only for new Azure Cosmos DB accounts. You should configure them during account creation.
+> [!WARNING]
+> The following field names are reserved on Cassandra API tables in accounts using Customer-managed Keys:
+>
+> - `id`
+> - `ttl`
+> - `_ts`
+> - `_etag`
+> - `_rid`
+> - `_self`
+> - `_attachments`
+> - `_epk`
+>
+> When Customer-managed Keys are not enabled, only field names beginning with `__sys_` are reserved.
+ ## Prerequisites - An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
cosmos-db Optimize Cost Reads Writes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/optimize-cost-reads-writes.md
The only factor affecting the RU charge of a point read (besides the consistency
| 1 KB | 1 RU | | 100 KB | 10 RUs |
-Because point reads (key/value lookups on the item ID) are the most efficient kind of read, you should make sure your item ID has a meaningful value so you can fetch your items with a point read (instead of a query) when possible.
+Because point reads (key/value lookups on the item ID and partition key) are the most efficient kind of read, you should make sure your item ID has a meaningful value so you can fetch your items with a point read (instead of a query) when possible.
+
+> [!NOTE]
+> In the API for NoSQL, point reads can only be made using the REST API or SDKs. Queries that filter on one item's ID and partition key aren't considered a point read. To see an example using the .NET SDK, see [read an item in Azure Cosmos DB for NoSQL.](./nosql/how-to-dotnet-read-item.md)
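+
+A hedged sketch of the same operation with the JavaScript SDK follows; the database, container, item ID, and partition key value are placeholders:
+
+```javascript
+// Sketch: point read, supplying both the item ID and the partition key value.
+const { CosmosClient } = require('@azure/cosmos');
+
+const client = new CosmosClient(process.env.COSMOS_CONNECTION_STRING);
+const container = client.database('mydb').container('items');
+
+async function pointRead() {
+    const { resource: item, requestCharge } = await container.item('item-id', 'partition-key-value').read();
+    console.log(`Read item ${item.id} for ${requestCharge} RUs`);
+}
+```
+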
### Queries
cosmos-db Request Units https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/request-units.md
Azure Cosmos DB supports many APIs, such as SQL, MongoDB, Cassandra, Gremlin, an
Azure Cosmos DB normalizes the cost of all database operations using Request Units (or RUs, for short). Request unit is a performance currency abstracting the system resources such as CPU, IOPS, and memory that are required to perform the database operations supported by Azure Cosmos DB.
-The cost to do a point read (fetching a single item by its ID and partition key value) for a 1-KB item is one Request Unit (or one RU). All other database operations are similarly assigned a cost using RUs. No matter which API you use to interact with your Azure Cosmos DB container, RUs measure the actual costs of using that API. Whether the database operation is a write, point read, or query, costs are always measured in RUs.
+The cost to do a [point read](optimize-cost-reads-writes.md#point-reads) (fetching a single item by its ID and partition key value) for a 1-KB item is one Request Unit (or one RU). All other database operations are similarly assigned a cost using RUs. No matter which API you use to interact with your Azure Cosmos DB container, RUs measure the actual costs of using that API. Whether the database operation is a write, point read, or query, costs are always measured in RUs.
> [!VIDEO https://learn.microsoft.com/_themes/docs.theme/master/en-us/_themes/global/video-embed.html?id=772fba63-62c7-488c-acdb-a8f686a3b5f4]
cost-management-billing Create Enterprise Subscription https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/create-enterprise-subscription.md
Previously updated : 03/29/2023 Last updated : 04/18/2023
You need the following permissions to create subscriptions for an EA:
## Create an EA subscription
-Use the following information to create an EA subscription.
+An account owner uses the following information to create an EA subscription.
+
+>[!NOTE]
+> If you want to create an Enterprise Dev/Test subscription, an enterprise administrator must enable account owners to create them. Otherwise, the option to create them isn't available. To enable the dev/test offer for an enrollment, see [Enable the enterprise dev/test offer](direct-ea-administration.md#enable-the-enterprise-devtest-offer).
1. Sign in to the [Azure portal](https://portal.azure.com). 1. Navigate to **Subscriptions** and then select **Add**.
cost-management-billing Direct Ea Administration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/direct-ea-administration.md
Title: EA Billing administration on the Azure portal
description: This article explains the common tasks that an enterprise administrator accomplishes in the Azure portal. Previously updated : 04/06/2023 Last updated : 04/18/2023
Enterprise agreements and the customers accessing the agreements can have multip
1. Select **Billing scopes** from the navigation menu and then select the billing account that you want to work with. :::image type="content" source="./media/direct-ea-administration/select-billing-scope.png" alt-text="Screenshot showing select a billing account." lightbox="./media/direct-ea-administration/select-billing-scope.png" :::
+## Activate your enrollment
+
+To activate your enrollment, the initial enterprise administrator signs in to the Azure portal using their work, school, or Microsoft account.
+If you've been set up as the enterprise administrator, you don't need to receive the activation email. You can sign in to the Azure portal and activate the enrollment.
+
+### To activate an enrollment
+
+1. Sign in to the [Azure portal](https://portal.azure.com/#blade/Microsoft_Azure_GTM/ModernBillingMenuBlade/AllBillingScopes).
+1. Search for **Cost Management + Billing** and select it.
+ :::image type="content" source="./media/direct-ea-administration/search-cost-management.png" alt-text="Screenshot showing search for Cost Management + Billing." lightbox="./media/direct-ea-administration/search-cost-management.png" :::
+1. Select the enrollment that you want to activate.
+ :::image type="content" source="./media/direct-ea-administration/select-billing-scope.png" alt-text="Screenshot showing select a billing account." lightbox="./media/direct-ea-administration/select-billing-scope.png" :::
+1. Once you select the enrollment, its status changes to active.
+1. You can view the enrollment status under **Essentials** in the Summary view.
+ ## View enrollment details An Azure enterprise administrator (EA admin) can view and manage enrollment properties and policy to ensure that enrollment settings are correctly configured.
data-factory Connector Sap Change Data Capture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-sap-change-data-capture.md
Previously updated : 11/17/2022 Last updated : 04/14/2023 # Transform data from an SAP ODP source using the SAP CDC connector in Azure Data Factory or Azure Synapse Analytics
To prepare an SAP CDC dataset, follow [Prepare the SAP CDC source dataset](sap-c
SAP CDC datasets can be used as source in mapping data flow. Since the raw SAP ODP change feed is difficult to interpret and to correctly update to a sink, mapping data flow takes care of this by evaluating technical attributes provided by the ODP framework (e.g., ODQ_CHANGEMODE) automatically. This allows users to concentrate on the required transformation logic without having to bother with the internals of the SAP ODP change feed, the right order of changes, etc.
+To get started, create a pipeline with a mapping data flow.
++
+Next, specify a staging folder in Azure Data Lake Gen2, which serves as intermediate storage for data extracted from SAP.
++ ### Mapping data flow properties To create a mapping data flow using the SAP CDC connector as a source, complete the following steps:
defender-for-cloud Alerts Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/alerts-reference.md
Title: Reference table for all security alerts in Microsoft Defender for Cloud description: This article lists the security alerts visible in Microsoft Defender for Cloud-- Previously updated : 03/29/2023 Last updated : 04/18/2023 # Security alerts - a reference guide
Microsoft Defender for Containers provides security alerts on the cluster level
| **PowerZure exploitation toolkit used to enumerate storage containers, shares, and tables**<br>(ARM_PowerZure.ShowStorageContent) | PowerZure exploitation toolkit was used to enumerate storage shares, tables, and containers. This was detected by analyzing Azure Resource Manager operations in your subscription. | - | High | | **PowerZure exploitation toolkit used to execute a Runbook in your subscription**<br>(ARM_PowerZure.StartRunbook) | PowerZure exploitation toolkit was used to execute a Runbook. This was detected by analyzing Azure Resource Manager operations in your subscription. | - | High | | **PowerZure exploitation toolkit used to extract Runbooks content**<br>(ARM_PowerZure.AzureRunbookContent) | PowerZure exploitation toolkit was used to extract Runbook content. This was detected by analyzing Azure Resource Manager operations in your subscription. | Collection | High |
-| **PREVIEW - Activity from a risky IP address**<br>(ARM.MCAS_ActivityFromAnonymousIPAddresses) | Users activity from an IP address that has been identified as an anonymous proxy IP address has been detected.<br>These proxies are used by people who want to hide their device's IP address, and can be used for malicious intent. This detection uses a machine learning algorithm that reduces false positives, such as mis-tagged IP addresses that are widely used by users in the organization.<br>Requires an active Microsoft Defender for Cloud Apps license. | - | Medium |
-| **PREVIEW - Activity from infrequent country**<br>(ARM.MCAS_ActivityFromInfrequentCountry) | Activity from a location that wasn't recently or ever visited by any user in the organization has occurred.<br>This detection considers past activity locations to determine new and infrequent locations. The anomaly detection engine stores information about previous locations used by users in the organization.<br>Requires an active Microsoft Defender for Cloud Apps license. | - | Medium |
| **PREVIEW - Azurite toolkit run detected**<br>(ARM_Azurite) | A known cloud-environment reconnaissance toolkit run has been detected in your environment. The tool [Azurite](https://github.com/mwrlabs/Azurite) can be used by an attacker (or penetration tester) to map your subscriptions' resources and identify insecure configurations. | Collection | High |
-| **PREVIEW - Impossible travel activity**<br>(ARM.MCAS_ImpossibleTravelActivity) | Two user activities (in a single or multiple sessions) have occurred, originating from geographically distant locations. This occurs within a time period shorter than the time it would have taken the user to travel from the first location to the second. This indicates that a different user is using the same credentials.<br>This detection uses a machine learning algorithm that ignores obvious false positives contributing to the impossible travel conditions, such as VPNs and locations regularly used by other users in the organization. The detection has an initial learning period of seven days, during which it learns a new user's activity pattern.<br>Requires an active Microsoft Defender for Cloud Apps license. | - | Medium |
-| **PREVIEW - Suspicious creation of compute resources detected**<br>(ARM_SuspiciousComputeCreation) | Microsoft Defender for Resource Manager identified a suspicious creation of compute resources in your subscription utilizing Virtual Machines/Azure Scale Set. The identified operations are designed to allow administrators to efficiently manage their environments by deploying new resources when needed. While this activity may be legitimate, a threat actor might utilize such operations to conduct crypto mining.<br> The activity is deemed suspicious as the compute resources scale is higher than previously observed in the subscription. <br> This can indicate that the principal is compromised and is being used with malicious intent. | Impact | Medium |
| **PREVIEW - Suspicious key vault recovery detected**<br>(Arm_Suspicious_Vault_Recovering) | Microsoft Defender for Resource Manager detected a suspicious recovery operation for a soft-deleted key vault resource.<br> The user recovering the resource is different from the user that deleted it. This is highly suspicious because the user rarely invokes such an operation. In addition, the user logged on without multi-factor authentication (MFA).<br> This might indicate that the user is compromised and is attempting to discover secrets and keys to gain access to sensitive resources, or to perform lateral movement across your network. | Lateral movement | Medium/high | | **PREVIEW - Suspicious management session using an inactive account detected**<br>(ARM_UnusedAccountPersistence) | Subscription activity logs analysis has detected suspicious behavior. A principal not in use for a long period of time is now performing actions that can secure persistence for an attacker. | Persistence | Medium | | **PREVIEW - Suspicious invocation of a high-risk 'Credential Access' operation by a service principal detected**<br>(ARM_AnomalousServiceOperation.CredentialAccess) | Microsoft Defender for Resource Manager identified a suspicious invocation of a high-risk operation in your subscription which might indicate an attempt to access credentials. The identified operations are designed to allow administrators to efficiently manage their environments. While this activity may be legitimate, a threat actor might utilize such operations to access restricted credentials and compromise resources in your environment. This can indicate that the service principal is compromised and is being used with malicious intent. | Credential access | Medium |
Defender for Cloud's supported kill chain intents are based on [version 9 of the
## Defender for Servers alerts to be deprecated
-The following tables include the Defender for Servers security alerts [to be deprecated in April, 2023](upcoming-changes.md#deprecation-and-improvement-of-selected-alerts-for-windows-and-linux-servers).
+The following tables include the Defender for Servers security alerts [to be deprecated in April, 2023](release-notes.md#deprecation-and-improvement-of-selected-alerts-for-windows-and-linux-servers).
### Linux alerts to be deprecated
defender-for-cloud Attack Path Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/attack-path-reference.md
This section lists all of the cloud security graph components (connections and
| DEASM findings | Microsoft Defender External Attack Surface Management (DEASM) internet scanning findings | Public IP | | Privileged container | Indicates that a Kubernetes container runs in a privileged mode | Kubernetes container | | Uses host network | Indicates that a Kubernetes pod uses the network namespace of its host machine | Kubernetes pod |
-| Has high severity vulnerabilities | Indicates that a resource has high severity vulnerabilities | Azure VM, AWS EC2, Kubernetes image |
-| Vulnerable to remote code execution | Indicates that a resource has vulnerabilities allowing remote code execution | Azure VM, AWS EC2, Kubernetes image |
+| Has high severity vulnerabilities | Indicates that a resource has high severity vulnerabilities | Azure VM, AWS EC2, Container image |
+| Vulnerable to remote code execution | Indicates that a resource has vulnerabilities allowing remote code execution | Azure VM, AWS EC2, Container image |
| Public IP metadata | Lists the metadata of an Public IP | Public IP | | Identity metadata | Lists the metadata of an identity | Azure AD Identity |
This section lists all of the cloud security graph components (connections and
| Has permission to | Indicates that an identity has permissions to a resource or a group of resources | Azure AD user account, Managed Identity, IAM user, EC2 instance | All Azure & AWS resources| | Contains | Indicates that the source entity contains the target entity | Azure subscription, Azure resource group, AWS account, Kubernetes namespace, Kubernetes pod, Kubernetes cluster, GitHub owner, Azure DevOps project, Azure DevOps organization, Azure SQL server | All Azure & AWS resources, All Kubernetes entities, All DevOps entities, Azure SQL database | | Routes traffic to | Indicates that the source entity can route network traffic to the target entity | Public IP, Load Balancer, VNET, Subnet, VPC, Internet Gateway, Kubernetes service, Kubernetes pod| Azure VM, Azure VMSS, AWS EC2, Subnet, Load Balancer, Internet gateway, Kubernetes pod, Kubernetes service |
-| Is running | Indicates that the source entity is running the target entity as a process | Azure VM, EC2, Kubernetes container | SQL, Arc-Enabled SQL, Hosted MongoDB, Hosted MySQL, Hosted Oracle, Hosted PostgreSQL, Hosted SQL Server, Kubernetes image, Kubernetes pod |
+| Is running | Indicates that the source entity is running the target entity as a process | Azure VM, EC2, Kubernetes container | SQL, Arc-Enabled SQL, Hosted MongoDB, Hosted MySQL, Hosted Oracle, Hosted PostgreSQL, Hosted SQL Server, Container image, Kubernetes pod |
| Member of | Indicates that the source identity is a member of the target identities group | Azure AD group, Azure AD user | Azure AD group | | Maintains | Indicates that the source Kubernetes entity manages the life cycle of the target Kubernetes entity | Kubernetes workload controller, Kubernetes replica set, Kubernetes stateful set, Kubernetes daemon set, Kubernetes jobs, Kubernetes cron job | Kubernetes pod |
defender-for-cloud Auto Deploy Vulnerability Assessment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/auto-deploy-vulnerability-assessment.md
Previously updated : 03/21/2023 Last updated : 04/18/2023 # Automatically configure vulnerability assessment for your machines
To assess your machines for vulnerabilities, you can use one of the following so
:::image type="content" source="media/auto-deploy-vulnerability-assessment/turn-on-deploy-vulnerability-assessment.png" alt-text="Screenshot showing where to turn on deployment of vulnerability assessment for machines." lightbox="media/auto-deploy-vulnerability-assessment/turn-on-deploy-vulnerability-assessment.png"::: > [!TIP]
- > If you select the "Microsoft Defender for Cloud built-in Qualys solution" solution, Defender for Cloud enables the following policy: [(Preview) Configure machines to receive a vulnerability assessment provider](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f13ce0167-8ca6-4048-8e6b-f996402e3c1b).
+ > If you select the "Microsoft Defender for Cloud built-in Qualys solution", Defender for Cloud enables the following policy: [Configure machines to receive a vulnerability assessment provider](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f13ce0167-8ca6-4048-8e6b-f996402e3c1b). A CLI sketch of the equivalent assignment follows these steps.
1. Select **Apply** and then select **Save**.
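
As mentioned in the tip above, the same built-in policy can be assigned with the Azure CLI instead of the portal toggle. A hedged sketch (the assignment name, scope, and location are placeholders; the definition ID comes from the policy link above, and a DeployIfNotExists policy needs a managed identity and a location):

```azurecli
az policy assignment create \
    --name "deploy-va-provider" \
    --display-name "Configure machines to receive a vulnerability assessment provider" \
    --policy "13ce0167-8ca6-4048-8e6b-f996402e3c1b" \
    --scope "/subscriptions/<subscription-id>" \
    --mi-system-assigned \
    --location eastus
```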
defender-for-cloud Concept Agentless Containers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/concept-agentless-containers.md
+
+ Title: Agentless Container Posture for Microsoft Defender for Cloud
+description: Learn how Agentless Container Posture offers discovery and visibility for Containers without installing an agent on your machines.
++ Last updated : 04/16/2023+++
+# Agentless Container Posture (Preview)
+
+You can identify security risks that exist in containers and Kubernetes realms with the agentless discovery and visibility capability across SDLC and runtime.
+
+You can maximize the coverage of your container posture issues and extend your protection beyond the reach of agent-based assessments to provide a holistic approach to your posture improvement. This includes, for example, container vulnerability assessment insights as part of [Cloud Security Explorer](how-to-manage-cloud-security-explorer.md) and Kubernetes [Attack Path](attack-path-reference.md#azure-containers) analysis.
+
+Learn more about [Cloud Security Posture Management](concept-cloud-security-posture-management.md).
+
+> [!IMPORTANT]
+> The Agentless Container Posture preview features are available on a self-service, opt-in basis. Previews are provided "as is" and "as available" and are excluded from the service-level agreements and limited warranty. Agentless Container Posture previews are partially covered by customer support on a best-effort basis. As such, these features aren't meant for production use.
+
+## Capabilities
+
+Agentless Container Posture provides the following capabilities:
+
+- Using Kubernetes Attack Path analysis to visualize risks and threats to Kubernetes environments.
+- Using Cloud Security Explorer for risk hunting by querying various risk scenarios.
+- Viewing security insights, such as internet exposure, and other pre-defined security scenarios. For more information, search for `Kubernetes` in the [list of Insights](attack-path-reference.md#insights).
+- Agentless discovery and visibility within Kubernetes components.
+- Agentless container registry vulnerability assessment, using the image scanning results of your Azure Container Registry (ACR) with Cloud Security Explorer.
+
+ [Vulnerability assessment](defender-for-containers-vulnerability-assessment-azure.md) for Containers in Defender Cloud Security Posture Management (CSPM) gives you frictionless, wide, and instant visibility on actionable posture issues without the need for installed agents, network connectivity requirements, or container performance impact.
+
+All of these capabilities are available as part of the [Defender Cloud Security Posture Management](concept-cloud-security-posture-management.md) plan.
+
+## Availability
+
+| Aspect | Details |
+|||
+|Release state:|Preview|
+|Pricing:|Requires [Defender Cloud Security Posture Management (CSPM)](concept-cloud-security-posture-management.md) and is billed as shown on the [pricing page](https://azure.microsoft.com/pricing/details/defender-for-cloud/) |
+| Clouds: | :::image type="icon" source="./media/icons/yes-icon.png"::: Azure Commercial clouds<br> :::image type="icon" source="./media/icons/no-icon.png"::: Azure Government<br>:::image type="icon" source="./media/icons/no-icon.png"::: Azure China 21Vianet<br>:::image type="icon" source="./media/icons/no-icon.png"::: Connected AWS accounts<br>:::image type="icon" source="./media/icons/no-icon.png"::: Connected GCP accounts |
+| Permissions | You need Subscription Owner access, or User Access Admin together with Security Admin permissions, on the Azure subscription used for onboarding |
+
+## Prerequisites
+
+You need to have the Defender CSPM plan enabled. There's no dependency on Defender for Containers.
+
+This feature uses trusted access. Learn more about [AKS trusted access prerequisites](/azure/aks/trusted-access-feature#prerequisites).
+
+## Onboard Agentless Containers for CSPM
+
+Onboarding Agentless Containers for CSPM allows you to gain wide visibility into Kubernetes and container registries across SDLC and runtime.
+
+**To onboard Agentless Containers for CSPM:**
+
+1. In the Azure portal, navigate to Defender for Cloud's **Environment Settings** page.
+
+1. Select the subscription that's onboarded to the Defender CSPM plan, then select **Settings**.
+
+1. Ensure the **Agentless discovery for Kubernetes** and **Container registries vulnerability assessments** extensions are toggled to **On**.
+
+1. Select **Continue**.
+
+ :::image type="content" source="media/concept-agentless-containers/settings-continue.png" alt-text="Screenshot of selecting agentless discovery for Kubernetes and Container registries vulnerability assessments." lightbox="media/concept-agentless-containers/settings-continue.png":::
+
+1. Select **Save**.
+
+A notification message in the top-right corner verifies that the settings were saved successfully.
+
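While the extension toggles above are configured in the portal, the underlying Defender CSPM plan itself can be enabled from the command line. A minimal Azure CLI sketch, assuming `CloudPosture` is the pricing name the Microsoft.Security/pricings API uses for Defender CSPM:

```azurecli
# Enable the Defender CSPM plan on the current subscription.
az security pricing create --name CloudPosture --tier Standard

# Confirm the plan is set to the Standard tier.
az security pricing show --name CloudPosture
```
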
+## Agentless Container Posture extensions
+
+### Container registries vulnerability assessments
+
+For container registries vulnerability assessments, recommendations are available based on the vulnerability assessment timeline.
+
+Learn more about [image scanning](defender-for-containers-vulnerability-assessment-azure.md).
+
+### Agentless discovery for Kubernetes
+
+The system's architecture is based on a snapshot mechanism at intervals.
++
+By enabling the Agentless discovery for Kubernetes extension, the following process occurs:
+
+- **Create**: MDC (Microsoft Defender for Cloud) creates an identity in customer environments called CloudPosture/securityOperator/DefenderCSPMSecurityOperator.
+
+- **Assign**: MDC assigns a built-in role called **Kubernetes Agentless Operator** to that identity at subscription scope.
+
+ The role contains the following permissions:
+ - AKS read (Microsoft.ContainerService/managedClusters/read)
+ - AKS Trusted Access with the following permissions:
+ - Microsoft.ContainerService/managedClusters/trustedAccessRoleBindings/write
+ - Microsoft.ContainerService/managedClusters/trustedAccessRoleBindings/read
+ - Microsoft.ContainerService/managedClusters/trustedAccessRoleBindings/delete
+
+ Learn more about [AKS Trusted Access](/azure/aks/trusted-access-feature).
+
+- **Discover**: Using the system assigned identity, MDC performs a discovery of the AKS clusters in your environment using API calls to the API server of AKS.
+
+- **Bind**: Upon discovery of an AKS cluster, MDC performs an AKS bind operation between the created identity and the Kubernetes role "Microsoft.Security/pricings/microsoft-defender-operator". The role is visible via API and gives MDC data plane read permission inside the cluster. You can verify the resulting binding with the CLI sketch that follows this list.
+
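As referenced in the **Bind** step above, you can inspect the results of this flow with the Azure CLI. A hedged sketch (cluster and resource group names are placeholders; the `trustedaccess` commands require a recent CLI version):

```azurecli
# Inspect the built-in role that Defender for Cloud assigns.
az role definition list --name "Kubernetes Agentless Operator" --output json

# List the trusted access role bindings created on an AKS cluster.
az aks trustedaccess rolebinding list \
    --resource-group myResourceGroup \
    --cluster-name myAKSCluster
```
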
+### Refresh intervals
+
+Agentless information in Defender CSPM is updated once an hour through a snapshot mechanism. It can take up to **24 hours** to see results in Cloud Security Explorer and Attack Path.
+
+## FAQs
+
+### Why don't I see results from my clusters?
+
+If you don't see results from your clusters, check the following:
+
+- Do you have [stopped clusters](#what-do-i-do-if-i-have-stopped-clusters)?
+- Are your clusters [Read only (locked)](#what-do-i-do-if-i-have-read-only-clusters-locked)?
+
+### What do I do if I have stopped clusters?
+
+We suggest that you restart the stopped cluster to solve this issue.
+
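For example, a stopped AKS cluster can be started again with the Azure CLI (names are placeholders):

```azurecli
az aks start --name myAKSCluster --resource-group myResourceGroup
```
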
+### What do I do if I have Read only clusters (locked)?
+
+We suggest that you do one of the following:
+
+- Remove the lock.
+- Perform the bind operation manually by doing an API request.
+
+Learn more about [locked resources](/azure/azure-resource-manager/management/lock-resources?tabs=json).
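
A short Azure CLI sketch of finding and removing a lock (names are placeholders; only remove locks you know are safe to remove):

```azurecli
# List locks on the cluster's resource group.
az lock list --resource-group myResourceGroup --output table

# Remove a specific lock by name.
az lock delete --name myReadOnlyLock --resource-group myResourceGroup
```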
+
+## Next steps
+
+Learn more about [Cloud Security Posture Management](concept-cloud-security-posture-management.md).
defender-for-cloud Concept Cloud Security Posture Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/concept-cloud-security-posture-management.md
For commercial and national cloud coverage, see the [features supported in diffe
Defender for Cloud offers foundational multicloud CSPM capabilities for free. These capabilities are automatically enabled by default on any subscription or account that has onboarded to Defender for Cloud. The foundational CSPM includes asset discovery, continuous assessment and security recommendations for posture hardening, compliance with Microsoft Cloud Security Benchmark (MCSB), and a [Secure score](secure-score-access-and-track.md) that measures the current status of your organization's posture.
-The optional Defender CSPM plan, provides advanced posture management capabilities such as [Attack path analysis](how-to-manage-attack-path.md), [Cloud security explorer](how-to-manage-cloud-security-explorer.md), advanced threat hunting, [security governance capabilities](concept-regulatory-compliance.md), and also tools to assess your [security compliance](review-security-recommendations.md) with a wide range of benchmarks, regulatory standards, and any custom security policies required in your organization, industry, or region.
+The optional Defender CSPM plan provides advanced posture management capabilities such as [Attack path analysis](how-to-manage-attack-path.md), [Cloud security explorer](how-to-manage-cloud-security-explorer.md), advanced threat hunting, [security governance capabilities](governance-rules.md), and tools to assess your [security compliance](review-security-recommendations.md) with a wide range of benchmarks, regulatory standards, and any custom security policies required in your organization, industry, or region.
### Plan pricing
The following table summarizes each plan and their cloud availability.
| Workflow automation | :::image type="icon" source="./media/icons/yes-icon.png"::: | :::image type="icon" source="./media/icons/yes-icon.png"::: | Azure, AWS, GCP, on-premises | | Remediation tracking | :::image type="icon" source="./media/icons/yes-icon.png"::: | :::image type="icon" source="./media/icons/yes-icon.png"::: | Azure, AWS, GCP, on-premises | | Microsoft Cloud Security Benchmark | :::image type="icon" source="./media/icons/yes-icon.png"::: | :::image type="icon" source="./media/icons/yes-icon.png"::: | Azure, AWS |
-| [Governance](concept-regulatory-compliance.md) | - | :::image type="icon" source="./media/icons/yes-icon.png"::: | Azure, AWS, GCP, on-premises |
+| [Governance](governance-rules.md) | - | :::image type="icon" source="./media/icons/yes-icon.png"::: | Azure, AWS, GCP, on-premises |
| [Regulatory compliance](concept-regulatory-compliance.md) | - | :::image type="icon" source="./media/icons/yes-icon.png"::: | Azure, AWS, GCP, on-premises | | [Cloud security explorer](how-to-manage-cloud-security-explorer.md) | - | :::image type="icon" source="./media/icons/yes-icon.png"::: | Azure, AWS | | [Attack path analysis](how-to-manage-attack-path.md) | - | :::image type="icon" source="./media/icons/yes-icon.png"::: | Azure, AWS | | [Agentless scanning for machines](concept-agentless-data-collection.md) | - | :::image type="icon" source="./media/icons/yes-icon.png"::: | Azure, AWS |
-| Agentless discovery for Kubernetes | - | :::image type="icon" source="./media/icons/yes-icon.png"::: | Azure |
-| Agentless vulnerability assessments for container images, including registry scanning (\* Up to 20 unique images per billable resource) | - | :::image type="icon" source="./media/icons/yes-icon.png"::: | Azure |
+| [Agentless discovery for Kubernetes](concept-agentless-containers.md) | - | :::image type="icon" source="./media/icons/yes-icon.png"::: | Azure |
+| [Agentless vulnerability assessments for container images](defender-for-containers-vulnerability-assessment-azure.md), including registry scanning (\* Up to 20 unique images per billable resource) | - | :::image type="icon" source="./media/icons/yes-icon.png"::: | Azure |
| Sensitive data discovery | - | :::image type="icon" source="./media/icons/yes-icon.png"::: | Azure, AWS | | Data flows discovery | - | :::image type="icon" source="./media/icons/yes-icon.png"::: | Azure, AWS | | EASM insights in network exposure | - | :::image type="icon" source="./media/icons/yes-icon.png"::: | Azure, AWS |
defender-for-cloud Defender For Containers Vulnerability Assessment Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-containers-vulnerability-assessment-azure.md
-# Use Defender for Containers to scan your Azure Container Registry images for vulnerabilities
+# Scan your Azure Container Registry images for vulnerabilities
-This article explains how to use Defender for Containers to scan the container images stored in your Azure Resource Manager-based Azure Container Registry, as part of the protections provided within Microsoft Defender for Cloud.
+As part of the protections provided within Microsoft Defender for Cloud, you can scan the container images that are stored in your Azure Resource Manager-based Azure Container Registry.
-To enable scanning of vulnerabilities in containers, you have to [enable Defender for Containers](defender-for-containers-enable.md). When the scanner, powered by Qualys, reports vulnerabilities, Defender for Cloud presents the findings and related information as recommendations. In addition, the findings include related information such as remediation steps, relevant CVEs, CVSS scores, and more. You can view the identified vulnerabilities for one or more subscriptions, or for a specific registry.
+When the scanner, powered by Qualys, reports vulnerabilities, Defender for Cloud presents the findings and related information as recommendations. In addition, the findings include related information such as remediation steps, relevant CVEs, CVSS scores, and more. You can view the identified vulnerabilities for one or more subscriptions, or for a specific registry.
Defender for Cloud filters and classifies findings from the scanner. Images without vulnerabilities are marked as healthy and Defender for Cloud doesn't send notifications about healthy images to keep you from getting unwanted informational alerts.
The triggers for an image scan are:
- A continuous scan based on an image pull. This scan is performed every seven days after an image is pulled, and only for 30 days after the pull. This mode doesn't require the security profile or extension. - A continuous scan for running images. This scan is performed every seven days for as long as the image runs. This mode runs instead of the previous mode when the Defender profile or extension is running on the cluster.
-
+ When a scan is triggered, findings are available as Defender for Cloud recommendations from 2 minutes up to 15 minutes after the scan is complete. ## Prerequisites Before you can scan your ACR images: -- [Enable Defender for Containers](defender-for-containers-enable.md) for your subscription. Defender for Containers is now ready to scan images in your registries.
+- You must enable one of the following plans on your subscription:
+
+ - [Defender CSPM](concept-cloud-security-posture-management.md). When you enable this plan, ensure you enable the **Container registries vulnerability assessments (preview)** extension.
+ - [Defender for Containers](defender-for-containers-enable.md).
- >[!NOTE]
- > This feature is charged per image.
+ >[!NOTE]
+ > This feature is charged per image. Learn more about [pricing](https://azure.microsoft.com/pricing/details/defender-for-cloud/).
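
For example, either plan can be enabled on the subscription with the Azure CLI; the sketch below assumes the `Containers` pricing name used by the Microsoft.Security/pricings API:

```azurecli
# Enable the Defender for Containers plan on the current subscription.
az security pricing create --name Containers --tier Standard
```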
-- If you want to find vulnerabilities in images stored in other container registries, you can import the images into ACR and scan them.
+To find vulnerabilities in images stored in other container registries, you can import the images into ACR and scan them.
- Use the ACR tools to bring images to your registry from Docker Hub or Microsoft Container Registry. When the import completes, the imported images are scanned by the built-in vulnerability assessment solution.
+Use the ACR tools to bring images to your registry from Docker Hub or Microsoft Container Registry. When the import completes, the imported images are scanned by the built-in vulnerability assessment solution.
- Learn more in [Import container images to a container registry](../container-registry/container-registry-import-images.md)
+Learn more in [Import container images to a container registry](../container-registry/container-registry-import-images.md).
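
For example, a hedged Azure CLI sketch of importing a public image into ACR (registry and image names are placeholders); once imported, the image is scanned by the built-in vulnerability assessment solution:

```azurecli
az acr import \
    --name myregistry \
    --source docker.io/library/nginx:latest \
    --image nginx:latest
```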
- You can also [scan images in Amazon AWS Elastic Container Registry](defender-for-containers-vulnerability-assessment-elastic.md) directly from the Azure portal.
+You can also [scan images in Amazon AWS Elastic Container Registry](defender-for-containers-vulnerability-assessment-elastic.md) directly from the Azure portal.
For a list of the types of images and container registries supported by Microsoft Defender for Containers, see [Availability](supported-machines-endpoint-solutions-clouds-containers.md?tabs=azure-aks#registries-and-images).
defender-for-cloud Devops Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/devops-faq.md
Title: Defender for DevOps FAQ description: If you're having issues with Defender for DevOps, perhaps you can solve them with these frequently asked questions. Previously updated : 02/23/2023 Last updated : 04/18/2023 # Defender for DevOps frequently asked questions (FAQ)
The ability to block developers from committing code with exposed secrets isn't
### I'm not able to configure Pull Request Annotations
-Make sure you have write (owner/contributor) access to the subscription.
+Make sure you have write (owner/contributor) access to the subscription. If you don't have this type of access today, you can get it through [activating an Azure Active Directory role in PIM](/azure/active-directory/privileged-identity-management/pim-how-to-activate-role).
### What programming languages are supported by Defender for DevOps?
defender-for-cloud Github Action https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/github-action.md
Security DevOps uses the following Open Source tools:
```yml
name: MSDO windows-latest

on:
  push:
    branches: [ main ]
  pull_request:
```
defender-for-cloud How To Manage Cloud Security Explorer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/how-to-manage-cloud-security-explorer.md
Learn more about [the cloud security graph, attack path analysis, and the cloud
## Prerequisites -- You must [enable agentless scanning](enable-vulnerability-assessment-agentless.md).- - You must [enable Defender CSPM](enable-enhanced-security.md).
+ - For Agentless Container Posture, you must enable the following extensions:
+ - Agentless discovery for Kubernetes (preview)
+ - Container registries vulnerability assessments (preview)
-- You must [enable Defender for Containers](defender-for-containers-enable.md), and install the relevant agents in order to view attack paths that are related to containers. -
- When you enable Defender for Containers, you also gain the ability to [query](how-to-manage-cloud-security-explorer.md#build-a-query-with-the-cloud-security-explorer) containers data plane workloads in the security explorer.
+- You must [enable agentless scanning](enable-vulnerability-assessment-agentless.md).
- Required roles and permissions: - Security Reader
defender-for-cloud Iac Vulnerabilities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/iac-vulnerabilities.md
Once you have set up the Microsoft Security DevOps GitHub action or Azure DevOps
The IaC scanning tools included with Microsoft Security DevOps are [Template Analyzer](https://github.com/Azure/template-analyzer) (which contains [PSRule](https://aka.ms/ps-rule-azure)) and [Terrascan](https://github.com/tenable/terrascan).
-Template Analyzer runs rules on ARM and Bicep templates. You can learn more about [Template Analyzer's rules and remediation details](https://github.com/Azure/template-analyzer/blob/main/docs/built-in-bpa-rules.md#built-in-rules).
+Template Analyzer runs rules on ARM and Bicep templates. You can learn more about [Template Analyzer's rules and remediation details](https://github.com/Azure/template-analyzer/blob/main/docs/built-in-rules.md#built-in-rules).
Terrascan runs rules on ARM, CloudFormation, Docker, Helm, Kubernetes, Kustomize, and Terraform templates. You can learn more about the [Terrascan rules](https://runterrascan.io/docs/policies/).
defender-for-cloud Protect Network Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/protect-network-resources.md
The network map can show you your Azure resources in a **Topology** view and a *
In the **Topology** view of the networking map, you can view the following insights about your networking resources: -- In the inner circle, you can see all the Vnets within your selected subscriptions, the next circle is all the subnets, the outer circle is all the virtual machines.
+- In the inner circle, you can see all the VNets within your selected subscriptions; the next circle shows all the subnets; the outer circle shows all the virtual machines.
- The lines connecting the resources in the map let you know which resources are associated with each other, and how your Azure network is structured. - Use the severity indicators to quickly get an overview of which resources have open recommendations from Defender for Cloud. - You can click any of the resources to drill down into them and view the details of that resource and its recommendations directly, and in the context of the Network map.
defender-for-cloud Regulatory Compliance Dashboard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/regulatory-compliance-dashboard.md
For other policies, you can create an exemption directly in the policy itself, b
### What Microsoft Defender plans or licenses do I need to use the regulatory compliance dashboard?
-If you've got *any* of the Microsoft Defender plan (except for Defender for Servers Plan 1) enabled on *any* of your Azure resources, you can access Defender for Cloud's regulatory compliance dashboard and all of its data.
+If you've got *any* of the Microsoft Defender plans (except for Defender for Servers Plan 1) enabled on *any* of your Azure resources, you can access Defender for Cloud's regulatory compliance dashboard and all of its data.
+
+> [!NOTE]
+> For Defender for Servers, you'll get regulatory compliance only with Plan 2.
## Next steps
defender-for-cloud Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/release-notes.md
Title: Release notes for Microsoft Defender for Cloud description: A description of what's new and changed in Microsoft Defender for Cloud Previously updated : 04/17/2023 Last updated : 04/18/2023 # What's new in Microsoft Defender for Cloud?
To learn about *planned* changes that are coming soon to Defender for Cloud, see
Updates in April include:
+- [Agentless Container Posture in Defender CSPM (Preview)](#agentless-container-posture-in-defender-cspm-preview)
- [New preview Unified Disk Encryption recommendation](#unified-disk-encryption-recommendation-preview)-- [Changes in the recommendation "Machines should be configured securely"](#changes-in-the-recommendation-machines-should-be-configured-securely)
+- [Changes in the recommendation Machines should be configured securely](#changes-in-the-recommendation-machines-should-be-configured-securely)
- [Deprecation of App Service language monitoring policies](#deprecation-of-app-service-language-monitoring-policies)
+- [New alert in Defender for Resource Manager](#new-alert-in-defender-for-resource-manager)
+- [Three alerts in the Defender for Resource Manager plan have been deprecated](#three-alerts-in-the-defender-for-resource-manager-plan-have-been-deprecated)
+- [Alerts automatic export to Log Analytics workspace have been deprecated](#alerts-automatic-export-to-log-analytics-workspace-have-been-deprecated)
+- [Deprecation and improvement of selected alerts for Windows and Linux Servers](#deprecation-and-improvement-of-selected-alerts-for-windows-and-linux-servers)
+
+### Agentless Container Posture in Defender CSPM (Preview)
+
+The new Agentless Container Posture (Preview) capabilities are available as part of the Defender CSPM (Cloud Security Posture Management) plan.
+
+Agentless Container Posture allows security teams to identify security risks in containers and Kubernetes realms. An agentless approach allows security teams to gain visibility into their Kubernetes and container registries across SDLC and runtime, removing friction and footprint from the workloads.
+
+Agentless Container Posture offers container vulnerability assessments that, combined with attack path analysis, enable security teams to prioritize and zoom into specific container vulnerabilities. You can also use cloud security explorer to uncover risks and hunt for container posture insights, such as discovery of applications running vulnerable images or exposed to the internet.
+
+Learn more at [Agentless Container Posture (Preview)](concept-agentless-containers.md).
### Unified Disk Encryption recommendation (preview) We have introduced a unified disk encryption recommendation in public preview, `Windows virtual machines should enable Azure Disk Encryption or EncryptionAtHost` and `Linux virtual machines should enable Azure Disk Encryption or EncryptionAtHost`.
-These recommendations replace `Virtual machines should encrypt temp disks, caches, and data flows between Compute and Storage resources` which detected Azure Disk Encryption and the policy `Virtual machines and virtual machine scale sets should have encryption at host enabled` which detected EncryptionAtHost. ADE and EncryptionAtHost provide comparable encryption at rest coverage, and we recommend enabling one of them on every virtual machine. The new recommendations detect whether either ADE or EncryptionAtHost are enabled and only warn if neither are enabled. We also warn if ADE is enabled on some, but not all disks of a VM (this condition isn't applicable to EncryptionAtHost).
+These recommendations replace `Virtual machines should encrypt temp disks, caches, and data flows between Compute and Storage resources`, which detected Azure Disk Encryption and the policy `Virtual machines and virtual machine scale sets should have encryption at host enabled`, which detected EncryptionAtHost. ADE and EncryptionAtHost provide comparable encryption at rest coverage, and we recommend enabling one of them on every virtual machine. The new recommendations detect whether either ADE or EncryptionAtHost are enabled and only warn if neither are enabled. We also warn if ADE is enabled on some, but not all disks of a VM (this condition isn't applicable to EncryptionAtHost).
The new recommendations require [Azure Automanage Machine Configuration](https://aka.ms/gcpol).
These recommendations are based on the following policies:
Learn more about [ADE and EncryptionAtHost and how to enable one of them](../virtual-machines/disk-encryption-overview.md).
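
To illustrate the EncryptionAtHost half of the recommendation, here's a hedged Azure CLI sketch (resource names and image are placeholders; the subscription needs the `EncryptionAtHost` feature registered first):

```azurecli
# One-time feature registration for the subscription.
az feature register --namespace Microsoft.Compute --name EncryptionAtHost
az provider register --namespace Microsoft.Compute

# Create a VM with encryption at host enabled.
az vm create \
    --resource-group myResourceGroup \
    --name myVM \
    --image Ubuntu2204 \
    --encryption-at-host true
```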
-### Changes in the recommendation "Machines should be configured securely"
+### Changes in the recommendation Machines should be configured securely
The recommendation `Machines should be configured securely` was updated. The update improves the performance and stability of the recommendation and aligns its experience with the generic behavior of Defender for Cloud's recommendations. As part of this update, the recommendation's ID was changed from `181ac480-f7c4-544b-9865-11b8ffe87f47` to `c476dc48-8110-4139-91af-c8d940896b98`.
-No action is required on the customer side, and there's no expected impact on the secure score.
+No action is required on the customer side, and there's no expected effect on the secure score.
### Deprecation of App Service language monitoring policies
Customers can use alternative built-in policies to monitor any specified languag
These policies are no longer available in Defender for Cloud's built-in recommendations. You can [add them as custom recommendations](create-custom-recommendations.md) to have Defender for Cloud monitor them.
+### New alert in Defender for Resource Manager
+
+Defender for Resource Manager has the following new alert:
+
+| Alert (alert type) | Description | MITRE tactics | Severity |
+|||:-:||
+| **PREVIEW - Suspicious creation of compute resources detected**<br>(ARM_SuspiciousComputeCreation) | Microsoft Defender for Resource Manager identified a suspicious creation of compute resources in your subscription utilizing Virtual Machines/Azure Scale Set. The identified operations are designed to allow administrators to efficiently manage their environments by deploying new resources when needed. While this activity may be legitimate, a threat actor might utilize such operations to conduct crypto mining.<br> The activity is deemed suspicious as the compute resources scale is higher than previously observed in the subscription. <br> This can indicate that the principal is compromised and is being used with malicious intent. | Impact | Medium |
+
+You can see a list of all of the [alerts available for Resource Manager](alerts-reference.md#alerts-resourcemanager).
+
+### Three alerts in the Defender for Resource Manager plan have been deprecated
+
+**Estimated date for change: March 2023**
+
+The following three alerts for the Defender for Resource Manager plan have been deprecated:
+
+- `Activity from a risky IP address (ARM.MCAS_ActivityFromAnonymousIPAddresses)`
+- `Activity from infrequent country (ARM.MCAS_ActivityFromInfrequentCountry)`
+- `Impossible travel activity (ARM.MCAS_ImpossibleTravelActivity)`
+
+In a scenario where activity from a suspicious IP address is detected, one of the following Defender for Resource Manager plan alerts is present: `Azure Resource Manager operation from suspicious IP address` or `Azure Resource Manager operation from suspicious proxy IP address`.
++
+### Alerts automatic export to Log Analytics workspace have been deprecated
+
+Defender for Cloud security alerts were automatically exported to a default Log Analytics workspace at the resource level. This caused nondeterministic behavior, so we've deprecated the feature.
+
+Instead, you can export your security alerts to a dedicated Log Analytics workspace with [Continuous Export](continuous-export.md#set-up-a-continuous-export).
+
+If you have already configured continuous export of your alerts to a Log Analytics workspace, no further action is required.
+
+### Deprecation and improvement of selected alerts for Windows and Linux Servers
+
+The security alert quality improvement process for Defender for Servers includes the deprecation of some alerts for both Windows and Linux servers. The deprecated alerts are now sourced from and covered by Defender for Endpoint threat alerts.
+
+If you already have the Defender for Endpoint integration enabled, no further action is required. You may experience a decrease in your alert volume in April 2023.
+
+If you don't have the Defender for Endpoint integration enabled in Defender for Servers, you'll need to enable the Defender for Endpoint integration to maintain and improve your alert coverage.
+
+All Defender for Servers customers have full access to Defender for Endpoint's integration as a part of the [Defender for Servers plan](plan-defender-for-servers-select-plan.md#plan-features).
+
+You can learn more about [Microsoft Defender for Endpoint onboarding options](integration-defender-for-endpoint.md#enable-the-microsoft-defender-for-endpoint-integration).
+
+You can also view the [full list of alerts](alerts-reference.md#defender-for-servers-alerts-to-be-deprecated) that are set to be deprecated.
+
+Read the [Microsoft Defender for Cloud blog](https://techcommunity.microsoft.com/t5/microsoft-defender-for-cloud/defender-for-servers-security-alerts-improvements/ba-p/3714175).
+ ## March 2023 Updates in March include: -- [New alert in Defender for Resource Manager](#new-alert-in-defender-for-resource-manager) - [A new Defender for Storage plan is available, including near-real time malware scanning and sensitive data threat detection](#a-new-defender-for-storage-plan-is-available-including-near-real-time-malware-scanning-and-sensitive-data-threat-detection) - [Data-aware security posture (preview)](#data-aware-security-posture-preview) - [Improved experience for managing the default Azure security policies](#improved-experience-for-managing-the-default-azure-security-policies)
Updates in March include:
- [New preview recommendation for Azure SQL Servers](#new-preview-recommendation-for-azure-sql-servers) - [New alert in Defender for Key Vault](#new-alert-in-defender-for-key-vault)
-### New alert in Defender for Resource Manager
-
-Defender for Resource Manager has the following new alert:
-
-| Alert (alert type) | Description | MITRE tactics | Severity |
-|||:-:||
-| **PREVIEW - Suspicious creation of compute resources detected**<br>(ARM_SuspiciousComputeCreation) | Microsoft Defender for Resource Manager identified a suspicious creation of compute resources in your subscription utilizing Virtual Machines/Azure Scale Set. The identified operations are designed to allow administrators to efficiently manage their environments by deploying new resources when needed. While this activity may be legitimate, a threat actor might utilize such operations to conduct crypto mining.<br> The activity is deemed suspicious as the compute resources scale is higher than previously observed in the subscription. <br> This can indicate that the principal is compromised and is being used with malicious intent. | Impact | Medium |
-
-You can see a list of all of the [alerts available for Resource Manager](alerts-reference.md#alerts-resourcemanager).
- ### A new Defender for Storage plan is available, including near-real time malware scanning and sensitive data threat detection
-Cloud storage plays a key role in the organization and stores large volumes of valuable and sensitive data. Today we are announcing a new Defender for Storage plan. If youΓÇÖre using the previous plan (now renamed to "Defender for Storage (classic)"), you will need to proactively [migrate to the new plan](defender-for-storage-classic-migrate.md) in order to use the new features and benefits.
+Cloud storage plays a key role in the organization and stores large volumes of valuable and sensitive data. Today we're announcing a new Defender for Storage plan. If you're using the previous plan (now renamed to "Defender for Storage (classic)"), you'll need to proactively [migrate to the new plan](defender-for-storage-classic-migrate.md) in order to use the new features and benefits.
The new plan includes advanced security capabilities to help protect against malicious file uploads, sensitive data exfiltration, and data corruption. It also provides a more predictable and flexible pricing structure for better control over coverage and costs.
The new plan has new capabilities now in public preview:
- Detecting entities with no identities using SAS tokens
-These capabilities will enhance the existing Activity Monitoring capability, based on control and data plane log analysis and behavioral modeling to identify early signs of breach.
+These capabilities enhance the existing Activity Monitoring capability, which is based on control and data plane log analysis and behavioral modeling, to identify early signs of a breach.
All these capabilities are available in a new predictable and flexible pricing plan that provides granular control over data protection at both the subscription and resource levels.
Microsoft Defender for Cloud helps security teams to be more productive at reduc
We introduce an improved Azure security policy management experience for built-in recommendations that simplifies the way Defender for Cloud customers fine tune their security requirements. The new experience includes the following new capabilities: -- A simple interface allows better performance and fewer clicks when managing default security policies within Defender for Cloud, including enabling/disabling, denying, setting parameters and managing exemptions.
+- A simple interface allows better performance and fewer selections when managing default security policies within Defender for Cloud, including enabling/disabling, denying, setting parameters, and managing exemptions.
- A single view of all built-in security recommendations offered by the Microsoft cloud security benchmark (formerly the Azure security benchmark). Recommendations are organized into logical groups, making it easier to understand the types of resources covered, and the relationship between parameters and recommendations. - New features such as filters and search have been added.
Read the [Microsoft Defender for Cloud blog](https://techcommunity.microsoft.com
### Defender CSPM (Cloud Security Posture Management) is now Generally Available (GA)
-We are announcing that Defender CSPM is now Generally Available (GA). Defender CSPM offers all of the services available under the Foundational CSPM capabilities and adds the following benefits:
+We're announcing that Defender CSPM is now Generally Available (GA). Defender CSPM offers all of the services available under the Foundational CSPM capabilities and adds the following benefits:
-- **Attack path analysis and ARG API** - Attack path analysis uses a graph-based algorithm that scans the cloud security graph to expose attack paths and suggests recommendations as to how best remediate issues that will break the attack path and prevent successful breach. You can also consume attack paths programmatically by querying Azure Resource Graph (ARG) API. Learn how to use [attack path analysis](how-to-manage-attack-path.md)
+- **Attack path analysis and ARG API** - Attack path analysis uses a graph-based algorithm that scans the cloud security graph to expose attack paths and suggests recommendations for how best to remediate issues that break the attack path and prevent a successful breach. You can also consume attack paths programmatically by querying the Azure Resource Graph (ARG) API. Learn how to use [attack path analysis](how-to-manage-attack-path.md)
- **Cloud Security explorer** - Use the Cloud Security Explorer to run graph-based queries on the cloud security graph, to proactively identify security risks in your multicloud environments. Learn more about [cloud security explorer](concept-attack-path.md#what-is-cloud-security-explorer). Learn more about [Defender CSPM](overview-page.md).
We've added a new recommendation for Azure SQL Servers, `Azure SQL Server authen
The recommendation is based on the existing policy [`Azure SQL Database should have Azure Active Directory Only Authentication enabled`](https://ms.portal.azure.com/#view/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2fabda6d70-9778-44e7-84a8-06713e6db027)
-This recommendation disables local authentication methods and allows only Azure Active Directory Authentication which improves security by ensuring that Azure SQL Databases can exclusively be accessed by Azure Active Directory identities.
+This recommendation disables local authentication methods and allows only Azure Active Directory Authentication, which improves security by ensuring that Azure SQL Databases can exclusively be accessed by Azure Active Directory identities.
Learn how to [create servers with Azure AD-only authentication enabled in Azure SQL](/azure/azure-sql/database/authentication-azure-ad-only-authentication-create-server).
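
For an existing server, Azure AD-only authentication can also be enabled from the command line. A hedged Azure CLI sketch (server and resource group names are placeholders):

```azurecli
# Enable Azure AD-only authentication on an existing Azure SQL server.
az sql server ad-only-auth enable \
    --resource-group myResourceGroup \
    --name mysqlserver

# Verify the setting.
az sql server ad-only-auth get \
    --resource-group myResourceGroup \
    --name mysqlserver
```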
You can see a list of all of the [alerts available for Key Vault](alerts-referen
Updates in February include: - [Enhanced Cloud Security Explorer](#enhanced-cloud-security-explorer)-- [Recommendation to find vulnerabilities in running container images for Linux released for General Availability (GA)](#recommendation-to-find-vulnerabilities-in-running-container-images-released-for-general-availability-ga)
+- [Defender for Containers' vulnerability scans of running Linux images now GA](#defender-for-containers-vulnerability-scans-of-running-linux-images-now-ga)
- [Announcing support for the AWS CIS 1.5.0 compliance standard](#announcing-support-for-the-aws-cis-150-compliance-standard) - [Microsoft Defender for DevOps (preview) is now available in other regions](#microsoft-defender-for-devops-preview-is-now-available-in-other-regions) - [The built-in policy [Preview]: Private endpoint should be configured for Key Vault has been deprecated](#the-built-in-policy-preview-private-endpoint-should-be-configured-for-key-vault-has-been-deprecated)
Updates in February include:
An improved version of the cloud security explorer includes a refreshed user experience that dramatically reduces query friction, adds the ability to run multicloud and multi-resource queries, and embeds documentation for each query option.
-The Cloud Security Explorer now allows you to run cloud-abstract queries across resources. You can use either the pre-built query templates or use the custom search to apply filters to build your query. Learn [how to manage Cloud Security Explorer](how-to-manage-cloud-security-explorer.md).
+The Cloud Security Explorer now allows you to run cloud-abstract queries across resources. You can use either the prebuilt query templates or use the custom search to apply filters to build your query. Learn [how to manage Cloud Security Explorer](how-to-manage-cloud-security-explorer.md).
+
+### Defender for Containers' vulnerability scans of running Linux images now GA
+
+Defender for Containers detects vulnerabilities in running containers. Both Windows and Linux containers are supported.
-### Recommendation to find vulnerabilities in running container images released for General Availability (GA)
+In August 2022, this capability was [released in preview](release-notes-archive.md) for Windows and Linux. It's now released for general availability (GA) for Linux.
-The [Running container images should have vulnerability findings resolved](defender-for-containers-vulnerability-assessment-azure.md#view-vulnerabilities-for-images-running-on-your-aks-clusters) recommendation for Linux is now GA. The recommendation is used to identify unhealthy resources and is included in the calculations of your secure score.
+When vulnerabilities are detected, Defender for Cloud generates the following security recommendation listing the scan's findings: [Running container images should have vulnerability findings resolved](https://portal.azure.com/#view/Microsoft_Azure_Security_CloudNativeCompute/KubernetesRuntimeVisibilityRecommendationDetailsBlade/assessmentKey/41503391-efa5-47ee-9282-4eff6131462c/showSecurityCenterCommandBar~/false).
-We recommend that you use the recommendation to remediate vulnerabilities in your Linux containers. Learn about [recommendation remediation](implement-security-recommendations.md).
+Learn more about [viewing vulnerabilities for running images](defender-for-containers-vulnerability-assessment-azure.md).
### Announcing support for the AWS CIS 1.5.0 compliance standard
defender-for-cloud Secure Score Security Controls https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/secure-score-security-controls.md
In this example:
### Which recommendations are included in the secure score calculations?
-Only built-in recommendations have an impact on the secure score.
-
+Only built-in recommendations that are part of the default initiative, Azure Security Benchmark, have an impact on the secure score.
Recommendations flagged as **Preview** aren't included in the calculations of your secure score. They should still be remediated wherever possible, so that when the preview period ends they'll contribute towards your score. Preview recommendations are marked with: :::image type="icon" source="media/secure-score-security-controls/preview-icon.png" border="false":::
For related material, see the following articles:
- [Learn about the different elements of a recommendation](review-security-recommendations.md) - [Learn how to remediate recommendations](implement-security-recommendations.md) - [View the GitHub-based tools for working programmatically with secure score](https://github.com/Azure/Azure-Security-Center/tree/master/Secure%20Score)++
defender-for-cloud Support Matrix Defender For Cloud https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/support-matrix-defender-for-cloud.md
Microsoft Defender for Cloud is available in the following Azure cloud environme
| - [Microsoft Defender for Servers](./defender-for-servers-introduction.md) | GA | GA | GA | | - [Microsoft Defender for App Service](./defender-for-app-service-introduction.md) | GA | Not Available | Not Available | | - [Microsoft Defender CSPM](./concept-cloud-security-posture-management.md) | GA | Not Available | Not Available |
+| - [Agentless discovery for Kubernetes](concept-agentless-containers.md) | Public Preview | Not Available | Not Available |
+| - [Agentless vulnerability assessments for container images](defender-for-containers-vulnerability-assessment-azure.md), including registry scanning (\* Up to 20 unique images per billable resource) | Public Preview | Not Available | Not Available |
| - [Microsoft Defender for DNS](./defender-for-dns-introduction.md) | GA | GA | GA | | - [Microsoft Defender for Kubernetes](./defender-for-kubernetes-introduction.md) <sup>[1](#footnote1)</sup> | GA | GA | GA | | - [Microsoft Defender for Containers](./defender-for-containers-introduction.md) <sup>[7](#footnote7)</sup> | GA | GA | GA |
defender-for-cloud Upcoming Changes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/upcoming-changes.md
Title: Important changes coming to Microsoft Defender for Cloud description: Upcoming changes to Microsoft Defender for Cloud that you might need to be aware of and for which you might need to plan Previously updated : 04/16/2023 Last updated : 04/18/2023 # Important upcoming changes to Microsoft Defender for Cloud
If you're looking for the latest release notes, you'll find them in the [What's
| Planned change | Estimated date for change | |--|--|
-| [Changes in the recommendation "Machines should be configured securely"](#changes-in-the-recommendation-machines-should-be-configured-securely) | March 2023 |
-| [Three alerts in the Defender for Azure Resource Manager plan will be deprecated](#three-alerts-in-the-defender-for-resource-manager-plan-will-be-deprecated) | March 2023 |
-| [Alerts automatic export to Log Analytics workspace will be deprecated](#alerts-automatic-export-to-log-analytics-workspace-will-be-deprecated) | March 2023 |
-| [Deprecation and improvement of selected alerts for Windows and Linux Servers](#deprecation-and-improvement-of-selected-alerts-for-windows-and-linux-servers) | April 2023 |
| [Deprecation of legacy compliance standards across cloud environments](#deprecation-of-legacy-compliance-standards-across-cloud-environments) | April 2023 |
-| [Multiple changes to identity recommendations](#multiple-changes-to-identity-recommendations) | May 2023 |
| [New Azure Active Directory authentication-related recommendations for Azure Data Services](#new-azure-active-directory-authentication-related-recommendations-for-azure-data-services) | April 2023 |
+| [Multiple changes to identity recommendations](#multiple-changes-to-identity-recommendations) | May 2023 |
| [DevOps Resource Deduplication for Defender for DevOps](#devops-resource-deduplication-for-defender-for-devops) | June 2023 |
-### Changes in the recommendation "Machines should be configured securely"
-
-**Estimated date for change: March 2023**
-
-The recommendation `Machines should be configured securely` will be updated. The update will improve the performance and stability of the recommendation and align its experience with the generic behavior of Defender for Cloud's recommendations.
-
-As part of this update, the recommendation's ID will be changed from `181ac480-f7c4-544b-9865-11b8ffe87f47` to `c476dc48-8110-4139-91af-c8d940896b98`.
-
-No action is required on the customer side, and there's no expected downtime or impact on the secure score.
--
-### Three alerts in the Defender for Resource Manager plan will be deprecated
-
-**Estimated date for change: March 2023**
-
-As we continue to improve the quality of our alerts, the following three alerts from the Defender for Resource Manager plan will be deprecated:
-1. `Activity from a risky IP address (ARM.MCAS_ActivityFromAnonymousIPAddresses)`
-1. `Activity from infrequent country (ARM.MCAS_ActivityFromInfrequentCountry)`
-1. `Impossible travel activity (ARM.MCAS_ImpossibleTravelActivity)`
-
-You can learn more details about each of these alerts from the [alerts reference list](alerts-reference.md#alerts-resourcemanager).
-
-In the scenario where an activity from a suspicious IP address is detected, one of the following Defender for Resource Manager plan alerts will be present: `Azure Resource Manager operation from suspicious IP address` or `Azure Resource Manager operation from suspicious proxy IP address`.
-
-### Alerts automatic export to Log Analytics workspace will be deprecated
-
-**Estimated date for change: March 2023**
-
-Currently, Defender for Cloud security alerts are automatically exported to a default Log Analytics workspace at the resource level. This behavior is nondeterministic, so the feature is set to be deprecated.
-
-You can export your security alerts to a dedicated Log Analytics workspace with the [Continuous Export](continuous-export.md#set-up-a-continuous-export) feature.
-If you have already configured continuous export of your alerts to a Log Analytics workspace, no further action is required.
-
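If you script this instead of using the portal, the following Python sketch shows the general shape of the call. It is not taken from the linked article: the automation name, region, and resource IDs are placeholders, and the `Microsoft.Security/automations` api-version is an assumption to verify against current docs.

```python
# Hypothetical sketch: create a Defender for Cloud "automation" that continuously
# exports security alerts to a Log Analytics workspace via the ARM REST API.
# Subscription, resource group, and workspace names below are placeholders.
import requests
from azure.identity import DefaultAzureCredential

SUB = "<subscription-id>"
RG = "<resource-group>"
WORKSPACE_ID = (f"/subscriptions/{SUB}/resourceGroups/{RG}"
                "/providers/Microsoft.OperationalInsights/workspaces/<workspace-name>")

token = DefaultAzureCredential().get_token("https://management.azure.com/.default").token
url = (f"https://management.azure.com/subscriptions/{SUB}/resourceGroups/{RG}"
       "/providers/Microsoft.Security/automations/ExportAlertsToLogAnalytics"
       "?api-version=2019-01-01-preview")  # api-version is an assumption
body = {
    "location": "westeurope",
    "properties": {
        "isEnabled": True,
        "scopes": [{"scopePath": f"/subscriptions/{SUB}"}],
        "sources": [{"eventSource": "Alerts"}],
        "actions": [{"actionType": "Workspace",
                     "workspaceResourceId": WORKSPACE_ID}],
    },
}
resp = requests.put(url, json=body, headers={"Authorization": f"Bearer {token}"})
resp.raise_for_status()
```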
-### Deprecation and improvement of selected alerts for Windows and Linux Servers
-
-**Estimated date for change: April 2023**
-
-The security alert quality improvement process for Defender for Servers includes the deprecation of some alerts for both Windows and Linux servers. The deprecated alerts will now be sourced from and covered by Defender for Endpoint threat alerts.
-
-If you already have the Defender for Endpoint integration enabled, no further action is required. You may experience a decrease in your alerts volume in April 2023.
-
-If you don't have the Defender for Endpoint integration enabled in Defender for Servers, you'll need to enable the Defender for Endpoint integration to maintain and improve your alert coverage.
-
-All Defender for Servers customers have full access to the Defender for Endpoint integration as part of the [Defender for Servers plan](plan-defender-for-servers-select-plan.md#plan-features).
-
-You can learn more about [Microsoft Defender for Endpoint onboarding options](integration-defender-for-endpoint.md#enable-the-microsoft-defender-for-endpoint-integration).
-
-You can also view the [full list of alerts](alerts-reference.md#defender-for-servers-alerts-to-be-deprecated) that are set to be deprecated.
-
-Read the [Microsoft Defender for Cloud blog](https://techcommunity.microsoft.com/t5/microsoft-defender-for-cloud/defender-for-servers-security-alerts-improvements/ba-p/3714175).
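Because the deprecation above hinges on having the Defender for Endpoint integration enabled, here is a hedged sketch of enabling it programmatically (not from the linked onboarding article). It flips the `WDATP` setting at subscription scope over the ARM REST API; the api-version and setting shape are assumptions to verify.

```python
# Hypothetical sketch: turn on the Defender for Endpoint integration by setting
# the "WDATP" security setting through the ARM REST API.
import requests
from azure.identity import DefaultAzureCredential

SUB = "<subscription-id>"  # placeholder
token = DefaultAzureCredential().get_token("https://management.azure.com/.default").token
url = (f"https://management.azure.com/subscriptions/{SUB}"
       "/providers/Microsoft.Security/settings/WDATP"
       "?api-version=2022-05-01")  # api-version is an assumption
body = {"kind": "DataExportSettings", "properties": {"enabled": True}}
resp = requests.put(url, json=body, headers={"Authorization": f"Bearer {token}"})
resp.raise_for_status()
print(resp.json())
```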
- ### Deprecation of legacy compliance standards across cloud environments **Estimated date for change: April 2023**
-We are announcing the full deprecation of support of [`PCI DSS`](/azure/compliance/offerings/offering-pci-dss) standard/initiative in Azure China 21Vianet.
+We're announcing the full deprecation of support of [`PCI DSS`](/azure/compliance/offerings/offering-pci-dss) standard/initiative in Azure China 21Vianet.
Legacy PCI DSS v3.2.1 and legacy SOC TSP are set to be fully deprecated and replaced by [SOC 2 Type 2](/azure/compliance/offerings/offering-soc-2) initiative and [PCI DSS v4](/azure/compliance/offerings/offering-pci-dss) initiative. Learn how to [Customize the set of standards in your regulatory compliance dashboard](update-regulatory-compliance-packages.md).
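For readers who script their compliance setup, a replacement standard is surfaced in the dashboard by assigning its built-in policy initiative. The sketch below shows that pattern under stated assumptions: the assignment name and initiative GUID are placeholders, not the real SOC 2 Type 2 identifiers.

```python
# Hypothetical sketch: add a compliance standard to the regulatory compliance
# dashboard by assigning its built-in initiative at subscription scope.
import requests
from azure.identity import DefaultAzureCredential

SUB = "<subscription-id>"  # placeholder
SCOPE = f"/subscriptions/{SUB}"
# Placeholder GUID: look up the real initiative ID for the standard you want.
INITIATIVE = "/providers/Microsoft.Authorization/policySetDefinitions/<initiative-guid>"

token = DefaultAzureCredential().get_token("https://management.azure.com/.default").token
url = (f"https://management.azure.com{SCOPE}/providers/Microsoft.Authorization"
       "/policyAssignments/soc2-type2?api-version=2022-06-01")
body = {"properties": {"displayName": "SOC 2 Type 2",
                       "policyDefinitionId": INITIATIVE}}
resp = requests.put(url, json=body, headers={"Authorization": f"Bearer {token}"})
resp.raise_for_status()
```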
-### Multiple changes to identity recommendations
-
-**Estimated date for change: May 2023**
-
-We previously announced the [availability of identity recommendations V2 (preview)](release-notes-archive.md#extra-recommendations-added-to-identity), which included enhanced capabilities.
-
-As part of these changes, the following recommendations will be released as General Availability (GA) and replace the V1 recommendations that are set to be deprecated.
-
-#### General Availability (GA) release of identity recommendations V2
-
-The following security recommendations will be released as GA and replace the V1 recommendations:
-
-|Recommendation | Assessment Key|
-|--|--|
-|Accounts with owner permissions on Azure resources should be MFA enabled | 6240402e-f77c-46fa-9060-a7ce53997754 |
-|Accounts with write permissions on Azure resources should be MFA enabled | c0cb17b2-0607-48a7-b0e0-903ed22de39b |
-| Accounts with read permissions on Azure resources should be MFA enabled | dabc9bc4-b8a8-45bd-9a5a-43000df8aa1c |
-| Guest accounts with owner permissions on Azure resources should be removed | 20606e75-05c4-48c0-9d97-add6daa2109a |
-| Guest accounts with write permissions on Azure resources should be removed | 0354476c-a12a-4fcc-a79d-f0ab7ffffdbb |
-| Guest accounts with read permissions on Azure resources should be removed | fde1c0c9-0fd2-4ecc-87b5-98956cbc1095 |
-| Blocked accounts with owner permissions on Azure resources should be removed | 050ac097-3dda-4d24-ab6d-82568e7a50cf |
-| Blocked accounts with read and write permissions on Azure resources should be removed | 1ff0b4c9-ed56-4de6-be9c-d7ab39645926 |
- #### Deprecation of identity recommendations V1 The following security recommendations will be deprecated as part of this change:
We've improved the coverage of the V2 identity recommendations by scanning all A
**Estimated date for change: April 2023**
-We are announcing the full deprecation of support of [`PCI DSS`](/azure/compliance/offerings/offering-pci-dss) standard/initiative in Azure China 21Vianet.
+We're announcing the full deprecation of support of [`PCI DSS`](/azure/compliance/offerings/offering-pci-dss) standard/initiative in Azure China 21Vianet.
Legacy PCI DSS v3.2.1 and legacy SOC TSP are set to be fully deprecated and replaced by [SOC 2 Type 2](/azure/compliance/offerings/offering-soc-2) initiative and [`PCI DSS v4`](/azure/compliance/offerings/offering-pci-dss) initiative. Learn how to [Customize the set of standards in your regulatory compliance dashboard](update-regulatory-compliance-packages.md).
Learn how to [Customize the set of standards in your regulatory compliance dashb
| Recommendation Name | Recommendation Description | Policy |
|--|--|--|
| Azure SQL Managed Instance authentication mode should be Azure Active Directory Only | Disabling local authentication methods and allowing only Azure Active Directory Authentication improves security by ensuring that Azure SQL Managed Instances can exclusively be accessed by Azure Active Directory identities. Learn more at: aka.ms/adonlycreate | [Azure SQL Managed Instance should have Azure Active Directory Only Authentication enabled](https://portal.azure.com/#view/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f78215662-041e-49ed-a9dd-5385911b3a1f) |
-| Azure Synapse Workspace authentication mode should be Azure Active Directory Only | Azure Active Directory (AAD) only authentication methods improves security by ensuring that Synapse Workspaces exclusively require AAD identities for authentication. Learn more at: https://aka.ms/Synapse | [Synapse Workspaces should use only Azure Active Directory identities for authentication](https://portal.azure.com/#view/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f2158ddbe-fefa-408e-b43f-d4faef8ff3b8) |
+| Azure Synapse Workspace authentication mode should be Azure Active Directory Only | Azure Active Directory-only authentication methods improve security by ensuring that Synapse Workspaces exclusively require Azure AD identities for authentication. Learn more at: https://aka.ms/Synapse | [Synapse Workspaces should use only Azure Active Directory identities for authentication](https://portal.azure.com/#view/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f2158ddbe-fefa-408e-b43f-d4faef8ff3b8) |
| Azure Database for MySQL should have an Azure Active Directory administrator provisioned | Provision an Azure AD administrator for your Azure Database for MySQL to enable Azure AD authentication. Azure AD authentication enables simplified permission management and centralized identity management of database users and other Microsoft services | Based on policy: [An Azure Active Directory administrator should be provisioned for MySQL servers](https://portal.azure.com/#view/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f146412e9-005c-472b-9e48-c87b72ac229e) |
| Azure Database for PostgreSQL should have an Azure Active Directory administrator provisioned | Provision an Azure AD administrator for your Azure Database for PostgreSQL to enable Azure AD authentication. Azure AD authentication enables simplified permission management and centralized identity management of database users and other Microsoft services | Based on policy: [An Azure Active Directory administrator should be provisioned for PostgreSQL servers](https://portal.azure.com/#view/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2fb4dec045-250a-48c2-b5cc-e0c4eec8b5b4) |
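To illustrate what remediating the first table row could look like in code (this sketch is not part of the linked recommendation), the call below enforces Azure AD-only authentication on a SQL managed instance; the api-version is an assumption, and the names are placeholders.

```python
# Hypothetical sketch: enforce Azure AD-only authentication on a SQL managed
# instance via the ARM REST API. The child resource name must be "Default".
import requests
from azure.identity import DefaultAzureCredential

SUB, RG, MI = "<subscription-id>", "<resource-group>", "<managed-instance>"
token = DefaultAzureCredential().get_token("https://management.azure.com/.default").token
url = (f"https://management.azure.com/subscriptions/{SUB}/resourceGroups/{RG}"
       f"/providers/Microsoft.Sql/managedInstances/{MI}"
       "/azureADOnlyAuthentications/Default"
       "?api-version=2021-11-01")  # api-version is an assumption
body = {"properties": {"azureADOnlyAuthentication": True}}
resp = requests.put(url, json=body, headers={"Authorization": f"Bearer {token}"})
resp.raise_for_status()
```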
+### Multiple changes to identity recommendations
+
+**Estimated date for change: May 2023**
+
+We previously announced the [availability of identity recommendations V2 (preview)](release-notes-archive.md#extra-recommendations-added-to-identity), which included enhanced capabilities.
+
+As part of these changes, the following recommendations will be released as General Availability (GA) and replace the V1 recommendations that are set to be deprecated.
+
+#### General Availability (GA) release of identity recommendations V2
+
+The following security recommendations will be released as GA and replace the V1 recommendations:
+
+|Recommendation | Assessment Key|
+|--|--|
+|Accounts with owner permissions on Azure resources should be MFA enabled | 6240402e-f77c-46fa-9060-a7ce53997754 |
+|Accounts with write permissions on Azure resources should be MFA enabled | c0cb17b2-0607-48a7-b0e0-903ed22de39b |
+| Accounts with read permissions on Azure resources should be MFA enabled | dabc9bc4-b8a8-45bd-9a5a-43000df8aa1c |
+| Guest accounts with owner permissions on Azure resources should be removed | 20606e75-05c4-48c0-9d97-add6daa2109a |
+| Guest accounts with write permissions on Azure resources should be removed | 0354476c-a12a-4fcc-a79d-f0ab7ffffdbb |
+| Guest accounts with read permissions on Azure resources should be removed | fde1c0c9-0fd2-4ecc-87b5-98956cbc1095 |
+| Blocked accounts with owner permissions on Azure resources should be removed | 050ac097-3dda-4d24-ab6d-82568e7a50cf |
+| Blocked accounts with read and write permissions on Azure resources should be removed | 1ff0b4c9-ed56-4de6-be9c-d7ab39645926 |
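If you want to track where these assessments stand once they reach GA, a hedged sketch like the following can query resources by assessment key (taken from the table above) through Azure Resource Graph; the query shape is standard, but treat the api-version as an assumption.

```python
# Hypothetical sketch: check the status of one of the V2 assessments by its
# assessment key using an Azure Resource Graph query over REST.
import requests
from azure.identity import DefaultAzureCredential

SUB = "<subscription-id>"  # placeholder
KEY = "6240402e-f77c-46fa-9060-a7ce53997754"  # owner accounts / MFA, from the table

query = (
    "securityresources"
    " | where type == 'microsoft.security/assessments'"
    f" | where name == '{KEY}'"
    " | project resourceId = id, status = properties.status.code"
)
token = DefaultAzureCredential().get_token("https://management.azure.com/.default").token
url = ("https://management.azure.com/providers/Microsoft.ResourceGraph/resources"
       "?api-version=2021-03-01")  # api-version is an assumption
resp = requests.post(url, json={"subscriptions": [SUB], "query": query},
                     headers={"Authorization": f"Bearer {token}"})
resp.raise_for_status()
for row in resp.json()["data"]:
    print(row["resourceId"], row["status"])
```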
+ ### DevOps Resource Deduplication for Defender for DevOps **Estimated date for change: June 2023**
-To improve the Defender for DevOps user experience and enable further integration with Defender for Coud's rich set of capabilities, Defender for DevOps will no longer support duplicate instances of a DevOps organization to be onboarded to an Azure tenant.
+To improve the Defender for DevOps user experience and enable further integration with Defender for Cloud's rich set of capabilities, Defender for DevOps will no longer support duplicate instances of a DevOps organization to be onboarded to an Azure tenant.
-If you do not have an instance of a DevOps organization onboarded more than once to your organization, no further action is required. If you do have more than one instance of a DevOps organization onboarded to your tenant, the subscription owner will be notified and will need to delete the DevOps Connector(s) they do not want to keep by navigating to Defender for Cloud Environment Settings.
+If you don't have an instance of a DevOps organization onboarded more than once to your organization, no further action is required. If you do have more than one instance of a DevOps organization onboarded to your tenant, the subscription owner will be notified and will need to delete the DevOps Connector(s) they don't want to keep by navigating to Defender for Cloud Environment Settings.
-Customers will have until June 30, 2023 to resolve this issue. After this date, only the most recent DevOps Connector created where an instance of the DevOps organization exists will remain onboarded to Defender for DevOps.
+Customers will have until June 30, 2023 to resolve this issue. After this date, only the most recently created DevOps Connector for each DevOps organization will remain onboarded to Defender for DevOps.
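To help spot duplicates ahead of the deadline, a sketch along these lines could enumerate connectors in a subscription. The `Microsoft.Security/securityConnectors` resource type, preview api-version, and `hierarchyIdentifier` property are assumptions here; deleting the extras still happens in Defender for Cloud Environment settings.

```python
# Hypothetical sketch: list security connectors and flag any DevOps organization
# that appears more than once in the same subscription.
import requests
from collections import Counter
from azure.identity import DefaultAzureCredential

SUB = "<subscription-id>"  # placeholder
token = DefaultAzureCredential().get_token("https://management.azure.com/.default").token
url = (f"https://management.azure.com/subscriptions/{SUB}"
       "/providers/Microsoft.Security/securityConnectors"
       "?api-version=2023-03-01-preview")  # resource type and version are assumptions
resp = requests.get(url, headers={"Authorization": f"Bearer {token}"})
resp.raise_for_status()
orgs = Counter(c.get("properties", {}).get("hierarchyIdentifier", "")
               for c in resp.json().get("value", []))
for org, count in orgs.items():
    if count > 1:
        print(f"{org} is onboarded {count} times - delete the extra connector(s)")
```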
## Next steps
defender-for-iot Tutorial Create Micro Agent Module Twin https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/device-builders/tutorial-create-micro-agent-module-twin.md
This tutorial will help you learn how to create an individual `DefenderIotMicroA
## Device twins
-For IoT solutions built in Azure, device twins play a key role in both device management and process automation.
-
-Defender for IoT fully integrates with your existing IoT device management platform. Full integration enables you to manage your device's security status and make use of all existing device control capabilities. Integration is achieved by using the IoT Hub twin mechanism.
-
-Learn more in [Understand and use device twins in IoT Hub](../../iot-hub/iot-hub-devguide-device-twins.md).
-
-## Defender-IoT-micro-agent twin
-
-Defender for IoT uses a Defender-IoT-micro-agent twin for each device. The Defender-IoT-micro-agent twin holds all of the information that is relevant to device security for each specific device in your solution. Device security properties are configured through a dedicated Defender-IoT-micro-agent twin for safer communication and to enable updates and maintenance that require fewer resources.
-
-## Understanding DefenderIotMicroAgent module twins
Device twins play a key role in both device management and process automation for IoT solutions built in Azure. Defender for IoT offers the capability to fully integrate with your existing IoT device management platform, enabling you to manage your device security status and make use of existing device control capabilities. You can integrate Defender for IoT by using the IoT Hub twin mechanism.
To learn more about the general concept of module twins in Azure IoT Hub, see [U
Defender for IoT uses the module twin mechanism and maintains a Defender-IoT-micro-agent twin named `DefenderIotMicroAgent` for each of your devices.
-To take full advantage of all Defender for IoT feature's, you need to create, configure, and use the Defender-IoT-micro-agent twins for every device in the service.
+To take full advantage of all Defender for IoT features, you need to create, configure, and use the Defender-IoT-micro-agent twins for every device in the service.
+
+## Defender-IoT-micro-agent twin
+
+Defender for IoT uses a Defender-IoT-micro-agent twin for each device. The Defender-IoT-micro-agent twin holds all of the information that is relevant to device security for each specific device in your solution. Device security properties are configured through a dedicated Defender-IoT-micro-agent twin for safer communication and to enable updates and maintenance that require fewer resources.
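As a hedged illustration of creating that module identity outside the portal (these are not the tutorial's own steps), the Python sketch below calls the IoT Hub service REST API directly; the hub name, device ID, policy, key, and api-version are placeholders or assumptions.

```python
# Hypothetical sketch: create the DefenderIotMicroAgent module identity for a
# device by calling the IoT Hub service REST API with a SAS token.
import base64, hashlib, hmac, time, urllib.parse
import requests

HUB, DEVICE, POLICY, KEY = "<hub-name>", "<device-id>", "iothubowner", "<base64-key>"

def sas_token(uri: str, key: str, policy: str, ttl: int = 3600) -> str:
    # Standard IoT Hub SAS: HMAC-SHA256 over "<url-encoded-uri>\n<expiry>".
    expiry = int(time.time()) + ttl
    to_sign = f"{urllib.parse.quote_plus(uri)}\n{expiry}".encode()
    sig = base64.b64encode(
        hmac.new(base64.b64decode(key), to_sign, hashlib.sha256).digest())
    return (f"SharedAccessSignature sr={urllib.parse.quote_plus(uri)}"
            f"&sig={urllib.parse.quote_plus(sig)}&se={expiry}&skn={policy}")

uri = f"{HUB}.azure-devices.net"
url = (f"https://{uri}/devices/{DEVICE}/modules/DefenderIotMicroAgent"
       "?api-version=2021-04-12")  # api-version is an assumption
body = {"moduleId": "DefenderIotMicroAgent", "deviceId": DEVICE,
        "authentication": {"type": "sas"}}  # let the service generate keys
resp = requests.put(url, json=body,
                    headers={"Authorization": sas_token(uri, KEY, POLICY)})
resp.raise_for_status()
```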
In this tutorial, you'll learn how to: