Service | Microsoft Docs article | Related commit history on GitHub | Change details |
---|---|---|---|
active-directory-domain-services | Administration Concepts | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/administration-concepts.md | Title: Management concepts for Azure AD Domain Services | Microsoft Docs description: Learn about how to administer an Azure Active Directory Domain Services managed domain and the behavior of user accounts and passwords -+ |
active-directory-domain-services | Alert Ldaps | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/alert-ldaps.md | Title: Resolve secure LDAP alerts in Azure AD Domain Services | Microsoft Docs description: Learn how to troubleshoot and resolve common alerts with secure LDAP for Azure Active Directory Domain Services. -+ ms.assetid: 81208c0b-8d41-4f65-be15-42119b1b5957 |
active-directory-domain-services | Alert Nsg | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/alert-nsg.md | Title: Resolve network security group alerts in Azure AD DS | Microsoft Docs description: Learn how to troubleshoot and resolve network security group configuration alerts for Azure Active Directory Domain Services -+ ms.assetid: 95f970a7-5867-4108-a87e-471fa0910b8c |
active-directory-domain-services | Alert Service Principal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/alert-service-principal.md | Title: Resolve service principal alerts in Azure AD Domain Services | Microsoft Docs description: Learn how to troubleshoot service principal configuration alerts for Azure Active Directory Domain Services -+ ms.assetid: f168870c-b43a-4dd6-a13f-5cfadc5edf2c |
active-directory-domain-services | Change Sku | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/change-sku.md | Title: Change the SKU for an Azure AD Domain Services | Microsoft Docs description: Learn how to change the SKU tier for an Azure AD Domain Services managed domain if your business requirements change -+ |
active-directory-domain-services | Check Health | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/check-health.md | Title: Check the health of Azure Active Directory Domain Services | Microsoft Docs description: Learn how to check the health of an Azure Active Directory Domain Services (Azure AD DS) managed domain and understand status messages using the Azure portal. -+ ms.assetid: 8999eec3-f9da-40b3-997a-7a2587911e96 |
active-directory-domain-services | Compare Identity Solutions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/compare-identity-solutions.md | Title: Compare Active Directory-based services in Azure | Microsoft Docs description: In this overview, you compare the different identity offerings for Active Directory Domain Services, Azure Active Directory, and Azure Active Directory Domain Services. -+ |
active-directory-domain-services | Concepts Forest Trust | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/concepts-forest-trust.md | Title: How trusts work for Azure AD Domain Services | Microsoft Docs description: Learn more about how forest trusts work with Azure AD Domain Services -+ |
active-directory-domain-services | Concepts Migration Benefits | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/concepts-migration-benefits.md | Title: Benefits of Classic deployment migration in Azure AD Domain Services | Microsoft Docs description: Learn more about the benefits of migrating a Classic deployment of Azure Active Directory Domain Services to the Resource Manager deployment model -+ |
active-directory-domain-services | Concepts Replica Sets | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/concepts-replica-sets.md | Title: Replica sets concepts for Azure AD Domain Services | Microsoft Docs description: Learn what replica sets are in Azure Active Directory Domain Services and how they provide redundancy to applications that require identity services. -+ |
active-directory-domain-services | Concepts Resource Forest | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/concepts-resource-forest.md | Title: Resource forest concepts for Azure AD Domain Services | Microsoft Docs description: Learn what a resource forest is in Azure Active Directory Domain Services and how it benefits your organization in hybrid environments with limited user authentication options or security concerns. -+ |
active-directory-domain-services | Create Gmsa | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/create-gmsa.md | Title: Group managed service accounts for Azure AD Domain Services | Microsoft Docs description: Learn how to create a group managed service account (gMSA) for use with Azure Active Directory Domain Services managed domains -+ ms.assetid: e6faeddd-ef9e-4e23-84d6-c9b3f7d16567 |
active-directory-domain-services | Create Ou | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/create-ou.md | Title: Create an organizational unit (OU) in Azure AD Domain Services | Microsoft Docs description: Learn how to create and manage a custom Organizational Unit (OU) in an Azure AD Domain Services managed domain. -+ ms.assetid: 52602ad8-2b93-4082-8487-427bdcfa8126 |
active-directory-domain-services | Create Resource Forest Powershell | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/create-resource-forest-powershell.md | Title: Create an Azure AD Domain Services resource forest using Azure PowerShell | Microsoft Docs description: In this article, learn how to create and configure an Azure Active Directory Domain Services resource forest and an outbound forest trust to an on-premises Active Directory Domain Services environment using Azure PowerShell. -+ |
active-directory-domain-services | Delete Aadds | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/delete-aadds.md | Title: Delete Azure Active Directory Domain Services | Microsoft Docs description: Learn how to disable, or delete, an Azure Active Directory Domain Services managed domain using the Azure portal -+ ms.assetid: 89e407e1-e1e0-49d1-8b89-de11484eee46 |
active-directory-domain-services | Deploy Azure App Proxy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/deploy-azure-app-proxy.md | Title: Deploy Azure AD Application Proxy for Azure AD Domain Services | Microsoft Docs description: Learn how to provide secure access to internal applications for remote workers by deploying and configuring Azure Active Directory Application Proxy in an Azure Active Directory Domain Services managed domain -+ ms.assetid: 938a5fbc-2dd1-4759-bcce-628a6e19ab9d |
active-directory-domain-services | Deploy Kcd | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/deploy-kcd.md | Title: Kerberos constrained delegation for Azure AD Domain Services | Microsoft Docs description: Learn how to enable resource-based Kerberos constrained delegation (KCD) in an Azure Active Directory Domain Services managed domain. -+ ms.assetid: 938a5fbc-2dd1-4759-bcce-628a6e19ab9d |
active-directory-domain-services | Deploy Sp Profile Sync | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/deploy-sp-profile-sync.md | Title: Enable SharePoint User Profile service with Azure AD DS | Microsoft Docs description: Learn how to configure an Azure Active Directory Domain Services managed domain to support profile synchronization for SharePoint Server -+ ms.assetid: 938a5fbc-2dd1-4759-bcce-628a6e19ab9d |
active-directory-domain-services | Fleet Metrics | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/fleet-metrics.md | + + Title: Check fleet metrics of Azure Active Directory Domain Services | Microsoft Docs +description: Learn how to check fleet metrics of an Azure Active Directory Domain Services (Azure AD DS) managed domain. +++++ms.assetid: 8999eec3-f9da-40b3-997a-7a2587911e96 ++++ Last updated : 08/16/2022++++# Check fleet metrics of Azure Active Directory Domain Services ++Administrators can use Azure Monitor Metrics to configure a scope for Azure Active Directory Domain Services (Azure AD DS) and gain insights into how the service is performing. +You can access Azure AD DS metrics from two places: ++- In Azure Monitor Metrics, click **New chart** > **Select a scope** and select the Azure AD DS instance: ++ :::image type="content" border="true" source="media/fleet-metrics/select.png" alt-text="Screenshot of how to select Azure AD DS for fleet metrics."::: ++- In Azure AD DS, under **Monitoring**, click **Metrics**: ++ :::image type="content" border="true" source="media/fleet-metrics/metrics-scope.png" alt-text="Screenshot of how to select Azure AD DS as scope in Azure Monitor Metrics."::: ++ The following screenshot shows how to select combined metrics for Total Processor Time and LDAP searches: ++ :::image type="content" border="true" source="media/fleet-metrics/combined-metrics.png" alt-text="Screenshot of combined metrics in Azure Monitor Metrics."::: ++ You can also view metrics for a fleet of Azure AD DS instances: ++ :::image type="content" border="true" source="media/fleet-metrics/metrics-instance.png" alt-text="Screenshot of how to select an Azure AD DS instance as the scope for fleet metrics."::: ++ The following screenshot shows combined metrics for Total Processor Time, DNS Queries, and LDAP searches by role instance: ++ :::image type="content" border="true" source="media/fleet-metrics/combined-metrics-instance.png" alt-text="Screenshot of combined metrics for an Azure AD DS instance."::: ++## Metrics definitions and descriptions ++You can select a metric for more details about the data collection. +++The following table describes the metrics that are available for Azure AD DS. ++| Metric | Description | +|--|-| +|DNS - Total Query Received/sec |The average number of queries received by the DNS server each second. It's backed by performance counter data from the domain controller, and can be filtered or split by role instance.| +|Total Response Sent/sec |The average number of responses sent by the DNS server each second. It's backed by performance counter data from the domain controller, and can be filtered or split by role instance.| +|NTDS - LDAP Successful Binds/sec|The number of successful LDAP binds per second for the NTDS object. It's backed by performance counter data from the domain controller, and can be filtered or split by role instance.| +|% Committed Bytes In Use |The ratio of Memory\Committed Bytes to the Memory\Commit Limit. Committed memory is the physical memory in use for which space has been reserved in the paging file should it need to be written to disk. The commit limit is determined by the size of the paging file. If the paging file is enlarged, the commit limit increases, and the ratio is reduced. This counter displays the current percentage value only; it isn't an average. 
It's backed by performance counter data from the domain controller, and can be filtered or split by role instance.| +|Total Processor Time |The percentage of elapsed time that the processor spends executing a non-idle thread. It's calculated by measuring the percentage of time that the processor spends executing the idle thread and then subtracting that value from 100%. (Each processor has an idle thread that consumes cycles when no other threads are ready to run.) This counter is the primary indicator of processor activity, and displays the average percentage of busy time observed during the sample interval. Note that the accounting calculation of whether the processor is idle is performed at an internal sampling interval of the system clock (10 ms). On today's fast processors, % Processor Time can therefore underestimate processor utilization, because the processor may spend much of its time servicing threads between system clock samples. Workload-based timer applications are one type of application that is more likely to be measured inaccurately, because timers are signaled just after the sample is taken. It's backed by performance counter data from the domain controller, and can be filtered or split by role instance.| +|Kerberos Authentications |The number of times per second that clients use a ticket to authenticate to this computer. It's backed by performance counter data from the domain controller, and can be filtered or split by role instance.| +|NTLM Authentications|The number of NTLM authentications processed per second for the Active Directory on this domain controller or for local accounts on this member server. It's backed by performance counter data from the domain controller, and can be filtered or split by role instance.| +|% Processor Time (dns)|The percentage of elapsed time that all of the dns process threads used the processor to execute instructions. An instruction is the basic unit of execution in a computer, a thread is the object that executes instructions, and a process is the object created when a program is run. Code executed to handle some hardware interrupts and trap conditions is included in this count. It's backed by performance counter data from the domain controller, and can be filtered or split by role instance.| +|% Processor Time (lsass)|The percentage of elapsed time that all of the lsass process threads used the processor to execute instructions. An instruction is the basic unit of execution in a computer, a thread is the object that executes instructions, and a process is the object created when a program is run. Code executed to handle some hardware interrupts and trap conditions is included in this count. It's backed by performance counter data from the domain controller, and can be filtered or split by role instance.| +|NTDS - LDAP Searches/sec |The average number of searches per second for the NTDS object. It's backed by performance counter data from the domain controller, and can be filtered or split by role instance.| ++## Azure Monitor alert ++You can configure metric alerts for Azure AD DS to be notified of possible problems. Metric alerts are one type of alert for Azure Monitor. For more information about other types of alerts, see [What are Azure Monitor Alerts?](/azure/azure-monitor/alerts/alerts-overview). ++To view and manage Azure Monitor alerts, a user needs to be assigned [Azure Monitor roles](/azure/azure-monitor/roles-permissions-security). 
+ +In Azure Monitor or Azure AD DS Metrics, click **New alert** and configure an Azure AD DS instance as the scope. Then choose the metrics you want to measure from the list of available signals: ++ :::image type="content" border="true" source="media/fleet-metrics/available-alerts.png" alt-text="Screenshot of available alerts."::: ++The following screenshot shows how to define a metric alert with a threshold for **Total Processor Time**: +++You can also configure an alert notification, which can be email, SMS, or voice call: +++The following screenshot shows a metric alert triggered for **Total Processor Time**: +++In this case, an email notification is sent after the alert is activated: +++Another email notification is sent after the alert is deactivated: +++## Select multiple resources ++You can upvote the feature request to enable multiple resource selection, which would correlate data between resource types. +++## Next steps ++- [Check the health of an Azure Active Directory Domain Services managed domain](check-health.md) |
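The fleet-metrics entry above configures metric alerts through the portal; for script-based setups, a minimal PowerShell sketch with the Az.Monitor module is below. The resource group, managed domain name, action group resource ID, and 70% threshold are placeholder assumptions; the metric name mirrors the Total Processor Time signal referenced above.

```powershell
# Sketch: alert when Total Processor Time on an Azure AD DS managed domain
# averages above 70% over five-minute windows (names are placeholders).
Connect-AzAccount

$aadds = Get-AzResource -ResourceType 'Microsoft.AAD/domainServices' `
    -ResourceGroupName 'myResourceGroup' -Name 'aadds.contoso.com'

$criteria = New-AzMetricAlertRuleV2Criteria -MetricName 'Total Processor Time' `
    -TimeAggregation Average -Operator GreaterThan -Threshold 70

Add-AzMetricAlertRuleV2 -Name 'aadds-cpu-alert' `
    -ResourceGroupName 'myResourceGroup' `
    -TargetResourceId $aadds.ResourceId `
    -WindowSize 00:05:00 -Frequency 00:05:00 -Severity 3 `
    -Condition $criteria `
    -ActionGroupId '/subscriptions/<subscription-id>/resourceGroups/myResourceGroup/providers/microsoft.insights/actionGroups/myActionGroup'
```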
active-directory-domain-services | How To Data Retrieval | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/how-to-data-retrieval.md | Title: Instructions for data retrieval from Azure Active Directory Domain Services description: Learn how to retrieve data from Azure Active Directory Domain Services (Azure AD DS). -+ |
active-directory-domain-services | Join Centos Linux Vm | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/join-centos-linux-vm.md | Title: Join a CentOS VM to Azure AD Domain Services | Microsoft Docs description: Learn how to configure and join a CentOS Linux virtual machine to an Azure Active Directory Domain Services managed domain. -+ ms.assetid: 16100caa-f209-4cb0-86d3-9e218aeb51c6 |
active-directory-domain-services | Join Coreos Linux Vm | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/join-coreos-linux-vm.md | Title: Join a CoreOS VM to Azure AD Domain Services | Microsoft Docs description: Learn how to configure and join a CoreOS virtual machine to an Azure AD Domain Services managed domain. -+ ms.assetid: 5db65f30-bf69-4ea3-9ea5-add1db83fdb8 |
active-directory-domain-services | Join Rhel Linux Vm | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/join-rhel-linux-vm.md | Title: Join a RHEL VM to Azure AD Domain Services | Microsoft Docs description: Learn how to configure and join a Red Hat Enterprise Linux virtual machine to an Azure AD Domain Services managed domain. -+ ms.assetid: 16100caa-f209-4cb0-86d3-9e218aeb51c6 |
active-directory-domain-services | Join Suse Linux Vm | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/join-suse-linux-vm.md | Title: Join a SLE VM to Azure AD Domain Services | Microsoft Docs description: Learn how to configure and join a SUSE Linux Enterprise virtual machine to an Azure AD Domain Services managed domain. -+ |
active-directory-domain-services | Join Ubuntu Linux Vm | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/join-ubuntu-linux-vm.md | Title: Join an Ubuntu VM to Azure AD Domain Services | Microsoft Docs description: Learn how to configure and join an Ubuntu Linux virtual machine to an Azure AD Domain Services managed domain. -+ ms.assetid: 804438c4-51a1-497d-8ccc-5be775980203 |
active-directory-domain-services | Join Windows Vm Template | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/join-windows-vm-template.md | Title: Use a template to join a Windows VM to Azure AD DS | Microsoft Docs description: Learn how to use Azure Resource Manager templates to join a new or existing Windows Server VM to an Azure Active Directory Domain Services managed domain. -+ ms.assetid: 4eabfd8e-5509-4acd-86b5-1318147fddb5 |
active-directory-domain-services | Join Windows Vm | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/join-windows-vm.md | Title: Join a Windows Server VM to an Azure AD Domain Services managed domain | Microsoft Docs description: In this tutorial, learn how to join a Windows Server virtual machine to an Azure Active Directory Domain Services managed domain. -+ |
active-directory-domain-services | Manage Dns | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/manage-dns.md | Title: Manage DNS for Azure AD Domain Services | Microsoft Docs description: Learn how to install the DNS Server Tools to manage DNS and create conditional forwarders for an Azure Active Directory Domain Services managed domain. -+ ms.assetid: 938a5fbc-2dd1-4759-bcce-628a6e19ab9d |
active-directory-domain-services | Manage Group Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/manage-group-policy.md | Title: Create and manage group policy in Azure AD Domain Services | Microsoft Docs description: Learn how to edit the built-in group policy objects (GPOs) and create your own custom policies in an Azure Active Directory Domain Services managed domain. -+ ms.assetid: 938a5fbc-2dd1-4759-bcce-628a6e19ab9d |
active-directory-domain-services | Migrate From Classic Vnet | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/migrate-from-classic-vnet.md | Title: Migrate Azure AD Domain Services from a Classic virtual network | Microsoft Docs description: Learn how to migrate an existing Azure AD Domain Services managed domain from the Classic virtual network model to a Resource Manager-based virtual network. -+ |
active-directory-domain-services | Mismatched Tenant Error | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/mismatched-tenant-error.md | Title: Fix mismatched directory errors in Azure AD Domain Services | Microsoft Docs description: Learn what a mismatched directory error means and how to resolve it in Azure AD Domain Services -+ ms.assetid: 40eb75b7-827e-4d30-af6c-ca3c2af915c7 |
active-directory-domain-services | Network Considerations | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/network-considerations.md | Title: Network planning and connections for Azure AD Domain Services | Microsoft Docs description: Learn about some of the virtual network design considerations and resources used for connectivity when you run Azure Active Directory Domain Services. -+ |
active-directory-domain-services | Notifications | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/notifications.md | Title: Email notifications for Azure AD Domain Services | Microsoft Docs description: Learn how to configure email notifications to alert you about issues in an Azure Active Directory Domain Services managed domain -+ ms.assetid: b9af1792-0b7f-4f3e-827a-9426cdb33ba6 |
active-directory-domain-services | Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/overview.md | Title: Overview of Azure Active Directory Domain Services | Microsoft Docs description: In this overview, learn what Azure Active Directory Domain Services provides and how to use it in your organization to provide identity services to applications and services in the cloud. -+ |
active-directory-domain-services | Password Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/password-policy.md | Title: Create and use password policies in Azure AD Domain Services | Microsoft Docs description: Learn how and why to use fine-grained password policies to secure and control account passwords in an Azure AD DS managed domain. -+ ms.assetid: 1a14637e-b3d0-4fd9-ba7a-576b8df62ff2 |
active-directory-domain-services | Powershell Create Instance | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/powershell-create-instance.md | Title: Enable Azure AD Domain Services using PowerShell | Microsoft Docs description: Learn how to configure and enable Azure Active Directory Domain Services using Azure AD PowerShell and Azure PowerShell. -+ ms.assetid: d4bc5583-6537-4cd9-bc4b-7712fdd9272a |
active-directory-domain-services | Powershell Scoped Synchronization | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/powershell-scoped-synchronization.md | Title: Scoped synchronization using PowerShell for Azure AD Domain Services | Microsoft Docs description: Learn how to use Azure AD PowerShell to configure scoped synchronization from Azure AD to an Azure Active Directory Domain Services managed domain -+ |
active-directory-domain-services | Scenarios | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/scenarios.md | Title: Common deployment scenarios for Azure AD Domain Services | Microsoft Docs description: Learn about some of the common scenarios and use-cases for Azure Active Directory Domain Services to provide value and meet business needs. -+ ms.assetid: c5216ec9-4c4f-4b7e-830b-9d70cf176b20 |
active-directory-domain-services | Scoped Synchronization | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/scoped-synchronization.md | Title: Scoped synchronization for Azure AD Domain Services | Microsoft Docs description: Learn how to use the Azure portal to configure scoped synchronization from Azure AD to an Azure Active Directory Domain Services managed domain -+ ms.assetid: 9389cf0f-0036-4b17-95da-80838edd2225 |
active-directory-domain-services | Secure Remote Vm Access | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/secure-remote-vm-access.md | Title: Secure remote VM access in Azure AD Domain Services | Microsoft Docs description: Learn how to secure remote access to VMs using Network Policy Server (NPS) and Azure AD Multi-Factor Authentication with a Remote Desktop Services deployment in an Azure Active Directory Domain Services managed domain. -+ |
active-directory-domain-services | Secure Your Domain | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/secure-your-domain.md | Title: Secure Azure AD Domain Services | Microsoft Docs description: Learn how to disable weak ciphers, old protocols, and NTLM password hash synchronization for an Azure Active Directory Domain Services managed domain. -+ ms.assetid: 6b4665b5-4324-42ab-82c5-d36c01192c2a |
active-directory-domain-services | Security Audit Events | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/security-audit-events.md | Title: Enable security audits for Azure AD Domain Services | Microsoft Docs description: Learn how to enable security audits to centralize the logging of events for analysis and alerts in Azure AD Domain Services -+ ms.assetid: 662362c3-1a5e-4e94-ae09-8e4254443697 |
active-directory-domain-services | Suspension | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/suspension.md | Title: Suspended domains in Azure AD Domain Services | Microsoft Docs description: Learn about the different health states for an Azure AD DS managed domain and how to restore a suspended domain. -+ ms.assetid: 95e1d8da-60c7-4fc1-987d-f48fde56a8cb |
active-directory-domain-services | Synchronization | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/synchronization.md | Title: How synchronization works in Azure AD Domain Services | Microsoft Docs description: Learn how the synchronization process works for objects and credentials from an Azure AD tenant or on-premises Active Directory Domain Services environment to an Azure Active Directory Domain Services managed domain. -+ ms.assetid: 57cbf436-fc1d-4bab-b991-7d25b6e987ef |
active-directory-domain-services | Template Create Instance | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/template-create-instance.md | Title: Enable Azure AD Domain Services using a template | Microsoft Docs description: Learn how to configure and enable Azure Active Directory Domain Services using an Azure Resource Manager template -+ |
active-directory-domain-services | Troubleshoot Account Lockout | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/troubleshoot-account-lockout.md | Title: Troubleshoot account lockout in Azure AD Domain Services | Microsoft Docs description: Learn how to troubleshoot common problems that cause user accounts to be locked out in Azure Active Directory Domain Services. -+ |
active-directory-domain-services | Troubleshoot Alerts | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/troubleshoot-alerts.md | Title: Common alerts and resolutions in Azure AD Domain Services | Microsoft Docs description: Learn how to resolve common alerts generated as part of the health status for Azure Active Directory Domain Services -+ ms.assetid: 54319292-6aa0-4a08-846b-e3c53ecca483 |
active-directory-domain-services | Troubleshoot Domain Join | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/troubleshoot-domain-join.md | Title: Troubleshoot domain-join with Azure AD Domain Services | Microsoft Docs description: Learn how to troubleshoot common problems when you try to domain-join a VM or connect an application to Azure Active Directory Domain Services and you can't connect or authenticate to the managed domain. -+ |
active-directory-domain-services | Troubleshoot Sign In | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/troubleshoot-sign-in.md | Title: Troubleshoot sign-in problems in Azure AD Domain Services | Microsoft Docs description: Learn how to troubleshoot common user sign-in problems and errors in Azure Active Directory Domain Services. -+ |
active-directory-domain-services | Troubleshoot | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/troubleshoot.md | Title: Azure Active Directory Domain Services troubleshooting | Microsoft Docs description: Learn how to troubleshoot common errors when you create or manage Azure Active Directory Domain Services -+ ms.assetid: 4bc8c604-f57c-4f28-9dac-8b9164a0cf0b |
active-directory-domain-services | Tshoot Ldaps | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/tshoot-ldaps.md | Title: Troubleshoot secure LDAP in Azure AD Domain Services | Microsoft Docs description: Learn how to troubleshoot secure LDAP (LDAPS) for an Azure Active Directory Domain Services managed domain -+ ms.assetid: 445c60da-e115-447b-841d-96739975bdf6 |
active-directory-domain-services | Tutorial Configure Ldaps | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/tutorial-configure-ldaps.md | Title: Tutorial - Configure LDAPS for Azure Active Directory Domain Services | Microsoft Docs description: In this tutorial, you learn how to configure secure lightweight directory access protocol (LDAPS) for an Azure Active Directory Domain Services managed domain. -+ |
active-directory-domain-services | Tutorial Configure Networking | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/tutorial-configure-networking.md | Title: Tutorial - Configure virtual networking for Azure AD Domain Services | Microsoft Docs description: In this tutorial, you learn how to create and configure an Azure virtual network subnet or network peering for an Azure Active Directory Domain Services managed domain using the Azure portal. -+ |
active-directory-domain-services | Tutorial Configure Password Hash Sync | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/tutorial-configure-password-hash-sync.md | Title: Enable password hash sync for Azure AD Domain Services | Microsoft Docs description: In this tutorial, learn how to enable password hash synchronization using Azure AD Connect to an Azure Active Directory Domain Services managed domain. -+ |
active-directory-domain-services | Tutorial Create Forest Trust | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/tutorial-create-forest-trust.md | Title: Tutorial - Create a forest trust in Azure AD Domain Services | Microsoft Docs description: Learn how to create a one-way outbound forest trust to an on-premises AD DS domain in the Azure portal for Azure AD Domain Services -+ |
active-directory-domain-services | Tutorial Create Instance Advanced | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/tutorial-create-instance-advanced.md | Title: Tutorial - Create a customized Azure Active Directory Domain Services managed domain | Microsoft Docs description: In this tutorial, you learn how to create and configure a customized Azure Active Directory Domain Services managed domain and specify advanced configuration options using the Azure portal. -+ |
active-directory-domain-services | Tutorial Create Instance | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/tutorial-create-instance.md | Title: Tutorial - Create an Azure Active Directory Domain Services managed domain | Microsoft Docs description: In this tutorial, you learn how to create and configure an Azure Active Directory Domain Services managed domain using the Azure portal. -+ |
active-directory-domain-services | Tutorial Create Management Vm | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/tutorial-create-management-vm.md | Title: Tutorial - Create a management VM for Azure Active Directory Domain Services | Microsoft Docs description: In this tutorial, you learn how to create and configure a Windows virtual machine that you use to administer an Azure Active Directory Domain Services managed domain. -+ |
active-directory-domain-services | Tutorial Create Replica Set | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/tutorial-create-replica-set.md | Title: Tutorial - Create a replica set in Azure AD Domain Services | Microsoft Docs description: Learn how to create and use replica sets in the Azure portal for service resiliency with Azure AD Domain Services -+ |
active-directory-domain-services | Tutorial Perform Disaster Recovery Drill | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/tutorial-perform-disaster-recovery-drill.md | Title: Tutorial - Perform a disaster recovery drill in Azure AD Domain Services description: Learn how to perform a disaster recovery drill using replica sets in Azure AD Domain Services -+ |
active-directory-domain-services | Use Azure Monitor Workbooks | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/use-azure-monitor-workbooks.md | Title: Use Azure Monitor Workbooks with Azure AD Domain Services | Microsoft Docs description: Learn how to use Azure Monitor Workbooks to review security audits and understand issues in an Azure Active Directory Domain Services managed domain. -+ |
active-directory | Concept Password Ban Bad On Premises | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-password-ban-bad-on-premises.md | Azure AD Password Protection is designed with the following principles in mind: * Domain controllers (DCs) never have to communicate directly with the internet. * No new network ports are opened on DCs. * No AD DS schema changes are required. The software uses the existing AD DS *container* and *serviceConnectionPoint* schema objects.-* No minimum AD DS domain or forest functional level (DFL/FFL) is required. +* Any supported AD DS domain or forest functional level can be used. * The software doesn't create or require accounts in the AD DS domains that it protects. * User clear-text passwords never leave the domain controller, either during password validation operations or at any other time. * The software isn't dependent on other Azure AD features. For example, Azure AD password hash sync (PHS) isn't related or required for Azure AD Password Protection. |
active-directory | Howto Authentication Passwordless Security Key On Premises | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-authentication-passwordless-security-key-on-premises.md | Run the following steps in each domain and forest in your organization that contain Azure AD users: 1. Run the following PowerShell commands to create a new Azure AD Kerberos Server object both in your on-premises Active Directory domain and in your Azure Active Directory tenant. ### Example 1 prompt for all credentials- > [!NOTE] - > Replace `contoso.corp.com` in the following example with your on-premises Active Directory domain name. ```powershell # Specify the on-premises Active Directory domain. A new Azure AD # Kerberos Server object will be created in this Active Directory domain.- $domain = "contoso.corp.com" + $domain = $env:USERDNSDOMAIN # Enter an Azure Active Directory global administrator username and password. $cloudCred = Get-Credential -Message 'An Active Directory user who is a member of the Global Administrators group for Azure AD.' Run the following steps in each domain and forest in your organization that contain Azure AD users: ```powershell # Specify the on-premises Active Directory domain. A new Azure AD # Kerberos Server object will be created in this Active Directory domain.- $domain = "contoso.corp.com" + $domain = $env:USERDNSDOMAIN # Enter an Azure Active Directory global administrator username and password. $cloudCred = Get-Credential Run the following steps in each domain and forest in your organization that contain Azure AD users: ```powershell # Specify the on-premises Active Directory domain. A new Azure AD # Kerberos Server object will be created in this Active Directory domain.- $domain = "contoso.corp.com" + $domain = $env:USERDNSDOMAIN # Enter a UPN of an Azure Active Directory global administrator $userPrincipalName = "administrator@contoso.onmicrosoft.com" Run the following steps in each domain and forest in your organization that contain Azure AD users: ### Example 4 prompt for cloud credentials using modern authentication > [!NOTE] > If you are working on a domain-joined machine with an account that has domain administrator privileges and your organization protects password-based sign-in and enforces modern authentication methods such as multifactor authentication, FIDO2, or smart card technology, you must use the `-UserPrincipalName` parameter with the User Principal Name (UPN) of a global administrator. And you can skip the "-DomainCredential" parameter.- > - Replace `contoso.corp.com` in the following example with your on-premises Active Directory domain name. - > - Replace `administrator@contoso.onmicrosoft.com` in the following example with the UPN of a global administrator. + > - Replace `administrator@contoso.onmicrosoft.com` in the following example with the UPN of a global administrator. ```powershell # Specify the on-premises Active Directory domain. A new Azure AD # Kerberos Server object will be created in this Active Directory domain.- $domain = "contoso.corp.com" + $domain = $env:USERDNSDOMAIN # Enter a UPN of an Azure Active Directory global administrator $userPrincipalName = "administrator@contoso.onmicrosoft.com" |
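The diffs above switch `$domain` from a hardcoded value to `$env:USERDNSDOMAIN`, but each quoted block is truncated before the cmdlet that consumes the variables. As a hedged sketch of how the tutorial's examples continue (the `AzureADHybridAuthenticationManagement` module and the `Set-AzureADKerberosServer`/`Get-AzureADKerberosServer` cmdlets are assumed from that tutorial's flow):

```powershell
# Sketch: create and then verify the Azure AD Kerberos Server object using
# the variables prepared in the quoted examples above.
Import-Module AzureADHybridAuthenticationManagement

$domain = $env:USERDNSDOMAIN
$cloudCred = Get-Credential -Message 'An Azure AD Global Administrator'
$domainCred = Get-Credential -Message 'An on-premises Domain Administrator'

Set-AzureADKerberosServer -Domain $domain -CloudCredential $cloudCred -DomainCredential $domainCred

# Confirm the object and its key metadata were created.
Get-AzureADKerberosServer -Domain $domain -CloudCredential $cloudCred -DomainCredential $domainCred
```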
active-directory | Howto Password Ban Bad On Premises Deploy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-password-ban-bad-on-premises-deploy.md | The following core requirements apply: The following requirements apply to the Azure AD Password Protection DC agent: -* All machines where the Azure AD Password Protection DC agent software will be installed must run Windows Server 2012 or later, including Windows Server Core editions. - * The Active Directory domain or forest doesn't need to be at Windows Server 2012 domain functional level (DFL) or forest functional level (FFL). As mentioned in [Design Principles](concept-password-ban-bad-on-premises.md#design-principles), there's no minimum DFL or FFL required for either the DC agent or proxy software to run. +* Machines where the Azure AD Password Protection DC agent software will be installed can run any supported version of Windows Server, including Windows Server Core editions. + * The Active Directory domain or forest can be any supported functional level. * All machines where the Azure AD Password Protection DC agent will be installed must have .NET 4.7.2 installed. * If .NET 4.7.2 is not already installed, download and run the installer found at [The .NET Framework 4.7.2 offline installer for Windows](https://support.microsoft.com/topic/microsoft-net-framework-4-7-2-offline-installer-for-windows-05a72734-2127-a15d-50cf-daf56d5faec2). * Any Active Directory domain that runs the Azure AD Password Protection DC agent service must use Distributed File System Replication (DFSR) for sysvol replication. |
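Since the DC agent requirement above calls for .NET 4.7.2, a quick PowerShell check can confirm it before installation; this sketch reads the well-known registry `Release` value, where 461808 is the minimum value corresponding to .NET Framework 4.7.2:

```powershell
# Returns $true when .NET Framework 4.7.2 or later is installed.
$release = (Get-ItemProperty `
    'HKLM:\SOFTWARE\Microsoft\NET Framework Setup\NDP\v4\Full' -Name Release).Release
$release -ge 461808
```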
active-directory | Tutorial Enable Cloud Sync Sspr Writeback | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/tutorial-enable-cloud-sync-sspr-writeback.md | Permissions for cloud sync are configured by default. If permissions need to be ### Enable password writeback in Azure AD Connect cloud sync -For public preview, you need to enable password writeback in Azure AD Connect cloud sync by using the Set-AADCloudSyncPasswordWritebackConfiguration cmdlet on the servers with the provisioning agents. You will need global administrator credentials: +For public preview, you need to enable password writeback in Azure AD Connect cloud sync by running `Set-AADCloudSyncPasswordWritebackConfiguration` on any server with the provisioning agent. You will need global administrator credentials: ```powershell Import-Module 'C:\Program Files\Microsoft Azure AD Connect Provisioning Agent\Microsoft.CloudSync.Powershell.dll' |
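The quoted snippet stops at the module import; as a hedged sketch of the complete sequence (assuming the public-preview cmdlet named above takes an `-Enable` switch and a `-Credential` parameter, per the tutorial it quotes), the call looks like this:

```powershell
# Sketch: enable SSPR password writeback for cloud sync; Get-Credential
# prompts for the global administrator account mentioned above.
Import-Module 'C:\Program Files\Microsoft Azure AD Connect Provisioning Agent\Microsoft.CloudSync.Powershell.dll'
Set-AADCloudSyncPasswordWritebackConfiguration -Enable $true -Credential $(Get-Credential)
```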
active-directory | Onboard Azure | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/onboard-azure.md | -This article describes how to onboard a Microsoft Azure subscription or subscriptions on Permissions Management (Permissions Management). Onboarding a subscription creates a new authorization system to represent the Azure subscription in Permissions Management. +This article describes how to onboard a Microsoft Azure subscription or subscriptions on Permissions Management. Onboarding a subscription creates a new authorization system to represent the Azure subscription in Permissions Management. > [!NOTE] > A *global administrator* or *super admin* (an admin for all authorization system types) can perform the tasks in this article after the global administrator has initially completed the steps provided in [Enable Permissions Management on your Azure Active Directory tenant](onboard-enable-tenant.md). To add Permissions Management to your Azure AD tenant: 1. If the **Data Collectors** dashboard isn't displayed when Permissions Management launches: - - In the Permissions Management home page, select **Settings** (the gear icon), and then select the **Data Collectors** subtab. + - In the Permissions Management home page, select **Settings** (the gear icon, top right), and then select the **Data Collectors** subtab. 1. On the **Data Collectors** dashboard, select **Azure**, and then select **Create Configuration**. Choose from 3 options to manage Azure subscriptions. #### Option 1: Automatically manage -This option allows subscriptions to be automatically detected and monitored without additional configuration. Steps to detect list of subscriptions and onboard for collection: +This option allows subscriptions to be automatically detected and monitored without extra configuration. A key benefit of automatic management is that any current or future subscriptions found get onboarded automatically. Steps to detect the list of subscriptions and onboard them for collection: -- Grant Reader role to Cloud Infrastructure Entitlement Management application at management group or subscription scope. +- Firstly, grant the Reader role to the Cloud Infrastructure Entitlement Management application at management group or subscription scope. -Any current or future subscriptions found get onboarded automatically. -- To view status of onboarding after saving the configuration: --1. In the MEPM portal, click the cog on the top right-hand side. -1. Navigate to data collectors tab. +1. In the EPM portal, click the cog on the top right-hand side. +1. Navigate to the data collectors tab +1. Ensure 'Azure' is selected 1. Click 'Create Configuration' 1. For onboarding mode, select 'Automatically Manage' -1. Click 'Verify Now & Save' ++The steps listed on the screen outline how to create the role assignment for the Cloud Infrastructure Entitlements Management application. This can be performed manually in the Entra console, or programmatically with PowerShell or the Azure CLI. ++Lastly, click 'Verify Now & Save' ++To view the status of onboarding after saving the configuration: + 1. Collectors will now be listed and change through status types. For each collector listed with a status of "Collected Inventory", click on that status to view further information. 1. You can then view subscriptions on the In Progress page Any current or future subscriptions found get onboarded automatically. 
You have the ability to specify only certain subscriptions to manage and monitor with MEPM (up to 10 per collector). Follow the steps below to configure these subscriptions to be monitored: 1. For each subscription you wish to manage, ensure that the 'Reader' role has been granted to the Cloud Infrastructure Entitlement Management application for this subscription. -1. In the MEPM portal, click the cog on the top right-hand side. +1. In the EPM portal, click the cog on the top right-hand side. 1. Navigate to the data collectors tab +1. Ensure 'Azure' is selected 1. Click 'Create Configuration' 1. Select 'Enter Authorization Systems' 1. Under the Subscription IDs section, enter a desired subscription ID into the input box. Click the "+" up to 9 additional times, putting a single subscription ID into each respective input box. To view the status of onboarding after saving the configuration: This option detects all subscriptions that are accessible by the Cloud Infrastructure Entitlement Management application. -1. Grant Reader role to Cloud Infrastructure Entitlement Management application at management group or subscription(s) scope. -1. Click Verify and Save. +- Firstly, grant the Reader role to the Cloud Infrastructure Entitlement Management application at management group or subscription scope. ++1. In the EPM portal, click the cog on the top right-hand side. +1. Navigate to the data collectors tab +1. Ensure 'Azure' is selected +1. Click 'Create Configuration' +1. For onboarding mode, select 'Automatically Manage' ++The steps listed on the screen outline how to create the role assignment for the Cloud Infrastructure Entitlements Management application. You can do this manually in the Entra console, or programmatically with PowerShell or the Azure CLI (see the sketch after this entry). ++Lastly, click 'Verify Now & Save' ++To view the status of onboarding after saving the configuration: + 1. Navigate to the newly created Data Collector row under Azure data collectors. 1. Click on the Status column when the row has "Pending" status 1. To onboard and start collection, choose specific subscriptions from the detected list and consent for collection. |
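The onboarding steps above mention creating the Reader role assignment for the Cloud Infrastructure Entitlement Management application with PowerShell or the Azure CLI; a hedged PowerShell sketch with the Az module follows. The service principal display name is taken from the text above, and the subscription ID is a placeholder:

```powershell
# Sketch: grant Reader at subscription scope to the Permissions Management
# service principal (replace the subscription ID placeholder).
Connect-AzAccount

$sp = Get-AzADServicePrincipal -DisplayName 'Cloud Infrastructure Entitlement Management'

New-AzRoleAssignment -ObjectId $sp.Id `
    -RoleDefinitionName 'Reader' `
    -Scope '/subscriptions/00000000-0000-0000-0000-000000000000'
```

For management group scope, the same call works with a scope of the form `/providers/Microsoft.Management/managementGroups/<group-id>`.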
active-directory | Onboard Gcp | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/onboard-gcp.md | Choose from 3 options to manage GCP projects. This option allows projects to be automatically detected and monitored without additional configuration. Steps to detect list of projects and onboard for collection: -- Grant Viewer and Security Reviewer role to service account created in previous step at organization, folder or project scope. +Firstly, grant the Viewer and Security Reviewer roles to the service account created in the previous step at organization, folder, or project scope. ++Once done, the steps listed on the screen show how to do this manually in the GCP console, or programmatically with the gcloud CLI (see the sketch after this entry). ++Once this has been configured, click Next, then 'Verify Now & Save'. Any current or future projects found get onboarded automatically. To view the status of onboarding after saving the configuration: -- Navigate to data collectors tab. -- Click on the status of the data collector. +- Navigate to the data collectors tab +- Click on the status of the data collector - View projects on the In Progress page #### Option 2: Enter authorization systems To view status of onboarding after saving the configuration: This option detects all projects that are accessible by the Cloud Infrastructure Entitlement Management application. -- Grant Viewer and Security Reviewer role to service account created in previous step at organization, folder or project scope. -- Click Verify and Save. -- Navigate to newly create Data Collector row under GCP data collectors. +- Firstly, grant the Viewer and Security Reviewer roles to the service account created in the previous step at organization, folder, or project scope +- Once done, the steps listed on the screen show how to do this manually in the GCP console, or programmatically with the gcloud CLI +- Click Next +- Click 'Verify Now & Save' +- Navigate to the newly created Data Collector row under GCP data collectors - Click on the Status column when the row has "Pending" status -- To onboard and start collection, choose specific ones from the detected list and consent for collection. +- To onboard and start collection, choose specific projects from the detected list and consent for collection ### 3. Set up GCP member projects. |
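The GCP steps above likewise reference granting the Viewer and Security Reviewer roles with the gcloud CLI; a hedged sketch at organization scope follows. The organization ID and service account address are placeholders, and folder or project scope would use the analogous `gcloud resource-manager folders` or `gcloud projects` commands:

```powershell
# Sketch: bind the two roles the onboarding text names to the collector
# service account (placeholder IDs; gcloud runs the same from any shell).
$org = '123456789012'
$member = 'serviceAccount:permissions-mgmt@my-project.iam.gserviceaccount.com'

gcloud organizations add-iam-policy-binding $org --member=$member --role='roles/viewer'
gcloud organizations add-iam-policy-binding $org --member=$member --role='roles/iam.securityReviewer'
```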
active-directory | Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/overview.md | Organizations have to consider permissions management as a central piece of thei Permissions Management allows customers to address three key use cases: *discover*, *remediate*, and *monitor*. +Permissions Management has been designed in such a way that we recommend your organization sequentially step through each of the phases below in order to gain insights into permissions across the organization. This is because you generally cannot act on what is yet to be discovered; likewise, you cannot continually evaluate what is yet to be remediated. ++ ### Discover Customers can assess permission risks by evaluating the gap between permissions granted and permissions used. Permissions Management deepens Zero Trust security strategies by augmenting the - Automate least privilege access: Use access analytics to ensure identities have the right permissions, at the right time. - Unify access policies across infrastructure as a service (IaaS) platforms: Implement consistent security policies across your cloud infrastructure. -+Once your organization has explored and implemented the discover, remediate, and monitor phases, you have established one of the core pillars of a modern Zero Trust security strategy. ## Next steps |
active-directory | Entitlement Management Access Package Auto Assignment Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-access-package-auto-assignment-policy.md | During this preview, you can have at most one automatic assignment policy in an access package. This article describes how to create an access package automatic assignment policy for an existing access package. +## Before you begin ++You'll need to have attributes populated on the users who will be in scope for being assigned access. The attributes you can use in the rules criteria of an access package assignment policy are those attributes listed in [supported properties](../enterprise-users/groups-dynamic-membership.md#supported-properties), along with [extension attributes and custom extension properties](../enterprise-users/groups-dynamic-membership.md#extension-properties-and-custom-extension-properties). These attributes can be brought into Azure AD from [Graph](/graph/api/resources/user?view=graph-rest-beta), an HR system such as [SuccessFactors](../app-provisioning/sap-successfactors-integration-reference.md), [Azure AD Connect cloud sync](../cloud-sync/how-to-attribute-mapping.md) or [Azure AD Connect sync](../hybrid/how-to-connect-sync-feature-directory-extensions.md). + ## Create an automatic assignment policy (Preview) To create a policy for an access package, you need to start from the access package's policy tab. Follow these steps to create a new policy for an access package. To create a policy for an access package, you need to start from the access package's policy tab. 1. Provide a dynamic membership rule, using the [membership rule builder](../enterprise-users/groups-dynamic-membership.md) or by clicking **Edit** on the rule syntax text box. > [!NOTE]- > The rule builder might not be able to display some rules constructed in the text box. For more information, see [rule builder in the Azure portal](/enterprise-users/groups-create-rule.md#rule-builder-in-the-azure-portal). + > The rule builder might not be able to display some rules constructed in the text box, and validating a rule currently requires you to be in the Global administrator role. For more information, see [rule builder in the Azure portal](/enterprise-users/groups-create-rule.md#rule-builder-in-the-azure-portal).  |
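To make the rule criteria concrete, here is a hedged example of the kind of dynamic membership rule such a policy accepts, written in the supported-properties syntax linked above; the department and country values are hypothetical:

```
(user.department -eq "Marketing") -and (user.country -eq "US")
```

A user whose populated attributes match the rule would receive the access package assignment automatically and, per the automatic policy behavior, would lose it once the attributes no longer match.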
active-directory | Manage Guest Access With Access Reviews | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/manage-guest-access-with-access-reviews.md | description: Manage guest users as members of a group or assigned to an application documentationcenter: '' -+ editor: markwahl-msft na Previously updated : 4/16/2021 Last updated : 08/23/2021 For more information, [License requirements](access-reviews-overview.md#license- First, you must be assigned one of the following roles: - Global administrator - User administrator-- (Preview) M365 or AAD Security Group owner of the group to be reviewed+- (Preview) Microsoft 365 or Azure AD Security Group owner of the group to be reviewed Then, go to the [Identity Governance page](https://portal.azure.com/#blade/Microsoft_AAD_ERM/DashboardBlade/) to ensure that access reviews is ready for your organization. In some organizations, guests might not be aware of their group memberships. 4. After the reviewers give input, stop the access review. For more information, see [Complete an access review of groups or applications](complete-access-review.md). -5. Remove guest access for guests who were denied, didn't complete the review, or didn't previously accept their invitation. If some of the guests are contacts who were selected to participate in the review or they didn't previously accept an invitation, you can disable their accounts by using the Azure portal or PowerShell. If the guest no longer needs access and isn't a contact, you can remove their user object from your directory by using the Azure portal or PowerShell to delete the guest user object. +5. You can automatically delete the guest users' Azure AD B2B accounts as part of an access review when you are configuring an access review for **Select Team + Groups**. This option is not available for **All Microsoft 365 groups with guest users**. ++ ++To do so, select **Auto apply results to resource**, as this will automatically remove the user from the resource. **If reviewers don't respond** should be set to **Remove access**, and **Action to apply on denied guest users** should also be set to **Block from signing in for 30 days then remove user from the tenant**. ++This will immediately block sign-in to the guest user account and then automatically delete their Azure AD B2B account after 30 days. ## Next steps |
active-directory | Concept Identity Protection B2b | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/identity-protection/concept-identity-protection-b2b.md | -Identity Protection detects compromised credentials for Azure AD users. If your credential is detected as compromised, it means that someone else may have your password and be using it illegitimately. To prevent further risk to your account, it is important to securely reset your password so that the bad actor can no longer use your compromised password. Identity Protection marks accounts that may be compromised as "at risk." +Identity Protection detects compromised credentials for Azure AD users. If your credential is detected as compromised, it means that someone else may have your password and be using it illegitimately. To prevent further risk to your account, it's important to securely reset your password so that the bad actor can no longer use your compromised password. Identity Protection marks accounts that may be compromised as "at risk." -You can use your organizational credentials to sign-in to another organization as a guest. This process is referred to as [business-to-business or B2B collaboration](../external-identities/what-is-b2b.md). Organizations can configure policies to block users from signing-in if their credentials are considered [at risk](concept-identity-protection-risks.md). If your account is at risk and you are blocked from signing-in to another organization as a guest, you may be able to self-remediate your account using the following steps. If your organization has not enabled self-service password reset, your administrator will need to manually remediate your account. +You can use your organizational credentials to sign-in to another organization as a guest. This process is referred to as [business-to-business or B2B collaboration](../external-identities/what-is-b2b.md). Organizations can configure policies to block users from signing-in if their credentials are considered [at risk](concept-identity-protection-risks.md). If your account is at risk and you're blocked from signing-in to another organization as a guest, you may be able to self-remediate your account using the following steps. If your organization hasn't enabled self-service password reset, your administrator will need to manually remediate your account. ## How to unblock your account -If you are attempting to sign-in to another organization as a guest and are blocked due to risk, you will see the following block message: "Your account is blocked. We've detected suspicious activity on your account." +If you're attempting to sign-in to another organization as a guest and are blocked due to risk, you'll see the following block message: "Your account is blocked. We've detected suspicious activity on your account."  If your organization enables it, you can use self-service password reset to unblock your account and get your credentials back to a safe state.-1. Go to the [Password reset portal](https://passwordreset.microsoftonline.com/) and initiate the password reset. If self-service password reset is not enabled for your account and you cannot proceed, reach out to your IT administrator with the information [below](#how-to-remediate-a-users-risk-as-an-administrator). -2. If self-service password reset is enabled for your account, you will be prompted to verify your identity using security methods prior to changing your password. 
For assistance, see the [Reset your work or school password](https://support.microsoft.com/account-billing/reset-your-work-or-school-password-using-security-info-23dde81f-08bb-4776-ba72-e6b72b9dda9e) article. +1. Go to the [Password reset portal](https://passwordreset.microsoftonline.com/) and initiate the password reset. If self-service password reset isn't enabled for your account and you can't proceed, reach out to your IT administrator with the information [below](#how-to-remediate-a-users-risk-as-an-administrator). +2. If self-service password reset is enabled for your account, you'll be prompted to verify your identity using security methods prior to changing your password. For assistance, see the [Reset your work or school password](https://support.microsoft.com/account-billing/reset-your-work-or-school-password-using-security-info-23dde81f-08bb-4776-ba72-e6b72b9dda9e) article. 3. Once you have successfully and securely reset your password, your user risk will be remediated. You can now try to sign in again as a guest user. -If after resetting your password you are still blocked as a guest due to risk, reach out to your organization's IT administrator. +If after resetting your password you're still blocked as a guest due to risk, reach out to your organization's IT administrator. ## How to remediate a user's risk as an administrator -Identity Protection automatically detects risky users for Azure AD tenants. If you have not previously checked the Identity Protection reports, there may be a large number of users with risk. Since resource tenants can apply user risk policies to guest users, your users can be blocked due to risk even if they were previously unaware of their risky state. If your user reports they have been blocked as a guest user in another tenant due to risk, it is important to remediate the user to protect their account and enable collaboration. +Identity Protection automatically detects risky users for Azure AD tenants. If you haven't previously checked the Identity Protection reports, there may be a large number of users with risk. Since resource tenants can apply user risk policies to guest users, your users can be blocked due to risk even if they were previously unaware of their risky state. If your user reports they've been blocked as a guest user in another tenant due to risk, it's important to remediate the user to protect their account and enable collaboration. ### Reset the user's password -From the [Risky users report](https://portal.azure.com/#blade/Microsoft_AAD_IAM/SecurityMenuBlade/RiskyUsers) in the Azure AD Security menu, search for the impacted user using the 'User' filter. Select the impacted user in the report and click "Reset password" in the top toolbar. The user will be assigned a temporary password that must be changed on the next sign in. This process will remediate their user risk and bring their credentials back to a safe state. +From the [Risky users report](https://portal.azure.com/#blade/Microsoft_AAD_IAM/SecurityMenuBlade/RiskyUsers) in the Azure AD Security menu, search for the impacted user using the 'User' filter. Select the impacted user in the report and select "Reset password" in the top toolbar. The user will be assigned a temporary password that must be changed on the next sign-in. This process will remediate their user risk and bring their credentials back to a safe state. ### Manually dismiss user's risk -If password reset is not an option for you from the Azure AD portal, you can choose to manually dismiss user risk. 
Dismissing user risk does not have any impact on the user's existing password, but this process will change the user's Risk State from At Risk to Dismissed. It is important that you change the user's password using whatever means are available to you in order to bring the identity back to a safe state. +If password reset isn't an option for you from the Azure AD portal, you can choose to manually dismiss user risk. Dismissing user risk doesn't have any impact on the user's existing password, but this process will change the user's Risk State from At Risk to Dismissed. It's important that you change the user's password using whatever means are available to you in order to bring the identity back to a safe state. -To dismiss user risk, go to the [Risky users report](https://portal.azure.com/#blade/Microsoft_AAD_IAM/SecurityMenuBlade/RiskyUsers) in the Azure AD Security menu. Search for the impacted user using the 'User' filter and click on the user. Click on "dismiss user risk" option from the top toolbar. This action may take a few minutes to complete and update the user risk state in the report. +To dismiss user risk, go to the [Risky users report](https://portal.azure.com/#blade/Microsoft_AAD_IAM/SecurityMenuBlade/RiskyUsers) in the Azure AD Security menu. Search for the impacted user using the 'User' filter and select the user. Select the "dismiss user risk" option from the top toolbar. This action may take a few minutes to complete and update the user risk state in the report. To learn more about Identity Protection, see [What is Identity Protection](overview-identity-protection.md). To learn more about Identity Protection, see [What is Identity Protection](overv The user risk for B2B collaboration users is evaluated at their home directory. The real-time sign-in risk for these users is evaluated at the resource directory when they try to access the resource. With Azure AD B2B collaboration, organizations can enforce risk-based policies for B2B users using Identity Protection. These policies can be configured in two ways: -- Administrators can configure the built-in Identity Protection risk-based policies, that apply to all apps, that include guest users.-- Administrators can configure their Conditional Access policies, using sign-in risk as a condition, that includes guest users.+- Administrators can configure the built-in Identity Protection risk-based policies that apply to all apps and include guest users. +- Administrators can configure their Conditional Access policies, using sign-in risk as a condition, that include guest users. ## Limitations of Identity Protection for B2B collaboration users -There are limitations in the implementation of Identity Protection for B2B collaboration users in a resource directory due to their identity existing in their home directory. The main limitations are as follows: +There are limitations in the implementation of Identity Protection for B2B collaboration users in a resource directory, due to their identity existing in their home directory. The main limitations are as follows: - If a guest user triggers the Identity Protection user risk policy to force password reset, **they will be blocked**. This block is due to the inability to reset passwords in the resource directory. - **Guest users do not appear in the risky users report**. This limitation is due to the risk evaluation occurring in the B2B user's home directory. 
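Because remediation happens in the home directory, home-tenant admins may want to script the "dismiss user risk" action described above. A minimal sketch with the Microsoft Graph PowerShell SDK, assuming the `Invoke-MgDismissRiskyUser` cmdlet and a placeholder object ID; like the portal action, this closes all risk on the user and can't be undone:

```powershell
# Requires the IdentityRiskyUser.ReadWrite.All permission.
Connect-MgGraph -Scopes "IdentityRiskyUser.ReadWrite.All"

# The GUID is a placeholder for the impacted user's directory object ID.
Invoke-MgDismissRiskyUser -UserIds @("00000000-0000-0000-0000-000000000000")
```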
There are limitations in the implementation of Identity Protection for B2B colla ### Why can't I remediate risky B2B collaboration users in my directory? -The risk evaluation and remediation for B2B users occurs in their home directory. Due to this fact, the guest users do not appear in the risky users report in the resource directory and admins in the resource directory cannot force a secure password reset for these users. +The risk evaluation and remediation for B2B users occurs in their home directory. Due to this fact, the guest users don't appear in the risky users report in the resource directory and admins in the resource directory can't force a secure password reset for these users. ### What do I do if a B2B collaboration user was blocked due to a risk-based policy in my organization? -If a risky B2B user in your directory is blocked by your risk-based policy, the user will need to remediate that risk in their home directory. Users can remediate their risk by performing a secure password reset in their home directory [as outlined above](#how-to-unblock-your-account). If they do not have self-service password reset enabled in their home directory, they will need to contact their own organization's IT Staff to have an administrator manually dismiss their risk or reset their password. +If a risky B2B user in your directory is blocked by your risk-based policy, the user will need to remediate that risk in their home directory. Users can remediate their risk by performing a secure password reset in their home directory [as outlined above](#how-to-unblock-your-account). If they don't have self-service password reset enabled in their home directory, they'll need to contact their own organization's IT staff to have an administrator manually dismiss their risk or reset their password. ### How do I prevent B2B collaboration users from being impacted by risk-based policies? |
active-directory | Concept Identity Protection Policies | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/identity-protection/concept-identity-protection-policies.md | Azure Active Directory Identity Protection includes three default policies that ## Azure AD MFA registration policy -Identity Protection can help organizations roll out Azure AD Multi-Factor Authentication (MFA) using a Conditional Access policy requiring registration at sign-in. Enabling this policy is a great way to ensure new users in your organization have registered for MFA on their first day. Multi-factor authentication is one of the self-remediation methods for risk events within Identity Protection. Self-remediation allows your users to take action on their own to reduce helpdesk call volume. +Identity Protection can help organizations roll out Azure AD Multifactor Authentication (MFA) using a Conditional Access policy requiring registration at sign-in. Enabling this policy is a great way to ensure new users in your organization have registered for MFA on their first day. Multifactor authentication is one of the self-remediation methods for risk events within Identity Protection. Self-remediation allows your users to take action on their own to reduce helpdesk call volume. -More information about Azure AD Multi-Factor Authentication can be found in the article, [How it works: Azure AD Multi-Factor Authentication](../authentication/concept-mfa-howitworks.md). +More information about Azure AD Multifactor Authentication can be found in the article, [How it works: Azure AD Multifactor Authentication](../authentication/concept-mfa-howitworks.md). ## Sign-in risk policy -Identity Protection analyzes signals from each sign-in, both real-time and offline, and calculates a risk score based on the probability that the sign-in wasn't performed by the user. Administrators can make a decision based on this risk score signal to enforce organizational requirements. Administrators can choose to block access, allow access, or allow access but require multi-factor authentication. +Identity Protection analyzes signals from each sign-in, both real-time and offline, and calculates a risk score based on the probability that the sign-in wasn't really performed by the user. Administrators can make a decision based on this risk score signal to enforce organizational requirements like: -If risk is detected, users can perform multi-factor authentication to self-remediate and close the risky sign-in event to prevent unnecessary noise for administrators. +- Block access +- Allow access +- Require multifactor authentication ++If risk is detected, users can perform multifactor authentication to self-remediate and close the risky sign-in event to prevent unnecessary noise for administrators. > [!NOTE] -> Users must have previously registered for Azure AD Multi-Factor Authentication before triggering the sign-in risk policy. +> Users must have previously registered for Azure AD Multifactor Authentication before triggering the sign-in risk policy. 
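As a sketch of the custom Conditional Access approach described in the next section, the following Microsoft Graph PowerShell example creates a report-only policy that requires MFA for medium-and-above sign-in risk. The display name and the report-only state are illustrative; review the assignments before enforcing.

```powershell
Connect-MgGraph -Scopes "Policy.ReadWrite.ConditionalAccess"

$policy = @{
    displayName = "Require MFA for medium or high sign-in risk"
    state       = "enabledForReportingButNotEnforced"   # start in report-only mode
    conditions  = @{
        users            = @{ includeUsers = @("All") }
        applications     = @{ includeApplications = @("All") }
        signInRiskLevels = @("medium", "high")
    }
    grantControls = @{
        operator        = "OR"
        builtInControls = @("mfa")
    }
}
New-MgIdentityConditionalAccessPolicy -BodyParameter $policy
```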
### Custom Conditional Access policy If risk is detected, users can perform self-service password reset to self-remed ## Next steps - [Enable Azure AD self-service password reset](../authentication/howto-sspr-deployment.md)--- [Enable Azure AD Multi-Factor Authentication](../authentication/howto-mfa-getstarted.md)--- [Enable Azure AD Multi-Factor Authentication registration policy](howto-identity-protection-configure-mfa-policy.md)-+- [Enable Azure AD Multifactor Authentication](../authentication/howto-mfa-getstarted.md) +- [Enable Azure AD Multifactor Authentication registration policy](howto-identity-protection-configure-mfa-policy.md) - [Enable sign-in and user risk policies](howto-identity-protection-configure-risk-policies.md) |
active-directory | Concept Identity Protection Security Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/identity-protection/concept-identity-protection-security-overview.md | The ‘Security overview’ is broadly divided into two sections: - Tiles, on the right, highlight the key ongoing issues in your organization and suggest how to quickly take action. :::image type="content" source="./media/concept-identity-protection-security-overview/01.png" alt-text="Screenshot of the Azure portal Security overview. Bar charts show the count of risks over time. Tiles summarize information on users and sign-ins." border="false":::- ++You can find the security overview page in the **Azure portal** > **Azure Active Directory** > **Security** > **Identity Protection** > **Overview**. + ## Trends ### New risky users detected -This chart shows the number of new risky users that were detected over the chosen time period. You can filter the view of this chart by user risk level (low, medium, high). Hover over the UTC date increments to see the number of risky users detected for that day. A click on this chart will bring you to the ‘Risky users’ report. To remediate users that are at risk, consider changing their password. +This chart shows the number of new risky users that were detected over the chosen time period. You can filter the view of this chart by user risk level (low, medium, high). Hover over the UTC date increments to see the number of risky users detected for that day. Selecting this chart will bring you to the ‘Risky users’ report. To remediate users that are at risk, consider changing their password. ### New risky sign-ins detected -This chart shows the number of risky sign-ins detected over the chosen time period. You can filter the view of this chart by the sign-in risk type (real-time or aggregate) and the sign-in risk level (low, medium, high). Unprotected sign-ins are successful real-time risk sign-ins that were not MFA challenged. (Note: Sign-ins that are risky because of offline detections cannot be protected in real-time by sign-in risk policies). Hover over the UTC date increments to see the number of sign-ins detected at risk for that day. A click on this chart will bring you to the ‘Risky sign-ins’ report. +This chart shows the number of risky sign-ins detected over the chosen time period. You can filter the view of this chart by the sign-in risk type (real-time or aggregate) and the sign-in risk level (low, medium, high). Unprotected sign-ins are successful real-time risk sign-ins that weren't MFA challenged. (Note: Sign-ins that are risky because of offline detections can't be protected in real-time by sign-in risk policies). Hover over the UTC date increments to see the number of sign-ins detected at risk for that day. Selecting this chart will bring you to the ‘Risky sign-ins’ report. ## Tiles ### High risk users -The ‘High risk users’ tile shows the latest count of users with high probability of identity compromise. These should be a top priority for investigation. A click on the ‘High risk users’ tile will redirect to a filtered view of the ‘Risky users’ report showing only users with a risk level of high. Using this report, you can learn more and remediate these users with a password reset. +The ‘High risk users’ tile shows the latest count of users with high probability of identity compromise. These users should be a top priority for investigation. 
Selecting the ‘High risk users’ tile will redirect to a filtered view of the ‘Risky users’ report showing only users with a risk level of high. Using this report, you can learn more and remediate these users with a password reset. :::image type="content" source="./media/concept-identity-protection-security-overview/02.png" alt-text="Screenshot of the Azure portal Security overview, with tiles visible for high-risk and medium-risk users and other risk factors." border="false"::: ### Medium risk users-The ‘Medium risk users’ tile shows the latest count of users with medium probability of identity compromise. A click on ‘Medium risk users’ tile will redirect to a filtered view of the ‘Risky users’ report showing only users with a risk level of medium. Using this report, you can further investigate and remediate these users. +The ‘Medium risk users’ tile shows the latest count of users with medium probability of identity compromise. Selecting the ‘Medium risk users’ tile will take you to a view of the ‘Risky users’ report showing only users with a risk level of medium. Using this report, you can further investigate and remediate these users. ### Unprotected risky sign-ins -The ‘Unprotected risky sign-ins’ tile shows the last week’s count of successful, real-time risky sign-ins that were not blocked or MFA challenged by a Conditional Access policy, Identity Protection risk policy, or per-user MFA. These are potentially compromised logins that were successful and not MFA challenged. To protect such sign-ins in future, apply a sign-in risk policy. A click on ‘Unprotected risky sign-ins’ tile will redirect to the sign-in risk policy configuration blade where you can configure the sign-in risk policy to require MFA on a sign-in with a specified risk level. +The ‘Unprotected risky sign-ins’ tile shows the last week’s count of successful, real-time risky sign-ins that weren't blocked or MFA challenged by a Conditional Access policy, Identity Protection risk policy, or per-user MFA. These successful sign-ins are potentially compromised and not challenged for MFA. To protect such sign-ins in future, apply a sign-in risk policy. Selecting the ‘Unprotected risky sign-ins’ tile will take you to the sign-in risk policy configuration blade where you can configure the sign-in risk policy. ### Legacy authentication -The ‘Legacy authentication’ tile shows the last week’s count of legacy authentications with risk present in your organization. Legacy authentication protocols do not support modern security methods such as an MFA. To prevent legacy authentication, you can apply a Conditional Access policy. A click on ‘Legacy authentication’ tile will redirect you to the ‘Identity Secure Score’. +The ‘Legacy authentication’ tile shows the last week’s count of legacy authentications with risk present in your organization. Legacy authentication protocols don't support modern security methods such as MFA. To prevent legacy authentication, you can apply a Conditional Access policy. Selecting the ‘Legacy authentication’ tile will redirect you to the ‘Identity Secure Score’. ### Identity Secure Score -The Identity Secure Score measures and compares your security posture to industry patterns. If you click on ‘Identity Secure Score (Preview)’ tile, it will redirect to the ‘Identity Secure Score’ blade where you can learn more about improving your security posture. 
+The Identity Secure Score measures and compares your security posture to industry patterns. If you select the **Identity Secure Score** tile, you'll be redirected to [Identity Secure Score](../fundamentals/identity-secure-score.md), where you can learn more about improving your security posture. ## Next steps - [What is risk](concept-identity-protection-risks.md)- - [Policies available to mitigate risks](concept-identity-protection-policies.md) |
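The counts behind these tiles can also be pulled programmatically. A minimal sketch with the Microsoft Graph PowerShell SDK, mirroring the ‘High risk users’ tile; the filter values are illustrative:

```powershell
# Requires the IdentityRiskyUser.Read.All permission.
Connect-MgGraph -Scopes "IdentityRiskyUser.Read.All"

# Users currently at risk with a high risk level, as shown on the tile.
Get-MgRiskyUser -Filter "riskLevel eq 'high' and riskState eq 'atRisk'" |
    Select-Object UserDisplayName, UserPrincipalName, RiskLevel, RiskLastUpdatedDateTime
```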
active-directory | Howto Identity Protection Configure Mfa Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/identity-protection/howto-identity-protection-configure-mfa-policy.md | Title: Configure the MFA registration policy - Azure Active Directory Identity Protection -description: Learn how to configure the Azure AD Identity Protection multi-factor authentication registration policy. +description: Learn how to configure the Azure AD Identity Protection multifactor authentication registration policy. Previously updated : 06/05/2020 Last updated : 08/22/2022 -# How To: Configure the Azure AD Multi-Factor Authentication registration policy +# How To: Configure the Azure AD Multifactor Authentication registration policy -Azure AD Identity Protection helps you manage the roll-out of Azure AD Multi-Factor Authentication (MFA) registration by configuring a Conditional Access policy to require MFA registration no matter what modern authentication app you are signing in to. +Azure Active Directory (Azure AD) Identity Protection helps you manage the roll-out of Azure AD Multifactor Authentication (MFA) registration by configuring a Conditional Access policy to require MFA registration no matter what modern authentication app you're signing in to. -## What is the Azure AD Multi-Factor Authentication registration policy? +## What is the Azure AD Multifactor Authentication registration policy? -Azure AD Multi-Factor Authentication provides a means to verify who you are using more than just a username and password. It provides a second layer of security to user sign-ins. In order for users to be able to respond to MFA prompts, they must first register for Azure AD Multi-Factor Authentication. +Azure AD Multifactor Authentication provides a means to verify who you are using more than just a username and password. It provides a second layer of security to user sign-ins. In order for users to be able to respond to MFA prompts, they must first register for Azure AD Multifactor Authentication. -We recommend that you require Azure AD Multi-Factor Authentication for user sign-ins because it: +We recommend that you require Azure AD Multifactor Authentication for user sign-ins because it: - Delivers strong authentication through a range of verification options. - Plays a key role in preparing your organization to self-remediate from risk detections in Identity Protection. -For more information on Azure AD Multi-Factor Authentication, see [What is Azure AD Multi-Factor Authentication?](../authentication/howto-mfa-getstarted.md) +For more information on Azure AD Multifactor Authentication, see [What is Azure AD Multifactor Authentication?](../authentication/howto-mfa-getstarted.md) ## Policy configuration For more information on Azure AD Multi-Factor Authentication, see [What is Azure 1. Under **Assignments** 1. **Users** - Choose **All users** or **Select individuals and groups** if limiting your rollout. 1. Optionally you can choose to exclude users from the policy.- 1. **Enforce Policy** - **On** - 1. **Save** +1. **Enforce Policy** - **On** +1. **Save** ## User experience -Azure Active Directory Identity Protection will prompt your users to register the next time they sign in interactively and they will have 14 days to complete registration. During this 14-day period, they can bypass registration if MFA is not required as a condition, but at the end of the period they will be required to register before they can complete the sign-in process. 
+Azure AD Identity Protection will prompt your users to register the next time they sign in interactively and they'll have 14 days to complete registration. During this 14-day period, they can bypass registration if MFA isn't required as a condition, but at the end of the period they'll be required to register before they can complete the sign-in process. For an overview of the related user experience, see: For an overview of the related user experience, see: - [Enable Azure AD self-service password reset](../authentication/howto-sspr-deployment.md) -- [Enable Azure AD Multi-Factor Authentication](../authentication/howto-mfa-getstarted.md)+- [Enable Azure AD Multifactor Authentication](../authentication/howto-mfa-getstarted.md) |
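To gauge how far the rollout has progressed, you can list users who haven't registered yet. A hedged sketch, assuming the `userRegistrationDetails` report in Microsoft Graph (exposed through `Get-MgReportAuthenticationMethodUserRegistrationDetail`) and a suitable Azure AD premium license:

```powershell
# Requires the AuditLog.Read.All permission.
Connect-MgGraph -Scopes "AuditLog.Read.All"

# Users the registration policy is likely to prompt on their next interactive sign-in.
Get-MgReportAuthenticationMethodUserRegistrationDetail -Filter "isMfaRegistered eq false" |
    Select-Object UserPrincipalName, IsMfaRegistered, IsMfaCapable
```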
active-directory | Howto Identity Protection Configure Notifications | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/identity-protection/howto-identity-protection-configure-notifications.md | |
active-directory | Howto Identity Protection Configure Risk Policies | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/identity-protection/howto-identity-protection-configure-risk-policies.md | Before organizations enable remediation policies, they may want to [investigate] 1. Browse to **Azure Active Directory** > **Security** > **Conditional Access**. 1. Select **New policy**. 1. Give your policy a name. We recommend that organizations create a meaningful standard for the names of their policies.-1. Under **Assignments**, select **Users or workload identities**.. +1. Under **Assignments**, select **Users or workload identities**. 1. Under **Include**, select **All users**. 1. Under **Exclude**, select **Users and groups** and choose your organization's emergency access or break-glass accounts. 1. Select **Done**. Before organizations enable remediation policies, they may want to [investigate] 1. Browse to **Azure Active Directory** > **Security** > **Conditional Access**. 1. Select **New policy**. 1. Give your policy a name. We recommend that organizations create a meaningful standard for the names of their policies.-1. Under **Assignments**, select **Users or workload identities**.. +1. Under **Assignments**, select **Users or workload identities**. 1. Under **Include**, select **All users**. 1. Under **Exclude**, select **Users and groups** and choose your organization's emergency access or break-glass accounts. 1. Select **Done**. Before organizations enable remediation policies, they may want to [investigate] ## Next steps - [Enable Azure AD Multi-Factor Authentication registration policy](howto-identity-protection-configure-mfa-policy.md)- - [What is risk](concept-identity-protection-risks.md)- - [Investigate risk detections](howto-identity-protection-investigate-risk.md)- - [Simulate risk detections](howto-identity-protection-simulate-risk.md) |
active-directory | Howto Identity Protection Graph Api | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/identity-protection/howto-identity-protection-graph-api.md | Title: Microsoft Graph PowerShell SDK and Azure Active Directory Identity Protection -description: Learn how to query Microsoft Graph risk detections and associated information from Azure Active Directory +description: Query Microsoft Graph risk detections and associated information from Azure Active Directory Previously updated : 01/25/2021 Last updated : 08/23/2022 +1. [Create a certificate](#create-a-certificate) +1. [Create a new app registration](#create-a-new-app-registration) +1. [Configure API permissions](#configure-api-permissions) +1. [Configure a valid credential](#configure-a-valid-credential) ### Create a certificate -In a production environment you would use a certificate from your production Certificate Authority, but in this sample we will use a self-signed certificate. Create and export the certificate using the following PowerShell commands. +In a production environment you would use a certificate from your production Certificate Authority, but in this sample we'll use a self-signed certificate. Create and export the certificate using the following PowerShell commands. ```powershell $cert = New-SelfSignedCertificate -Subject "CN=MSGraph_ReportingAPI" -CertStoreLocation "Cert:\CurrentUser\My" -KeyExportPolicy Exportable -KeySpec Signature -KeyLength 2048 -KeyAlgorithm RSA -HashAlgorithm SHA256 Export-Certificate -Cert $cert -FilePath "C:\Reporting\MSGraph_ReportingAPI.cer" 1. In the **Name** textbox, type a name for your application (for example: Azure AD Risk Detection API). 1. Under **Supported account types**, select the type of accounts that will use the APIs. 1. Select **Register**.-1. Take note of the **Application (client) ID** and **Directory (tenant) ID** as you will need these items later. +1. Take note of the **Application (client) ID** and **Directory (tenant) ID** as you'll need these items later. ### Configure API permissions In this example, we configure application permissions allowing this sample to be 1. Under **certificates**, select **Upload certificate**. 1. Select the previously exported certificate from the window that opens. 1. Select **Add**.-1. Take note of the **Thumbprint** of the certificate as you will need this information in the next step. +1. Take note of the **Thumbprint** of the certificate as you'll need this information in the next step. ## List risky users using PowerShell |
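Putting the pieces above together, a minimal sketch of the final step: connecting app-only with the certificate and listing risky users. The client ID, tenant ID, and thumbprint are placeholders for the values recorded in the earlier steps.

```powershell
# App-only sign-in using the app registration and certificate from the previous steps.
Connect-MgGraph -ClientId "<application-client-id>" `
                -TenantId "<directory-tenant-id>" `
                -CertificateThumbprint "<certificate-thumbprint>"

# Requires the IdentityRiskyUser.Read.All application permission.
Get-MgRiskyUser | Select-Object UserDisplayName, RiskLevel, RiskState, RiskDetail
```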
active-directory | Howto Identity Protection Risk Feedback | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/identity-protection/howto-identity-protection-risk-feedback.md | Azure AD Identity Protection allows you to give feedback on its risk assessment. An Identity Protection detection is an indicator of suspicious activity from an identity risk perspective. These suspicious activities are called risk detections. These identity-based detections can be based on heuristics, machine learning or can come from partner products. These detections are used to determine sign-in risk and user risk, * User risk represents the probability an identity is compromised.-* Sign-in risk represents the probability a sign-in is compromised (for example, the sign-in is not authorized by the identity owner). +* Sign-in risk represents the probability a sign-in is compromised (for example, the sign-in isn't authorized by the identity owner). ## Why should I give risk feedback to Azure AD’s risk assessments? Here are the scenarios and mechanisms to give risk feedback to Azure AD. | Scenario | How to give feedback? | What happens under the hood? | Notes | | | | | |-| **Sign-in not compromised (False positive)** <br> ‘Risky sign-ins’ report shows an at-risk sign-in [Risk state = At risk] but that sign-in was not compromised. | Select the sign-in and click on ‘Confirm sign-in safe’. | Azure AD will move the sign-in’s aggregate risk to none [Risk state = Confirmed safe; Risk level (Aggregate) = -] and will reverse its impact on the user risk. | Currently, the ‘Confirm sign-in safe’ option is only available in ‘Risky sign-ins’ report. | -| **Sign-in compromised (True positive)** <br> ‘Risky sign-ins’ report shows an at-risk sign-in [Risk state = At risk] with low risk [Risk level (Aggregate) = Low] and that sign-in was indeed compromised. | Select the sign-in and click on ‘Confirm sign-in compromised’. | Azure AD will move the sign-in’s aggregate risk and the user risk to High [Risk state = Confirmed compromised; Risk level = High]. | Currently, the ‘Confirm sign-in compromised’ option is only available in ‘Risky sign-ins’ report. | -| **User compromised (True positive)** <br> ‘Risky users’ report shows an at-risk user [Risk state = At risk] with low risk [Risk level = Low] and that user was indeed compromised. | Select the user and click on ‘Confirm user compromised’. | Azure AD will move the user risk to High [Risk state = Confirmed compromised; Risk level = High] and will add a new detection ‘Admin confirmed user compromised’. | Currently, the ‘Confirm user compromised’ option is only available in ‘Risky users’ report. <br> The detection ‘Admin confirmed user compromised’ is shown in the tab ‘Risk detections not linked to a sign-in’ in the ‘Risky users’ report. | -| **User remediated outside of Azure AD Identity Protection (True positive + Remediated)** <br> ‘Risky users’ report shows an at-risk user and I have subsequently remediated the user outside of Azure AD Identity Protection. | 1. Select the user and click ‘Confirm user compromised’. (This process confirms to Azure AD that the user was indeed compromised.) <br> 2. Wait for the user’s ‘Risk level’ to go to High. (This time gives Azure AD the needed time to take the above feedback to the risk engine.) <br> 3. Select the user and click ‘Dismiss user risk’. (This process confirms to Azure AD that the user is no longer compromised.) 
| Azure AD moves the user risk to none [Risk state = Dismissed; Risk level = -] and closes the risk on all existing sign-ins having active risk. | Clicking ‘Dismiss user risk’ will close all risk on the user and past sign-ins. This action cannot be undone. | -| **User not compromised (False positive)** <br> ‘Risky users’ report shows at at-risk user but the user is not compromised. | Select the user and click ‘Dismiss user risk’. (This process confirms to Azure AD that the user is not compromised.) | Azure AD moves the user risk to none [Risk state = Dismissed; Risk level = -]. | Clicking ‘Dismiss user risk’ will close all risk on the user and past sign-ins. This action cannot be undone. | -| I want to close the user risk but I am not sure whether the user is compromised / safe. | Select the user and click ‘Dismiss user risk’. (This process confirms to Azure AD that the user is no longer compromised.) | Azure AD moves the user risk to none [Risk state = Dismissed; Risk level = -]. | Clicking ‘Dismiss user risk’ will close all risk on the user and past sign-ins. This action cannot be undone. We recommend you remediate the user by clicking on ‘Reset password’ or request the user to securely reset/change their credentials. | +| **Sign-in not compromised (False positive)** <br> ‘Risky sign-ins’ report shows an at-risk sign-in [Risk state = At risk] but that sign-in wasn't compromised. | Select the sign-in and then ‘Confirm sign-in safe’. | Azure AD will move the sign-in’s aggregate risk to none [Risk state = Confirmed safe; Risk level (Aggregate) = -] and will reverse its impact on the user risk. | Currently, the ‘Confirm sign-in safe’ option is only available in ‘Risky sign-ins’ report. | +| **Sign-in compromised (True positive)** <br> ‘Risky sign-ins’ report shows an at-risk sign-in [Risk state = At risk] with low risk [Risk level (Aggregate) = Low] and that sign-in was indeed compromised. | Select the sign-in and then ‘Confirm sign-in compromised’. | Azure AD will move the sign-in’s aggregate risk and the user risk to High [Risk state = Confirmed compromised; Risk level = High]. | Currently, the ‘Confirm sign-in compromised’ option is only available in ‘Risky sign-ins’ report. | +| **User compromised (True positive)** <br> ‘Risky users’ report shows an at-risk user [Risk state = At risk] with low risk [Risk level = Low] and that user was indeed compromised. | Select the user and then ‘Confirm user compromised’. | Azure AD will move the user risk to High [Risk state = Confirmed compromised; Risk level = High] and will add a new detection ‘Admin confirmed user compromised’. | Currently, the ‘Confirm user compromised’ option is only available in ‘Risky users’ report. <br> The detection ‘Admin confirmed user compromised’ is shown in the tab ‘Risk detections not linked to a sign-in’ in the ‘Risky users’ report. | +| **User remediated outside of Azure AD Identity Protection (True positive + Remediated)** <br> ‘Risky users’ report shows an at-risk user and I've then remediated the user outside of Azure AD Identity Protection. | 1. Select the user and then ‘Confirm user compromised’. (This process confirms to Azure AD that the user was indeed compromised.) <br> 2. Wait for the user’s ‘Risk level’ to go to High. (This time gives Azure AD the needed time to take the above feedback to the risk engine.) <br> 3. Select the user and then ‘Dismiss user risk’. 
(This process confirms to Azure AD that the user is no longer compromised.) | Azure AD moves the user risk to none [Risk state = Dismissed; Risk level = -] and closes the risk on all existing sign-ins having active risk. | Clicking ‘Dismiss user risk’ will close all risk on the user and past sign-ins. This action can't be undone. | +| **User not compromised (False positive)** <br> ‘Risky users’ report shows an at-risk user but the user isn't compromised. | Select the user and then ‘Dismiss user risk’. (This process confirms to Azure AD that the user isn't compromised.) | Azure AD moves the user risk to none [Risk state = Dismissed; Risk level = -]. | Clicking ‘Dismiss user risk’ will close all risk on the user and past sign-ins. This action can't be undone. | +| I want to close the user risk but I'm not sure whether the user is compromised / safe. | Select the user and then ‘Dismiss user risk’. (This process confirms to Azure AD that the user is no longer compromised.) | Azure AD moves the user risk to none [Risk state = Dismissed; Risk level = -]. | Clicking ‘Dismiss user risk’ will close all risk on the user and past sign-ins. This action can't be undone. We recommend you remediate the user by clicking on ‘Reset password’ or request the user to securely reset/change their credentials. | Feedback on user risk detections in Identity Protection is processed offline and may take some time to update. The risk processing state column will provide the current state of feedback processing. |
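The feedback actions in the table map to Microsoft Graph actions as well. A hedged sketch, assuming the `Confirm-MgRiskyUserCompromised` and `Invoke-MgDismissRiskyUser` cmdlets from the Microsoft Graph PowerShell SDK; the object IDs are placeholders:

```powershell
Connect-MgGraph -Scopes "IdentityRiskyUser.ReadWrite.All"

# 'Confirm user compromised': Risk state = Confirmed compromised, Risk level = High.
Confirm-MgRiskyUserCompromised -UserIds @("00000000-0000-0000-0000-000000000000")

# 'Dismiss user risk': Risk state = Dismissed. As noted above, this can't be undone.
Invoke-MgDismissRiskyUser -UserIds @("00000000-0000-0000-0000-000000000000")
```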
active-directory | Howto Identity Protection Simulate Risk | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/identity-protection/howto-identity-protection-simulate-risk.md | This article provides you with steps for simulating the following risk detection - Atypical travel (difficult) - Leaked credentials in GitHub for workload identities (moderate) -Other risk detections cannot be simulated in a secure manner. +Other risk detections can't be simulated in a secure manner. More information about each risk detection can be found in the article, What is risk for [user](concept-identity-protection-risks.md) and [workload identity](concept-workload-identity-risk.md). More information about each risk detection can be found in the article, What is Completing the following procedure requires you to use: - The [Tor Browser](https://www.torproject.org/projects/torbrowser.html.en) to simulate anonymous IP addresses. You might need to use a virtual machine if your organization restricts using the Tor browser.-- A test account that is not yet registered for Azure AD Multi-Factor Authentication.+- A test account that isn't yet registered for Azure AD Multi-Factor Authentication. **To simulate a sign-in from an anonymous IP, perform the following steps**: The sign-in shows up on the Identity Protection dashboard within 10 - 15 minutes ## Unfamiliar sign-in properties -To simulate unfamiliar locations, you have to sign in from a location and device your test account has not signed in from before. +To simulate unfamiliar locations, you have to sign in from a location and device your test account hasn't signed in from before. The procedure below uses a newly created: The sign-in shows up on the Identity Protection dashboard within 10 - 15 minutes ## Atypical travel -Simulating the atypical travel condition is difficult because the algorithm uses machine learning to weed out false-positives such as atypical travel from familiar devices, or sign-ins from VPNs that are used by other users in the directory. Additionally, the algorithm requires a sign-in history of 14 days and 10 logins of the user before it begins generating risk detections. Because of the complex machine learning models and above rules, there is a chance that the following steps will not lead to a risk detection. You might want to replicate these steps for multiple Azure AD accounts to simulate this detection. +Simulating the atypical travel condition is difficult because the algorithm uses machine learning to weed out false-positives such as atypical travel from familiar devices, or sign-ins from VPNs that are used by other users in the directory. Additionally, the algorithm requires a sign-in history of 14 days and 10 logins of the user before it begins generating risk detections. Because of the complex machine learning models and above rules, there's a chance that the following steps won't lead to a risk detection. You might want to replicate these steps for multiple Azure AD accounts to simulate this detection. **To simulate an atypical travel risk detection, perform the following steps**: This risk detection indicates that the application's valid credentials have been **To simulate Leaked Credentials in GitHub for Workload Identities, perform the following steps**: 1. Navigate to the [Azure portal](https://portal.azure.com). 2. Browse to **Azure Active Directory** > **App registrations**.-3. Select **New registration** to register a new application or reuse an exsiting stale application. -4. 
Select **Certificates & Secrets** > **New client Secret** , add a description of your client secret and set an expiration for the secret or specify a custom lifetime and click **Add**. Record the secret's value for later use for your GitHub Commit. +3. Select **New registration** to register a new application or reuse an existing stale application. +4. Select **Certificates & Secrets** > **New client Secret**, add a description of your client secret and set an expiration for the secret or specify a custom lifetime, and select **Add**. Record the secret's value for later use for your GitHub Commit. > [!Note] > **You cannot retrieve the secret again after you leave this page**. This risk detection indicates that the application's valid credentials have been "AadTenantDomain": "XXXX.onmicrosoft.com", "AadTenantId": "99d4947b-XXX-XXXX-9ace-abceab54bcd4", ```-7. In about 8 hours, you will be able to view a leaked credentail detection under **Azure Active Directory** > **Security** > **Risk Detection** > **Workload identity detections** where the additional info will contain your the URL of your GitHub commit. +7. In about 8 hours, you'll be able to view a leaked credential detection under **Azure Active Directory** > **Security** > **Risk Detection** > **Workload identity detections** where the additional info will contain the URL of your GitHub commit. ## Testing risk policies To test a user risk security policy, perform the following steps: ### Sign-in risk security policy -To test a sign in risk policy, perform the following steps: +To test a sign-in risk policy, perform the following steps: 1. Navigate to the [Azure portal](https://portal.azure.com). 1. Browse to **Azure Active Directory** > **Security** > **Identity Protection** > **Overview**. |
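After running a simulation, you can verify the outcome outside the portal as well. A minimal sketch that queries risk detections through the Microsoft Graph PowerShell SDK; the filter value matches the anonymous IP simulation described above:

```powershell
# Requires the IdentityRiskEvent.Read.All permission.
Connect-MgGraph -Scopes "IdentityRiskEvent.Read.All"

# Detections from the Tor-based anonymous IP simulation, if any were generated.
Get-MgRiskDetection -Filter "riskEventType eq 'anonymizedIPAddress'" |
    Select-Object UserPrincipalName, RiskEventType, RiskLevel, DetectedDateTime
```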
active-directory | Azure Pim Resource Rbac | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/azure-pim-resource-rbac.md | You may have a compliance requirement where you must provide a complete list of 1. Select the resource you want to export role assignments for, such as a subscription. -1. Select **Members**. +1. Select **Assignments**. 1. Select **Export** to open the Export membership pane. |
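If you need the same inventory outside the portal, a minimal Az PowerShell sketch is below. Note the assumption: `Get-AzRoleAssignment` returns active assignments only, so eligible PIM assignments would need the PIM APIs instead. The subscription ID is a placeholder.

```powershell
# Export active role assignments for a subscription to CSV (Az.Resources module).
$scope = "/subscriptions/<subscription-id>"
Get-AzRoleAssignment -Scope $scope |
    Select-Object DisplayName, SignInName, RoleDefinitionName, Scope |
    Export-Csv -Path .\role-assignments.csv -NoTypeInformation
```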
active-directory | Groups Role Settings | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/groups-role-settings.md | Follow these steps to open the settings for an Azure privileged access group rol 1. Open **Azure AD Privileged Identity Management**. 1. Select **Privileged access (Preview)**.+ >[!NOTE] + > The approver doesn't have to be a member or owner of the group, or have an Azure AD role assigned. 1. Select the group that you want to manage. |
active-directory | Pim Resource Roles Configure Role Settings | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/pim-resource-roles-configure-role-settings.md | Follow these steps to open the settings for an Azure resource role. 1. Open **Azure AD Privileged Identity Management**. 1. Select **Azure resources**.+ >[!NOTE] + > The approver doesn't need to have any Azure or Azure AD role assigned. 1. Select the resource you want to manage, such as a subscription or management group. |
active-directory | Ideagen Cloud Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/ideagen-cloud-provisioning-tutorial.md | The scenario outlined in this tutorial assumes that you already have the followi 1. Determine what data to [map between Azure AD and Ideagen Cloud](../app-provisioning/customize-application-attributes.md). ## Step 2. Configure Ideagen Cloud to support provisioning with Azure AD-1. Login to [Ideagen Home](https://cktenant-homev2-scimtest1.ideagenhomedev.com). Click on the **Administration** icon to show the left hand side menu. +1. Log in to Ideagen. Select the **Administration** icon to show the left-hand menu.  |
advisor | Advisor Performance Recommendations | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/advisor/advisor-performance-recommendations.md | Azure Premium Storage delivers high-performance, low-latency disk support for vi ## Remove data skew on your Azure Synapse Analytics tables to increase query performance -Data skew can cause unnecessary data movement or resource bottlenecks when you run your workload. Advisor detects distribution data skew of greater than 15%. It recommends that you redistribute your data and revisit your table distribution key selections. To learn more about identifying and removing skew, see [troubleshooting skew](../synapse-analytics/sql-data-warehouse/sql-data-warehouse-tables-distribute.md#how-to-tell-if-your-distribution-column-is-a-good-choice). +Data skew can cause unnecessary data movement or resource bottlenecks when you run your workload. Advisor detects distribution data skew of greater than 15%. It recommends that you redistribute your data and revisit your table distribution key selections. To learn more about identifying and removing skew, see [troubleshooting skew](../synapse-analytics/sql-data-warehouse/sql-data-warehouse-tables-distribute.md#how-to-tell-if-your-distribution-is-a-good-choice). ## Create or update outdated table statistics in your Azure Synapse Analytics tables to increase query performance Learn more about [Azure Communication Services](../communication-services/overvi 1. Sign in to the [Azure portal](https://portal.azure.com), and then open [Advisor](https://aka.ms/azureadvisordashboard). -2. On the Advisor dashboard, select the **Performance** tab. +2. On the Advisor dashboard, select the **Performance** tab. ## Next steps |
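The same recommendations shown on the **Performance** tab can also be retrieved programmatically. A minimal sketch, assuming the Az.Advisor PowerShell module:

```powershell
# List Advisor performance recommendations (Az.Advisor module).
Get-AzAdvisorRecommendation -Category Performance |
    Select-Object ImpactedField, ImpactedValue, Impact
```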
aks | Support Policies | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/support-policies.md | Microsoft provides technical support for the following examples: * Connectivity to other Azure services and applications * Ingress controllers and ingress or load balancer configurations * Network performance and latency- * [Network policies](use-network-policies.md#differences-between-azure-and-calico-policies-and-their-capabilities) -+ * [Network policies](use-network-policies.md#differences-between-azure-npm-and-calico-network-policy-and-their-capabilities) > [!NOTE] > Any cluster actions taken by Microsoft/AKS are made with user consent under a built-in Kubernetes role `aks-service` and built-in role binding `aks-service-rolebinding`. This role enables AKS to troubleshoot and diagnose cluster issues, but can't modify permissions nor create roles or role bindings, or other high privilege actions. Role access is only enabled under active support tickets with just-in-time (JIT) access. |
aks | Use Network Policies | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-network-policies.md | Last updated 06/24/2022 When you run modern, microservices-based applications in Kubernetes, you often want to control which components can communicate with each other. The principle of least privilege should be applied to how traffic can flow between pods in an Azure Kubernetes Service (AKS) cluster. For example, you likely want to block traffic directly to back-end applications. The *Network Policy* feature in Kubernetes lets you define rules for ingress and egress traffic between pods in a cluster. -This article shows you how to install the network policy engine and create Kubernetes network policies to control the flow of traffic between pods in AKS. Network policy should only be used for Linux-based nodes and pods in AKS. +This article shows you how to install the Network Policy engine and create Kubernetes network policies to control the flow of traffic between pods in AKS. Network Policy can be used for Linux-based or Windows-based nodes and pods in AKS. ## Before you begin You need the Azure CLI version 2.0.61 or later installed and configured. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][install-azure-cli]. -## Overview of network policy +## Overview of Network Policy All pods in an AKS cluster can send and receive traffic without limitations, by default. To improve security, you can define rules that control the flow of traffic. Back-end applications are often only exposed to required front-end services, for example. Or, database components are only accessible to the application tiers that connect to them. Network Policy is a Kubernetes specification that defines access policies for communication between Pods. Using network policies, you define an ordered set of rules to send and receive traffic and apply them to a collection of pods that match one or more label selectors. -These network policy rules are defined as YAML manifests. Network policies can be included as part of a wider manifest that also creates a deployment or service. +These Network Policy rules are defined as YAML manifests. Network policies can be included as part of a wider manifest that also creates a deployment or service. -### Network policy options in AKS +## Network policy options in AKS -Azure provides two ways to implement network policy. You choose a network policy option when you create an AKS cluster. The policy option can't be changed after the cluster is created: +Azure provides two ways to implement Network Policy. You choose a Network Policy option when you create an AKS cluster. The policy option can't be changed after the cluster is created: -* Azure's own implementation, called *Azure Network Policies*. +* Azure's own implementation, called *Azure Network Policy Manager (NPM)*. * *Calico Network Policies*, an open-source network and network security solution founded by [Tigera][tigera]. -Both implementations use Linux *IPTables* to enforce the specified policies. Policies are translated into sets of allowed and disallowed IP pairs. These pairs are then programmed as IPTable filter rules. 
+Azure NPM for Linux uses Linux *IPTables* and Azure NPM for Windows uses *Host Network Service (HNS) ACLPolicies* to enforce the specified policies. Policies are translated into sets of allowed and disallowed IP pairs. These pairs are then programmed as IPTable/HNS ACLPolicy filter rules. -### Differences between Azure and Calico policies and their capabilities +## Differences between Azure NPM and Calico Network Policy and their capabilities -| Capability | Azure | Calico | +| Capability | Azure NPM | Calico Network Policy | ||-|--|-| Supported platforms | Linux | Linux, Windows Server 2019 and 2022 | +| Supported platforms | Linux, Windows Server 2022 | Linux, Windows Server 2019 and 2022 | | Supported networking options | Azure CNI | Azure CNI (Linux, Windows Server 2019 and 2022) and kubenet (Linux) | | Compliance with Kubernetes specification | All policy types supported | All policy types supported | | Additional features | None | Extended policy model consisting of Global Network Policy, Global Network Set, and Host Endpoint. For more information on using the `calicoctl` CLI to manage these extended features, see [calicoctl user reference][calicoctl]. | | Support | Supported by Azure support and Engineering team | Calico community support. For more information on additional paid support, see [Project Calico support options][calico-support]. |-| Logging | Rules added / deleted in IPTables are logged on every host under */var/log/azure-npm.log* | For more information, see [Calico component logs][calico-logs] | +| Logging | Logs available with **kubectl logs -n kube-system <network-policy-pod>** command | For more information, see [Calico component logs][calico-logs] | -## Create an AKS cluster and enable network policy -To see network policies in action, let's create and then expand on a policy that defines traffic flow: +## Limitations -* Deny all traffic to pod. -* Allow traffic based on pod labels. -* Allow traffic based on namespace. +Azure Network Policy Manager (NPM) doesn't support IPv6. Otherwise, Azure NPM fully supports the network policy spec in Linux. +* In Windows, Azure NPM does not support the following: + * named ports + * SCTP protocol + * negative match label or namespace selectors (e.g. all labels except "debug=true") + * "except" CIDR blocks (a CIDR with exceptions) -First, let's create an AKS cluster that supports network policy. +>[!NOTE] +> * Azure NPM pod logs will record an error if an unsupported policy is created. ++## Create an AKS cluster and enable Network Policy ++To see network policies in action, let's create an AKS cluster that supports network policy and then work on adding policies. > [!IMPORTANT] > > The network policy feature can only be enabled when the cluster is created. You can't enable network policy on an existing AKS cluster. -To use Azure Network Policy, you must use the [Azure CNI plug-in][azure-cni]. Calico Network Policy could be used with either this same Azure CNI plug-in or with the Kubenet CNI plug-in. +To use Azure NPM, you must use the [Azure CNI plug-in][azure-cni]. Calico Network Policy could be used with either this same Azure CNI plug-in or with the Kubenet CNI plug-in. The following example script: -* Creates an AKS cluster with system-assigned identity and enables network policy. - * The _Azure Network_ policy option is used. To use Calico as the network policy option instead, use the `--network-policy calico` parameter. Note: Calico could be used with either `--network-plugin azure` or `--network-plugin kubenet`. 
+* Creates an AKS cluster with system-assigned identity and enables Network Policy. + * The _Azure NPM_ option is used. To use Calico as the Network Policy option instead, use the `--network-policy calico` parameter. Note: Calico could be used with either `--network-plugin azure` or `--network-plugin kubenet`. Instead of using a system-assigned identity, you can also use a user-assigned identity. For more information, see [Use managed identities](use-managed-identity.md). -### Create an AKS cluster for Azure network policies +### Create an AKS cluster with Azure NPM enabled - Linux only ++In this section, we'll create a cluster with Linux node pools and Azure NPM enabled. -You can replace the *RESOURCE_GROUP_NAME* and *CLUSTER_NAME* variables: +To begin, replace the values of the *RESOURCE_GROUP_NAME* and *CLUSTER_NAME* variables. ```azurecli-interactive-RESOURCE_GROUP_NAME=myResourceGroup-NP -CLUSTER_NAME=myAKSCluster -LOCATION=canadaeast +RESOURCE_GROUP_NAME=myResourceGroup-NP +CLUSTER_NAME=myAKSCluster +LOCATION=canadaeast +``` -Create the AKS cluster and specify *azure* for the network plugin and network policy. +Create the AKS cluster and specify *azure* for the `network-plugin` and `network-policy`. +Use the following command to create a cluster: ```azurecli az aks create \ --resource-group $RESOURCE_GROUP_NAME \ az aks create \ --network-policy azure ``` -It takes a few minutes to create the cluster. When the cluster is ready, configure `kubectl` to connect to your Kubernetes cluster by using the [az aks get-credentials][az-aks-get-credentials] command. This command downloads credentials and configures the Kubernetes CLI to use them: +### Create an AKS cluster with Azure NPM enabled - Windows Server 2022 (Preview) -```azurecli-interactive -az aks get-credentials --resource-group $RESOURCE_GROUP_NAME --name $CLUSTER_NAME -``` +In this section, we'll create a cluster with Windows node pools and Azure NPM enabled. -### Create an AKS cluster for Calico network policies +Execute the following commands before creating a cluster: -Create the AKS cluster and specify *azure* for the network plugin, and *calico* for the network policy. Using *calico* as the network policy enables Calico networking on both Linux and Windows node pools. --If you plan on adding Windows node pools to your cluster, include the `windows-admin-username` and `windows-admin-password` parameters with that meet the [Windows Server password requirements][windows-server-password]. +```azurecli + az extension add --name aks-preview + az extension update --name aks-preview + az feature register --namespace Microsoft.ContainerService --name AKSWindows2022Preview + az feature register --namespace Microsoft.ContainerService --name WindowsNetworkPolicyPreview + az provider register -n Microsoft.ContainerService ``` > [!IMPORTANT] -> At this time, using Calico network policies with Windows nodes is available on new clusters using Kubernetes version 1.20 or later with Calico 3.17.2 and requires using Azure CNI networking. Windows nodes on AKS clusters with Calico enabled also have [Direct Server Return (DSR)][dsr] enabled by default. +> At this time, Azure NPM with Windows nodes is available on Windows Server 2022 only >-> For clusters with only Linux node pools running Kubernetes 1.20 with earlier versions of Calico, the Calico version will automatically be upgraded to 3.17.2. 
-Create a username to use as administrator credentials for your Windows Server containers on your cluster. The following commands prompt you for a username and set it WINDOWS_USERNAME for use in a later command (remember that the commands in this article are entered into a BASH shell). +Now, replace the values of the *RESOURCE_GROUP_NAME*, *CLUSTER_NAME*, and *WINDOWS_USERNAME* variables as needed. ++```azurecli-interactive +RESOURCE_GROUP_NAME=myResourceGroup-NP +CLUSTER_NAME=myAKSCluster +WINDOWS_USERNAME=myWindowsUserName +LOCATION=canadaeast +``` ++Create a username to use as administrator credentials for your Windows Server containers on your cluster. The following command prompts you for a username and stores it in `WINDOWS_USERNAME` (remember that the commands in this article are entered into a BASH shell). ```azurecli-interactive echo "Please enter the username to use as administrator credentials for Windows Server containers on your cluster: " && read WINDOWS_USERNAME ``` +Use the following command to create a cluster: + ```azurecli az aks create \ --resource-group $RESOURCE_GROUP_NAME \ az aks create \ --node-count 1 \ --windows-admin-username $WINDOWS_USERNAME \ --network-plugin azure \- --network-policy calico + --network-policy azure ``` It takes a few minutes to create the cluster. By default, your cluster is created with only a Linux node pool. If you would like to use Windows node pools, you can add one. For example: az aks nodepool add \ --node-count 1 ``` -When the cluster is ready, configure `kubectl` to connect to your Kubernetes cluster by using the [az aks get-credentials][az-aks-get-credentials] command. This command downloads credentials and configures the Kubernetes CLI to use them: --```azurecli-interactive -az aks get-credentials --resource-group $RESOURCE_GROUP_NAME --name $CLUSTER_NAME -``` --## Deny all inbound traffic to a pod --Before you define rules to allow specific network traffic, first create a network policy to deny all traffic. This policy gives you a starting point to begin to create an allowlist for only the desired traffic. You can also clearly see that traffic is dropped when the network policy is applied. --For the sample application environment and traffic rules, let's first create a namespace called *development* to run the example pods: --```console -kubectl create namespace development -kubectl label namespace/development purpose=development -``` --Create an example back-end pod that runs NGINX. This back-end pod can be used to simulate a sample back-end web-based application. Create this pod in the *development* namespace, and open port *80* to serve web traffic. 
Label the pod with *app=webapp,role=backend* so that we can target it with a network policy in the next section: --```console -kubectl run backend --image=mcr.microsoft.com/oss/nginx/nginx:1.15.5-alpine --labels app=webapp,role=backend --namespace development --expose --port 80 -``` --Create another pod and attach a terminal session to test that you can successfully reach the default NGINX webpage: --```console -kubectl run --rm -it --image=mcr.microsoft.com/dotnet/runtime-deps:6.0 network-policy --namespace development -``` --Install `wget`: --```console -apt-get update && apt-get install -y wget -``` --At the shell prompt, use `wget` to confirm that you can access the default NGINX webpage: --```console -wget -qO- http://backend -``` --The following sample output shows that the default NGINX webpage returned: -```output -<!DOCTYPE html> -<html> -<head> -<title>Welcome to nginx!</title> -[...] -``` --Exit out of the attached terminal session. The test pod is automatically deleted. --```console -exit -``` --### Create and apply a network policy --Now that you've confirmed you can use the basic NGINX webpage on the sample back-end pod, create a network policy to deny all traffic. Create a file named `backend-policy.yaml` and paste the following YAML manifest. This manifest uses a *podSelector* to attach the policy to pods that have the *app:webapp,role:backend* label, like your sample NGINX pod. No rules are defined under *ingress*, so all inbound traffic to the pod is denied: --```yaml -kind: NetworkPolicy -apiVersion: networking.k8s.io/v1 -metadata: - name: backend-policy - namespace: development -spec: - podSelector: - matchLabels: - app: webapp - role: backend - ingress: [] -``` --Go to [https://shell.azure.com](https://shell.azure.com) to open Azure Cloud Shell in your browser. --Apply the network policy by using the [kubectl apply][kubectl-apply] command and specify the name of your YAML manifest: --```console -kubectl apply -f backend-policy.yaml -``` --### Test the network policy --Let's see if you can use the NGINX webpage on the back-end pod again. Create another test pod and attach a terminal session: --```console -kubectl run --rm -it --image=mcr.microsoft.com/dotnet/runtime-deps:6.0 network-policy --namespace development -``` --Install `wget`: --```console -apt-get update && apt-get install -y wget -``` +### Create an AKS cluster for Calico network policies -At the shell prompt, use `wget` to see if you can access the default NGINX webpage. This time, set a timeout value to *2* seconds. The network policy now blocks all inbound traffic, so the page can't be loaded, as shown in the following example: +Create the AKS cluster and specify *azure* for the network plugin, and *calico* for the network policy. Using *calico* as the network policy enables Calico networking on both Linux and Windows node pools. -```console -wget -O- --timeout=2 --tries=1 http://backend -``` +If you plan on adding Windows node pools to your cluster, include the `windows-admin-username` and `windows-admin-password` parameters with values that meet the [Windows Server password requirements][windows-server-password]. -```output -wget: download timed out -``` +> [!IMPORTANT] +> At this time, using Calico network policies with Windows nodes is available on new clusters using Kubernetes version 1.20 or later with Calico 3.17.2 and requires using Azure CNI networking. Windows nodes on AKS clusters with Calico enabled also have [Direct Server Return (DSR)][dsr] enabled by default. 
+> +> For clusters with only Linux node pools running Kubernetes 1.20 with earlier versions of Calico, the Calico version will automatically be upgraded to 3.17.2. -Exit out of the attached terminal session. The test pod is automatically deleted. +Create a username to use as administrator credentials for your Windows Server containers on your cluster. The following command prompts you for a username and stores it in `WINDOWS_USERNAME` (remember that the commands in this article are entered into a BASH shell). -```console -exit +```azurecli-interactive +echo "Please enter the username to use as administrator credentials for Windows Server containers on your cluster: " && read WINDOWS_USERNAME ``` -## Allow inbound traffic based on a pod label --In the previous section, a back-end NGINX pod was scheduled, and a network policy was created to deny all traffic. Let's create a front-end pod and update the network policy to allow traffic from front-end pods. --Update the network policy to allow traffic from pods with the labels *app:webapp,role:frontend* and in any namespace. Edit the previous *backend-policy.yaml* file, and add *matchLabels* ingress rules so that your manifest looks like the following example: --```yaml -kind: NetworkPolicy -apiVersion: networking.k8s.io/v1 -metadata: - name: backend-policy - namespace: development -spec: - podSelector: - matchLabels: - app: webapp - role: backend - ingress: - - from: - - namespaceSelector: {} - podSelector: - matchLabels: - app: webapp - role: frontend +```azurecli +az aks create \ + --resource-group $RESOURCE_GROUP_NAME \ + --name $CLUSTER_NAME \ + --node-count 1 \ + --windows-admin-username $WINDOWS_USERNAME \ + --network-plugin azure \ + --network-policy calico ``` -> [!NOTE] -> This network policy uses a *namespaceSelector* and a *podSelector* element for the ingress rule. The YAML syntax is important for the ingress rules to be additive. In this example, both elements must match for the ingress rule to be applied. Kubernetes versions prior to *1.12* might not interpret these elements correctly and restrict the network traffic as you expect. For more about this behavior, see [Behavior of to and from selectors][policy-rules]. --Apply the updated network policy by using the [kubectl apply][kubectl-apply] command and specify the name of your YAML manifest: +It takes a few minutes to create the cluster. By default, your cluster is created with only a Linux node pool. If you would like to use Windows node pools, you can add one. For example: -```console -kubectl apply -f backend-policy.yaml +```azurecli +az aks nodepool add \ + --resource-group $RESOURCE_GROUP_NAME \ + --cluster-name $CLUSTER_NAME \ + --os-type Windows \ + --name npwin \ + --node-count 1 ``` -Schedule a pod that is labeled as *app=webapp,role=frontend* and attach a terminal session: +## Verify network policy setup -```console -kubectl run --rm -it frontend --image=mcr.microsoft.com/dotnet/runtime-deps:6.0 --labels app=webapp,role=frontend --namespace development -``` --Install `wget`: +When the cluster is ready, configure `kubectl` to connect to your Kubernetes cluster by using the [az aks get-credentials][az-aks-get-credentials] command. This command downloads credentials and configures the Kubernetes CLI to use them: -```console -apt-get update && apt-get install -y wget +```azurecli-interactive +az aks get-credentials --resource-group $RESOURCE_GROUP_NAME --name $CLUSTER_NAME ```+To begin verifying the network policy setup, we'll create a sample application and set traffic rules. 
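+Before creating the sample application, you can optionally confirm that the Azure NPM pods are running on the cluster. This is a sketch that assumes the NPM DaemonSet pods in *kube-system* carry their usual `k8s-app=azure-npm` label: ++```console +kubectl get pods -n kube-system -l k8s-app=azure-npm -o wide +``` ++You should see a *Running* NPM pod on each node.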
-At the shell prompt, use `wget` to see if you can access the default NGINX webpage: +First, let's create a namespace called *demo* to run the example pods: ```console-wget -qO- http://backend +kubectl create namespace demo ``` -Because the ingress rule allows traffic with pods that have the labels *app: webapp,role: frontend*, the traffic from the front-end pod is allowed. The following example output shows the default NGINX webpage returned: +Next, we'll create two pods in the cluster, named *client* and *server*. -```output -<!DOCTYPE html> -<html> -<head> -<title>Welcome to nginx!</title> -[...] -``` +>[!NOTE] +> If you want to schedule the *client* or *server* on a particular node, add the following option before the *--command* argument in the pod creation [kubectl run][kubectl-run] command: -Exit out of the attached terminal session. The pod is automatically deleted. +> ```console +>--overrides='{"spec": { "nodeSelector": {"kubernetes.io/os": "linux|windows"}}}' +> ``` -```console -exit -``` --### Test a pod without a matching label --The network policy allows traffic from pods labeled *app: webapp,role: frontend*, but should deny all other traffic. Let's test to see whether another pod without those labels can access the back-end NGINX pod. Create another test pod and attach a terminal session: +Create a *server* pod. This pod will serve on TCP port 80: ```console-kubectl run --rm -it --image=mcr.microsoft.com/dotnet/runtime-deps:6.0 network-policy --namespace development +kubectl run server -n demo --image=k8s.gcr.io/e2e-test-images/agnhost:2.33 --labels="app=server" --port=80 --command -- /agnhost serve-hostname --tcp --http=false --port "80" ``` -Install `wget`: +Create a *client* pod. The following command runs Bash on the *client* pod: ```console-apt-get update && apt-get install -y wget +kubectl run -it client -n demo --image=k8s.gcr.io/e2e-test-images/agnhost:2.33 --command -- bash ``` -At the shell prompt, use `wget` to see if you can access the default NGINX webpage. The network policy blocks the inbound traffic, so the page can't be loaded, as shown in the following example: -+Now, in a separate window, run the following command to get the server pod's IP address: ```console-wget -O- --timeout=2 --tries=1 http://backend +kubectl get pod -n demo --output=wide ```+The output should look like: ```output-wget: download timed out -``` --Exit out of the attached terminal session. The test pod is automatically deleted. --```console -exit +NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES +server 1/1 Running 0 30s 10.224.0.72 akswin22000001 <none> <none> ``` -## Allow traffic only from within a defined namespace --In the previous examples, you created a network policy that denied all traffic, and then updated the policy to allow traffic from pods with a specific label. Another common need is to limit traffic to only within a given namespace. If the previous examples were for traffic in a *development* namespace, create a network policy that prevents traffic from another namespace, such as *production*, from reaching the pods. +### Test connectivity without a network policy -First, create a new namespace to simulate a production namespace: +In the client's shell, verify connectivity with the server by executing the following command. Replace *server-ip* with the IP address found in the output of the previous command. 
There will be no output if the connection is successful: ```console-kubectl create namespace production -kubectl label namespace/production purpose=production +/agnhost connect <server-ip>:80 --timeout=3s --protocol=tcp ``` -Schedule a test pod in the *production* namespace that is labeled as *app=webapp,role=frontend*. Attach a terminal session: --```console -kubectl run --rm -it frontend --image=mcr.microsoft.com/dotnet/runtime-deps:6.0 --labels app=webapp,role=frontend --namespace production -``` +### Test connectivity with a network policy -Install `wget`: --```console -apt-get update && apt-get install -y wget -``` --At the shell prompt, use `wget` to confirm that you can access the default NGINX webpage: --```console -wget -qO- http://backend.development -``` --Because the labels for the pod match what is currently permitted in the network policy, the traffic is allowed. The network policy doesn't look at the namespaces, only the pod labels. The following example output shows the default NGINX webpage returned: --```output -<!DOCTYPE html> -<html> -<head> -<title>Welcome to nginx!</title> -[...] -``` --Exit out of the attached terminal session. The test pod is automatically deleted. --```console -exit -``` --### Update the network policy --Let's update the ingress rule *namespaceSelector* section to only allow traffic from within the *development* namespace. Edit the *backend-policy.yaml* manifest file as shown in the following example: +Create a file named *demo-policy.yaml* and paste the following YAML manifest to add a network policy: ```yaml-kind: NetworkPolicy apiVersion: networking.k8s.io/v1+kind: NetworkPolicy metadata:- name: backend-policy - namespace: development + name: demo-policy + namespace: demo spec: podSelector: matchLabels:- app: webapp - role: backend + app: server ingress: - from:- - namespaceSelector: - matchLabels: - purpose: development - podSelector: + - podSelector: matchLabels:- app: webapp - role: frontend + app: client + ports: + - port: 80 + protocol: TCP ```--In more complex examples, you could define multiple ingress rules, like a *namespaceSelector* and then a *podSelector*. 
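+Before applying the manifest, you can optionally have the API server validate it without persisting anything. This is a sketch that uses `kubectl`'s server-side dry run (available in kubectl 1.18 and later): ++```console +kubectl apply -f demo-policy.yaml --dry-run=server +``` ++If the manifest is well formed, kubectl reports that *demo-policy* would be created.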
--Apply the updated network policy by using the [kubectl apply][kubectl-apply] command and specify the name of your YAML manifest: --```console -kubectl apply -f backend-policy.yaml -``` --### Test the updated network policy --Schedule another pod in the *production* namespace and attach a terminal session: --```console -kubectl run --rm -it frontend --image=mcr.microsoft.com/dotnet/runtime-deps:6.0 --labels app=webapp,role=frontend --namespace production -``` --Install `wget`: --```console -apt-get update && apt-get install -y wget -``` --At the shell prompt, use `wget` to see that the network policy now denies traffic: --```console -wget -O- --timeout=2 --tries=1 http://backend.development -``` --```output -wget: download timed out -``` --Exit out of the test pod: --```console -exit -``` --With traffic denied from the *production* namespace, schedule a test pod back in the *development* namespace and attach a terminal session: --```console -kubectl run --rm -it frontend --image=mcr.microsoft.com/dotnet/runtime-deps:6.0 --labels app=webapp,role=frontend --namespace development -``` --Install `wget`: +Specify the name of your YAML manifest and apply it using [kubectl apply][kubectl-apply]: ```console-apt-get update && apt-get install -y wget +kubectl apply -f demo-policy.yaml ``` -At the shell prompt, use `wget` to see that the network policy allows the traffic: +Now, in the client's shell, verify connectivity with the server by executing the following `/agnhost` command: ```console-wget -qO- http://backend +/agnhost connect <server-ip>:80 --timeout=3s --protocol=tcp ``` -Traffic is allowed because the pod is scheduled in the namespace that matches what's permitted in the network policy. The following sample output shows the default NGINX webpage returned: +The connection is blocked because the policy selects the server (labeled *app=server*) and allows ingress only from pods labeled *app=client*, a label the *client* pod doesn't have yet. The connect command above yields this output: ```output-<!DOCTYPE html> -<html> -<head> -<title>Welcome to nginx!</title> -[...] +TIMEOUT ``` -Exit out of the attached terminal session. The test pod is automatically deleted. +Run the following command to label the *client* pod, and then rerun the connect command to verify connectivity with the server (the connect command should once more return no output). ```console-exit +kubectl label pod client -n demo app=client ``` ## Clean up resources -In this article, we created two namespaces and applied a network policy. +In this article, we created a namespace and two pods and applied a network policy. 
To clean up these resources, use the [kubectl delete][kubectl-delete] command and specify the resource name: ```console-kubectl delete namespace production -kubectl delete namespace development +kubectl delete namespace demo ``` ## Next steps To learn more about policies, see [Kubernetes network policies][kubernetes-netwo <!-- LINKS - external --> [kubectl-apply]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#apply [kubectl-delete]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#delete+[kubectl-run]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#run [kubernetes-network-policies]: https://kubernetes.io/docs/concepts/services-networking/network-policies/ [azure-cni]: https://github.com/Azure/azure-container-networking/blob/master/docs/cni.md [policy-rules]: https://kubernetes.io/docs/concepts/services-networking/network-policies/#behavior-of-to-and-from-selectors |
api-management | Api Management Transformation Policies | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-transformation-policies.md | or ``` > [!NOTE]-> Backend entities can be managed via [Azure portal](how-to-configure-service-fabric-backend.md), management [API](/rest/api/apimanagement), and [PowerShell](https://www.powershellgallery.com/packages?q=apimanagement). +> Backend entities can be managed via [Azure portal](how-to-configure-service-fabric-backend.md), management [API](/rest/api/apimanagement), and [PowerShell](https://www.powershellgallery.com/packages?q=apimanagement). Currently, if you define a base `set-backend-service` policy using the `backend-id` attribute and inherit the base policy using `<base />` within the scope, then it can only be overridden with a policy that uses the `backend-id` attribute, not the `base-url` attribute. ### Example OriginalUrl. - **Policy scopes:** all scopes |
app-service | Private Endpoint | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/networking/private-endpoint.md | description: Connect privately to a Web App using Azure Private Endpoint ms.assetid: 2dceac28-1ba6-4904-a15d-9e91d5ee162c Previously updated : 03/04/2022 Last updated : 08/23/2022 From a security perspective: - By default, when you enable Private Endpoints to your Web App, you disable all public access. - You can enable multiple Private Endpoints in other VNets and Subnets, including VNets in other regions. - The IP address of the Private Endpoint NIC must be dynamic, but will remain the same until you delete the Private Endpoint.-- The NIC of the Private Endpoint can't have an NSG associated. - The Subnet that hosts the Private Endpoint can have an NSG associated, but you must disable the network policies enforcement for the Private Endpoint: see [Disable network policies for private endpoints][disablesecuritype]. As a result, you can't use an NSG to filter access to your Private Endpoint. - By default, when you enable Private Endpoint to your Web App, the [access restrictions][accessrestrictions] configuration of the Web App isn't evaluated. - You can eliminate the data exfiltration risk from the VNet by removing all NSG rules where the destination is the Internet tag or Azure services. When you deploy a Private Endpoint for a Web App, you can only reach this specific Web App through the Private Endpoint. If you have another Web App, you must deploy another dedicated Private Endpoint for this other Web App. |
app-service | Quickstart Python | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/quickstart-python.md | Title: 'Quickstart: Deploy a Python (Django or Flask) web app to Azure' description: Get started with Azure App Service by deploying your first Python app to Azure App Service. Previously updated : 03/22/2022 Last updated : 08/23/2022 ms.devlang: python To complete this quickstart, you need: 1. An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio). 1. <a href="https://www.python.org/downloads/" target="_blank">Python 3.9 or higher</a> installed locally. +>**Note**: This article contains current instructions on deploying a Python web app using Azure App Service. Python on Windows is no longer supported. + ## 1 - Sample application This quickstart can be completed using either Flask or Django. A sample application in each framework is provided to help you follow along with this quickstart. Download or clone the sample application to your local workstation. |
automation | Disable Local Authentication | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/disable-local-authentication.md | Disabling local authentication doesn't take effect immediately. Allow a few minu >[!NOTE] > Currently, PowerShell support for the new API version (2021-06-22) or the flag – `DisableLocalAuth` is not available. However, you can use the REST API with this API version to update the flag.-To allow list and enroll your subscription for this feature in your respective regions, follow the steps in [how to create an Azure support request - Azure supportability | Microsoft Docs](../azure-portal/supportability/how-to-create-azure-support-request.md). ## Re-enable local authentication |
availability-zones | Az Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/availability-zones/az-overview.md | description: Learn about regions and availability zones and how they work to hel Previously updated : 06/21/2022 Last updated : 08/23/2022 |
availability-zones | Az Region | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/availability-zones/az-region.md | description: Learn what services are supported by availability zones and underst Previously updated : 08/18/2022 Last updated : 08/23/2022 |
azure-arc | Manage Vm Extensions Powershell | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/manage-vm-extensions-powershell.md | The following example enables the Key Vault VM extension on an Azure Arc-enabled $location = "regionName" # Start the deployment- New-AzConnectedMachineExtension -ResourceGroupName $resourceGRoup -Location $location -MachineName $machineName -Name "KeyVaultForWindows or KeyVaultforLinux" -Publisher "Microsoft.Azure.KeyVault" -ExtensionType "KeyVaultforWindows or KeyVaultforLinux" -Setting (ConvertTo-Json $settings) + New-AzConnectedMachineExtension -ResourceGroupName $resourceGroup -Location $location -MachineName $machineName -Name "KeyVaultForWindows or KeyVaultforLinux" -Publisher "Microsoft.Azure.KeyVault" -ExtensionType "KeyVaultforWindows or KeyVaultforLinux" -Setting $settings ``` ## List extensions installed |
azure-arc | Manage Vmware Vms In Azure | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/vmware-vsphere/manage-vmware-vms-in-azure.md | To perform guest OS operations on Arc-enabled VMs, you must enable guest managem |-|-|--| |Custom Script extension |Microsoft.Compute | CustomScriptExtension | |Log Analytics agent |Microsoft.EnterpriseCloud.Monitoring |MicrosoftMonitoringAgent |+|Azure Automation Hybrid Runbook Worker extension (preview) |Microsoft.Compute | HybridWorkerForWindows| + ### Linux extensions To perform guest OS operations on Arc-enabled VMs, you must enable guest managem |-|-|--| |Custom Script extension |Microsoft.Azure.Extensions |CustomScript | |Log Analytics agent |Microsoft.EnterpriseCloud.Monitoring |OmsAgentForLinux |+|Azure Automation Hybrid Runbook Worker extension (preview) | Microsoft.Compute | HybridWorkerForLinux| ## Enable guest management |
azure-arc | Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/vmware-vsphere/overview.md | The following scenarios are supported in Azure Arc-enabled VMware vSphere (previ - App teams can use Azure interfaces (portal, CLI, or REST API) to manage the lifecycle of on-premises VMs they use for deploying their applications (CRUD, Start/Stop/Restart). -- App teams and administrators can install extensions such as the Log Analytics agent, Custom Script Extension, and Dependency Agent, on the virtual machines and do operations supported by the extensions.+- App teams and administrators can install extensions such as the Log Analytics agent, Custom Script Extension, Dependency Agent, and Azure Automation Hybrid Runbook Worker extension on the virtual machines and do operations supported by the extensions. ## Supported regions |
azure-fluid-relay | Container Recovery | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-fluid-relay/how-tos/container-recovery.md | We aren't recovering (rolling back) existing container. `copyContainer` will giv ### New Container is detached New container is initially in `detached` state. We can continue working with detached container, or immediately attach. After calling `attach` we'll get back unique Container ID, representing newly created instance.++ ## Post-recovery considerations ++When it comes to building use cases around post-recovery scenarios, here are a couple of considerations on what an application might want to do to get its remote collaborators all working on the same container again. ++If you're modeling your application data solely using Fluid containers, the communication “link” is effectively broken when the container is corrupted. A similar real-world example is a video call where the original author shared the link with participants, and that link no longer works. With that perspective in mind, one option is to limit recovery permissions to the original author and let them share the new container link the same way they shared the original link, after recovering a copy of the original container. ++Alternatively, if you're using the Fluid Framework for transient data only, you can always use your own source-of-truth data and supporting services to manage more autonomous recovery workflows. For example, multiple clients may kick off the recovery process until your app has a first recovered copy. Your app can then notify all participating clients to transition to the new container. This can be useful because any currently active client can unblock the participating group to proceed with collaboration. One consideration here is the incurred cost of redundancy. |
azure-functions | Create First Function Vs Code Csharp | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/create-first-function-vs-code-csharp.md | adobe-target-content: ./create-first-function-vs-code-csharp-ieux In this article, you use Visual Studio Code to create a C# function that responds to HTTP requests. After testing the code locally, you deploy it to the serverless environment of Azure Functions. This article creates an HTTP triggered function that runs on .NET 6.0. There's also a [CLI-based version](create-first-function-cli-csharp.md) of this article. -By default, this article shows you how to create C# functions that runs on .NET 6 [in the same process as the Functions host](functions-dotnet-class-library.md). These _in-process_ C# functions are only supported on Long Term Support (LTS) versions of .NET, such as .NET 6. To create C# functions on .NET 6 that can also run on .NET 5.0 and .NET Framework 4.8 (in preview) [in an isolated process](dotnet-isolated-process-guide.md), see the [alternate version of this article](create-first-function-vs-code-csharp.md?tabs=isolated-process). +By default, this article shows you how to create C# functions that run on .NET 6 [in the same process as the Functions host](functions-dotnet-class-library.md). These _in-process_ C# functions are only supported on Long Term Support (LTS) versions of .NET, such as .NET 6. To create C# functions on .NET 6 that can also run on .NET 5.0 and .NET Framework 4.8 (in preview) [in an isolated process](dotnet-isolated-process-guide.md), see the [alternate version of this article](create-first-function-vs-code-csharp.md?tabs=isolated-process). Completing this quickstart incurs a small cost of a few USD cents or less in your Azure account. Completing this quickstart incurs a small cost of a few USD cents or less in you Before you get started, make sure you have the following requirements in place: -+ [.NET 6.0 SDK](https://dotnet.microsoft.com/download/dotnet/6.0) --+ [Azure Functions Core Tools](functions-run-local.md#install-the-azure-functions-core-tools) version 4.x. --+ [Visual Studio Code](https://code.visualstudio.com/) on one of the [supported platforms](https://code.visualstudio.com/docs/supporting/requirements#_platforms). --+ [C# extension](https://marketplace.visualstudio.com/items?itemName=ms-dotnettools.csharp) for Visual Studio Code. --+ [Azure Functions extension](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-azurefunctions) for Visual Studio Code. --You also need an Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio). ## <a name="create-an-azure-functions-project"></a>Create your local project |
azure-functions | Create First Function Vs Code Java | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/create-first-function-vs-code-java.md | Completing this quickstart incurs a small cost of a few USD cents or less in you Before you get started, make sure you have the following requirements in place: -+ An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio). --+ The [Java Development Kit](/azure/developer/java/fundamentals/java-support-on-azure), version 11 or 8. --+ [Apache Maven](https://maven.apache.org), version 3.0 or above. --+ [Visual Studio Code](https://code.visualstudio.com/) on one of the [supported platforms](https://code.visualstudio.com/docs/supporting/requirements#_platforms). --+ The [Java extension pack](https://marketplace.visualstudio.com/items?itemName=vscjava.vscode-java-pack) --+ The [Azure Functions extension](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-azurefunctions) for Visual Studio Code. ## <a name="create-an-azure-functions-project"></a>Create your local project |
azure-functions | Create First Function Vs Code Node | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/create-first-function-vs-code-node.md | There's also a [CLI-based version](create-first-function-cli-node.md) of this ar Before you get started, make sure you have the following requirements in place: -+ An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio). --+ [Node.js 14.x](https://nodejs.org/en/download/releases/) or [Node.js 16.x](https://nodejs.org/en/download/releases/). Use the `node --version` command to check your version. --+ [Visual Studio Code](https://code.visualstudio.com/) on one of the [supported platforms](https://code.visualstudio.com/docs/supporting/requirements#_platforms). --+ The [Azure Functions extension](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-azurefunctions) for Visual Studio Code. --+ [Azure Functions Core Tools 4.x](functions-run-local.md#install-the-azure-functions-core-tools). ## <a name="create-an-azure-functions-project"></a>Create your local project |
azure-functions | Create First Function Vs Code Powershell | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/create-first-function-vs-code-powershell.md | There's also a [CLI-based version](create-first-function-cli-powershell.md) of t Before you get started, make sure you have the following requirements in place: -+ An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio). --+ The [Azure Functions Core Tools](functions-run-local.md#install-the-azure-functions-core-tools) version 4.x. --+ [PowerShell 7](/powershell/scripting/install/installing-powershell-core-on-windows) --+ [.NET Core 3.1 runtime](https://dotnet.microsoft.com/download/dotnet) --+ [Visual Studio Code](https://code.visualstudio.com/) on one of the [supported platforms](https://code.visualstudio.com/docs/supporting/requirements#_platforms). --+ The [PowerShell extension for Visual Studio Code](https://marketplace.visualstudio.com/items?itemName=ms-vscode.PowerShell). --+ The [Azure Functions extension](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-azurefunctions) for Visual Studio Code. ## <a name="create-an-azure-functions-project"></a>Create your local project |
azure-functions | Create First Function Vs Code Python | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/create-first-function-vs-code-python.md | There's also a [CLI-based version](create-first-function-cli-python.md) of this Before you begin, make sure that you have the following requirements in place: -+ An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio). --+ The [Azure Functions Core Tools](functions-run-local.md#install-the-azure-functions-core-tools) version 3.x. --+ Python versions that are [supported by Azure Functions](supported-languages.md#languages-by-runtime-version). For more information, see [How to install Python](https://wiki.python.org/moin/BeginnersGuide/Download). --+ [Visual Studio Code](https://code.visualstudio.com/) on one of the [supported platforms](https://code.visualstudio.com/docs/supporting/requirements#_platforms). --+ The [Python extension](https://marketplace.visualstudio.com/items?itemName=ms-python.python) for Visual Studio Code. --+ The [Azure Functions extension](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-azurefunctions) for Visual Studio Code. ## <a name="create-an-azure-functions-project"></a>Create your local project |
azure-functions | Durable Functions Event Publishing | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/durable/durable-functions-event-publishing.md | The following list explains the lifecycle events schema: ## How to test locally -To test locally, read [Azure Function Event Grid Trigger Local Debugging](../functions-debug-event-grid-trigger-local.md). +To test locally, read [Local testing with viewer web app](../event-grid-how-tos.md#local-testing-with-viewer-web-app). You can also use the *ngrok* utility as shown in [this tutorial](../functions-event-grid-blob-trigger.md#start-local-debugging). ## Next steps |
azure-functions | Event Grid How Tos | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/event-grid-how-tos.md | -+ [Azure Event Grid bindings for Azure Functions](functions-bindings-event-grid.md) ++ [Azure Event Grid bindings Overview](functions-bindings-event-grid.md) + [Azure Event Grid trigger for Azure Functions](functions-bindings-event-grid-trigger.md) + [Azure Event Grid output binding for Azure Functions](functions-bindings-event-grid-output.md) -## Create a subscription --To start receiving Event Grid HTTP requests, create an Event Grid subscription that specifies the endpoint URL that invokes the function. --### Azure portal --For functions that you develop in the Azure portal with the Event Grid trigger, select **Integration** then choose the **Event Grid Trigger** and select **Create Event Grid subscription**. +## Event subscriptions +To start receiving Event Grid HTTP requests, you need a subscription to events raised by Event Grid. Event subscriptions specify the endpoint URL that invokes the function. When you create an event subscription from your function's **Integration** tab in the [Azure portal](https://portal.azure.com), the URL is supplied for you. When you programmatically create an event subscription or when you create the event subscription from Event Grid, you'll need to provide the endpoint. The endpoint URL contains a system key, which you must obtain from Functions administrator REST APIs. -When you select this link, the portal opens the **Create Event Subscription** page with the current trigger endpoint already defined. ---For more information about how to create subscriptions by using the Azure portal, see [Create custom event - Azure portal](../event-grid/custom-event-quickstart-portal.md) in the Event Grid documentation. +### Webhook endpoint URL -### Azure CLI --To create a subscription by using [the Azure CLI](/cli/azure/get-started-with-azure-cli), use the [az eventgrid event-subscription create](/cli/azure/eventgrid/event-subscription#az-eventgrid-event-subscription-create) command. --The command requires the endpoint URL that invokes the function, and the endpoint varies between version 1.x of the Functions runtime and later versions. The following example shows the version-specific URL pattern: +The URL endpoint for your Event Grid triggered function depends on the version of the Functions runtime. The following example shows the version-specific URL pattern: # [v2.x+](#tab/v2) https://{functionappname}.azurewebsites.net/admin/extensions/EventGridExtensionC ``` -The system key is an authorization key that has to be included in the endpoint URL for an Event Grid trigger. The following section explains how to get the system key. 
--Here's an example that subscribes to a blob storage account (with a placeholder for the system key): --# [Bash](#tab/bash/v2) --```azurecli -az eventgrid resource event-subscription create -g myResourceGroup \ - --provider-namespace Microsoft.Storage --resource-type storageAccounts \ - --resource-name myblobstorage12345 --name myFuncSub \ - --included-event-types Microsoft.Storage.BlobCreated \ - --subject-begins-with /blobServices/default/containers/images/blobs/ \ - --endpoint https://mystoragetriggeredfunction.azurewebsites.net/runtime/webhooks/eventgrid?functionName=imageresizefunc&code=<key> -``` --# [Cmd](#tab/cmd/v2) +### System key -```azurecli -az eventgrid resource event-subscription create -g myResourceGroup ^ - --provider-namespace Microsoft.Storage --resource-type storageAccounts ^ - --resource-name myblobstorage12345 --name myFuncSub ^ - --included-event-types Microsoft.Storage.BlobCreated ^ - --subject-begins-with /blobServices/default/containers/images/blobs/ ^ - --endpoint https://mystoragetriggeredfunction.azurewebsites.net/runtime/webhooks/eventgrid?functionName=imageresizefunc&code=<key> -``` --# [Bash](#tab/bash/v1) --```azurecli -az eventgrid resource event-subscription create -g myResourceGroup \ - --provider-namespace Microsoft.Storage --resource-type storageAccounts \ - --resource-name myblobstorage12345 --name myFuncSub \ - --included-event-types Microsoft.Storage.BlobCreated \ - --subject-begins-with /blobServices/default/containers/images/blobs/ \ - --endpoint https://mystoragetriggeredfunction.azurewebsites.net/admin/extensions/EventGridExtensionConfig?functionName=imageresizefunc&code=<key> -``` --# [Cmd](#tab/cmd/v1) --```azurecli -az eventgrid resource event-subscription create -g myResourceGroup ^ - --provider-namespace Microsoft.Storage --resource-type storageAccounts ^ - --resource-name myblobstorage12345 --name myFuncSub ^ - --included-event-types Microsoft.Storage.BlobCreated ^ - --subject-begins-with /blobServices/default/containers/images/blobs/ ^ - --endpoint https://mystoragetriggeredfunction.azurewebsites.net/admin/extensions/EventGridExtensionConfig?functionName=imageresizefunc&code=<key> -``` ----For more information about how to create a subscription, see [the blob storage quickstart](../storage/blobs/storage-blob-event-quickstart.md#subscribe-to-your-storage-account) or the other Event Grid quickstarts. --### Get the system key +The URL endpoint you construct includes the system key value. The system key is an authorization key that has to be included in the endpoint URL for an Event Grid trigger. The following section explains how to get the system key. You can get the system key by using the following API (HTTP GET): http://{functionappname}.azurewebsites.net/admin/host/systemkeys/eventgridextens -This REST API is an administrator API, so it requires your function app [master key](functions-bindings-http-webhook-trigger.md#authorization-keys). Don't confuse the system key (for invoking an Event Grid trigger function) with the master key (for performing administrative tasks on the function app). When you subscribe to an event grid topic, be sure to use the system key. +This REST API is an administrator API, so it requires your function app [master key](functions-bindings-http-webhook-trigger.md#authorization-keys). Don't confuse the system key (for invoking an Event Grid trigger function) with the master key (for performing administrative tasks on the function app). When you subscribe to an Event Grid topic, be sure to use the system key. 
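+For example, you can call this admin API from a shell with `curl`, passing the master key in the `x-functions-key` header. This is a sketch with placeholder values; replace `<key-name>` with the name of the Event Grid extension's system key (shown in the response below): ++```bash +curl -H "x-functions-key: <master-key>" \ +  "https://<functionappname>.azurewebsites.net/admin/host/systemkeys/<key-name>" +```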
Here's an example of the response that provides the system key: You can get the master key for your function app from the **Function app setting For more information, see [Authorization keys](functions-bindings-http-webhook-trigger.md#authorization-keys) in the HTTP trigger reference article. +### <a name="create-a-subscription"></a>Create an event subscription ++You can create an event subscription either from the [Azure portal](https://portal.azure.com) or by using the Azure CLI. ++# [Portal](#tab/portal) ++For functions that you develop in the Azure portal with the Event Grid trigger, select **Integration** then choose the **Event Grid Trigger** and select **Create Event Grid subscription**. +++When you select this link, the portal opens the **Create Event Subscription** page with the current trigger endpoint already defined. +++For more information about how to create subscriptions by using the Azure portal, see [Create custom event - Azure portal](../event-grid/custom-event-quickstart-portal.md) in the Event Grid documentation. ++# [Azure CLI](#tab/azure-cli) ++To create a subscription by using [the Azure CLI](/cli/azure/get-started-with-azure-cli), use the [`az eventgrid event-subscription create`](/cli/azure/eventgrid/event-subscription#az-eventgrid-event-subscription-create) command. Examples use the v2.x+ version of the URL and are written to run in [Azure Cloud Shell](../cloud-shell/overview.md). You'll need to modify the examples to run from a Windows command prompt. ++This example creates a subscription to a blob storage account, with a placeholder for the [system key](#system-key): ++```azurecli-interactive +az eventgrid resource event-subscription create -g myResourceGroup \ + --provider-namespace Microsoft.Storage --resource-type storageAccounts \ + --resource-name myblobstorage12345 --name myFuncSub \ + --included-event-types Microsoft.Storage.BlobCreated \ + --subject-begins-with /blobServices/default/containers/images/blobs/ \ + --endpoint https://mystoragetriggeredfunction.azurewebsites.net/runtime/webhooks/eventgrid?functionName=imageresizefunc&code=<key> +``` ++++For more information about how to create a subscription, see [the blob storage quickstart](../storage/blobs/storage-blob-event-quickstart.md#subscribe-to-your-storage-account) or the other Event Grid quickstarts. + ## Local testing with viewer web app To test an Event Grid trigger locally, you have to get Event Grid HTTP requests delivered from their origin in the cloud to your local machine. One way to do that is by capturing requests online and manually resending them on your local machine: To test an Event Grid trigger locally, you have to get Event Grid HTTP requests 1. [Generate a request](#generate-a-request) and copy the request body from the viewer app. 1. [Manually post the request](#manually-post-the-request) to the localhost URL of your Event Grid trigger function. -When you're done testing, you can use the same subscription for production by updating the endpoint. Use the [az eventgrid event-subscription update](/cli/azure/eventgrid/event-subscription#az-eventgrid-event-subscription-update) Azure CLI command. +When you're done testing, you can use the same subscription for production by updating the endpoint. Use the [`az eventgrid event-subscription update`](/cli/azure/eventgrid/event-subscription#az-eventgrid-event-subscription-update) Azure CLI command. ++You can also use the *ngrok* utility to forward remote requests to your locally running functions. 
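+For example, assuming the Functions host is listening on its default local port of *7071*, you could start the tunnel with: ++```bash +ngrok http 7071 +``` ++ngrok then prints a public forwarding URL that can stand in for your local endpoint while you debug.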
For more information, see [this tutorial](./functions-event-grid-blob-trigger.md#start-local-debugging). ### Create a viewer web app |
azure-functions | Functions Bindings Event Grid Trigger | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-event-grid-trigger.md | Title: Azure Event Grid trigger for Azure Functions description: Learn to run code when Event Grid events in Azure Functions are dispatched.- Last updated 03/04/2022 ms.devlang: csharp, java, javascript, powershell, python zone_pivot_groups: programming-languages-set-functions-lang-workers # Azure Event Grid trigger for Azure Functions -Use the function trigger to respond to an event sent to an event grid topic. To learn how to work with the Event Grid trigger. ---For information on setup and configuration details, see the [overview](./functions-bindings-event-grid.md). +Use the function trigger to respond to an event sent by an [Event Grid source](../event-grid/overview.md). You must have an event subscription to the source to receive events. To learn how to create an event subscription, see [Create a subscription](event-grid-how-tos.md#create-a-subscription). For information on binding setup and configuration, see the [overview](./functions-bindings-event-grid.md). > [!NOTE] > Event Grid triggers aren't natively supported in an internal load balancer App Service Environment (ASE). The trigger uses an HTTP request that can't reach the function app without a gateway into the virtual network. Upon arrival, the event's JSON payload is de-serialized into the `EventSchema` } ``` -In the [Java functions runtime library](/java/api/overview/azure/functions/runtime), use the `EventGridTrigger` annotation on parameters whose value would come from EventGrid. Parameters with these annotations cause the function to run when an event arrives. This annotation can be used with native Java types, POJOs, or nullable values using `Optional<T>`. +In the [Java functions runtime library](/java/api/overview/azure/functions/runtime), use the `EventGridTrigger` annotation on parameters whose value would come from Event Grid. Parameters with these annotations cause the function to run when an event arrives. This annotation can be used with native Java types, POJOs, or nullable values using `Optional<T>`. ::: zone-end ::: zone pivot="programming-language-javascript" The following example shows a trigger binding in a *function.json* file and a [JavaScript function](functions-reference-node.md) that uses the binding. |
azure-functions | Functions Bindings Register | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-register.md | The following table lists the currently available versions of the default *Micro <sup>1</sup> Version 3.x of the extension bundle currently doesn't include the [Table Storage bindings](./functions-bindings-storage-table.md). If your app requires Table Storage, you'll need to continue using the 2.x version for now. > [!NOTE]-> While you can a specify custom version range in host.json, we recommend you use a version value from this table. +> Even though host.json supports custom ranges for `version`, you should use a version value from this table. ## Explicitly install extensions |
azure-functions | Functions Bindings Storage Blob Trigger | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-storage-blob-trigger.md | zone_pivot_groups: programming-languages-set-functions-lang-workers The Blob storage trigger starts a function when a new or updated blob is detected. The blob contents are provided as [input to the function](./functions-bindings-storage-blob-input.md). -The Azure Blob storage trigger requires a general-purpose storage account. Storage V2 accounts with [hierarchical namespaces](../storage/blobs/data-lake-storage-namespace.md) are also supported. To use a blob-only account, or if your application has specialized needs, review the alternatives to using this trigger. +There are several ways to execute your function code based on changes to blobs in a storage container. Use the following table to determine which function trigger best fits your needs: -For information on setup and configuration details, see the [overview](./functions-bindings-storage-blob.md). +| | Blob Storage (standard) | Blob Storage (event-based) | Queue Storage | Event Grid | +| -- | -- | -- | -- | - | +| Latency | High (up to 10 min) | Low | Medium | Low | +| [Storage account](../storage/common/storage-account-overview.md#types-of-storage-accounts) limitations | Blob-only accounts not supported¹ | general purpose v1 not supported | none | general purpose v1 not supported | +| Extension version |Any | Storage v5.x+ |Any |Any | +| Processes existing blobs | Yes | No | No | No | +| Filters | [Blob name pattern](#blob-name-patterns) | [Event filters](../storage/blobs/storage-blob-event-overview.md#filtering-events) | n/a | [Event filters](../storage/blobs/storage-blob-event-overview.md#filtering-events) | +| Requires [event subscription](../event-grid/concepts.md#event-subscriptions) | No | Yes | No | Yes | +| Supports high-scale² | No | Yes | Yes | Yes | +| Description | Default trigger behavior, which relies on polling the container for updates. For more information, see the [examples in this article](#example). | Consumes blob storage events from an event subscription. Requires a `Source` parameter value of `EventGrid`. For more information, see [Tutorial: Trigger Azure Functions on blob containers using an event subscription](./functions-event-grid-blob-trigger.md). | Blob name string is manually added to a storage queue when a blob is added to the container. This value is passed directly by a Queue Storage trigger to a Blob Storage input binding on the same function. | Provides the flexibility of triggering on events besides those coming from a storage container. Use when you also need non-storage events to trigger your function. For more information, see [How to work with Event Grid triggers and bindings in Azure Functions](event-grid-how-tos.md). | ++¹Blob Storage input and output bindings support blob-only accounts. +²High scale can be loosely defined as containers that have more than 100,000 blobs in them or storage accounts that have more than 100 blob updates per second. ++For information on setup and configuration details, see the [overview](./functions-bindings-storage-blob.md). ## Example To look for curly braces in file names, escape the braces by using two braces. T If the blob is named *{20140101}-soundfile.mp3*, the `name` variable value in the function code is *soundfile.mp3*. +## Polling and latency --## Polling --Polling works as a hybrid between inspecting logs and running periodic container scans. 
Blobs are scanned in groups of 10,000 at a time with a continuation token used between intervals. +Polling works as a hybrid between inspecting logs and running periodic container scans. Blobs are scanned in groups of 10,000 at a time with a continuation token used between intervals. If your function app is on the Consumption plan, there can be up to a 10-minute delay in processing new blobs if a function app has gone idle. > [!WARNING]-> In addition, [storage logs are created on a "best effort"](/rest/api/storageservices/About-Storage-Analytics-Logging) basis. There's no guarantee that all events are captured. Under some conditions, logs may be missed. -> -> If you require faster or more reliable blob processing, consider creating a [queue message](../storage/queues/storage-dotnet-how-to-use-queues.md) when you create the blob. Then use a [queue trigger](functions-bindings-storage-queue.md) instead of a blob trigger to process the blob. Another option is to use Event Grid; see the tutorial [Automate resizing uploaded images using Event Grid](../event-grid/resize-images-on-storage-blob-upload-event.md). -> --## Alternatives --### Event Grid trigger --> [!NOTE] -> When using Storage Extensions 5.x and higher, the Blob trigger has built-in support for an Event Grid based Blob trigger. For more information, see the [Storage extension 5.x and higher](#storage-extension-5x-and-higher) section below. --The [Event Grid trigger](functions-bindings-event-grid.md) also has built-in support for [blob events](../storage/blobs/storage-blob-event-overview.md). Use Event Grid instead of the Blob storage trigger for the following scenarios: --- **Blob-only storage accounts**: [Blob-only storage accounts](../storage/common/storage-account-overview.md#types-of-storage-accounts) are supported for blob input and output bindings but not for blob triggers.--- **High-scale**: High scale can be loosely defined as containers that have more than 100,000 blobs in them or storage accounts that have more than 100 blob updates per second.--- **Existing Blobs**: The blob trigger will process all existing blobs in the container when you set up the trigger. If you have a container with many existing blobs and only want to trigger for new blobs, use the Event Grid trigger.--- **Minimizing latency**: If your function app is on the Consumption plan, there can be up to a 10-minute delay in processing new blobs if a function app has gone idle. To avoid this latency, you can switch to an App Service plan with Always On enabled. You can also use an [Event Grid trigger](functions-bindings-event-grid.md) with your Blob storage account. For an example, see the [Event Grid tutorial](../event-grid/resize-images-on-storage-blob-upload-event.md?toc=%2Fazure%2Fazure-functions%2Ftoc.json).--See the [Image resize with Event Grid](../event-grid/resize-images-on-storage-blob-upload-event.md) tutorial of an Event Grid example. --#### Storage Extension 5.x and higher --When using the storage extension, there is built-in support for Event Grid in the Blob trigger, which requires setting the `source` parameter to Event Grid in your existing Blob trigger. --For more information on how to use the Blob Trigger based on Event Grid, refer to the [Event Grid Blob Trigger guide](./functions-event-grid-blob-trigger.md). +> [Storage logs are created on a "best effort"](/rest/api/storageservices/About-Storage-Analytics-Logging) basis. There's no guarantee that all events are captured. Under some conditions, logs may be missed. 
-### Queue storage trigger +If you require faster or more reliable blob processing, you should instead implement one of the following strategies: -Another approach to processing blobs is to write queue messages that correspond to blobs being created or modified and then use a [Queue storage trigger](./functions-bindings-storage-queue.md) to begin processing. ++ Change your binding definition to consume [blob events](../storage/blobs/storage-blob-event-overview.md) instead of polling the container. You can do this in one of two ways:+ + Add the `source` parameter with a value of `EventGrid` to your binding definition and create an event subscription on the same container. For more information, see [Tutorial: Trigger Azure Functions on blob containers using an event subscription](./functions-event-grid-blob-trigger.md). + + Replace the Blob Storage trigger with an [Event Grid trigger](functions-bindings-event-grid-trigger.md) using an event subscription on the same container. For more information, see the [Image resize with Event Grid](../event-grid/resize-images-on-storage-blob-upload-event.md) tutorial. ++ Consider creating a [queue message](../storage/queues/storage-dotnet-how-to-use-queues.md) when you create the blob. Then use a [queue trigger](functions-bindings-storage-queue.md) instead of a blob trigger to process the blob.++ Switch your hosting to use an App Service plan with Always On enabled, which may result in increased costs. ## Blob receipts |
azure-functions | Functions Debug Event Grid Trigger Local | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-debug-event-grid-trigger-local.md | - Title: Azure Functions Event Grid local debugging -description: Learn to locally debug Azure Functions triggered by an Event Grid event -- Previously updated : 10/18/2018---# Azure Function Event Grid Trigger Local Debugging --This article demonstrates how to debug a local function that handles an Azure Event Grid event raised by a storage account. --## Prerequisites --- Create or use an existing function app-- Create or use an existing storage account. Event Grid notification subscription can be set on Azure Storage accounts for `BlobStorage`, `StorageV2`, or [Data Lake Storage Gen2](../storage/blobs/data-lake-storage-introduction.md).-- Download [ngrok](https://ngrok.com/) to allow Azure to call your local function--## Create a new function --Open your function app in Visual Studio and, right-click on the project name in the Solution Explorer and click **Add > New Azure Function**. --In the *New Azure Function* window, select **Event Grid trigger** and click **OK**. -- --Once the function is created, open the code file and copy the URL commented out at the top of the file. This location is used when configuring the Event Grid trigger. -- --Then, set a breakpoint on the line that begins with `log.LogInformation`. -- ---Next, **press F5** to start a debugging session. ---## Debug the function --Once the Event Grid recognizes a new file is uploaded to the storage container, the break point is hit in your local function. -- --## Clean up resources --To clean up the resources created in this article, delete the **test** container in your storage account. --## Next steps --- [Automate resizing uploaded images using Event Grid](../event-grid/resize-images-on-storage-blob-upload-event.md)-- [Event Grid trigger for Azure Functions](./functions-bindings-event-grid.md) |
azure-functions | Functions Develop Vs Code | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-develop-vs-code.md | The following steps publish your project to a new function app created with adva | | -- | | Enter a globally unique name for the new function app. | Type a globally unique name that identifies your new function app and then select Enter. Valid characters for a function app name are `a-z`, `0-9`, and `-`. | | Select a runtime stack. | Choose the language version on which you've been running locally. |- | Select an OS. | Choose either Linux or Windows. Python apps must run on Linux | + | Select an OS. | Choose either Linux or Windows. Python apps must run on Linux. | | Select a resource group for new resources. | Choose **Create new resource group** and type a resource group name, like `myResourceGroup`, and then select enter. You can also select an existing resource group. | | Select a location for new resources. | Select a location in a [region](https://azure.microsoft.com/regions/) near you or near other services that your functions access. | | Select a hosting plan. | Choose **Consumption** for serverless [Consumption plan hosting](consumption-plan.md), where you're only charged when your functions run. | |
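For readers who prefer scripting this create flow, a roughly equivalent Azure CLI sketch follows. All names and the location are illustrative placeholders; adjust `--runtime` to match the stack you've been running locally:

```bash
# Create a resource group, a storage account, and a Consumption-plan function app.
az group create --name myResourceGroup --location westeurope

az storage account create \
  --name <STORAGE_NAME> \
  --resource-group myResourceGroup \
  --sku Standard_LRS

az functionapp create \
  --name <APP_NAME> \
  --resource-group myResourceGroup \
  --storage-account <STORAGE_NAME> \
  --consumption-plan-location westeurope \
  --runtime dotnet \
  --functions-version 4
```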
azure-functions | Functions Event Grid Blob Trigger | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-event-grid-blob-trigger.md | Title: Azure Functions Event Grid Blob Trigger -description: Learn to setup and debug with the Event Grid Blob Trigger + Title: 'Tutorial: Trigger Azure Functions on blob containers using an event subscription' +description: In this tutorial, you learn how to use an Event Grid event subscription to create a low-latency, event-driven trigger on an Azure Blob Storage container. --+ Last updated 3/1/2021 +zone_pivot_groups: programming-languages-set-functions-lang-workers +#Customer intent: As an Azure Functions developer, I want to learn how to create an Event Grid-based trigger on a Blob Storage container so that I can get a more rapid response to changes in the container. -# Azure Function Event Grid Blob Trigger +# Tutorial: Trigger Azure Functions on blob containers using an event subscription -This article demonstrates how to debug and deploy a local Event Grid Blob triggered function that handles events raised by a storage account. +Earlier versions of the Blob Storage trigger for Azure Functions polled the container for updates, which often resulted in delayed execution. By using the latest version of the extension, you can reduce latency by instead triggering on an event subscription to the same blob container. The event subscription uses Event Grid to forward changes in the blob container as events for your function to consume. This article demonstrates how to use Visual Studio Code to locally develop a function that runs based on events raised when a blob is added to a container. You'll locally verify the function before deploying your project to Azure. -> [!NOTE] -> The Event Grid Blob trigger is in preview. +> [!div class="checklist"] +> * Create a general-purpose v2 storage account in Azure Storage. +> * Create a container in blob storage. +> * Create an event-driven Blob Storage triggered function. +> * Create an event subscription to a blob container. +> * Debug locally using ngrok by uploading files. +> * Deploy to Azure and create a filtered event subscription. ## Prerequisites -- Create or use an existing function app-- Create or use an existing storage account-- Have version 5.0+ of the [Microsoft.Azure.WebJobs.Extensions.Storage extension](https://www.nuget.org/packages/Microsoft.Azure.WebJobs.Extensions.Storage/5.0.0-beta.2) installed-- Download [ngrok](https://ngrok.com/) to allow Azure to call your local function++ The [ngrok](https://ngrok.com/) utility, which provides a way for Azure to call into your locally running function.+++ The [Azure Storage extension](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-azurestorage) for Visual Studio Code. -## Create a new function +> [!NOTE] +> The Storage Extension for Visual Studio Code is currently in preview. -1. Open your function app in Visual Studio Code. ## Create a storage account -1. **Press F1** to create a new blob trigger function. Make sure to use the connection string for your storage account. +Using an event subscription to Azure Storage requires you to use a general-purpose v2 storage account. With the Azure Storage extension installed, you can create this kind of storage account by default from your Visual Studio Code project. -1. The default url for your event grid blob trigger is: +1.
In Visual Studio Code, open the command palette (press F1), type `Azure Storage: Create Storage Account...`, and then provide the following information at the prompts: - # [C#](#tab/csharp) + |Prompt|Selection| + |--|--| + |**Enter the name of the new storage account**| Type a globally unique name. Storage account names must be between 3 and 24 characters in length and can contain numbers and lowercase letters only. We'll use the same name for the resource group and the function app name, to make it easier. | + |**Select a location for new resources**| For better performance, choose a [region](https://azure.microsoft.com/regions/) near you.| - ```http - http://localhost:7071/runtime/webhooks/blobs?functionName={functionname} - ``` + The extension creates a new general-purpose v2 storage account with the name you provided. The same name is also used for the resource group in which the storage account is created. - # [Python](#tab/python) +1. After the storage account is created, open the command palette (press F1) and type `Azure Storage: Create Blob Container...`, and then provide the following information at the prompts: - ```http - http://localhost:7071/runtime/webhooks/blobs?functionName=Host.Functions.{functionname} - ``` + |Prompt|Selection| + |--|--| + |**Select a resource**| Choose the name of the storage account you created. | + |**Enter a name for the new blob container**| Type `samples-workitems`, which is the container name referenced in your code project.| - # [Java](#tab/java) +Now that you have the blob container, you can create both the function that triggers on this container and the event subscription that delivers events to your function. - ```http - http://localhost:7071/runtime/webhooks/blobs?functionName=Host.Functions.{functionname} - ``` +## Create a Blob triggered function - +When you use Visual Studio Code to create a Blob Storage triggered function, you also create a new project. You'll then need to modify the function to consume an event subscription as the source instead of the regular polled container. - Note your function app's name and that the trigger type is a blob trigger, which is indicated by `blobs` in the url. This will be needed when setting up endpoints later in the how to guide. +1. Open your function app in Visual Studio Code. -1. Once the function is created, add the Event Grid source parameter. +1. Open the command palette (press F1) and type `Azure Functions: Create Function...` and select **Create new project**. ++1. Choose the directory location for your project workspace and choose **Select**. You should either create a new folder or choose an empty folder for the project workspace. Don't choose a project folder that is already part of a workspace. ++1. Provide the following information at the prompts: ++ ::: zone pivot="programming-language-csharp" + |Prompt|Selection| + |--|--| + |**Select a language**|Choose `C#`.| + |**Select a .NET runtime**| Choose `.NET 6.0 LTS`. Event-driven blob triggers aren't yet supported when running in an isolated process. | + |**Select a template for your project's first function**|Choose `Azure Blob Storage trigger`.| + |**Provide a function name**|Type `BlobTriggerEventGrid`.| + |**Provide a namespace** | Type `My.Functions`. | + |**Select setting from "local.settings.json"**|Choose `Create new local app setting`.| + |**Select a storage account**|Choose the storage account you created from the list. 
| + |**This is the path within your storage account that the trigger will monitor**| Accept the default value `samples-workitems`. | + |**Select how you would like to open your project**|Choose `Add to workspace`.| + ::: zone-end + ::: zone pivot="programming-language-python" + |Prompt|Selection| + |--|--| + |**Select a language**|Choose `Python`.| + |**Select a Python interpreter to create a virtual environment**| Choose your preferred Python interpreter. If an option isn't shown, type in the full path to your Python binary.| + |**Select a template for your project's first function**|Choose `Azure Blob Storage trigger`.| + |**Provide a function name**|Type `BlobTriggerEventGrid`.| + |**Select setting from "local.settings.json"**|Choose `Create new local app setting`.| + |**Select a storage account**|Choose the storage account you created from the list. | + |**This is the path within your storage account that the trigger will monitor**| Accept the default value `samples-workitems`. | + |**Select how you would like to open your project**|Choose `Add to workspace`.| + ::: zone-end + ::: zone pivot="programming-language-java" + |Prompt|Selection| + |--|--| + |**Select a language**|Choose `Java`.| + |**Select a version of Java**| Choose `Java 11` or `Java 8`, the Java version on which your functions run in Azure. Choose a Java version that you've verified locally. | + | **Provide a group ID** | Choose `com.function`. | + | **Provide an artifact ID** | Choose `BlobTriggerEventGrid`. | + | **Provide a version** | Choose `1.0-SNAPSHOT`. | + | **Provide a package name** | Choose `com.function`. | + | **Provide an app name** | Accept the generated name starting with `BlobTriggerEventGrid`. | + | **Select the build tool for Java project** | Choose `Maven`. | + |**Select how you would like to open your project**|Choose `Add to workspace`.| + ::: zone-end + ::: zone pivot="programming-language-javascript" + |Prompt|Selection| + |--|--| + |**Select a language for your function project**|Choose `JavaScript`.| + |**Select a template for your project's first function**|Choose `Azure Blob Storage trigger`.| + |**Provide a function name**|Type `BlobTriggerEventGrid`.| + |**Select setting from "local.settings.json"**|Choose `Create new local app setting`.| + |**Select a storage account**|Choose the storage account you created from the list. | + |**This is the path within your storage account that the trigger will monitor**| Accept the default value `samples-workitems`. | + |**Select how you would like to open your project**|Choose `Add to workspace`.| + ::: zone-end + ::: zone pivot="programming-language-powershell" + |Prompt|Selection| + |--|--| + |**Select a language for your function project**|Choose `PowerShell`.| + |**Select a template for your project's first function**|Choose `Azure Blob Storage trigger`.| + |**Provide a function name**|Type `BlobTriggerEventGrid`.| + |**Select setting from "local.settings.json"**|Choose `Create new local app setting`.| + |**Select a storage account**|Choose the storage account you created from the list. | + |**This is the path within your storage account that the trigger will monitor**| Accept the default value `samples-workitems`. | + |**Select how you would like to open your project**|Choose `Add to workspace`.| + ::: zone-end ++1. When prompted, choose **Select storage account** and then **Add to workspace**. ++To simplify things, this tutorial reuses the same storage account with your function app. 
In production, you might want to use a separate storage account for your function app. For more information, see [Storage considerations for Azure Functions](storage-considerations.md). ++## Upgrade the Blob Storage extension ++To be able to use the Event Grid-based Blob Storage trigger, your function needs to be using version 5.x of the Blob Storage extension. ++To upgrade your project to use the latest extension, run the following [dotnet add package](/dotnet/core/tools/dotnet-add-package) command in the Terminal window. ++<!# [In-process](#tab/in-process) --> +```bash +dotnet add package Microsoft.Azure.WebJobs.Extensions.Storage --version 5.0.1 +``` +<!# [Isolated process](#tab/isolated-process) +```bash +dotnet add package Microsoft.Azure.Functions.Worker.Extensions.Storage --version 5.0.0 +``` ++--> - # [C#](#tab/csharp) - Add **Source = BlobTriggerSource.EventGrid** to the function parameters. - - ```csharp - [FunctionName("BlobTriggerCSharp")] - public static void Run([BlobTrigger("samples-workitems/{name}", Source = BlobTriggerSource.EventGrid, Connection = "connection")]Stream myBlob, string name, ILogger log) - { - log.LogInformation($"C# Blob trigger function Processed blob\n Name:{name} \n Size: {myBlob.Length} Bytes"); - } +1. Open the host.json project file and inspect the `extensionBundle` element. ++1. If `extensionBundle.version` isn't at least `3.3.0`, replace `extensionBundle` with the following version: ++ ```json + "extensionBundle": { + "id": "Microsoft.Azure.Functions.ExtensionBundle", + "version": "[3.3.0, 4.0.0)" + } ``` - # [Python](#tab/python) - Add **"source": "EventGrid"** to the function.json binding data. ++## Update the function to use events ++Open the BlobTriggerEventGrid.cs file and add `Source = BlobTriggerSource.EventGrid` to the parameters for the blob trigger attribute, as shown in the following example: - ```json +```csharp +[FunctionName("BlobTriggerCSharp")] +public static void Run([BlobTrigger("samples-workitems/{name}", Source = BlobTriggerSource.EventGrid, Connection = "<NAMED_STORAGE_CONNECTION>")]Stream myBlob, string name, ILogger log) +{ + log.LogInformation($"C# Blob trigger function Processed blob\n Name:{name} \n Size: {myBlob.Length} Bytes"); +} +``` +After the function is created, add `"source": "EventGrid"` to the `myBlob` binding in the function.json configuration file, as shown in the following example: + +```json +{ + "scriptFile": "__init__.py", + "bindings": [ {- "scriptFile": "__init__.py", - "bindings": [ - { - "name": "myblob", - "type": "blobTrigger", - "direction": "in", - "path": "samples-workitems/{name}", - "source": "EventGrid", - "connection": "MyStorageAccountConnectionString" + "name": "myblob", + "type": "blobTrigger", + "direction": "in", + "path": "samples-workitems/{name}", + "source": "EventGrid", + "connection": "<NAMED_STORAGE_CONNECTION>" + } + ] +} +``` +1. Replace the contents of the generated `Function.java` file with the following code and rename the file to `BlobTriggerEventGrid.java`: ++ ```java + package com.function; ++ import com.microsoft.azure.functions.annotation.*; + import com.microsoft.azure.functions.*; ++ /** + * Azure Functions with Azure Blob trigger. + */ + public class BlobTriggerEventGrid { + /** + * This function will be invoked when a new or updated blob is detected at the specified path. The blob contents are provided as input to this function.
+ */ + @FunctionName("BlobTriggerEventGrid") + @StorageAccount("glengatesteventgridblob_STORAGE") + public void run( + @BlobTrigger(name = "content", path = "samples-workitems/{name}", dataType = "binary", source = "EventGrid" ) byte[] content, + @BindingName("name") String name, + final ExecutionContext context + ) { + context.getLogger().info("Java Blob trigger function processed a blob. Name: " + name + "\n Size: " + content.length + " Bytes"); }- ] } ```-- # [Java](#tab/java) - **Press F5** to build the function. Once the build is complete, add **"source": "EventGrid"** to the **function.json** binding data. +2. Remove the associated unit test file, which is no longer relevant to the new trigger type. +After the function is created, add `"source": "EventGrid"` to the `myBlob` binding in the function.json configuration file, as shown in the following example: - ```json +```json +{ + "bindings": [ {- "scriptFile" : "../java-1.0-SNAPSHOT.jar", - "entryPoint" : "com.function.{MyFunctionName}.run", - "bindings" : [ { - "type" : "blobTrigger", - "direction" : "in", - "name" : "content", - "path" : "samples-workitems/{name}", - "dataType" : "binary", - "source": "EventGrid", - "connection" : "MyStorageAccountConnectionString" - } ] + "name": "myblob", + "type": "blobTrigger", + "direction": "in", + "path": "samples-workitems/{name}", + "source": "EventGrid", + "connection": "<NAMED_STORAGE_CONNECTION>" }+ ] +} + ``` ++## Start local debugging ++Event Grid validates the endpoint URL when you create an event subscription in the Azure portal. This validation means that before you can create an event subscription for local debugging, your function must be running locally with remote access enabled by the ngrok utility. If your local function code isn't running and accessible to Azure, you won't be able to create the event subscription. ++### Determine the blob trigger endpoint ++When your function runs locally, the default endpoint used for an event-driven blob storage trigger looks like the following URL: ++```http +http://localhost:7071/runtime/webhooks/blobs?functionName=BlobTriggerEventGrid +``` +```http +http://localhost:7071/runtime/webhooks/blobs?functionName=Host.Functions.BlobTriggerEventGrid +``` ++Save this path, which you'll use later to create endpoint URLs for event subscriptions. If you used a different name for your Blob Storage triggered function, you need to change the `functionName` value in the query string. ++> [!NOTE] +> Because the endpoint is handling events for a Blob Storage trigger, the endpoint path includes `blobs`. The endpoint URL for an Event Grid trigger would instead have `eventgrid` in the path. ++### Run ngrok ++To break into a function being debugged on your machine, you must provide a way for Azure Event Grid to communicate with functions running on your local computer. ++The [ngrok](https://ngrok.com/) utility forwards external requests made to a randomly generated proxy server address through to a specific address and port on your local computer, allowing Azure Event Grid to call the webhook endpoint of the function running on your machine. ++1. Start *ngrok* using the following command: ++ ```bash + ngrok.exe http http://localhost:7071 ``` - + As the utility starts, the command window should look similar to the following screenshot: -1. Set a breakpoint in your function on the line that handles logging. +  -1. Start a debugging session. +1. Copy the **HTTPS** URL generated when *ngrok* is run. This value is used to determine the webhook endpoint on your computer exposed using ngrok.
++> [!IMPORTANT] +> At this point, don't stop `ngrok`. Every time you start `ngrok`, the HTTPS URL is regenerated with a different value. Because the endpoint of an event subscription can't be modified, you have to create a new event subscription every time you run `ngrok`. +> +> Unless you create an ngrok account, the maximum ngrok session time is limited to two hours. ++### Build the endpoint URL ++The endpoint used in the event subscription is made up of three different parts: a prefixed server name, a path, and a query string. The following table describes these parts: - # [C#](#tab/csharp) - **Press F5** to start a debugging session. +| URL part | Description | +| | | +| Prefix and server name | When your function runs locally, the server name with an `https://` prefix comes from the **Forwarding** URL generated by *ngrok*. In the localhost URL, the *ngrok* URL replaces `http://localhost:7071`. When running in Azure, you'll instead use the published function app server, which is usually in the form `https://<FUNCTION_APP_NAME>.azurewebsites.net`. | +| Path | The path portion of the endpoint URL comes from the localhost URL copied earlier, and looks like `/runtime/webhooks/blobs` for a Blob Storage trigger. The path for an Event Grid trigger would be `/runtime/webhooks/EventGrid`. | +| Query string | The `functionName=BlobTriggerEventGrid` parameter in the query string sets the name of the function that handles the event. For functions other than C#, the function name is qualified by `Host.Functions.`. If you used a different name for your function, you'll need to change this value. An access key isn't required when running locally. When running in Azure, you'll also need to include a `code=` parameter in the URL, which contains a key that you can get from the portal. | ++The following screenshot shows an example of how the final endpoint URL should look when using a Blob Storage trigger named `BlobTriggerEventGrid`: ++  +  ++### Start debugging ++With ngrok already running, start your local project as follows: ++1. Set a breakpoint in your function on the line that handles logging. - # [Python](#tab/python) - **Press F5** to start a debugging session. +1. Start a debugging session. - # [Java](#tab/java) - Open a new terminal and run the below mvn command to start the debugging session. + ::: zone pivot="programming-language-java" + Open a new terminal and run the following `mvn` command to start the debugging session. ```bash mvn azure-functions:run ```+ ::: zone-end + ::: zone pivot="programming-language-javascript,programming-language-powershell,programming-language-python,programming-language-csharp" + Press **F5** to start a debugging session. + ::: zone-end - +With your code running and ngrok forwarding requests, it's time to create an event subscription to the blob container. +## Create the event subscription -## Debug the function -Once the Blob Trigger recognizes a new file is uploaded to the storage container, the break point is hit in your local function. +An event subscription, powered by Azure Event Grid, raises events based on changes in the linked blob container. This event is then sent to the webhook endpoint on your function's trigger. After an event subscription is created, the endpoint URL can't be changed. This means that after you're done with local debugging (or if you restart ngrok), you'll need to delete and recreate the event subscription. -## Deployment +1. In Visual Studio Code, choose the Azure icon in the Activity bar.
In **Resources**, expand your subscription, expand **Storage accounts**, right-click the storage account you created earlier, and select **Open in portal**. -As you deploy the function app to Azure, update the webhook endpoint from your local endpoint to your deployed app endpoint. To update an endpoint, follow the steps in [Add a storage event](#add-a-storage-event) and use the below for the webhook URL in step 5. The `<BLOB-EXTENSION-KEY>` can be found in the **App Keys** section from the left menu of your **Function App**. +1. Sign in to the [Azure portal](https://portal.azure.com) and make a note of the **Resource group** for your storage account. You'll create your other resources in the same group to make it easier to clean up resources when you're done. -# [C#](#tab/csharp) +1. Select the **Events** option from the left menu. -```http -https://<FUNCTION-APP-NAME>.azurewebsites.net/runtime/webhooks/blobs?functionName=<FUNCTION-NAME>&code=<BLOB-EXTENSION-KEY> -``` +  -# [Python](#tab/python) +1. In the **Events** window, select the **+ Event Subscription** button, and provide values from the following table into the **Basic** tab: -```http -https://<FUNCTION-APP-NAME>.azurewebsites.net/runtime/webhooks/blobs?functionName=Host.Functions.<FUNCTION-NAME>&code=<BLOB-EXTENSION-KEY> -``` + | Setting | Suggested value | Description | + | | - | -- | + | **Name** | *myBlobLocalNgrokEventSub* | Name that identifies the event subscription. You can use the name to quickly find the event subscription. | + | **Event Schema** | **Event Grid Schema** | Use the default schema for events. | + | **System Topic Name** | *samples-workitems-blobs* | Name for the topic, which represents the container. The topic is created with the first subscription, and you'll use it for future event subscriptions. | + | **Filter to Event Types** | *Blob Created*<br/>*Blob Deleted* | The blob storage events that are delivered to your endpoint. | + | **Endpoint Type** | **Web Hook** | The blob storage trigger uses a web hook endpoint. You would use Azure Functions for an Event Grid trigger. | + | **Endpoint** | Your ngrok-based URL endpoint | Use the ngrok-based URL endpoint that you determined earlier. | ++1. Select **Confirm selection** to validate the endpoint URL. ++1. Select **Create** to create the event subscription. ++## Upload a file to the container ++With the event subscription in place and your code project and ngrok still running, you can now upload a file to your storage container to trigger your function. You can upload a file from your computer to your blob storage container using Visual Studio Code. ++1. In Visual Studio Code, open the command palette (press F1) and type `Azure Storage: Upload Files...`. ++1. In the **Open** dialog box, choose a file, preferably a binary image file that's not too large, and then select **Upload**. ++1. Provide the following information at the prompts: ++ | Setting | Suggested value | Description | + | | - | -- | + | **Select a resource** | Storage account name | Choose the name of the storage account you created in a previous step. | + | **Select a resource type** | **Blob Containers** | You're uploading to a blob container. | + | **Select Blob Container** | **samples-workitems** | This value is the name of the container you created in a previous step. | + | **Enter the destination directory of this upload** | default | Just accept the default value of `/`, which is the container root. | ++This command uploads a file from your computer to the storage container in Azure.
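If you'd rather trigger the function from a shell, a comparable upload can be done with the Azure CLI; the storage account name and file are placeholders:

```bash
# Upload a local file into the samples-workitems container to raise a Blob Created event.
az storage blob upload \
  --account-name <STORAGE_NAME> \
  --container-name samples-workitems \
  --name test-image.jpg \
  --file ./test-image.jpg \
  --auth-mode login
```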
At this point, your running ngrok instance should report that a request was forwarded. You'll also see in the func.exe output for your debugging session that your function has been started. At this point, your debug session should be paused at the breakpoint you set. ++## Publish the project to Azure ++Now that you've successfully validated your function code locally, it's time to publish the project to a new function app in Azure. ++### Create the function app ++The following steps create the resources you need in Azure and deploy your project files. ++1. In the command palette, enter **Azure Functions: Create function app in Azure...(Advanced)**. ++1. Following the prompts, provide this information: ++ | Prompt | Selection | + | | -- | + | **Enter a globally unique name for the new function app.** | Type a globally unique name that identifies your new function app and then select Enter. Valid characters for a function app name are `a-z`, `0-9`, and `-`. Write down this name; you'll need it later when building the new endpoint URL. | + | **Select a runtime stack.** | Choose the language version on which you've been running locally. | + | **Select an OS.** | Choose either Linux or Windows. Python apps must run on Linux. | + | **Select a resource group for new resources.** | Choose the name of the resource group you created with your storage account, which you previously noted in the portal. | + | **Select a location for new resources.** | Select a location in a [region](https://azure.microsoft.com/regions/) near you or near other services that your functions access. | + | **Select a hosting plan.** | Choose **Consumption** for serverless [Consumption plan hosting](consumption-plan.md), where you're only charged when your functions run. | + | **Select a storage account.** | Choose the name of the existing storage account that you've been using. | + | **Select an Application Insights resource for your app.** | Choose **Create new Application Insights resource** and at the prompt, type a name for the instance used to store runtime data from your functions.| ++ A notification appears after your function app is created and the deployment package is applied. Select **View Output** in this notification to view the creation and deployment results, including the Azure resources that you created. ++### Deploy the function code +++### Publish application settings ++Because the local settings from local.settings.json aren't automatically published, you must upload them now so that your functions run correctly in Azure. ++In the command palette, enter **Azure Functions: Upload Local Settings...**, and in the **Select a resource.** prompt, choose the name of your function app. -# [Java](#tab/java) ## Recreate the event subscription +Now that the function app is running in Azure, you need to create a new event subscription. This new event subscription uses the endpoint of your function in Azure. You'll also add a filter to the event subscription so that the function is only triggered when JPEG (.jpg) files are added to the container. In Azure, the endpoint URL also contains an access key, which helps to block actors other than Event Grid from accessing the endpoint. ++### Get the blob extension key ++1. In Visual Studio Code, choose the Azure icon in the Activity bar. In **Resources**, expand your subscription, expand **Function App**, right-click the function app you created, and select **Open in portal**. ++1. Under **Functions** in the left menu, select **App keys**. + +1.
Under **System keys**, select the key named **blobs_extension** and copy its **Value**. ++You'll include this value in the query string of the new endpoint URL. ++### Build the endpoint URL ++Create a new endpoint URL for the Blob Storage trigger based on the following example: + ```http-https://<FUNCTION-APP-NAME>.azurewebsites.net/runtime/webhooks/blobs?functionName=Host.Functions.<FUNCTION-NAME>&code=<BLOB-EXTENSION-KEY> +https://<FUNCTION_APP_NAME>.azurewebsites.net/runtime/webhooks/blobs?functionName=BlobTriggerEventGrid&code=<BLOB_EXTENSION_KEY> ```+```http +https://<FUNCTION_APP_NAME>.azurewebsites.net/runtime/webhooks/blobs?functionName=Host.Functions.BlobTriggerEventGrid&code=<BLOB_EXTENSION_KEY> +``` -+In this example, replace `<FUNCTION_APP_NAME>` with the name of your function app and replace `<BLOB_EXTENSION_KEY>` with the value you got from the portal. If you used a different name for your function, you'll also need to change the `functionName` query string as needed. ++### Create a filtered event subscription ++Because the endpoint URL of an event subscription can't be changed, you must create a new event subscription. You should also delete the old event subscription at this time, since it can't be reused. ++This time, you'll include a filter on the event subscription so that only JPEG files (*.jpg) trigger the function. ++1. In Visual Studio Code, choose the Azure icon in the Activity bar. In **Resources**, expand your subscription, expand **Storage accounts**, right-click the storage account you created earlier, and select **Open in portal**. ++1. In the [Azure portal](https://portal.azure.com), select the **Events** option from the left menu. ++1. In the **Events** window, select your old ngrok-based event subscription, and then select **Delete** > **Save**. This action removes the old event subscription. ++1. Select the **+ Event Subscription** button, and provide values from the following table into the **Basic** tab: ++ | Setting | Suggested value | Description | + | | - | -- | + | **Name** | *myBlobAzureEventSub* | Name that identifies the event subscription. You can use the name to quickly find the event subscription. | + | **Event Schema** | **Event Grid Schema** | Use the default schema for events. | + | **Filter to Event Types** | *Blob Created*<br/>*Blob Deleted* | The blob storage events that are delivered to your endpoint. | + | **Endpoint Type** | **Web Hook** | The blob storage trigger uses a web hook endpoint. You would use Azure Functions for an Event Grid trigger. | + | **Endpoint** | Your new Azure-based URL endpoint | Use the URL endpoint that you built, which includes the key value. | ++1. Select **Confirm selection** to validate the endpoint URL. ++1. Select the **Filters** tab. Under **Subject filters**, check **Enable subject filtering** and type `.jpg` in **Subject ends with**. This filters events to JPEG files only. ++  ++1. Select **Create** to create the event subscription. ++## Verify the function in Azure ++With the entire topology now running in Azure, it's time to verify that everything is working correctly. Since you're already in the portal, it's easiest to just upload a file from there. ++1. In your storage account page in the portal, select **Containers** and select your **samples-workitems** container. ++1. Select the **Upload** button to open the upload page on the right, browse your local file system to find a `.jpg` file to upload, and then select the **Upload** button to upload the blob. Now, you can verify that your function ran based on the container upload event. ++1.
In your storage account, return to the **Events** page, select **Event Subscriptions**, and you should see that an event was delivered. + +1. Back in your function app page in the portal, under **Functions**, select **Functions**, choose your function, and you should see a **Total Execution Count** of at least one. -## Clean up resources +1. Under **Developer**, select **Monitor**, and you should see traces written from your successful function executions. There might be up to a five-minute delay as events are processed by Application Insights. -To clean up the resources created in this article, delete the event grid subscription you created in this tutorial. ## Next steps |
azure-functions | Functions How To Github Actions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-how-to-github-actions.md | Last updated 10/07/2020 -# Continuous delivery by using GitHub Action +# Continuous delivery by using GitHub Actions Use [GitHub Actions](https://github.com/features/actions) to define a workflow to automatically build and deploy code to your function app in Azure Functions. jobs: build-and-deploy: runs-on: ubuntu-latest steps:- - name: 'Checkout GitHub Action' + - name: 'Checkout GitHub action' uses: actions/checkout@v2 - name: Setup DotNet ${{ env.DOTNET_VERSION }} Environment jobs: pushd './${{ env.AZURE_FUNCTIONAPP_PACKAGE_PATH }}' dotnet build --configuration Release --output ./output popd- - name: 'Run Azure Functions Action' + - name: 'Run Azure Functions action' uses: Azure/functions-action@v1 with: app-name: ${{ env.AZURE_FUNCTIONAPP_NAME }} jobs: build-and-deploy: runs-on: windows-latest steps:- - name: 'Checkout GitHub Action' + - name: 'Checkout GitHub action' uses: actions/checkout@v2 - name: Setup DotNet ${{ env.DOTNET_VERSION }} Environment jobs: pushd './${{ env.AZURE_FUNCTIONAPP_PACKAGE_PATH }}' dotnet build --configuration Release --output ./output popd- - name: 'Run Azure Functions Action' + - name: 'Run Azure Functions action' uses: Azure/functions-action@v1 with: app-name: ${{ env.AZURE_FUNCTIONAPP_NAME }} jobs: build-and-deploy: runs-on: ubuntu-latest steps:- - name: 'Checkout GitHub Action' + - name: 'Checkout GitHub action' uses: actions/checkout@v2 - name: Setup Java Sdk ${{ env.JAVA_VERSION }} jobs: mvn clean package mvn azure-functions:package popd- - name: 'Run Azure Functions Action' + - name: 'Run Azure Functions action' uses: Azure/functions-action@v1 with: app-name: ${{ env.AZURE_FUNCTIONAPP_NAME }} jobs: build-and-deploy: runs-on: windows-latest steps:- - name: 'Checkout GitHub Action' + - name: 'Checkout GitHub action' uses: actions/checkout@v2 - name: Setup Java Sdk ${{ env.JAVA_VERSION }} jobs: mvn clean package mvn azure-functions:package popd- - name: 'Run Azure Functions Action' + - name: 'Run Azure Functions action' uses: Azure/functions-action@v1 with: app-name: ${{ env.AZURE_FUNCTIONAPP_NAME }} jobs: build-and-deploy: runs-on: ubuntu-latest steps:- - name: 'Checkout GitHub Action' + - name: 'Checkout GitHub action' uses: actions/checkout@v2 - name: Setup Node ${{ env.NODE_VERSION }} Environment jobs: npm run build --if-present npm run test --if-present popd- - name: 'Run Azure Functions Action' + - name: 'Run Azure Functions action' uses: Azure/functions-action@v1 with: app-name: ${{ env.AZURE_FUNCTIONAPP_NAME }} jobs: build-and-deploy: runs-on: windows-latest steps:- - name: 'Checkout GitHub Action' + - name: 'Checkout GitHub action' uses: actions/checkout@v2 - name: Setup Node ${{ env.NODE_VERSION }} Environment jobs: npm run build --if-present npm run test --if-present popd- - name: 'Run Azure Functions Action' + - name: 'Run Azure Functions action' uses: Azure/functions-action@v1 with: app-name: ${{ env.AZURE_FUNCTIONAPP_NAME }} jobs: build-and-deploy: runs-on: ubuntu-latest steps:- - name: 'Checkout GitHub Action' + - name: 'Checkout GitHub action' uses: actions/checkout@v2 - name: Setup Python ${{ env.PYTHON_VERSION }} Environment jobs: python -m pip install --upgrade pip pip install -r requirements.txt --target=".python_packages/lib/site-packages" popd- - name: 'Run Azure Functions Action' + - name: 'Run Azure Functions action' uses: Azure/functions-action@v1 with: 
app-name: ${{ env.AZURE_FUNCTIONAPP_NAME }} |
azure-functions | Functions Manually Run Non Http | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-manually-run-non-http.md | Open Postman and follow these steps: ## Next steps - [Strategies for testing your code in Azure Functions](./functions-test-a-function.md)-- [Azure Function Event Grid Trigger Local Debugging](./functions-debug-event-grid-trigger-local.md)+- [Event Grid local testing with viewer web app](./event-grid-how-tos.md#local-testing-with-viewer-web-app) |
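For reference, the Postman steps in this article amount to a single authenticated POST to the function's admin endpoint. A hedged curl sketch follows, where `<APP_NAME>`, `<FUNCTION_NAME>`, and `<MASTER_KEY>` are placeholders (the master key is available under **App keys** in the portal):

```bash
# Manually run a non-HTTP-triggered function by POSTing to the admin endpoint.
# The JSON body's "input" value is passed to the function as its trigger input.
curl -X POST "https://<APP_NAME>.azurewebsites.net/admin/functions/<FUNCTION_NAME>" \
  -H "x-functions-key: <MASTER_KEY>" \
  -H "Content-Type: application/json" \
  -d '{ "input": "test" }'
```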
azure-monitor | Agents Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/agents-overview.md | The tables below provide a comparison of Azure Monitor Agent with the legacy the | | Event Hub | | | X | | **Services and features supported** | | | | | | | Microsoft Sentinel | X ([View scope](#supported-services-and-features)) | X | |-| | VM Insights | | X (Public preview) | | -| | Azure Automation | | X | | -| | Microsoft Defender for Cloud | | X | | +| | VM Insights | X (Public preview) | X | | +| | Microsoft Defender for Cloud | X (Public preview) | X | | +| | Update Management | X (Public preview, independent of monitoring agents) | X | | +| | Change Tracking | | X | | ### Linux agents The tables below provide a comparison of Azure Monitor Agent with the legacy the | | Azure Storage | | | X | | | | Event Hub | | | X | | | **Services and features supported** | | | | | |-| | Microsoft Sentinel | X ([View scope](#supported-services-and-features)) | X | | | -| | VM Insights | X (Public preview) | X | | | -| | Container Insights | X (Public preview) | X | | | -| | Azure Automation | | X | | | -| | Microsoft Defender for Cloud | | X | | | +| | Microsoft Sentinel | X ([View scope](#supported-services-and-features)) | X | | +| | VM Insights | X (Public preview) | X | | +| | Microsoft Defender for Cloud | X (Public preview) | X | | +| | Update Management | X (Public preview, independent of monitoring agents) | X | | +| | Change Tracking | | X | | <sup>1</sup> To review other limitations of using Azure Monitor Metrics, see [quotas and limits](../essentials/metrics-custom-overview.md#quotas-and-limits). On Linux, using Azure Monitor Metrics as the only destination is supported in v.1.10.9.0 or higher. The following tables list the operating systems that Azure Monitor Agent and the #### Linux -| Operating system | Azure Monitor agent <sup>1</sup> | Log Analytics agent <sup>1</sup> | Diagnostics extension <sup>2</sup>| +| Operating system | Azure Monitor agent <sup>1</sup> | Log Analytics agent <sup>1</sup> | Diagnostics extension <sup>2</sup>| |:|::|::|::|::-| AlmaLinux 8.* | X | X | | +| AlmaLinux 8 | X | X | | | Amazon Linux 2017.09 | | X | | | Amazon Linux 2 | | X | |-| CentOS Linux 8 | X <sup>3</sup> | X | | +| CentOS Linux 8 | X | X | | | CentOS Linux 7 | X | X | X | | CentOS Linux 6 | | X | | | CentOS Linux 6.5+ | | X | X |-| Debian 11 <sup>1</sup> | X | | | -| Debian 10 <sup>1</sup> | X | X | | +| Debian 11 | X | | | +| Debian 10 | X | X | | | Debian 9 | X | X | X | | Debian 8 | | X | | | Debian 7 | | | X | | OpenSUSE 13.1+ | | | X |-| Oracle Linux 8 | X <sup>3</sup> | X | | +| Oracle Linux 8 | X | X | | | Oracle Linux 7 | X | X | X | | Oracle Linux 6 | | X | | | Oracle Linux 6.4+ | | X | X |-| Red Hat Enterprise Linux Server 8.5, 8.6 | X | X | | -| Red Hat Enterprise Linux Server 8, 8.1, 8.2, 8.3, 8.4 | X <sup>3</sup> | X | | +| Red Hat Enterprise Linux Server 8 | X | X | | | Red Hat Enterprise Linux Server 7 | X | X | X | | Red Hat Enterprise Linux Server 6 | | X | | | Red Hat Enterprise Linux Server 6.7+ | | X | X |-| Rocky Linux 8.* | X | X | | -| SUSE Linux Enterprise Server 15.2 | X <sup>3</sup> | | | -| SUSE Linux Enterprise Server 15.1 | X <sup>3</sup> | X | | +| Rocky Linux 8 | X | X | | +| SUSE Linux Enterprise Server 15 SP2 | X | | | | SUSE Linux Enterprise Server 15 SP1 | X | X | | | SUSE Linux Enterprise Server 15 | X | X | |-| SUSE Linux Enterprise Server 12 SP5 | X | X | X | | SUSE Linux Enterprise Server 12 | X | X | X | | Ubuntu 
22.04 LTS | X | | | | Ubuntu 20.04 LTS | X | X | X | The following tables list the operating systems that Azure Monitor Agent and the | Ubuntu 14.04 LTS | | X | X | <sup>1</sup> Requires Python (2 or 3) to be installed on the machine.<br>-<sup>2</sup> Known issue collecting Syslog events in versions prior to 1.9.0.<br> -<sup>3</sup> Not all kernel versions are supported. For more information, see [Dependency Agent Linux support](../vm/vminsights-dependency-agent-maintenance.md#dependency-agent-linux-support). +<sup>2</sup> Requires Python 2 to be installed on the machine and aliased to the `python` command.<br> ## Next steps |
azure-monitor | Data Collection Text Log | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/data-collection-text-log.md | The custom table must be created before you can send data to it. When you create Use the **Tables - Update** API to create the table with the PowerShell code below. This code creates a table called *MyTable_CL* with two columns. Modify this schema to collect a different table. > [!IMPORTANT]-> Custom tables must use a suffix of *_CL* as in *tablename_CL*. The *tablename_CL* in the DataFlows Streams must match the *tablename_CL* name created in the log Analytics workspace. +> Custom tables have a suffix of *_CL*; for example, *tablename_CL*. The *tablename_CL* in the DataFlows Streams must match the *tablename_CL* name in the Log Analytics workspace. 1. Click the **Cloud Shell** button in the Azure portal and ensure the environment is set to **PowerShell**. The [data collection rule (DCR)](../essentials/data-collection-rule-overview.md) See [Structure of a data collection rule in Azure Monitor (preview)](../essentials/data-collection-rule-structure.md#custom-logs) if you want to modify the text log DCR. > [!IMPORTANT]- > Custom tables must use a suffix of *_CL* as in *tablename_CL*. The *tablename_CL* in the DataFlows Streams must match the *tablename_CL* name created in the log Analytics workspace. + > Custom tables have a suffix of *_CL*; for example, *tablename_CL*. The *tablename_CL* in the DataFlows Streams must match the *tablename_CL* name in the Log Analytics workspace. ```json { |
azure-monitor | Activity Log Alerts Webhook | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/activity-log-alerts-webhook.md | Title: Understand the webhook schema used in activity log alerts + Title: Configure the webhook to get activity log alerts description: Learn about the schema of the JSON that is posted to a webhook URL when an activity log alert activates. Last updated 03/31/2017 -# Webhooks for Azure activity log alerts +# Webhooks for activity log alerts As part of the definition of an action group, you can configure webhook endpoints to receive activity log alert notifications. With webhooks, you can route these notifications to other systems for post-processing or custom actions. This article shows what the payload for the HTTP POST to a webhook looks like. For more information on activity log alerts, see how to [create Azure activity log alerts](./activity-log-alerts.md). |
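For orientation only, an abbreviated sketch of the payload shape for an administrative activity log alert follows. The field values are invented placeholders, and the article itself remains the authoritative schema reference:

```json
{
  "schemaId": "Microsoft.Insights/activityLogs",
  "data": {
    "status": "Activated",
    "context": {
      "activityLog": {
        "eventSource": "Administrative",
        "operationName": "Microsoft.Compute/virtualMachines/deallocate/action",
        "resourceGroupName": "<RESOURCE_GROUP>",
        "subscriptionId": "<SUBSCRIPTION_ID>",
        "status": "Succeeded"
      }
    },
    "properties": {}
  }
}
```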
azure-monitor | Alerts Smart Detections Migration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-smart-detections-migration.md | A new set of alert rules is created when migrating an Application Insights resou <sup>(2)</sup> Name of new alert rule after migration <sup>(3)</sup> These smart detection capabilities aren't converted to alerts, because of low usage and reassessment of detection effectiveness. These detectors will no longer be supported for this resource once its migration is completed. + > [!NOTE] + > The **Failure Anomalies** smart detector is already created as an alert rule and therefore does not require migration, so it is not covered in this document. + The migration doesn't change the algorithmic design and behavior of smart detection. The same detection performance is expected before and after the change. You need to apply the migration to each Application Insights resource separately. For resources that aren't explicitly migrated, smart detection will continue to work as before. |
azure-monitor | Auto Collect Dependencies | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/auto-collect-dependencies.md | description: Application Insights automatically collect and visualize dependenci ms.devlang: csharp, java, javascript Previously updated : 05/06/2020 Last updated : 08/22/2022 Below is the currently supported list of dependency calls that are automatically ## Node.js -| Communication libraries | Versions | -| |-| -| [HTTP](https://nodejs.org/api/http.html), [HTTPS](https://nodejs.org/api/https.html) | 0.10+ | -| <b>Storage clients</b> | | -| [Redis](https://www.npmjs.com/package/redis) | 2.x - 3.x | -| [MongoDb](https://www.npmjs.com/package/mongodb); [MongoDb Core](https://www.npmjs.com/package/mongodb-core) | 2.x - 3.x | -| [MySQL](https://www.npmjs.com/package/mysql) | 2.x | -| [PostgreSql](https://www.npmjs.com/package/pg); | 6.x - 8.x | -| [pg-pool](https://www.npmjs.com/package/pg-pool) | 1.x - 2.x | -| <b>Logging libraries</b> | | -| [console](https://nodejs.org/api/console.html) | 0.10+ | -| [Bunyan](https://www.npmjs.com/package/bunyan) | 1.x | -| [Winston](https://www.npmjs.com/package/winston) | 2.x - 3.x | +The latest list of [currently supported modules](https://github.com/microsoft/node-diagnostic-channel/tree/master/src/diagnostic-channel-publishers) is maintained on GitHub. ## JavaScript |
azure-monitor | Convert Classic Resource | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/convert-classic-resource.md | Title: Migrate an Application Insights classic resource to a workspace-based resource - Azure Monitor | Microsoft Docs description: Learn about the steps required to upgrade your Application Insights classic resource to the new workspace-based model. Previously updated : 09/23/2020 Last updated : 08/22/2022 Once the migration is complete, you can use [diagnostic settings](../essentials/ - Check your current retention settings under **General** > **Usage and estimated costs** > **Data Retention** for your Log Analytics workspace. This setting will affect how long any new ingested data is stored once you migrate your Application Insights resource. > [!NOTE]- > - If you currently store Application Insights data for longer than the default 90 days and want to retain this larger retention period, you may need to adjust your workspace retention settings. + > - If you currently store Application Insights data for longer than the default 90 days and want to retain this larger retention period after migration, you will need to adjust your [workspace retention settings](https://docs.microsoft.com/azure/azure-monitor/logs/data-retention-archive?tabs=portal-1%2Cportal-2#set-retention-and-archive-policy-by-table) from the default 90 days to the desired longer retention period. > - If you've selected data retention greater than 90 days on data ingested into the Classic Application Insights resource prior to migration, data retention will continue to be billed through that Application Insights resource until that data exceeds the retention period. > - If the retention setting for your Application Insights instance under **Configure** > **Usage and estimated costs** > **Data Retention** is enabled, then use that setting to control the retention days for the telemetry data still saved in your classic resource's storage. |
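As one hedged way to make that retention adjustment from a shell, the Azure CLI exposes the workspace-level retention setting; the resource group and workspace names are placeholders:

```bash
# Raise the workspace default retention from 90 days to, for example, 180 days.
az monitor log-analytics workspace update \
  --resource-group <RESOURCE_GROUP> \
  --workspace-name <WORKSPACE_NAME> \
  --retention-time 180
```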
azure-monitor | Export Telemetry | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/export-telemetry.md | To migrate to diagnostic settings export: > [!CAUTION] > If you want to store diagnostic logs in a Log Analytics workspace, there are two things to consider to avoid seeing duplicate data in Application Insights: > * The destination can't be the same Log Analytics workspace that your Application Insights resource is based on.-> * The Application Insights user can't have access to both the Application Insights resource and the workspace created for diagnostic logs. This can be done with [Azure role-based access control (Azure RBAC)](./resources-roles-access-control.md). +> * The Application Insights user can't have access to both workspaces. This can be done by setting the Log Analytics [Access control mode](/azure/azure-monitor/logs/log-analytics-workspace-overview#permissions) to **Requires workspace permissions** and ensuring through [Azure role-based access control (Azure RBAC)](./resources-roles-access-control.md) that the user only has access to the Log Analytics workspace the Application Insights resource is based on. +> +> These steps are necessary because Application Insights accesses telemetry across Application Insights resources (including Log Analytics workspaces) to provide complete end-to-end transaction operations and accurate application maps. Because diagnostic logs use the same table names, duplicate telemetry can be displayed if the user has access to multiple resources containing the same data. <!--Link references--> |
azure-monitor | Tutorial Asp Net Custom Metrics | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/tutorial-asp-net-custom-metrics.md | + + Title: Application Insights custom metrics with .NET and .NET Core +description: Learn how to use Application Insights to capture locally pre-aggregated metrics for .NET and .NET Core applications. + Last updated : 08/22/2022+ms.devlang: csharp ++++# Capture Application Insights custom metrics with .NET and .NET Core ++In this article, you'll learn how to capture custom metrics with Application Insights in .NET and .NET Core apps. ++Insert a few lines of code in your application to find out what users are doing with it, or to help diagnose issues. You can send telemetry from device and desktop apps, web clients, and web servers. Use the [Application Insights](./app-insights-overview.md) core telemetry API to send custom events and metrics and your own versions of standard telemetry. This API is the same API that the standard Application Insights data collectors use. ++## ASP.NET Core applications ++### Prerequisites ++If you'd like to follow along with the guidance in this article, certain prerequisites are needed. ++* Visual Studio 2022 +* Visual Studio Workloads: ASP.NET and web development, Data storage and processing, and Azure development +* .NET 6.0 +* Azure subscription and user account (with the ability to create and delete resources) +* Deploy the [completed sample application (`2 - Completed Application`)](./tutorial-asp-net-core.md) or an existing ASP.NET Core application with the [Application Insights for ASP.NET Core](https://nuget.org/packages/Microsoft.ApplicationInsights.AspNetCore) NuGet package installed and [configured to gather server-side telemetry](asp-net-core.md#enable-application-insights-server-side-telemetry-visual-studio). ++### Custom metrics overview ++The Application Insights .NET and .NET Core SDKs have two different methods of collecting custom metrics: `TrackMetric()` and `GetMetric()`. The key difference between these two methods is local aggregation. `TrackMetric()` lacks pre-aggregation, while `GetMetric()` has pre-aggregation. The recommended approach is to use aggregation; therefore, `TrackMetric()` is no longer the preferred method of collecting custom metrics. This article walks you through using the `GetMetric()` method and some of the rationale behind how it works. ++#### Pre-aggregating versus non-pre-aggregating API ++`TrackMetric()` sends raw telemetry denoting a metric. It's inefficient to send a single telemetry item for each value. `TrackMetric()` is also inefficient in terms of performance since every `TrackMetric(item)` goes through the full SDK pipeline of telemetry initializers and processors. Unlike `TrackMetric()`, `GetMetric()` handles local pre-aggregation for you and then only submits an aggregated summary metric at a fixed interval of one minute. So if you need to closely monitor some custom metric at the second or even millisecond level, you can do so while incurring only the storage and network traffic cost of monitoring every minute. This behavior also greatly reduces the risk of throttling, since the total number of telemetry items that need to be sent for an aggregated metric is much smaller. ++In Application Insights, custom metrics collected via `TrackMetric()` and `GetMetric()` aren't subject to [sampling](./sampling.md).
In Application Insights, custom metrics collected via `TrackMetric()` and `GetMetric()` aren't subject to [sampling](./sampling.md). Sampling important metrics can lead to scenarios where alerting that you've built around those metrics could become unreliable. By never sampling your custom metrics, you can generally be confident that when your alert thresholds are breached, an alert will fire. But because custom metrics aren't sampled, there are some potential concerns. ++Tracking trends in a metric every second, or at an even more granular interval, can result in: ++- Increased data storage costs. There's a cost associated with how much data you send to Azure Monitor. (The more data you send, the greater the overall cost of monitoring.) +- Increased network traffic/performance overhead. (In some scenarios this overhead could have both a monetary and application performance cost.) +- Risk of ingestion throttling. (The Azure Monitor service drops ("throttles") data points when your app sends a high rate of telemetry in a short time interval.) ++Throttling is a concern because it can lead to missed alerts. The condition to trigger an alert could occur locally and then be dropped at the ingestion endpoint because too much data is being sent. We don't recommend using `TrackMetric()` for .NET and .NET Core unless you've implemented your own local aggregation logic. If you're trying to track every occurrence of an event over a given time period, you may find that [`TrackEvent()`](./api-custom-events-metrics.md#trackevent) is a better fit. Keep in mind, though, that unlike custom metrics, custom events are subject to sampling. You can still use `TrackMetric()` without writing your own local pre-aggregation, but if you do so, be aware of the pitfalls. ++In summary, `GetMetric()` is the recommended approach because it does pre-aggregation: it accumulates values from all the `TrackValue()` calls and sends a summary aggregate once every minute. `GetMetric()` can significantly reduce the cost and performance overhead by sending fewer data points, while still collecting all relevant information. ++## Getting a TelemetryClient instance ++Get an instance of `TelemetryClient` from the dependency injection container in **HomeController.cs**: ++```csharp +//... additional code removed for brevity +using Microsoft.ApplicationInsights; ++namespace AzureCafe.Controllers +{ + public class HomeController : Controller + { + private readonly ILogger<HomeController> _logger; + private AzureCafeContext _cafeContext; + private BlobContainerClient _blobContainerClient; + private TextAnalyticsClient _textAnalyticsClient; + private TelemetryClient _telemetryClient; ++ public HomeController(ILogger<HomeController> logger, AzureCafeContext context, BlobContainerClient blobContainerClient, TextAnalyticsClient textAnalyticsClient, TelemetryClient telemetryClient) + { + _logger = logger; + _cafeContext = context; + _blobContainerClient = blobContainerClient; + _textAnalyticsClient = textAnalyticsClient; + _telemetryClient = telemetryClient; + } ++ //... additional code removed for brevity + } +} +``` ++`TelemetryClient` is thread-safe. ++## TrackMetric ++Application Insights can chart metrics that aren't attached to particular events. For example, you could monitor a queue length at regular intervals. With metrics, the individual measurements are of less interest than the variations and trends, and so statistical charts are useful. ++To send metrics to Application Insights, you can use the `TrackMetric(..)` API. We'll cover the recommended way to send a metric: ++* **Aggregation**. When you work with metrics, every single measurement is rarely of interest. Instead, a summary of what happened during a particular time period is important. Such a summary is called _aggregation_. ++ For example, suppose you record two measurements that together add up to `1` during a particular time period. The aggregate metric sum for that time period is `1` and the count of the metric values is `2`. When you use the aggregation approach, you invoke `TrackMetric` only once per time period and send the aggregate values, as shown in the sketch that follows. We recommend this approach because it can significantly reduce the cost and performance overhead by sending fewer data points to Application Insights, while still collecting all relevant information.
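As a minimal sketch of sending one pre-aggregated data point per period: the `MetricTelemetry` type and its properties come from the SDK's `Microsoft.ApplicationInsights.DataContracts` namespace, and the metric name and specific measurements are hypothetical.

```csharp
using Microsoft.ApplicationInsights;
using Microsoft.ApplicationInsights.DataContracts;

public static class MetricAggregation
{
    // Sends a single aggregate for a period that contained two hypothetical
    // measurements, 0.4 and 0.6 (sum 1, count 2).
    public static void SendAggregate(TelemetryClient telemetryClient)
    {
        var aggregate = new MetricTelemetry
        {
            Name = "ReviewPerformed",
            Sum = 1,    // sum of all measurements in the period
            Count = 2,  // number of measurements in the period
            Min = 0.4,  // smallest measurement in the period
            Max = 0.6   // largest measurement in the period
        };

        // One TrackMetric call per time period instead of one per raw value.
        telemetryClient.TrackMetric(aggregate);
    }
}
```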
### TrackMetric example ++1. From the Visual Studio Solution Explorer, locate and open the **HomeController.cs** file. ++2. Locate the `CreateReview` method and the following code. ++ ```csharp + if (model.Comments != null) + { + var response = _textAnalyticsClient.AnalyzeSentiment(model.Comments); + review.CommentsSentiment = response.Value.Sentiment.ToString(); + } + ``` ++3. Immediately following the previous code, insert the following to add a custom metric. ++ ```csharp + _telemetryClient.TrackMetric("ReviewPerformed", model.Rating); + ``` ++4. Right-click the **AzureCafe** project in Solution Explorer and select **Publish** from the context menu. ++5. Select **Publish** to promote the new code to the Azure App Service. ++6. Once the publish has succeeded, a new browser window opens to the Azure Cafe web application. ++7. Perform various activities in the web application to generate some telemetry. ++ 1. Select **Details** next to a Cafe to view its menu and reviews. ++ 2. On the Cafe screen, select the **Reviews** tab to view and add reviews. Select the **Add review** button to add a review. ++ 3. In the Create a review dialog, enter a name, a rating, and comments, and upload a photo for the review. Once completed, select **Add review**. ++ 4. Repeat adding reviews as desired to generate more telemetry. ++### View metrics in Application Insights ++1. Go to the **Application Insights** resource in the [Azure portal](https://portal.azure.com). ++ :::image type="content" source="media/tutorial-asp-net-custom-metrics/application-insights-resource-group.png" alt-text="First screenshot of a resource group with the Application Insights resource highlighted." lightbox="media/tutorial-asp-net-custom-metrics/application-insights-resource-group.png"::: ++2. From the left menu of the Application Insights resource, select **Logs** from beneath the **Monitoring** section. In the **Tables** pane, double-click on the **customMetrics** table, located under the **Application Insights** tree. Modify the query to retrieve metrics for the custom metric named **ReviewPerformed** as follows, then select **Run** to filter the results. ++ ```kql + customMetrics + | where name == "ReviewPerformed" + ``` ++3. Observe that the results display the rating value present in the review. ++## GetMetric ++As referenced earlier, `GetMetric(..)` is the preferred method for sending metrics. To make use of this method, we'll make some changes to the existing code. ++When running the sample code, you'll see that no telemetry is sent from the application right away. A single telemetry item is sent by around the 60-second mark. ++> [!NOTE] +> `GetMetric` doesn't support tracking the last value (that is, a "gauge") or tracking histograms/distributions. ++### GetMetric example ++1. From the Visual Studio Solution Explorer, locate and open the **HomeController.cs** file. ++2.
Locate the `CreateReview` method and the code added in the previous [TrackMetric example](#trackmetric-example). ++3. Replace the code you added in _Step 3_ with the following code. ++ ```csharp + var metric = _telemetryClient.GetMetric("ReviewPerformed"); + metric.TrackValue(model.Rating); + ``` ++4. Right-click the **AzureCafe** project in Solution Explorer and select **Publish** from the context menu. ++5. Select **Publish** to promote the new code to the Azure App Service. ++6. Once the publish has succeeded, a new browser window opens to the Azure Cafe web application. ++7. Perform various activities in the web application to generate some telemetry. ++ 1. Select **Details** next to a Cafe to view its menu and reviews. ++ 2. On the Cafe screen, select the **Reviews** tab to view and add reviews. Select the **Add review** button to add a review. ++ 3. In the Create a review dialog, enter a name, a rating, and comments, and upload a photo for the review. Once completed, select **Add review**. ++ 4. Repeat adding reviews as desired to generate more telemetry. ++### View metrics in Application Insights ++1. Go to the **Application Insights** resource in the [Azure portal](https://portal.azure.com). ++2. From the left menu of the Application Insights resource, select **Logs** from beneath the **Monitoring** section. In the **Tables** pane, double-click on the **customMetrics** table, located under the **Application Insights** tree. Modify the query to retrieve metrics for the custom metric named **ReviewPerformed** as follows, then select **Run** to filter the results. ++ ```kql + customMetrics + | where name == "ReviewPerformed" + ``` ++3. Observe that the results display the rating value present in the review and the aggregated values. ++## Multi-dimensional metrics ++The examples in the previous section show zero-dimensional metrics. Metrics can also be multi-dimensional. We currently support up to 10 dimensions. ++By default, multi-dimensional metrics within the Metric explorer experience aren't turned on in Application Insights resources. ++>[!NOTE] +> This is a preview feature and additional billing may apply in the future. ++### Enable multi-dimensional metrics ++To enable multi-dimensional metrics for an Application Insights resource, select **Usage and estimated costs** > **Custom Metrics** > **Send custom metrics to Azure Metric Store (With dimensions)** > **OK**. ++After you've made that change and sent new multi-dimensional telemetry, you'll be able to select **Apply splitting**. ++> [!NOTE] +> Only metrics sent after the feature was turned on in the portal will have dimensions stored. ++### Multi-dimensional metrics example ++1. From the Visual Studio Solution Explorer, locate and open the **HomeController.cs** file. ++2. Locate the `CreateReview` method and the code added in the previous [GetMetric example](#getmetric-example). ++3. Replace the code you added in _Step 3_ with the following code. ++ ```csharp + var metric = _telemetryClient.GetMetric("ReviewPerformed", "IncludesPhoto"); + ``` ++4. Still in the `CreateReview` method, change the code to match the following. ++ ```csharp + [HttpPost] + [ValidateAntiForgeryToken] + public ActionResult CreateReview(int id, CreateReviewModel model) + { + //... additional code removed for brevity + var metric = _telemetryClient.GetMetric("ReviewPerformed", "IncludesPhoto"); ++ if (model.ReviewPhoto != null) + { + using (Stream stream = model.ReviewPhoto.OpenReadStream()) + { + //...
additional code removed for brevity + } + + metric.TrackValue(model.Rating, bool.TrueString); + } + else + { + metric.TrackValue(model.Rating, bool.FalseString); + } + //... additional code removed for brevity + } + ``` ++5. Right-click the **AzureCafe** project in Solution Explorer and select **Publish** from the context menu. ++6. Select **Publish** to promote the new code to the Azure App Service. ++7. Once the publish has succeeded, a new browser window opens to the Azure Cafe web application. ++8. Perform various activities in the web application to generate some telemetry. ++ 1. Select **Details** next to a Cafe to view its menu and reviews. ++ 2. On the Cafe screen, select the **Reviews** tab to view and add reviews. Select the **Add review** button to add a review. ++ 3. In the Create a review dialog, enter a name, a rating, and comments, and upload a photo for the review. Once completed, select **Add review**. ++ 4. Repeat adding reviews as desired to generate more telemetry. ++### View logs in Application Insights ++1. Go to the **Application Insights** resource in the [Azure portal](https://portal.azure.com). ++2. From the left menu of the Application Insights resource, select **Logs** from beneath the **Monitoring** section. In the **Tables** pane, double-click on the **customMetrics** table, located under the **Application Insights** tree. Modify the query to retrieve metrics for the custom metric named **ReviewPerformed** as follows, then select **Run** to filter the results. ++ ```kql + customMetrics + | where name == "ReviewPerformed" + ``` ++3. Observe that the results display the rating value present in the review and the aggregated values. ++4. To better observe the **IncludesPhoto** dimension, extract it into a separate column by using the following query. ++ ```kql + customMetrics + | extend IncludesPhoto = tobool(customDimensions.IncludesPhoto) + | where name == "ReviewPerformed" + ``` ++5. Because we reused the same custom metric name as before, results with and without the custom dimension are displayed. To avoid that, update the query to match the following. ++ ```kql + customMetrics + | extend IncludesPhoto = tobool(customDimensions.IncludesPhoto) + | where name == "ReviewPerformed" and isnotnull(IncludesPhoto) + ``` ++### View metrics in Application Insights ++1. Go to the **Application Insights** resource in the [Azure portal](https://portal.azure.com). ++2. From the left menu of the Application Insights resource, select **Metrics** from beneath the **Monitoring** section. ++3. For **Metric Namespace**, select **azure.applicationinsights**. ++4. For **Metric**, select **ReviewPerformed**. ++5. Notice that you aren't yet able to split the metric by your new custom dimension or view your custom dimension with the metrics view. Select **Apply splitting**. ++6. For the custom dimension **Values** to use, select **IncludesPhoto**.
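A note on the dimensional `TrackValue` call used above: the overload that takes a dimension value returns a `bool`, which is `false` when the SDK doesn't record the value (for example, once the cap on distinct dimension values is reached). The following hedged sketch reuses the tutorial's metric and dimension names; the fallback behavior is an illustrative assumption, not part of the tutorial.

```csharp
var metric = _telemetryClient.GetMetric("ReviewPerformed", "IncludesPhoto");

// TrackValue returns false if the value couldn't be recorded against the
// requested dimension value (for example, the dimension cap was reached).
if (!metric.TrackValue(model.Rating, bool.TrueString))
{
    // Fall back to the zero-dimensional series so the measurement still
    // counts toward an aggregate instead of being lost silently.
    _telemetryClient.GetMetric("ReviewPerformed").TrackValue(model.Rating);
}
```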
## Next steps ++* [Metric Explorer](../essentials/metrics-getting-started.md) +* How to enable Application Insights for [ASP.NET Core Applications](./asp-net-core.md) |
azure-monitor | Best Practices Alerts | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/best-practices-alerts.md | Title: Azure Monitor best practices - Alerts and automated actions + Title: 'Azure Monitor best practices: Alerts and automated actions' description: Recommendations for deployment of Azure Monitor alerts and automated actions. -# Deploying Azure Monitor - Alerts and automated actions -This article is part of the scenario [Recommendations for configuring Azure Monitor](best-practices.md). It provides guidance on alerts in Azure Monitor, which proactively notify you of important data or patterns identified in your monitoring data. You can view alerts in the Azure portal, have them send a proactive notification, or have them initiated some automated action to attempt to remediate the issue. +# Deploy Azure Monitor: Alerts and automated actions ++This article is part of the scenario [Recommendations for configuring Azure Monitor](best-practices.md). It provides guidance on alerts in Azure Monitor. Alerts proactively notify you of important data or patterns identified in your monitoring data. You can view alerts in the Azure portal. You can create alerts that: ++- Send a proactive notification. +- Initiate an automated action to attempt to remediate an issue. + ## Alerting strategy-An alerting strategy defines your organizations standards for the types of alert rules that you'll create for different scenarios, how you'll categorize and manage alerts after they're created, and automated actions and notifications that you'll take in response to alerts. Defining an alert strategy assists you defining the configuration of alert rules including alert severity and action groups. -See [Successful alerting strategy](/azure/cloud-adoption-framework/manage/monitor/alerting#successful-alerting-strategy) for factors that you should consider in developing an alerting strategy. +An alerting strategy defines your organization's standards for: +- The types of alert rules that you'll create for different scenarios. +- How you'll categorize and manage alerts after they're created. +- Automated actions and notifications that you'll take in response to alerts. ++Defining an alert strategy assists you in defining the configuration of alert rules, including alert severity and action groups. ++For factors to consider as you develop an alerting strategy, see [Successful alerting strategy](/azure/cloud-adoption-framework/manage/monitor/alerting#successful-alerting-strategy). ## Alert rule types-Alerts in Azure Monitor are created by alert rules which you must create. See the monitoring documentation for each Azure service for guidance on recommended alert rules. Azure Monitor does not have any alert rules by default. -There are multiple types of alert rules defined by the type of data that they use. Each has different capabilities and a different cost. The basic strategy you should follow is to use the alert rule type with the lowest cost that provides the logic that you require. +Alerts in Azure Monitor are created by alert rules that you must create. For guidance on recommended alert rules, see the monitoring documentation for each Azure service. Azure Monitor doesn't have any alert rules by default. ++Multiple types of alert rules are defined by the type of data they use. Each has different capabilities and a different cost. The basic strategy is to use the alert rule type with the lowest cost that provides the logic you require.
-- [Activity log rules](alerts/activity-log-alerts.md). Creates an alert in response to a new Activity log event that matches specified conditions. There is no cost to these alerts so they should be your first choice, although the conditions they can detect are limited. See [Create, view, and manage activity log alerts by using Azure Monitor](alerts/alerts-activity-log.md) for details on creating an Activity log alert.-- [Metric alert rules](alerts/alerts-metric-overview.md). Creates an alert in response to one or more metric values exceeding a threshold. Metric alerts are stateful meaning that the alert will automatically close when the value drops below the threshold, and it will only send out notifications when the state changes. There is a cost to metric alerts, but this is often significantly less than log alerts. See [Create, view, and manage metric alerts using Azure Monitor](alerts/alerts-metric.md) for details on creating a metric alert.-- [Log alert rules](alerts/alerts-unified-log.md). Creates an alert when the results of a schedule query matches specified criteria. They are the most expensive of the alert rules, but they allow the most complex criteria. See [Create, view, and manage log alerts using Azure Monitor](alerts/alerts-log.md) for details on creating a log query alert.-- [Application alerts](app/monitor-web-app-availability.md) allow you to perform proactive performance and availability testing of your web application. You can perform a simple ping test at no cost, but there is a cost to more complex testing. See [Monitor the availability of any website](app/monitor-web-app-availability.md) for a description of the different tests and details on creating them.+- [Activity log rules](alerts/activity-log-alerts.md). Creates an alert in response to a new activity log event that matches specified conditions. There's no cost to these alerts so they should be your first choice, although the conditions they can detect are limited. See [Create, view, and manage activity log alerts by using Azure Monitor](alerts/alerts-activity-log.md) for information on creating an activity log alert. +- [Metric alert rules](alerts/alerts-metric-overview.md). Creates an alert in response to one or more metric values exceeding a threshold. Metric alerts are stateful, which means that the alert will automatically close when the value drops below the threshold, and it will only send out notifications when the state changes. There's a cost to metric alerts, but it's often much less than log alerts. See [Create, view, and manage metric alerts by using Azure Monitor](alerts/alerts-metric.md) for information on creating a metric alert. +- [Log alert rules](alerts/alerts-unified-log.md). Creates an alert when the results of a scheduled query match specified criteria. They're the most expensive of the alert rules, but they allow the most complex criteria. See [Create, view, and manage log alerts by using Azure Monitor](alerts/alerts-log.md) for information on creating a log query alert. +- [Application alerts](app/monitor-web-app-availability.md). Performs proactive performance and availability testing of your web application. You can perform a ping test at no cost, but there's a cost to more complex testing. See [Monitor the availability of any website](app/monitor-web-app-availability.md) for a description of the different tests and information on creating them. ## Alert severity-Each alert rule defines the severity of the alerts that it creates based on the table below.
Alerts in the Azure portal are grouped by level so that you can manage similar alerts together and quickly identify those that require the greatest urgency. ++Each alert rule defines the severity of the alerts that it creates based on the following table. Alerts in the Azure portal are grouped by level so that you can manage similar alerts together and quickly identify alerts that require the greatest urgency. | Level | Name | Description | |:|:|:| | Sev 0 | Critical | Loss of service or application availability or severe degradation of performance. Requires immediate attention. | | Sev 1 | Error | Degradation of performance or loss of availability of some aspect of an application or service. Requires attention but not immediate. |-| Sev 2 | Warning | A problem that does not include any current loss in availability or performance, although has the potential to lead to more sever problems if unaddressed. | -| Sev 3 | Informational | Does not indicate a problem but rather interesting information to an operator such as successful completion of a regular process. | -| Sev 4 | Verbose | Detailed information not useful +| Sev 2 | Warning | A problem that doesn't include any current loss in availability or performance, although it has the potential to lead to more severe problems if unaddressed. | +| Sev 3 | Informational | Doesn't indicate a problem but provides interesting information to an operator, such as successful completion of a regular process. | +| Sev 4 | Verbose | Detailed information that isn't useful. | -You should assess the severity of the condition each rule is identifying to assign an appropriate level. The types of issues you assign to each severity level and your standard response to each should be defined in your alerts strategy. +Assess the severity of the condition each rule is identifying to assign an appropriate level. Define the types of issues you assign to each severity level and your standard response to each in your alerts strategy. ## Action groups-Automated responses to alerts in Azure Monitor are defined in [action groups](alerts/action-groups.md). An action group is a collection of one or more notifications and actions that are fired when an alert is triggered. A single action group can be used with multiple alert rules and contain one or more of the following: -- Notifications. Messages that notify operators and administrators that an alert was created.-- Actions. Automated processes that attempt to correct the detected issue, +Automated responses to alerts in Azure Monitor are defined in [action groups](alerts/action-groups.md). An action group is a collection of one or more notifications and actions that are fired when an alert is triggered. A single action group can be used with multiple alert rules and contain one or more of the following items: ++- **Notifications**: Messages that notify operators and administrators that an alert was created. +- **Actions**: Automated processes that attempt to correct the detected issue. + ## Notifications ++Notifications are messages sent to one or more users to notify them that an alert has been created.
Because a single action group can be used with multiple alert rules, you should design a set of action groups for different sets of administrators and users who will receive the same sets of alerts. Use any of the following types of notifications depending on the preferences of your operators and your organizational standards: - Email - SMS - Push to Azure app - Voice-- Email Azure Resource Manager Role+- Email Azure Resource Manager role ## Actions-Actions are automated responses to an alert. You can use the available actions for any scenario that they support, but the following sections describe how each are typically used. ++Actions are automated responses to an alert. You can use the available actions for any scenario that they support, but the following sections describe how each action is typically used. ### Automated remediation-Use the following actions to attempt automated remediation of the issue identified by the alert. -- Automation runbook - Start either a built-in or custom a runbook in Azure Automation. For example, built-in runbooks are available to perform such functions as restarting or scaling up a virtual machine.-- Azure Function - Start an Azure Function.+Use the following actions to attempt automated remediation of the issue identified by the alert: +- **Automation runbook**: Start a built-in runbook or a custom runbook in Azure Automation. For example, built-in runbooks are available to perform such functions as restarting or scaling up a virtual machine. +- **Azure Functions**: Start an Azure function. ### ITSM and on-call management -- ITSM - Use the [ITSM connector]() to create work items in your ITSM tool based on alerts from Azure Monitor. You first configure the connector and then use the **ITSM** action in alert rules.-- Webhooks - Send the alert to an incident management system that supports webhooks such as PagerDuty and Splunk On-Call.-- Secure webhook - ITSM integration with Azure AD Authentication+- **IT service management (ITSM)**: Use the [ITSM Connector]() to create work items in your ITSM tool based on alerts from Azure Monitor. You first configure the connector and then use the **ITSM** action in alert rules. +- **Webhooks**: Send the alert to an incident management system that supports webhooks such as PagerDuty and Splunk On-Call. +- **Secure webhook**: Integrate ITSM with Azure Active Directory Authentication. +## Minimize alert activity -## Minimizing alert activity -While you want to create alerts for any important information in your environment, you should ensure that you aren't creating excessive alerts and notifications for issues that don't warrant them. Use the following guidelines to minimize your alert activity to ensure that critical issues are surfaced while you don't generate excess information and notifications for administrators. +You want to create alerts for any important information in your environment. But you don't want to create excessive alerts and notifications for issues that don't warrant them. 
To minimize your alert activity to ensure that critical issues are surfaced while you don't generate excess information and notifications for administrators, follow these guidelines: -- See [Successful alerting strategy](/azure/cloud-adoption-framework/manage/monitor/alerting#successful-alerting-strategy) for principles on determining whether a symptom is an appropriate candidate for alerting.+- See [Successful alerting strategy](/azure/cloud-adoption-framework/manage/monitor/alerting#successful-alerting-strategy) to determine whether a symptom is an appropriate candidate for alerting. - Use the **Automatically resolve alerts** option in metric alert rules to resolve alerts when the condition has been corrected.-- Use **Suppress alerts** option in log query alert rules which prevents creating multiple alerts for the same issue.-- Ensure that you use appropriate severity levels for alert rules so that high priority issues can be analyzed together.-- Limit notifications for alerts with a severity of Warning or less since they don't require immediate attention.+- Use the **Suppress alerts** option in log query alert rules to avoid creating multiple alerts for the same issue. +- Ensure that you use appropriate severity levels for alert rules so that high-priority issues can be analyzed together. +- Limit notifications for alerts with a severity of Warning or less because they don't require immediate attention. ## Create alert rules at scale-Since you'll typically want to alert on issues for all of your critical Azure applications and resources, you should leverage methods for creating alert rules at scale. -- Azure Monitor supports monitoring multiple resources of the same type with one metric alert rule for resources that exist in the same Azure region. See [Monitoring at scale using metric alerts in Azure Monitor](alerts/alerts-metric-overview.md#monitoring-at-scale-using-metric-alerts-in-azure-monitor) for a list of Azure services that are currently supported for this feature.-- For metric alert rules for Azure services that don't support multiple resources, leverage automation tools such as CLI and PowerShell with Resource Manager templates to create the same alert rule for multiple resources. See [Resource Manager template samples for metric alert rules in Azure Monitor](alerts/resource-manager-alerts-metric.md) for samples.-- Write queries in log query alert rules to return data for multiple resources. Use the **Split by dimensions** setting in the rule to create separate alerts for each resource.+Typically, you'll want to alert on issues for all your critical Azure applications and resources. Use the following methods for creating alert rules at scale: +- Azure Monitor supports monitoring multiple resources of the same type with one metric alert rule for resources that exist in the same Azure region. For a list of Azure services that are currently supported for this feature, see [Monitoring at scale using metric alerts in Azure Monitor](alerts/alerts-metric-overview.md#monitoring-at-scale-using-metric-alerts-in-azure-monitor). +- For metric alert rules for Azure services that don't support multiple resources, use automation tools such as the Azure CLI and PowerShell with Resource Manager templates to create the same alert rule for multiple resources. For samples, see [Resource Manager template samples for metric alert rules in Azure Monitor](alerts/resource-manager-alerts-metric.md). +- To return data for multiple resources, write queries in log query alert rules. 
Use the **Split by dimensions** setting in the rule to create separate alerts for each resource. > [!NOTE]-> Resource-centric log query alert rules which are currently in public preview allow you to use all resources in a subscription or resource group as a target for a log query alert. +> Resource-centric log query alert rules, currently in public preview, allow you to use all resources in a subscription or resource group as a target for a log query alert.
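To make the webhook action concrete, here's a minimal hedged sketch of a receiver that routes alerts by the severity levels described above. The payload shape (`data.essentials.severity` and `data.essentials.monitorCondition`) assumes Azure Monitor's common alert schema; the endpoint path and the routing rule are hypothetical.

```csharp
using System.Text.Json;

var builder = WebApplication.CreateBuilder(args);
var app = builder.Build();

// Hypothetical endpoint registered as a webhook action in an action group.
app.MapPost("/alert", async (HttpRequest request) =>
{
    using JsonDocument payload = await JsonDocument.ParseAsync(request.Body);

    // Field names assume the Azure Monitor common alert schema.
    JsonElement essentials = payload.RootElement
        .GetProperty("data")
        .GetProperty("essentials");

    string severity = essentials.GetProperty("severity").GetString() ?? "Sev4";      // "Sev0".."Sev4"
    string condition = essentials.GetProperty("monitorCondition").GetString() ?? ""; // "Fired" or "Resolved"

    // Page on-call only for newly fired, high-urgency alerts (Sev 0 and Sev 1);
    // lower severities don't require immediate attention.
    if (condition == "Fired" && (severity == "Sev0" || severity == "Sev1"))
    {
        // Forward to an incident management system here (hypothetical).
    }

    return Results.Ok();
});

app.Run();
```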
## Next steps -- [Define alerts and automated actions from Azure Monitor data](best-practices-alerts.md)+[Define alerts and automated actions from Azure Monitor data](best-practices-alerts.md) |
azure-monitor | Best Practices Cost | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/best-practices-cost.md | Title: Azure Monitor best practices - Cost management + Title: 'Azure Monitor best practices: Cost management' description: Guidance and recommendations for reducing your cost for Azure Monitor. -# Azure Monitor best practices - Cost management -This article provides guidance on reducing your cloud monitoring costs by implementing and managing Azure Monitor in the most cost effective manner. This includes leveraging cost saving features and ensuring that you're not paying for data collection that provides little value. It also provides guidance for regularly monitoring your usage so that you can proactively detect and identify sources responsible for excessive usage. +# Azure Monitor best practices: Cost management +This article provides guidance on reducing your cloud monitoring costs by implementing and managing Azure Monitor in the most cost-effective manner. It explains how to take advantage of cost-saving features to help ensure that you're not paying for data collection that provides little value. It also provides guidance for regularly monitoring your usage so that you can proactively detect and identify sources responsible for excessive usage. ## Understand Azure Monitor charges+ You should start by understanding the different ways that Azure Monitor charges and how to view your monthly bill. See [Azure Monitor cost and usage](usage-estimated-costs.md) for a complete description and the different tools available to analyze your charges. ## Configure workspaces-You can start using Azure Monitor with a single Log Analytics workspace using default options. As your monitoring environment grows though, you will need to make decisions about whether to have multiple services share a single workspace or create multiple workspaces, and you want to evaluate configuration options that allow you to reduce your monitoring costs. ++You can start using Azure Monitor with a single Log Analytics workspace by using default options. As your monitoring environment grows, you'll need to make decisions about whether to have multiple services share a single workspace or create multiple workspaces. You want to evaluate configuration options that allow you to reduce your monitoring costs. ### Configure pricing tier or dedicated cluster-By default, workspaces will use Pay-As-You-Go pricing with no minimum data volume. If you collect a sufficient amount of data though, you can significantly decrease your cost by using a [commitment tier](logs/cost-logs.md#commitment-tiers). You commit to a daily minimum of data collected in exchange for a lower rate. -[Dedicated clusters](logs/logs-dedicated-clusters.md) provide additional functionality and cost savings if you ingest at least 500 GB per day collectively among multiple workspaces in the same region. Unlike commitment tiers, workspaces in a dedicated cluster don't need to individually reach the 500 GB. +By default, workspaces will use pay-as-you-go pricing with no minimum data volume. If you collect a sufficient amount of data, you can significantly decrease your cost by using a [commitment tier](logs/cost-logs.md#commitment-tiers). You commit to a daily minimum of data collected in exchange for a lower rate. -See [Azure Monitor Logs pricing details](logs/cost-logs.md) for details on commitment tiers and guidance on determining which is most appropriate for your level of usage.
See [Usage and estimated costs](usage-estimated-costs.md#usage-and-estimated-costs) to view estimated costs for your usage at different pricing tiers. +[Dedicated clusters](logs/logs-dedicated-clusters.md) provide more functionality and cost savings if you ingest at least 500 GB per day collectively among multiple workspaces in the same region. Unlike commitment tiers, workspaces in a dedicated cluster don't need to individually reach 500 GB. ++See [Azure Monitor Logs pricing details](logs/cost-logs.md) for information on commitment tiers and guidance on determining which is most appropriate for your level of usage. See [Usage and estimated costs](usage-estimated-costs.md#usage-and-estimated-costs) to view estimated costs for your usage at different pricing tiers. ### Optimize workspace configuration-As your monitoring environment becomes more complex, you will need to consider whether to create additional Log Analytics workspaces. This may be as you place resources in additional regions or as you implement additional services that use workspaces such as Azure Sentinel and Microsoft Defender for Cloud. -There can be cost implications with your workspace design, most notably when you combine different services such as operational data from Azure Monitor and security data from Microsoft Sentinel. See [Workspaces with Microsoft Sentinel](logs/cost-logs.md#workspaces-with-microsoft-sentinel) and [Workspaces with Microsoft Defender for Cloud](logs/cost-logs.md#workspaces-with-microsoft-defender-for-cloud) for a description of these implications and guidance on determining the most cost-effective solution for your environment. +As your monitoring environment becomes more complex, you'll need to consider whether to create more Log Analytics workspaces. This need might surface as you place resources in more regions or as you implement more services that use workspaces such as Microsoft Sentinel and Microsoft Defender for Cloud. ++There can be cost implications with your workspace design, most notably when you combine different services such as operational data from Azure Monitor and security data from Microsoft Sentinel. For a description of these implications and guidance on determining the most cost-effective solution for your environment, see: ++ - [Workspaces with Microsoft Sentinel](logs/cost-logs.md#workspaces-with-microsoft-sentinel) +- [Workspaces with Microsoft Defender for Cloud](logs/cost-logs.md#workspaces-with-microsoft-defender-for-cloud) ## Configure tables in each workspace-Except for [tables that don't incur charges](logs/cost-logs.md#data-size-calculation), all data in a Log Analytics workspace is billed at the same rate by default. You may be collecting data though that you query infrequently or that you need to archive for compliance but rarely access. You can significantly reduce your costs by configuring Basic Logs and by optimizing your data retention and archiving. ++Except for [tables that don't incur charges](logs/cost-logs.md#data-size-calculation), all data in a Log Analytics workspace is billed at the same rate by default. You might be collecting data that you query infrequently or that you need to archive for compliance but rarely access. You can significantly reduce your costs by optimizing your data retention and archiving and configuring Basic Logs. ### Configure data retention and archiving-Data collected in a Log Analytics workspace is retained for 31 days at no charge (90 days if Azure Sentinel is enabled on the workspace). 
You can retain data beyond the default for trending analysis or other reporting, but there is a charge for this retention. -Your retention requirement may just be for compliance reasons or for occasional investigation or analysis of historical data. In this case, you should configure [Archived Logs](logs/data-retention-archive.md) which allows you to retain data long term (up to 7 years) at a significantly reduced cost. There is a cost to search archived data or temporarily restore it for analysis. If you require infrequent access to this data though, this cost will be more than offset by the reduced retention cost. +Data collected in a Log Analytics workspace is retained for 31 days at no charge. The time period is 90 days if Microsoft Sentinel is enabled on the workspace. You can retain data beyond the default for trending analysis or other reporting, but there's a charge for this retention. ++Your retention requirement might be for compliance reasons or for occasional investigation or analysis of historical data. In this case, you should configure [Archived Logs](logs/data-retention-archive.md), which allows you to retain data for up to seven years at a reduced cost. There's a cost to search archived data or temporarily restore it for analysis. If you require infrequent access to this data, this cost is more than offset by the reduced retention cost. -You can configure retention and archiving for all tables in a workspace or configure each table separately. This allows you to optimize your costs by setting only the retention you require for each data type. +You can configure retention and archiving for all tables in a workspace or configure each table separately. The options allow you to optimize your costs by setting only the retention you require for each data type. ### Configure Basic Logs (preview)-You can save on data ingestion costs by configuring [certain tables](logs/basic-logs-configure.md#which-tables-support-basic-logs) in your Log Analytics workspace that you primarily use for debugging, troubleshooting and auditing as [Basic Logs](logs/basic-logs-configure.md). Tables configured for Basic Logs have a lower ingestion cost in exchange for reduced features. They can't be used for alerting, their retention is set to eight days, they support a limited version of the query language, and there is a cost for querying them. If you query these tables infrequently though, this query cost can be more than offset by the reduced ingestion cost. ++You can save on data ingestion costs by configuring [certain tables](logs/basic-logs-configure.md#which-tables-support-basic-logs) in your Log Analytics workspace that you primarily use for debugging, troubleshooting, and auditing as [Basic Logs](logs/basic-logs-configure.md). ++Tables configured for Basic Logs have a lower ingestion cost in exchange for reduced features. They can't be used for alerting, their retention is set to eight days, they support a limited version of the query language, and there's a cost for querying them. If you query these tables infrequently, this query cost can be more than offset by the reduced ingestion cost. The decision whether to configure a table for Basic Logs is based on the following criteria:
- The cost savings for data ingestion over a month exceed the expected cost for any expected queries -See [Query Basic Logs in Azure Monitor (Preview)](.//logs/basic-logs-query.md) for details on query limitations and [Configure Basic Logs in Azure Monitor (Preview)](logs/basic-logs-configure.md) for more details about them. +See [Query Basic Logs in Azure Monitor (preview)](.//logs/basic-logs-query.md) for information on query limitations. See [Configure Basic Logs in Azure Monitor (Preview)](logs/basic-logs-configure.md) for more information about Basic Logs. ## Reduce the amount of data collected-The most straightforward strategy to reduce your costs for data ingestion and retention is to reduce the amount of data that you collect. Your goal should be to collect the minimal amount of data to meet your monitoring requirements. If you find that you're collecting data that's not being used for alerting or analysis, then you have an opportunity to reduce your monitoring costs by modifying your configuration to stop collecting data that you don't need. -The configuration change will vary depending on the data source. The following sections provide guidance for configuring common data sources to reduce the data they send to the workspace. +The most straightforward strategy to reduce your costs for data ingestion and retention is to reduce the amount of data that you collect. Your goal should be to collect the minimal amount of data to meet your monitoring requirements. You might find that you're collecting data that's not being used for alerting or analysis. If so, you have an opportunity to reduce your monitoring costs by modifying your configuration to stop collecting data that you don't need. ++The configuration change varies depending on the data source. The following sections provide guidance for configuring common data sources to reduce the data they send to the workspace. ## Virtual machines-Virtual machines can vary significantly in the amount of data they collect, depending on the amount of telemetry generated by the applications and services they have installed. The following table lists the most common data collected from virtual machines and strategies for limiting them for each of the Azure Monitor agents. +Virtual machines can vary significantly in the amount of data they collect, depending on the amount of telemetry generated by the applications and services they have installed. The following table lists the most common data collected from virtual machines and strategies for limiting them for each of the Azure Monitor agents. | Source | Strategy | Log Analytics agent | Azure Monitor agent | |:|:|:|:|-| Event logs | Collect only required event logs and levels. For example, *Information* level events are rarely used and should typically not be collected. For Azure Monitor agent, filter particular event IDs that are frequent but not valuable. | Change the [event log configuration for the workspace](agents/data-sources-windows-events.md) | Change the [data collection rule](agents/data-collection-rule-azure-monitor-agent.md). Use [custom XPath queries](agents/data-collection-rule-azure-monitor-agent.md#filter-events-using-xpath-queries) to filter specific event IDs. | -| Syslog | Reduce the number of facilities collected and only collect required event levels. For example, *Info* and *Debug* level events are rarely used and should typically not be collected. | Change the [syslog configuration for the workspace](agents/data-sources-syslog.md). 
| Change the [data collection rule](agents/data-collection-rule-azure-monitor-agent.md). Use [custom XPath queries](agents/data-collection-rule-azure-monitor-agent.md#filter-events-using-xpath-queries) to filter specific events. | -| Performance counters | Collect only the performance counters required and reduce the frequency of collection. For Azure Monitor agent, consider sending performance data only to Metrics and not Logs. | Change the [performance counter configuration for the workspace](agents/data-sources-performance-counters.md). | Change the [data collection rule](agents/data-collection-rule-azure-monitor-agent.md). Use [custom XPath queries](agents/data-collection-rule-azure-monitor-agent.md#filter-events-using-xpath-queries) to filter specific counters. | -+| Event logs | Collect only required event logs and levels. For example, *Information*-level events are rarely used and should typically not be collected. For the Azure Monitor agent, filter particular event IDs that are frequent but not valuable. | Change the [event log configuration for the workspace](agents/data-sources-windows-events.md). | Change the [data collection rule](agents/data-collection-rule-azure-monitor-agent.md). Use [custom XPath queries](agents/data-collection-rule-azure-monitor-agent.md#filter-events-using-xpath-queries) to filter specific event IDs. | +| Syslog | Reduce the number of facilities collected and only collect required event levels. For example, *Info* and *Debug* level events are rarely used and should typically not be collected. | Change the [Syslog configuration for the workspace](agents/data-sources-syslog.md). | Change the [data collection rule](agents/data-collection-rule-azure-monitor-agent.md). Use [custom XPath queries](agents/data-collection-rule-azure-monitor-agent.md#filter-events-using-xpath-queries) to filter specific events. | +| Performance counters | Collect only the performance counters required and reduce the frequency of collection. For the Azure Monitor agent, consider sending performance data only to Metrics and not Logs. | Change the [performance counter configuration for the workspace](agents/data-sources-performance-counters.md). | Change the [data collection rule](agents/data-collection-rule-azure-monitor-agent.md). Use [custom XPath queries](agents/data-collection-rule-azure-monitor-agent.md#filter-events-using-xpath-queries) to filter specific counters. | ### Use transformations to filter events-The bulk of data collection from virtual machines will be from Windows or Syslog events. While you can provide more filtering with the Azure Monitor agent, you still may be collecting records that provide little value. Use [transformations](essentials//data-collection-transformations.md) to implement more granular filtering and also to filter data from columns that provide little value. For example, you might have a Windows event that's valuable for alerting, but it includes columns with redundant or excessive data. You can create a transformation that allows the event to be collected but removes this excessive data. -See the section below on filtering data with transformations for a summary on where to implement filtering and transformations for different data sources. +The bulk of data collection from virtual machines will be from Windows or Syslog events. While you can provide more filtering with the Azure Monitor agent, you still might be collecting records that provide little value. 
Use [transformations](essentials//data-collection-transformations.md) to implement more granular filtering and also to filter data from columns that provide little value. For example, you might have a Windows event that's valuable for alerting, but it includes columns with redundant or excessive data. You can create a transformation that allows the event to be collected but removes this excessive data. ++See the following section on filtering data with transformations for a summary on where to implement filtering and transformations for different data sources. ### Multi-homing agents-You should be cautious with any configuration using multi-homed agents where a single virtual machine sends data to multiple workspaces since you may be incurring charges for the same data multiple times. If you do multi-home agents, ensure that you're sending unique data to each workspace. -You can also collect duplicate data with a single virtual machine running both the Azure Monitor agent and Log Analytics agent, even if they're both sending data to the same workspace. While the agents can coexist, each works independently without any knowledge of the other. You should continue to use the Log Analytics agent until you [migrate to the Azure Monitor agent](./agents/azure-monitor-agent-migration.md) rather than using both together unless you can ensure that each is collecting unique data. +You should be cautious with any configuration using multi-homed agents where a single virtual machine sends data to multiple workspaces because you might be incurring charges for the same data multiple times. If you do multi-home agents, make sure you're sending unique data to each workspace. -See [Analyze usage in Log Analytics workspace](logs/analyze-usage.md) for guidance on analyzing your collected data to ensure that you aren't collecting duplicate data for the same machine. +You can also collect duplicate data with a single virtual machine running both the Azure Monitor agent and Log Analytics agent, even if they're both sending data to the same workspace. While the agents can coexist, each works independently without any knowledge of the other. Continue to use the Log Analytics agent until you [migrate to the Azure Monitor agent](./agents/azure-monitor-agent-migration.md) rather than using both together unless you can ensure that each is collecting unique data. -## Application Insights -There are multiple methods that you can use to limit the amount of data collected by Application Insights. +See [Analyze usage in Log Analytics workspace](logs/analyze-usage.md) for guidance on analyzing your collected data to make sure you aren't collecting duplicate data for the same machine. -* **Sampling**: [Sampling](app/sampling.md) is the primary tool you can use to tune the amount of data collected by Application Insights. Use sampling to reduce the amount of telemetry that's sent from your applications with minimal distortion of metrics. +## Application Insights -* **Limit Ajax calls**: [Limit the number of Ajax calls](app/javascript.md#configuration) that can be reported in every page view or disable Ajax reporting. Note that disabling Ajax calls will disable [JavaScript correlation](app/javascript.md#enable-distributed-tracing). +There are multiple methods that you can use to limit the amount of data collected by Application Insights: +* **Sampling**: [Sampling](app/sampling.md) is the primary tool you can use to tune the amount of data collected by Application Insights. 
Use sampling to reduce the amount of telemetry that's sent from your applications with minimal distortion of metrics. +* **Limit Ajax calls**: [Limit the number of Ajax calls](app/javascript.md#configuration) that can be reported in every page view or disable Ajax reporting. If you disable Ajax calls, you'll be disabling [JavaScript correlation](app/javascript.md#enable-distributed-tracing) too. * **Disable unneeded modules**: [Edit ApplicationInsights.config](app/configuration-with-applicationinsights-config.md) to turn off collection modules that you don't need. For example, you might decide that performance counters or dependency data aren't required.- * **Pre-aggregate metrics**: If you put calls to TrackMetric in your application, you can reduce traffic by using the overload that accepts your calculation of the average and standard deviation of a batch of measurements. Alternatively, you can use a [pre-aggregating package](https://www.myget.org/gallery/applicationinsights-sdk-labs).+* **Limit the use of custom metrics**: The Application Insights option to [Enable alerting on custom metric dimensions](app/pre-aggregated-metrics-log-metrics.md#custom-metrics-dimensions-and-pre-aggregation) can increase costs. Using this option can result in the creation of more pre-aggregation metrics. +* **Ensure use of updated SDKs**: Earlier versions of the ASP.NET Core SDK and Worker Service SDK [collect many counters by default](app/eventcounters.md#default-counters-collected), which are collected as custom metrics. Use later versions to specify [only required counters](app/eventcounters.md#customizing-counters-to-be-collected). -* **Limit the use of custom metrics**: The Application Insights option to [Enable alerting on custom metric dimensions](app/pre-aggregated-metrics-log-metrics.md#custom-metrics-dimensions-and-pre-aggregation) can increase costs because this can result in the creation of more pre-aggregation metrics. --* **Ensure use of updated SDKs**: Earlier version of the ASP.NET Core SDK and Worker Service SDK [collect a large number of counters by default](app/eventcounters.md#default-counters-collected) which collected as custom metrics. Use later versions to specify [only required counters](app/eventcounters.md#customizing-counters-to-be-collected).
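As a hedged illustration of the sampling and module options above in an ASP.NET Core app, the following sketch uses the SDK's `ApplicationInsightsServiceOptions`; treat the exact option names as assumptions to verify against your SDK version.

```csharp
using Microsoft.ApplicationInsights.AspNetCore.Extensions;

var builder = WebApplication.CreateBuilder(args);

// Sketch: trim Application Insights collection to reduce ingestion volume.
builder.Services.AddApplicationInsightsTelemetry(options =>
{
    // Adaptive sampling is on by default; leave it on as the primary cost
    // lever. Set it to false only if you add your own sampling instead.
    options.EnableAdaptiveSampling = true;

    // Turn off collection modules whose data you don't use.
    options.EnablePerformanceCounterCollectionModule = false;
    options.EnableDependencyTrackingTelemetryModule = false;
});

var app = builder.Build();
app.Run();
```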
Only configure your diagnostic data to collect metrics if you need metric data in the workspace for more complex analysis with log queries. -## Other insights and services -See the documentation for other services that store their data in a Log Analytics workspace for recommendations on optimizing their data usage. Following +Diagnostic settings don't allow granular filtering of resource logs. You might require certain logs in a particular category but not others. In this case, use [transformations](essentials/data-collection-transformations.md) on the workspace to filter logs that you don't require. You can also filter out the value of certain columns that you don't require to save additional cost. -- **Container insights** - [Understand monitoring costs for Container insights](containers/container-insights-cost.md#controlling-ingestion-to-reduce-cost)-- **Microsoft Sentinel** - [Reduce costs for Microsoft Sentinel](../sentinel/billing-reduce-costs.md)-- **Defender for Cloud** - [Setting the security event option at the workspace level](../defender-for-cloud/enable-data-collection.md#setting-the-security-event-option-at-the-workspace-level)+## Other insights and services +See the documentation for other services that store their data in a Log Analytics workspace for recommendations on optimizing their data usage: +- **Container insights**: [Understand monitoring costs for Container insights](containers/container-insights-cost.md#controlling-ingestion-to-reduce-cost) +- **Microsoft Sentinel**: [Reduce costs for Microsoft Sentinel](../sentinel/billing-reduce-costs.md) +- **Defender for Cloud**: [Setting the security event option at the workspace level](../defender-for-cloud/enable-data-collection.md#setting-the-security-event-option-at-the-workspace-level) ## Filter data with transformations (preview)-[Data collection rule transformations in Azure Monitor](essentials//data-collection-transformations.md) allow you to filter incoming data to reduce costs for data ingestion and retention. In addition to filtering records from the incoming data, you can filter out columns in the data, reducing its billable size as described in [Data size calculation](logs/cost-logs.md#data-size-calculation). -Use ingestion-time transformations on the workspace to further filter data for workflows where you don't have granular control. For example, you can select categories in a [diagnostic setting](essentials/diagnostic-settings.md) to collect resource logs for a particular service, but that category might send a variety of records that you don't need. Create a transformation for the table that service uses to filter out records you don't want. +You can use [data collection rule transformations in Azure Monitor](essentials//data-collection-transformations.md) to filter incoming data to reduce costs for data ingestion and retention. In addition to filtering records from the incoming data, you can filter out columns in the data, reducing its billable size as described in [Data size calculation](logs/cost-logs.md#data-size-calculation). ++Use ingestion-time transformations on the workspace to further filter data for workflows where you don't have granular control. For example, you can select categories in a [diagnostic setting](essentials/diagnostic-settings.md) to collect resource logs for a particular service, but that category might also send records that you don't need. Create a transformation for the table that service uses to filter out records you don't want. 
-You can also ingestion-time transformations to lower the storage requirements for records you want by removing columns without useful information. For example, you might have error events in a resource log that you want for alerting, but you don't require certain columns in those records that contain a large amount of data. Create a transformation for that table that removes those columns. +You can also use ingestion-time transformations to lower the storage requirements for records you want by removing columns without useful information. For example, you might have error events in a resource log that you want for alerting. But you might not require certain columns in those records that contain a large amount of data. You can create a transformation for the table that removes those columns. -The following table for methods to apply transformations to different workflows. +The following table shows methods to apply transformations to different workflows. > [!NOTE]-> Azure tables here refers to tables that are created and maintained by Microsoft and documented in the [Azure Monitor Reference](/azure/azure-monitor/reference/). Custom tables are created by custom applications and have a suffix of *_CL* ion their name. +> Azure tables here refers to tables that are created and maintained by Microsoft and documented in the [Azure Monitor reference](/azure/azure-monitor/reference/). Custom tables are created by custom applications and have a suffix of *_CL* in their name. | Source | Target | Description | Filtering method | |:|:|:|:|-| Azure Monitor agent | Azure tables | Collect data from standard sources such as Windows events, syslog, and performance data and send to Azure tables in Log Analytics workspace. | Use XPath in DCR to collect specific data from client machine. Ingestion-time transformations in agent DCR are not yet supported. | +| Azure Monitor agent | Azure tables | Collect data from standard sources such as Windows events, Syslog, and performance data and send to Azure tables in Log Analytics workspace. | Use XPath in the data collection rule (DCR) to collect specific data from client machines. Ingestion-time transformations in the agent DCR aren't yet supported. | | Azure Monitor agent | Custom tables | Collecting data outside of standard data sources is not yet supported. | |-| Log Analytics agent | Azure tables | Collect data from standard sources such as Windows events, syslog, and performance data and send to Azure tables in Log Analytics workspace. | Configure data collection on the workspace. Optionally, create ingestion-time transformation in the workspace DCR to filter records and columns. | -| Log Analytics agent | Custom tables | Configure [custom logs](agents/data-sources-custom-logs.md) on the workspace to collect file based text logs. | Configure ingestion-time transformation in the workspace DCR to filter or transform incoming data. You must first migrate the custom table to the new logs ingestion API. | -| Data Collector API | Custom tables | Use [Data Collector API](logs/data-collector-api.md) to send data to custom tables in the workspace using REST API. | Configure ingestion-time transformation in the workspace DCR to filter or transform incoming data. You must first migrate the custom table to the new logs ingestion API. | -| Logs ingestion API | Custom tables<br>Azure tables | Use [Logs ingestion API](logs/logs-ingestion-api-overview.md) to send data to the workspace using REST API. | Configure ingestion-time transformation in the DCR for the custom log. 
| -| Other data sources | Azure tables | Includes resource logs from diagnostic settings and other Azure Monitor features such as Application insights, Container insights and VM insights. | Configure ingestion-time transformation in the workspace DCR to filter or transform incoming data. | -+| Log Analytics agent | Azure tables | Collect data from standard sources such as Windows events, Syslog, and performance data and send it to Azure tables in the Log Analytics workspace. | Configure data collection on the workspace. Optionally, create ingestion-time transformation in the workspace DCR to filter records and columns. | +| Log Analytics agent | Custom tables | Configure [custom logs](agents/data-sources-custom-logs.md) on the workspace to collect file-based text logs. | Configure ingestion-time transformation in the workspace DCR to filter or transform incoming data. You must first migrate the custom table to the new Logs ingestion API. | +| Data Collector API | Custom tables | Use the [Data Collector API](logs/data-collector-api.md) to send data to custom tables in the workspace by using the REST API. | Configure ingestion-time transformation in the workspace DCR to filter or transform incoming data. You must first migrate the custom table to the new Logs ingestion API. | +| Logs ingestion API | Custom tables<br>Azure tables | Use the [Logs ingestion API](logs/logs-ingestion-api-overview.md) to send data to the workspace by using the REST API. | Configure ingestion-time transformation in the DCR for the custom log. | +| Other data sources | Azure tables | Includes resource logs from diagnostic settings and other Azure Monitor features such as Application Insights, Container insights, and VM insights. | Configure ingestion-time transformation in the workspace DCR to filter or transform incoming data. | ## Monitor workspace and analyze usage-Once you've configured your environment and data collection for cost optimization, you need to continue to monitor it to ensure that you don't experience unexpected increases in billable usage. You should also analyze your usage regularly to determine if you have additional opportunities to reduce your usage, such as further filtering out collected data that has not proven to be useful. +After you've configured your environment and data collection for cost optimization, you need to continue to monitor it to ensure that you don't experience unexpected increases in billable usage. You should also analyze your usage regularly to determine if you have other opportunities to reduce your usage. For example, you might want to further filter out collected data that hasn't proven to be useful. ### Set a daily cap-A [daily cap](logs/daily-cap.md) disables data collection in a Log Analytics workspace for the rest of the day once your configured limit is reached. This should not be used as a method to reduce costs, but rather as a preventative measure to ensure that you don't exceed a particular budget. Daily caps are typically used by organizations that are particularly cost conscious. -When data collection stops, you effectively have no monitoring of features and resources relying on that workspace. Rather than just relying on the daily cap alone, you can configure an alert rule to notify you when data collection reaches some level before the daily cap. This allows you to address any increases before data collection shuts down, or even to temporarily disable collection for less critical resources. 
+A [daily cap](logs/daily-cap.md) disables data collection in a Log Analytics workspace for the rest of the day after your configured limit is reached. A daily cap shouldn't be used as a method to reduce costs but as a preventative measure to ensure that you don't exceed a particular budget. Daily caps are typically used by organizations that are particularly cost conscious. ++When data collection stops, you effectively have no monitoring of features and resources relying on that workspace. Instead of relying on the daily cap alone, you can configure an alert rule to notify you when data collection reaches some level before the daily cap. Notification allows you to address any increases before data collection shuts down, or even to temporarily disable collection for less critical resources. ++See [Set daily cap on Log Analytics workspace](logs/daily-cap.md) for information on how the daily cap works and how to configure one. -See [Set daily cap on Log Analytics workspace](logs/daily-cap.md) for details on how the daily cap works and how to configure one. ### Send alert when data collection is high-In order to avoid unexpected bills, you should be proactively notified anytime you experience excessive usage. This allows you to address any potential anomalies before the end of your billing period. -The following example is a [log alert rule](alerts/alerts-unified-log.md) that sends an alert if the billable data volume ingested in the last 24 hours was greater than 50 GB. Modify the **Alert Logic** to use a different threshold based on expected usage in your environment. You can also increase the frequency to check usage multiple times every day, but this will result in a higher charge for the alert rule. +To avoid unexpected bills, you should be proactively notified anytime you experience excessive usage. Notification allows you to address any potential anomalies before the end of your billing period. ++The following example is a [log alert rule](alerts/alerts-unified-log.md) that sends an alert if the billable data volume ingested in the last 24 hours was greater than 50 GB. Modify the **Alert Logic** setting to use a different threshold based on expected usage in your environment. You can also increase the frequency to check usage multiple times every day, but this option will result in a higher charge for the alert rule. | Setting | Value | |:|:| The following example is a [log alert rule](alerts/alerts-unified-log.md) that s | Actions | Select or add an [action group](alerts/action-groups.md) to notify you when the threshold is exceeded. | | **Details** | | | Severity| Warning |-| Alert rule name | Billable data volume greater than 50 GB in 24 hours | +| Alert rule name | Billable data volume greater than 50 GB in 24 hours. | -See [Analyze usage in Log Analytics workspace](logs/analyze-usage.md) for details on using log queries like the one used here to analyze billable usage in your workspace. +See [Analyze usage in Log Analytics workspace](logs/analyze-usage.md) for information on using log queries like the one used here to analyze billable usage in your workspace. ## Analyze your collected data-When you detect an increase in data collection, then you need methods to analyze your collected data to identify the source of the increase. You should also periodically analyze data collection to determine if there's additional configuration that can decrease your usage further. 
This is particularly important when you add a new set of data sources, such as a new set of virtual machines or onboard a new service. -See [Analyze usage in Log Analytics workspace](logs/analyze-usage.md) for different methods to analyze your collected data and billable usage. This article includes a variety of log queries that will help you identify the source of any data increases and to understand your basic usage patterns. +When you detect an increase in data collection, you need methods to analyze your collected data to identify the source of the increase. You should also periodically analyze data collection to determine if there's additional configuration that can decrease your usage further. This practice is particularly important when you add a new set of data sources, such as a new set of virtual machines, or when you onboard a new service. ++See [Analyze usage in Log Analytics workspace](logs/analyze-usage.md) for different methods to analyze your collected data and billable usage. This article includes various log queries that will help you identify the source of any data increases and understand your basic usage patterns. ## Next steps - See [Azure Monitor cost and usage](usage-estimated-costs.md) for a description of Azure Monitor and how to view and analyze your monthly bill.-- See [Azure Monitor Logs pricing details](logs/cost-logs.md) for details on how charges are calculated for data in a Log Analytics workspace and different configuration options to reduce your charges.-- See [Analyze usage in Log Analytics workspace](logs/analyze-usage.md) for details on analyzing the data in your workspace to determine to source of any higher than expected usage and opportunities to reduce your amount of data collected.-- See [Set daily cap on Log Analytics workspace](logs/daily-cap.md) to control your costs by setting a daily limit on the amount of data that may be ingested in a workspace.+- See [Azure Monitor Logs pricing details](logs/cost-logs.md) for information on how charges are calculated for data in a Log Analytics workspace and different configuration options to reduce your charges. +- See [Analyze usage in Log Analytics workspace](logs/analyze-usage.md) for information on analyzing the data in your workspace to determine the source of any higher-than-expected usage and opportunities to reduce your amount of data collected. +- See [Set daily cap on Log Analytics workspace](logs/daily-cap.md) to control your costs by setting a daily limit on the amount of data that can be ingested in a workspace. |
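As a sketch of the kind of log query that such an alert rule or periodic usage review can be built on, the following uses the standard `Usage` table, where `Quantity` is reported in MB and `IsBillable` flags chargeable data; the 24-hour window mirrors the alert example above:

```kusto
// Billable data (in GB) ingested per table over the last 24 hours.
Usage
| where TimeGenerated > ago(24h)
| where IsBillable == true
| summarize BillableGB = sum(Quantity) / 1000 by DataType
| sort by BillableGB desc
```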
azure-monitor | Best Practices Data Collection | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/best-practices-data-collection.md | Title: Azure Monitor best practices - Configure data collection + Title: 'Azure Monitor best practices: Configure data collection' description: Guidance and recommendations for configuring data collection in Azure Monitor. -# Azure Monitor best practices - Configure data collection +# Azure Monitor best practices: Configure data collection + This article is part of the scenario [Recommendations for configuring Azure Monitor](best-practices.md). It describes recommended steps to configure data collection required to enable Azure Monitor features for your Azure and hybrid applications and resources. > [!IMPORTANT]-> The features of Azure Monitor and their configuration will vary depending on your business requirements balanced with the cost of the enabled features. Each step below will identify whether there is potential cost, and you should assess these costs before proceeding. See [Azure Monitor pricing](https://azure.microsoft.com/pricing/details/monitor/) for complete pricing details. +> The features of Azure Monitor and their configuration will vary depending on your business requirements balanced with the cost of the enabled features. Each of the following steps identifies whether there's potential cost, and you should assess these costs before proceeding. See [Azure Monitor pricing](https://azure.microsoft.com/pricing/details/monitor/) for complete pricing details. ## Design Log Analytics workspace architecture-You require at least one Log Analytics workspace to enable [Azure Monitor Logs](logs/data-platform-logs.md), which is required for collecting such data as logs from Azure resources, collecting data from the guest operating system of Azure virtual machines, and for most Azure Monitor insights. Other services such as Microsoft Sentinel and Microsoft Defender for Cloud also use a Log Analytics workspace and can share the same one that you use for Azure Monitor. -There is no cost for creating a Log Analytics workspace, but there is a potential charge once you configure data to be collected into it. See [Azure Monitor Logs pricing details](logs/cost-logs.md) for details on how log data is charged. +You require at least one Log Analytics workspace to enable [Azure Monitor Logs](logs/data-platform-logs.md), which is required for: ++- Collecting data such as logs from Azure resources. +- Collecting data from the guest operating system of Azure Virtual Machines. +- Enabling most Azure Monitor insights. ++Other services such as Microsoft Sentinel and Microsoft Defender for Cloud also use a Log Analytics workspace and can share the same one that you use for Azure Monitor. -See [Create a Log Analytics workspace in the Azure portal](logs/quick-create-workspace.md) to create an initial Log Analytics workspace and [Manage access to Log Analytics workspaces](logs/manage-access.md) to configure access. You can use scalable methods such as Resource Manager templates to configure workspaces, though this is often not required since most environments will require a minimal number. +There's no cost for creating a Log Analytics workspace, but there's a potential charge after you configure data to be collected into it. See [Azure Monitor Logs pricing details](logs/cost-logs.md) for information on how log data is charged. 
-Start with a single workspace to support initial monitoring, but see [Design a Log Analytics workspace configuration](logs/workspace-design.md) for guidance on when to use multiple workspaces and how to locate and configure them. +See [Create a Log Analytics workspace in the Azure portal](logs/quick-create-workspace.md) to create an initial Log Analytics workspace, and see [Manage access to Log Analytics workspaces](logs/manage-access.md) to configure access. You can use scalable methods such as Resource Manager templates to configure workspaces, although this step is often not required because most environments will require a minimal number. +Start with a single workspace to support initial monitoring. See [Design a Log Analytics workspace configuration](logs/workspace-design.md) for guidance on when to use multiple workspaces and how to locate and configure them. ## Collect data from Azure resources-Some monitoring of Azure resources is available automatically with no configuration required, while you must perform configuration steps to collect additional monitoring data. The following table illustrates the configuration steps required to collect all available data from your Azure resources, including at which step data is sent to Azure Monitor Metrics and Azure Monitor Logs. The sections below describe each step in further detail. -[](media/best-practices-data-collection/best-practices-azure-resources.png#lightbox) +Some monitoring of Azure resources is available automatically with no configuration required. To collect more monitoring data, you must perform configuration steps. -### Collect tenant and subscription logs -While the [Azure Active Directory logs](../active-directory/reports-monitoring/overview-reports.md) for your tenant and the [Activity log](essentials/platform-logs-overview.md) for your subscription are collected automatically, sending them to a Log Analytics workspace enables you to analyze these events with other log data using log queries in Log Analytics. This also allows you to create log query alerts which are the only way to alert on Azure Active Directory logs and provide more complex logic than Activity log alerts. +The following table shows the configuration steps required to collect all available data from your Azure resources. It also shows at which step data is sent to Azure Monitor Metrics and Azure Monitor Logs. The following sections describe each step in further detail. -There's no cost for sending the Activity log to a workspace, but there is a data ingestion and retention charge for Azure Active Directory logs. +[](media/best-practices-data-collection/best-practices-azure-resources.png#lightbox) -See [Integrate Azure AD logs with Azure Monitor logs](../active-directory/reports-monitoring/howto-integrate-activity-logs-with-log-analytics.md) and [Create diagnostic settings to send platform logs and metrics to different destinations](essentials/diagnostic-settings.md) to create a diagnostic setting for your tenant and subscription to send log entries to your Log Analytics workspace. +### Collect tenant and subscription logs +The [Azure Active Directory (Azure AD) logs](../active-directory/reports-monitoring/overview-reports.md) for your tenant and the [activity log](essentials/platform-logs-overview.md) for your subscription are collected automatically. When you send them to a Log Analytics workspace, you can analyze these events with other log data by using log queries in Log Analytics. 
You can also create log query alerts, which are the only way to alert on Azure AD logs and provide more complex logic than activity log alerts. +There's no cost for sending the activity log to a workspace, but there's a data ingestion and retention charge for Azure AD logs. +See [Integrate Azure AD logs with Azure Monitor logs](../active-directory/reports-monitoring/howto-integrate-activity-logs-with-log-analytics.md) and [Create diagnostic settings to send platform logs and metrics to different destinations](essentials/diagnostic-settings.md) to create a diagnostic setting for your tenant and subscription to send log entries to your Log Analytics workspace. ### Collect resource logs and platform metrics-Resources in Azure automatically generate [resource logs](essentials/platform-logs-overview.md) that provide details of operations performed within the resource. Unlike platform metrics though, you need to configure resource logs to be collected. Create a diagnostic setting to send them to a Log Analytics workspace and combine them with the other data used with Azure Monitor Logs. The same diagnostic setting can be used to also send the platform metrics for most resources to the same workspace, which allows you to analyze metric data using log queries with other collected data. -There is a cost for collecting resource logs in your Log Analytics workspace, so only select those log categories with valuable data. Collecting all categories will incur cost for collecting data with little value. See the monitoring documentation for each Azure service for a description of categories and recommendations for which to collect. Also see [Azure Monitor best practices - cost management](logs/cost-logs.md) for recommendations on optimizing the cost of your log collection. +Resources in Azure automatically generate [resource logs](essentials/platform-logs-overview.md) that provide details of operations performed within the resource. Unlike platform metrics, you need to configure resource logs to be collected. Create a diagnostic setting to send them to a Log Analytics workspace and combine them with the other data used with Azure Monitor Logs. The same diagnostic setting also can be used to send the platform metrics for most resources to the same workspace. This way, you can analyze metric data by using log queries with other collected data. -See [Create diagnostic setting to collect resource logs and metrics in Azure](essentials/diagnostic-settings.md#create-diagnostic-settings) to create a diagnostic setting for an Azure resource. +There's a cost for collecting resource logs in your Log Analytics workspace, so only select those log categories with valuable data. Collecting all categories will incur cost for collecting data with little value. See the monitoring documentation for each Azure service for a description of categories and recommendations for which to collect. Also see [Azure Monitor best practices - cost management](logs/cost-logs.md) for recommendations on optimizing the cost of your log collection. -Since a diagnostic setting needs to be created for each Azure resource, use Azure Policy to automatically create a diagnostic setting as each resource is created. Each Azure resource type has a unique set of categories that need to be listed in the diagnostic setting. Because of this, each resource type requires a separate policy definition. Some resource types have built-in policy definitions that you can assign without modification. 
For other resource types, you need to create a custom definition. +See [Create diagnostic settings to collect resource logs and metrics in Azure](essentials/diagnostic-settings.md#create-diagnostic-settings) to create a diagnostic setting for an Azure resource. ++Because a diagnostic setting needs to be created for each Azure resource, use Azure Policy to automatically create a diagnostic setting as each resource is created. Each Azure resource type has a unique set of categories that need to be listed in the diagnostic setting. Because of this, each resource type requires a separate policy definition. Some resource types have built-in policy definitions that you can assign without modification. For other resource types, you need to create a custom definition. See [Create diagnostic settings at scale using Azure Policy](essentials/diagnostic-settings-policy.md) for a process for creating policy definitions for a particular Azure service and details for creating diagnostic settings at scale. ### Enable insights-Insights provide a specialized monitoring experience for a particular service. They use the same data already being collected such as platform metrics and resource logs, but they provide custom workbooks the assist you in identifying and analyzing the most critical data. Most insights will be available in the Azure portal with no configuration required, other than collecting resource logs for that service. See the monitoring documentation for each Azure service to determine whether it has an insight and if it requires configuration. -There is no cost for insights, but you may be charged for any data they collect. +Insights provide a specialized monitoring experience for a particular service. They use the same data already being collected such as platform metrics and resource logs, but they provide custom workbooks that assist you in identifying and analyzing the most critical data. Most insights will be available in the Azure portal with no configuration required, other than collecting resource logs for that service. See the monitoring documentation for each Azure service to determine whether it has an insight and if it requires configuration. ++There's no cost for insights, but you might be charged for any data they collect. See [What is monitored by Azure Monitor?](monitor-reference.md) for a list of available insights and solutions in Azure Monitor. See the documentation for each for any unique configuration or pricing information. > [!IMPORTANT]-> The following insights are significantly more complex than others and have additional guidance for their configuration. -> +> The following insights are much more complex than others and have more guidance for their configuration: +> > - [VM insights](#monitor-virtual-machines) > - [Container insights](#monitor-containers) > - [Monitor applications](#monitor-applications) - ## Monitor virtual machines+ Virtual machines generate similar data as other Azure resources, but they require an agent to collect data from the guest operating system. Virtual machines also have unique monitoring requirements because of the different workloads running on them. See [Monitoring Azure virtual machines with Azure Monitor](vm/monitor-vm-azure.md) for a dedicated scenario on monitoring virtual machines with Azure Monitor. ## Monitor containers-Virtual machines generate similar data as other Azure resources, but they require a containerized version of the Log Analytics agent to collect required data. 
Container insights helps you prepare your containerized environment for monitoring and works in conjunction with third party tools for providing comprehensive monitoring of AKS and the workflows it supports. See [Monitoring Azure Kubernetes Service (AKS) with Azure Monitor](../aks/monitor-aks.md?toc=/azure/azure-monitor/toc.json) for a dedicated scenario on monitoring AKS with Azure Monitor. ++Virtual machines generate similar data as other Azure resources, but they require a containerized version of the Log Analytics agent to collect required data. Container insights helps you prepare your containerized environment for monitoring. It works in conjunction with third-party tools to provide comprehensive monitoring of Azure Kubernetes Service (AKS) and the workflows it supports. See [Monitoring Azure Kubernetes Service with Azure Monitor](../aks/monitor-aks.md?toc=/azure/azure-monitor/toc.json) for a dedicated scenario on monitoring AKS with Azure Monitor. ## Monitor applications-Azure Monitor monitors your custom applications using [Application Insights](app/app-insights-overview.md), which you must configure for each application you want to monitor. The configuration process will vary depending on the type of application being monitored and the type of monitoring that you want to perform. Data collected by Application Insights is stored in Azure Monitor Metrics, Azure Monitor Logs, and Azure blob storage, depending on the feature. Performance data is stored in both Azure Monitor Metrics and Azure Monitor Logs with no additional configuration required. ++Azure Monitor monitors your custom applications by using [Application Insights](app/app-insights-overview.md), which you must configure for each application you want to monitor. The configuration process varies depending on the type of application being monitored and the type of monitoring that you want to perform. Data collected by Application Insights is stored in Azure Monitor Metrics, Azure Monitor Logs, and Azure Blob Storage, depending on the feature. Performance data is stored in both Azure Monitor Metrics and Azure Monitor Logs with no more configuration required. ### Create an application resource+ Application Insights is the feature of Azure Monitor for monitoring your cloud native and hybrid applications. -You must create a resource in Application Insights for each application that you're going to monitor. Log data collected by Application Insights is stored in Azure Monitor Logs for a workspace-based application. Log data for classic applications is stored separate from your Log Analytics workspace as described in [Data structure](logs/log-analytics-workspace-overview.md#data-structure). +You must create a resource in Application Insights for each application that you're going to monitor. Log data collected by Application Insights is stored in Azure Monitor Logs for a workspace-based application. Log data for classic applications is stored separately from your Log Analytics workspace as described in [Data structure](logs/log-analytics-workspace-overview.md#data-structure). - When you create the application, you must select whether to use classic or workspace-based. See [Create an Application Insights resource](app/create-new-resource.md) to create a classic application. + When you create the application, you must select whether to use classic or workspace-based. See [Create an Application Insights resource](app/create-new-resource.md) to create a classic application. 
See [Workspace-based Application Insights resources (preview)](app/create-workspace-resource.md) to create a workspace-based application. -- A fundamental design decision is whether to use separate or single application resource for multiple applications. Separate resources can save costs and prevent mixing data from different applications, but a single resource can simplify your monitoring by keeping all relevant telemetry together. See [How many Application Insights resources should I deploy](app/separate-resources.md) for detailed criteria on making this design decision. --+ A fundamental design decision is whether to use separate or a single application resource for multiple applications. Separate resources can save costs and prevent mixing data from different applications, but a single resource can simplify your monitoring by keeping all relevant telemetry together. See [How many Application Insights resources should I deploy](app/separate-resources.md) for criteria to help you make this design decision. ### Configure codeless or code-based monitoring-To enable monitoring for an application, you must decide whether you will use codeless or code-based monitoring. The configuration process will vary depending on this decision and the type of application you're going to monitor. -**Codeless monitoring** is easiest to implement and can be configured after your code development. It doesn't require any updates to your code. See the following resources for details on enabling monitoring depending on your application. +To enable monitoring for an application, you must decide whether you'll use codeless or code-based monitoring. The configuration process varies depending on this decision and the type of application you're going to monitor. ++**Codeless monitoring** is easiest to implement and can be configured after your code development. It doesn't require any updates to your code. For information on how to enable monitoring based on your application, see: - [Applications hosted on Azure Web Apps](app/azure-web-apps.md) - [Java applications](app/java-in-process-agent.md)-- [ASP.NET applications hosted in IIS on Azure VM or Azure virtual machine scale set](app/azure-vm-vmss-apps.md)+- [ASP.NET applications hosted in IIS on Azure Virtual Machines or Azure Virtual Machine Scale Sets](app/azure-vm-vmss-apps.md) - [ASP.NET applications hosted in IIS on-premises](app/status-monitor-v2-overview.md) +**Code-based monitoring** is more customizable and collects more telemetry, but it requires adding a dependency to your code on the Application Insights SDK NuGet packages. For information on how to enable monitoring based on your application, see: -**Code-based monitoring** is more customizable and collects additional telemetry, but it requires adding a dependency to your code on the Application Insights SDK NuGet packages. See the following resources for details on enabling monitoring depending on your application. 
--- [ASP.NET Applications](app/asp-net.md)-- [ASP.NET Core Applications](app/asp-net-core.md)-- [.NET Console Applications](app/console.md)+- [ASP.NET applications](app/asp-net.md) +- [ASP.NET Core applications](app/asp-net-core.md) +- [.NET console applications](app/console.md) - [Java](app/java-in-process-agent.md) - [Node.js](app/nodejs.md) - [Python](app/opencensus-python.md) - [Other platforms](app/platforms.md) ### Configure availability testing-Availability tests in Application Insights are recurring tests that monitor the availability and responsiveness of your application at regular intervals from points around the world. You can create a simple ping test for free or create a sequence of web requests to simulate user transactions which have associated cost. -See [Monitor the availability of any website](app/monitor-web-app-availability.md) for summary of the different kinds of test and details on creating them. +Availability tests in Application Insights are recurring tests that monitor the availability and responsiveness of your application at regular intervals from points around the world. You can create a simple ping test for free. You can also create a sequence of web requests to simulate user transactions, which have associated costs. ++See [Monitor the availability of any website](app/monitor-web-app-availability.md) for a summary of the different kinds of tests and information on creating them. ### Configure Profiler-Profiler in Application Insights provides performance traces for .NET applications. It helps you identify the "hot" code path that takes the longest time when it's handling a particular web request. The process for configuring the profiler varies depending on the type of application. -See [Profile production applications in Azure with Application Insights](app/profiler-overview.md) for details on configuring Profiler. +Profiler in Application Insights provides performance traces for .NET applications. It helps you identify the "hot" code path that takes the longest time when it's handling a particular web request. The process for configuring the profiler varies depending on the type of application. ++See [Profile production applications in Azure with Application Insights](app/profiler-overview.md) for information on configuring Profiler. ### Configure Snapshot Debugger-Snapshot Debugger in Application Insights monitors exception telemetry from your .NET application and collects snapshots on your top-throwing exceptions so that you have the information you need to diagnose issues in production. The process for configuring Snapshot Debugger varies depending on the type of application. -See [Debug snapshots on exceptions in .NET apps](app/snapshot-debugger.md) for details on configuring Snapshot Debugger. +Snapshot Debugger in Application Insights monitors exception telemetry from your .NET application. It collects snapshots on your top-throwing exceptions so that you have the information you need to diagnose issues in production. The process for configuring Snapshot Debugger varies depending on the type of application. ++See [Debug snapshots on exceptions in .NET apps](app/snapshot-debugger.md) for information on configuring Snapshot Debugger. ## Next steps -- With data collection configured for all of your Azure resources, see [Analyze and visualize data](best-practices-analysis.md) to see options for analyzing this data. 
+With data collection configured for all your Azure resources, see [Analyze and visualize data](best-practices-analysis.md) to see options for analyzing this data. |
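One payoff of the workspace-based Application Insights resources described above is that request telemetry can be queried alongside other workspace data. A minimal sketch against the standard `AppRequests` table (workspace-based schema) that surfaces failing and slow operations:

```kusto
// Failure count and average duration per operation over the last hour.
AppRequests
| where TimeGenerated > ago(1h)
| summarize AvgDurationMs = avg(DurationMs), Failures = countif(Success == false) by Name
| sort by Failures desc
```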
azure-monitor | Change Analysis Enable | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/change/change-analysis-enable.md | Register the `Microsoft.ChangeAnalysis` resource provider with an Azure Resource - Enter any UI entry point, like the Web App **Diagnose and Solve Problems** tool, or - Bring up the Change Analysis standalone tab. -In this guide, you'll learn the two ways to enable Change Analysis for web app in-guest changes: -- For one or a few web apps, enable Change Analysis via the UI.+In this guide, you'll learn the two ways to enable Change Analysis for Azure Functions and web app in-guest changes: +- For one or a few Azure Functions or web apps, enable Change Analysis via the UI. - For a large number of web apps (for example, 50+ web apps), enable Change Analysis using the provided PowerShell script. > [!NOTE]-> Slot-level enablement for web app is not supported at the moment. +> Slot-level enablement for Azure Functions or web app is not supported at the moment. -## Enable web app in-guest change collection via Azure Portal +## Enable Azure Functions and web app in-guest change collection via the Change Analysis portal For web app in-guest changes, separate enablement is required for scanning code files within a web app. For more information, see [Change Analysis in the Diagnose and solve problems tool](change-analysis-visualizations.md#diagnose-and-solve-problems-tool) section. |
azure-monitor | Continuous Monitoring | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/continuous-monitoring.md | Title: Continuous monitoring with Azure Monitor | Microsoft Docs -description: Describes specific steps for using Azure Monitor to enable Continuous monitoring throughout your workflows. +description: Describes specific steps for using Azure Monitor to enable continuous monitoring throughout your workflows. -Continuous monitoring refers to the process and technology required to incorporate monitoring across each phase of your DevOps and IT operations lifecycles. It helps to continuously ensure the health, performance, and reliability of your application and infrastructure as it moves from development to production. Continuous monitoring builds on the concepts of Continuous Integration and Continuous Deployment (CI/CD) which help you develop and deliver software faster and more reliably to provide continuous value to your users. +Continuous monitoring refers to the process and technology required to incorporate monitoring across each phase of your DevOps and IT operations lifecycles. It helps to continuously ensure the health, performance, and reliability of your application and infrastructure as it moves from development to production. Continuous monitoring builds on the concepts of continuous integration and continuous deployment (CI/CD). CI/CD helps you develop and deliver software faster and more reliably to provide continuous value to your users. -[Azure Monitor](overview.md) is the unified monitoring solution in Azure that provides full-stack observability across applications and infrastructure in the cloud and on-premises. It works seamlessly with [Visual Studio and Visual Studio Code](https://visualstudio.microsoft.com/) during development and test and integrates with [Azure DevOps](/azure/devops/user-guide/index) for release management and work item management during deployment and operations. It even integrates across the ITSM and SIEM tools of your choice to help track issues and incidents within your existing IT processes. --This article describes specific steps for using Azure Monitor to enable continuous monitoring throughout your workflows. It includes links to other documentation that provides details on implementing different features. +[Azure Monitor](overview.md) is the unified monitoring solution in Azure that provides full-stack observability across applications and infrastructure in the cloud and on-premises. It works seamlessly with [Visual Studio and Visual Studio Code](https://visualstudio.microsoft.com/) during development and test. It integrates with [Azure DevOps](/azure/devops/user-guide/index) for release management and work item management during deployment and operations. It even integrates across the IT system management (ITSM) and SIEM tools of your choice to help track issues and incidents within your existing IT processes. +This article describes specific steps for using Azure Monitor to enable continuous monitoring throughout your workflows. Links to other documentation provide information on implementing different features. ## Enable monitoring for all your applications-In order to gain observability across your entire environment, you need to enable monitoring on all your web applications and services. This will allow you to easily visualize end-to-end transactions and connections across all the components. 
--- [Azure DevOps Projects](../devops-project/overview.md) give you a simplified experience with your existing code and Git repository, or choose from one of the sample applications to create a Continuous Integration (CI) and Continuous Delivery (CD) pipeline to Azure.-- [Continuous monitoring in your DevOps release pipeline](./app/continuous-monitoring.md) allows you to gate or rollback your deployment based on monitoring data.-- [Status Monitor](./app/status-monitor-v2-overview.md) allows you to instrument a live .NET app on Windows with Azure Application Insights, without having to modify or redeploy your code.-- If you have access to the code for your application, then enable full monitoring with [Application Insights](./app/app-insights-overview.md) by installing the Azure Monitor Application Insights SDK for [.NET](./app/asp-net.md), [.NET Core](./app/asp-net-core.md), [Java](./app/java-in-process-agent.md), [Node.js](./app/nodejs-quick-start.md), or [any other programming languages](./app/platforms.md). This allows you to specify custom events, metrics, or page views that are relevant to your application and your business. +To gain observability across your entire environment, you need to enable monitoring on all your web applications and services. This way, you can easily visualize end-to-end transactions and connections across all the components. For example: +- [Azure DevOps projects](../devops-project/overview.md) give you a simplified experience with your existing code and Git repository. You can also choose from one of the sample applications to create a CI/CD pipeline to Azure. +- [Continuous monitoring in your DevOps release pipeline](./app/continuous-monitoring.md) allows you to gate or roll back your deployment based on monitoring data. +- [Status Monitor](./app/status-monitor-v2-overview.md) allows you to instrument a live .NET app on Windows with Application Insights, without having to modify or redeploy your code. +- If you have access to the code for your application, enable full monitoring with [Application Insights](./app/app-insights-overview.md) by installing the Azure Monitor Application Insights SDK for [.NET](./app/asp-net.md), [.NET Core](./app/asp-net-core.md), [Java](./app/java-in-process-agent.md), [Node.js](./app/nodejs-quick-start.md), or [any other programming languages](./app/platforms.md). Full monitoring allows you to specify custom events, metrics, or page views that are relevant to your application and your business. ## Enable monitoring for your entire infrastructure-Applications are only as reliable as their underlying infrastructure. Having monitoring enabled across your entire infrastructure will help you achieve full observability and make it easier to discover a potential root cause when something fails. Azure Monitor helps you track the health and performance of your entire hybrid infrastructure including resources such as VMs, containers, storage, and network. -- You automatically get [platform metrics, activity logs and diagnostics logs](data-sources.md) from most of your Azure resources with no configuration.+Applications are only as reliable as their underlying infrastructure. Having monitoring enabled across your entire infrastructure will help you achieve full observability and make it easier to discover a potential root cause when something fails. Azure Monitor helps you track the health and performance of your entire hybrid infrastructure including resources such as VMs, containers, storage, and network. 
For example, you can: ++- Get [platform metrics, activity logs, and diagnostics logs](data-sources.md) automatically from most of your Azure resources with no configuration. - Enable deeper monitoring for VMs with [VM insights](vm/vminsights-overview.md).-- Enable deeper monitoring for AKS clusters with [Container insights](containers/container-insights-overview.md).+- Enable deeper monitoring for Azure Kubernetes Service (AKS) clusters with [Container insights](containers/container-insights-overview.md). - Add [monitoring solutions](./monitor-reference.md) for different applications and services in your environment. +[Infrastructure as code](/azure/devops/learn/what-is-infrastructure-as-code) is the management of infrastructure in a descriptive model, using the same versioning that DevOps teams use for source code. It adds reliability and scalability to your environment and allows you to use similar processes that are used to manage your applications. For example, you can: -[Infrastructure as code](/azure/devops/learn/what-is-infrastructure-as-code) is the management of infrastructure in a descriptive model, using the same versioning as DevOps teams use for source code. It adds reliability and scalability to your environment and allows you to leverage similar processes that used to manage your applications. --- Use [Resource Manager templates](./logs/resource-manager-workspace.md) to enable monitoring and configure alerts over a large set of resources.-- Use [Azure Policy](../governance/policy/overview.md) to enforce different rules over your resources. This ensures that those resources stay compliant with your corporate standards and service level agreements. +- Use [Azure Resource Manager templates](./logs/resource-manager-workspace.md) to enable monitoring and configure alerts over a large set of resources. +- Use [Azure Policy](../governance/policy/overview.md) to enforce different rules over your resources. Azure Policy ensures that those resources stay compliant with your corporate standards and service level agreements. +## Combine resources in Azure resource groups -## Combine resources in Azure Resource Groups -A typical application on Azure today includes multiple resources such as VMs and App Services or microservices hosted on Cloud Services, AKS clusters, or Service Fabric. These applications frequently utilize dependencies like Event Hubs, Storage, SQL, and Service Bus. +A typical application on Azure today includes multiple resources such as VMs and app services or microservices hosted on Azure Cloud Services, AKS clusters, or Azure Service Fabric. These applications frequently use dependencies like Azure Event Hubs, Azure Storage, Azure SQL, and Azure Service Bus. For example, you can: -- Combine resources in Azure Resource Groups to get full visibility across all your resources that make up your different applications. [Resource Group insights](./insights/resource-group-insights.md) provides a simple way to keep track of the health and performance of your entire full-stack application and enables drilling down into respective components for any investigations or debugging.+- Combine resources in Azure resource groups to get full visibility across all your resources that make up your different applications. [Resource group insights](./insights/resource-group-insights.md) provides a simple way to keep track of the health and performance of your entire full-stack application and enables drilling down into respective components for any investigations or debugging. 
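As a small illustration of resource-group-scoped analysis, a log query like the following sketch summarizes recent control-plane operations for a single resource group by using the standard `AzureActivity` table; the resource group name is hypothetical:

```kusto
// Recent management operations in one resource group (name is hypothetical).
AzureActivity
| where TimeGenerated > ago(7d)
| where ResourceGroup =~ "rg-contoso-app"
| summarize Operations = count() by OperationNameValue, ActivityStatusValue
| sort by Operations desc
```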
-## Ensure quality through Continuous Deployment -Continuous Integration / Continuous Deployment allows you to automatically integrate and deploy code changes to your application based on the results of automated testing. It streamlines the deployment process and ensures the quality of any changes before they move into production. +## Ensure quality through continuous deployment +CI/CD allows you to automatically integrate and deploy code changes to your application based on the results of automated testing. It streamlines the deployment process and ensures the quality of any changes before they move into production. For example, you can: -- Use [Azure Pipelines](/azure/devops/pipelines) to implement Continuous Deployment and automate your entire process from code commit to production based on your CI/CD tests.-- Use [Quality Gates](/azure/devops/pipelines/release/approvals/gates) to integrate monitoring into your pre-deployment or post-deployment. This ensures that you are meeting the key health/performance metrics (KPIs) as your applications move from dev to production and any differences in the infrastructure environment or scale is not negatively impacting your KPIs.-- [Maintain separate monitoring instances](./app/separate-resources.md) between your different deployment environments such as Dev, Test, Canary, and Prod. This ensures that collected data is relevant across the associated applications and infrastructure. If you need to correlate data across environments, you can use [multi-resource charts in Metrics Explorer](./essentials/metrics-charts.md) or create [cross-resource queries in Azure Monitor](logs/cross-workspace-query.md).-+- Use [Azure Pipelines](/azure/devops/pipelines) to implement continuous deployment and automate your entire process from code commit to production based on your CI/CD tests. +- Use [quality gates](/azure/devops/pipelines/release/approvals/gates) to integrate monitoring into your pre-deployment or post-deployment. Quality gates ensure that you're meeting the key health and performance metrics, also known as KPIs, as your applications move from development to production. They also ensure that any differences in the infrastructure environment or scale aren't negatively affecting your KPIs. +- [Maintain separate monitoring instances](./app/separate-resources.md) between your different deployment environments, such as Dev, Test, Canary, and Prod. Separate monitoring instances ensure that collected data is relevant across the associated applications and infrastructure. If you need to correlate data across environments, you can use [multi-resource charts in metrics explorer](./essentials/metrics-charts.md) or create [cross-resource queries in Azure Monitor](logs/cross-workspace-query.md). ## Create actionable alerts with actions-A critical aspect of monitoring is proactively notifying administrators of any current and predicted issues. -- Create [alerts in Azure Monitor](./alerts/alerts-overview.md) based on logs and metrics to identify predictable failure states. You should have a goal of making all alerts actionable meaning that they represent actual critical conditions and seek to reduce false positives. Use [Dynamic Thresholds](alerts/alerts-dynamic-thresholds.md) to automatically calculate baselines on metric data rather than defining your own static thresholds. -- Define actions for alerts to use the most effective means of notifying your administrators. 
Available [actions for notification](alerts/action-groups.md#create-an-action-group-by-using-the-azure-portal) are SMS, e-mails, push notifications, or voice calls.+A critical aspect of monitoring is proactively notifying administrators of any current and predicted issues. For example, you can: ++- Create [alerts in Azure Monitor](./alerts/alerts-overview.md) based on logs and metrics to identify predictable failure states. You should have a goal of making all alerts actionable, which means that they represent actual critical conditions and seek to reduce false positives. Use [dynamic thresholds](alerts/alerts-dynamic-thresholds.md) to automatically calculate baselines on metric data rather than defining your own static thresholds. +- Define actions for alerts to use the most effective means of notifying your administrators. Available [actions for notification](alerts/action-groups.md#create-an-action-group-by-using-the-azure-portal) are SMS, emails, push notifications, or voice calls. - Use more advanced actions to [connect to your ITSM tool](alerts/itsmc-overview.md) or other alert management systems through [webhooks](alerts/activity-log-alerts-webhook.md).-- Remediate situations identified in alerts as well with [Azure Automation runbooks](../automation/automation-webhooks.md) or [Logic Apps](/connectors/custom-connectors/create-webhook-trigger) that can be launched from an alert using webhooks. +- Remediate situations identified in alerts as well with [Azure Automation runbooks](../automation/automation-webhooks.md) or [Azure Logic Apps](/connectors/custom-connectors/create-webhook-trigger) that can be launched from an alert by using webhooks. - Use [autoscaling](./autoscale/tutorial-autoscale-performance-schedule.md) to dynamically increase and decrease your compute resources based on collected metrics. ## Prepare dashboards and workbooks-Ensuring that your development and operations have access to the same telemetry and tools allows them to view patterns across your entire environment and minimize your Mean Time To Detect (MTTD) and Mean Time To Restore (MTTR). ++Ensuring that your development and operations have access to the same telemetry and tools allows them to view patterns across your entire environment and minimize your mean time to detect and mean time to restore. For example, you can: - Prepare [custom dashboards](./app/tutorial-app-dashboards.md) based on common metrics and logs for the different roles in your organization. Dashboards can combine data from all Azure resources.-- Prepare [Workbooks](./visualize/workbooks-overview.md) to ensure knowledge sharing between development and operations. These could be prepared as dynamic reports with metric charts and log queries, or even as troubleshooting guides prepared by developers helping customer support or operations to handle basic problems.+- Prepare [workbooks](./visualize/workbooks-overview.md) to ensure knowledge sharing between development and operations. Workbooks could be prepared as dynamic reports with metric charts and log queries. They can also be troubleshooting guides prepared by developers to help customer support or operations handle basic problems. ## Continuously optimize- Monitoring is one of the fundamental aspects of the popular Build-Measure-Learn philosophy, which recommends continuously tracking your KPIs and user behavior metrics and then striving to optimize them through planning iterations. 
Azure Monitor helps you collect metrics and logs relevant to your business and to add new data points in the next deployment as required. -- Use tools in Application Insights to [track end-user behavior and engagement](./app/tutorial-users.md).-- Use [Impact Analysis](./app/usage-impact.md) to help you prioritize which areas to focus on to drive to important KPIs.+ Monitoring is one of the fundamental aspects of the popular Build-Measure-Learn philosophy, which recommends continuously tracking your KPIs and user behavior metrics and then striving to optimize them through planning iterations. Azure Monitor helps you collect metrics and logs relevant to your business and add new data points in the next deployment as required. For example, you can: +- Use tools in Application Insights to [track user behavior and engagement](./app/tutorial-users.md). +- Use [Impact analysis](./app/usage-impact.md) to help you prioritize which areas to focus on to drive to important KPIs. ## Next steps |
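A usage review along the lines described above might start from a sketch like the following, which uses the standard `AppPageViews` table from workspace-based Application Insights; the 30-day window is an arbitrary assumption:

```kusto
// Page views and distinct users per page over the last 30 days.
AppPageViews
| where TimeGenerated > ago(30d)
| summarize Views = count(), Users = dcount(UserId) by Name
| sort by Views desc
```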
azure-monitor | Observability Data | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/observability-data.md | documentationcenter: '' na Previously updated : 04/05/2022 Last updated : 08/18/2022 # Observability data in Azure Monitor Enabling observability across today's complex computing environments running distributed applications that rely on both cloud and on-premises services requires collection of operational data from every layer and every component of the distributed system. You need to be able to gain deep insights from this data and consolidate it into a single pane of glass with different perspectives to support the multitude of stakeholders in your organization. -[Azure Monitor](overview.md) collects and aggregates data from a variety of sources into a common data platform where it can be used for analysis, visualization, and alerting. It provides a consistent experience on top of data from multiple sources, which gives you deep insights across all your monitored resources and even with data from other services that store their data in Azure Monitor. +[Azure Monitor](overview.md) collects and aggregates data from various sources into a common data platform where it can be used for analysis, visualization, and alerting. It provides a consistent experience on top of data from multiple sources, which gives you deep insights across all your monitored resources and even with data from other services that store their data in Azure Monitor. :::image type="content" source="media/overview/azure-monitor-overview-optm.svg" alt-text="Diagram that shows an overview of Azure Monitor." border="false" lightbox="media/overview/azure-monitor-overview-optm.svg"::: ## Pillars of observability -Metrics, logs, and distributed traces are commonly referred to as the three pillars of observability. These are the different kinds of data that a monitoring tool must collect and analyze to provide sufficient observability of a monitored system. Observability can be achieved by correlating data from multiple pillars and aggregating data across the entire set of resources being monitored. Because Azure Monitor stores data from multiple sources together, the data can be correlated and analyzed using a common set of tools. It also correlates data across multiple Azure subscriptions and tenants, in addition to hosting data for other services. +Metrics, logs, distributed traces, and changes are commonly referred to as the pillars of observability. These are the different kinds of data that a monitoring tool must collect and analyze to provide sufficient observability of a monitored system. Observability can be achieved by correlating data from multiple pillars and aggregating data across the entire set of resources being monitored. Because Azure Monitor stores data from multiple sources together, the data can be correlated and analyzed using a common set of tools. It also correlates data across multiple Azure subscriptions and tenants, in addition to hosting data for other services. Azure resources generate a significant amount of monitoring data. Azure Monitor consolidates this data along with monitoring data from other sources into either a Metrics or Logs platform. Each is optimized for particular monitoring scenarios, and each supports different features in Azure Monitor. Features such as data analysis, visualizations, or alerting require you to understand the differences so that you can implement your required scenario in the most efficient and cost-effective manner. 
Insights in Azure Monitor such as [Application Insights](app/app-insights-overview.md) or [VM insights](vm/vminsights-overview.md) have analysis tools that allow you to focus on the particular monitoring scenario without having to understand the differences between the two types of data. Logs in Azure Monitor are stored in a Log Analytics workspace that's based on [A > [!NOTE] > It's important to distinguish between Azure Monitor Logs and sources of log data in Azure. For example, subscription level events in Azure are written to an [activity log](essentials/platform-logs-overview.md) that you can view from the Azure Monitor menu. Most resources will write operational information to a [resource log](essentials/platform-logs-overview.md) that you can forward to different locations. Azure Monitor Logs is a log data platform that collects activity logs and resource logs along with other monitoring data to provide deep analysis across your entire set of resources. -- You can work with [log queries](logs/log-query-overview.md) interactively with [Log Analytics](logs/log-query-overview.md) in the Azure portal or add the results to an [Azure dashboard](app/tutorial-app-dashboards.md) for visualization in combination with other data. You can also create [log alerts](alerts/alerts-log.md) which will trigger an alert based on the results of a schedule query. +You can work with [log queries](logs/log-query-overview.md) interactively with [Log Analytics](logs/log-query-overview.md) in the Azure portal or add the results to an [Azure dashboard](app/tutorial-app-dashboards.md) for visualization in combination with other data. You can also create [log alerts](alerts/alerts-log.md) that will trigger an alert based on the results of a scheduled query. Read more about Azure Monitor Logs including their sources of data in [Logs in Azure Monitor](logs/data-platform-logs.md). Distributed tracing in Azure Monitor is enabled with the [Application Insights S Read more about distributed tracing at [What is Distributed Tracing?](app/distributed-tracing.md). +## Changes ++Change Analysis alerts you to live site issues, outages, component failures, or other change data. It also provides insights into those application changes, increases observability, and reduces the mean time to repair. You automatically register the `Microsoft.ChangeAnalysis` resource provider with an Azure Resource Manager subscription by going to Change Analysis via the Azure portal. For web app in-guest changes, you can enable the [Change Analysis tool via the Change Analysis portal](./change/change-analysis-enable.md#enable-azure-functions-and-web-app-in-guest-change-collection-via-the-change-analysis-portal). ++Change Analysis builds on [Azure Resource Graph](../governance/resource-graph/overview.md) to provide a historical record of how your Azure resources have changed over time. It detects managed identities, platform operating system upgrades, and hostname changes. Change Analysis securely queries IP configuration rules, TLS settings, and extension versions to provide more detailed change data. ++Read more about Change Analysis at [Use Change Analysis in Azure Monitor](./change/change-analysis.md). [Try Change Analysis for observability into your Azure subscriptions](https://aka.ms/cahome). ## Next steps |
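To make the log-query workflow above concrete, here's a minimal sketch of a Kusto query you could run in Log Analytics. The `AzureActivity` table and its column names are assumptions about a typical workspace schema, not content from the article:

```kusto
// Illustrative only: count activity log events per operation over the last day
AzureActivity
| where TimeGenerated > ago(1d)
| summarize EventCount = count() by OperationNameValue
| top 10 by EventCount desc
```

A query like this can also back a log alert by pairing it with a threshold condition on `EventCount` in the alert rule.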
azure-monitor | Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/overview.md | Azure Monitor uses a version of the [Kusto Query Language](/azure/kusto/query/)  -Change Analysis alerts you to live site issues, outages, component failures, or other change data. It also provides insights into those application changes, increases observability, and reduces the mean time to repair. You automatically register the `Microsoft.ChangeAnalysis` resource provider with an Azure Resource Manager subscription by going to Change Analysis via the Azure portal. For web app in-guest changes, you can enable Change Analysis by using the [Diagnose and solve problems tool](./change/change-analysis-enable.md#enable-web-app-in-guest-change-collection-via-azure-portal). +Change Analysis alerts you to live site issues, outages, component failures, or other change data. It also provides insights into those application changes, increases observability, and reduces the mean time to repair. You automatically register the `Microsoft.ChangeAnalysis` resource provider with an Azure Resource Manager subscription by going to Change Analysis via the Azure portal. For web app in-guest changes, you can enable Change Analysis by using the [Diagnose and solve problems tool](./change/change-analysis-enable.md#enable-azure-functions-and-web-app-in-guest-change-collection-via-the-change-analysis-portal). Change Analysis builds on [Azure Resource Graph](../governance/resource-graph/overview.md) to provide a historical record of how your Azure resources have changed over time. It detects managed identities, platform operating system upgrades, and hostname changes. Change Analysis securely queries IP configuration rules, TLS settings, and extension versions to provide more detailed change data. |
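As a hedged alternative to the portal flow described above, the `Microsoft.ChangeAnalysis` resource provider can also be registered explicitly with the standard Azure CLI provider commands:

```azurecli
# Register the Change Analysis resource provider on the current subscription
az provider register --namespace Microsoft.ChangeAnalysis

# Check progress; the state reads "Registered" once registration completes
az provider show --namespace Microsoft.ChangeAnalysis --query registrationState
```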
azure-monitor | Resource Manager Samples | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/resource-manager-samples.md | -You can deploy and configure Azure Monitor at scale by using [Azure Resource Manager templates](../azure-resource-manager/templates/syntax.md). This article lists sample templates for Azure Monitor features. You can modify these samples for your particular requirements and deploy them by using any standard method for deploying Resource Manager templates. +You can deploy and configure Azure Monitor at scale by using [Azure Resource Manager templates](../azure-resource-manager/templates/syntax.md). This article lists sample templates for Azure Monitor features. You can modify these samples for your particular requirements and deploy them by using any standard method for deploying Resource Manager templates. -## Deploying the sample templates -The basic steps to use the one of the template samples are: +## Deploy the sample templates ++The basic steps to use one of the template samples are: 1. Copy the template and save it as a JSON file.-2. Modify the parameters for your environment and save the JSON file. -3. Deploy the template by using [any deployment method for Resource Manager templates](../azure-resource-manager/templates/deploy-powershell.md). +1. Modify the parameters for your environment and save the JSON file. +1. Deploy the template by using [any deployment method for Resource Manager templates](../azure-resource-manager/templates/deploy-powershell.md). For example, use the following commands to deploy the template and parameter file to a resource group by using PowerShell or the Azure CLI: az deployment group create \ ## Next steps -- Learn more about [Resource Manager templates](../azure-resource-manager/templates/overview.md).+Learn more about [Resource Manager templates](../azure-resource-manager/templates/overview.md). |
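For reference, the truncated `az deployment group create` call above might expand to something like the following sketch; the resource group and file names are placeholders, not values from the article:

```azurecli
# Deploy a sample template and its parameter file to an existing resource group
az deployment group create \
  --resource-group my-resource-group \
  --template-file template.json \
  --parameters parameters.json
```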
azure-netapp-files | Backup Restore New Volume | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/backup-restore-new-volume.md | Restoring a backup creates a new volume with the same protocol type. This articl * You should trigger the restore operation when there are no baseline backups. Otherwise, the restore might increase the load on the Azure Blob account where your data is backed up. +* For large volumes (greater than 10 TB), it can take multiple hours to transfer all the data from the backup media. + See [Requirements and considerations for Azure NetApp Files backup](backup-requirements-considerations.md) for additional considerations about using Azure NetApp Files backup. ## Steps See [Requirements and considerations for Azure NetApp Files backup](backup-requi > If a volume is deleted but the backup policy wasn’t disabled before the volume deletion, all the backups related to the volume are retained in the Azure storage, and you can find them under the associated NetApp account. See [Search backups at NetApp account level](backup-search.md#search-backups-at-netapp-account-level). -2. From the backup list, select the backup to restore. Click the three dots (`…`) to the right of the backup, then click **Restore to new volume** from the Action menu. +2. From the backup list, select the backup to restore. Select the three dots (`…`) to the right of the backup, then select **Restore to new volume** from the Action menu.  -3. In the Create a Volume page that appears, provide information for the fields in the page as applicable, and click **Review + Create** to begin restoring the backup to a new volume. +3. In the Create a Volume page that appears, provide information for the fields in the page as applicable, and select **Review + Create** to begin restoring the backup to a new volume. * The **Protocol** field is pre-populated from the original volume and cannot be changed. However, if you restore a volume from the backup list at the NetApp account level, you need to specify the Protocol field. The Protocol field must match the protocol of the original volume. Otherwise, the restore operation will fail with the following error: |
azure-netapp-files | Whats New | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/whats-new.md | Azure NetApp Files is updated regularly. This article provides a summary about t Standard network features now includes Global VNet peering. You must still [register the feature](configure-network-features.md#register-the-feature) before using it. [!INCLUDE [Standard network features pricing](includes/standard-networking-pricing.md)]--* [Cloud Backup for Virtual Machines on Azure NetApp Files datastores for Azure VMware Solution](../azure-vmware/install-cloud-backup-virtual-machines.md) - You can now create VM consistent snapshot backups of VMs on Azure NetApp Files datastores using [Cloud Backup for Virtual Machines](../azure-vmware/backup-azure-netapp-files-datastores-vms.md). The associated virtual appliance installs in the Azure VMware Solution cluster and provides policy based automated and consistent backup of VMs integrated with Azure NetApp Files snapshot technology for fast backups and restores of VMs, groups of VMs (organized in resource groups) or complete datastores. ## July 2022 Azure NetApp Files is updated regularly. This article provides a summary about t * Azure Key Vault to store Service Principal content * Azure Managed Disk as an alternate storage back end -* [Azure NetApp Files datastores for Azure VMware Solution](../azure-vmware/attach-azure-netapp-files-to-azure-vmware-solution-hosts.md) is now in public preview. You can [Back up Azure NetApp Files datastores and VMs using Cloud Backup](../azure-vmware/backup-azure-netapp-files-datastores-vms.md). This virtual appliance installs in the Azure VMware Solution cluster and provides policy based automated backup of VMs integrated with Azure NetApp Files snapshot technology for fast backups and restores of VMs, groups of VMs (organized in resource groups) or complete datastores. +* [Azure NetApp Files datastores for Azure VMware Solution](../azure-vmware/attach-azure-netapp-files-to-azure-vmware-solution-hosts.md) is now in public preview. You can back up Azure NetApp Files datastores and VMs using Cloud Backup. This virtual appliance installs in the Azure VMware Solution cluster and provides policy based automated backup of VMs integrated with Azure NetApp Files snapshot technology for fast backups and restores of VMs, groups of VMs (organized in resource groups) or complete datastores. * [Active Directory connection enhancement: Reset Active Directory computer account password](create-active-directory-connections.md#reset-active-directory) (Preview) |
azure-percept | Voice Control Your Inventory Then Visualize With Power Bi Dashboard | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-percept/voice-control-your-inventory-then-visualize-with-power-bi-dashboard.md | - Title: Voice control your inventory with Azure Percept Audio -description: This article will give detailed instructions for building the main components of the solution and deploying the edge speech AI. ---- Previously updated : 12/14/2021 ------# Voice control your inventory with Azure Percept Audio -This article will give detailed instructions for building the main components of the solution and deploying the edge speech AI. The solution uses the Azure Percept DK device and the Audio SoM, Azure Speech Services -Custom Commands, Azure Function App, SQL Database, and Power BI. Users can learn how to manage their inventory with voice using Azure Percept Audio and visualize results with Power BI. The goal of this article is to empower users to create a basic inventory management solution. --Users who want to take their solution further can add an additional edge module for visual inventory inspection or expand on the inventory visualizations within Power BI. --In this tutorial, you learn how to: --- Create an Azure SQL Server and SQL Database-- Create an Azure function project and publish to Azure-- Import an available template to Custom Commands-- Create a Custom Commands using an available template-- Deploy modules to your Devkit-- Import dataset from Azure SQL to Power BI---## Prerequisites -- Percept DK ([Purchase](https://www.microsoft.com/store/build/azure-percept/8v2qxmzbz9vc))-- Azure Subscription: [Free trial account](https://azure.microsoft.com/free/)-- [Azure Percept DK setup experience](./quickstart-percept-dk-set-up.md)-- [Azure Percept Audio setup](./quickstart-percept-audio-setup.md)-- Speaker or headphones that can connect to 3.5mm audio jack (optional) -- Install [Power BI Desktop](https://powerbi.microsoft.com/downloads/)-- Install [VS Code](https://code.visualstudio.com/download) -- Install the [IoT Hub](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-toolkit) and [IoT Tools](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-tools) Extension in VS Code -- The [Azure Functions Core Tools](https://github.com/MicrosoftDocs/azure-docs/blob/master/articles/azure-functions/functions-run-local.md) version 3.x.-- The [Python extension](https://marketplace.visualstudio.com/items?itemName=ms-python.python) for Visual Studio Code.-- The [Azure Functions extension](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-azurefunctions) for Visual Studio Code.-- Create an [Azure SQL server](/azure/azure-sql/database/single-database-create-quickstart)---## Software architecture - ---## Step 1: Create an Azure SQL Server and SQL Database -In this section, you will learn how to create the table for this lab. This table will be the main source of truth for your current inventory and the basis of data visualized in Power BI. --1. Set SQL server firewall - 1. Click Set server firewall -  - 2. Add Rule name workshop - Start IP 0.0.0.0 and End IP 255.255.255.255 to the IP allowlist for lab purposes -  - 3. Click Query editor to log in to your SQL database <br /> -  <br /> - 4. Log in to your SQL database through SQL Server Authentication <br /> -  <br /> -2. 
Run the T-SQL query below in the query editor to create the table <br /> -- - ```sql - -- Create table stock - CREATE TABLE Stock - ( -    color varchar(255), -    num_box int - ) -- ``` - - :::image type="content" source="./media/voice-control-your-inventory-images/create-sql-table.png" alt-text="Create SQL table."::: - -## Step 2: Create an Azure Functions project and publish to Azure -In this section, you will use Visual Studio Code to create a local Azure Functions project in Python. Later in this article, you'll publish your function code to Azure. --1. Go to the [GitHub link](https://github.com/microsoft/Azure-Percept-Reference-Solutions/tree/main/voice-control-inventory-management) and clone the repository - 1. Click Code and the HTTPS tab - :::image type="content" source="./media/voice-control-your-inventory-images/clone-git.png" alt-text="Code and HTTPS tab."::: - 2. Copy the command below in your terminal to clone the repository -  -- ``` - git clone https://github.com/microsoft/Azure-Percept-Reference-Solutions - ``` --2. Enable Azure Functions. -- 1. Click the Azure logo in the task bar --  - 2. Click "..." and check that “Functions” is checked -  - -3. Create your local project - 1. Create a folder (ex: airlift_az_func) for your project workspace -  - 2. Choose the Azure icon in the Activity bar, then in Functions, select the <strong>Create new project...</strong> icon. -  - 3. Choose the directory location you just created for your project workspace and choose **Select**. -  - 4. <strong>Provide the following information at the prompts</strong>: Select a language for your function project: Choose <strong>Python</strong>. -  - 5. <strong>Select a Python alias to create a virtual environment</strong>: Choose the location of your Python interpreter. If the location isn't shown, type in the full path to your Python binary. Select skip virtual environment if you don’t have Python installed. -  - 6. <strong>Select a template for your project's first function</strong>: Choose <strong>HTTP trigger</strong>. -  - 7. <strong>Provide a function name</strong>: Type <strong>HttpExample</strong>. -  - 8. <strong>Authorization level</strong>: Choose <strong>Anonymous</strong>, which enables anyone to call your function endpoint. To learn about authorization level, see [Authorization keys](https://github.com/MicrosoftDocs/azure-docs/blob/master/articles/azure-functions/functions-bindings-http-webhook-trigger.md). -  - 9. <strong>Select how you would like to open your project</strong>: Choose Add to workspace. Trust the folder and enable all features. - -  - 1. You will see the HttpExample function has been initiated -  - -4. Develop CRUD.py to update Azure SQL from the Azure Function - 1. Replace the content of <strong>__init__.py</strong> by copying the raw content of <strong>__init__.py</strong> from [here](https://github.com/microsoft/Azure-Percept-Reference-Solutions/blob/main/voice-control-inventory-management/azure-functions/__init__.py) - :::image type="content" source="./media/voice-control-your-inventory-images/copy-raw-content-mini.png" alt-text="Copy raw contents." lightbox="./media/voice-control-your-inventory-images/copy-raw-content.png"::: - 2. Drag and drop <strong>CRUD.py</strong> to the same level as <strong>__init__.py</strong> -  -  - 3. 
Update the values of the <strong>sql server full address</strong>, <strong>database</strong>, <strong>username</strong>, and <strong>password</strong> you created in Step 1 in <strong>CRUD.py</strong> - :::image type="content" source="./media/voice-control-your-inventory-images/server-name-mini.png" alt-text="Update the values." lightbox="./media/voice-control-your-inventory-images/server-name.png"::: -  - 4. Replace the content of <strong>requirements.txt</strong> by copying the raw content of requirements.txt -  - :::image type="content" source="./media/voice-control-your-inventory-images/view-requirement-file-mini.png" alt-text="Replace the content." lightbox="./media/voice-control-your-inventory-images/view-requirement-file.png"::: - 5. Press “Ctrl + S” to save the content - -5. Sign in to Azure - 1. Before you can publish your app, you must sign in to Azure. If you aren't already signed in, choose the Azure icon in the Activity bar, then in the Azure: Functions area, choose <strong>Sign in to Azure...</strong>. If you're already signed in, go to the next section. -  -- 2. When prompted in the browser, choose your Azure account and sign in using your Azure account credentials. - 3. After you've successfully signed in, you can close the new browser window. The subscriptions that belong to your Azure account are displayed in the Side bar. - -6. Publish the project to Azure - 1. Choose the Azure icon in the Activity bar, then in the <strong>Azure: Functions area</strong>, choose the <strong>Deploy to function app...</strong> button. -  - 2. Provide the following information at the prompts: - 1. <strong>Select folder</strong>: Choose a folder from your workspace or browse to one that contains your function app. You won't see this if you already have a valid function app opened. - 2. <strong>Select subscription</strong>: Choose the subscription to use. You won't see this if you only have one subscription. - 3. <strong>Select Function App in Azure</strong>: Choose + Create new Function App. (Don't choose the Advanced option, which isn't covered in this article.) - 4. <strong>Enter a globally unique name for the function app</strong>: Type a name that is valid in a URL path. The name you type is validated to make sure that it's unique in Azure Functions. - 5. <strong>Select a runtime</strong>: Choose version <strong>3.9</strong> -  - 1. <strong>Select a location for new resources</strong>: Choose the region. - 2. Select <strong>View Output</strong> in this notification to view the creation and deployment results, including the Azure resources that you created. If you miss the notification, select the bell icon in the lower right corner to see it again. -  - 3. <strong>Note down the HTTP Trigger Url</strong> for further use in Step 4 -  --7. Test your Azure Function App - 1. Choose the Azure icon in the Activity bar, expand your subscription, your new function app, and Functions. - 2. Right-click the HttpExample function and choose <strong>Execute Function Now</strong>.... -  - 3. In <strong>Enter request body</strong>, you see the request message body value of - ``` - { "color": "yellow", "num_box": "2", "action": "remove" } - ``` -  - Press Enter to send this request message to your function. - - 1. When the function executes in Azure and returns a response, a notification is raised in Visual Studio Code. -  - -## Step 3: Import an inventory speech template to Custom Commands -In this section, you will import an existing application config json file to Custom Commands. --1. 
Create an Azure Speech resource in a region that supports Custom Commands. - 1. Click [Create Speech Services portal](https://portal.azure.com/#create/Microsoft.CognitiveServicesSpeechServices) to create an Azure Speech resource - 1. Select your Subscription - 2. Use the Resource group you just created in exercise 1 - 3. Select the Region (check the supported regions for Custom Commands) - 4. Create a name for your Speech service - 5. Set the Pricing tier to Free F0 - 2. Go to the Speech Studio for Custom Commands - 1. In a web browser, go to [Speech Studio](https://speech.microsoft.com/portal). - 2. Select <strong>Custom Commands</strong>. - The default view is a list of the Custom Commands applications you have under your selected subscription. - :::image type="content" source="./media/voice-control-your-inventory-images/cognitive-service.png" alt-text="Custom Commands applications."::: - 3. Select your Speech <strong>subscription</strong> and <strong>resource group</strong>, and then select <strong>Use resource</strong>. -  - 3. Import an existing application config as a new Custom Commands project - 1. Select <strong>New project</strong> to create a project. -  - 2. In the <strong>Name</strong> box, enter the project name, such as Stock (or something else of your choice). - 3. In the <strong>Language</strong> list, select <strong>English (United States)</strong>. - 4. Select <strong>Browse files</strong> and in the browse window, select the <strong>smart-stock.json</strong> file in the <strong>custom-commands folder</strong> -  -  -- 5. In the <strong>LUIS authoring resource</strong> list, select an authoring resource. If there are no valid authoring resources, create one by selecting <strong>Create new LUIS authoring resource</strong>. -  - - 6. In the <strong>Resource Name</strong> box, enter the name of the resource. - 7. In the <strong>Resource Group</strong> list, select a resource group. - 8. In the <strong>Location list</strong>, select a region. - 9. In the <strong>Pricing Tier</strong> list, select a tier. - 10. Next, select <strong>Create</strong> to create your project. After the project is created, select your project. You should now see an overview of your new Custom Commands application. ---## Step 4: Train, test, and publish the Custom Commands -In this section, you will train, test, and publish your Custom Commands. --1. Replace the web endpoints URL - 1. Click Web endpoints and replace the URL - 2. Replace the URL value with the <strong>HTTP Trigger Url</strong> you noted down in Step 2 (ex: `https://xxx.azurewebsites.net/api/httpexample`) - :::image type="content" source="./media/voice-control-your-inventory-images/web-point-url.png" alt-text="Replace the value in the URL."::: -2. Create LUIS prediction resource - 1. Click <strong>settings</strong> and create an <strong>S0</strong> prediction resource under LUIS <strong>prediction resource</strong>. - :::image type="content" source="./media/voice-control-your-inventory-images/predict-source.png" alt-text="Prediction resource-1."::: -  -3. Train and Test with your custom command - 1. Click <strong>Save</strong> to save the Custom Commands project - 2. Click <strong>Train</strong> to train your custom commands service - :::image type="content" source="./media/voice-control-your-inventory-images/train-model.png" alt-text="Custom commands train model."::: - 3. 
Click <strong>Test</strong> to test your custom commands service - :::image type="content" source="./media/voice-control-your-inventory-images/test-model.png" alt-text="Custom commands test model."::: - 4. Type “Add 2 green boxes” in the pop-up window to see if it can respond correctly -  -4. Publish your custom command - 1. Click Publish to publish the custom commands - :::image type="content" source="./media/voice-control-your-inventory-images/publish.png" alt-text="Publish the custom commands."::: -5. Note down your application ID and speech key in the settings for further use - :::image type="content" source="./media/voice-control-your-inventory-images/application-id.png" alt-text="Application ID."::: --## Step 5: Deploy modules to your Devkit -In this section, you will learn how to use a deployment manifest to deploy modules to your device. -1. Set IoT Hub Connection String - 1. Go to your IoT Hub service in Azure portal. Click <strong>Shared access policies</strong> -> <strong>Iothubowner</strong> - 2. Click <strong>Copy</strong> to get the <strong>primary connection string</strong> - :::image type="content" source="./media/voice-control-your-inventory-images/iot-hub-owner.png" alt-text="Primary connection string."::: - 3. In the VS Code Explorer, click "Azure IoT Hub". -  - 4. Click "Set IoT Hub Connection String" in the context menu -  - 5. An input box will pop up, then enter your IoT Hub Connection String<br /> -2. Open VS Code and open the folder you cloned earlier <br /> -  -3. Modify the envtemplate<br /> - 1. Right click the <strong>envtemplate</strong> and rename it to <strong>.env</strong>. Provide values for all variables such as below.<br /> -  -  - 2. Replace your Application ID and Speech resource key by checking your Speech Studio<br /> -  -  - 3. Check the region by checking your Azure speech service, and mapping the <strong>display name</strong> (e.g., West US) to the <strong>name</strong> (e.g., westus) [here](https://azuretracks.com/2021/04/current-azure-region-names-reference/). -  - 4. Set the Speech Region to the name (e.g., westus) you just got from the mapping table. (Make sure all characters are lowercase.) -  - -4. Deploy modules to device - 1. Right click on deployment.template.json and <strong>select Generate IoT Edge Deployment Manifest</strong> -  - 2. After you generate the manifest, you can see <strong>deployment.amd64.json</strong> under the config folder. Right click on deployment.amd64.json and choose Create Deployment for <strong>Single Device</strong> -  - 3. Choose the IoT Hub device you are going to deploy to -  - 4. Check your log of the azurespeechclient module - 1. Go to the Azure portal and click your Azure IoT Hub - :::image type="content" source="./media/voice-control-your-inventory-images/voice-iothub.png" alt-text="Select IoT hub."::: - 2. Click IoT Edge - :::image type="content" source="./media/voice-control-your-inventory-images/portal-iotedge.png" alt-text="Go to IoT edge."::: - 3. Click your Edge device to see if the modules run well - :::image type="content" source="./media/voice-control-your-inventory-images/device-id.png" alt-text="Confirm modules."::: - 4. Click the <strong>azureearspeechclientmodule</strong> module - :::image type="content" source="./media/voice-control-your-inventory-images/azure-ear-module.png" alt-text="Select ear module."::: - 5. Click the <strong>Troubleshooting</strong> tab of the azurespeechclientmodule -  - - 5. Check your log of the azurespeechclient module - 1. 
Change the Time range to 3 minutes to check the latest log -  - 2. Speak <strong>“Computer, remove 2 red boxes”</strong> to your Azure Percept Audio - (Computer is the wake word to wake Azure Percept DK, and remove 2 red boxes is the command) - Check the speech log to see if it shows <strong>“sure, remove 2 red boxes. 2 red boxes have been removed.”</strong> - :::image type="content" source="./media/voice-control-your-inventory-images/speech-regconizing.png" alt-text="Verify log."::: - >[!NOTE] - >If you have set up the wake word before, please use the wake word you set up to wake your DK. - --## Step 6: Import dataset from Azure SQL to Power BI -In this section, you will create a Power BI report and check if the report has been updated after you speak commands to your Azure Percept Audio. -1. Open the Power BI Desktop Application and import data from Azure SQL Server - 1. Click close on the pop-up window -  - 2. Import data from SQL Server -  - 3. Enter your SQL server name \<sql server name\>.database.windows.net, and choose DirectQuery -  - 4. Select Database, and enter the username and the password -  - 5. <strong>Select</strong> the table Stock, and click <strong>Load</strong> to load the dataset into Power BI Desktop<br /> - -  -2. Create your Power BI report - 1. Select the color and num_box columns in the Fields pane, and choose the Clustered column chart visualization to present your chart.<br /> -  -  - 2. Drag and drop the <strong>color</strong> column to the <strong>Legend</strong> and you will get the chart that looks like below. -  -  - 3. Click <strong>format</strong> and click Data colors to change the colors accordingly. You will have the charts that look like below. -  - 4. Select card visualization -  - 5. Check the num_box -  - 6. Drag and drop the <strong>color</strong> column to <strong>Filters on this visual</strong> -  - 7. Select green in the Filters on this visual - -  - 8. Double-click the column name in the Fields pane and change the name of the column to “Count of the green box” -  -3. Speak a command to your Devkit and refresh Power BI - 1. Speak “Add three green boxes” to Azure Percept Audio - 2. Click “Refresh”. You will see the number of green boxes has been updated. -  --Congratulations! You now know how to develop your own voice assistant! You went through a lot of configuration and set up the custom commands for the first time. Great job! Now you can start trying more complex scenarios after this tutorial. We look forward to seeing you design more interesting scenarios and let the voice assistant help in the future. --## Clean up resources --If you're not going to continue to use this application, delete resources with the following steps: --1. Log in to the [Azure portal](https://portal.azure.com), go to the `Resource Group` you have been using for this tutorial. Delete the SQL DB, Azure Function, and Speech Service resources. --2. Go into [Azure Percept Studio](https://portal.azure.com/#blade/AzureEdgeDevices/Main/overview), select your device from the `Device` blade, click the `Speech` tab within your device, and under `Configuration` remove the reference to your custom command. --3. Go into [Speech Studio](https://speech.microsoft.com/portal) and delete the project created for this tutorial. --4. 
Log in to [Power BI](https://msit.powerbi.com/home) and select your Workspace (this is the same Group Workspace you used while creating the Stream Analytics job output), and delete the workspace. ----## Next steps --Check out the tutorial [Create a people counting solution with Azure Percept Vision](./create-people-counting-solution-with-azure-percept-devkit-vision.md). |
azure-resource-manager | Deploy Github Actions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/deploy-github-actions.md | description: In this quickstart, you learn how to deploy Bicep files by using Gi Previously updated : 07/18/2022 Last updated : 08/22/2022 To create a workflow, take the following steps: ```yml on: [push] name: Azure ARM+ permissions: + id-token: write + contents: read jobs: build-and-deploy: runs-on: ubuntu-latest |
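For context, the `permissions` block added in this commit sits at the root of the workflow so the job can request an OIDC token. A minimal sketch of a full workflow might look like the following; the secret names, action versions, and Bicep file path are illustrative assumptions rather than content from the article:

```yml
on: [push]
name: Azure ARM
permissions:
  id-token: write
  contents: read
jobs:
  build-and-deploy:
    runs-on: ubuntu-latest
    steps:
      # Check out the repository that contains the Bicep file
      - uses: actions/checkout@v3
      # Sign in to Azure with OpenID Connect instead of a stored credential
      - uses: azure/login@v1
        with:
          client-id: ${{ secrets.AZURE_CLIENT_ID }}
          tenant-id: ${{ secrets.AZURE_TENANT_ID }}
          subscription-id: ${{ secrets.AZURE_SUBSCRIPTION_ID }}
      # Deploy the Bicep file to a resource group
      - uses: azure/arm-deploy@v1
        with:
          resourceGroupName: my-resource-group
          template: ./main.bicep
```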
azure-resource-manager | Microsoft Solutions Armapicontrol | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/managed-applications/microsoft-solutions-armapicontrol.md | Title: ArmApiControl UI element -description: Describes the Microsoft.Solutions.ArmApiControl UI element for Azure portal. Used for calling API operations. +description: Describes the Microsoft.Solutions.ArmApiControl UI element for Azure portal that's used to call API operations. -- Previously updated : 07/14/2020 -+ Last updated : 08/23/2022 # Microsoft.Solutions.ArmApiControl UI element -ArmApiControl lets you get results from an Azure Resource Manager API operation. Use the results to populate dynamic content in other controls. +The `ArmApiControl` gets results from an Azure Resource Manager API operation using GET or POST. You can use the results to populate dynamic content in other controls. ## UI sample -There's no UI for this control. +There's no UI for `ArmApiControl`. ## Schema -The following example shows the schema for this control: +The following example shows the control's schema. ```json {- "name": "testApi", - "type": "Microsoft.Solutions.ArmApiControl", - "request": { - "method": "{HTTP-method}", - "path": "{path-for-the-URL}", -    "body": { -      "key1": "val1", -      "key2": "val2" - } + "name": "testApi", + "type": "Microsoft.Solutions.ArmApiControl", + "request": { + "method": "{HTTP-method}", + "path": "{path-for-the-URL}", + "body": { + "key1": "value1", + "key2": "value2" }+ } } ``` ## Sample output -The control's output is not displayed to the user. Instead, the result of the operation is used in other controls. +The control's output isn't displayed to the user. Instead, the operation's results are used in other controls. ## Remarks -- The `request.method` property specifies the HTTP method. Only `GET` or `POST` are allowed.-- The `request.path` property specifies a URL that must be a relative path to an ARM endpoint. It can be a static path or can be constructed dynamically by referring output values of the other controls.+- The `request.method` property specifies the HTTP method. Only GET or POST are allowed. +- The `request.path` property specifies a URL that must be a relative path to an Azure Resource Manager endpoint. It can be a static path or can be constructed dynamically by referring to output values of the other controls. - For example, an ARM call into `Microsoft.Network/expressRouteCircuits` resource provider: + For example, an Azure Resource Manager call into the `Microsoft.Network/expressRouteCircuits` resource provider: ```json- "path": "subscriptions/<subid>/resourceGroup/<resourceGroupName>/providers/Microsoft.Network/expressRouteCircuits/<routecircuitName>/?api-version=2020-05-01" + "path": "/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Network/expressRouteCircuits/{circuitName}?api-version=2022-01-01" ``` - The `request.body` property is optional. Use it to specify a JSON body that is sent with the request. The body can be static content or constructed dynamically by referring to output values from other controls. ## Example -In the following example, the `providersApi` element calls an API to get an array of provider objects. +In the following example, the `providersApi` element uses the `ArmApiControl` and calls an API to get an array of provider objects. ++The `providerDropDown` element's `allowedValues` property is configured to use the array and get the provider names. 
The provider names are displayed in the dropdown list. -The `allowedValues` property of the `providersDropDown` element is configured to get the names of the providers. It displays them in the dropdown list. +The `output` property `providerName` shows the provider name that was selected from the dropdown list. The output can be used to pass the value to a parameter in an Azure Resource Manager template. ```json {- "name": "providersApi", - "type": "Microsoft.Solutions.ArmApiControl", - "request": { - "method": "GET", - "path": "[concat(subscription().id, '/providers/Microsoft.Network/expressRouteServiceProviders?api-version=2019-02-01')]" + "$schema": "https://schema.management.azure.com/schemas/0.1.2-preview/CreateUIDefinition.MultiVm.json#", + "handler": "Microsoft.Azure.CreateUIDef", + "version": "0.1.2-preview", + "parameters": { + "basics": [ + { + "name": "providersApi", + "type": "Microsoft.Solutions.ArmApiControl", + "request": { + "method": "GET", + "path": "[concat(subscription().id, '/providers/Microsoft.Network/expressRouteServiceProviders?api-version=2022-01-01')]" + } + }, + { + "name": "providerDropDown", + "type": "Microsoft.Common.DropDown", + "label": "Provider", + "toolTip": "The provider that offers the express route connection.", + "constraints": { + "allowedValues": "[map(basics('providersApi').value, (item) => parse(concat('{\"label\":\"', item.name, '\",\"value\":\"', item.name, '\"}')))]", + "required": true + }, + "visible": true + } + ], + "steps": [], + "outputs": { + "providerName": "[basics('providerDropDown')]" }-}, -{ - "name": "providerDropDown", - "type": "Microsoft.Common.DropDown", - "label": "Provider", - "toolTip": "The provider that offers the express route connection.", - "constraints": { - "allowedValues": "[map(steps('settings').providersApi.value, (item) => parse(concat('{\"label\":\"', item.name, '\",\"value\":\"', item.name, '\"}')))]", - "required": true - }, - "visible": true + } } ``` -For an example of using the ArmApiControl to check the availability of a resource name, see [Microsoft.Common.TextBox](microsoft-common-textbox.md). +For an example of the `ArmApiControl` that uses the `request.body` property, see the [Microsoft.Common.TextBox](microsoft-common-textbox.md#single-line) single-line example. That example checks the availability of a storage account name and returns a message if the name is unavailable. ## Next steps -- For an introduction to creating UI definitions, see [Getting started with CreateUiDefinition](create-uidefinition-overview.md).+- For an introduction to creating UI definitions, see [CreateUiDefinition.json for Azure managed application's create experience](create-uidefinition-overview.md). - For a description of common properties in UI elements, see [CreateUiDefinition elements](create-uidefinition-elements.md).+- To learn more about functions like `map`, `basics`, and `parse`, see [CreateUiDefinition functions](create-uidefinition-functions.md). |
azure-resource-manager | Resource Name Rules | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/resource-name-rules.md | In the following tables, the term alphanumeric refers to: > | networkWatchers | resource group | 1-80 | Alphanumerics, underscores, periods, and hyphens.<br><br>Start with alphanumeric. End alphanumeric or underscore. | > | privateDnsZones | resource group | 1-63 characters<br><br>2 to 34 labels<br><br>Each label is a set of characters separated by a period. For example, **contoso.com** has 2 labels. | Each label can contain alphanumerics, underscores, and hyphens.<br><br>Each label is separated by a period. | > | privateDnsZones / virtualNetworkLinks | private DNS zone | 1-80 | Alphanumerics, underscores, periods, and hyphens.<br><br>Start with alphanumeric. End alphanumeric or underscore. |+> | privateEndpoints | resource group | 2-64 | Alphanumerics, underscores, periods, and hyphens.<br><br>Start with alphanumeric. End alphanumeric or underscore. | > | publicIPAddresses | resource group | 1-80 | Alphanumerics, underscores, periods, and hyphens.<br><br>Start with alphanumeric. End alphanumeric or underscore. | > | publicIPPrefixes | resource group | 1-80 | Alphanumerics, underscores, periods, and hyphens.<br><br>Start with alphanumeric. End alphanumeric or underscore. | > | routeFilters | resource group | 1-80 | Alphanumerics, underscores, periods, and hyphens.<br><br>Start with alphanumeric. End alphanumeric or underscore. | |
azure-resource-manager | Quickstart Create Templates Use The Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/quickstart-create-templates-use-the-portal.md | Title: Deploy template - Azure portal description: Learn how to create your first Azure Resource Manager template (ARM template) using the Azure portal. You also learn how to deploy it. Previously updated : 03/24/2022 Last updated : 08/22/2022 #Customer intent: As a developer new to Azure deployment, I want to learn how to use the Azure portal to create and edit Resource Manager templates, so I can use the templates to deploy Azure resources. -# Quickstart: Create and deploy ARM templates by using the Azure portal +# Quickstart: Create and deploy ARM templates by using the Azure portal -In this quickstart, you learn how to generate an Azure Resource Manager template (ARM template) in the Azure portal. You edit and deploy the template from the portal. +In this quickstart, you learn how to create an Azure Resource Manager template (ARM template) in the Azure portal. You edit and deploy the template from the portal. ARM templates are JSON files that define the resources you need to deploy for your solution. To understand the concepts associated with deploying and managing your Azure solutions, see [template deployment overview](overview.md). After completing the tutorial, you deploy an Azure Storage account. The same process can be used to deploy other Azure resources. - - If you don't have an Azure subscription, [create a free account](https://azure.microsoft.com/free/) before you begin. -## Generate a template using the portal --If you're new to Azure deployment, you may find it challenging to create an ARM template. To get around this challenge, you can configure your deployment in the Azure portal and download the corresponding ARM template. You save the template and reuse it in the future. +## Retrieve a custom template -Many experienced template developers use this method to generate templates when they try to deploy Azure resources that they aren't familiar with. For more information about exporting templates by using the portal, see [Export resource groups to templates](../management/manage-resource-groups-portal.md#export-resource-groups-to-templates). The other way to find a working template is from [Azure Quickstart templates](https://azure.microsoft.com/resources/templates/). +Rather than manually building an entire ARM template, let's start by retrieving a pre-built template that accomplishes our goal. The [Azure Quickstart Templates repo](https://github.com/Azure/azure-quickstart-templates) contains a large collection of templates that deploy common scenarios. The portal makes it easy for you to find and use templates from this repo. You can save the template and reuse it later. 1. In a web browser, go to the [Azure portal](https://portal.azure.com) and sign in.-1. From the Azure portal menu, select **Create a resource**. +1. From the Azure portal search bar, search for **deploy a custom template** and then select it from the available options. -  + :::image type="content" source="./media/quickstart-create-templates-use-the-portal/search-custom-template.png" alt-text="Screenshot of Search for Custom Template."::: -1. In the search box, type **storage account**, and then press **[ENTER]**. -1. Select the down arrow next to **Create**, and then select **Storage account**. +1. For **Template** source, notice that **Quickstart template** is selected by default. 
You can keep this selection. In the drop-down, search for *quickstarts/microsoft.storage/storage-account-create* and select it. After finding the quickstart template, select **Select template.** -  + :::image type="content" source="./media/quickstart-create-templates-use-the-portal/select-custom-template.png" alt-text="Screenshot of Select Quickstart Template."::: -1. Enter the following information: +1. In the next blade, you provide custom values to use for the deployment. - |Name|Value| - |-|-| - |**Resource group**|Select **Create new**, and specify a resource group name of your choice. On the screenshot, the resource group name is *mystorage1016rg*. Resource group is a container for Azure resources. Resource group makes it easier to manage Azure resources. | - |**Name**|Give your storage account a unique name. The storage account name must be unique across all of Azure, and it contain only lowercase letters and numbers. Name must be between 3 and 24 characters. If you get an error message saying "The storage account name 'mystorage1016' is already taken", try using **<your name>storage<Today's date in MMDD>**, for example **johndolestorage1016**. For more information, see [Naming rules and restrictions](/azure/architecture/best-practices/resource-naming).| + For **Resource group**, select **Create new** and provide *myResourceGroup* for the name. You can use the default values for the other fields. When you've finished providing values, select **Review + create**. - You can use the default values for the rest of the properties. + :::image type="content" source="./media/quickstart-create-templates-use-the-portal/input-fields-template.png" alt-text="Screenshot for Input Fields for Template."::: + +1. The portal validates your template and the values you provided. After validation succeeds, select **Create** to start the deployment. + + :::image type="content" source="./media/quickstart-create-templates-use-the-portal/template-validation.png" alt-text="Screenshot for Validation and create."::: -  +1. Once your validation has passed, you'll see the status of the deployment. When it completes successfully, select **Go to resource** to see the storage account. - > [!NOTE] - > Some of the exported templates require some edits before you can deploy them. + :::image type="content" source="./media/quickstart-create-templates-use-the-portal/deploy-success.png" alt-text="Screenshot for Deployment Succeeded Notification."::: -1. Select **Review + create** on the bottom of the screen. Don't select **Create** in the next step. -1. Select **Download a template for automation** on the bottom of the screen. The portal shows the generated template: +1. From this screen, you can view the new storage account and its properties. -  + :::image type="content" source="./media/quickstart-create-templates-use-the-portal/view-storage-account.png" alt-text="Screenshot for View Deployment Page."::: - The main pane shows the template. It's a JSON file with six top-level elements - `schema`, `contentVersion`, `parameters`, `variables`, `resources`, and `output`. For more information, see [Understand the structure and syntax of ARM templates](./syntax.md) +## Edit and deploy the template - There are nine parameters defined. One of them is called **storageAccountName**. The second highlighted part on the previous screenshot shows how to reference this parameter in the template. In the next section, you edit the template to use a generated name for the storage account. 
+You can use the portal for quickly developing and deploying ARM templates. In general, we recommend using Visual Studio Code for developing your ARM templates, and Azure CLI or Azure PowerShell for deploying the template, but you can use the portal for quick deployments without installing those tools. ++In this section, let's suppose you have an ARM template that you want to deploy one time without setting up the other tools. ++1. Again, select **Deploy a custom template** in the portal. ++1. This time, select **Build your own template in the editor**. ++ :::image type="content" source="./media/quickstart-create-templates-use-the-portal/build-own-template.png" alt-text="Screenshot for Build your own template."::: ++1. You see a blank template. ++ :::image type="content" source="./media/quickstart-create-templates-use-the-portal/blank-template.png" alt-text="Screenshot for Blank Template."::: ++1. Replace the blank template with the following template. It deploys a virtual network with a subnet. ++ ```json + { + "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#", + "contentVersion": "1.0.0.0", + "parameters": { + "vnetName": { + "type": "string", + "defaultValue": "VNet1", + "metadata": { + "description": "VNet name" + } + }, + "vnetAddressPrefix": { + "type": "string", + "defaultValue": "10.0.0.0/16", + "metadata": { + "description": "Address prefix" + } + }, + "subnetPrefix": { + "type": "string", + "defaultValue": "10.0.0.0/24", + "metadata": { + "description": "Subnet Prefix" + } + }, + "subnetName": { + "type": "string", + "defaultValue": "Subnet1", + "metadata": { + "description": "Subnet Name" + } + }, + "location": { + "type": "string", + "defaultValue": "[resourceGroup().location]", + "metadata": { + "description": "Location for all resources." + } + } + }, + "resources": [ + { + "type": "Microsoft.Network/virtualNetworks", + "apiVersion": "2021-08-01", + "name": "[parameters('vnetName')]", + "location": "[parameters('location')]", + "properties": { + "addressSpace": { + "addressPrefixes": [ + "[parameters('vnetAddressPrefix')]" + ] + }, + "subnets": [ + { + "name": "[parameters('subnetName')]", + "properties": { + "addressPrefix": "[parameters('subnetPrefix')]" + } + } + ] + } + } + ] + } + ``` - In the template, one Azure resource is defined. The type is `Microsoft.Storage/storageAccounts`. Take a look of how the resource is defined, and the definition structure. -1. Select **Download** from the top of the screen. -1. Open the downloaded zip file, and then save **template.json** to your computer. In the next section, you use a template deployment tool to edit the template. -1. Select the **Parameter** tab to see the values you provided for the parameters. Write down these values, you need them in the next section when you deploy the template. +1. Select **Save**. -  +1. You see the blade for providing deployment values. Again, select **myResourceGroup** for the resource group. You can use the other default values. When you're done providing values, select **Review + create**. - Using both the template file and the parameters file, you can create a resource, in this tutorial, an Azure storage account. +1. After the portal validates the template, select **Create**. -## Edit and deploy the template +1. When the deployment completes, you see the status of the deployment. This time select the name of the resource group. -The Azure portal can be used to perform some basic template editing. 
In this quickstart, you use a portal tool called *Template Deployment*. *Template Deployment* is used in this tutorial so you can complete the whole tutorial using one interface - the Azure portal. To edit a more complex template, consider using [Visual Studio Code](quickstart-create-templates-use-visual-studio-code.md), which provides richer edit functionalities. --> [!IMPORTANT] -> Template Deployment provides an interface for testing simple templates. It is not recommended to use this feature in production. Instead, store your templates in an Azure storage account, or a source code repository like GitHub. --Azure requires that each Azure service has a unique name. The deployment could fail if you entered a storage account name that already exists. To avoid this issue, you modify the template to use a template function call `uniquestring()` to generate a unique storage account name. --1. From the Azure portal menu, in the search box, type **deploy**, and then select **Deploy a custom template**. --  --1. Select **Build your own template in the editor**. -1. Select **Load file**, and then follow the instructions to load template.json you downloaded in the last section. -- After the file is loaded, you may notice a warning that the template schema wasn't loaded. You can ignore this warning. The schema is valid. --1. Make the following three changes to the template: --  -- - Remove the **storageAccountName** parameter as shown in the previous screenshot. - - Add one variable called **storageAccountName** as shown in the previous screenshot: -- ```json - "storageAccountName": "[concat(uniqueString(subscription().subscriptionId), 'storage')]" - ``` -- Two template functions are used here: `concat()` and `uniqueString()`. - - Update the name element of the **Microsoft.Storage/storageAccounts** resource to use the newly defined variable instead of the parameter: -- ```json - "name": "[variables('storageAccountName')]", - ``` -- The final template shall look like: -- ```json - { - "$schema": "http://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#", - "contentVersion": "1.0.0.0", - "parameters": { - "location": { - "type": "string" - }, - "accountType": { - "type": "string" - }, - "kind": { - "type": "string" - }, - "accessTier": { - "type": "string" - }, - "minimumTlsVersion": { - "type": "string" - }, - "supportsHttpsTrafficOnly": { - "type": "bool" - }, - "allowBlobPublicAccess": { - "type": "bool" - }, - "allowSharedKeyAccess": { - "type": "bool" - } - }, - "variables": { - "storageAccountName": "[concat(uniqueString(subscription().subscriptionId), 'storage')]" - }, - "resources": [ - { - "name": "[variables('storageAccountName')]", - "type": "Microsoft.Storage/storageAccounts", - "apiVersion": "2019-06-01", - "location": "[parameters('location')]", - "properties": { - "accessTier": "[parameters('accessTier')]", - "minimumTlsVersion": "[parameters('minimumTlsVersion')]", - "supportsHttpsTrafficOnly": "[parameters('supportsHttpsTrafficOnly')]", - "allowBlobPublicAccess": "[parameters('allowBlobPublicAccess')]", - "allowSharedKeyAccess": "[parameters('allowSharedKeyAccess')]" - }, - "dependsOn": [], - "sku": { - "name": "[parameters('accountType')]" - }, - "kind": "[parameters('kind')]", - "tags": {} - } - ], - "outputs": {} - } - ``` + :::image type="content" source="./media/quickstart-create-templates-use-the-portal/view-second-deployment.png" alt-text="Screenshot for View second deployment."::: -1. Select **Save**. -1. Enter the following values: +1. 
Notice that your resource group now contains a storage account and a virtual network. + + :::image type="content" source="./media/quickstart-create-templates-use-the-portal/view-resource-group.png" alt-text="Screenshot for View Storage Account and Virtual Network."::: ++## Export a custom template - Sometimes the easiest way to work with an ARM template is to have the portal generate it for you. The portal can create an ARM template based on the current state of your resource group. -1. In your resource group, select **Export template**. + :::image type="content" source="./media/quickstart-create-templates-use-the-portal/export-template.png" alt-text="Screenshot for Export Template."::: -1. The portal generates a template for you based on the current state of the resource group. Notice that this template isn't the same as either template you deployed earlier. It contains definitions for both the storage account and virtual network, along with other resources like a blob service that was automatically created for your storage account. -1. To save this template for later use, select **Download**. + :::image type="content" source="./media/quickstart-create-templates-use-the-portal/download-template.png" alt-text="Screenshot for Download exported template."::: -You now have an ARM template that represents the current state of the resource group. This template is auto-generated. Before using the template for production deployments, you may want to revise it, such as adding parameters for template reuse. ## Clean up resources When the Azure resources are no longer needed, clean up the resources you deployed by deleting the resource group. -1. In the Azure portal, select **Resource group** on the left menu. -1. Enter the resource group name in the **Filter by name** field. +1. In the Azure portal, select **Resource groups** on the left menu. +1. Enter the resource group name in the **Filter for any field** search box. 1. Select the resource group name. You'll see the storage account and virtual network in the resource group. 1. Select **Delete resource group** in the top menu. 
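If you prefer the command line for cleanup, a single CLI call does the same thing as the portal steps above; this sketch assumes the *myResourceGroup* name used earlier in the quickstart:

```azurecli
# Delete the resource group and every resource in it; --yes skips the confirmation prompt
az group delete --name myResourceGroup --yes
```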
## Next steps -In this tutorial, you learned how to generate a template from the Azure portal, and how to deploy the template using the portal. The template used in this Quickstart is a simple template with one Azure resource. When the template is complex, it's easier to use Visual Studio Code or Visual Studio to develop the template. To learn more about template development, see our new beginner tutorial series: +In this tutorial, you learned how to generate a template from the Azure portal, and how to deploy the template using the portal. The template used in this quickstart is a simple template with one Azure resource. When the template is complex, it's easier to use Visual Studio Code or Visual Studio to develop the template. To learn more about template development, see our new beginner tutorial series: > [!div class="nextstepaction"] > [Beginner tutorials](./template-tutorial-create-first-template.md) |
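For reference, the storage account template edited earlier in this quickstart can also be deployed without the portal. This is a sketch with the Azure CLI, assuming the template is saved locally as template.json and that the resource group `exampleRG` (a placeholder) already exists; the parameter names and values mirror the table above:

```azurecli
az deployment group create \
  --resource-group exampleRG \
  --template-file template.json \
  --parameters location=centralus accountType=Standard_LRS kind=StorageV2 \
               accessTier=Hot minimumTlsVersion=TLS1_0 \
               supportsHttpsTrafficOnly=true allowBlobPublicAccess=false \
               allowSharedKeyAccess=true
```

No storage account name is passed because the edited template derives it from `uniqueString()`.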
azure-resource-manager | Quickstart Create Templates Use Visual Studio Code | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/quickstart-create-templates-use-visual-studio-code.md | Title: Create template - Visual Studio Code description: Use Visual Studio Code and the Azure Resource Manager tools extension to work on Azure Resource Manager templates (ARM templates). Previously updated : 08/09/2020 Last updated : 06/27/2022 #Customer intent: As a developer new to Azure deployment, I want to learn how to use Visual Studio Code to create and edit Resource Manager templates, so I can use the templates to deploy Azure resources. -# Quickstart: Create ARM templates with Visual Studio Code +# Quickstart: Create ARM templates with Visual Studio Code -The Azure Resource Manager Tools for Visual Studio Code provide language support, resource snippets, and resource autocompletion. These tools help create and validate Azure Resource Manager templates (ARM templates). In this quickstart, you use the extension to create an ARM template from scratch. While doing so you experience the extensions capabilities such as ARM template snippets, validation, completions, and parameter file support. +The Azure Resource Manager Tools for Visual Studio Code provide language support, resource snippets, and resource autocompletion. These tools help create and validate Azure Resource Manager templates (ARM templates), and are therefore the recommended method for creating and configuring ARM templates. In this quickstart, you use the extension to create an ARM template from scratch. While doing so, you experience the extension's capabilities such as ARM template snippets, validation, completions, and parameter file support. To complete this quickstart, you need [Visual Studio Code](https://code.visualstudio.com/), with the [Azure Resource Manager tools extension](https://marketplace.visualstudio.com/items?itemName=msazurermtools.azurerm-vscode-tools) installed. You also need either the [Azure CLI](/cli/azure/) or the [Azure PowerShell module](/powershell/azure/new-azureps-module-az) installed and authenticated. |
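The prerequisite paragraph above assumes an authenticated Azure CLI or Azure PowerShell session. As a quick sanity check on the CLI side (a sketch, not part of the original quickstart), sign in and confirm the active subscription:

```azurecli
# Sign in interactively, then show the subscription the CLI will target.
az login
az account show --output table
```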
azure-signalr | Concept Upstream | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/concept-upstream.md | The upstream URL is not encrypted at rest. If you have any sensitive informa 2. Grant secret read permission for the managed identity in the Access policies in the Key Vault. See [Assign a Key Vault access policy using the Azure portal](../key-vault/general/assign-access-policy-portal.md) -3. Replace your sensitive text with the syntax `{@Microsoft.KeyVault(SecretUri=<secret-identity>)}` in the Upstream URL Pattern. +3. Replace your sensitive text with the following syntax in the Upstream URL Pattern: + ``` + {@Microsoft.KeyVault(SecretUri=<secret-identity>)} + ``` + `<secret-identity>` is the full data-plane URI of a secret in Key Vault, optionally including a version, e.g., https://myvault.vault.azure.net/secrets/mysecret/ or https://myvault.vault.azure.net/secrets/mysecret/ec96f02080254f109c51a1f14cdb1931 + + For example, a complete reference would look like the following: + ``` + @Microsoft.KeyVault(SecretUri=https://myvault.vault.azure.net/secrets/mysecret/) + ``` > [!NOTE]-> The secret content only rereads when you change the Upstream settings or change the managed identity. Make sure you have granted secret read permission to the managed identity before using the Key Vault secret reference. +> The service rereads the secret content every 30 minutes or whenever the upstream settings or managed identity changes. Try updating the Upstream settings if you'd like an immediate update when the Key Vault content is changed. ### Rule settings |
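As a sketch of the two setup steps described above with the Azure CLI: `myvault`, `mysecret`, and `<principal-id>` are placeholders, and `az keyvault set-policy` applies only when the vault uses access policies (as in the linked article) rather than Azure RBAC:

```azurecli
# Store the sensitive portion of the upstream URL as a Key Vault secret.
az keyvault secret set --vault-name myvault --name mysecret --value "<sensitive-value>"

# Grant the managed identity permission to read secrets.
# <principal-id> is the object ID of the SignalR resource's managed identity.
az keyvault set-policy --name myvault --object-id <principal-id> --secret-permissions get
```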
azure-signalr | Signalr Quickstart Dotnet Core | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/signalr-quickstart-dotnet-core.md | Ready to start? ## Prerequisites * Install the [.NET Core SDK](https://dotnet.microsoft.com/download).-* Download or clone the [AzureSignalR-sample](https://github.com/aspnet/AzureSignalR-samples) GitHub repository. +* Download or clone the [AzureSignalR-sample](https://github.com/aspnet/AzureSignalR-samples) GitHub repository. Having issues? Try the [troubleshooting guide](signalr-howto-troubleshoot-guide.md) or [let us know](https://aka.ms/asrs/qsnetcore). Having issues? Try the [troubleshooting guide](signalr-howto-troubleshoot-guide. ## Create an ASP.NET Core web app -In this section, you use the [.NET Core command-line interface (CLI)](/dotnet/core/tools/) to create an ASP.NET Core MVC web app project. The advantage of using the .NET Core CLI over Visual Studio is that it's available across the Windows, macOS, and Linux platforms. +In this section, you use the [.NET Core command-line interface (CLI)](/dotnet/core/tools/) to create an ASP.NET Core MVC web app project. The advantage of using the .NET Core CLI over Visual Studio is that it's available across the Windows, macOS, and Linux platforms. 1. Create a folder for your project. This quickstart uses the *E:\Testing\chattest* folder. In this section, you'll add the [Secret Manager tool](/aspnet/core/security/app- dotnet add package Microsoft.Azure.SignalR ``` -2. Run the following command to restore packages for your project: +1. Run the following command to restore packages for your project: ```dotnetcli dotnet restore ``` -3. Add a secret named *Azure:SignalR:ConnectionString* to Secret Manager. +1. Prepare the Secret Manager for use with this project. ++ ````dotnetcli + dotnet user-secrets init + ```` ++1. Add a secret named *Azure:SignalR:ConnectionString* to Secret Manager. This secret will contain the connection string to access your SignalR Service resource. *Azure:SignalR:ConnectionString* is the default configuration key that SignalR looks for to establish a connection. Replace the value in the following command with the connection string for your SignalR Service resource. In this section, you'll add the [Secret Manager tool](/aspnet/core/security/app- This secret is accessed with the Configuration API. A colon (:) works in the configuration name with the Configuration API on all supported platforms. See [Configuration by environment](/dotnet/core/extensions/configuration-providers#environment-variable-configuration-provider). --4. Open *Startup.cs* and update the `ConfigureServices` method to use Azure SignalR Service by calling the `AddSignalR()` and `AddAzureSignalR()` methods: +1. Open *Startup.cs* and update the `ConfigureServices` method to use Azure SignalR Service by calling the `AddSignalR()` and `AddAzureSignalR()` methods: ```csharp public void ConfigureServices(IServiceCollection services) In this section, you'll add the [Secret Manager tool](/aspnet/core/security/app- Not passing a parameter to `AddAzureSignalR()` causes this code to use the default configuration key for the SignalR Service resource connection string. The default configuration key is *Azure:SignalR:ConnectionString*. -5. In *Startup.cs*, update the `Configure` method by replacing it with the following code. +1. In *Startup.cs*, update the `Configure` method by replacing it with the following code. 
```csharp public void Configure(IApplicationBuilder app, IWebHostEnvironment env) In this section, you'll add a development runtime environment for ASP.NET Core. } ``` - ## Build and run the app locally 1. To build the app by using the .NET Core CLI, run the following command in the command shell: In this section, you'll add a development runtime environment for ASP.NET Core.  - ## Clean up resources If you plan to continue to the next tutorial, you can keep the resources created in this quickstart and reuse them. |
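The exact secret-setting command is truncated in the excerpt above. For reference, a typical Secret Manager invocation uses `dotnet user-secrets set` with the default configuration key named earlier; run it from the project folder, and treat the connection string value as a placeholder you copy from your SignalR Service resource:

```dotnetcli
dotnet user-secrets set "Azure:SignalR:ConnectionString" "<your-connection-string>"
```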
azure-sql-edge | Backup Restore | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql-edge/backup-restore.md | Title: Back up and restore databases - Azure SQL Edge description: Learn about backup and restore capabilities in Azure SQL Edge. -keywords: -+++ Last updated : 05/19/2020 --- Previously updated : 05/19/2020 + # Back up and restore databases in Azure SQL Edge |
azure-sql-edge | Configure Replication | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql-edge/configure-replication.md | Title: Configure replication to Azure SQL Edge + Title: Configure replication to Azure SQL Edge description: Learn about configuring replication to Azure SQL Edge.-keywords: -+++ Last updated : 05/19/2020 --- Previously updated : 05/19/2020+ # Configure replication to Azure SQL Edge |
azure-sql-edge | Configure | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql-edge/configure.md | Title: Configure Azure SQL Edge description: Learn about configuring Azure SQL Edge. -keywords: -+++ Last updated : 09/22/2020 --- Previously updated : 09/22/2020+ # Configure Azure SQL Edge |
azure-sql-edge | Connect | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql-edge/connect.md | Title: Connect and query Azure SQL Edge description: Learn how to connect to and query Azure SQL Edge. -keywords: -+++ Last updated : 07/25/2020 --- Previously updated : 07/25/2020+ # Connect and query Azure SQL Edge |
azure-sql-edge | Create External Stream Transact Sql | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql-edge/create-external-stream-transact-sql.md | Title: CREATE EXTERNAL STREAM (Transact-SQL) - Azure SQL Edge description: Learn about the CREATE EXTERNAL STREAM statement in Azure SQL Edge -keywords: -+++ Last updated : 07/27/2020 --- Previously updated : 07/27/2020+ # CREATE EXTERNAL STREAM (Transact-SQL) |
azure-sql-edge | Create Stream Analytics Job | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql-edge/create-stream-analytics-job.md | Title: Create a T-SQL streaming job in Azure SQL Edge -description: Learn about creating Stream Analytics jobs in Azure SQL Edge. -keywords: -+ Title: Create a T-SQL streaming job in Azure SQL Edge +description: Learn about creating Stream Analytics jobs in Azure SQL Edge. +++ Last updated : 07/27/2020 --- Previously updated : 07/27/2020+ # Create a data streaming job in Azure SQL Edge |
azure-sql-edge | Data Retention Cleanup | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql-edge/data-retention-cleanup.md | Title: Manage historical data with retention policy - Azure SQL Edge description: Learn how to manage historical data with retention policy in Azure SQL Edge -keywords: SQL Edge, data retention -+++ Last updated : 09/04/2020 --- Previously updated : 09/04/2020+keywords: + - SQL Edge + - data retention + # Manage historical data with retention policy |
azure-sql-edge | Data Retention Enable Disable | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql-edge/data-retention-enable-disable.md | Title: Enable and disable data retention policies - Azure SQL Edge description: Learn how to enable and disable data retention policies in Azure SQL Edge -keywords: SQL Edge, data retention -+++ Last updated : 09/04/2020 --- Previously updated : 09/04/2020+keywords: + - SQL Edge + - data retention + # Enable and disable data retention policies |
azure-sql-edge | Data Retention Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql-edge/data-retention-overview.md | Title: Data retention policy overview - Azure SQL Edge description: Learn about the data retention policy in Azure SQL Edge -keywords: SQL Edge, data retention -+++ Last updated : 09/04/2020 --- Previously updated : 09/04/2020+keywords: + - SQL Edge + - data retention + # Data retention overview |
azure-sql-edge | Date Bucket Tsql | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql-edge/date-bucket-tsql.md | Title: Date_Bucket (Transact-SQL) - Azure SQL Edge description: Learn about using Date_Bucket in Azure SQL Edge -keywords: Date_Bucket, SQL Edge -+++ Last updated : 09/03/2020 --- Previously updated : 09/03/2020+keywords: + - Date_Bucket + - SQL Edge + # Date_Bucket (Transact-SQL) |
azure-sql-edge | Deploy Dacpac | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql-edge/deploy-dacpac.md | Title: Using SQL Database DACPAC and BACPAC packages - Azure SQL Edge description: Learn about using dacpacs and bacpacs in Azure SQL Edge -keywords: SQL Edge, sqlpackage -+++ Last updated : 09/03/2020 --- Previously updated : 09/03/2020+keywords: + - SQL Edge + - sqlpackage + # SQL Database DACPAC and BACPAC packages in SQL Edge |
azure-sql-edge | Deploy Kubernetes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql-edge/deploy-kubernetes.md | Title: Deploy an Azure SQL Edge container in Kubernetes - Azure SQL Edge description: Learn about deploying an Azure SQL Edge container in Kubernetes -keywords: SQL Edge, container, kubernetes -+++ Last updated : 09/22/2020 --- Previously updated : 09/22/2020+keywords: + - SQL Edge + - container + - kubernetes + # Deploy an Azure SQL Edge container in Kubernetes |
azure-sql-edge | Deploy Onnx | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql-edge/deploy-onnx.md | Title: Deploy and make predictions with ONNX description: Learn how to train a model, convert it to ONNX, deploy it to Azure SQL Edge, and then run native PREDICT on data using the uploaded ONNX model. -keywords: deploy SQL Edge - -+ Last updated 06/21/2022+ms.technology: machine-learning + +keywords: deploy SQL Edge # Deploy and make predictions with an ONNX model and SQL machine learning |
azure-sql-edge | Deploy Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql-edge/deploy-portal.md | Title: Deploy Azure SQL Edge using the Azure portal description: Learn how to deploy Azure SQL Edge using the Azure portal -keywords: deploy SQL Edge -+++ Last updated : 09/22/2020 --- Previously updated : 09/22/2020+keywords: deploy SQL Edge + # Deploy Azure SQL Edge |
azure-sql-edge | Disconnected Deployment | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql-edge/disconnected-deployment.md | Title: Deploy Azure SQL Edge with Docker - Azure SQL Edge description: Learn about deploying Azure SQL Edge with Docker -keywords: SQL Edge, container, docker -+++ Last updated : 09/22/2020 --- Previously updated : 09/22/2020 +keywords: + - SQL Edge + - container + - docker + # Deploy Azure SQL Edge with Docker |
azure-sql-edge | Drop External Stream Transact Sql | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql-edge/drop-external-stream-transact-sql.md | Title: DROP EXTERNAL STREAM (Transact-SQL) - Azure SQL Edge -description: Learn about the DROP EXTERNAL STREAM statement in Azure SQL Edge -keywords: -+description: Learn about the DROP EXTERNAL STREAM statement in Azure SQL Edge +++ Last updated : 05/19/2020 --- Previously updated : 05/19/2020+ # DROP EXTERNAL STREAM (Transact-SQL) |
azure-sql-edge | Features | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql-edge/features.md | Title: Supported features of Azure SQL Edge + Title: Supported features of Azure SQL Edge description: Learn about details of features supported by Azure SQL Edge.-keywords: introduction to SQL Edge, what is SQL Edge, SQL Edge overview -+++ Last updated : 09/03/2020 --- Previously updated : 09/03/2020+keywords: + - introduction to SQL Edge + - what is SQL Edge + - SQL Edge overview + # Supported features of Azure SQL Edge |
azure-sql-edge | High Availability Sql Edge Containers | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql-edge/high-availability-sql-edge-containers.md | Title: High availability for Azure SQL Edge containers - Azure SQL Edge description: Learn about high availability for Azure SQL Edge containers -keywords: SQL Edge, containers, high availability -+++ Last updated : 09/22/2020 --- Previously updated : 09/22/2020+keywords: + - SQL Edge + - containers + - high availability + # High availability for Azure SQL Edge containers |
azure-sql-edge | Imputing Missing Values | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql-edge/imputing-missing-values.md | Title: Filling time gaps and imputing missing values - Azure SQL Edge description: Learn about filling time gaps and imputing missing values in Azure SQL Edge -keywords: SQL Edge, timeseries -+++ Last updated : 09/22/2020 --- Previously updated : 09/22/2020+keywords: + - SQL Edge + - timeseries + # Filling time gaps and imputing missing values |
azure-sql-edge | Onnx Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql-edge/onnx-overview.md | Title: Machine learning and AI with ONNX in Azure SQL Edge description: Machine learning in Azure SQL Edge supports models in the Open Neural Network Exchange (ONNX) format. ONNX is an open format you can use to interchange models between various machine learning frameworks and tools. -keywords: deploy SQL Edge ---- -+ Last updated 06/21/2022++++keywords: deploy SQL Edge + # Machine learning and AI with ONNX in SQL Edge |
azure-sql-edge | Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql-edge/overview.md | Title: What is Azure SQL Edge? + Title: What is Azure SQL Edge? description: Learn about Azure SQL Edge-keywords: introduction to SQL Edge,what is SQL Edge, SQL Edge overview -+++ Last updated : 05/19/2020 --- Previously updated : 05/19/2020+keywords: + - introduction to SQL Edge + - what is SQL Edge + - SQL Edge overview + # What is Azure SQL Edge? |
azure-sql-edge | Performance Best Practices | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql-edge/performance-best-practices.md | Title: Performance best practices and configuration guidelines - Azure SQL Edge description: Learn about performance best practices and configuration guidelines in Azure SQL Edge -keywords: SQL Edge, data retention -+++ Last updated : 09/22/2020 --- Previously updated : 09/22/2020+keywords: + - SQL Edge + - data retention + # Performance best practices and configuration guidelines |
azure-sql-edge | Release Notes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql-edge/release-notes.md | Title: Release notes for Azure SQL Edge -description: Release notes detailing what's new or what has changed in the Azure SQL Edge images. -keywords: release notes SQL Edge ----+ Title: Release notes for Azure SQL Edge +description: Release notes detailing what's new or what has changed in the Azure SQL Edge images. --++ Last updated 6/21/2022+++keywords: release notes SQL Edge + # Azure SQL Edge release notes |
azure-sql-edge | Resources Partners Security | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql-edge/resources-partners-security.md | Title: External partners for security solutions for Azure SQL Edge -description: Providing details about external partners who are working with Azure SQL Edge -keywords: security partners Azure SQL Edge ----+description: Providing details about external partners who are working with Azure SQL Edge --++ Last updated 10/09/2020+++keywords: security partners Azure SQL Edge + # Azure SQL Edge security partners |
azure-sql-edge | Security Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql-edge/security-overview.md | Title: Secure Azure SQL Edge + Title: Secure Azure SQL Edge description: Learn about security in Azure SQL Edge-keywords: SQL Edge, security -+++ Last updated : 09/22/2020 --- Previously updated : 09/22/2020+keywords: + - SQL Edge + - security + # Securing Azure SQL Edge |
azure-sql-edge | Stream Data | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql-edge/stream-data.md | Title: Data streaming in Azure SQL Edge description: Learn about data streaming in Azure SQL Edge. -keywords: -+++ Last updated : 07/08/2022 --- Previously updated : 07/08/2022+ # Data streaming in Azure SQL Edge |
azure-sql-edge | Streaming Catalog Views | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql-edge/streaming-catalog-views.md | Title: Streaming catalog views (Transact-SQL) - Azure SQL Edge description: Learn about the available streaming catalog views and dynamic management views in Azure SQL Edge -keywords: sys.external_streams, SQL Edge -+++ Last updated : 05/19/2019 --- Previously updated : 05/19/2019+keywords: + - sys.external_streams + - SQL Edge + # Streaming Catalog Views (Transact-SQL) |
azure-sql-edge | Sys External Job Streams | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql-edge/sys-external-job-streams.md | Title: sys.external_job_streams (Transact-SQL) - Azure SQL Edge description: Learn about using sys.external_job_streams in Azure SQL Edge -keywords: sys.external_job_streams, SQL Edge -+++ Last updated : 05/19/2019 --- Previously updated : 05/19/2019+keywords: + - sys.external_job_streams + - SQL Edge + # sys.external_job_streams (Transact-SQL) |
azure-sql-edge | Sys External Streaming Jobs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql-edge/sys-external-streaming-jobs.md | Title: sys.external_streaming_jobs (Transact-SQL) - Azure SQL Edge description: Learn about using sys.external_streaming_jobs in Azure SQL Edge -keywords: sys.external_streaming_jobs, SQL Edge -+++ Last updated : 05/19/2019 --- Previously updated : 05/19/2019+keywords: + - sys.external_streaming_jobs + - SQL Edge + # sys.external_streaming_jobs (Transact-SQL) |
azure-sql-edge | Sys External Streams | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql-edge/sys-external-streams.md | Title: sys.external_streams (Transact-SQL) - Azure SQL Edge description: Learn about using sys.external_streams in Azure SQL Edge -keywords: sys.external_streams, SQL Edge -+++ Last updated : 05/19/2019 --- Previously updated : 05/19/2019+keywords: + - sys.external_streams + - SQL Edge + # sys.external_streams (Transact-SQL) |
azure-sql-edge | Sys Sp Cleanup Data Retention | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql-edge/sys-sp-cleanup-data-retention.md | Title: sys.sp_cleanup_data_retention (Transact-SQL) - Azure SQL Edge description: Learn about using sys.sp_cleanup_data_retention (Transact-SQL) in Azure SQL Edge -keywords: sys.sp_cleanup_data_retention (Transact-SQL), SQL Edge -+++ Last updated : 09/22/2020 --- Previously updated : 09/22/2020+keywords: + - sys.sp_cleanup_data_retention (Transact-SQL) + - SQL Edge + # sys.sp_cleanup_data_retention (Transact-SQL) |
azure-sql-edge | Track Data Changes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql-edge/track-data-changes.md | Title: Track data changes in Azure SQL Edge description: Learn about change tracking and change data capture in Azure SQL Edge. -keywords: -+++ Last updated : 05/19/2020 --- Previously updated : 05/19/2020+ # Track data changes in Azure SQL Edge |
azure-sql-edge | Troubleshoot | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql-edge/troubleshoot.md | Title: Troubleshooting Azure SQL Edge deployments description: Learn about possible errors when deploying Azure SQL Edge -keywords: SQL Edge, troubleshooting, deployment errors -+++ Last updated : 09/22/2020 --- Previously updated : 09/22/2020+keywords: + - SQL Edge + - troubleshooting + - deployment errors + # Troubleshooting Azure SQL Edge deployments |
azure-sql-edge | Tutorial Deploy Azure Resources | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql-edge/tutorial-deploy-azure-resources.md | Title: Set up resources for deploying an ML model in Azure SQL Edge description: In part one of this three-part Azure SQL Edge tutorial for predicting iron ore impurities, you'll install the prerequisite software and set up required Azure resources for deploying a machine learning model in Azure SQL Edge. -keywords: --- --++ Last updated 05/19/2020+++ # Install software and set up resources for the tutorial |
azure-sql-edge | Tutorial Renewable Energy Demo | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql-edge/tutorial-renewable-energy-demo.md | Title: Deploying Azure SQL Edge on turbines in a Contoso wind farm description: In this tutorial, you'll use Azure SQL Edge for wake-detection on the turbines in a Contoso wind farm. -keywords: --- --++ Last updated 12/18/2020+++ # Using Azure SQL Edge to build smarter renewable resources |
azure-sql-edge | Tutorial Run Ml Model On Sql Edge | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql-edge/tutorial-run-ml-model-on-sql-edge.md | Title: Deploy ML model on Azure SQL Edge using ONNX + Title: Deploy ML model on Azure SQL Edge using ONNX description: In part three of this three-part Azure SQL Edge tutorial for predicting iron ore impurities, you'll run the ONNX machine learning models on SQL Edge.-keywords: --- --++ Last updated 05/19/2020+++ # Deploy ML model on Azure SQL Edge using ONNX |
azure-sql-edge | Tutorial Set Up Iot Edge Modules | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql-edge/tutorial-set-up-iot-edge-modules.md | Title: Set up IoT Edge modules in Azure SQL Edge description: In part two of this three-part Azure SQL Edge tutorial for predicting iron ore impurities, you'll set up IoT Edge modules and connections. -keywords: --- --++ Last updated 09/22/2020+++ # Set up IoT Edge modules and connections |
azure-sql-edge | Tutorial Sync Data Factory | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql-edge/tutorial-sync-data-factory.md | Title: Sync data from Azure SQL Edge by using Azure Data Factory description: Learn about syncing data between Azure SQL Edge and Azure Blob storage -keywords: SQL Edge,sync data from SQL Edge, SQL Edge data factory -+++ Last updated : 05/19/2020 --- Previously updated : 05/19/2020+keywords: + - SQL Edge + - sync data from SQL Edge + - SQL Edge data factory + # Tutorial: Sync data from SQL Edge to Azure Blob storage by using Azure Data Factory |
azure-sql-edge | Tutorial Sync Data Sync | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql-edge/tutorial-sync-data-sync.md | Title: Sync data from Azure SQL Edge by using SQL Data Sync description: Learn about syncing data from Azure SQL Edge by using Azure SQL Data Sync -keywords: SQL Edge,sync data from SQL Edge, SQL Edge data sync -+++ Last updated : 05/19/2020 --- Previously updated : 05/19/2020+keywords: + - SQL Edge + - sync data from SQL Edge + - SQL Edge data sync + # Tutorial: Sync data from SQL Edge to Azure SQL Database by using SQL Data Sync |
azure-sql-edge | Usage And Diagnostics Data Configuration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql-edge/usage-and-diagnostics-data-configuration.md | Title: Azure SQL Edge usage and diagnostics data configuration description: Learn how to configure usage and diagnostics data in Azure SQL Edge.-+++ Last updated : 08/04/2020 --- Previously updated : 08/04/2020+ # Azure SQL Edge usage and diagnostics data configuration |
azure-video-indexer | Connect To Azure | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/connect-to-azure.md | If your storage account is behind a firewall, see [storage account that is behin :::image type="content" alt-text="Screenshot that shows how to use the classic API." source="./media/create-account/enable-classic-api.png"::: - When creating a storage account for your Media Services account, select **StorageV2** for account kind and **Geo-redundant** (GRS) for replication fields. -- :::image type="content" alt-text="Screenshot that shows how to specify a storage account." source="./media/create-account/create-new-ams-account.png"::: - > [!NOTE] > Make sure to write down the Media Services resource and account names. 1. Before you can play your videos in the Azure Video Indexer web app, you must start the default **Streaming Endpoint** of the new Media Services account. The following Azure Media Services-related considerations apply: * If you plan to connect to an existing Media Services account, make sure the Media Services account was created with the classic APIs. -* If you connect to an existing Media Services account, Azure Video Indexer doesn't change the existing media **Reserved Units** configuration. -- You might need to adjust the type and number of Media Reserved Units according to your planned load. Keep in mind that if your load is high and you don't have enough units or speed, video processing can result in timeout failures. * If you connect to a new Media Services account, Azure Video Indexer automatically starts the default **Streaming Endpoint** in it:  To create a paid account in Azure Government, follow the instructions in [Create ### Limitations of Azure Video Indexer on Azure Government -* No manual content moderation available in Government cloud. +* Only paid accounts (ARM or classic) are available on Azure Government. +* No manual content moderation is available in the Government cloud. In the public cloud, when content is deemed offensive by content moderation, the customer can ask for a human to review that content and potentially reverse that decision.-* No trial accounts. * Bing description - in the Gov cloud, we don't present descriptions of identified celebrities and named entities. This is a UI-only capability. ## Clean up resources |
azure-vmware | Backup Azure Netapp Files Datastores Vms | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/backup-azure-netapp-files-datastores-vms.md | - Title: Back up Azure NetApp Files datastores and VMs using Cloud Backup -description: Learn how to back up datastores and Virtual Machines to the cloud. -- Previously updated : 08/12/2022---# Back up Azure NetApp Files datastores and VMs using Cloud Backup for Virtual Machines --From the VMware vSphere client, you can back up datastores and Virtual Machines (VMs) to the cloud. --## Configure subscriptions --Before you back up your Azure NetApp Files datastores, you must add your Azure and Azure NetApp Files cloud subscriptions. --### Add Azure cloud subscription --1. Sign in to the VMware vSphere client. -2. From the left navigation, select **Cloud Backup for Virtual Machines**. -3. Select the **Settings** page and then select the **Cloud Subscription** tab. -4. Select **Add** and then provide the required values from your Azure subscription. --### Add Azure NetApp Files cloud subscription account --1. From the left navigation, select **Cloud Backup for Virtual Machines**. -2. Select **Storage Systems**. -3. Select **Add** to add the Azure NetApp Files cloud subscription account details. -4. Provide the required values and then select **Add** to save your settings. --## Create a backup policy --You must create backup policies before you can use Cloud Backup for Virtual Machines to back up Azure NetApp Files datastores and VMs. --1. In the left navigation of the vCenter web client page, select **Cloud Backup for Virtual Machines** > **Policies**. -2. On the **Policies** page, select **Create** to initiate the wizard. -3. On the **New Backup Policy** page, select the vCenter Server that will use the policy, then enter the policy name and a description. -* **Only alphanumeric characters and underscores (_) are supported in VM, datastore, cluster, policy, backup, or resource group names.** Other special characters are not supported. -4. Specify the retention settings. - The maximum retention value is 255 backups. If the **"Backups to keep"** option is selected during the backup operation, Cloud Backup for Virtual Machines will retain backups with the specified retention count and delete the backups that exceed the retention count. -5. Specify the frequency settings. - The policy specifies the backup frequency only. The specific protection schedule for backing up is defined in the resource group. Therefore, two or more resource groups can share the same policy and backup frequency but have different backup schedules. -6. **Optional:** In the **Advanced** fields, select the fields that are needed. The Advanced field details are listed in the following table. -- | Field | Action | - | - | - | - | VM consistency | Check this box to pause the VMs and create a VMware snapshot each time the backup job runs. <br> When you check the VM consistency box, backup operations might take longer and require more storage space. In this scenario, the VMs are first paused, then VMware performs a VM consistent snapshot. Cloud Backup for Virtual Machines then performs its backup operation, and then VM operations are resumed. <br> VM guest memory is not included in VM consistency snapshots. | - | Include datastores with independent disks | Check this box to include any datastores with independent disks that contain temporary data in your backup. 
| - | Scripts | Enter the fully qualified path of the prescript or postscript that you want the Cloud Backup for Virtual Machines to run before or after backup operations. For example, you can run a script to update Simple Network Management Protocol (SNMP) traps, automate alerts, and send logs. The script path is validated at the time the script is executed. <br> **NOTE**: Prescripts and postscripts must be located on the virtual appliance VM. To enter multiple scripts, press **Enter** after each script path to list each script on a separate line. The semicolon (;) character is not allowed. | -7. Select **Add** to save your policy. - You can verify that the policy has been created successfully and review the policy configuration by selecting the policy in the **Policies** page. --## Resource groups --A resource group is the container for VMs and datastores that you want to protect. --Do not add VMs in an inaccessible state to a resource group. Although a resource group can contain a VM in an inaccessible state, the inaccessible state will cause backups for the resource group to fail. --### Considerations for resource groups --You can add or remove resources from a resource group at any time. -* **Back up a single resource:** To back up a single resource (for example, a single VM), you must create a resource group that contains that single resource. -* **Back up multiple resources:** To back up multiple resources, you must create a resource group that contains multiple resources. -* **Optimize snapshot copies:** To optimize snapshot copies, group the VMs and datastores that are associated with the same volume into one resource group. -* **Backup policies:** Although it's possible to create a resource group without a backup policy, you can only perform scheduled data protection operations when at least one policy is attached to the resource group. You can use an existing policy, or you can create a new policy while creating a resource group. -* **Compatibility checks:** Cloud Backup for VMs performs compatibility checks when you create a resource group. Reasons for incompatibility might be: - * Virtual machine disks (VMDKs) are on unsupported storage. - * A shared PCI device is attached to a VM. - * You have not added the Azure subscription account. --### Create a resource group using the wizard --1. In the left navigation of the vCenter web client page, select **Cloud Backup for Virtual Machines** > **Resource Groups**. Then select **+ Create** to start the wizard. -- :::image type="content" source="./media/cloud-backup/vsphere-create-resource-group.png" alt-text="Screenshot of the vSphere Client Resource Group interface where a red box highlights a button with a green plus sign that reads Create, instructing you to select this button." lightbox="./media/cloud-backup/vsphere-create-resource-group.png"::: - -1. On the **General Info & Notification** page in the wizard, enter the required values. -1. On the **Resource** page, do the following: -- | Field | Action | - | -- | -- | - | Scope | Select the type of resource you want to protect: <ul><li>Datastores</li><li>Virtual Machines</li></ul> | - | Datacenter | Navigate to the VMs or datastores | - | Available entities | Select the resources you want to protect. Then select **>** to move your selections to the Selected entities list. | -- When you select **Next**, the system first checks that Cloud Backup for Virtual Machines manages and is compatible with the storage on which the selected resources are located. 
- - >[!IMPORTANT] - >If you receive the message `selected <resource-name> is not Cloud Backup for Virtual Machines compatible` then a selected resource is not compatible with Cloud Backup for Virtual Machines. --1. On the **Spanning disks** page, select an option for VMs with multiple VMDKs across multiple datastores: - * Always exclude all spanning datastores - (This is the default option for datastores) - * Always include all spanning datastores - (This is the default for VMs) - * Manually select the spanning datastores to be included -1. On the **Policies** page, select or create one or more backup policies. - * To use **an existing policy**, select one or more policies from the list. - * To **create a new policy**: - 1. Select **+ Create**. - 1. Complete the New Backup Policy wizard to return to the Create Resource Group wizard. -1. On the **Schedules** page, configure the backup schedule for each selected policy. - In the **Starting** field, enter a date and time other than zero. The date must be in the format day/month/year. You must fill in each field. The Cloud Backup for Virtual Machines creates schedules in the time zone in which the Cloud Backup for Virtual Machines is deployed. You can modify the time zone by using the Cloud Backup for Virtual Machines GUI. -- :::image type="content" source="./media/cloud-backup/backup-schedules.png" alt-text="A screenshot of the Backup schedules interface showing an hourly backup beginning at 10:22 a.m. on April 26, 2022." lightbox="./media/cloud-backup/backup-schedules.png"::: -1. Review the summary. If you need to change any information, you can return to any page in the wizard to do so. Select **Finish** to save your settings. -- After you select **Finish**, the new resource group will be added to the resource group list. -- If the pause operation fails for any of the VMs in the backup, then the backup is marked as not VM-consistent even if the policy selected has VM consistency selected. In this case, it's possible that some of the VMs were successfully paused. --### Other ways to create a resource group --In addition to using the wizard, you can: -* **Create a resource group for a single VM:** - 1. Select **Menu** > **Hosts and Clusters**. - 1. Right-click the Virtual Machine you want to create a resource group for and select **Cloud Backup for Virtual Machines**. Select **+ Create**. -* **Create a resource group for a single datastore:** - 1. Select **Menu** > **Hosts and Clusters**. - 1. Right-click a datastore, then select **Cloud Backup for Virtual Machines**. Select **+ Create**. --## Back up resource groups --Backup operations are performed on all the resources defined in a resource group. If a resource group has a policy attached and a schedule configured, backups occur automatically according to the schedule. --### Prerequisites --* You must have created a resource group with a policy attached. - Do not start an on-demand backup job when a job to back up the Cloud Backup for Virtual Machines MySQL database is already running. Use the maintenance console to see the configured backup schedule for the MySQL database. --### Back up resource groups on demand --1. In the left navigation of the vCenter web client page, select **Cloud Backup for Virtual Machines** > **Resource Groups**, then select a resource group. Select **Run Now** to start the backup. -- :::image type="content" source="./media/cloud-backup/resource-groups-run-now.png" alt-text="Image of the vSphere Client Resource Group interface. 
At the top left, a red box highlights a green circular button with a white arrow inside next to text reading Run Now, instructing you to select this button." lightbox="./media/cloud-backup/resource-groups-run-now.png"::: - - 1.1 If the resource group has multiple policies configured, then in the **Backup Now** dialog box, select the policy you want to use for this backup operation. -1. Select **OK** to initiate the backup. - >[!NOTE] - >You can't rename a backup once it is created. -1. **Optional:** Monitor the operation progress by selecting **Recent Tasks** at the bottom of the window or on the dashboard Job Monitor for more details. - If the pause operation fails for any of the VMs in the backup, then the backup completes with a warning and is marked as not VM-consistent even if the selected policy has VM consistency selected. In this case, it is possible that some of the VMs were successfully paused. In the job monitor, the failed VM details will show the pause operation as failed. --## Next steps --* [Restore VMs using Cloud Backup for Virtual Machines](restore-azure-netapp-files-vms.md) |
azure-vmware | Install Cloud Backup Virtual Machines | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/install-cloud-backup-virtual-machines.md | - Title: Install Cloud Backup for Virtual Machines -description: Cloud Backup for Virtual Machines is a plug-in installed in the Azure VMware Solution that enables you to back up and restore Azure NetApp Files datastores and virtual machines. -- Previously updated : 08/10/2022---# Install Cloud Backup for Virtual Machines --Cloud Backup for Virtual Machines is a plug-in installed in the Azure VMware Solution that enables you to back up and restore Azure NetApp Files datastores and virtual machines (VMs). --Use Cloud Backup for VMs to: -* Build and securely connect both legacy and cloud-native workloads across environments and unify operations -* Provision and resize datastore volumes right from the Azure portal -* Take VM consistent snapshots for quick checkpoints -* Quickly recover VMs --## Prerequisites --Before you can install Cloud Backup for Virtual Machines, you need to create an Azure service principal with the required Azure NetApp Files privileges. If you've already created one, you can skip to the installation steps below. --## Install Cloud Backup for Virtual Machines using the Azure portal --You'll need to install Cloud Backup for Virtual Machines through the Azure portal as an add-on. --1. Sign in to your Azure VMware Solution private cloud. -1. Select **Run command** > **Packages** > **NetApp.CBS.AVS** > **Install-NetAppCBSA**. - - :::image type="content" source="./media/cloud-backup/run-command.png" alt-text="Screenshot of the Azure interface that shows the configure signal logic step with a backdrop of the Create alert rule page." lightbox="./media/cloud-backup/run-command.png"::: --1. Provide the required values, then select **Run**. -- :::image type="content" source="./media/cloud-backup/run-commands-fields.png" alt-text="Image of the Run Command fields, which are described in the table below." lightbox="./media/cloud-backup/run-commands-fields.png"::: -- | Field | Value | - | | -- | - | ApplianceVirtualMachineName | VM name for the appliance. | - | EsxiCluster | Destination ESXi cluster name to be used for deploying the appliance. | - | VmDatastore | Datastore to be used for the appliance. | - | NetworkMapping | Destination network to be used for the appliance. | - | ApplianceNetworkName | Network name to be used for the appliance. | - | ApplianceIPAddress | IPv4 address to be used for the appliance. | - | Netmask | Subnet mask. | - | Gateway | Gateway IP address. | - | PrimaryDNS | Primary DNS server IP address. | - | ApplianceUser | User account for hosting API services in the appliance. | - | AppliancePassword | Password of the user hosting API services in the appliance. | - | MaintenanceUserPassword | Password of the appliance maintenance user. | -- >[!IMPORTANT] - >You can also install Cloud Backup for Virtual Machines using DHCP by running the package `NetAppCBSApplianceUsingDHCP`. If you install Cloud Backup for Virtual Machines using DHCP, you don't need to provide the values for the PrimaryDNS, Gateway, Netmask, and ApplianceIPAddress fields. These values will be automatically generated. --1. Check **Notifications** or the **Run Execution Status** tab to see the progress. For more information about the status of the execution, see [Run command in Azure VMware Solution](concepts-run-command.md). 
- -Upon successful execution, the Cloud Backup for Virtual Machines will automatically be displayed in the VMware vSphere client. --## Upgrade Cloud Backup for Virtual Machines --You can execute this run command to upgrade the Cloud Backup for Virtual Machines to the next available version. -->[!IMPORTANT] -> Before you initiate the upgrade, you must: -> * Back up the MySQL database of Cloud Backup for Virtual Machines. -> * Take snapshot copies of Cloud Backup for Virtual Machines. --1. Select **Run command** > **Packages** > **NetApp.CBS.AVS** > **Invoke-UpgradeNetAppCBSAppliance**. --1. Provide the required values, and then select **Run**. --1. Check **Notifications** or the **Run Execution Status** pane to monitor the progress. --## Uninstall Cloud Backup for Virtual Machines --You can execute the run command to uninstall Cloud Backup for Virtual Machines. --> [!IMPORTANT] -> Before you initiate the uninstallation, you must: -> * Back up the MySQL database of Cloud Backup for Virtual Machines. -> * Ensure that there are no other VMs tagged with the VMware vSphere tag `AVS_ANF_CLOUD_ADMIN_VM_TAG`. All VMs with this tag will be deleted when you uninstall. --1. Select **Run command** > **Packages** > **NetApp.CBS.AVS** > **Uninstall-NetAppCBSAppliance**. --1. Provide the required values, and then select **Run**. --1. Check **Notifications** or the **Run Execution Status** pane to monitor the progress. --## Change vCenter account password --You can execute this command to reset the vCenter account password: --1. Select **Run command** > **Packages** > **NetApp.CBS.AVS** > **Invoke-ResetNetAppCBSApplianceVCenterPasswordA**. --1. Provide the required values, then select **Run**. --1. Check **Notifications** or the **Run Execution Status** pane to monitor the progress. --## Next steps --* [Back up Azure NetApp Files datastores and VMs using Cloud Backup for Virtual Machines](backup-azure-netapp-files-datastores-vms.md) -* [Restore VMs using Cloud Backup for Virtual Machines](restore-azure-netapp-files-vms.md) |
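The Prerequisites section above calls for an Azure service principal before installation. A minimal Azure CLI sketch follows; the name and scope are placeholders, and the role shown is only illustrative, so grant whatever Azure NetApp Files privileges the plug-in actually requires:

```azurecli
# Create a service principal scoped to the subscription (placeholder values).
# Replace <subscription-id>, and assign the privileges your deployment needs.
az ad sp create-for-rbac \
  --name "cloud-backup-for-vms-sp" \
  --role "Contributor" \
  --scopes "/subscriptions/<subscription-id>"
```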
azure-vmware | Restore Azure Netapp Files Vms | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/restore-azure-netapp-files-vms.md | - Title: Restore VMs using Cloud Backup for Virtual Machines -description: Learn how to restore virtual machines from a cloud backup to the vCenter. -- Previously updated : 08/12/2022---# Restore VMs using Cloud Backup for Virtual Machines --Cloud Backup for Virtual Machines enables you to restore virtual machines (VMs) from the cloud backup to the vCenter. --This topic covers how to: -* Restore VMs from backups -* Restore deleted VMs from backups -* Restore VM disks (VMDKs) from backups -* Recover the Cloud Backup for Virtual Machines internal database --## Restore VMs from backups --When you restore a VM, you can overwrite the existing content with the backup copy that you select or you can restore to a new VM. --You can restore VMs to the original datastore mounted on the original ESXi host (this overwrites the original VM). --## Prerequisites to restore VMs --* A backup must exist: you must have created a backup of the VM using the Cloud Backup for Virtual Machines before you can restore the VM. ->[!NOTE] ->Restore operations cannot finish successfully if there are snapshots of the VM that were performed by software other than the Cloud Backup for Virtual Machines. -* The VM must not be in transit: the VM that you want to restore must not be in a state of vMotion or Storage vMotion. -* High Availability (HA) configuration errors: ensure there are no HA configuration errors displayed on the vCenter ESXi Host Summary screen before restoring backups to a different location. --### Considerations for restoring VMs from backups --* VM is unregistered and registered again: The restore operation for VMs unregisters the original VM, restores the VM from a backup snapshot, and registers the restored VM with the same name and configuration on the same ESXi server. You must manually add the VMs to resource groups after the restore. -* Restoring datastores: You cannot restore a datastore, but you can restore any VM in the datastore. -* VMware consistency snapshot failures for a VM: Even if a VMware consistency snapshot for a VM fails, the VM is nevertheless backed up. You can view the entities contained in the backup copy in the Restore wizard and use it for restore operations. --### Restore a VM from a backup --1. In the VMware vSphere web client GUI, select **Menu** in the toolbar. Select **Inventory** and then **Virtual Machines and Templates**. -1. In the left navigation, right-click a Virtual Machine, then select **NetApp Cloud Backup**. In the drop-down list, select **Restore** to initiate the wizard. -1. In the Restore wizard, on the **Select Backup** page, select the backup snapshot copy that you want to restore. - > [!NOTE] - > You can search for a specific backup name or a partial backup name, or you can filter the backup list by selecting the filter icon and then choosing a date and time range, selecting whether you want backups that contain VMware snapshots, whether you want mounted backups, and the location. Select **OK** to return to the wizard. -1. On the **Select Scope** page, select **Entire Virtual Machine** in the **Restore scope** field, then select **Restore location**, and then enter the destination ESXi information where the backup should be mounted. -1. When restoring partial backups, the restore operation skips the Select Scope page. -1. 
Enable the **Restart VM** checkbox if you want the VM to be powered on after the restore operation. -1. On the **Select Location** page, select the location for the primary or secondary location. -1. Review the **Summary** page and then select **Finish**. -1. **Optional:** Monitor the operation progress by selecting Recent Tasks at the bottom of the screen. -1. Although the VMs are restored, they are not automatically added to their former resource groups. Therefore, you must manually add the restored VMs to the appropriate resource groups. --## Restore deleted VMs from backups --You can restore a deleted VM from a datastore primary or secondary backup to an ESXi host that you select. You can also restore VMs to the original datastore mounted on the original ESXi host, which creates a clone of the VM. --## Prerequisites to restore deleted VMs --* You must have added the Azure cloud subscription account. - The user account in vCenter must have the minimum vCenter privileges required for Cloud Backup for Virtual Machines. -* A backup must exist. - You must have created a backup of the VM using the Cloud Backup for Virtual Machines before you can restore the VMDKs on that VM. --### Considerations for restoring deleted VMs --You cannot restore a datastore, but you can restore any VM in the datastore. --### Restore deleted VMs --1. Select **Menu** and then select the **Inventory** option. -1. Select a datastore, then select the **Configure** tab, then the **Backups in the Cloud Backup for Virtual Machines** section. -1. Select (double-click) a backup to see a list of all VMs that are included in the backup. -1. Select the deleted VM from the backup list and then select **Restore**. -1. On the **Select Scope** page, select **Entire Virtual Machine** in the **Restore scope field**, then select the restore location, and then enter the destination ESXi information where the backup should be mounted. -1. Enable the **Restart VM** checkbox if you want the VM to be powered on after the restore operation. -1. On the **Select Location** page, select the location of the backup that you want to restore to. -1. Review the **Summary** page, then select **Finish**. --## Restore VMDKs from backups --You can restore existing VMDKs or deleted or detached VMDKs from either a primary or secondary backup. You can restore one or more VMDKs on a VM to the same datastore. --## Prerequisites to restore VMDKs --* A backup must exist. - You must have created a backup of the VM using the Cloud Backup for Virtual Machines. -* The VM must not be in transit. - The VM that you want to restore must not be in a state of vMotion or Storage vMotion. --### Considerations for restoring VMDKs --* If the VMDK is deleted or detached from the VM, then the restore operation attaches the VMDK to the VM. -* Attach and restore operations connect VMDKs using the default SCSI controller. VMDKs that are attached to a VM with an NVMe controller are backed up, but for attach and restore operations they are connected back using a SCSI controller. --### Restore VMDKs --1. In the VMware vSphere web client GUI, select **Menu** in the toolbar. Select **Inventory**, then **Virtual Machines and Templates**. -1. In the left navigation, right-click a VM and select **NetApp Cloud Backup**. In the drop-down list, select **Restore**. -1. In the Restore wizard, on the **Select Backup** page, select the backup copy from which you want to restore. 
To find the backup, do one of the following: - * Search for a specific backup name or a partial backup name - * Filter the backup list by selecting the filter icon and a date and time range. Select if you want backups that contain VMware snapshots, if you want mounted backups, and the primary location. - Select **OK** to return to the wizard. -1. On the **Select Scope** page, select **Particular virtual disk** in the Restore scope field, then select the virtual disk and destination datastore. -1. On the **Select Location** page, select the snapshot copy that you want to restore. -1. Review the **Summary** page and then select **Finish**. -1. **Optional:** Monitor the operation progress by selecting Recent Tasks at the bottom of the screen. --## Recovery of Cloud Backup for Virtual Machines internal database --You can use the maintenance console to restore a specific backup of the MySQL database (also called an NSM database) for Cloud Backup for Virtual Machines. --1. Open a maintenance console window. -1. From the main menu, enter option **1) Application Configuration**. -1. From the Application Configuration menu, enter option **6) MySQL backup and restore**. -1. From the MySQL Backup and Restore Configuration menu, enter option **2) List MySQL backups**. Make note of the backup you want to restore. -1. From the MySQL Backup and Restore Configuration menu, enter option **3) Restore MySQL backup**. -1. At the prompt "Restore using the most recent backup," enter **n**. -1. At the prompt "Backup to restore from," enter the backup name, and then select **Enter**. - The selected MySQL database backup will be restored to its original location. --If you need to change the MySQL database backup configuration, you can modify: -* The backup location (the default is: `/opt/netapp/protectionservice/mysqldumps`) -* The number of backups kept (the default value is three) -* The time of day the backup is recorded (the default value is 12:39 a.m.) --1. Open a maintenance console window. -1. From the main menu, enter option **1) Application Configuration**. -1. From the Application Configuration menu, enter option **6) MySQL backup and restore**. -1. From the MySQL Backup & Restore Configuration menu, enter option **1) Configure MySQL backup**. --- :::image type="content" source="./media/cloud-backup/mysql-backup-configuration.png" alt-text="Screenshot of the CLI maintenance menu depicting menu options." lightbox="./media/cloud-backup/mysql-backup-configuration.png"::: |
center-sap-solutions | Install Software | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/center-sap-solutions/install-software.md | The following components are necessary for the SAP installation: - `jq` version 1.6 - `ansible` version 2.9.27 - `netaddr` version 0.8.0-- The SAP Bill of Materials (BOM), as generated by ACSS. These YAML files list all required SAP packages for the SAP software installation. There's a main BOM (`S41909SPS03_v0011ms.yaml`, `S42020SPS03_v0003ms.yaml`, `S4HANA_2021_ISS_v0001ms.yaml`) and there are dependent BOMs (`HANA_2_00_059_v0003ms.yaml`, `HANA_2_00_063_v0001ms.yaml`, `SUM20SP14_latest.yaml`, `SWPM20SP12_latest.yaml`). They provide the following information:+- The SAP Bill of Materials (BOM), as generated by ACSS. These YAML files list all required SAP packages for the SAP software installation. There's a main BOM (`S41909SPS03_v0011ms.yaml`, `S42020SPS03_v0003ms.yaml`, `S4HANA_2021_ISS_v0001ms.yaml`) and there are dependent BOMs (`HANA_2_00_059_v0003ms.yaml`, `HANA_2_00_064_v0001ms.yaml`, `SUM20SP14_latest.yaml`, `SWPM20SP12_latest.yaml`). They provide the following information: - The full name of the SAP package (`name`) - The package name with its file extension as downloaded (`archive`) - The checksum of the package as specified by SAP (`checksum`) You also can [run scripts to automate this process](#option-1-upload-software-co - For S/4HANA 2020 SPS 03, make the following folders- 1. **HANA_2_00_063_v0001ms** + 1. **HANA_2_00_064_v0001ms** 1. **S42020SPS03_v0003ms** 1. **SWPM20SP12_latest** 1. **SUM20SP14_latest** - For S/4HANA 2021 ISS 00, make the following folders- 1. **HANA_2_00_063_v0001ms** + 1. **HANA_2_00_064_v0001ms** 1. **S4HANA_2021_ISS_v0001ms** 1. **SWPM20SP12_latest** 1. **SUM20SP14_latest** You also can [run scripts to automate this process](#option-1-upload-software-co - For S/4HANA 2020 SPS 03, 1. [S42020SPS03_v0003ms.yaml](https://raw.githubusercontent.com/Azure/sap-automation/experimental/deploy/ansible/BOM-catalog/S42020SPS03_v0003ms/S42020SPS03_v0003ms.yaml)- 1. [HANA_2_00_063_v0001ms.yaml](https://raw.githubusercontent.com/Azure/sap-automation/experimental/deploy/ansible/BOM-catalog/HANA_2_00_063_v0001ms/HANA_2_00_063_v0001ms.yaml) + 1. [HANA_2_00_064_v0001ms.yaml](https://raw.githubusercontent.com/Azure/sap-automation/experimental/deploy/ansible/BOM-catalog/HANA_2_00_064_v0001ms/HANA_2_00_064_v0001ms.yaml) 1. [SWPM20SP12_latest.yaml](https://raw.githubusercontent.com/Azure/sap-automation/experimental/deploy/ansible/BOM-catalog/SWPM20SP12_latest/SWPM20SP12_latest.yaml) 1. [SUM20SP14_latest.yaml](https://raw.githubusercontent.com/Azure/sap-automation/experimental/deploy/ansible/BOM-catalog/SUM20SP14_latest/SUM20SP14_latest.yaml) - For S/4HANA 2021 ISS 00, 1. [S4HANA_2021_ISS_v0001ms.yaml](https://raw.githubusercontent.com/Azure/sap-automation/experimental/deploy/ansible/BOM-catalog/S4HANA_2021_ISS_v0001ms/S4HANA_2021_ISS_v0001ms.yaml)- 1. [HANA_2_00_063_v0001ms.yaml](https://raw.githubusercontent.com/Azure/sap-automation/experimental/deploy/ansible/BOM-catalog/HANA_2_00_063_v0001ms/HANA_2_00_063_v0001ms.yaml) + 1. [HANA_2_00_064_v0001ms.yaml](https://raw.githubusercontent.com/Azure/sap-automation/experimental/deploy/ansible/BOM-catalog/HANA_2_00_064_v0001ms/HANA_2_00_064_v0001ms.yaml) 1. [SWPM20SP12_latest.yaml](https://raw.githubusercontent.com/Azure/sap-automation/experimental/deploy/ansible/BOM-catalog/SWPM20SP12_latest/SWPM20SP12_latest.yaml) 1. 
[SUM20SP14_latest.yaml](https://raw.githubusercontent.com/Azure/sap-automation/experimental/deploy/ansible/BOM-catalog/SUM20SP14_latest/SUM20SP14_latest.yaml) You also can [run scripts to automate this process](#option-1-upload-software-co 1. [S4HANA_2021_ISS_v0001ms-app-inifile-param.j2](https://raw.githubusercontent.com/Azure/sap-automation/experimental/deploy/ansible/BOM-catalog/S4HANA_2021_ISS_v0001ms/templates/S4HANA_2021_ISS_v0001ms-app-inifile-param.j2) 1. [S4HANA_2021_ISS_v0001ms-dbload-inifile-param.j2](https://raw.githubusercontent.com/Azure/sap-automation/experimental/deploy/ansible/BOM-catalog/S4HANA_2021_ISS_v0001ms/templates/S4HANA_2021_ISS_v0001ms-dbload-inifile-param.j2) 1. [S4HANA_2021_ISS_v0001ms-ers-inifile-param.j2](https://raw.githubusercontent.com/Azure/sap-automation/experimental/deploy/ansible/BOM-catalog/S4HANA_2021_ISS_v0001ms/templates/S4HANA_2021_ISS_v0001ms-ers-inifile-param.j2) - 1. [S4HANA_2021_ISS_v0001ms-generic-inifile-param.j2](https://raw.githubusercontent.com/Azure/sap-automation/experimental/deploy/ansible/BOM-catalog/S4HANA_2021_ISS_v0001ms/templates/S4HANA_2021_ISS_v0001ms-generic-inifile-param.j2) + 1. [S4HANA_2021_ISS_v0001ms-generic-inifile-param.j2](https://raw.githubusercontent.com/Azure/sap-automation/experimental/deploy/ansible/BOM-catalog/S4HANA_2021_ISS_v0001ms/templates/S4HANA_2021_ISS_v0001ms-generic-inifile-param.j2) 1. [S4HANA_2021_ISS_v0001ms-pas-inifile-param.j2](https://raw.githubusercontent.com/Azure/sap-automation/experimental/deploy/ansible/BOM-catalog/S4HANA_2021_ISS_v0001ms/templates/S4HANA_2021_ISS_v0001ms-pas-inifile-param.j2) 1. [S4HANA_2021_ISS_v0001ms-scs-inifile-param.j2](https://raw.githubusercontent.com/Azure/sap-automation/experimental/deploy/ansible/BOM-catalog/S4HANA_2021_ISS_v0001ms/templates/S4HANA_2021_ISS_v0001ms-scs-inifile-param.j2) 1. [S4HANA_2021_ISS_v0001ms-scsha-inifile-param.j2](https://raw.githubusercontent.com/Azure/sap-automation/experimental/deploy/ansible/BOM-catalog/S4HANA_2021_ISS_v0001ms/templates/S4HANA_2021_ISS_v0001ms-scsha-inifile-param.j2) You also can [run scripts to automate this process](#option-1-upload-software-co - For S/4HANA 2020 SPS 03, 1. [S42020SPS03_v0003ms.yaml](https://github.com/Azure/sap-automation/blob/experimental/deploy/ansible/BOM-catalog/S42020SPS03_v0003ms/S42020SPS03_v0003ms.yaml)- 1. [HANA_2_00_063_v0001ms.yaml](https://github.com/Azure/sap-automation/blob/experimental/deploy/ansible/BOM-catalog/HANA_2_00_063_v0001ms/HANA_2_00_063_v0001ms.yaml) + 1. [HANA_2_00_064_v0001ms.yaml](https://github.com/Azure/sap-automation/blob/experimental/deploy/ansible/BOM-catalog/HANA_2_00_064_v0001ms/HANA_2_00_064_v0001ms.yaml) 1. [SWPM20SP12_latest.yaml](https://github.com/Azure/sap-automation/blob/experimental/deploy/ansible/BOM-catalog/SWPM20SP12_latest/SWPM20SP12_latest.yaml) 1. [SUM20SP14_latest.yaml](https://github.com/Azure/sap-automation/blob/experimental/deploy/ansible/BOM-catalog/SUM20SP14_latest/SUM20SP14_latest.yaml) - For S/4HANA 2021 ISS 00, 1. [S4HANA_2021_ISS_v0001ms.yaml](https://github.com/Azure/sap-automation/blob/experimental/deploy/ansible/BOM-catalog/S4HANA_2021_ISS_v0001ms/S4HANA_2021_ISS_v0001ms.yaml)- 1. [HANA_2_00_063_v0001ms.yaml](https://github.com/Azure/sap-automation/blob/experimental/deploy/ansible/BOM-catalog/HANA_2_00_063_v0001ms/HANA_2_00_063_v0001ms.yaml) + 1. [HANA_2_00_064_v0001ms.yaml](https://github.com/Azure/sap-automation/blob/experimental/deploy/ansible/BOM-catalog/HANA_2_00_064_v0001ms/HANA_2_00_064_v0001ms.yaml) 1. 
[SWPM20SP12_latest.yaml](https://github.com/Azure/sap-automation/blob/experimental/deploy/ansible/BOM-catalog/SWPM20SP12_latest/SWPM20SP12_latest.yaml) 1. [SUM20SP14_latest.yaml](https://github.com/Azure/sap-automation/blob/experimental/deploy/ansible/BOM-catalog/SUM20SP14_latest/SUM20SP14_latest.yaml) You can install a maximum of 10 Application Servers, excluding the Primary Appli ### SAP package version changes -When SAP changes the version of packages for a component in the BOM, you might encounter problems with the automated installation shell script. It's recommended to download your SAP installation media as soon as possible to avoid issues. +1. When SAP changes the version of packages for a component in the BOM, you might encounter problems with the automated installation shell script. It's recommended to download your SAP installation media as soon as possible to avoid issues. If you encounter this problem, follow these steps: 1. Reupload the BOM file(s) in the subfolder (`S41909SPS03_v0011ms` or `S42020SPS03_v0003ms` or `S4HANA_2021_ISS_v0001ms`) of the "boms" folder +### Special characters like $ in the S-user password aren't accepted while downloading the BOM ++1. Follow the step-by-step instructions up to cloning the SAP automation repository from GitHub in the **Download SAP media** section. ++1. Before you run the Ansible playbook, set the `SPASS` environment variable as shown below. The single quotes in the command are required so that the shell doesn't interpret special characters such as `$` in the password. ++ ```bash + export SPASS='password_with_special_chars' + ``` +1. Then run the Ansible playbook: ++```bash + ansible-playbook ./sap-automation/deploy/ansible/playbook_bom_downloader.yaml -e "bom_base_name=S41909SPS03_v0011ms" -e "deployer_kv_name=dummy_value" -e "s_user=<username>" -e "s_password=$SPASS" -e "sapbits_access_key=<storageAccountAccessKey>" -e "sapbits_location_base_path=<containerBasePath>" + ``` + +- For `<username>`, use your SAP username. +- For `<bom_base_name>`, use the SAP version you want to install, that is, **_S41909SPS03_v0011ms_**, **_S42020SPS03_v0003ms_**, or **_S4HANA_2021_ISS_v0001ms_**. +- For `<storageAccountAccessKey>`, use your storage account's access key. You found this value in the [previous section](#download-supporting-software). +- For `<containerBasePath>`, use the path to your `sapbits` container. You found this value in the [previous section](#download-supporting-software). + The format is `https://<your-storage-account>.blob.core.windows.net/sapbits` ++This should resolve the problem, and you can proceed with the next steps as described in the section. ## Next steps - [Monitor SAP system from Azure portal](monitor-portal.md) |
cognitive-services | Role Based Access Control | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/LUIS/role-based-access-control.md | Use the following table to determine access needs for your LUIS application. These custom roles only apply to authoring (Language Understanding Authoring) and not prediction resources (Language Understanding). > [!NOTE]-> * "Owner" and "Contributor" roles take priority over the custom LUIS roles. -> * Azure Active Directory (Azure AD) is only used with custom LUIS roles. +> * *Owner* and *Contributor* roles take priority over the custom LUIS roles. +> * Azure Active Directory (Azure AD) is only used with custom LUIS roles. +> * If you are assigned as a *Contributor* on Azure, your role will be shown as *Owner* in the LUIS portal. ### Cognitive Services LUIS reader A user that is responsible for building and modifying LUIS application, as a col ### Cognitive Services LUIS owner +> [!NOTE] +> * If you are assigned as an *Owner* and *LUIS Owner*, you will be shown as *LUIS Owner* in the LUIS portal. + These users are the gatekeepers for LUIS applications in a production environment. They should have full access to any of the underlying functions and thus can view everything in the application and have direct access to edit any changes for both authoring and runtime environments. :::row::: |
cognitive-services | Role Based Access Control | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/concepts/role-based-access-control.md | + + Title: Role-based access control for the Language service ++description: Learn how to use Azure RBAC for managing individual access to Azure resources. ++++++ Last updated : 08/23/2022+++++# Language role-based access control ++Azure Cognitive Service for Language supports Azure role-based access control (Azure RBAC), an authorization system for managing individual access to Azure resources. Using Azure RBAC, you assign different team members different levels of permissions for your projects' authoring resources. See the [Azure RBAC documentation](/azure/role-based-access-control/) for more information. ++## Enable Azure Active Directory authentication ++To use Azure RBAC, you must enable Azure Active Directory authentication. You can [create a new resource with a custom subdomain](../../authentication.md#create-a-resource-with-a-custom-subdomain) or [create a custom subdomain for your existing resource](../../cognitive-services-custom-subdomains.md#how-does-this-impact-existing-resources). ++## Add role assignment to Language resource ++Azure RBAC can be assigned to a Language resource. To grant access to an Azure resource, you add a role assignment. +1. In the [Azure portal](https://ms.portal.azure.com/), select **All services**. +1. Select **Cognitive Services**, and navigate to your specific Language resource. + > [!NOTE] + > You can also set up Azure RBAC for whole resource groups, subscriptions, or management groups. Do this by selecting the desired scope level and then navigating to the desired item. For example, selecting **Resource groups** and then navigating to a specific resource group. ++1. Select **Access control (IAM)** on the left navigation pane. +1. Select **Add**, then select **Add role assignment**. +1. On the **Role** tab on the next screen, select a role you want to add. +1. On the **Members** tab, select a user, group, service principal, or managed identity. +1. On the **Review + assign** tab, select **Review + assign** to assign the role. ++Within a few minutes, the target will be assigned the selected role at the selected scope. For help with these steps, see [Assign Azure roles using the Azure portal](/azure/role-based-access-control/role-assignments-portal). ++## Language role types ++Use the following table to determine access needs for your Language projects. ++These custom roles only apply to Language resources. +> [!NOTE] +> * All prebuilt capabilities are accessible to all roles +> * *Owner* and *Contributor* roles take priority over the custom language roles +> * Azure AD is only used in the case of custom Language roles +> * If you are assigned as a *Contributor* on Azure, your role will be shown as *Owner* in the Language Studio portal. +++### Cognitive Services Language reader ++A user who should only validate and review the Language apps, typically a tester who ensures the application is performing well before deploying the project. They may want to review the application's assets to notify the app developers of any changes that need to be made, but do not have direct access to make them. Readers will have access to view the evaluation results. 
+++ :::column span=""::: + **Capabilities** + :::column-end::: + :::column span=""::: + **API Access** + :::column-end::: + :::column span=""::: + * Read + * Test + :::column-end::: + :::column span=""::: + All GET APIs under: + * [Language authoring conversational language understanding APIs](/rest/api/language/conversational-analysis-authoring) + * [Language authoring text analysis APIs](/rest/api/language/text-analysis-authoring) + * [Question answering projects](/rest/api/cognitiveservices/questionanswering/question-answering-projects) + Only `TriggerExportProjectJob` POST operation under: + * [Language authoring conversational language understanding export API](/rest/api/language/conversational-analysis-authoring/export?tabs=HTTP) + * [Language authoring text analysis export API](/rest/api/language/text-analysis-authoring/export?tabs=HTTP) + Only Export POST operation under: + * [Question Answering Projects](/rest/api/cognitiveservices/questionanswering/question-answering-projects/export) + All the Batch Testing Web APIs + * [Language Runtime CLU APIs](/rest/api/language/conversation-analysis-runtime) + * [Language Runtime Text Analysis APIs](/rest/api/language/text-analysis-runtime) + :::column-end::: ++### Cognitive Services Language writer ++A user that is responsible for building and modifying an application, as a collaborator in a larger team. The collaborator can modify the Language apps in any way, train those changes, and validate/test those changes in the portal. However, this user shouldn't have access to deploy this application to the runtime, as they may accidentally reflect their changes in production. They also shouldn't be able to delete the application or alter its prediction resources and endpoint settings (assigning or unassigning prediction resources, making the endpoint public). This restricts this role from altering an application currently being used in production. They may also create new applications under this resource, but with the restrictions mentioned. ++ :::column span=""::: + **Capabilities** + :::column-end::: + :::column span=""::: + **API Access** + :::column-end::: + :::column span=""::: + * All functionalities under Cognitive Services Language Reader. + * Ability to: + * Train + * Write + :::column-end::: + :::column span=""::: + * All APIs under Language reader + * All POST, PUT and PATCH APIs under: + * [Language conversational language understanding APIs](/rest/api/language/conversational-analysis-authoring) + * [Language text analysis APIs](/rest/api/language/text-analysis-authoring) + * [question answering projects](/rest/api/cognitiveservices/questionanswering/question-answering-projects) + Except for + * Delete deployment + * Delete trained model + * Delete Project + * Deploy Model + :::column-end::: ++### Cognitive Services Language owner ++> [!NOTE] +> If you are assigned as an *Owner* and *Language Owner*, you will be shown as *Cognitive Services Language owner* in the Language Studio portal. +++These users are the gatekeepers for the Language applications in production environments. 
They should have full access to any of the underlying functions and thus can view everything in the application and have direct access to edit any changes for both authoring and runtime environments. ++ :::column span=""::: + **Functionality** + :::column-end::: + :::column span=""::: + **API Access** + :::column-end::: + :::column span=""::: + * All functionalities under Cognitive Services Language Writer + * Deploy + * Delete + :::column-end::: + :::column span=""::: + All APIs available under: + * [Language authoring conversational language understanding APIs](/rest/api/language/conversational-analysis-authoring) + * [Language authoring text analysis APIs](/rest/api/language/text-analysis-authoring) + * [question answering projects](/rest/api/cognitiveservices/questionanswering/question-answering-projects) + + :::column-end::: |
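The portal flow described in this row can also be scripted. A minimal PowerShell sketch, assuming the Az modules are installed, the signed-in account can create role assignments, and using hypothetical names for the resource group (`my-rg`), Language resource (`my-language`), and user:

```powershell
# Sign in with an account that can create role assignments at the target scope
Connect-AzAccount

# Scope the assignment to a specific Language resource (hypothetical names)
$scope = (Get-AzCognitiveServicesAccount -ResourceGroupName 'my-rg' -Name 'my-language').Id

# Grant the writer role; swap in 'Cognitive Services Language Reader' or
# 'Cognitive Services Language Owner' for the other custom roles
New-AzRoleAssignment -SignInName 'collaborator@contoso.com' `
    -RoleDefinitionName 'Cognitive Services Language Writer' `
    -Scope $scope
```

As with the portal steps, the assignment can take a few minutes to propagate before it's reflected in Language Studio.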
cognitive-services | Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/summarization/overview.md | Summarization is one of the features offered by [Azure Cognitive Service for Lan This documentation contains the following article types: -* [**Quickstarts**](quickstart.md?pivots=rest-api&tabs=document-summarization) are getting-started instructions to guide you through making requests to the service. -* [**How-to guides**](how-to/document-summarization.md) contain instructions for using the service in more specific or customized ways. +* **[Quickstarts](quickstart.md?pivots=rest-api&tabs=document-summarization)** are getting-started instructions to guide you through making requests to the service. +* **[How-to guides](how-to/document-summarization.md)** contain instructions for using the service in more specific or customized ways. Text summarization is a broad topic, consisting of several approaches to represent relevant information in text. The document summarization feature described in this documentation enables you to use extractive text summarization to produce a summary of a document. It extracts sentences that collectively represent the most important or relevant information within the original content. This feature is designed to shorten content that could be considered too long to read. For example, it can condense articles, papers, or documents to key sentences. Document summarization supports the following features: This documentation contains the following article types: -* [**Quickstarts**](quickstart.md?pivots=rest-api&tabs=conversation-summarization) are getting-started instructions to guide you through making requests to the service. -* [**How-to guides**](how-to/conversation-summarization.md) contain instructions for using the service in more specific or customized ways. +* **[Quickstarts](quickstart.md?pivots=rest-api&tabs=conversation-summarization)** are getting-started instructions to guide you through making requests to the service. +* **[How-to guides](how-to/conversation-summarization.md)** contain instructions for using the service in more specific or customized ways. Conversation summarization is a broad topic, consisting of several approaches to represent relevant information in text. The conversation summarization feature described in this documentation enables you to use abstractive text summarization to produce a summary of issues and resolutions in transcripts of web chats and service call transcripts between customer-service agents, and your customers. Conversation summarization is a broad topic, consisting of several approaches to ## When to use conversation summarization -* When there are predefined aspects of an "issue" and "resolution", such as: - * The reason for a service chat/call (the issue). - * The resolution for the issue. +* When there are aspects of an "issue" and "resolution", such as: + * The reason for a service chat/call (the issue). + * The resolution for the issue. * You only want a summary that focuses on related information about issues and resolutions. * When there are two participants in the conversation, and you want to summarize what each had said. The conversation summarization feature would simplify the text into the following: To use this feature, you submit raw unstructured text for analysis and handle the API output in your application. Analysis is performed as-is, with no additional customization to the model used on your data. 
There are two ways to use summarization: --|Development option |Description | Links | +|Development option |Description | Links | |||| | Language Studio | A web-based platform that enables you to try document summarization without needing to write code. | • [Language Studio website](https://language.cognitive.azure.com/tryout/summarization) <br> • [Quickstart: Use Language Studio](../language-studio.md) | | REST API or Client library (Azure SDK) | Integrate document summarization into your applications using the REST API, or the client library available in a variety of languages. | • [Quickstart: Use document summarization](quickstart.md) | - # [Conversation summarization](#tab/conversation-summarization) To use this feature, you submit raw text for analysis and handle the API output in your application. Analysis is performed as-is, with no additional customization to the model used on your data. There are two ways to use conversation summarization: --|Development option |Description | Links | +|Development option |Description | Links | |||| | REST API | Integrate conversation summarization into your applications using the REST API. | [Quickstart: Use conversation summarization](quickstart.md) | To use this feature, you submit raw text for analysis and handle the API output * Summarization takes raw unstructured text for analysis. See [Data and service limits](../concepts/data-limits.md) in the how-to guide for more information. * Summarization works with a variety of written languages. See [language support](language-support.md?tabs=document-summarization) for more information. - # [Conversation summarization](#tab/conversation-summarization) * Conversation summarization takes structured text for analysis. See the [data and service limits](../concepts/data-limits.md) for more information. As you use document summarization in your applications, see the following refere |JavaScript | [JavaScript documentation](/javascript/api/overview/azure/ai-text-analytics-readme?view=azure-node-preview&preserve-view=true) | [JavaScript samples](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/textanalytics/ai-text-analytics/samples/v5) | |Python | [Python documentation](/python/api/overview/azure/ai-textanalytics-readme?view=azure-python-preview&preserve-view=true) | [Python samples](https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/textanalytics/azure-ai-textanalytics/samples) | -## Responsible AI +## Responsible AI An AI system includes not only the technology, but also the people who will use it, the people who will be affected by it, and the environment in which it's deployed. Read the [transparency note for summarization](/legal/cognitive-services/language-service/transparency-note-extractive-summarization?context=/azure/cognitive-services/language-service/context/context) to learn about responsible AI use and deployment in your systems. You can also see the following articles for more information: |
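Because both development options ultimately call the same REST surface, a hedged sketch of a document summarization request may help. The endpoint, key, and `api-version` value below are placeholders and assumptions; check the quickstart linked above for the exact version your resource supports:

```powershell
# Submit one document for extractive summarization (asynchronous job pattern).
# <your-language-resource> and <your-key> are placeholders; the api-version
# shown is an assumption and may differ for your resource.
$endpoint = 'https://<your-language-resource>.cognitiveservices.azure.com'
$body = @{
    analysisInput = @{ documents = @(
        @{ id = '1'; language = 'en'; text = 'The text you want summarized...' }
    ) }
    tasks = @(
        @{ kind = 'ExtractiveSummarization'; parameters = @{ sentenceCount = 3 } }
    )
} | ConvertTo-Json -Depth 10

# A successful submission returns 202 with an operation-location header to poll
$response = Invoke-WebRequest -Method Post `
    -Uri "$endpoint/language/analyze-text/jobs?api-version=2022-05-01" `
    -Headers @{ 'Ocp-Apim-Subscription-Key' = '<your-key>' } `
    -ContentType 'application/json' -Body $body
$response.Headers['operation-location']
```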
container-apps | Compare Options | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/compare-options.md | You can get started building your first container app [using the quickstarts](ge [Azure Functions](../azure-functions/functions-overview.md) is a serverless Functions-as-a-Service (FaaS) solution. It's optimized for running event-driven applications using the functions programming model. It shares many characteristics with Azure Container Apps around scale and integration with events, but is optimized for ephemeral functions deployed as either code or containers. The Azure Functions programming model provides productivity benefits for teams looking to trigger the execution of their functions on events and bind to other data sources. When building FaaS-style functions, Azure Functions is the ideal option. The Azure Functions programming model is available as a base container image, making it portable to other container-based compute platforms, allowing teams to reuse code as environment requirements change. ### Azure Spring Apps-[Azure Spring Apps](../spring-apps/overview.md) makes it easy to deploy Spring Boot microservice applications to Azure without any code changes. The service manages the infrastructure of Spring Cloud applications so developers can focus on their code. Azure Spring Apps provides lifecycle management using comprehensive monitoring and diagnostics, configuration management, service discovery, CI/CD integration, blue-green deployments, and more. If your team or organization is predominantly Spring, Azure Spring Apps is an ideal option. +[Azure Spring Apps](../spring-apps/overview.md) is a platform as a service (PaaS) for Spring developers. If you want to run Spring Boot, Spring Cloud, or any other Spring applications on Azure, Azure Spring Apps is an ideal option. The service manages the infrastructure of Spring applications so developers can focus on their code. Azure Spring Apps provides lifecycle management using comprehensive monitoring and diagnostics, configuration management, service discovery, CI/CD integration, blue-green deployments, and more. ### Azure Red Hat OpenShift [Azure Red Hat OpenShift](../openshift/intro-openshift.md) is jointly engineered, operated, and supported by Red Hat and Microsoft to provide an integrated product and support experience for running Kubernetes-powered OpenShift. With Azure Red Hat OpenShift, teams can choose their own registry, networking, storage, and CI/CD solutions, or use the built-in solutions for automated source code management, container and application builds, deployments, scaling, health management, and more from OpenShift. If your team or organization is using OpenShift, Azure Red Hat OpenShift is an ideal option. You can get started building your first container app [using the quickstarts](ge ## Next steps > [!div class="nextstepaction"]-> [Deploy your first container app](get-started.md) +> [Deploy your first container app](get-started.md) |
container-apps | Disaster Recovery | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/disaster-recovery.md | Additionally, the following resources can help you create your own disaster reco To take advantage of availability zones, you must enable zone redundancy when you create the Container Apps environment. The environment must include a virtual network (VNET) with an infrastructure subnet. To ensure proper distribution of replicas, you should configure your app's minimum and maximum replica count with values that are divisible by three. The minimum replica count should be at least three. -### Enabled zone redundancy via the Azure portal +### Enable zone redundancy via the Azure portal To create a container app in an environment with zone redundancy enabled using the Azure portal: |
cosmos-db | Transactional Batch | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/transactional-batch.md | Title: Transactional batch operations in Azure Cosmos DB using the .NET SDK -description: Learn how to use TransactionalBatch in the Azure Cosmos DB .NET SDK to perform a group of point operations that either succeed or fail. + Title: Transactional batch operations in Azure Cosmos DB using the .NET or Java SDK +description: Learn how to use TransactionalBatch in the Azure Cosmos DB .NET or Java SDK to perform a group of point operations that either succeed or fail. -# Transactional batch operations in Azure Cosmos DB using the .NET SDK +# Transactional batch operations in Azure Cosmos DB using the .NET or Java SDK [!INCLUDE[appliesto-sql-api](../includes/appliesto-sql-api.md)] Transactional batch describes a group of point operations that need to either succeed or fail together with the same partition key in a container. In the .NET and Java SDKs, the `TransactionalBatch` class is used to define this batch of operations. If all operations succeed in the order they're described within the transactional batch operation, the transaction will be committed. However, if any operation fails, the entire transaction is rolled back. |
cost-management-billing | Tutorial Export Acm Data | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/costs/tutorial-export-acm-data.md | Remove-AzCostManagementExport -Name DemoExport -Scope 'subscriptions/00000000-00 Scheduled exports are affected by the time and day of the week when you initially create the export. When you create a scheduled export, the export runs at the same frequency for each export that runs later. For example, for a month-to-date costs export set at a daily frequency, the export runs daily. Similarly, a weekly export runs every week on the same day it is scheduled. The exact delivery time of the export isn't guaranteed and the exported data is available within four hours of run time. +- When you create an export using the [Exports API](/rest/api/cost-management/exports/create-or-update?tabs=HTTP), specify the `recurrencePeriod` in UTC time. The API doesn't convert your local time to UTC. + - Example - A weekly export is scheduled on Friday, August 19 with `recurrencePeriod` set to 2:00 PM. The API receives the input as 2:00 PM UTC, Friday, August 19. The weekly export will be scheduled to run every Friday. +- When you create an export in the Azure portal, its start date and time are automatically converted to the equivalent UTC time. + - Example - A weekly export is scheduled on Friday, August 19 with the local time of 2:00 AM IST (UTC+5:30) from the Azure portal. The API receives the input as 8:30 PM, Thursday, August 18. The weekly export will be scheduled to run every Thursday. + Each export creates a new file, so older exports aren't overwritten. #### Create an export for multiple subscriptions |
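To make the UTC scheduling notes above concrete, here's a hedged sketch of creating the weekly export from the first example with `Invoke-AzRestMethod`. The scope, storage account ID, and `api-version` are placeholders and assumptions, and the JSON shape follows the Exports API referenced above:

```powershell
# Weekly export starting Friday, August 19 at 2:00 PM UTC -- the API takes
# these values as UTC and performs no local-time conversion.
$scope = 'subscriptions/00000000-0000-0000-0000-000000000000'
$body = @{
    properties = @{
        schedule = @{
            status     = 'Active'
            recurrence = 'Weekly'
            recurrencePeriod = @{
                from = '2022-08-19T14:00:00Z'   # UTC, not local time
                to   = '2023-08-19T14:00:00Z'
            }
        }
        format       = 'Csv'
        deliveryInfo = @{ destination = @{
            resourceId     = '<storage account resource ID>'
            container      = 'exports'
            rootFolderPath = 'month-to-date'
        } }
        definition   = @{ type = 'ActualCost'; timeframe = 'MonthToDate' }
    }
} | ConvertTo-Json -Depth 10

Invoke-AzRestMethod -Method PUT -Payload $body `
    -Path "/$scope/providers/Microsoft.CostManagement/exports/DemoExport?api-version=2021-10-01"
```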
data-factory | Format Delta | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/format-delta.md | The following table lists the properties supported by a delta sink. You can edit the | Compression type | The compression type of the delta table | no | `bzip2`<br>`gzip`<br>`deflate`<br>`ZipDeflate`<br>`snappy`<br>`lz4` | compressionType | | Compression level | Choose whether the compression completes as quickly as possible or if the resulting file should be optimally compressed. | required if `compressionType` is specified. | `Optimal` or `Fastest` | compressionLevel | | Vacuum | Specify the retention threshold in hours for older versions of the table. A value of 0 or less defaults to 30 days. | yes | Integer | vacuum |+| Table action | Tells ADF what to do with the target Delta table in your sink. You can leave it as-is and append new rows, overwrite the existing table definition and data with new metadata and data, or keep the existing table structure but first truncate all rows, then insert the new rows. | no | None, Truncate, Overwrite | truncate, overwrite | | Update method | Specify which update operations are allowed on the delta lake. For methods that aren't insert, a preceding alter row transformation is required to mark rows. | yes | `true` or `false` | deletable <br> insertable <br> updateable <br> merge | | Optimized Write | Achieve higher throughput for write operations by optimizing internal shuffle in Spark executors. As a result, you may notice fewer partitions and files that are of a larger size | no | `true` or `false` | optimizedWrite: true | | Auto Compact | After any write operation has completed, Spark will automatically execute the ```OPTIMIZE``` command to re-organize the data, resulting in more partitions if necessary, for better reading performance in the future | no | `true` or `false` | autoCompact: true | |
databox-online | Azure Stack Edge Gpu Deploy Virtual Machine Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-deploy-virtual-machine-portal.md | -> We recommend that you enable multifactor authentication for the user who manages VMs that are deployed on your device from the cloud. +> You will need to enable multifactor authentication for the user who manages the VMs and images that are deployed on your device from the cloud. The cloud operations will fail if the user doesn't have multifactor authentication enabled. For steps to enable multifactor authentication, see [this tutorial](/articles/active-directory/authentication/tutorial-enable-azure-mfa.md). ## VM deployment workflow |
defender-for-iot | How To Manage Cloud Alerts | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-manage-cloud-alerts.md | +> [!IMPORTANT] +> The **Alerts** page is currently in **PREVIEW**. See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability. ++ This article describes how to manage your alerts from Microsoft Defender for IoT on the Azure portal. If you're integrating with Microsoft Sentinel, the alert details and entity information are also sent to Microsoft Sentinel, where you can view them from the **Alerts** page. |
defender-for-iot | Workbooks | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/workbooks.md | Last updated 06/02/2022 # Use Azure Monitor workbooks in Microsoft Defender for IoT +> [!IMPORTANT] +> The **Azure Monitor workbooks** feature is currently in **PREVIEW**. See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability. + Azure Monitor workbooks provide graphs, charts, and dashboards that visually reflect data stored in your Azure Resource Graph subscriptions and are available directly in Microsoft Defender for IoT. In the Azure portal, use the Defender for IoT **Workbooks** page to view workbooks created by Microsoft and provided out-of-the-box, or created by customers and shared across the community. |
event-grid | Communication Services Voice Video Events | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/communication-services-voice-video-events.md | This section contains an example of what that data would look like for each even "subject": "call/{serverCallId}/startedBy/8:acs:bc360ba8-d29b-4ef2-b698-769ebef85521_0000000c-1fb9-4878-07fd-0848220077e1", "data": { "startedBy": {- "rawId": "8:acs:bc360ba8-d29b-4ef2-b698-769ebef85521_0000000c-1fb9-4878-07fd-0848220077e1", - "communicationUser": { - "id": "8:acs:bc360ba8-d29b-4ef2-b698-769ebef85521_0000000c-1fb9-4878-07fd-0848220077e1" + "communicationIdentifier": { + "rawId": "8:acs:bc360ba8-d29b-4ef2-b698-769ebef85521_0000000c-1fb9-4878-07fd-0848220077e1", + "communicationUser": { + "id": "8:acs:bc360ba8-d29b-4ef2-b698-769ebef85521_0000000c-1fb9-4878-07fd-0848220077e1" + } }, "role": "{role}" }, This section contains an example of what that data would look like for each even "data": { "durationOfCall": 49.728617199999995, "startedBy": {- "rawId": "8:acs:bc360ba8-d29b-4ef2-b698-769ebef85521_0000000c-1fb9-4878-07fd-0848220077e1", - "communicationUser": { - "id": "8:acs:bc360ba8-d29b-4ef2-b698-769ebef85521_0000000c-1fb9-4878-07fd-0848220077e1" + "communicationIdentifier": { + "rawId": "8:acs:bc360ba8-d29b-4ef2-b698-769ebef85521_0000000c-1fb9-4878-07fd-0848220077e1", + "communicationUser": { + "id": "8:acs:bc360ba8-d29b-4ef2-b698-769ebef85521_0000000c-1fb9-4878-07fd-0848220077e1" + } }, "role": "{role}" }, This section contains an example of what that data would look like for each even "subject": "call/{serverCallId}/participant/8:acs:bc360ba8-d29b-4ef2-b698-769ebef85521_0000000c-1fb9-4878-07fd-0848220077e1", "data": { "user": {- "rawId": "8:acs:bc360ba8-d29b-4ef2-b698-769ebef85521_0000000c-1fb9-4878-07fd-0848220077e1", - "communicationUser": { - "id": "8:acs:bc360ba8-d29b-4ef2-b698-769ebef85521_0000000c-1fb9-4878-07fd-0848220077e1" + "communicationIdentifier": { + "rawId": "8:acs:bc360ba8-d29b-4ef2-b698-769ebef85521_0000000c-1fb9-4878-07fd-0848220077e1", + "communicationUser": { + "id": "8:acs:bc360ba8-d29b-4ef2-b698-769ebef85521_0000000c-1fb9-4878-07fd-0848220077e1" + } }, "role": "{role}" }, This section contains an example of what that data would look like for each even "participantId": "041e3b8a-1cce-4ebf-b587-131312c39410", "endpointType": "acs-web-test-client-ACSWeb(3617/1.0.0.0/os=windows; browser=chrome; browserVer=93.0; deviceType=Desktop)/TsCallingVersion=_TS_BUILD_VERSION_/Ovb=_TS_OVB_VERSION_", "startedBy": {- "rawId": "8:acs:bc360ba8-d29b-4ef2-b698-769ebef85521_0000000c-1fb9-4878-07fd-0848220077e1", - "communicationUser": { - "id": "8:acs:bc360ba8-d29b-4ef2-b698-769ebef85521_0000000c-1fb9-4878-07fd-0848220077e1" + "communicationIdentifier": { + "rawId": "8:acs:bc360ba8-d29b-4ef2-b698-769ebef85521_0000000c-1fb9-4878-07fd-0848220077e1", + "communicationUser": { + "id": "8:acs:bc360ba8-d29b-4ef2-b698-769ebef85521_0000000c-1fb9-4878-07fd-0848220077e1" + } }, "role": "{role}" }, This section contains an example of what that data would look like for each even "subject": "call/aHR0cHM6Ly9jb252LWRldi0yMS5jb252LWRldi5za3lwZS5uZXQ6NDQzL2NvbnYvbVQ4NnVfempBMG05QVM4VnRvSWFrdz9pPTAmZT02Mzc2Nzc3MTc2MDAwMjgyMzA/participant/8:acs:bc360ba8-d29b-4ef2-b698-769ebef85521_0000000c-1fb9-27cc-07fd-0848220077d8", "data": { "user": {- "rawId": "8:acs:bc360ba8-d29b-4ef2-b698-769ebef85521_0000000c-1fb9-27cc-07fd-0848220077d8", - "communicationUser": { - "id": 
"8:acs:bc360ba8-d29b-4ef2-b698-769ebef85521_0000000c-1fb9-27cc-07fd-0848220077d8" + "communicationIdentifier": { + "rawId": "8:acs:bc360ba8-d29b-4ef2-b698-769ebef85521_0000000c-1fb9-27cc-07fd-0848220077d8", + "communicationUser": { + "id": "8:acs:bc360ba8-d29b-4ef2-b698-769ebef85521_0000000c-1fb9-27cc-07fd-0848220077d8" + } }, "role": "{role}" }, This section contains an example of what that data would look like for each even "participantId": "750a1442-3156-4914-94d2-62cf73796833", "endpointType": "acs-web-test-client-ACSWeb(3617/1.0.0.0/os=windows; browser=chrome; browserVer=93.0; deviceType=Desktop)/TsCallingVersion=_TS_BUILD_VERSION_/Ovb=_TS_OVB_VERSION_", "startedBy": {- "rawId": "8:acs:bc360ba8-d29b-4ef2-b698-769ebef85521_0000000c-1fb9-4878-07fd-0848220077e1", - "communicationUser": { - "id": "8:acs:bc360ba8-d29b-4ef2-b698-769ebef85521_0000000c-1fb9-4878-07fd-0848220077e1" + "communicationIdentifier": { + "rawId": "8:acs:bc360ba8-d29b-4ef2-b698-769ebef85521_0000000c-1fb9-4878-07fd-0848220077e1", + "communicationUser": { + "id": "8:acs:bc360ba8-d29b-4ef2-b698-769ebef85521_0000000c-1fb9-4878-07fd-0848220077e1" + } }, "role": "{role}" }, |
event-hubs | Authenticate Application | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/authenticate-application.md | For Schema Registry built-in roles, see [Schema Registry roles](schema-registry- ## Authenticate from an application A key advantage of using Azure AD with Event Hubs is that your credentials no longer need to be stored in your code. Instead, you can request an OAuth 2.0 access token from Microsoft identity platform. Azure AD authenticates the security principal (a user, a group, or service principal) running the application. If authentication succeeds, Azure AD returns the access token to the application, and the application can then use the access token to authorize requests to Azure Event Hubs. -Following sections shows you how to configure your native application or web application for authentication with Microsoft identity platform 2.0. For more information about Microsoft identity platform 2.0, see [Microsoft identity platform (v2.0) overview](../active-directory/develop/v2-overview.md). +The following sections show you how to configure your native application or web application for authentication with Microsoft identity platform 2.0. For more information about Microsoft identity platform 2.0, see [Microsoft identity platform (v2.0) overview](../active-directory/develop/v2-overview.md). For an overview of the OAuth 2.0 code grant flow, see [Authorize access to Azure Active Directory web applications using the OAuth 2.0 code grant flow](../active-directory/develop/v2-oauth2-auth-code-flow.md). |
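For illustration, a hedged sketch of the OAuth 2.0 client credentials grant against the Microsoft identity platform v2.0 token endpoint. The tenant, client ID, and secret are placeholders, and `https://eventhubs.azure.net/.default` is the scope conventionally used for Event Hubs:

```powershell
# Request an OAuth 2.0 access token from Microsoft identity platform v2.0
$tokenResponse = Invoke-RestMethod -Method Post `
    -Uri 'https://login.microsoftonline.com/<tenant-id>/oauth2/v2.0/token' `
    -ContentType 'application/x-www-form-urlencoded' `
    -Body @{
        grant_type    = 'client_credentials'
        client_id     = '<app-client-id>'
        client_secret = '<client-secret>'
        scope         = 'https://eventhubs.azure.net/.default'
    }

# The bearer token is what gets presented to Event Hubs on each request
$tokenResponse.access_token
```

In practice, the Azure SDKs' credential types perform this exchange for you; the raw request is shown only to make the flow visible.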
event-hubs | Event Hubs Scalability | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/event-hubs-scalability.md | There are two factors that influence scaling with Event Hubs. ## Throughput units -The throughput capacity of Event Hubs is controlled by *throughput units*. Throughput units are pre-purchased units of capacity. A single throughput lets you: +The throughput capacity of Event Hubs is controlled by *throughput units*. Throughput units are pre-purchased units of capacity. A single throughput unit lets you: * Ingress: Up to 1 MB per second or 1000 events per second (whichever comes first). * Egress: Up to 2 MB per second or 4096 events per second. For more information about the auto-inflate feature, see [Automatically scale th ## Processing units - [Event Hubs Premium](./event-hubs-premium-overview.md) provides superior performance and better isolation within a managed multitenant PaaS environment. The resources in a Premium tier are isolated at the CPU and memory level so that each tenant workload runs in isolation. This resource container is called a *Processing Unit*(PU). You can purchase 1, 2, 4, 8 or 16 processing Units for each Event Hubs Premium namespace. + [Event Hubs Premium](./event-hubs-premium-overview.md) provides superior performance and better isolation within a managed multitenant PaaS environment. The resources in a Premium tier are isolated at the CPU and memory level so that each tenant workload runs in isolation. This resource container is called a *Processing Unit* (PU). You can purchase 1, 2, 4, 8, or 16 processing units for each Event Hubs Premium namespace. How much you can ingest and stream with a processing unit depends on various factors such as your producers, consumers, the rate at which you're ingesting and processing, and much more. -For example, Event Hubs Premium namespace with 1 PU and 1 event hub(100 partitions) can approximately offer core capacity of ~5-10 MB/s ingress and 10-20 MB/s egress for both AMQP or Kafka workloads. +For example, an Event Hubs Premium namespace with 1 PU and 1 event hub (100 partitions) can approximately offer core capacity of ~5-10 MB/s ingress and 10-20 MB/s egress for both AMQP and Kafka workloads. To learn about configuring PUs for a premium tier namespace, see [Configure processing units](configure-processing-units-premium-namespace.md). |
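A quick way to sanity-check a throughput unit purchase is to take the largest of the requirements implied by the per-TU limits above. A worked sketch with hypothetical workload numbers:

```powershell
# Per-TU limits from above: 1 MB/s or 1,000 events/s in; 2 MB/s or 4,096 events/s out
$ingressMB = 5; $ingressEvents = 6000   # hypothetical workload
$egressMB  = 8; $egressEvents  = 9000

$needs = @(
    [Math]::Ceiling($ingressMB / 1)         # ingress bandwidth
    [Math]::Ceiling($ingressEvents / 1000)  # ingress event rate
    [Math]::Ceiling($egressMB / 2)          # egress bandwidth
    [Math]::Ceiling($egressEvents / 4096)   # egress event rate
)
($needs | Measure-Object -Maximum).Maximum  # 6 TUs; the ingress event rate dominates here
```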
event-hubs | Resource Governance Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/resource-governance-overview.md | Title: Resource governance with application groups description: This article describes how to enable resource governance using application groups. Previously updated : 05/24/2022 Last updated : 08/23/2022 When policies for application groups are applied, the client application workloa ### Disabling application groups Application groups are enabled by default, which means all client applications can access the Event Hubs namespace to publish and consume events while adhering to the application group policies. -When an application group is disabled, client applications of that application group won't be able to connect to the Event Hubs namespace and all the existing connections that are already established from client applications are terminated. +When an application group is disabled, the client will still be able to connect to the event hub, but authorization will fail and the client connection is then closed. Therefore, you'll see many successful open and close connections, with the same number of authorization failures in the diagnostic logs. ## Next steps For instructions on how to create and manage application groups, see [Resource governance for client applications using Azure portal](resource-governance-with-app-groups.md) |
governance | Machine Configuration Create Definition | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/machine-configuration/machine-configuration-create-definition.md | Title: How to create custom machine configuration policy definitions description: Learn how to create a machine configuration policy. Previously updated : 07/25/2022 Last updated : 08/09/2022 Create a policy definition that audits using a custom configuration package, in a specified path: ```powershell-New-GuestConfigurationPolicy ` - -PolicyId 'My GUID' ` - -ContentUri '<paste the ContentUri output from the Publish command>' ` - -DisplayName 'My audit policy.' ` - -Description 'Details about my policy.' ` - -Path './policies' ` - -Platform 'Windows' ` - -PolicyVersion 1.0.0 ` - -Verbose +$PolicyConfig = @{ + PolicyId = '_My GUID_' + ContentUri = <_ContentUri output from the Publish command_> + DisplayName = 'My audit policy' + Description = 'My audit policy' + Path = './policies' + Platform = 'Windows' + PolicyVersion = '1.0.0' +} ++New-GuestConfigurationPolicy @PolicyConfig ``` Create a policy definition that deploys a configuration using a custom configuration package, in a specified path: ```powershell-New-GuestConfigurationPolicy ` - -PolicyId 'My GUID' ` - -ContentUri '<paste the ContentUri output from the Publish command>' ` - -DisplayName 'My audit policy.' ` - -Description 'Details about my policy.' ` - -Path './policies' ` - -Platform 'Windows' ` - -PolicyVersion 1.0.0 ` - -Mode 'ApplyAndAutoCorrect' ` - -Verbose +$PolicyConfig2 = @{ + PolicyId = '_My GUID_' + ContentUri = <_ContentUri output from the Publish command_> + DisplayName = 'My audit policy' + Description = 'My audit policy' + Path = './policies' + Platform = 'Windows' + PolicyVersion = '1.0.0' + Mode = 'ApplyAndAutoCorrect' +} ++New-GuestConfigurationPolicy @PolicyConfig2 ``` The cmdlet output returns an object containing the definition display name and The following example creates a policy definition to audit a service, where the list at the time of policy assignment. ```powershell-# This DSC Resource text: +# This DSC resource definition... Service 'UserSelectedNameExample' { Name = 'ParameterValue' Service 'UserSelectedNameExample' State = 'Running' } -# Would require the following hashtable: -$PolicyParameterInfo = @( @{- Name = 'ServiceName' # Policy parameter name (mandatory) - DisplayName = 'windows service name.' # Policy parameter display name (mandatory) - Description = 'Name of the windows service to be audited.' # Policy parameter description (optional) - ResourceType = 'Service' # DSC configuration resource type (mandatory) - ResourceId = 'UserSelectedNameExample' # DSC configuration resource id (mandatory) - ResourcePropertyName = 'Name' # DSC configuration resource property name (mandatory) - DefaultValue = 'winrm' # Policy parameter default value (optional) - AllowedValues = @('BDESVC','TermService','wuauserv','winrm') # Policy parameter allowed values (optional) - } -) --New-GuestConfigurationPolicy ` - -PolicyId 'My GUID' ` - -ContentUri '<paste the ContentUri output from the Publish command>' ` - -DisplayName 'Audit Windows Service.' ` - -Description 'Audit if a Windows Service isn't enabled on Windows machine.' ` - -Path '.\policies' ` - -Parameter $PolicyParameterInfo ` - -PolicyVersion 1.0.0 +# ...can be converted to a hash table: +$PolicyParameterInfo = @( @{+ # Policy parameter name (mandatory) + Name = 'ServiceName' + # Policy parameter display name (mandatory) + DisplayName = 'Windows service name.' 
+ # Policy parameter description (optional) + Description = 'Name of the Windows service to be audited.' + # DSC configuration resource type (mandatory) + ResourceType = 'Service' + # DSC configuration resource id (mandatory) + ResourceId = 'UserSelectedNameExample' + # DSC configuration resource property name (mandatory) + ResourcePropertyName = 'Name' + # Policy parameter default value (optional) + DefaultValue = 'winrm' + # Policy parameter allowed values (optional) + AllowedValues = @('BDESVC','TermService','wuauserv','winrm') + }) ++# ...and then passed into the `New-GuestConfigurationPolicy` cmdlet +$PolicyParam = @{ + PolicyId = 'My GUID' + ContentUri = '<ContentUri output from the Publish command>' + DisplayName = 'Audit Windows Service.' + Description = "Audit if a Windows Service isn't enabled on a Windows machine." + Path = '.\policies' + Parameter = $PolicyParameterInfo + PolicyVersion = '1.0.0' +} ++New-GuestConfigurationPolicy @PolicyParam ``` ### Publish the Azure Policy definition |
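After either variant of `New-GuestConfigurationPolicy` runs, the generated policy JSON under `-Path` still has to be published. A hedged sketch, assuming the returned object exposes the file path as a `Path` property (inspect the actual output object on your system):

```powershell
# Capture the cmdlet output, then publish the generated definition JSON
$policy = New-GuestConfigurationPolicy @PolicyParam

New-AzPolicyDefinition -Name 'Audit Windows Service' -Policy $policy.Path
```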
governance | Guest Configuration Baseline Windows | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/guest-configuration-baseline-windows.md | Title: Reference - Azure Policy guest configuration baseline for Windows description: Details of the Windows baseline on Azure implemented through Azure Policy guest configuration. Previously updated : 05/12/2022 Last updated : 08/23/2022 implementations: - **Vulnerabilities in security configuration on your machines should be remediated** in Azure Security Center -For more information, see [Azure Policy guest configuration](../concepts/guest-configuration.md) and [Overview of the Azure Security Benchmark (V2)](../../../security/benchmarks/overview.md). +## Account Policies - Password Policy ++|Name<br /><sub>(ID)</sub> |Details |Expected value<br /><sub>(Type)</sub> |Severity | +||||| +|Account Lockout Duration<br /><sub>(AZ-WIN-73312)</sub> |<br />**Key Path**: [System Access]LockoutDuration<br />**OS**: WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member |\>\= 15<br /><sub>(Policy)</sub> |Warning | ++## Administrative Templates - Windows Defender ++|Name<br /><sub>(ID)</sub> |Details |Expected value<br /><sub>(Type)</sub> |Severity | +||||| +|Configure detection for potentially unwanted applications<br /><sub>(AZ-WIN-202219)</sub> |<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows Defender\PUAProtection<br />**OS**: WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member |\= 1<br /><sub>(Registry)</sub> |Critical | +|Scan all downloaded files and attachments<br /><sub>(AZ-WIN-202221)</sub> |<br />**Key Path**: Software\Policies\Microsoft\Windows Defender\Real-Time Protection\DisableIOAVProtection<br />**OS**: WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member |\= 0<br /><sub>(Registry)</sub> |Warning | +|Turn off Microsoft Defender AntiVirus<br /><sub>(AZ-WIN-202220)</sub> |<br />**Key Path**: Software\Policies\Microsoft\Windows Defender\DisableAntiSpyware<br />**OS**: WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member |\= 0<br /><sub>(Registry)</sub> |Critical | +|Turn off real-time protection<br /><sub>(AZ-WIN-202222)</sub> |<br />**Key Path**: Software\Policies\Microsoft\Windows Defender\Real-Time Protection\DisableRealtimeMonitoring<br />**OS**: WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member |\= 0<br /><sub>(Registry)</sub> |Warning | +|Turn on e-mail scanning<br /><sub>(AZ-WIN-202218)</sub> |<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows Defender\Scan\DisableEmailScanning<br />**OS**: WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 0<br /><sub>(Registry)</sub> |Warning | +|Turn on script scanning<br /><sub>(AZ-WIN-202223)</sub> |<br />**Key Path**: Software\Policies\Microsoft\Windows Defender\Real-Time Protection\DisableScriptScanning<br />**OS**: WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member |\= 0<br /><sub>(Registry)</sub> |Warning | + ## Administrative Templates - Control Panel |Name<br /><sub>(ID)</sub> |Details |Expected value<br /><sub>(Type)</sub> |Severity | ||||| |Allow Input Personalization<br /><sub>(AZ-WIN-00168)</sub> 
|**Description**: This policy enables the automatic learning component of input personalization that includes speech, inking, and typing. Automatic learning enables the collection of speech and handwriting patterns, typing history, contacts, and recent calendar information. It is required for the use of Cortana. Some of this collected information may be stored on the user's OneDrive, in the case of inking and typing; some of the information will be uploaded to Microsoft to personalize speech. The recommended state for this setting is: `Disabled`.<br />**Key Path**: SOFTWARE\Policies\Microsoft\InputPersonalization\AllowInputPersonalization<br />**OS**: WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 0<br /><sub>(Registry)</sub> |Warning |
|Prevent enabling lock screen camera<br /><sub>(CCE-38347-1)</sub> |**Description**: Disables the lock screen camera toggle switch in PC Settings and prevents a camera from being invoked on the lock screen. By default, users can enable invocation of an available camera on the lock screen. If you enable this setting, users will no longer be able to enable or disable lock screen camera access in PC Settings, and the camera cannot be invoked on the lock screen.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\Personalization\NoLockScreenCamera<br />**OS**: WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 1<br /><sub>(Registry)</sub> |Warning |
|Prevent enabling lock screen slide show<br /><sub>(CCE-38348-9)</sub> |**Description**: Disables the lock screen slide show settings in PC Settings and prevents a slide show from playing on the lock screen. By default, users can enable a slide show that will run after they lock the machine. If you enable this setting, users will no longer be able to modify slide show settings in PC Settings, and no slide show will ever start.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\Personalization\NoLockScreenSlideshow<br />**OS**: WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 1<br /><sub>(Registry)</sub> |Warning |

## Administrative Templates - MS Security Guide

|Name<br /><sub>(ID)</sub> |Details |Expected value<br /><sub>(Type)</sub> |Severity |
|||||
|Disable SMB v1 client (remove dependency on LanmanWorkstation)<br /><sub>(AZ-WIN-00122)</sub> |<br />**Key Path**: SYSTEM\CurrentControlSet\Services\LanmanWorkstation\DependsOnService<br />**OS**: WS2008, WS2008R2, WS2012<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= Bowser\0MRxSmb20\0NSI\0\0<br /><sub>(Registry)</sub> |Critical |
|WDigest Authentication must be disabled.<br /><sub>(AZ-WIN-73497)</sub> |<br />**Key Path**: SYSTEM\CurrentControlSet\Control\SecurityProviders\Wdigest\UseLogonCredential<br />**OS**: WS2016, WS2019<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 0<br /><sub>(Registry)</sub> |Important |

## Administrative Templates - MSS

|Name<br /><sub>(ID)</sub> |Details |Expected value<br /><sub>(Type)</sub> |Severity |
|||||
|MSS: (DisableIPSourceRouting IPv6) IP source routing protection level (protects against packet spoofing)<br /><sub>(AZ-WIN-202213)</sub> |<br />**Key Path**: System\CurrentControlSet\Services\Tcpip6\Parameters\DisableIPSourceRouting<br />**OS**: WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member |\= 2<br /><sub>(Registry)</sub> |Informational |
|MSS: (DisableIPSourceRouting) IP source routing protection level (protects against packet spoofing)<br /><sub>(AZ-WIN-202244)</sub> |<br />**Key Path**: System\CurrentControlSet\Services\Tcpip\Parameters\DisableIPSourceRouting<br />**OS**: WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member |\= 2<br /><sub>(Registry)</sub> |Informational |
|MSS: (NoNameReleaseOnDemand) Allow the computer to ignore NetBIOS name release requests except from WINS servers<br /><sub>(AZ-WIN-202214)</sub> |<br />**Key Path**: System\CurrentControlSet\Services\Netbt\Parameters\NoNameReleaseOnDemand<br />**OS**: WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member |\= 1<br /><sub>(Registry)</sub> |Informational |
|MSS: (SafeDllSearchMode) Enable Safe DLL search mode (recommended)<br /><sub>(AZ-WIN-202215)</sub> |<br />**Key Path**: SYSTEM\CurrentControlSet\Control\Session Manager\SafeDllSearchMode<br />**OS**: WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member |\= 1<br /><sub>(Registry)</sub> |Warning |
|MSS: (WarningLevel) Percentage threshold for the security event log at which the system will generate a warning<br /><sub>(AZ-WIN-202212)</sub> |<br />**Key Path**: SYSTEM\CurrentControlSet\Services\Eventlog\Security\WarningLevel<br />**OS**: WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member | 90<br /><sub>(Registry)</sub> |Informational |
|Windows Server must be configured to prevent Internet Control Message Protocol (ICMP) redirects from overriding Open Shortest Path First (OSPF)-generated routes.<br /><sub>(AZ-WIN-73503)</sub> |<br />**Key Path**: SYSTEM\CurrentControlSet\Services\Tcpip\Parameters\EnableICMPRedirect<br />**OS**: WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 0<br /><sub>(Registry)</sub> |Informational |
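Rows typed (Registry) all reduce to the same check: read a value under HKEY_LOCAL_MACHINE and compare it with the expected data, where the last segment of **Key Path** is the value name and the rest is the subkey. A minimal sketch of that check, assuming a Windows host (`winreg` is in Python's standard library there); the `EXPECTED` table and the `read_value` helper name are illustrative, with the data taken from three of the MSS rows above:

```python
import winreg

EXPECTED = {
    # (subkey, value name): expected data, copied from the rows above
    (r"SYSTEM\CurrentControlSet\Services\Tcpip\Parameters", "DisableIPSourceRouting"): 2,
    (r"SYSTEM\CurrentControlSet\Services\Netbt\Parameters", "NoNameReleaseOnDemand"): 1,
    (r"SYSTEM\CurrentControlSet\Control\Session Manager", "SafeDllSearchMode"): 1,
}

def read_value(subkey, name):
    """Return the data stored under HKLM\\<subkey> for <name>, or None if absent."""
    try:
        with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, subkey) as key:
            data, _value_type = winreg.QueryValueEx(key, name)
            return data
    except FileNotFoundError:  # key or value not present
        return None

for (subkey, name), want in EXPECTED.items():
    got = read_value(subkey, name)
    print(f"{'OK' if got == want else 'MISMATCH'}: {name}={got!r} (expected {want})")
```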
## Administrative Templates - Network

|Name<br /><sub>(ID)</sub> |Details |Expected value<br /><sub>(Type)</sub> |Severity |
|||||
|Enable insecure guest logons<br /><sub>(AZ-WIN-00171)</sub> |**Description**: This policy setting determines if the SMB client will allow insecure guest logons to an SMB server. The recommended state for this setting is: `Disabled`.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\LanmanWorkstation\AllowInsecureGuestAuth<br />**OS**: WS2016, WS2019, WS2022<br />**Server Type**: Domain Member, Workgroup Member |\= 0<br /><sub>(Registry)</sub> |Critical |
|Hardened UNC Paths - NETLOGON<br /><sub>(AZ_WIN_202250)</sub> |<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\NetworkProvider\HardenedPaths\\\*\NETLOGON<br />**OS**: WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member |\= RequireMutualAuthentication=1, RequireIntegrity=1<br /><sub>(Registry)</sub> |Warning |
|Hardened UNC Paths - SYSVOL<br /><sub>(AZ_WIN_202251)</sub> |<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\NetworkProvider\HardenedPaths\\\*\SYSVOL<br />**OS**: WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member |\= RequireMutualAuthentication=1, RequireIntegrity=1<br /><sub>(Registry)</sub> |Warning |
|Minimize the number of simultaneous connections to the Internet or a Windows Domain<br /><sub>(CCE-38338-0)</sub> |**Description**: This policy setting prevents computers from connecting to both a domain-based network and a non-domain-based network at the same time. The recommended state for this setting is: `Enabled`.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\WcmSvc\GroupPolicy\fMinimizeConnections<br />**OS**: WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 1<br /><sub>(Registry)</sub> |Warning |
|Prohibit installation and configuration of Network Bridge on your DNS domain network<br /><sub>(CCE-38002-2)</sub> |**Description**: You can use this policy setting to control a user's ability to install and configure a network bridge. The recommended state for this setting is: `Enabled`.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\Network Connections\NC_AllowNetBridge_NLA<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 0<br /><sub>(Registry)</sub> |Warning |
|Prohibit use of Internet Connection Sharing on your DNS domain network<br /><sub>(AZ-WIN-00172)</sub> |**Description**: Although this "legacy" setting traditionally applied to the use of Internet Connection Sharing (ICS) in Windows 2000, Windows XP, and Server 2003, it now also applies to the Mobile Hotspot feature in Windows 10 and Server 2016. The recommended state for this setting is: `Enabled`.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\Network Connections\NC_ShowSharedAccessUI<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 0<br /><sub>(Registry)</sub> |Warning |
|Turn off multicast name resolution<br /><sub>(AZ-WIN-00145)</sub> |**Description**: LLMNR is a secondary name resolution protocol. With LLMNR, queries are sent using multicast over a local network link on a single subnet from a client computer to another client computer on the same subnet that also has LLMNR enabled. LLMNR does not require a DNS server or DNS client configuration, and provides name resolution in scenarios in which conventional DNS name resolution is not possible. The recommended state for this setting is: `Enabled`.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows NT\DNSClient\EnableMulticast<br />**OS**: WS2016, WS2019, WS2022<br />**Server Type**: Domain Member, Workgroup Member |\= 0<br /><sub>(Registry)</sub> |Warning |
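Two expected-value shapes in this table deserve a note: "Doesn't exist or \= 1" treats an absent value as compliant, and the Hardened UNC Paths rows compare a REG_SZ string of comma-separated flags rather than a DWORD. A sketch of both shapes under the same assumptions as the previous example; the helper names are illustrative:

```python
import winreg

def read_value(subkey, name):  # same illustrative helper as in the earlier sketch
    try:
        with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, subkey) as key:
            return winreg.QueryValueEx(key, name)[0]
    except FileNotFoundError:
        return None

def absent_or_equal(subkey, name, want):
    """'Doesn't exist or = X': a missing value counts as compliant."""
    got = read_value(subkey, name)
    return got is None or got == want

def hardened_unc_ok(share):
    """Hardened UNC Paths rows: the value name is the UNC pattern itself and
    the REG_SZ data is a comma-separated flag list."""
    got = read_value(
        r"SOFTWARE\Policies\Microsoft\Windows\NetworkProvider\HardenedPaths",
        rf"\\*\{share}",
    )
    if not isinstance(got, str):
        return False
    flags = {part.strip() for part in got.split(",")}
    return {"RequireMutualAuthentication=1", "RequireIntegrity=1"} <= flags

print(hardened_unc_ok("NETLOGON"), hardened_unc_ok("SYSVOL"))
```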
## Administrative Templates - Security Guide

|Name<br /><sub>(ID)</sub> |Details |Expected value<br /><sub>(Type)</sub> |Severity |
|||||
|Enable Structured Exception Handling Overwrite Protection (SEHOP)<br /><sub>(AZ-WIN-202210)</sub> |<br />**Key Path**: SYSTEM\CurrentControlSet\Control\Session Manager\kernel\DisableExceptionChainValidation<br />**OS**: WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member |\= 0<br /><sub>(Registry)</sub> |Critical |
|NetBT NodeType configuration<br /><sub>(AZ-WIN-202211)</sub> |<br />**Key Path**: SYSTEM\CurrentControlSet\Services\NetBT\Parameters\NodeType<br />**OS**: WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member |\= 2<br /><sub>(Registry)</sub> |Warning |

## Administrative Templates - System

|Name<br /><sub>(ID)</sub> |Details |Expected value<br /><sub>(Type)</sub> |Severity |
|||||
|Block user from showing account details on sign-in<br /><sub>(AZ-WIN-00138)</sub> |**Description**: This policy prevents the user from showing account details (email address or user name) on the sign-in screen. If you enable this policy setting, the user cannot choose to show account details on the sign-in screen. If you disable or do not configure this policy setting, the user may choose to show account details on the sign-in screen.<br />**Key Path**: Software\Policies\Microsoft\Windows\System\BlockUserFromShowingAccountDetailsOnSignin<br />**OS**: WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 1<br /><sub>(Registry)</sub> |Warning |
|Boot-Start Driver Initialization Policy<br /><sub>(CCE-37912-3)</sub> |**Description**: This policy setting allows you to specify which boot-start drivers are initialized based on a classification determined by an Early Launch Antimalware boot-start driver. The Early Launch Antimalware boot-start driver can return the following classifications for each boot-start driver: - Good: The driver has been signed and has not been tampered with. - Bad: The driver has been identified as malware. It is recommended that you do not allow known bad drivers to be initialized. - Bad, but required for boot: The driver has been identified as malware, but the computer cannot successfully boot without loading this driver. - Unknown: This driver has not been attested to by your malware detection application and has not been classified by the Early Launch Antimalware boot-start driver. If you enable this policy setting, you will be able to choose which boot-start drivers to initialize the next time the computer is started. If you disable or do not configure this policy setting, the boot-start drivers determined to be Good, Unknown, or Bad but Boot Critical are initialized and the initialization of drivers determined to be Bad is skipped. If your malware detection application does not include an Early Launch Antimalware boot-start driver or if your Early Launch Antimalware boot-start driver has been disabled, this setting has no effect and all boot-start drivers are initialized.<br />**Key Path**: SYSTEM\CurrentControlSet\Policies\EarlyLaunch\DriverLoadPolicy<br />**OS**: WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 3<br /><sub>(Registry)</sub> |Warning |
|Configure Offer Remote Assistance<br /><sub>(CCE-36388-7)</sub> |**Description**: This policy setting allows you to turn on or turn off Offer (Unsolicited) Remote Assistance on this computer. When it is disabled, help desk and support personnel cannot proactively offer assistance, although they can still respond to user assistance requests. The recommended state for this setting is: `Disabled`.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows NT\Terminal Services\fAllowUnsolicited<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 0<br /><sub>(Registry)</sub> |Warning |
|Configure Solicited Remote Assistance<br /><sub>(CCE-37281-3)</sub> |**Description**: This policy setting allows you to turn on or turn off Solicited (Ask for) Remote Assistance on this computer. The recommended state for this setting is: `Disabled`.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows NT\Terminal Services\fAllowToGetHelp<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 0<br /><sub>(Registry)</sub> |Critical |
|Do not display network selection UI<br /><sub>(CCE-38353-9)</sub> |**Description**: This policy setting allows you to control whether anyone can interact with the available networks UI on the logon screen. If you enable this policy setting, the PC's network connectivity state cannot be changed without signing into Windows. If you disable or don't configure this policy setting, any user can disconnect the PC from the network or can connect the PC to other available networks without signing into Windows.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\System\DontDisplayNetworkSelectionUI<br />**OS**: WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 1<br /><sub>(Registry)</sub> |Warning |
|Do not enumerate connected users on domain-joined computers<br /><sub>(AZ-WIN-202216)</sub> |<br />**Key Path**: Software\Policies\Microsoft\Windows\System\DontEnumerateConnectedUsers<br />**OS**: WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member |\= 1<br /><sub>(Registry)</sub> |Warning |
|Enable RPC Endpoint Mapper Client Authentication<br /><sub>(CCE-37346-4)</sub> |**Description**: This policy setting controls whether RPC clients authenticate with the Endpoint Mapper Service when the call they are making contains authentication information. The Endpoint Mapper Service on computers running Windows NT4 (all service packs) cannot process authentication information supplied in this manner. If you disable this policy setting, RPC clients will not authenticate to the Endpoint Mapper Service, but they will be able to communicate with the Endpoint Mapper Service on Windows NT4 Server. If you enable this policy setting, RPC clients will authenticate to the Endpoint Mapper Service for calls that contain authentication information. Clients making such calls will not be able to communicate with the Windows NT4 Server Endpoint Mapper Service. If you do not configure this policy setting, it remains disabled: RPC clients will not authenticate to the Endpoint Mapper Service, but they will be able to communicate with the Windows NT4 Server Endpoint Mapper Service. Note: This policy will not be applied until the system is rebooted.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows NT\Rpc\EnableAuthEpResolution<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Member, Workgroup Member |\= 1<br /><sub>(Registry)</sub> |Critical |
|Enable Windows NTP Client<br /><sub>(CCE-37843-0)</sub> |**Description**: This policy setting specifies whether the Windows NTP Client is enabled. Enabling the Windows NTP Client allows your computer to synchronize its computer clock with other NTP servers. You might want to disable this service if you decide to use a third-party time provider. The recommended state for this setting is: `Enabled`.<br />**Key Path**: SYSTEM\CurrentControlSet\Services\W32Time\TimeProviders\NtpClient\Enabled<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Workgroup Member |\= 1<br /><sub>(Registry)</sub> |Critical |
|Encryption Oracle Remediation for CredSSP protocol<br /><sub>(AZ-WIN-201910)</sub> |<br />**Key Path**: SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System\CredSSP\Parameters\AllowEncryptionOracle<br />**OS**: WS2016, WS2019<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 0<br /><sub>(Registry)</sub> |Critical |
|Ensure 'Configure registry policy processing: Do not apply during periodic background processing' is set to 'Enabled: FALSE'<br /><sub>(CCE-36169-1)</sub> |**Description**: The "Do not apply during periodic background processing" option prevents the system from updating affected policies in the background while the computer is in use. When background updates are disabled, policy changes will not take effect until the next user logon or system restart. The recommended state for this setting is: `Enabled: FALSE` (unchecked).<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\Group Policy\{35378EAC-683F-11D2-A89A-00C04FBBCFA2}\NoBackgroundPolicy<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member |\= 0<br /><sub>(Registry)</sub> |Critical |
|Ensure 'Configure registry policy processing: Process even if the Group Policy objects have not changed' is set to 'Enabled: TRUE'<br /><sub>(CCE-36169-1a)</sub> |**Description**: The "Process even if the Group Policy objects have not changed" option updates and reapplies policies even if the policies have not changed. The recommended state for this setting is: `Enabled: TRUE` (checked).<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\Group Policy\{35378EAC-683F-11D2-A89A-00C04FBBCFA2}\NoGPOListChanges<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member |\= 0<br /><sub>(Registry)</sub> |Critical |
|Ensure 'Continue experiences on this device' is set to 'Disabled'<br /><sub>(AZ-WIN-00170)</sub> |**Description**: This policy setting determines whether the Windows device is allowed to participate in cross-device experiences (continue experiences). The recommended state for this setting is: `Disabled`.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\System\EnableCdp<br />**OS**: WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 0<br /><sub>(Registry)</sub> |Warning |
|Enumerate local users on domain-joined computers<br /><sub>(AZ_WIN_202204)</sub> |<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\System\EnumerateLocalUsers<br />**OS**: WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Member |Doesn't exist or \= 0<br /><sub>(Registry)</sub> |Warning |
|Include command line in process creation events<br /><sub>(CCE-36925-6)</sub> |**Description**: This policy setting determines what information is logged in security audit events when a new process has been created. This setting only applies when the Audit Process Creation policy is enabled. If you enable this policy setting, the command line information for every process will be logged in plain text in the security event log as part of the Audit Process Creation event 4688, "a new process has been created," on the workstations and servers on which this policy setting is applied. If you disable or do not configure this policy setting, the process's command line information will not be included in Audit Process Creation events. Default: Not configured. **Note:** When this policy setting is enabled, any user with access to read the security events will be able to read the command line arguments for any successfully created process. Command line arguments can contain sensitive or private information such as passwords or user data.<br />**Key Path**: SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System\Audit\ProcessCreationIncludeCmdLine_Enabled<br />**OS**: WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 1<br /><sub>(Registry)</sub> |Critical |
|Prevent device metadata retrieval from the Internet<br /><sub>(AZ-WIN-202251)</sub> |<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\Device Metadata\PreventDeviceMetadataFromNetwork<br />**OS**: WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member |\= 1<br /><sub>(Registry)</sub> |Informational |
|Remote host allows delegation of non-exportable credentials<br /><sub>(AZ-WIN-20199)</sub> |<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\CredentialsDelegation\AllowProtectedCreds<br />**OS**: WS2016, WS2019<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 1<br /><sub>(Registry)</sub> |Critical |
|Turn off app notifications on the lock screen<br /><sub>(CCE-35893-7)</sub> |**Description**: This policy setting allows you to prevent app notifications from appearing on the lock screen. The recommended state for this setting is: `Enabled`.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\System\DisableLockScreenAppNotifications<br />**OS**: WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 1<br /><sub>(Registry)</sub> |Warning |
|Turn off background refresh of Group Policy<br /><sub>(CCE-14437-8)</sub> |<br />**Key Path**: Software\Microsoft\Windows\CurrentVersion\Policies\System\DisableBkGndGroupPolicy<br />**OS**: WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member |\= 0<br /><sub>(Registry)</sub> |Warning |
|Turn off downloading of print drivers over HTTP<br /><sub>(CCE-36625-2)</sub> |**Description**: This policy setting controls whether the computer can download print driver packages over HTTP. To set up HTTP printing, printer drivers that are not available in the standard operating system installation might need to be downloaded over HTTP. The recommended state for this setting is: `Enabled`.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows NT\Printers\DisableWebPnPDownload<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 1<br /><sub>(Registry)</sub> |Warning |
|Turn off Internet Connection Wizard if URL connection is referring to Microsoft.com<br /><sub>(CCE-37163-3)</sub> |**Description**: This policy setting specifies whether the Internet Connection Wizard can connect to Microsoft to download a list of Internet Service Providers (ISPs). The recommended state for this setting is: `Enabled`.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\Internet Connection Wizard\ExitOnMSICW<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 1<br /><sub>(Registry)</sub> |Warning |
|Turn on convenience PIN sign-in<br /><sub>(CCE-37528-7)</sub> |**Description**: This policy setting allows you to control whether a domain user can sign in using a convenience PIN. In Windows 10, convenience PIN was replaced with Passport, which has stronger security properties. To configure Passport for domain users, use the policies under Computer configuration\\Administrative Templates\\Windows Components\\Microsoft Passport for Work. **Note:** The user's domain password will be cached in the system vault when using this feature. The recommended state for this setting is: `Disabled`.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\System\AllowDomainPINLogon<br />**OS**: WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 0<br /><sub>(Registry)</sub> |Warning |
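The Boot-Start Driver Initialization Policy row above expects `DriverLoadPolicy` to be absent or `3`. The documented meanings of that DWORD make the intent clearer; a small decoding sketch, again reusing the illustrative `read_value` helper:

```python
import winreg

def read_value(subkey, name):  # same illustrative helper as in the earlier sketch
    try:
        with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, subkey) as key:
            return winreg.QueryValueEx(key, name)[0]
    except FileNotFoundError:
        return None

# Documented DriverLoadPolicy values for Early Launch Antimalware.
DRIVER_LOAD_POLICY = {
    0: "Good only",
    1: "Good and unknown",
    3: "Good, unknown and bad but critical (the default)",
    7: "All (good, unknown and bad)",
}

value = read_value(r"SYSTEM\CurrentControlSet\Policies\EarlyLaunch", "DriverLoadPolicy")
print("not configured, defaults to 3" if value is None
      else DRIVER_LOAD_POLICY.get(value, f"unexpected value {value}"))
```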
## Administrative Templates - Windows Component

|Name<br /><sub>(ID)</sub> |Details |Expected value<br /><sub>(Type)</sub> |Severity |
|||||
|Turn off cloud consumer account state content<br /><sub>(AZ-WIN-202217)</sub> |<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\CloudContent\DisableConsumerAccountStateContent<br />**OS**: WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member |\= 1<br /><sub>(Registry)</sub> |Warning |

## Administrative Templates - Windows Components

|Name<br /><sub>(ID)</sub> |Details |Expected value<br /><sub>(Type)</sub> |Severity |
|||||
|Do not allow drive redirection<br /><sub>(AZ-WIN-73569)</sub> |<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows NT\Terminal Services\fDisableCdm<br />**OS**: WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member |\= 1<br /><sub>(Registry)</sub> |Warning |
|Turn on PowerShell Transcription<br /><sub>(AZ-WIN-202208)</sub> |<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\PowerShell\Transcription\EnableTranscripting<br />**OS**: WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member |\= 0<br /><sub>(Registry)</sub> |Warning |

## Administrative Templates - Windows Security

|Name<br /><sub>(ID)</sub> |Details |Expected value<br /><sub>(Type)</sub> |Severity |
|||||
|Prevent users from modifying settings<br /><sub>(AZ-WIN-202209)</sub> |<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows Defender Security Center\App and Browser protection\DisallowExploitProtectionOverride<br />**OS**: WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member |\= 1<br /><sub>(Registry)</sub> |Warning |

## Administrative Templates - Windows Defender

|Name<br /><sub>(ID)</sub> |Details |Expected value<br /><sub>(Type)</sub> |Severity |
|||||
|Configure Attack Surface Reduction rules<br /><sub>(AZ_WIN_202205)</sub> |<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows Defender\Windows Defender Exploit Guard\ASR\ExploitGuard_ASR_Rules<br />**OS**: WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member |\= 1<br /><sub>(Registry)</sub> |Warning |
|Prevent users and apps from accessing dangerous websites<br /><sub>(AZ_WIN_202207)</sub> |<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows Defender\Windows Defender Exploit Guard\Network Protection\EnableNetworkProtection<br />**OS**: WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member |\= 1<br /><sub>(Registry)</sub> |Warning |

## Audit Computer Account Management

|Name<br /><sub>(ID)</sub> |Details |Expected value<br /><sub>(Type)</sub> |Severity |
|||||
|Audit Computer Account Management<br /><sub>(CCE-38004-8)</sub> |<br />**Key Path**: {0CCE9236-69AE-11D9-BED3-505054503030}<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller |\= Success<br /><sub>(Audit)</sub> |Critical |
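The row above is typed (Audit) rather than (Registry): its key path is an audit-subcategory GUID, so compliance is judged against the effective audit policy, not a registry value. A sketch using the built-in `auditpol` tool (a real command; this exact invocation is standard, while the parsing is deliberately naive, assumes English-locale output, and needs an elevated prompt):

```python
import subprocess

# auditpol is built in on Windows Server; requires administrator rights.
out = subprocess.run(
    ["auditpol", "/get", "/subcategory:Computer Account Management"],
    capture_output=True, text=True, check=True,
).stdout
# The row above expects at least Success auditing on this subcategory.
print("compliant" if "Success" in out else "non-compliant")
```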
## Secured Core

|Name<br /><sub>(ID)</sub> |Details |Expected value<br /><sub>(Type)</sub> |Severity |
|||||
|Enable boot DMA protection<br /><sub>(AZ-WIN-202250)</sub> |<br />**Key Path**: BootDMAProtection<br />**OS**: WSASHCI22H2<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 1<br /><sub>(OsConfig)</sub> |Critical |
|Enable hypervisor enforced code integrity<br /><sub>(AZ-WIN-202246)</sub> |<br />**Key Path**: HypervisorEnforcedCodeIntegrityStatus<br />**OS**: WSASHCI22H2<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 0<br /><sub>(OsConfig)</sub> |Critical |
|Enable secure boot<br /><sub>(AZ-WIN-202248)</sub> |<br />**Key Path**: SecureBootState<br />**OS**: WSASHCI22H2<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 1<br /><sub>(OsConfig)</sub> |Critical |
|Enable system guard<br /><sub>(AZ-WIN-202247)</sub> |<br />**Key Path**: SystemGuardStatus<br />**OS**: WSASHCI22H2<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 0<br /><sub>(OsConfig)</sub> |Critical |
|Enable virtualization based security<br /><sub>(AZ-WIN-202245)</sub> |<br />**Key Path**: VirtualizationBasedSecurityStatus<br />**OS**: WSASHCI22H2<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 0<br /><sub>(OsConfig)</sub> |Critical |
|Set TPM version<br /><sub>(AZ-WIN-202249)</sub> |<br />**Key Path**: TPMVersion<br />**OS**: WSASHCI22H2<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member | 2.0<br /><sub>(OsConfig)</sub> |Critical |

## Security Options - Accounts

|Name<br /><sub>(ID)</sub> |Details |Expected value<br /><sub>(Type)</sub> |Severity |
|||||
|Accounts: Block Microsoft accounts<br /><sub>(AZ-WIN-202201)</sub> |<br />**Key Path**: Software\Microsoft\Windows\CurrentVersion\Policies\System\NoConnectedUser<br />**OS**: WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member |\= 3<br /><sub>(Registry)</sub> |Warning |
|Accounts: Guest account status<br /><sub>(CCE-37432-2)</sub> |**Description**: This policy setting determines whether the Guest account is enabled or disabled. The Guest account allows unauthenticated network users to gain access to the system. The recommended state for this setting is: `Disabled`. **Note:** This setting will have no impact when applied to the domain controller organizational unit via group policy because domain controllers have no local account database. It can be configured at the domain level via group policy, similar to account lockout and password policy settings.<br />**Key Path**: [System Access]EnableGuestAccount<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 0<br /><sub>(Policy)</sub> |Critical |
|Accounts: Limit local account use of blank passwords to console logon only<br /><sub>(CCE-37615-2)</sub> |**Description**: This policy setting determines whether local accounts that are not password protected can be used to log on from locations other than the physical computer console. If you enable this policy setting, local accounts that have blank passwords will not be able to log on to the network from remote client computers. Such accounts will only be able to log on at the keyboard of the computer. The recommended state for this setting is: `Enabled`.<br />**Key Path**: SYSTEM\CurrentControlSet\Control\Lsa\LimitBlankPasswordUse<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 1<br /><sub>(Registry)</sub> |Critical |
|Network access: Allow anonymous SID/Name translation<br /><sub>(CCE-10024-8)</sub> |<br />**Key Path**: [System Access]LSAAnonymousNameLookup<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member |\= 0<br /><sub>(Policy)</sub> |Warning |
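Rows typed (Policy) with a `[System Access]` key path (the Guest account and anonymous SID/Name translation rows above) live in the local security policy rather than the registry. One way to inspect them is to export the policy with the built-in `secedit` tool and read the INF it writes; the file name and parsing below are illustrative, the command needs an elevated prompt, and UTF-16 is the usual but not guaranteed output encoding:

```python
import pathlib
import subprocess
import tempfile

with tempfile.TemporaryDirectory() as tmp:
    cfg = pathlib.Path(tmp, "secpol.inf")
    # secedit is built in on Windows; requires administrator rights.
    subprocess.run(["secedit", "/export", "/cfg", str(cfg)], check=True)
    for line in cfg.read_text(encoding="utf-16").splitlines():
        name = line.split("=")[0].strip()
        if name in ("EnableGuestAccount", "LSAAnonymousNameLookup"):
            print(line.strip())
```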
## Security Options - Devices

|Name<br /><sub>(ID)</sub> |Details |Expected value<br /><sub>(Type)</sub> |Severity |
|||||
|Devices: Allow undock without having to log on<br /><sub>(AZ-WIN-00120)</sub> |**Description**: This policy setting determines whether a portable computer can be undocked if the user does not log on to the system. Enable this policy setting to eliminate the logon requirement and allow use of an external hardware eject button to undock the computer. If you disable this policy setting, a user must log on and have been assigned the Remove computer from docking station user right to undock the computer.<br />**Key Path**: Software\Microsoft\Windows\CurrentVersion\Policies\System\UndockWithoutLogon<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Member, Workgroup Member |\= 0<br /><sub>(Registry)</sub> |Informational |
|Devices: Allowed to format and eject removable media<br /><sub>(CCE-37701-0)</sub> |**Description**: This policy setting determines who is allowed to format and eject removable media. You can use this policy setting to prevent unauthorized users from removing data on one computer to access it on another computer on which they have local administrator privileges.<br />**Key Path**: SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon\AllocateDASD<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 0<br /><sub>(Registry)</sub> |Warning |
|Devices: Prevent users from installing printer drivers<br /><sub>(CCE-37942-0)</sub> |**Description**: For a computer to print to a shared printer, the driver for that shared printer must be installed on the local computer. This security setting determines who is allowed to install a printer driver as part of connecting to a shared printer. The recommended state for this setting is: `Enabled`. **Note:** This setting does not affect the ability to add a local printer. This setting does not affect Administrators.<br />**Key Path**: SYSTEM\CurrentControlSet\Control\Print\Providers\LanMan Print Services\Servers\AddPrinterDrivers<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 1<br /><sub>(Registry)</sub> |Warning |
|Limits print driver installation to Administrators<br /><sub>(AZ_WIN_202202)</sub> |<br />**Key Path**: Software\Policies\Microsoft\Windows NT\Printers\PointAndPrint\RestrictDriverInstallationToAdministrators<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member |\= 1<br /><sub>(Registry)</sub> |Warning |

## Security Options - Domain member

|Name<br /><sub>(ID)</sub> |Details |Expected value<br /><sub>(Type)</sub> |Severity |
|||||
|Ensure 'Domain member: Digitally encrypt or sign secure channel data (always)' is set to 'Enabled'<br /><sub>(CCE-36142-8)</sub> |<br />**Key Path**: SYSTEM\CurrentControlSet\Services\Netlogon\Parameters\RequireSignOrSeal<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member |Doesn't exist or \= 1<br /><sub>(Registry)</sub> |Critical |
|Ensure 'Domain member: Digitally encrypt secure channel data (when possible)' is set to 'Enabled'<br /><sub>(CCE-37130-2)</sub> |<br />**Key Path**: SYSTEM\CurrentControlSet\Services\Netlogon\Parameters\SealSecureChannel<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member |Doesn't exist or \= 1<br /><sub>(Registry)</sub> |Critical |
|Ensure 'Domain member: Digitally sign secure channel data (when possible)' is set to 'Enabled'<br /><sub>(CCE-37222-7)</sub> |**Description**: This policy setting determines whether a domain member should attempt to negotiate whether all secure channel traffic that it initiates must be digitally signed. Digital signatures protect the traffic from being modified by anyone who captures the data as it traverses the network. The recommended state for this setting is: `Enabled`.<br />**Key Path**: SYSTEM\CurrentControlSet\Services\Netlogon\Parameters\SignSecureChannel<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member |Doesn't exist or \= 1<br /><sub>(Registry)</sub> |Critical |
|Ensure 'Domain member: Disable machine account password changes' is set to 'Disabled'<br /><sub>(CCE-37508-9)</sub> |**Description**: This policy setting determines whether a domain member can periodically change its computer account password. Computers that cannot automatically change their account passwords are potentially vulnerable, because an attacker might be able to determine the password for the system's domain account. The recommended state for this setting is: `Disabled`.<br />**Key Path**: SYSTEM\CurrentControlSet\Services\Netlogon\Parameters\DisablePasswordChange<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member |Doesn't exist or \= 0<br /><sub>(Registry)</sub> |Critical |
|Ensure 'Domain member: Maximum machine account password age' is set to '30 or fewer days, but not 0'<br /><sub>(CCE-37431-4)</sub> |**Description**: This policy setting determines the maximum allowable age for a computer account password. By default, domain members automatically change their domain passwords every 30 days. If you increase this interval significantly so that the computers no longer change their passwords, an attacker would have more time to undertake a brute force attack against one of the computer accounts. The recommended state for this setting is: `30 or fewer days, but not 0`. **Note:** A value of `0` does not conform to the benchmark as it disables maximum password age.<br />**Key Path**: System\CurrentControlSet\Services\Netlogon\Parameters\MaximumPasswordAge<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member |In 1-30<br /><sub>(Registry)</sub> |Critical |
|Ensure 'Domain member: Require strong (Windows 2000 or later) session key' is set to 'Enabled'<br /><sub>(CCE-37614-5)</sub> |<br />**Key Path**: SYSTEM\CurrentControlSet\Services\Netlogon\Parameters\RequireStrongKey<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member |Doesn't exist or \= 1<br /><sub>(Registry)</sub> |Critical |
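Ranged expectations such as "In 1-30" (the maximum machine account password age above, or "In 1-900" for the machine inactivity limit in the Interactive Logon table that follows) pass when the value exists and falls inside the inclusive range. A sketch of that check shape, reusing the illustrative `read_value` helper:

```python
import winreg

def read_value(subkey, name):  # same illustrative helper as in the earlier sketch
    try:
        with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, subkey) as key:
            return winreg.QueryValueEx(key, name)[0]
    except FileNotFoundError:
        return None

def in_range(subkey, name, lo, hi):
    """'In lo-hi' rows: the value must exist and lie inside the inclusive range."""
    got = read_value(subkey, name)
    return isinstance(got, int) and lo <= got <= hi

print(in_range(r"System\CurrentControlSet\Services\Netlogon\Parameters",
               "MaximumPasswordAge", 1, 30))
```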
## Security Options - Interactive Logon

|Name<br /><sub>(ID)</sub> |Details |Expected value<br /><sub>(Type)</sub> |Severity |
|||||
|Caching of logon credentials must be limited<br /><sub>(AZ-WIN-73651)</sub> |<br />**Key Path**: SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon\CachedLogonsCount<br />**OS**: WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |In 1-4<br /><sub>(Registry)</sub> |Informational |
|Interactive logon: Do not display last user name<br /><sub>(CCE-36056-0)</sub> |**Description**: This policy setting determines whether the account name of the last user to log on to the client computers in your organization will be displayed in each computer's respective Windows logon screen. Enable this policy setting to prevent intruders from collecting account names visually from the screens of desktop or laptop computers in your organization. The recommended state for this setting is: `Enabled`.<br />**Key Path**: SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System\DontDisplayLastUserName<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 1<br /><sub>(Registry)</sub> |Critical |
|Interactive logon: Do not require CTRL+ALT+DEL<br /><sub>(CCE-37637-6)</sub> |**Description**: This policy setting determines whether users must press CTRL+ALT+DEL before they log on. The recommended state for this setting is: `Disabled`.<br />**Key Path**: SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System\DisableCAD<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 0<br /><sub>(Registry)</sub> |Critical |
|Interactive logon: Machine inactivity limit<br /><sub>(AZ-WIN-73645)</sub> |<br />**Key Path**: SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System\InactivityTimeoutSecs<br />**OS**: WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |In 1-900<br /><sub>(Registry)</sub> |Important |
|Interactive logon: Message text for users attempting to log on<br /><sub>(AZ-WIN-202253)</sub> |<br />**Key Path**: SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System\LegalNoticeText<br />**OS**: WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member | <br /><sub>(Registry)</sub> |Warning |
|Interactive logon: Message title for users attempting to log on<br /><sub>(AZ-WIN-202254)</sub> |<br />**Key Path**: SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System\LegalNoticeCaption<br />**OS**: WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member | <br /><sub>(Registry)</sub> |Warning |
|Interactive logon: Prompt user to change password before expiration<br /><sub>(CCE-10930-6)</sub> |<br />**Key Path**: Software\Microsoft\Windows NT\CurrentVersion\Winlogon\PasswordExpiryWarning<br />**OS**: WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |In 5-14<br /><sub>(Registry)</sub> |Informational |

## Security Options - Microsoft Network Server

|Name<br /><sub>(ID)</sub> |Details |Expected value<br /><sub>(Type)</sub> |Severity |
|||||
|Microsoft network server: Digitally sign communications (always)<br /><sub>(CCE-37864-6)</sub> |**Description**: This policy setting determines whether packet signing is required by the SMB server component. Enable this policy setting in a mixed environment to prevent downstream clients from using the workstation as a network server. The recommended state for this setting is: `Enabled`.<br />**Key Path**: SYSTEM\CurrentControlSet\Services\LanManServer\Parameters\RequireSecuritySignature<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 1<br /><sub>(Registry)</sub> |Critical |
|Microsoft network server: Digitally sign communications (if client agrees)<br /><sub>(CCE-35988-5)</sub> |**Description**: This policy setting determines whether the SMB server will negotiate SMB packet signing with clients that request it. If no signing request comes from the client, a connection will be allowed without a signature if the **Microsoft network server: Digitally sign communications (always)** setting is not enabled. **Note:** Enable this policy setting on SMB clients on your network to make them fully effective for packet signing with all clients and servers in your environment. The recommended state for this setting is: `Enabled`.<br />**Key Path**: SYSTEM\CurrentControlSet\Services\LanManServer\Parameters\EnableSecuritySignature<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 1<br /><sub>(Registry)</sub> |Critical |
|Microsoft network server: Disconnect clients when logon hours expire<br /><sub>(CCE-37972-7)</sub> |**Description**: This security setting determines whether to disconnect users who are connected to the local computer outside their user account's valid logon hours. This setting affects the Server Message Block (SMB) component. If you enable this policy setting, you should also enable **Network security: Force logoff when logon hours expire** (Rule 2.3.11.6). If your organization configures logon hours for users, this policy setting is necessary to ensure they are effective. The recommended state for this setting is: `Enabled`.<br />**Key Path**: SYSTEM\CurrentControlSet\Services\LanManServer\Parameters\EnableForcedLogoff<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 1<br /><sub>(Registry)</sub> |Critical |
|Microsoft network server: Server SPN target name validation level<br /><sub>(CCE-10617-9)</sub> |<br />**Key Path**: System\CurrentControlSet\Services\LanManServer\Parameters\SMBServerNameHardeningLevel<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Member |\= 1<br /><sub>(Registry)</sub> |Warning |

## Security Options - Network Access

|Name<br /><sub>(ID)</sub> |Details |Expected value<br /><sub>(Type)</sub> |Severity |
|||||
|Accounts: Rename administrator account<br /><sub>(CCE-10976-9)</sub> |<br />**Key Path**: [System Access]NewAdministratorName<br />**OS**: WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member | Administrator<br /><sub>(Policy)</sub> |Warning |
|Network access: Do not allow anonymous enumeration of SAM accounts<br /><sub>(CCE-36316-8)</sub> |**Description**: This policy setting controls the ability of anonymous users to enumerate the accounts in the Security Accounts Manager (SAM). If you enable this policy setting, users with anonymous connections will not be able to enumerate domain account user names on the systems in your environment. This policy setting also allows additional restrictions on anonymous connections. The recommended state for this setting is: `Enabled`. **Note:** This policy has no effect on domain controllers.<br />**Key Path**: SYSTEM\CurrentControlSet\Control\Lsa\RestrictAnonymousSAM<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Member, Workgroup Member |Doesn't exist or \= 1<br /><sub>(Registry)</sub> |Critical |
|Network access: Do not allow anonymous enumeration of SAM accounts and shares<br /><sub>(CCE-36077-6)</sub> |**Description**: This policy setting controls the ability of anonymous users to enumerate SAM accounts as well as shares. If you enable this policy setting, anonymous users will not be able to enumerate domain account user names and network share names on the systems in your environment. The recommended state for this setting is: `Enabled`. **Note:** This policy has no effect on domain controllers.<br />**Key Path**: SYSTEM\CurrentControlSet\Control\Lsa\RestrictAnonymous<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Member, Workgroup Member |\= 1<br /><sub>(Registry)</sub> |Critical |
|Network access: Let Everyone permissions apply to anonymous users<br /><sub>(CCE-36148-5)</sub> |**Description**: This policy setting determines what additional permissions are assigned for anonymous connections to the computer. The recommended state for this setting is: `Disabled`.<br />**Key Path**: SYSTEM\CurrentControlSet\Control\Lsa\EveryoneIncludesAnonymous<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 0<br /><sub>(Registry)</sub> |Critical |

## Security Options - Network Security

|Name<br /><sub>(ID)</sub> |Details |Expected value<br /><sub>(Type)</sub> |Severity |
|||||
|Network security: Minimum session security for NTLM SSP based (including secure RPC) clients<br /><sub>(CCE-37553-5)</sub> |**Description**: This policy setting determines which behaviors are allowed by clients for applications using the NTLM Security Support Provider (SSP). The SSP Interface (SSPI) is used by applications that need authentication services. The setting does not modify how the authentication sequence works but instead requires certain behaviors in applications that use the SSPI. The recommended state for this setting is: `Require NTLMv2 session security, Require 128-bit encryption`. **Note:** These values are dependent on the _Network security: LAN Manager Authentication Level_ security setting value.<br />**Key Path**: SYSTEM\CurrentControlSet\Control\Lsa\MSV1_0\NTLMMinClientSec<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 537395200<br /><sub>(Registry)</sub> |Critical |
|Network security: Minimum session security for NTLM SSP based (including secure RPC) servers<br /><sub>(CCE-37835-6)</sub> |**Description**: This policy setting determines which behaviors are allowed by servers for applications using the NTLM Security Support Provider (SSP). The SSP Interface (SSPI) is used by applications that need authentication services. The setting does not modify how the authentication sequence works but instead requires certain behaviors in applications that use the SSPI. The recommended state for this setting is: `Require NTLMv2 session security, Require 128-bit encryption`. **Note:** These values are dependent on the _Network security: LAN Manager Authentication Level_ security setting value.<br />**Key Path**: SYSTEM\CurrentControlSet\Control\Lsa\MSV1_0\NTLMMinServerSec<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 537395200<br /><sub>(Registry)</sub> |Critical |
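The `537395200` expected by the two NTLM minimum-session-security rows is a bit mask, not an arbitrary constant: it is the combination of the documented flags for the two recommended behaviors. A quick check of the arithmetic:

```python
# Documented NTLMMinClientSec / NTLMMinServerSec flag values:
REQUIRE_NTLMV2_SESSION_SECURITY = 0x00080000  # 524288
REQUIRE_128BIT_ENCRYPTION = 0x20000000        # 536870912

combined = REQUIRE_NTLMV2_SESSION_SECURITY | REQUIRE_128BIT_ENCRYPTION
assert combined == 537395200 == 0x20080000
print(hex(combined))
```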
## Security Options - Shutdown

|Name<br /><sub>(ID)</sub> |Details |Expected value<br /><sub>(Type)</sub> |Severity |
|||||
|Shutdown: Allow system to be shut down without having to log on<br /><sub>(CCE-36788-8)</sub> |**Description**: This policy setting determines whether a computer can be shut down when a user is not logged on. If this policy setting is enabled, the shutdown command is available on the Windows logon screen. It is recommended to disable this policy setting to restrict the ability to shut down the computer to users with credentials on the system. The recommended state for this setting is: `Disabled`. **Note:** In Server 2008 R2 and older versions, this setting had no impact on Remote Desktop (RDP) / Terminal Services sessions - it only affected the local console. However, Microsoft changed the behavior in Windows Server 2012 (non-R2) and above, where if set to Enabled, RDP sessions are also allowed to shut down or restart the server.<br />**Key Path**: SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System\ShutdownWithoutLogon<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 0<br /><sub>(Registry)</sub> |Warning |
|Shutdown: Clear virtual memory pagefile<br /><sub>(AZ-WIN-00181)</sub> |**Description**: This policy setting determines whether the virtual memory pagefile is cleared when the system is shut down. When this policy setting is enabled, the system pagefile is cleared each time that the system shuts down properly. If you enable this security setting, the hibernation file (Hiberfil.sys) is zeroed out when hibernation is disabled on a portable computer system. It will take longer to shut down and restart the computer, and will be especially noticeable on computers with large paging files.<br />**Key Path**: System\CurrentControlSet\Control\Session Manager\Memory Management\ClearPageFileAtShutdown<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Member, Workgroup Member |Doesn't exist or \= 0<br /><sub>(Registry)</sub> |Critical |

## Security Options - System cryptography

|Name<br /><sub>(ID)</sub> |Details |Expected value<br /><sub>(Type)</sub> |Severity |
|||||
|Users must be required to enter a password to access private keys stored on the computer.<br /><sub>(AZ-WIN-73699)</sub> |<br />**Key Path**: SOFTWARE\Policies\Microsoft\Cryptography\ForceKeyProtection<br />**OS**: WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 2<br /><sub>(Registry)</sub> |Important |
|Windows Server must be configured to use FIPS-compliant algorithms for encryption, hashing, and signing.<br /><sub>(AZ-WIN-73701)</sub> |<br />**Key Path**: SYSTEM\CurrentControlSet\Control\Lsa\FIPSAlgorithmPolicy\Enabled<br />**OS**: WS2016, WS2019, WS2022<br />**Server Type**: Domain Member |\= 1<br /><sub>(Registry)</sub> |Important |

## Security Options - System objects

|Name<br /><sub>(ID)</sub> |Details |Expected value<br /><sub>(Type)</sub> |Severity |
|||||
|System objects: Require case insensitivity for non-Windows subsystems<br /><sub>(CCE-37885-1)</sub> |**Description**: This policy setting determines whether case insensitivity is enforced for all subsystems. The Microsoft Win32 subsystem is case insensitive. However, the kernel supports case sensitivity for other subsystems, such as the Portable Operating System Interface for UNIX (POSIX). Because Windows is case insensitive (but the POSIX subsystem will support case sensitivity), failure to enforce this policy setting makes it possible for a user of the POSIX subsystem to create a file with the same name as another file by using mixed case to label it. Such a situation can block access to these files by another user who uses typical Win32 tools, because only one of the files will be available. The recommended state for this setting is: `Enabled`.<br />**Key Path**: System\CurrentControlSet\Control\Session Manager\Kernel\ObCaseInsensitive<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 1<br /><sub>(Registry)</sub> |Warning |
## Security Options - User Account Control

|Name<br /><sub>(ID)</sub> |Details |Expected value<br /><sub>(Type)</sub> |Severity |
|||||
|User Account Control: Admin Approval Mode for the Built-in Administrator account<br /><sub>(CCE-36494-3)</sub> |**Description**: This policy setting controls the behavior of Admin Approval Mode for the built-in Administrator account. The recommended state for this setting is: `Enabled`.<br />**Key Path**: SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System\FilterAdministratorToken<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 1<br /><sub>(Registry)</sub> |Critical |
|User Account Control: Allow UIAccess applications to prompt for elevation without using the secure desktop<br /><sub>(CCE-36863-9)</sub> |**Description**: This policy setting controls whether User Interface Accessibility (UIAccess or UIA) programs can automatically disable the secure desktop for elevation prompts used by a standard user. The recommended state for this setting is: `Disabled`.<br />**Key Path**: SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System\EnableUIADesktopToggle<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 0<br /><sub>(Registry)</sub> |Critical |
|User Account Control: Behavior of the elevation prompt for administrators in Admin Approval Mode<br /><sub>(CCE-37029-6)</sub> |**Description**: This policy setting controls the behavior of the elevation prompt for administrators. The recommended state for this setting is: `Prompt for consent on the secure desktop`.<br />**Key Path**: SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System\ConsentPromptBehaviorAdmin<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 2<br /><sub>(Registry)</sub> |Critical |
|User Account Control: Behavior of the elevation prompt for standard users<br /><sub>(CCE-36864-7)</sub> |**Description**: This policy setting controls the behavior of the elevation prompt for standard users. The recommended state for this setting is: `Automatically deny elevation requests`.<br />**Key Path**: SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System\ConsentPromptBehaviorUser<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 0<br /><sub>(Registry)</sub> |Critical |
|User Account Control: Detect application installations and prompt for elevation<br /><sub>(CCE-36533-8)</sub> |**Description**: This policy setting controls the behavior of application installation detection for the computer. The recommended state for this setting is: `Enabled`.<br />**Key Path**: SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System\EnableInstallerDetection<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 1<br /><sub>(Registry)</sub> |Critical |
|User Account Control: Only elevate UIAccess applications that are installed in secure locations<br /><sub>(CCE-37057-7)</sub> |**Description**: This policy setting controls whether applications that request to run with a User Interface Accessibility (UIAccess) integrity level must reside in a secure location in the file system. Secure locations are limited to the following: - `…\Program Files\`, including subfolders - `…\Windows\system32\` - `…\Program Files (x86)\`, including subfolders for 64-bit versions of Windows **Note:** Windows enforces a public key infrastructure (PKI) signature check on any interactive application that requests to run with a UIAccess integrity level regardless of the state of this security setting. The recommended state for this setting is: `Enabled`.<br />**Key Path**: SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System\EnableSecureUIAPaths<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 1<br /><sub>(Registry)</sub> |Critical |
|User Account Control: Run all administrators in Admin Approval Mode<br /><sub>(CCE-36869-6)</sub> |**Description**: This policy setting controls the behavior of all User Account Control (UAC) policy settings for the computer. If you change this policy setting, you must restart your computer. The recommended state for this setting is: `Enabled`. **Note:** If this policy setting is disabled, the Security Center notifies you that the overall security of the operating system has been reduced.<br />**Key Path**: SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System\EnableLUA<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 1<br /><sub>(Registry)</sub> |Critical |
|User Account Control: Switch to the secure desktop when prompting for elevation<br /><sub>(CCE-36866-2)</sub> |**Description**: This policy setting controls whether the elevation request prompt is displayed on the interactive user's desktop or the secure desktop. The recommended state for this setting is: `Enabled`.<br />**Key Path**: SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System\PromptOnSecureDesktop<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 1<br /><sub>(Registry)</sub> |Critical |
|User Account Control: Virtualize file and registry write failures to per-user locations<br /><sub>(CCE-37064-3)</sub> |**Description**: This policy setting controls whether application write failures are redirected to defined registry and file system locations. This policy setting mitigates applications that run as administrator and write run-time application data to: - `%ProgramFiles%`, - `%Windir%`, - `%Windir%\system32`, or - `HKEY_LOCAL_MACHINE\Software`. The recommended state for this setting is: `Enabled`.<br />**Key Path**: SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System\EnableVirtualization<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 1<br /><sub>(Registry)</sub> |Critical |
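Beyond auditing, the same `winreg` module can apply a baseline value when a machine has drifted. A hedged remediation sketch follows; the helper `set_baseline_dword` is my own illustration, not part of the baseline tooling, and per the table above a restart is required before a change to `EnableLUA` takes effect.

```python
import winreg

def set_baseline_dword(key_path, value_name, value):
    """Create the key if needed and write the baseline REG_DWORD value."""
    # Must run in an elevated (administrator) process to write HKLM policies.
    with winreg.CreateKeyEx(winreg.HKEY_LOCAL_MACHINE, key_path, 0,
                            winreg.KEY_SET_VALUE) as key:
        winreg.SetValueEx(key, value_name, 0, winreg.REG_DWORD, value)

# Enforce "Run all administrators in Admin Approval Mode" (EnableLUA = 1).
# The change takes effect after the computer restarts.
set_baseline_dword(r"SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System",
                   "EnableLUA", 1)
```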
## Security Settings - Account Policies

|Name<br /><sub>(ID)</sub> |Details |Expected value<br /><sub>(Type)</sub> |Severity |
|||||
|Account lockout threshold.<br /><sub>(AZ-WIN-73311)</sub> |<br />**Key Path**: [System Access]LockoutBadCount<br />**OS**: WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |In 1-3<br /><sub>(Policy)</sub> |Important |
|Enforce password history<br /><sub>(CCE-37166-6)</sub> |**Description**: <p><span>This policy setting determines the number of renewed, unique passwords that have to be associated with a user account before you can reuse an old password. The value for this policy setting must be between 0 and 24 passwords. The default value for Windows Vista is 0 passwords, but the default setting in a domain is 24 passwords. To maintain the effectiveness of this policy setting, use the Minimum password age setting to prevent users from repeatedly changing their password. The recommended state for this setting is: '24 or more password(s)'.</span></p><br />**Key Path**: [System Access]PasswordHistorySize<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\>\= 24<br /><sub>(Policy)</sub> |Critical |
|Maximum password age<br /><sub>(CCE-37167-4)</sub> |**Description**: This policy setting defines how long a user can use their password before it expires. Values for this policy setting range from 0 to 999 days. If you set the value to 0, the password will never expire. Because attackers can crack passwords, the more frequently you change the password the less opportunity an attacker has to use a cracked password. However, the lower this value is set, the higher the potential for an increase in calls to help desk support due to users having to change their password or forgetting which password is current. The recommended state for this setting is `60 or fewer days, but not 0`.<br />**Key Path**: [System Access]MaximumPasswordAge<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |In 1-70<br /><sub>(Policy)</sub> |Critical |
|Minimum password age<br /><sub>(CCE-37073-4)</sub> |**Description**: This policy setting determines the number of days that you must use a password before you can change it. The range of values for this policy setting is between 1 and 999 days. (You may also set the value to 0 to allow immediate password changes.) The default value for this setting is 0 days. The recommended state for this setting is: `1 or more day(s)`.<br />**Key Path**: [System Access]MinimumPasswordAge<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\>\= 1<br /><sub>(Policy)</sub> |Critical |
|Minimum password length<br /><sub>(CCE-36534-6)</sub> |**Description**: This policy setting determines the least number of characters that make up a password for a user account. There are many different theories about how to determine the best password length for an organization, but perhaps "pass phrase" is a better term than "password." In Microsoft Windows 2000 or later, pass phrases can be quite long and can include spaces. Therefore, a phrase such as "I want to drink a $5 milkshake" is a valid pass phrase; it is a considerably stronger password than an 8 or 10 character string of random numbers and letters, and yet is easier to remember. Users must be educated about the proper selection and maintenance of passwords, especially with regard to password length. In enterprise environments, the ideal value for the Minimum password length setting is 14 characters; however, you should adjust this value to meet your organization's business requirements. The recommended state for this setting is: `14 or more character(s)`.<br />**Key Path**: [System Access]MinimumPasswordLength<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\>\= 14<br /><sub>(Policy)</sub> |Critical |
|Password must meet complexity requirements<br /><sub>(CCE-37063-5)</sub> |**Description**: This policy setting checks all new passwords to ensure that they meet basic requirements for strong passwords. When this policy is enabled, passwords must meet the following minimum requirements: - Does not contain the user's account name or parts of the user's full name that exceed two consecutive characters - Be at least six characters in length - Contain characters from three of the following four categories: - English uppercase characters (A through Z) - English lowercase characters (a through z) - Base 10 digits (0 through 9) - Non-alphabetic characters (for example, !, $, #, %) - A catch-all category of any Unicode character that does not fall under the previous four categories. This fifth category can be regionally specific. Each additional character in a password increases its complexity exponentially. For instance, a seven-character, all lower-case alphabetic password would have 26<sup>7</sup> (approximately 8 x 10<sup>9</sup>, or 8 billion) possible combinations. At 1,000,000 attempts per second (a capability of many password-cracking utilities), it would only take 133 minutes to crack. A seven-character alphabetic password with case sensitivity has 52<sup>7</sup> combinations. A seven-character case-sensitive alphanumeric password without punctuation has 62<sup>7</sup> combinations. An eight-character password has 26<sup>8</sup> (or 2 x 10<sup>11</sup>) possible combinations. Although this might seem to be a large number, at 1,000,000 attempts per second it would take only 59 hours to try all possible passwords. Remember, these times will significantly increase for passwords that use ALT characters and other special keyboard characters such as "!" or "@". Proper use of the password settings can help make it difficult to mount a brute force attack. The recommended state for this setting is: `Enabled`.<br />**Key Path**: [System Access]PasswordComplexity<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= true<br /><sub>(Policy)</sub> |Critical |
|Reset account lockout counter.<br /><sub>(AZ-WIN-73309)</sub> |<br />**Key Path**: [System Access]ResetLockoutCount<br />**OS**: WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member |\>\= 15<br /><sub>(Policy)</sub> |Important |
|Store passwords using reversible encryption<br /><sub>(CCE-36286-3)</sub> |**Description**: This policy setting determines whether the operating system stores passwords in a way that uses reversible encryption, which provides support for application protocols that require knowledge of the user's password for authentication purposes. Passwords that are stored with reversible encryption are essentially the same as plaintext versions of the passwords. The recommended state for this setting is: `Disabled`.<br />**Key Path**: [System Access]ClearTextPassword<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 0<br /><sub>(Policy)</sub> |Critical |
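The combination counts in the password-complexity description are easy to reproduce. The following quick sanity check of that arithmetic is plain Python and nothing baseline-specific; it assumes the same 1,000,000 guesses per second used in the description.

```python
# Search space = alphabet_size ** length; time assumes 1,000,000 guesses/second.
for alphabet, length in [(26, 7), (52, 7), (62, 7), (26, 8)]:
    space = alphabet ** length
    seconds = space / 1_000_000
    print(f"{alphabet}^{length} = {space:.2e} combinations, "
          f"~{seconds / 3600:.1f} hours to exhaust")
```

For 26<sup>7</sup> this prints roughly 2.2 hours (about 133 minutes), and for 26<sup>8</sup> roughly 58 hours, matching the figures in the table.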
## Security Settings - Windows Firewall

|Name<br /><sub>(ID)</sub> |Details |Expected value<br /><sub>(Type)</sub> |Severity |
|||||
|Windows Firewall: Domain: Allow unicast response<br /><sub>(AZ-WIN-00088)</sub> |**Description**: <p><span>This option is useful if you need to control whether this computer receives unicast responses to its outgoing multicast or broadcast messages.</span></p><p><span>We recommend setting this to 'Yes' for the Private and Domain profiles; this will set the registry value to 0.</span></p><br />**Key Path**: Software\Policies\Microsoft\WindowsFirewall\DomainProfile\DisableUnicastResponsesToMulticastBroadcast<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member |\= 0<br /><sub>(Registry)</sub> |Warning |
|Windows Firewall: Domain: Firewall state<br /><sub>(CCE-36062-8)</sub> |**Description**: Select On (recommended) to have Windows Firewall with Advanced Security use the settings for this profile to filter network traffic. If you select Off, Windows Firewall with Advanced Security will not use any of the firewall rules or connection security rules for this profile.<br />**Key Path**: SOFTWARE\Policies\Microsoft\WindowsFirewall\DomainProfile\EnableFirewall<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 1<br /><sub>(Registry)</sub> |Critical |
|Windows Firewall: Domain: Inbound connections<br /><sub>(AZ-WIN-202252)</sub> |<br />**Key Path**: SOFTWARE\Policies\Microsoft\WindowsFirewall\DomainProfile\DefaultInboundAction<br />**OS**: WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member |\= 1<br /><sub>(Registry)</sub> |Critical |
|Windows Firewall: Domain: Logging: Log dropped packets<br /><sub>(AZ-WIN-202226)</sub> |<br />**Key Path**: SOFTWARE\Policies\Microsoft\WindowsFirewall\DomainProfile\Logging\LogDroppedPackets<br />**OS**: WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 1<br /><sub>(Registry)</sub> |Informational |
|Windows Firewall: Domain: Logging: Log successful connections<br /><sub>(AZ-WIN-202227)</sub> |<br />**Key Path**: SOFTWARE\Policies\Microsoft\WindowsFirewall\DomainProfile\Logging\LogSuccessfulConnections<br />**OS**: WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member |\= 1<br /><sub>(Registry)</sub> |Warning |
|Windows Firewall: Domain: Logging: Name<br /><sub>(AZ-WIN-202224)</sub> |<br />**Key Path**: SOFTWARE\Policies\Microsoft\WindowsFirewall\DomainProfile\Logging\LogFilePath<br />**OS**: WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member |\= %SystemRoot%\System32\logfiles\firewall\domainfw.log<br /><sub>(Registry)</sub> |Informational |
|Windows Firewall: Domain: Logging: Size limit (KB)<br /><sub>(AZ-WIN-202225)</sub> |<br />**Key Path**: SOFTWARE\Policies\Microsoft\WindowsFirewall\DomainProfile\Logging\LogFileSize<br />**OS**: WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member |\>\= 16384<br /><sub>(Registry)</sub> |Warning |
|Windows Firewall: Domain: Outbound connections<br /><sub>(CCE-36146-9)</sub> |**Description**: This setting determines the behavior for outbound connections that do not match an outbound firewall rule. In Windows Vista, the default behavior is to allow connections unless there are firewall rules that block the connection.<br />**Key Path**: SOFTWARE\Policies\Microsoft\WindowsFirewall\DomainProfile\DefaultOutboundAction<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member |\= 0<br /><sub>(Registry)</sub> |Critical |
|Windows Firewall: Domain: Settings: Apply local connection security rules<br /><sub>(CCE-38040-2)</sub> |**Description**: <p><span>This setting controls whether local administrators are allowed to create local connection rules that apply together with firewall rules configured by Group Policy. The recommended state for this setting is 'Yes'; this will set the registry value to 1.</span></p><br />**Key Path**: SOFTWARE\Policies\Microsoft\WindowsFirewall\DomainProfile\AllowLocalIPsecPolicyMerge<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member |\= 0<br /><sub>(Registry)</sub> |Critical |
|Windows Firewall: Domain: Settings: Apply local firewall rules<br /><sub>(CCE-37860-4)</sub> |**Description**: <p><span>This setting controls whether local administrators are allowed to create local firewall rules that apply together with firewall rules configured by Group Policy.</span></p><p><span>The recommended state for this setting is 'Yes'; this will set the registry value to 1.</span></p><br />**Key Path**: SOFTWARE\Policies\Microsoft\WindowsFirewall\DomainProfile\AllowLocalPolicyMerge<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member |Doesn't exist or \= 0<br /><sub>(Registry)</sub> |Critical |
|Windows Firewall: Domain: Settings: Display a notification<br /><sub>(CCE-38041-0)</sub> |**Description**: <p><span>By selecting this option, no notification is displayed to the user when a program is blocked from receiving inbound connections. In a server environment, the pop-ups are not useful because users are not logged on interactively; they are unnecessary and can add confusion for the administrator.</span></p><p><span>Configure this policy setting to 'No'; this will set the registry value to 1. Windows Firewall will not display a notification when a program is blocked from receiving inbound connections.</span></p><br />**Key Path**: SOFTWARE\Policies\Microsoft\WindowsFirewall\DomainProfile\DisableNotifications<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member |\= 1<br /><sub>(Registry)</sub> |Warning |
|Windows Firewall: Private: Allow unicast response<br /><sub>(AZ-WIN-00089)</sub> |**Description**: <p><span>This option is useful if you need to control whether this computer receives unicast responses to its outgoing multicast or broadcast messages.</span></p><p><span>We recommend setting this to 'Yes' for the Private and Domain profiles; this will set the registry value to 0.</span></p><br />**Key Path**: Software\Policies\Microsoft\WindowsFirewall\PrivateProfile\DisableUnicastResponsesToMulticastBroadcast<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 0<br /><sub>(Registry)</sub> |Warning |
|Windows Firewall: Private: Firewall state<br /><sub>(CCE-38239-0)</sub> |**Description**: Select On (recommended) to have Windows Firewall with Advanced Security use the settings for this profile to filter network traffic. If you select Off, Windows Firewall with Advanced Security will not use any of the firewall rules or connection security rules for this profile.<br />**Key Path**: SOFTWARE\Policies\Microsoft\WindowsFirewall\PrivateProfile\EnableFirewall<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 1<br /><sub>(Registry)</sub> |Critical |
|Windows Firewall: Private: Inbound connections<br /><sub>(AZ-WIN-202228)</sub> |<br />**Key Path**: SOFTWARE\Policies\Microsoft\WindowsFirewall\PrivateProfile\DefaultInboundAction<br />**OS**: WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member |\= 1<br /><sub>(Registry)</sub> |Critical |
|Windows Firewall: Private: Logging: Log dropped packets<br /><sub>(AZ-WIN-202231)</sub> |<br />**Key Path**: SOFTWARE\Policies\Microsoft\WindowsFirewall\PrivateProfile\Logging\LogDroppedPackets<br />**OS**: WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member |\= 1<br /><sub>(Registry)</sub> |Informational |
|Windows Firewall: Private: Logging: Log successful connections<br /><sub>(AZ-WIN-202232)</sub> |<br />**Key Path**: SOFTWARE\Policies\Microsoft\WindowsFirewall\PrivateProfile\Logging\LogSuccessfulConnections<br />**OS**: WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member |\= 1<br /><sub>(Registry)</sub> |Warning |
|Windows Firewall: Private: Logging: Name<br /><sub>(AZ-WIN-202229)</sub> |<br />**Key Path**: SOFTWARE\Policies\Microsoft\WindowsFirewall\PrivateProfile\Logging\LogFilePath<br />**OS**: WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member |\= %SystemRoot%\System32\logfiles\firewall\privatefw.log<br /><sub>(Registry)</sub> |Informational |
|Windows Firewall: Private: Logging: Size limit (KB)<br /><sub>(AZ-WIN-202230)</sub> |<br />**Key Path**: SOFTWARE\Policies\Microsoft\WindowsFirewall\PrivateProfile\Logging\LogFileSize<br />**OS**: WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member |\>\= 16384<br /><sub>(Registry)</sub> |Warning |
|Windows Firewall: Private: Outbound connections<br /><sub>(CCE-38332-3)</sub> |**Description**: This setting determines the behavior for outbound connections that do not match an outbound firewall rule. The default behavior is to allow connections unless there are firewall rules that block the connection. Important If you set Outbound connections to Block and then deploy the firewall policy by using a GPO, computers that receive the GPO settings cannot receive subsequent Group Policy updates unless you create and deploy an outbound rule that enables Group Policy to work. Predefined rules for Core Networking include outbound rules that enable Group Policy to work. Ensure that these outbound rules are active, and thoroughly test firewall profiles before deploying.<br />**Key Path**: SOFTWARE\Policies\Microsoft\WindowsFirewall\PrivateProfile\DefaultOutboundAction<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 0<br /><sub>(Registry)</sub> |Critical |
|Windows Firewall: Private: Settings: Apply local connection security rules<br /><sub>(CCE-36063-6)</sub> |**Description**: <p><span>This setting controls whether local administrators are allowed to create local connection rules that apply together with firewall rules configured by Group Policy. The recommended state for this setting is 'Yes'; this will set the registry value to 1.</span></p><br />**Key Path**: SOFTWARE\Policies\Microsoft\WindowsFirewall\PrivateProfile\AllowLocalIPsecPolicyMerge<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 0<br /><sub>(Registry)</sub> |Critical |
|Windows Firewall: Private: Settings: Apply local firewall rules<br /><sub>(CCE-37438-9)</sub> |**Description**: <p><span>This setting controls whether local administrators are allowed to create local firewall rules that apply together with firewall rules configured by Group Policy.</span></p><p><span>The recommended state for this setting is 'Yes'; this will set the registry value to 1.</span></p><br />**Key Path**: SOFTWARE\Policies\Microsoft\WindowsFirewall\PrivateProfile\AllowLocalPolicyMerge<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 0<br /><sub>(Registry)</sub> |Critical |
|Windows Firewall: Private: Settings: Display a notification<br /><sub>(CCE-37621-0)</sub> |**Description**: <p><span>By selecting this option, no notification is displayed to the user when a program is blocked from receiving inbound connections. In a server environment, the pop-ups are not useful because users are not logged on interactively; they are unnecessary and can add confusion for the administrator.</span></p><p><span>Configure this policy setting to 'No'; this will set the registry value to 1. Windows Firewall will not display a notification when a program is blocked from receiving inbound connections.</span></p><br />**Key Path**: SOFTWARE\Policies\Microsoft\WindowsFirewall\PrivateProfile\DisableNotifications<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 1<br /><sub>(Registry)</sub> |Warning |
|Windows Firewall: Public: Allow unicast response<br /><sub>(AZ-WIN-00090)</sub> |**Description**: <p><span>This option is useful if you need to control whether this computer receives unicast responses to its outgoing multicast or broadcast messages. This can be done by changing the state for this setting to 'No'; this will set the registry value to 1.</span></p><br />**Key Path**: Software\Policies\Microsoft\WindowsFirewall\PublicProfile\DisableUnicastResponsesToMulticastBroadcast<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 1<br /><sub>(Registry)</sub> |Warning |
|Windows Firewall: Public: Firewall state<br /><sub>(CCE-37862-0)</sub> |**Description**: Select On (recommended) to have Windows Firewall with Advanced Security use the settings for this profile to filter network traffic. If you select Off, Windows Firewall with Advanced Security will not use any of the firewall rules or connection security rules for this profile.<br />**Key Path**: SOFTWARE\Policies\Microsoft\WindowsFirewall\PublicProfile\EnableFirewall<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 1<br /><sub>(Registry)</sub> |Critical |
|Windows Firewall: Public: Inbound connections<br /><sub>(AZ-WIN-202234)</sub> |<br />**Key Path**: SOFTWARE\Policies\Microsoft\WindowsFirewall\PublicProfile\DefaultInboundAction<br />**OS**: WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member |\= 1<br /><sub>(Registry)</sub> |Critical |
|Windows Firewall: Public: Logging: Log dropped packets<br /><sub>(AZ-WIN-202237)</sub> |<br />**Key Path**: SOFTWARE\Policies\Microsoft\WindowsFirewall\PublicProfile\Logging\LogDroppedPackets<br />**OS**: WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member |\= 1<br /><sub>(Registry)</sub> |Informational |
|Windows Firewall: Public: Logging: Log successful connections<br /><sub>(AZ-WIN-202233)</sub> |<br />**Key Path**: SOFTWARE\Policies\Microsoft\WindowsFirewall\PublicProfile\Logging\LogSuccessfulConnections<br />**OS**: WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 1<br /><sub>(Registry)</sub> |Warning |
|Windows Firewall: Public: Logging: Name<br /><sub>(AZ-WIN-202235)</sub> |<br />**Key Path**: SOFTWARE\Policies\Microsoft\WindowsFirewall\PublicProfile\Logging\LogFilePath<br />**OS**: WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member |\= %SystemRoot%\System32\logfiles\firewall\publicfw.log<br /><sub>(Registry)</sub> |Informational |
|Windows Firewall: Public: Logging: Size limit (KB)<br /><sub>(AZ-WIN-202236)</sub> |<br />**Key Path**: SOFTWARE\Policies\Microsoft\WindowsFirewall\PublicProfile\Logging\LogFileSize<br />**OS**: WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\>\= 16384<br /><sub>(Registry)</sub> |Informational |
|Windows Firewall: Public: Outbound connections<br /><sub>(CCE-37434-8)</sub> |**Description**: This setting determines the behavior for outbound connections that do not match an outbound firewall rule. The default behavior is to allow connections unless there are firewall rules that block the connection. Important If you set Outbound connections to Block and then deploy the firewall policy by using a GPO, computers that receive the GPO settings cannot receive subsequent Group Policy updates unless you create and deploy an outbound rule that enables Group Policy to work. Predefined rules for Core Networking include outbound rules that enable Group Policy to work. Ensure that these outbound rules are active, and thoroughly test firewall profiles before deploying.<br />**Key Path**: SOFTWARE\Policies\Microsoft\WindowsFirewall\PublicProfile\DefaultOutboundAction<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 0<br /><sub>(Registry)</sub> |Critical |
|Windows Firewall: Public: Settings: Apply local connection security rules<br /><sub>(CCE-36268-1)</sub> |**Description**: <p><span>This setting controls whether local administrators are allowed to create local connection rules that apply together with firewall rules configured by Group Policy. The recommended state for this setting is 'Yes'; this will set the registry value to 1.</span></p><br />**Key Path**: SOFTWARE\Policies\Microsoft\WindowsFirewall\PublicProfile\AllowLocalIPsecPolicyMerge<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 0<br /><sub>(Registry)</sub> |Critical |
|Windows Firewall: Public: Settings: Apply local firewall rules<br /><sub>(CCE-37861-2)</sub> |**Description**: <p><span>This setting controls whether local administrators are allowed to create local firewall rules that apply together with firewall rules configured by Group Policy.</span></p><p><span>The recommended state for this setting is 'Yes'; this will set the registry value to 1.</span></p><br />**Key Path**: SOFTWARE\Policies\Microsoft\WindowsFirewall\PublicProfile\AllowLocalPolicyMerge<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 0<br /><sub>(Registry)</sub> |Critical |
|Windows Firewall: Public: Settings: Display a notification<br /><sub>(CCE-38043-6)</sub> |**Description**: <p><span>By selecting this option, no notification is displayed to the user when a program is blocked from receiving inbound connections. In a server environment, the pop-ups are not useful because users are not logged on interactively; they are unnecessary and can add confusion for the administrator.</span></p><p><span>Configure this policy setting to 'No'; this will set the registry value to 1. Windows Firewall will not display a notification when a program is blocked from receiving inbound connections.</span></p><br />**Key Path**: SOFTWARE\Policies\Microsoft\WindowsFirewall\PublicProfile\DisableNotifications<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 1<br /><sub>(Registry)</sub> |Warning |
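One way to spot-check the per-profile firewall state and logging settings against the table above is the built-in `netsh` utility. The sketch below shells out to `netsh advfirewall show allprofiles` and filters the output; the wrapper is my own, and the exact field names in the output (for example `LogDroppedConnections`, `MaxFileSize`) can vary by Windows version and locale, so treat the token list as an assumption to adjust.

```python
import subprocess

# "netsh advfirewall show allprofiles" prints the state, default inbound/
# outbound policy, and logging settings for the Domain, Private, and Public
# profiles.
output = subprocess.run(
    ["netsh", "advfirewall", "show", "allprofiles"],
    capture_output=True, text=True, check=True,
).stdout

for line in output.splitlines():
    # Keep the lines that correspond to the checks above: firewall state,
    # default policy, and log file name/size settings.
    if any(token in line for token in ("State", "Policy", "LogAllowed",
                                       "LogDropped", "FileName", "MaxFileSize")):
        print(line.strip())
```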
## System Audit Policies - Account Logon

|Name<br /><sub>(ID)</sub> |Details |Expected value<br /><sub>(Type)</sub> |Severity |
|||||
|Audit Credential Validation<br /><sub>(CCE-37741-6)</sub> |**Description**: <p><span>This subcategory reports the results of validation tests on credentials submitted for a user account logon request. These events occur on the computer that is authoritative for the credentials. For domain accounts, the domain controller is authoritative, whereas for local accounts, the local computer is authoritative. In domain environments, most of the Account Logon events occur in the Security log of the domain controllers that are authoritative for the domain accounts. However, these events can occur on other computers in the organization when local accounts are used to log on. Events for this subcategory include: - 4774: An account was mapped for logon. - 4775: An account could not be mapped for logon. - 4776: The domain controller attempted to validate the credentials for an account. - 4777: The domain controller failed to validate the credentials for an account. The recommended state for this setting is: 'Success and Failure'.</span></p><br />**Key Path**: {0CCE923F-69AE-11D9-BED3-505054503030}<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Member, Workgroup Member |\= Success and Failure<br /><sub>(Audit)</sub> |Critical |
|Audit Kerberos Authentication Service<br /><sub>(AZ-WIN-00004)</sub> |<br />**Key Path**: {0CCE9242-69AE-11D9-BED3-505054503030}<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller |\>\= Success and Failure<br /><sub>(Audit)</sub> |Critical |
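Unlike the registry checks, the audit-policy rows are keyed by subcategory GUID rather than a registry path; the effective setting on a machine can be read with the built-in `auditpol` tool. A minimal sketch follows, assuming `auditpol /get /subcategory:<GUID> /r` is run from an elevated prompt; the helper name is mine, and the GUID form is used because subcategory display names are localized.

```python
import subprocess

def audit_setting(subcategory_guid):
    """Return auditpol's CSV report line for one audit subcategory."""
    result = subprocess.run(
        ["auditpol", "/get", f"/subcategory:{subcategory_guid}", "/r"],
        capture_output=True, text=True, check=True,
    )
    # With /r the output is CSV; the last non-empty line carries the machine
    # name, subcategory, GUID, and inclusion setting ("Success and Failure").
    return [l for l in result.stdout.splitlines() if l.strip()][-1]

# Audit Credential Validation, expected "Success and Failure" per the table.
print(audit_setting("{0CCE923F-69AE-11D9-BED3-505054503030}"))
```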
## System Audit Policies - Account Management

|Name<br /><sub>(ID)</sub> |Details |Expected value<br /><sub>(Type)</sub> |Severity |
|||||
|Audit Distribution Group Management<br /><sub>(CCE-36265-7)</sub> |<br />**Key Path**: {0CCE9238-69AE-11D9-BED3-505054503030}<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller |\>\= Success<br /><sub>(Audit)</sub> |Critical |
|Audit Other Account Management Events<br /><sub>(CCE-37855-4)</sub> |**Description**: This subcategory reports other account management events. Events for this subcategory include: — 4782: The password hash of an account was accessed. — 4793: The Password Policy Checking API was called. Refer to the Microsoft Knowledgebase article "Description of security events in Windows Vista and in Windows Server 2008" for the most recent information about this setting: https://support.microsoft.com/topic/ms16-014-description-of-the-security-update-for-windows-vista-windows-server-2008-windows-7-windows-server-2008-r2-windows-server-2012-windows-8-1-and-windows-server-2012-r2-february-9-2016-1ff344d3-cd1c-cdbd-15b4-9344c7a7e6bd.<br />**Key Path**: {0CCE923A-69AE-11D9-BED3-505054503030}<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller |\>\= Success<br /><sub>(Audit)</sub> |Critical |
|Audit Security Group Management<br /><sub>(CCE-38034-5)</sub> |**Description**: This subcategory reports each event of security group management, such as when a security group is created, changed, or deleted or when a member is added to or removed from a security group. If you enable this Audit policy setting, administrators can track events to detect malicious, accidental, and authorized creation of security group accounts. Events for this subcategory include: - 4727: A security-enabled global group was created. - 4728: A member was added to a security-enabled global group. - 4729: A member was removed from a security-enabled global group. - 4730: A security-enabled global group was deleted. - 4731: A security-enabled local group was created. - 4732: A member was added to a security-enabled local group. - 4733: A member was removed from a security-enabled local group. - 4734: A security-enabled local group was deleted. - 4735: A security-enabled local group was changed. - 4737: A security-enabled global group was changed. - 4754: A security-enabled universal group was created. - 4755: A security-enabled universal group was changed. - 4756: A member was added to a security-enabled universal group. - 4757: A member was removed from a security-enabled universal group. - 4758: A security-enabled universal group was deleted. - 4764: A group's type was changed. The recommended state for this setting is: `Success and Failure`.<br />**Key Path**: {0CCE9237-69AE-11D9-BED3-505054503030}<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\>\= Success<br /><sub>(Audit)</sub> |Critical |
|Audit User Account Management<br /><sub>(CCE-37856-2)</sub> |**Description**: This subcategory reports each event of user account management, such as when a user account is created, changed, or deleted; a user account is renamed, disabled, or enabled; or a password is set or changed. If you enable this Audit policy setting, administrators can track events to detect malicious, accidental, and authorized creation of user accounts. Events for this subcategory include: - 4720: A user account was created. - 4722: A user account was enabled. - 4723: An attempt was made to change an account's password. - 4724: An attempt was made to reset an account's password. - 4725: A user account was disabled. - 4726: A user account was deleted. - 4738: A user account was changed. - 4740: A user account was locked out. - 4765: SID History was added to an account. - 4766: An attempt to add SID History to an account failed. - 4767: A user account was unlocked. - 4780: The ACL was set on accounts which are members of administrators groups. - 4781: The name of an account was changed: - 4794: An attempt was made to set the Directory Services Restore Mode. - 5376: Credential Manager credentials were backed up. - 5377: Credential Manager credentials were restored from a backup. The recommended state for this setting is: `Success and Failure`.<br />**Key Path**: {0CCE9235-69AE-11D9-BED3-505054503030}<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= Success and Failure<br /><sub>(Audit)</sub> |Critical |
## System Audit Policies - Detailed Tracking

|Name<br /><sub>(ID)</sub> |Details |Expected value<br /><sub>(Type)</sub> |Severity |
|||||
|Audit PNP Activity<br /><sub>(AZ-WIN-00182)</sub> |**Description**: This policy setting allows you to audit when Plug and Play detects an external device. The recommended state for this setting is: `Success`. **Note:** A Windows 10, Server 2016 or higher OS is required to access and set this value in Group Policy.<br />**Key Path**: {0CCE9248-69AE-11D9-BED3-505054503030}<br />**OS**: WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\>\= Success<br /><sub>(Audit)</sub> |Critical |
|Audit Process Creation<br /><sub>(CCE-36059-4)</sub> |**Description**: This subcategory reports the creation of a process and the name of the program or user that created it. Events for this subcategory include: - 4688: A new process has been created. - 4696: A primary token was assigned to process. Refer to Microsoft Knowledge Base article 947226: [Description of security events in Windows Vista and in Windows Server 2008](https://support.microsoft.com/en-us/kb/947226) for the most recent information about this setting. The recommended state for this setting is: `Success`.<br />**Key Path**: {0CCE922B-69AE-11D9-BED3-505054503030}<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\>\= Success<br /><sub>(Audit)</sub> |Critical |

## System Audit Policies - DS Access

|Name<br /><sub>(ID)</sub> |Details |Expected value<br /><sub>(Type)</sub> |Severity |
|||||
|Audit Directory Service Access<br /><sub>(CCE-37433-0)</sub> |<br />**Key Path**: {0CCE923B-69AE-11D9-BED3-505054503030}<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller |\>\= Failure<br /><sub>(Audit)</sub> |Critical |
|Audit Directory Service Changes<br /><sub>(CCE-37616-0)</sub> |<br />**Key Path**: {0CCE923C-69AE-11D9-BED3-505054503030}<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller |\>\= Success<br /><sub>(Audit)</sub> |Critical |
|Audit Directory Service Replication<br /><sub>(AZ-WIN-00093)</sub> |<br />**Key Path**: {0CCE923D-69AE-11D9-BED3-505054503030}<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member |\>\= No Auditing<br /><sub>(Audit)</sub> |Critical |

## System Audit Policies - Logon-Logoff

|Name<br /><sub>(ID)</sub> |Details |Expected value<br /><sub>(Type)</sub> |Severity |
|||||
|Audit Account Lockout<br /><sub>(CCE-37133-6)</sub> |**Description**: This subcategory reports when a user's account is locked out as a result of too many failed logon attempts. Events for this subcategory include: — 4625: An account failed to log on. Refer to the Microsoft Knowledgebase article 'Description of security events in Windows Vista and in Windows Server 2008' for the most recent information about this setting: https://support.microsoft.com/topic/ms16-014-description-of-the-security-update-for-windows-vista-windows-server-2008-windows-7-windows-server-2008-r2-windows-server-2012-windows-8-1-and-windows-server-2012-r2-february-9-2016-1ff344d3-cd1c-cdbd-15b4-9344c7a7e6bd.<br />**Key Path**: {0CCE9217-69AE-11D9-BED3-505054503030}<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\>\= Failure<br /><sub>(Audit)</sub> |Critical |
|Audit Group Membership<br /><sub>(AZ-WIN-00026)</sub> |**Description**: Audit Group Membership enables you to audit group memberships when they are enumerated on the client computer. This policy allows you to audit the group membership information in the user's logon token. Events in this subcategory are generated on the computer on which a logon session is created. For an interactive logon, the security audit event is generated on the computer that the user logged on to. For a network logon, such as accessing a shared folder on the network, the security audit event is generated on the computer hosting the resource. You must also enable the Audit Logon subcategory. Multiple events are generated if the group membership information cannot fit in a single security audit event. The events that are audited include the following: - 4627(S): Group membership information.<br />**Key Path**: {0CCE9249-69AE-11D9-BED3-505054503030}<br />**OS**: WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\>\= Success<br /><sub>(Audit)</sub> |Critical |
|Audit Logoff<br /><sub>(CCE-38237-4)</sub> |**Description**: <p><span>This subcategory reports when a user logs off from the system. These events occur on the accessed computer. For interactive logons, the generation of these events occurs on the computer that is logged on to. If a network logon takes place to access a share, these events generate on the computer that hosts the accessed resource. If you configure this setting to No auditing, it is difficult or impossible to determine which user has accessed or attempted to access organization computers. Events for this subcategory include: - 4634: An account was logged off. - 4647: User initiated logoff. The recommended state for this setting is: 'Success'.</span></p><br />**Key Path**: {0CCE9216-69AE-11D9-BED3-505054503030}<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\>\= Success<br /><sub>(Audit)</sub> |Critical |
|Audit Logon<br /><sub>(CCE-38036-0)</sub> |**Description**: <p><span>This subcategory reports when a user attempts to log on to the system. These events occur on the accessed computer. For interactive logons, the generation of these events occurs on the computer that is logged on to. If a network logon takes place to access a share, these events generate on the computer that hosts the accessed resource. If you configure this setting to No auditing, it is difficult or impossible to determine which user has accessed or attempted to access organization computers. Events for this subcategory include: - 4624: An account was successfully logged on. - 4625: An account failed to log on. - 4648: A logon was attempted using explicit credentials. - 4675: SIDs were filtered. The recommended state for this setting is: 'Success and Failure'.</span></p><br />**Key Path**: {0CCE9215-69AE-11D9-BED3-505054503030}<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= Success and Failure<br /><sub>(Audit)</sub> |Critical |

## System Audit Policies - Object Access

|Name<br /><sub>(ID)</sub> |Details |Expected value<br /><sub>(Type)</sub> |Severity |
|||||
|Audit Detailed File Share<br /><sub>(AZ-WIN-00100)</sub> |<br />**Key Path**: {0CCE9244-69AE-11D9-BED3-505054503030}<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\>\= Failure<br /><sub>(Audit)</sub> |Critical |
|Audit File Share<br /><sub>(AZ-WIN-00102)</sub> |<br />**Key Path**: {0CCE9224-69AE-11D9-BED3-505054503030}<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= Success and Failure<br /><sub>(Audit)</sub> |Critical |
|Audit Other Object Access Events<br /><sub>(AZ-WIN-00113)</sub> |**Description**: This subcategory reports other object access-related events such as Task Scheduler jobs and COM+ objects. Events for this subcategory include: — 4671: An application attempted to access a blocked ordinal through the TBS. — 4691: Indirect access to an object was requested. — 4698: A scheduled task was created. — 4699: A scheduled task was deleted. — 4700: A scheduled task was enabled. — 4701: A scheduled task was disabled. — 4702: A scheduled task was updated. — 5888: An object in the COM+ Catalog was modified. — 5889: An object was deleted from the COM+ Catalog. — 5890: An object was added to the COM+ Catalog. Refer to the Microsoft Knowledgebase article "Description of security events in Windows Vista and in Windows Server 2008" for the most recent information about this setting: https://support.microsoft.com/topic/ms16-014-description-of-the-security-update-for-windows-vista-windows-server-2008-windows-7-windows-server-2008-r2-windows-server-2012-windows-8-1-and-windows-server-2012-r2-february-9-2016-1ff344d3-cd1c-cdbd-15b4-9344c7a7e6bd.<br />**Key Path**: {0CCE9227-69AE-11D9-BED3-505054503030}<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= Success and Failure<br /><sub>(Audit)</sub> |Critical |
|Audit Removable Storage<br /><sub>(CCE-37617-8)</sub> |**Description**: This policy setting allows you to audit user attempts to access file system objects on a removable storage device. A security audit event is generated only for all objects for all types of access requested. If you configure this policy setting, an audit event is generated each time an account accesses a file system object on a removable storage. Success audits record successful attempts and Failure audits record unsuccessful attempts. If you do not configure this policy setting, no audit event is generated when an account accesses a file system object on a removable storage. The recommended state for this setting is: `Success and Failure`. **Note:** A Windows 8, Server 2012 (non-R2) or higher OS is required to access and set this value in Group Policy.<br />**Key Path**: {0CCE9245-69AE-11D9-BED3-505054503030}<br />**OS**: WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= Success and Failure<br /><sub>(Audit)</sub> |Critical |
## System Audit Policies - Policy Change

|Name<br /><sub>(ID)</sub> |Details |Expected value<br /><sub>(Type)</sub> |Severity |
|||||
|Audit Authentication Policy Change<br /><sub>(CCE-38327-3)</sub> |**Description**: This subcategory reports changes in authentication policy. Events for this subcategory include: — 4706: A new trust was created to a domain. — 4707: A trust to a domain was removed. — 4713: Kerberos policy was changed. — 4716: Trusted domain information was modified. — 4717: System security access was granted to an account. — 4718: System security access was removed from an account. — 4739: Domain Policy was changed. — 4864: A namespace collision was detected. — 4865: A trusted forest information entry was added. — 4866: A trusted forest information entry was removed. — 4867: A trusted forest information entry was modified. Refer to the Microsoft Knowledgebase article "Description of security events in Windows Vista and in Windows Server 2008" for the most recent information about this setting: https://support.microsoft.com/topic/ms16-014-description-of-the-security-update-for-windows-vista-windows-server-2008-windows-7-windows-server-2008-r2-windows-server-2012-windows-8-1-and-windows-server-2012-r2-february-9-2016-1ff344d3-cd1c-cdbd-15b4-9344c7a7e6bd.<br />**Key Path**: {0CCE9230-69AE-11D9-BED3-505054503030}<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\>\= Success<br /><sub>(Audit)</sub> |Critical |
|Audit Authorization Policy Change<br /><sub>(CCE-36320-0)</sub> |<br />**Key Path**: {0CCE9231-69AE-11D9-BED3-505054503030}<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\>\= Success<br /><sub>(Audit)</sub> |Critical |
|Audit MPSSVC Rule-Level Policy Change<br /><sub>(AZ-WIN-00111)</sub> |**Description**: This subcategory reports changes in policy rules used by the Microsoft Protection Service (MPSSVC.exe). This service is used by Windows Firewall and by Microsoft OneCare. Events for this subcategory include: — 4944: The following policy was active when the Windows Firewall started. — 4945: A rule was listed when the Windows Firewall started. — 4946: A change has been made to Windows Firewall exception list. A rule was added. — 4947: A change has been made to Windows Firewall exception list. A rule was modified. — 4948: A change has been made to Windows Firewall exception list. A rule was deleted. — 4949: Windows Firewall settings were restored to the default values. — 4950: A Windows Firewall setting has changed. — 4951: A rule has been ignored because its major version number was not recognized by Windows Firewall. — 4952: Parts of a rule have been ignored because its minor version number was not recognized by Windows Firewall. The other parts of the rule will be enforced. — 4953: A rule has been ignored by Windows Firewall because it could not parse the rule. — 4954: Windows Firewall Group Policy settings have changed. The new settings have been applied. — 4956: Windows Firewall has changed the active profile. — 4957: Windows Firewall did not apply the following rule: — 4958: Windows Firewall did not apply the following rule because the rule referred to items not configured on this computer: Refer to the Microsoft Knowledgebase article "Description of security events in Windows Vista and in Windows Server 2008" for the most recent information about this setting: https://support.microsoft.com/topic/ms16-014-description-of-the-security-update-for-windows-vista-windows-server-2008-windows-7-windows-server-2008-r2-windows-server-2012-windows-8-1-and-windows-server-2012-r2-february-9-2016-1ff344d3-cd1c-cdbd-15b4-9344c7a7e6bd.<br />**Key Path**: {0CCE9232-69AE-11D9-BED3-505054503030}<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= Success and Failure<br /><sub>(Audit)</sub> |Critical |
|Audit Other Policy Change Events<br /><sub>(AZ-WIN-00114)</sub> |<br />**Key Path**: {0CCE9234-69AE-11D9-BED3-505054503030}<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\>\= Failure<br /><sub>(Audit)</sub> |Critical |
|Audit Policy Change<br /><sub>(CCE-38028-7)</sub> |**Description**: This subcategory reports changes in audit policy including SACL changes. Events for this subcategory include: — 4715: The audit policy (SACL) on an object was changed. — 4719: System audit policy was changed. — 4902: The Per-user audit policy table was created. — 4904: An attempt was made to register a security event source. — 4905: An attempt was made to unregister a security event source. — 4906: The CrashOnAuditFail value has changed. — 4907: Auditing settings on object were changed. — 4908: Special Groups Logon table modified. — 4912: Per User Audit Policy was changed. Refer to the Microsoft Knowledgebase article "Description of security events in Windows Vista and in Windows Server 2008" for the most recent information about this setting: https://support.microsoft.com/topic/ms16-014-description-of-the-security-update-for-windows-vista-windows-server-2008-windows-7-windows-server-2008-r2-windows-server-2012-windows-8-1-and-windows-server-2012-r2-february-9-2016-1ff344d3-cd1c-cdbd-15b4-9344c7a7e6bd.<br />**Key Path**: {0CCE922F-69AE-11D9-BED3-505054503030}<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\>\= Success<br /><sub>(Audit)</sub> |Critical |

## System Audit Policies - System

|Name<br /><sub>(ID)</sub> |Details |Expected value<br /><sub>(Type)</sub> |Severity |
|||||
|Audit IPsec Driver<br /><sub>(CCE-37853-9)</sub> |<br />**Key Path**: {0CCE9213-69AE-11D9-BED3-505054503030}<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\>\= Success and Failure<br /><sub>(Audit)</sub> |Critical |
|Audit Other System Events<br /><sub>(CCE-38030-3)</sub> |<br />**Key Path**: {0CCE9214-69AE-11D9-BED3-505054503030}<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= Success and Failure<br /><sub>(Audit)</sub> |Critical |
|Audit Security State Change<br /><sub>(CCE-38114-5)</sub> |**Description**: This subcategory reports changes in security state of the system, such as when the security subsystem starts and stops. Events for this subcategory include: — 4608: Windows is starting up. — 4609: Windows is shutting down. — 4616: The system time was changed. — 4621: Administrator recovered system from CrashOnAuditFail. Users who are not administrators will now be allowed to log on. Some auditable activity might not have been recorded. Refer to the Microsoft Knowledgebase article 'Description of security events in Windows Vista and in Windows Server 2008' for the most recent information about this setting: https://support.microsoft.com/topic/ms16-014-description-of-the-security-update-for-windows-vista-windows-server-2008-windows-7-windows-server-2008-r2-windows-server-2012-windows-8-1-and-windows-server-2012-r2-february-9-2016-1ff344d3-cd1c-cdbd-15b4-9344c7a7e6bd.<br />**Key Path**: {0CCE9210-69AE-11D9-BED3-505054503030}<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\>\= Success<br /><sub>(Audit)</sub> |Critical |
|Audit Security System Extension<br /><sub>(CCE-36144-4)</sub> |**Description**: This subcategory reports the loading of extension code such as authentication packages by the security subsystem. Events for this subcategory include: — 4610: An authentication package has been loaded by the Local Security Authority. — 4611: A trusted logon process has been registered with the Local Security Authority. — 4614: A notification package has been loaded by the Security Account Manager. — 4622: A security package has been loaded by the Local Security Authority. — 4697: A service was installed in the system.
Refer to the Microsoft Knowledgebase article "Description of security events in Windows Vista and in Windows Server 2008" for the most recent information about this setting: https://support.microsoft.com/topic/ms16-014-description-of-the-security-update-for-windows-vista-windows-server-2008-windows-7-windows-server-2008-r2-windows-server-2012-windows-8-1-and-windows-server-2012-r2-february-9-2016-1ff344d3-cd1c-cdbd-15b4-9344c7a7e6bd.<br />**Key Path**: {0CCE9211-69AE-11D9-BED3-505054503030}<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\>\= Success<br /><sub>(Audit)</sub> |Critical | |Audit System Integrity<br /><sub>(CCE-37132-8)</sub> |**Description**: This subcategory reports on violations of integrity of the security subsystem. Events for this subcategory include: — 4612: Internal resources allocated for the queuing of audit messages have been exhausted, leading to the loss of some audits. — 4615: Invalid use of LPC port. — 4618: A monitored security event pattern has occurred. — 4816 : RPC detected an integrity violation while decrypting an incoming message. — 5038: Code integrity determined that the image hash of a file is not valid. The file could be corrupt due to unauthorized modification or the invalid hash could indicate a potential disk device error. — 5056: A cryptographic self-test was performed. — 5057: A cryptographic primitive operation failed. — 5060: Verification operation failed. — 5061: Cryptographic operation. — 5062: A kernel-mode cryptographic self-test was performed. Refer to the Microsoft Knowledgebase article 'Description of security events in Windows Vista and in Windows Server 2008' for the most recent information about this setting: https://support.microsoft.com/topic/ms16-014-description-of-the-security-update-for-windows-vista-windows-server-2008-windows-7-windows-server-2008-r2-windows-server-2012-windows-8-1-and-windows-server-2012-r2-february-9-2016-1ff344d3-cd1c-cdbd-15b4-9344c7a7e6bd.<br />**Key Path**: {0CCE9212-69AE-11D9-BED3-505054503030}<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= Success and Failure<br /><sub>(Audit)</sub> |Critical | For more information, see [Azure Policy guest configuration](../../machine-confi |Restore files and directories<br /><sub>(CCE-37613-7)</sub> |**Description**: This policy setting determines which users can bypass file, directory, registry, and other persistent object permissions when restoring backed up files and directories on computers that run Windows Vista in your environment. This user right also determines which users can set valid security principals as object owners; it is similar to the Backup files and directories user right. The recommended state for this setting is: `Administrators`.<br />**Key Path**: [Privilege Rights]SeRestorePrivilege<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Member, Workgroup Member | Administrators, Backup Operators<br /><sub>(Policy)</sub> |Warning | |Shut down the system<br /><sub>(CCE-38328-1)</sub> |**Description**: This policy setting determines which users who are logged on locally to the computers in your environment can shut down the operating system with the Shut Down command. Misuse of this user right can result in a denial of service condition. 
The recommended state for this setting is: `Administrators`.<br />**Key Path**: [Privilege Rights]SeShutdownPrivilege<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= Administrators<br /><sub>(Policy)</sub> |Warning | |Take ownership of files or other objects<br /><sub>(CCE-38325-7)</sub> |**Description**: This policy setting allows users to take ownership of files, folders, registry keys, processes, or threads. This user right bypasses any permissions that are in place to protect objects to give ownership to the specified user. The recommended state for this setting is: `Administrators`.<br />**Key Path**: [Privilege Rights]SeTakeOwnershipPrivilege<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= Administrators<br /><sub>(Policy)</sub> |Critical |+|The Debug programs user right must only be assigned to the Administrators group.<br /><sub>(AZ-WIN-73755)</sub> |<br />**Key Path**: [Privilege Rights]SeDebugPrivilege<br />**OS**: WS2016, WS2019<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= Administrators<br /><sub>(Policy)</sub> |Critical | +|The Impersonate a client after authentication user right must only be assigned to Administrators, Service, Local Service, and Network Service.<br /><sub>(AZ-WIN-73785)</sub> |<br />**Key Path**: [Privilege Rights]SeImpersonatePrivilege<br />**OS**: WS2016, WS2019<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member | Administrators,Service,Local Service,Network Service<br /><sub>(Policy)</sub> |Important | ## Windows Components |Name<br /><sub>(ID)</sub> |Details |Expected value<br /><sub>(Type)</sub> |Severity | ||||| |Allow Basic authentication<br /><sub>(CCE-36254-1)</sub> |**Description**: This policy setting allows you to manage whether the Windows Remote Management (WinRM) service accepts Basic authentication from a remote client. The recommended state for this setting is: `Disabled`.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\WinRM\Client\AllowBasic<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 0<br /><sub>(Registry)</sub> |Critical |-|Allow Cortana<br /><sub>(AZ-WIN-00131)</sub> |**Description**: This policy setting specifies whether Cortana is allowed on the device.   If you enable or don't configure this setting, Cortana will be allowed on the device. If you disable this setting, Cortana will be turned off.   When Cortana is off, users will still be able to use search to find things on the device and on the Internet.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\Windows Search\AllowCortana<br />**OS**: WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 0<br /><sub>(Registry)</sub> |Warning | -|Allow Cortana above lock screen<br /><sub>(AZ-WIN-00130)</sub> |**Description**: This policy setting determines whether or not the user can interact with Cortana using speech while the system is locked. If you enable or don't configure this setting, the user can interact with Cortana using speech while the system is locked. 
If you disable this setting, the system will need to be unlocked for the user to interact with Cortana using speech.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\Windows Search\AllowCortanaAboveLock<br />**OS**: WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 0<br /><sub>(Registry)</sub> |Warning | |Allow indexing of encrypted files<br /><sub>(CCE-38277-0)</sub> |**Description**: This policy setting controls whether encrypted items are allowed to be indexed. When this setting is changed, the index is rebuilt completely. Full volume encryption (such as BitLocker Drive Encryption or a non-Microsoft solution) must be used for the location of the index to maintain security for encrypted files. The recommended state for this setting is: `Disabled`.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\Windows Search\AllowIndexingEncryptedStoresOrItems<br />**OS**: WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 0<br /><sub>(Registry)</sub> |Warning | |Allow Microsoft accounts to be optional<br /><sub>(CCE-38354-7)</sub> |**Description**: This policy setting lets you control whether Microsoft accounts are optional for Windows Store apps that require an account to sign in. This policy only affects Windows Store apps that support it. If you enable this policy setting, Windows Store apps that typically require a Microsoft account to sign in will allow users to sign in with an enterprise account instead. If you disable or do not configure this policy setting, users will need to sign in with a Microsoft account.<br />**Key Path**: SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System\MSAOptional<br />**OS**: WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 1<br /><sub>(Registry)</sub> |Warning |-|Allow search and Cortana to use location<br /><sub>(AZ-WIN-00133)</sub> |**Description**: This policy setting specifies whether search and Cortana can provide location aware search and Cortana results.   If this is enabled, search and Cortana can access location information.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\Windows Search\AllowSearchToUseLocation<br />**OS**: WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 0<br /><sub>(Registry)</sub> |Warning | |Allow Telemetry<br /><sub>(AZ-WIN-00169)</sub> |**Description**: This policy setting determines the amount of diagnostic and usage data reported to Microsoft. A value of 0 will send minimal data to Microsoft. This data includes Malicious Software Removal Tool (MSRT) & Windows Defender data, if enabled, and telemetry client settings. Setting a value of 0 applies to enterprise, EDU, IoT and server devices only. Setting a value of 0 for other devices is equivalent to choosing a value of 1. A value of 1 sends only a basic amount of diagnostic and usage data. Note that setting values of 0 or 1 will degrade certain experiences on the device. A value of 2 sends enhanced diagnostic and usage data. A value of 3 sends the same data as a value of 2, plus additional diagnostics data, including the files and content that may have caused the problem. Windows 10 telemetry settings apply to the Windows operating system and some first party apps. This setting does not apply to third party apps running on Windows 10. The recommended state for this setting is: `Enabled: 0 - Security [Enterprise Only]`. 
**Note:** If the "Allow Telemetry" setting is configured to "0 - Security [Enterprise Only]", then the options in Windows Update to defer upgrades and updates will have no effect.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\DataCollection\AllowTelemetry<br />**OS**: WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\>\= 0<br /><sub>(Registry)</sub> |Warning | |Allow unencrypted traffic<br /><sub>(CCE-38223-4)</sub> |**Description**: This policy setting allows you to manage whether the Windows Remote Management (WinRM) service sends and receives unencrypted messages over the network. The recommended state for this setting is: `Disabled`.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\WinRM\Client\AllowUnencryptedTraffic<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 0<br /><sub>(Registry)</sub> |Critical | |Allow user control over installs<br /><sub>(CCE-36400-0)</sub> |**Description**: Permits users to change installation options that typically are available only to system administrators. The security features of Windows Installer prevent users from changing installation options typically reserved for system administrators, such as specifying the directory to which files are installed. If Windows Installer detects that an installation package has permitted the user to change a protected option, it stops the installation and displays a message. These security features operate only when the installation program is running in a privileged security context in which it has access to directories denied to the user. The recommended state for this setting is: `Disabled`.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\Installer\EnableUserControl<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 0<br /><sub>(Registry)</sub> |Critical | For more information, see [Azure Policy guest configuration](../../machine-confi |Always prompt for password upon connection<br /><sub>(CCE-37929-7)</sub> |**Description**: This policy setting specifies whether Terminal Services always prompts the client computer for a password upon connection. You can use this policy setting to enforce a password prompt for users who log on to Terminal Services, even if they already provided the password in the Remote Desktop Connection client. By default, Terminal Services allows users to automatically log on if they enter a password in the Remote Desktop Connection client. Note If you do not configure this policy setting, the local computer administrator can use the Terminal Services Configuration tool to either allow or prevent passwords from being automatically sent.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows NT\Terminal Services\fPromptForPassword<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 1<br /><sub>(Registry)</sub> |Critical | |Application: Control Event Log behavior when the log file reaches its maximum size<br /><sub>(CCE-37775-4)</sub> |**Description**: This policy setting controls Event Log behavior when the log file reaches its maximum size. If you enable this policy setting and a log file reaches its maximum size, new events are not written to the log and are lost. 
If you disable or do not configure this policy setting and a log file reaches its maximum size, new events overwrite old events. Note: Old events may or may not be retained according to the "Backup log automatically when full"  policy setting.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\EventLog\Application\Retention<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 0<br /><sub>(Registry)</sub> |Critical | |Application: Specify the maximum log file size (KB)<br /><sub>(CCE-37948-7)</sub> |**Description**: This policy setting specifies the maximum size of the log file in kilobytes. If you enable this policy setting, you can configure the maximum log file size to be between 1 megabyte (1024 kilobytes) and 2 terabytes (2147483647 kilobytes) in kilobyte increments. If you disable or do not configure this policy setting, the maximum size of the log file will be set to the locally configured value. This value can be changed by the local administrator using the Log Properties dialog and it defaults to 20 megabytes.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\EventLog\Application\MaxSize<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\>\= 32768<br /><sub>(Registry)</sub> |Critical |+|Block all consumer Microsoft account user authentication<br /><sub>(AZ-WIN-20198)</sub> |<br />**Key Path**: SOFTWARE\Policies\Microsoft\MicrosoftAccount\DisableUserAuth<br />**OS**: WS2016, WS2019<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 1<br /><sub>(Registry)</sub> |Critical | |Configure local setting override for reporting to Microsoft MAPS<br /><sub>(AZ-WIN-00173)</sub> |**Description**: This policy setting configures a local override for the configuration to join Microsoft MAPS. This setting can only be set by Group Policy. If you enable this setting the local preference setting will take priority over Group Policy. If you disable or do not configure this setting Group Policy will take priority over the local preference setting.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows Defender\SpyNet\LocalSettingOverrideSpynetReporting<br />**OS**: WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 0<br /><sub>(Registry)</sub> |Warning | |Configure Windows SmartScreen<br /><sub>(CCE-35859-8)</sub> |**Description**: This policy setting allows you to manage the behavior of Windows SmartScreen. Windows SmartScreen helps keep PCs safer by warning users before running unrecognized programs downloaded from the Internet. Some information is sent to Microsoft about files and programs run on PCs with this feature enabled. If you enable this policy setting, Windows SmartScreen behavior may be controlled by setting one of the following options: • Give user a warning before running downloaded unknown software • Turn off SmartScreen If you disable or do not configure this policy setting, Windows SmartScreen behavior is managed by administrators on the PC by using Windows SmartScreen Settings in Security and Maintenance. 
Options: • Give user a warning before running downloaded unknown software • Turn off SmartScreen<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\System\EnableSmartScreen<br />**OS**: WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |In 1-2<br /><sub>(Registry)</sub> |Warning | |Detect change from default RDP port<br /><sub>(AZ-WIN-00156)</sub> |**Description**: This setting determines whether the network port that listens for Remote Desktop Connections has been changed from the default 3389<br />**Key Path**: System\CurrentControlSet\Control\Terminal Server\WinStations\RDP-Tcp\PortNumber<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 3389<br /><sub>(Registry)</sub> |Critical | For more information, see [Azure Policy guest configuration](../../machine-confi |Do not show feedback notifications<br /><sub>(AZ-WIN-00140)</sub> |**Description**: This policy setting allows an organization to prevent its devices from showing feedback questions from Microsoft. If you enable this policy setting, users will no longer see feedback notifications through the Windows Feedback app. If you disable or do not configure this policy setting, users may see notifications through the Windows Feedback app asking users for feedback. Note: If you disable or do not configure this policy setting, users can control how often they receive feedback questions.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\DataCollection\DoNotShowFeedbackNotifications<br />**OS**: WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 1<br /><sub>(Registry)</sub> |Critical | |Do not use temporary folders per session<br /><sub>(CCE-38180-6)</sub> |**Description**: By default, Remote Desktop Services creates a separate temporary folder on the RD Session Host server for each active session that a user maintains on the RD Session Host server. The temporary folder is created on the RD Session Host server in a Temp folder under the user's profile folder and is named with the "sessionid." This temporary folder is used to store individual temporary files. To reclaim disk space, the temporary folder is deleted when the user logs off from a session. The recommended state for this setting is: `Disabled`.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows NT\Terminal Services\PerSessionTempDir<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 1<br /><sub>(Registry)</sub> |Critical | |Enumerate administrator accounts on elevation<br /><sub>(CCE-36512-2)</sub> |**Description**: This policy setting controls whether administrator accounts are displayed when a user attempts to elevate a running application. 
The recommended state for this setting is: `Disabled`.<br />**Key Path**: SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\CredUI\EnumerateAdministrators<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 0<br /><sub>(Registry)</sub> |Warning |+|PowerShell script block logging must be enabled.<br /><sub>(AZ-WIN-73591)</sub> |<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\PowerShell\ScriptBlockLogging\EnableScriptBlockLogging<br />**OS**: WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 1<br /><sub>(Registry)</sub> |Important | |Prevent downloading of enclosures<br /><sub>(CCE-37126-0)</sub> |**Description**: This policy setting prevents the user from having enclosures (file attachments) downloaded from a feed to the user's computer. The recommended state for this setting is: `Enabled`.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Internet Explorer\Feeds\DisableEnclosureDownload<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 1<br /><sub>(Registry)</sub> |Warning | |Require secure RPC communication<br /><sub>(CCE-37567-5)</sub> |**Description**: Specifies whether a Remote Desktop Session Host server requires secure RPC communication with all clients or allows unsecured communication. You can use this setting to strengthen the security of RPC communication with clients by allowing only authenticated and encrypted requests. If the status is set to Enabled, Remote Desktop Services accepts requests from RPC clients that support secure requests, and does not allow unsecured communication with untrusted clients. If the status is set to Disabled, Remote Desktop Services always requests security for all RPC traffic. However, unsecured communication is allowed for RPC clients that do not respond to the request. If the status is set to Not Configured, unsecured communication is allowed. Note: The RPC interface is used for administering and configuring Remote Desktop Services.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows NT\Terminal Services\fEncryptRPCTraffic<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 1<br /><sub>(Registry)</sub> |Critical | |Require user authentication for remote connections by using Network Level Authentication<br /><sub>(AZ-WIN-00149)</sub> |**Description**: Require user authentication for remote connections by using Network Level Authentication<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows NT\Terminal Services\UserAuthentication<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 1<br /><sub>(Registry)</sub> |Critical | For more information, see [Azure Policy guest configuration](../../machine-confi |Specify the interval to check for definition updates<br /><sub>(AZ-WIN-00152)</sub> |**Description**: This policy setting allows you to specify an interval at which to check for definition updates. The time value is represented as the number of hours between update checks. Valid values range from 1 (every hour) to 24 (once per day). If you enable this setting, checking for definition updates will occur at the interval specified. 
If you disable or do not configure this setting, checking for definition updates will occur at the default interval.<br />**Key Path**: SOFTWARE\Microsoft\Microsoft Antimalware\Signature Updates\SignatureUpdateInterval<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 8<br /><sub>(Registry)</sub> |Critical | |System: Control Event Log behavior when the log file reaches its maximum size<br /><sub>(CCE-36160-0)</sub> |**Description**: This policy setting controls Event Log behavior when the log file reaches its maximum size. If you enable this policy setting and a log file reaches its maximum size, new events are not written to the log and are lost. If you disable or do not configure this policy setting and a log file reaches its maximum size, new events overwrite old events. Note: Old events may or may not be retained according to the "Backup log automatically when full"  policy setting.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\EventLog\System\Retention<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 0<br /><sub>(Registry)</sub> |Critical | |System: Specify the maximum log file size (KB)<br /><sub>(CCE-36092-5)</sub> |**Description**: This policy setting specifies the maximum size of the log file in kilobytes. If you enable this policy setting, you can configure the maximum log file size to be between 1 megabyte (1024 kilobytes) and 2 terabytes (2,147,483,647 kilobytes) in kilobyte increments. If you disable or do not configure this policy setting, the maximum size of the log file will be set to the locally configured value. This value can be changed by the local administrator using the Log Properties dialog and it defaults to 20 megabytes.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\EventLog\System\MaxSize<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\>\= 32768<br /><sub>(Registry)</sub> |Critical |+|The Application Compatibility Program Inventory must be prevented from collecting data and sending the information to Microsoft.<br /><sub>(AZ-WIN-73543)</sub> |<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\AppCompat\DisableInventory<br />**OS**: WS2016, WS2019, WS2022<br />**Server Type**: Domain Member |\= 1<br /><sub>(Registry)</sub> |Informational | |Turn off Autoplay<br /><sub>(CCE-36875-3)</sub> |**Description**: Autoplay starts to read from a drive as soon as you insert media in the drive, which causes the setup file for programs or audio media to start immediately. An attacker could use this feature to launch a program to damage the computer or data on the computer. You can enable the Turn off Autoplay setting to disable the Autoplay feature. Autoplay is disabled by default on some removable drive types, such as floppy disk and network drives, but not on CD-ROM drives. 
Note You cannot use this policy setting to enable Autoplay on computer drives in which it is disabled by default, such as floppy disk and network drives.<br />**Key Path**: SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\Explorer\NoDriveTypeAutoRun<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 255<br /><sub>(Registry)</sub> |Critical | |Turn off Data Execution Prevention for Explorer<br /><sub>(CCE-37809-1)</sub> |**Description**: Disabling data execution prevention can allow certain legacy plug-in applications to function without terminating Explorer. The recommended state for this setting is: `Disabled`. **Note:** Some legacy plug-in applications and other software may not function with Data Execution Prevention and will require an exception to be defined for that specific plug-in/software.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\Explorer\NoDataExecutionPrevention<br />**OS**: WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 0<br /><sub>(Registry)</sub> |Critical | |Turn off heap termination on corruption<br /><sub>(CCE-36660-9)</sub> |**Description**: Without heap termination on corruption, legacy plug-in applications may continue to function when a File Explorer session has become corrupt. Ensuring that heap termination on corruption is active will prevent this. The recommended state for this setting is: `Disabled`.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\Explorer\NoHeapTerminationOnCorruption<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 0<br /><sub>(Registry)</sub> |Critical | For more information, see [Azure Policy guest configuration](../../machine-confi |Turn off shell protocol protected mode<br /><sub>(CCE-36809-2)</sub> |**Description**: This policy setting allows you to configure the amount of functionality that the shell protocol can have. When using the full functionality of this protocol applications can open folders and launch files. The protected mode reduces the functionality of this protocol allowing applications to only open a limited set of folders. Applications are not able to open files with this protocol when it is in the protected mode. It is recommended to leave this protocol in the protected mode to increase the security of Windows. The recommended state for this setting is: `Disabled`.<br />**Key Path**: SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\Explorer\PreXPSP2ShellProtocolBehavior<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 0<br /><sub>(Registry)</sub> |Warning | |Turn on behavior monitoring<br /><sub>(AZ-WIN-00178)</sub> |**Description**: This policy setting allows you to configure behavior monitoring. If you enable or do not configure this setting behavior monitoring will be enabled. 
If you disable this setting, behavior monitoring will be disabled.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows Defender\Real-Time Protection\DisableBehaviorMonitoring<br />**OS**: WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 0<br /><sub>(Registry)</sub> |Warning |
-## Windows Firewall Properties
+## Windows Settings - Security Settings
|Name<br /><sub>(ID)</sub> |Details |Expected value<br /><sub>(Type)</sub> |Severity |
|||||
-|Windows Firewall: Domain: Allow unicast response<br /><sub>(AZ-WIN-00088)</sub> |**Description**: This option is useful if you need to control whether this computer receives unicast responses to its outgoing multicast or broadcast messages. We recommend setting this to ‘Yes’ for the Private and Domain profiles, which sets the registry value to 0.<br />**Key Path**: Software\Policies\Microsoft\WindowsFirewall\DomainProfile\DisableUnicastResponsesToMulticastBroadcast<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member |\= 0<br /><sub>(Registry)</sub> |Warning |
-|Windows Firewall: Domain: Firewall state<br /><sub>(CCE-36062-8)</sub> |**Description**: Select On (recommended) to have Windows Firewall with Advanced Security use the settings for this profile to filter network traffic. If you select Off, Windows Firewall with Advanced Security will not use any of the firewall rules or connection security rules for this profile.<br />**Key Path**: SOFTWARE\Policies\Microsoft\WindowsFirewall\DomainProfile\EnableFirewall<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 1<br /><sub>(Registry)</sub> |Critical |
-|Windows Firewall: Domain: Outbound connections<br /><sub>(CCE-36146-9)</sub> |**Description**: This setting determines the behavior for outbound connections that do not match an outbound firewall rule. In Windows Vista, the default behavior is to allow connections unless there are firewall rules that block the connection.<br />**Key Path**: SOFTWARE\Policies\Microsoft\WindowsFirewall\DomainProfile\DefaultOutboundAction<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member |\= 0<br /><sub>(Registry)</sub> |Critical |
-|Windows Firewall: Domain: Settings: Apply local connection security rules<br /><sub>(CCE-38040-2)</sub> |**Description**: This setting controls whether local administrators are allowed to create local connection rules that apply together with firewall rules configured by Group Policy. The recommended state for this setting is ‘Yes’, which sets the registry value to 1.<br />**Key Path**: SOFTWARE\Policies\Microsoft\WindowsFirewall\DomainProfile\AllowLocalIPsecPolicyMerge<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member |\= 1<br /><sub>(Registry)</sub> |Critical |
-|Windows Firewall: Domain: Settings: Apply local firewall rules<br /><sub>(CCE-37860-4)</sub> |**Description**: This setting controls whether local administrators are allowed to create local firewall rules that apply together with firewall rules configured by Group Policy. The recommended state for this setting is ‘Yes’, which sets the registry value to 1.<br />**Key Path**: SOFTWARE\Policies\Microsoft\WindowsFirewall\DomainProfile\AllowLocalPolicyMerge<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member |Doesn't exist or \= 1<br /><sub>(Registry)</sub> |Critical |
-|Windows Firewall: Domain: Settings: Display a notification<br /><sub>(CCE-38041-0)</sub> |**Description**: By selecting this option, no notification is displayed to the user when a program is blocked from receiving inbound connections. In a server environment, the pop-ups are not useful because the user is not logged on interactively; they are unnecessary and can add confusion for the administrator. Configure this policy setting to ‘No’, which sets the registry value to 1. Windows Firewall will not display a notification when a program is blocked from receiving inbound connections.<br />**Key Path**: SOFTWARE\Policies\Microsoft\WindowsFirewall\DomainProfile\DisableNotifications<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member |\= 1<br /><sub>(Registry)</sub> |Warning |
-|Windows Firewall: Private: Allow unicast response<br /><sub>(AZ-WIN-00089)</sub> |**Description**: This option is useful if you need to control whether this computer receives unicast responses to its outgoing multicast or broadcast messages. We recommend setting this to ‘Yes’ for the Private and Domain profiles, which sets the registry value to 0.<br />**Key Path**: Software\Policies\Microsoft\WindowsFirewall\PrivateProfile\DisableUnicastResponsesToMulticastBroadcast<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 0<br /><sub>(Registry)</sub> |Warning |
-|Windows Firewall: Private: Firewall state<br /><sub>(CCE-38239-0)</sub> |**Description**: Select On (recommended) to have Windows Firewall with Advanced Security use the settings for this profile to filter network traffic. If you select Off, Windows Firewall with Advanced Security will not use any of the firewall rules or connection security rules for this profile.<br />**Key Path**: SOFTWARE\Policies\Microsoft\WindowsFirewall\PrivateProfile\EnableFirewall<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 1<br /><sub>(Registry)</sub> |Critical |
-|Windows Firewall: Private: Outbound connections<br /><sub>(CCE-38332-3)</sub> |**Description**: This setting determines the behavior for outbound connections that do not match an outbound firewall rule. The default behavior is to allow connections unless there are firewall rules that block the connection. Important: If you set Outbound connections to Block and then deploy the firewall policy by using a GPO, computers that receive the GPO settings cannot receive subsequent Group Policy updates unless you create and deploy an outbound rule that enables Group Policy to work. Predefined rules for Core Networking include outbound rules that enable Group Policy to work. Ensure that these outbound rules are active, and thoroughly test firewall profiles before deploying.<br />**Key Path**: SOFTWARE\Policies\Microsoft\WindowsFirewall\PrivateProfile\DefaultOutboundAction<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 0<br /><sub>(Registry)</sub> |Critical |
-|Windows Firewall: Private: Settings: Apply local connection security rules<br /><sub>(CCE-36063-6)</sub> |**Description**: This setting controls whether local administrators are allowed to create local connection rules that apply together with firewall rules configured by Group Policy. The recommended state for this setting is ‘Yes’, which sets the registry value to 1.<br />**Key Path**: SOFTWARE\Policies\Microsoft\WindowsFirewall\PrivateProfile\AllowLocalIPsecPolicyMerge<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 1<br /><sub>(Registry)</sub> |Critical |
-|Windows Firewall: Private: Settings: Apply local firewall rules<br /><sub>(CCE-37438-9)</sub> |**Description**: This setting controls whether local administrators are allowed to create local firewall rules that apply together with firewall rules configured by Group Policy. The recommended state for this setting is ‘Yes’, which sets the registry value to 1.<br />**Key Path**: SOFTWARE\Policies\Microsoft\WindowsFirewall\PrivateProfile\AllowLocalPolicyMerge<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 1<br /><sub>(Registry)</sub> |Critical |
-|Windows Firewall: Private: Settings: Display a notification<br /><sub>(CCE-37621-0)</sub> |**Description**: By selecting this option, no notification is displayed to the user when a program is blocked from receiving inbound connections. In a server environment, the pop-ups are not useful because the user is not logged on interactively; they are unnecessary and can add confusion for the administrator. Configure this policy setting to ‘No’, which sets the registry value to 1. Windows Firewall will not display a notification when a program is blocked from receiving inbound connections.<br />**Key Path**: SOFTWARE\Policies\Microsoft\WindowsFirewall\PrivateProfile\DisableNotifications<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 1<br /><sub>(Registry)</sub> |Warning |
-|Windows Firewall: Public: Allow unicast response<br /><sub>(AZ-WIN-00090)</sub> |**Description**: This option is useful if you need to control whether this computer receives unicast responses to its outgoing multicast or broadcast messages. This can be done by changing the state for this setting to ‘No’, which sets the registry value to 1.<br />**Key Path**: Software\Policies\Microsoft\WindowsFirewall\PublicProfile\DisableUnicastResponsesToMulticastBroadcast<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 1<br /><sub>(Registry)</sub> |Warning |
-|Windows Firewall: Public: Firewall state<br /><sub>(CCE-37862-0)</sub> |**Description**: Select On (recommended) to have Windows Firewall with Advanced Security use the settings for this profile to filter network traffic. If you select Off, Windows Firewall with Advanced Security will not use any of the firewall rules or connection security rules for this profile.<br />**Key Path**: SOFTWARE\Policies\Microsoft\WindowsFirewall\PublicProfile\EnableFirewall<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 1<br /><sub>(Registry)</sub> |Critical |
-|Windows Firewall: Public: Outbound connections<br /><sub>(CCE-37434-8)</sub> |**Description**: This setting determines the behavior for outbound connections that do not match an outbound firewall rule. The default behavior is to allow connections unless there are firewall rules that block the connection. Important: If you set Outbound connections to Block and then deploy the firewall policy by using a GPO, computers that receive the GPO settings cannot receive subsequent Group Policy updates unless you create and deploy an outbound rule that enables Group Policy to work. Predefined rules for Core Networking include outbound rules that enable Group Policy to work. Ensure that these outbound rules are active, and thoroughly test firewall profiles before deploying.<br />**Key Path**: SOFTWARE\Policies\Microsoft\WindowsFirewall\PublicProfile\DefaultOutboundAction<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 0<br /><sub>(Registry)</sub> |Critical |
-|Windows Firewall: Public: Settings: Apply local connection security rules<br /><sub>(CCE-36268-1)</sub> |**Description**: This setting controls whether local administrators are allowed to create local connection rules that apply together with firewall rules configured by Group Policy. The recommended state for this setting is ‘Yes’, which sets the registry value to 1.<br />**Key Path**: SOFTWARE\Policies\Microsoft\WindowsFirewall\PublicProfile\AllowLocalIPsecPolicyMerge<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 1<br /><sub>(Registry)</sub> |Critical |
-|Windows Firewall: Public: Settings: Apply local firewall rules<br /><sub>(CCE-37861-2)</sub> |**Description**: This setting controls whether local administrators are allowed to create local firewall rules that apply together with firewall rules configured by Group Policy. The recommended state for this setting is ‘Yes’, which sets the registry value to 1.<br />**Key Path**: SOFTWARE\Policies\Microsoft\WindowsFirewall\PublicProfile\AllowLocalPolicyMerge<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 1<br /><sub>(Registry)</sub> |Critical |
-|Windows Firewall: Public: Settings: Display a notification<br /><sub>(CCE-38043-6)</sub> |**Description**: By selecting this option, no notification is displayed to the user when a program is blocked from receiving inbound connections. In a server environment, the pop-ups are not useful because the user is not logged on interactively; they are unnecessary and can add confusion for the administrator. Configure this policy setting to ‘No’, which sets the registry value to 1. Windows Firewall will not display a notification when a program is blocked from receiving inbound connections.<br />**Key Path**: SOFTWARE\Policies\Microsoft\WindowsFirewall\PublicProfile\DisableNotifications<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 1<br /><sub>(Registry)</sub> |Warning |
+|Adjust memory quotas for a process<br /><sub>(CCE-10849-8)</sub> |<br />**Key Path**: [Privilege Rights]SeIncreaseQuotaPrivilege<br />**OS**: WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member | Administrators, Local Service, Network Service<br /><sub>(Policy)</sub> |Warning |
> [!NOTE]
> Availability of specific Azure Policy guest configuration settings may vary in Azure Government.
For more information, see [Azure Policy guest configuration](../../machine-confi
Additional articles about Azure Policy and guest configuration:
-- [Azure Policy guest configuration](../../machine-configuration/overview.md).+- [Azure Policy guest configuration](../concepts/guest-configuration.md).
- [Regulatory Compliance](../concepts/regulatory-compliance.md) overview.
- Review other examples at [Azure Policy samples](./index.md).
- Review [Understanding policy effects](../concepts/effects.md). |
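The audit rows above map directly onto `auditpol` subcategories; the GUID in each **Key Path** (for example, `{0CCE9245-69AE-11D9-BED3-505054503030}` for Audit Removable Storage) is the subcategory GUID that `auditpol` accepts. As a minimal, non-authoritative sketch for spot-checking a server before assigning the guest configuration policy (run from an elevated prompt):

```powershell
# Show the effective audit policy for two of the subcategories listed above.
auditpol /get /subcategory:"Removable Storage"
auditpol /get /subcategory:"MPSSVC Rule-Level Policy Change"

# Subcategories can also be addressed by the Key Path GUID from the table.
auditpol /get /subcategory:"{0CCE9245-69AE-11D9-BED3-505054503030}"

# Bring one subcategory in line with the expected "Success and Failure" value.
auditpol /set /subcategory:"Removable Storage" /success:enable /failure:enable
```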
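For the user-rights rows whose key path sits under `[Privilege Rights]` (such as `SeRestorePrivilege`, `SeShutdownPrivilege`, and `SeIncreaseQuotaPrivilege`), the local assignments can be exported with `secedit` and inspected. A sketch; note that `secedit` reports SIDs (for example, `*S-1-5-32-544` is the built-in Administrators group) rather than friendly names:

```powershell
# Export only the User Rights Assignment area of the local security policy.
secedit /export /cfg "$env:TEMP\user-rights.inf" /areas USER_RIGHTS | Out-Null

# List the privileges called out in the tables above, with their current holders.
Select-String -Path "$env:TEMP\user-rights.inf" `
    -Pattern 'SeRestorePrivilege|SeShutdownPrivilege|SeTakeOwnershipPrivilege|SeDebugPrivilege|SeImpersonatePrivilege|SeIncreaseQuotaPrivilege'
```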
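Most of the Windows Components rows are plain registry values under `HKLM`, so they can be spot-checked the same way the policy engine evaluates them. A simplified sketch that encodes three of the expected values above; the test logic is an assumption that "Doesn't exist or = 0" means missing-or-zero and that `>=` values are minimums:

```powershell
# Each check: registry path, value name, and a script block encoding the expected state.
$checks = @(
    @{ Path = 'HKLM:\SOFTWARE\Policies\Microsoft\Windows\WinRM\Client'; Name = 'AllowBasic'
       Test = { $args[0] -eq $null -or $args[0] -eq 0 } },    # Doesn't exist or = 0
    @{ Path = 'HKLM:\SOFTWARE\Policies\Microsoft\Windows\EventLog\Application'; Name = 'MaxSize'
       Test = { $args[0] -ge 32768 } },                       # >= 32768 KB
    @{ Path = 'HKLM:\SOFTWARE\Policies\Microsoft\Windows\PowerShell\ScriptBlockLogging'; Name = 'EnableScriptBlockLogging'
       Test = { $args[0] -eq 1 } }                            # = 1
)

foreach ($c in $checks) {
    $value = (Get-ItemProperty -Path $c.Path -Name $c.Name -ErrorAction SilentlyContinue).($c.Name)
    $shown = if ($null -eq $value) { '<not set>' } else { $value }
    $state = if (& $c.Test $value) { 'OK' } else { 'NON-COMPLIANT' }
    '{0}\{1} = {2} -> {3}' -f $c.Path, $c.Name, $shown, $state
}
```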
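The per-profile firewall rows correspond to properties exposed by the built-in NetSecurity module, which avoids reading the `WindowsFirewall` policy keys directly. A sketch of that mapping (an assumption worth verifying: `DisableNotifications = 1` corresponds to `NotifyOnListen` being false, and `DisableUnicastResponsesToMulticastBroadcast = 0` to `AllowUnicastResponseToMulticast` being true):

```powershell
# Compare all three profiles against the expected values in the table above.
Get-NetFirewallProfile |
    Select-Object Name, Enabled, DefaultOutboundAction,
        AllowLocalFirewallRules, AllowLocalIPsecRules,
        AllowUnicastResponseToMulticast, NotifyOnListen

# Example remediation for the Domain profile: firewall on, outbound allowed,
# local rule merge permitted, and inbound-block notifications suppressed.
Set-NetFirewallProfile -Profile Domain -Enabled True -DefaultOutboundAction Allow `
    -AllowLocalFirewallRules True -AllowLocalIPsecRules True -NotifyOnListen False
```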
hdinsight | Apache Hadoop Dotnet Csharp Mapreduce Streaming | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hadoop/apache-hadoop-dotnet-csharp-mapreduce-streaming.md | description: Learn how to use C# to create MapReduce solutions with Apache Hadoo Previously updated : 04/28/2020 Last updated : 08/23/2022 # Use C# with MapReduce streaming on Apache Hadoop in HDInsight * [Use MapReduce in Apache Hadoop on HDInsight](hdinsight-use-mapreduce.md). * [Use a C# user-defined function with Apache Hive and Apache Pig](apache-hadoop-hive-pig-udf-dotnet-csharp.md).-* [Develop Java MapReduce programs](apache-hadoop-develop-deploy-java-mapreduce-linux.md) +* [Develop Java MapReduce programs](apache-hadoop-develop-deploy-java-mapreduce-linux.md) |
hdinsight | Apache Interactive Query Get Started | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/interactive-query/apache-interactive-query-get-started.md | description: An introduction to Interactive Query, also called Apache Hive LLAP, Previously updated : 03/03/2020 Last updated : 08/23/2022 #Customer intent: As a developer new to Interactive Query in Azure HDInsight, I want to have a basic understanding of Interactive Query so I can decide if I want to use it rather than build my own cluster. |
hdinsight | Apache Kafka Azure Container Services | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/kafka/apache-kafka-azure-container-services.md | description: Learn how to use Kafka on HDInsight from container images hosted in Previously updated : 12/04/2019 Last updated : 08/23/2022 # Use Azure Kubernetes Service with Apache Kafka on HDInsight |
hdinsight | Apache Kafka Streams Api | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/kafka/apache-kafka-streams-api.md | description: Tutorial - Learn how to use the Apache Kafka Streams API with Kafka Previously updated : 04/01/2021 Last updated : 08/23/2022 #Customer intent: As a developer, I need to create an application that uses the Kafka streams API with Kafka on HDInsight |
hdinsight | Apache Azure Spark History Server | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/spark/apache-azure-spark-history-server.md | description: Use the extended features in the Apache Spark History Server to deb Previously updated : 11/25/2019 Last updated : 08/23/2022 # Use the extended features of the Apache Spark History Server to debug and diagnose Spark applications |
healthcare-apis | Deploy Iot Connector In Azure | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/deploy-iot-connector-in-azure.md | In this quickstart, you'll learn how to deploy the MedTech service in the Azure > [!IMPORTANT] >-> You'll want to confirm that the **Microsoft.HealthcareApis** and **Microsoft.EventHub** resource providers have been registered with your Azure subscription for a successful deployment. To learn more about registering resource providers, see [Azure resource providers and types](/azure-resource-manager/management/resource-providers-and-types) +> You'll want to confirm that the **Microsoft.HealthcareApis** and **Microsoft.EventHub** resource providers have been registered with your Azure subscription for a successful deployment. To learn more about registering resource providers, see [Azure resource providers and types](/azure/azure-resource-manager/management/resource-providers-and-types) ## Deploy the MedTech service with a quickstart template |
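Since the MedTech service quickstart above requires the **Microsoft.HealthcareApis** and **Microsoft.EventHub** resource providers to be registered, a quick Azure PowerShell sketch to confirm and, if needed, register them (assumes the Az module and a signed-in context; registration is idempotent):

```powershell
foreach ($ns in 'Microsoft.HealthcareApis', 'Microsoft.EventHub') {
    # Show the current registration state for the provider namespace.
    Get-AzResourceProvider -ProviderNamespace $ns |
        Select-Object -First 1 -Property ProviderNamespace, RegistrationState

    # Safe to repeat: registering an already-registered provider is a no-op.
    Register-AzResourceProvider -ProviderNamespace $ns
}
```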
iot-hub | Iot Hub Create Through Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-create-through-portal.md | You can change the settings of an existing IoT hub after it's created from the I ### Shared access policies -You can also view or modify the list of shared access policies by clicking **Shared access policies** in the **Security settings** section. These policies define the permissions for devices and services to connect to IoT Hub. +You can also view or modify the list of shared access policies by choosing **Shared access policies** in the **Security settings** section. These policies define the permissions for devices and services to connect to IoT Hub. -Click **Add shared access policy** to open the **Add shared access policy** blade. You can enter the new policy name and the permissions that you want to associate with this policy, as shown in the following figure: +Select **Add shared access policy** to open the **Add shared access policy** blade. You can enter the new policy name and the permissions that you want to associate with this policy, as shown in the following figure: :::image type="content" source="./media/iot-hub-create-through-portal/iot-hub-add-shared-access-policy.png" alt-text="Screenshot showing adding a shared access policy." lightbox="./media/iot-hub-create-through-portal/iot-hub-add-shared-access-policy.png"::: * The **Registry Read** and **Registry Write** policies grant read and write access rights to the identity registry. These permissions are used by back-end cloud services to manage device identities. Choosing the write option automatically chooses the read option. -* The **Service Connect** policy grants permission to access service endpoints. This permission is used by back-end cloud services to send and receive messages from devices as well as to update and read device twin and module twin data. +* The **Service Connect** policy grants permission to access service endpoints. This permission is used by back-end cloud services to send and receive messages from devices. It's also used to update and read device twin and module twin data. -* The **Device Connect** policy grants permissions for sending and receiving messages using the IoT Hub device-side endpoints. This permission is used by devices to send and receive messages from an IoT hub, update and read device twin and module twin data, and perform file uploads. +* The **Device Connect** policy grants permissions for sending and receiving messages using the IoT Hub device-side endpoints. This permission is used by devices to send and receive messages from an IoT hub or update and read device twin and module twin data. It's also used for file uploads. -Click **Add** to add this newly created policy to the existing list. +Select **Add** to add this newly created policy to the existing list. For more detailed information about the access granted by specific permissions, see [IoT Hub permissions](./iot-hub-dev-guide-sas.md#access-control-and-permissions). For more detailed information about the access granted by specific permissions, [!INCLUDE [iot-hub-include-create-device](../../includes/iot-hub-include-create-device.md)] -## Message Routing for an IoT hub +## Message routing for an IoT hub -Click **Message Routing** under **Messaging** to see the Message Routing pane, where you define routes and custom endpoints for the hub. [Message routing](iot-hub-devguide-messages-d2c.md) enables you to manage how data is sent from your devices to your endpoints. 
The first step is to add a new route. Then you can add an existing endpoint to the route, or create a new one of the types supported, such as blob storage. -- +Select **Message Routing** under **Messaging** to see the Message Routing pane, where you define routes and custom endpoints for the hub. [Message routing](iot-hub-devguide-messages-d2c.md) enables you to manage how data is sent from your devices to your endpoints. The first step is to add a new route. Then you can add an existing endpoint to the route, or create a new one of the types supported, such as blob storage. ### Routes -Routes is the first tab on the Message Routing pane. To add a new route, click +**Add**. You see the following screen. +**Routes** is the first tab on the **Message Routing** pane. To add a new route, select **+ Add**. ++ - +You see the following screen. + Name your route. The route name must be unique within the list of routes for that hub. -For **Endpoint**, you can select one from the dropdown list, or add a new one. In this example, a storage account and container are already available. To add them as an endpoint, click +**Add** next to the Endpoint dropdown and select **Blob Storage**. The following screen shows where the storage account and container are specified. +For **Endpoint**, select one from the dropdown list or add a new one. In this example, a storage account and container are already available. To add them as an endpoint, choose **+ Add** next to the Endpoint dropdown and select **Blob Storage**. ++The following screen shows where the storage account and container are specified. - + -Click **Pick a container** to select the storage account and container. When you have selected those fields, it returns to the Endpoint pane. Use the defaults for the rest of the fields and **Create** to create the endpoint for the storage account and add it to the routing rules. +Add an endpoint name in **Endpoint name** if needed. Select **Pick a container** to select the storage account and container. When you've chosen a container then **Select**, the page returns to the **Add a storage endpoint** pane. Use the defaults for the rest of the fields and **Create** to create the endpoint for the storage account and add it to the routing rules. -For **Data source**, select Device Telemetry Messages. +You return to the **Add a route** page. For **Data source**, select Device Telemetry Messages. Next, add a routing query. In this example, the messages that have an application property called `level` with a value equal to `critical` are routed to the storage account.  -Click **Save** to save the routing rule. You return to the Message Routing pane, and your new routing rule is displayed. +Select **Save** to save the routing rule. You return to the **Message routing** pane, and your new routing rule is displayed. ### Custom endpoints -Click the **Custom endpoints** tab. You see any custom endpoints already created. From here, you can add new endpoints or delete existing endpoints. +Select the **Custom endpoints** tab. You see any custom endpoints already created. From here, you can add new endpoints or delete existing endpoints. > [!NOTE]-> If you delete a route, it does not delete the endpoints assigned to that route. To delete an endpoint, click the Custom endpoints tab, select the endpoint you want to delete, and click Delete. -> +> If you delete a route, it does not delete the endpoints assigned to that route. 
To delete an endpoint, select the Custom endpoints tab, select the endpoint you want to delete, and choose **Delete**. You can read more about custom endpoints in [Reference - IoT hub endpoints](iot-hub-devguide-endpoints.md). To see a full example of how to use custom endpoints with routing, see [Message Here are two ways to find a specific IoT hub in your subscription: -1. If you know the resource group to which the IoT hub belongs, click **Resource groups**, then select the resource group from the list. The resource group screen shows all of the resources in that group, including the IoT hubs. Click on the hub for which you're looking. +1. If you know the resource group to which the IoT hub belongs, choose **Resource groups**, then select the resource group from the list. The resource group screen shows all of the resources in that group, including the IoT hubs. Select your hub. -2. Click **All resources**. On the **All resources** pane, there is a dropdown list that defaults to `All types`. Click on the dropdown list, uncheck `Select all`. Find `IoT Hub` and check it. Click on the dropdown list box to close it, and the entries will be filtered, showing only your IoT hubs. +2. Choose **All resources**. On the **All resources** pane, there's a dropdown list that defaults to `All types`. Select the dropdown list, uncheck `Select all`. Find `IoT Hub` and check it. Select the dropdown list box to close it, and the entries will be filtered, showing only your IoT hubs. ## Delete the IoT hub -To delete an Iot hub, find the IoT hub you want to delete, then click the **Delete** button below the IoT hub name. +To delete an IoT hub, find the IoT hub you want to delete, then choose **Delete**. ## Next steps |
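The portal walkthrough above, for both shared access policies and message routing, has command-line equivalents in the Az.IotHub PowerShell module. The sketch below assumes that module is installed and uses placeholder resource names, a placeholder subscription ID, and a placeholder storage connection string; verify the parameter names against your installed module version.

```powershell
# Placeholders: substitute your own resource names.
$rg  = 'MyResourceGroup'
$hub = 'MyIotHub'

# Add a shared access policy; the Rights values mirror the portal options
# (RegistryRead, RegistryWrite, ServiceConnect, DeviceConnect).
Add-AzIotHubKey -ResourceGroupName $rg -Name $hub `
    -KeyName 'myServicePolicy' -Rights ServiceConnect

# Create a custom endpoint that points at a blob container,
# like the storage endpoint created in the walkthrough.
Add-AzIotHubRoutingEndpoint -ResourceGroupName $rg -Name $hub `
    -EndpointName 'storageEndpoint' -EndpointType AzureStorageContainer `
    -EndpointResourceGroup $rg -EndpointSubscriptionId '<subscription-id>' `
    -ConnectionString '<storage-connection-string>' -ContainerName 'mycontainer'

# Route device telemetry to that endpoint using the same query
# as the portal example: only messages whose application property
# "level" equals "critical" are delivered to the container.
Add-AzIotHubRoute -ResourceGroupName $rg -Name $hub `
    -RouteName 'criticalToStorage' -Source DeviceMessages `
    -EndpointName 'storageEndpoint' -Condition 'level="critical"' -Enabled
```

As in the portal, deleting a route created this way doesn't delete the endpoint it references; the endpoint must be removed separately.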
lab-services | Reliability In Azure Lab Services | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/reliability-in-azure-lab-services.md | + + Title: Reliability in Azure Lab Services +description: Learn about reliability in Azure Lab Services ++ Last updated : 08/18/2022+++# What is reliability in Azure Lab Services? ++This article describes reliability support in Azure Lab Services, and covers regional resiliency with availability zones. For a more detailed overview of reliability in Azure, see [Azure resiliency](/azure/availability-zones/overview). ++## Availability zone support ++Azure availability zones are at least three physically separate groups of datacenters within each Azure region. Datacenters within each zone are equipped with independent power, cooling, and networking infrastructure. If a local zone fails, availability zones allow services to fail over to the remaining availability zones, providing continuity of service with minimal interruption. Failures can range from software and hardware faults to events such as earthquakes, floods, and fires. Tolerance to failures is achieved with redundancy and logical isolation of Azure services. For more detailed information on availability zones in Azure, see [Regions and availability zones](/azure/availability-zones/az-overview). ++Availability zone-enabled Azure services are designed to provide the right level of resiliency and flexibility. They can be configured in two ways: either zone redundant, with automatic replication across zones, or zonal, with instances pinned to a specific zone. You can also combine these approaches. For more information on zonal vs. zone-redundant architecture, see [Build solutions with availability zones](/azure/architecture/high-availability/building-solutions-for-high-availability). ++Azure Lab Services provides availability zone redundancy automatically in all regions listed in this article. While the service infrastructure is zone redundant, customer labs and VMs are not zone redundant. ++Currently, the service is not zonal. That is, you can't configure a lab or the VMs in the lab to align to a specific zone. A lab and its VMs may be distributed across zones in a region. ++### SLA improvements ++There are no increased SLAs available for availability in Azure Lab Services. For the monthly uptime SLAs for Azure Lab Services, see [SLA for Azure Lab Services](https://azure.microsoft.com/support/legal/sla/lab-services/v1_0/). ++The Azure Lab Services infrastructure uses Azure Cosmos DB storage. The Cosmos DB storage region is the same as the region where the lab plan is located. All the regional Cosmos DB accounts are single region. In the zone-redundant regions listed in this article, the Cosmos DB accounts are single region with availability zones. In the other regions, the accounts are single region without availability zones. For high availability capabilities for these account types, see [SLAs for Cosmos DB](/azure/cosmos-db/high-availability#slas). ++### Zone down experience ++#### Azure Lab Services infrastructure ++Azure Lab Services infrastructure is zone redundant in the following regions: ++- Australia East +- Canada Central +- France Central +- Korea Central +- East Asia ++Resources other than lab resources and virtual machines are zone redundant in these regions. 
++In the event of a zone outage in these regions, you can still perform the following tasks: ++- Access the Azure Lab Services website +- Create/manage lab plans +- Create users +- Configure lab schedules +- Create/manage labs and VMs in regions unaffected by the zone outage ++Data loss may occur only with an unrecoverable disaster in the Cosmos DB region. For more information, see [Region Outages](/azure/cosmos-db/high-availability#region-outages). ++For regions not listed, access to the Azure Lab Services infrastructure is not guaranteed when there is a zone outage in the region containing the lab plan. You will only be able to perform the following tasks: ++- Access the Azure Lab Services website +- Create/manage lab plans, labs, and VMs in regions unaffected by the zone outage ++> [!NOTE] +> Existing labs and VMs in regions unaffected by the zone outage aren't affected by a loss of infrastructure in the lab plan region. Existing labs and VMs in unaffected regions can still run and operate as normal. ++#### Labs and VMs ++Azure Lab Services is not currently zone aligned, so VMs in a region may be distributed across that region's zones. Therefore, when a zone in a region experiences an outage, there are no guarantees that a lab or any VMs in the associated region will be available. ++As a result, the following operations are not guaranteed in the event of a zone outage: ++- Manage or access labs/VMs +- Start/stop/reset VMs +- Create/publish/delete labs +- Scale up/down labs +- Connect to VMs ++If there's a zone outage in the region, there's no expectation that you can use any labs or VMs in the region. +Labs and VMs in other regions will be unaffected by the outage. ++#### Zone outage preparation and recovery ++Lab and VM services will be restored as soon as zone availability is restored (that is, when the outage is resolved). ++If infrastructure is impacted, it will be restored when zone availability is restored. ++### Region down experience ++#### Azure Lab Services infrastructure ++In a regional outage, in most scenarios you will only be able to perform the following tasks related to Azure Lab Services infrastructure: ++- Access the Azure Lab Services website +- Create/manage lab plans, labs, and VMs in regions unaffected by the regional outage ++Typically, labs are in the same region as the lab plan. However, if the outage is in the lab plan region and an existing lab is in an unaffected region, you can still perform the following tasks for the existing lab in the unaffected region: ++- Create users +- Configure lab schedules ++#### Labs and VMs ++In a regional outage, labs and VMs in the region are unavailable, so you will not be able to use or manage them. ++Existing labs and VMs in regions unaffected by the outage aren't affected by a loss of infrastructure in the lab plan region. Existing labs and VMs in unaffected regions can still run and operate as normal. ++#### Regional outage preparation and recovery ++Lab and VM services will be restored as soon as the regional outage is resolved. ++If infrastructure is impacted, it will be restored when the regional outage is resolved. ++### Fault tolerance ++If you want to preserve maximum access to Azure Lab Services infrastructure during a zone outage, create the lab plan in one of the zone-redundant regions listed: ++- Australia East +- Canada Central +- France Central +- Korea Central +- East Asia ++## Disaster recovery ++Azure Lab Services does not provide regional failover support. 
If you want to preserve maximum access to the Azure Lab Services infrastructure during a zone outage, create the lab plan in one of the [zone-redundant regions](#fault-tolerance). ++### Outage detection, notification, and management ++Azure Lab Services does not provide any service-specific signals about an outage; it relies on Azure communications to inform customers about outages. For more information on service health, see [Resource health overview](/azure/service-health/resource-health-overview). ++## Next steps ++> [!div class="nextstepaction"] +> [Resiliency in Azure](/azure/availability-zones/overview) |
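As a sketch of the fault-tolerance recommendation above, creating the lab plan in a zone-redundant region can be scripted. This assumes the Az.LabServices PowerShell module and its `New-AzLabServicesLabPlan` cmdlet; treat the cmdlet and parameter names as assumptions to verify against your installed module version.

```powershell
# Assumption: the Az.LabServices module is available.
Install-Module Az.LabServices   # one-time, if not already installed

# Place the lab plan in a zone-redundant region such as Australia East
# so the service infrastructure stays reachable during a zone outage.
# Names below are placeholders.
New-AzLabServicesLabPlan `
    -Name 'MyLabPlan' `
    -ResourceGroupName 'MyResourceGroup' `
    -Location 'australiaeast'
```

Note that this only positions the service infrastructure; as the article states, the labs and VMs themselves are not zone redundant regardless of the region chosen.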
logic-apps | Logic Apps Custom Api Authentication | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-custom-api-authentication.md | Title: Add authentication for securing calls to custom APIs -description: Set up authentication to improve security for calls to custom APIs from Azure Logic Apps. + Title: Add authentication for calls to custom APIs +description: Set up authentication for calls to custom APIs from Azure Logic Apps. ms.suite: integration Previously updated : 09/22/2017 Last updated : 08/22/2022 -# Increase security for calls to custom APIs from Azure Logic Apps +# Add authentication when calling custom APIs from Azure Logic Apps -To improve security for calls to your APIs, you can set up Azure Active Directory (Azure AD) -authentication through the Azure portal so you don't have to update your code. -Or, you can require and enforce authentication through your API's code. +To improve security for calls to your APIs, you can set up Azure Active Directory (Azure AD) authentication through the Azure portal so you don't have to update your code. Or, you can require and enforce authentication through your API's code. -## Authentication options for your API +You can add authentication in the following ways: -You can improve security for calls to your custom API in these ways: --* [No code changes](#no-code): Protect your API with -[Azure Active Directory (Azure AD)](../active-directory/fundamentals/active-directory-whatis.md) -through the Azure portal, so you don't have to update your code or redeploy your API. +* [No code changes](#no-code): Protect your API with [Azure Active Directory (Azure AD)](../active-directory/fundamentals/active-directory-whatis.md) through the Azure portal, so you don't have to update your code or redeploy your API. > [!NOTE]- > By default, the Azure AD authentication that you turn on - > in the Azure portal doesn't provide fine-grained authorization. - > For example, this authentication locks your API to just a specific tenant, - > not to a specific user or app. + > + > By default, the Azure AD authentication that you select in the Azure portal doesn't + > provide fine-grained authorization. For example, this authentication locks your API + > to just a specific tenant, not to a specific user or app. -* [Update your API's code](#update-code): Protect your API by enforcing -[certificate authentication](#certificate), [basic authentication](#basic), -or [Azure AD authentication](#azure-ad-code) through code. +* [Update your API's code](#update-code): Protect your API by enforcing [certificate authentication](#certificate), [basic authentication](#basic), or [Azure AD authentication](#azure-ad-code) through code. <a name="no-code"></a> -### Authenticate calls to your API without changing code +## Authenticate calls to your API without changing code Here are the general steps for this method: -1. Create two Azure Active Directory (Azure AD) application identities: -one for your logic app and one for your web app (or API app). +1. Create two Azure Active Directory (Azure AD) application identities: one for your logic app resource and one for your web app (or API app). -2. To authenticate calls to your API, use the credentials (client ID and secret) for the -service principal that's associated with the Azure AD application identity for your logic app. +1. 
To authenticate calls to your API, use the credentials (client ID and secret) for the service principal that's associated with the Azure AD application identity for your logic app. -3. Include the application IDs in your logic app definition. +1. Include the application IDs in your logic app's workflow definition. -#### Part 1: Create an Azure AD application identity for your logic app +### Part 1: Create an Azure AD application identity for your logic app -Your logic app uses this Azure AD application identity to authenticate against Azure AD. -You only have to set up this identity one time for your directory. -For example, you can choose to use the same identity for all your logic apps, -even though you can create unique identities for each logic app. -You can set up these identities in the Azure portal or use [PowerShell](#powershell). +Your logic app resource uses this Azure AD application identity to authenticate against Azure AD. You only have to set up this identity one time for your directory. For example, you can choose to use the same identity for all your logic apps, even though you can create unique identities for each logic app. You can set up these identities in the Azure portal or use [PowerShell](#powershell). -**Create the application identity for your logic app in the Azure portal** +#### [Portal](#tab/azure-portal) -1. In the [Azure portal](https://portal.azure.com "https://portal.azure.com"), -choose **Azure Active Directory**. +1. In the [Azure portal](https://portal.azure.com), select **Azure Active Directory**. -2. Confirm that you're in the same directory as your web app or API app. +1. Confirm that you're in the same directory as your web app or API app. > [!TIP]+ > > To switch directories, choose your profile and select another directory.- > Or, choose **Overview** > **Switch directory**. + > Or, select **Overview** > **Switch directory**. -3. On the directory menu, under **Manage**, -choose **App registrations** > **New application registration**. +1. On the directory menu, under **Manage**, select **App registrations** > **New registration**. - > [!TIP] - > By default, the app registrations list shows all - > app registrations in your directory. - > To view only your app registrations, next to the search box, - > select **My apps**. + The **All registrations** list shows all the app registrations in your directory. To view only your app registrations, select **Owned applications**. ++  ++1. Provide a user-facing name for your logic app's application identity. Select the supported account types. For **Redirect URI**, select **Web**, provide a unique URL where to return the authentication response, and select **Register**. -  +  -4. Give your application identity a name, -leave **Application type** set to **Web app / API**, -provide a unique string formatted as a domain -for **Sign-on URL**, and choose **Create**. + The **Owned applications** list now includes your created application identity. If this identity doesn't appear, on the toolbar, select **Refresh**. -  +  - The application identity that you created for your - logic app now appears in the app registrations list. +1. From the app registrations list, select your new application identity. -  +1. From the application identity navigation menu, select **Overview**. -5. In the app registrations list, select your new application identity. -Copy and save the **Application ID** to use as the "client ID" -for your logic app in Part 3. +1. 
On the **Overview** pane, under **Essentials**, copy and save the **Application ID** to use as the "client ID" for your logic app in Part 3. -  +  -6. If your application identity settings aren't visible, -choose **Settings** or **All settings**. +1. From the application identity navigation menu, select **Certificates & secrets**. -7. Under **API Access**, choose **Keys**. Under **Description**, -provide a name for your key. Under **Expires**, select a duration for your key. +1. On the **Client secrets** tab, select **New client secret**. - The key that you're creating acts as the application identity's - "secret" or password for your logic app. +1. For **Description**, provide a name for your secret. Under **Expires**, select a duration for your secret. When you're done, select **Add**. -  + The secret that you create acts as the application identity's "secret" or password for your logic app. -8. On the toolbar, choose **Save**. Under **Value**, your key now appears. -**Make sure to copy and save your key** for later use because the key is hidden -when you leave the **Keys** page. +  - When you configure your logic app in Part 3, - you specify this key as the "secret" or password. + On the **Certificates & secrets** pane, under **Client secrets**, your secret now appears along with a secret value and secret ID. -  +  ++1. Copy the secret value for later use. When you configure your logic app in Part 3, you specify this value as the "secret" or password. <a name="powershell"></a> -**Create the application identity for your logic app in PowerShell** +#### [PowerShell](#tab/azure-powershell) [!INCLUDE [updated-for-az](../../includes/updated-for-az.md)] -You can perform this task through Azure Resource Manager with PowerShell. -In PowerShell, run these commands: +You can perform this task through Azure Resource Manager with PowerShell. In PowerShell, run the following commands: 1. `Add-AzAccount` In PowerShell, run these commands: 1. `New-AzADApplication -DisplayName "MyLogicAppID" -HomePage "http://mydomain.tld" -IdentifierUris "http://mydomain.tld" -Password $SecurePassword` -1. Make sure to copy the **Tenant ID** (GUID for your Azure AD tenant), -the **Application ID**, and the password that you used. +1. Make sure to copy the **Tenant ID** (GUID for your Azure AD tenant), the **Application ID**, and the password that you used. ++For more information, learn how to [create a service principal with PowerShell to access resources](../active-directory/develop/howto-authenticate-service-principal-powershell.md). -For more information, learn how to -[create a service principal with PowerShell to access resources](../active-directory/develop/howto-authenticate-service-principal-powershell.md). + -#### Part 2: Create an Azure AD application identity for your web app or API app +### Part 2: Create an Azure AD application identity for your web app or API app -If your web app or API app is already deployed, you can turn on authentication -and create the application identity in the Azure portal. Otherwise, you can -[turn on authentication when you deploy with an Azure Resource Manager template](#authen-deploy). +If your web app or API app is already deployed, you can turn on authentication and create the application identity in the Azure portal. Otherwise, you can [turn on authentication when you deploy with an Azure Resource Manager template](#authen-deploy). 
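One gap worth noting in the Part 1 PowerShell commands above: `$SecurePassword` is referenced but never defined. A minimal sketch of the full sequence follows, with a placeholder password value; the `-Password` parameter reflects the older Az module behavior the article's commands assume, and newer Az releases handle application secrets differently, so check your module version.

```powershell
# Sign in first.
Add-AzAccount

# Build the secure password referenced by the New-AzADApplication command.
# The plain-text value here is a placeholder: generate your own strong secret.
$SecurePassword = ConvertTo-SecureString '<strong-password>' -AsPlainText -Force

# Create the application identity for the logic app (older Az versions;
# newer versions manage app secrets through credential objects instead).
New-AzADApplication -DisplayName 'MyLogicAppID' `
    -HomePage 'http://mydomain.tld' `
    -IdentifierUris 'http://mydomain.tld' `
    -Password $SecurePassword
```

As the article notes, copy the tenant ID, application ID, and password afterward; Part 3 uses them as the credentials in the logic app definition.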
-**Create the application identity and turn on authentication in the Azure portal for deployed apps** +**Create the application identity for a deployed web app or API app in the Azure portal** -1. In the [Azure portal](https://portal.azure.com "https://portal.azure.com"), -find and select your web app or API app. +1. In the [Azure portal](https://portal.azure.com), find and select your web app or API app. -2. Under **Settings**, choose **Authentication/Authorization**. -Under **App Service Authentication**, turn au |